Student issues warning to people who use Grammarly


‘This is literally my biggest fear’: Student issues warning after false positive. She even deleted Grammarly

‘It’ll never happen to me because I’m super careful in the way that I write papers.’


Phil West


A Liberty University student who claims her paper was incorrectly flagged for AI text generation is warning other students of one of the knottiest issues facing higher education institutions today.

The video, created by TikToker maggiesavannah1 (@maggiesavannah1), details her experience in a class when Turnitin, a company that provides plagiarism detection and, more recently, AI detection services to academic clients, flagged her paper for using AI-generated text even though she claims she didn’t use it. The video, posted on May 3, has generated more than 2.2 million views.

In the lengthy video, she describes her situation, including her former thinking that “it’ll never happen to me because I’m super careful in the way that I write papers.” She even, at the urging of another TikToking student covered by the Daily Dot, deleted Grammarly from her computer for fear that a Grammarly-assisted paper might be flagged by an AI detector.

One of her professors at Liberty contacted her about a paper, wanting to check on its origins, because the Turnitin detector the school uses “gave me an AI score of 35%.” She added, “I really appreciate that she actually asked me about it and gave me the benefit of the doubt rather than just assuming automatically that I use AI.”

She was able to persuade her professor that the paper was her original work, and in the process, brought plenty of receipts (shared on her Linktree) to show that AI detection is not only relatively new but also not altogether accurate.

She included a Washington Post article from April 2023 that explored the issue, including a quote from Turnitin chief product officer Annie Chechitelli, name-checked by the creator in the video. “Our job is to create directionally correct information for the teacher to prompt a conversation,” Chechitelli said in the article. “I’m confident enough to put it out in the market, as long as we’re continuing to educate educators on how to use the data.”

Turnitin’s AI detection under fire

But concerns about the efficacy of Turnitin’s AI detection have led some schools to press pause on it. In December, for example, the University of California, Irvine issued a statement noting, “The similarity detection functionality of Turnitin is context-rich: if student content is flagged as being similar to something submitted or published elsewhere, instructors receive information about the match and can quickly begin assessing whether the report points to an educational opportunity on use of proper citations or may indicate plagiarism. On the other hand, the AI detection tool offers no such context, stating only that the identified percentage of the submission ‘has been determined to be generated by AI.’”

That statement went on to note that Turnitin’s AI detection feature “hinges on next-word probability: the concept that, as ChatGPT and similar models output a string of text, they are simply choosing the most likely word that should go after the word they have just chosen, based on the many millions of pages of text they have been fed as part of their ‘training.’ Turnitin argues that humans, by contrast, choose words in an ‘inconsistent and idiosyncratic’ fashion, so detection tools can exploit this difference to flag AI-generated text.”
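To make the quoted “next-word probability” idea concrete, here is a minimal sketch of what a predictability (perplexity) check on a piece of text might look like in code. This is an illustration only, not Turnitin’s implementation: the model (GPT-2), the library calls, and any threshold you might apply to the resulting score are assumptions chosen for the example.

```python
# Illustrative sketch of a perplexity-based "next-word probability" check.
# NOT Turnitin's method; GPT-2 and any cutoff score are arbitrary assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average 'surprise' of the text under GPT-2; lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    # out.loss is the mean cross-entropy of next-token predictions.
    return torch.exp(out.loss).item()

# A detector built on this idea would flag passages whose perplexity is
# suspiciously low (very "predictable"), which is also why careful,
# formulaic human writing can be misread as machine-generated.
print(perplexity("The results of this study indicate that further research is needed."))
```

The sketch also hints at why false positives happen: a diligent student writing in a formal, formulaic academic style can produce text that looks statistically “predictable” to a model, even though no AI was involved.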

That approach is producing a disputed rate of “false positives”: Turnitin initially claimed less than 1%, but Originality.ai contends it’s more like 2%, adding, “AI content detection is not perfect and it does produce false positives. These false positives can be very painful for anyone that created original content.”

The Washington Post story ran a test with 16 different papers and found the detector “got over half of them at least partly wrong,” correctly identifying six of the 16, failing on three, and earning what the article termed “partial credit on the remaining seven, where it was directionally correct but misidentified some portion of ChatGPT-generated or mixed-source writing.”

(Embedded TikTok from @maggiesavannah1, captioned: “i was wrongfully accused of using ai to write a final paper. what would you do in my situation?” The caption also points viewers to @Marley Stevens’ series about Grammarly being flagged as AI.)

Liberty University’s policy notes, “If your written instructions do not expressly permit AI use, then be warned that you are not permitted to do so,” but adds that if AI is allowed, it should be cited.

One commenter on the creator’s initial video shared a Change.org petition addressing Turnitin’s use of AI detection, but it has only a modest 56 signatures.

The Daily Dot has reached out to the creator via TikTok direct message and to Liberty University and Turnitin via email.

Update June 3, 11:46am CT: A Turnitin rep tells the Daily Dot: “There is a broad spectrum of academic policies on the use of AI in student writing. Any positive AI score should only be used as an indication for the educator to investigate further. In the rare cases where an individual believes they were mistakenly accused of misusing AI, the educator’s knowledge of the student and their past work is critical to making an evidence-based decision. For instance, as referenced in the video, students can share the paper’s revision history with the educator, which will clarify how the paper was written and whether AI was potentially misused. Turnitin is committed to advancing education through technologies such as AI and fostering academic integrity.”
