If your newsfeed looks anything like mine, you’ve already been inundated with articles about Artificial Intelligence, ChatGPT, AI art, and the like. You may even have received warnings cautioning that students could use such programs to plagiarize or to submit AI-generated materials as their own.
The proverbial AI genie is out of the bottle, and instructors will have to live with the reality that some students will pass off AI-generated or AI-enhanced materials as their own work. Fortunately, much like the cybersecurity arms race, a similar arms race between AI generators and AI detectors is emerging, as competing programs pop up left and right.
GPT-2 Output Detector Demo
OpenAI, the team behind ChatGPT, has released a tool called the GPT-2 Output Detector Demo. It examines a passage of text and gives the user an estimate of how likely it is that the text is AI-generated.
However, the jury is still out on whether it’s an effective tool for detecting plagiarism. Some seriously doubt its effectiveness, while others are more hopeful. It’s possible, though, that the tool will continue to improve (even as AI generators also continue to improve).
OpenAI is also working on “watermarking” the output generated by ChatGPT, but it appears to be a work in progress, and many experts are skeptical of its practicality.
Giant Language Model Test Room (GLTR) is another similar tool, created collaboratively by the MIT-IBM Watson AI Lab and HarvardNLP. The idea is simple: the more reliably GLTR’s underlying language model can predict each next word in a passage, the more likely the text is to be AI-generated. It highlights passages that suggest AI authorship, but not enough studies have been done yet to determine how effective it is.
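GLTR’s core mechanism can be sketched in a few lines. The toy “model” below is a hypothetical stand-in of my own (a real detector would query GPT-2): for each word, we find its rank in the model’s sorted next-word predictions and tally how many fall in each top-k bucket, the same kind of histogram GLTR colors in its interface.

```python
# Toy "language model": maps a context word to a probability
# distribution over possible next words. This dictionary is a made-up
# stand-in; GLTR itself uses GPT-2 for these predictions.
TOY_MODEL = {
    "the": {"cat": 0.5, "dog": 0.3, "quantum": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
}

def rank_of_next_word(context, actual_next):
    """Rank of the actual next word among the model's sorted
    predictions (1 = the model's single most likely guess)."""
    dist = TOY_MODEL.get(context, {})
    ranked = sorted(dist, key=dist.get, reverse=True)
    return ranked.index(actual_next) + 1 if actual_next in ranked else None

def topk_histogram(words, k_buckets=(1, 2, 3)):
    """Count how many next-word ranks fall within each top-k bucket.
    Human text tends to land outside the top buckets more often."""
    counts = {k: 0 for k in k_buckets}
    for context, nxt in zip(words, words[1:]):
        r = rank_of_next_word(context, nxt)
        if r is None:
            continue
        for k in k_buckets:
            if r <= k:
                counts[k] += 1
                break
    return counts

print(topk_histogram(["the", "cat", "sat"]))  # → {1: 2, 2: 0, 3: 0}
```

Here every word is the model’s top guess, which is exactly the pattern GLTR flags as machine-like; human writing produces more ranks outside the top buckets.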
Edward Tian, a computer science student at Princeton, also recently published GPTZero, a tool that purports to detect whether text was written by ChatGPT. He built it over his winter break in an effort to combat AI plagiarism. Currently, it can analyze only 1,500 words at a time.
All of these tools can be used together to check whether something was AI-generated, but since they are all in their early stages, their accuracy isn’t foolproof.
Something to keep in mind, however, is that as AIs become more sophisticated, people may be falsely accused of using AI to create a piece of work. For example, Ben Moran, an artist based in Vietnam, posted one of his works on the subreddit r/Art, only to be accused of posting AI art and banned by the moderators.
And issues surrounding academic integrity are not necessarily race-neutral. One survey found that Black and Asian/Asian-American students are accused of plagiarism more often than their peers. One of my own core memories is being accused of plagiarizing a poem for a middle school assignment. In retrospect, it was a low-stakes event, but to my adolescent self, being accused of plagiarism was quite traumatic.
While these tools can be useful for flagging potentially AI-generated work, they are far from perfect. There are real limitations, and it’s essential to be cautious about asserting plagiarism when there’s a real possibility you are wrong.
Here’s a table showing how the tools above score this article.
| Article | GPT-2 Output Detector Demo | GLTR | GPTZero |
|---|---|---|---|
| “AI v. AI v. AI” | Real: 99.97%<br>Fake: 0.03% | Top 10 word: 549<br>Top 100 word: 115<br>Top 1000 word: 48 | Average perplexity: 76.92<br>GPTZero Score: 53.93 |
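The “average perplexity” GPTZero reports measures how surprised a language model is by the text: perplexity is the exponential of the average negative log-probability the model assigned to each token, so lower values mean more predictable, more machine-like writing. A minimal sketch of the calculation (the probability lists here are invented for illustration, not taken from any real model):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability the
    model assigned to each token. Lower = more predictable text."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Probabilities a hypothetical model assigned to each word of a sentence.
predictable = [0.9, 0.8, 0.85, 0.9]   # model was rarely surprised
surprising = [0.1, 0.05, 0.2, 0.02]   # model was often surprised

print(round(perplexity(predictable), 2))  # → 1.16
print(round(perplexity(surprising), 2))   # → 14.95
```

A detector built on this idea would call the predictable-looking passage AI-generated and the surprising one human-written; real tools like GPTZero combine perplexity with other signals, such as how much it varies from sentence to sentence.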