In this installment, my AI buddy and I are looking at the question of whether it’s possible to fight back against AI-generated plagiarism, and how that might work.

Also: What is ChatGPT and why does it matter? Here’s what you need to know

Before I go on, though, we need to talk about the concept of plagiarism and how it relates to this problem. Webster’s defines “plagiarize” as “to steal and pass off (the ideas or words of another) as one’s own : use (another’s production) without crediting the source.” This fits AI-created content. While someone using an AI tool like Notion AI or ChatGPT isn’t stealing content, if that person doesn’t credit the words as coming from an AI and claims them as their own, it still meets the dictionary definition of plagiarism.

In this experimental article, I’ve asked ChatGPT to help out. My words are in normal and bold text. The AI’s words are italicized. I’ll also be using three online ChatGPT plagiarism detectors (which I’ll describe in more detail below). After each AI-generated section, I’ll show the results of the detectors. At the end of the article, we’ll look at how well the detectors performed overall.

Here’s the result for the above text, which I wrote myself:

GPT-2 Output Detector: 99.98% real
Writer AI Content Detector: 100% human-generated content. Fantastic!
Content at Scale AI Content Detection: 100% human content score. Looks great!

With that, let’s get started.

Explain why ChatGPT can be a real problem for teachers and editors when it comes to plagiarism

ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text by predicting the next word or phrase in a given input sequence. While ChatGPT can be a useful tool for generating text and responding to prompts, it can also be a problem for teachers and editors when it comes to plagiarism.

One of the main reasons why ChatGPT can be a problem for teachers and editors is that it can generate text that is very similar to that of real human writers. This means that students or other individuals could potentially use ChatGPT to generate text that they could then submit as their own work, thereby committing plagiarism.

Also: Can Notion AI writing helper write this article?

For teachers and editors, detecting whether a given piece of text was generated by ChatGPT or written by a human can be very difficult, as the generated text may not contain any obvious errors or discrepancies that would reveal its artificial origin. This can make it difficult for teachers and editors to identify and prevent plagiarism, as they may not be able to tell whether a given piece of text is original or copied from another source.

Overall, ChatGPT and other similar language models can be a problem for teachers and editors when it comes to plagiarism because they can generate text that is difficult to distinguish from that written by humans, making it easier for students and other individuals to commit plagiarism without being detected.

GPT-2 Output Detector: 99.69% real
Writer AI Content Detector: 97% human-generated content. Fantastic!
Content at Scale AI Content Detection: 87% human content score. Looking good!

Remember, text shown in italics was generated by the AI.

Show me at least three ways that I can detect ChatGPT was used to produce a piece of text

There are several ways that you can potentially detect whether ChatGPT or another language model was used to produce a piece of text. Here are three potential approaches you could use:

Keep in mind that it can be difficult to definitively determine whether a language model was used to generate a piece of text, and these methods may not always be effective. However, they can be useful tools to help you identify potential instances of language model-generated text.

In this set of tests, only the italicized content above was run through the checker suite.

GPT-2 Output Detector: 99.98% fake
Writer AI Content Detector: [no result]
Content at Scale AI Content Detection: 0% human content score. Obviously AI
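One detection idea that comes up again in the final thoughts below is looking for repetitive or unusual patterns. As a toy illustration of what that heuristic might look like in code (this is my own sketch, not something any of the tools tested here actually do, and it is certainly not a reliable detector), here are a few lines of Python that measure how often a passage reuses its own three-word phrases:

```python
# Crude illustration of the "look for repetitive patterns" heuristic: measure
# how often a text reuses its own word 3-grams. This is only a toy signal --
# as the tests in this article show, nothing this simple is a reliable AI detector.

import re
from collections import Counter

def repeated_trigram_rate(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeats = sum(c for c in counts.values() if c > 1)
    return repeats / len(trigrams)

if __name__ == "__main__":
    sample = (
        "ChatGPT can be a problem for teachers and editors because it can generate "
        "text that is difficult to distinguish. It can be a problem for teachers and "
        "editors because students can submit that text as their own."
    )
    print(f"{repeated_trigram_rate(sample):.0%} of 3-grams are repeated")
```

A high score only tells you a passage is repetitive, which plenty of human writing is too; that limitation is exactly why the purpose-built detectors below exist.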

List three plagiarism checkers that can detect ChatGPT text

It is worth noting that there is currently no plagiarism checker that is specifically designed to detect text generated by ChatGPT or other language models. While some plagiarism checkers may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, there is no guarantee that they will be able to definitively determine whether a piece of text was generated by a language model.

With that said, here are three plagiarism checkers that you might consider using to help identify potential instances of language model-generated text:

It is worth noting that no plagiarism checker is completely foolproof, and there is always the possibility that a piece of language model-generated text could escape detection. Therefore, it is important to use plagiarism checkers as just one part of a larger strategy to detect and prevent plagiarism.

GPT-2 Output Detector: 99.62% real
Writer AI Content Detector: [no result]
Content at Scale AI Content Detection: 86% human content score. Looking good!

Online AI plagiarism checkers

Most plagiarism detectors are used to compare writing against a corpus of other writing. For example, when a student turns in an essay, a product like Turnitin scans the submitted essay against a huge library of other essays in its database, as well as against other documents and text on the internet, to determine if the submitted essay contains already-written content.

But the AI writing tools generate original content, at least in theory. Yes, they build their content from whatever they’ve been trained on, but the actual words they construct are somewhat unique for each composition.

Also: AI’s true goal may no longer be intelligence

As such, the plagiarism checkers mentioned above probably won’t work, because the AI-generated content probably didn’t exist in, say, another student’s paper. So I took to Google and searched for detectors specifically designed to look for the telltale signatures of AI-driven content. I found three.

For the test content shown in the screenshots below, I asked ChatGPT this: “Is star trek better than star wars? Justify and explain.” Its answer wasn’t bad at all, and I fed that answer into the three testers.

GPT-2 Output Detector: 99.98% real
Writer AI Content Detector: 100% human-generated content. Fantastic!
Content at Scale AI Content Detection: 100% human content score. Looks great!
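To make the corpus-comparison idea above concrete, here’s a toy Python sketch of how a traditional checker might flag copied text. It is my own illustration, not how Turnitin or any commercial product actually works; the function names, the 5-gram size, and the tiny example corpus are all arbitrary choices. The point is that it only fires when word sequences already exist somewhere in the corpus, which is exactly why freshly generated AI text slips past it:

```python
# Toy illustration of corpus-comparison plagiarism checking (not how Turnitin
# or any real product works). It flags a submission when enough of its word
# 5-grams already appear in a reference corpus -- which is why freshly
# generated AI text, whose word sequences are new, tends to sail through.

import re

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, corpus: list[str], n: int = 5) -> float:
    """Fraction of the submission's n-grams found anywhere in the corpus."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    corpus_grams = set().union(*(ngrams(doc, n) for doc in corpus))
    return len(sub & corpus_grams) / len(sub)

if __name__ == "__main__":
    corpus = ["Star Trek explores a hopeful future built on science and diplomacy."]
    copied = "Star Trek explores a hopeful future built on science and diplomacy, which I admire."
    original = "I prefer Star Wars because the films feel like myths told around a campfire."
    print(f"copied:   {overlap_score(copied, corpus):.0%} of 5-grams match the corpus")
    print(f"original: {overlap_score(original, corpus):.0%} of 5-grams match the corpus")
```

A real checker layers fuzzy matching and a web-scale index on top of this, but the basic comparison is the same kind of lookup, which is why a different signal entirely is needed for AI-generated text.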

GPT-2 Output Detector (Accuracy 66%)

This first tool was built using a machine learning hub managed by New York-based AI company Hugging Face. While the company has received $40 million in funding to develop its natural language library, the GPT-2 detector appears to be a user-created tool using the Hugging Face Transformers library. Of the six tests I ran, it was accurate for four of them.
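If you want to run this kind of check outside the web page, the same style of detector can be loaded locally with the Transformers library. Here is a minimal sketch; it assumes the publicly hosted openai-community/roberta-base-openai-detector checkpoint (a RoBERTa model fine-tuned to separate GPT-2 output from human text) and that the transformers and torch packages are installed. I can’t confirm this is the exact model behind the tool I tested, so treat it as an approximation of the approach rather than a replica of the site:

```python
# Minimal local sketch of a GPT-2-output-style detector. Assumes the publicly
# hosted "openai-community/roberta-base-openai-detector" checkpoint and that
# transformers + torch are installed (pip install transformers torch).

from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

sample = (
    "ChatGPT is a variant of the GPT language model developed by OpenAI. "
    "It is designed to generate human-like text by predicting the next word."
)

result = detector(sample)[0]
# The returned dict has a 'label' and a 'score'; the label names come from the
# checkpoint's own config and distinguish real (human) from fake (generated) text.
print(result)
```

On short snippets, scores from this kind of classifier swing around quite a bit, which is consistent with the hit-and-miss results reported above.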

Writer.com AI Content Detector (Accuracy N/A)

Writer.com is a service that generates AI writing, oriented towards corporate teams. Its AI Content Detector tool can scan for generated content. Unfortunately, I found this tool unreliable. Of the six scans I ran through it, it failed on three. Of the three it did run on successfully, it got two right and one wrong. After this article was published, the folks at Writer.com reached out to ZDNET. CEO May Habib had this comment to share:

Content at Scale AI Content Detection (Accuracy 50%)

The third tool I found was also produced by an AI content generation firm; however, this one seemed to work reliably. Content at Scale pitches itself as “We Help SEO Focused Content Marketers by Automating Content Creation.” Its marketing call to action is, “Upload a list of keywords and get 2,600+ word blog posts that bypass AI content detection – all with no human intervention!” Of the six tests I ran, it was accurate for three.

Final thoughts

AI-driven plagiarism is a growing problem, as AI tools like Notion AI and ChatGPT can generate text that is difficult to distinguish from that written by humans. There are several ways to detect whether a piece of text was generated by an AI, such as looking for repetitive or unusual patterns, checking for a lack of originality, or using a plagiarism checker. Plagiarism checkers like Turnitin, PlagScan, and Copyleaks may be able to identify certain characteristics of language model-generated text, but they are not foolproof.

Even the specialty tools I found were woefully unsuited to the task. The best case was the GPT-2 Output Detector, but it only got two-thirds of the text submitted correct.

At this point, I don’t think we’re ready to let AI fight AI. Instead, we’re going to have to rely on human perception (which means folks have to pay attention) to identify works that were generated by an AI.

Don’t immediately assume that something is machine-generated. Just because something is stupid or terribly inaccurate, don’t immediately assume it was not written by a human. As we’ve all come to know, humans are quite capable of writing some incredibly bad stuff all on their own.

Also: OpenAI’s ChatGPT is scary good at my job, but it can’t replace me (yet)

So what do you think? Can you tell if a piece is AI-generated? Have you used these AI writing tools in the months since they suddenly all became available? Do you think they’re a help or hindrance to education and journalism? Let us know in the comments below. Here’s how ChatGPT answered this:

As a language model, I can’t see or interact with physical pieces of writing, so I can’t tell if a piece is AI generated or not. However, I can tell you that there are several AI writing tools that are available, and they can be used to generate a wide range of written content, including news articles, fiction, poetry, and more. I don’t have personal experience using these tools since I am just a computer program and don’t have the ability to browse the internet or use physical tools. However, I can tell you that some people have found AI writing tools to be helpful in education and journalism, as they can save time and effort by automating certain tasks. Others have expressed concerns about the potential for these tools to be used to produce misleading or biased content, or to replace human writers. Ultimately, whether AI writing tools are a help or hindrance depends on how they are used and the context in which they are used.

So, yeah, there’s that.

You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.