Since its announcement in late November 2022, ChatGPT has been used by more than a million people. The AI community is excited, and it's clear the web is increasingly flooded with AI-generated content. People use it to write code for WooCommerce sites, draft brand stories, and polish emails.
ChatGPT is built on OpenAI's GPT-3 family of large language models and gives surprisingly human-like answers to questions. The trick, and the danger, of these large language models lies in the illusion of accuracy.
The sentences they produce look right: they use the right words in the right order. But the AI doesn't know what any of it means. These models work by predicting the most likely next word in a sentence. They have no way of knowing whether something is true or false, and they confidently present information even when it isn't accurate. In a politically charged, media-driven online world, these AI tools can further muddy the information we rely on. The results can be devastating when they are built into real products used in the real world.
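To make the "predicting the next word" idea concrete, here is a minimal sketch using the small, open-source GPT-2 model via the Hugging Face transformers library. GPT-2 is not the model behind ChatGPT, but the basic mechanism is the same: score every possible next token and favor the likely ones.

```python
# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the very next token after the prompt
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.2%}")

# The model simply picks high-probability continuations; it has no
# notion of whether " Paris" is true, only that it is statistically likely.
```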
Here are some tools and techniques you can use to determine whether an article was written by AI.
1. Content at Scale AI Detector
The Content at Scale team recently released a free AI detector that is one of the best tools for quickly identifying AI content. The tool has been trained on billions of pages of data and can check up to 25,000 characters at a time.
To use the tool, paste your text into the input field and submit it for analysis. In a few seconds, you will see a Human Content Score (how likely it is that the text was written by a human) and a line-by-line breakdown of the parts of your content that are suspected or confirmed to be AI-generated.
2. Using OpenAI’s AI Text Classifier
OpenAI, the AI research company behind ChatGPT, has released a new tool to distinguish between AI-generated and human-generated text.
Although it is impossible to identify AI-written text with 100% accuracy, OpenAI believes its new tool can help prevent AI-generated content from being passed off as human-written. OpenAI said in its announcement that the classifier could help limit automated disinformation campaigns, the use of AI tools for academic dishonesty, and AI chatbots posing as humans.
When tested on English text, the tool correctly identified AI-written text only 26% of the time, and it also mislabeled human-written text as AI-written 9% of the time. OpenAI says the classifier becomes more reliable on longer text and requires a minimum of 1,000 characters to run a test.
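To put those numbers in perspective, here is a quick back-of-the-envelope calculation using plain Bayes' rule. The 26% and 9% rates come from OpenAI's published figures; the 50/50 mix of AI and human text is an assumption for illustration only.

```python
tpr = 0.26   # P(flagged | AI-written), per OpenAI
fpr = 0.09   # P(flagged | human-written), per OpenAI
p_ai = 0.50  # assumed share of AI text in what you scan

p_flagged = tpr * p_ai + fpr * (1 - p_ai)
p_ai_given_flag = tpr * p_ai / p_flagged

print(f"P(AI | flagged) = {p_ai_given_flag:.0%}")  # ~74%
print(f"P(AI text missed) = {1 - tpr:.0%}")        # 74% slips through
```

In other words, a flag from the tool is meaningful evidence, but most AI text still goes undetected.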
3. Originality.ai
If you're looking for a business-grade detector to check whether the content you're about to publish was written by AI, check out Originality. This tool uses a combination of GPT-3 and other natural language models (all trained on large datasets) to judge whether a piece of text looks AI-generated.
It is one of the few tools that claims to reliably detect content from ChatGPT and GPT-3.5, the most advanced generative language tools.
For those who want to check their writing easily and automatically, Originality is the right tool. Unlike Content at Scale, Originality stores your scans in your account history, which is useful if you need to review many pieces of content frequently. Remember, though, that no detector's verdict is final.
4. Training the human eye
There is no silver bullet for spotting AI writing. An automated detector is not a complete solution for catching machine-generated text, just as a safety filter is not a complete solution for reducing bias.
To have a chance of solving the problem, we need more technical improvements and more transparency when people interact with AI, and people will need to learn to spot the signs of AI-written sentences. It would be great to have a plugin for Chrome, or whichever browser you use, that flags machine-generated text on the page you're reading.
The good news is that humans can be trained to spot AI text more reliably. One researcher created a game to see how many machine-generated sentences a player would read before realizing the text wasn't human, and found that players improved over time.
A bland, informational voice
AI systems construct text by predicting which word should come next, based on patterns observed in the enormous amount of text they analyze. The results of this process are somewhat predictable: because the systems rely on statistical likelihood, they rarely break conventions or do weird things with language.
Even if you ask an AI system to write in a strong or distinctive voice, it usually gets there not by varying the structure but by sprinkling in unusual words. It's as if the AI took a plain piece of text and ran a thesaurus over it, randomly swapping in different words instead of actually changing how the sentences are built.
One way to spot machine writing is to look for text that sounds flat and even throughout. AI systems rarely write in the first person, rarely share personal details, and rarely use emotionally loaded words like love, hate, fear, or beauty.
Imagine the voice you would use to write a dictionary entry; that is the voice AI usually writes in. If content is written in that voice, there is a good chance AI created it. Some human writers adopt the same tone to project authority, so the method isn't foolproof, but a flat, steady, impersonal voice can be a strong sign of AI.
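This predictability is measurable. One common statistic detectors lean on is perplexity: how "unsurprised" a language model is by a piece of text. The sketch below scores text with GPT-2; it illustrates the general approach, not the exact method any of the tools above uses, and the interpretation of the score is a heuristic, not a verdict.

```python
# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    """Average 'surprise' of GPT-2 at each token; lower = more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the mean cross-entropy loss
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Bland, formulaic prose tends to score lower (more predictable)
# than quirky, personal writing. Any threshold is illustrative only.
print(f"perplexity: {perplexity('The quick brown fox jumps over the lazy dog.'):.1f}")
```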
Text that’s too perfect
As the old saying goes, to err is human, and human writers make plenty of mistakes. AI systems, being machines, rarely commit grammatical blunders; they may choose the wrong word, but they almost never misspell one.
According to the MIT Technology Review, typos are usually a good sign that text was written by a human rather than by artificial intelligence. Human authors are also often a little sloppy with grammar, or bend the rules deliberately for effect, and that can be great. If the text you're reviewing follows every rule and contains no typos, it may well have been computer-generated.
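As a rough illustration of this signal, here is a toy heuristic, not a real detector, built on the pyspellchecker package; the sample sentence and the idea of a "typo rate" are made up for the example.

```python
# pip install pyspellchecker
import re
from spellchecker import SpellChecker

def typo_rate(text: str) -> float:
    """Fraction of words the dictionary doesn't recognize (crude: flags proper nouns too)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return 0.0
    unknown = SpellChecker().unknown(words)
    return len(unknown) / len(words)

sample = "Teh results was suprising, but we kept going anyway."
# Zero typos proves nothing on its own; a sprinkling of them
# simply makes a human author more likely.
print(f"typo rate: {typo_rate(sample):.1%}")
```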
5. Verify sources & author credibility
This one could fill a blog post of its own, but it's worth mentioning here. If you're reading an article and a section seems only loosely related to the rest of the content, that's your first red flag. Such articles are also likely to be hit by Google's Helpful Content Update.
More importantly, you should check the sources the article uses, if any. If the author cites no outside sources, or makes claims without backing them up, they are either not doing their research or leaning heavily on AI-generated content.
Conclusion
AI will likely get better over time, writing in ways that people can no longer tell apart from human writing. ChatGPT and GPT-3 are big steps in that direction.
The good news for writers, and for anyone trying to spot AI-generated content, is that we are not there yet. The simple methods and specialized tools discussed here can go a long way toward separating AI text from human work.