ChatGPT and Misuse: Thoughts and Tools

ChatGPT’s entrance into public awareness has been an incredible phenomenon to observe. Almost immediately, a slew of articles appeared, and they continue to be published, raising concerns about its potential to be misused and its ability to generate and spread misinformation. These articles all have merit, especially when considering ChatGPT for disciplines where decisions based on inaccurate third-party data can have dire consequences, such as healthcare and corporate risk. When it comes to AI-generated content, it is ironic that with each advance in the technology, we in turn have to beat the disinformation drum more and more loudly.

The problem is not with the technology, but with the initial guiding intent of its development, which needs first and foremost to consider and address content verification and content-abuse challenges. Will investors and the market appreciate the potential of startups that develop AI-generated content with a focused, disinformation-aware intent?

Currently, tools are being launched and developed to help counter the misuse of ChatGPT. Here are some examples.

OpenAI – ChatGPT creator OpenAI recently announced that it has trained a classifier to help distinguish between human-written text and text “written by AIs from a variety of providers.” OpenAI cites misinformation campaigns, academic dishonesty, and AI chatbots positioned as humans as examples of the false claims a good classifier can help address. The classifier, they openly admit, is not fully reliable, and they list various limitations of its use. Recognizing how important identifying AI-written text is for educators, they have also developed a resource page specifically for educators on the use of ChatGPT. Note: OpenAI has since discontinued the AI Classifier tool, posting this notice: “As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”

Originality.AI – This AI content detector has received high scores for its offerings and describes itself as a complete content creation quality control tool. It provides not only a highly accurate AI content detector for text written by popular LLMs, but also a Plagiarism Checker and a Readability Score Checker for ranking highly in Google. Founder Jon Gillham writes detailed posts about how their AI content detection works, along with a guide that helps readers and content providers understand AI detectors and their limitations. Note: This entry was added August 2, 2023.

GPTZero – NPR and other news sources have reported on an app developed by Edward Tian that quickly determines whether an essay was written by a human or by ChatGPT. “To determine whether an excerpt is written by a bot, GPTZero uses two indicators: ‘perplexity’ and ‘burstiness.’ Perplexity measures the complexity of text; if GPTZero is perplexed by the text, then it has a high complexity and it’s more likely to be human-written. However, if the text is more familiar to the bot — because it’s been trained on such data — then it will have low complexity and therefore is more likely to be AI-generated.”
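The quoted description can be made concrete with a toy sketch. This is not GPTZero’s actual implementation (which relies on a large neural language model); it is a minimal illustration of the same two ideas using a character-bigram model, with helper names (`char_bigram_perplexity`, `burstiness`) invented for this example. Text that resembles the model’s training data scores low perplexity; perplexity that varies widely across sentences signals high burstiness.

```python
import math

def char_bigram_perplexity(text, corpus):
    """Perplexity of `text` under a character-bigram model trained on `corpus`.

    Toy illustration only: real detectors use large neural language models,
    but the principle is the same -- familiar text gets a lower score.
    """
    counts, totals = {}, {}
    for a, b in zip(corpus, corpus[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1
        totals[a] = totals.get(a, 0) + 1
    vocab = len(set(corpus)) or 1
    log_prob, n = 0.0, 0
    for a, b in zip(text, text[1:]):
        # Laplace smoothing so bigrams never seen in the corpus
        # still get a small non-zero probability.
        p = (counts.get((a, b), 0) + 1) / (totals.get(a, 0) + vocab)
        log_prob += math.log(p)
        n += 1
    # Perplexity = exp of the average negative log-probability per bigram.
    return math.exp(-log_prob / max(n, 1))

def burstiness(sentences, corpus):
    """Variance of per-sentence perplexity: human writing tends to vary more."""
    scores = [char_bigram_perplexity(s, corpus) for s in sentences]
    mean = sum(scores) / len(scores)
    return sum((s - mean) ** 2 for s in scores) / len(scores)
```

For example, with a corpus of repeated English sentences, an English phrase scores a much lower perplexity than a string of unusual character pairs, and a mix of the two sentences yields a large burstiness value.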

Newtral – Also picking up media attention is Newtral, a startup working to improve fact-checking through automation on a “language agnostic” platform based on Artificial Intelligence and Machine Learning. According to Wired, Newtral “began developing its multilingual AI language model, ClaimHunter, in 2020, funded by the profits from its TV wing, which produces a show fact-checking politicians, and documentaries for HBO and Netflix.”

Content At Scale AI Detector – This tool is interesting because it is developed by Content At Scale, a company that helps content publishers with SEO by automating content creation. Using their AI Detector, users can paste or write in content to learn whether any of it was written by AI. Their ChatGPT detector “works at a deeper level than a generic AI classifier and detects robotic sounding content.”

JustAnswer Chat Verifier – For actual data verification with a focused human element, JustAnswer Chat Verifier was announced on February 22. This tool relies on human-centered verification that enables users “to quickly verify the accuracy of results generated by ChatGPT with board-certified doctors, licensed accountants, lawyers and other vetted professionals in over 150 categories.” Users submit GPT answers and are then connected with a JustAnswer expert for review and accuracy feedback. JustAnswer is not a new company; it has been around since 2003.


Photo by Levart_Photographer on Unsplash

Copyright © 2024 Cottrill Research.