ChatGPT’s entrance into public awareness has been an incredible phenomenon to observe. Almost immediately, a slew of articles appeared, and they continue to be published, raising concerns about its potential for misuse and its ability to further generate and spread misinformation. These articles all have merit, especially when considering ChatGPT for disciplines where decisions based on inaccurate third-party data can have dire consequences, such as healthcare and corporate risk. When it comes to AI-generated content, it is ironic that with each step of technological advancement we are, in turn, having to beat the disinformation drum more and more loudly.
The problem is not the technology itself, but the guiding intent behind its development, which must first and foremost consider and address content verification and content abuse challenges. Will investors and the market appreciate the potential of startups that approach AI-generated content with a disinformation-forward focus?
Tools to help counter the misuse of ChatGPT are currently being launched and developed. Here are some examples. Note: I tested the verifiers below on the same samples of text, and their results were similar and consistent.
OpenAI – ChatGPT creator OpenAI recently announced that it has trained a classifier to help distinguish between human-written text and text “written by AIs from a variety of providers.” OpenAI cites misinformation campaigns, academic dishonesty, and AI chatbots being positioned as humans as examples of the false claims a good classifier can help flag. The company openly admits the classifier is not fully reliable and lists various limitations on its use. Recognizing how important identifying AI-written text is for educators, OpenAI has also developed a resource page specifically for them on the use of ChatGPT.
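To make the idea concrete, here is a minimal, hypothetical sketch of what training a classifier to separate human-written from AI-written passages can look like. It is not OpenAI’s actual classifier; the tiny labeled dataset, the TF-IDF character n-gram features, and the logistic regression model are illustrative assumptions only.

```python
# Toy illustration of a human-vs-AI text classifier.
# NOT OpenAI's classifier; the data and features below are made up for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = AI-written, 0 = human-written.
texts = [
    "As an AI language model, I can summarize the topic as follows.",
    "In conclusion, there are many factors to consider on both sides.",
    "honestly i just winged the essay the night before lol",
    "My grandmother's kitchen always smelled of cardamom and burnt toast.",
]
labels = [1, 1, 0, 0]

# Character n-grams pick up stylistic regularities without hand-built features.
classifier = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
classifier.fit(texts, labels)

# Probability that a new passage is AI-written, according to this toy model.
print(classifier.predict_proba(["Overall, this approach offers several key benefits."])[:, 1])
```

A real classifier of this kind would be trained on millions of paired human and AI samples, which is part of why OpenAI is candid about the limitations of its own.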
GPTZero – NPR and other news outlets have reported on an app developed by Edward Tian, a 22-year-old Princeton University senior, that quickly determines whether an essay was written by a human or by ChatGPT. “To determine whether an excerpt is written by a bot, GPTZero uses two indicators: ‘perplexity’ and ‘burstiness.’ Perplexity measures the complexity of text; if GPTZero is perplexed by the text, then it has a high complexity and it’s more likely to be human-written. However, if the text is more familiar to the bot — because it’s been trained on such data — then it will have low complexity and therefore is more likely to be AI-generated.” A rough sketch of the perplexity idea follows.
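As an illustration of that perplexity signal, the sketch below scores a passage with an off-the-shelf GPT-2 model: the lower the perplexity, the more predictable the text is to the model, which detectors treat as a hint of AI generation. GPT-2, the Hugging Face transformers library, and any threshold you might apply are assumptions for illustration; GPTZero’s actual models and cutoffs are not public in this form.

```python
# Rough sketch of perplexity-based scoring; GPT-2 stands in for whatever
# language model a detector like GPTZero actually uses (an assumption here).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Encode the text and let the model predict each token from its left context.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    # Perplexity is the exponential of the average negative log-likelihood.
    return torch.exp(loss).item()

# Lower scores mean the model found the text more predictable,
# which a detector would read as more likely to be AI-generated.
print(f"perplexity: {perplexity('The quick brown fox jumps over the lazy dog.'):.1f}")
```

Burstiness, the second indicator, looks at how much sentence-level perplexity varies across a document; human writing tends to swing between simple and complex sentences more than machine output does.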
Newtral – Also picking up press is Newtral, a startup working to improve fact-checking through automation on a “language agnostic” platform based on artificial intelligence and machine learning. According to Wired, Newtral “began developing its multilingual AI language model, ClaimHunter, in 2020, funded by the profits from its TV wing, which produces a show fact-checking politicians, and documentaries for HBO and Netflix.”
Content At Scale AI Detector – This tool is interesting because it is developed by Content At Scale, a company that helps content publishers with SEO by automating content creation. Using its AI Detector, users can paste or type in content to see whether any of it was written by AI. The company says its ChatGPT detector “works at a deeper level than a generic AI classifier and detects robotic sounding content.”
JustAnswer Chat Verifier – For actual data verification with a focused human element, the JustAnswer Chat Verifier was announced on February 22. The tool relies on human-centered verification that enables users “to quickly verify the accuracy of results generated by ChatGPT with board-certified doctors, licensed accountants, lawyers and other vetted professionals in over 150 categories.” Users submit ChatGPT answers and are then connected with a JustAnswer expert for review and accuracy feedback. JustAnswer itself is not new; the company has been around since 2003.