In June 2023, a series of published reports and releases reflected a marked uptick in concern about AI-generated content that misleads or misinforms. Examples include:
- The United Nations released a policy brief on information integrity on digital platforms, stating this about recent advances in artificial intelligence: “While holding almost unimaginable potential to address global challenges, there are serious and urgent concerns about the equally powerful potential of recent advances in artificial intelligence – including image generators and video deepfakes – to threaten information integrity. Recent reporting and research have shown that generative artificial intelligence tools generated mis- and disinformation and hate speech, convincingly presented to users as fact.”
- More than 300 technology experts participated in a new Pew Research Center report. Most have “great expectations for digital advances across many aspects of life by 2035,” but 79% are “more concerned than excited about coming technological change or equally concerned and excited.” One category of concern is “harm to human knowledge”: the fear that “the best of knowledge will be lost or neglected in a sea of mis- and disinformation” and that “basic facts will be drowned out in a sea of entertaining distractions, bald-faced lies and targeted manipulation.”
- Results from a study in Science Advances indicate that GPT-3, in comparison with humans, “can produce accurate information that is easier to understand, but it can also produce more compelling disinformation… [and] humans cannot distinguish between tweets generated by GPT-3 and written by real Twitter users.”
- The FBI released a public service announcement warning that malicious actors are creating deepfakes by altering photos or videos “into explicit content” for “the purpose of harassing victims or sextortion schemes.”
Meanwhile, in response to growing concern about AI-generated misinformation, the news credibility rating provider NewsGuard launched its “Unreliable AI-Generated News Tracking Center,” which tracks and reports on the growing number of “Unreliable AI-Generated News” websites (UAINs) its analysts have identified. As of June 28, NewsGuard had identified 277 UAINs spanning 13 languages: Arabic, Chinese, Czech, Dutch, English, French, Indonesian, Italian, Korean, Portuguese, Tagalog, Thai, and Turkish.
In May, Google announced a new tool called “About this image” that helps users determine whether an image is credible. It shows context such as “when the image and similar images were first indexed by Google, where it may have first appeared, and where else it’s been seen online (like on news, social, or fact checking sites).” In addition, new capabilities will ensure that every one of Google’s AI-generated images carries a markup in the original file to provide context if a user encounters it outside of Google’s platforms. Creators will be able to add similar markups, and users will be able to see them on images from publishers such as Midjourney, Shutterstock, and others.
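One plausible form for such a markup is the IPTC photo-metadata “digital source type” value trainedAlgorithmicMedia, carried in the XMP packet embedded in the image file. The minimal sketch below, in Python, scans a file for that marker; whether Google or any given publisher uses exactly this field is an assumption here, not a detail confirmed by the announcement.

```python
# Minimal sketch: check an image file for the IPTC "trainedAlgorithmicMedia"
# digital-source-type term, one plausible form of the AI-generation markup
# described above. This is an assumption about the markup format, not a
# confirmed detail of any publisher's implementation; the sketch simply
# scans the file's embedded XMP packet for the standard IPTC vocabulary term.

# XMP packets are plain UTF-8 XML embedded in the image file, delimited by
# standard begin/end processing instructions.
XMP_BEGIN = b"<?xpacket begin"
XMP_END = b"<?xpacket end"

# IPTC Digital Source Type term for media created by a trained algorithm
# (i.e., generative AI).
AI_MARKER = b"trainedAlgorithmicMedia"


def has_ai_generation_markup(path: str) -> bool:
    """Return True if the file's XMP metadata contains the IPTC
    trainedAlgorithmicMedia digital-source-type marker."""
    with open(path, "rb") as f:
        data = f.read()
    start = data.find(XMP_BEGIN)
    if start == -1:
        return False  # no XMP packet found
    end = data.find(XMP_END, start)
    packet = data[start:end] if end != -1 else data[start:]
    return AI_MARKER in packet


if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        flagged = has_ai_generation_markup(image_path)
        print(f"{image_path}: {'AI-generation markup found' if flagged else 'no markup found'}")
```

One caveat worth noting: metadata like this is easily stripped when an image is re-encoded, cropped, or screenshotted, so the absence of the marker says nothing about whether an image was AI-generated.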
Image by Jensen Art Co from Pixabay.