The growing war on authenticity, which includes fake news and the production of counterfeit goods, now extends to deepfakes. Originating from a Reddit account, the term deepfake refers to the practice of creating a video or audio segment that makes a person appear to say or do something different from what they actually said or did. This statement nicely captures the problem: “The goal is to create digital video and audio that appears ‘real.’ A picture used to be worth a thousand words – and a video worth a million – but deepfake technology means that ‘seeing’ is no longer ‘believing.’”
The practices of creating fake news, deepfakes, and counterfeit products are not new (think yellow journalism in the 1890s, the use of body doubles for security and entertainment, and fake versions of popular products such as designer handbags or electronic goods). The advancement of technology, combined with expanding avenues for social communication, allows these practices to become more prolific and realistic, which is why concern is growing. For deepfakes in particular, according to IEEE Spectrum, “the main ingredient in deepfakes is machine learning, which has made it possible to produce deepfakes much faster at a lower cost.”
Why We Need To Talk About Deepfakes
According to Wired, deepfake videos are not a big problem today; the worry is that they could eventually become powerful weapons for “political misinformation, hate speech, or harassment.” In addition, the technology for making deepfakes is now relatively simple and freely available.
The growth in activity is itself cause for concern. As of June 2020, Sensity (formerly Deeptrace) had identified 49,081 deepfake videos, an increase of more than 330% since July 2019. Entertainment is by far the most targeted industry, and the company noted a significant increase “in the number of Instagram, Twitch, and YouTube personalities being targeted.” It also reported growth in targets from business (4.1%) and politics (4%).
Focusing on the use of deepfake technology in business, it is easy to foresee troubling possibilities. According to Shufti Pro, threats include fake videos of CEOs used to tarnish a business’s reputation, commit fraud and extortion, and facilitate market manipulation. In August 2019, The Wall Street Journal reported that the CEO of a UK-based energy firm unwittingly participated in a fraudulent transfer of funds to a Hungarian supplier. The CEO believed he was speaking with the chief executive of the firm’s German parent company, who asked him to send the funds.
How To Identify Deepfakes
- The MIT Media Lab has developed the Detect Fakes website as part of a research project that strives to identify techniques to “counteract AI-generated misinformation.” Rather than trying to explain the difference between altered and non-altered videos, the site lets visitors see the differences for themselves. The trick is to understand that there are several artifacts you can look for, rather than one “single tell-tale sign.” Specific questions guide you through what to examine: the face; the cheeks and forehead; glasses; facial hair and moles; blinking; and the size and color of the lips.
- Techerati offers a useful list of five ways to spot a deepfake:
- Note resolution and quality differences between facial components and the rest of the video
- Watch for frames where the face is obscured or at a sharp angle
- Be wary of inconsistently scaled faces
- Keep an eye on inconsistent border features
- Look out for inconsistent skin tones or “shimmering”
- Investigate the Source – As with evaluating a fake news story, look closely at the source of the video. Where did it come from, and what was the motivation for producing it? Check other sources that you trust to see whether the video has been widely shared. See whether any debunking sites, such as FactCheck.org or Snopes.com, have featured the video by entering the subject of the video along with the term “video” or “viral video.”
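The first of Techerati’s tips, a quality mismatch between the face and the rest of the frame, can be roughly quantified in code. Below is a minimal, hypothetical sketch in pure Python: it scores sharpness using the variance of a Laplacian filter (a common blur measure) and flags a frame when the face region’s sharpness differs markedly from the background’s. The function names and threshold are illustrative only; real detectors use libraries such as OpenCV and trained models rather than a single hand-set ratio.

```python
def laplacian_variance(pixels):
    """Variance of a 4-neighbour Laplacian response: a rough sharpness score.

    `pixels` is a 2D list of grayscale values; higher variance = sharper patch.
    """
    h, w = len(pixels), len(pixels[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (pixels[y - 1][x] + pixels[y + 1][x] +
                   pixels[y][x - 1] + pixels[y][x + 1] - 4 * pixels[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def sharpness_mismatch(face_region, background_region, ratio_threshold=3.0):
    """Flag a frame when face and background sharpness diverge.

    The 3.0 ratio threshold is a made-up illustrative value.
    """
    fv = laplacian_variance(face_region)
    bv = laplacian_variance(background_region)
    lo, hi = sorted((fv, bv))
    return hi / max(lo, 1e-9) > ratio_threshold

# Synthetic 8x8 grayscale patches: a sharp checkerboard vs a smooth gradient.
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
smooth = [[x * 10 for x in range(8)] for y in range(8)]
print(sharpness_mismatch(smooth, sharp))  # blurry "face" on sharp background -> True
print(sharpness_mismatch(sharp, sharp))   # consistent sharpness -> False
```

This captures only one tip; the other signs on the list (obscured faces, inconsistent scaling, border artifacts, skin “shimmering”) require more sophisticated per-frame analysis.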
Two of the better-known tools for fighting deepfakes are Sensity (formerly Deeptrace) and Reality Defender. Reality Defender is a non-partisan collective, aimed primarily at journalists, that brings together leaders in media forensics research, AI, technology, and journalism. Sensity, founded in 2018 and based in Amsterdam, the Netherlands, is a visual threat intelligence company that provides monitoring and detection technologies to individuals and organizations fighting deepfake threats.
As with any information that feeds critical decision-making, the most important action you can take is to be mindful of the content you are consuming. In short: be aware, cautious, and diligent.