Creating Deepfakes in Minutes

Deepfakes Can Be Created in Minutes for Just a Few Dollars


Ethan Mollick, an associate professor at the Wharton School, recently used his own image on an AI platform to craft a deepfake video. At a glance, the LinkedIn video seemed typical for a business school professor: a checked-shirt-clad Mollick discussing entrepreneurship, a subject he knows well. There were subtle oddities in his speech and movements, but nothing pronounced enough to catch the eye unless you knew him personally.


However, this was not the real Ethan Mollick. What viewers saw was an artificially created version of the professor, his likeness and voice replicated with the help of sophisticated AI tools.


Mollick undertook this experiment out of curiosity and was surprised at the simplicity and affordability of the process. For Mollick, the excitement around the evolving AI landscape is coupled with concern, as he sees the potential for misuse in spreading disinformation.


His role at Wharton involves teaching about innovation, and he has woven AI tools into his curriculum. He has also begun documenting his experiences with AI on social media and in a newsletter. Mollick began his experiment with OpenAI's ChatGPT, prompting it to draft a speech on entrepreneurship, and was impressed with the output. He then fed a short sample of his own voice to a voice-cloning tool, and completed the deepfake with an app that animated his photo to match the synthetic voice.
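For the technically curious, here is a minimal Python sketch of the three-step pipeline Mollick describes. Only the first step calls a real service (OpenAI's chat completions API; the model name "gpt-3.5-turbo" is an assumption, since the article names only ChatGPT). The clone_voice and animate_photo functions are hypothetical placeholders standing in for the commercial voice-cloning and photo-animation tools the article leaves unnamed, and the file names are invented for illustration. Cloning a likeness should only ever be done with the subject's consent, as Mollick did with his own.

    # Minimal sketch of the three-step pipeline described above.
    # Step 1 uses OpenAI's real Python SDK; steps 2 and 3 are hypothetical
    # placeholders for the unnamed commercial voice-cloning and
    # photo-animation services. Clone a likeness only with consent.
    from openai import OpenAI

    def generate_script(topic: str) -> str:
        # Step 1: draft a short lecture script with a language model.
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": f"Draft a two-minute lecture on {topic}."}],
        )
        return response.choices[0].message.content

    def clone_voice(script: str, sample_path: str) -> str:
        # Step 2 (hypothetical placeholder): a voice-cloning service takes
        # a short audio sample and returns a path to synthesized speech.
        raise NotImplementedError("stand-in for a voice-cloning service")

    def animate_photo(photo_path: str, audio_path: str) -> str:
        # Step 3 (hypothetical placeholder): a photo-animation app lip-syncs
        # a still image to the audio and returns a path to the video.
        raise NotImplementedError("stand-in for a photo-animation service")

    if __name__ == "__main__":
        script = generate_script("entrepreneurship")
        # The remaining calls raise until wired to real services.
        audio = clone_voice(script, "voice_sample.wav")
        video = animate_photo("headshot.jpg", audio)

The sketch makes the article's point in code: the scarce ingredient is no longer expertise, just a few chained calls to off-the-shelf services.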


It took Mollick a mere $11 and eight minutes to create a convincingly fake version of himself delivering a lecture he’s never actually given.


He posted the fictitious lecture online, both as a demonstration of the technology and as a caution about the immediate dangers posed by such AI capabilities. The fact that virtually anyone could be spoofed so convincingly, Mollick argues, is an unsettling development.


Deepfakes are no longer a technology of the future; they are a problem of the present. They're being exploited for humorous content, but also for malicious ends, including political manipulation. One example of the latter: a fake video, created by activist Jack Posobiec, of President Biden announcing a military draft in response to the war in Ukraine.


AI-generated content is not just an entertainment tool; it has become a vehicle for scams and propaganda. In one recent incident, Chinese bot accounts spread deepfake videos on social media. And con artists are using synthetic audio to stage fake emergencies and defraud victims.


Gary Marcus, a cognitive scientist at NYU, warns that the widespread availability of AI tools erodes our trust in online content. He envisions a world awash in convincing AI-generated misinformation, produced at a volume no individual could hope to sift through.


While pictures and videos might be easier to identify as forgeries, written content produced by AI is trickier to recognize. AI’s ability to mimic human language patterns has led to advanced tools capable of creating text that sounds convincingly human. These tools, like Google’s Bard, generate content like articles, tweets, and dialogues with a high degree of plausibility.


The generation of deceptive text poses a significant concern: it could put large-scale propaganda within reach of far more people with malicious intent. As AI crafts ever more compelling and less detectable fakes, the "firehose of falsehood" model, a term coined by RAND researchers for the tactic of overwhelming public discourse with a flood of misleading information, becomes a cheaper and more practical threat.


To date, there is no confirmed instance of AI-generated text being used in propaganda or influence operations, but the concern looms large.

In response, tech companies are rushing to develop safeguards against misuse and against AI's propensity to generate false narratives. However, some of the technology has already escaped into less controllable environments: Meta's LLaMA language model, for instance, was leaked onto the forum 4chan.


Amidst the tech industry's push to integrate AI into a range of applications, concerns are mounting about the pace at which these tools are being deployed without a clear understanding of their potential for harm. Aza Raskin from the Center for Humane Technology points out the danger of embedding AI into our fundamental digital infrastructure before ensuring its safety.


Professor Mollick, despite having showcased the ease of creating a deepfake, shares the anxiety that the rapid advances in AI may outpace our ability to manage its influence. With the proverbial "cat out of the bag," society now must grapple with the pervasive reach of these powerful tools.


Taken from https://www.npr.org/2023/03/23/1165146797/it-takes-a-few-dollars-and-8-minutes-to-create-a-deepfake-and-thats-only-the-sta