At our recent KB4-CON, Dr. Lydia Kostopoulos, Disruptive Technology Educator, gave us a demonstration of the latest in deep fake technology.
Now, scientists at Stanford University, Princeton University, the Max Planck Institute for Informatics, and Adobe Research have demonstrated a new software platform for post-production manipulation of video. They position the research as a post-production tool, but you would have to be naive not to see that the same software, paired with nothing more than a text editor, is also a handy way to create deep fake video. It produces frighteningly realistic results and gives us a chilling peek at where this technology is headed unless we create, and make widely available, forensic tools for detecting synthetic video.
Yes, this has the capacity to be weaponized. In evil hands, it can sow political and social chaos and spread disinformation on a wide scale.
In the hands of scammers, deep fakes will become part of the social engineering tool kit, used for extortion or possibly to manipulate cryptocurrency markets or stocks.
Over the weekend, a deep fake video of Mark Zuckerberg, with his words artificially manipulated, was posted on Instagram. Having seen the refinements in this technology, I can see how a trained eye can still spot the artifacts in that video. Even so, it is good enough to fool most people.
The refinements in this new production technique make deep fakes very hard for the untrained eye to detect, and much easier to tweak: with the software and a text editor you can rescript the speech and perfect the lip sync. To counter this, researchers will try to create AI-based deep fake detectors, along with hidden watermarks or signatures. However, as we’ve seen elsewhere in cyber, there is always a whack-a-mole fight and one-upmanship between the good guys and the bad guys. That fight will likely continue at the AI level.
The researchers realize this and caution: “Finally, it is important that we as a community continue to develop forensics, fingerprinting and verification techniques (digital and non-digital) to identify manipulated video. Such safeguarding measures would reduce the potential for misuse while allowing creative uses of video editing technologies like ours.”
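To make the “fingerprinting and verification” idea concrete, here is a minimal sketch in Python of one such safeguard: if a publisher releases a cryptographic digest alongside an original video file, anyone can check whether a copy they received has been altered. The function names and byte strings are illustrative, not from the research; note this only detects tampering with a known original file, not a deep fake generated from scratch.

```python
import hashlib
import hmac

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest of the raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, published_digest: str) -> bool:
    """Check a copy against the digest the original publisher released.

    hmac.compare_digest does a constant-time comparison of the digests.
    """
    return hmac.compare_digest(fingerprint(data), published_digest)

# Toy stand-ins for real video bytes.
original = b"original video bytes"
tampered = b"original video bytes, with the lip sync rescripted"

digest = fingerprint(original)   # publisher releases this alongside the video
print(verify(original, digest))  # True: untouched copy
print(verify(tampered, digest))  # False: any edit changes the digest
```

Even a one-byte edit produces a completely different digest, which is why this kind of check is cheap to run but hard to fool once the original fingerprint is trusted and distributed.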
You can view the researchers’ demo video here. Let me know what you think.