A recent article in The Next Web showed how AI can magically remove a person or object from a video background as computer algorithms "clip" the person walking across a street out of the footage. The video was created with open-source tools, but it's not perfect: it leaves wobbly lines where the person was deleted. Still, it gives you an idea of where this technology is and where it is rapidly going.
Evolution of Deep Fakes
Because deepfake video allows its creators to use AI/machine-learning editing tools to alter original video content, it has the potential for large-scale social engineering: sextortion, political disinformation and election tampering, chaos in economic markets, and social unrest where cultures are in conflict. The means of dissemination are obvious enough: posting and sharing to social media, or forwarding in email.
Many of these AI video/audio editing tools are available as open-source research projects, but creating a good deep fake requires a fair amount of technical skill. What is considered a good deep fake? It depends. In some cases, good enough is good enough to fool the average person.
Deep fakes are in a state of evolution as the software gets better and better at not leaving behind evidence of tampering, or what researchers call "artifacts." We'd call them noticeable glitches. Recent improvements include simple text-editing algorithms that allow the manipulation of a recorded voice by simply typing.
Going one step further, researchers have created tools that generate realistic voice from typed text, with output that matches the speaker's facial expressions almost seamlessly, producing realistic deep fakes with few visible artifacts. Probably good enough to fool you and me.
The average person does not pay close attention to detail, does not have a long attention span, and does not always see what they think they see. I'm sure you've seen good magicians deceive their audience through misdirection. We can assume people will forward these fakes in email or post them to social media after only a quick view.
If this sounds like an escalating war …it is.
This could cause considerable short-term damage before experts inevitably analyze and expose the bogus video. But deep-fake technology will continue to improve until the human eye can no longer discern a single pixel-level clue. That's where artificial intelligence becomes necessary: examining video pixel by pixel for signs of alteration.
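To make the pixel-by-pixel idea concrete, here is a minimal sketch of the simplest possible approach: comparing consecutive frames and flagging pixels whose change is anomalously large. Real deep-fake detectors use learned features rather than raw differencing, and the function name, threshold, and toy frames below are all hypothetical illustrations.

```python
# Hypothetical sketch: flag pixels whose inter-frame change is anomalously
# large -- a crude stand-in for the pixel-level analysis real detectors do
# with learned models.
import numpy as np

def suspicious_regions(frame_a, frame_b, threshold=30):
    """Return a boolean mask of pixels whose intensity change between
    two consecutive frames exceeds `threshold` (0-255 grayscale)."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff > threshold

# Toy example: two identical 8x8 grayscale frames, except one small
# patch in the second frame simulating a splice artifact.
frame1 = np.zeros((8, 8), dtype=np.uint8)
frame2 = frame1.copy()
frame2[2:4, 2:4] = 200  # simulated tampered region

mask = suspicious_regions(frame1, frame2)
print(mask.sum())  # prints 4: the 2x2 tampered patch is flagged
```

In practice, legitimate motion between frames also produces large differences, which is exactly why simple heuristics fail and learned detectors are needed.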
AI will certainly be used by the bad guys to deceive the good guys' AI detectors, and researchers realize this will be an ongoing problem. Some have proposed watermarking, but they already recognize that even sophisticated watermarks will likely be defeated as AI algorithms learn what to look for and how to remove it.