Researchers Block Audio Eavesdropping Via Insertion of Predictable Background Whispering

Researchers have discovered a novel way to block audio eavesdropping by software that uses voice recognition to monitor conversations, e.g., voice assistants. While this is effective against voice recognition software, it doesn’t currently appear to offer any protection against a trojan- or malware-infected machine that allows a human to monitor the microphone. And would a compromised computer allow an attacker access to the “predictable cloaking stream”?


A complex problem

Microphones are embedded in nearly all electronic devices today, and users experience a high level of automated eavesdropping, as when they receive ads for products mentioned in private conversations.

Many researchers have previously attempted to mitigate this risk by using white noise that could fool automatic speech recognition systems up to a point.

However, the researchers say that using any of the existing real-time voice-concealment methods in practical situations is impossible, as audio requires near-instantaneous computation that is not feasible with today’s hardware.

The only way to address this problem is to develop a predictive model that would keep up with human speech, identify its characteristics, and generate disruptive whispers based on what words are expected next.
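The predictive idea described above can be sketched in a few lines. This is a toy illustration only: the real system uses a trained deep forecasting network, whereas here a simple linear extrapolation stands in for the predictor, and the frame size and function names are assumptions, not details from the research.

```python
import numpy as np

FRAME = 256  # samples per audio frame (illustrative choice, not from the paper)

def predict_next_frame(history: np.ndarray) -> np.ndarray:
    """Toy stand-in for the learned forecasting model: linearly
    extrapolate the next frame from the last two observed frames."""
    prev, last = history[-2], history[-1]
    return last + (last - prev)

def camouflage_for(predicted: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Generate a masking 'whisper' scaled to the predicted frame's
    energy, so it can be emitted before the real speech arrives."""
    level = max(float(np.sqrt(np.mean(predicted ** 2))), 1e-6)
    return rng.normal(0.0, level, size=predicted.shape)
```

The key point the sketch captures is the ordering: because generating audio takes time, the camouflage for frame t+1 must be computed from frames up to t, which is why a predictive model is needed rather than a reactive one.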

Neural voice camouflage

Building upon deep neural network forecasting models applied to packet-loss concealment, Columbia’s researchers developed a new algorithm based on what they call “predictive attacks.”


Read more on Bleeping Computer.