
AI Could Stop Snooping By Predicting What You’ll Say

But is it a good idea to add another listening device?

  • Researchers have developed a method of camouflaging speech to prevent malicious microphones from capturing our conversations.
  • The method is important because it works in real time over streaming audio and with minimal training.
  • Experts applaud the research, but believe it’s not of much use to the average smartphone user.

BraunS/Getty Images

We are surrounded by smart devices with microphones, but what if they have been compromised to spy on us?

In an effort to protect our conversations from eavesdroppers, Columbia University researchers have developed a method of neural voice camouflage that disrupts automatic speech recognition systems in real time without disturbing people.

“With the invasion of [smart voice-activated devices] into our lives, the idea of privacy is starting to evaporate, because these listening devices are always on and monitoring what is being said,” Charles Everette, Director of Cyber Advocacy at Deep Instinct, told Lifewire via email. “This research is a direct response to the need to hide or camouflage an individual’s voice and conversations from these listening devices, known or unknown, in an area.”

Talk More

The researchers have developed a system that generates quiet sounds you can play in any room to prevent malicious microphones from eavesdropping on your conversations.

The way this anti-spying tech works reminds Everette of noise-canceling headphones. But instead of generating quiet sounds to cancel out background noise, the researchers broadcast background sounds that disrupt the artificial intelligence (AI) algorithms that interpret sound waves into understandable speech.

Such mechanisms to camouflage a person’s voice are not unique, but what distinguishes Neural Voice Camouflage from other methods is that it works in real time over the audio stream.

“To operate on live speech, our approach must predict [the correct scrambling audio] into the future so that it can be played in real time,” the researchers note in their paper. Currently, the method works for most of the English language.

Hans Hansen, CEO of Brand3D, told Lifewire that the research is very important because it addresses a major weakness in current AI systems.

In an email conversation, Hansen explained that current deep-learning AI systems in general, and natural-speech recognition in particular, only work after processing millions of speech recordings collected from thousands of speakers. In contrast, Neural Voice Camouflage works after being conditioned on just two seconds of input speech.
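To make that concrete, here is a rough, illustrative sketch of the predict-ahead idea, not the Columbia team’s actual code: a streaming loop keeps roughly the last two seconds of captured speech as context and generates a quiet perturbation for the chunk of audio that hasn’t been spoken yet, so it can be playing by the time the words arrive. The function names, chunk sizes, and placeholder noise model below are assumptions for illustration only.

import numpy as np

SAMPLE_RATE = 16_000           # assume 16 kHz mono speech
CHUNK = SAMPLE_RATE // 10      # process audio in 100 ms chunks
CONTEXT = SAMPLE_RATE * 2      # condition on roughly the last 2 s of speech

def predict_perturbation(context):
    """Hypothetical stand-in for a learned predictive model: given the last
    ~2 s of speech, return a low-volume waveform for the *next* chunk that is
    meant to confuse speech recognition without bothering human listeners."""
    rng = np.random.default_rng()
    return 0.01 * rng.standard_normal(CHUNK)   # placeholder noise, not a real attack

def camouflage_stream(mic_chunks):
    """Consume live microphone chunks and yield, one step ahead, the
    perturbation to play over the speech that has not happened yet."""
    context = np.zeros(CONTEXT, dtype=np.float32)
    for chunk in mic_chunks:
        # Slide the conditioning window forward with the newest audio.
        context = np.concatenate([context[len(chunk):], chunk])
        # Predict the scrambling audio for the upcoming chunk.
        yield predict_perturbation(context)

# Example: run ten 100 ms chunks of silence through the loop.
silence = (np.zeros(CHUNK, dtype=np.float32) for _ in range(10))
perturbations = list(camouflage_stream(silence))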

“Personally, if I’m concerned about what devices are listening, my solution would not be to add another listening device looking to generate background noise.”

Barking Up the Wrong Tree?

Brian Chappell, chief security strategist at BeyondTrust, says the research is most beneficial to business users who fear they are surrounded by compromised devices listening for keywords that indicate valuable information is being discussed.

“Where this technology would potentially be most attractive is in a more authoritarian surveillance state where AI voice and video fingerprinting analysis is used against citizens,” James Maude, principal cybersecurity researcher at BeyondTrust, told Lifewire via email.

Maude suggested that a better alternative would be to implement privacy controls on how data is captured, stored, and used by these devices. Furthermore, Chappell believes the researchers’ method is of limited use because it is not designed to stop humans from listening in.

“For the home, keep in mind that, in theory at least, using such a tool will cause Siri, Alexa, Google Home, and any other system activated with a spoken trigger word to ignore it,” Chappell said.

A businessman hiding under a conference table, taking notes on a meeting among other people.

Jacobs Stock Photography Ltd / Getty Images

But experts believe that with the increasing inclusion of AI/ML-specific technology in our smart devices, it is very possible that this technology will come to our phones in the near future.

Maude is concerned that artificial intelligence technologies could quickly learn to distinguish the noise from real audio. He thinks that while the system may be successful initially, it could quickly turn into a game of cat and mouse as listening devices learn to filter out the interfering noise.

More worryingly, Maude pointed out that anyone using it could, in fact, draw attention to themselves, as disrupting voice recognition would seem unusual and could indicate that they are trying to hide something.

“Personally, if I’m concerned about listening devices, my solution would not be to add another listening device looking to generate background noise,” Maude shared. “Especially since it only increases the risk that a device or app will be hacked and be able to listen to me.”
