Race to hide your voice



Your voice reveals more about you than you think. To the human ear, it can instantly give away your mood: it's easy to tell if you're excited or upset. But machines can learn much more, inferring your age, gender, ethnicity, socioeconomic status, health status, and more. Researchers have even succeeded in generating images of faces based on the information contained in individuals' voice data.


As machines get better at understanding you from your voice, companies are cashing in. Voice recognition systems, from Siri and Alexa to those that use your voice as a password, have exploded in recent years as artificial intelligence and machine learning have unlocked the ability to understand not just what you say but who you are. The voice industry could be worth $20 billion within a few years. And as the market grows, privacy researchers are increasingly looking for ways to protect people from having their voice data used against them.

Voice Threats

Both the words you say and the way you pronounce them can be used to identify you, says Emmanuel Vincent, a senior researcher in voice technology at France's National Institute for Research in Digital Science and Technology (Inria), but this is just the beginning. "You'll also find other information about your emotions or health," Vincent says.

"These additional pieces of information help build a more complete profile, which can then be used for all kinds of targeted advertising," Vincent says. Beyond the risk that your voice data could end up in the large pools of data used to target you with online ads, there is also the risk that hackers could access stored voice recordings and use them to impersonate you. A small number of these voice-cloning incidents have already happened, proving the value of your voice. Simple robocall scams have also recorded people saying "yes," a clip later used as confirmation in payment fraud.


TikTok changed its privacy policy last year and began collecting "voiceprints" (a vague term for the data contained in your voice) from people in the US, along with other biometric data such as face prints. More broadly, call centers are using AI to analyze people's "behavior and emotions" during phone calls and to evaluate the "tone, tempo, and pitch of every word" to build profiles of people and increase sales. "We're almost in a situation where systems that recognize who you are and tie everything together exist, but there are no protections, and they're still far from easy to use," says Henry Turner, who researched the security of voice systems at the University of Oxford.

Hidden meaning

Your voice is produced through a complex process involving your lungs and your vocal apparatus: the throat, nose, mouth, and sinuses. According to Rebecca Kleinberger, a voice researcher at the MIT Media Lab, more than a hundred muscles are activated when you speak. "A lot of it is the brain," Kleinberger says.

Researchers are experimenting with four ways to increase the privacy of your voice, says Natalia Tomashenko, a researcher at the University of Avignon in France who studies voice and is the lead author of a research paper on the results of the Voice Privacy Challenge, an engineering competition for voice anonymization. None of the methods is perfect, but all are being explored as ways to add privacy to the infrastructure that processes your voice data.

The first is obfuscation, which attempts to completely disguise the speaker. Think of the Hollywood image of a hacker whose voice is fully distorted during a phone call as they explain a diabolical plot or ransom demand (or of the hacktivist collective Anonymous' videos). Simple voice-changing equipment lets anyone quickly alter the sound of their voice. More advanced speech-to-text-to-speech systems can transcribe what you say and then reverse the process, saying it back in a new voice.
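The core trick behind simple voice changers can be illustrated with pitch shifting by resampling: playing a signal back faster raises its pitch (and shortens it). The sketch below, a toy illustration rather than any product's actual algorithm, applies this to a synthetic tone standing in for a voice recording; the function name and parameters are illustrative.

```python
import numpy as np

def pitch_shift(signal: np.ndarray, semitones: float) -> np.ndarray:
    """Naively shift pitch by resampling the waveform.

    Raising pitch by n semitones means playing the signal back
    2**(n/12) times faster, which also shortens it; real voice
    changers add time-stretching to keep the duration constant.
    """
    factor = 2 ** (semitones / 12)
    old_idx = np.arange(len(signal))
    new_idx = np.arange(0, len(signal), factor)
    return np.interp(new_idx, old_idx, signal)

# Demo on a synthetic 220 Hz tone (a stand-in for recorded speech).
sr = 16_000
t = np.arange(sr) / sr              # 1 second of audio
tone = np.sin(2 * np.pi * 220 * t)

shifted = pitch_shift(tone, semitones=4)  # raise pitch by a major third
print(len(tone), len(shifted))            # shifted clip is shorter
```

Real systems work on short overlapping frames (e.g. phase vocoders) so that pitch can change without altering duration, but the resampling step above is the basic mechanism.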

Second, says Tomashenko, researchers are studying distributed and federated learning, where your data doesn't leave your device but machine learning models still learn to recognize speech, sharing what they learn with a larger system. Another approach involves building encrypted infrastructure to protect people's voices from eavesdropping. However, most efforts are focused on voice anonymization.
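The federated approach described above can be sketched with federated averaging: each device trains a model on its own data, and only the model parameters, never the raw audio, are sent to a server and averaged. This toy numpy example uses linear-regression weights as a stand-in for a speech model; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=50):
    """One device's training pass; the raw data X, y never leaves it."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Ground-truth model that each device's private data follows.
true_w = np.array([2.0, -1.0])

# Three devices, each holding its own private local data.
devices = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=100)
    devices.append((X, y))

# Federated averaging: the server broadcasts global weights, devices
# train locally, and the server averages the returned weights.
global_w = np.zeros(2)
for _ in range(5):
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(updates, axis=0)

print(global_w)  # approaches true_w, yet no raw data was shared
```

Production systems (e.g. on-device keyboard or wake-word models) add secure aggregation and differential privacy on top, since model updates themselves can still leak information.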

Anonymization attempts to preserve the human voice while removing as much information as possible that could be used to identify you. Speech anonymization efforts currently involve two separate strands: anonymizing the content of what someone says by removing or replacing any sensitive words in files before they are saved, and anonymizing the voice itself. Currently, most voice anonymization efforts involve transmitting someone’s voice through experimental software that changes some parameters of the voice signal to make it sound different. This may include changing the pitch, replacing speech segments with information from other voices, and synthesizing the final output.

Does anonymization technology work? Male and female voice clips that were anonymized as part of the Voice Privacy Challenge in 2020 definitely sound different. They're more robotic, sound slightly pained, and could, at least to some listeners, belong to a different person than the original clips. "I think this can already guarantee a much higher level of protection than doing nothing, which is the current status," says Vincent, who has co-authored research on making people harder to identify through anonymization. However, humans are not the only listeners. Rita Singh, an associate professor at Carnegie Mellon University's Language Technologies Institute, says that complete de-identification of a voice signal is impossible, because machines will always have the potential to make connections between attributes and people, even connections that are not obvious to humans. "Is anonymity with respect to a human listener or a machine listener?" says Sri Narayanan, a professor of electrical and computer engineering at the University of Southern California.
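The human-versus-machine-listener question can be made concrete: speaker-verification systems extract an embedding from audio and compare it to a voice on file with cosine similarity, so anonymization succeeds against a machine only if the anonymized clip's embedding no longer scores above the decision threshold. The sketch below uses random vectors as toy embeddings; the dimensions, noise scales, and threshold are illustrative, not from any real verification system.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, the standard score for comparing embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)

# Toy 64-dim "speaker embeddings" (stand-ins for the x-vectors a
# speaker-verification system would extract from audio).
enrolled = rng.normal(size=64)                            # voice on file
same_speaker = enrolled + rng.normal(scale=0.2, size=64)  # new clip, same voice
anonymized = enrolled + rng.normal(scale=4.0, size=64)    # heavily perturbed clip

THRESHOLD = 0.7  # illustrative accept/reject threshold

print(cosine(enrolled, same_speaker) > THRESHOLD)  # True: re-identified
print(cosine(enrolled, anonymized) > THRESHOLD)    # False: anonymization held
```

Evaluations like the Voice Privacy Challenge essentially run this test at scale, measuring how often a verification system still links anonymized speech back to the original speaker.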

"True anonymity is not possible without a complete change of voice," Singh says. "And when you completely change the voice, it's not the same voice anymore." Regardless, it's still worth developing voice privacy technology, Singh adds, as no privacy or security system is completely secure. Fingerprint and face recognition on iPhones have been faked in the past, but in general they are still effective ways of protecting people's privacy.

Goodbye Alexa

Your voice is increasingly being used to verify your identity. For example, more and more banks and other companies analyze your voiceprint, with your permission, in place of your password. There is also the potential for voice analysis to detect illness before other signs become apparent. But the technology for cloning or spoofing someone's voice is advancing rapidly.

If you have a few minutes of a recording of someone's voice, or in some cases just a few seconds, it is possible to recreate that voice using machine learning; The Simpsons' voice actors could be replaced with deepfake voice clones, for example. And commercial tools for recreating voices are easily accessible online. "There's definitely a lot more work on identifying the speaker and converting speech to text and text to speech than there is on protecting people from any of these technologies," says Turner.

Many of the voice anonymization technologies currently being developed are still far from real-world use. Once they are ready, companies will likely have to implement the tools themselves to protect their customers' privacy; there's not much individuals can do to protect their voice at the moment. Avoiding calls to call centers or companies that use voice analysis, and avoiding voice assistants, can limit how much of your voice is recorded and reduce the potential for attacks.

But the biggest protections may come from court cases and legislation. Europe's GDPR covers biometric data, including people's voices, in its privacy protections. Its guidelines say people should be told how their data is being used and must give consent if they are being identified, and it places some restrictions on personalization. Meanwhile, in the United States, courts in Illinois, which has some of the strictest biometric laws in the country, are increasingly handling cases involving people's voice data. McDonald's, Amazon, and Google all face judicial scrutiny over how they use people's voice data. Decisions in these cases could establish new rules for protecting people's voices.


Credit: www.wired.com
