These earbuds tell everything but your voice to talk to the hand.



We’ve had active noise cancellation in wireless headphones for a while, but it mostly helps the person wearing the headphones drown out the outside world. If you’ve been on the other end of a phone conversation with someone wearing them, you’ll notice that the microphones still pick up much more than the voice you’re trying to pay attention to. This is what open source project ClearBuds tries to solve by adding a layer of deep learning and audio processing to the mix.


I could write a few thousand words here (and I still might), but if an image is worth 1,000 words, then a 23-second video at 30 frames per second is worth almost 700,000 words, and I just can’t compete with that. Check it out:


The ClearBuds project is the result of a research initiative by three University of Washington researchers who were roommates during the pandemic. The system pairs an earbud microphone array with real-time machine learning that can run on a smartphone.


Most earbuds send audio from only one earbud to the phone. The ClearBuds system sends two streams, which can then be parsed and processed quickly enough to be used for live audio such as video or phone calls. The combined algorithm suppresses any non-speech sounds and then enhances the speaker’s voice.

“ClearBuds differ from other wireless earbuds in two key ways,” said co-author Maruchi Kim, a doctoral student at the Paul G. Allen School of Computer Science and Engineering. “First, the ClearBuds use a dual-mic array. The microphones in each earbud create two synchronized audio streams that provide information and allow us to spatially separate sounds coming from different directions with higher resolution. Second, the lightweight neural network further enhances the voice of the speaker.”

“Because the speaker’s voice is near and about the same distance from the two headphones, the neural network can be trained to focus only on their speech and eliminate background sounds, including other voices,” said co-author Ishan Chatterjee. “This method is very similar to how your own ears work. They use the time difference between sounds coming into your left and right ears to determine where the sound is coming from.”
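The intuition Chatterjee describes can be sketched in a few lines of Python. This is a toy illustration, not the ClearBuds algorithm (the real system uses a trained neural network): it estimates the inter-channel delay of each audio frame by cross-correlation, keeps frames whose delay is near zero (sound roughly equidistant from both earbuds, i.e. the wearer’s voice), and attenuates the rest. The function names, frame size, and delay threshold are all made up for the example.

```python
import numpy as np

def estimate_delay(left, right):
    # Delay d (in samples) such that left[t] ~ right[t - d],
    # found as the peak of the full cross-correlation.
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

def gate_by_delay(left, right, frame=256, max_delay=2):
    # Keep frames whose inter-channel delay is near zero (the wearer's
    # own voice arrives at both earbuds almost simultaneously);
    # strongly attenuate everything else. Threshold is illustrative.
    out = np.zeros_like(left)
    for start in range(0, len(left) - frame + 1, frame):
        l = left[start:start + frame]
        r = right[start:start + frame]
        gain = 1.0 if abs(estimate_delay(l, r)) <= max_delay else 0.1
        out[start:start + frame] = gain * (l + r) / 2
    return out

# Toy demo: a zero-delay "voice" plus a noise source that reaches
# the left mic 8 samples later than the right.
rng = np.random.default_rng(0)
n = 2048
voice = np.sin(2 * np.pi * 200 * np.arange(n) / 16000)
noise = rng.normal(size=n + 8)
left = voice + noise[8:]
right = voice + noise[:n]
cleaned = gate_by_delay(left, right)
```

A real system would do this in the time-frequency domain and let a neural network learn the separation instead of hard-gating, but the spatial cue being exploited is the same.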

View the full project page and cross your fingers that this technology finds its way into earbuds soon, because honestly, I can’t wait for the dogs barking, the cars approaching, and my niece singing we-don’t-talk-about-Bruno in the background to go away. Okay, let’s be honest, I’m going to miss the singing. Everything else can go, though.


Credit: techcrunch.com
