Facebook's AI research could spur smarter AR glasses and robots

Facebook is working on augmented reality glasses.

Facebook envisions a future in which you'll learn to play the drums or create a new recipe while wearing augmented reality glasses or other AI-powered devices. To make that future a reality, the social network needs its AI systems to see the world through your eyes.

“This is the world where we’ll have wearable devices that can help provide you and me with information, or bring back memories, at the right times in our daily lives,” said Kristen Grauman, a lead research scientist at Facebook. The technology could eventually be used to analyze our movements, she said, and to help us find misplaced items like our keys.

That future is still a way off, as evidenced by Facebook's Ray-Ban-branded smart glasses, which debuted in September without AR effects. Part of the challenge is training AI systems to better understand photos and videos that people capture from their own perspective, so that AI can help them remember important information.

Facebook says it’s challenging for computers to analyze video shot from a first-person perspective.

Facebook said it recruited 750 people in collaboration with 13 universities and laboratories to capture more than 2,200 hours of first-person video over two years. Participants, who lived in the UK, Italy, India, Japan, Saudi Arabia, Singapore, the US, Rwanda and Colombia, shot videos of themselves playing sports, shopping, watching their pets or gardening. They used a variety of wearable devices, including GoPro cameras, Vuzix Blade smart glasses and ZShads video recording sunglasses.

Starting next month, researchers will be able to request access to this collection of data, which the social network says is the world's largest collection of unscripted first-person video. The new project, called Ego4D, offers a glimpse into how a tech company could improve technologies like AR, virtual reality and robotics so they can play a bigger role in our daily lives.

The company's work comes at a tumultuous time for Facebook. The social network has faced scrutiny from lawmakers, advocacy groups and the public after The Wall Street Journal published a series of stories showing that the company's internal research made it aware of the platform's harms, even as it publicly downplayed them. Frances Haugen, a former Facebook product manager turned whistleblower, testified before Congress last week about the contents of thousands of pages of confidential documents she took before leaving the company in May. She's scheduled to testify in the UK and to meet with Facebook's semi-independent oversight board in the near future.

Even before Haugen's revelations, Facebook's smart glasses sparked concern from critics, who worried the device could be used to secretly record people. The social network said it addressed privacy concerns in its first-person video research: camera wearers could review and delete their own footage, and the company blurred the faces of bystanders and any license plates that appeared on camera.

Fostering more AI research

Laundry and cooking look different in videos from different countries.

As part of the new project, Facebook said, it created five benchmark challenges for researchers. The benchmarks include episodic memory, so AI knows what happened when; forecasting, so computers know what a person is likely to do next; and hand and object manipulation, to understand what a person is doing in a video. The final two benchmarks involve understanding who said what, and when, in a video, and which participants are interacting with one another.

“That just sets things up to start,” Grauman said. “It's often quite powerful, because now you'll have a systematic way to evaluate the data.”

Helping AI understand first-person video can be challenging because computers typically learn from images that are shot from a viewer’s third-person perspective. Challenges like motion blur and footage from different angles arise when you record yourself kicking a soccer ball or riding a roller coaster.

Facebook said it's looking to expand the project to more countries. The company said it's important to diversify the video footage because, if AR glasses are going to help a person cook curry or do laundry, the AI assistant needs to understand that those activities can look different in different regions of the world.

Facebook said the video dataset covers a wide variety of activities shot in 73 locations across nine countries. The participants included people of various ages, genders and professions.

The COVID-19 pandemic also imposed limitations on the research. For example, more of the footage in the data set depicts stay-at-home activities such as cooking or crafting than public events.

Universities that partnered with Facebook include the University of Bristol in the UK, Georgia Tech in the US, the University of Tokyo in Japan and the Universidad de los Andes in Colombia.
