Perceptron: AI bias may arise from annotation instructions

Research on machine learning and artificial intelligence, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron (formerly Deep Science), aims to collect some of the most notable recent discoveries and papers, particularly in but not limited to artificial intelligence, and explain why they matter.

This week in AI, new research shows how bias, a common problem in AI systems, can start with the instructions given to the people hired to annotate the data from which AI systems learn to make predictions. The co-authors find that annotators pick up on patterns in the instructions, which condition them to contribute annotations that then become overrepresented in the data, biasing the AI system toward those annotations.

Today, many AI systems “learn” to make sense of images, video, text, and audio from examples that have been labeled by annotators. The labels allow the systems to extrapolate the relationships between the examples (for example, the link between the caption “kitchen sink” and a photo of a kitchen sink) to data the systems haven’t seen before (for example, photos of kitchen sinks that weren’t included in the data used to “train” the model).

It works remarkably well. But annotation is an imperfect approach: annotators bring biases to the table that can leak into the trained system. For example, studies have shown that the average annotator is more likely to label phrases in African American Vernacular English (AAVE), the informal grammar used by some Black Americans, as toxic, leading AI toxicity detectors trained on those labels to see AAVE as disproportionately toxic.

As it turns out, annotator bias may not be the only reason bias shows up in training labels. In a preprint study, researchers at Arizona State University and the Allen Institute for AI investigated whether a source of bias might lie in the instructions written by dataset creators to serve as guides for annotators. Such instructions typically include a short description of the task (for example, “Label all the birds in these photos”) along with several examples.

Image credits: Parmar et al.

The researchers looked at 14 different “benchmark” datasets used to measure the performance of natural language processing systems, i.e., AI systems that can classify, summarize, translate, and otherwise analyze or manipulate text. Examining the task instructions given to the annotators who worked on the datasets, they found evidence that the instructions nudged the annotators toward certain patterns, which then propagated into the datasets. For example, over half of the annotations in Quoref, a dataset designed to test AI systems’ ability to understand when two or more expressions refer to the same person (or thing), begin with the phrase “What’s your name?”, a phrase present in a third of the instructions for the dataset.
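To make the effect concrete, here’s a minimal sketch (our illustration, not the paper’s code) of one way to quantify it: measure what fraction of annotations open with a phrase that also opens one of the instructions’ examples. The toy data below is hypothetical, in the spirit of the Quoref example above.

```python
# Toy sketch (not the study's code): estimate how often annotations
# reuse the opening phrase of an instruction's worked example.
def opening_phrase(text: str, n_words: int = 3) -> str:
    """First n_words of a text, lowercased, as a crude phrase key."""
    return " ".join(text.lower().split()[:n_words])

def instruction_bias_rate(instruction_examples: list[str],
                          annotations: list[str],
                          n_words: int = 3) -> float:
    """Fraction of annotations whose opening phrase also opens an instruction example."""
    openers = {opening_phrase(ex, n_words) for ex in instruction_examples}
    hits = sum(opening_phrase(a, n_words) in openers for a in annotations)
    return hits / len(annotations)

# Hypothetical instruction examples and collected annotations:
examples = ["What's your name? Answer using a span from the passage."]
collected = [
    "What's your name? Who signed the letter?",
    "What's your name? Which team won the match?",
    "Where was the author born?",
]
print(instruction_bias_rate(examples, collected))  # ~0.67
```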

The phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on instruction/annotation-biased data might not perform as well as initially thought. Indeed, the co-authors found that instruction bias leads to overestimating the performance of systems, and that these systems often fail to generalize beyond the instruction patterns.

On the bright side, large systems such as OpenAI’s GPT-3 were shown to be generally less sensitive to instruction bias. But the study serves as a reminder that AI systems, like people, are prone to developing biases from sources that aren’t always obvious. Finding those sources and mitigating their impact remains an open challenge.

In less sobering news, scientists from Switzerland concluded that facial recognition systems aren’t easily fooled by realistic AI-edited faces. “Morphing attacks,” as they’re called, involve using AI to modify the photo on an ID card, passport, or other identity document with the aim of bypassing security systems. The co-authors created “morphs” with AI (Nvidia’s StyleGAN 2) and tested them against four state-of-the-art facial recognition systems. The morphs didn’t pose a significant threat, they reported, despite their true-to-life appearance.

Elsewhere in computer vision, researchers at Meta have developed an AI “assistant” that can remember the characteristics of a room, including the location and context of objects, in order to answer questions. Detailed in a preprint paper, the work is most likely part of Meta’s Project Nazare, an initiative to develop augmented reality glasses that use AI to analyze their surroundings.

Image credits: Meta

Designed for use on any body-worn, camera-equipped device, the system analyzes footage to build “semantically rich and efficient scene memories” that “encode spatio-temporal information about objects.” The system remembers where objects are and when they appeared in the video footage, and it uses that memory to answer questions a user might ask about the objects. For example, asked “Where did you last see my keys?”, the system can indicate that the keys were on a table in the living room that morning.
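As a rough illustration of what such a spatio-temporal scene memory might look like (a minimal sketch under our own assumptions, not Meta’s implementation), each detected object can be logged with a place and a timestamp, and a “last seen” query simply returns the most recent sighting:

```python
# Minimal sketch of a spatio-temporal object memory (illustrative only).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sighting:
    obj: str        # object label, e.g. from a detector running on the video feed
    place: str      # semantic location, e.g. "living room table"
    time: datetime  # when the object appeared in the footage

class SceneMemory:
    def __init__(self) -> None:
        self.sightings: list[Sighting] = []

    def observe(self, obj: str, place: str, time: datetime) -> None:
        """Record that an object was seen at a place and time."""
        self.sightings.append(Sighting(obj, place, time))

    def last_seen(self, obj: str) -> Sighting | None:
        """Answer 'Where did you last see my <obj>?' with the latest sighting."""
        matches = [s for s in self.sightings if s.obj == obj]
        return max(matches, key=lambda s: s.time) if matches else None

memory = SceneMemory()
memory.observe("keys", "kitchen counter", datetime(2022, 5, 13, 18, 0))
memory.observe("keys", "living room table", datetime(2022, 5, 14, 9, 30))
print(memory.last_seen("keys"))  # keys, living room table, that morning
```

A real system would, of course, derive the object labels and locations from vision models rather than hand-entered strings; the point is only that “where and when” pairs are enough to answer this class of question.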

Meta, which reportedly plans to release full-featured augmented reality glasses in 2024, announced its plans for “egocentric” AI last October with the launch of Ego4D, a long-term “egocentric perception” AI research project. At the time, the company said the goal was to teach AI systems to, among other things, understand social cues, how an AR device wearer’s actions might affect their surroundings, and how hands interact with objects.

From language and augmented reality to physical phenomena: an AI model has proved useful in an MIT study of waves, how they break and when. While it might seem a little arcane, wave models are in fact needed both for building structures in and near the water and for modeling how the ocean interacts with the atmosphere in climate models.

Image credits: Massachusetts Institute of Technology

Normally waves are roughly modeled by a set of equations, but the researchers instead trained a machine learning model on hundreds of wave instances in a 40-foot tank of water fitted with sensors. By observing the waves and making predictions from the empirical data, then comparing those to the theoretical models, the AI helped show where the models fell short.
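The paper’s setup is far more sophisticated, but the workflow can be sketched in a few lines: score a classic theoretical breaking criterion against measurements, and fit a simple data-driven model on the same features for comparison. The H/d ≈ 0.78 threshold is McCowan’s well-known rule of thumb; the data below is synthetic and everything else is illustrative, not the MIT team’s code.

```python
# Illustrative sketch: compare a theoretical wave-breaking criterion
# with a model fit directly to (here, synthetic) measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for tank measurements: wave height H, water depth d,
# and whether each wave broke (noisy "ground truth").
H = rng.uniform(0.1, 1.0, 500)
d = rng.uniform(0.5, 2.0, 500)
broke = (H / d + rng.normal(0.0, 0.05, 500)) > 0.85

# Theory: McCowan's criterion says a wave breaks when H/d exceeds ~0.78.
theory_pred = (H / d) > 0.78

# Data-driven model: logistic regression on the same measurements.
X = np.column_stack([H, d, H / d])
learned_pred = LogisticRegression().fit(X, broke).predict(X)

print("theory accuracy: ", (theory_pred == broke).mean())
print("learned accuracy:", (learned_pred == broke).mean())
```

Where the learned model and the theoretical criterion disagree is exactly where the equations deserve a second look, which is the spirit of the MIT result.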

A startup born out of research at EPFL, where Thibaut Asselborn completed his PhD in handwriting analysis, has turned into a full-fledged educational application. Using algorithms he designed, the app (called School Rebound) can identify habits and corrective measures in as little as 30 seconds of a child writing on an iPad with a stylus. These are presented to the child in the form of games that help them write more clearly by reinforcing good habits.

“Our scientific model and rigor is important and sets us apart from other existing applications,” Asselborn said in a press release. “We have received letters from teachers who have seen their students improve their skills by leaps and bounds. Some students even come before class to practice.”

Image credits: Duke University

Another new development aimed at elementary schools concerns identifying hearing problems during routine screenings. These screenings, which some readers may remember, often use a device called a tympanometer, which must be operated by trained audiologists. If one isn’t available, say in an isolated school district, children with hearing problems may never get the help they need in time.

Samantha Robler and Susan Emmett at Duke decided to build a tympanometer that essentially operates itself, sending the data to a smartphone app where it’s interpreted by an AI model. Anything worrying is flagged, and the child can receive further evaluation. It’s not a replacement for an expert, but it’s a lot better than nothing, and it may help identify hearing problems much earlier in places without the proper resources.
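For a sense of how such automated triage could work, here’s a toy rule-based sketch using commonly cited tympanometry reference ranges, not anything from the actual app (which uses a trained model and age-specific norms):

```python
# Toy triage sketch (not the Duke team's model): flag tympanometer
# readings outside commonly cited normal ranges for expert follow-up.
from dataclasses import dataclass

@dataclass
class TympanogramReading:
    peak_pressure_dapa: float  # middle-ear pressure at peak, in daPa
    compliance_ml: float       # peak static compliance, in ml

def needs_followup(reading: TympanogramReading) -> bool:
    """Return True if the reading looks atypical and merits expert review."""
    # Rough reference ranges often quoted for a normal ("Type A") tympanogram.
    pressure_ok = -100.0 <= reading.peak_pressure_dapa <= 50.0
    compliance_ok = 0.3 <= reading.compliance_ml <= 1.4
    return not (pressure_ok and compliance_ok)

print(needs_followup(TympanogramReading(-20.0, 0.6)))   # False: looks typical
print(needs_followup(TympanogramReading(-250.0, 0.1)))  # True: flag for review
```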


Credit: techcrunch.com
