When AI breaks bad

A new report on artificial intelligence and its effects warns that AI has reached a turning point and that its negative effects can no longer be ignored.

The big picture: For all the sci-fi concerns about ultra-intelligent machines or wide-scale job losses from automation – both of which would require artificial intelligence far more capable than anything developed so far – the bigger concern may be what happens if AI doesn’t work as intended.

Background: The AI100 project – launched by Eric Horvitz, who served as Microsoft’s first chief scientific officer, and hosted by the Stanford Institute for Human-Centered AI (HAI) – is meant to provide a longitudinal study of a technology that seems to change by the day.

  • The new update, published Thursday – the second in a planned century of work – gathers input from a committee of experts to examine the state of AI between 2016 and 2021.
  • “It’s effectively the IPCC for the AI community,” says Toby Walsh, an AI expert at the University of New South Wales and a member of the project’s standing committee.

What they’re saying: The panel found that AI has made remarkable progress over the past five years, particularly in natural language processing (NLP) – the ability of AI systems to analyze and generate human language.

  • The experts concluded that “to date, the economic importance of AI has been comparatively small,” but the technology has advanced to the point where “real-world impacts on people, institutions and culture” are occurring.

The catch: That means AI has reached the point where its real-world downsides are getting harder to miss – and harder to stop.

  • “All you have to do is open the newspaper, and you can see the real risks and threats to democratic principles, mental health, and more,” says Walsh.

Between the lines: The most immediate concern about AI is what happens if it becomes embedded in daily life before its kinks are fully worked out.

  • Companies have begun to employ OpenAI’s massive GPT-3 NLP model to analyze customer data and produce content, but there are persistent problems with bias encoded in large text-generating systems. A new paper released this week found that the largest models often repeat falsehoods and misinformation.
  • Walsh points to Australia, where it was announced this week that the country’s two largest states will allow police to use facial recognition technology to check whether people in COVID-19 quarantine are staying at home.
  • “This has already been implemented without debate, even though we know that facial recognition carries serious risks of bias, especially for people of color,” he says.

Context: Australia’s move is an example of using AI to try to solve difficult social problems like pandemics – what the panel calls “techno-solutionism” – rather than treating AI as what it should be: one tool among many.

  • An AI system used to determine who receives bank loans or insurance may carry what the panel calls an “aura of neutrality and fairness” because its output appears to be the product of a machine rather than a human, yet its decisions “may be the result of biased historical judgments or gross discrimination.”
  • “The racism, sexism, ageism in our society is going to be part of the AI system we’ve built,” Walsh says.
  • Unless we recognize that fact, AI can inadvertently perpetuate existing social ills, hiding human biases inside the black box of an algorithm.

What to watch: Whether governments and companies listen to critics like UN High Commissioner for Human Rights Michelle Bachelet, who earlier this week called for a moratorium on the sale and use of AI systems that could threaten human rights – particularly in law enforcement.
