Don’t worry about AI becoming sentient. Worry about it finding new ways to discriminate against people.


  • The story of a Google engineer saying the company created sentient AI recently went viral.

  • Google’s AI chatbot is not sentient, seven experts told Insider.

  • Three experts told Insider that AI bias is a far bigger concern than sentience.

First, the good news: sentient AI is nowhere near a reality. Now the bad news: there are plenty of other problems with AI.

A story about supposedly sentient AI recently went viral. Google engineer Blake Lemoine went public with his belief that the company’s chatbot LaMDA (Language Model for Dialogue Applications) had achieved sentience.

Seven AI experts who spoke to Insider unanimously rejected Lemoine’s theory that LaMDA is sentient. Among them was a Google employee who has worked directly with the chatbot.

However, AI doesn’t need to be sentient to do serious harm, experts told Insider.

AI bias, in which systems replicate and amplify historical patterns of human discrimination, is well documented.

Facial recognition systems have been found to exhibit racial and gender bias, and in 2018 Amazon shut down an AI recruiting tool it had developed because the tool systematically discriminated against female candidates.

“When predictive algorithms or so-called ‘AI’ are so widely used, it can be hard to recognize that these predictions are often based on little more than a rapid regurgitation of crowdsourced opinions, stereotypes, or lies,” says Dr. Nakeema Stefflbauer, an AI ethicist and CEO of the women-in-tech network FrauenLoop.

“Perhaps it’s fun to speculate about the ‘sentience’ of automatically generated, historically correlated word strings, but that’s a dishonest exercise when, right now, algorithmic predictions are unfairly excluding, stereotyping, and targeting individuals and communities based on data pulled from, say, Reddit,” she told Insider.

Professor Sandra Wachter of the University of Oxford detailed in a recent paper that AI not only shows bias against protected characteristics like race and gender, but also finds new ways to categorize and discriminate against people.

For example, the browser you use to apply for a job could lead AI recruiting systems to favor or downgrade your application.
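To make that mechanism concrete, here is a minimal sketch in Python, using entirely hypothetical data and feature names: if past hiring decisions happened to correlate with an incidental signal like browser choice, a model trained on those decisions will assign it real weight when scoring new applicants.

    # Minimal sketch with hypothetical data: an incidental feature (browser
    # choice) that correlates with past hiring outcomes gets real weight in
    # a trained model, so it sways scores for new applicants.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    years_experience = rng.integers(0, 15, n)   # plausibly job-relevant
    uses_niche_browser = rng.integers(0, 2, n)  # incidental metadata

    # Hypothetical historical labels: past decisions happened to correlate
    # with browser choice, not just experience.
    hired = ((years_experience > 5) & (uses_niche_browser == 0)).astype(int)

    X = np.column_stack([years_experience, uses_niche_browser])
    model = LogisticRegression(max_iter=1000).fit(X, hired)

    # Two applicants with identical experience, differing only in browser:
    same_experience = np.array([[8, 0], [8, 1]])
    print(model.coef_)  # nonzero weight on the browser column
    print(model.predict_proba(same_experience)[:, 1])  # different scores

In this toy setup, two applicants with identical experience receive different scores purely because of browser metadata, the kind of unprotected “new group” Wachter describes below.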

Wachter’s concern is the lack of a legal framework to prevent AI from finding new ways to discriminate.

“We know that AI picks up patterns of past injustice in hiring, lending, or criminal justice and carries them into the future. But AI also creates new groups that are not protected by law to make important decisions,” she said.

“These issues require urgent responses. Let’s deal with them first, and worry about sentient AI if and when we are actually about to cross that bridge,” Wachter added.

Laura Edelson, a computer science researcher at New York University, says AI systems also give the people who deploy them a way to dodge responsibility when those systems prove discriminatory.

“A common use case for machine learning systems is making decisions that humans don’t want to make, as a way of abdicating responsibility. ‘It’s not me, it’s the system,’” she told Insider.

Stefflbauer thinks the hype around sentient AI is actively overshadowing more pressing issues of AI bias.

“We are derailing the work of world-class AI ethics researchers, who now have to debunk these stories of algorithmic evolution and ‘sentience,’ leaving no time or media attention for the increasing harms that predictive systems enable.”

Read the original article on Business Insider