Covering Scientific & Technical AI | Friday, December 13, 2024

More Academic Rigor – and Less Hype – Are Needed to Solve AI Bias Problems 

Let’s face it: We’re infatuated with AI. From smart chatbots to image recognition to self-driving cars, we’re absolutely enamored with the superpower-like abilities it gives us. But unless we incorporate stronger processes to identify and remediate biased data and biased algorithms, experts say, we run the risk of automating bad decisions at a truly ghoulish scale.

We’re getting out over our skis with AI, according to Patrick Hall, principal scientist at the AI-focused law firm bnh.ai and a visiting professor at George Washington University.

“Machine learning has been used in banking and national security and these different narrow sectors since before the dawn of personal computing,” Hall says. “But what’s new is it’s being deployed like bubblegum machines, and people just aren’t testing it properly. That includes specifically testing for bias, but also: does the thing work?”

Hall cited the Gender Shades project as an example of the harmful effects of poorly implemented machine learning. The project, spearheaded by MIT Media Lab’s Joy Buolamwini and former Google data scientist Timnit Gebru (a 2021 Datanami Person to Watch), identified differences in how facial recognition systems used by law enforcement performed across different groups of people.

“The accuracy disparity between white males and women of color was 40%,” Hall tells Datanami. “[That’s] superhuman accuracy recognition of white males, and very poor recognition accuracy of women of color.”

The Gender Shades project evaluates the accuracy of AI-powered gender classification products.


That disparity leads to all sorts of bad outcomes, such as arresting the wrong person, which has occurred several times with these automated systems, Hall says. Other areas where biased algorithms and biased data can cause poor outcomes include employment, housing, and credit.

Business leaders in historically regulated industries, like finance, are aware of the problems with AI. But many outside of that space are clueless, Hall says.

“I would say in those regulated areas people are being thoughtful,” he says. “Outside of that, it’s the Wild West.”

Read the rest of this story in its entirety here on our sister web site, Datanami.

About the author: Alex Woodie

Alex Woodie has written about IT as a technology journalist for more than a decade. He brings extensive experience from the IBM midrange marketplace, including topics such as servers, ERP applications, programming, databases, security, high availability, storage, business intelligence, cloud, and mobile enablement. He resides in the San Diego area.
