More Than One in Three Firms Burned by AI Bias

Bias in AI systems can lead to significant losses for companies, according to a new survey by an enterprise AI company.

More than one in three companies (36 percent) revealed that they had suffered losses due to AI bias in one or more algorithms, noted the DataRobot survey of more than 350 US and UK technologists, including CIOs, IT managers, data scientists, and development leaders using or planning to use AI.

Of the companies damaged by AI bias, more than half lost revenue (62 percent) or customers (61 percent), while nearly half lost employees (43 percent) and more than a third incurred legal fees from lawsuits (35 percent), according to the survey, which was conducted in collaboration with the World Economic Forum and global academic leaders.

Biased AI can affect revenue in a number of ways, said Kay Firth-Butterfield, head of AI and machine learning and a member of the executive committee at the World Economic Forum, an international non-governmental and lobbying organization based in Cologny, Switzerland.

“If you pick the wrong person through a biased HR algorithm, that could hurt revenue,” she told TechNewsWorld.

“If you are lending money and you have a biased algorithm, you will not be able to grow your business because you will always be lending to a small subset of people that you have always been lending money to,” she added.

Unintentional but still harmful

Survey participants also revealed that algorithms used by their organizations inadvertently contributed to bias against people by gender (34 percent), age (32 percent), race (29 percent), sexual orientation (19 percent), and religion (18 percent).

“AI-based discrimination, even if unintentional, can have serious revenue, reputational and regulatory impacts,” Forrester warned in a recent report on AI fairness.

“While most organizations embrace fairness in AI as a principle, implementing the processes to practice it consistently is challenging,” the report continued. “There are multiple criteria for evaluating the fairness of AI systems, and determining the correct approach depends on the use case and its social context.”

Matthew Feeney, director of the Project on Emerging Technologies at the Cato Institute, a Washington, DC think tank, explained that AI bias is complicated, but that the bias many people attribute to AI systems is a product of the data used to train them.

“One of the most prominent uses of AI in the news these days is facial recognition,” he told TechNewsWorld. “There has been widespread documentation of racial bias in facial recognition.

“The systems are much less reliable when it comes to identifying black people,” he explained. “That happens when a system is trained with photos that do not represent enough people of a particular racial group, or when the photos of that group are not of good quality.”

“It is not necessarily caused by any nefarious intent on the part of the engineers and designers, but is a product of the data used to train the system,” he said.
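The dynamic Feeney describes can be made concrete with a toy sketch. The Python snippet below uses synthetic one-dimensional “match scores,” not a real face-recognition pipeline: a naive threshold classifier is fit on training data where one group makes up only 5 percent of the sample, and the error-rate gap between groups falls out of that skew alone.

```python
import numpy as np

# Toy illustration of training-data bias (synthetic data, hypothetical
# groups): a classifier fit on data dominated by one group ends up less
# reliable for the under-represented group.

rng = np.random.default_rng(42)

def sample(n, shift):
    # 1-D "match scores": true matches center at 2 + shift, non-matches
    # at 0 + shift, so each group needs a different cutoff.
    pos = rng.normal(2.0 + shift, 1.0, n)
    neg = rng.normal(0.0 + shift, 1.0, n)
    x = np.concatenate([pos, neg])
    y = np.concatenate([np.ones(n, bool), np.zeros(n, bool)])
    return x, y

# Training set: 95 percent group A (shift 0.0), 5 percent group B (shift 1.0).
xa, ya = sample(950, 0.0)
xb, yb = sample(50, 1.0)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Fit" the simplest possible model: a threshold halfway between class means.
threshold = (x_train[y_train].mean() + x_train[~y_train].mean()) / 2

# Evaluate on balanced test sets for each group.
for name, shift in [("group A", 0.0), ("group B", 1.0)]:
    x_test, y_test = sample(5_000, shift)
    error_rate = np.mean((x_test > threshold) != y_test)
    print(f"{name} error rate: {error_rate:.1%}")
```

Nothing in the snippet encodes any intent to treat the groups differently; the disparity comes entirely from the skewed training sample, which is Feeney’s point.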

“People who create algorithms bring their own biases into creating those algorithms,” added Firth-Butterfield. “If a 30-year-old white man creates an algorithm, the biases he brings are likely to be different from those of a 30-year-old African-American woman.”

Bias versus discrimination

Daniel Castro, vice president of the Information Technology and Innovation Foundation, a public policy and research organization in Washington, DC, argued that people play fast and loose with the term AI bias.

“I would define AI bias as a consistent error in the accuracy of an algorithm, that is, a difference between an estimate and its true value,” he told TechNewsWorld.
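Castro’s definition lends itself to a quick illustration. The sketch below uses entirely synthetic property values and two hypothetical groups; bias in his sense shows up as a systematic gap between estimates and true values that random noise alone would not produce.

```python
import numpy as np

# Minimal sketch of bias in Castro's sense: a systematic gap between an
# algorithm's estimates and the true values, measured overall and per
# group. The data and the groups are entirely synthetic (hypothetical).

rng = np.random.default_rng(0)

true_values = rng.uniform(100_000, 500_000, 1_000)  # true property values
group = rng.integers(0, 2, 1_000)                   # two hypothetical groups

# Simulate a model that systematically under-values group 1 by 5 percent,
# plus random noise (error, but not bias, since it averages out).
estimates = np.where(group == 1, 0.95 * true_values, true_values)
estimates = estimates + rng.normal(0, 10_000, 1_000)

errors = estimates - true_values
print(f"overall mean error: {errors.mean():,.0f}")
print(f"group 0 mean error: {errors[group == 0].mean():,.0f}")
print(f"group 1 mean error: {errors[group == 1].mean():,.0f}")
```

The noise term contributes error but no bias, since it averages out near zero; the persistent per-group gap is what Castro’s definition targets.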

“Most companies have strong market incentives to eliminate AI bias because they want their algorithms to be accurate,” he said.

“For example,” he continued, “if the algorithm recommends the wrong product to a buyer, then the company is leaving money on the table for a competitor.”

“There are also reputational reasons why companies want to eliminate AI bias, as their products or services can be viewed as lacking,” he added.

He explained that sometimes market forces to eliminate bias are ineffective.

“For example, if a government agency uses an algorithm to estimate property values for tax purposes, there may not be a good market mechanism to correct for bias,” he explained. “In these cases, the government must provide alternative oversight, such as through transparency measures.”

“But sometimes people refer to AI bias when they really only mean discrimination,” he added. “If a landlord discriminates against certain tenants, we must enforce existing anti-discrimination laws, whether the landlord uses an algorithm or a human being to discriminate against others.”

Regulation in the wings

The DataRobot survey also asked participants about AI regulation. Eight in 10 of the technologists (81 percent) said government regulation could be helpful in two areas: defining and preventing bias.

Yet nearly half of those surveyed (45 percent) admitted they were concerned that regulation could increase the cost of doing business.

Additionally, nearly a third of those surveyed (32 percent) expressed concern that without regulation, certain groups of people could be harmed.

“You’re seeing a lot of calls for that kind of thing, but AI is too broad when it comes to regulation,” Feeney said. “You are talking about facial recognition, driverless cars, military applications and many others.”

There will be a lot of discussion about AI regulation in 2022, global professional services firm Deloitte predicted, though it doesn’t believe full enforcement of the regulations will take place until 2023.

Some jurisdictions may even attempt to ban subfields of AI altogether, such as facial recognition in public spaces, social scoring and subliminal techniques, the firm noted.

“AI is tremendously promising, but we are likely to see increased scrutiny in 2022 as regulators seek to better understand the privacy and data security implications of emerging AI applications and implement strategies to protect consumers,” said Paul Silverglate, Deloitte’s US technology industry leader, in a press release.

“Technology companies are at a convergence point where they can no longer leave ethical issues like this to fate,” he warned. “What is needed is a holistic approach to addressing ethical responsibility. Companies that take this approach, especially in newer areas like AI, can expect greater acceptance, more trust, and higher revenue.”
