In 2018, Liz O’Sullivan and her colleagues at a popular artificial intelligence startup began working on a system that could automatically remove nudity and other sexually explicit images from the internet.
They sent millions of photos to workers in India, who spent weeks adding tags to pornographic material. The data, paired with the photos, would be used to teach AI software how to recognize indecent images. But once the photos were tagged, Ms. O’Sullivan and her team noticed a problem: the workers in India had classified all images of same-sex couples as indecent.
For Ms. O’Sullivan, that moment showed how easily – and how often – bias can creep into artificial intelligence. It was a “cruel game of Whac-a-Mole,” she said.
This month, Ms. O’Sullivan, a 36-year-old New Yorker, was named chief executive of a new company, Parity. The startup is one of many organizations, including more than a dozen startups and some of the biggest names in the tech industry, offering tools and services designed to identify and remove bias from AI systems.
Before long, businesses may need that help. In April, the Federal Trade Commission warned against the sale of AI systems that are racially biased or could prevent individuals from getting jobs, housing, insurance or other benefits. A week later, the European Union published draft regulations that could punish companies for providing such technology.
It is not yet clear how regulators might police bias. Last week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in AI, including changes in the way technology is conceived and built.
Many in the tech industry believe businesses must start preparing for a crackdown. “Some type of law or regulation is unavoidable,” said Christian Troncoso, senior director of regulatory policy at the Software Alliance, a trade group representing some of the largest and oldest software companies. “Every time there is one of these terrible stories about AI, it destroys public trust and confidence.”
Over the years, studies have shown that facial recognition services, healthcare systems and even talking digital assistants can be biased against women, people of color and other disadvantaged groups. Amid growing complaints about the issue, some local regulators have already taken action.
In late 2019, state regulators in New York opened an investigation into UnitedHealth Group after a study found that an algorithm used by hospitals prioritized care for white patients over black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims that it discriminated against women. Regulators ruled that Goldman Sachs, the company that operates the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.
UnitedHealth spokesman Tyler Mason said the company’s algorithm was misused by one of its partners and was free from racial bias. Apple declined to comment.
More than $100 million has been invested over the past six months in companies exploring ethical issues related to artificial intelligence, after $186 million last year, according to PitchBook, a research firm that tracks financial activity.
But efforts to tackle the problem gained new momentum this month when the Software Alliance released a detailed framework for fighting bias in AI, including an acknowledgment that some automated technologies require ongoing human oversight. The trade group believes the document can help companies change their behavior and can show regulators and lawmakers how to bring the problem under control.
Although they have been criticized for bias in their own systems, Amazon, IBM, Google, and Microsoft also provide tools to combat it.
Ms. O’Sullivan said there is no simple solution to bias in AI. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes it to be.
“Change doesn’t happen overnight – and that’s even more true when you’re talking about big companies,” she said. “You’re trying to change not just one person’s mind, but many.”
When Ms. O’Sullivan began advising businesses on AI bias more than two years ago, she was often met with skepticism. Many executives and engineers espoused what they called “fairness through unawareness,” arguing that the best way to build equitable technology was to ignore issues like race and gender.
Increasingly, companies are building systems that learn tasks by analyzing large amounts of data, including images, sounds, text, and statistics. The belief is that if a system learns from as much data as possible, fairness will follow.
But as Ms. O’Sullivan saw with the tagging done in India, bias can creep into a system when designers choose the wrong data or sort it in the wrong way. Studies show that facial recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.
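This dynamic is easy to reproduce. The sketch below is a minimal, hypothetical illustration – not any real company’s pipeline – in which a classifier is trained on data dominated by one group, labeled by a rule that fits that group, and its accuracy on the underrepresented group suffers. The group sizes, features and labeling rules are invented for illustration.

```python
# Minimal sketch: a model trained mostly on "group A" performs worse on
# the underrepresented "group B". All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; each group's "true" decision boundary sits at a
    # different threshold, standing in for group-specific context.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training set dominated by group A (950 vs. 50 examples).
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluated on balanced held-out sets, the minority group fares worse.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

The model simply fits the majority of its training data; nothing in the algorithm is malicious, which is what makes this kind of bias so easy to miss.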
Designers can be blind to these problems. The workers in India – where same-sex relationships were still illegal at the time and where attitudes towards gays and lesbians were very different from those in the United States – classified the photos as they saw fit.
Ms. O’Sullivan saw the flaws and pitfalls of artificial intelligence while working for Clarifai, the company that ran the tagging project. She said she had left the company after realizing it was building systems for the military that she believed could eventually be used to kill people. Clarifai did not respond to a request for comment.
Now, she believes, after years of public complaints about bias in AI – not to mention the threat of regulation – attitudes are changing. In its new framework for curbing harmful bias, the Software Alliance warned against “fairness through unawareness,” arguing that the argument does not hold up.
“They admit that you need to flip the rocks and see what’s underneath,” Ms. O’Sullivan said.
However, there is still resistance. She said a recent clash at Google, where two ethics researchers were pushed out, was indicative of the situation at many companies. Efforts to combat bias often clash with corporate culture and the relentless push to build new technology, get it out the door, and start making money.
It is also difficult to know how serious the problem is. “We have very little of the data needed to model the broader societal safety issues with these systems, including bias,” said Jack Clark, one of the authors of the AI Index. “Many things that the average person cares about – such as fairness – have yet to be measured in a disciplined or large-scale way.”
Ms. O’Sullivan, a philosophy major in college and a member of the American Civil Liberties Union, is building her company around a tool designed by Rumman Chowdhury, a renowned AI ethics researcher who spent several years at the business consulting firm Accenture before joining Twitter.
While other startups, like Fiddler AI and Weights and Biases, offer tools for monitoring AI services and identifying potentially biased behavior, Parity’s technology aims to analyze the data, technologies and methods a business uses to build its services, and then to identify areas of risk and recommend changes.
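As a hypothetical example of the kind of risk check such a tool might run – this is not Parity’s actual method, and the data below is invented – an auditor can compare a model’s approval rates across demographic groups, a common “disparate impact” screen in which a ratio below roughly 0.8 is often flagged for review.

```python
# Hypothetical disparate-impact screen: compare positive-decision rates
# across groups. Not Parity's actual method; all data is invented.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group, and the ratio of the lowest to the highest.
rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates.to_string())
print(f"selection-rate ratio: {ratio:.2f}")  # ratios below ~0.8 often flagged
```

A screen like this only flags a disparity; deciding whether it reflects unfair treatment still requires the kind of human review the Software Alliance’s framework calls for.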
Parity’s tool uses artificial intelligence technology that can be biased in its own right, revealing the double-edged nature of AI – and the difficulty of Ms. O’Sullivan’s task.
Tools that identify bias in AI are imperfect, just as AI itself is imperfect. But the power of such a tool, she said, is to pinpoint potential problems – to get people to scrutinize the issue closely.
Ultimately, she explained, the aim is to create a broader dialogue among people with many points of view. Trouble arises when the problem is ignored – or when those discussing it all share the same point of view.
“You need different perspectives. But can you get truly diverse perspectives at one company?” Ms. O’Sullivan asked. “That’s a very important question I’m not sure I can answer.”