Google execs say we need a plan to stop A.I. algorithms from amplifying racism

Two Google executives said Friday that bias in artificial intelligence is harming already marginalized communities in America, and that more needs to be done to prevent it. X. Eyeé, outreach lead for responsible innovation at Google, and Angela Williams, policy manager at Google, spoke at the (Not IRL) Pride Summit, an event organized by Lesbians Who Tech & Allies, the world’s largest technology-focused LGBTQ organization for women, non-binary, and trans people.

In separate talks, they addressed the ways in which machine learning technology can be used to harm the black community and other communities in America — and more widely around the world.

https://twitter.com/TechWithX/status/1276613096300146689

Williams discussed the use of A.I. for sweeping surveillance, its role in over-policing, and its implementation for biased sentencing. “[It’s] not that the technology is racist, but we can code in our own unconscious bias into the technology,” she said. Williams highlighted the case of Robert Julian-Borchak Williams, an African American man from Detroit who was recently wrongly arrested after a facial recognition system incorrectly matched his photo with security footage of a shoplifter. Previous studies have shown that facial recognition systems can struggle to distinguish between different black people. “This is where A.I. … surveillance can go terribly wrong in the real world,” Williams said.

X. Eyeé also discussed how A.I. can help “scale and reinforce unfair bias.” Beyond the more dystopian, attention-grabbing uses of A.I., Eyeé focused on the way bias can creep into seemingly mundane, everyday uses of technology, including Google’s own tools. “At Google, we’re no stranger to these challenges,” Eyeé said. “In recent years … we’ve been in the headlines multiple times for how our algorithms have negatively impacted people.” For instance, Google has developed a tool for classifying the toxicity of comments online. While the tool can be very helpful, it has also proved problematic: Phrases like “I am a black gay woman” were initially classified as more toxic than “I am a white man.” The cause was a gap in the training data, which contained far more conversations about certain identities than others, so the model learned to treat those identity terms themselves as signals of toxicity.
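The mechanism is easy to reproduce in miniature. The sketch below is a hypothetical toy example, not Google’s actual classifier or data: it trains a simple bag-of-words model on a made-up, imbalanced set of comments in which an identity term appears only in abusive examples, so the model ends up scoring a neutral self-description as more toxic than a comparable phrase without that term.

```python
# Toy illustration only: hypothetical data, not Google's tool or training set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Imbalanced training set: the word "gay" appears only in toxic comments,
# and there are no neutral examples that mention it.
train_texts = [
    "you are gay and awful",      # toxic
    "gay people are terrible",    # toxic
    "nobody likes you",           # toxic
    "have a nice day",            # non-toxic
    "great game last night",      # non-toxic
    "thanks for the help",        # non-toxic
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = toxic, 0 = non-toxic

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(train_texts)
model = LogisticRegression().fit(features, train_labels)

# A neutral self-description now scores higher than a comparable phrase,
# purely because the identity term co-occurred with abuse during training.
for phrase in ["I am a gay woman", "I am a woman"]:
    toxicity = model.predict_proba(vectorizer.transform([phrase]))[0, 1]
    print(f"{phrase!r}: estimated toxicity {toxicity:.2f}")
```

At production scale the imbalance is subtler and the models are far larger, but the failure mode the speakers describe is the same: the gap in the data, not any explicit rule, is what produces the biased score.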

There are no overarching fixes for these problems, the two Google executives said. Wherever problems are found, Google works to iron out the bias. But the range of places where bias can enter a system, from the design of algorithms to their deployment to the societal context in which the data is produced, means that there will always be problematic examples. The key, they said, is to be aware of this, to allow such tools to be scrutinized, and to ensure that diverse communities can make their voices heard about how these technologies are used.

Luke Dormehl