Criticizing Rep. Alexandria Ocasio-Cortez (D-NY) has become something of a pastime for conservative media since the rising Democratic star landed on their radar following her primary victory in late June 2018. Since then, they’ve rarely passed up an opportunity to pounce on gaffes -- real or imagined, big or small. A new attempt by The Daily Wire’s Ryan Saavedra to catch the 29-year-old representative in an embarrassing situation has left him the subject of ridicule.
On January 21, Ocasio-Cortez sat down with author Ta-Nehisi Coates for a wide-ranging conversation. During the talk, the freshman representative brought up the idea that bias is effectively built into algorithms, pointing specifically to facial recognition software.
“They always have these racial inequities that get translated because algorithms are still made by human beings. And those algorithms are still pegged to those -- to basic human assumptions,” said Ocasio-Cortez. “They’re just automated. … If you don’t fix the bias, then you’re automating the bias.”
Saavedra posted a video of this comment to Twitter, snarking that the congresswoman (whom he once called “dumb-dumb”) couldn’t be right about algorithms being biased because they are “driven by math.”
Socialist Rep. Alexandria Ocasio-Cortez (D-NY) claims that algorithms, which are driven by math, are racist pic.twitter.com/X2veVvAU1H
— Ryan Saavedra (@RealSaavedra) January 22, 2019
Ocasio-Cortez was right, Saavedra was wrong, and Twitter was quick to let him know. Naturally, he doubled down.
When Parker Higgins, director of special projects at the Freedom of the Press Foundation, pushed back on Saavedra’s initial claim, Saavedra called him a “moron” and pointed to a study about facial recognition software that happened to have the word “mathematical” in its title but didn’t mention bias.
She was talking about facial recognition technology, which is driven by math, you moron
“Mathematical Modeling for Face Recognition System”: https://t.co/vxtdQmqrqH https://t.co/t85eImda7Q
— Ryan Saavedra (@RealSaavedra) January 22, 2019
Saavedra came back to this point later that day in an article titled “AOC Snaps: World Could End In 12 Years, Algorithms Are Racist, Hyper-Success Is Bad.” The article plays on a number of anti-Ocasio-Cortez talking points -- increasingly embraced by conservative media -- aimed at painting her as uninformed and unqualified. Right-wing media have mocked her argument about the urgency of acting on climate change, and her comment about the world ending in 12 years was clearly an exaggeration. But the most recent report published by the United Nations Intergovernmental Panel on Climate Change stressed that the next 12 years will play a pivotal role in determining whether we can avert global climate disaster.
As for her point about algorithms, the criticism of Saavedra wasn’t over the idea that math is involved in algorithms. Math is involved in much of what we do, from baking a pie to making change for a $20 bill. The criticism was that Saavedra seemed to believe, incorrectly, that because algorithms involve math, they can’t be racist or biased (a confusion the sketch below makes concrete). Yet just a few months earlier, he had accused social media companies of using algorithms biased against conservatives -- a popular conspiracy theory on the right that is not supported by data.
Most people get their news on Facebook, which allows advertisers to target customers in certain zip codes.
Whose to say Facebook isn’t altering what news people see in specific voting districts?
There needs to be oversight of FB’s algorithms, operations, & news distribution.
— Ryan Saavedra (@RealSaavedra) March 19, 2018
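That distinction matters. Below is a minimal, entirely hypothetical Python sketch -- the function, the weighting, and the zip codes are all invented for illustration -- of how a program can be “driven by math” and still encode a biased human assumption:

```python
# A hypothetical loan-screening "algorithm." The arithmetic is exact,
# but the choice of inputs and penalties is a human judgment call.
def loan_score(income: float, zip_code: str) -> float:
    # Penalizing certain zip codes is pure math at runtime, but zip codes
    # correlate strongly with race in the U.S. The bias lives in the
    # design decision, not in the arithmetic that executes it.
    penalized_zips = {"60624", "48205"}  # invented for this example
    penalty = 0.3 if zip_code in penalized_zips else 0.0
    return min(income / 100_000, 1.0) - penalty

# Two applicants with identical incomes get different scores purely
# because of where they live.
print(f"{loan_score(80_000, '60624'):.2f}")  # 0.50
print(f"{loan_score(80_000, '10001'):.2f}")  # 0.80
```

Every line of that function is “driven by math”; the discrimination comes from the assumptions a human wrote into it.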
Racial bias in algorithms is a well-documented reality.
Bias in algorithms should absolutely be taken seriously by policymakers -- especially as more of our economy becomes automated or relies on artificial intelligence.
In July 2018, the ACLU published the results of a test it ran using Rekognition, Amazon’s facial recognition technology, which Amazon has marketed to law enforcement. The ACLU ran photos of every member of Congress through the software, comparing each against a database of 25,000 publicly available arrest photos. The software wrongly matched 28 members with photos from the database, and the false matches skewed heavily toward people of color: though they make up just 20 percent of Congress, people of color accounted for 39 percent of the false matches. Those results echoed earlier research finding that these technologies are simply less accurate on darker-skinned individuals.
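For readers curious about the mechanics, a test like the ACLU’s can be approximated with Amazon’s Rekognition API. The sketch below is a rough reconstruction under stated assumptions, not the ACLU’s actual code: the collection name (“mugshots”) and the file path are hypothetical, and it presumes the 25,000 arrest photos were already indexed into a Rekognition face collection:

```python
# Rough sketch of an ACLU-style test using boto3, Amazon's AWS SDK
# for Python. Assumes AWS credentials are configured and a face
# collection named "mugshots" already contains the arrest photos.
import boto3

client = boto3.client("rekognition")

def find_matches(member_photo_path: str):
    """Search the mugshot collection for faces matching a member's photo."""
    with open(member_photo_path, "rb") as f:
        response = client.search_faces_by_image(
            CollectionId="mugshots",
            Image={"Bytes": f.read()},
            # The ACLU said it used Rekognition's default 80 percent
            # confidence threshold.
            FaceMatchThreshold=80,
            MaxFaces=5,
        )
    return response["FaceMatches"]

# Any non-empty result is a claimed match. For members of Congress,
# every match the ACLU's test produced was a false positive.
matches = find_matches("photos/some_member.jpg")  # hypothetical path
print(len(matches), "claimed matches")
```

Repeating that search for all 535 members and tallying false matches by race is all it takes to surface the disparity the ACLU reported.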
Here, the ACLU explains some of the real-life consequences that algorithms gone wrong can have on people’s lives:
If law enforcement is using Amazon Rekognition, it’s not hard to imagine a police officer getting a “match” indicating that a person has a previous concealed-weapon arrest, biasing the officer before an encounter even begins. Or an individual getting a knock on the door from law enforcement, and being questioned or having their home searched, based on a false identification.
An identification — whether accurate or not — could cost people their freedom or even their lives. People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that. A recent incident in San Francisco provides a disturbing illustration of that risk. Police stopped a car, handcuffed an elderly Black woman and forced her to kneel at gunpoint — all because an automatic license plate reader improperly identified her car as a stolen vehicle.
Safiya U. Noble, author of Algorithms of Oppression: How Search Engines Reinforce Racism, explained to Media Matters in an email that Saavedra’s misconceptions about algorithms were actually pretty common. She wrote:
Many people have been taught that math, computer science, and engineering are value-free, neutral, and objective; but the truth is that all kinds of values are imbued into the products and projects that are made by people who work in industries that use these disciplines. We now have decades of empirical research that show the many ways that technologies can be designed and deployed to discriminate, whether intentionally or not. It’s factually incorrect to assert that the technologies designed by people are value-free when we have so much evidence to the contrary. My own research reveals the ways that racism and sexism are reinforced in digital technologies, and what’s at stake when we are ignorant about these projects. I think [Ocasio-Cortez] is challenging us to understand that we need more public policy interventions, and she’s right.
Technology is only as good as the people who create it, and every person carries biases, both implicit and explicit. As Ocasio-Cortez noted during her conversation with Coates, if bias isn’t addressed at the development level, algorithms will simply automate that bias, potentially making existing problems even worse.
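To see how that automation plays out, here is a toy example built on entirely synthetic data -- no real system or dataset is involved. A model is trained on historically biased hiring decisions and, even though it never sees the sensitive attribute directly, it reconstructs the bias through a correlated proxy feature:

```python
# Synthetic illustration of "automating the bias": a model trained on
# biased historical labels reproduces the disparity via a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# "group" is a sensitive attribute; "neighborhood" is a proxy that
# correlates with it 90 percent of the time (think zip code and race).
group = rng.integers(0, 2, n)
neighborhood = (group + rng.binomial(1, 0.1, n)) % 2
skill = rng.normal(0.0, 1.0, n)  # the thing we actually want to measure

# Historical decisions were biased: group 1 faced a higher skill bar.
# These biased outcomes become the training labels.
hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

# The model is never shown "group" -- only skill and the proxy.
X = np.column_stack([skill, neighborhood])
model = LogisticRegression().fit(X, hired)

# Predictions still disadvantage group 1: the proxy lets the model
# rediscover, and automate, the historical bias.
preds = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {preds[group == g].mean():.2f}")
```

Nothing in the training step is malicious; the model simply learns, faithfully and at scale, the bias already present in the data it was given.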