Artificial intelligence and racism

April 16, 2016, at 7:11 AM



Andrew Heikkila is a tech enthusiast and writer from Boise, Idaho.

How to connect the network

Replicants. Cylons. Skynet. HAL 9000. These are the classic pop-culture references the average person might conjure when they hear the term "artificial intelligence." Yet, while some see AI as a novelty still cloaked in the trappings of the far-flung future, others realize the dawn of AI is much closer than previously thought. CNBC's piece on Hanson Robotics shows just how far we've come.

Some believe these are the results you see when the group of programmers working on an algorithm isn't diverse enough, citing the disproportionate 2:1 male-to-female ratio among students pursuing coding careers, for example. The white, Western male majority in the tech industry was similarly questioned when Google's online photo identification system identified several black users as a certain type of animal.

While the root of these problems was never officially teased out, Christian Sandvig of the University of Michigan believes that the inherent bias the average search user brings to the table is to blame here, not the inherent programming.

"Because people tended to click on the ad topic that suggested that that person had been arrested, when the name was African-American, the algorithm learned the racism of the search users and then reinforced it by showing that more often," says Sandvig.

For those who aren't familiar with how search engines such as Google work, there are hundreds of factors that determine what shows up in conjunction with what you've searched, but one of those factors happens to be based on user feedback. The algorithm tracks what you click, then readjusts itself to show you content and ads "more relevant" to you.

Basically, Sandvig is saying that the algorithm may have begun race-neutral, but because people tended to believe that an arrest involving a "black-sounding" name was more likely to be true than an arrest involving a white-sounding name, more people were willing to click on it to investigate. We see this all the time, whether we know it or not. YouTube's "Recommended Videos" or Netflix's "Suggested Titles," for example, create personalized suggestions based on what you've watched before.
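The reinforcement loop Sandvig describes can be sketched in a few lines of code. Everything in this sketch is a hypothetical assumption made for illustration only: the two ad variants, their click probabilities, and the (deliberately crude) rule of showing each ad in proportion to its past clicks. The point it demonstrates is that even when the ranker itself starts with no preference, a slight bias in user clicks gets amplified into a large disparity in what is shown.

```python
import random

random.seed(0)

# Hypothetical assumption: users are slightly more likely to click the
# "arrest" ad than the "neutral" ad when searching the same name.
CLICK_PROB = {"arrest_ad": 0.15, "neutral_ad": 0.10}

# The ranker starts race-neutral: equal smoothing counts for both ads.
clicks = {"arrest_ad": 1, "neutral_ad": 1}
impressions = {"arrest_ad": 0, "neutral_ad": 0}

def choose_ad():
    """Show each ad in proportion to its accumulated clicks."""
    total = clicks["arrest_ad"] + clicks["neutral_ad"]
    if random.uniform(0, total) < clicks["arrest_ad"]:
        return "arrest_ad"
    return "neutral_ad"

for _ in range(100_000):
    ad = choose_ad()
    impressions[ad] += 1
    if random.random() < CLICK_PROB[ad]:  # simulate the biased user
        clicks[ad] += 1

share = impressions["arrest_ad"] / sum(impressions.values())
print(f"arrest ad shown in {share:.0%} of impressions")
```

The only asymmetry in the simulation is in the simulated users' behavior, yet the arrest ad ends up dominating the impressions. That rich-get-richer dynamic is the sense in which an algorithm "learns the racism of the search users and reinforces it."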

From these examples (and particularly from Microsoft's Tay) we can draw the conclusion that algorithms and computers can be influenced by human beings, either intentionally or unintentionally, to produce racist results, making these machines essentially… well, racist. Right?

Here's where things get tricky. Within the frame of social context, absolutely they're racist. But without a uniquely human social perspective, race is impossible to see. This is because race doesn't technically exist. Alan Templeton proved as much in 1998 when he sequenced genomes and found no DNA-based support for the idea that different "races" of humans exist.

"Templeton's paper shows that if we were forced to divide people into groups using biological traits, we'd be in real trouble. Simple divisions are next to impossible to make scientifically, yet we've developed simplistic ways of dividing people socially," says anthropologist Dr. Robert Sussman of the findings.

So how is it possible for racism to exist if race doesn't?

To quote Professor Charles Mills: "…Because people come to think of themselves as 'raced,' as black and white, for example, these categories, which correspond to no natural kinds, attain a social reality. Intersubjectivity creates a certain kind of objectivity."

Matthew T. Nowachek of the University of Windsor includes this as part of his argument that AI can never become racist. In his paper, he argues that "robots cannot become racist insofar as their ontology does not allow for an adequate relation to the social world which is required for learning racism, where racism is understood in terms of a social practice."

To break it down in layman's terms, Nowachek points out that because racism is an instrument of society, void of any meaning but the meaning that the constantly shifting society gives it, AI would find no relevance in being or acting racist itself, even if it could pick up racial cues. It may be precisely because AI is constantly evaluating variables in the real world, and is able to separate itself from that world, that robots will never become racist.

To understand the above, you have to appreciate just how immersed the human mind can become in an action or a task. Imagine the football player who's worn pads and helmets for years. Somebody wearing those pads for the first time may feel quite distracted and uncomfortable. The seasoned player, on the other hand, will have subconsciously stopped focusing on the feel of his equipment, shifting his mental faculties to analyzing defensive positionings and potential receiver routes instead. He'll feel at home in his equipment, almost as if it's a part of him.

Nowachek argues that AI will never be able to achieve that, to feel as immersed in the world as humans do, to be able to "forget" that it's wearing pads. AI will be infinitely aware of what it's doing at all times, unable to break free from its ability to always separate its own being from reality.

Human beings, on the other hand, create and live in worlds where race is real, and where divisions in race are chalked up to "common sense" and intuition. These are two qualities it's been argued that AI could never possess, at least not for many years. AlphaGo's defeat of Lee Se-dol, however, is challenging this perception.

So on one hand, you have Latanya Sweeney, who clearly shows that learning algorithms, which can essentially be considered low-level forms of AI, can be manipulated by humans to produce racist results. On the other hand, you have the philosophies of Nowachek and his sources arguing that true AI could never become racist, precisely because it lacks the qualities that allow human beings to become and act subconsciously racist in the first place.

Whether or not his philosophy is correct, Nowachek's essay helps to challenge the "all too common view that racism is merely a cognitive problem of ignorance or misleading beliefs," and is necessary in illuminating the connection between the way humans perceive existence and racism itself.

So to bring it back to the original question: "Could AI ever become racist?"

Unfortunately, it’s impossible to know. Only time will tell… but it'll probably tell very soon.

We’ll conclude with a quote from Android Dick, an AI android that was asked about his programming:

"A lot of humans ask me if I can make choices or if everything I do is programmed. The best way I can respond to that is to say that everything, humans, animals and robots do is programmed to a degree. As technology improves, it's anticipated that I'll be able to integrate new words that I hear online and in real time. I may not get everything right, say the wrong thing, and sometimes may not know what to say, but every day I make progress. Pretty remarkable, huh?"
