Massachusetts Daily Collegian

A free and responsible press serving the UMass community since 1890

Artificial intelligence exacerbates racial inequities

As industries increasingly develop and use AI algorithms, long-standing inequities targeting marginalized persons worsen
Daily Collegian archives (2020)

Bias and injustice are entrenched at seemingly every level of our society. People of color face discrimination and barriers not only in the criminal justice system, but also in the housing market, the job market, the technology industry and almost any other system they might encounter in their lives.

In the face of these systemic issues, some believe that artificial intelligence can offer impartial and unbiased judgments on many of the issues we face in life. There are hopes that AI could play a significant role in confronting and resolving racial and other inequities within our society.

In reality, the rise in the development and use of artificial intelligence exacerbates the injustice and inequality that pervade our society. The tech industry significantly lacks the representation and diversity needed to address this technology's impacts on marginalized groups. Additionally, tech developers are trained to focus primarily on technical problems, and they often lack the training essential to addressing technology's societal impacts.

Algorithmic artificial intelligence tends to be biased against people of color and others who do not share the characteristics of its developers: white, male and affluent. These algorithms perpetuate the injustice people of color face through applications such as facial recognition and housing screening. Training AI systems to reach such biased conclusions is plainly unjust.

Facial recognition systems have consistently been shown to recognize white faces with far more accuracy than the faces of people of color. This is not an exaggerated accusation; these systems are developed and trained largely by, and on, the white developers who make them.

Joy Buolamwini is the founder of the Algorithmic Justice League and a graduate researcher at M.I.T. When visiting laboratories, she encountered two separate interactive robots that failed to detect her face, a result of blatant bias in their development. While working with these robots at M.I.T., she had to wear a white mask in order to be detected.

What compounds this issue further is that facial recognition systems are being integrated into law enforcement. With pervasive injustice already woven into our criminal justice system, racial and other inequities are being reinforced by this new technology.

At a police department in Detroit, the use of a facial recognition system led to an innocent Black man, Robert Julian-Borchak Williams, being wrongfully accused of a felony. The algorithm had incorrectly matched Williams' driver's license photo to the suspected thief. Williams was arrested and held overnight at a detention center.

In some housing sectors, AI is used to screen potential tenants. These algorithms draw on data such as eviction and criminal history, reflecting existing disparities in housing and the criminal legal system that disproportionately harm marginalized people. A screening algorithm can easily deem someone ineligible for housing, no matter how minor the prior offense or eviction was.

Many of the people who train AI algorithms are low-wage workers. Some of these workers must train algorithms to monitor hate speech, an onerous job in which they are frequently exposed to disturbing and explicit content. These workers must also decide within seconds whether content is problematic, which makes it easy for their own biases to become embedded in the system. Reviewing potentially harmful content is a complex and subjective process that requires far more time and thought.

Federal agencies are responsible for regulating the industries that use AI, yet they haven't taken the steps necessary to ensure that AI systems are held accountable for their impacts on people. Federal agencies and government administrations must prioritize addressing how AI algorithms exacerbate racial inequities.

Juliette Perez can be reached at [email protected].
