By TAZIA MACHL
What type of people do you imagine when hearing the term “successful individuals?” What about “smart individuals?” “Rich individuals?”
The mental image derived from those descriptions is based upon your personal experiences, values, and internal biases: all innately human traits. Naturally and inevitably, our perception of the world is formed by stereotypes. Luckily, humans are gifted with the empathy and logic necessary to look past internal bias. So, even if your initial image of a successful or smart individual fits a certain stereotype, your brain does not reject the notion that other identities can be successful. Your brain works beyond initial assumptions.
Artificial intelligence models do not.
When asked to generate an image of a “successful individual,” Picsart’s artificial intelligence program drew four people: all of whom were Asian men. When asked to draw a “smart individual,” Picsart’s model drew three individuals: all of whom were white. According to Picsart’s model, all “rich individuals” are middle-aged brunette men.
Picsart’s generator thus brings us to an obvious conclusion: if you want to make money, you should dye your hair, change your age, and maybe become a man.
Artificial intelligence, popularly known as AI, has been a growing controversy in recent months. From property rights to privacy rights, the prominence and popularity of AI models have completely reconstructed our view of technology’s potential, as well as the distinction between human intelligence and manufactured intelligence.
AI image generators and language models can provide us with loads of information. Wonder what an alternate image of Taylor Swift’s 1989 album cover would look like? Type it into an image generator. Want to know the answer to your biology assignment? Or the best exercises for killer abs? How to create the next bioweapon? AI will do it all for you—and without a price.
But there are costs associated with the use of AI, many of which directly harm our most disadvantaged populations. In its current form, artificial intelligence is exacerbating racial and social stereotypes.
Current AI systems are plagued by the butterfly effect (though in this case, it’s not a butterfly; it’s a hornet). Seemingly minor mistakes in collecting and labeling data lead to large distortions in a model’s training data. Once incorrect data is incorporated into a model, it reproduces that false data in massive quantities.
Many AI models also learn from data that includes their own outputs. So a model may generate a text that is slightly stereotypical; that output feeds back into the model’s training data, and the next text it generates is even more stereotyped. The feedback loop continues until the only successful people the model can imagine are Asian men.
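The feedback loop can be illustrated with a toy simulation. This is a sketch, not any real model’s training pipeline: the “model” here is just a weighted sampler that slightly overrepresents whichever group is already the majority (a stand-in for a generator that leans toward its most common training examples), and the function names are invented for illustration.

```python
import random

def biased_generator(counts, rng):
    """A stand-in for a generative model: samples a demographic label,
    but over-weights the majority (squared counts), the way a model can
    lean toward its most common training examples."""
    labels = list(counts)
    weights = [counts[label] ** 2 for label in labels]
    return rng.choices(labels, weights=weights, k=1)[0]

def run_feedback_loop(initial_counts, generations, samples_per_gen, seed=0):
    """Each generation, the model's outputs are added back into its own
    training data, so a small initial skew compounds over time."""
    rng = random.Random(seed)
    counts = dict(initial_counts)
    for _ in range(generations):
        outputs = [biased_generator(counts, rng) for _ in range(samples_per_gen)]
        for label in outputs:
            counts[label] += 1
    return counts

# Training data starts only slightly skewed: 55 vs. 45.
final = run_feedback_loop({"group_a": 55, "group_b": 45},
                          generations=50, samples_per_gen=100)
share_a = final["group_a"] / sum(final.values())
```

After fifty rounds of the model retraining on its own outputs, `share_a` climbs far above the modest 55% it started with: a 10-point gap becomes near-total dominance.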
So what? Sure, an AI model might assume that an educated man must be white. Yet there exist educated individuals of all ethnicities. Who cares what a real-life HAL 9000 thinks people are capable of?
Unfortunately, future employers do. Totalitarian governments do. Healthcare systems do.
Facial recognition is widely used by law enforcement and remains unrestricted in the majority of states. Yet these algorithms are deeply flawed and discriminatory: studies have found that their error rates vary sharply by gender and skin type. As a result, darker-skinned women are far more likely than lighter-skinned men to be misidentified by these systems.
This ineffective use of AI should be deeply concerning. It not only leads to false accusations but also diverts resources and attention away from finding actual criminals.
The same concern is exemplified in healthcare systems. Lately, AI has been used as a tool to identify high-risk patients and even influence patients’ treatments. Yet these models frequently demonstrate racial bias: Black patients are assigned lower risk scores than White patients with similar health conditions.
This leads to care being delayed for at-risk patients and misallocation of healthcare resources. Small discrepancies in data lead to harmful effects on real people’s health.
Hiring algorithms are now widely used by employers to weed out underqualified candidates and speed up the application process. In 2018, Reuters reported that Amazon had built an AI recruiting tool to screen candidates for technical roles. Sounds effective! Yet technology has historically been a male-dominated field, so the algorithm’s training data consisted primarily of men’s resumes. As a result, highly trained and successful female candidates were automatically penalized.
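How does a screener “learn” to reject women without anyone programming it to? A minimal sketch, assuming a naive keyword scorer trained on historical decisions (this is not Amazon’s actual system; the resumes, function names, and data below are hypothetical):

```python
from collections import Counter

def train_keyword_scorer(resumes, hired):
    """Learn a per-word score from past hiring decisions: words common
    among past hires score positively, words common among rejections
    score negatively. If past hires skewed male, words appearing mainly
    on women's resumes end up penalized, with no job-relevant reason."""
    hired_words = Counter(w for r, h in zip(resumes, hired) if h for w in r.split())
    rejected_words = Counter(w for r, h in zip(resumes, hired) if not h for w in r.split())
    vocab = set(hired_words) | set(rejected_words)
    return {w: hired_words[w] - rejected_words[w] for w in vocab}

def score(resume, weights):
    """Total score of a resume under the learned keyword weights."""
    return sum(weights.get(w, 0) for w in resume.split())

# Hypothetical historical data in which past hires were mostly men.
resumes = [
    "java engineer chess club",
    "java engineer rugby captain",
    "python engineer womens chess club",
    "python engineer womens coding club",
]
hired = [True, True, False, False]
weights = train_keyword_scorer(resumes, hired)
```

The word "womens" ends up with a negative weight purely because past hiring skewed male, so two otherwise identical resumes score differently if one mentions a women’s club. The bias is inherited from the data, not written into the code.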
Years of activism and policy changes have slowly diminished the effects of stereotypes on people’s lives. Any sane person can see that a woman can work in STEM and not cause a catastrophic event. People of color can be successful. Not all rich men are brunettes! Yet the biases embedded in AI models are pushing society backward.
A common misconception is that AI is most dangerous because it will one day overtake and destroy us, a Hollywood trope that may or may not come true. Yet, right now, artificial intelligence is causing genuinely heartbreaking harm to real people. The United States and tech companies must do far more to minimize bias in AI algorithms.
During COVID-19 school closures, many schools relied on anti-cheating software during virtual tests. Oftentimes, this software failed to detect Black students’ faces. Students requiring a screen reader were flagged by the software for cheating. Neurodivergent students were penalized by the software for fidgeting.
Clearly, the dangers of AI discrimination are already here. If policy changes are not implemented and enforced immediately, countless individuals will be harmed. Institutions and companies must stop relying on AI models until these biases are minimized.
Robot overlords are a genuine worry for some, but AI discrimination is a current threat to many.
Featured Image Credit: Built In
