Coded Bias: An Absolute AI Reality.

Gibran Registe-Charles
2 min read · Jan 10, 2023


Code bias in AI is an increasingly relevant issue as AI systems are used in a growing range of fields, including healthcare, criminal justice, and finance.

Code bias refers to the idea that the algorithms that power AI systems can perpetuate and amplify existing biases in society. This can have serious consequences, as AI systems are often used to make important decisions that can have significant impacts on individuals and communities.

One example of code bias in AI systems is the use of facial recognition software for law enforcement purposes. Studies have shown that these systems are more likely to misidentify people of colour, leading to concerns about the potential for racial profiling and other forms of discrimination.

Code bias can also occur in healthcare AI systems.

For example, a study published in the journal Nature found that an AI system used to predict which patients were at risk of heart attacks was more accurate for White patients than for Black patients.

This was because the system had been trained predominantly on data from White patients, which introduced bias into the algorithms that power it.

One way to mitigate code bias is to make sure the data used to train an algorithm is representative and diverse, so that the resulting model does not favour any particular group. It is also important to regularly review and audit the results an AI system produces, for example by comparing its performance across demographic groups, to check that it is not perpetuating existing biases.
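As a rough illustration of what such an audit can look like, here is a minimal sketch in Python. It assumes pandas and scikit-learn are available; the column names ("race", "label", "pred") and the toy data are hypothetical placeholders, not a reference to any particular system.

```python
# A minimal bias-audit sketch (assumes pandas and scikit-learn are installed).
# Column names and the toy data below are illustrative assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score

def audit_by_group(df: pd.DataFrame, y_true: str, y_pred: str, group: str) -> pd.DataFrame:
    """Report model accuracy for each demographic group, alongside
    how much of the data that group represents."""
    rows = []
    for value, subset in df.groupby(group):
        rows.append({
            group: value,
            "share_of_data": len(subset) / len(df),
            "accuracy": accuracy_score(subset[y_true], subset[y_pred]),
        })
    return pd.DataFrame(rows)

# Example usage with a toy frame of true labels and model predictions:
df = pd.DataFrame({
    "race":  ["white", "white", "white", "black", "black", "black"],
    "label": [1, 0, 1, 1, 0, 1],
    "pred":  [1, 0, 1, 0, 0, 0],
})
print(audit_by_group(df, y_true="label", y_pred="pred", group="race"))
```

A report like this makes imbalances visible at a glance: if one group makes up only a small share of the data, or the model's accuracy drops sharply for that group, that is a signal to revisit the training data and the model before it is used for real decisions.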

Another way to mitigate code bias is to be transparent about how AI systems are being used and how decisions are being made.

This can help build trust in AI systems and ensure that they are being used ethically. It is also important to consider the potential impacts of AI systems on marginalised and disadvantaged communities and to work to ensure that these communities are not disproportionately affected by code bias.
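One practical way to support that kind of transparency, sketched below as an assumption rather than a prescribed approach, is to keep an auditable record of every automated decision. The schema and field names here are illustrative only.

```python
# A minimal decision-logging sketch: record each automated decision with
# enough context to review it later. The fields are illustrative assumptions,
# not a standard schema.
import json
from datetime import datetime, timezone

def log_decision(model_name: str, model_version: str,
                 inputs: dict, decision: str, score: float,
                 path: str = "decision_log.jsonl") -> None:
    """Append one decision record to a JSON Lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "inputs": inputs,
        "decision": decision,
        "score": score,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a declined application so it can be reviewed later.
log_decision("credit_risk", "2023-01",
             inputs={"income": 32000, "postcode": "E8"},
             decision="declined", score=0.38)
```

Even a simple log like this makes it possible to go back, examine which decisions were made about whom, and spot patterns that might indicate bias against a particular community.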

In conclusion, code bias in AI systems is a significant issue that can have serious consequences. It is important to take steps to mitigate it, including using diverse and representative data sets, auditing outcomes, and being transparent about how AI systems are used. By addressing code bias, we can help ensure that AI systems are used ethically and fairly.
