Reporting Bias
This kind of AI bias develops when the frequency of events in the training dataset does not accurately reflect reality. Consider a case where a tool for detecting customer fraud underperformed in a remote region, giving all of the customers there an unjustifiably high fraud score.
It was observed that in the training dataset the tool was using, every historical investigation in the region had been classified as a fraud case. Because the region was so remote, investigators would only make the trip after first confirming that a fresh claim was indeed fraudulent. As a result, the training dataset contained far more fraud events than actually occurred in reality.
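One practical check for reporting bias is to compare the event frequency recorded in the training data against an independently known base rate. The sketch below does this for a hypothetical claims table; the column names, the sample values, and the assumed real-world fraud rate are all illustrative, not taken from the case above.

```python
import pandas as pd

# Hypothetical claims records; the column names and values are
# assumptions made for this illustration.
claims = pd.DataFrame({
    "region":   ["remote", "remote", "remote", "urban", "urban", "urban"],
    "is_fraud": [1,        1,        1,        0,       1,       0],
})

# Fraud frequency per region *as recorded in the training data*.
observed_rate = claims.groupby("region")["is_fraud"].mean()

# Assumed real-world base rate, taken from an independent audit
# rather than from the (possibly biased) labels themselves.
TRUE_BASE_RATE = 0.02

# A large gap between the recorded frequency and the audited base
# rate is the signature of reporting bias.
print(observed_rate - TRUE_BASE_RATE)
```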
Selection Bias
This kind of AI bias emerges when training data is either not representative or is chosen without sufficient randomization. The study by Joy Buolamwini, Timnit Gebru, and Deborah Raji, in which they examined three commercial image recognition systems, provides a clear illustration of selection bias. The AI models were asked to classify 1,270 photos of parliament members from European and African nations. Owing to the lack of diversity in the training data, the study found that all three algorithms performed better on male faces than on female faces and showed more pronounced bias against women with darker skin tones, failing on more than one in three women of colour.
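The kind of gap the study uncovered only becomes visible when accuracy is measured separately for each demographic subgroup rather than as a single overall number. The sketch below illustrates such a disaggregated evaluation on a hypothetical list of (group, correct) test results; the group names merely echo the study's gender-by-skin-tone categories.

```python
from collections import defaultdict

# Hypothetical (group, prediction-was-correct) records from a
# face-classification test set; real group labels would come from
# annotated evaluation data.
results = [
    ("lighter_male", True), ("lighter_male", True),
    ("lighter_female", True), ("lighter_female", False),
    ("darker_female", False), ("darker_female", True),
    ("darker_female", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok  # bools add as 0/1

# Report accuracy per subgroup: a large gap between groups is a
# warning sign of unrepresentative (selection-biased) training data.
for group in totals:
    print(group, round(correct[group] / totals[group], 2))
```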
Group Attribution Bias
Group attribution bias occurs when data teams generalize what is true of individuals to entire groups that those individuals belong to (or do not belong to). This kind of AI bias can be found in admissions and recruiting tools that favour applicants who graduated from particular colleges and display prejudice against those who did not.
Implicit Bias
This kind of bias emerges when AI decisions are based on personal experiences that may not apply more broadly. For instance, data scientists who have absorbed cultural cues about women serving as homemakers may find it difficult to associate women with prominent positions in business, despite their conscious belief in gender equality. This example is similar to the gender bias observed in Google Images search results.
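Such learned associations can be made measurable. The sketch below probes word vectors for a gendered association between occupation words and a man-woman direction; the toy vectors are invented for illustration, and a real audit would use trained embeddings such as word2vec or GloVe, which are assumptions beyond the text above.

```python
import numpy as np

# Hypothetical toy word vectors, invented for this illustration.
vectors = {
    "man":       np.array([ 1.0, 0.2, 0.1]),
    "woman":     np.array([-1.0, 0.2, 0.1]),
    "executive": np.array([ 0.7, 0.9, 0.3]),
    "homemaker": np.array([-0.8, 0.1, 0.6]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A "gender direction": the difference between the man and woman vectors.
gender_axis = vectors["man"] - vectors["woman"]

# Project occupation words onto the gender axis; a strongly positive
# or negative value suggests the embedding has absorbed a gendered
# association from the text it was trained on.
for word in ("executive", "homemaker"):
    print(word, round(cosine(vectors[word], gender_axis), 3))
```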
Given the expanding use of AI in sensitive fields such as banking, criminal justice, and healthcare, we should work to create algorithms that are equitable to everyone. Businesses must also make an effort to reduce bias in AI systems. Even a rumour that an AI system is biased can drive away customers and harm a company's reputation. On the other hand, an AI solution that works accurately for all genders, races, ages, and cultural backgrounds is much more likely to deliver higher value and appeal to a wider, more diverse group of potential customers.
How Data-Driven Decisions Can Be De-biased
"AI bias does not come from AI algorithms; it comes from people." Some people may disagree, arguing that bias comes not from people but from datasets. However, people collect data. A book may reflect its author's bias, and like books, datasets also have authors: they are collected according to people's instructions or preferences. ML and AI are technologies that are always based on algorithms created by humans, and like anything man-made, these algorithms tend to absorb the biases of their creators. Because AI algorithms learn from data, historical data can quickly produce biased AI models that make decisions based on unfair datasets. Some steps can be taken to manage bias in AI. Here are some of them:
• Establish processes to test for and diminish bias in AI systems (a minimal sketch of one such test follows this list).
• Engage in fact-based conversations about potential biases in human decisions.
• Adopt a multidisciplinary approach.
• Invest more in AI bias research.
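As one example of the first step, a simple bias test can compare the rate of favourable decisions a model gives to different groups, a measure often called demographic parity. The predictions and group labels below are hypothetical stand-ins for a real model's output on a held-out audit set.

```python
# Hypothetical model outputs on an audit set: 1 = favourable decision.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

def selection_rate(preds, grps, target):
    """Fraction of favourable decisions given to one group."""
    decisions = [p for p, g in zip(preds, grps) if g == target]
    return sum(decisions) / len(decisions)

rate_a = selection_rate(predictions, groups, "A")
rate_b = selection_rate(predictions, groups, "B")

# A difference near zero means both groups receive favourable
# decisions at similar rates; a large gap warrants investigation.
print("demographic parity difference:", abs(rate_a - rate_b))
```

Running a check like this routinely, before and after each model update, is one concrete way to turn "test for bias" from a slogan into a process.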