Algorithmic Bias
Technology is supposed to be unbiased. Unfortunately, this is not the case. Parenting, experience, and culture shape people, who therefore internalize certain assumptions about the world around them. AI is no different: it is constructed from algorithms designed and adjusted by these same people, and it tends to "think" the way it is taught.
Visit the website PortraitAI art generator (https://ai-art.tokyo/en/). You upload your selfies here, and the
artificial intelligence uses its understanding of Baroque and Renaissance portraits to draw your portrait in
the manner of a master. If you are white, the result is very good. The catch is that the most popular
paintings of that era depicted white Europeans. The database therefore consists mainly of white people,
and an algorithm that relies on such a database tends to make you look 'fairer' than you are.
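To see the mechanism behind this, here is a minimal Python sketch with purely illustrative, hypothetical numbers. A model that learns what a "typical" portrait looks like from a dataset in which 90% of the examples show fair-skinned subjects will be pulled toward that majority.

```python
# A minimal sketch of how a skewed training set skews a model's output.
# The dataset and counts below are hypothetical and purely illustrative.
from collections import Counter

# Suppose each training portrait is labelled with the subject's skin tone.
training_portraits = ["fair"] * 900 + ["medium"] * 70 + ["dark"] * 30

counts = Counter(training_portraits)
total = sum(counts.values())

for tone, n in counts.items():
    print(f"{tone:>6}: {n:4d} portraits ({n / total:.0%} of the data)")

# A model that simply learns "what a typical portrait looks like" is
# pulled toward the majority group: 90% of its examples here are 'fair',
# so its outputs will tend to look 'fairer' regardless of the input.
```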
Societal Bias
Societal AI bias occurs when an AI acts in ways that reflect social intolerance or institutional discrimination. At first glance, the algorithms and the data themselves seem unbiased, but their results end up reinforcing societal biases.
Here is an example of societal bias in AI-based decision-making. Google recently found itself in trouble because a feature in its advertising system allowed advertisers, including landlords and employers, to discriminate against transgender and non-binary people. Those who advertise on Google or Google-owned YouTube could choose to exclude people who have not been identified as male or female. This allowed advertisers, intentionally or unintentionally, to discriminate against people who identify as neither male nor female, which violates anti-discrimination law. Google has since changed its advertising settings.
This is an example of algorithmic and data bias shaped by societal bias, which gave people the opportunity to further strengthen their prejudices through technology.
What do we do about the Biases in AI?
Have a fact-based dialogue about underlying human biases. As tools for detecting bias in machines become more advanced, we can also raise the standards we hold humans to. One approach is to run an algorithm and have a human decision maker work on the same cases, then compare the results. The human decision maker can also use "interpretation techniques" that help identify what led to the model's decision, in order to understand the reasons for any discrepancies. The important point is that when we find bias, it is not enough to change the algorithm; business leaders must also improve the human-driven processes behind it.
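One simple way to "compare the results", as suggested above, is to check whether an algorithm's decisions favour one group over another. The Python sketch below computes approval rates per group and their ratio (often called the disparate impact ratio); the decisions and group names are hypothetical, invented only for illustration.

```python
# A minimal sketch of one common bias check: comparing an algorithm's
# approval (selection) rates across groups. All data here is made up.

decisions = [  # (group, model_approved)
    ("group_A", True), ("group_A", True), ("group_A", False), ("group_A", True),
    ("group_B", False), ("group_B", True), ("group_B", False), ("group_B", False),
]

def selection_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate("group_A")  # 0.75 on this toy data
rate_b = selection_rate("group_B")  # 0.25 on this toy data

# A disparate impact ratio far below 1.0 suggests one group is approved
# much less often, so the model's decisions deserve a closer human look.
print(f"Group A approval rate: {rate_a:.0%}")
print(f"Group B approval rate: {rate_b:.0%}")
print(f"Disparate impact ratio: {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")
```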
Consider how humans and machines can work together to reduce bias. Some "human-in-the-loop" systems make recommendations or provide options from which humans choose. When such algorithms are transparent about how they arrive at their recommendations, people can better judge how much weight to give them.
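As a concrete illustration of the "human-in-the-loop" idea, here is a minimal Python sketch, with a hypothetical confidence threshold and made-up cases: the system handles routine cases itself and routes borderline ones, together with its recommendation and confidence, to a human reviewer.

```python
# A minimal "human-in-the-loop" sketch: the algorithm decides only when
# it is confident; borderline cases go to a human reviewer. The threshold
# and the cases below are hypothetical and purely illustrative.

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff for automatic decisions

def route(case_id, recommendation, confidence):
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"Case {case_id}: auto-{recommendation} (confidence {confidence:.0%})"
    # Surface the recommendation *and* its confidence, so the reviewer
    # knows how much weight to give the machine's suggestion.
    return (f"Case {case_id}: sent to human review "
            f"(model suggests '{recommendation}', confidence {confidence:.0%})")

for case in [(1, "approve", 0.97), (2, "reject", 0.62), (3, "approve", 0.88)]:
    print(route(*case))
```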
Companies should invest more, provide more data, and take a multidisciplinary approach to bias investigations (while respecting privacy, of course) to continue making progress in this area. Serious efforts to make algorithm designers' choices more transparent and to incorporate ethics into computer science courses are a good starting point.
Brainy Fact
Virtual assistants like Siri and Alexa have been severely criticized for 'gender bias', i.e., for having only
female voices. With the iOS beta update released on March 31, 2021, Apple lets users choose between male
and female voices when enabling Siri. Similarly, Amazon has rolled out India's first celebrity voice feature
on Alexa, featuring the movie star Mr. Amitabh Bachchan.