Task #Problem Solving & Logical Reasoning
Take your parents' smartphone with their permission and make a list of the apps installed on it. Now, surf
the Internet and find out the ethical and privacy concerns related to these apps. Write YES if there is any
ethical or privacy concern related to the app, otherwise write NO.
Sr. No. | App Name | Ethical or Privacy Concern (YES/NO)
AI Bias
Can we trust AI systems? Not yet. AI technology may inherit human biases due to biases in training data. Consider
the following examples:
Example 1: Why do most images that show up when you do an image search for “doctor” depict white men?
Example 2: Why do most images that show up when you do an image search for “shirts” depict men's shirts?
Example 3: Why do most search results show “women's salons” when you search for salons nearby?
Example 4: Why do virtual assistants often have female voices?
“AI bias is a phenomenon that occurs when an algorithm produces results that are systematically prejudiced towards
or against people of a certain gender, language, race, or wealth level, and therefore produces skewed output. Algorithms
can have built-in biases because they are created by individuals who have conscious or unconscious preferences that
may go undiscovered until the algorithms are used publicly.”
What are the Sources of AI Bias?
Some of the sources of AI bias are:
• Data: AI systems are a product of the data fed into them, so the training data is the first place to check for
bias. The dataset for an AI system should be realistic and of a sufficient size. However, even the largest datasets
collected from the real world may reflect human subjectivity and underlying social biases. Amazon's AI recruitment
system is a good example: it was found that the system was not selecting candidates in a gender-neutral way. The
machine learning model was trained on resumes submitted over a period of 10 years, most of which came from
men, so it favoured men over women.
• Algorithms: The algorithms themselves do not add bias to an AI model, but they can amplify biases already
present in the data. Consider an image classifier trained on publicly available pictures of people's kitchens,
most of which happen to show women rather than men. AI algorithms are designed to maximise accuracy, so
such a classifier may learn to label everyone pictured in a kitchen as a woman, even though some of the images
show men. The sketch after this list illustrates the idea.
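To make the amplification effect concrete, here is a minimal Python sketch. The 90/10 split and the "always predict the majority label" rule are assumptions chosen purely for illustration, not taken from any real dataset or from the image classifier described above; they simply show how a rule that only maximises overall accuracy can look successful while being completely wrong about the minority group.

```python
from collections import Counter

# Hypothetical toy labels for 100 "person in a kitchen" images.
# The 90/10 skew is assumed purely for illustration.
labels = ["woman"] * 90 + ["man"] * 10

# With no better signal, an accuracy-maximising rule settles on the
# most frequent label seen during training.
majority_label = Counter(labels).most_common(1)[0][0]

def predict(_image):
    """Ignore the image and always return the majority label."""
    return majority_label

# Overall accuracy looks high, but accuracy on the minority group is zero.
overall_accuracy = sum(predict(None) == y for y in labels) / len(labels)
men_accuracy = sum(predict(None) == y for y in labels if y == "man") / labels.count("man")

print("Prediction for every image:", majority_label)   # woman
print("Overall accuracy:", overall_accuracy)           # 0.9 (looks good)
print("Accuracy on images of men:", men_accuracy)      # 0.0 (completely biased)
```

The skew came from the data, but the accuracy-maximising rule turned a 90/10 imbalance into a 100/0 outcome, which is exactly what is meant by an algorithm amplifying an existing bias.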

