Page 203 - Artificial Intellegence_v2.0_Class_11
Bias is one of the biggest challenges facing AI. Although programmers try to use accurate, factual data, some bias inevitably creeps in as AI is applied to ever wider areas.
An inherent problem with AI systems is that they are only as good, or as bad, as the data they are trained on. Bad data often carries racial, gender, community, or ethnic bias. If the bias in an algorithm that makes important decisions goes unrecognised, it may lead to unethical and unfair consequences.
In the future, these biases may intensify further, as many AI recruitment systems will continue to be trained on biased data. The immediate need, therefore, is to train these systems on unbiased data and to develop algorithms that are easy to interpret.
What are the sources of AI bias?
There are mainly three sources of AI bias:
[Figure: Sources of AI Bias]
• Data
• Algorithm
• Societal bias
Data
AI systems are only as good as the data we feed them. Feeding skewed data into a system leads to AI bias. AI systems cannot tell whether their training data is right or wrong, or whether it represents a broader population.
Amazon developed an AI tool for recruitment, but the company soon realised its new system was 'gender-biased'. Amazon's computer models were trained to screen applications by observing patterns in resumes submitted to the company over a 10-year period. Because most of those resumes came from men, the AI system started 'favouring' male candidates for tech jobs at the company.
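The mechanism behind this can be sketched in a few lines of code. The example below is purely hypothetical (it is not Amazon's actual system): a toy "recruitment model" that simply learns the historical hiring rate for each group from skewed past data. Because the past decisions favoured one group, the model's scores reproduce that bias.

```python
# Hypothetical illustration of data bias: a toy model that learns
# hiring rates per group from skewed historical records.
from collections import defaultdict

def train(history):
    """Learn the hiring rate for each group from (group, hired) records."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in history:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

# Skewed historical data: far more male applicants, and past
# decisions that favoured them (invented numbers for illustration).
history = ([("male", 1)] * 80 + [("male", 0)] * 20
           + [("female", 1)] * 2 + [("female", 0)] * 8)

scores = train(history)
print(scores)  # {'male': 0.8, 'female': 0.2}
```

The model never "decides" to discriminate; it faithfully reproduces the imbalance in its training data, which is exactly why unbiased data matters.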
Experiential Learning
Video Session
Scan the QR code or visit the following link to watch the video and answer the question
given below: Amazon's Sexist Recruitment AI
https://www.youtube.com/watch?v=JOzQjT-hJ8k
1. Was the Amazon recruitment system biased against women?
2. How can this 'mistake' be avoided in future?
AI Values (Ethical Decision Making) 201

