
• Societal bias: Societal bias occurs when AI reflects stereotypes or prejudices that exist in society
                   and culture.
                   Example: If you search for images of a “nurse” and the AI mostly shows women, or if you
                   search for “doctor” and it mostly shows men, the AI is reflecting gender stereotypes that are
                   common in society.



                 Popular AI Bias Examples
                 Let us learn about some popular AI Bias examples:


                 Amazon recruitment
                 Amazon introduced an AI project in 2014 with the aim of bringing AI into its recruitment process.
                 The objective of this project was to remove the mundane job of selecting, analysing and sorting
                 the resumes of different applicants. After a year, Amazon realised that the AI system was not
                 functioning properly and was biased against resumes containing words like “women’s”.
                 This happened because Amazon used data from the past 10 years to train its AI model. Since
                 the tech industry was male-dominated, about 60% of Amazon’s workforce was male. The
                 recruitment system therefore wrongly learnt that male candidates were superior, and it
                 downgraded resumes that contained phrases like “participated in women’s hockey”. As a
                 result, Amazon stopped using the recruitment system.
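
                 The mechanism behind the Amazon case can be sketched in a few lines of Python. This is a
                 hypothetical toy example (the data and the word-scoring rule are invented for illustration
                 and are not Amazon’s actual system): a naive model trained on skewed historical hiring data
                 ends up penalising a gendered word even though that word says nothing about skill.

```python
from collections import Counter

# Hypothetical toy dataset mimicking skewed historical hiring data:
# most "hired" (1) resumes come from men, so a gendered word like
# "womens" correlates with rejection (0) purely by historical accident.
resumes = [
    ("python java leadership", 1),
    ("java cloud leadership", 1),
    ("python cloud mens chess club", 1),
    ("python java womens hockey team", 0),
    ("java cloud womens chess club", 0),
    ("python leadership", 1),
]

hired = Counter()
rejected = Counter()
for text, label in resumes:
    for word in text.split():
        (hired if label else rejected)[word] += 1

def score(word):
    # Naive word score: how much more often the word appears in hired
    # resumes than in rejected ones (+1 smoothing to avoid division by zero).
    return (hired[word] + 1) / (rejected[word] + 1)

print(score("python"))   # skill word: scores high
print(score("womens"))   # gendered word: scores low -> the model "learnt" bias
```

                 The model has no idea what gender is; it simply copies whatever pattern is present in its
                 training data, which is exactly why biased historical data produces a biased system.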


                 COMPAS
                 Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) was software
                 used in the US that applied Machine Learning to predict the likelihood of reoffending among
                 criminal suspects. Many states later stopped using the software after it was exposed as
                 biased against people of colour.

                 Healthcare risk algorithm

                 A healthcare algorithm used in the USA predicted which patients would likely need extra
                 medical care. It was later found that the algorithm was biased in favour of white patients
                 when selecting who should receive this care.


                 Facebook Ads
                 Facebook allowed its advertisers to post housing and employment ads that excluded people
                 on the basis of race, religion, gender, etc. In 2019, the tech giant was sued by the US
                 Department of Housing and Urban Development for enabling this discriminatory ad targeting.
                 The company later announced that it would stop allowing such exclusions.

                 Facebook also launched a face recognition feature that identified faces by their unique
                 facial features. This, too, proved to be an example of AI bias, as it did not perform well
                 for non-male and non-white individuals. Facebook has now discontinued this feature.






                                                                                             AI Ethics    63