
Amazon Recruitment
In 2014, Amazon started an AI project to automate its recruitment process. The objective was to take over the mundane job of screening, analysing and sorting the resumes of different applicants. After a year, Amazon realised that the system was not functioning properly: it had learnt to penalise resumes containing words like “women’s”.

This happened because Amazon trained its AI model on data from the previous 10 years. Since the tech industry was male-dominated, about 60% of Amazon’s workforce was male. The recruitment system therefore wrongly learnt that male candidates were superior, and it dropped resumes containing phrases like “participated in women’s hockey”. As a result, Amazon stopped using the recruitment system.
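The failure mechanism can be illustrated with a small, self-contained sketch. This is not Amazon’s actual system; the resume snippets, the hiring labels and the word being inspected are all made up. A text classifier trained on historically skewed hiring decisions picks up a negative weight for a gendered word:

# A toy illustration (not Amazon's system) of how a classifier trained
# on historically skewed hiring decisions learns to penalise a word.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up "historical" resumes: the ones mentioning women's activities
# were mostly rejected in the past, so the word correlates with the label.
resumes = [
    "java developer built scalable services",        # hired
    "python engineer cloud infrastructure",          # hired
    "captain of womens hockey team java developer",  # rejected
    "womens coding club lead python engineer",       # rejected
    "c++ programmer distributed systems",            # hired
    "womens chess society python programmer",        # rejected
]
hired = [1, 1, 0, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learnt weight for "womens" comes out strongly negative: the model
# has absorbed the historical bias, not any signal about job skill.
idx = vec.vocabulary_["womens"]
print("learnt weight for 'womens':", model.coef_[0][idx])

Nothing about job skill makes that weight negative; the word simply co-occurs with past rejections, which mirrors how Amazon’s model is believed to have absorbed its bias.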

                 COMPAS
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), used in the US, applied Machine Learning to predict the chances of reoffending among criminal defendants. Many states later stopped using the software after it was exposed as biased against people of colour.


                 Healthcare Risk Algorithm
A healthcare risk-prediction algorithm was used in the USA to predict which patients would likely need extra medical care. It was later found that the algorithm was biased: it favoured white patients over equally sick Black patients.

                 Facebook Ads
Facebook allowed its advertisers to target housing and employment ads in ways that excluded people on the basis of race, religion, gender, and other attributes. In 2019, the tech giant was sued by the US Department of Housing and Urban Development for enabling this discriminatory ad targeting. The company later announced that it would stop allowing it.

Facebook’s face recognition feature identifies people by their unique facial features. This, too, proved to be a case of AI bias, as it does not perform well on individuals who are not male or not white. Facebook has been trying to fix the problem with its open-source database.

                 Prevention from Biases

Following are some ways to prevent AI bias:
   • Be aware of biases; awareness is the first step towards prevention.
   • Select training data that is large enough and represents every group appropriately.
   • Run enough testing and validation to ensure the algorithm and data set are free of bias (a simple test is sketched after this list).
   • Closely monitor the system as it keeps learning during operation, to catch any bias that creeps in.
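One simple, hypothetical bias test is demographic parity: comparing how often the model gives a favourable outcome to each group. The group labels and predictions below are made up for illustration, and the warning threshold is chosen arbitrarily; real audits use larger data and several fairness metrics.

def selection_rate(predictions, groups, group):
    # Fraction of members of `group` the model selected (predicted 1).
    picks = [p for p, g in zip(predictions, groups) if g == group]
    return sum(picks) / len(picks)

# Made-up model outputs: 1 = candidate selected, 0 = rejected.
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

rate_a = selection_rate(predictions, groups, "A")
rate_b = selection_rate(predictions, groups, "B")
print(f"group A selected: {rate_a:.0%}, group B selected: {rate_b:.0%}")

# A large gap between the rates is a warning sign worth investigating.
if abs(rate_a - rate_b) > 0.2:   # threshold is arbitrary for this demo
    print("Warning: possible bias - investigate the data and the model.")

Here group A is selected 75% of the time and group B only 25% of the time, so the check flags the model for a closer look.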


                         AI Access

AI access can be acquired by two means:
1. Data availability: AI needs access to huge data sets so that it can analyse them, draw conclusions and learn from them.
