
AI Bias in Real Life
AI bias in real life means that the decisions made by AI systems are not always fair or accurate, because the data or rules behind them reflect human prejudices. Just like people, AI systems can make unfair decisions if they are trained on biased data or programmed with biased rules.


                            Brainy Fact


Tay was an Artificial Intelligence chatbot originally launched by Microsoft Corporation on Twitter on March 23, 2016. When the bot began posting inflammatory and offensive tweets through its Twitter account, the resulting controversy prompted Microsoft to shut down the service just 16 hours after its launch. According to Microsoft, this was caused by trolls who "attacked" the service, as the bot's responses were based on its interactions with people on Twitter.



              Healthcare
An AI model utilised for medical diagnosis can be biased if one demographic group is over-represented in the training data compared to another. For example, an AI system for detecting melanoma (a type of skin cancer) trained primarily on images of fair-skinned individuals may not perform well for darker-skinned ones, leading to misdiagnosis and delayed treatment, as the sketch below illustrates.
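The following is a minimal sketch of this effect, not a real diagnostic model: all feature values, the label rule, and the group sizes are invented for illustration. A classifier is trained on data dominated by one group, then tested separately on each group.

```python
# Minimal sketch (invented data): a classifier trained mostly on one group
# tends to perform worse on the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'lesion features'; the signal sits at a slightly
    different place for each group (a hypothetical label rule)."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training data: 950 samples from group A, only 50 from group B
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh, equal-sized test sets for each group
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (under-represented)", 1.5)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", round(accuracy_score(yt, model.predict(Xt)), 3))
```

Running this typically shows much higher accuracy for group A than for group B, even though the model saw some data from both groups.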

                              Education
Grading essays or standardised tests with AI can discriminate against test takers on the basis of culture, dialect, or writing style. Failure to recognise cultural expressions and language varieties may unfairly disadvantage students from specific regions, as the toy example below shows.
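As a toy illustration (everything here is invented, including the vocabulary list and essays), imagine a naive "grader" that rewards words from one standard vocabulary. An essay expressing the same idea in a different dialect scores lower, even though it is just as clear.

```python
# Toy sketch: a "grader" that rewards one standard vocabulary
# penalises other dialects and expressions.
STANDARD_VOCAB = {"therefore", "however", "evidence", "conclusion", "argue"}

def naive_score(essay):
    # Fraction of words that appear in the "standard" vocabulary
    words = essay.lower().split()
    return sum(w.strip(".,") in STANDARD_VOCAB for w in words) / len(words)

essay_1 = "The evidence is clear. Therefore I argue the conclusion follows."
essay_2 = "The proof is plain as day. So I reckon the upshot follows."

print(round(naive_score(essay_1), 2))  # higher score
print(round(naive_score(essay_2), 2))  # same idea, different dialect: lower
```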

              Finance
Biased training data or input attributes may inadvertently lead AI algorithms used in credit scoring to discriminate against certain demographic groups. For instance, if historical loan-approval data reflects prejudiced lending practices, models trained on that data may reproduce those patterns, making it difficult for disadvantaged communities to get equal access to loans. A simple check for this is sketched below.
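One widely used fairness check is the "four-fifths rule": if one group's approval rate falls below 80% of another group's, the model's decisions may have a disparate impact. The sketch below applies this rule to made-up loan decisions; the numbers are purely illustrative.

```python
# Minimal sketch of the "four-fifths rule" on invented loan decisions.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = loan approved, 0 = loan denied (hypothetical model outputs)
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # historically favoured group
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # historically disfavoured group

ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.80 = 0.38
if ratio < 0.8:
    print("Warning: possible disparate impact; audit the training data.")
```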

                              Criminal Justice
If AI tools are trained on records from a justice system that was biased in sentencing, pre-trial detention, or parole decisions, their predictions can inherit that bias. For instance, because past cases disproportionately targeted people from certain races or socio-economic classes, such a tool may categorise many people from these groups as high risk, leading to unfair sentencing and parole procedures. One way to detect this is to compare error rates across groups, as sketched below.
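The sketch below checks false positive rates per group: how often a risk tool flags someone as "high risk" who in fact did not re-offend. All records here are invented purely for illustration.

```python
# Minimal sketch (invented records): compare false positive rates by group.
def false_positive_rate(records):
    negatives = [r for r in records if not r["reoffended"]]
    flagged = [r for r in negatives if r["high_risk"]]
    return len(flagged) / len(negatives)

records = [
    # group, model's label, actual outcome
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
]

for g in ("A", "B"):
    subset = [r for r in records if r["group"] == g]
    print(f"Group {g} false positive rate: {false_positive_rate(subset):.2f}")
# Group A: 0/3 = 0.00, Group B: 2/3 = 0.67 -- same tool, unequal error rates
```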



                               Task                                                     #Experiential Learning



Visit the PortraitAI art generator (https://ai-art.tokyo/en/). Users upload their selfies here, and the Artificial Intelligence uses its understanding of Baroque and Renaissance portraits to draw their portrait in the style of a master. If a person is white, the result is very good. The drawback is that the most popular paintings of this era were of white Europeans, so the database consists mainly of white people, and an algorithm that relies on such a database tends to make you look 'fairer' while drawing your image.

