
Task (21st Century Skills: #Technology Literacy)

                 Case Study: Self-driving cars
                 Scenario 1:
                 Imagine it's the year 2030. Self-driving cars, once just a futuristic concept, are now a common sight on the roads. You
                 own one yourself, using it for your daily commute and enjoying the ease it brings to your life.
                 Of course, with all the advanced features it offers, the car is quite expensive. Now, picture this: one day, your father
                 is on his way to the office in his self-driving car. He is seated in the back while the car drives itself. Suddenly, a small
                 boy runs into the road. The situation unfolds so quickly that the car has only two options:
                 • Continue straight and severely injure the boy.
                 • Take a sharp right turn to avoid the boy, crashing into a metal pole, damaging the car, and injuring your father.
                 This scenario highlights a critical ethical dilemma that developers face when designing a self-driving car's
                 decision-making algorithm. Ultimately, the developer's morality influences the machine's choices: whatever they
                 believe is the "right" decision will be prioritised in the car's programming.
                 If you were in the developer's position, how would you approach this dilemma?
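
                 To see how a developer's moral judgement can end up inside the code, here is a minimal Python sketch (not from this
                 book). The harm weights, option names, and the choose_action() helper are all hypothetical, chosen purely for
                 illustration; real self-driving systems are far more complex. Notice that changing the weights changes the car's
                 "decision", which is exactly the point of the dilemma above.

                 # One possible moral judgement, encoded as numbers: lower total harm is preferred.
                 HARM_WEIGHTS = {
                     "pedestrian_injury": 10.0,   # harm to the boy on the road
                     "passenger_injury": 6.0,     # harm to the person inside the car
                     "property_damage": 1.0,      # damage to the car or the pole
                 }

                 def harm_score(consequences):
                     """Total weighted harm for a list of consequence labels."""
                     return sum(HARM_WEIGHTS[c] for c in consequences)

                 def choose_action(options):
                     """Pick the option whose consequences have the lowest total harm."""
                     return min(options, key=lambda name: harm_score(options[name]))

                 options = {
                     "continue_straight": ["pedestrian_injury"],
                     "swerve_right": ["passenger_injury", "property_damage"],
                 }

                 print(choose_action(options))   # prints "swerve_right" under these weights

                 With these particular weights the car swerves; a developer who weighted passenger injury more heavily would get the
                 opposite outcome from the same code.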

                 Scenario 2:
                 Let's now assume that the car has hit the boy who stepped in front of it. Considering this as an accident, who should
                 be held responsible, and why?
                 1.  The owner of the car.
                 2.  The manufacturing company.
                 3.  The developer who designed the car's algorithm.
                 4.  The boy who ran into the road and was severely injured.

              Types of Ethical Frameworks


              The various types of ethical frameworks are classified as follows:

              Ethical Frameworks of AI
                 • Sector-based: Bioethics
                 • Value-based: Rights, Utility, Virtue

              The description of these types of ethical frameworks is as follows:
                 • Sector-based ethical frameworks: These frameworks address ethical challenges specific to a field or industry.
                 They are tailored to a particular sector such as technology, finance, or healthcare. For instance, in technology,
                 key considerations include data privacy and the responsible development of AI; in healthcare, the emphasis is on
                 making fair decisions that respect everyone's rights.

                 o  Bioethics: Bioethics is an interdisciplinary framework used in healthcare to resolve difficult ethical problems.
                    It combines ideas and principles from fields such as medicine, law, and philosophy to help doctors, patients, and
                    researchers make decisions that are fair, respectful, and protective of everyone's rights. This helps build trust
                    and improves the overall quality of care.
                 • Value-based ethical frameworks: These frameworks concentrate on essential ethical principles and values such
                 as honesty, respect and fairness that influence decision-making. They are based on different moral beliefs and
                 help us judge whether actions are right or wrong, encouraging ethical behaviour. They are further categorised as:

