
B.   Long answer type questions.
                       1.  Explain the five pillars of AI ethics.
                      Ans.  The five pillars of AI ethics are fairness, explainability, robustness, transparency, and privacy. A description of
                           each is as follows:
                           •   Fairness: Fairness is a crucial aspect of ethics in AI because it ensures that AI systems treat all individuals and
                              groups equitably and without bias. In AI, fairness means that the outcomes produced by algorithms do not
                              disproportionately harm or advantage specific demographics based on characteristics such as race, gender,
                              religion, ethnicity, or socioeconomic status (a small illustrative check appears after this answer).
                           •   Explainability: Explainability in AI is crucial for ensuring that the decisions made by AI systems are understandable
                              to humans. It refers to the transparency and clarity of AI systems, enabling users to understand how algorithms
                              arrive at their decisions and predictions.
                          •   Robustness: Robustness in AI ethics refers to the capacity of AI systems to perform reliably and accurately across
                             various conditions, while also minimising unintended consequences and harmful impacts. It is a fundamental
                             aspect of ethical AI because unreliable or biased systems can lead to significant societal harm.
                          •   Transparency: Transparency in AI means being open and clear about how AI systems are created, how they
                             work, and what impacts they might have. It involves providing straightforward information about the data,
                             algorithms, and decision-making processes used in AI applications. This openness encourages accountability,
                             allows for scrutiny, and helps people make informed choices about the ethical and social implications of AI
                             technologies.

                          •   Privacy: Privacy involves individuals having control over their personal information and avoiding unwarranted
                             interference in their lives. It encompasses the right to keep aspects of one's life private, such as private messages,
                             activities, and data. Privacy is crucial as it safeguards individual autonomy, dignity, and freedom from unnecessary
                             intrusion.
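                           The fairness pillar can be made concrete with a simple check on a system's outcomes. The short Python sketch
                           below uses hypothetical loan-decision records (the group labels, field names, and figures are illustrative
                           assumptions, not data from any real system) to compare approval rates between two groups; a large gap between
                           the rates is one common warning sign of a disproportionate outcome.

                              # A minimal sketch of a fairness check: compare approval rates across groups.
                              # The records below are hypothetical; a real audit would use actual decision logs.
                              records = [
                                  {"group": "A", "approved": True},
                                  {"group": "A", "approved": True},
                                  {"group": "A", "approved": False},
                                  {"group": "B", "approved": True},
                                  {"group": "B", "approved": False},
                                  {"group": "B", "approved": False},
                              ]

                              def approval_rate(records, group):
                                  """Fraction of applicants in a group whose loans were approved."""
                                  group_records = [r for r in records if r["group"] == group]
                                  approved = sum(1 for r in group_records if r["approved"])
                                  return approved / len(group_records)

                              rate_a = approval_rate(records, "A")
                              rate_b = approval_rate(records, "B")

                              # The difference between group approval rates is often called the
                              # demographic parity gap; a large value signals a possible fairness problem.
                              print(f"Group A approval rate: {rate_a:.2f}")
                              print(f"Group B approval rate: {rate_b:.2f}")
                              print(f"Demographic parity gap: {abs(rate_a - rate_b):.2f}")

                           A gap close to zero suggests the two groups are being treated similarly on this measure, while a large gap would
                           prompt a closer look at the training data and the design of the algorithm.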
                       2.  Discuss the concept of the ethical dilemma with an example.
                     Ans.  An ethical dilemma is a situation in which a person faces a choice between conflicting moral principles or values. It
                          often involves tough decisions where there are competing interests or where doing what is considered right may
                          result in undesirable outcomes. Ethical dilemmas can arise in various contexts, such as in personal relationships,
                          professional  settings,  or  societal  issues.  Resolving  ethical  dilemmas  requires  thoughtful  consideration  of  the
                          consequences of different actions and balancing conflicting ethical concerns.
                          Let us understand the concept of ethical dilemma with the help of an example.
                          Scenario: You work for a pharmaceutical company developing a new drug to treat a rare disease. During clinical
                          trials, it becomes evident that the drug is effective in treating the disease, but it also has significant side effects in
                          a small percentage of patients. The company is under pressure to release the drug quickly due to the urgent need
                          for treatment, but there are concerns about the potential harm caused by the side effects.
                            Ethical Dilemma: On one hand, releasing the drug could provide relief to patients suffering from the rare disease,
                           potentially saving lives and improving quality of life. On the other hand, there’s a risk of causing harm to patients
                          due to the side effects, which could lead to serious health complications or even fatalities.

                       3.  Explain the different sources of bias in AI systems and how they can lead to unfair outcomes.
                     Ans.  Bias in AI systems can stem from several sources, including training data bias, algorithmic bias, and cognitive bias.
                          •   Training data bias: This occurs when the data used to train AI systems is unrepresentative, incomplete, or
                             skewed. For instance, if a medical AI system is trained primarily on data from male patients, it may not perform
                              well for female patients, leading to misdiagnoses. Similarly, an AI used for loan approvals might be biased if its
                              training data primarily includes applicants from affluent neighbourhoods, thereby overlooking applicants from poorer areas.
                          •   Algorithmic bias: This type of bias arises during the design and implementation of algorithms. If an AI hiring
                             algorithm is trained on historical data that reflects biased hiring decisions, such as favouring one demographic
                             group over another, the algorithm may perpetuate these biases in new hiring recommendations.


