Touchpad AI, Page 285

●    Cognitive bias: This refers to systematic patterns of deviation from rationality or objectivity in
                                  judgement, influenced by factors like emotions and personal experiences. For example, a person
                                  who strongly believes that climate change is not real might dismiss scientific evidence supporting it,
                                  thus reinforcing their existing beliefs. Cognitive biases can lead to irrational or partial judgements,
                                  impacting AI development and application.
                             Each of these biases can result in AI systems that unfairly discriminate against certain groups, leading to
                             unethical and unfair outcomes in various sectors such as healthcare, finance, and criminal justice.
                    4.  List down the different key strategies to avoid Bias in AI.

                        Ans.   Following are key strategies to avoid bias:
                             ●    Use diverse and representative data: The data used for training AI should properly represent
                                  different groups of people who may be affected by AI decisions.
                             ●    Apply pre-processing and post-processing techniques: Before training the AI model, data should
                                  be cleaned, balanced, and prepared carefully (pre-processing). After training, results should also
                                  be checked and adjusted (post-processing).

                             ●    Develop fairness-aware algorithms: The algorithms or the rules that tell AI how to make decisions
                                  should be designed with fairness in mind.

                             ●    Conduct regular audits and rigorous testing: AI systems should not be left unchecked after they
                                  are created. They must go through frequent audits, monitoring, and testing to see how they perform
                                  with real-world data.

                             ●    Involve diverse teams and domain experts: Creating fair AI requires a team of people from different
                                  fields — including computer scientists, social scientists, ethicists, and people from underrepresented
                                  communities.
                             ●    Promote transparency and explainability: AI systems should be designed so that their decisions
                                  can be easily explained and understood by humans.
                             ●    Keep improving AI: AI is not a one-time project; it requires constant care and improvement.
                                  Developers should regularly review, update, and monitor AI systems to make sure they stay in line
                                  with current social values and adapt to changes in society.
                             ●    Collaborate across departments: To make AI fair and accountable, organisations should encourage
                                  teamwork among different departments, such as data, legal, compliance, and policy teams.
                             ●    Follow ethical standards and regulations: Developers and organisations should follow clear ethical
                                  codes and respect laws related to privacy, data use, and algorithmic fairness.
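The pre-processing strategy above can be illustrated with a minimal sketch, assuming Python and only its standard library: before training, a dataset is balanced by oversampling the under-represented group so that every group is equally represented. The field name `"region"` and the data are illustrative assumptions, not from the text.

```python
import random
from collections import Counter

def oversample_minority(records, group_key, seed=0):
    """Balance a dataset by duplicating records from under-represented
    groups until every group is as large as the largest one."""
    rng = random.Random(seed)
    counts = Counter(r[group_key] for r in records)
    target = max(counts.values())
    balanced = list(records)
    for group, count in counts.items():
        pool = [r for r in records if r[group_key] == group]
        # Randomly duplicate records from this group to reach the target size.
        balanced.extend(rng.choice(pool) for _ in range(target - count))
    return balanced

# Example: 4 urban records but only 1 rural record before balancing.
data = [{"region": "urban"}] * 4 + [{"region": "rural"}]
balanced = oversample_minority(data, "region")
counts = Counter(r["region"] for r in balanced)
print(counts["urban"], counts["rural"])  # 4 4
```

Real projects would typically use a library routine for this (for example, a random oversampler), but the idea is the same: make the training data represent all affected groups before the model learns from it.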

                 C.  Competency-based questions:     HOTS     #21st Century Skills     #Interdisciplinary
                    1.   Aditi, a young entrepreneur, applies for a business loan to expand her small enterprise. She has an excellent
                        credit score, a solid business plan, and a history of successful repayments. However, the bank she approaches
                        uses an AI system to evaluate loan applications. This AI system was trained primarily on data from affluent urban
                        areas and does not consider factors relevant to Aditi’s rural context.
                          Despite her strong qualifications, the AI system flags her application as high risk because the training data does
                        not adequately represent rural entrepreneurs. As a result, Aditi’s loan application is denied, and she is unable to
                        expand her business. What are the potential consequences of using biased AI systems in financial services, and
                        how can such biases be addressed to ensure fair treatment of all applicants?
                        Ans.  Using biased AI systems in financial services can lead to unfair treatment of certain groups, such as rural
                             entrepreneurs like Aditi. This can result in qualified applicants being denied opportunities, perpetuating
                             economic disparities. Such biases can be addressed by training the AI system on diverse and representative
                             data that includes rural applicants, conducting regular audits and testing of loan decisions, and involving
                             diverse teams and domain experts, so that all applicants receive fair treatment.
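One way to detect the kind of bias Aditi faced is a simple audit of approval rates across groups, a basic demographic-parity check. The sketch below is illustrative: the group names, sample decisions, and the gap measure are assumptions for demonstration, not figures from the text.

```python
def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Demographic-parity difference: max minus min approval rate."""
    return max(rates.values()) - min(rates.values())

# Illustrative audit: urban applicants approved far more often than rural ones.
decisions = ([("urban", True)] * 8 + [("urban", False)] * 2 +
             [("rural", True)] * 2 + [("rural", False)] * 8)
rates = approval_rates(decisions)
gap = parity_gap(rates)
print(rates["urban"], rates["rural"], gap)  # 0.8 0.2 0.6
```

A large gap (here 0.6) signals that the system treats groups very differently and should trigger a review of the training data and the model, which is exactly what the auditing strategy in the previous answer recommends.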




                                                                                               Ethical Practices in AI  283