
                        2.  Explain societal bias.
                      Ans.  Societal AI bias occurs when an AI system acts in ways that reflect social intolerance or institutional
                           discrimination. On the surface, the algorithms and data themselves seem unbiased, but their results end up
                           reinforcing existing societal biases.
                        3.  Why should AI systems be transparent and reliable?
                      Ans.  An artificial intelligence system should operate reliably and as expected throughout its entire life cycle. It
                           should not pose an unreasonable security risk and should adopt security measures proportionate to the
                           magnitude of the potential risk. AI systems should be monitored and tested to ensure that they continue to
                           serve their intended purpose, and continuous risk management should be used to resolve identified issues.
                           Responsibilities should be clearly and appropriately assigned to ensure the robustness and safety of AI systems.

                       4.  Give two examples of how AI is helping people.
                     Ans.  i.  AI systems use machine learning and environmental data to predict natural disasters and alert people.
                           ii.   Bionic limbs and exoskeletons use machine learning and sensors to read body position and terrain,
                              improving mobility.

                        5.  List any three principles of Ethical AI.
                      Ans.  •  Human-centered values
                           •  Genderless, unbiased AI
                           •  Sharing the benefits of AI systems with all of humanity

                 B.   Long answer type questions.
                        1.  What do you mean by bias in AI? Give an example.
                      Ans.  Bias is any prejudice against individuals or groups, especially in ways that are considered unfair. "Bias in AI"
                           describes situations where ML-based data analysis systems are biased against certain groups of people. These
                           biases usually reflect prevailing social prejudices about race, gender, biological sex, age, and culture. For example,
                           MIT graduate student Joy Buolamwini was working with facial analysis software when she noticed a problem:
                           the software did not detect her face, because the coders of the algorithm had not trained it to identify a broad
                           range of skin tones and facial structures.
                        2.  Why is an AI ethical framework required?
                      Ans.  An AI ethical framework is required for the following reasons:
                           i.   To achieve reliable and fair results for everyone
                           ii.   To reduce the risk of a negative impact on those affected by AI systems
                           iii.   To ensure that companies and governments follow the highest ethical standards while designing,
                               developing, and implementing AI systems
                       3.  Comment on the statement “AI bias does not come from AI algorithms; it comes from people.”
                      Ans.  ML and AI are technologies built on algorithms created by humans. Like anything man-made, these algorithms
                           tend to reflect the biases of their creators. Because AI algorithms learn from data, biased historical data can
                           quickly produce biased AI models that make decisions based on unfair data sets.
                       4.  What steps should be taken to reduce bias in AI?
                     Ans.  Following are some of the ways through which we can reduce AI bias:
                           i.   Establish processes to test for and diminish bias in AI systems (a simple illustration follows this list).
                          ii.   Engage in fact-based conversations about potential biases in human decisions.
                          iii.  Adopt a multidisciplinary approach.
                          iv.  Invest more in AI bias research.
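                           The following Python sketch is a minimal illustration and is not from the textbook; the data, the group
                           names, and the 80% threshold (a common rule of thumb) are assumptions made for the example. It shows one
                           simple way a team might test an AI system's decisions for group bias: compare the rate of positive decisions
                           across two groups and flag a large gap for further investigation.

                               # Minimal sketch of a bias test: compare positive-decision rates across groups.
                               # The decisions and groups below are hypothetical.

                               def selection_rate(outcomes):
                                   """Fraction of people in a group who received a positive decision."""
                                   return sum(outcomes) / len(outcomes)

                               def disparate_impact_ratio(group_a, group_b):
                                   """Ratio of the lower selection rate to the higher one (1.0 = equal treatment)."""
                                   rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
                                   return min(rate_a, rate_b) / max(rate_a, rate_b)

                               # Hypothetical model decisions (1 = approved, 0 = rejected) for two groups.
                               group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate = 0.75
                               group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # selection rate = 0.375

                               ratio = disparate_impact_ratio(group_a, group_b)
                               print(f"Disparate impact ratio: {ratio:.2f}")
                               if ratio < 0.8:   # the "four-fifths rule" is often used as a warning flag
                                   print("Possible bias: re-examine the training data and the model.")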
                       5.  Will AI systems ever be free from bias?
                      Ans.  Humans have numerous biases, and new ones are still being identified. Just as a completely fair human mind
                           may be impossible, AI systems may never be entirely free from bias. Ultimately, humans create the distorted
                           data, and it is humans, working with AI algorithms, who must examine that data to identify and eliminate the
                           distortions.

