
Some common types of AI bias are as follows:
• Data bias: Data bias happens when the data used to train an AI system doesn't represent everyone fairly. This could be because certain groups are missing or underrepresented.
Example: If you train an AI to recognise animals using only pictures of cats, it might never recognise a dog, because it has never seen one. It "thinks" only cats exist because its learning was limited. (A short Python sketch of this idea appears after this list.)

• Algorithm bias: This bias occurs due to the way the AI algorithms are designed. Sometimes, even if the data is fair, the rules built into the AI may still favour or discriminate against certain people or outcomes.
Example: Suppose a loan approval AI gives more weight to zip codes with higher incomes. If these zip codes mostly belong to wealthier groups, poorer applicants, even if qualified, might get rejected unfairly. (See the second sketch after this list.)

• Selection bias: This happens when the data used for training is chosen in a way that doesn't fairly represent the larger group or population.
Example: If a language AI is trained only on books from the U.S., it may not understand British or Indian English words, slang, or spelling differences. The AI becomes biased toward American English.
• Confirmation bias: AI can develop confirmation bias when it's trained to look for patterns that support a certain idea, ignoring evidence that contradicts it.
Example: If a news-recommending AI learns that you click on stories about "smartphones being bad for health," it might keep showing similar stories and never show articles with different views, making your view more extreme.
• Measurement bias: Measurement bias happens when the tools or methods used to collect data don't work the same way for everyone.
Example: A health app that measures heart rate using light sensors may be tested mainly on people with lighter skin tones, and might not give accurate readings for people with darker skin tones, leading to unfair results. (See the third sketch after this list.)

• Exclusion bias: Exclusion bias occurs when important factors or groups are left out of the training data.
Example: If a job recommendation AI is trained without including women's job history, it may suggest fewer or less relevant jobs to women, reducing their opportunities unfairly.

• Group attribution bias: This is when AI generalises based on group identity rather than individual differences.
Example: If an AI sees that a few children struggled with math, it might wrongly assume that all children are bad at math, which is an unfair and inaccurate generalisation.
• Historical bias: Historical bias comes from using old or outdated data that contains existing societal unfairness.
Example: If an AI is trained on old hiring records where mostly men were given jobs, it might "learn" that men are better workers and continue the unfair pattern by preferring male applicants.
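
To see data bias concretely, here is a minimal Python sketch. All the animal measurements below are invented for illustration. A simple nearest-neighbour "recogniser" is trained only on cats, so it labels everything, even a large dog, as a cat:

# A nearest-neighbour "animal recogniser" trained only on cats.
# All feature values below are made up for illustration.

training_data = [
    # (body_weight_kg, height_cm) -> label
    ((4.0, 25.0), "cat"),
    ((3.5, 23.0), "cat"),
    ((5.0, 27.0), "cat"),
]

def predict(features):
    """Return the label of the closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda item: distance(item[0], features))
    return closest[1]

# A dog-sized animal: far from every cat, but the model has never
# seen anything except cats, so it can only answer "cat".
print(predict((30.0, 60.0)))  # -> cat

No matter what input it gets, the model can only output labels it has seen; the missing group simply cannot be recognised.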
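The loan example under algorithm bias can be sketched in a few lines as well. The zip codes, incomes, weights, and approval threshold below are all made-up assumptions; the point is that the bias lives in the scoring rule itself, which weights neighbourhood income more heavily than the applicant's own record:

# A toy loan scorer whose *rule*, not its data, creates the bias.
# Incomes, weights, and the threshold are invented for illustration.

AVG_INCOME_BY_ZIP = {"10001": 95_000, "60601": 40_000}  # hypothetical

def loan_score(applicant):
    # 70% of the score comes from the neighbourhood's average income,
    # only 30% from the applicant's own repayment record.
    zip_income = AVG_INCOME_BY_ZIP[applicant["zip"]]
    return 0.7 * (zip_income / 100_000) + 0.3 * applicant["repayment_record"]

rich_zip_applicant = {"zip": "10001", "repayment_record": 0.5}   # average record
poor_zip_applicant = {"zip": "60601", "repayment_record": 0.95}  # excellent record

for person in (rich_zip_applicant, poor_zip_applicant):
    score = loan_score(person)
    verdict = "approved" if score >= 0.75 else "rejected"
    print(person["zip"], round(score, 2), verdict)

Running this, the average applicant from the wealthy zip code scores 0.82 and is approved, while the highly qualified applicant from the poorer zip code scores 0.57 and is rejected.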
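Measurement bias can be simulated too. In the hypothetical sketch below, everyone has the same true heart rate, but the simulated sensor, assumed for illustration to have been tuned mainly on lighter skin tones, adds far more noise for darker skin tones, so its readings are less trustworthy for that group:

import random

random.seed(0)

TRUE_HEART_RATE = 72  # beats per minute, the same for everyone here

def sensor_reading(skin_tone):
    """Simulated optical sensor with hypothetical noise levels."""
    # Assumed for illustration: the sensor was tested mainly on
    # lighter skin tones, so readings for darker tones are noisier.
    noise = 2 if skin_tone == "lighter" else 15
    return TRUE_HEART_RATE + random.uniform(-noise, noise)

for tone in ("lighter", "darker"):
    readings = [round(sensor_reading(tone)) for _ in range(5)]
    print(tone, readings)

The underlying quantity being measured is identical, yet the collected data differs in quality between groups, which is exactly what measurement bias means.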

