• Non-maleficence (Do Not Harm): This principle focuses on avoiding actions that could cause harm, whether
intentional or unintentional, to an individual or a community.
• Justice: This principle ensures fairness in distributing healthcare resources, treatments, and opportunities. It
emphasises equality and avoiding discrimination in medical decision-making.
Brainy Fact
Maleficence refers to the deliberate act of causing harm, injury, or wrongdoing to others.
CASE STUDY: AI for Early Detection of Mental Health Issues
Background
A healthcare organisation used an AI system to help mental health professionals identify people at high
risk of mental health problems like depression and anxiety. The goal was to spot these issues early, improve
patient care, and use resources more effectively. The AI analysed various data sources, including medical
records, demographics, social media activity, and behavioural patterns, to predict the likelihood of mental
health disorders.
However, the system had unintended consequences, resulting in biased treatment of specific
patient groups.
The Problem It Caused
The AI system incorrectly flagged women from low-income communities as having a higher risk of mental
health issues based on factors like their social media activity and signs of financial stress. However, many of
these women were not experiencing mental health disorders but were dealing with financial difficulties and
caregiving responsibilities.
Conversely, the system failed to identify individuals from wealthier backgrounds who might also have been at risk
but did not exhibit obvious signs of stress, leading to missed opportunities for early intervention.
Why the Problem Happened
• Bias in data: The AI model was trained on datasets that primarily represented wealthier individuals with better
access to healthcare. Social media data further amplified the issue, as people from lower-income communities
often have limited access to online mental health resources. (A simple check for this kind of disparity is
sketched after this list.)
• Overemphasis on social media: The system relied excessively on social media activity (such as post frequency
and tone) to predict mental health risk, without considering how social or economic stress factors can affect a
person's mental health.
• Ignoring social factors: The AI didn't take into account the broader social issues, like poverty or lack of access
to care, which can significantly affect mental health.
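The "bias in data" problem above can be made concrete with a quick check. The short Python sketch below uses small, invented records (the group labels, flag values, and counts are all hypothetical, not taken from the case study) and compares how often the system flags each group. A large gap between the two rates is one simple warning sign of the bias described here.

```python
# A minimal sketch with hypothetical data: check how often an AI system
# flags people in each group. A big gap between the rates is one simple
# signal of the kind of bias described in this case study.

# Each record: (income_group, flagged_by_ai) -- invented example values
predictions = [
    ("low-income", True), ("low-income", True), ("low-income", True),
    ("low-income", False), ("low-income", True),
    ("high-income", False), ("high-income", False), ("high-income", True),
    ("high-income", False), ("high-income", False),
]

# Count how often the AI flagged people in each group
for group in ("low-income", "high-income"):
    flags = [flag for g, flag in predictions if g == group]
    rate = sum(flags) / len(flags)
    print(f"{group}: flagged {rate:.0%} of people")

# With this example data, the output shows 80% of low-income people
# flagged versus 20% of high-income people -- a disparity worth
# investigating before trusting the model's predictions.
```

Real audits run this kind of group-wise comparison on much larger datasets and with more careful statistics, but the basic idea is the same: compare the model's behaviour across groups before relying on it.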
The Ethical Problems
• Bias: The AI overestimated the mental health risks for low-income communities while overlooking others who
needed care. This exacerbated existing healthcare inequalities.

