• Justice: This principle ensures fairness in distributing healthcare resources, treatments, and opportunities. It
emphasises equality and avoiding discrimination in medical decision-making.
CASE STUDY: AI for Early Detection of Mental Health Issues
Background
A healthcare organisation used an AI system to help mental health professionals identify people at high risk of
mental health problems like depression and anxiety. The goal was to spot these issues early, improve patient care,
and use resources more effectively. The AI analysed various data sources, including medical records, demographics,
social media activity, and behavioural patterns, to predict the likelihood of mental health disorders.
However, the system introduced unintended consequences, resulting in biased treatment towards specific patient
groups.
The Problem It Caused
The AI system incorrectly flagged women from low-income communities as having a higher risk of mental health
issues based on factors like their social media activity and signs of financial stress. However, many of these
women were not experiencing mental health disorders but were dealing with financial difficulties and caregiving
responsibilities.
Conversely, the system failed to identify individuals from wealthier backgrounds who might also be at risk but did
not exhibit obvious signs of stress, leading to missed opportunities for early intervention.
Why the Problem Happened
• Bias in data: The AI model was trained on datasets that primarily represented wealthier individuals with better
access to healthcare. Social media data further amplified the issue, as people from lower-income communities
often have limited access to online mental health resources.
• Overemphasis on social media: The system relied excessively on social media activity (like post frequency and
tone) to predict mental health, without considering how social or economic stress factors can affect someone's
online behaviour (a short sketch after this list illustrates this failure mode).
• Ignoring social factors: The AI didn't take into account the broader social issues, like poverty or lack of access
to care, which can significantly affect mental health.
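The sketch below is illustrative Python only; the features, thresholds, and scoring rule are hypothetical, not the
organisation's actual model. It shows how a rule shaped by skewed data can over-read social media signals:

def risk_score(posts_per_week, mentions_money_stress):
    # A toy rule learned from data dominated by wealthier patients:
    # low posting is read as "withdrawal" and money worries as illness,
    # with no account taken of poverty or caregiving pressures.
    score = 0
    if posts_per_week < 3:
        score += 2
    if mentions_money_stress:
        score += 3
    return score

# Two hypothetical patients with the same actual mental health status:
print(risk_score(posts_per_week=2, mentions_money_stress=True))    # 5 -> flagged as high risk
print(risk_score(posts_per_week=10, mentions_money_stress=False))  # 0 -> missed entirely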
The Ethical Problems
• Bias: The AI overestimated the mental health risks for low-income communities while overlooking others who
needed care. This exacerbated existing healthcare inequalities.
• Inaccuracy: The AI's wrong predictions could lead to unnecessary treatments for some individuals while others
at risk go unnoticed, compromising the system's reliability (the sketch after this list shows one way to measure
such errors by group).
• Privacy Concerns: Using social media activity for risk predictions raised concerns about people's privacy and
whether they were properly informed about how their data was being used.
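A simple audit compares how often the system errs for each patient group. The sketch below uses made-up records
(hypothetical data, not from the case study) to compute each group's false positive rate, i.e. the share of healthy
people wrongly flagged:

# Each record: (group, flagged_by_ai, actually_has_disorder)
records = [
    ("low_income", True, False), ("low_income", True, False),
    ("low_income", True, True), ("low_income", False, False),
    ("high_income", False, True), ("high_income", False, False),
    ("high_income", True, True), ("high_income", False, True),
]

def false_positive_rate(group):
    # Share of healthy people in this group wrongly flagged as high risk.
    healthy = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in healthy if r[1]]
    return len(flagged) / len(healthy)

for g in ("low_income", "high_income"):
    print(g, round(false_positive_rate(g), 2))
# Output: low_income 0.67, high_income 0.0 -> the bias described above

A fuller audit would also compare miss rates, since the case study's system under-flagged wealthier patients who
were genuinely at risk.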
Using Bioethics to Fix the Problem
By applying the four principles of bioethics, we can improve the fairness and effectiveness of the AI system:
• Respect for Autonomy
o Transparency and consent: Patients must be informed about how their data is used and given the choice
to opt out of having their personal or social media data analysed (a minimal opt-out check is sketched below).
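The sketch below shows one way such an opt-out could be enforced before any data reaches the model; the field
names, such as consented_to_social_data, are hypothetical:

patients = [
    {"id": 1, "consented_to_social_data": True, "social_posts": ["..."]},
    {"id": 2, "consented_to_social_data": False, "social_posts": ["..."]},
]

def prepare_inputs(patients):
    # Build model inputs, dropping social media data for anyone who opted out.
    inputs = []
    for p in patients:
        record = {"id": p["id"]}
        if p["consented_to_social_data"]:
            record["social_posts"] = p["social_posts"]
        inputs.append(record)
    return inputs

print(prepare_inputs(patients))  # patient 2's posts are excluded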