Page 275 - Touchpad AI
European Union’s Ethics Guidelines for Trustworthy AI
Focus: Ethical guidelines for AI development and deployment in the EU.
Components:
u Principles for trustworthy AI, such as respecting people's autonomy, preventing harm, ensuring fairness, and taking responsibility.
u Requirements for AI to be transparent, understandable, and auditable in detail.
u Mechanisms for keeping humans in charge, with oversight and accountability for AI systems that have a large impact on society.
IBM’s AI Fairness 360 is an open-source toolkit designed to
address bias in machine learning models. It includes over 70 fairness metrics
to help users detect bias, giving it a robust capability for identifying potential
sources of bias. The toolkit also offers more than 10 bias-mitigation algorithms, such as
optimised pre-processing, reweighing, and the prejudice remover. With its comprehensive
features, educational resources, and validation mechanisms, AI Fairness 360 aims to promote
fairness and equity in AI applications.
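To make the idea of a "fairness metric" concrete, here is a small plain-Python sketch (not AI Fairness 360 itself) of two group-fairness metrics the toolkit provides: statistical parity difference and disparate impact. The function names and the toy loan data are illustrative assumptions.

```python
# Sketch of two group-fairness metrics, computed by hand.
# AI Fairness 360 provides these (and many more) on its dataset objects;
# this plain-Python version is only for illustration.

def favorable_rate(outcomes, groups, group):
    """Share of favorable outcomes (1) received by one group."""
    selected = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(selected) / len(selected)

def statistical_parity_difference(outcomes, groups, privileged, unprivileged):
    """P(favorable | unprivileged) - P(favorable | privileged); 0 means parity."""
    return (favorable_rate(outcomes, groups, unprivileged)
            - favorable_rate(outcomes, groups, privileged))

def disparate_impact(outcomes, groups, privileged, unprivileged):
    """Ratio of the two rates; values well below 1 suggest bias."""
    return (favorable_rate(outcomes, groups, unprivileged)
            / favorable_rate(outcomes, groups, privileged))

# Toy loan-approval data: outcome 1 = approved, group "A" = privileged.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(statistical_parity_difference(outcomes, groups, "A", "B"))  # -0.5
print(disparate_impact(outcomes, groups, "A", "B"))               # 0.333...
```

Here group A is approved 75% of the time and group B only 25%, so both metrics flag a large gap; a mitigation algorithm would aim to move them back toward 0 and 1 respectively.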
BRAINY FACT
Building Trusted AI Pipelines (Using Open-Source)
u Robustness: Was it tampered with?
u Fairness: Is it fair?
u Explainability: Is it easy to understand?
u Lineage: Is it accountable?
Ensuring Privacy of Users and Their Data
One of the most important parts of Artificial Intelligence (AI) ethics is to protect the privacy of users and keep their personal
information safe. AI systems often use huge amounts of data to learn and make smart decisions. This data may include
names, photos, phone numbers, addresses, and even sensitive information like medical records or financial details.
If this information is not properly protected, it can be misused, leaked, or stolen. That is why AI developers must always
follow strong privacy and security rules to keep users safe.
Why is Privacy Important in AI?
Privacy means that a person has the right to control how their personal information is collected, used, and shared. In
AI, protecting privacy builds trust between the user and the system.
If users think that an AI app or website is not safe, they will not use it. But if they feel that their information is well-protected,
they are more likely to trust the system. Respecting privacy is not just a legal duty. It is also a moral responsibility.
Ways to Protect User Privacy in AI
Here are some ways to protect user privacy in AI:
u Collect only what is needed: AI should only collect the data that is truly needed for its purpose. For example, a fitness
app may only need your step count and not your full contact list.
u Anonymise and hide details: Before using data to train an AI system, private details like names, phone numbers, or
addresses should be removed. This process is called anonymisation.
u Get user permission: AI systems should always ask for permission before collecting or using any personal information.
Users should also have the option to withdraw their data if they no longer want to share it.
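The first two steps above, collecting only what is needed and anonymising the rest, can be sketched in a few lines of Python. The field names, the `anonymise` helper, and the sample records are all hypothetical, invented only for illustration.

```python
# Sketch: keep only the fields a fitness app actually needs,
# and drop direct identifiers before the data is used for training.
# Field names and records here are made up for illustration.

DIRECT_IDENTIFIERS = {"name", "phone", "address", "email"}
NEEDED_FIELDS = {"steps", "active_minutes"}  # data minimisation

def anonymise(record):
    """Return a copy keeping only needed, non-identifying fields."""
    return {k: v for k, v in record.items()
            if k in NEEDED_FIELDS and k not in DIRECT_IDENTIFIERS}

users = [
    {"name": "Asha", "phone": "98xxxxxx01", "steps": 8200, "active_minutes": 35},
    {"name": "Ravi", "phone": "98xxxxxx02", "steps": 4100, "active_minutes": 12},
]

training_data = [anonymise(u) for u in users]
print(training_data)
# [{'steps': 8200, 'active_minutes': 35}, {'steps': 4100, 'active_minutes': 12}]
```

Real systems go further (pseudonymisation, aggregation, and checks that the remaining fields cannot re-identify a person), but the principle is the same: the training data should never contain details the AI does not need.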
Ethical Practices in AI 273

