These Class 9 Notes for Chapter 3, AI Ethics, simplify complex AI concepts for easy understanding.
Class 9 AI Ethics Notes
Introduction to Ethics Class 9 Notes
Ethics is a set of principles or standards that guide people’s behaviour and decisions about what is right and wrong. It helps individuals determine how they should act in different situations, considering what is fair, just, and morally acceptable.
Ethics influences how we interact with others, make choices, and live our lives in a way that aligns with our values and the principles society considers good or acceptable. Essentially, it’s about doing the right thing based on moral beliefs and values.
Morals
Morals are the principles that guide our behavior and our understanding of what is right or wrong. They are influenced by our upbringing, culture, and personal beliefs. Moral principles help us to make choices that are fair, just, and kind.
For example, honesty is a moral value. So, if you find someone’s wallet, your morals would tell you to return it, even though there’s no legal consequence for keeping it.
AI Related to Ethics Class 9 Notes
AI and ethics are closely connected because AI systems are designed to make decisions, just like humans do. But since AI learns from data and patterns, it can sometimes make choices that might not be fair or right. Ethics in AI means figuring out rules and guidelines to make sure AI behaves in ways that align with our values and doesn’t harm anyone. It involves considering things like privacy, fairness, transparency, and the impact AI might have on people’s lives.
Artificial Intelligence (AI) ethics refers to the moral principles and guidelines that govern the development, deployment, and use of AI technologies. It’s like a set of rules or values that help make sure AI systems are fair, safe, and respectful to people.
Imagine AI as a really smart tool that can learn, make decisions, and do things on its own, like recommending movies, driving cars, or helping doctors diagnose diseases. But sometimes, these AI systems can make mistakes or do things that might not be fair or right.
AI ethics demands that AI systems be:
- Fair: Treating everyone equally without favouring any group.
- Private: Respecting people’s personal information and not sharing it without permission.
- Transparent: Allowing people to understand how decisions are made.
- Accountable: Having clear responsibility when things go wrong.
- Safe: Designed to avoid causing harm to people and society.
- Unbiased: Free from discrimination against any particular group.
Note: Biased data will create biased decisions.
Bias Class 9 Notes
AI bias means unfairly favouring someone or something. When biased data is fed to an AI machine while creating the model, the machine will also be biased. AI bias is an anomaly in the results produced by AI-based programs and algorithms, caused by prejudiced assumptions made during the algorithm development process or by prejudices in the training data. For example,
- Most virtual assistants have a female voice rather than a male voice.
- Some security systems are trained on an individual’s race or gender rather than on their actions and movements.
- In the US, a healthcare algorithm used to decide which patients should receive extra medical care produced faulty results and favoured white patients over black patients.
Note: In 2018, Amazon abandoned its AI recruitment tool because it was discovered to be biased against female candidates.
This scenario shows how important it is to make sure AI systems don’t learn from biased data. If they do, they might end up making unfair decisions without even realising it. So, it’s crucial to check and fix these biases to make sure AI systems are fair to everyone.
Training Data in AI
Data plays an important role in an AI model’s functioning. AI models use training data to learn and perform their tasks with high accuracy. Training data may become biased during the collection process, which will produce biased results. Training data is a huge collection of labelled information that is used to build an AI model.
For example,
Let us consider that a person, Mr. X, likes the colour red very much. Now, if another dataset records that the colour red is preferred by people of an aggressive nature, then without enough representative data, a model may link Mr. X with aggressiveness. This is an example of AI bias.
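The idea that biased data produces biased decisions can be sketched in a few lines of Python. This is a toy illustration, not a real AI system: the dataset, labels, and the trivial “most common label” model below are all invented for this example.

```python
from collections import Counter

# Invented toy training data: (favourite_colour, label) pairs.
# The data over-represents the claim that people who like red are
# "aggressive", so any model trained on it inherits that prejudice.
training_data = [
    ("red", "aggressive"), ("red", "aggressive"), ("red", "aggressive"),
    ("red", "calm"),
    ("blue", "calm"), ("blue", "calm"),
]

def train(data):
    """For each colour, remember the most common label seen in training."""
    labels_by_colour = {}
    for colour, label in data:
        labels_by_colour.setdefault(colour, []).append(label)
    return {colour: Counter(labels).most_common(1)[0][0]
            for colour, labels in labels_by_colour.items()}

model = train(training_data)

# Mr. X simply likes red, but the biased data links him to aggression.
print(model["red"])   # -> aggressive
print(model["blue"])  # -> calm
```

The model itself has no opinion; it only repeats the patterns in its training data, which is exactly why unbalanced data leads to unfair outcomes.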
Reasons for AI Bias
Several factors contribute to AI bias in various ways:
- Human bias in decisions: Our own biases can creep into AI systems at different stages. For instance, programmers might unknowingly weight certain factors in an algorithm in ways that reflect their own biases.
- Flawed and unbalanced data collection: If the data used to train an AI is flawed or gathered in a biased way, the AI will learn those biases and perpetuate them.
- Under- or over-representation of specific features: When certain features (like race or gender) are under-represented in the training data, the AI might not handle those variations well, leading to biased outcomes.
- Wrong assumptions: If the developers make incorrect assumptions about the data or the real world, the AI might inherit those assumptions and produce biased results.
- No proper bias testing: If AI systems aren’t rigorously tested for bias, these biases might go undetected and cause problems later.
- No bias mitigation: Even after identifying bias, if steps aren’t taken to reduce its effects, the AI will remain biased.
Note: Bias in data collection refers to flawed or unbalanced data, with over- or under-representation of specific features, groups, ethnicities, etc., in the final data collection.
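One simple form of the bias testing mentioned above is to compare outcome rates across groups. The sketch below is only an illustration under invented data: the group names and decisions are made up, and the 20% gap used as a warning threshold is an arbitrary choice for the example.

```python
# Invented example records: (group, decision) pairs from some AI system.
decisions = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "approved"),
    ("group_a", "rejected"),
    ("group_b", "approved"), ("group_b", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"),
]

def approval_rates(records):
    """Return the fraction of 'approved' decisions for each group."""
    totals, approved = {}, {}
    for group, decision in records:
        totals[group] = totals.get(group, 0) + 1
        if decision == "approved":
            approved[group] = approved.get(group, 0) + 1
    return {group: approved.get(group, 0) / totals[group] for group in totals}

rates = approval_rates(decisions)
print(rates)  # group_a is approved 75% of the time, group_b only 25%

# A large gap between groups is a warning sign of possible bias that
# should be investigated before the system is used on real people.
gap = max(rates.values()) - min(rates.values())
print("bias warning" if gap > 0.2 else "looks balanced")
```

A check like this does not prove an AI system is biased, but a large gap tells developers where to look more closely, which is the first step of bias mitigation.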
AI Access Class 9 Notes
AI access describes the gap in society where only well-off people who can afford AI-enabled devices have the opportunity to use them, while people below the poverty line do not.
Because of this, a gap has emerged between these two classes of people, and it widens with the rapid advancement of technology.
The government has to bring balance to society by providing infrastructure to ordinary students and people, so that everyone gets a chance to use emerging technologies like AI. AI access means making AI more accessible and available to all.
Principles of AI Ethics
The following principles of AI ethics affect the quality of AI solutions:
Human Rights: AI systems should be developed and used in a way that respects human rights and fundamental freedoms. This means avoiding applications that discriminate against individuals or groups, or that could be used for surveillance or other intrusive purposes.
Bias: AI algorithms can inherit biases from the data they are trained on. This can lead to unfair or discriminatory outcomes.
For example, an AI system used for loan approvals may be biased against certain demographics. Mitigating bias is essential for creating fair and trustworthy AI.
Privacy: AI systems often collect and use large amounts of personal data. It’s important to ensure that this data is collected, used, and stored ethically, with proper user consent and strong data security measures.
Inclusion: AI development should involve diverse teams and consider the needs of all potential users. This helps to avoid building solutions that exclude or disadvantage certain groups of people. By ensuring inclusion, AI can be a powerful tool for promoting equality and accessibility.
Glossary
- AI Ethics refers to the moral principles and guidelines that govern the development, deployment, and use of AI technologies.
- AI bias refers to unfair or prejudiced outcomes that can occur when artificial intelligence systems are trained on data that contains biases.
- Training Data: A collection of labelled, annotated information that is used to build an AI model.