Welcome to the World of AI
The image above was made by AI.
Punch, the baby macaque, is real — born at Ichikawa Zoo in Japan in 2025, raised by zookeepers who gave him a stuffed toy. He went viral in March 2026.
The scene, though, never happened. No zookeeper, no phone, no face unlock. AI generated the whole image in seconds, from a short description in English.
But a real face-recognition system could do what the image shows — tell the real Punch from the toy in milliseconds.
That is what AI does. It finds patterns in data and uses them to make decisions.
Over the next four chapters, you'll learn how AI is built, where it helps, where it fails, and how to think clearly about it.
Chapter 1 begins with the lifecycle — how an AI system actually gets built, stage by stage.
What is an AI Project?
Artificial intelligence is a technology that allows machines to learn from data and make decisions. But AI does not appear by magic — every AI product, from a spam filter to a medical diagnosis tool, is the result of a structured project with defined stages.
A calculator is always right — but a calculator never gets better. An AI system can get better as it learns from new data. That one property separates AI from ordinary software.
What separates AI from ordinary software?
- Understand the 4 official CBSE stages of an AI project
- Map those stages to the 6 engineering steps used in practice
- Measure accuracy and interpret what it means
- Distinguish AI (learns from data) from Automation (follows fixed rules)
The 4 CBSE Official Stages
The CBSE Class 8 handbook defines four stages for an AI project. Every question in your exam comes from these four stages.
Scoping the Problem
Defining exactly what problem the AI must solve, who will use it, and what success looks like. A poorly scoped problem leads to a useless AI.
Data Acquisition
Collecting the raw data the AI will learn from — images, text, numbers, sensor readings. More data, correctly labelled, leads to better AI.
Building and Training the AI Model
Choosing a learning algorithm and feeding it the prepared data so it can identify patterns and build a model that makes predictions.
Reflect and Improve
Testing the model, measuring its accuracy, finding its errors, and retraining it with better data or a better algorithm.
Remember: Stage 4 (Reflect and Improve) is a continuous loop — not a finishing line. Real AI systems are constantly being tested and retrained as new data arrives.
Industry note: Engineers in practice often use a 6-stage version — Define → Collect & Prepare → Develop & Train → Evaluate & Refine → Deploy → Monitor & Maintain. Both describe the same work. CBSE groups it into 4 stages; industry splits it into 6. Know both for your exam.
What is Accuracy?
Accuracy tells you what fraction of the AI's predictions were correct.
Formula: Accuracy = (Correct Predictions ÷ Total Predictions) × 100%
Example: An AI checks 100 email messages for spam. It correctly identifies 87. Its accuracy is 87%. But accuracy alone can be misleading — if only 5% of messages in an inbox are spam, a model that always says "not spam" scores 95% accuracy while missing every actual spam message.
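The formula can be written as a tiny function. This is a sketch of ours (the function name is not from any standard library), using the two worked numbers from this chapter:

```python
def accuracy(correct: int, total: int) -> float:
    """Accuracy = (Correct Predictions / Total Predictions) * 100."""
    if total <= 0:
        raise ValueError("total must be a positive number of predictions")
    return correct / total * 100

print(accuracy(87, 100))  # the spam example: 87.0
print(accuracy(14, 20))   # the exercise below: 70.0
```

Notice that the function says nothing about *which* predictions were right — that is exactly why accuracy alone can hide a model that fails on the rare class.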
AI vs Automation — What is the Difference?
Students often confuse AI with automation. The key difference is whether the system learns from data.
| Feature | AI | Automation |
|---|---|---|
| Learns from data? | ✅ Yes — improves over time | ❌ No — fixed rules only |
| Can handle new situations? | ✅ Yes — generalises | ❌ No — breaks outside rules |
| Example | Face recognition, spam detection | ATM cash dispensing, traffic lights |
| Needs training data? | ✅ Yes | ❌ No |
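The table's key distinction — fixed rules versus rules learned from data — can be shown in miniature. This toy sketch is ours (the spam "feature" of counting exclamation marks is invented for illustration, not a real spam technique):

```python
# Automation: a fixed rule written by a human. It never changes, however
# much traffic it sees.
def traffic_light(seconds_elapsed: int) -> str:
    return "green" if seconds_elapsed % 60 < 30 else "red"

# "AI" in miniature: a threshold *learned* from labelled examples.
# Each example is (exclamation_mark_count, is_spam).
def learn_threshold(examples):
    """Pick the count threshold that classifies the most examples correctly."""
    best_t, best_correct = 0, -1
    for t in range(0, 10):
        correct = sum((count >= t) == is_spam for count, is_spam in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

data = [(0, False), (1, False), (4, True), (6, True)]
print(learn_threshold(data))  # a rule discovered from data, not hand-written
```

Feed `learn_threshold` different training data and it produces a different rule — that is the "learns from data, improves over time" row of the table. The traffic light's rule, by contrast, is frozen the day it is written.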
📦 Think & Apply — Delivery Prediction
The AI model learned: delivery is On Time when distance < 10 km, weather is Clear, traffic is Low, and delivery partner is Experienced. Use this rule to fill the Prediction column.
| Order | Distance | Weather | Traffic | Partner | Prediction |
|---|---|---|---|---|---|
| 01 | 5 km | Clear | Low | Experienced | On Time ✅ |
| 02 | 20 km | Rainy | High | New | Delayed ❌ |
| 03 | 10 km | Clear | Low | Experienced | Delayed ❌ (10 km is not < 10 km) |
| 04 | 15 km | Rainy | High | New | Delayed ❌ |
| 05 | 8 km | Clear | Low | Experienced | ? |
| 06 | 18 km | Rainy | High | New | ? |
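The learned rule can be written as a short function — a sketch of ours, shown here checked against the three orders whose answers the table already gives. Use it to verify your own answers for orders 05 and 06:

```python
def predict(distance_km: float, weather: str, traffic: str, partner: str) -> str:
    """The learned rule: On Time only if ALL four conditions hold."""
    on_time = (distance_km < 10 and weather == "Clear"
               and traffic == "Low" and partner == "Experienced")
    return "On Time" if on_time else "Delayed"

print(predict(5, "Clear", "Low", "Experienced"))   # Order 01: On Time
print(predict(20, "Rainy", "High", "New"))         # Order 02: Delayed
print(predict(10, "Clear", "Low", "Experienced"))  # Order 03: Delayed (10 is not < 10)
```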
📝 Fill in the Blanks
Q1. The first stage of an AI project, where the problem is clearly defined, is called ✓ Scoping.
Q2. An AI model correctly predicts 14 out of 20 results. Its accuracy is ✓ 70%.
Q3. A traffic light that changes on a fixed timer is an example of ✓ Automation, not AI.
Q4. AI learns by finding ✓ patterns in data.
Q5. After testing, we ✓ improve the AI system.
Pick the Right Answer
Q1. AI learns from:
Q2. In spam detection, the AI's job is to:
Q3. The first stage of an AI project cycle is:
Q4. An AI model predicts 8 out of 10 spam emails correctly. Its accuracy is:
Q5. An AI model improves because:
AI Solving Real Problems — Especially in India
AI is not just a technology of the future — it is solving urgent problems in India right now. The CBSE handbook highlights real projects you should know for your exam.
- Name and explain at least 4 real AI applications from India
- Understand how AI is used in agriculture, healthcare, and conservation
- Explain the difference between AI tools that help professionals vs. tools that replace tasks
India's AI Projects — CBSE Exam Focus
Trail Guard AI uses cameras placed along wildlife corridors. When the AI detects a person in a protected forest at night, it immediately alerts forest rangers — helping stop poaching before it happens.
CROPIC analyses photos of crop leaves taken on a smartphone. The AI identifies diseases like blight or rust and recommends treatment — giving small farmers access to expert diagnosis at zero cost.
Bharat Vistaar, announced in the 2026 Union Budget, is an AI-powered platform giving farmers localised soil conditions, weather, pest alerts, and expert advice in their regional language.
SUMAN SAKHI is an AI chatbot that answers health questions for women in rural areas in their local language. It bridges the gap between remote communities and healthcare information.
The Ayushman Bharat Digital Mission creates a digital health ID for every Indian. AI analyses anonymised health data to identify disease patterns and help the government allocate resources more efficiently.
A hospital's AI analyses a patient's medical history and lifestyle habits to predict their risk of developing diabetes. Which type of healthcare AI is this?
AI systems trained on lakhs of X-ray and MRI images can now detect tuberculosis, diabetic retinopathy, and early cancers with accuracy comparable to specialist doctors — helping in areas where specialists are scarce.
You do not need to be a researcher to build AI. Try it yourself with this free tool.
Teachable Machine by Google lets you train a working AI model in your browser — no coding needed. Go to teachablemachine.withgoogle.com, choose Image Project, create three classes (e.g. pen, pencil, eraser), upload photos, train, and test. You have just completed a full AI project lifecycle.
Pick the Right Answer
Q1. Trail Guard AI is used to:
Q2. CROPIC helps farmers by:
Q3. SUMAN SAKHI is best described as:
Q4. What does the Ayushman Bharat Digital Mission (ABDM) create for every Indian?
Q5. Teachable Machine allows students to:
📝 Fill in the Blanks
1. ______ detects humans in protected forests and alerts rangers to prevent poaching. ✓ Trail Guard
2. ______ analyses photos of crop leaves to diagnose plant diseases for farmers. ✓ CROPIC
3. SUMAN SAKHI is an AI ______ that answers health questions for rural women. ✓ chatbot
4. The Ayushman Bharat Digital Mission creates a digital ______ for every Indian. ✓ health ID
5. ______ allows students to train AI models without coding. ✓ Teachable Machine
When AI Is Unfair
Chapter goal: "Bad data creates unfair AI. Balanced data fixes it. Bias lives in data, not in code."
- Explain what bias in AI is and how it enters through training data
- Describe the Seoul Cloud Story and Joy Buolamwini's research
- Give an example of bias in an Indian context
- Explain what a confidence score is and why it matters
- Propose steps to make AI fairer
3.1 The Seoul Cloud Mystery
An artists' group in Seoul, Korea, gave an AI fifty photographs to analyse. The AI examined all fifty. For every single image, it returned the same result: "Face detected."
There were no faces in any of the photographs. There were clouds — clouds that vaguely resembled faces, as clouds sometimes do.
The AI was not broken. It was doing exactly what it had been trained to do — look for face-shaped patterns. But it had only ever been trained on real faces. It had never been shown what a non-face looks like. So when shown anything approximately the right shape, it concluded: face.
- AI looks only for patterns present in its training data.
- Training data directly shapes what an AI can and cannot see.
- Unbalanced or incomplete training data produces unfair AI.
Why did the AI report that clouds were faces?
3.2 See Bias Happen
Imagine training an AI to tell a cricket bat from a badminton racket. The AI learns the visual patterns — a bat is long, flat, wooden; a racket has an oval frame and strings.
But suppose the training data is unbalanced:
| Label | Training Images |
|---|---|
| Cricket bat | Many |
| Badminton racket | Few |
Show the AI a new image. What will it guess? Cricket bat — most of the time. Not because rackets are harder to see. Because the AI saw many more bats than rackets, and got much more practice at recognising bats.
Key Rule: Correct recognition depends heavily on the balance of the training data.
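You can see the imbalance directly by counting. This sketch of ours measures the fraction of each label in a training set — a naive model trained on these labels would start out with exactly this built-in preference:

```python
from collections import Counter

def class_priors(labels):
    """Fraction of training examples per label -- the data's built-in tilt."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Unbalanced: many bats, few rackets.
unbalanced = ["bat"] * 90 + ["racket"] * 10
print(class_priors(unbalanced))  # {'bat': 0.9, 'racket': 0.1}
# A model trained here is tilted 9-to-1 towards answering "bat".

# Balanced data removes the tilt before training even begins.
balanced = ["bat"] * 50 + ["racket"] * 50
print(class_priors(balanced))    # {'bat': 0.5, 'racket': 0.5}
```

Real models are more sophisticated than a label count, but the lesson is the same: the balance of the training data sets the starting point for everything the model learns.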
The School Sports Recommender
A school is building an AI that recommends sports to students, trained on this data from previous years:
| Sport | Boys who played | Girls who played |
|---|---|---|
| Cricket | 55 | 15 |
| Badminton | 10 | 20 |
A new girl joins the school. Based only on this data, what will the AI recommend?
Q1. Is the recommendation based on ability — or on a data pattern?
Q2. Is this fair to the new student?
Q3. How would you fix the data to make the recommendation fairer?
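To see what a pattern-based recommender would actually do, here is a sketch of ours that recommends the sport most played by students of the same gender in the table above. Run it before you answer Q1:

```python
def recommend(history, gender):
    """Recommend the sport most played by this gender in past data --
    a choice based purely on a data pattern, never on the student's ability."""
    counts = {sport: players[gender] for sport, players in history.items()}
    return max(counts, key=counts.get)

history = {
    "Cricket":   {"boys": 55, "girls": 15},
    "Badminton": {"boys": 10, "girls": 20},
}
print(recommend(history, "girls"))  # the data pattern decides, not the student
print(recommend(history, "boys"))
```

Notice what the function never looks at: the new student herself. That is the heart of Q1 — the recommendation comes entirely from other people's history.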
The Hindi sentence वो डॉक्टर है does not specify gender. In English it should translate as "She/He is a doctor." Some AI translation systems turn it into "He is a doctor." Why? The training data contained far more examples of male doctors. The AI learned the stereotype present in the data — and replicated it automatically.
Computer scientist Joy Buolamwini, then at MIT, tested commercial face-recognition systems. The systems worked well for lighter-skinned men but made significantly more mistakes for women and people with darker skin tones — up to 34 percentage points higher error rates for darker-skinned women. In one demonstration, a system failed to detect her face at all until she wore a white mask. Her research project, Gender Shades, changed how the industry thought about dataset balance.
A large company's AI hiring tool learned from ten years of past records. Because most previously hired engineers had been men, the AI began favouring resumes that resembled male applicants'. The historical bias had been a human decision. The AI inherited the bias and scaled it up across thousands of applicants.
A biased-but-accurate AI is not a success. It is a failure disguised as one.
3.3 What is a Confidence Score?
The AI does not just say "Cat." It says "90 percent confident — Cat." That percentage is the Confidence Score.
- Very sure: act on the prediction with confidence, though human review is still good practice for high-stakes decisions.
- Reasonably confident: likely correct, but worth a second check before acting — especially if the stakes are high.
- Nearly guessing: the AI is barely distinguishing between the options. Low confidence scores should always trigger human review before any action is taken.
Note: these are illustrative anchors, not fixed thresholds. Different AI systems define confidence differently.
In medical AI: Acting on a 55-percent confidence diagnosis without a doctor reviewing it could seriously harm a patient. The confidence score is the AI telling you how much to trust it.
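The review rule can be sketched as code. The thresholds below are our illustrative anchors only — as the note above says, real systems define confidence differently and choose their own cut-offs:

```python
def action_for(confidence: float) -> str:
    """Map a confidence score (0-100) to a recommended next step.
    Thresholds are illustrative, not a standard."""
    if confidence >= 90:
        return "act, with human review for high-stakes decisions"
    if confidence >= 70:
        return "double-check before acting"
    return "send to a human reviewer -- the AI is nearly guessing"

print(action_for(95))
print(action_for(55))  # the 55% medical diagnosis goes to a doctor
```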
3.4 How to Make AI Fair
Fairness does not happen automatically. Three things must be done deliberately.
Bias Check
Test the AI on different groups of users. Does it work equally well across genders, skin tones, languages, regions? Identify where errors cluster.
Human Supervision
Humans must review AI decisions — especially for high-stakes outcomes: loans, medical diagnoses, job applications. AI assists. Humans decide.
Transparency
The people affected by an AI decision must be able to understand how it was made — and challenge it if it was wrong.
3.5 Points to Remember
- AI is only as fair as its training data.
- Bias in AI usually lives in the data, not the code.
- Always check the confidence score. Low confidence means the AI is guessing — get a human to review.
- Fairness needs three things: bias-testing, human supervision, transparency.
Exercises
Fill the Gaps
1. Joy Buolamwini's research project was called ______. ✓ Gender Shades
2. AI bias usually enters through biased ______. ✓ training data
3. The percentage showing how certain an AI is about its prediction is called the ______. ✓ confidence score
4. When training data does not represent all groups equally, the AI develops ______. ✓ bias
5. Data that includes diverse examples of all groups the AI will encounter is called ______. ✓ representative data
Using AI Responsibly
As AI becomes more powerful, questions about how it should and should not be used become more urgent. Ethics in AI is not a philosophical luxury — it is a practical necessity.
- Define privacy in a digital context and understand digital footprints
- Explain what deepfakes are and why they are dangerous
- Discuss who is accountable when an AI system makes a harmful decision
- Apply ethical reasoning to AI scenarios
Privacy and Digital Footprints
Every time you use a digital service — searching, watching videos, making purchases — you leave a digital footprint: a trail of data about your behaviour, preferences, and location. AI systems can analyse these footprints to build detailed profiles of who you are, often without you realising it.
Definition: Privacy is the right to control who has access to information about you and how that information is used.
When AI systems collect and use personal data without consent or transparency, they violate privacy. This is why data protection laws — like India's Digital Personal Data Protection Act (2023) — are important.
Deepfakes and Misinformation
A deepfake is a video, image, or audio clip generated by AI that shows a real person doing or saying something they never did. As AI improves, deepfakes become harder to detect. They can be used to:
- Spread political misinformation
- Damage someone's reputation
- Commit fraud by impersonating executives or officials
- Generate fake images that violate someone's privacy or reputation
Ahead of recent Indian state elections, deepfake videos of political leaders endorsing other candidates or making false statements were circulated on WhatsApp. The Election Commission of India had to issue specific guidance on AI-generated content in political advertising.
Before you believe or forward anything: Check where the information came from. Verify on a trusted news website. Read beyond the headline. If a headline made you instantly angry or afraid, be extra careful — that is often the intent.
A friend sends you a shocking video of a politician saying something outrageous. The voice sounds real. The video looks real. What is the first thing you do?
Accountability — Who is Responsible?
When an AI system makes a decision that harms someone — a loan application wrongly rejected, a medical diagnosis that is incorrect, a self-driving car that causes an accident — who is accountable?
There is no simple answer, but the key principle is: human oversight must always exist. AI systems should not be allowed to make irreversible, high-stakes decisions without a human in the loop who can review, override, and be held responsible.
Principle of Responsible AI: Every AI system must have a human or institution that can be held accountable for its decisions and outcomes.
Pick the Right Answer
Q1. AI ethics focuses on:
Q2. Privacy means:
Q3. AI bias can occur when:
Q4. Before sharing information online, you should:
Q5. AI systems learn patterns from:
📝 Fill in the Blanks
1. Misinformation means ______ information. ✓ false
2. Fair AI systems treat people ______. ✓ equally
3. Humans must remain ______ for AI decisions. ✓ responsible
4. AI systems learn patterns from ______. ✓ data
5. Incorrect or misleading information shared online is called ______. ✓ misinformation
6. Humans must remain ______ for decisions made using AI systems. ✓ accountable
Chapter Quiz
8 questions covering all four chapters. Score 6/8 or above to unlock your certificate.
Flashcard Revision
Tap any card to reveal the definition.
Where AI is Happening in India
Eight real AI projects from across the country. Tap each one to read the full story.
Finished the module? Generate your completion certificate.