Deep Learning Unit-1
5. Facial Recognition
o Example: Face ID on iPhones uses AI to recognize your face and unlock your
phone. Similarly, airports use AI-powered facial recognition to identify
passengers quickly and improve security. AI is also used in social media
platforms like Facebook to automatically tag people in photos.
6. AI in Healthcare
o Example: AI is used to help doctors diagnose diseases. For example, AI systems
can analyze X-rays, MRIs, or CT scans and find patterns that indicate diseases
like cancer or heart disease. AI is also used to help predict a patient’s risk for
certain conditions based on medical data.
7. Smart Home Devices
o Example: Devices like Nest Thermostat use AI to learn your temperature
preferences and adjust accordingly to save energy. Ring Doorbell uses AI for
facial recognition and motion detection, alerting homeowners when someone is
at the door or when motion is detected outside.
8. AI in Finance
o Example: Banks use AI for fraud detection by analyzing transaction patterns
and identifying unusual activity. Robo-advisors, like those offered by
Betterment and Wealthfront, use AI to help manage investment portfolios and
make financial recommendations.
9. Autonomous Drones
o Example: Amazon Prime Air and other companies are using AI-powered
drones to deliver packages. The drones navigate through the air using AI to
avoid obstacles and find the most efficient delivery routes.
10. AI in Retail
• Example: Amazon Go stores use AI to allow customers to shop without going to a
checkout counter. Sensors and cameras track what customers pick up and automatically
charge them when they leave the store.
11. AI in Education
• Example: AI-powered tutoring systems, like Knewton and Socratic, help students
learn by providing personalized learning experiences. AI can track a student's progress
and offer custom lessons based on their strengths and weaknesses.
12. AI in Agriculture
• Example: AI-powered drones and sensors are used to monitor crop health, predict
harvest times, and detect pests. John Deere uses AI in its machinery to optimize
planting, fertilization, and harvesting.
13. AI in Sports
• Example: In professional sports, AI is used to track player performance, analyze
strategies, and predict game outcomes. For example, AI can analyze soccer players' movements and help coaches understand how to improve their team's tactics.
14. AI in Entertainment and Media
• Example: AI is used to create realistic visual effects in movies and TV shows.
Companies like Disney use AI for animation, motion capture, and generating computer-generated characters. AI also helps in creating deepfake videos, where an
actor’s face can be swapped with another person’s using AI technology.
• AI in healthcare, where it can help doctors diagnose diseases from medical images, analyze health data, and predict patient outcomes.
Key Idea: Deep Learning has allowed AI to perform complex tasks like recognizing images,
understanding speech, and even making decisions in real-time.
Summary of AI Evolution
• 1950s - 1960s: The idea of AI was born, and early experiments tried to simulate human
thinking.
• 1960s - 1970s: AI grew with rule-based systems, but progress was slow.
• 1970s - 1990s: AI faced a slowdown due to limitations, known as the AI Winter.
• 1990s - 2000s: AI shifted to Machine Learning, allowing computers to learn from data.
• 2010s - Present: AI exploded with Deep Learning, improving tasks like image
recognition and voice processing.
• Future: AI may become even smarter, with the potential for Artificial General
Intelligence (AGI).
The evolution of AI shows how far we've come, from simple ideas to powerful technologies,
and there is still much more to come!
3. AI Improves Decision-Making
Why is this important?
Humans can make decisions based on experience, intuition, or available information.
However, AI can make decisions based on large amounts of data much faster than a human
can. This helps people make better, more informed choices.
• AI systems can analyze data and give recommendations that might be too complex
for humans to calculate easily.
• AI can also predict outcomes. For example, AI can predict the weather, the stock
market, or even a person's likelihood of developing a health problem.
Example: In business, AI helps companies decide what products to sell, how to price them,
and where to advertise by analyzing past sales data and trends. This can lead to better profits
and smarter strategies.
1. Supervised Learning
In supervised learning, the machine is given labeled data to learn from. Labeled data means
that for each piece of data, we already know the correct answer (like knowing that a picture
shows a cat or a dog).
How it works:
• You provide the machine with lots of data and the correct answers (labels).
• The machine looks at these examples, learns the patterns, and uses those patterns to
predict the answer for new data.
Example: If you want to teach a computer to identify fruits, you give it lots of pictures of fruits
along with their names (apple, banana, orange, etc.). Over time, the computer learns to
recognize fruits based on the patterns in the images.
Real-life example: Email spam filters: Supervised learning is used to train spam filters. You
give the computer emails that are marked as "spam" or "not spam." The computer learns from
these examples and then can predict whether a new email is spam.
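As a rough sketch of this idea in Python (assuming scikit-learn is installed, and using a tiny made-up set of labeled emails rather than a real dataset), a spam filter can be trained like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical labeled training data: 1 = spam, 0 = not spam
emails = ["win a free prize now", "meeting at 10 am tomorrow",
          "claim your free prize", "project report attached"]
labels = [1, 0, 1, 0]

# Turn each email into word-count features the model can learn from
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Learn the patterns that link words to the spam / not-spam labels
model = MultinomialNB()
model.fit(X, labels)

# Predict the label for a new, unseen email
new_email = vectorizer.transform(["free prize waiting for you"])
print(model.predict(new_email))  # [1] -> classified as spam
```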
2. Unsupervised Learning
In unsupervised learning, the machine is given unlabeled data, meaning the machine doesn't
know the correct answer. The goal is for the machine to find patterns or group the data on its
own.
How it works:
• You give the machine a large amount of data without any labels or answers.
• The machine looks for patterns and tries to group similar things together.
Example: If you give the machine a collection of pictures without telling it what’s in the
pictures, the machine might group similar pictures together, such as all the images of animals
in one group and all the images of buildings in another.
Real-life example: Customer segmentation: Unsupervised learning is often used in
marketing. A company might have customer data but not know how to categorize them. The
machine can group customers based on their behaviors (like shopping habits or interests) to
help the company target their marketing efforts.
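A minimal sketch of this in Python (assuming scikit-learn; the customer numbers below are invented for illustration) uses k-means clustering to find groups without any labels:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical unlabeled customer data: [visits per month, average spend in $]
customers = np.array([
    [2, 20], [3, 25], [2, 30],        # occasional, low-spend shoppers
    [15, 200], [18, 220], [16, 250],  # frequent, high-spend shoppers
])

# Ask the algorithm to discover 2 groups on its own -- no answers are given
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
segments = kmeans.fit_predict(customers)
print(segments)  # e.g. [0 0 0 1 1 1] -- two customer segments found automatically
```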
3. Semi-Supervised Learning
This combines labeled and unlabeled data. It’s helpful when labeling data is expensive or
time-consuming.
• Example:
A dataset has 1000 images of cats and dogs, but only 100 are labeled. The model uses
labeled images to learn basic patterns and then applies this knowledge to classify the
unlabeled images.
• Real-Time Example:
o Medical Diagnosis: Doctors may label a few medical images with diseases, and
the model uses these to learn and classify other unlabeled images.
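One common way to do this is self-training, where the model labels the unlabeled data itself. A small sketch (assuming scikit-learn, with made-up numbers standing in for image features):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Hypothetical samples with two features each (stand-ins for image features)
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9],
              [0.15, 0.25], [0.85, 0.75]])
# Only the first four samples are labeled (0 = cat, 1 = dog);
# -1 marks the unlabeled samples the model must work out on its own
y = np.array([0, 0, 1, 1, -1, -1])

# Train on the labeled part, then pseudo-label the rest and keep learning
model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y)
print(model.predict(X[4:]))  # e.g. [0 1]
```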
4. Reinforcement Learning
In reinforcement learning, the machine learns by trial and error, just like how a child learns
to ride a bicycle. It tries things, makes mistakes, and gets feedback (rewards or punishments),
which helps it improve its actions over time.
How it works:
• The machine is given a goal or task but doesn’t know how to achieve it at first.
• It takes actions and receives feedback (positive or negative) based on how well it did.
• Over time, the machine learns which actions lead to good results (rewards) and which
lead to bad results (punishments).
Example: If you teach a computer to play a game like chess or Go, the machine makes moves,
gets points for good moves, and loses points for bad moves. After playing many games, it learns
the best strategy.
Real-life example: Self-driving cars use reinforcement learning to make decisions. The car
"learns" by driving in different situations (like in traffic or during bad weather) and adjusts its
behavior to improve over time.
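A toy sketch of this trial-and-error loop in plain Python (this is a simple form of Q-learning on a made-up "walk to the goal" task, not the actual algorithm used in self-driving cars or game-playing systems):

```python
import random

# Toy task: states 0..4 on a line, the goal is state 4.
# Actions: 0 = step left, 1 = step right. Reaching the goal earns a reward of +1.
n_states, n_actions, goal = 5, 2, 4
Q = [[0.0] * n_actions for _ in range(n_states)]   # value of each action in each state
alpha, gamma, epsilon = 0.5, 0.9, 0.2              # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != goal:
        # Trial and error: sometimes try a random action, otherwise use what was learned
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state = max(0, state - 1) if action == 0 else min(goal, state + 1)
        reward = 1.0 if next_state == goal else 0.0
        # Feedback (the reward) nudges the value of the action just taken
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# After many episodes, "step right" should be the preferred action in every state
print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(goal)])  # e.g. [1, 1, 1, 1]
```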
From spam filters and recommendation systems to self-driving cars, machine learning already shapes everyday technology. It can be divided into four main types: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning, each with a different way of helping computers learn and solve problems. As machine learning continues to grow, it will play an even bigger role in shaping the future of technology.
3. Kernel Methods
Kernel methods transform data into a higher-dimensional space to make it easier to find
patterns or separate groups. The most famous kernel method is the Support Vector Machine
(SVM).
• Example:
Imagine you have a set of points in a 2D space that cannot be separated by a straight
line. A kernel method transforms this 2D data into a 3D space where a plane can
separate the points.
• Real-Time Example:
o Face Detection: Kernel methods are used in computer vision tasks like
identifying whether a face is present in an image.
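A small Python sketch of exactly this situation (assuming scikit-learn): two rings of points that no straight line can separate in 2D, handled easily once an RBF kernel implicitly lifts them into a higher-dimensional space:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings of points -- not separable by a straight line in 2D
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# A linear SVM struggles, but the RBF kernel implicitly maps the points into a
# higher-dimensional space where the two rings become easy to separate
linear_svm = SVC(kernel="linear").fit(X, y)
kernel_svm = SVC(kernel="rbf").fit(X, y)
print(linear_svm.score(X, y))  # roughly 0.5 -- barely better than guessing
print(kernel_svm.score(X, y))  # close to 1.0
```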
4. Decision Trees
A decision tree is like a flowchart that splits data based on conditions to make decisions. Each
node in the tree represents a decision based on a feature.
• Example:
Suppose you're building a model to decide if someone is eligible for a loan:
o Is the income > $50,000?
▪ Yes → Check credit score.
▪ No → Not eligible.
o Is the credit score > 700?
▪ Yes → Eligible.
▪ No → Not eligible.
• Real-Time Example:
o Loan Approval: Banks use decision trees to evaluate whether a customer
qualifies for a loan based on income, credit score, and repayment history.
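A minimal version of the loan-eligibility tree in Python (assuming scikit-learn; the applicant numbers are invented) shows how the learned tree reads like the flowchart above:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicants: [income in $1000s, credit score]; 1 = eligible, 0 = not
X = [[60, 750], [80, 720], [30, 780], [55, 650], [90, 600], [25, 500]]
y = [1, 1, 0, 0, 0, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The fitted tree is a readable flowchart of income / credit-score checks
print(export_text(tree, feature_names=["income", "credit_score"]))
print(tree.predict([[70, 710]]))  # e.g. [1] -- predicted eligible
```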
5. Random Forests
A random forest is a collection of decision trees where each tree gives a prediction, and the
majority vote is taken as the final result. It reduces errors and improves accuracy.
• Example:
If you have three decision trees predicting whether a fruit is an apple or orange:
o Tree 1: Apple.
o Tree 2: Orange.
o Tree 3: Apple.
The random forest takes the majority vote, so the prediction is "Apple."
• Real-Time Example:
o Fraud Detection: Credit card companies use random forests to detect
fraudulent transactions by analyzing patterns in spending behavior.
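The majority vote can be seen directly in code. A small sketch (assuming scikit-learn; the fruit measurements are made up):

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical fruit data: [weight in grams, redness 0-10]; 1 = apple, 0 = orange
X = [[150, 8], [160, 9], [170, 7], [140, 2], [155, 3], [165, 1]]
y = [1, 1, 1, 0, 0, 0]

# Train a forest of 3 decision trees, then ask each tree for its own prediction
forest = RandomForestClassifier(n_estimators=3, random_state=0).fit(X, y)
votes = [int(tree.predict([[158, 8]])[0]) for tree in forest.estimators_]
print(votes)                       # e.g. [1, 1, 1] -- each tree casts its own vote
print(forest.predict([[158, 8]]))  # [1] -- the majority vote is the final answer
```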
2. Precision
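Using the confusion-matrix terms defined in point 5 below (TP, FP, FN), precision measures how many of the cases predicted as positive really are positive:

$$\text{Precision} = \frac{TP}{TP + FP}$$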
3. Recall (Sensitivity)
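Recall measures how many of the actual positive cases the model manages to catch:

$$\text{Recall} = \frac{TP}{TP + FN}$$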
4. F1-Score
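The F1-score combines precision and recall into a single number by taking their harmonic mean, so it is only high when both are high:

$$F1 = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$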
5. Confusion Matrix
A confusion matrix is a table that shows the model’s predictions compared to the
actual results. It includes:
o True Positives (TP): Correctly predicted positive cases.
o True Negatives (TN): Correctly predicted negative cases.
o False Positives (FP): Incorrectly predicted positive cases.
o False Negatives (FN): Missed positive cases.
o Example:
For a medical test predicting disease:
▪ TP: The test correctly predicts the patient has the disease.
▪ TN: The test correctly predicts the patient doesn’t have the disease.
▪ FP: The test falsely predicts the patient has the disease.
▪ FN: The test falsely predicts the patient doesn’t have the disease.
o Real-Time Example:
▪ Medical Diagnosis: Doctors analyze the confusion matrix to understand
how often a test misses cases or gives false alarms.
By evaluating models carefully, we ensure they perform well on real-world tasks and avoid
errors that could lead to incorrect predictions or decisions.
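These quantities are easy to compute in code. A short sketch (assuming scikit-learn, with a made-up set of test results for the disease example above):

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# Hypothetical disease-test results: 1 = has the disease, 0 = healthy
actual    = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 1, 0, 0, 0, 1, 0, 1]

# Rows are the actual classes, columns the predicted ones: [[TN, FP], [FN, TP]]
print(confusion_matrix(actual, predicted))  # [[3 1]
                                            #  [1 3]]
print(precision_score(actual, predicted))   # TP / (TP + FP) = 3 / 4 = 0.75
print(recall_score(actual, predicted))      # TP / (TP + FN) = 3 / 4 = 0.75
print(f1_score(actual, predicted))          # 0.75
```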
1. Overfitting
Overfitting happens when the model learns too much from the training data, including noise
and irrelevant details. It becomes overly complex and memorizes the training data instead of
understanding general patterns. This means it performs very well on training data but poorly
on new data.
• Why It Happens:
o The model is too complex (e.g., too many features or parameters).
o Training for too long on the same dataset.
• Example:
Imagine you're studying for a math test by memorizing every question in the practice
book. On the practice test, you do great because you've memorized the answers. But in
the real exam, where the questions are different, you fail because you didn’t understand
the concepts.
Real-Time Examples:
• Chatbot Training: If a chatbot is trained only on customer conversations from one
company, it might respond well to similar phrases but fail with new or diverse customer
questions.
• Face Recognition: A face recognition model trained on a small dataset of specific faces
may fail to recognize new faces because it memorized details of the training faces.
2. Underfitting
Underfitting occurs when the model is too simple and doesn’t learn enough from the training
data. It fails to capture important patterns and performs poorly on both training and test data.
• Why It Happens:
o The model is too simple (e.g., not enough features or too few parameters).
o The training process is stopped too early.
o The data provided is insufficient or not well-preprocessed.
• Example:
Imagine you study for a math test using only the first page of your textbook. You don’t
learn enough, so you fail both the practice test and the real exam because you didn’t
study most of the material.
• Real-Time Examples:
o Weather Prediction: A model that only uses temperature to predict rain might
underfit because it ignores other factors like humidity and wind speed.
o Stock Market Predictions: A model that considers only one or two variables, like
past prices, may fail to capture the complexity of market trends and give inaccurate
predictions.
Key differences between overfitting and underfitting:
• Model Complexity:
o Overfitting: too complex (memorizes noise and irrelevant data).
o Underfitting: too simple (misses important patterns).
• Cause:
o Overfitting: training too long or using an overly detailed model.
o Underfitting: using a model that's too basic or not trained enough.
By avoiding overfitting and underfitting, we create models that not only perform well on
training data but also make accurate predictions on new, unseen data. This balance is called
generalization, and it's the ultimate goal in machine learning.
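The gap between training and test performance can be seen with a few lines of Python (assuming scikit-learn; the dataset is synthetic and the exact scores will vary):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with some label noise, split into seen (train) and unseen (test) parts
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in [1, 4, None]:  # very simple, moderate, and unrestricted trees
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(depth, round(model.score(X_train, y_train), 2), round(model.score(X_test, y_test), 2))

# Typical pattern: the depth-1 tree scores low on both sets (underfitting), while the
# unrestricted tree scores 1.0 on training data but clearly lower on test data (overfitting)
```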
2. Spam Detection
o Email services like Gmail use probabilistic models to classify emails as spam
or not spam.
o Example:
▪ Words like "win" and "prize" increase the probability of an email being
spam.
▪ If the model predicts an 85% chance of spam, the email goes to the spam folder.
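A short sketch of the idea (assuming scikit-learn and a tiny invented set of emails): a Naive Bayes model, one common probabilistic classifier, outputs a probability of spam rather than just a yes/no answer:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training emails labeled spam (1) or not spam (0)
emails = ["win a big prize", "prize money win now", "team meeting notes", "lunch tomorrow"]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)

# Words like "win" and "prize" push the spam probability up
probabilities = model.predict_proba(vectorizer.transform(["win a prize now"]))
print(probabilities)  # e.g. [[0.03 0.97]] -- a high spam probability, so it is filtered out
```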
3. Medical Diagnosis
o Doctors use probabilistic models to predict diseases based on symptoms and test
results.
o Example:
▪ A patient has symptoms of headache and fever.
▪ Model prediction:
▪ 80% chance of flu.
▪ 15% chance of dengue.
▪ 5% chance of something else.
▪ The doctor treats for flu since it’s most likely.
5. Product Recommendations
o Platforms like Amazon or Netflix use probabilistic models to suggest products
or movies.
o Example:
▪ Based on your past viewing, the model predicts:
▪ 70% chance you’ll like "Inception."
▪ 50% chance you’ll like "Interstellar."
▪ It recommends "Inception" first.
Real-Life Impact
While early neural networks couldn't do much compared to today's AI, they laid the foundation
for big developments in:
• Banking: Detecting fraudulent transactions.
• Healthcare: Diagnosing diseases using patient data.
• Technology: Improving early speech and handwriting recognition systems.
These networks were simple but were a major step toward the complex AI systems we use
today.
o This transformation is done implicitly, without ever computing the data's new coordinates (the "kernel trick"), which saves time and computation.
3. Learning Patterns: After transformation, a machine learning model like a Support
Vector Machine (SVM) can find the boundary between different categories.
2. Versatile: Works well with many types of data, including images, text, and numbers.
o Example: Analyzing customer reviews to classify them as positive or negative.
3. Powerful with Small Data: Performs well even with limited data, unlike deep learning,
which often requires large datasets.
Kernel methods are like special lenses that help machine learning models see patterns in data
that are otherwise hard to detect. They are still widely used in areas where complex patterns
need to be identified efficiently.
4. Movie Recommendations
o Example: Streaming services like Netflix use decision trees to recommend
movies.
o How It Works:
▪ "Do you like action movies?"
▪ Yes → "Do you like superheroes?"
▪ No → "Do you prefer romantic comedies?"
▪ Based on answers, the service recommends movies you’re most likely
to enjoy.
Conclusion
In summary, decision trees are powerful tools in machine learning for making decisions based
on a series of questions. They are used in many areas like customer service, healthcare,
banking, and entertainment to make quick, interpretable decisions based on data.
Conclusion
In summary, a Random Forest is a powerful machine learning technique that combines
multiple decision trees to make more accurate predictions. It’s widely used in various real-
world applications like email spam detection, disease diagnosis, and customer behavior
analysis due to its ability to handle complex data and reduce overfitting.
Conclusion
In summary, Gradient Boosting Machines (GBM) are a powerful machine learning technique
that combines multiple decision trees to make more accurate predictions. By focusing on
correcting the errors of previous models, GBM creates a strong, reliable model that can be used
in many real-world applications, from fraud detection to customer churn prediction. Despite its
strengths, GBM requires careful tuning and can be computationally intensive.
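A compact sketch of a GBM in Python (assuming scikit-learn; the synthetic data merely stands in for a fraud-detection table), where each new tree is fitted to the mistakes left by the previous ones:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced data standing in for transactions (few frauds, many normal ones)
X, y = make_classification(n_samples=500, n_features=8, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each of the 100 small trees is fitted to the errors of the trees before it;
# learning_rate controls how strongly every new correction is applied
gbm = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3,
                                 random_state=0)
gbm.fit(X_train, y_train)
print(round(gbm.score(X_test, y_test), 2))  # accuracy on unseen transactions
```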
Important Questions
Illustrate the structure and working of early neural networks. (Discuss their role in the
development of deep learning.)
Or
What are Early Neural Networks, and how do they function?
Or
Discuss the limitations of Early Neural Networks and their impact on the field.
Or
Describe the architecture of early neural networks and analyze their limitations in
comparison to modern deep learning models.
4. Kernel Methods
Analyze the role of kernel methods in Machine Learning. (Explain their importance in
handling non-linear data and give examples.)
Or
What are Kernel Methods in Machine Learning?
Or
Explain how Kernel Methods enhance the performance of Machine Learning
algorithms.
5. Decision Trees
Explain decision trees and their use in classification and regression tasks. (Provide a real-
world example to illustrate their application.)
Or
What is a Decision Tree, and how is it constructed?
Or
Discuss the advantages and disadvantages of using Decision Trees in Machine
Learning.
Or
Discuss the construction of decision trees and evaluate their advantages and
disadvantages in classification tasks.
Differentiate between Random Forests and Gradient Boosting Machines. (Highlight their
strengths, limitations, and use cases with examples.)
Or
What is a Random Forest, and how does it improve upon individual Decision Trees?
Or
Explain the concept of Gradient Boosting Machines and their advantages over other
ensemble methods.