
CLASS 10 – AI CHAPTER 2

1. 'Data features play a crucial role in the AI Project Cycle.' Explain with the help of an example.

A data feature is a specific piece of information or characteristic of the data that we use to
teach an AI system. For example, if we want the AI to predict a person's height based on their
age and weight, "age" and "weight" are the data features. These features help the AI learn and
make accurate predictions. Choosing the right features is essential for the AI system to work
effectively and give correct answers.

Example: Personalized Product Recommendations

Suppose you want to develop an AI-based recommendation system for an online shopping
website. The goal is to provide personalized product recommendations to users based on their
browsing and purchase history. You collect data for each user, and the features you gather
include:

1. Browsing History: The products the user has viewed or interacted with while exploring the
website.
2. Purchase History: The products the user has bought in the past.
3. Product Categories: The categories of products the user has shown interest in (e.g.,
electronics, clothing, books, etc.).
4. Reviews and Ratings: The user's feedback and ratings for products they have purchased.

The AI system will use these features to understand the user's preferences and interests, and
then suggest relevant products that they might be interested in purchasing.

Choosing the right data features is essential for the effectiveness of the recommendation
system.

In summary, data features are fundamental in building an effective recommendation system, as they provide valuable information about user preferences and behavior.
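To make the idea of features concrete, here is a minimal sketch (the column names, values, and use of pandas are illustrative assumptions, not part of the chapter) of how such user features might be laid out in a table before training a recommendation model:

```python
import pandas as pd

# Hypothetical per-user feature table for a recommendation system.
# All column names and values are made up for illustration.
users = pd.DataFrame({
    "user_id": [101, 102, 103],
    "viewed_electronics": [12, 2, 7],    # browsing history, counted per category
    "viewed_books": [1, 9, 4],
    "purchases_last_month": [3, 1, 0],   # purchase history
    "favourite_category": ["electronics", "books", "clothing"],
    "avg_rating_given": [4.2, 3.8, 4.9], # reviews and ratings
})

# One-hot encode the categorical feature so a model can work with numbers only.
features = pd.get_dummies(users.drop(columns=["user_id"]),
                          columns=["favourite_category"])
print(features)
```

Each row is one user and each column is one data feature; deciding which columns to include is exactly the feature-selection choice described above.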
2. 'Data exploration plays an important role in the development of an AI model.' Do you agree?

Data exploration is an essential step before training an AI system. It involves looking closely
at the data to understand its quality, patterns, and relationships. By exploring the data, we can
identify and fix issues, select the most relevant information, and make the AI system more
accurate and effective. Without data exploration, the AI model may not work well and could
give wrong answers.
You use charts and graphs to see the data better. You also ask questions and test ideas you have about the data.

Data Exploration in AI Model Development

What is Data Exploration? Data exploration is the initial step in analyzing the data that will
be used to build an AI model. It involves understanding the data’s structure, patterns, and
characteristics.

Why is Data Exploration Important?

 Understanding Data: It helps in understanding what data is available and how it can
be used.
 Identifying Patterns: Helps in spotting trends, patterns, and outliers in the data.
 Data Cleaning: Identifies missing or inconsistent data that needs to be fixed.
 Feature Selection: Helps in choosing the most relevant features for the model.

How to Perform Data Exploration?

 Visualizing Data: Using charts and graphs to see the data visually.
 Summary Statistics: Calculating mean, median, mode, and standard deviation to get
a sense of data distribution.
 Correlation Analysis: Checking how different features relate to each other.

Example: Predicting House Prices

1. Data Collection:
o Description: Gather data on house prices, sizes, locations, number of
bedrooms, etc.
o Example: A real estate company collects data on houses sold in the past year.
2. Data Exploration:
o Visualizing Data:
 Example: Create a scatter plot to see the relationship between house
size and price.
 Observation: Larger houses generally have higher prices, but there are
some exceptions.
o Summary Statistics:
 Example: Calculate the average price of houses in different
neighbourhoods.
 Observation: Houses in certain neighbourhoods are consistently more
expensive.
o Correlation Analysis:
 Example: Check if the number of bedrooms correlates with house
price.
 Observation: More bedrooms usually mean a higher price, but the
correlation is not as strong as with house size.
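The three exploration steps above can be sketched in a few lines of Python; the numbers below are made up, and pandas and matplotlib are assumed to be available:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Made-up house data: size (sq ft), bedrooms, neighbourhood, price.
houses = pd.DataFrame({
    "size_sqft": [800, 1200, 1500, 2000, 2400, 1100, 1800],
    "bedrooms": [2, 3, 3, 4, 5, 2, 4],
    "neighbourhood": ["A", "A", "B", "B", "C", "C", "A"],
    "price": [150000, 210000, 260000, 340000, 420000, 190000, 310000],
})

# 1. Visualizing data: scatter plot of house size vs price.
houses.plot.scatter(x="size_sqft", y="price", title="Size vs Price")
plt.show()

# 2. Summary statistics: average price per neighbourhood.
print(houses.groupby("neighbourhood")["price"].mean())

# 3. Correlation analysis: how strongly size and bedrooms relate to price.
print(houses[["size_sqft", "bedrooms", "price"]].corr())
```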

Benefits of Data Exploration:

 Improved Model Accuracy: By understanding data patterns, we can build better models.
o Example: Knowing that house size strongly affects price helps in creating a more accurate prediction model.
 Efficient Feature Selection: Helps in choosing the right features for the model.
o Example: Including features like location and house size, and excluding less
relevant ones, improves model performance.
 Data Quality Assurance: Ensures data is clean and reliable.
o Example: Identifying and correcting outliers or missing values ensures the
model is trained on accurate data.

Summary:

Data exploration is a crucial step in AI model development. It involves understanding the data through visualization, statistics, and correlation analysis. For example, when predicting house prices, exploring data helps identify key factors like house size and location, leading to more accurate and reliable models.

3. 'In reinforcement learning, the machine is bound to learn from experiences only.' Comment.

Reinforcement learning (RL) is a type of machine learning where a machine (like a computer
program) learns by interacting with its environment. Think of it like training a pet. Just like a
pet learns what to do by getting rewards (like treats) for good behavior and no rewards or
even punishments for bad behavior, a machine in RL learns what actions to take based on the
rewards it receives.

In simple terms, reinforcement learning is all about learning from experiences. The machine
tries different actions and learns which ones get the best rewards. Over time, it gets better at
making decisions that maximize these rewards. So, the machine isn't given a set of
instructions to follow. Instead, it figures out the best actions on its own by trying things out
and learning from what happens.
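The chapter keeps this conceptual, but a tiny Q-learning sketch (a standard RL algorithm; the five-position toy environment below is invented purely for illustration) shows the try, get reward, update loop in code:

```python
import random

# Toy environment: positions 0..4 in a row; reaching position 4 earns a reward.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                          # move left or move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # value of each (state, action) pair
alpha, gamma, epsilon = 0.1, 0.9, 0.2       # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Sometimes explore a random action, otherwise exploit the best one so far.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = Q[state].index(max(Q[state]))
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: adjust the estimate using the experienced reward.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # after training, "move right" has the higher value in every non-goal state
```

No rule saying "always move right" was ever programmed; the preference emerges only from the rewards the agent experiences.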

4. 'Rule-based systems are static whereas systems based on a learning approach are continuously updated.' Explain.

What is a Rule-Based System? A rule-based system is an AI system that follows a set of predefined rules to make decisions. These rules are created by human experts and remain static until manually updated.
What is a Learning-Based System? A learning-based system, such as those using machine
learning, is an AI system that learns from data and experiences. It continuously updates its
knowledge base and improves its performance over time.
Key Differences:
Static Nature of Rule-Based Systems:
 Predefined Rules: The rules in a rule-based system are created by experts and do not
change unless manually updated.
o Example: A medical diagnosis system that uses fixed rules to diagnose
diseases based on symptoms. If new diseases or symptoms emerge, the rules
need to be manually updated by experts.
 Limited Adaptability: These systems cannot adapt to new situations or data without
human intervention.
o Example: A spam filter that uses fixed rules to detect spam emails. If
spammers find new ways to bypass these rules, the system cannot detect the
new spam emails until the rules are updated.
Continuous Learning in Learning-Based Systems:
 Data-Driven Learning: Learning-based systems use algorithms to learn from data.
They continuously update their models as new data becomes available.
o Example: A recommendation system for an online shopping site that suggests
products to users based on their browsing and purchase history. As users
interact with the site, the system learns and provides better recommendations.
 High Adaptability: These systems can adapt to new patterns and trends without
manual updates.
o Example: A self-driving car that learns from its driving experiences. It
continuously improves its ability to navigate different environments and
handle various driving conditions.
Why Rule-Based Systems are Static:
 Human Expertise Dependency: They rely on human experts to define the rules.
Changes require human intervention.
o Example: A financial fraud detection system using fixed rules needs experts
to update the rules when new fraud techniques are discovered.
Why Learning-Based Systems are Continuously Updated:
 Self-Improvement: They use algorithms to learn from data and experiences, leading
to continuous improvement.
o Example: A voice recognition system like Siri or Google Assistant improves
its accuracy by learning from user interactions and correcting errors over time.
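A minimal sketch of the contrast, using a hand-written keyword rule next to a model trained with scikit-learn (the keywords and example messages are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Rule-based: fixed keywords chosen by a human; they never change on their own.
SPAM_WORDS = {"lottery", "winner", "free"}

def rule_based_is_spam(text):
    return any(word in text.lower() for word in SPAM_WORDS)

# Learning-based: the model updates whenever it is retrained on newer labelled data.
messages = ["You are a lottery winner", "Meeting at 5 pm",
            "Claim your free prize now", "Project report attached"]
labels = [1, 0, 1, 0]                      # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

test = "free lottery prize inside"
print("rule-based says spam:", rule_based_is_spam(test))
print("learning-based says spam:", bool(model.predict(vectorizer.transform([test]))[0]))
```

To handle a new spam trick, the rule-based version needs a human to edit SPAM_WORDS, while the learning-based version only needs retraining on fresh examples.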

1. Write a short note on the AI Project life cycle.

The AI project cycle is a process to create and use Artificial Intelligence (AI) systems
effectively.
The stages in a standard AI project cycle are:
1. **Problem Scoping:**
- This is the starting point of any AI project. You define the specific goals you want to
achieve with the AI system and identify the problems it will help solve. For example,
improving customer service, predicting sales, or automating repetitive tasks.

2. **Data Acquisition:**
- In this stage, you gather and collect the relevant data needed to train the AI system.

3. **Data Exploration:**
- Once you have the data, you explore and analyze it to find useful information and
patterns. This step helps you understand the data better.
4. **Data Modelling:**
- Now, you use the collected and analyzed data to train various AI models. The AI system
needs to learn from the data to make accurate predictions or decisions that align with the
goals set in the problem scoping stage.

5. **Evaluation:**

- In this stage, you assess the outputs or predictions generated by the trained AI models.
You evaluate their performance and select the best-suited AI system for deployment based on
how well they meet the defined goal.

2. What do you mean by Problem Scoping?

Problem scoping is the first step in any project or task. It means understanding the problem
before finding a solution. Problem scoping is the process of defining and understanding a
problem before trying to solve it. It involves identifying the problem, understanding its
context, and setting clear objectives for the solution.

What is the 4W Framework of Problem Scoping?

The 4W Framework is a simple and effective approach for problem scoping, which involves
asking four essential questions:

1. **What is the problem?**
- Clearly define the problem you want to solve. Be specific and identify the root cause of
the issue.

2. **Why is it a problem?**
- Understand the significance of the problem and its impact. Consider who is affected and
why it's crucial to find a solution.

3. **Who is involved?**
- Identify the stakeholders who are related to the problem, including those affected by it and
those who can contribute to the solution.

4. **When and where does the problem occur?**
- Determine the context of the problem, including when and where it happens. This helps in
understanding the scope and limitations.

Using the 4W Framework, you gain a clear understanding of the problem's scope, context,
and importance, which is vital for devising an effective solution. By addressing these
questions, you lay a strong foundation for the rest of the project and ensure that the approach
aligns with the desired outcomes.

3. Define Neural Network.

A neural network is a computational model inspired by the way biological neural networks in
the human brain process information. It consists of interconnected nodes, called neurons,
organized into layers. Each neuron receives input, processes it through an activation function,
and produces an output. Neural networks are capable of learning from data through a process
called training, where they adjust the strength of connections (weights) between neurons to
improve their ability to perform tasks such as pattern recognition, classification, regression,
and more complex decision-making tasks.
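To make the "layers, weights, activation" idea concrete, here is a minimal sketch of one forward pass through a tiny two-layer network using NumPy; the weights are random placeholders rather than trained values:

```python
import numpy as np

def sigmoid(z):
    # Activation function: squashes any number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([0.5, 0.8, 0.2])      # input layer: 3 features

W1 = rng.normal(size=(4, 3))       # weights from input layer to 4 hidden neurons
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))       # weights from hidden layer to 1 output neuron
b2 = np.zeros(1)

hidden = sigmoid(W1 @ x + b1)      # each hidden neuron: weighted sum, then activation
output = sigmoid(W2 @ hidden + b2) # the output neuron does the same
print(output)
```

Training would repeatedly adjust W1, W2, b1, and b2 so the output moves closer to the desired answers.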

4. Write any two applications of Neural Networks.

Neural networks, a type of artificial intelligence model inspired by the human brain, find
applications in various fields. Here are two notable examples:

1. Image Recognition: Neural networks are extensively used for tasks like image
classification and object detection. For instance, in autonomous vehicles, neural
networks can identify pedestrians, vehicles, and road signs from camera images in
real-time, enabling safe driving decisions.
2. Natural Language Processing (NLP): Neural networks are employed in NLP
applications such as language translation, sentiment analysis, and chatbots. They can
understand and generate human language, allowing machines to interact with users
more naturally and effectively.

5. Define Dimensionality Reduction.


Dimensionality reduction is a technique used in machine learning and data analysis to reduce
the number of input variables or features under consideration. Its main goal is to simplify the
dataset while retaining as much relevant information as possible. This reduction is achieved
by transforming the dataset into a lower-dimensional space, typically with fewer variables
than the original dataset.

The benefits of dimensionality reduction include:

1. Improved Computation Efficiency: By reducing the number of features, computations
become faster and more manageable, especially for complex models.
2. Visualization: It facilitates easier visualization of data in lower-dimensional spaces,
aiding in understanding patterns and relationships.
3. Reduced Overfitting: Dimensionality reduction can help mitigate the risk of
overfitting by focusing on the most significant features and reducing noise.

Common techniques for dimensionality reduction include Principal Component Analysis (PCA),
t-distributed Stochastic Neighbor Embedding (t-SNE), and Linear Discriminant Analysis (LDA),
each suitable for different types of data and objectives.
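A minimal sketch of dimensionality reduction with PCA from scikit-learn, compressing four made-up, correlated features down to two components:

```python
import numpy as np
from sklearn.decomposition import PCA

# Made-up dataset: 6 samples, 4 correlated features.
X = np.array([
    [2.0, 4.1, 1.0, 0.9],
    [1.5, 3.0, 0.8, 0.7],
    [3.0, 6.2, 1.5, 1.4],
    [2.2, 4.4, 1.1, 1.0],
    [1.0, 2.1, 0.5, 0.4],
    [2.8, 5.6, 1.4, 1.3],
])

pca = PCA(n_components=2)              # keep only 2 dimensions
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                 # (6, 2): fewer features, same samples
print(pca.explained_variance_ratio_)   # how much variance each component keeps
```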

6. What do you mean by GIGO?

"GIGO" means "Garbage In, Garbage Out." It's a saying that tells us if you put bad or wrong
information into a computer, you'll get bad or wrong results out. In other words, the quality of
what comes out depends on the quality of what you put in. This idea is important in computer
systems and AI because using incorrect or poor-quality data can lead to unreliable or
inaccurate outcomes. It reminds us to always use accurate and good-quality information for
better results.
 Example: If you feed wrong or bad data into a system, it will produce incorrect results.
For instance, using outdated information for weather forecasting can lead to inaccurate
predictions.

 Importance: It highlights the critical role of using accurate and reliable data to ensure the
reliability and correctness of computational outputs.

7. Name some sources from which you can get accurate and reliable data.

Obtaining accurate and reliable data is crucial for various applications. Here are some sources
where you can typically find such data:

1. Government Databases: Government agencies often provide public access to a wealth of
reliable data, ranging from economic indicators to demographic statistics. Examples include
data.gov in the United States and data.gov.uk in the United Kingdom.
2. Research Institutions: Universities, research institutes, and think tanks often publish
data related to their research findings. These datasets are typically peer-reviewed and
can be trusted for accuracy.
3. International Organizations: Organizations like the World Bank, United Nations, and OECD
collect and publish extensive datasets on global economic, social, and environmental
indicators. Their data is usually comprehensive and reliable.
4. Scientific Journals: Data published in peer-reviewed scientific journals is rigorously
reviewed for accuracy and validity. It's a valuable source for scientific and academic
research.
Differentiate between the following:

a) Data Acquisition and Data Exploration

Characteristic-wise comparison:

Definition:
 Data Acquisition: Process of collecting or obtaining data from various sources.
 Data Exploration: Process of analysing and examining collected data to understand its characteristics, identify patterns, and gain insights.

Purpose:
 Data Acquisition: Gather raw data for further analysis.
 Data Exploration: Discover meaningful information within the data, such as trends, correlations, anomalies, or hidden insights.

Nature of Data:
 Data Acquisition: Data in its original format (raw data).
 Data Exploration: Data analysed in detail, including visualization and statistical analysis.

Methods:
 Data Acquisition: Data collection, data import, data recording.
 Data Exploration: Data visualization, summary statistics.

Examples:
 Data Acquisition: Recording customer orders.
 Data Exploration: Creating histograms, pie charts, line charts.
b) Data Modelling and Data Visualization

Aspect-wise comparison:

Purpose:
 Data Modelling: Data modelling in AI is all about teaching computers how to recognize patterns, make predictions, and understand information. It's like giving the computer a set of rules and methods to learn from data.
 Data Visualization: Data visualization is like taking the smart things the computer learned through data modelling and presenting them in a way that's easy for people to understand. It's about turning data into pictures, graphs, and charts that tell a clear story.

Nature of Activity:
 Data Modelling: When we do data modelling, we're essentially creating a guidebook or set of instructions for the computer. These instructions tell the computer how to process data and make smart decisions.
 Data Visualization: In data visualization, we focus on creating visuals that convey information effectively. We want to make sure that people can look at these visuals and quickly grasp what's going on in the data.

Methods:
 Data Modelling: We use mathematical formulas and algorithms to train the computer. These formulas help the computer make sense of data and learn from it. For example, if we're teaching a computer to recognize cats in pictures, we use these formulas to help it identify the features that make a cat a cat.
 Data Visualization: To create these visuals, we use tools and techniques like charts, graphs, and interactive dashboards. For example, if we want to show how well a machine learning model can predict something, we might make a bar chart that shows the model's accuracy.

Examples:
 Data Modelling: Data modelling can be used in various AI tasks, for instance when we train a chatbot to understand and respond to human language, or when we build recommendation systems that suggest movies or products based on your preferences. It's all about teaching the computer to be smart in specific ways.
 Data Visualization: Data visualization is used to communicate the results and insights generated by AI models. For instance, we might create a heatmap to show which parts of a city are the most crowded at different times of the day, or we might build a dashboard for a business that displays real-time information about sales, customer satisfaction, or website traffic. The goal is to make data understandable and actionable for people.

c) Machine Learning and Deep Learning

Machine Learning (ML):


 Learning Process: In Machine Learning, the system is given a set
of rules or algorithms and a large amount of data. It learns to
perform a task by identifying patterns in the data.
 Feature Extraction: It often requires human experts to identify
relevant features in the data. These features are like attributes or
characteristics that help the algorithm make decisions.
 Complexity: ML algorithms are typically not very complex. They
might involve linear regression, decision trees, or support vector
machines, for example.
 Applications: ML is used in various tasks like spam detection,
recommendation systems (like those used by Netflix or Amazon),
and even self-driving cars to some extent.
Deep Learning (DL):
 Learning Process: Deep Learning, on the other hand, is a subset
of Machine Learning. It uses a specific type of algorithm called a
neural network, which is inspired by the structure of the human
brain.
 Feature Extraction: DL systems try to learn relevant features from
the data automatically. They don't rely on human experts to specify
the features.
 Complexity: Deep Learning models are very complex. They have
multiple layers (hence the term "deep") of interconnected nodes
that process data in a hierarchical fashion.
 Applications: Deep Learning has been revolutionary in tasks like
image and speech recognition, natural language processing (like
understanding and generating human language), and playing
complex games like Go.
In a Nutshell:
 Machine Learning is like giving a set of rules and data to a system
so it can learn to perform a task by itself. It often requires human
expertise in feature extraction.
 Deep Learning is a specific type of Machine Learning that uses a
complex neural network structure to learn directly from the data,
without much human intervention in feature extraction.
To put it even more simply, you can think of Machine Learning as a student learning from
textbooks with a teacher pointing out what matters, and Deep Learning as a student who works
out what matters directly from the raw material.

d) Classification and Clustering

Classification is like sorting objects into predefined categories. You already know the labels
you want to assign, and you train a model to learn the rules for putting things in those
categories. For example, you might want to classify emails into "spam" or "not spam."
Clustering, on the other hand, is about finding natural groupings in data without knowing in
advance what those groups should be. It's like organizing a collection of items into piles
based on their similarities, even if you don't know the exact categories beforehand. For
instance, if you have a mix of different fruits, clustering would help you group similar ones
together, even if you didn't know the specific types of fruit beforehand.
In summary, classification is about assigning known labels, while clustering is about
discovering hidden patterns and grouping similar items together without predefined
categories. They serve different purposes and are used in different types of data analysis
tasks.

What do you understand by the term "Classification" (Supervised Learning)?

In simple terms, classification means sorting or grouping things based on their similarities or
characteristics. It's like putting things into different categories based on certain traits they
share. For example, sorting fruits into groups like apples, oranges, and bananas based on their
shape, colour, and type.

Let's consider an example to better understand this concept. Imagine you have a dataset
containing 100 images of pears and pomegranates. Your goal is to train a computer program
(model) to recognize whether an image shows a pear or a pomegranate. To do this, you
provide the model with a set of labelled examples. For instance, you show it images of pears
and tell it, "This is a pear," and then show it images of pomegranates and say, "This is a
pomegranate."
The model learns from these examples and adjusts its internal parameters to become better at
distinguishing between pears and pomegranates. After this training process, the model is
equipped to classify new images based on what it has learned. When you give it a new,
unseen image, it will predict whether it's a pear or a pomegranate.
Classification models can make decisions or predictions based on data without human
intervention. This can save time and resources in tasks that require categorization or labeling.
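Real image classification needs a much bigger pipeline, but a minimal sketch with two made-up numeric features (weight in grams and a redness score standing in for colour) shows the same train-then-predict idea with scikit-learn:

```python
from sklearn.linear_model import LogisticRegression

# Made-up labelled examples: [weight_grams, redness 0-1]; 0 = pear, 1 = pomegranate.
X_train = [[180, 0.20], [200, 0.30], [170, 0.25],   # pears
           [300, 0.80], [320, 0.90], [280, 0.85]]   # pomegranates
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)          # the model learns from the labelled examples

new_fruit = [[290, 0.75]]            # an unseen fruit
print(model.predict(new_fruit))      # expected: 1 (pomegranate)
```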

What do you understand by Clustering?


Clustering in the context of Neural Networks and AI is a way of organizing data into groups
based on their similarities. It's like sorting things into different categories, but without
knowing in advance what those categories should be.
For example, imagine you have a bunch of different fruits, and you want to group them
together based on their shape, colour, and size. Clustering algorithms help the computer
figure out which fruits are similar and should be in the same group.
In the world of Neural Networks and AI, clustering is useful for tasks like customer
segmentation (grouping similar customers together for targeted marketing), image
segmentation (dividing an image into regions with similar characteristics), and anomaly
detection (finding unusual patterns in data).
The goal of clustering is to find natural groupings in the data without any prior knowledge
about what those groups should be. It's a bit like letting the computer discover patterns on its
own!
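A minimal sketch of clustering with k-means from scikit-learn on made-up fruit measurements; note that no labels are supplied, so the algorithm only groups rows that look similar:

```python
from sklearn.cluster import KMeans

# Made-up fruit measurements: [diameter_cm, weight_grams]; no labels provided.
fruits = [[7.0, 150], [8.0, 170], [7.5, 160],     # medium fruit
          [12.0, 300], [13.0, 320], [12.5, 310],  # large fruit
          [2.0, 10], [2.5, 12], [2.2, 11]]        # small fruit

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
groups = kmeans.fit_predict(fruits)   # discover 3 natural groupings

print(groups)   # e.g. [1 1 1 2 2 2 0 0 0]; the cluster numbers themselves are arbitrary
```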

What do you understand by Regression (Supervised Learning)?

In the context of Neural Networks and AI, regression refers to a type of task where the goal is
to predict a numerical value based on input data. Unlike classification, which aims to
categorize data into distinct classes, regression is concerned with estimating a continuous
quantity.

For example, let's say you have data on the square footage and number of bedrooms of
houses, and you want to predict their sale prices. This is a regression task because the output
(sale price) is a numerical value.

In a neural network for regression, the network is designed to learn the relationship between
the input features (like square footage and number of bedrooms) and the target variable (sale
price). During training, the network adjusts its parameters to minimize the difference between
its predictions and the actual values in the training data.

Once trained, the neural network can then make accurate predictions about the target variable
for new, unseen data. This makes regression in neural networks a powerful tool for tasks like
price prediction, demand forecasting, and many other scenarios where we're interested in
estimating numerical values based on input features.
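A minimal sketch of regression on made-up house data; plain linear regression is used here instead of a neural network to keep the example short, but the predict-a-number idea is the same:

```python
from sklearn.linear_model import LinearRegression

# Made-up training data: [square_feet, bedrooms] -> sale price.
X_train = [[800, 2], [1200, 3], [1500, 3], [2000, 4], [2400, 5]]
y_train = [150000, 210000, 260000, 340000, 420000]

model = LinearRegression()
model.fit(X_train, y_train)          # learn how size and bedrooms relate to price

new_house = [[1700, 3]]
print(model.predict(new_house))      # predicted sale price for an unseen house
```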

'Rule-based systems can only implement narrow AI.' Discuss.


Rule-based systems in AI are like computer programs that follow a set of fixed rules. They're
called narrow AI because they can only work within those specific rules. Here's why:

1. Limited Adaptability: These systems can't learn or change on their own. They rely
on rules set by humans and can't handle new situations that aren't in their rules. For
instance, a medical diagnosis system with fixed rules can't diagnose new diseases it
hasn't been programmed for.
2. Manual Updates: Whenever the rules need to change, it requires experts to manually
update them. This makes the systems less flexible and slower to adapt compared to AI
that can learn from new information.
3. Specific Tasks: They're good at tasks where the rules are clear, like determining if an
email is spam based on certain keywords. But they struggle with tasks that need
understanding context or making decisions based on uncertain information.
4. Not Scalable: As the number of rules grows or the problems get more complex,
managing and expanding rule-based systems becomes difficult. They work best for
simple, well-defined tasks in a narrow area.
In essence, while rule-based systems are useful for straightforward tasks with clear rules,
their inability to learn or handle new challenges limits them to narrow applications within AI.

What is the difference between supervised and unsupervised learning? Explain with an example.

Supervised vs. Unsupervised Learning


Supervised Learning:
 Definition: In supervised learning, the algorithm learns from labeled data, where each
training example is paired with a corresponding target or outcome variable.
 Example:
o What: Predicting whether an email is spam or not based on labeled examples.
o How: The algorithm learns from a dataset where each email is labeled as
either "spam" or "not spam." It uses features like words and email structure to
predict the label of new, unlabeled emails.
Unsupervised Learning:
 Definition: In unsupervised learning, the algorithm learns from unlabeled data. It
identifies patterns and structures in the data without explicit feedback or guidance.
 Example:
o What: Grouping customers into segments based on their purchasing behavior.
o How: The algorithm analyzes customer data without knowing in advance
which groups exist. It discovers clusters of customers who share similar
buying habits, helping businesses target marketing strategies more effectively.
Simplified Explanation:
 Supervised Learning:
o What: It's like teaching with examples and answers. You show the algorithm
labeled data (examples with correct answers), and it learns to predict outcomes
for new data.
o Example: Teaching a model to recognize different types of animals by
showing it pictures of each animal with their names.
 Unsupervised Learning:
o What: It's like finding patterns without answers. The algorithm explores data
to find hidden structures or groupings.
o Example: Sorting a messy room without labels or instructions, organizing
items based on their similarities and grouping them together.

Aspect-wise comparison:

Definition:
 Supervised Learning: Learns from labeled data with known outcomes.
 Unsupervised Learning: Learns from unlabelled data and discovers patterns.

Example:
 Supervised Learning: Predicting email spam (labeled "spam" or "not spam").
 Unsupervised Learning: Grouping customers into segments based on behavior.

Training Data:
 Supervised Learning: Requires labeled data (input-output pairs).
 Unsupervised Learning: Uses unlabelled data with no predefined outputs.

Objective:
 Supervised Learning: Predicts or classifies new data based on learned patterns.
 Unsupervised Learning: Finds hidden patterns or structures in data.

Feedback Mechanism:
 Supervised Learning: Receives explicit feedback from labeled data.
 Unsupervised Learning: Finds structure without explicit feedback or labels.

Applications:
 Supervised Learning: Speech recognition, image classification.
 Unsupervised Learning: Customer segmentation, anomaly detection.

Example Analogy:
 Supervised Learning: Teaching with answers (like a quiz with answers).
 Unsupervised Learning: Discovering patterns without answers (like organizing a messy room).

How is an Artificial Neural Network similar to a Human Neural Network?

Artificial Neural Networks (ANNs) are inspired by the structure and function of the human
brain's neural networks. Here are some key similarities between artificial and human neural
networks:

1. Basic Structure: Both artificial and human neural networks consist of interconnected
nodes (artificial neurons or biological neurons) organized into layers. In ANNs, these
layers include input, hidden, and output layers, while in the human brain, neurons are
interconnected in complex networks.
2. Information Processing: Both networks process information through interconnected
nodes. In ANNs, each node receives inputs, applies a mathematical function
(activation function), and passes an output to the next layer. Similarly, biological
neurons receive signals (inputs), process them through electrochemical signals, and
transmit outputs to other neurons.
3. Learning and Adaptation: ANNs and human neural networks are capable of
learning and adaptation. In artificial networks, learning is achieved through
algorithms that adjust the strength (weights) of connections between neurons based on
training data. Human brains learn through experience, strengthening or weakening
connections (synapses) between neurons based on learning and memory formation.
4. Complexity and Parallel Processing: Both systems exhibit complex patterns of
connectivity and can process multiple pieces of information simultaneously (parallel
processing). ANNs can handle large amounts of data in parallel, similar to how the
brain processes vast amounts of information simultaneously through interconnected
neural pathways.
5. Functionality: Both types of networks are used for pattern recognition, decision-
making, and processing sensory information. ANNs excel in tasks such as image and
speech recognition, while human brains perform complex cognitive functions
including reasoning, emotions, and motor control.

While ANNs aim to mimic the structure and function of human neural networks, they are
simplified models designed to solve specific tasks efficiently. They represent a computational
approach to understanding and replicating aspects of biological intelligence, albeit at a
different level of complexity and scale compared to the human brain.
What do you understand by the term 'Decision Tree'? Explain with an example.

A Decision Tree is a graphical representation of decisions and their possible consequences.
It's like a flowchart where each node represents a decision, each branch represents the
outcome of the decision, and each leaf node represents a final decision or outcome.
In AI, a Decision Tree is a predictive model that learns from data to make decisions. It's
structured like a flowchart where each internal node represents a "test" on an attribute
(e.g., is it sunny?), each branch represents the outcome of the test (e.g., yes or no), and
each leaf node represents a decision or prediction (e.g., go for a walk or not).

Example:
Imagine you're deciding whether to go for a walk based on the weather:
 Decision Node: The first node asks, "Is it raining?"
o Yes Branch: If it's raining, you might decide not to go for a walk.
o No Branch: If it's not raining, you might then ask, "Is it sunny?"
 Yes Branch: If it's sunny, you decide to go for a walk.
 No Branch: If it's not sunny, you might still go for a walk depending
on other factors like temperature.
In this example, the decision tree helps you make a sequence of decisions based on conditions
(rain and sun) to reach a final outcome (go for a walk or not).

Decision trees are used in various fields for decision-making and problem-solving because
they are easy to understand and visualize, making complex decisions more manageable.

Example 2: Customer Purchase Decision Tree

Imagine a decision tree to predict if a customer will purchase a product based on demographic data.

Age?
├── Not Young → …
└── Young → Income > Medium?
    ├── No → …
    └── Yes → Gender = Female?
        ├── Yes → Purchase
        └── No → No Purchase
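A minimal sketch of the same idea with scikit-learn's decision tree; the customer rows, encodings, and outcomes below are invented for illustration:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up customers: [age, high_income (0/1), is_female (0/1)]; target 1 = purchased.
X = [[22, 1, 1], [25, 1, 0], [23, 0, 1], [40, 1, 1], [35, 0, 0], [21, 1, 1]]
y = [1, 0, 0, 0, 0, 1]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the learned rules as a readable text tree.
print(export_text(tree, feature_names=["age", "high_income", "is_female"]))

print(tree.predict([[24, 1, 1]]))   # prediction for a new customer
```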

Write 4 concerns related to the Sustainable Development Goals for creating an AI project. (For
example: eradication of poverty across the world.)

Here are four concerns related to creating AI projects aligned with Sustainable Development
Goals (SDGs), explained simply:
1. Ethical Data Use: Make sure that the data used in AI projects is collected and used in
a fair and respectful way, especially when dealing with personal information like
health data (SDG 3 - Good Health and Well-being).
2. Avoiding Bias: Ensure that AI algorithms don't unfairly favour or disadvantage
certain groups of people, aiming for fair outcomes for everyone (SDG 4 - Quality
Education).
3. Environmental Impact: Try to reduce the environmental impact of AI technologies,
such as using less energy or creating less electronic waste from AI hardware (SDG 7 -
Affordable and Clean Energy).
4. Equal Access: Ensure that AI benefits reach everyone, especially those who might
not normally have access, to help reduce poverty and inequality (SDG 1 - No
Poverty).

By addressing these concerns, AI projects can better contribute to sustainable development by
being fair, environmentally friendly, and accessible to all.

An organization is deploying an iris recognition system to mark the attendance of their
employees. What type of data should be accumulated for it?

The organization should accumulate biometric data related to the iris patterns of their
employees. This includes high-resolution images or scans of each employee's iris, which are
unique to each individual.

 Iris Images: Clear pictures of each employee's eye patterns.

 Biometric Details: Specific measurements and characteristics from these images that are
unique to each person's iris.

 Employee Information: Details like employee IDs or names to link each iris scan to the
right person.

 Attendance Records: Time-stamped logs of when employees use the system to check in
or out.

Rupa is running a food outlet and has an app for it. She needs a chatbot for the app which
would be able to give personalized chat experience to customers. What type of approach must
be used for creating a chatbot? (Rule-based/Learning-based)

Rupa should use a learning-based approach to create the chatbot for her food outlet app.
This means the chatbot will learn from customer interactions over time to provide
personalized responses and better understand what customers need and prefer. It's like
teaching the chatbot to get smarter as it interacts more with customers, making the
conversations more helpful and tailored to each person.

You are the coach of the football team. You have their data. It contains team players'
performance in tournaments, which is arranged in a tabular form. Now, you are required to
find the trends, patterns and connections in the datasets. How will you do this?

To find trends, patterns, and connections in the football team players' performance data, you
can follow these steps:
1. Data Exploration: First, look at the data to understand its structure and variables.
Identify what each column represents (e.g., player name, goals scored, assists, etc.).
2. Descriptive Statistics: Calculate basic statistics like averages, ranges, and
distributions for key metrics (e.g., average goals per game, total assists per player).
3. Visualization: Create graphs and charts (e.g., bar charts, scatter plots) to visualize
relationships between variables. For example, plot goals scored against assists to see
if there's a correlation.
4. Pattern Recognition: Look for recurring patterns or trends in the data. For instance,
do certain players perform consistently well in specific tournaments or against
particular opponents?
5. Statistical Analysis: Use statistical techniques like correlation analysis to quantify
relationships between variables. Determine if performance metrics (e.g., goals,
assists) are correlated with each other or with other factors like playing time or
position.
6. Machine Learning Techniques: Apply machine learning algorithms if you want to
predict future performance based on historical data or identify clusters of players with
similar performance profiles.
7. Contextual Analysis: Consider external factors that could influence performance,
such as weather conditions, team strategy changes, or player injuries.
By systematically exploring and analyzing the data, you can uncover valuable insights that
help optimize player strategies, improve team performance, and make informed decisions as a
coach.
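The first few steps can be sketched with pandas on an invented performance table (the player names and numbers below are made up, and pandas/matplotlib are assumed to be available):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented player performance data in tabular form.
players = pd.DataFrame({
    "player": ["Asha", "Bilal", "Chen", "Dev", "Esha"],
    "matches": [10, 12, 9, 11, 8],
    "goals": [7, 12, 3, 9, 2],
    "assists": [4, 6, 5, 3, 1],
    "minutes": [820, 1050, 760, 930, 610],
})

# 2. Descriptive statistics for the key metrics.
print(players[["goals", "assists", "minutes"]].describe())

# 3. Visualization: scatter plot of assists against goals.
players.plot.scatter(x="assists", y="goals", title="Assists vs Goals")
plt.show()

# 4. Pattern recognition: a simple derived indicator, goals per match.
players["goals_per_match"] = players["goals"] / players["matches"]
print(players.sort_values("goals_per_match", ascending=False))

# 5. Statistical analysis: do goals, assists, and playing time move together?
print(players[["goals", "assists", "minutes"]].corr())
```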
