
Intel AI Training

Question Bank with Answers


Theory questions

1. Write down any 3 examples of machine learning and deep learning algorithms
Machine Learning Algorithms
a) Linear Regression
b) Decision Trees
c) Support Vector Machines (SVM)
Deep Learning Algorithms
a) Convolutional Neural Networks (CNNs)
b) Recurrent Neural Networks (RNNs)
c) Generative Adversarial Networks (GANs)

2. Write down any 5 applications of AI


i) Healthcare
ii) Natural Language Processing (NLP)
iii) Autonomous Vehicles
iv) Finance
v) Recommendation Systems
3. What are the major differences between supervised and unsupervised learning?
a) Supervised learning requires labeled data and focuses on prediction, while unsupervised learning works with unlabeled data to find patterns or groupings.
b) Supervised learning is typically used for tasks where the outcome is known, whereas unsupervised learning is used for exploratory analysis when the outcomes are not predefined.
4. What packages in Python can be used for implementing ML?
i) Scikit-learn
ii) TensorFlow
iii) Keras
iv) PyTorch
5. What are the basic differences between NumPy and Pandas?
NumPy: Primarily designed for numerical computing. It provides support for
multi-dimensional arrays and matrices, along with a collection of
mathematical functions to operate on these arrays.
Pandas: Built for data manipulation and analysis. It provides data structures
(like Series and DataFrames) that are optimized for handling labelled data,
making it easier to work with structured datasets.
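As a short illustrative sketch (assuming NumPy and Pandas are installed; the data below is made up), the contrast looks like this:

import numpy as np
import pandas as pd

# NumPy: fast numerical operations on a multi-dimensional array
arr = np.array([[1, 2, 3], [4, 5, 6]])
print(arr * 2)           # element-wise arithmetic
print(arr.mean(axis=0))  # column means

# Pandas: labelled, tabular data built on top of NumPy
df = pd.DataFrame({"name": ["Ann", "Ben", "Cara"], "score": [85, 92, 78]})
print(df["score"].mean())    # operate on a labelled column
print(df[df["score"] > 80])  # filter rows by a condition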

6. Define Artificial Intelligence. Discuss its key components and how they interact
with each other.
Answer: Artificial Intelligence (AI) refers to the simulation of human intelligence in
machines programmed to think and learn. Key components include:
Machine Learning: Algorithms that enable systems to learn from data.
Natural Language Processing (NLP): Allows machines to understand and interpret
human language.
Robotics: The design of robots that can perform tasks autonomously.
Computer Vision: Enabling machines to interpret and process visual information. These
components interact to create intelligent systems that can learn, reason, and act
autonomously.

7. Explain the difference between supervised, unsupervised, and reinforcement learning. Provide examples for each.
Answer:
Supervised Learning: The model is trained on labeled data (input-output pairs).
Example: Email spam detection.
Unsupervised Learning: The model works on unlabeled data to find hidden patterns.
Example: Customer segmentation in marketing.
Reinforcement Learning: The model learns by interacting with an environment and
receiving feedback in the form of rewards or penalties. Example: Game-playing AI like
AlphaGo.
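As an illustration of the unsupervised case, here is a minimal customer-segmentation-style sketch using scikit-learn's KMeans (assuming scikit-learn is installed; the data is made up purely for illustration):

import numpy as np
from sklearn.cluster import KMeans

# Toy "customer" data: annual spend and number of visits (no labels)
customers = np.array([[200, 5], [220, 6], [800, 25], [850, 30], [400, 12], [420, 15]])

# Group the customers into 3 clusters without any labeled outcomes
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print("Cluster assignments:", kmeans.labels_)
print("Cluster centers:\n", kmeans.cluster_centers_)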
8. What is the Turing Test? Discuss its significance and limitations in evaluating
AI.
Answer: The Turing Test, proposed by Alan Turing, evaluates a machine's ability to
exhibit intelligent behavior indistinguishable from a human. Its significance lies in
assessing machine intelligence. Limitations include:
It measures mimicry rather than true understanding.
It does not account for emotional intelligence or creativity.

9. What is the importance of data preprocessing in machine learning? List common preprocessing techniques.
Answer: Data preprocessing is crucial for improving model accuracy and performance.
Common techniques include:
Normalization/Standardization: Scaling features to a similar range.

Handling missing values: Imputation or removal of incomplete data.
Encoding categorical variables: Converting categorical data into numerical form.
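A brief sketch of these techniques using Pandas and scikit-learn (assuming both are installed; the toy dataset is made up for illustration):

import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Toy dataset with a missing value and a categorical column
df = pd.DataFrame({"age": [25, 32, None, 41],
                   "salary": [30000, 48000, 52000, 61000],
                   "city": ["Delhi", "Mumbai", "Delhi", "Chennai"]})

# Handling missing values: fill the missing age with the column mean
df[["age"]] = SimpleImputer(strategy="mean").fit_transform(df[["age"]])

# Normalization/Standardization: scale numeric features to zero mean and unit variance
df[["age", "salary"]] = StandardScaler().fit_transform(df[["age", "salary"]])

# Encoding categorical variables: one-hot encode the city column
df = pd.get_dummies(df, columns=["city"])
print(df)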
10. Explain the concept of neural networks. How do they mimic the human brain’s
structure and function?
Neural networks are a fundamental concept in machine learning, inspired by the
structure and functioning of the human brain. They are designed to recognize patterns
and make decisions based on data, similar to how the human brain processes information.
Structure of Neural Networks
A neural network consists of layers of interconnected neurons (also called nodes). These
layers include:
Input Layer: This is where the network receives the raw data (features) as input.
Hidden Layers: These layers process the input data by transforming it through
various operations and abstractions.
Output Layer: This produces the final result or prediction, such as classifying an
image or predicting a value.
Each neuron in a layer is connected to neurons in the next layer, and each connection has
an associated weight that determines the strength of the connection.
Components of a Neuron
Each neuron performs the following functions, analogous to biological neurons:
Weights: The connections between neurons have weights, which control the influence of
one neuron on another. These are like the synapses in the brain, adjusting based on
experience.
Activation Function: After the weighted sum of inputs reaches a neuron, it passes
through an activation function (such as sigmoid, ReLU, or tanh), which determines
whether the neuron "fires" or becomes active.
Bias: Each neuron has a bias value that shifts the weighted sum, influencing the neuron’s
activation.
How Neural Networks Learn
Neural networks learn by adjusting the weights and biases during training, a process that
mimics how the brain strengthens or weakens synaptic connections in response to
experiences. This adjustment is done using algorithms like backpropagation and gradient
descent:
Backpropagation: The algorithm calculates the error by comparing the network’s output
to the desired output. The error is then propagated backward through the network to
adjust the weights in such a way that future predictions will be more accurate.

Gradient Descent: This optimization algorithm adjusts the weights by minimizing a loss
function (which measures how far the network’s output is from the desired output).
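To make the forward pass and a single gradient-descent update concrete, here is a minimal NumPy sketch of one artificial neuron (a sigmoid activation and squared-error loss are assumed purely for illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.8])    # inputs (features)
w = np.array([0.1, -0.2])   # weights (like synaptic strengths)
b = 0.0                     # bias
target = 1.0                # desired output
lr = 0.1                    # learning rate

# Forward pass: weighted sum plus bias, then the activation function
z = np.dot(w, x) + b
y = sigmoid(z)

# Backward pass: gradient of the squared error, propagated through the sigmoid
error = y - target
grad_z = error * y * (1 - y)

# Gradient descent: adjust weights and bias to reduce the error
w -= lr * grad_z * x
b -= lr * grad_z
print("output:", y, "updated weights:", w, "updated bias:", b)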
Mimicking the Human Brain
The structure and function of artificial neural networks (ANNs) are inspired by biological
neural networks. Let’s look at how they are analogous to the human brain:

1. Neurons
Biological neurons: In the brain, neurons process and transmit information. They
receive signals from other neurons, process these signals, and pass them on.
Artificial neurons: In ANNs, each node performs a similar function by taking inputs,
processing them through weights, applying an activation function, and passing the output
to the next layer.
2. Synapses and Weights
Synapses: In the brain, neurons are connected through synapses, which determine the
strength of the signal passed from one neuron to the next. Synapses strengthen with
repeated activity, which is akin to learning.
Weights: In ANNs, the connections between nodes are represented by weights. These
weights are learned and adjusted during training to influence how much input from one
neuron affects the output of another.
3. Learning through Experience
Brain learning: The brain learns by adjusting synaptic strengths through experiences
(like learning to ride a bike or recognizing faces). This process is known as synaptic
plasticity.
Neural network learning: ANNs adjust weights based on feedback (error signals), which
is analogous to the brain strengthening or weakening synaptic connections through
feedback (e.g., rewards, punishments).
4. Parallel Processing
Brain: The human brain processes information in parallel across millions of neurons,
allowing it to handle complex tasks like vision, language, and decision-making
simultaneously.
Neural networks: ANNs also perform parallel processing, with neurons in the hidden
layers processing multiple pieces of data at the same time. This allows them to handle
large amounts of data efficiently.
5. Hierarchical Processing
Brain: The brain processes information hierarchically, starting with simpler patterns (like
edges in vision) and gradually building up to complex patterns (like recognizing a face).

Neural networks: In deep learning, the multiple hidden layers of a neural network
progressively extract higher-level features from raw input data. Early layers might detect
basic patterns (e.g., edges in an image), while later layers identify more complex
structures (e.g., shapes, objects).
Example: Image Recognition
To illustrate, consider how neural networks are used in image recognition:
Input Layer: The network receives pixel values of the image.
Hidden Layers: Early layers identify edges or simple patterns in the image. Subsequent
layers recognize more complex shapes (e.g., parts of objects like eyes, ears, etc.).
Output Layer: The network combines the features to classify the image (e.g., identifying
it as a "cat").
This process is similar to how the visual cortex in the brain processes visual stimuli,
detecting simple patterns first, then gradually recognizing complex objects.

11. What are the ethical considerations in the development and deployment of AI
systems?
Answer: Ethical considerations include:
Bias in algorithms leading to unfair treatment.
Privacy concerns with data collection and usage.
Accountability for decisions made by AI systems. Potential societal impacts include job
displacement and the need for regulation to ensure responsible use.
12. Define the term "Natural Language Processing" (NLP). What are its main
challenges?
Answer: NLP is a field of AI focused on the interaction between computers and human
language. Main challenges include:
Ambiguity in language (context-dependent meanings).
Understanding nuances, idioms, and cultural references.
Sentiment analysis in varying tones and contexts.
13. Illustrate and solve an example problem using reinforcement learning techniques
The objective is to balance a pole on a moving cart. The cart can move left or right,
and the pole starts upright. The goal is to keep the pole balanced as long as possible.
Reference link: https://www.youtube.com/watch?v=JNKvJEzuNsc

Environment:
• State Space: The state consists of four variables:
o Cart position
o Cart velocity
o Pole angle
o Pole angular velocity

Action Space: The agent can take two actions:
• Move the cart to the left
• Move the cart to the right
Reward: The agent receives a reward of +1 for each time step the pole
remains upright (i.e., the angle is within a certain threshold, typically ±15
degrees).
Implementation Steps:
i) Set Up the Environment: You can use the OpenAI Gym library to create the
CartPole environment.
ii) Define the Reinforcement Learning Algorithm: You can use Q-learning or a
simple policy gradient method. For simplicity, let’s use a basic Q-learning
approach.
iii) Training the Agent: The agent will learn through exploration and exploitation
over multiple episodes
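A minimal tabular Q-learning sketch for this problem is shown below (assuming the gym package and the classic OpenAI Gym API, where reset() returns the observation and step() returns four values; newer gymnasium releases differ). The continuous state is discretized into bins so a Q-table can be used:

import gym
import numpy as np

env = gym.make("CartPole-v1")

# Bin edges for discretizing the four continuous state variables
bins = [np.linspace(-2.4, 2.4, 6),    # cart position
        np.linspace(-3.0, 3.0, 6),    # cart velocity
        np.linspace(-0.21, 0.21, 6),  # pole angle
        np.linspace(-3.0, 3.0, 6)]    # pole angular velocity

def discretize(obs):
    return tuple(int(np.digitize(o, b)) for o, b in zip(obs, bins))

q_table = np.zeros((7, 7, 7, 7, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount factor, exploration rate

for episode in range(500):
    state = discretize(env.reset())
    done, total_reward = False, 0
    while not done:
        # Epsilon-greedy action selection (exploration vs. exploitation)
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_table[state]))
        obs, reward, done, _ = env.step(action)
        next_state = discretize(obs)
        # Q-learning update rule
        best_next = np.max(q_table[next_state])
        q_table[state + (action,)] += alpha * (reward + gamma * best_next - q_table[state + (action,)])
        state = next_state
        total_reward += reward
    if (episode + 1) % 100 == 0:
        print(f"Episode {episode + 1}: total reward = {total_reward}")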
14. You are tasked with designing a chatbot for customer service. What algorithms
would you consider, and why? Discuss potential challenges in implementation.
Answer: Algorithms to consider include:
Rule-based Systems: Simple and interpretable, but limited in handling diverse queries.
NLP with Machine Learning: Can understand context better, but requires large datasets.
Challenges include understanding user intent, managing ambiguous queries, and
ensuring a natural conversation flow.

15. What is deep learning, and how does it differ from traditional machine
learning?

Answer:
Deep Learning is a subfield of machine learning that uses artificial neural networks
with many layers (hence "deep") to model complex patterns in data. Traditional
machine learning models often require feature engineering, where a human selects
the relevant features. Deep learning models, however, automatically learn features
from raw data, which makes them especially effective for tasks such as image
recognition, speech processing, and natural language understanding.
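As a minimal sketch of what a "deep" model looks like in code (assuming TensorFlow/Keras is installed; the data below is random and purely illustrative), note that the hidden layers learn their own feature representations from the raw inputs:

import numpy as np
from tensorflow import keras

# Toy data: 100 samples with 20 raw features and binary labels
X = np.random.rand(100, 20)
y = np.random.randint(0, 2, size=100)

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),   # hidden layer 1
    keras.layers.Dense(16, activation="relu"),   # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid")  # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
model.summary()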

16. Discuss how you would use reinforcement learning to train an agent in a
simulated environment. What metrics would you use to evaluate its
performance?

Define the Environment:


The environment consists of states, actions, rewards, and transitions. You
must clearly define:
State space: All possible configurations the environment can take.
Action space: All possible actions the agent can take at each state.
Reward function: A scalar feedback signal for each state-action pair, guiding the
agent to desirable outcomes.
Transition dynamics: The rules determining how actions taken by the agent
influence the next state.
Agent Design:
Policy: The agent's decision-making strategy, which maps states to actions. The
policy can be deterministic or stochastic.
Value function: Represents the expected cumulative reward from a given state (or
state-action pair), helping the agent understand which states are valuable.
Exploration vs. Exploitation: Initially, the agent explores the environment by
taking random actions to gather information. Over time, it balances exploration with
exploiting the best-known strategies.
Learning Algorithm:
Algorithms like Q-Learning, SARSA, Deep Q-Networks (DQN), or Policy Gradient
methods (e.g., Proximal Policy Optimization, PPO) can be used depending on the
complexity and nature of the environment.
Model-free vs. Model-based: A model-free approach (e.g., Q-learning) doesn't try to
model the environment's dynamics, while a model-based approach does.
On-policy vs. Off-policy: On-policy methods (e.g., SARSA) learn about the policy being
followed, while off-policy methods (e.g., Q-Learning) learn from actions outside of the
current policy.
Training Loop:
• Initialize the agent’s policy and environment.
• For each episode:
• Reset the environment to an initial state.
• The agent interacts with the environment by taking actions.
• The environment returns the new state and reward based on the action.
• The agent updates its policy based on the reward and observed transition.

• Continue this process until the agent reaches a terminal state or predefined criteria.
• The process repeats across many episodes to help the agent refine its policy and converge to an optimal strategy.

Evaluation Metrics:
Typical metrics include the average cumulative reward per episode, episode length (e.g., how long the agent survives or how quickly it reaches the goal), success rate on the task, and how quickly the learning curve converges to a stable policy.

17. Describe the various steps involved in implementing a machine learning model

Answer:

1. Problem Definition
The first step is to define the problem you're trying to solve, whether it’s
classification, regression, clustering, etc.

2. Data Collection
Gather relevant data that will be used to train the model. The data can be from
different sources such as databases, APIs, or CSV files.

3. Data Preprocessing
Clean the data to remove noise, handle missing values, and transform it into a
format suitable for model training. This often includes:

• Handling missing values
• Scaling and normalizing data
• Encoding categorical variables
4. Split Data into Training and Testing Sets
Split the dataset into training and testing sets to evaluate model performance.
5. Model Selection
Select a suitable machine learning algorithm based on the problem. Common
algorithms include:

• Logistic Regression (for classification)
• Linear Regression (for regression)
• Decision Trees
• Support Vector Machines (SVM)
• Neural Networks (for complex tasks)
6. Model Training
Fit the model to the training data.
7. Model Evaluation
Evaluate the model on the test data using metrics such as accuracy, precision,
recall, and F1 score (for classification) or RMSE (for regression).
8. Model Tuning
Optimize hyperparameters to improve the model's performance. This might involve cross-validation and techniques like GridSearch or RandomSearch (see the sketch after this list).
9. Deployment

Once the model has been tuned and validated, deploy it into a production
environment where it can be used to make predictions on new data.
10. Monitoring and Maintenance
After deployment, monitor the model's performance over time and update it if
necessary as new data becomes available or conditions change.
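As a brief sketch of step 8 (hyperparameter tuning), assuming scikit-learn is installed and using the Iris dataset with an SVM purely for illustration:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Try every combination of these hyperparameters with 5-fold cross-validation
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
grid = GridSearchCV(SVC(), param_grid, cv=5)
grid.fit(X, y)

print("Best parameters:", grid.best_params_)
print("Best cross-validated score:", grid.best_score_)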

18. What is NLP?

Natural Language Processing (NLP) is a subfield of artificial intelligence (AI) that focuses on enabling computers to understand, interpret, and generate human (natural) language. NLP bridges the gap between human communication and machine understanding by allowing computers to process text or voice data in a way that is meaningful and useful.

NLP combines elements from various fields, including linguistics, computer science, and machine learning. It involves processing both the syntax (structure) and semantics (meaning) of language to perform various tasks such as translation, text summarization, question answering, and more.

19. What are the Key components of NLP?


• Tokenization: Splitting text into smaller units such as words or sentences.
• Stopword Removal: Removing common but insignificant words like "and", "is",
"the".
• Stemming and Lemmatization: Reducing words to their root form (e.g.,
"running" → "run").
• Part-of-Speech Tagging (POS): Identifying the grammatical role of words (noun,
verb, adjective, etc.).
• Named Entity Recognition (NER): Identifying and classifying named entities like
people, organizations, and locations.
• Sentiment Analysis: Determining the sentiment or emotion in a piece of text
(positive, negative, neutral).
• Dependency Parsing: Understanding how words in a sentence relate to each
other.
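A small sketch of several of these components using NLTK (assuming nltk is installed and its 'punkt', 'stopwords', and tagger data have been downloaded with nltk.download()):

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

text = "The children are running quickly through the park"

tokens = word_tokenize(text)                                    # Tokenization
stop_words = set(stopwords.words("english"))
filtered = [w for w in tokens if w.lower() not in stop_words]   # Stopword removal
stemmer = PorterStemmer()
stems = [stemmer.stem(w) for w in filtered]                     # Stemming ("running" -> "run")
pos_tags = nltk.pos_tag(tokens)                                 # Part-of-speech tagging

print("Tokens:", tokens)
print("Without stopwords:", filtered)
print("Stems:", stems)
print("POS tags:", pos_tags)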
20. Give a few applications of NLP

1. Machine Translation: Automatically translating text or speech from one language to another (e.g., Google Translate).
2. Speech Recognition: Converting spoken language into text (e.g., virtual assistants
like Siri, Google Assistant).
3. Chatbots and Virtual Assistants: NLP powers conversational agents that can
understand and respond to user queries (e.g., customer support chatbots, Amazon
Alexa).
4. Text Summarization: Automatically generating a concise summary of longer
documents or articles.

5. Sentiment Analysis: Understanding the sentiment of a piece of text, which is
especially useful for analysing customer feedback, social media posts, and reviews
(e.g., analysing product reviews to gauge user satisfaction).

6. Text Classification: Categorizing text into predefined categories. This is used in spam filtering, news categorization, and document classification.
7. Language Generation: Generating human-like text from a machine, used in
content creation, dialogue generation, and story writing (e.g., GPT-based language
models like ChatGPT).
8. Optical Character Recognition (OCR): Converting handwritten or printed text
into digital form (e.g., scanning a document and converting it into editable text).
9. Question Answering: Systems that answer specific questions from large datasets
or knowledge bases (e.g., IBM Watson, Google’s featured snippets).

Programming Questions
21. Write a program to perform the following arithmetic operations on values entered from the keyboard: addition, subtraction, multiplication, division

Answer :

num1 = int(input("Enter the first number: "))
num2 = int(input("Enter the second number: "))

# Perform arithmetic operations
addition = num1 + num2
subtraction = num1 - num2
multiplication = num1 * num2
division = num1 / num2   # division of two integers produces a float

# Display the results
print("Addition Result:", addition)
print("Subtraction Result:", subtraction)
print("Multiplication Result:", multiplication)
print("Division Result:", division)

22. Write a program to find the factorial of a number using a function


# Function to calculate factorial (recursive)
def factorial(n):
    if n == 0 or n == 1:
        return 1
    else:
        return n * factorial(n - 1)

# Input from user
num = int(input("Enter a number to find its factorial: "))

# Checking for negative inputs
if num < 0:
    print("Factorial is not defined for negative numbers.")
else:
    result = factorial(num)
    print("The factorial is", result)
23. Write a program to check whether the given number is Armstrong or not

# take input from the user
num = int(input("Enter a number: "))

# initialize the sum of cubes
total = 0

# find the sum of the cube of each digit (for a 3-digit Armstrong number)
temp = num
while temp > 0:
    digit = temp % 10
    total += digit ** 3
    # the floor division // rounds the result down to the nearest whole number
    temp //= 10

# display the result
if num == total:
    print(num, "is an Armstrong number")
else:
    print(num, "is not an Armstrong number")

24. Write a program to find largest of three numbers


num1 = int(input("Enter number1: "))
num2 = int(input("Enter number2: "))
num3 = int(input("Enter number3: "))

if (num1 >= num2) and (num1 >= num3):
    largest = num1
elif num2 > num3:
    largest = num2
else:
    largest = num3

print("Largest is", largest)
25. Write a program to sort 10 numbers in ascending and descending order in a list using built-in functions

# Input: list of 10 numbers
numbers = []

# Taking 10 numbers from the user
for i in range(10):
    num = int(input(f"Enter number {i + 1}: "))
    numbers.append(num)

# Sorting in ascending order
ascending_order = sorted(numbers)

# Sorting in descending order
descending_order = sorted(numbers, reverse=True)

# Display results
print("\nOriginal List:", numbers)
print("Sorted in Ascending Order:", ascending_order)
print("Sorted in Descending Order:", descending_order)

26. Write a program to perform matrix operations using the NumPy library


import numpy as np
# Input: Define two 2x2 matrices
matrix1 = np.array([[1, 2], [3, 4]])
matrix2 = np.array([[5, 6], [7, 8]])
# Matrix Addition
addition = np.add(matrix1, matrix2)
# Matrix Subtraction
subtraction = np.subtract(matrix1, matrix2)
# Matrix Multiplication (Element-wise)
elementwise_multiplication = np.multiply(matrix1, matrix2)
# Matrix Multiplication (Dot product)
dot_product = np.dot(matrix1, matrix2)
# Matrix Transpose
transpose_matrix1 = np.transpose(matrix1)
transpose_matrix2 = np.transpose(matrix2)

# Display Results
print("Matrix 1:\n", matrix1)
print("Matrix 2:\n", matrix2)
print("\nMatrix Addition:\n", addition)
print("Matrix Subtraction:\n", subtraction)
print("Element-wise Multiplication:\n", elementwise_multiplication)
print("Dot Product (Matrix Multiplication):\n", dot_product)
print("Transpose of Matrix 1:\n", transpose_matrix1)

print("Transpose of Matrix 2:\n", transpose_matrix2)

27. Demonstrate with a Python program how to plot the following types of graphs using the given data.

a) Bar chart  b) Histogram  c) Boxplot  d) Pie chart

import matplotlib.pyplot as plt


import seaborn as sns
import numpy as np

# Sample Data
data = [23, 45, 56, 78, 213, 43, 21, 98, 120, 67, 45, 35, 78, 56, 90]

# Plotting Bar Chart


def plot_bar_chart():
    categories = ['A', 'B', 'C', 'D', 'E']
    values = [5, 7, 3, 8, 4]

    plt.figure(figsize=(8, 5))
    plt.bar(categories, values, color='skyblue')
    plt.title("Bar Chart Example")
    plt.xlabel("Categories")
    plt.ylabel("Values")
    plt.show()

# Plotting Histogram
def plot_histogram():
    plt.figure(figsize=(8, 5))
    plt.hist(data, bins=7, color='green', alpha=0.7)
    plt.title("Histogram Example")
    plt.xlabel("Value")
    plt.ylabel("Frequency")
    plt.show()

# Plotting Boxplot
def plot_boxplot():
    plt.figure(figsize=(8, 5))
    sns.boxplot(data=data, color='lightblue')
    plt.title("Boxplot Example")
    plt.show()

# Plotting Pie Chart
def plot_pie_chart():
    labels = ['Apple', 'Banana', 'Orange', 'Grapes']
    sizes = [25, 35, 20, 20]
    colors = ['gold', 'lightcoral', 'lightskyblue', 'lightgreen']

    plt.figure(figsize=(8, 5))
    plt.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%', startangle=140)
    plt.title("Pie Chart Example")
    plt.axis('equal')  # Equal aspect ratio ensures the pie chart is drawn as a circle
    plt.show()

# Function calls to plot each chart
plot_bar_chart()   # Bar Chart
plot_histogram()   # Histogram
plot_boxplot()     # Boxplot
plot_pie_chart()   # Pie Chart

28. Write sample Python programs to differentiate between Python lists and NumPy arrays

Key Differences:
Addition:
Python List: When adding two lists, they are concatenated.
NumPy Array: Supports element-wise addition between two arrays of the same size.
Scalar Multiplication:
Python List: Repeats the elements when multiplied by a scalar.
NumPy Array: Multiplies each element by the scalar.

Mathematical Operations:
Python List: Requires a loop or list comprehension to perform operations like
squaring elements.
NumPy Array: Supports vectorized operations (like squaring elements) without
loops, which is faster and more efficient.

import numpy as np

# Define a Python list and a NumPy array


py_list = [1, 2, 3, 4, 5]
np_array = np.array([1, 2, 3, 4, 5])

# 1. Addition
print("Addition:")
print("Python List:", py_list + [10, 20]) # List concatenation
print("NumPy Array:", np_array + np.array([10, 20, 30, 40, 50])) # Element-wise
addition

# 2. Scalar Multiplication
print("\nScalar Multiplication:")
print("Python List:", py_list * 2) # Repeats the list elements
print("NumPy Array:", np_array * 2) # Multiplies each element by 2

29. Write sample Python programs to demonstrate importing data in Python

1. Importing CSV Data with Pandas


import pandas as pd

# Load CSV data


data = pd.read_csv('data.csv')
# Display the first few rows
print(data.head())
2. Importing Excel Data with Pandas
import pandas as pd

# Load Excel data


data = pd.read_excel('data.xlsx', sheet_name='Sheet1')

# Display the first few rows


print(data.head())
3. Importing JSON Data
import pandas as pd
# Load JSON data
data = pd.read_json('data.json')
# Display the first few rows
print(data.head())
4. Importing Data from a URL
import pandas as pd
# Load CSV data from a URL
url = 'https://people.sc.fsu.edu/~jburkardt/data/csv/hw_200.csv'
data = pd.read_csv(url)
# Display the first few rows
print(data.head())

30. Demonstrate statistical functions using Python libraries

1. Mean, Median, and Mode


import pandas as pd
# Sample Data
data = {'Numbers': [10, 20, 20, 30, 40, 50, 50, 50, 60]}
df = pd.DataFrame(data)
# Mean
mean_value = df['Numbers'].mean()
# Median
median_value = df['Numbers'].median()
# Mode
mode_value = df['Numbers'].mode()[0]
print(f"Mean: {mean_value}")
print(f"Median: {median_value}")
print(f"Mode: {mode_value}")
2. Variance and Standard Deviation
import pandas as pd
# Sample Data
data = {'Numbers': [10, 20, 20, 30, 40, 50, 60]}
df = pd.DataFrame(data)
# Variance
variance_value = df['Numbers'].var()
# Standard Deviation
std_dev_value = df['Numbers'].std()
print(f"Variance: {variance_value}")
print(f"Standard Deviation: {std_dev_value}")
3. Correlation Between Two Variables
import pandas as pd
# Sample Data
data = {'X': [1, 2, 3, 4, 5], 'Y': [10, 20, 30, 40, 50]}
df = pd.DataFrame(data)
# Correlation
correlation_value = df['X'].corr(df['Y'])
print(f"Correlation between X and Y: {correlation_value}")

4. Covariance Between Two Variables


import pandas as pd
# Sample Data
data = {'X': [1, 2, 3, 4, 5], 'Y': [2, 4, 6, 8, 10]}
df = pd.DataFrame(data)
# Covariance
covariance_value = df['X'].cov(df['Y'])
print(f"Covariance between X and Y: {covariance_value}")
5. Describe Function for Summary Statistics
import pandas as pd
# Sample Data
data = {'A': [5, 10, 15, 20, 25], 'B': [2, 4, 6, 8, 10]}
df = pd.DataFrame(data)
# Summary statistics using describe()
summary_stats = df.describe()
print(summary_stats)

31. Demonstrate a machine learning model with the Python language by importing the necessary packages

Let's go through these steps with a simple demonstration of building a classification model using the Iris dataset from sklearn.

Dataset source:
https://gist.github.com/curran/a08a1080b88344b0c8a7
https://www.kaggle.com/datasets/uciml/iris

Step 1: Problem Definition


We want to classify iris flowers into 3 species based on their features (sepal
length, sepal width, petal length, petal width).

Step 2: Data Collection


We'll use the built-in Iris dataset from sklearn.

from sklearn.datasets import load_iris


import pandas as pd

# Load dataset
iris = load_iris()
X = iris.data
y = iris.target

# Convert to a DataFrame for better readability


df = pd.DataFrame(data=X, columns=iris.feature_names)
df['species'] = y
df.head()

Step 3: Data Preprocessing


The Iris dataset has no missing values, so we can proceed directly to splitting the data.
Step 4: Split Data into Training and Testing Sets
from sklearn.model_selection import train_test_split

# Split the dataset (80% for training, 20% for testing)


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
random_state=42)
Step 5: Model Selection
We'll use a simple Logistic Regression classifier.
from sklearn.linear_model import LogisticRegression

# Initialize the model


model = LogisticRegression(max_iter=200)

Step 6: Model Training


# Train the model
model.fit(X_train, y_train)

Step 7: Model Evaluation


# Evaluate the model
accuracy = model.score(X_test, y_test)
print(f"Accuracy: {accuracy:.2f}")

32. Working with Images and OpenCV Examples

OpenCV functions for Reading, Showing, Writing an Image File


OpenCV provides the following functions for this purpose −

imread() function − This is the function for reading an image. OpenCV imread()
supports various image formats like PNG, JPEG, JPG, TIFF, etc.

imshow() function − This is the function for showing an image in a window. The
window automatically fits to the image size. OpenCV imshow() supports various
image formats like PNG, JPEG, JPG, TIFF, etc.

imwrite() function − This is the function for writing an image. OpenCV imwrite()
supports various image formats like PNG, JPEG, JPG, TIFF, etc.

Import the OpenCV package as shown:

import cv2
Now, for reading a particular image, use the imread() function –

image = cv2.imread('image_flower.jpg') # Give full path of the image

For showing the image, use the imshow() function. The name of the window in
which you can see the image would be image_flower.

cv2.imshow('image_flower', image)
cv2.waitKey(0)
cv2.destroyAllWindows()

Color Space Conversion
In OpenCV, the images are not stored by using the conventional RGB color, rather they are
stored in the reverse order i.e. in the BGR order. Hence the default color code while reading
an image is BGR. The cvtColor() color conversion function is used for converting the image from one color code to another.

import cv2
image = cv2.imread('image_flower.jpg')
cv2.imshow('BGR_Penguins', image)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow('gray_penguins', gray)
cv2.waitKey(0)
cv2.destroyAllWindows()

Edge Detection
Humans, after seeing a rough sketch, can easily recognize many object types and their poses. That is why edges play an important role in human life as well as in computer vision applications. OpenCV provides a very simple and useful function called Canny() for detecting edges.
import cv2
import numpy as np
image = cv2.imread('Penguins.jpg')
cv2.imwrite('edges_Penguins.jpg', cv2.Canny(image, 200, 300))
cv2.imshow('edges', cv2.imread('edges_Penguins.jpg'))
cv2.waitKey(0)

Face Detection
Face detection is one of the most fascinating applications of computer vision. OpenCV has a built-in facility to perform face detection; here we use the Haar cascade classifier.
import cv2
import numpy as np
face_detection = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
img = cv2.imread('AB.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_detection.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    img = cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 3)
cv2.imwrite('Face_AB.jpg', img)

