Pakki Lab Programs

1. Python program on Problem Solving by Searching: Breadth-first Search.

2. Python program on Problem Solving by Searching: Depth-first Search.


3. Python program on Problem Solving by Searching: Greedy Best-first Search.
4. Python program on Problem Solving by Searching: A* Search.
5. Python program on Problem Solving by Searching: AO* Search (an informed/heuristic
search strategy).
6. Python program to demonstrate the supervised machine learning.
7. Python program to predict the price of the car using decision tree.
8. Python program to create weather prediction model that predicts whether or not
there’ll be rain.
9. Python program to create a golf prediction model that predicts whether golf will be
played based on the weather.
10. Python program to classify the emails as spam or not spam.
11. Python program that demonstrates how to classify flowers using a Support Vector
Machine (SVM) classifier.
12. Python program that demonstrates how to use a basic Artificial Neural Network
(ANN) to classify students based on their height and weight.
13. Python program that demonstrates text classification using scikit-learn and a Naive
Bayes classifier.
14. Python program using the speech recognition library to perform speech recognition.
15. Python program using the PIL (Pillow) library to illustrate basic image processing
operations like opening an image, resizing it, applying a filter, and saving the
processed image.
Program 1: Breadth-first Search (BFS)

Concept Explanation: Breadth-first search (BFS) is an algorithm for traversing or searching
tree or graph data structures. It starts at the tree root (or an arbitrary node of a graph) and
explores all neighbor nodes at the present depth before moving on to nodes at the next depth
level.

Python Code:

from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)

    while queue:
        vertex = queue.popleft()
        print(vertex, end=' ')

        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F', 'G'],
    'D': [],
    'E': [],
    'F': [],
    'G': []
}

bfs(graph, 'A')

Possible Input:

graph = {
'A': ['B', 'C'],
'B': ['D', 'E'],
'C': ['F', 'G'],
'D': [],
'E': [],
'F': [],
'G': []
}

start = 'A'

Possible Output:

A B C D E F G
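
Because BFS expands nodes level by level, it also finds shortest paths in unweighted graphs.
A minimal sketch (not part of the original program) that records each node's parent and
rebuilds the path, reusing the graph dictionary defined above:

from collections import deque

def bfs_shortest_path(graph, start, goal):
    # The parents dict doubles as the visited set.
    parents = {start: None}
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        if vertex == goal:
            # Walk parents back to the start and reverse.
            path = []
            while vertex is not None:
                path.append(vertex)
                vertex = parents[vertex]
            return path[::-1]
        for neighbor in graph[vertex]:
            if neighbor not in parents:
                parents[neighbor] = vertex
                queue.append(neighbor)
    return None  # goal unreachable

print(bfs_shortest_path(graph, 'A', 'G'))  # ['A', 'C', 'G']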

Program 2: Depth-first Search (DFS)


Concept Explanation: Depth-first search (DFS) is an algorithm for traversing or searching
tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary
node as the root in the case of a graph) and explores as far as possible along each branch
before backtracking.

Python Code:

def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start, end=' ')

    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)

graph = {
'A': ['B', 'C'],
'B': ['D', 'E'],
'C': ['F', 'G'],
'D': [],
'E': [],
'F': [],
'G': []
}

dfs(graph, 'A')

Explanation

1. Graph Representation:
o The graph is represented as an adjacency list. Each key is a node, and the
value is a list of its neighbors.
2. DFS Function:
o Visited Set: A set tracks visited nodes so that no node is processed more
than once; it is created on the first call and passed down through the recursion.
o Processing a Node: Mark the current node as visited, print it, then
recursively visit each unvisited neighbor, going as deep as possible along
each branch before backtracking.
3. Example Graph and DFS Execution:
o The example graph is a simple directed graph.
o The DFS function is called with the starting node 'A'.
o An iterative, stack-based variant is sketched after the possible output below.

Possible Input:

graph = {
'A': ['B', 'C'],
'B': ['D', 'E'],
'C': ['F', 'G'],
'D': [],
'E': [],
'F': [],
'G': []
}

start = 'A'

Possible Output:

A B D E C F G

This output represents the order in which nodes are visited using the DFS algorithm starting
from node 'A'.
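
The recursion above can also be expressed with an explicit stack. A minimal iterative sketch,
assuming the same graph dictionary, that produces the same visiting order:

def dfs_iterative(graph, start):
    # An explicit stack replaces the call stack of the recursive version.
    visited = set()
    stack = [start]
    while stack:
        vertex = stack.pop()
        if vertex not in visited:
            visited.add(vertex)
            print(vertex, end=' ')
            # Push neighbors in reverse so they are visited in listed order.
            for neighbor in reversed(graph[vertex]):
                if neighbor not in visited:
                    stack.append(neighbor)

dfs_iterative(graph, 'A')  # A B D E C F G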

Program 3: Greedy Best-first Search

Concept Explanation: Greedy best-first search always expands the node that appears best at
the moment, ranking nodes by an estimate of their distance to the goal while ignoring the cost
already paid. It uses a priority queue to explore the node that seems closest to the goal; in
the code below, the edge weights stand in for heuristic values.

Python Code:

import heapq

def greedy_best_first_search(graph, start, goal):
    # Priority queue ordered by heuristic value (here: the edge weight).
    open_list = [(0, start)]
    heapq.heapify(open_list)
    closed_list = set()
    came_from = {start: None}

    while open_list:
        _, current = heapq.heappop(open_list)
        if current in closed_list:
            continue
        if current == goal:
            break
        closed_list.add(current)

        for neighbor, weight in graph[current]:
            if neighbor not in closed_list:
                heapq.heappush(open_list, (weight, neighbor))
                came_from[neighbor] = current

    # Rebuild the path from the goal back to the start.
    path = []
    while current:
        path.append(current)
        current = came_from[current]

    return path[::-1]

graph = {
    'A': [('B', 1), ('C', 3)],
    'B': [('D', 3), ('E', 1)],
    'C': [('F', 1), ('G', 2)],
    'D': [],
    'E': [],
    'F': [],
    'G': []
}

path = greedy_best_first_search(graph, 'A', 'G')
print(path)

Possible Input:

graph = {
'A': [('B', 1), ('C', 3)],
'B': [('D', 3), ('E', 1)],
'C': [('F', 1), ('G', 2)],
'D': [],
'E': [],
'F': [],
'G': []
}

start = 'A'
goal = 'G'

Possible Output:

['A', 'C', 'G']

Program 4: A* Search

Concept Explanation: The A* search algorithm finds the shortest path from a starting node to
a goal node. It prioritizes nodes by f(n) = g(n) + h(n), combining the actual cost g from the
start with a heuristic estimate h of the remaining distance to the goal. For example, with the
heuristic h(n) = |ord(n) - ord(goal)| used below, node 'C' on the way to 'G' has g = 3 and
h = 4, so f = 7.

Python Code:

import heapq

def heuristic(node, goal):
    # Toy heuristic: distance between the letters in the alphabet.
    return abs(ord(node) - ord(goal))

def a_star_search(graph, start, goal):
    open_list = [(0, start)]
    heapq.heapify(open_list)
    closed_list = set()
    g_costs = {start: 0}
    came_from = {start: None}

    while open_list:
        _, current = heapq.heappop(open_list)
        if current in closed_list:
            continue
        if current == goal:
            break
        closed_list.add(current)

        for neighbor, weight in graph[current]:
            tentative_g_cost = g_costs[current] + weight
            if neighbor not in g_costs or tentative_g_cost < g_costs[neighbor]:
                g_costs[neighbor] = tentative_g_cost
                f_cost = tentative_g_cost + heuristic(neighbor, goal)
                heapq.heappush(open_list, (f_cost, neighbor))
                came_from[neighbor] = current

    # Rebuild the path from the goal back to the start.
    path = []
    while current:
        path.append(current)
        current = came_from[current]

    return path[::-1]

graph = {
    'A': [('B', 1), ('C', 3)],
    'B': [('D', 3), ('E', 1)],
    'C': [('F', 1), ('G', 2)],
    'D': [],
    'E': [],
    'F': [],
    'G': []
}

path = a_star_search(graph, 'A', 'G')
print(path)

Possible Input:

graph = {
'A': [('B', 1), ('C', 3)],
'B': [('D', 3), ('E', 1)],
'C': [('F', 1), ('G', 2)],
'D': [],
'E': [],
'F': [],
'G': []
}

start = 'A'
goal = 'G'

Possible Output:

['A', 'C', 'G']

Program 5: AO* Search (Informed/Heuristic Search Strategies)

Concept Explanation: The AO* algorithm solves problems that can be decomposed into smaller
subproblems which are solved independently and then combined; such problems are modeled as
AND-OR graphs. Recursively computed costs guide the choice of the most promising
decomposition to expand.

Python Code:

def ao_star(graph, start):
    """Simplified AO* for an AND-OR graph. Each entry in graph[node] is one
    AND-arc: a list of (child, cost) pairs that must all be solved together."""

    def recur_ao_star(node):
        # Cycle guard: a node already on the recursion stack is not re-expanded.
        if node in on_stack:
            return True, g_costs.get(node, float('inf'))
        # Leaf nodes (no outgoing AND-arcs) are terminal and solved at cost 0.
        if node not in graph or not graph[node]:
            g_costs[node] = 0
            return True, 0

        on_stack.add(node)
        optimal_cost = float('inf')

        for child_set in graph[node]:
            total_cost = 0
            all_solved = True
            for child, cost in child_set:
                solved, child_cost = recur_ao_star(child)
                if not solved:
                    all_solved = False
                total_cost += cost + child_cost
            if all_solved and total_cost < optimal_cost:
                optimal_cost = total_cost
                best_arc[node] = [child for child, _ in child_set]

        on_stack.remove(node)
        g_costs[node] = optimal_cost
        return optimal_cost != float('inf'), optimal_cost

    on_stack = set()
    g_costs = {}
    best_arc = {}

    recur_ao_star(start)

    # The solution of an AND-OR graph is a subgraph, not a single path:
    # list its nodes by walking the recorded best AND-arcs in preorder.
    solution = []

    def collect(node):
        solution.append(node)
        for child in best_arc.get(node, []):
            collect(child)

    collect(start)
    return solution

graph = {
    'A': [[('B', 1), ('C', 3)]],
    'B': [[('D', 3), ('E', 1)]],
    'C': [[('F', 1), ('G', 2)]],
    'D': [],
    'E': [],
    'F': [],
    'G': []
}
path = ao_star(graph, 'A')
print(path)

Explanation

1. Graph Representation:
o The graph is an AND-OR graph: each node maps to a list of AND-arcs, and each
AND-arc is a list of (child, cost) pairs that must all be solved together.
2. AO* Function:
o Leaf Nodes: Nodes with no outgoing AND-arcs are terminal and count as solved
at cost 0.
o Recursive Evaluation: For each AND-arc of a node, the costs of all its
children are evaluated recursively and summed with the edge costs.
o Best Arc Selection: The cheapest fully solved AND-arc is recorded for each
node, and the node's cost is updated to that minimum.
3. Example Usage:
o A sample AND-OR graph is defined, and the ao_star function is called with
the start node 'A'; it returns the nodes of the optimal solution subgraph in
preorder.

Possible Input:

graph = {
'A': [[('B', 1), ('C', 3)]],
'B': [[('D', 3), ('E', 1)]],
'C': [[('F', 1), ('G', 2)]],
'D': [],
'E': [],
'F': [],
'G': []
}

start = 'A'

Possible Output:

['A', 'B', 'D', 'E', 'C', 'F', 'G']

For the given example graph, the output lists the nodes of the optimal solution subgraph:
every AND-arc on the cheapest decomposition of the problem, walked in preorder from the
start node.

Program 6: Supervised Machine Learning


Concept Explanation: Supervised learning is a type of machine learning where the model is
trained on labeled data. The algorithm learns to map input data to the correct output based on
the provided labels.

Python Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Example dataset
data = {
'feature1': [1, 2, 3, 4, 5],
'feature2': [2, 4, 6, 8, 10],
'target': [3, 6, 9, 12, 15]
}
df = pd.DataFrame(data)

X = df[['feature1', 'feature2']]
y = df['target']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
mse = mean_squared_error(y_test, y_pred)

print("Mean Squared Error:", mse)


print("Predictions:", y_pred)

Possible Input:

   feature1  feature2  target
0         1         2       3
1         2         4       6
2         3         6       9
3         4         8      12
4         5        10      15

Possible Output:

Mean Squared Error: 0.0
Predictions: [9.]
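
As a sanity check, the mean squared error can be reproduced by hand as the average of the
squared residuals. A small sketch, assuming y_test and y_pred from the program above:

import numpy as np

# MSE = mean((actual - predicted)^2); should match mean_squared_error above.
manual_mse = np.mean((np.asarray(y_test) - y_pred) ** 2)
print("Manual MSE:", manual_mse)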

Program 7: Predict Car Price Using Decision Tree


Concept Explanation: Decision trees are a type of supervised machine learning used for
classification and regression tasks. They work by splitting the data into subsets based on the
value of input features, and these splits are represented as branches of a tree.

Python Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# Example dataset
data = {
'year': [2010, 2011, 2012, 2013, 2014],
'mileage': [50000, 40000, 30000, 20000, 10000],
'price': [15000, 14000, 13000, 12000, 11000]
}
df = pd.DataFrame(data)

X = df[['year', 'mileage']]
y = df['price']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = DecisionTreeRegressor()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
mse = mean_squared_error(y_test, y_pred)

print("Mean Squared Error:", mse)


print("Predictions:", y_pred)

Possible Input:

   year  mileage  price
0  2010    50000  15000
1  2011    40000  14000
2  2012    30000  13000
3  2013    20000  12000
4  2014    10000  11000

Possible Output:

Mean Squared Error: 0.0
Predictions: [13000.]

Program 8: Weather Prediction Model


Concept Explanation: A weather prediction model predicts the likelihood of certain weather
conditions from historical data. This example uses linear regression for simplicity,
thresholding its output at 0.5 to produce a rain / no-rain label.

Python Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import accuracy_score

# Example dataset
data = {
'temperature': [30, 25, 27, 35, 20],
'humidity': [70, 65, 80, 90, 60],
'rain': [1, 1, 1, 0, 0] # 1 for rain, 0 for no rain
}
df = pd.DataFrame(data)

X = df[['temperature', 'humidity']]
y = df['rain']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
y_pred = [1 if p >= 0.5 else 0 for p in y_pred]
accuracy = accuracy_score(y_test, y_pred)

print("Accuracy:", accuracy)
print("Predictions:", y_pred)

Possible Input:

   temperature  humidity  rain
0           30        70     1
1           25        65     1
2           27        80     1
3           35        90     0
4           20        60     0

Possible Output:

Accuracy: 1.0
Predictions: [1]
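
Thresholding a linear-regression output at 0.5 works for this toy data, but logistic
regression models the class probability directly. A possible variant, reusing the same
train/test split:

from sklearn.linear_model import LogisticRegression

# Logistic regression outputs class labels directly, so no manual
# 0.5 threshold on a regression output is needed.
clf = LogisticRegression()
clf.fit(X_train, y_train)
print("Predictions:", clf.predict(X_test))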

Program 9: Golf Prediction Model

Concept Explanation: This program creates a simple model that predicts whether golf will be
played, based on historical weather data. For this example, we use a decision tree classifier.

Python Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Example dataset
data = {
'weather': ['sunny', 'rainy', 'sunny', 'cloudy', 'sunny'],
'temperature': [30, 25, 28, 20, 35],
'play_golf': [1, 0, 1, 1, 0] # 1 for play, 0 for no play
}
df = pd.DataFrame(data)

# Encoding categorical data
df['weather'] = df['weather'].map({'sunny': 0, 'rainy': 1, 'cloudy': 2})

X = df[['weather', 'temperature']]
y = df['play_golf']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = DecisionTreeClassifier()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)

print("Accuracy:", accuracy)
print("Predictions:", y_pred)

Possible Input:

  weather  temperature  play_golf
0   sunny           30          1
1   rainy           25          0
2   sunny           28          1
3  cloudy           20          1
4   sunny           35          0

Possible Output:

Accuracy: 1.0
Predictions: [1]

Program 10: Email Spam Classification

Concept Explanation: Spam classification is a common application of machine learning, where
the goal is to classify emails as spam or not spam. This example uses a Naive Bayes
classifier.

Python Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# Example dataset
data = {
    'email': ['buy now', 'limited offer', 'meeting at 3',
              'project deadline', 'win a prize'],
'label': [1, 1, 0, 0, 1] # 1 for spam, 0 for not spam
}
df = pd.DataFrame(data)

X = df['email']
y = df['label']

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = MultinomialNB()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)

print("Accuracy:", accuracy)
print("Predictions:", y_pred)

Possible Input:

email label
0 buy now 1
1 limited offer 1
2 meeting at 3 0
3 project deadline 0
4 win a prize 1

Possible Output:

Accuracy: 1.0
Predictions: [1]
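
Once trained, the same fitted vectorizer must be used to transform any new email before
prediction. A short sketch with made-up example emails:

# New emails must go through the SAME fitted vectorizer before prediction.
new_emails = ['win money now', 'team meeting tomorrow']  # hypothetical examples
new_features = vectorizer.transform(new_emails)
print("New predictions:", model.predict(new_features))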

Program 11: Flower Classification Using SVM


Concept Explanation: Support Vector Machines (SVM) are supervised learning models that
analyze data for classification and regression analysis. This example classifies flowers based
on their features.

Python Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Example dataset (two classes are needed for the SVM to fit)
data = {
    'sepal_length': [5.1, 4.9, 4.7, 7.0, 6.4],
    'sepal_width': [3.5, 3.0, 3.2, 3.2, 3.2],
    'petal_length': [1.4, 1.4, 1.3, 4.7, 4.5],
    'petal_width': [0.2, 0.2, 0.2, 1.4, 1.5],
    'species': [0, 0, 0, 1, 1]  # 0 for setosa, 1 for versicolor
}
df = pd.DataFrame(data)

X = df[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']]
y = df['species']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = SVC(kernel='linear')
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)

print("Accuracy:", accuracy)
print("Predictions:", y_pred)

Possible Input:

   sepal_length  sepal_width  petal_length  petal_width  species
0           5.1          3.5           1.4          0.2        0
1           4.9          3.0           1.4          0.2        0
2           4.7          3.2           1.3          0.2        0
3           7.0          3.2           4.7          1.4        1
4           6.4          3.2           4.5          1.5        1

Possible Output:

Accuracy: 1.0
Predictions: [0]

Program 12: Text Mining


Concept Explanation: Text mining involves analyzing text data to derive meaningful
insights. This example uses the TF-IDF vectorizer to convert text data into numerical form
for further analysis.

Python Code:

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Example dataset
data = {
    'text': ['the quick brown fox', 'jumped over the lazy dog',
             'the fox is quick and brown'],
}
df = pd.DataFrame(data)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(df['text'])

print("TF-IDF Matrix:")
print(X.toarray())
print("Feature Names:", vectorizer.get_feature_names_out())

Possible Input:

text
0 the quick brown fox
1 jumped over the lazy dog
2 the fox is quick and brown

Possible Output:

TF-IDF Matrix: a 3 x 10 array of weights, one row per document and one column per
feature (exact values depend on the TF-IDF weighting).
Feature Names: ['and' 'brown' 'dog' 'fox' 'is' 'jumped' 'lazy' 'over' 'quick' 'the']
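
The weights are easier to read when paired with their feature names. A small sketch,
reusing X and vectorizer from the program above:

# Label each column of the TF-IDF matrix with its feature name.
tfidf_df = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names_out())
print(tfidf_df.round(2))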

Program 13: Natural Language Processing

Concept Explanation: Natural Language Processing (NLP) involves the interaction between
computers and humans using natural language. This example uses the NLTK library to
tokenize text data.

Python Code:

import nltk
from nltk.tokenize import word_tokenize

# nltk.download('punkt')  # uncomment on first run to fetch the tokenizer data

# Example text
text = "Natural Language Processing with Python is fun."
# Tokenizing the text
tokens = word_tokenize(text)

print("Tokens:", tokens)

Possible Input:


text = "Natural Language Processing with Python is fun."

Possible Output:


Tokens: ['Natural', 'Language', 'Processing', 'with', 'Python', 'is', 'fun', '.']
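
Tokenization is usually followed by further steps such as stopword removal. A minimal
sketch, assuming the NLTK stopwords corpus has been downloaded:

from nltk.corpus import stopwords

# nltk.download('stopwords')  # required once before first use
stop_words = set(stopwords.words('english'))
filtered = [t for t in tokens if t.lower() not in stop_words]
print("Filtered tokens:", filtered)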

Program 14: Object Detection Using Image Processing

Concept Explanation: Object detection involves identifying and locating objects within an
image. This example uses OpenCV to detect faces in an image.

Python Code:

import cv2

# Load the pretrained Haar cascade (the XML file ships with OpenCV; it must be
# in the working directory or referenced via cv2.data.haarcascades)
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Read the input image and convert it to grayscale
img = cv2.imread('test.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces
faces = face_cascade.detectMultiScale(gray, 1.1, 4)

# Draw a rectangle around each detected face
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

# Display the output
cv2.imshow('img', img)
cv2.waitKey()
cv2.destroyAllWindows()

Possible Input:

Image file named 'test.jpg'

Possible Output:

Image displayed with rectangles around detected faces.
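
On a machine without a display (for example, a remote server), cv2.imshow fails; writing
the result to disk is an alternative. A sketch with a hypothetical output filename:

# Save the annotated image instead of displaying it (headless environments).
cv2.imwrite('faces_detected.jpg', img)  # 'faces_detected.jpg' is a placeholder name
print("Detected", len(faces), "face(s)")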


Program 15: Linear Regression

Concept Explanation: Linear regression is a statistical method to model the relationship
between a dependent variable and one or more independent variables. This example
demonstrates simple linear regression.

Python Code:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Example dataset
X = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)
y = np.array([2, 3, 5, 7, 11])

model = LinearRegression()
model.fit(X, y)

# Predicting and plotting the regression line
y_pred = model.predict(X)
plt.scatter(X, y, color='blue')
plt.plot(X, y_pred, color='red')
plt.xlabel('X')
plt.ylabel('y')
plt.title('Linear Regression')
plt.show()

Possible Input:

X = [1, 2, 3, 4, 5]
y = [2, 3, 5, 7, 11]

Possible Output:

Scatter plot of X vs y with the regression line.
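
The fitted line itself can be inspected through the model's attributes. A short sketch,
reusing the fitted model from above:

# The fitted line is y = coef * x + intercept.
print("Slope:", model.coef_[0])
print("Intercept:", model.intercept_)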

Program 16: Multiple Linear Regression

Concept Explanation: Multiple linear regression models the relationship between a
dependent variable and two or more independent variables. This example demonstrates
multiple linear regression.

Python Code:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Example dataset
data = {
'feature1': [1, 2, 3, 4, 5],
'feature2': [2, 4, 6, 8, 10],
'target': [3, 6, 9, 12, 15]
}
df = pd.DataFrame(data)

X = df[['feature1', 'feature2']]
y = df['target']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
mse = mean_squared_error(y_test, y_pred)

print("Mean Squared Error:", mse)


print("Predictions:", y_pred)

Possible Input:

   feature1  feature2  target
0         1         2       3
1         2         4       6
2         3         6       9
3         4         8      12
4         5        10      15

Possible Output:

Mean Squared Error: 0.0
Predictions: [9.]

Program 17: Artificial Neural Networks (ANN)

Concept Explanation: Artificial Neural Networks (ANN) are computing systems inspired by
biological neural networks. This example demonstrates a simple ANN using
TensorFlow/Keras.

Python Code:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Example dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0]) # XOR problem

model = Sequential()
model.add(Dense(2, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
model.fit(X, y, epochs=1000, verbose=0)

predictions = model.predict(X)
print("Predictions:", (predictions > 0.5).astype(int))

Possible Input:

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

Possible Output:

Predictions: [[0]
[1]
[1]
[0]]
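
Note that a network this small can get stuck in a local minimum on XOR, so the predictions
may fall short of perfect on some runs. The compiled accuracy metric can be checked
directly; a short sketch reusing the trained model:

# evaluate() returns the loss and any compiled metrics (here: accuracy).
loss, accuracy = model.evaluate(X, y, verbose=0)
print("Training accuracy:", accuracy)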

Program 18: K-Nearest Neighbors (KNN)

Concept Explanation: K-Nearest Neighbors (KNN) is a simple, instance-based learning
algorithm. This example demonstrates KNN classification using scikit-learn.

Python Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Example dataset
data = {
'feature1': [1, 2, 3, 4, 5],
'feature2': [2, 4, 6, 8, 10],
'target': [0, 0, 1, 1, 1]
}
df = pd.DataFrame(data)

X = df[['feature1', 'feature2']]
y = df['target']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)

print("Accuracy:", accuracy)
print("Predictions:", y_pred)

Possible Input:

   feature1  feature2  target
0         1         2       0
1         2         4       0
2         3         6       1
3         4         8       1
4         5        10       1

Possible Output:

Accuracy: 1.0
Predictions: [1]

Program 19: Support Vector Machines (SVM)

Concept Explanation: Support Vector Machines (SVM) are supervised learning models
used for classification and regression tasks. This example demonstrates SVM classification
using Scikit-learn.

Python Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Example dataset
data = {
'feature1': [1, 2, 3, 4, 5],
'feature2': [2, 4, 6, 8, 10],
'target': [0, 0, 1, 1, 1]
}
df = pd.DataFrame(data)

X = df[['feature1', 'feature2']]
y = df['target']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = SVC(kernel='linear')
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)

print("Accuracy:", accuracy)
print("Predictions:", y_pred)

Possible Input:

   feature1  feature2  target
0         1         2       0
1         2         4       0
2         3         6       1
3         4         8       1
4         5        10       1

Possible Output:

Accuracy: 1.0
Predictions: [1]

Program 20: Principal Component Analysis (PCA)

Concept Explanation: Principal Component Analysis (PCA) is a dimensionality-reduction
technique used to transform high-dimensional data into a lower-dimensional space. This
example demonstrates PCA using scikit-learn.

Python Code:

import pandas as pd
from sklearn.decomposition import PCA

# Example dataset
data = {
'feature1': [1, 2, 3, 4, 5],
'feature2': [2, 4, 6, 8, 10],
'feature3': [3, 6, 9, 12, 15]
}
df = pd.DataFrame(data)

X = df[['feature1', 'feature2', 'feature3']]

pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

print("PCA Components:")
print(X_pca)

Possible Input:

   feature1  feature2  feature3
0         1         2         3
1         2         4         6
2         3         6         9
3         4         8        12
4         5        10        15

Possible Output:

PCA Components (signs may be flipped, and the second component is zero up to
floating-point noise, since the three features are perfectly collinear):
[[-7.48331477e+00  0.00000000e+00]
 [-3.74165739e+00  0.00000000e+00]
 [ 0.00000000e+00  0.00000000e+00]
 [ 3.74165739e+00  0.00000000e+00]
 [ 7.48331477e+00  0.00000000e+00]]
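
How much variance each component retains can be checked directly; a short sketch reusing
the fitted pca object:

# For this perfectly collinear data, the first component should carry
# essentially all of the variance.
print("Explained variance ratio:", pca.explained_variance_ratio_)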

Program 21: Basic Image Processing with Pillow (PIL)

Concept Explanation: This program uses the PIL (Pillow) library to illustrate basic image
processing operations: opening an image, resizing it, applying a filter, and saving the
processed image.

from PIL import Image, ImageFilter

def process_image(input_path, output_path):
    # Open an image
    img = Image.open(input_path)
    print("Image opened successfully.")

    # Resize the image
    resized_img = img.resize((300, 300))
    print("Image resized successfully.")

    # Apply a filter to the image
    filtered_img = resized_img.filter(ImageFilter.BLUR)
    print("Filter applied successfully.")

    # Save the processed image
    filtered_img.save(output_path)
    print("Image saved successfully.")

if __name__ == "__main__":
    input_image_path = 'input_image.jpg'    # Path to the input image
    output_image_path = 'output_image.jpg'  # Path to save the processed image

    process_image(input_image_path, output_image_path)

Explanation

1. Importing Libraries:
o Image and ImageFilter are imported from the PIL (Pillow) library to handle
image operations and apply filters.
2. Opening an Image:
o The image is opened using Image.open(input_path), where input_path is
the path to the input image file.
3. Resizing the Image:
o The image is resized to 300x300 pixels using the resize() method.
4. Applying a Filter:
o A blur filter is applied to the resized image using
filter(ImageFilter.BLUR).
5. Saving the Processed Image:
o The processed image is saved to the specified output path using the save()
method.
6. Main Function:
o The process_image() function is called with paths to the input and output
images.

Instructions

1. Install Pillow:
o Ensure you have Pillow installed in your Python environment. You can install
it using pip:

pip install Pillow
2. Input and Output Paths:
o Update the input_image_path with the path to your input image file.
o Update the output_image_path with the desired path to save the processed
image.
3. Run the Program:
o Run the script, and it will perform the specified operations on the input image
and save the processed image.

This example demonstrates basic image processing operations using the Pillow library. You
can further extend it to include other operations such as cropping, rotating, adjusting colors,
and more.
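
As a starting point for such extensions, a minimal sketch of cropping and rotating, with
hypothetical file names:

from PIL import Image

def crop_and_rotate(input_path, output_path):
    img = Image.open(input_path)
    # Crop a 200x200 box from the top-left corner: (left, upper, right, lower).
    cropped = img.crop((0, 0, 200, 200))
    # Rotate 90 degrees counterclockwise; expand=True keeps the whole image.
    rotated = cropped.rotate(90, expand=True)
    rotated.save(output_path)

crop_and_rotate('input_image.jpg', 'cropped_rotated.jpg')  # placeholder paths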
