AI Lab Document-1

The document contains multiple Python programs demonstrating various algorithms and techniques in data science and machine learning, including breadth-first search, depth-first search, greedy best-first search, linear regression, decision trees, logistic regression, and support vector machines. Each program includes code snippets, example data, and outputs for tasks such as predicting car prices, weather conditions, email classification, and flower classification. The document serves as a practical guide for implementing these algorithms using Python libraries like scikit-learn and pandas.


AI LAB

PENDING PROGRAMS: 4TH, 5TH.
1. Python program on problem solving by searching: breadth-first search.
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}
visited = []  # list for visited nodes
queue = []    # initialize a queue

def bfs(visited, graph, node):  # function for BFS
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)
        print(m, end="")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# driver code
print("following is the breadth-first search")
bfs(visited, graph, '5')  # function calling

output:
following is the breadth-first search
537248
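
Note: list.pop(0) shifts every remaining element, so each dequeue costs O(n). A minimal variant (a sketch, not part of the original lab listing) using the standard-library collections.deque gives O(1) dequeues:

from collections import deque

def bfs_deque(graph, start):
    visited = {start}
    queue = deque([start])
    while queue:
        m = queue.popleft()  # O(1) dequeue
        print(m, end="")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)

bfs_deque(graph, '5')  # prints 537248, the same order as above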

2. Python program on problem solving by searching: depth-first search.
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}
visited = set()  # set to keep track of visited nodes of the graph

def dfs(visited, graph, node):  # function for DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# driver code
print("following is the depth-first search")
dfs(visited, graph, '5')
output:

following is the depth-first search
5
3
2
4
8
7
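
An equivalent iterative version (a sketch using an explicit stack, assuming the same graph dictionary as above) avoids deep recursion on large graphs; neighbours are pushed in reverse so the visiting order matches the recursive output:

def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            print(node)
            visited.add(node)
            for neighbour in reversed(graph[node]):
                stack.append(neighbour)

dfs_iterative(graph, '5')  # prints 5 3 2 4 8 7, one per line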

3. Python program on problem solving by searching: greedy best-first search.

import heapq

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

# Heuristic values for each node (example values)
heuristics = {
    'A': 6,
    'B': 4,
    'C': 2,
    'D': 7,
    'E': 3,
    'F': 1
}

# Function to perform Greedy Best-First Search
def gbfs(graph, start, goal):
    visited = set()
    priority_queue = [(heuristics[start], start)]
    while priority_queue:
        # Get the node with the lowest heuristic value
        _, node = heapq.heappop(priority_queue)
        if node not in visited:
            print(node)
            visited.add(node)

            if node == goal:
                print(f"Goal {goal} found!")
                return

            for neighbor in graph[node]:
                if neighbor not in visited:
                    heapq.heappush(priority_queue, (heuristics[neighbor], neighbor))

# Driver code
gbfs(graph, 'A', 'F')
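
output (expected, traced by hand from the code and heuristic values above):
A
C
F
Goal F found!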

-OR-
3. Python program on problem solving by searching: greedy best-first search.

import heapq

class Graph:
    def __init__(self, graph, heuristic):
        self.graph = graph
        self.heuristic = heuristic
        self.visited = set()
        self.queue = []
        # (heuristic value, current node, path); the start node is seeded with
        # an arbitrary priority, since it is popped first regardless
        heapq.heappush(self.queue, (self.heuristic['A'], 'START', []))

    def greedy_best_first_search(self, goal):
        while self.queue:
            _, node, path = heapq.heappop(self.queue)

            if node in self.visited:
                continue

            self.visited.add(node)
            path = path + [node]

            if node == goal:
                return path

            for neighbor, cost in self.graph[node].items():
                if neighbor not in self.visited:
                    heapq.heappush(self.queue, (self.heuristic[neighbor], neighbor, path))

        return None

# Example usage
graph = {
    'START': {'A': 10, 'B': 5},
    'A': {'C': 5},
    'B': {'C': 20},
    'C': {'GOAL': 5},
    'GOAL': {}
}

heuristic = {
    'A': 7,  # Heuristic values are estimates
    'B': 6,
    'C': 2,
    'GOAL': 0
}

g = Graph(graph, heuristic)
path = g.greedy_best_first_search('GOAL')

if path:
    print("Greedy Best-First Search path:", path)
else:
    print("Goal not reachable.")

output:
Greedy Best-First Search path: ['START', 'B', 'C', 'GOAL']

6. Python program to demonstrate supervised machine learning.

from scipy import stats

x = [5, 7, 8, 7, 2, 17, 2, 9, 4, 11, 12, 9, 6]
y = [99, 86, 87, 88, 111, 86, 103, 87, 94, 78, 77, 85, 86]

slope, intercept, r, p, std_err = stats.linregress(x, y)

def myfunc(x):
    return slope * x + intercept

speed = myfunc(10)
print(speed)
output:
85.59308314937454
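
stats.linregress also returns the correlation coefficient r; printing it (a small addition, not in the original listing) shows how well the fitted line describes the data, with values near -1 or 1 indicating a strong linear relationship:

print(r)  # correlation coefficient between x and y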

7. Python program to predict the price of a car using a decision tree.

# Importing necessary libraries
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

data = {
    'km_driven': [50000, 60000, 30000, 35000, 40000],
    'year': [2016, 2014, 2018, 2015, 2017],
    'make': ['Toyota', 'Honda', 'Toyota', 'Honda', 'Toyota'],
    'model': ['Camry', 'Accord', 'Corolla', 'Civic', 'Rav4'],
    'price': [22000, 18000, 24000, 17000, 26000]
}

df = pd.DataFrame(data)
# df = pd.read_csv('cardekho.csv')
X = df[['km_driven', 'year']]
y = df['price']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = DecisionTreeRegressor(random_state=42)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
new_car_features = [[45000, 2019]]
predicted_price = model.predict(new_car_features)
print(f'Predicted price for the new car: Rupees {predicted_price[0]}')

output:
Predicted price for the new car: Rupees 24000.0
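
Note: the 'make' and 'model' columns are created but never used, since DecisionTreeRegressor needs numeric features. A minimal sketch (an assumed extension, using pandas one-hot encoding) of how they could be included:

X = pd.get_dummies(df[['km_driven', 'year', 'make', 'model']],
                   columns=['make', 'model'])
# New cars must then be encoded with the same columns before calling model.predict().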

8. Python program of a weather prediction model that predicts whether or not there'll be rain on a particular day.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

data = {
    "temp": [25, 38, 27, 40],
    "humidity": [80, 70, 75, 60],
    "rain": [1, 0, 1, 0]
}
df = pd.DataFrame(data)
print(df)
X = df[['temp', 'humidity']]
y = df['rain']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
new_data = [[38, 90]]
prediction = model.predict(new_data)

if prediction[0] == 1:
    print("Prediction: It will rain.")
else:
    print("Prediction: It will not rain.")
output:
   temp  humidity  rain
0    25        80     1
1    38        70     0
2    27        75     1
3    40        60     0

C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\base.py:493: UserWarning: X does not have valid feature names, but LogisticRegression was fitted with feature names
  warnings.warn(

Prediction: It will rain.
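
The UserWarning appears because the model was fitted on a DataFrame (which carries column names) while new_data is a plain list. A minimal fix, sketched below (the same applies to programs 7 and 9), is to wrap the new sample in a DataFrame with matching columns:

new_data = pd.DataFrame([[38, 90]], columns=['temp', 'humidity'])
prediction = model.predict(new_data)  # no feature-name warning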

9. Python program of a profit prediction model that states the probable profit that can be generated from the sale of a product.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

data = {
    "price": [100, 120, 110],
    "advertise": [20, 30, 25],
    "sold": [100, 90, 95],
    "profit": [5000, 5500, 6000]
}
df = pd.DataFrame(data)
print(df)
X = df[['price', 'advertise', 'sold']]  # Features
y = df['profit']

# Split the data into training and testing sets (80% train, 20% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Initialize the model
model = LinearRegression()

# Train the model
model.fit(X_train, y_train)

# Predict on the test set
y_pred = model.predict(X_test)

# Example prediction for a new product sale
new_data = [[110, 20, 80]]  # Price = 110, Advertising Spend = 20, Units Sold = 80
predicted_profit = model.predict(new_data)

print(f"Predicted Profit: ${predicted_profit[0]:.2f}")


output:
   price  advertise  sold  profit
0    100         20   100    5000
1    120         30    90    5500
2    110         25    95    6000
C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\base.py:493: UserWarning: X does not have valid feature names, but LinearRegression was fitted with feature names
  warnings.warn(
Predicted Profit: $5333.33

10. Python program to classify emails as spam or not spam.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, classification_report

emails = [
    ("Hey, there is a sale going on, don't miss out!", "spam"),
    ("Meeting agenda for today's discussion", "not spam"),
    ("Get a free gift card on purchases over $50", "spam"),
    ("Reminder: Team meeting at 2 PM", "not spam"),
    ("Limited time offer, get 50% off on all items", "spam"),
    ("Please review and approve the proposal", "not spam")
]

X = [email[0] for email in emails]
y = [email[1] for email in emails]

# Vectorize the text data
vectorizer = CountVectorizer()
X_vectorized = vectorizer.fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(X_vectorized, y, test_size=0.2, random_state=42)

classifier = MultinomialNB()
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)

print("\nClassification Report email:")
print(classification_report(y_test, y_pred))

# new_email = ["Hurry! Limited time offer, buy now and get 30% off"]
new_email = ["Requesting notes"]
new_email_vectorized = vectorizer.transform(new_email)
prediction = classifier.predict(new_email_vectorized)[0]
print(f"\nPrediction for '{new_email[0]}': {prediction}")

output:
Classification Report email:

              precision    recall  f1-score   support

    not spam       1.00      1.00      1.00         1
        spam       1.00      1.00      1.00         1

    accuracy                           1.00         2
   macro avg       1.00      1.00      1.00         2
weighted avg       1.00      1.00      1.00         2

Prediction for 'Requesting notes': not spam
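
accuracy_score is imported above but never called; a one-line addition (not in the original listing) would report the overall test accuracy:

print("Accuracy:", accuracy_score(y_test, y_pred))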

11. Python program that demonstrates how to classify flowers using a support vector machine (SVM) classifier.
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report, accuracy_score

# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Convert to DataFrame for better readability (optional)
iris_df = pd.DataFrame(X, columns=['Sepal Length', 'Sepal Width', 'Petal Length', 'Petal Width'])
iris_df['Class'] = y
print("Iris Dataset:")
print(iris_df.head())

# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Normalize the features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Create and train the SVM classifier
svm_classifier = SVC(kernel='linear', random_state=42)  # Linear kernel
svm_classifier.fit(X_train, y_train)

# Make predictions on the test set
y_pred = svm_classifier.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print(f"\nTest accuracy: {accuracy:.4f}")

# Print the classification report
print("\nClassification Report:")
print(classification_report(y_test, y_pred, target_names=iris.target_names))

# Predict new data
new_data = np.array([[5.1, 3.5, 1.4, 0.2],
                     [6.2, 2.8, 4.8, 1.8],
                     [7.1, 3.0, 5.9, 2.1]])
new_data_scaled = scaler.transform(new_data)
new_predictions = svm_classifier.predict(new_data_scaled)

print("\nPredictions for new data:")
for i, pred in enumerate(new_predictions):
    print(f"Sample {i + 1}: {iris.target_names[pred]}")

output:
Iris Dataset:
   Sepal Length  Sepal Width  Petal Length  Petal Width  Class
0           5.1          3.5           1.4          0.2      0
1           4.9          3.0           1.4          0.2      0
2           4.7          3.2           1.3          0.2      0
3           4.6          3.1           1.5          0.2      0
4           5.0          3.6           1.4          0.2      0

Test accuracy: 0.9667

Classification Report:
              precision    recall  f1-score   support

      setosa       1.00      1.00      1.00        10
  versicolor       1.00      0.89      0.94         9
   virginica       0.92      1.00      0.96        11

    accuracy                           0.97        30
   macro avg       0.97      0.96      0.97        30
weighted avg       0.97      0.97      0.97        30

Predictions for new data:
Sample 1: setosa
Sample 2: virginica
Sample 3: virginica

12. Python program that demonstrates how to use a basic artificial neural network (ANN) to classify students based on their height and weight.

input = [0.1, 0.5, 0.2]
weight = [0.4, 0.3, 0.6]
t = 0.5

def step(ws):
    if ws > t:
        return 1
    else:
        return 0

def percept():
    ws = 0
    for x, w in zip(input, weight):
        ws += x * w
        print(ws)
    return step(ws)

output = percept()
print(output)
output:
0.04000000000000001
0.19
0.31
0
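
The listing above implements a single perceptron step on fixed inputs rather than the height/weight classifier the title describes. A minimal sketch of the stated task (hypothetical height/weight data, using scikit-learn's MLPClassifier) could look like this:

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical data: height (cm), weight (kg); label 0 = junior, 1 = senior
X = np.array([[150, 45], [155, 50], [160, 52], [170, 68], [175, 72], [180, 80]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=42)
clf.fit(X, y)
print(clf.predict([[165, 60]]))  # predicted class for a new student
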
13. Python program that demonstrates text classification using scikit-learn and a naïve Bayes classifier.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

newsgroups = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))

X_train, X_test, y_train, y_test = train_test_split(newsgroups.data, newsgroups.target, test_size=0.2, random_state=42)

model = make_pipeline(
    TfidfVectorizer(stop_words='english'),
    MultinomialNB()
)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")

print("\nClassification Report:")
print(classification_report(y_test, y_pred, target_names=newsgroups.target_names))
print("\nConfusion Matrix:")
print(confusion_matrix(y_test, y_pred))

output:
Accuracy: 0.72

Classification Report:
                          precision    recall  f1-score   support

             alt.atheism       0.74      0.28      0.40       151
           comp.graphics       0.70      0.68      0.69       202
 comp.os.ms-windows.misc       0.68      0.66      0.67       195
comp.sys.ibm.pc.hardware       0.55      0.78      0.64       183
   comp.sys.mac.hardware       0.87      0.67      0.76       205
          comp.windows.x       0.90      0.81      0.85       215
            misc.forsale       0.79      0.70      0.74       193
               rec.autos       0.84      0.76      0.80       196
         rec.motorcycles       0.49      0.77      0.60       168
      rec.sport.baseball       0.92      0.83      0.88       211
        rec.sport.hockey       0.88      0.92      0.90       198
               sci.crypt       0.70      0.86      0.77       201
         sci.electronics       0.85      0.63      0.72       202
                 sci.med       0.91      0.86      0.88       194
               sci.space       0.80      0.83      0.82       189
  soc.religion.christian       0.43      0.94      0.59       202
      talk.politics.guns       0.70      0.80      0.75       188
   talk.politics.mideast       0.79      0.83      0.81       182
      talk.politics.misc       0.92      0.44      0.60       159
      talk.religion.misc       0.80      0.03      0.06       136

                accuracy                           0.72      3770
               macro avg       0.76      0.70      0.70      3770
            weighted avg       0.76      0.72      0.71      3770

Confusion Matrix:
[[ 42 0 1 1 0 0 0 1 4 1 2 6 0 2 3 68 5 13 1 1]
 [ 1 138 14 15 0 6 4 1 7 0 1 6 0 0 4 4 0 1 0 0]
 [ 1 14 129 26 3 7 0 0 10 0 0 3 1 0 0 1 0 0 0 0]
 [ 0 6 16 142 6 1 3 1 1 0 0 1 3 2 0 1 0 0 0 0]
 [ 0 2 8 24 138 0 5 1 12 0 0 7 2 0 3 2 1 0 0 0]
 [ 0 15 11 3 1 174 1 1 3 0 2 0 0 0 2 2 0 0 0 0]
 [ 0 3 1 26 5 1 135 3 3 0 1 6 2 1 4 1 1 0 0 0]
 [ 0 1 0 1 1 1 3 149 15 0 2 3 4 0 3 6 5 1 1 0]
 [ 0 2 0 0 0 1 6 8 130 3 2 3 2 1 3 5 2 0 0 0]
 [ 0 0 0 0 0 0 1 0 13 176 7 3 0 0 0 8 0 3 0 0]
 [ 0 0 0 0 0 0 0 2 6 1 182 1 0 1 0 4 0 1 0 0]
 [ 0 2 3 0 0 1 0 0 4 1 2 172 0 1 2 5 4 3 1 0]
 [ 0 8 2 19 5 1 12 4 6 1 1 6 127 2 4 2 1 1 0 0]
 [ 0 2 1 0 0 0 0 1 6 1 1 0 3 167 2 7 1 1 1 0]
 [ 0 3 2 1 0 0 1 1 10 1 0 5 3 1 157 4 0 0 0 0]
 [ 1 0 1 0 0 0 0 0 4 1 0 0 0 1 0 190 0 4 0 0]
 [ 0 0 1 0 0 0 0 1 10 1 2 9 0 0 2 11 150 0 1 0]
 [ 1 1 0 0 0 1 0 0 8 0 0 3 1 1 1 11 2 151 1 0]
 [ 1 0 0 0 0 0 0 3 6 1 1 7 1 3 5 22 29 10 70 0]
 [ 10 0 1 0 0 0 0 1 5 3 2 3 0 1 1 89 13 3 0 4]]
14. Python program using the SpeechRecognition library to perform speech recognition.

import speech_recognition as s

sr = s.Recognizer()
print("listening........")
with s.Microphone() as m:
    audio = sr.listen(m)
query = sr.recognize_google(audio, language='en-IN')  # 'en-IN' = Indian English
print(query)
output:

listening........
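
recognize_google raises s.UnknownValueError for unintelligible audio and s.RequestError when the service is unreachable. A slightly more robust sketch (same library; assumes a working microphone and an internet connection):

import speech_recognition as s

r = s.Recognizer()
with s.Microphone() as m:
    print("listening........")
    r.adjust_for_ambient_noise(m)  # calibrate the recognizer to background noise
    audio = r.listen(m)
try:
    query = r.recognize_google(audio, language='en-IN')
    print(query)
except s.UnknownValueError:
    print("Could not understand the audio.")
except s.RequestError as e:
    print(f"Speech service error: {e}")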

15. Python program using the PIL (Pillow) library to illustrate basic image processing operations like opening an image, resizing it, applying a filter, and saving the processed image.

from PIL import Image, ImageFilter

image_path = "lilly.png"
original_image = Image.open(image_path)
print(f"Original Image Format: {original_image.format}")
print(f"Original Image Size: {original_image.size}")
print(f"Original Image Mode: {original_image.mode}")
resized_image = original_image.resize((300, 200))
blurred_image = resized_image.filter(ImageFilter.GaussianBlur(radius=2))
output_path = "15processed_image.png"
blurred_image.save(output_path)
print(f"\nProcessed Image saved at: {output_path}")
output (the size and mode values depend on the actual lilly.png file):
Original Image Format: PNG
Original Image Size: (width, height)
Original Image Mode: RGB or RGBA

Processed Image saved at: 15processed_image.png
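
A few other common Pillow operations (a sketch; assumes the same lilly.png input) that fit naturally into this exercise:

gray_image = original_image.convert('L')  # convert to grayscale
rotated_image = resized_image.rotate(90)  # rotate 90 degrees counter-clockwise
thumb = original_image.copy()
thumb.thumbnail((100, 100))  # shrink in place, preserving the aspect ratio
thumb.save("15thumbnail.png")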
