
Introduction to Artificial Intelligence (3171105)

VISHWAKARMA GOVERNMENT ENGINEERING COLLEGE – CHANDKHEDA
Gujarat Technological University, Ahmedabad

Electronics & Communication Department

Winter 2024-25

SEMESTER – 7

Introduction to Artificial Intelligence

(Subject Code: 3171105)

Name of Student: Patel Dharmit

Enrollment Number: 210170111127
Certificate

This is to certify that Shri/Ms. Patel Dharmit of Branch ELECTRONICS AND COMMUNICATION, Semester 7th, with Enrollment No. 210170111127, has satisfactorily completed the term work in subject code 3171105, subject name INTRODUCTION TO ARTIFICIAL INTELLIGENCE, within the four walls of Vishwakarma Government Engineering College, Chandkheda, Ahmedabad.

Date of Submission:

Professor Sign:
List of Experiments

Sr no | Date | Practical Aim | Sign | Marks

1. Create a program using the Pandas, NumPy library that implements grouping, filtering, sorting, and merging operations.
2. Create a program using a sample dataset (heart disease data) to implement a decision tree algorithm.
3. Create a program to implement a backpropagation algorithm in Python.
4. Create a program to implement a simple stock market prediction based on historical datasets.
5. Create a program using NumPy to implement a simple perceptron model.
6. Create a program to perform sentiment analysis on a textual dataset (Twitter feeds, E-commerce reviews).
7. Create a program using any machine learning framework like TensorFlow, Keras to implement a linear regression algorithm.
8. Create a program using any machine learning framework like TensorFlow, Keras to implement a simple convolutional neural network.
9. Create a program using a convolutional neural network that identifies objects like water bottles, caps, books, etc. using the webcam.
10. Create a program using any machine learning framework like TensorFlow, Keras to implement a logistic regression algorithm.

EXPERIMENT: 1

AIM: Create a program using the Pandas, NumPy library that implements
grouping, filtering, sorting, and merging operations.

Code:

import pandas as pd

# Creating sample dataframes
Placement = {
    'Name': ['Nikhil', 'Dharmit', 'Nikunj', 'Zeel'],
    'Placement': ['Einfochip', 'Tata', 'Pronesis', 'Tata'],
    'Salary': [50000, 60000, 55000, 53000]
}

Players = {
    'Name': ['Chirag', 'Sanket', 'Dharmit', 'Sumit'],  # 'Dharmit' appears in both frames so the merge has a match
    'Sports': ['Cricket', 'Football', 'Badminton', 'Volleyball'],
    'Rating': [2, 8, 6, 4]
}

Placementdata = pd.DataFrame(Placement)
Playersdata = pd.DataFrame(Players)

# Display DataFrames
print("DataFrame 1:\n", Placementdata)
print("\nDataFrame 2:\n", Playersdata)

# Grouping data (mean salary by name)
print("\nGrouped by Name (mean of Salary):\n",
      Placementdata.groupby('Name')['Salary'].mean())

# Filtering data (salary > 52000)
print("\nFiltered DataFrame (Salary > 52000):\n",
      Placementdata[Placementdata['Salary'] > 52000])

# Sorting data by 'Rating'
print("\nSorted Players DataFrame (by Rating):\n",
      Playersdata.sort_values(by='Rating'))

# Merging data on 'Name'
# 'inner' keeps only matching records; change to 'outer' or 'left' if needed
print("\nMerged DataFrame (on Name, inner join):\n",
      pd.merge(Placementdata, Playersdata, on='Name', how='inner'))
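
For comparison, an outer join keeps the rows from both DataFrames that have no match, filling the gaps with NaN (a small variation on the merge above):

# Variation: an outer join keeps unmatched names from both DataFrames (NaN where absent)
print("\nMerged DataFrame (on Name, outer join):\n",
      pd.merge(Placementdata, Playersdata, on='Name', how='outer'))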

Output:

Conclusion:

EXPERIMENT: 2

Aim: Create a program using a sample dataset (heart disease data) to implement a
decision tree algorithm.

Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
from sklearn import tree

# Step 2: Load the dataset (the path below is local to the author's machine; update as needed)
file_path = r'C:\Users\Dell\OneDrive\Desktop\ai practical\Housingprice.csv'
df = pd.read_csv(file_path)

# Step 3: Encode categorical columns
df['mainroad'] = df['mainroad'].map({'yes': 1, 'no': 0})
df['guestroom'] = df['guestroom'].map({'yes': 1, 'no': 0})
df['basement'] = df['basement'].map({'yes': 1, 'no': 0})
df['hotwaterheating'] = df['hotwaterheating'].map({'yes': 1, 'no': 0})
df['airconditioning'] = df['airconditioning'].map({'yes': 1, 'no': 0})
df['parking'] = df['parking'].map({'yes': 1, 'no': 0})
df['prefarea'] = df['prefarea'].map({'yes': 1, 'no': 0})
df['furnishingstatus'] = df['furnishingstatus'].map({'furnished': 2, 'semi-furnished': 1, 'unfurnished': 0})

# Step 4: Define features and target


X = df.drop(columns=['price'])
y = df['price']

# Step 5: Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Step 6: Train the Decision Tree Regressor
reg = DecisionTreeRegressor(random_state=42)
reg.fit(X_train, y_train)

# Step 7: Predictions and Evaluation
y_pred = reg.predict(X_test)
print("Mean Squared Error:", mean_squared_error(y_test, y_pred))
print("R-squared:", r2_score(y_test, y_pred))

# Step 8: Visualize the Decision Tree
plt.figure(figsize=(10, 8))
tree.plot_tree(reg, feature_names=X.columns, filled=True)
plt.show()
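
Note that the aim mentions heart disease data, while the code above applies a regression tree to the housing price dataset. A minimal sketch of a decision tree classifier for the stated aim, assuming a hypothetical heart.csv with numeric feature columns and a binary 'target' column:

# Hypothetical sketch: decision tree classifier on a heart disease CSV
# Assumes 'heart.csv' with numeric features and a binary 'target' column
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

heart = pd.read_csv('heart.csv')
Xh = heart.drop(columns=['target'])
yh = heart['target']
Xh_train, Xh_test, yh_train, yh_test = train_test_split(Xh, yh, test_size=0.3, random_state=42)
clf = DecisionTreeClassifier(random_state=42).fit(Xh_train, yh_train)
print("Accuracy:", accuracy_score(yh_test, clf.predict(Xh_test)))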

Output:

Conclusion:

EXPERIMENT: 3

Aim: Create a program to implement a backpropagation algorithm in Python.

Code:

import numpy as np
import pandas as pd

# Load data from CSV (local path; update as needed)
data = pd.read_csv(r'C:\Users\Dell\OneDrive\Desktop\ai practical\Housingprice.csv')

# Select features and target
X = data[['area', 'bedrooms', 'bathrooms']].values
y = data[['price']].values

# Scale units
X = X / np.amax(X, axis=0)  # Normalize each feature to [0, 1]
y = y / np.max(y)           # Normalize target price

class NeuralNetwork(object):

    def __init__(self):
        # Parameters
        self.inputSize = 3   # Three input features: area, bedrooms, bathrooms
        self.outputSize = 1
        self.hiddenSize = 4

        # Weights
        self.W1 = np.random.randn(self.inputSize, self.hiddenSize)   # (3x4) weight matrix, input to hidden layer
        self.W2 = np.random.randn(self.hiddenSize, self.outputSize)  # (4x1) weight matrix, hidden to output layer

    def feedForward(self, X):
        # Forward propagation through the network
        self.z = np.dot(X, self.W1)         # Dot product of X (input) and first set of weights (3x4)
        self.z2 = self.sigmoid(self.z)      # Hidden layer activation
        self.z3 = np.dot(self.z2, self.W2)  # Dot product of hidden layer (z2) and second set of weights (4x1)
        output = self.sigmoid(self.z3)
        return output

    def sigmoid(self, s, deriv=False):
        # Activation function; with deriv=True, s is assumed to already be a sigmoid output
        if deriv:
            return s * (1 - s)
        return 1 / (1 + np.exp(-s))

    def backward(self, X, y, output):
        # Backward propagate through the network
        self.output_error = y - output  # Error in output
        self.output_delta = self.output_error * self.sigmoid(output, deriv=True)  # Derivative of sigmoid applied to error

        self.z2_error = self.output_delta.dot(self.W2.T)  # How much the hidden layer contributes to the output error
        self.z2_delta = self.z2_error * self.sigmoid(self.z2, deriv=True)  # Derivative of sigmoid applied to z2 error

        # Adjusting weights (note: no explicit learning rate, i.e., a step size of 1)
        self.W1 += X.T.dot(self.z2_delta)            # Adjust first set (input -> hidden) weights
        self.W2 += self.z2.T.dot(self.output_delta)  # Adjust second set (hidden -> output) weights

    def train(self, X, y):
        # Train the network: one forward pass followed by one backward pass
        output = self.feedForward(X)
        self.backward(X, y, output)

# Initialize and train the Neural Network
NN = NeuralNetwork()

for i in range(1000):  # Train the NN for 1000 iterations
    NN.train(X, y)
    if i % 100 == 0:
        # Print the loss every 100 iterations
        print("Loss after iteration", i, ":", str(np.mean(np.square(y - NN.feedForward(X)))))

print("Input: \n" + str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n" + str(NN.feedForward(X)))
print("Loss: " + str(np.mean(np.square(y - NN.feedForward(X)))))

Output:

Conclusion:

EXPERIMENT: 4

Aim: Create a program to implement a simple stock market prediction based on historical datasets.

Importing Libraries and data

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt

# Load the dataset


data = pd.read_csv('ADANIPORTS.csv')
print(data.head())
# Parse the date column (day-month-year format) and use it as the index
data['Date'] = pd.to_datetime(data['Date'], format='%d-%m-%Y')
data.set_index('Date', inplace=True)

# Use only the 'Close' price as the feature for prediction
data['Prediction'] = data['Close'].shift(-1)  # Predict the next day's closing price
X = np.array(data[['Close']])  # Feature set
X = X[:-1]  # Drop the last row, which has no next-day label

y = np.array(data['Prediction'])  # Labels (next day's close)
y = y[:-1]

# Split the data into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the Linear Regression model


model = LinearRegression()
model.fit(X_train, y_train)
predictions = model.predict(X_test)

# Note: train_test_split shuffles the rows, so these dates are only an approximate x-axis
plt.figure(figsize=(10, 9))
plt.subplot(311)
plt.plot(data.index[-len(y_test):], y_test, label='True Price', color='green')
plt.legend()
plt.subplot(312)
plt.plot(data.index[-len(y_test):], predictions, label='Predicted Price', color='red')
plt.legend()
plt.subplot(313)
plt.plot(data.index[-len(y_test):], y_test, label='True Price', color='green')
plt.plot(data.index[-len(y_test):], predictions, label='Predicted Price', color='red')
plt.xlabel('Date')
plt.ylabel('Stock Price')
plt.title('Stock Price Prediction')
plt.legend()
plt.show()

# Evaluate the model (LinearRegression.score returns the R-squared coefficient, not a classification accuracy)
r2 = model.score(X_test, y_test)
print(f'Model R-squared score: {r2:.4f}')
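
Because stock prices form a time series, a chronological split (no shuffling) is often a fairer evaluation than the shuffled split used above; a one-line alternative:

# Alternative: chronological split, so the model is tested only on later dates
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)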

Output:

Conclusion

EXPERIMENT: 5

Aim: Create a program using NumPy to implement a simple perceptron model.

Code:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Define Perceptron class


class Perceptron(object):
    def __init__(self, learning_rate=0.01, n_iter=100, random_state=1):
        self.learning_rate = learning_rate
        self.n_iter = n_iter
        self.random_state = random_state

    def fit(self, X, y):
        rand = np.random.RandomState(self.random_state)
        self.weights = rand.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
        self.errors_ = []

        for _ in range(self.n_iter):
            errors = 0
            for x, target in zip(X, y):
                update = self.learning_rate * (target - self.predict(x))
                self.weights[1:] += update * x
                self.weights[0] += update
                errors += int(update != 0.0)
            self.errors_.append(errors)
        return self

    def net_input(self, X):
        return np.dot(X, self.weights[1:]) + self.weights[0]

    def predict(self, X):
        # Returns class labels in {1, -1}
        return np.where(self.net_input(X) >= 0.0, 1, -1)
# Step 1: Load the dataset (local path; update as needed)
file_path = r'C:\Users\Dell\OneDrive\Desktop\ai practical\Housingprice.csv'
data = pd.read_csv(file_path)

# Step 2: Preprocess the dataset
# For simplicity, reduce the task to binary classification on 'furnishingstatus':
# 'furnished' vs. everything else
data['furnishingstatus'] = data['furnishingstatus'].map({'furnished': 1, 'unfurnished': 0, 'semi-furnished': 0})

# Use 'area' and 'parking' as features for simplicity (adjust to your preference)
X = data[['area', 'parking']].values
y = np.where(data['furnishingstatus'].values == 1, 1, -1)  # Convert {1, 0} labels to {1, -1}, which predict() returns

# Step 3: Initialize and train the Perceptron
per = Perceptron(learning_rate=0.1, n_iter=100, random_state=1)
per.fit(X, y)

# Step 4: Predictions and evaluation
y_pred = per.predict(X)
accuracy = np.mean(y == y_pred)
print(f"Accuracy: {accuracy:.2f}")

# Step 5: Plot the data points
plt.figure(figsize=(12, 6))

# Data points
plt.subplot(1, 2, 1)
plt.scatter(X[y == 1][:, 0], X[y == 1][:, 1], color='green', marker='x', label='Furnished')
plt.scatter(X[y == -1][:, 0], X[y == -1][:, 1], color='red', marker='o', label='Not furnished')
plt.xlabel('Area')
plt.ylabel('Parking')
plt.legend(loc='upper right')

# Plot the number of updates vs epochs
plt.subplot(1, 2, 2)
plt.plot(range(1, len(per.errors_) + 1), per.errors_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Number of Updates')
plt.title('Perceptron Training Performance')

plt.tight_layout()
plt.show()
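
The 'area' and 'parking' features are not guaranteed to be linearly separable, so the update count may never reach zero. As a sanity check, the same class converges on a small separable toy problem (an illustrative example, not part of the lab dataset):

# Illustrative check: perceptron on a linearly separable AND-style toy set
X_toy = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_toy = np.array([-1, -1, -1, 1])  # labels in {1, -1}
toy = Perceptron(learning_rate=0.1, n_iter=10, random_state=1).fit(X_toy, y_toy)
print(toy.predict(X_toy))  # should print [-1 -1 -1  1] once converged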

Output:

Conclusion:

EXPERIMENT: 6

Aim: Create a program to perform sentiment analysis on a textual dataset (IMDB dataset).

Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# Step 1: Load and Prepare Dataset (local path; update as needed)
data = pd.read_csv(r'C:\Users\Dell\OneDrive\Desktop\ai practical\Housingprice.csv')

# Inspect the dataset to understand its structure
print(data.head())
print(data.info())

# The dataset has columns like 'area', 'bedrooms', 'price', etc.
# Step 2: Data Cleaning and Preparation
# Drop rows with missing values
data = data.dropna()

# Convert categorical features to numerical
categorical_features = ['mainroad', 'guestroom', 'basement', 'hotwaterheating',
                        'airconditioning', 'prefarea', 'furnishingstatus']
data = pd.get_dummies(data, columns=categorical_features, drop_first=True)

# Define features (X) and target (y); 'price' is the target variable
X = data.drop(columns=['price'])
y = data['price']

# Step 3: Train-Test Split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 4: Train Linear Regression Model


model = LinearRegression()
model.fit(X_train, y_train)

# Step 5: Make Predictions and Evaluate the Model


y_pred = model.predict(X_test)

# Calculate evaluation metrics


mse = mean_squared_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)

print("Mean Squared Error:", mse)


print("R-squared:", r2)

Output:

Conclusion:

EXPERIMENT: 7

Aim: Create a program using any machine learning framework like TensorFlow, Keras to implement a linear regression algorithm.

Code:

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
import matplotlib.pyplot as plt
import sys

# Set stdout to UTF-8 encoding to prevent UnicodeEncodeError


sys.stdout = open(sys.stdout.fileno(), mode='w', encoding='utf-8', buffering=1)

# Step 1: Load and Prepare Dataset (local path; update as needed)
data = pd.read_csv(r'C:\Users\Dell\OneDrive\Desktop\ai practical\Housingprice.csv')

# Step 2: Select Feature and Target


# For simplicity, select 'area' as feature and 'price' as target
X = data[['area']].values # Feature matrix (input)
y = data['price'].values # Target vector (output)

# Normalize the data for better model performance


X = (X - X.mean()) / X.std()
y = (y - y.mean()) / y.std()

# Step 3: Split Data into Training and Test Sets


split_index = int(0.8 * len(X))
X_train, X_test = X[:split_index], X[split_index:]
y_train, y_test = y[:split_index], y[split_index:]

# Step 4: Define the Linear Regression Model using Keras
model = Sequential()
model.add(Input(shape=(1,)))              # Explicit Input layer: one feature
model.add(Dense(1, activation='linear'))  # One output unit: this is linear regression

# Compile the model with mean squared error loss and the Adam optimizer
model.compile(optimizer='adam', loss='mean_squared_error')

# Step 5: Train the Model


history = model.fit(X_train, y_train, epochs=100, verbose=0,
validation_data=(X_test, y_test))

# Step 6: Evaluate the Model


y_pred = model.predict(X_test)

# Step 7: Plot Training Data, Test Predictions, and Regression Line


plt.scatter(X_train, y_train, color='blue', label='Training Data')
plt.scatter(X_test, y_test, color='orange', label='Testing Data')

# Plot the regression line


X_line = np.linspace(X.min(), X.max(), 100).reshape(-1, 1)
y_line = model.predict(X_line)
plt.plot(X_line, y_line, color='red', label='Regression Line')

# Add labels and title


plt.title('Linear Regression using TensorFlow/Keras')
plt.xlabel('Normalized Area')
plt.ylabel('Normalized Price')
plt.legend()

# Show the plot


plt.show()

# Display training loss history


plt.plot(history.history['loss'])
plt.title('Model Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
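
Since the model is a single Dense unit, the fitted slope and intercept can be read back after training and interpreted as the regression line (in normalized units):

# Read the learned slope (kernel) and intercept (bias) from the model
w, b = model.get_weights()
print(f"Fitted line (normalized units): y = {float(w[0][0]):.4f} * x + {float(b[0]):.4f}")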

Output:

Conclusion:

EXPERIMENT: 8

Aim: Create a program using any machine learning framework like TensorFlow, Keras to implement a simple convolutional neural network.

Code:

import tensorflow as tf
from tensorflow.keras import layers, models
import numpy as np
import matplotlib.pyplot as plt

# Step 1: Load and Pre-process MNIST Dataset


# Load the MNIST dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the pixel values (from 0 to 255 to 0 to 1 range)


x_train, x_test = x_train / 255.0, x_test / 255.0

# Reshape the data to include a channel dimension (since it's a grayscale image)
x_train = x_train.reshape(-1, 28, 28, 1)
x_test = x_test.reshape(-1, 28, 28, 1)

# Step 2: Build a Simple CNN Model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')  # 10 output classes for digits 0-9
])

# Step 3: Compile the Model


model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])

# Step 4: Train the Model


history = model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

# Step 5: Evaluate the Model


test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f'\nTest accuracy: {test_acc:.4f}')

# Plotting training and validation loss and accuracy for visualization


plt.figure(figsize=(12, 4))

plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
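
A quick sanity check after training is to classify a single test digit with the fitted network:

# Predict the class of the first test image and compare with its true label
probs = model.predict(x_test[:1])
print("Predicted digit:", np.argmax(probs), "| True digit:", y_test[0])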

Output:

Conclusion:

EXPERIMENT: 9

Aim: Create a program using a convolutional neural network that identifies objects like water bottles, caps, books, etc. using the webcam.

Object recognition using the webcam

Code:

import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input, decode_predictions
import sys
import time

# Fixing the encoding issue for Windows console


sys.stdout.reconfigure(encoding='utf-8')

# Load the MobileNetV2 model pre-trained on ImageNet


model = tf.keras.applications.MobileNetV2(weights='imagenet')

# Initialize webcam video capture


cap = cv2.VideoCapture(0) # Use 0 for the default webcam

if not cap.isOpened():
print("Error: Could not open video stream.")
sys.exit()

# Function to process the webcam feed and perform object detection


def detect_objects(frame, model):
    # Resize the frame to 224x224, the input size MobileNetV2 expects
    resized_frame = cv2.resize(frame, (224, 224))

    # Preprocess the frame for MobileNetV2
    x = np.expand_dims(resized_frame, axis=0)
    x = preprocess_input(x)

    # Make predictions using the model
    preds = model.predict(x)

    # Decode predictions to get the top-3 labels
    decoded_preds = decode_predictions(preds, top=3)[0]
    return decoded_preds

# Function to draw predictions on the frame
def draw_predictions(frame, predictions):
    for i, (imagenet_id, label, score) in enumerate(predictions):
        if score > 0.5:  # Only show predictions above 50% confidence
            label_text = f"{label}: {score * 100:.2f}%"
            cv2.putText(frame, label_text, (10, 30 + i * 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
            cv2.rectangle(frame, (10, 10 + i * 30), (200, 40 + i * 30), (0, 255, 0), 1)

# Main loop to capture frames and detect objects
while True:
    start_time = time.time()  # Start timing for FPS calculation

    # Capture frame-by-frame from the webcam
    ret, frame = cap.read()
    if not ret:
        print("Error: Failed to capture image.")
        break

    # Detect objects in the frame
    predictions = detect_objects(frame, model)

    # Draw predictions on the frame
    draw_predictions(frame, predictions)

    # Calculate FPS and draw it on the frame before displaying it
    fps = 1 / (time.time() - start_time)
    cv2.putText(frame, f"FPS: {fps:.2f}", (10, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)

    # Show the frame with detection results
    cv2.imshow('Object Detection', frame)

    # Break the loop on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the video capture object and close the OpenCV window
cap.release()
cv2.destroyAllWindows()
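
For reference, decode_predictions returns (class_id, label, score) triples, so a single frame can be inspected directly before the capture is released (the sample output in the comment is illustrative, not a recorded result):

# Illustrative: inspect the raw top-3 predictions for one captured frame
# ret, frame = cap.read()
# print(detect_objects(frame, model))
# e.g. [('n04557648', 'water_bottle', 0.87), ...]  (values illustrative)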

Output:

Conclusion:

EXPERIMENT: 10

Aim: Create a program using any machine learning framework like TensorFlow, Keras to implement a logistic regression algorithm.

Importing Libraries and data

import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
tf.compat.v1.disable_eager_execution()
data = pd.read_csv('movie_reviews_dataset.csv')
print("Data Shape:", data.shape)
print(data.head())

# Feature Matrix and Labels
x_orig = data['Review'].values
y_orig = data['Sentiment'].values.reshape(-1, 1)

# Vectorize the text data
tfidf = TfidfVectorizer(max_features=1000)  # Limit the vocabulary to 1000 features
x = tfidf.fit_transform(x_orig).toarray()

# One-hot encode the labels
oneHot = OneHotEncoder()
y = oneHot.fit_transform(y_orig).toarray()

# Split into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)

# Model parameters
alpha, epochs = 0.0035, 500
m, n = X_train.shape
print('m =', m)
print('n =', n)
print('Learning Rate =', alpha)
print('Number of Epochs =', epochs)

# Creating the Model


# Use tf.compat.v1.placeholder for TensorFlow 2.x
X = tf.compat.v1.placeholder(tf.float32, [None, n])
Y = tf.compat.v1.placeholder(tf.float32, [None, 2])

# Trainable Weights and Bias
W = tf.Variable(tf.zeros([n, 2]))
b = tf.Variable(tf.zeros([2]))
logits = tf.add(tf.matmul(X, W), b)
Y_hat = tf.nn.sigmoid(logits)

# Sigmoid Cross Entropy Cost Function
# Note: this op expects raw logits (it applies the sigmoid internally),
# so it is given 'logits' rather than the already-activated Y_hat
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=Y)

# Optimizer (tf.compat.v1.train.GradientDescentOptimizer for TensorFlow 2.x compatibility)
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=alpha).minimize(cost)

# Global Variable Initializer


# Use tf.compat.v1.global_variables_initializer() for TensorFlow 2.x
init = tf.compat.v1.global_variables_initializer()

# Training the model


cost_history, accuracy_history = [], []

with tf.compat.v1.Session() as sess:  # tf.compat.v1.Session for TensorFlow 2.x
    sess.run(init)

    for epoch in range(epochs):
        # Train the model on the training data
        sess.run(optimizer, feed_dict={X: X_train, Y: y_train})

        # Calculate the cost and accuracy for the current epoch
        c = sess.run(cost, feed_dict={X: X_train, Y: y_train})
        correct_prediction = tf.equal(tf.argmax(Y_hat, 1), tf.argmax(Y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        cost_history.append(np.sum(c))  # c is a matrix of per-example, per-class losses; sum it to a scalar
        accuracy_history.append(accuracy.eval({X: X_train, Y: y_train}) * 100)
        if epoch % 100 == 0 and epoch != 0:
            print("Epoch " + str(epoch) + " Cost: " + str(cost_history[-1]))

    Weight = sess.run(W)
    Bias = sess.run(b)

    # Final accuracy on test data
    correct_prediction = tf.equal(tf.argmax(Y_hat, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    test_accuracy = accuracy.eval({X: X_test, Y: y_test}) * 100
    print("\nTest Accuracy:", test_accuracy, "%")

# Plot the cost over the epochs
plt.plot(list(range(epochs)), cost_history)
plt.xlabel('Epochs')
plt.ylabel('Cost')
plt.title('Decrease in Cost with Epochs')
plt.show()

# Plot the accuracy over the epochs
plt.plot(list(range(epochs)), accuracy_history)
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.title('Increase in Accuracy with Epochs')
plt.show()
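
For comparison, the same logistic regression is far more compact in the Keras API. A sketch that reuses the TF-IDF features and one-hot labels above, assuming it is run in a fresh script without the disable_eager_execution() call:

# Hypothetical Keras equivalent of the logistic regression above
from tensorflow import keras

k_model = keras.Sequential([
    keras.layers.Input(shape=(n,)),
    keras.layers.Dense(2, activation='sigmoid')  # one weight column per class, like W above
])
k_model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
k_model.fit(X_train, y_train, epochs=50, verbose=0)
print("Keras test accuracy:", k_model.evaluate(X_test, y_test, verbose=0)[1])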

Output:

Conclusion:
