
Lab Report:1

Lab Title:
Sentiment Analysis and Visualization Using TextBlob and Matplotlib

Objectives

1. To analyze textual data and identify sentiments using the TextBlob library.
2. To classify sentiments into positive, negative, and neutral categories.
3. To visualize sentiment distribution using graphical tools from matplotlib.

Tools Used

1. Python 3.x: Programming language for implementing the program logic.
2. TextBlob: Library for sentiment analysis and natural language processing.
3. Matplotlib: Visualization library for creating pie charts and bar graphs.

Theory

Sentiment analysis is a natural language processing (NLP) task that identifies the emotional tone within a
piece of text.

● TextBlob:

○ It computes polarity (a sentiment score) ranging from -1 to 1.

○ A positive polarity indicates positive sentiment, a negative polarity indicates negative sentiment, and a polarity of 0 indicates neutrality.

● Matplotlib: Used to visually represent data trends and distributions.

This program evaluates a sample dataset to classify its sentiments and visualizes the results as pie and bar charts.
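
As a minimal sketch of how the polarity score behaves (the sentences here are illustrative; exact scores depend on the installed TextBlob version):

from textblob import TextBlob

print(TextBlob("I love this!").sentiment.polarity)             # > 0: positive
print(TextBlob("I hate this!").sentiment.polarity)             # < 0: negative
print(TextBlob("The meeting is at noon.").sentiment.polarity)  # 0: neutral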

Code
from textblob import TextBlob
import matplotlib.pyplot as plt

# Sample data for sentiment analysis
texts = [
    "I love this product. It's absolutely amazing!",
    "This is the worst service I have ever received.",
    "The movie was okay, not too great, but not bad either.",
    "I'm so happy with the results of this project!",
    "I feel frustrated and disappointed with this experience.",
    "What a wonderful day! I'm feeling great."
]

# Perform sentiment analysis
sentiments = {"Positive": 0, "Negative": 0, "Neutral": 0}
polarity_scores = []

for text in texts:
    analysis = TextBlob(text)
    polarity_scores.append(analysis.polarity)
    if analysis.polarity > 0:
        sentiments["Positive"] += 1
    elif analysis.polarity < 0:
        sentiments["Negative"] += 1
    else:
        sentiments["Neutral"] += 1

# Visualization
# Pie chart
plt.figure(figsize=(8, 6))
plt.pie(
    sentiments.values(),
    labels=sentiments.keys(),
    autopct='%1.1f%%',
    colors=['#90ee90', '#ffcccb', '#d3d3d3'],
    startangle=140
)
plt.title('Sentiment Distribution')
plt.show()

# Bar chart for polarity scores
plt.figure(figsize=(10, 6))
plt.bar(range(len(polarity_scores)), polarity_scores, color='skyblue', alpha=0.7)
plt.axhline(y=0, color='black', linestyle='--', linewidth=1)
plt.title('Polarity Scores of Texts')
plt.xlabel('Text Index')
plt.ylabel('Polarity Score')
plt.show()
Output

1. Pie chart and bar diagram:

○ Visual representation of the sentiment distribution (Positive, Negative, Neutral).

Discussion

1. The program correctly identifies and categorizes sentiments based on polarity.
2. The pie chart effectively conveys the overall sentiment distribution in the dataset.
3. The bar chart provides detailed insights into the polarity of individual texts, highlighting subtle differences.
4. The program is well-suited for small datasets. For larger datasets, scalability and optimization strategies should be considered.

Conclusion

The sentiment analysis program demonstrated the ability to classify textual data into positive, negative,
and neutral categories using TextBlob. Visualizations created with matplotlib enhance understanding of
sentiment trends and distributions. This program can serve as a foundation for analyzing larger datasets or
real-world text sources like product reviews and social media posts.
LAB REPORT:2

Lab Title: Relationship Program using Prolog

Objective:

The objective of this lab is to create a Prolog program that models family relationships using facts and
rules. This includes defining relationships such as father, mother, son, daughter, brother, and sister.

Tools Used:

● Prolog: A logic programming language used to define facts and rules for the relationships.

Theory:

Prolog works by defining facts and rules, then querying the database of facts to infer new information. In
this case, we define family members and their relationships and then use queries to answer specific
relationship questions.

Facts:
● Male and Female: Representing family members' gender.
● Parent: Defining parent-child relationships.

Rules:

● Father: A father is a male parent.
● Mother: A mother is a female parent.
● Son: A son is a male child of a parent.
● Daughter: A daughter is a female child of a parent.
● Brother: A brother is a male sibling.
● Sister: A sister is a female sibling.

Code:

% Facts
male(ram).
male(shyam).
female(sita).
female(rita).

parent(ram, shyam).
parent(sita, shyam).
parent(ram, rita).
parent(sita, rita).

% Rules
father(X, Y) :- parent(X, Y), male(X).      % A father is a male parent.
mother(X, Y) :- parent(X, Y), female(X).    % A mother is a female parent.
son(X, Y) :- parent(Y, X), male(X).         % A son is a male child of a parent.
daughter(X, Y) :- parent(Y, X), female(X).  % A daughter is a female child of a parent.
brother(X, Y) :- male(X), parent(Z, X), parent(Z, Y), X \= Y.  % A brother is a male with a common parent, but not the same person.
sister(X, Y) :- female(X), parent(Z, X), parent(Z, Y), X \= Y. % A sister is a female with a common parent, but not the same person.
Execution and Output:
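
For example, the following illustrative queries all succeed given the facts and rules above:

?- father(ram, shyam).   % true: ram is a male parent of shyam
?- son(shyam, sita).     % true: shyam is a male child of sita
?- sister(rita, shyam).  % true: rita is female and shares a parent with shyam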

Discussion:

The program successfully models familial relationships. By defining facts about who is male, female, and
parent, we can use rules to derive other relationships, such as who is the father or son of whom. The
program demonstrates how Prolog works through logical inference, answering queries based on the given
relationships.

Conclusion:
In this lab, we created a Prolog program to model family relationships using facts and rules. We
demonstrated how Prolog's logical inference engine can answer relationship queries, and we validated the
results with specific test cases. This lab highlights the power of Prolog for reasoning and solving
problems based on defined relationships and facts.
LAB REPORT:3
Lab Title:
Implementation of Supervised Learning Algorithm on Kaggle IMDb Dataset

Objectives:

● To demonstrate the application of supervised learning algorithms using the Kaggle IMDb Top
1000 dataset.
● To perform data preprocessing, including feature encoding and handling missing values.
● To implement a Random Forest Regressor model to predict IMDb ratings.
● To evaluate the model using Mean Squared Error (MSE) and R² score.
● To make predictions based on test data.

Tools Used:

● Python (Programming Language)
● Pandas (Data manipulation and analysis)
● Scikit-learn (Machine learning library for regression and metrics)
● VS Code (Development environment)
● CSV Dataset (IMDB Top 1000 dataset)

Theory:

Supervised learning involves training a model on a labeled dataset, where both input and output are
known. In this program, we use Random Forest Regression, an ensemble learning technique that builds
multiple decision trees and merges their predictions to improve accuracy. The key steps involved are:

● Data Preprocessing: Cleaning the dataset by removing unnecessary spaces, handling missing values, and encoding categorical variables.
● Feature Selection and Encoding: Selecting relevant features (like Released_Year, Gross, and one-hot encoding Genre) to be used in model training.
● Model Training and Evaluation: Training the model using the RandomForestRegressor and evaluating its performance with MSE and R² score.

Code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

file_path = 'imdb_top_1000.csv'
data = pd.read_csv(file_path)
data.columns = data.columns.str.strip()  # Remove whitespace from column names

data['Released_Year'] = pd.to_numeric(data['Released_Year'], errors='coerce')
data['Gross'] = data['Gross'].str.replace(',', '').astype(float, errors='ignore')

# Drop rows with missing values in critical columns
data = data.dropna(subset=['IMDB_Rating', 'Released_Year', 'Gross'])

# Feature selection and encoding
data['Genre'] = data['Genre'].astype(str)
data = pd.get_dummies(data, columns=['Genre'], drop_first=True)  # One-hot encode Genre

X = data[['Released_Year', 'Gross'] + [col for col in data.columns if 'Genre_' in col]]  # Features
y = data['IMDB_Rating']  # Target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Supervised learning: Random Forest Regression
model = RandomForestRegressor(random_state=42)
model.fit(X_train, y_train)

# Predictions
y_pred = model.predict(X_test)

# Evaluation
mse = mean_squared_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
print("Mean Squared Error:", mse)
print("R2 Score:", r2)

# Example prediction
example_data = X_test.iloc[0:1]
predicted_rating = model.predict(example_data)
print("Predicted IMDb Rating for example:", predicted_rating[0])

Output:

Discussion:

● The dataset was preprocessed by stripping whitespace from column names, converting Released_Year to numeric values, and removing commas from the Gross column to ensure proper data formatting.

● After splitting the dataset into training and test sets, a Random Forest Regressor model was trained and
evaluated.

● The MSE and R² score quantify the model's prediction accuracy on the held-out test set. The performance of the model can be further enhanced by exploring hyperparameter tuning, feature engineering, or trying other regression algorithms, as sketched below.
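
As one example of the hyperparameter tuning mentioned above, a minimal sketch using scikit-learn's GridSearchCV, continuing from the variables defined in the code section (the parameter grid is illustrative, not taken from the original report):

from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor

# Illustrative parameter grid; these values are assumptions, not from the original lab
param_grid = {
    'n_estimators': [100, 300],
    'max_depth': [None, 10, 20],
}

search = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid,
    cv=5,
    scoring='neg_mean_squared_error',
)
search.fit(X_train, y_train)  # X_train, y_train as defined in the code above
print("Best parameters:", search.best_params_)
print("Best CV MSE:", -search.best_score_)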

Conclusion:

The implementation of a Random Forest Regressor on the Kaggle IMDb dataset demonstrated the power of supervised learning for regression tasks. The model performed reasonably well in predicting IMDb ratings based on movie features such as Released_Year and Gross, along with genre encoding. Future improvements could include experimenting with more features, trying other algorithms, and performing further model optimization for higher prediction accuracy.
LAB REPORT:4.1

Lab Title:
Implementation of Factorial Calculation Using Recursive Function in Prolog

Objectives:

● To demonstrate the use of recursion in Prolog to calculate the factorial of a number.
● To implement the factorial function for numbers greater than or equal to zero.
● To showcase the base case and recursive case for the factorial calculation.
● To handle invalid input by letting Prolog fail gracefully.

Tools Used:

● Prolog (Programming Language)
● SWI-Prolog (Development Environment)

Theory:

In Prolog, recursion is a fundamental concept used to define functions and solve problems. The factorial of a number N is the product of all positive integers less than or equal to N, and is mathematically defined as:

● 0! = 1

● N! = N × (N−1)!, for N > 0

The base case is defined as factorial(0, 1) since the factorial of 0 is 1. The recursive case reduces N by 1, calling factorial(N-1, F1) until N = 0 is reached.

Code:

% factorial(N, F) means F is the factorial of N

factorial(0, 1).            % Base case: factorial of 0 is 1

factorial(N, F) :-
    N > 0,                  % Ensure N is greater than 0
    N1 is N - 1,            % Subtract 1 from N
    factorial(N1, F1),      % Recursively calculate factorial of N-1
    F is N * F1.            % Multiply N by the factorial of N-1

% If N is negative, the query simply fails; no additional checks are needed.

Output:

The output is generated when querying the factorial of a given number.
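For example, the query ?- factorial(5, F). succeeds with F = 120.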

Discussion:

● The base case (factorial(0, 1)) is crucial as it serves as the stopping point for the recursive function.

● The recursive case (factorial(N, F)) works by breaking the problem down into smaller subproblems, where each subproblem is the factorial of N−1. It continues until N = 0.

● Prolog handles invalid inputs (such as negative numbers) by failing gracefully without needing
additional explicit checks in the main logic. This is a typical feature in logic programming, where failure
often means invalid input or an unsolvable condition.

Conclusion:

The Prolog program successfully implements the calculation of factorials using recursion. The recursive
approach is a natural fit for problems like factorials, where the problem can be broken down into smaller,
similar subproblems. The program handles the base case and recursive cases effectively, while Prolog’s
inherent failure mechanism gracefully handles invalid inputs. Further improvements could include adding
explicit input validation or optimizing for large inputs, such as using tail recursion or memoization
techniques if necessary.
LAB REPORT:4.2

Lab Title:
Finding the Sum of Natural Numbers Up to the nth Term Using Recursion in Prolog

Objectives:

● To implement a recursive function in Prolog to calculate the sum of natural numbers up to a given number N.
● To understand the use of base and recursive cases in recursion.
● To demonstrate Prolog’s recursive approach for solving problems like summing numbers.

Tools Used:

● Prolog (Programming Language)


● SWI-Prolog (Development Environment)

Theory:

The sum of the first N natural numbers is the total obtained by adding all numbers from 1 to N. Mathematically, it is represented as:

Sum(N) = 1 + 2 + 3 + ⋯ + N

The recursive approach involves:

1. Base case: The sum of numbers from 0 is 0 (sum_natural(0, 0)).

2. Recursive case: To find the sum of numbers up to N, we add N to the sum of numbers up to N−1.

The formula becomes:

Sum(N) = N + Sum(N−1)

This continues until N = 0, at which point the recursion stops.

Code:

% sum_natural(N, Sum) means Sum is the sum of natural numbers up to N

% Base case: sum of natural numbers up to 0 is 0
sum_natural(0, 0).

% Recursive case: sum of natural numbers up to N is N + sum of natural numbers up to (N-1)
sum_natural(N, Sum) :-
    N > 0,                  % Ensure N is greater than 0
    N1 is N - 1,            % Subtract 1 from N
    sum_natural(N1, Sum1),  % Recursively calculate sum of natural numbers up to N-1
    Sum is N + Sum1.        % Add N to the sum of numbers up to N-1

Output:

The program calculates the sum of natural numbers up to the nth term based on the input N.
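For example, the query ?- sum_natural(5, Sum). succeeds with Sum = 15.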

Discussion:

● The base case (sum_natural(0, 0)) ensures the recursion stops when N = 0.

● The recursive case handles numbers greater than 0 by adding N to the sum of numbers up to N−1.

● Prolog's recursive nature makes it an ideal language for problems like this, where a problem can be broken down into smaller subproblems.

● This program handles natural numbers well. However, for very large numbers, the recursion depth could be a limitation: Prolog's stack depth may cause an error if the recursion goes too deep.

Conclusion:

The program successfully calculates the sum of natural numbers up to N using recursion in Prolog. By using both the base case and recursive case, we were able to break down the problem effectively. Prolog's powerful recursion capabilities make it well-suited for such tasks.
LAB REPORT:5.1
Lab Title:
Implementation of AND, OR, and XOR Gates in C

Objectives:

● Implement AND, OR, and XOR gates in C.
● Demonstrate the use of logical operations in C.
● Accept binary inputs and calculate gate outputs.

Tools Used:

● C Programming Language
● C Compiler (e.g., GCC)

Theory:

1. AND Gate: Returns 1 if both inputs are 1, otherwise 0.
2. OR Gate: Returns 1 if at least one input is 1, otherwise 0.
3. XOR Gate: Returns 1 if inputs are different, otherwise 0.
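
These behaviors are summarized in the following truth table:

A  B | AND  OR  XOR
0  0 |  0    0   0
0  1 |  0    1   1
1  0 |  0    1   1
1  1 |  1    1   0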

Code:

#include <stdio.h>
#include <stdbool.h>

bool AND(bool a, bool b) { return a && b; }
bool OR(bool a, bool b)  { return a || b; }
bool XOR(bool a, bool b) { return a ^ b; }

int main() {
    int input1, input2;

    puts("Enter the inputs in binary for the gates:");
    scanf("%d%d", &input1, &input2);

    printf("\nThe AND is :%d", AND(input1, input2));
    printf("\nThe OR is :%d", OR(input1, input2));
    printf("\nThe XOR is :%d", XOR(input1, input2));

    return 0;
}

Output:

Discussion:

● The program implements the gates correctly.
● It accepts binary input and calculates the correct output for each gate.

Conclusion:

The program accurately calculates the outputs for AND, OR, and XOR gates using binary inputs. The
solution is simple and efficient for basic logical operations.
LAB REPORT:5.2

Lab Title:
McCulloch-Pitts Neuron Model for Logic Gates in C

Objectives:

● Implement the McCulloch-Pitts neuron model to simulate an AND gate.
● Demonstrate the use of thresholds and weights in logic operations.

Tools Used:

● C Programming Language
● C Compiler (e.g., GCC)

Theory:

The McCulloch-Pitts neuron model uses binary inputs, weights, and a threshold. It outputs 1 if the
weighted sum of inputs is greater than or equal to the threshold, otherwise 0. This can simulate logical
operations like AND gates.
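
For example, with the weights of 0.6 on each input and the threshold of 1 used in the code below, the inputs (1, 1) give a weighted sum of 0.6 + 0.6 = 1.2 ≥ 1, so the output is 1, while (1, 0) and (0, 1) give 0.6 < 1 and (0, 0) gives 0 < 1, so the output is 0. This reproduces the AND truth table.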

Code:

#include <stdio.h>

#define THRESHOLD 1
#define INPUTS 2

int McCullochPittsAND(int x, int y) {
    // Weights must be floating-point: with an int array, 0.6 would truncate to 0
    double weights[INPUTS] = {0.6, 0.6};
    double sum = x * weights[0] + y * weights[1];
    return (sum >= THRESHOLD) ? 1 : 0;
}

int main() {
    printf("McCulloch-Pitts AND Gate:\n");
    printf("0 AND 0 = %d\n", McCullochPittsAND(0, 0));
    printf("0 AND 1 = %d\n", McCullochPittsAND(0, 1));
    printf("1 AND 0 = %d\n", McCullochPittsAND(1, 0));
    printf("1 AND 1 = %d\n", McCullochPittsAND(1, 1));
    return 0;
}

Output:

Discussion:

● The program simulates the AND gate using the McCulloch-Pitts model.
● The output is correct based on the threshold and weighted sum logic.

Conclusion:

The McCulloch-Pitts model correctly simulates an AND gate by applying weights and a threshold,
demonstrating a basic neural computation model.
LAB REPORT:6
Lab Title:
Basic Natural Language Processing Using NLTK in Python

Objectives:

● Perform basic text preprocessing tasks like tokenization, stopword removal, and stemming.
● Use NLTK to analyze and process text data.

Tools Used:

● Programming Language: Python
● Libraries Used: NLTK (Natural Language Toolkit)
● Python Version: Python 3.x
● Editor/IDE: Any Python-supported IDE or text editor (e.g., VS Code, PyCharm)

Theory:

Natural Language Processing (NLP) involves techniques to analyze and process human language data.
NLTK (Natural Language Toolkit) is a powerful library used for various NLP tasks, including:

● Tokenization: Splitting text into sentences or words.

● Stopword Removal: Removing common words (e.g., "the", "and") that do not contribute much
meaning.

● Stemming: Reducing words to their root form (e.g., "running" becomes "run").

● Frequency Distribution: Counting occurrences of words in a text.
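
As a quick illustration of stemming (a minimal sketch using NLTK's PorterStemmer):

from nltk.stem import PorterStemmer

ps = PorterStemmer()
print(ps.stem("running"))     # 'run'
print(ps.stem("processing"))  # 'process'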

Code:

import nltk
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.stem import PorterStemmer

# Download necessary NLTK resources (if not already downloaded)
nltk.download('punkt')
nltk.download('stopwords')

# Sample text
text = ("Natural language processing (NLP) is a field of artificial "
        "intelligence that focuses on the interaction between computers and "
        "humans using natural language. The ultimate goal of NLP is to enable "
        "computers to understand, interpret, and generate human language.")

# Tokenization
sentences = sent_tokenize(text)
words = word_tokenize(text)

# Remove stopwords
stop_words = set(stopwords.words('english'))
filtered_words = [word for word in words
                  if word.lower() not in stop_words and word.isalpha()]

# Frequency distribution
fdist = FreqDist(filtered_words)

# Stemming
ps = PorterStemmer()
stemmed_words = [ps.stem(word) for word in filtered_words]

# Output results
print("Original Text:")
print(text)

print("\nTokenized Sentences:")
print(sentences)

print("\nTokenized Words:")
print(words)

print("\nFiltered Words (without stopwords):")
print(filtered_words)

print("\nFrequency Distribution of Words:")
print(fdist)

print("\nStemmed Words:")
print(stemmed_words)

Output:

The program will output the following:


Discussion:

● Tokenization: The text is split into sentences and words. sent_tokenize() is used for sentence tokenization, and word_tokenize() is used for word tokenization.

● Stopword Removal: Common words (stopwords) that don't contribute meaningfully to the analysis are removed. For example, "is", "a", "and", etc.

● Frequency Distribution: A frequency distribution of filtered words is generated, showing the occurrence of each word in the text.

● Stemming: Words are reduced to their root form (e.g., "processing" becomes "process").

Conclusion:

The program effectively showcases fundamental NLP tasks using NLTK, such as tokenization, stopword
removal, frequency distribution, and stemming. These methods serve as essential steps in text
preprocessing and feature extraction, laying the groundwork for more complex NLP applications.
LAB REPORT:7

Lab Title:

Demonstrating Prompt Engineering and Generative AI Using RunwayML Tools

Objectives:

● Understand the principles of prompt engineering and its impact on AI-generated results.
● Use the RunwayML tool to generate images from text prompts.
● Explore how changing the text prompt affects the output of the AI model.
● Create a short video by making a static image lipsync with an audio clip.

Tools Used:

● RunwayML
● Text-to-Image Model
● Lipsyncing Tool
● Audio/Video Editing Software

Theory:

Prompt engineering involves crafting and refining input prompts to achieve optimal results from AI
systems. A well-designed prompt enhances the quality, relevance, and accuracy of the AI's responses.
Key principles of prompt engineering include:

● Task Analysis: Clearly identify the desired outcome from the AI.
● Effective Prompt Design: Develop prompts that are specific, clear, and provide precise
instructions to the AI.
● Prompt Refinement: Adjust prompts to improve clarity and ensure the AI understands the
requirements.
● Iteration and Improvement: Continuously refine prompts based on feedback and generated
results.
● Understanding Prompt Components: Recognize how elements like tone, structure, and
specificity influence the AI's output.
Procedure:

1. Generate Images Using Different Prompts:

● Text-to-Image Model in RunwayML was used to generate images based on various prompts.
● Initial Prompt Example:
○ Prompt: “A peaceful sunset over the ocean with a small sailboat, vibrant colors in the
sky.”
○ Result: A serene and beautiful image of a sunset with a sailboat.
● Modified Prompt:
○ Prompt: “A surreal sunset with vivid purple and orange colors, a giant moon in the sky,
and a futuristic city skyline in the background.”
○ Result: A more dramatic and other-worldly sunset with futuristic elements.
● Analyzing the Result:
○ Changes in the prompts, like adding colors (vivid purple and orange) or incorporating
elements (futuristic city, giant moon), led to significant changes in the generated images.
The more specific the prompt, the closer the AI’s output aligns with the desired result.

2. Make the Static Image Lipsync with Your Audio:

● Lipsync Tool in RunwayML was used to animate the static image by syncing it with an audio
clip.
● Procedure:
1. Upload the generated image (from the first task) into the lip-sync tool.
2. Upload your pre-recorded audio file.
3. The tool analyzes the audio and syncs the lips in the image to the spoken words in the
audio.
● Result:
A short video was created with the static image of the generated person/object synced to the
audio. The lips moved in sync with the audio, creating the illusion of speech.

Screenshots:
Discussion:

● The text-to-image generation feature in RunwayML clearly demonstrates how a change in prompt
wording can lead to a diverse range of image outputs. A more detailed and specific prompt yields more
customized and tailored results.

● The lip-syncing feature added a layer of creativity, transforming the static image into an animated
character, which further exemplifies how AI tools can be used to enhance multimedia projects.

● By experimenting with different prompts, it was evident how precision in language (e.g., specifying colors, adding context, and including details) directly influenced the final output. Iteration and refinement of prompts are key to achieving the desired result.

Conclusion:

The lab provided valuable insight into prompt engineering by demonstrating how different text prompts
can drastically affect the output of an AI model. Additionally, by using RunwayML's lip-syncing feature,
we explored how static images can be dynamically animated to create engaging video content. This lab
reinforced the importance of clear and well-structured prompts and how they can be utilized to produce
creative outputs in generative AI.
LAB REPORT:8

Lab Title:

Implementing a Simple Rule-Based Chatbot for HSMSS in Python

Objectives:

● To design and implement a rule-based chatbot using Python.
● To create a chatbot that can provide predefined responses based on specific user inputs.
● To apply randomization in the chatbot's responses for varied interaction.
● To become familiar with basic string manipulation and dictionary usage in Python.

Tools Used:

● Python (for coding the chatbot)
● random module (for selecting random responses)
● Terminal/Command Prompt (for running the program)

Theory:

A rule-based chatbot operates using predefined responses mapped to specific user inputs. It matches the
user's input against keys in a dictionary and provides a relevant response. In this scenario, the chatbot is
programmed to answer queries about the college (HSMSS) and AI. For inputs that do not match any
predefined keys, it returns a default response.
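
For example, the input "tell me about hsmss" contains the key "about hsmss", so one of the responses mapped to that key is returned at random, whereas an input that matches no key triggers the default response.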

Procedure:

1. Define Predefined Responses:
A dictionary is created with keys representing potential user inputs (e.g., "hello," "about hsmss," "faculties") and values containing lists of possible responses.
2. Implement the Response Function:
The get_response(user_input) function checks if any dictionary key matches the user's input.
○ If a match is found, it randomly selects a response from the corresponding list.
○ If no match is detected, it returns a default response.
3. Build the Chatbot:
The chatbot() function initializes the interaction and enters a loop where it continuously prompts the user for input.
○ The conversation ends when the user enters "quit," "exit," or "bye."
Code:

import random

# Dictionary of predefined responses
responses = {
    "hello": ["Hi there!", "Hello!", "Greetings!"],
    "hi": ["Hi there!", "Hello!", "Greetings!"],
    "faculties": ["xyz, ABC,"],
    "programs": ["+2 : Science, Management, Bachelor: BCA BIM"],
    "about hsmss": ["oldest private college"],
    "official website": ["www.hsm.edu.np"],
    "total students": ["around 3000"],
    "bye": ["Goodbye!", "See you later!", "Have a great day!"],
    "chatbot name": ["I'm an AI Chatbot.", "You can call me AIBot.",
                     "I'm your friendly AI assistant."]
}

def get_response(user_input):
    user_input = user_input.lower()  # Convert to lowercase for case-insensitive comparison
    for key in responses:
        if key in user_input:  # Check if any predefined key is in the user's input
            return random.choice(responses[key])  # Return a random response
    return "I'm not sure how to respond to that. Can you ask me something about AI?"  # Default response

def chatbot():
    print("AI Chatbot: Hello! I'm an AI chatbot. Ask me anything about AI!")  # Initial greeting
    while True:
        user_input = input("You: ")  # Get input from the user
        if user_input.lower() in ['quit', 'exit', 'bye']:  # Exit condition
            print("AI Chatbot: Goodbye! Thanks for chatting.")
            break
        response = get_response(user_input)  # Get a response based on user input
        print("AI Chatbot:", response)  # Output the chatbot's response

if __name__ == "__main__":
    chatbot()  # Start the chatbot


Screenshots/Results:

Discussion:

● The chatbot successfully implements a rule-based structure where predefined responses are triggered
based on specific keywords in the user input.

● The randomization of responses (using the random.choice() method) prevents the chatbot from
sounding repetitive, making the interaction more engaging.

● The default response ensures that the chatbot still has an answer even if it doesn’t recognize the input,
providing a safety net for users.

Conclusion:

This lab demonstrated the creation of a simple rule-based chatbot in Python, capable of responding to
predefined queries with random responses. The chatbot is effective for answering questions about the
HSMSS college and AI-related topics. This project also highlighted the use of dictionaries, string
manipulation, and randomization to create a more dynamic user experience.
LAB REPORT:9

Lab Title:

Solving the Tower of Hanoi Problem in Python

Objectives:

● To implement the Tower of Hanoi algorithm.
● To understand recursion and how it is applied to solve problems.

Tools Used:

● Python
● Recursion

Theory:

The Tower of Hanoi is a classic problem in recursion. The problem involves three pegs and a set of disks
of different sizes. The objective is to move all the disks from one peg to another, following these rules:

1. Only one disk can be moved at a time.


2. A disk can only be moved if it is the top disk on a peg.
3. No disk may be placed on top of a smaller disk.

The solution can be achieved through recursion, where we break the problem into smaller subproblems of
moving disks.

Code:

def tower_of_hanoi(n, source, destination, auxiliary):
    if n == 1:
        print(f"Move disk 1 from {source} to {destination}")
        return
    tower_of_hanoi(n-1, source, auxiliary, destination)
    print(f"Move disk {n} from {source} to {destination}")
    tower_of_hanoi(n-1, auxiliary, destination, source)

# Driver code
if __name__ == "__main__":
    disks = 3  # Number of disks
    tower_of_hanoi(disks, 'A', 'C', 'B')

Results:
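
For disks = 3, the program performs 2^3 - 1 = 7 moves and prints:

Move disk 1 from A to C
Move disk 2 from A to B
Move disk 1 from C to B
Move disk 3 from A to C
Move disk 1 from B to A
Move disk 2 from B to C
Move disk 1 from A to C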

Conclusion:

The recursive solution to the Tower of Hanoi problem efficiently solves the puzzle by breaking it down into smaller subproblems. This approach demonstrates how recursion can be used to solve complex problems in a simple and elegant manner. This lab report includes the implementation of the Tower of Hanoi problem and demonstrates how recursion works to solve it.
Lab Report: 10

Lab Title: Implementation of Depth-First Search (DFS) and Breadth-First Search (BFS) Algorithms

Objective:

To implement and demonstrate the Depth-First Search (DFS) and Breadth-First Search (BFS) algorithms
in Python for graph traversal and understand their differences.

Tools Used:

● Python 3.x: Programming language used for implementation.
● Text Editor/IDE: Visual Studio Code or any Python-compatible IDE.
● Collections Module: Used for implementing the BFS queue using deque.

Theory:

● Depth-First Search (DFS): DFS is a graph traversal algorithm where we explore as far as possible
along each branch before backtracking. DFS is implemented using recursion or a stack.
● Breadth-First Search (BFS): BFS is a graph traversal algorithm where we explore all neighbors at
the present depth level before moving on to nodes at the next level. BFS is implemented using a
queue.

Code Implementation:
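
Both traversals below operate on a graph represented as an adjacency list (a dictionary mapping each node to its neighbors). The original report does not include the graph definition, so a minimal sample graph is assumed here:

# Sample adjacency list (assumed for demonstration; not shown in the original report)
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}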

DFS Implementation:

# DFS function
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start, end=" ")
    # Recurse for all the neighbors
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)

# Driver code
if __name__ == "__main__":
    print("DFS Traversal:")
    dfs(graph, 'A')

BFS Implementation:

from collections import deque

# BFS function
def bfs(graph, start):
    visited = set()         # Set to keep track of visited nodes
    queue = deque([start])  # Queue to store nodes for BFS
    while queue:
        node = queue.popleft()  # Pop a node from the front of the queue
        if node not in visited:
            visited.add(node)   # Mark the node as visited
            print(node, end=" ")
            # Add all unvisited neighbors to the queue
            for neighbor in graph[node]:
                if neighbor not in visited:
                    queue.append(neighbor)

# Driver code
if __name__ == "__main__":
    print("BFS Traversal:")
    bfs(graph, 'A')

Output:

Output of DFS Traversal (Starting from 'A'), with the sample graph assumed above: A B D E F C

Output of BFS Traversal (Starting from 'A'), with the sample graph assumed above: A B C D E F

Discussion:

● DFS (Depth-First Search) traverses a graph by visiting a node and then recursively exploring its
neighbors, going as deep as possible before backtracking. In the given example, starting at node
'A', it explores all the way to the deepest node before retracing its steps.
● BFS (Breadth-First Search) explores the graph level by level, visiting all nodes at the current
depth before advancing to the next level. In the example, BFS begins at node 'A', explores all its
immediate neighbors, and then moves on to the next level of nodes.

Conclusion:

In this lab, we successfully implemented both Depth-First Search (DFS) and Breadth-First Search (BFS)
algorithms to traverse a graph. We demonstrated how each algorithm explores nodes in different ways.
DFS goes deep into the graph, whereas BFS explores the graph level by level. The understanding of these
algorithms is essential for various graph-based problems, such as finding the shortest path or searching for
a node.
