AI Lab Manual (6th Semester)
ARTIFICIAL INTELLIGENCE
(CIE-374P)
B.Tech. Programme
(CSE)
LAB MANUAL
1. INTRODUCTION
In the “Artificial Intelligence Lab”, we embark on an exciting journey into the realm of Artificial
Intelligence (AI), where we'll explore its fundamental concepts and their implementation using Python
programming. The main objective of the lab is to provide the students with a solid foundation in
AI and equip them with the necessary skills to dive deeper into this fascinating field. Throughout
this lab, we'll leverage Python as our primary tool for coding and experimentation. Python's
simplicity and versatility make it an ideal choice for AI development, allowing us to focus more
on understanding AI concepts and less on complex syntax. In this manual, we will delve into
various experiments designed to explore the diverse realms of AI, ranging from logic
programming to natural language processing and game playing algorithms. Artificial Intelligence
(AI) is a field of computer science that aims to mimic human intelligence in machines, enabling
them to perform tasks that typically require human cognition, such as reasoning, problem-
solving, learning, and perception.
We'll start by acquainting ourselves with Prolog, a logic programming language
commonly used in AI for symbolic computation and rule-based reasoning. The students will
explore the syntax and semantics of Prolog and understand its application in solving logic-based
problems. We will learn how to represent basic facts and relationships using Prolog predicates.
For instance, stating relationships like "Ram likes mango" or "Rose is red" in Prolog format.
Using Prolog, the students will implement predicates to convert temperatures from centigrade to
Fahrenheit and check if a temperature is below freezing. Utilizing the Natural Language Toolkit
(NLTK), we will write a program to remove stop words from a given passage, a preprocessing
step in text analysis and information retrieval tasks. We will implement stemming, a technique to
reduce words to their root or base form, and lemmatization, a process of reducing words to their
canonical form (lemma) based on their meaning using NLTK. Finally, we will develop a
program for text classification using NLTK, where the goal is to assign predefined categories or
labels to textual data based on its content.
This hands-on experience will not only reinforce our understanding of AI concepts but also
prepare us for tackling more advanced AI projects in the future.
2. Lab Requirements
Python 3 with the NLTK library (plus scikit-learn for the text-classification experiment) and a Prolog interpreter (e.g., SWI-Prolog or GNU Prolog).
4. List of Experiments
1. Study of PROLOG.
2. Write simple facts for the given statements using PROLOG (e.g., "Ram likes mango", "Rose is red").
3. Write predicates, one converts centigrade temperatures to Fahrenheit, the other checks if a temperature is below freezing, using PROLOG.
4. Write a program to implement Breadth First Search Traversal.
5. Write a program to implement the Water Jug Problem.
6. Write a program to remove punctuation from a given string.
7. Write a program to sort a sentence in alphabetical order.
8. Write a program to implement the Hangman game.
9. Write a program to implement the Tic-Tac-Toe game.
10. Write a program to remove stop words for a given passage from a text file using NLTK.
11. Write a program to implement stemming for a given sentence using NLTK.
12. Write a program to perform POS (part of speech) tagging for the given sentence using NLTK.
13. Write a program to implement lemmatization for the given sentence using NLTK.
14. Write a program for Text Classification for the given sentence using NLTK.
5. Details of Experiments
PRACTICAL- 1
Study of PROLOG
Introduction:
Prolog, or PROgramming in LOGic, is a logical and declarative programming language. It is a major
example of a fourth-generation language that supports the declarative programming paradigm. It is
particularly suitable for programs that involve symbolic or non-numeric computation, which is the
main reason Prolog is used as a programming language in Artificial Intelligence, where symbol
manipulation and inference are fundamental tasks. Facts and relationships in Prolog are written in
the general form:
relation(object1, object2, ...).
Key Features:
1. Unification: The basic idea is to determine whether two given terms can be made to represent the same structure.
2. Backtracking: When a goal fails, Prolog traces backwards and tries to satisfy the previous goal again.
3. Recursion: Recursion is the basis for any search in a Prolog program.
Advantages:
1. It is easy to build a database; it doesn't need a lot of programming effort.
2. Pattern matching is easy, and search is recursion-based.
3. It has built-in list handling, which makes it easier to implement algorithms involving lists.
Disadvantages:
1. LISP (another language widely used in AI) dominates over Prolog with respect to I/O features.
2. Input and output are sometimes not easy.
Applications:
Prolog is widely used in artificial intelligence (AI). It is also used for pattern matching over
natural-language parse trees.
PRACTICAL- 2
Write simple facts for the following statements using PROLOG: Ram likes mango; Seema is a girl; Bill likes Cindy; the rose is red; John owns gold.
COMMANDS
likes(ram, mango).
girl(seema).
likes(bill, cindy).
color(rose, red).
owns(john, gold).
OUTPUT
PRACTICAL- 3
Write predicates, one converts centigrade temperatures to Fahrenheit, the other checks if a temperature is below freezing, using PROLOG.
THEORY:
c_to_f(C, F) converts a centigrade temperature C to Fahrenheit using the formula F = C * 9/5 + 32.
freezing(F) checks whether a Fahrenheit temperature F is at or below 32 degrees Fahrenheit, the
freezing point of water (0 degrees Celsius); if the condition holds, the goal succeeds, indicating
that the temperature is at or below freezing.
COMMANDS
c_to_f(C, F) :- F is C * 9 / 5 + 32.
freezing(F) :- F =< 32.
OUTPUT
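For comparison only (not part of the Prolog exercise), the same conversion and freezing check can be sketched in Python:

def c_to_f(c):
    # convert centigrade to Fahrenheit
    return c * 9 / 5 + 32

def freezing(f):
    # a Fahrenheit temperature of 32 or below is at/below the freezing point of water
    return f <= 32

print(c_to_f(100))     # 212.0
print(freezing(15))    # True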
PRACTICAL- 4
Write a program to implement Breadth First Search Traversal.
THEORY:
Breadth First Search (BFS):
Breadth First Search is a graph traversal algorithm used to explore nodes of a graph
systematically. It starts at a chosen node (often called the "root" or "starting node") and explores
all of its neighboring nodes at the present depth level before moving on to the nodes at the next
depth level.
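The program below builds and traverses a binary tree, but the same idea applies to any graph. As a minimal sketch (the adjacency-list dictionary used here is a made-up example), BFS over a general graph can be written as:

from collections import deque

def bfs(graph, start):
    visited = {start}
    q = deque([start])
    order = []
    while q:
        node = q.popleft()
        order.append(node)             # visit the node
        for neighbour in graph[node]:  # enqueue unvisited neighbours
            if neighbour not in visited:
                visited.add(neighbour)
                q.append(neighbour)
    return order

# example adjacency list (illustrative only)
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))   # ['A', 'B', 'C', 'D']

The visited set prevents a node from being enqueued twice when the graph contains cycles.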
Algorithm:
1. Enqueue the starting node (for a tree, the root) into a queue.
2. Dequeue a node, visit (print) it, and enqueue its unvisited neighbours (for a binary tree, its left and right children).
3. Repeat step 2 until the queue is empty.
CODE
from collections import deque

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

class Tree:
    def __init__(self):
        self.root = None

    def buildtree(self):
        # entering -1 means "no node here"
        x = int(input('Enter value:'))
        if x == -1:
            return None
        n = Node(x)
        n.left = self.buildtree()
        n.right = self.buildtree()
        return n

    def bfs(self):
        if self.root is None:
            return
        q = deque()
        q.append(self.root)
        while len(q):
            n = q.popleft()
            print(n.data)
            if n.left:
                q.append(n.left)
            if n.right:
                q.append(n.right)

if __name__ == '__main__':
    t = Tree()
    t.root = t.buildtree()
    t.bfs()
OUTPUT
PRACTICAL 5
Write a program to implement the Water Jug Problem.
THEORY:
The Water Jug Problem is a classic puzzle in computer science and mathematics that involves
finding a sequence of steps to measure a specific volume of water using jugs of known
capacities. Here's a brief explanation of the theory behind implementing the Water Jug Problem:
Algorithm:
• The Water Jug Problem can be solved using a variation of the Breadth First Search
(BFS) algorithm. The idea is to simulate all possible states of the jugs and explore their
transitions until the target volume is reached.
• Define the initial state, representing the current volumes of water in each jug.
• Create a queue to store states to be explored, starting with the initial state.
• While the queue is not empty:
• Dequeue a state from the queue.
• Generate all possible next states by pouring water between jugs or filling/emptying them.
• Check if any of the next states match the target volume. If so, the problem is solved.
• Enqueue the valid next states into the queue.
• Repeat the previous steps until the target volume is reached or all reachable states have been explored.
Using Prolog:
Code:
jug(0, 2, 4, 3, 2) :- write('Goal state reached').
% Jug Y is empty: fill it to its capacity Vy.
jug(X, Y, Vx, Vy, Z) :-
    Y =:= 0, Y1 is Vy,
    write('Step : X is '), write(X), write(' and Y is '), write(Y1), nl,
    jug(X, Y1, Vx, Vy, Z).
% Jug X is full: empty it.
jug(X, Y, Vx, Vy, Z) :-
    X =:= Vx, X1 is 0,
    write('Step : X is '), write(X1), write(' and Y is '), write(Y), nl,
    jug(X1, Y, Vx, Vy, Z).
% Pour from jug Y into jug X until X is full or Y is empty.
jug(X, Y, Vx, Vy, Z) :-
    Y =\= 0, X < Vx, K is min(Y, Vx - X),
    X1 is X + K, Y1 is Y - K,
    write('Step : X is '), write(X1), write(' and Y is '), write(Y1), nl,
    jug(X1, Y1, Vx, Vy, Z).
% Either jug holds the target amount Z.
jug(X, Y, _, _, Z) :-
    (X =:= Z ; Y =:= Z),
    write('Goal state reached').
Output:
Using Python:
Code:
from collections import deque

def water_jug_bfs(a, b, target):
    # a, b : capacities of the two jugs; target : amount to be measured
    m = {}              # visited states
    isSolvable = False
    path = []
    q = deque()
    q.append((0, 0))    # initial state: both jugs empty
    while len(q) > 0:
        u = q.popleft()
        if (u[0], u[1]) in m:        # state already explored
            continue
        if u[0] > a or u[1] > b or u[0] < 0 or u[1] < 0:
            continue                 # invalid state
        path.append([u[0], u[1]])
        m[(u[0], u[1])] = 1
        if u[0] == target or u[1] == target:   # goal test
            isSolvable = True
            if u[0] == target:
                if u[1] != 0:
                    path.append([u[0], 0])
            else:
                if u[0] != 0:
                    path.append([0, u[1]])
            for i in range(len(path)):
                print("(", path[i][0], ",", path[i][1], ")")
            break
        q.append([u[0], b])          # fill Jug2
        q.append([a, u[1]])          # fill Jug1
        for ap in range(max(a, b) + 1):
            c = u[0] + ap            # pour ap litres from Jug2 into Jug1
            d = u[1] - ap
            if c == a or d == 0:
                q.append([c, d])
            c = u[0] - ap            # pour ap litres from Jug1 into Jug2
            d = u[1] + ap
            if c == 0 or d == b:
                q.append([c, d])
        q.append([u[0], 0])          # empty Jug2
        q.append([0, u[1]])          # empty Jug1
    if not isSolvable:
        print("No solution")

# Driver code
if __name__ == '__main__':
    Jug1, Jug2, target = 3, 5, 2
    print("Path from initial state to solution state ::")
    water_jug_bfs(Jug1, Jug2, target)
Output:
PRACTICAL 6
Write a program to remove punctuation from a given string.
THEORY:
Removing punctuation is a common text pre-processing step. The idea is to examine each character of
the input string and delete every character that appears in a predefined set of punctuation symbols,
leaving only letters, digits and spaces.
Code:
text = input("Enter a string: ")
punctuations = "~`<,>.?/:;{[}]|\\_-!'"
for c in text:
    if c in punctuations:
        text = text.replace(c, '')
print("String without punctuation:", text)
Output:
PRACTICAL 7
Write a program to sort a sentence in alphabetical order.
THEORY:
Sorting a sentence in alphabetical order involves arranging the words in the sentence based on
their alphabetical order. Here's the theory behind sorting a sentence in alphabetical order:
1. Split the sentence into words: The first step is to split the given sentence into individual words.
This can be done by using string splitting methods such as split() in Python.
2. Sort the words: Once the sentence is split into words, the next step is to sort these words in
alphabetical order. This can be achieved using built-in sorting functions or methods available in
programming languages.
3. Reconstruct the sorted sentence: After sorting the words, reconstruct the sentence by joining
the sorted words back together. This forms the sorted sentence in alphabetical order.
Code:
def sortSentence(sentence):
    wordsArr = [word.lower() for word in sentence.split()]   # split and lowercase the words
    wordsArr.sort()
    return " ".join(wordsArr)   # rebuild the sorted sentence

print(sortSentence(input("Enter a sentence: ")))
Output:
PRACTICAL 8
Write a program to implement the Hangman game.
THEORY:
Implementing the Hangman game involves several key steps, including choosing a word,
displaying the game interface, handling user input, and updating the game state based on user
guesses. Here's the theory behind implementing the Hangman game:
1. Choose a Word:
Choose a word from a predefined list of words. This word will be the target word that the
player needs to guess.
2. Display the Word:
Show the word as a row of placeholder characters and ask the player to guess one character at a time.
3. Handle Guesses:
If the guessed character occurs in the word, reveal it; otherwise reduce the number of remaining guesses.
4. End the Game:
The game ends when the whole word is revealed (win) or the allowed guesses run out (loss).
Code:
import random

words = ['rain', 'hail', 'scanner', 'player', 'python', 'maths', 'player', 'water', 'board', 'greet']

def hangman():
    guess = 10                           # number of wrong guesses allowed
    question = random.choice(words)      # the word to be guessed
    ans = "".ljust(len(question), "x")   # the word as shown to the player
    enteredChars = []
    charMatched = 0
    print("***Welcome to Hangman***")
    print("The word looks like: ", ans)
    while guess > 0 and charMatched < len(question):
        c = input("Enter a character: ").lower()   # read one guess from the player
        if c in enteredChars:
            print("Already entered. Try another character")
        elif c in question:
            print("Nice guess. It matched")
            enteredChars.append(c)
            for i in range(0, len(ans)):
                if question[i] == c:
                    ans = ans[0:i] + c + ans[i+1:]
                    charMatched += 1
        else:
            print("Alas it failed.. :((")
            enteredChars.append(c)
            guess -= 1
        print(ans)
    if charMatched == len(question):
        print("****Congratulations you won****")
    else:
        print("You lost..")
        print("The word was: ", question)
    return

hangman()
Output:
PRACTICAL 9
Write a program to implement the Tic-Tac-Toe game.
THEORY:
Implementing a Tic-Tac-Toe game involves creating a game board, displaying the game
interface, handling player moves, checking for win/loss conditions, and repeating the game
until a winner is determined or the game ends in a draw. Here's the theory behind implementing
the Tic-Tac-Toe game:
1. Create a 3x3 board and display the cell numbering used for entering moves.
2. Ask the current player for a move, place the player's symbol (X or O) on the board, and display the board.
3. After each move, check the rows, columns and diagonals for a winning line.
4. Alternate turns until a player wins or all nine cells are filled (a draw).
Code:
def tic_tac_toe():
    print("***TIC TAC TOE***")
    # show the cell numbers (1-9) used to enter a move
    for i in range(3):
        for j in range(3):
            print(i * 3 + j + 1, end=" ")
        print()
    mat = [['*', '*', '*'], ['*', '*', '*'], ['*', '*', '*']]
    print("\nX for player1")
    print("O for player2\n")
    turn = 0          # 0 -> player1 (X), 1 -> player2 (O)
    movesTaken = 0
    while movesTaken < 9:
        pos = int(input("Player" + str(turn + 1) + ", enter a cell number (1-9): "))
        if pos < 1 or pos > 9:
            print("Invalid cell, try again")
            continue
        x, y = (pos - 1) // 3, (pos - 1) % 3
        if mat[x][y] != '*':
            print("Cell already taken, try again")
            continue
        mat[x][y] = 'X' if turn == 0 else 'O'
        for i in range(3):
            for j in range(3):
                print(mat[i][j], end=" ")
            print()
        s = mat[x][y]
        # check the column, row and both diagonals through the last move
        if (mat[0][y] == s and mat[1][y] == s and mat[2][y] == s) or \
           (mat[x][0] == s and mat[x][1] == s and mat[x][2] == s) or \
           (x == y and mat[0][0] == s and mat[1][1] == s and mat[2][2] == s) or \
           (x + y == 2 and mat[0][2] == s and mat[1][1] == s and mat[2][0] == s):
            print("Player1 won" if turn == 0 else "Player2 won")
            break
        movesTaken += 1
        turn = (turn + 1) % 2
    else:
        print("Match drawn")

tic_tac_toe()
Output:
PRACTICAL 10
Write a program to remove stop words for a given passage from a text file
using NLTK.
THEORY:
To remove stop words from a given passage using NLTK (Natural Language Toolkit), you can
follow these steps:
1. Tokenization: Tokenize the given passage into words or tokens. NLTK provides various
tokenizers to achieve this task.
2. Stop Words Removal: Filter out the stop words from the tokenized passage. NLTK provides a
list of commonly used stop words in different languages.
3. Reconstruct the Passage: Join the remaining words back together to reconstruct the passage
without the stop words.
1. Tokenization:
Tokenization is the process of splitting a text into smaller units such as words or sentences.
NLTK provides various tokenizers such as word_tokenize() for word-level tokenization and
sent_tokenize() for sentence-level tokenization.
2. Stop Words Removal:
Stop words are common words that do not carry significant meaning, such as “the”, “is”,
“and”, etc.
NLTK provides pre-defined lists of stop words for different languages, which can be used
to filter out stop words from text.
3. Reconstruct the Passage:
After removing stop words, reconstruct the passage by joining the remaining words
back together.
Optionally, you may perform additional text processing tasks such as stemming
or lemmatization.
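As a quick illustration of steps 1 and 2 above before the full program, the following sketch (the sample text is made up for illustration) shows NLTK's tokenizers and its English stop-word list:

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize
nltk.download('punkt')
nltk.download('stopwords')

text = "AI is fascinating. It is changing the world."   # illustrative text
print(sent_tokenize(text))               # sentence-level tokens
print(word_tokenize(text))               # word-level tokens
print(stopwords.words('english')[:10])   # a few of NLTK's English stop words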
CODE:
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('punkt')

def read_text_file(file_path):
    with open(file_path, 'r') as file:
        return file.read()

def remove_stop_words(text):
    stop_words = set(stopwords.words('english'))
    words = nltk.word_tokenize(text)
    # keep only the tokens that are not stop words
    filtered_words = [w for w in words if w.lower() not in stop_words]
    return " ".join(filtered_words)

file_path = "AI.txt"
passage = read_text_file(file_path)
passage_without_stopwords = remove_stop_words(passage)
print(passage_without_stopwords)
Output:
PRACTICAL 11
Write a program to implement stemming for a given sentence using NLTK.
THEORY:
Stemming is the process of reducing words to their base or root form, typically by removing
suffixes. For example, stemming the word "running" would result in "run". NLTK (Natural
Language Toolkit) provides several stemmers that can be used to perform stemming in Python.
Here's the theory behind implementing stemming for a given sentence using NLTK:
1. Tokenization:
Tokenize the given sentence into words or tokens. NLTK provides various tokenizers for this
purpose.
2. Stemming:
Apply stemming to each token in the sentence to reduce it to its base form. NLTK
provides stemmers like PorterStemmer, LancasterStemmer, SnowballStemmer, etc.
3. Reconstruct the Sentence:
Join the stemmed tokens back together to reconstruct the sentence with stemmed words.
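Step 2 mentions several stemmers; the short sketch below (word choices are illustrative) applies three of them to the same words so their behaviour can be compared:

from nltk.stem import PorterStemmer, LancasterStemmer, SnowballStemmer

porter = PorterStemmer()
lancaster = LancasterStemmer()          # generally the most aggressive of the three
snowball = SnowballStemmer("english")   # an improved version of the Porter algorithm
for word in ["running", "studies", "happiness"]:
    print(word, porter.stem(word), lancaster.stem(word), snowball.stem(word))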
CODE:
import nltk
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
nltk.download('punkt')

stemmer = PorterStemmer()

def stem_sentence(sentence):
    words = word_tokenize(sentence)
    # stem each token and join the stems back into a sentence
    stemmed_sentence = " ".join(stemmer.stem(w) for w in words)
    return stemmed_sentence

sentence = "I love running in the park and playing with my dog"   # example sentence (assumed)
stemmed_sentence = stem_sentence(sentence)
print("Stemmed sentence:", stemmed_sentence)
Output:
PRACTICAL 12
Write a program to perform POS (part of speech) tagging for the given sentence using
NLTK.
THEORY:
Part-of-speech (POS) tagging is the process of marking each word in a sentence with its
corresponding part of speech, such as noun, verb, adjective, etc. NLTK (Natural Language
Toolkit) provides tools and resources to perform POS tagging in Python.
Here's the theory behind implementing POS tagging for a given sentence using NLTK:
1. Tokenization:
Tokenize the given sentence into words or tokens. NLTK provides various tokenizers for this
purpose.
2. POS Tagging:
Apply POS tagging to each token in the sentence to determine its part of speech. NLTK provides
pre-trained models and taggers for POS tagging.
3. Result Interpretation:
Review the POS tags assigned to each word in the sentence to understand their
grammatical roles.
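For step 3, NLTK can also describe what a given Penn Treebank tag means; a minimal sketch (it needs the 'tagsets' resource to be downloaded):

import nltk
nltk.download('tagsets')

nltk.help.upenn_tagset('NN')    # prints the description of the NN (singular common noun) tag
nltk.help.upenn_tagset('VBG')   # prints the description of the VBG (gerund / present participle) tag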
CODE:
import nltk
from nltk.tokenize import word_tokenize
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

def pos_tagging(sentence):
    words = word_tokenize(sentence)
    pos_tags = nltk.pos_tag(words)
    return pos_tags

sentence = "I love running in the park and playing with my dog"
tagged_words = pos_tagging(sentence)
print(tagged_words)
Output:
PRACTICAL 13
Write a program to implement lemmatization for the given sentence using NLTK.
THEORY:
Lemmatization is the process of reducing words to their base or root form, typically by
considering the word's context and part of speech. Unlike stemming, lemmatization ensures that
the resulting word is a valid word in the language. NLTK (Natural Language Toolkit) provides
tools and resources to perform lemmatization in Python.
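To see the difference from stemming in practice, a small sketch (the words are illustrative, and a POS hint is passed explicitly in the second call):

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer
nltk.download('wordnet')

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
print(stemmer.stem("studies"), lemmatizer.lemmatize("studies"))           # studi  vs  study
print(stemmer.stem("better"), lemmatizer.lemmatize("better", pos="a"))    # better vs  good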
Here's the theory behind implementing lemmatization using NLTK:
1. Tokenization:
Tokenize the given text into words or tokens. NLTK provides various tokenizers for this
purpose.
2. Lemmatization:
Apply lemmatization to each token in the text to reduce it to its base form. NLTK
provides WordNetLemmatizer for lemmatization.
3. POS Tagging (Optional):
Lemmatization often requires POS tagging to determine the correct part of speech of each word.
NLTK's POS tagger can be used for this purpose.
4. Reconstruct the Text:
Join the lemmatized tokens back together to reconstruct the text with lemmatized words.
CODE:
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet
from nltk.tokenize import word_tokenize
nltk.download('wordnet')   # punkt and averaged_perceptron_tagger are also required (see earlier practicals)

lemmatizer = WordNetLemmatizer()

def get_wordnet_pos(word):
    # map the first letter of the POS tag to the format WordNetLemmatizer expects
    tag = nltk.pos_tag([word])[0][1][0].upper()
    tag_dict = {"J": wordnet.ADJ, "N": wordnet.NOUN, "V": wordnet.VERB, "R": wordnet.ADV}
    return tag_dict.get(tag, wordnet.NOUN)

def lemmatize_sentence(sentence):
    words = word_tokenize(sentence)
    lemmatized_sentence = " ".join(lemmatizer.lemmatize(w, get_wordnet_pos(w)) for w in words)
    return lemmatized_sentence

original_sentence = "The children are running faster than the mice"   # example sentence (assumed)
lemmatized_sentence = lemmatize_sentence(original_sentence)
print("Lemmatized sentence:", lemmatized_sentence)
Output:
PRACTICAL 14
Write a program for Text Classification for the given sentence using NLTK.
THEORY:
Text classification is the process of categorizing text documents into predefined classes or
categories based on their content. It is a fundamental task in natural language processing
(NLP) and is used in various applications such as sentiment analysis, spam detection, topic
classification, etc. NLTK (Natural Language Toolkit) provides tools and resources to perform
text classification in Python.
Here's the theory behind implementing text classification for a given sentence using NLTK:
1. Preprocessing:
Tokenization: Tokenize the given text into words or tokens.
Stop Words Removal: Remove common stop words that do not carry significant meaning.
Lemmatization or Stemming: Reduce words to their base or root form to normalize the text.
2. Feature Extraction:
Convert the preprocessed text into numerical feature vectors that can be used as input to
machine learning algorithms. This is typically done using techniques such as Bag-of-Words, TF-
IDF (Term Frequency-Inverse Document Frequency), Word Embeddings, etc.
3. Model Training:
Select a suitable machine learning algorithm (e.g., Naive Bayes, Support Vector
Machines, Logistic Regression, etc.).
Train the model using labeled data (i.e., text documents with known categories).
4. Model Evaluation:
Evaluate the trained model using metrics such as accuracy, precision, recall, F1-score, etc., on a
separate test dataset.
5. Prediction:
Use the trained model to predict the category of new or unseen text documents.
CODE:
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
# requires the 'punkt', 'stopwords' and 'wordnet' NLTK resources (downloaded in earlier practicals)

# Illustrative labelled sentences (assumed -- the original manual does not list them).
training_data = [("I love this movie, it is wonderful", "positive"),
                 ("This film was terrible and boring", "negative"),
                 ("An excellent and enjoyable experience", "positive"),
                 ("I hated the plot, very disappointing", "negative")]

def preprocess(sentence):
    lemmatizer = WordNetLemmatizer()
    stop_words = set(stopwords.words('english'))
    words = word_tokenize(sentence.lower())
    # keep alphabetic, non-stop-word tokens and lemmatize them
    return " ".join(lemmatizer.lemmatize(w) for w in words if w.isalpha() and w not in stop_words)

X = [preprocess(text) for text, label in training_data]
y = [label for text, label in training_data]
vectorizer = TfidfVectorizer()
X_vectorized = vectorizer.fit_transform(X)
classifier = MultinomialNB()
classifier.fit(X_vectorized, y)

test_sentence = "What a wonderful and enjoyable film"   # assumed test sentence
preprocessed_test_sentence = preprocess(test_sentence)
test_vectorized = vectorizer.transform([preprocessed_test_sentence])
predicted_label = classifier.predict(test_vectorized)[0]
print("Predicted label:", predicted_label)
Output:
6. Expected Viva Voce Questions
1. What is Prolog, and how does it differ from traditional programming languages?
2. Explain the concept of facts and rules in Prolog.
3. How would you define a predicate in Prolog?
4. What is NLTK, and what is its significance in natural language processing (NLP)?
5. Can you name some common tasks in NLP that NLTK can assist with?
6. How would you install NLTK and download its resources in Python?
7. Define tokenization and explain its importance in NLP.
8. What is stemming, and how does it help in text processing?
9. Demonstrate tokenization and stemming using NLTK.
10. Why is part-of-speech tagging important in NLP?
11. How does NLTK perform part-of-speech tagging?
12. Provide an example of part-of-speech tagging using NLTK.
13. What is sentiment analysis, and what are its applications?
14. How does NLTK perform sentiment analysis?
15. Provide a simple code example of sentiment analysis using NLTK.
16. How can Prolog be integrated with NLTK for NLP tasks?
17. Provide an example of using Prolog rules to enhance NLP functionality in NLTK.
18. Discuss the advantages and challenges of integrating Prolog and NLTK in AI
applications.
19. Discuss some advanced NLP techniques beyond basic text processing.
20. How can NLTK be extended to incorporate more advanced NLP capabilities?
21. Provide examples of NLTK extensions or modules for advanced NLP tasks.
22. What are some ethical concerns related to AI and NLP applications?
23. How can AI developers address bias and fairness issues in NLP models?
24. Discuss the importance of responsible AI development in the context of NLP.
7. REFERENCES
Textbooks:
1. "Natural Language Processing with Python" by Steven Bird, Ewan Klein, and Edward Loper.
2. "Programming in Prolog: Using the ISO Standard" by William F. Clocksin and Christopher
S. Mellish.
Research Papers:
1. Marcus, M. P., Santorini, B., & Marcinkiewicz, M. A. (1993). Building a large annotated
corpus of English: The Penn Treebank. Computational Linguistics, 19(2), 313-330.
2. Manning, C. D., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S. J., & McClosky, D. (2014).
The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd annual
meeting of the Association for Computational Linguistics: system demonstrations (pp. 55-60).
Online Resources: