EX NO: 1 Study of Prolog
DATE:
AIM:
To understand the basics of PROLOG (Programming in Logic).
PROLOG-PROGRAMMING IN LOGIC:
PROLOG stands for Programming In Logic, an idea that emerged in the early 1970s to use logic as a programming language. The early developers of this idea included Robert Kowalski at Edinburgh (on the theoretical side), Maarten van Emden at Edinburgh (experimental demonstration) and Alain Colmerauer at Marseilles (implementation). David H. D. Warren's efficient implementation at Edinburgh in the mid-1970s greatly contributed to the popularity of PROLOG. PROLOG is a programming language centred around a small set of basic mechanisms, including pattern matching, tree-based data structuring and automatic backtracking. This small set constitutes a surprisingly powerful and flexible programming framework. PROLOG is especially well suited for problems that involve objects, in particular structured objects, and relations between them.
SYMBOLIC LANGUAGE
There are well-known examples of symbolic computation whose implementation in other standard languages took tens of pages of indigestible code. When the same algorithms were implemented in PROLOG, the result was a crystal-clear program easily fitting on one page.
FOR EXAMPLE:
a) FACTS:
Some facts about family relationships could be written as:
sister(sue, bill).
parent(ann, sam).
male(jo).
female(riya).
b) RULES:
To represent the general rule for grandfather, we write:
grandfather(X, Z) :-
    parent(X, Y),
    parent(Y, Z),
    male(X).
c) QUERIES:
Given a database of facts and rules such as that above, we may make queries by typing them after the query prompt '?-', for example:
?- parent(X, sam).
X = ann
?- grandfather(X, Y).
X = jo, Y = sam
An expert system is a set of programs that manipulates encoded knowledge to solve problems in a specialized domain that normally requires human expertise. An expert system's knowledge is obtained from expert sources such as texts, journal articles, databases, etc., and encoded in a form suitable for the system to use in its inference or reasoning processes. Once a sufficient body of expert knowledge has been acquired, it must be encoded in some form, loaded into the knowledge base, then tested, and refined continually throughout the life of the system. PROLOG serves as a powerful language for designing expert systems because of the following features.
META PROGRAMMING
A meta-program is a program that takes other programs as data. Interpreters and compilers are examples of meta-programs. A meta-interpreter is a particular kind of meta-program: an interpreter for a language written in that language. So a PROLOG meta-interpreter is an interpreter for PROLOG, itself written in PROLOG. Due to its symbol-manipulation capabilities, PROLOG is a powerful language for meta-programming. Therefore, it is often used as an implementation language for other languages. PROLOG is particularly suitable as a language for rapid prototyping, where we are interested in implementing new ideas quickly.
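For example, the well-known "vanilla" meta-interpreter for pure Prolog fits in three clauses (a standard textbook sketch, here named solve/1):
solve(true).
solve((A, B)) :- solve(A), solve(B).
solve(H) :- clause(H, Body), solve(Body).
Given clauses loaded in the usual way, ?- solve(G) succeeds exactly when the plain query ?- G succeeds.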
PROGRAM:
1. Sum:
sum(X, Y) :-
    S is X + Y,
    write(S).
2. Square root problem:
square_root(Y) :-
    Z is sqrt(Y),
    write(Z).
3. Print numbers:
print_numbers(10) :-
    write(10), nl.
print_numbers(X) :-
    X < 10,
    write(X), nl,
    Next is X + 1,
    print_numbers(Next).
4. Quadratic equation:
% Example usage:
% quadratic_solver(1, -3, 2, X1, X2).
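The solver body is not included above; a minimal sketch consistent with the example query (assuming real roots, via the standard discriminant formula) could be:
quadratic_solver(A, B, C, X1, X2) :-
    D is B*B - 4*A*C,
    D >= 0,
    X1 is (-B + sqrt(D)) / (2*A),
    X2 is (-B - sqrt(D)) / (2*A).
For the example query above this binds X1 = 2.0 and X2 = 1.0.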
OUTPUT:
RESULT:
Thus, the above programs were executed successfully.
EX NO: 2 Write simple facts for the statements using PROLOG
DATE:
Aim:
To write simple facts for the given statements using PROLOG.
Algorithm:
Step 2: Prolog programs describe relations, defined by means of clauses. Pure Prolog is restricted to Horn clauses. There are two types of clauses: facts and rules. A rule is of the form
Head :- Body.
and is read as "Head is true if Body is true".
Step 3: A rule's body consists of calls to predicates, which are called the rule's goals.
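For instance, in the syntax above, the (illustrative) rule
mortal(X) :- man(X).
has the head mortal(X) and a body consisting of the single goal man(X), and reads "X is mortal if X is a man".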
PROGRAM:
% Facts
likes(ram, mango).
likes(bill, cindy).
red(rose).
owns(john, gold).
% Queries
?- likes(ram, What).
?- likes(Who, cindy).
?- red(What).
?- owns(Who, What).
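With the facts above, these queries bind as follows:
% ?- likes(ram, What).    What = mango
% ?- likes(Who, cindy).   Who = bill
% ?- red(What).           What = rose
% ?- owns(Who, What).     Who = john, What = gold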
Output:
RESULT:
Thus, the above program was executed successfully.
Ex.No:3 Write a program to solve the Monkey Banana problem.
DATE:
Aim:
To write a program to solve the Monkey Banana problem using PROLOG.
Algorithm :
Production Rules:
can_reach(X, Y) :- clever(X), close(X, Y).
get_on(X, Y) :- can_climb(X, Y).
under(Y, Z) :- in_room(X), in_room(Y), in_room(Z), can_climb(X, Y, Z).
close(X, Z) :- get_on(X, Y), under(Y, Z).
close(X, Z) :- tall(Z).
Program:
:- dynamic at/2.
:- dynamic monkey_has/2.
% Initial conditions
at(monkey, door).
at(banana, window).
at(box, floor).
monkey_has(banana, no).
% The monkey can push the box from one place to another
push(From, To) :-
at(box, From),
at(monkey, From),
retract(at(box, From)),
assert(at(box, To)),
move(From, To),
write('Monkey pushes box from '), write(From), write(' to '), writeln(To).
% Solution strategy: walk to the box, push it under the banana, climb, grab
solve :-
    move(door, floor),
    push(floor, window),
    climb,
    grab,
    goal.
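solve/0 above calls move/2, climb/0, grab/0 and goal/0, which are not defined in the listing; one minimal way to complete them (an assumption about the intended behaviour, in the same style as push/2) is:
% Move the monkey from one place to another (assumed helper)
move(From, To) :-
    retract(at(monkey, From)),
    assert(at(monkey, To)),
    write('Monkey moves from '), write(From), write(' to '), writeln(To).
% The monkey climbs onto the box when both are in the same place
climb :-
    at(monkey, Place),
    at(box, Place),
    writeln('Monkey climbs onto the box').
% The monkey grabs the banana once the box is under it
grab :-
    at(box, Place),
    at(banana, Place),
    retract(monkey_has(banana, no)),
    assert(monkey_has(banana, yes)),
    writeln('Monkey grabs the banana').
% The goal is reached when the monkey has the banana
goal :-
    monkey_has(banana, yes),
    writeln('Goal: the monkey has the banana').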
Output:
RESULT:
Thus, the above program was executed successfully.
EX.4 Tower of Hanoi
Aim:
To write a program to solve the Tower of Hanoi puzzle.
Procedure:
The object of this famous puzzle is to move N disks from the left peg to the right peg using the center peg as an auxiliary holding peg. At no time can a larger disk be placed upon a smaller disk. The following diagram depicts the starting setup for N = 3 disks.
Production Rules
hanoi(N) → move(N, left, middle, right).
move(1, A, _, C) → inform(A, C), !.
move(N, A, B, C) → N1 is N - 1, move(N1, A, C, B), inform(A, C), move(N1, B, A, C).
Diagram:-
Parse Tree:-
hanoi(3)
Program:
% Base case: no moves are needed to transfer zero disks.
hanoi(0, _, _, _) :- !.
% Recursive case: move N disks from Source to Destination with the help of Auxiliary.
hanoi(N, Source, Auxiliary, Destination) :-
    N > 0,
    M is N - 1,
    hanoi(M, Source, Destination, Auxiliary),
    move(Source, Destination),
    hanoi(M, Auxiliary, Source, Destination).
% Print a single disk move (assumed helper, not in the original listing).
move(Source, Destination) :-
    write('Move top disk from '), write(Source),
    write(' to '), writeln(Destination).
% Example query:
% ?- hanoi(3, left, center, right).
OUTPUT
RESULT:
Thus, the above program was executed successfully.
EX.NO. 5 Solve 8-Puzzle problem
Aim:
To write a program to solve the 8-Puzzle problem.
Procedure:
The title of this section refers to a familiar and popular sliding tile puzzle that has been around for at least forty years. The most frequent older versions of this puzzle have numbers or letters on the sliding tiles, and the player is supposed to slide tiles into new positions in order to realign a scrambled puzzle back into a goal alignment. For illustration, we use the 3 x 3 8-tile version, which is depicted here in goal configuration:
1 2 3
8   4
7 6 5
To represent these puzzle "states" we will use a Prolog term representation employing '/' as a
separator. The positions of the tiles are listed (separated by '/') from top to bottom, and from left to
right. Use "0" to represent the empty tile (space). For example, the goal is ... goal(1/2/3/8/0/4/7/6/5).
The heuristic function we use here is a combination of two other estimators: p_fcn, the
Manhattan distance function, and s_fcn, the sequence function, all as explained in Nilsson
(1980), which estimates how badly out-of-sequence the tiles are (around the outside).
The Prolog program from the previous section and the program outlined in this section can be used
as an 8-puzzle solver.
?- solve(0/8/1/2/4/3/7/6/5, S).
Program:
% 8-puzzle solver
% Example usage:
% ?- solve(0/8/1/2/4/3/7/6/5, S).
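Only the usage comment survives above; a minimal iterative-deepening sketch for the '/'-separated representation (the helper names state_list/2, neighbour/2 and set_nth1/4 are my own, and the p_fcn/s_fcn heuristics from the text are not used here) could be:
% Goal configuration
goal(1/2/3/8/0/4/7/6/5).
% Convert the '/'-separated state term to a flat list
state_list(A/B/C/D/E/F/G/H/I, [A,B,C,D,E,F,G,H,I]).
% Board positions that are orthogonally adjacent (1..9, row by row)
neighbour(1,2). neighbour(1,4). neighbour(2,3). neighbour(2,5).
neighbour(3,6). neighbour(4,5). neighbour(4,7). neighbour(5,6).
neighbour(5,8). neighbour(6,9). neighbour(7,8). neighbour(8,9).
% Replace the N-th element of a list (1-based)
set_nth1([_|T], 1, X, [X|T]).
set_nth1([H|T], N, X, [H|T2]) :- N > 1, N1 is N - 1, set_nth1(T, N1, X, T2).
% One legal move: slide a tile adjacent to the blank (0) into the blank
move(S, T) :-
    state_list(S, L),
    nth1(I, L, 0),
    ( neighbour(I, J) ; neighbour(J, I) ),
    nth1(J, L, Tile),
    set_nth1(L, I, Tile, L1),
    set_nth1(L1, J, 0, L2),
    state_list(T, L2).
% Iterative deepening: try ever longer move sequences (slow on deep instances)
solve(Start, Solution) :-
    length(Solution, _),
    path(Start, Solution).
path(S, []) :- goal(S).
path(S, [T|Rest]) :- move(S, T), path(T, Rest).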
OUTPUT:
RESULT:
Thus, the above program was executed successfully.
Ex:No: 06 4-Queen problem
Aim:
To write a program to solve the 4-Queens problem.
Algorithm:
Step 1: The N-Queens problem is to place N queens on an N*N chessboard in such a manner that no two queens attack each other by being in the same row, column or diagonal.
Step 2: Before solving the problem, let us look at the movement of the queen in chess.
Step 3: In the game of chess, a queen can move any number of steps in any direction: vertical, horizontal, and diagonal.
Step 4: In the 4-Queens problem we have to place 4 queens, say Q1, Q2, Q3 and Q4, on the chessboard in such a way that no two attack each other.
Program:
% A solution is a list of row numbers, one per column.
safe([]).
safe([H|T]) :-
    not_threatened(H, T, 1),
    safe(T).
not_threatened(_, [], _).
not_threatened(X, [Y|T], N) :-
    X =\= Y,
    X =\= Y + N,
    X =\= Y - N,
    N1 is N + 1,
    not_threatened(X, T, N1).
template([1,2,3,4]).
queens(Solution) :-
    template(Template),
    permute(Template, Solution),
    safe(Solution).
permute([], []).
permute(L, [H|T]) :-
    append(V, [H|U], L),
    append(V, U, W),
    permute(W, T).
?- queens(Solution).
Output:
RESULT:
Thus, the above program was executed successfully.
Ex:No: 07 Traveling salesman problem.
AIM:-
To write a program to solve the traveling salesman problem.
Program:
% Define cities and distances
distance(city1, city2, 10).
distance(city1, city3, 15).
distance(city1, city4, 20).
distance(city2, city3, 35).
distance(city2, city4, 25).
distance(city3, city4, 30).
calculate_cost([_], 0).
% Example query
% ?- tsp(Path, Cost).
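Only the distance facts and the cost base case are given above; a minimal completion (assuming symmetric distances and a brute-force search over city permutations, which is adequate at this size) could be:
% Distances are symmetric
dist(A, B, D) :- distance(A, B, D) ; distance(B, A, D).
% Cost of a path is the sum of its leg distances
calculate_cost([A, B | Rest], Cost) :-
    dist(A, B, D),
    calculate_cost([B | Rest], RestCost),
    Cost is D + RestCost.
% tsp/2 finds the cheapest round trip starting and ending at city1
tsp(Path, Cost) :-
    findall(C-P,
            ( permutation([city2, city3, city4], Perm),
              append([city1 | Perm], [city1], P),
              calculate_cost(P, C) ),
            Tours),
    sort(Tours, [Cost-Path | _]).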
OUTPUT
RESULT:
Thus, the above program was executed successfully.
Ex:No: 08 BREADTH FIRST SEARCH
Aim:
To write a program to implement Breadth First Search.
Algorithm:
Step 1: Create a variable called NODE-LIST and set it to the initial state.
Step 2: Until a goal state is found or NODE-LIST is empty, do:
a) Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit.
b) For each way that each rule can match the state described in E do:
1) Apply the rule to generate a new state.
2) If the new state is a goal state, quit and return this state.
3) Otherwise, add the new state to the end of NODE-LIST.
Step 3: Print the output as the path traversed.
Step 4: Exit.
Program:
% Define edges in the graph
edge(a, b).
edge(b, c).
edge(b, d).
edge(c, e).
edge(d, f).
edge(e, g).
edge(f, g).
% Example query
% ?- bfs(a, g).
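The bfs/2 predicate itself is not listed; a minimal queue-based sketch over the edge/2 facts above (breadth-first because newly generated paths are appended at the back of the queue) could be:
% bfs(Start, Goal): find and print a shortest path from Start to Goal
bfs(Start, Goal) :-
    bfs_queue([[Start]], Goal, RevPath),
    reverse(RevPath, Path),
    write('Path: '), writeln(Path).
% Each queue entry is a partial path stored in reverse order
bfs_queue([[Goal | Rest] | _], Goal, [Goal | Rest]).
bfs_queue([[Node | Rest] | Queue], Goal, Path) :-
    Node \== Goal,
    findall([Next, Node | Rest],
            ( edge(Node, Next), \+ member(Next, [Node | Rest]) ),
            Children),
    append(Queue, Children, NewQueue),
    bfs_queue(NewQueue, Goal, Path).
For the graph above, ?- bfs(a, g). prints Path: [a,b,c,e,g].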
OUTPUT
RESULT:
Thus, the above program was executed successfully.
Exno:09 DEPTH FIRST SEARCH
Aim:
To write a program to implement Depth first search.
Algorithm:
Program:
dfs_process_neighbours([], _, _, _).
% Example query
% ?- dfs(a, g).
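The rest of the listing is missing; a minimal depth-first path finder over the edge/2 facts from the previous exercise (this sketch does not use the dfs_process_neighbours/4 stub above) could be:
% dfs(Start, Goal): find and print a path from Start to Goal, depth first
dfs(Start, Goal) :-
    dfs_path(Start, Goal, [Start], Path),
    write('Path: '), writeln(Path).
% Accumulate visited nodes in reverse order to avoid cycles
dfs_path(Goal, Goal, Visited, Path) :-
    reverse(Visited, Path).
dfs_path(Node, Goal, Visited, Path) :-
    edge(Node, Next),
    \+ member(Next, Visited),
    dfs_path(Next, Goal, [Next | Visited], Path).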
Output:
RESULT:
Thus, the above program was executed successfully.
Exno:10 A* search algorithm
Aim:
To write a program to implement the A* search algorithm.
Algorithm:
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise:
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute its evaluation function and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, then attach it to the back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
Program:
% A* search algorithm
% Usage: astar(Start, Goal, Path)
% Move function: Generates successors and adds them to the open list
move(Current, G, _, Path, node(Next, NewG, H, [Current | Path])) :-
transition(Current, Next, Cost),
\+ member(Next, Path),
NewG is G + Cost,
heuristic(Next, H).
% Example connected predicate (replace this with your actual connected predicate)
connected(A, B, Cost) :-
% Define your connections and costs here
% For example:
member((A, B, Cost), [(a, b, 1), (b, c, 2), (a, d, 3)]).
% Example usage:
% astar(a, c, Path)
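The astar/3 driver and its remaining helpers are not shown above; a minimal best-first completion consistent with the move/5 clause (the zero heuristic and the best/3 helper below are my assumptions) could be:
% Start the search with the initial node on the open list
astar(Start, Goal, Path) :-
    heuristic(Start, H),
    astar_loop([node(Start, 0, H, [])], Goal, RevPath),
    reverse(RevPath, Path).
% Stop when the cheapest open node is the goal
astar_loop(Open, Goal, [Goal | Path]) :-
    best(Open, node(Goal, _, _, Path), _).
astar_loop(Open, Goal, Path) :-
    best(Open, node(Current, G, H, P), Rest),
    Current \== Goal,
    findall(Child, move(Current, G, H, P, Child), Children),
    append(Rest, Children, NewOpen),
    astar_loop(NewOpen, Goal, Path).
% best/3 removes the node with the smallest f = g + h from the open list
best([N], N, []).
best([node(S1, G1, H1, P1) | T], Best, Rest) :-
    best(T, node(S2, G2, H2, P2), R2),
    F1 is G1 + H1,
    F2 is G2 + H2,
    ( F1 =< F2
    -> Best = node(S1, G1, H1, P1), Rest = T
    ;  Best = node(S2, G2, H2, P2), Rest = [node(S1, G1, H1, P1) | R2] ).
% Transitions follow the connected/3 facts
transition(A, B, Cost) :-
    connected(A, B, Cost).
% Placeholder heuristic (assumed trivially admissible)
heuristic(_, 0).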
Example:
In this example we will traverse the given graph using the A* algorithm. The heuristic value of each state is given in the table below, so we will calculate f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach the node from the start node. Here we will use the OPEN and CLOSED lists.
Output:
Initialization: {(S,5)}
Iteration 1: {(S->A, 4), (S->G, 10)}
Iteration 2: {(S->A->C, 4), (S->A->B, 7), (S->G, 10)}
Iteration 3: {(S->A->C->G, 6), (S->A->C->D, 11), (S->A->B, 7), (S->G, 10)}
Iteration 4: gives the final result as S->A->C->G; it provides the optimal path with cost 6.
RESULT:
Thus, the above program was executed successfully.
Exno:11 AO* search algorithm
Aim:
To write a program to implement the AO* search algorithm.
Algorithm:
Step 1: Place the starting node into OPEN.
Step 2: Compute the most promising solution tree, say T0.
Step 3: Select a node n that is both on OPEN and a member of T0. Remove it from OPEN and place it in CLOSED.
Step 4: If n is a terminal node, then label n as solved and label all of its ancestors as solved. If the starting node is marked as solved, then return success and exit.
Step 5: If n is not a solvable node, then mark n as unsolvable. If the starting node is marked as unsolvable, then return failure and exit.
Step 6: Expand n. Find all its successors, find their h(n) values, and push them into OPEN.
Step 7: Return to Step 2.
Step 8: Exit.
Program:
Example:
Here, in the example graph, the value given below each node is its heuristic value, i.e., h(n). Edge length is considered as 1.
Step 1:
f(A -> C+D) = g(C) + h(C) + g(D) + h(D)
            = 1 + 2 + 1 + 4    (C and D are added together because they are joined by an AND arc)
            = 8
So, by calculation, the A -> B path is chosen, as it is the minimum, that is f(A -> B).
Step 2:
Step 3:
% aostar(Start, Goal, Path): best-first search driver (simplified AO*-style sketch)
aostar(Start, Goal, Path) :-
    heuristic(Start, H),
    search([node(Start, 0, H, [])], Goal, RevPath),
    reverse(RevPath, Path).
% Stop when the cheapest node on the sorted open list is the goal
search(OpenList, Goal, [Goal | Path]) :-
    sort_open_list(OpenList, [node(Goal, _, _, Path) | _]).
search(OpenList, Goal, Path) :-
    sort_open_list(OpenList, [node(Current, G, H, P) | Rest]),
    Current \== Goal,
    findall(Child, move(Current, G, H, P, Child), Children),
    append(Rest, Children, NewOpen),
    search(NewOpen, Goal, Path).
% Move function: generates successors and adds them to the open list
move(Current, G, _, Path, node(Next, NewG, H, [Current | Path])) :-
    transition(Current, Next, Cost),
    \+ member(Next, Path),
    NewG is G + Cost,
    heuristic(Next, H).
% Transition function: define the possible transitions between nodes
transition(A, B, Cost) :-
    connected(A, B, Cost).
% Note: this is a placeholder, replace it with an appropriate heuristic for your problem
heuristic(_, 0).
% Sort the open list by f(n) = g(n) + h(n)
sort_open_list(List, SortedList) :-
    predsort(compare_f, List, SortedList).
compare_f(Order, node(_, G1, H1, _), node(_, G2, H2, _)) :-
    F1 is G1 + H1,
    F2 is G2 + H2,
    ( F1 < F2 -> Order = (<) ; Order = (>) ).
% Example connected predicate (replace this with your actual connected predicate)
connected(A, B, Cost) :-
    member((A, B, Cost), [(a, b, 1), (b, c, 2), (a, d, 3)]).
% Example usage:
% ?- aostar(a, c, Path).
OUTPUT:
RESULT:
Thus, the above program was executed successfully.
Ex. 1: Linear Regression
Aim:
To implement simple linear regression using Python.
Algorithm:
Step1: Importing the dataset
Step 2: Data pre-processing
Step 3: Splitting the test and train sets
Step 4: Fitting the linear regression model to the training set
Step 5: Predicting test results
Step 6: Visualizing the test results
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
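Only the imports are listed above; a minimal end-to-end sketch continuing from them (the synthetic data below is an assumption, substitute the dataset from Step 1) could be:
# Step 1-2: create (or load) and prepare the data
X = np.arange(1, 21, dtype=float).reshape(-1, 1)   # single feature
y = 2.5 * X.ravel() + np.random.randn(20)          # noisy linear target

# Step 3: split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Step 4: fit the linear regression model on the training set
model = LinearRegression()
model.fit(X_train, y_train)

# Step 5: predict on the test set
y_pred = model.predict(X_test)

# Step 6: visualize the test results
order = X_test.ravel().argsort()   # sort so the line is drawn left to right
plt.scatter(X_test, y_test, label='actual')
plt.plot(X_test[order], y_pred[order], color='red', label='predicted')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.show()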
Output:
RESULT:
Thus, the above program was executed successfully.
Ex. 2: Candidate-Elimination Algorithm
Aim:
For a given set of training data examples stored in a .csv file, implement and demonstrate the Candidate-Elimination algorithm to output a description of the set of all hypotheses consistent with the training examples.
The Candidate-Elimination algorithm computes the version space containing all hypotheses from H that are consistent with an observed sequence of training examples. It begins by initializing the version space to the set of all hypotheses in H; that is, by initializing the G boundary set to contain the most general hypothesis in H and the S boundary set to contain the most specific hypothesis:
G0 = {(?, ?, ?, ?, ?, ?)}
S0 = {(Θ, Θ, Θ, Θ, Θ, Θ)}
For each training example, these S and G boundary sets are generalized and specialized,
respectively, to eliminate from the version space any hypothesis found inconsistent with the
new training examples. After execution of all the training examples, the computed version
space contains all the hypotheses consistent with these training examples. The algorithm is
summarized as below:
For each training example d:
- If d is a positive example:
  - Remove from G any hypothesis inconsistent with d.
  - For each hypothesis s in S that is not consistent with d:
    - Remove s from S.
    - Add to S all minimal generalizations h of s such that h is consistent with d and some member of G is more general than h.
    - Remove from S any hypothesis that is more general than another hypothesis in S.
- If d is a negative example:
  - Remove from S any hypothesis inconsistent with d.
  - For each hypothesis g in G that is not consistent with d:
    - Remove g from G.
    - Add to G all minimal specializations h of g such that h is consistent with d and some member of S is more specific than h.
    - Remove from G any hypothesis that is less general than another hypothesis in G.
For the implementation of the Candidate-Elimination algorithm, the EnjoySport data set can be used from the UCI repository. The data set contains around 700 data samples; a few of the samples are shown in the table below.
Sky Temp Humid Wind Water Forecast EnjoySpt
Sunny Warm Normal Strong Warm Same Yes
Sunny Warm High Strong Warm Same Yes
Rainy Cold High Strong Warm Change No
Sunny Warm High Strong Cool Change Yes
PROGRAM:
import numpy as np
import pandas as pd
data = pd.read_csv('enjoy_sport.csv')
concepts = np.array(data.iloc[:,0:-1])
target = np.array(data.iloc[:,-1])
def learn(concepts, target):
    specific_h = concepts[0].copy()
    print("initialization of specific_h \n", specific_h)
    general_h = [["?" for i in range(len(specific_h))]
                 for i in range(len(specific_h))]
    print("initialization of general_h \n", general_h)
    for i, h in enumerate(concepts):
        label = str(target[i]).lower()   # normalise 'Yes'/'yes' before comparing
        if label == "yes":
            print("If instance is Positive ")
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    specific_h[x] = '?'
                    general_h[x][x] = '?'
        if label == "no":
            print("If instance is Negative ")
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    general_h[x][x] = specific_h[x]
                else:
                    general_h[x][x] = '?'
    # drop the rows of general_h that stayed fully general
    general_h = [g for g in general_h if g != ['?'] * len(specific_h)]
    return specific_h, general_h

s_final, g_final = learn(concepts, target)
print("Final Specific_h:", s_final)
print("Final General_h:", g_final)
OUTPUT
RESULT:
Thus, the above program was executed successfully.
Ex. 3: Decision Tree Based ID3 Algorithm
Aim:
Write a program to demonstrate the working of the decision tree based ID3
algorithm. Use an appropriate data set for building the decision tree and apply this
knowledge to classify a new sample.
Procedure:
1) In the ID3 algorithm, begin with the original set of attributes at the root node.
2) On each iteration of the algorithm, iterate through every unused attribute of the remaining set and calculate the entropy (or information gain) of that attribute.
3) Then, select the attribute which has the smallest entropy (or largest information gain) value.
4) The set of remaining attributes is then split by the selected attribute to produce subsets of the data.
5) The algorithm continues to recurse on each subset, considering only attributes never selected before.
Dataset Details:
ID3 (Learning Set S, Attribute Set A, Attribute Values V): Return Decision Tree
Begin
Load learning set S first; create decision tree root node 'rootNode' and add learning set S into the root node as its subset.
For rootNode:
1) Calculate the entropy of every attribute using the dataset.
2) Split the set into subsets using the attribute for which entropy is minimum (or information gain is maximum).
End
This approach employs a top-down, greedy search through the space of possible decision trees.
1. Calculate the entropy of every attribute using the data set S, using the formula
   Entropy(S) = -p(a) log2 p(a) - p(b) log2 p(b)
2. Split the set S into subsets using the attribute for which the resulting entropy (after splitting) is minimum (or, equivalently, information gain is maximum), using the formula
   Gain(S, A) = Entropy(S) - Σv (|Sv| / |S|) Entropy(Sv), where the sum runs over each value v of attribute A.
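For example, a collection S with 9 positive and 5 negative examples has
Entropy(S) = -(9/14) log2(9/14) - (5/14) log2(5/14) ≈ 0.940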
Program:
import pandas as pd
import numpy as np
def entropy(data):
    """Calculates the entropy of a series of labels."""
    values, counts = zip(*data.value_counts().sort_values(ascending=False).items())
    probabilities = np.array(counts) / np.sum(counts)
    return -np.sum(probabilities * np.log2(probabilities))

def information_gain(data, attribute, target_attribute='class'):
    """Entropy reduction obtained by splitting on the attribute."""
    total = entropy(data[target_attribute])
    weighted = sum((len(subset) / len(data)) * entropy(subset[target_attribute])
                   for _, subset in data.groupby(attribute))
    return total - weighted

# (the function head and first branch below are reconstructed;
#  the target column is assumed to be named 'class')
def id3(data, attributes, target_attribute='class', parent_info='', level=0):
    if len(data[target_attribute].unique()) == 1:
        # all examples carry the same label: return it as a leaf
        return data[target_attribute].iloc[0]
    elif len(attributes) == 0:
        return data[target_attribute].value_counts().idxmax()
    else:
        best_attribute = max(attributes,
                             key=lambda a: information_gain(data, a, target_attribute))
        tree = {best_attribute: {"parent_info": parent_info, "level": level}}
        attributes.remove(best_attribute)
        for value in data[best_attribute].unique():
            subset = data[data[best_attribute] == value]
            subtree = id3(subset, attributes.copy(), target_attribute,
                          parent_info=f"{best_attribute} EQUALS {value}", level=level+1)
            tree[best_attribute][value] = subtree
        return tree
Input: Input to the decision tree algorithm is a dataset stored in a .csv file, which consists of attributes, examples, and a target concept.
Output
RESULT:
Thus, the above program was executed successfully.
Aim:
Apply the EM algorithm to cluster a set of data stored in a .csv file. Use the same data set for clustering using the k-Means algorithm. Compare the results of these two algorithms and comment on the quality of clustering. Use Python ML library classes/APIs in the program.
Unsupervised Learning:
In machine learning, unsupervised learning is a class of problems in which one seeks to determine how the data are organized. It is distinguished from supervised learning (and reinforcement learning) in that the learner is given only unlabeled examples.
Dataset:
➢ Iris dataset
➢ Number of instances:150
Clustering Algorithms -
1. K-means clustering:
It is a type of unsupervised learning, which is used when you have unlabeled data
(i.e., data without defined categories or groups).
The goal of this algorithm is to find groups in the data, with the number of groups
represented by the variable K.
Data points are clustered based on feature similarity.
The results of the K-means clustering algorithm are:
The centroids of the K clusters, which can be used to label new data
Labels for the training data (each data point is assigned to a single cluster)
Each centroid of a cluster is a collection of feature values which define the resulting groups.
Examining the centroid feature weights can be used to qualitatively interpret what kind of group each
cluster represents.
Each data point xi = (xi1, xi2, ..., xir) is a vector in a real-valued space X ⊆ R^r, where r is the number of attributes in the data.
The k-means algorithm partitions the given data into k clusters with each cluster having a center called
a centroid.
Algorithm K-means(k, D)
1. Choose k data points as the initial centroids (cluster centers).
2. Repeat steps 3 and 4 until the stopping criterion is met:
3. For each data point x ∈ D, compute its distance to each centroid and assign x to the cluster of the closest centroid.
4. Re-compute the centroids using the current cluster memberships.
2. Expectation–maximization
➢ Learn initial probabilistic model by estimating model parameters θ from randomly labeled
data
1. Expectation (E-step): Compute P(ci | E) for each example given the current model, and
probabilistically re-label the examples based on these posterior probability estimates.
2. Maximization (M-step): Re-estimate the model parameters, θ, from the probabilistically re-
labeled data.
Each cluster is modeled by a multivariate Gaussian density
N(x; μ, Σ) = (2π)^(-k/2) |Σ|^(-1/2) exp(-(1/2)(x-μ)^T Σ^(-1) (x-μ)),
where μ is the mean, Σ the covariance matrix, and k is the dimension of the space where x takes values.
Algorithm:
PROGRAM:
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
import matplotlib.pyplot as plt
# Load the iris data; keep a small sample so the plots stay readable
iris = load_iris(as_frame=True)
smaller_data = iris.data.head(10)
k = 3
# K-Means Clustering
km = KMeans(n_clusters=k, n_init=10)
km_labels = km.fit_predict(smaller_data)
# EM Clustering
gmm = GaussianMixture(n_components=k)
gmm_labels = gmm.fit_predict(smaller_data)
# Visualization
# Graph 1: Line graph
plt.figure(1)
plt.plot(smaller_data["sepal length (cm)"], smaller_data["petal length (cm)"])
plt.title("Line Graph (10 Samples)")
plt.xlabel("Sepal Length (cm)")
plt.ylabel("Petal Length (cm)")
plt.show()
OUTPUT
RESULT:
Thus, the above program was executed successfully.
Ex.5 K-nearest neighbour algorithm
Aim:
To implement the k-nearest neighbour algorithm to classify the iris data set.
The k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression [1]. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression.
k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated
locally and all computation is deferred until classification. The k-NN algorithm is among the simplest
of all machine learning algorithms.
The kNN task can be broken down into writing 3 primary functions:
1. Calculate the distance between two points.
2. Find the k nearest neighbours based on these distances.
3. Predict a class label by majority vote among the k neighbours.
The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.
Attribute Information:
1. sepal length in cm
2. sepal width in cm
3. petal length in cm
4. petal width in cm
5. class: Iris Setosa, Iris Versicolour, Iris Virginica
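Program:
No program listing survives in this section; a minimal sketch using scikit-learn's KNeighborsClassifier on the iris data (k = 3 is an assumed choice) could be:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Load the iris data and split into train/test sets
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit a k-NN classifier with k = 3 neighbours
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# Predict and report accuracy on the held-out test set
y_pred = knn.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))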
OUTPUT
RESULT:
Thus, the above program was executed successfully.