
EX NO: 1 STUDY OF PROLOG

DATE:

AIM:
To understand the basics of PROLOG (Programming in Logic).

PROLOG-PROGRAMMING IN LOGIC:

PROLOG stands for Programming in Logic, an idea that emerged in the early 1970s to use logic as a
programming language. The early developers of this idea included Robert Kowalski at Edinburgh (on the
theoretical side), Maarten van Emden at Edinburgh (experimental demonstration) and Alain Colmerauer at
Marseilles (implementation). David H. D. Warren's efficient implementation at Edinburgh in the mid-1970s
greatly contributed to the popularity of PROLOG. PROLOG is a programming language centred around a
small set of basic mechanisms, including pattern matching, tree-based data structuring and automatic
backtracking. This small set constitutes a surprisingly powerful and flexible programming framework.
PROLOG is especially well suited for problems that involve objects, in particular structured objects, and
relations between them.

SYMBOLIC LANGUAGE

PROLOG is a programming language for symbolic, non-numeric computation. It is especially well suited
for solving problems that involve objects and relations between objects. For example, it is an easy
exercise in PROLOG to express a spatial relationship between objects, such as "the blue sphere is behind
the green one". It is also easy to state a more general rule: if object X is closer to the observer than
object Y, and object Y is closer than Z, then X must be closer than Z. PROLOG can reason about the
spatial relationships and their consistency with respect to the general rule. Features like this make
PROLOG a powerful language for Artificial Intelligence (AI) and non-numerical programming.
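As a minimal illustration (the object names here are assumptions, not taken from the text), these relationships and the general rule can be written directly in PROLOG:

closer(blue_sphere, green_sphere).
closer(green_sphere, red_cube).

% X must be closer than Z if X is closer than Y and Y is closer than Z.
closer_than(X, Z) :- closer(X, Z).
closer_than(X, Z) :- closer(X, Y), closer_than(Y, Z).

% Example query: ?- closer_than(blue_sphere, red_cube).   (succeeds)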

There are well-known examples of symbolic computation whose implementation in other standard
languages took tens of pages of indigestible code; when the same algorithms were implemented in PROLOG,
the result was a crystal-clear program easily fitting on one page.

FACTS, RULES AND QUERIES

Programming in PROLOG is accomplished by creating a database of facts and rules about
objects, their properties, and their relationships to other objects. Queries can then be posed about the objects,
and valid conclusions will be determined and returned by the program. Responses to user queries are
determined through a form of inference control known as resolution.

FOR EXAMPLE:
a) FACTS:
Some facts about family relationships could be written as:
sister(sue, bill).
parent(ann, sam).
male(jo).
female(riya).

b) RULES:
To represent the general rule for grandfather, we write:

grandfather(X, Z) :-
    parent(X, Y),
    parent(Y, Z),
    male(X).

c) QUERIES:

Given a database of facts and rules such as that above, we may make queries by typing statements after the
query prompt '?-', such as:

?- parent(X, sam).
X = ann
?- grandfather(X, Y).
X = jo, Y = sam

PROLOG IN DESIGNING EXPERT SYSTEMS

An expert system is a set of programs that manipulates encoded knowledge to solve problems in a
specialized domain that normally requires human expertise. An expert system's knowledge is obtained from
expert sources such as texts, journal articles, databases, etc., and encoded in a form suitable for the system to use
in its inference or reasoning processes. Once a sufficient body of expert knowledge has been acquired, it must
be encoded in some form, loaded into the knowledge base, then tested, and refined continually throughout the life
of the system. PROLOG serves as a powerful language in designing expert systems because of the following
features.

META PROGRAMMING

A meta-program is a program that takes other programs as data. Interpreters and compilers are
examples of meta-programs. A meta-interpreter is a particular kind of meta-program: an interpreter for a
language written in that language. So a PROLOG interpreter is an interpreter for PROLOG, itself written in
PROLOG. Due to its symbol-manipulation capabilities, PROLOG is a powerful language for meta-
programming. Therefore, it is often used as an implementation language for other languages. PROLOG is
particularly suitable as a language for rapid prototyping where we are interested in implementing new ideas quickly.
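For illustration, the classic three-clause "vanilla" meta-interpreter is a small sketch of this idea (it assumes the interpreted clauses are accessible to clause/2, e.g. declared dynamic):

% solve(Goal) proves Goal using the program's own clauses, in PROLOG itself.
solve(true) :- !.
solve((A, B)) :- !, solve(A), solve(B).
solve(Goal) :- clause(Goal, Body), solve(Body).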

PROGRAM:

1. Sum:

sum(X, Y) :-
    S is X + Y,
    write(S).
2. Square root:

square_root(Y) :-
    Z is sqrt(Y),
    write(Z).

3. Print numbers 1 to 10:

print_numbers(X) :-
    X =< 10,
    write(X), nl,
    Next is X + 1,
    print_numbers(Next).
print_numbers(X) :-
    X > 10.

4.Quadratic equation:

quadratic_solver(A, B, C, X1, X2) :-
    D is B * B - 4 * A * C,
    (   D < 0
    ->  write('No real roots')
    ;   D =:= 0
    ->  X1 is -B / (2 * A),
        write('One real root: '), write(X1)
    ;   X1 is (-B + sqrt(D)) / (2 * A),
        X2 is (-B - sqrt(D)) / (2 * A),
        write('Two real roots: '), write(X1), write(' and '), write(X2)
    ).

% Example usage:
% quadratic_solver(1, -3, 2, X1, X2).

OUTPUT:
RESULT:
Thus the above program was executed successfully.
EX NO: 2 Write simple facts for the statements using PROLOG
DATE:

Aim:

To develop simple facts and queries using PROLOG.

Algorithm:

Step 1: Understand logical programming syntax and semantics, and design programs in the PROLOG
language. In PROLOG syntax, we can write the statements given below as facts.

Step 2: PROLOG programs describe relations, defined by means of clauses. Pure PROLOG is restricted to
Horn clauses. There are two types of clauses: facts and rules. A rule has the form

Head:-Body.
and is read as “Head is true if Body is true”.

Step3: A rule's body consists of calls to predicates, which are called the rule's goals.

a. Ram likes mango.


b. Seema is a girl.
c. Bill likes Cindy.
d. Rose is red.
e. John owns gold.

PROGRAM:
% Facts
likes(ram, mango).
girl(seema).
likes(bill, cindy).
red(rose).
owns(john, gold).

% Queries
?- likes(ram, What).
?- likes(Who, cindy).
?- red(What).
?- owns(Who, What).

Output:
RESULT:
Thus the above program was executed successfully.
Ex.No:3 Write a program to solve the Monkey Banana problem.
DATE:

Aim :

To write a program to solve the monkey banana problem using PROLOG.

Algorithm :

The monkey can perform the following actions:-


Walk on the floor.
Climb the box.
Push the box around (if it is beside the box).
Grasp the banana if it is standing on the box directly under the banana.

Production Rules:

can_reach(clever, close)
get_on(can_climb)
under(in_room, in_room, in_room, can_climb)
close(get_on, under | tall)

Program:
:- dynamic at/2.
:- dynamic monkey_has/2.

% Initial conditions
at(monkey, door).
at(banana, window).
at(box, floor).
monkey_has(banana, no).

% The monkey can move from one place to another


move(From, To) :-
at(monkey, From),
retract(at(monkey, From)),
assert(at(monkey, To)),
write('Monkey moves from '), write(From), write(' to '), writeln(To).

% The monkey can push the box from one place to another
push(From, To) :-
at(box, From),
at(monkey, From),
retract(at(box, From)),
assert(at(box, To)),
move(From, To),
write('Monkey pushes box from '), write(From), write(' to '), writeln(To).

% The monkey can climb the box to reach the banana


climb :-
at(monkey, window),
at(box, window),
write('Monkey climbs the box at the window'), nl.

% The monkey can grab the banana


grab :-
at(monkey, window),
at(box, window),
monkey_has(banana, no),
retract(monkey_has(banana, no)),
assert(monkey_has(banana, yes)),
write('Monkey grabs the banana'), nl.

% Define the goal state


goal :-
monkey_has(banana, yes),
write('Monkey now has the banana'), nl.

% Solution strategy
solve :-
    move(door, floor),      % first walk from the door to the box
    push(floor, window),
    climb,
    grab,
    goal.

% Example query: ?- solve.

Output:
RESULT:
Thus the above program was executed successfully.
EX.4 Tower of Hanoi

Aim

Write a program to solve Tower of Hanoi.

Procedure

The object of this famous puzzle is to move N disks from the left peg to the right peg, using the
center peg as an auxiliary holding peg. At no time can a larger disk be placed upon a smaller disk.
The following diagram depicts the starting setup for N = 3 disks.

Production Rules
hanoi(N) :- move(N, left, middle, right).
move(1, A, _, C) :- inform(A, C), !.
move(N, A, B, C) :- N1 is N - 1, move(N1, A, C, B), inform(A, C), move(N1, B, A, C).
Diagram: (initial configuration: the three disks stacked on the left peg)

Parse tree: hanoi(3) calls move(3, left, middle, right), which expands recursively through move/4;
each inform(A, C) step prints one move, such as "Move top disk from left to right", "Move top disk
from right to center", "Move top disk from center to left", and so on.

Program:
% Base case: no moves are needed to transfer zero disks.
hanoi(0, _, _, _) :- !.

% Recursive case: move N disks from Source to Destination with the help of Auxiliary.
hanoi(N, Source, Auxiliary, Destination) :-
N > 0,
M is N - 1,
hanoi(M, Source, Destination, Auxiliary),
move(Source, Destination),
hanoi(M, Auxiliary, Source, Destination).

% Inform the move to the user.


move(Source, Destination) :-
write('Move top disk from '),
write(Source),
write(' to '),
write(Destination),
nl.

% To test the program, run: ?- hanoi(3, left, middle, right).

OUTPUT
RESULT:
Thus the above program was executed successfully.
EX.NO. 5 Solve 8-Puzzle problem

Aim:-

Write a program to solve 8-Puzzle problem.

Procedure :

The title of this section refers to a familiar and popular sliding tile puzzle that has been around for at
least forty years. The most frequent older versions of this puzzle have numbers or letters on the
sliding tiles, and the player is supposed to slide tiles into new positions in order to realign a
scrambled puzzle back into a goal alignment. For illustration, we use the 3 x 3 8-tile version, which
is depicted here in its goal configuration:

1 2 3
8   4
7 6 5

To represent these puzzle "states" we will use a Prolog term representation employing '/' as a
separator. The positions of the tiles are listed (separated by '/') from top to bottom, and from left to
right. Use "0" to represent the empty tile (space). For example, the goal is ... goal(1/2/3/8/0/4/7/6/5).

The heuristic function we use here is a combination of two other estimators: p_fcn, the
Manhattan distance function, and s_fcn, the sequence function, all as explained in Nilsson
(1980), which estimates how badly out-of-sequence the tiles are (around the outside).
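As a sketch of the first estimator only (an addition, not part of the original program; the predicate names p_fcn/2, goal_position/2, row_col/3 and manhattan/4 are illustrative), the Manhattan distance over the '/'-separated board term could be written as:

% Where each tile belongs in the goal state 1/2/3/8/0/4/7/6/5 (positions 1..9, row by row).
goal_position(1, 1).  goal_position(2, 2).  goal_position(3, 3).
goal_position(8, 4).  goal_position(0, 5).  goal_position(4, 6).
goal_position(7, 7).  goal_position(6, 8).  goal_position(5, 9).

% Convert a board position 1..9 into a (Row, Column) pair.
row_col(Pos, Row, Col) :- Row is (Pos - 1) // 3, Col is (Pos - 1) mod 3.

% p_fcn(State, Dist): sum of Manhattan distances of all tiles from their goal squares.
p_fcn(A/B/C/D/E/F/G/H/I, Dist) :-
    manhattan([A,B,C,D,E,F,G,H,I], 1, 0, Dist).

manhattan([], _, Dist, Dist).
manhattan([0 | Tiles], Pos, Acc, Dist) :- !,      % the blank tile does not count
    Pos1 is Pos + 1,
    manhattan(Tiles, Pos1, Acc, Dist).
manhattan([Tile | Tiles], Pos, Acc, Dist) :-
    goal_position(Tile, GoalPos),
    row_col(Pos, R1, C1),
    row_col(GoalPos, R2, C2),
    Acc1 is Acc + abs(R1 - R2) + abs(C1 - C2),
    Pos1 is Pos + 1,
    manhattan(Tiles, Pos1, Acc1, Dist).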

The Prolog program from the previous section and the program outlined in this section can be used
as an 8-puzzle solver.

?- solve(0/8/1/2/4/3/7/6/5, S).
Program:
% 8-queens problem solver

% The main predicate to solve the 8-queens problem


eight_queens(Solution) :-
% Generate the initial row placement
permutation([1,2,3,4,5,6,7,8], Solution),
% Check if the queens are placed safely (no two queens threaten each other)
safe(Solution).

% Predicate to check if the queens are placed safely


safe([]).
safe([Queen|Queens]) :-
safe(Queens, Queen, 1),
safe(Queens).

% Helper predicate to check if a single queen is safe


safe([], _, _).
safe([OtherQueen|Queens], Queen, Offset) :-
Queen + Offset =\= OtherQueen,
Queen - Offset =\= OtherQueen,
NewOffset is Offset + 1,
safe(Queens, Queen, NewOffset).

% Predicate to display the solution


display_solution([]).
display_solution([Q|Queens]) :-
display_row(Q, 1),
display_solution(Queens).

% Helper predicate to display a single row


display_row(_, 9) :- nl.
display_row(Queen, Col) :-
Col =\= Queen,
write(' - '),
NewCol is Col + 1,
display_row(Queen, NewCol).
display_row(Queen, Col) :-
Col == Queen,
write(' Q '),
NewCol is Col + 1,
display_row(Queen, NewCol).

% Example usage
% ?- eight_queens(Solution), display_solution(Solution).
OUTPUT:

RESULT:
Thus the above program was executed successfully.
Ex:No: 06 4-Queen problem

Aim:

To write a program for placing 4 queens on a chessboard using PROLOG.

Algorithm:

Step 1: The N-Queens problem is to place N queens on an N*N chessboard in such a manner that no two queens
attack each other by being in the same row, column or diagonal.

Step 2: Here we solve the problem for N = 4 queens.

Step 3: Before solving the problem, let us understand the movement of the queen in chess.

Step 4: In the game of chess, a queen can move any number of steps in any direction: vertical,
horizontal, or diagonal.

Step 5: In the 4-Queens problem we have to place 4 queens, Q1, Q2, Q3 and Q4, on the
chessboard in such a way that no two attack each other.

Program:

% Define the safe check for placing queens
safe([]).
safe([Head|Tail]) :-
    safe(Tail),
    not_threatened(Head, Tail, 1).

% Check that a queen is not threatened by others
not_threatened(_, [], _).
not_threatened(X, [Y|T], N) :-
    X =\= Y,
    X =\= Y + N,
    X =\= Y - N,
    N1 is N + 1,
    not_threatened(X, T, N1).

% Template for a solution
template([1,2,3,4]).

% Permute the template to find a safe arrangement of queens
queens(Solution) :-
    template(Template),
    permute(Template, Solution),
    safe(Solution).

% Generate permutations of a list
permute([], []).
permute(L, [H|T]) :-
    append(V, [H|U], L),
    append(V, U, W),
    permute(W, T).

% Run the solver
?- queens(Solution).
Output:

RESULT:
Thus the above program was executed successfully.
Ex:No: 07 Traveling salesman problem.

AIM:-
To write a program to solve the traveling salesman problem.

Program:
% Define cities and distances
distance(city1, city2, 10).
distance(city1, city3, 15).
distance(city1, city4, 20).
distance(city2, city3, 35).
distance(city2, city4, 25).
distance(city3, city4, 30).

% Define predicate for generating a candidate tour and its cost


tsp(Path, Cost) :-
    cities(Cities),
    permutation(Cities, Path),
    calculate_cost(Path, Cost).

% Define predicate for calculating the total cost of a path
% (distance facts are stored in one direction, so look them up either way)


calculate_cost([City1, City2 | Rest], Cost) :-
    dist(City1, City2, Dist),
    calculate_cost([City2 | Rest], PartialCost),
    Cost is PartialCost + Dist.

calculate_cost([_], 0).

dist(A, B, D) :- distance(A, B, D).
dist(A, B, D) :- distance(B, A, D).

% Define the list of cities


cities([city1, city2, city3, city4]).

% Example query
% ?- tsp(Path, Cost).
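The tsp/2 predicate above only enumerates candidate tours one at a time; a minimal sketch (an addition, not part of the original listing; the predicate name shortest_tour/2 is illustrative) of selecting the cheapest tour:

% Collect every Cost-Path pair and keep the cheapest; sort/2 orders the pairs by Cost first.
shortest_tour(BestPath, BestCost) :-
    findall(Cost-Path, tsp(Path, Cost), Tours),
    sort(Tours, [BestCost-BestPath | _]).

% Example query: ?- shortest_tour(Path, Cost).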

OUTPUT
RESULT:
Thus the above program was executed successfully.
Ex:No: 08 BREADTH FIRST SEARCH

Aim:

To write a program to implement Breadth First Search.

Algorithm:

Step 1: Enter the node to be found.

Step 2: Create a variable called NODE-LIST and set it to the initial state.

Step 3: Until a goal state is found or NODE-LIST is empty, do:

a) Remove the first element from NODE-LIST and call it E. If NODE-LIST was
empty, quit.

b) For each way that each rule can match the state described in E do:
1) Apply the rule to generate a new state.
2) If the new state is a goal state, quit and return this state.
3) Otherwise, add the new state to the end of NODE-LIST.

Step 4: Print the output as the path traversed.
Step 5: Exit.

Program:
% Define edges in the graph
edge(a, b).
edge(b, c).
edge(b, d).
edge(c, e).
edge(d, f).
edge(e, g).
edge(f, g).

% Breadth-First Search predicate


bfs(StartNode, SearchNode) :-
    (   bfs_queue([StartNode], [], SearchNode)
    ->  write('Node found!')
    ;   write('Node not found.')
    ).

% The node being searched for is at the head of the queue: stop.
bfs_queue([SearchNode | _], _, SearchNode).

% Skip nodes that have already been visited.
bfs_queue([Node | Rest], Visited, SearchNode) :-
    member(Node, Visited),
    bfs_queue(Rest, Visited, SearchNode).

% Expand an unvisited node and append its neighbours to the end of the queue.
bfs_queue([Node | Rest], Visited, SearchNode) :-
    \+ member(Node, Visited),          % Avoid revisiting nodes
    write(Node), nl,
    bfs_neighbours(Node, Neighbours),
    append(Rest, Neighbours, UpdatedQueue),
    bfs_queue(UpdatedQueue, [Node | Visited], SearchNode).

% Collect neighbours in either edge direction.
bfs_neighbours(Node, Neighbours) :-
    findall(NextNode, (edge(Node, NextNode) ; edge(NextNode, Node)), Neighbours).

% Example query
% ?- bfs(a, g).

OUTPUT

RESULT:
Thus the above program was executed successfully.
Exno:09 DEPTH FIRST SEARCH

Aim:
To write a program to implement Depth first search.

Algorithm:

Step 1: Enter the node to be found.

Step 2: If the initial state is a goal state, quit and return success.
Step 3: Otherwise, do the following until success or failure is signaled:
a) Generate a successor, E, of the initial state. If there are no more successors, signal failure.
b) Call depth first search with E as the initial state.
c) If success is returned, signal success. Otherwise continue in this loop.
Step 4: Print the output as the path traversed.

Program:

% Define edges in the graph


edge(a, b).
edge(b, c).
edge(b, d).
edge(c, e).
edge(d, f).
edge(e, g).
edge(f, g).

% Depth-First Search predicate


dfs(StartNode, SearchNode) :-
    (   dfs_recursive(StartNode, [], SearchNode)
    ->  write('Node found!')
    ;   write('Node not found.')
    ).

% Stop when the node being searched for is reached.
dfs_recursive(SearchNode, _, SearchNode) :-
    write(SearchNode), nl.

% Otherwise visit the node and try each of its neighbours in turn (backtracking on failure).
dfs_recursive(Node, Path, SearchNode) :-
    Node \== SearchNode,
    \+ member(Node, Path),             % Avoid cycles
    write(Node), nl,
    dfs_neighbours(Node, Neighbours),
    member(NextNode, Neighbours),
    dfs_recursive(NextNode, [Node | Path], SearchNode).

% Collect neighbours in either edge direction.
dfs_neighbours(Node, Neighbours) :-
    findall(NextNode, (edge(Node, NextNode) ; edge(NextNode, Node)), Neighbours).

% Example query
% ?- dfs(a, g).

Output:

RESULT:
Thus the above program was executed successfully.
Exno:10 A* search algorithm

Aim:
To write a program to implement A* search algorithm

Algorithm:
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not; if the list is empty then return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g+h). If node n
is the goal node, then return success and stop; otherwise
Step 4: Expand node n, generate all of its successors and put n into the CLOSED list.
For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute its evaluation
function and place it into the OPEN list.
Step 5: Else if node n' is already in OPEN or CLOSED, then it should be attached to the back pointer which
reflects the lowest g(n') value.
Step 6: Return to Step 2.

Program:
% A* search algorithm
% Usage: astar(Start, Goal, Path)

astar(Start, Goal, Path) :-


heuristic(Start, H),
astar_search([node(Start, 0, H, [])], Goal, Path).

% Base case: Goal node is reached


astar_search([node(Goal, G, _, Path) | _], Goal, Path) :-
write('Path found: '), write(Path), nl,
write('Cost: '), write(G), nl.

% Recursive case: Expand the current node


astar_search([node(Current, G, H, Path) | Rest], Goal, FinalPath) :-
findall(Child, move(Current, G, H, Path, Child), Children),
append(Children, Rest, OpenList),
sort_open_list(OpenList, SortedOpenList),
astar_search(SortedOpenList, Goal, FinalPath).

% Move function: Generates successors and adds them to the open list
move(Current, G, _, Path, node(Next, NewG, H, [Current | Path])) :-
transition(Current, Next, Cost),
\+ member(Next, Path),
NewG is G + Cost,
heuristic(Next, H).

% Transition function: Define the possible transitions between nodes


transition(A, B, Cost) :-
% Add your transition rules here
% For example, if A and B are connected with a cost of 1:
connected(A, B, Cost).
% Heuristic function: Define the heuristic for each node
% Note: This is a placeholder, replace it with an appropriate heuristic for your problem
heuristic(_, 0).

% Sort the open list based on f(n) = g(n) + h(n)


sort_open_list(List, SortedList) :-
predsort(compare_nodes, List, SortedList).

% Comparison function for sorting nodes based on f(n) = g(n) + h(n)
% (ties are ordered '<' so that predsort/3 does not drop nodes with equal f)


compare_nodes(Order, node(_, G1, H1, _), node(_, G2, H2, _)) :-
    F1 is G1 + H1,
    F2 is G2 + H2,
    (F1 =< F2 -> Order = '<' ; Order = '>').

% Example connected predicate (replace this with your actual connected predicate)
connected(A, B, Cost) :-
% Define your connections and costs here
% For example:
member((A, B, Cost), [(a, b, 1), (b, c, 2), (a, d, 3)]).

% Example usage:
% astar(a, c, Path)

Example:
In this example we will traverse the given graph using A* algorithm. The heuristic value of all states is given in
the below table so we will calculate the f(n) of each state using the formula f(n)=g(n)+h(n) where g(n) is the cost
to reach any node from start node.
Here we will use OPEN and CLOSED list.
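Since the heuristic table itself is not reproduced here, the following facts are only a sketch of how that example graph could be encoded through the connected/3 and heuristic/2 hooks of the program above (the edge costs and h-values are assumptions chosen to be consistent with the trace below, and they would replace the placeholder heuristic/2 and connected/3 clauses):

% Assumed edge costs for the worked example.
connected(s, a, 1).   connected(s, g, 10).
connected(a, b, 2).   connected(a, c, 1).
connected(c, d, 3).   connected(c, g, 4).

% Assumed heuristic values.
heuristic(s, 5).  heuristic(a, 3).  heuristic(b, 4).
heuristic(c, 2).  heuristic(d, 6).  heuristic(g, 0).

% Example query: ?- astar(s, g, Path).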

Output:
Initialization: {(S, 5)}
Iteration 1: {(S->A, 4), (S->G, 10)}
Iteration 2: {(S->A->C, 4), (S->A->B, 7), (S->G, 10)}
Iteration 3: {(S->A->C->G, 6), (S->A->C->D, 11), (S->A->B, 7), (S->G, 10)}
Iteration 4: gives the final result S->A->C->G, the optimal path with cost 6.

RESULT:
Thus the above program was executed successfully.
Exno:11 AO* search algorithm

Aim:
To write a program to implement AO*search algorithm.

Algorithm:
Step 1: Place the starting node into OPEN.
Step 2: Compute the most promising solution tree, say T0.
Step 3: Select a node n that is both on OPEN and a member of T0. Remove it from OPEN and place it in CLOSED.
Step 4: If n is a terminal node, then label n as solved and label all the ancestors of n as solved. If the
starting node is marked as solved, then return success and exit.
Step 5: If n is not a solvable node, then mark n as unsolvable. If the starting node is marked as unsolvable, then
return failure and exit.
Step 6: Expand n, find all its successors, find their h(n) values and push them into OPEN.
Step 7: Return to Step 2.
Step 8: Exit.


Example:

In the example, the value given below each node is its heuristic value, i.e., h(n). The edge length is
taken as 1.
Step 1:

With the help of the evaluation function f(n) = g(n) + h(n), start from node A:

f(A⇢B) = g(B) + h(B) = 1 + 5 = 6        (g(n) = 1 is taken by default as the path cost)

f(A⇢C+D) = g(C) + h(C) + g(D) + h(D)
         = 1 + 2 + 1 + 4
         = 8                             (C and D are added because they are connected by an AND arc)

So, by calculation, the path A⇢B is chosen, since f(A⇢B) is the minimum.

Step 2:

According to the result of Step 1, explore node B.
The values of E and F are calculated as follows:

f(B⇢E) = g(E) + h(E) = 1 + 7 = 8

f(B⇢F) = g(F) + h(F) = 1 + 9 = 10

So the path B⇢E is chosen, since f(B⇢E) is the minimum. Because B's backed-up value differs from its
original heuristic value, the heuristic of B is updated and the minimum-cost path is selected. The
minimum value in our situation is 8. Therefore, the heuristic for A must also be updated due to the
change in B's heuristic, so we calculate it again:

f(A⇢B) = g(B) + updated h(B)
       = 1 + 8
       = 9

All values in the tree are updated accordingly.

Step 3



PROGRAM:
% Usage: aostar(Start, Goal, Path)

aostar(Start, Goal, Path) :-

heuristic(Start, H),

aostar_search([node(Start, 0, H, [])], Goal, Path).

% Base case: Goal node is reached

aostar_search([node(Goal, G, _, Path) | _], Goal, Path) :-

write('Path found: '), write(Path), nl,

write('Cost: '), write(G), nl.

% Recursive case: Expand the current node

aostar_search([node(Current, G, H, Path) | Rest], Goal, FinalPath) :-

findall(Child, move(Current, G, H, Path, Child), Children),

append(Children, Rest, OpenList),

sort_open_list(OpenList, SortedOpenList),

aostar_search(SortedOpenList, Goal, FinalPath).

% Move function: Generates successors and adds them to the open list

move(Current, G, _, Path, node(Next, NewG, H, [Current | Path])) :-

transition(Current, Next, Cost),

\+ member(Next, Path),

NewG is G + Cost,

heuristic(Next, H).
% Transition function: Define the possible transitions between nodes

transition(A, B, Cost) :-

% Add your transition rules here

% For example, if A and B are connected with a cost of 1:

connected(A, B, Cost).

% Heuristic function: Define the heuristic for each node

% Note: This is a placeholder, replace it with an appropriate heuristic for your problem

heuristic(_, 0).

% Sort the open list based on f(n) = g(n) + h(n) + k(n)

sort_open_list(List, SortedList) :-

predsort(compare_nodes, List, SortedList).

% Comparison function for sorting nodes based on f(n) = g(n) + h(n) + k(n)
% (ties are ordered '<' so that predsort/3 does not drop nodes with equal f)

compare_nodes(Order, node(_, G1, H1, _), node(_, G2, H2, _)) :-

    K1 is 0, % You can replace this with an appropriate incremental cost estimate

    K2 is 0, % You can replace this with an appropriate incremental cost estimate

    F1 is G1 + H1 + K1,

    F2 is G2 + H2 + K2,

    (F1 =< F2 -> Order = '<' ; Order = '>').

% Example connected predicate (replace this with your actual connected predicate)

connected(A, B, Cost) :-

% Define your connections and costs here


% For example:

member((A, B, Cost), [(a, b, 1), (b, c, 2), (a, d, 3)]).

% Example usage:
% aostar(a, c, Path).
OUTPUT:
RESULT:
Thus the above program was executed successfully.
21CS1612

Machine Learning Lab


Ex 1: Linear Regression Model

Aim:

To create a linear regression model in Python using a randomly created dataset.

Algorithm:
Step1: Importing the dataset
Step 2: Data pre-processing
Step 3: Splitting the test and train sets
Step 4: Fitting the linear regression model to the training set
Step 5: Predicting test results
Step 6: Visualizing the test results
PROGRAM:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Create a dataset with salaries and experience levels


np.random.seed(42) # Set seed for reproducibility
experience = np.random.randint(1, 20, size=100) # Experience levels (1-20 years)
salaries = 30000 + 5000 * experience + np.random.randn(100) * 10000  # Salaries with noise

# Split the dataset into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(experience.reshape(-1, 1),
salaries, test_size=0.2, random_state=42)

# Create and fit the linear regression model


model = LinearRegression()
model.fit(X_train, y_train)

# Predict the test and training results


y_pred_train = model.predict(X_train)
y_pred_test = model.predict(X_test)

# Print the training and testing MSE


print("Training MSE:", np.mean((y_pred_train - y_train)**2))
print("Testing MSE:", np.mean((y_pred_test - y_test)**2))

# Plot the training set results


plt.figure(figsize=(8, 6))
plt.scatter(X_train, y_train, color='blue', label='Actual Salaries')
plt.plot(X_train, y_pred_train, color='red', linewidth=2, label='Predicted Salaries')
plt.xlabel('Experience (Years)')
plt.ylabel('Salary')
plt.title('Training Set Performance')
plt.legend()
plt.show()

# Plot the testing set results


plt.figure(figsize=(8, 6))
plt.scatter(X_test, y_test, color='blue', label='Actual Salaries')
plt.plot(X_test, y_pred_test, color='red', linewidth=2, label='Predicted Salaries')
plt.xlabel('Experience (Years)')
plt.ylabel('Salary')
plt.title('Testing Set Performance')
plt.legend()
plt.show()

Output:

RESULT:
Thus, the above program was executed successfully.
Ex. 2: Candidate-Elimination Algorithm
Aim:

For a given set of training data examples stored in a .csv file, implement and
demonstrate the Candidate-Elimination algorithm to output a description of the set
of all hypotheses consistent with the training examples.

Candidate-Elimination Learning Algorithm

The Candidate-Elimination algorithm computes the version space containing all hypotheses
from H that are consistent with an observed sequence of training examples. It begins by
initializing the version space to the set of all hypotheses in H; that is, by initializing the G
boundary set to contain the most general hypothesis in H

G0 = {(?, ?, ?, ?)}

Then initialize the S boundary set to contain the most specific hypothesis in H

S0 = {(Θ, Θ, Θ, Θ)}

For each training example, these S and G boundary sets are generalized and specialized,
respectively, to eliminate from the version space any hypothesis found inconsistent with the
new training examples. After execution of all the training examples, the computed version
space contains all the hypotheses consistent with these training examples. The algorithm is
summarized as below:

Candidate -Elimination Algorithm

Initialize G to the set of maximally general hypotheses in H

Initialize S to the set of maximally specific hypotheses in H

For each training example d, do:

- If d is a positive example:

  - Remove from G any hypothesis inconsistent with d

  - For each hypothesis s in S that is not consistent with d:

    - Remove s from S

    - Add to S all minimal generalizations h of s such that
      h is consistent with d, and some member of G is more general than h

    - Remove from S any hypothesis that is more general than another hypothesis in S

- If d is a negative example:

  - Remove from S any hypothesis inconsistent with d

  - For each hypothesis g in G that is not consistent with d:

    - Remove g from G

    - Add to G all minimal specializations h of g such that
      h is consistent with d, and some member of S is more specific than h

    - Remove from G any hypothesis that is less general than another
      hypothesis in G

For the implementation of the Candidate-Elimination algorithm, the EnjoySport data set can be
used from the UCI repository. The data set contains around 700 data samples. A few of the samples
are shown in the table below.
Sky Temp Humid Wind Water Forecast EnjoySpt
Sunny Warm Normal Strong Warm Same Yes
Sunny Warm High Strong Warm Same Yes
Rainy Cold High Strong Warm Change No
Sunny Warm High Strong Cool Change Yes

PROGRAM:
import numpy as np
import pandas as pd

data = pd.read_csv('enjoy_sport.csv')
concepts = np.array(data.iloc[:,0:-1])
target = np.array(data.iloc[:,-1])
def learn(concepts, target):
specific_h = concepts[0].copy()
print("initialization of specific_h \n",specific_h)
    general_h = [["?" for i in range(len(specific_h))] for i in range(len(specific_h))]
print("initialization of general_h \n", general_h)
for i, h in enumerate(concepts):
if target[i] == "yes":
print("If instance is Positive ")
for x in range(len(specific_h)):
if h[x]!= specific_h[x]:
specific_h[x] ='?'
general_h[x][x] ='?'

if target[i] == "no":
print("If instance is Negative ")
for x in range(len(specific_h)):
if h[x]!= specific_h[x]:
general_h[x][x] = specific_h[x]
else:
general_h[x][x] = '?'

print(" step {}".format(i+1))


print(specific_h)
print(general_h)
print("\n")
print("\n")

    indices = [i for i, val in enumerate(general_h) if val == ['?', '?', '?', '?', '?', '?']]
    for i in indices:
        general_h.remove(['?', '?', '?', '?', '?', '?'])
return specific_h, general_h

s_final, g_final = learn(concepts, target)

print("Final Specific_h:", s_final, sep="\n")


print("Final General_h:", g_final, sep="\n")

OUTPUT

RESULT:
Thus, the above program was executed successfully.
Ex. 3: Decision Tree Based ID3 Algorithm

Aim:
Write a program to demonstrate the working of the decision tree based ID3
algorithm. Use an appropriate data set for building the decision tree and apply this
knowledge to classify a new sample.

Following terminologies are used in this algorithm

 Entropy: Entropy is a measure of impurity. For a binary class with values a/b it is defined as:

   Entropy = - p(a)*log2(p(a)) - p(b)*log2(p(b))

 Information Gain: measures the expected reduction in entropy.

Procedure:

1) In the ID3 algorithm, begin with the original set of attributes as the root node.

2) On each iteration of the algorithm, iterate through every unused attribute of the remaining set
and calculates the entropy (or information gain) of that attribute.

3) Then, select the attribute which has the smallest entropy (or largest information gain) value.

4) The set of remaining attributes is then split by the selected attribute to produce subsets of the
data.

5) The algorithm continues to recurse on each subset, considering only attributes never selected
before.
Dataset Details:

The play tennis dataset has the following structure:

Total number of instances = 15
Attributes = Outlook, Temperature, Humidity, Wind, Answer
Target Concept = Answer

ID3 ( Learning Sets S, Attributes Sets A, Attributes values V) Return Decision Tree:

Begin
Load learning sets S first, create decision tree root node 'rootNode', add learning set S into root node as
its subset
For rootNode,
1) Calculate entropy of every attribute using the dataset

2) Split the set into subsets using the attribute for which entropy is minimum (or information
gain is maximum)

3) Make a decision tree node containing that attribute

4) Recurse on subsets using remaining attributes

End

This approach employs a top-down, greedy search through the space of possible decision trees.

 Algorithm starts by creating root node for the tree


 If all the examples are positive then return node with positive label
 If all the examples are negative then return node with negative label
 If Attributes is empty, return the single-node tree Root, with label = most common
value of the target attribute in the examples
 Otherwise -

1. Calculate the entropy of every attribute using the data set S using
formula Entropy = - p(a)*log(p(a)) - p(b)*log(p(b))
2. Split the set S into subsets using the attribute for which the resulting entropy (after splitting)
is minimum (or, equivalently, information gain is maximum) using formula

Gain(S,A)= Entropy(S) - Sum for v from 1 to n of (|Sv|/|S|) * Entropy(Sv)

3. Make a decision tree node containing that attribute

4. Recurse on subsets using remaining attributes.

Program:
import pandas as pd
import numpy as np

def entropy(data):
"""Calculates the entropy of a dataset."""
values, counts = zip(*data.value_counts().sort_values(ascending=False).items())
probabilities = np.array(counts) / np.sum(counts)
return -np.sum(probabilities * np.log2(probabilities))

def information_gain(data, attribute):


"""Calculates the information gain of an attribute."""
total_entropy = entropy(data[target_attribute])
weighted_entropies = 0
for value in data[attribute].unique():
subset_entropy = entropy(data[data[attribute] == value][target_attribute])
subset_weight = len(data[data[attribute] == value]) / len(data)
weighted_entropies += subset_weight * subset_entropy
return total_entropy - weighted_entropies

def id3(data, attributes, target_attribute, parent_info=None, level=0):


"""Constructs a decision tree using the ID3 algorithm."""
if len(data[target_attribute].unique()) == 1:
return data[target_attribute].unique()[0]

elif len(attributes) == 0:
return data[target_attribute].value_counts().idxmax()

else:
best_attribute = max(attributes, key=lambda a: information_gain(data, a))
tree = {best_attribute: {"parent_info": parent_info, "level": level}}
attributes.remove(best_attribute)
for value in data[best_attribute].unique():
subset = data[data[best_attribute] == value]
subtree = id3(subset, attributes.copy(), target_attribute,
parent_info=f"{best_attribute} EQUALS {value}", level=level+1)
tree[best_attribute][value] = subtree
return tree

# Load the dataset


data = pd.read_csv("tennis.csv")
target_attribute = "PlayTennis"

# Construct the decision tree


tree = id3(data, list(data.columns[:-1]), target_attribute)

# Print the decision tree (optional)


print(tree)
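To classify a new sample with the tree built above, a small helper can walk the nested dictionary returned by id3() (a sketch, not part of the original listing; the attribute values in the usage comment are assumptions based on the play tennis dataset):

# Walk the decision tree produced by id3() to classify one sample (a dict of attribute values).
def classify(tree, sample):
    if not isinstance(tree, dict):        # reached a leaf: return the class label
        return tree
    attribute = next(iter(tree))          # the attribute this node splits on
    subtree = tree[attribute].get(sample.get(attribute))
    if subtree is None:                   # value not seen during training
        return None
    return classify(subtree, sample)

# Example usage (attribute names/values assumed):
# print(classify(tree, {"Outlook": "Sunny", "Temperature": "Hot", "Humidity": "High", "Wind": "Weak"}))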

INPUTS AND OUTPUTS:

Input- Input to the decision algorithm is a dataset stored in .csv file which consists of
attributes, examples, target concept.

Output

RESULT:
Thus, the above program was executed successfully.

Ex. 4: clustering using k-Means algorithm

Aim:
Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the
same data set for clustering using k-Means algorithm. Compare the results of
these two algorithms and comment on the quality of clustering. Add Python ML
library classes/API in the program.

Unsupervised Learning:

In machine learning, unsupervised learning is a class of problems in which one seeks to determine how
the data are organized. It is distinguished from supervised learning (and reinforcement learning) in that
the learner is given only unlabeled examples.

Dataset:
➢ Iris dataset

➢ Number of Attributes: 4 (sepal length, sepal width, petal length, petal width)

➢ Number of instances:150

Clustering Algorithms -

1. K-means clustering:

 It is a type of unsupervised learning, which is used when you have unlabeled data
(i.e., data without defined categories or groups).
 The goal of this algorithm is to find groups in the data, with the number of groups
represented by the variable K.
 Data points are clustered based on feature similarity.
 The results of the K-means clustering algorithm are:
 The centroids of the K clusters, which can be used to label new data
 Labels for the training data (each data point is assigned to a single cluster)

Each centroid of a cluster is a collection of feature values which define the resulting groups.

Examining the centroid feature weights can be used to qualitatively interpret what kind of group each
cluster represents.

The k-means is a partitional clustering algorithm.

Let the set of data points (or instances) be as follows:

D = {x1, x2, …, xn}, where

x = (xi1, xi2, …, xir), is a vector in a real-valued space X ⊆ Rr, and r is the number of attributes in the
data.

The k-means algorithm partitions the given data into k clusters with each cluster having a center called
a centroid.

k is specified by the user.

Given k, the k-means algorithm works as follows:

Algorithm K-means(k, D)
1. Choose k data points as the initial centroids (cluster centers).

2. Repeat steps 3-6 until the stopping criterion is met:

3. For each data point x ϵ D do:

4. Compute the distance from x to each centroid.

5. Assign x to the closest centroid (a centroid represents a cluster).

6. Re-compute the centroids using the current cluster memberships.

2. Expectation–maximization

➢ EM algorithm is an iterative method to find maximum likelihood estimates of parameters in


statistical models, where the model depends on unobserved latent variables.

➢ Iteratively learn probabilistic categorization model from unsupervised data.

➢ Initially assume random assignment of examples to categories “Randomly label” data

➢ Learn initial probabilistic model by estimating model parameters θ from randomly labeled
data

➢ Iterate until convergence:

1. Expectation (E-step): Compute P(ci | E) for each example given the current model, and
probabilistically re-label the examples based on these posterior probability estimates.

2. Maximization (M-step): Re-estimate the model parameters, θ, from the probabilistically re-
labeled data.

The EM Algorithm for Gaussian Mixtures

➢ The probability density function for multivariate_normal is

   f(x) = (2π)^(-k/2) |Σ|^(-1/2) exp( -(1/2) (x - μ)ᵀ Σ⁻¹ (x - μ) )

where μ (mu) is the mean, Σ (Sigma) the covariance matrix, and k is the dimension of the space where x takes
values.

Algorithm:

An arbitrary initial hypothesis h = <μ1, μ2, ..., μk> is chosen. The EM
algorithm iterates over two steps:

Step 1 (Estimation, E): Calculate the expected value E[zij] of each hidden variable zij, assuming
that the current hypothesis h = <μ1, μ2, ..., μk> holds.
Step 2 (Maximization, M): Calculate a new maximum likelihood hypothesis h' = <μ1', μ2', ..., μk'>,
assuming the value taken on by each hidden variable zij is its expected value E[zij] calculated in Step 1.
Then replace the hypothesis h = <μ1, μ2, ..., μk> by the new hypothesis h' = <μ1', μ2', ..., μk'> and
iterate.

PROGRAM:
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
import matplotlib.pyplot as plt

# Load the Iris dataset


iris = load_iris()
data = pd.DataFrame(iris.data, columns=iris.feature_names)

# Select only 10 samples for visualization


smaller_data = data.iloc[:10, :]

# Select the number of clusters (k)


k = 3

# K-Means Clustering
km = KMeans(n_clusters=k)
km_labels = km.fit_predict(smaller_data)

# EM Clustering
gmm = GaussianMixture(n_components=k)
gmm_labels = gmm.fit_predict(smaller_data)

# Visualization
# Graph 1: Line graph
plt.figure(1)
plt.plot(smaller_data["sepal length (cm)"], smaller_data["petal length (cm)"])
plt.title("Line Graph (10 Samples)")
plt.xlabel("Sepal Length (cm)")
plt.ylabel("Petal Length (cm)")
plt.show()

# Graph 2: Scatter plot with dotted points


plt.figure(2)
plt.scatter(smaller_data["petal length (cm)"], smaller_data["petal width (cm)"],
c="b", marker="o", linewidth=2, linestyle="--")
plt.title("Scatter Plot with Dotted Points (10 Samples)")
plt.xlabel("Petal Length (cm)")
plt.ylabel("Petal Width (cm)")
plt.show()

# Graph 3: Scatter plot with triangle points


plt.figure(3)
plt.scatter(smaller_data["sepal length (cm)"], smaller_data["sepal width (cm)"],
c="g", marker="^")
plt.title("Scatter Plot with Triangle Points (10 Samples)")
plt.xlabel("Sepal Length (cm)")
plt.ylabel("Sepal Width (cm)")
plt.show()
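The listing above computes km_labels and gmm_labels but does not yet compare the two clusterings; a minimal sketch (an addition, not part of the original listing) of comparing them on the full dataset with silhouette scores and colouring a scatter plot by the K-Means assignments:

from sklearn.metrics import silhouette_score

# Cluster the full dataset with both algorithms and score the resulting partitions.
km_full = KMeans(n_clusters=k, n_init=10).fit_predict(data)
gmm_full = GaussianMixture(n_components=k).fit_predict(data)
print("K-Means silhouette score:", silhouette_score(data, km_full))
print("EM (GMM) silhouette score:", silhouette_score(data, gmm_full))

# Colour a scatter plot by the K-Means cluster labels.
plt.figure(4)
plt.scatter(data["petal length (cm)"], data["petal width (cm)"], c=km_full)
plt.title("K-Means Cluster Assignments (Full Dataset)")
plt.xlabel("Petal Length (cm)")
plt.ylabel("Petal Width (cm)")
plt.show()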

OUTPUT

RESULT:
Thus, the above program was executed successfully.
Ex.5 K-nearest neighbour algorithm

Aim :

Write a program to implement K-nearest neighbour algorithm to classify iris


dataset.

Description :
k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and
regression.[1] In both cases, the input consists of the k closest training examples in the feature space.
The output depends on whether k-NN is used for classification or regression.

k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated
locally and all computation is deferred until classification. The k-NN algorithm is among the simplest
of all machine learning algorithms.

The kNN task can be broken down into writing 3 primary functions:

1. Calculate the distance between any two points

2. Find the nearest neighbours based on these pairwise distances

3. Majority vote on class labels based on the nearest neighbour list
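
A minimal from-scratch sketch of these three steps (an addition, not part of the original listing; X_train, y_train and x_new are placeholder names):

import numpy as np
from collections import Counter

# Step 1: distance between two points.
def euclidean(a, b):
    return np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2))

# Steps 2 and 3: find the k nearest neighbours and take a majority vote on their labels.
def knn_predict(X_train, y_train, x_new, k=5):
    distances = [euclidean(x, x_new) for x in X_train]
    nearest = np.argsort(distances)[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]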


Dataset
The Iris dataset consists of flower measurements for three species of iris flower. Our task is to predict the
species label of a set of flowers based on their flower measurements, building a predictor from a set of
known correct classifications.

The data set contains 3 classes of 50 instances each (150 in total), where each class refers to a type of iris
plant. One class is linearly separable from the other 2; the latter are NOT linearly separable
from each other.

Predicted attribute: class of iris plant

Attribute Information:

1.sepal length in cm 2. sepal width in cm 3. petal length in cm 4. petal width in cm

Class: - Iris Setosa - Iris Versicolour - Iris Virginica


PROGRAM:
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the Iris dataset


iris = load_iris()
data = pd.DataFrame(iris.data, columns=iris.feature_names)
target = iris.target

# Split the dataset into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.33,
random_state=42)

# Create a KNN classifier with 5 neighbors


knn = KNeighborsClassifier(n_neighbors=5)

# Train the classifier on the training data


knn.fit(X_train, y_train)

# Make predictions on the testing data


y_pred = knn.predict(X_test)

# Print the number of training and testing data points


print("Number of Training data:", len(X_train))
print("Number of Test Data:", len(X_test))

# Print the predictions and actual labels


for i in range(len(y_test)):
print("predicted=", y_pred[i], ", actual=", y_test[i])

# Calculate and print the accuracy


accuracy = knn.score(X_test, y_test)
print("The Accuracy is:", accuracy * 100, "%")

OUTPUT

RESULT:
Thus, the above program was executed successfully.
