AI EXPERIMENTS 1-10

TABLE-DRIVEN AGENT

Exp. No.: 01 Date:


SOURCE CODE:
loc_A = 'A'
loc_B = 'B'
percepts = []
table = {((loc_A, 'Clean'),): 'Right',
         ((loc_A, 'Dirty'),): 'Suck',
         ((loc_B, 'Clean'),): 'Left',
         ((loc_B, 'Dirty'),): 'Suck',
         ((loc_A, 'Dirty'), (loc_A, 'Clean')): 'Right',
         ((loc_A, 'Clean'), (loc_B, 'Dirty')): 'Suck',
         ((loc_B, 'Clean'), (loc_A, 'Dirty')): 'Suck',
         ((loc_B, 'Dirty'), (loc_B, 'Clean')): 'Left',
         ((loc_A, 'Dirty'), (loc_A, 'Clean'), (loc_B, 'Dirty')): 'Suck',
         ((loc_B, 'Dirty'), (loc_B, 'Clean'), (loc_A, 'Dirty')): 'Suck'}

def LOOKUP(percepts, table):
    action = table.get(tuple(percepts))
    return action

def TABLE_DRIVEN_AGENT(percept):
    percepts.append(percept)
    action = LOOKUP(percepts, table)
    return action

def run():
    print('Action\tPercepts')
    print(TABLE_DRIVEN_AGENT((loc_A, 'Dirty')), '\t', percepts)
    print(TABLE_DRIVEN_AGENT((loc_A, 'Clean')), '\t', percepts)
    print(TABLE_DRIVEN_AGENT((loc_B, 'Clean')), '\t', percepts)

run()

OUTPUT:
Action Percepts
Suck [('A', 'Dirty')]
Right [('A', 'Dirty'), ('A', 'Clean')]
None [('A', 'Dirty'), ('A', 'Clean'), ('B', 'Clean')]
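The final lookup returns None because the three-percept sequence (('A', 'Dirty'), ('A', 'Clean'), ('B', 'Clean')) has no entry in the table: a table-driven agent can only respond to percept sequences that were enumerated in advance, which is why the table grows impractically large for any realistic environment.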
SIMPLE REFLEX AGENT
Exp. No.: 02 - I Date:
SOURCE CODE:
# Define the agent function
def reflex_vacuum_agent(percept):
    location, status = percept

    # Choose an action based on the current percept
    if status == 'dirty':
        action = 'suck'
    elif location == 'A':
        action = 'right'
    else:
        action = 'left'

    # Return the chosen action
    return action

# Define the main function to simulate the vacuum-cleaner world
def main():
    # Initialize the environment
    environment = {'A': 'dirty', 'B': 'dirty'}

    # Initialize the agent's location and performance score
    location = 'A'
    score = 0

    # Repeat the following steps indefinitely
    while True:
        # Sense the current state of the environment
        percept = (location, environment[location])

        # Use the agent function to choose an action
        action = reflex_vacuum_agent(percept)

        # Execute the chosen action and update the environment and performance score
        if action == 'suck':
            environment[location] = 'clean'
            score += 1
        elif action == 'right':
            if location == 'A':
                location = 'B'
            else:
                location = 'A'
            score -= 1

        # Print the current state of the environment and the agent's performance score
        print("Location: {}, Environment: {}, Action: {}, Score: {}".format(
            location, environment, action, score))

        # Check if the environment is clean
        if all(status == 'clean' for status in environment.values()):
            print("Environment is clean")
            break

# Run the main function
if __name__ == '__main__':
    main()

OUTPUT:
Location: A, Environment: {'A': 'clean', 'B': 'dirty'}, Action: suck, Score: 1
Location: B, Environment: {'A': 'clean', 'B': 'dirty'}, Action: right, Score: 0
Location: B, Environment: {'A': 'clean', 'B': 'clean'}, Action: suck, Score: 1
Environment is clean
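The performance measure awards +1 for each suck that cleans a square and -1 for each move, so this run finishes with a net score of 1.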
MODEL BASED REFLEX AGENT
Exp. No.: 02 - II Date:
SOURCE CODE:
def agent(location, status):
    if status == "Dirty":
        return "Suck"
    elif location == 'A':
        return "Right"
    elif location == 'B':
        return "Left"

def vacuum_cleaner():
    location = 'A'
    environment = {'A': 'Dirty', 'B': 'Dirty'}
    score = 0

    while True:
        print("Location:", location)
        print("Environment:", environment)
        print("Score:", score)

        action = agent(location, environment[location])
        print("Action:", action)

        if action == "Suck":
            environment[location] = "Clean"
            score += 1
        elif action == "Right":
            location = 'B'
            score -= 1
        elif action == "Left":
            location = 'A'
            score -= 1

        if environment['A'] == 'Clean' and environment['B'] == 'Clean':
            print("All is clean.")
            print("Environment is clean with score:", score)
            break

vacuum_cleaner()
OUTPUT:
Location: A
Environment: {'A': 'Dirty', 'B': 'Dirty'}
Score: 0
Action: Suck
Location: A
Environment: {'A': 'Clean', 'B': 'Dirty'}
Score: 1
Action: Right
Location: B
Environment: {'A': 'Clean', 'B': 'Dirty'}
Score: 0
Action: Suck
All is clean.
Environment is clean with score: 1
TRAVELLING SALESMAN PROBLEM
Exp. No.: 03 Date:
SOURCE CODE:
import random

def calculate_distance(path, distances):
    total_distance = 0
    for i in range(len(path)):
        total_distance += distances[path[i]][path[(i+1) % len(path)]]
    return total_distance

def hill_climbing(distances):
    num_cities = len(distances)
    current_path = list(range(num_cities))
    random.shuffle(current_path)
    current_distance = calculate_distance(current_path, distances)
    while True:
        found_better_path = False
        for i in range(num_cities):
            for j in range(i+1, num_cities):
                if j == (i+1) % num_cities:
                    continue
                new_path = current_path[:]
                new_path[i], new_path[j] = new_path[j], new_path[i]
                new_distance = calculate_distance(new_path, distances)
                if new_distance < current_distance:
                    current_path = new_path
                    current_distance = new_distance
                    found_better_path = True
                    break
            if found_better_path:
                break
        if not found_better_path:
            break
    return current_path, current_distance

# Example 4x4 matrix
distances = [
    [0, 400, 500, 300],
    [400, 0, 300, 500],
    [500, 300, 0, 400],
    [300, 500, 400, 0]
]

shortest_path, shortest_distance = hill_climbing(distances)

print("Shortest path:", shortest_path)
print("Shortest distance:", shortest_distance)
OUTPUT 1:
Shortest path: [0, 3, 2, 1]
Shortest distance: 1400

OUTPUT 2:
Shortest path: [1, 0, 3, 2]
Shortest distance: 1400
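Because hill climbing starts from a randomly shuffled tour, repeated runs can report different permutations. Both outputs describe the same optimal cycle, 0-3-2-1-0, with cost 300 + 400 + 300 + 400 = 1400; they merely start from different cities.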
8-PUZZLE PROBLEM
Exp. No.: 04 Date:
SOURCE CODE:
from queue import PriorityQueue

# Define the goal state
goal_state = [1, 2, 3, 4, 5, 6, 7, 8, 0]

# Define the starting state
start_state = [1, 0, 3, 4, 2, 5, 7, 8, 6]
# start_state = [3, 4, 2, 1, 0, 8, 5, 6, 7]

# Define the heuristic function (using the Manhattan distance)
def heuristic(state):
    distance = 0
    for i in range(9):
        distance += abs(i // 3 - state.index(i) // 3) + \
            abs(i % 3 - state.index(i) % 3)
    return distance

# Define the Greedy Best First Search function
def greedy_best_first_search(start, goal, h):
    visited = set()
    queue = PriorityQueue()
    queue.put((h(start), start, []))
    while not queue.empty():
        (cost, current_state, path) = queue.get()
        if current_state == goal:
            return path + [current_state]
        visited.add(tuple(current_state))
        for neighbor in get_neighbors(current_state):
            if tuple(neighbor) not in visited:
                visited.add(tuple(neighbor))
                queue.put((h(neighbor), neighbor, path + [current_state]))
    return None

# Define a helper function to get the neighbors of a state
def get_neighbors(state):
    neighbors = []
    blank_idx = state.index(0)
    if blank_idx % 3 > 0:
        # Move the blank tile left
        new_state = state[:]
        new_state[blank_idx], new_state[blank_idx - 1] = \
            new_state[blank_idx - 1], new_state[blank_idx]
        neighbors.append(new_state)
    if blank_idx % 3 < 2:
        # Move the blank tile right
        new_state = state[:]
        new_state[blank_idx], new_state[blank_idx + 1] = \
            new_state[blank_idx + 1], new_state[blank_idx]
        neighbors.append(new_state)
    if blank_idx // 3 > 0:
        # Move the blank tile up
        new_state = state[:]
        new_state[blank_idx], new_state[blank_idx - 3] = \
            new_state[blank_idx - 3], new_state[blank_idx]
        neighbors.append(new_state)
    if blank_idx // 3 < 2:
        # Move the blank tile down
        new_state = state[:]
        new_state[blank_idx], new_state[blank_idx + 3] = \
            new_state[blank_idx + 3], new_state[blank_idx]
        neighbors.append(new_state)
    return neighbors

# Find the solution
solution = greedy_best_first_search(start_state, goal_state, heuristic)

# Print the solution
if solution is not None:
    print("Solution found in", len(solution) - 1, "moves:")
    for i, state in enumerate(solution):
        print("Step", i, ":")
        for j in range(0, 9, 3):
            print(state[j], state[j+1], state[j+2])
        print()
else:
    print("No solution found.")
OUTPUT:
Solution found in 3 moves:
Step 0 :
1 0 3
4 2 5
7 8 6

Step 1 :
1 2 3
4 0 5
7 8 6

Step 2 :
1 2 3
4 5 0
7 8 6

Step 3 :
1 2 3
4 5 6
7 8 0
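Greedy best-first search ranks the queue by the heuristic h(n) alone, ignoring the path cost g(n); for this start state it nevertheless returns the optimal 3-move solution. Note that the heuristic above also counts the blank tile (i = 0) in the Manhattan sum, a simplification that does not affect this run.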
A* SEARCH ALGORITHM
Exp. No.: 05 Date:
SOURCE CODE:

import heapq

def astar(graph, start, goal):
    # Initialize the open and closed sets
    open_set = []
    closed_set = set()

    # Add the starting node to the open set
    heapq.heappush(open_set, (0, start))

    # Keep track of the path from the start to each node
    came_from = {}

    # Initialize the g-score and f-score for each node
    g_score = {node: float('inf') for node in graph}
    g_score[start] = 0
    f_score = {node: float('inf') for node in graph}
    f_score[start] = heuristic(start, goal)

    # Run the A* search algorithm
    while open_set:
        # Get the node with the lowest f-score from the open set
        current = heapq.heappop(open_set)[1]

        # If we've reached the goal, reconstruct the path and return it
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            path.reverse()
            return path

        # Add the current node to the closed set
        closed_set.add(current)

        # Check the neighbors of the current node
        for neighbor in graph[current]:
            # Skip neighbors that have already been visited
            if neighbor in closed_set:
                continue

            # Calculate the tentative g-score for this neighbor
            tentative_g_score = g_score[current] + graph[current][neighbor]

            # If this is a better path to the neighbor, update its g-score
            # and add it to the open set
            if tentative_g_score < g_score[neighbor]:
                came_from[neighbor] = current
                g_score[neighbor] = tentative_g_score
                f_score[neighbor] = g_score[neighbor] + \
                    heuristic(neighbor, goal)
                heapq.heappush(open_set, (f_score[neighbor], neighbor))

    # If we've exhausted all possible paths without reaching the goal, return None
    return None

def heuristic(node, goal):
    # Calculate the Manhattan distance between the node and the goal
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

graph = {
    (0, 0): {(0, 1): 1, (1, 0): 1},
    (0, 1): {(0, 0): 1, (0, 2): 1},
    (0, 2): {(0, 1): 1, (1, 2): 1},
    (1, 0): {(0, 0): 1, (2, 0): 1},
    (1, 2): {(0, 2): 1, (2, 2): 1},
    (2, 0): {(1, 0): 1, (2, 1): 1},
    (2, 1): {(2, 0): 1, (2, 2): 1},
    (2, 2): {(1, 2): 1, (2, 1): 1},
}
start = (0, 0)
goal = (2, 2)
path = astar(graph, start, goal)
print('Shortest path :', path)

OUTPUT:

Shortest path : [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
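A* orders the open set by f(n) = g(n) + h(n). With unit edge costs, the returned path has 4 edges, exactly the Manhattan distance from (0, 0) to (2, 2), so the heuristic is admissible and exact along this route.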
TIC-TAC-TOE GAME USING MINIMAX ALGORITHM
Exp. No.: 06 Date:
SOURCE CODE:

import sys

# Represents the Tic-Tac-Toe board
board = [[' ' for _ in range(3)] for _ in range(3)]

# Constants for player and opponent
PLAYER = 'X'
OPPONENT = 'O'

def print_board():
    print("---------")
    for i in range(3):
        print("|", end=" ")
        for j in range(3):
            print(board[i][j], end=" ")
        print("|")
    print("---------")

def is_move_left():
    for i in range(3):
        for j in range(3):
            if board[i][j] == ' ':
                return True
    return False

def evaluate():
    # Checking rows for victory
    for row in range(3):
        if board[row][0] == board[row][1] == board[row][2]:
            if board[row][0] == PLAYER:
                return 10
            elif board[row][0] == OPPONENT:
                return -10

    # Checking columns for victory
    for col in range(3):
        if board[0][col] == board[1][col] == board[2][col]:
            if board[0][col] == PLAYER:
                return 10
            elif board[0][col] == OPPONENT:
                return -10

    # Checking diagonals for victory
    if board[0][0] == board[1][1] == board[2][2]:
        if board[0][0] == PLAYER:
            return 10
        elif board[0][0] == OPPONENT:
            return -10

    if board[0][2] == board[1][1] == board[2][0]:
        if board[0][2] == PLAYER:
            return 10
        elif board[0][2] == OPPONENT:
            return -10

    # No winner
    return 0

def minimax(depth, is_maximizing):
    score = evaluate()

    # If the maximizer or minimizer wins the game, return the score
    if score == 10:
        return score - depth
    if score == -10:
        return score + depth

    # If there are no moves left, it's a tie
    if not is_move_left():
        return 0

    # If it's the maximizer's turn
    if is_maximizing:
        best_score = -sys.maxsize
        for i in range(3):
            for j in range(3):
                if board[i][j] == ' ':
                    board[i][j] = PLAYER
                    best_score = max(best_score, minimax(
                        depth + 1, not is_maximizing))
                    board[i][j] = ' '
        return best_score

    # If it's the minimizer's turn
    else:
        best_score = sys.maxsize
        for i in range(3):
            for j in range(3):
                if board[i][j] == ' ':
                    board[i][j] = OPPONENT
                    best_score = min(best_score, minimax(
                        depth + 1, not is_maximizing))
                    board[i][j] = ' '
        return best_score

def find_best_move():
    best_score = -sys.maxsize
    best_move = (-1, -1)

    for i in range(3):
        for j in range(3):
            if board[i][j] == ' ':
                board[i][j] = PLAYER
                score = minimax(0, False)
                board[i][j] = ' '

                if score > best_score:
                    best_score = score
                    best_move = (i, j)

    return best_move

# Main game loop
def play_game():
    print("Tic-Tac-Toe Game")
    print("Enter the coordinates (row, col) to make a move.")
    print("Coordinates range from 0 to 2, top-left is (0, 0) and bottom-right is (2, 2).")

    player_turn = True  # True if it's player's turn, False if it's AI's turn

    while is_move_left() and evaluate() == 0:
        print_board()

        if player_turn:
            row = int(input("Enter the row: "))
            col = int(input("Enter the column: "))

            if board[row][col] != ' ':
                print("Invalid move. Try again.")
                continue

            board[row][col] = PLAYER
        else:
            print("AI's turn...")
            move = find_best_move()
            board[move[0]][move[1]] = OPPONENT

        player_turn = not player_turn

    print_board()

    score = evaluate()
    if score > 0:
        print("Player wins!")
    elif score < 0:
        print("AI wins!")
    else:
        print("It's a tie!")

# Start the game
play_game()

OUTPUT:

Tic-Tac-Toe Game
Enter the coordinates (row, col) to make a move.
Coordinates range from 0 to 2, top-left is (0, 0) and bottom-right is (2, 2).
---------
| |
| |
| |
---------
Enter the row: 0
Enter the column: 1
---------
| X |
| |
| |
---------
AI's turn...
---------
| O X |
| |
| |
---------
Enter the row: 1
Enter the column: 1
---------
| O X |
| X |
| |
---------
AI's turn...
---------
| O X |
| X |
| O |
---------
Enter the row: 2
Enter the column: 0
---------
| O X |
| X |
| X O |
---------
AI's turn...
---------
| O X O |
| X |
| X O |
---------
Enter the row: 1
Enter the column: 0
---------
| O X O |
| X X |
| X O |
---------
AI's turn...
---------
| O X O |
| X X O |
| X O |
---------
Enter the row: 0
Enter the column: 1
Invalid move. Try again.
---------
| O X O |
| X X O |
| X O |
---------
Enter the row: 2
Enter the column: 2
---------
| O X O |
| X X O |
| X O X |
---------
It's a tie!
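Note the depth terms in minimax (score - depth for a win, score + depth for a loss): they bias the search toward faster wins and slower losses. Since every continuation is evaluated before each AI move, the sample game ends in a tie, which is the best result either side can force in tic-tac-toe.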
MONTE CARLO TREE SEARCH
Exp. No.: 07 Date:
SOURCE CODE:

import random
import math

class Node:
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent
        self.children = []
        self.wins = 0
        self.visits = 0

    def select_child(self):
        exploration_constant = 1.414

        selected_child = None
        max_ucb = float("-inf")

        for child in self.children:
            # UCB1: average result plus an exploration bonus
            ucb = child.wins / child.visits + exploration_constant * \
                math.sqrt(math.log(self.visits) / child.visits)
            if ucb > max_ucb:
                selected_child = child
                max_ucb = ucb

        return selected_child

    def expand(self):
        possible_values = self.get_possible_values()

        for value in possible_values:
            child_node = Node(value, self)
            self.children.append(child_node)

    def update(self, result):
        # Backpropagate the result up to the root
        self.visits += 1
        self.wins += result

        if self.parent:
            self.parent.update(result)

    def get_possible_values(self):
        possible_values = []

        for i in range(1, 11):
            if i not in self.value:
                possible_values.append(self.value + [i])

        return possible_values

class MonteCarloTreeSearch:
    def __init__(self, initial_state):
        self.root = Node(initial_state)

    def run(self, simulations):
        # Each iteration: selection -> expansion -> simulation -> backpropagation
        for _ in range(simulations):
            node = self.selection()
            node.expand()
            result = self.simulation(node)
            node.update(result)

        best_child = self.root.select_child()
        return best_child.value

    def selection(self):
        node = self.root

        while node.children:
            if all(child.visits for child in node.children):
                node = node.select_child()
            else:
                return self.expand(node)

        return node

    def expand(self, node):
        unvisited_child = next(child for child in node.children if child.visits == 0)
        return unvisited_child

    def simulation(self, node):
        # Random rollout until the sequence holds 5 numbers
        current_node = node

        while len(current_node.value) < 5:
            possible_values = current_node.get_possible_values()
            random_value = random.choice(possible_values)
            child_node = Node(random_value, current_node)
            current_node = child_node

        return sum(current_node.value)

initial_value = []
mcts = MonteCarloTreeSearch(initial_value)
simulations = 1000
best_value = mcts.run(simulations)
print("Best Value:", best_value)
OUTPUT:

Best Value: [7]
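Since each child of the root extends the empty sequence by a single number, run() returns a one-element list. Which element scores highest depends on the random rollouts, so the printed value (here [7]) can differ between runs.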


MONTY HALL PROBLEM USING BAYESIAN NETWORK
Exp. No.: 08 Date:
SOURCE CODE:

import random

def monty_hall():
    # Initialize the doors with car and goats
    doors = ['car', 'goat', 'goat']

    # Shuffle the doors randomly
    random.shuffle(doors)

    # Player makes the initial choice
    player_choice = random.randint(0, 2)

    # Host reveals a door with a goat
    host_choice = reveal_door(doors, player_choice)

    # Player makes the final switch
    final_choice = switch_door(player_choice, host_choice)

    # Return True if the final choice has the car, False otherwise
    return doors[final_choice] == 'car'

def reveal_door(doors, player_choice):
    # Host reveals a door with a goat that was not chosen by the player
    for i in range(len(doors)):
        if i != player_choice and doors[i] == 'goat':
            return i

def switch_door(player_choice, host_choice):
    # Player switches their choice to the remaining door
    for i in range(3):
        if i != player_choice and i != host_choice:
            return i

def simulate_monty_hall(num_trials):
    wins_with_switch = 0
    wins_without_switch = 0

    for _ in range(num_trials):
        if monty_hall():
            wins_with_switch += 1
        else:
            wins_without_switch += 1

    print("Results after", num_trials, "trials:")
    print("Wins with switching:", wins_with_switch)
    print("Wins without switching:", wins_without_switch)

# Run the simulation
simulate_monty_hall(10000)

OUTPUT:

Results after 10000 trials:


Wins with switching: 6652
Wins without switching: 3348
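Although the experiment estimates the probabilities by simulation, the same 2/3 figure can be cross-checked analytically with Bayes' rule. The sketch below is an illustrative addition (not part of the original record); it conditions on the player picking door 0 and the host opening door 2, and the dictionary names are arbitrary:

# Illustrative Bayes'-rule check (not part of the original experiment).
# Prior: the car is equally likely behind doors 0, 1, 2.
priors = {0: 1/3, 1: 1/3, 2: 1/3}

# Likelihood that the host opens door 2, given the player picked door 0:
#   car at 0 -> host picks door 1 or 2 at random -> 1/2
#   car at 1 -> host is forced to open door 2    -> 1
#   car at 2 -> host never opens the car door    -> 0
likelihood = {0: 1/2, 1: 1.0, 2: 0.0}

# Bayes' rule: P(car at d | host opens 2) = P(opens 2 | d) * P(d) / P(opens 2)
evidence = sum(priors[d] * likelihood[d] for d in priors)
posterior = {d: priors[d] * likelihood[d] / evidence for d in priors}
print(posterior)  # {0: 0.333..., 1: 0.666..., 2: 0.0}

The posterior places 2/3 on the unopened door and 1/3 on the original pick, matching the simulated frequencies above (6652/10000 ≈ 0.665).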
KALMAN FILTER TO TRACK AN AIRCRAFT
Exp. No.: 09 Date:
SOURCE CODE:

import numpy as np

class KalmanFilter:
    def __init__(self, initial_state, initial_covariance, process_noise,
                 measurement_noise):
        self.state = initial_state
        self.covariance = initial_covariance
        self.process_noise = process_noise
        self.measurement_noise = measurement_noise

    def predict(self, dt):
        # State prediction
        F = np.array([[1, dt],
                      [0, 1]])
        self.state = np.dot(F, self.state)

        # Covariance prediction
        Q = np.array([[0.25 * dt**4, 0.5 * dt**3],
                      [0.5 * dt**3, dt**2]])
        self.covariance = np.dot(np.dot(F, self.covariance), F.T) + Q

    def update(self, measurement):
        # Measurement update
        H = np.array([[1, 0]])
        R = np.array([[self.measurement_noise]])
        y = measurement - np.dot(H, self.state)
        S = np.dot(np.dot(H, self.covariance), H.T) + R
        K = np.dot(np.dot(self.covariance, H.T), np.linalg.inv(S))
        self.state = self.state + np.dot(K, y)
        self.covariance = np.dot((np.eye(2) - np.dot(K, H)), self.covariance)

# Initial state and covariance
initial_state = np.array([[0],
                          [0]])
initial_covariance = np.array([[100, 0],
                               [0, 100]])

# Process noise and measurement noise
process_noise = 0.1
measurement_noise = 10

# Initialize the Kalman filter
kalman_filter = KalmanFilter(initial_state, initial_covariance, process_noise,
                             measurement_noise)

# Simulated measurements
measurements = [0.5, 2.1, 4.0, 5.8, 9.2]
print(measurements)

# Perform the Kalman filtering
for measurement in measurements:
    # Prediction step
    kalman_filter.predict(dt=1)

    # Update step
    kalman_filter.update(measurement)

    # Print the estimated state
    print("Estimated state:", kalman_filter.state.flatten())

OUTPUT:

[0.5, 2.1, 4.0, 5.8, 9.2]


Estimated state: [0.47621879 0.23900119]
Estimated state: [1.93173013 1.21901814]
Estimated state: [3.81510833 1.59472679]
Estimated state: [5.67682061 1.71393539]
Estimated state: [8.50414669 2.15595094]
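The estimated state vector holds [position, velocity]. Over the five updates the velocity estimate climbs toward roughly 2 units per time step, matching the spacing of the simulated measurements, while the position estimate tracks each measurement closely.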
STOCK PRICE FORECASTING MODEL USING A HIDDEN MARKOV MODEL
Exp. No.: 10 Date:
SOURCE CODE:

import numpy as np
from tabulate import tabulate
from hmmlearn import hmm

# Generate sample data
np.random.seed(42)
observed_prices = np.random.random(10) * 100  # Random observed stock prices

# Prepare the data for HMM
X = observed_prices.reshape(-1, 1)

# Define and train the HMM model
model = hmm.GaussianHMM(n_components=3, covariance_type="diag")
model.fit(X)

# Generate future stock price forecasts
forecast_length = 10
future_forecasts, _ = model.sample(forecast_length)

# Prepare the observed prices and future forecasts for printing
table_data = np.concatenate((observed_prices.reshape(-1, 1), future_forecasts),
                            axis=1)

# Set the headers for the table (one per column of table_data)
headers = ["Observed prices", "Future forecasts"]

# Print the table
print(tabulate(table_data, headers, tablefmt="grid"))

OUTPUT:

Fitting a model with 14 free scalar parameters with only 10 data points will result in a degenerate solution.
+-------------------+--------------------+
| Observed prices | Future forecasts |
+===================+====================+
| 37.454 | 22.9751 |
+-------------------+--------------------+
| 95.0714 | 37.5188 |
+-------------------+--------------------+
| 73.1994 | 53.7053 |
+-------------------+--------------------+
| 59.8658 | 60.634 |
+-------------------+--------------------+
| 15.6019 | 71.5652 |
+-------------------+--------------------+
| 15.5995 | 55.7926 |
+-------------------+--------------------+
| 5.80836 | 69.8999 |
+-------------------+--------------------+
| 86.6176 | 71.0359 |
+-------------------+--------------------+
| 60.1115 | 67.5987 |
+-------------------+--------------------+
| 70.8073 | 84.4716 |
+-------------------+--------------------+
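The warning above is expected: a 3-state diagonal-covariance Gaussian HMM has 14 free scalar parameters, while only 10 observations are available, so the fitted model is degenerate and the sampled forecasts should be read as illustrative output rather than meaningful predictions.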
