
NAME: Soham Pote

PRN: 122A8056

Batch: B3

Branch: AI-DS

Subject: Artificial Intelligence

Assignment no. & topic: Assignment 1 - Vacuum World Simulation

Vacuum World Simulation

Introduction

This report describes the improvements made to a simple vacuum world
simulation. The improvements include the addition of a Goal-Based Agent
that cleans the grid intelligently, an augmented animation that shows
which cells have been cleaned, and a more efficient movement strategy
for the agent.

Improvements Made

1. Goal-Based Agent:

- The vacuum cleaner agent was upgraded from a random movement pattern to
a goal-oriented strategy. The agent now selects its next action, either
moving or sucking, depending on whether there is dirt at or next to its
current position. This makes it much more efficient at cleaning the grid.

2. Improved Visualization:

- Cleaned cells are now drawn as blue squares, making it easy to see where
cleaning has taken place and which areas of the grid remain dirty as the
simulation runs (a small preview of this color mapping is sketched below).
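For reference, here is a minimal, self-contained sketch of the three-state
color mapping used for the grid. The state codes (0 = clean, 1 = dirty,
2 = cleaned) follow the simulation code later in this report; the demo grid
itself is just an illustrative example.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors

# Cell states: 0 = clean (green), 1 = dirty (gray), 2 = cleaned (blue)
STATE_COLORS = colors.ListedColormap(['green', 'gray', 'blue'])

# A small made-up grid just to preview the mapping
demo = np.array([[0, 1, 2],
                 [2, 0, 1],
                 [1, 2, 0]])
plt.imshow(demo, cmap=STATE_COLORS, vmin=0, vmax=2)
plt.title("green = clean, gray = dirty, blue = cleaned")
plt.show()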

Agent Movement Strategy

The strategy that the Goal-Based Agent follows is the following:

1. Cleaning Action:

- If it currently is over a dirty cell, it will clean the cell, making the value 2.

2. Move Towards Dirt,


- From the current cell, the agent looks in all four possible directions for
the nearest dirty cell and then moves toward it.

3. Random Move:

- If no dirt is visible in the nearest proximity, the agent moves randomly to


expose other areas of the grid.

This heuristics enables the vacuum cleaner to clean the grid effectively by
first cleaning up the nearest dirt from its vicinity.
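Note that the code in this report only inspects the four adjacent cells. If
a genuine nearest-dirt search is wanted, a breadth-first search over the
grid is a natural extension. The sketch below is an assumed add-on, not
part of the original simulation:

from collections import deque

def bfs_towards_dirt(grid, pos, grid_size):
    """Hypothetical extension: return the first move of a shortest path
    to the nearest dirty cell (value 1), or None if no dirt is reachable.
    Assumes the caller has already cleaned the current cell if dirty."""
    moves = {"UP": (-1, 0), "DOWN": (1, 0), "LEFT": (0, -1), "RIGHT": (0, 1)}
    start = tuple(pos)
    queue = deque([(start, None)])  # (cell, first move taken from start)
    visited = {start}
    while queue:
        (r, c), first = queue.popleft()
        if grid[r, c] == 1 and first is not None:
            return first  # first step on a shortest path to dirt
        for name, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < grid_size and 0 <= nc < grid_size and (nr, nc) not in visited:
                visited.add((nr, nc))
                queue.append(((nr, nc), first if first else name))
    return None  # no dirt left anywhere on the grid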

Features of Animation

1. Grid Visualization:

- The grid is painted with colors that denote each cell's state:

- Green: clean cell.

- Gray: dirty cell.

- Blue: cleaned cell.

- Red square: location of the vacuum cleaner.

2. Action Visualization:

- The animation displays the vacuum cleaner's live position along with its
cleaning actions as it moves across the grid.

Code:

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib import colors

# Define the grid size
grid_size = 5

# Create the grid with some dirt (0 = clean, 1 = dirty)
np.random.seed(0)
grid = np.random.choice([0, 1], size=(grid_size, grid_size), p=[0.7, 0.3])

# Initial position of the vacuum cleaner
vacuum_pos = [0, 0]

# Function to visualize the grid
def plot_grid(grid, vacuum_pos):
    # Colormap includes blue for cleaned cells (state 2)
    cmap = colors.ListedColormap(['green', 'gray', 'blue'])
    plt.imshow(grid, cmap=cmap, vmin=0, vmax=2)
    plt.scatter(vacuum_pos[1], vacuum_pos[0], c='red', s=200, marker='s')
    plt.xticks(range(grid_size))
    plt.yticks(range(grid_size))
    plt.grid(True)

# Define actions as (row, column) offsets
ACTIONS = {
    "UP": (-1, 0),
    "DOWN": (1, 0),
    "LEFT": (0, -1),
    "RIGHT": (0, 1),
    "CLEAN": (0, 0)
}

# Function to perform an action
def perform_action(grid, vacuum_pos, action):
    if action == "CLEAN":
        grid[vacuum_pos[0], vacuum_pos[1]] = 2  # Mark the cell as cleaned
    else:  # update position, staying inside the grid bounds
        new_pos = [vacuum_pos[0] + ACTIONS[action][0],
                   vacuum_pos[1] + ACTIONS[action][1]]
        if 0 <= new_pos[0] < grid_size and 0 <= new_pos[1] < grid_size:
            vacuum_pos[0], vacuum_pos[1] = new_pos
    return grid, vacuum_pos

# Goal-Based Agent that cleans dirt systematically
def goal_based_agent(grid, vacuum_pos):
    # If the current position has dirt, clean it
    if grid[vacuum_pos[0], vacuum_pos[1]] == 1:
        return "CLEAN"
    # Otherwise, move towards an adjacent dirty cell if one exists
    for action in ["UP", "DOWN", "LEFT", "RIGHT"]:
        new_pos = [vacuum_pos[0] + ACTIONS[action][0],
                   vacuum_pos[1] + ACTIONS[action][1]]
        if 0 <= new_pos[0] < grid_size and 0 <= new_pos[1] < grid_size:
            if grid[new_pos[0], new_pos[1]] == 1:
                return action
    # If no adjacent dirt is found, move randomly
    return np.random.choice(["UP", "DOWN", "LEFT", "RIGHT"])

# Initialize the plot
fig, ax = plt.subplots()

def update_plot(frame, grid, vacuum_pos):
    ax.clear()
    # Use the goal-based agent to determine the next action
    action = goal_based_agent(grid, vacuum_pos)
    plot_grid(grid, vacuum_pos)
    # grid and vacuum_pos are mutated in place, so the next frame sees the update
    grid, vacuum_pos = perform_action(grid, vacuum_pos, action)
    return grid, vacuum_pos

# Run the animation
ani = animation.FuncAnimation(fig, update_plot, fargs=(grid, vacuum_pos),
                              frames=50, interval=500, repeat=False)
plt.show()
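If a standalone recording of the run is wanted, the FuncAnimation object can
also write its frames to a file. A minimal usage sketch follows; the filename
is just an example, and the "pillow" writer requires the Pillow package:

# Optional: save the animation instead of (or before) showing it
ani.save("vacuum_world.gif", writer="pillow", fps=2)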

Snapshots

Conclusion

The changes to the vacuum world simulation produced a more intelligent and
visually informative system. The Goal-Based Agent cleans more efficiently
than the original random agent, and the improved visualization makes it
easier to track the agent's progress. The simulation can be extended with
larger, more complex grid worlds and alternative agent strategies.
