
INDIRA GANDHI DELHI TECHNICAL UNIVERSITY FOR WOMEN

BACHELOR OF TECHNOLOGY

Electronics and Communication Engineering - Artificial Intelligence
(ECE-AI Batch 1)
(2021 - 2025)

Artificial Intelligence
for Games
(BAI-406)

PRACTICAL FILE

Submitted to: Ms. Pushpanjali

Submitted by: Saachi Verma
ECE-AI 1
02701182021
INDEX

Experiment Number | Aim | Date | Signature
1. | Implement the A* (A-Star) algorithm for pathfinding in a weighted maze. | 24/01/2025 |
2. | Implement the A* (A-Star) algorithm for solving the 8-puzzle game. | 31/01/2025 |
3. | Implement A* (A-Star) pathfinding on a game by converting the maze into a Navigation Mesh (NavMesh). | 14/02/2025 |
4. | To implement a Finite State Machine (FSM) for an NPC (Non-Player Character) in a survival game using Python. | 21/02/2025 |
5. | Implement Python code of a behavior tree on a game. | 21/03/2025 |
6. | Combine Utility theory with a behavior tree for a game. | 04/04/2025 |
EXPERIMENT- 1

AIM

To implement the A* (A-Star) algorithm for pathfinding in a weighted maze, ensuring an optimal path is found while considering obstacles and different path costs.

SOFTWARE REQUIRED

• C++ Compiler (GCC/MinGW/MSVC)


• Code Editor (VS Code, Code::Blocks, or Dev-C++)
• Operating System (Windows/Linux/MacOS)

THEORY

A* (A-Star) is an informed search algorithm widely used in artificial intelligence for games,
robotics, and real-world navigation. It is an extension of Dijkstra’s algorithm with an added
heuristic to improve efficiency. A* efficiently finds the shortest path from a start position to a
goal while minimizing computational costs.

The algorithm operates by evaluating paths using the function:

f(n) = g(n) + h(n)

where:

• g(n) represents the actual cost from the start node to the current node.
• h(n) is a heuristic estimate of the cost from the current node to the goal.
• f(n) is the total estimated cost of the cheapest path passing through node n.
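For example, if reaching a node costs g(n) = 4 and the node lies 3 cells (Manhattan distance) from the goal, then f(n) = 4 + 3 = 7, and A* expands this node before any node with a higher f value.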

Heuristic Function

A* uses a heuristic function to guide its search. A common heuristic for grid-based pathfinding is the Manhattan Distance, which assumes only horizontal and vertical movement:

h(n) = |x_goal - x_current| + |y_goal - y_current|

Other heuristic functions include:

• Euclidean Distance (used when diagonal movement is allowed)


• Octile Distance (used in tile-based games)
• Chebyshev Distance (for environments with diagonal movement and uniform cost)
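For example, from a node at (0, 0) to a goal at (3, 4): Manhattan distance = 3 + 4 = 7, Euclidean distance = sqrt(3^2 + 4^2) = 5, and Chebyshev distance = max(3, 4) = 4.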

Why is A* Efficient?

A* is more efficient than uninformed search algorithms like Breadth-First Search (BFS) and
Dijkstra’s Algorithm because it prioritizes paths that appear more promising. It balances between
optimality (Dijkstra’s approach) and efficiency (Greedy Best-First Search). The algorithm
guarantees an optimal path as long as the heuristic function is admissible (never overestimates the
actual cost).
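In the weighted maze used below, every walkable cell costs at least 1, so the Manhattan distance (which counts one unit per required step) never overestimates the true path cost and therefore remains admissible.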
CODE

#include <iostream>
#include <vector>
#include <queue>
#include <cmath>
#include <stack>

using namespace std;

// Define a Node structure to hold coordinates and costs


struct Node {
int x, y;
int g, h, f;
Node* parent;

Node(int x, int y, int g, int h, Node* parent = nullptr)


: x(x), y(y), g(g), h(h), f(g + h), parent(parent) {}
};

// Compare function for priority queue


struct Compare {
bool operator()(const Node* a, const Node* b) {
return a->f > b->f;
}
};

// Heuristic function: Manhattan distance


int heuristic(int x1, int y1, int x2, int y2) {
return abs(x1 - x2) + abs(y1 - y2);
}

// Check if a cell is valid for movement


bool isValid(int x, int y, const vector<vector<int> >& maze) {
return x >= 0 && x < maze.size() && y >= 0 && y < maze[0].size() && maze[x][y] != -1;
}

// Trace the path from the target to the start


void tracePath(Node* node) {
stack<pair<int, int> > path;
while (node) {
path.push(make_pair(node->x, node->y));
node = node->parent;
}

cout << "Path found:\n";


while (!path.empty()) {
pair<int, int> top = path.top();
path.pop();
cout << "(" << top.first << ", " << top.second << ") ";
}
cout << endl;
}

// A* Algorithm with weights


void aStar(const vector<vector<int> >& maze, pair<int, int> start, pair<int, int> target) {
int rows = maze.size();
int cols = maze[0].size();

priority_queue<Node*, vector<Node*>, Compare> openList;


vector<vector<bool> > closedList(rows, vector<bool>(cols, false));

// Push the start node into the open list


Node* startNode = new Node(start.first, start.second, 0, heuristic(start.first, start.second,
target.first, target.second));
openList.push(startNode);

// Define movement directions (up, down, left, right)


vector<pair<int, int> > directions;
directions.push_back(make_pair(-1, 0));
directions.push_back(make_pair(1, 0));
directions.push_back(make_pair(0, -1));
directions.push_back(make_pair(0, 1));

while (!openList.empty()) {
Node* current = openList.top();
openList.pop();

// Mark the current cell as closed


closedList[current->x][current->y] = true;
// If the target is reached, trace the path and return
if (current->x == target.first && current->y == target.second) {
tracePath(current);
return;
}

// Explore neighbors
for (size_t i = 0; i < directions.size(); ++i) {
int newX = current->x + directions[i].first;
int newY = current->y + directions[i].second;

if (isValid(newX, newY, maze) && !closedList[newX][newY]) {


int g = current->g + maze[newX][newY]; // Include the weight of the new cell
int h = heuristic(newX, newY, target.first, target.second);
Node* neighbor = new Node(newX, newY, g, h, current);
openList.push(neighbor);
}
}
}

cout << "No path found.\n";


}

int main() {
// Weighted maze: -1 indicates obstacle, other values represent path cost
vector<vector<int> > maze = {
{1, -1, 3, 1, 1},
{1, -1, 2, -1, 2},
{1, 2, 2, -1, 2},
{-1, -1, 1, -1, 1},
{1, 1, 1, 2, 1}
};

pair<int, int> start(0, 0); // Starting point


pair<int, int> target(4, 4); // Target point

aStar(maze, start, target);

return 0;
}
EXPLANATION

1. Data Structures Used

• Node Structure:
o Stores the coordinates (x, y), actual path cost g, heuristic cost h, total cost f = g + h,
and a pointer to its parent node.
• Priority Queue (openList):
o Maintains nodes sorted by the lowest f value to explore the most promising paths
first.
• 2D Boolean Vector (closedList):
o Keeps track of visited nodes to avoid redundant processing.

2. Functions

• heuristic(x1, y1, x2, y2):


o Computes the Manhattan Distance heuristic h(n) = |x_goal - x_current| + |y_goal - y_current|, assuming movement is restricted to horizontal and vertical directions.
• isValid(x, y, maze):
o Checks if a given cell (x, y) is within the maze boundaries and not an obstacle (-1).
• tracePath(Node* node):
o Backtracks from the target node to the start node, printing the optimal path found.
• aStar(maze, start, target):
o Implements the A* algorithm:
o Initializes the open list with the start node.
o Iteratively selects the node with the lowest f value.
o Expands neighbors, calculating their g (cost from start) and h (heuristic cost).
o Pushes valid neighbors into the open list for exploration.
o Terminates when the target is reached or no path is found.

3. Execution Flow

o The algorithm starts at the given initial position and expands nodes based on f(n) =
g(n) + h(n).
o The lowest-cost path is expanded first, avoiding obstacles.
o The final path is traced from the goal back to the start.

OUTPUT
CONCLUSION

The A* algorithm efficiently finds the shortest path in a weighted maze, balancing cost (g) and
heuristic (h). This technique is widely used in game AI for real-time pathfinding in navigation
meshes, 2D tile-based maps, and 3D environments.
EXPERIMENT- 2

AIM

To implement the A* (A-Star) algorithm for solving the 8-puzzle game, ensuring an optimal
solution while minimizing the number of moves required.

SOFTWARE REQUIRED

• C++ Compiler (GCC/MinGW/MSVC)


• Code Editor (VS Code, Code::Blocks, or Dev-C++)
• Operating System (Windows/Linux/MacOS)

THEORY

The 8-puzzle game consists of a 3x3 grid with 8 numbered tiles and one empty space. The
objective is to rearrange the tiles to reach a goal state by sliding tiles into the empty space.

A* (A-Star) is an informed search algorithm widely used for pathfinding and problem-solving
in artificial intelligence. It extends Dijkstra’s algorithm by using a heuristic to improve
efficiency. A* finds the optimal solution while minimizing computational costs.

The algorithm operates using the function:

f(n) = g(n) + h(n)

where:

• g(n) represents the actual cost from the start node to the current node.

• h(n) is a heuristic estimate of the cost from the current node to the goal.

• f(n) is the total estimated cost of the cheapest solution passing through node n.

Heuristic Function

A* uses a heuristic function to guide its search. Common heuristics for the 8-puzzle include:

• Manhattan Distance: Sum of the vertical and horizontal distances of each tile from its
goal position.

• Misplaced Tiles: Count of tiles not in their goal position.

• Linear Conflict: Refinement of Manhattan Distance considering row/column conflicts.
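For instance, with the start state {{1,2,3},{4,0,5},{6,7,8}} and goal {{1,2,3},{4,5,6},{7,8,0}} used in the code below, four tiles (5, 6, 7, 8) are misplaced, while the Manhattan distances sum to 1 + 3 + 1 + 1 = 6; the tighter Manhattan estimate lets A* prune more states.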


CODE

#include <iostream>
#include <vector>
#include <queue>
#include <set>
#include <cmath>

using namespace std;

struct Node {
vector<vector<int>> state;
int g, h, f;
Node* parent;
pair<int, int> blank;

Node(vector<vector<int>> state, int g, int h, Node* parent, pair<int, int> blank)


: state(state), g(g), h(h), f(g + h), parent(parent), blank(blank) {}
};

struct Compare {
bool operator()(const Node* a, const Node* b) {
return a->f > b->f;
}
};

int heuristic(const vector<vector<int>>& state, const vector<vector<int>>& goal) {


int h = 0;
for (int i = 0; i < 3; i++) {
for (int j = 0; j < 3; j++) {
if (state[i][j] != 0) {
for (int x = 0; x < 3; x++) {
for (int y = 0; y < 3; y++) {
if (state[i][j] == goal[x][y]) {
h += abs(i - x) + abs(j - y);
}
}
}
}
}
}
return h;
}

bool isGoal(const vector<vector<int>>& state, const vector<vector<int>>& goal) {


return state == goal;
}

void printPath(Node* node) {


if (node == nullptr) return;
printPath(node->parent);
for (auto row : node->state) {
for (int num : row) {
cout << num << " ";
}
cout << endl;
}
cout << " ---- \n";
}

void aStar(vector<vector<int>> start, vector<vector<int>> goal) {


priority_queue<Node*, vector<Node*>, Compare> openList;
set<vector<vector<int>>> closedList;

pair<int, int> blank;


for (int i = 0; i < 3; i++) {
for (int j = 0; j < 3; j++) {
if (start[i][j] == 0) blank = {i, j};
}
}

Node* startNode = new Node(start, 0, heuristic(start, goal), nullptr, blank);


openList.push(startNode);

vector<pair<int, int>> directions = {{-1, 0}, {1, 0}, {0, -1}, {0, 1}};

while (!openList.empty()) {
Node* current = openList.top();
openList.pop();
if (isGoal(current->state, goal)) {
cout << "Solution Found:\n";
printPath(current);
return;
}

closedList.insert(current->state);

for (auto dir : directions) {


int newX = current->blank.first + dir.first;
int newY = current->blank.second + dir.second;

if (newX >= 0 && newX < 3 && newY >= 0 && newY < 3) {
vector<vector<int>> newState = current->state;
swap(newState[current->blank.first][current->blank.second], newState[newX][newY]);

if (closedList.find(newState) == closedList.end()) {
Node* neighbor = new Node(newState, current->g + 1, heuristic(newState, goal),
current, {newX, newY});
openList.push(neighbor);
}
}
}
}
cout << "No solution found." << endl;
}

int main() {
vector<vector<int>> start = {{1, 2, 3}, {4, 0, 5}, {6, 7, 8}};
vector<vector<int>> goal = {{1, 2, 3}, {4, 5, 6}, {7, 8, 0}};

aStar(start, goal);
return 0;
}

EXPLANATION

1. Data Structures Used


o Node structure stores the 8-puzzle state, costs (g, h, f), parent node, and blank tile
position.
o Priority Queue maintains nodes sorted by the lowest f value.
o Set stores visited states to prevent redundant processing.
2. Functions
o heuristic(state, goal): Computes the Manhattan distance heuristic.
o isGoal(state, goal): Checks if the current state is the goal state.
o printPath(node): Prints the solution path by backtracking from the goal.
o aStar(start, goal): Implements the A* algorithm.

3. Execution Flow
o The algorithm expands nodes with the lowest f(n) = g(n) + h(n).
o The blank tile moves in all possible directions.
o The optimal sequence of moves is printed upon reaching the goal.

OUTPUT
CONCLUSION

The A* algorithm efficiently finds the optimal solution for the 8-puzzle game, minimizing moves
while ensuring correctness. It is widely used in AI for game solving and robotics applications.
EXPERIMENT- 3

AIM

To implement A* (A-Star) pathfinding on a 2D maze by converting the maze into a Navigation Mesh (NavMesh) and finding the optimal path between a given start and goal position.

SOFTWARE REQUIRED

• C++ Compiler (GCC/MinGW/MSVC)


• Code Editor (VS Code, Code::Blocks, or Dev-C++)
• Operating System (Windows/Linux/MacOS)

THEORY

A* (A-Star) is an efficient and widely used pathfinding algorithm that finds the shortest path
between a start and a goal node in a graph. It is commonly used in games and robotics for AI
navigation.

Working of A*
• It uses g(n) (cost from start to current node) and h(n) (heuristic estimated cost from
current to goal).
• The f(n) = g(n) + h(n) function determines the best path.
• A* expands the most promising nodes first, ensuring an optimal and fast path.

In traditional maze games, AI characters need to navigate from a start position to a goal while
avoiding obstacles (walls). Instead of checking every cell in a grid-based approach, we use
NavMesh (Navigation Mesh) to optimize pathfinding.

A Navigation Mesh (NavMesh) is a simplified representation of a game environment, where


walkable areas are divided into polygons (nodes) instead of using a grid-based approach.

Advantages of NavMesh in Games


• Reduces the number of pathfinding calculations.
• Ensures smooth movement instead of grid-based sharp turns.
• Widely used in modern AI-based games like open-world RPGs and RTS games.
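In the implementation below, the conversion is kept simple: every walkable cell of the 5x5 maze becomes one NavMesh polygon centred on that cell, and an edge is added between polygons whose cells are 4-adjacent. For example, (0, 0) is assigned ID 0, (0, 1) ID 1, (0, 2) ID 2, and the single walkable cell of row 1, (1, 2), becomes ID 3.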

CODE

#include <iostream>
#include <vector>
#include <queue>
#include <cmath>
#include <unordered_map>
#include <algorithm>

using namespace std;

// Maze dimensions
const int ROWS = 5;
const int COLS = 5;

// Input maze (0 = walkable, 1 = wall)


int maze[ROWS][COLS] = {
{0, 0, 0, 1, 1},
{1, 1, 0, 1, 1},
{0, 0, 0, 0, 1},
{1, 1, 1, 0, 1},
{1, 1, 1, 0, 0},
};

// Structure for a polygon in the NavMesh


struct Polygon {
int id;
pair<int, int> center;
vector<int> neighbors;
};

// List of polygons in the NavMesh


vector<Polygon> navMesh;

// Check if a given cell in the maze is walkable


bool IsWalkable(int r, int c) {
return (r >= 0 && r < ROWS && c >= 0 && c < COLS && maze[r][c] == 0);
}

// Convert the 2D maze into NavMesh polygons


void GenerateNavMesh() {
int id = 0;
unordered_map<int, unordered_map<int, int>> cellToId;

for (int r = 0; r < ROWS; r++) {


for (int c = 0; c < COLS; c++) {
if (maze[r][c] == 0) {
Polygon polygon;
polygon.id = id;
polygon.center = {c, r};
cellToId[r][c] = id; // Store mapping of (row, col) to polygon ID
navMesh.push_back(polygon);
id++;
}
}
}

// Establish neighbor connections


for (auto &polygon : navMesh) {
int r = polygon.center.second;
int c = polygon.center.first;

if (IsWalkable(r, c - 1)) polygon.neighbors.push_back(cellToId[r][c - 1]); // Left


if (IsWalkable(r, c + 1)) polygon.neighbors.push_back(cellToId[r][c + 1]); // Right
if (IsWalkable(r - 1, c)) polygon.neighbors.push_back(cellToId[r - 1][c]); // Up
if (IsWalkable(r + 1, c)) polygon.neighbors.push_back(cellToId[r + 1][c]); // Down
}
}

// Euclidean distance heuristic


double Heuristic(pair<int, int> a, pair<int, int> b) {
return sqrt(pow(a.first - b.first, 2) + pow(a.second - b.second, 2));
}

// Find the closest NavMesh polygon to a given position


int FindClosestPolygon(pair<int, int> pos) {
double minDist = 1e9;
int closest = -1;

for (auto &polygon : navMesh) {


double distance = Heuristic(pos, polygon.center);
if (distance < minDist) {
minDist = distance;
closest = polygon.id;
}
}

return closest;
}
// A* Pathfinding algorithm on NavMesh
vector<int> AStarNavMesh(pair<int, int> start, pair<int, int> goal) {
int startNode = FindClosestPolygon(start);
int goalNode = FindClosestPolygon(goal);

if (startNode == goalNode) return {startNode};

priority_queue<pair<double, int>, vector<pair<double, int>>, greater<>> openSet;


openSet.push({0, startNode});

unordered_map<int, int> cameFrom;


unordered_map<int, double> gScore, fScore;

for (auto &polygon : navMesh) {


gScore[polygon.id] = 1e9;
fScore[polygon.id] = 1e9;
}

gScore[startNode] = 0;
fScore[startNode] = Heuristic(navMesh[startNode].center, navMesh[goalNode].center);

while (!openSet.empty()) {
int current = openSet.top().second;
openSet.pop();

if (current == goalNode) {
vector<int> path;
while (cameFrom.find(current) != cameFrom.end()) {
path.push_back(current);
current = cameFrom[current];
}
path.push_back(startNode);
reverse(path.begin(), path.end());
return path;
}

for (int neighbor : navMesh[current].neighbors) {


double tentative_gScore = gScore[current] + Heuristic(navMesh[current].center,
navMesh[neighbor].center);

if (tentative_gScore < gScore[neighbor]) {


cameFrom[neighbor] = current;
gScore[neighbor] = tentative_gScore;
fScore[neighbor] = gScore[neighbor] + Heuristic(navMesh[neighbor].center,
navMesh[goalNode].center);
openSet.push({fScore[neighbor], neighbor});
}
}
}

return {}; // No path found


}

// Main function
int main() {
GenerateNavMesh();

pair<int, int> startPos = {0, 0}; // Start position in the maze


pair<int, int> goalPos = {4, 4}; // Goal position in the maze

vector<int> path = AStarNavMesh(startPos, goalPos);

if (!path.empty()) {
cout << "Path found through NavMesh nodes:\n";
for (int id : path) {
auto node = navMesh[id];
cout << "ID: " << node.id << " -> (" << node.center.first << ", " <<
node.center.second << ")\n";
}
cout << "Goal Reached!\n";
} else {
cout << "No path found!\n";
}

return 0;
}
EXPLANATION

1. Data Structures Used


• Polygon Structure (NavMesh Node)
• Represents a walkable region in the maze.
• Stores:
o ID (unique identifier of the polygon).
o Center coordinates (x, y).
o Neighbors (IDs of adjacent polygons for pathfinding).
• Priority Queue (openSet)
• Stores polygons sorted by lowest f-value (f = g + h).
• Ensures the most promising polygon is expanded first.
• Unordered Map (cameFrom, gScore, fScore)
• cameFrom: Tracks the parent polygon for reconstructing the path.
• gScore: Stores g(n) (cost from start to current polygon).
• fScore: Stores f(n) = g(n) + h(n) (total estimated cost).

2. Functions
• Heuristic (Euclidean Distance)
• Computes the heuristic h(n) as the Euclidean distance between two polygon centers:
h(n) = sqrt((x_goal - x_current)^2 + (y_goal - y_current)^2)
• Find Closest NavMesh Polygon
• Finds the nearest polygon for a given start or goal position by comparing distances.
• A* Algorithm on NavMesh
1. Initialize the Open List
o Start polygon is added with g = 0 and f = h(start, goal).
2. Expand the Best Node (Lowest f-value)
o Pick the polygon with the lowest f from the open list.
o If it is the goal polygon, terminate and reconstruct the path.
3. Explore Neighbors
o Calculate g and f values for each connected polygon.
o If the new path is better, update scores and parent pointers.
4. Continue Until Goal is Found or Open List is Empty
• Trace the Path
• Backtracks from the goal polygon to the start polygon using cameFrom.
• Prints the polygon IDs and their center coordinates.

3. Execution Flow
1. Convert the maze into a Navigation Mesh by grouping walkable areas into polygons.
2. Find the closest polygons to the start and goal positions.
3. Run A* pathfinding to find the shortest route through the polygons.
4. Output the final path, listing both polygon IDs and coordinates.
OUTPUT

CONCLUSION

In this experiment, we successfully implemented A* pathfinding on a 2D maze using a Navigation Mesh (NavMesh). Instead of navigating through individual grid cells, we divided the maze into walkable polygonal regions, significantly improving efficiency and movement smoothness.
EXPERIMENT- 4

AIM

To implement a Finite State Machine (FSM) for an NPC (Non-Player Character) Zombie
AI in a survival game using Python, where the NPC transitions between different behavioral
states.

SOFTWARE REQUIRED

• Python 3 Interpreter (Python 3.x)


• Code Editor: VS Code (Visual Studio Code)
• Operating System: Windows / Linux / macOS

THEORY

In modern video games, Non-Player Characters (NPCs) are controlled using Artificial
Intelligence (AI) techniques to make them behave realistically. One common approach to
implementing NPC behavior is using a Finite State Machine (FSM), which allows the NPC
to transition between different states based on external events and conditions.
In this experiment, we implement Zombie AI for a Survival Game using FSM in Python.
The zombie will have different states like Wandering, Investigating, Chasing, Attacking,
Feeding, and Searching, making its behavior dynamic and engaging.

Finite State Machine (FSM) in AI


A Finite State Machine (FSM) is a mathematical model used to design NPC behavior. It
consists of:
1. States – The different behaviors an NPC can exhibit.
2. Transitions – The conditions that trigger a switch from one state to another.
3. Events – The external triggers (like player actions or environmental factors) that cause
transitions.

FSM is widely used in games because it provides:


• Predictable behavior – NPCs follow structured decision-making.
• Efficiency – FSM is lightweight and doesn’t consume excessive resources.
• Modularity – Additional states can be easily added.
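The same idea can be written compactly as a transition table. The sketch below is only illustrative (a plain dictionary keyed by (state, event) pairs); the full zombie FSM in the CODE section wraps this logic in a class with timers and printed actions.

TRANSITIONS = {
    ("Wandering", "heard_noise"): "Investigating",
    ("Wandering", "saw_player"): "Chasing",
    ("Investigating", "saw_player"): "Chasing",
    ("Chasing", "player_close"): "Attacking",
    ("Chasing", "lost_sight"): "Searching",
}

def step(state, event):
    # Return the next state, or stay in the current state if no transition matches.
    return TRANSITIONS.get((state, event), state)

state = "Wandering"
for event in ["heard_noise", "saw_player", "player_close"]:
    state = step(state, event)
    print(event, "->", state)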

Zombie AI in a Survival Game


The Zombie NPC will have different states to create a realistic survival horror experience. The
states and transitions are:
1. Wandering
• Default state: The zombie aimlessly roams the environment.
• Transitions:
o If the zombie hears a noise, it switches to Investigating.
o If it sees the player, it switches to Chasing.
2. Investigating
• The zombie moves toward the location of a noise.
• Transitions:
o If it finds the player, it switches to Chasing.
o If it finds nothing, it switches back to Wandering.
3. Chasing
• The zombie actively pursues the player.
• Transitions:
o If the player escapes or hides, the zombie switches to Searching.
o If the player is close enough, the zombie switches to Attacking.
4. Attacking
• The zombie tries to bite or claw the player.
• Transitions:
o If the player is killed, the zombie switches to Feeding.
o If the player escapes, the zombie switches to Searching.
5. Feeding
• If the zombie kills the player, it stops to eat.
• Transitions:
o After finishing feeding, it returns to Wandering.
6. Searching
• If the player disappears from sight, the zombie searches the nearby area.
• Transitions:
o If it finds the player, it switches to Chasing.
o If it finds nothing, it switches back to Wandering.

CODE

import time
import random

class ZombieFSM:
def __init__(self):
self.state = "Wandering"
self.search_time = 0 # Time zombie spends investigating
self.hunger_level = random.randint(3, 6) # Hunger increases aggression

def on_event(self, event):


if self.state == "Wandering":
print("Zombie: *Mindlessly roaming...*")
if event == "heard_noise":
print("Zombie: *Grrrr... what's that noise?*")
self.state = "Investigating"
self.search_time = 3
elif event == "saw_player":
print("Zombie: *BRAINS!* (Starts chasing)")
self.state = "Chasing"

elif self.state == "Investigating":


if self.search_time > 0:
print("Zombie: *Sniff sniff... Looking around...*")
self.search_time -= 1
else:
print("Zombie: *Nothing here... Back to wandering.*")
self.state = "Wandering"

if event == "saw_player":
print("Zombie: *Found fresh meat! (Chasing!)*")
self.state = "Chasing"

elif self.state == "Chasing":


print("Zombie: *Groooar! Running after the player!*")
if event == "lost_sight":
print("Zombie: *Where did they go?! Searching...*")
self.state = "Searching"
self.search_time = 3
elif event == "player_close":
print("Zombie: *Lunging at the player!*")
self.state = "Attacking"

elif self.state == "Attacking":


print("Zombie: *Biting and clawing!*")
if event == "player_killed":
print("Zombie: *Mmm... Fresh meat!* (Starts Feeding)")
self.state = "Feeding"
elif event == "player_escaped":
print("Zombie: *Huh?! Where'd they go?!* (Searching)")
self.state = "Searching"
self.search_time = 3

elif self.state == "Feeding":


if self.hunger_level > 0:
print("Zombie: *Munch munch...*")
self.hunger_level -= 1
else:
print("Zombie: *Satisfied... Time to wander again...*")
self.state = "Wandering"
elif self.state == "Searching":
if self.search_time > 0:
print("Zombie: *Looking around... Maybe they are hiding...*")
self.search_time -= 1
else:
print("Zombie: *Guess they got away... Back to wandering.*")
self.state = "Wandering"

def get_state(self):
return self.state

# Simulating Zombie AI Behavior


zombie = ZombieFSM()

# Example scenario with different events


events = [
"heard_noise", "saw_player", "player_close", "player_escaped",
"lost_sight", "lost_sight", "saw_player", "player_close",
"player_killed"
]

for event in events:


time.sleep(1)
zombie.on_event(event)
print(f"Current State: {zombie.get_state()}\n")

EXPLANATION

This Python program implements a Finite State Machine (FSM) for a Zombie NPC in a
Survival Game. The zombie reacts to environmental stimuli such as noises and the player's
presence, transitioning between different states based on events.

Data Structures Used


ZombieFSM Class (Finite State Machine)

• Maintains the zombie's current state.


• Stores variables like hunger level (aggression factor) and search time (how long it searches
before giving up).
• Controls state transitions based on received events.
State Transition Table

Current State | Event | New State | Action Performed
Wandering | heard_noise | Investigating | Moves to noise source
Wandering | saw_player | Chasing | Runs toward player
Investigating | Time expires | Wandering | Resumes wandering
Investigating | saw_player | Chasing | Runs toward player
Chasing | player_close | Attacking | Lunges at the player
Chasing | lost_sight | Searching | Looks around for player
Attacking | player_killed | Feeding | Starts eating the player
Attacking | player_escaped | Searching | Starts looking around
Searching | Time expires | Wandering | Gives up and resumes wandering
Feeding | Hunger satisfied | Wandering | Resumes wandering

Explanation of Code Functionality


State Management in on_event(event)

• The on_event(event) method processes events and changes the zombie’s state accordingly.
• The zombie begins in the Wandering state and transitions based on stimuli:
o Noise → Investigating
o Seeing a player → Chasing
o Losing sight of the player → Searching
o Getting close to the player → Attacking
o Killing the player → Feeding
o If hunger is satisfied → Wandering again

Simulating the Zombie's Perception

• The FSM detects environment changes via events (heard_noise, saw_player, etc.).
• Events simulate an AI perception system in a real game.

Zombie AI Behavior Execution

1. Wandering
o The zombie roams without a specific target.
o If it hears a noise, it moves to Investigating.
o If it sees the player, it starts Chasing.
2. Investigating
o The zombie looks for the source of the noise.
o If the search time expires, it resumes wandering.
o If it spots the player, it switches to Chasing.
3. Chasing
o The zombie aggressively follows the player.
o If it loses sight of the player, it starts Searching.
o If it gets close enough, it switches to Attacking.
4. Attacking
o The zombie tries to bite the player.
o If the player escapes, it transitions to Searching.
o If the player dies, it starts Feeding.
5. Searching
o The zombie looks for the player for a while.
o If the search time expires, it resumes wandering.
6. Feeding
o The zombie eats the player's body for a few cycles.
o Once its hunger is satisfied, it returns to Wandering.

Execution Flow of the Current FSM Code


Simulating the Zombie's Behavior in the Console

The provided code runs a simulation where events are triggered sequentially.

Analysis

• The zombie responds to events in a realistic way.


• It prioritizes threats but lacks an explicit priority queue.
• With a priority queue, the zombie could make better decisions when multiple events occur
at once.
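A minimal sketch of that extension (the priority values below are assumptions, not part of the lab code): simultaneous events could be ranked with Python's heapq before being passed to on_event().

import heapq

# Assumed priorities: lower number = more urgent (illustrative values only).
EVENT_PRIORITY = {"player_close": 0, "saw_player": 1, "heard_noise": 2, "lost_sight": 3}

def order_events(simultaneous_events):
    # Sort events so the most urgent one is handled first.
    heap = [(EVENT_PRIORITY.get(event, 99), event) for event in simultaneous_events]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(order_events(["heard_noise", "player_close", "lost_sight"]))
# Expected output: ['player_close', 'heard_noise', 'lost_sight']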

OUTPUT

CONCLUSION

In this experiment, we designed and implemented a Finite State Machine (FSM) based Zombie AI for a Survival Game using Python. The FSM model allows the zombie to react dynamically to external stimuli, providing engaging and challenging gameplay for the player.
This approach is widely used in game development to create intelligent and realistic NPC
behaviors.
EXPERIMENT- 5

AIM

To implement a behavior tree in Python for a Zombie AI game.

SOFTWARE REQUIRED

• Python 3 Interpreter (Python 3.x)


• Code Editor: VS Code (Visual Studio Code)
• Operating System: Windows / Linux / macOS

THEORY

A Behavior Tree (BT) is a hierarchical decision-making model used in game AI. It consists of
different node types that control NPC (Non-Playable Character) behavior in a structured way. Unlike
Finite State Machines (FSM), BTs provide modularity, flexibility, and reusability.
Key Components of a Behavior Tree
1. Control Nodes
o Selector: Tries each child node until one succeeds.
o Sequence: Executes children in order; fails if any fail.
2. Decorator Nodes
o Modify the behavior of a child (e.g., Inverter flips success/failure); a minimal sketch follows this list.
3. Leaf Nodes
o Condition Nodes: Check game states (e.g., "Is the player nearby?").
o Action Nodes: Perform actions (e.g., attack, move, flee).
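The CODE section below implements Selector and Sequence but no decorator node; a minimal sketch of an Inverter following the same run() interface might look like this (illustrative only, not part of the lab code):

class Inverter:
    # Decorator that flips its child's result: success becomes failure and vice versa.
    # Follows the same run() interface as the Node classes in the CODE section below.
    def __init__(self, child):
        self.child = child

    def run(self):
        return not self.child.run()

# Usage sketch: succeed only when the player is NOT nearby.
# Sequence([Inverter(CheckPlayerNearby()), Wander()])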

Zombie AI in a Survival Game


In a zombie survival game, AI-controlled zombies need to make dynamic decisions based on player
actions. A behavior tree enables zombies to:
• Attack the player if nearby.
• Retreat when health is low.
• Wander aimlessly if no immediate threats are present.
This approach ensures realistic, engaging AI behavior by allowing zombies to react dynamically
rather than following scripted patterns.

CODE
import random

# Base Node Class


class Node:
def run(self):
"""Runs the node and returns status"""
raise NotImplementedError

# Control Flow Nodes


class Selector(Node):
def __init__(self, children):
self.children = children

def run(self):
for child in self.children:
if child.run(): # If any child succeeds, return success
return True
return False # If all fail, return failure

class Sequence(Node):
def __init__(self, children):
self.children = children

def run(self):
for child in self.children:
if not child.run(): # If any child fails, return failure
return False
return True # If all succeed, return success

# Action (Leaf) Nodes


class CheckPlayerNearby(Node):
def run(self):
"""Returns True if player is nearby"""
player_nearby = random.choice([True, False])
print(f"[Check] Player Nearby: {player_nearby}")
return player_nearby

class ChasePlayer(Node):
def run(self):
print("[Action] Zombie is chasing the player...")
return True

class CheckAttackRange(Node):
def run(self):
"""Check if the player is within attack range"""
in_range = random.choice([True, False])
print(f"[Check] Player in Attack Range: {in_range}")
return in_range

class AttackPlayer(Node):
def run(self):
print("[Action] Zombie is attacking the player!")
return random.choice([True, False]) # Attack might fail

class CheckHealth(Node):
def __init__(self):
self.health = random.randint(10, 100) # Random initial health

def run(self):
"""Check if zombie has low health (below 30)"""
print(f"[Check] Zombie Health: {self.health}")
return self.health < 30
class Retreat(Node):
def run(self):
print("[Action] Zombie is retreating to recover...")
return True

class Wander(Node):
def run(self):
print("[Action] Zombie is wandering around...")
return True

# Build the Behavior Tree


def build_behavior_tree():
root = Selector([
Sequence([CheckPlayerNearby(), ChasePlayer(),
Selector([Sequence([CheckAttackRange(), AttackPlayer()]), Wander()])]),
Sequence([CheckHealth(), Retreat()]),
Wander()
])
return root

# Run the behavior tree


tree = build_behavior_tree()
for _ in range(5): # Simulate multiple decision cycles
print("\n=== New Decision Cycle ===")
tree.run()

EXPLANATION

This code implements a Behavior Tree (BT) to control a Zombie AI in a game. The tree allows the
zombie to make dynamic decisions such as chasing the player, attacking, retreating when health
is low, or wandering if idle.

1. Base Node Class

The base node class defines the structure of all behavior tree nodes. Each node must implement a
run() method, which determines whether the action succeeds (True) or fails (False).

2. Control Flow Nodes

These nodes determine how different behaviors are executed in a structured way.

• Selector Node: Tries each child node one by one. If any child succeeds, it stops and returns
success. If all fail, it returns failure.
• Sequence Node: Executes its children in order. If any child fails, the sequence stops and
returns failure. If all succeed, it returns success.

3. Action (Leaf) Nodes

Leaf nodes represent conditions (checks) or actions (behaviors).

• CheckPlayerNearby: Randomly determines if the player is near. If true, the zombie chases
the player.
• ChasePlayer: Executes the action of chasing the player when nearby.
• CheckAttackRange: Checks whether the zombie is close enough to attack.
• AttackPlayer: If in range, the zombie attacks the player, but the attack may fail.
• CheckHealth: Determines whether the zombie's health is low (below a threshold).
• Retreat: If health is low, the zombie retreats to recover.
• Wander: If the zombie is not engaged in combat, it wanders randomly in the environment.

4. Behavior Tree Structure

The tree is structured with different behaviors prioritized using Selector and Sequence nodes. The
logic follows:

1. If the player is nearby, the zombie chases them.


2. If the zombie reaches attack range, it tries to attack.
3. If the attack fails, the zombie wanders instead.
4. If the zombie’s health is low, it retreats instead of fighting.
5. If none of the above conditions are met, the zombie wanders aimlessly.

5. Execution and Decision Cycles

The behavior tree is run multiple times, simulating multiple decision cycles. Each cycle determines
the zombie’s action based on random conditions, making the AI dynamic and unpredictable.

This structure allows the zombie to react logically and adaptively to the game environment, making
the AI behavior more engaging and realistic.

OUTPUT
CONCLUSION

This experiment demonstrated the effectiveness of Behavior Trees (BTs) for Zombie AI in a
survival game. Compared to Finite State Machines (FSMs), BTs offer modularity,
flexibility, and scalability for decision-making. The zombie dynamically chased, attacked,
retreated, or wandered based on conditions, making its behavior more realistic and
adaptive. This structured AI approach enhances gameplay immersion and can be further
improved with group coordination, environmental awareness, and advanced behaviors.
EXPERIMENT- 6

AIM

To combine Utility Theory with a behavior tree for a Zombie AI game.

SOFTWARE REQUIRED

• Python 3 Interpreter (Python 3.x)


• Code Editor: VS Code (Visual Studio Code)
• Operating System: Windows / Linux / macOS

THEORY

In game AI, Behavior Trees (BTs) provide a structured and modular approach to decision-
making, allowing AI characters to execute actions based on logical conditions. However, BTs alone
follow a fixed hierarchical structure, making them less flexible in dynamic situations.
Utility Theory enhances BTs by introducing a numerical decision-making system, where actions
are assigned utility values based on game conditions such as health, distance to the player, and
attack success rates. The action with the highest utility is chosen, ensuring that AI behavior is more
adaptive and context-sensitive rather than following rigid priority orders.
By integrating Utility Theory with Behavior Trees, the Zombie AI can:
• Dynamically switch behaviors based on environmental factors.
• Prioritize actions like chasing, attacking, retreating, or wandering based on real-time
utility calculations.
• Make more realistic and unpredictable decisions, improving gameplay immersion.
This hybrid approach creates a scalable, intelligent, and efficient AI model, making NPCs more
responsive and lifelike.
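As a sketch of how a utility score could be driven by actual game state rather than by the random draws used in the code below (the health and distance parameters here are assumptions, not part of the lab code):

def utility_retreat(health, max_health=100):
    # Retreating becomes more attractive as health drops (score in [0, 1]).
    return 1.0 - (health / max_health)

def utility_attack(distance_to_player, attack_range=1.5):
    # Attacking is only worthwhile when the player is within attack range.
    return 0.9 if distance_to_player <= attack_range else 0.1

scores = {"Retreat": utility_retreat(20), "Attack": utility_attack(3.0)}
print(max(scores, key=scores.get))  # Zombie at 20 health, player 3 units away -> Retreat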

CODE

import random

# Base Node Class


class Node:
def run(self):
"""Runs the node and returns status"""
raise NotImplementedError

# Control Flow Nodes


class Selector(Node):
def __init__(self, children):
self.children = children

def run(self):
for child in self.children:
if child.run(): # If any child succeeds, return success
return True
return False # If all fail, return failure

class Sequence(Node):
def __init__(self, children):
self.children = children

def run(self):
for child in self.children:
if not child.run(): # If any child fails, return failure
return False
return True # If all succeed, return success

# Utility-Based Decision Node


class UtilitySelector(Node):
def __init__(self, actions):
self.actions = actions # Dictionary of {action: utility function}

def run(self):
# Calculate utility for each action
utilities = {action: utility() for action, utility in self.actions.items()}
# Select the action with the highest utility
best_action = max(utilities, key=utilities.get)
print(f"[Decision] Selected action: {best_action. class . name } with utility
{utilities[best_action]:.2f}")
return best_action.run()

# Action (Leaf) Nodes


class CheckPlayerNearby(Node):
def run(self):
"""Returns True if player is nearby"""
player_nearby = random.choice([True, False])
print(f"[Check] Player Nearby: {player_nearby}")
return player_nearby

class ChasePlayer(Node):
def run(self):
print("[Action] Zombie is chasing the player...")
return True

class CheckAttackRange(Node):
def run(self):
"""Check if the player is within attack range"""
in_range = random.choice([True, False])
print(f"[Check] Player in Attack Range: {in_range}")
return in_range

class AttackPlayer(Node):
def run(self):
print("[Action] Zombie is attacking the player!")
return random.choice([True, False]) # Attack might fail

class CheckHealth(Node):
def __init__(self):
self.health = random.randint(10, 100) # Random initial health

def run(self):
"""Check if zombie has low health (below 30)"""
print(f"[Check] Zombie Health: {self.health}")
return self.health < 30

class Retreat(Node):
def run(self):
print("[Action] Zombie is retreating to recover...")
return True

class Wander(Node):
def run(self):
print("[Action] Zombie is wandering around...")
return True

# Utility Functions for Actions


def utility_chase():
"""Higher utility if player is nearby"""
return random.uniform(0.6, 1.0) if random.choice([True, False]) else random.uniform(0.1, 0.4)

def utility_attack():
"""Higher utility if the player is in attack range"""
return random.uniform(0.7, 1.0) if random.choice([True, False]) else random.uniform(0.2, 0.5)

def utility_retreat():
"""Higher utility if health is low"""
return random.uniform(0.8, 1.0) if random.randint(0, 100) < 30 else random.uniform(0.1, 0.3)

def utility_wander():
"""Default wandering action when no other priority exists"""
return random.uniform(0.2, 0.6)

# Build the Behavior Tree with Utility-Based Decision Making


def build_behavior_tree():
root = Selector([
Sequence([CheckPlayerNearby(), ChasePlayer(),
Selector([Sequence([CheckAttackRange(), AttackPlayer()]), Wander()])]),
UtilitySelector({
Retreat(): utility_retreat,
AttackPlayer(): utility_attack,
ChasePlayer(): utility_chase,
Wander(): utility_wander
})
])
return root
# Run the behavior tree
tree = build_behavior_tree()
for _ in range(5): # Simulate multiple decision cycles
print("\n=== New Decision Cycle ===")
tree.run()
EXPLANATION

This code integrates Behavior Trees (BTs) with Utility Theory to create an adaptive Zombie AI in
a survival game. The AI makes decisions dynamically based on logical conditions and utility scores,
ensuring more realistic and flexible behavior.

1. Core Components

1. Behavior Tree Structure


o The AI follows a Selector → Sequence → Action hierarchy.
o The zombie checks for the player, then chases, attacks, or wanders.
o If low on health, it may retreat instead of fighting.
2. Utility-Based Decision Making
o The UtilitySelector assigns a utility score to each possible action.
o The action with the highest score is selected.
o Utility values are randomized but biased toward logical choices:
§ Higher chance to attack if the player is in range.
§ Higher chance to retreat when health is low.
§ Higher chance to chase if the player is nearby.

2. Key Classes and Functions

1. Control Flow Nodes


o Selector: Tries child nodes until one succeeds.
o Sequence: Runs child nodes in order; fails if one fails.
o UtilitySelector: Calculates utility scores for actions and picks the best one.
2. Zombie Actions
o CheckPlayerNearby: Determines if the player is within range.
o ChasePlayer: Moves toward the player if detected.
o CheckAttackRange: Checks if the player is close enough to attack.
o AttackPlayer: Attempts an attack; success is random.
o CheckHealth: Checks if the zombie’s health is low.
o Retreat: Moves away if health is critically low.
o Wander: Moves randomly if no priority action is chosen.
3. Utility Functions
o utility_chase(): Higher utility if the player is nearby.
o utility_attack(): Higher utility if the player is within attack range.
o utility_retreat(): Higher utility when health is low.
o utility_wander(): Default action with medium utility.

3. Behavior Tree Execution

• The tree first checks if the player is nearby.


• If the player is detected, the zombie chases and attempts to attack.
• If no player is detected, the zombie uses utility-based selection:
o If low health, it retreats.
o If attack is possible, it attacks.
o If nothing is urgent, it wanders.
OUTPUT

CONCLUSION

Combining Utility Theory with Behavior Trees creates a dynamic and adaptive Zombie
AI. Unlike static decision-making, this approach allows real-time evaluation of factors like
health, distance, and attack success, making zombies more responsive and unpredictable.
It enhances realism, scalability, and gameplay immersion, ensuring a challenging and
engaging survival experience.
