AI File-1
The Water Jug Problem is a classic puzzle in which you are given two jugs with different
capacities (typically 5 liters and 3 liters) and must measure out a specific amount of
water (commonly 4 liters) using only these jugs, with no measuring markers. You can fill,
empty, or transfer water between the jugs, and the challenge lies in finding a sequence
of actions that results in exactly the target volume in one of the jugs.
Code:
def transfer_from_a_to_b(x, y):
    # Pour from A into B until B is full (capacity 3) or A is empty
    amount = min(x, 3 - y)
    return x - amount, y + amount

def transfer_from_b_to_a(x, y):
    # Pour from B into A until A is full (capacity 5) or B is empty
    amount = min(y, 5 - x)
    return x + amount, y - amount

def display_jugs(x, y):
    print(f"Jug A: {x} liters | Jug B: {y} liters")

def run_jug_simulation():
    jug_a = int(input("Initial amount in Jug A (0-5): "))
    jug_b = int(input("Initial amount in Jug B (0-3): "))
    while jug_a != 4 and jug_b != 4:
        print("1. Empty A  2. Empty B  3. Pour A->B  4. Pour B->A  5. Fill A  6. Fill B")
        choice = int(input("Choose an action (1-6): "))
        if choice == 1:
            jug_a = 0
        elif choice == 2:
            jug_b = 0
        elif choice == 3:
            jug_a, jug_b = transfer_from_a_to_b(jug_a, jug_b)
        elif choice == 4:
            jug_a, jug_b = transfer_from_b_to_a(jug_a, jug_b)
        elif choice == 5:
            jug_a = 5
        elif choice == 6:
            jug_b = 3
        display_jugs(jug_a, jug_b)
    print("Target of 4 liters reached!")

def start_program():
    while True:
        run_jug_simulation()
        if input("\nTry again? (y/n): ").lower() != 'y':
            break

if __name__ == "__main__":
    start_program()
05519051623 AIML B1 Udbhav Kashyap
Learning:
Conclusion:
This program helped us understand how search algorithms can be used to tackle
real-world problems. By exploring all possible states of the two-jug water measuring
problem and applying a straightforward decision-making approach, we successfully
reached the target outcome. This hands-on experience highlights the significance of
state-space search in AI, demonstrating that even basic loops and conditionals can be
combined to build an effective problem-solving algorithm.
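The interactive program above relies on the user to choose each move; the exhaustive state-space exploration described in the conclusion can also be automated. A compact breadth-first-search sketch (an illustrative addition, not part of the original program) over (jug_a, jug_b) states:

```python
from collections import deque

def solve_water_jug(cap_a=5, cap_b=3, target=4):
    # BFS over jug states; `parent` doubles as the visited set
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            # Reconstruct the sequence of states back to the start
            path, state = [], (a, b)
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        pour_ab = min(a, cap_b - b)  # amount movable A -> B
        pour_ba = min(b, cap_a - a)  # amount movable B -> A
        for nxt in [(cap_a, b), (a, cap_b), (0, b), (a, 0),
                    (a - pour_ab, b + pour_ab), (a + pour_ba, b - pour_ba)]:
            if nxt not in parent:
                parent[nxt] = (a, b)
                queue.append(nxt)
    return None

print(solve_water_jug())
```

Because BFS expands states in order of distance from the start, the first state containing 4 liters is reached via a shortest move sequence.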
The 8-puzzle problem is a classic sliding puzzle consisting of a 3×3 grid with 8
numbered tiles and one empty space (represented by 0). The objective is to reach a
predefined goal state, typically where the tiles are ordered from 1 to 8 with the blank at
the bottom right, by sliding adjacent tiles into the empty space. Only one tile can be
moved at a time, and legal moves are restricted to up, down, left, or right of the blank.
The challenge lies in finding the shortest or most efficient sequence of moves to
transform a given initial configuration into the goal configuration.
Code:
def up(target, initial):
    row, col = target
    new_row = row - 1
    if new_row >= 0:
        initial[row][col], initial[new_row][col] = initial[new_row][col], initial[row][col]
        target = (new_row, col)
    return target, initial

def down(target, initial):
    row, col = target
    new_row = row + 1
    if new_row <= 2:
        initial[row][col], initial[new_row][col] = initial[new_row][col], initial[row][col]
        target = (new_row, col)
    return target, initial

def left(target, initial):
    row, col = target
    new_col = col - 1
    if new_col >= 0:
        initial[row][col], initial[row][new_col] = initial[row][new_col], initial[row][col]
        target = (row, new_col)
    return target, initial

def right(target, initial):
    row, col = target
    new_col = col + 1
    if new_col <= 2:
        initial[row][col], initial[row][new_col] = initial[row][new_col], initial[row][col]
        target = (row, new_col)
    return target, initial

def print_matrix(initial):
    for i in range(3):
        for j in range(3):
            print(initial[i][j], end=" ")
        print()

def target_find(arr):
    # Locate the blank tile (0)
    for i in range(3):
        for j in range(3):
            if arr[i][j] == 0:
                return (i, j)
    return None

def main():
    initial = []
    goal = []
    print("Input initial state: ")
    for i in range(3):
        initial.append(list(map(int, input(f"Enter row {i + 1}: ").split())))
    print("Input goal state: ")
    for i in range(3):
        goal.append(list(map(int, input(f"Enter row {i + 1}: ").split())))
    init_pos = target_find(initial)
    while True:
        choice = int(input("Move blank (1=up, 2=down, 3=left, 4=right): "))
        if choice == 1:
            init_pos, initial = up(init_pos, initial)
        elif choice == 2:
            init_pos, initial = down(init_pos, initial)
        elif choice == 3:
            init_pos, initial = left(init_pos, initial)
        elif choice == 4:
            init_pos, initial = right(init_pos, initial)
        print_matrix(initial)
        if initial == goal:
            print("You won!")
            break

if __name__ == "__main__":
    main()
Learning:
The 8-puzzle serves as an excellent exercise to sharpen logical reasoning and
problem-solving skills. It involves determining the right sequence of steps to reach a
desired goal state, encouraging a methodical approach to tackling challenges. The
puzzle introduces the concept of state-space search, where all possible configurations
are considered to find an optimal path. This fosters algorithmic thinking and helps
demonstrate how search strategies like BFS and A* can be applied in real-world
scientific or engineering problems to systematically explore solutions. Moreover, it
emphasizes not just finding any solution, but discovering the most efficient one —
highlighting the importance of optimization in computational thinking.
Conclusion:
The 8-puzzle problem is a simple yet powerful way to grasp the basics of
problem-solving and algorithm design. It enhances understanding of state-space
exploration, search strategies, and optimization methods. Though the puzzle itself is
basic, the principles and techniques used to solve it are applicable to complex
challenges in computer science and artificial intelligence. Ultimately, working through
the 8-puzzle teaches valuable lessons in logical thinking, persistence, and finding the
most efficient path to a solution.
Hill Climbing with Heuristics is a local search technique widely used in artificial
intelligence to solve optimization problems. The algorithm begins with an initial state
and continually moves to neighboring states that show improvement, guided by a
heuristic—a practical estimate of how close a given state is to the desired goal.
In the case of the 8-puzzle problem, typical heuristic functions include:
● Misplaced Tiles: The number of tiles that are not in their correct positions
● Manhattan Distance: The total of the horizontal and vertical distances each tile
must move to reach its goal position.
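Both heuristics can be written as short functions (an added illustration; `state` and `goal` are assumed to be 3×3 nested lists with 0 marking the blank):

```python
def misplaced_tiles(state, goal):
    # Count tiles (excluding the blank) that are not in their goal position
    return sum(1 for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != goal[i][j])

def manhattan_distance(state, goal):
    # Map each tile value to its goal coordinates
    pos = {goal[i][j]: (i, j) for i in range(3) for j in range(3)}
    total = 0
    for i in range(3):
        for j in range(3):
            v = state[i][j]
            if v != 0:
                gi, gj = pos[v]
                total += abs(i - gi) + abs(j - gj)
    return total
```

Manhattan distance dominates the misplaced-tiles count (it is never smaller), which generally makes it the more informative of the two.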
The algorithm examines all neighboring states, evaluates them using the heuristic, and
selects the one with the most promising score (i.e., the lowest cost or best fit). If none of
the neighboring states offer an improvement, the algorithm stops. This limitation means
it can get stuck at a local optimum—a solution that appears best in its immediate
neighborhood but is not the overall best.
Code:
import copy

def up(target, initial):
    row, col = target
    new_row = row - 1
    if new_row >= 0:
        initial = copy.deepcopy(initial)
        initial[row][col], initial[new_row][col] = initial[new_row][col], initial[row][col]
        target = (new_row, col)
    return target, initial

def down(target, initial):
    row, col = target
    new_row = row + 1
    if new_row <= 2:
        initial = copy.deepcopy(initial)
        initial[row][col], initial[new_row][col] = initial[new_row][col], initial[row][col]
        target = (new_row, col)
    return target, initial

def left(target, initial):
    row, col = target
    new_col = col - 1
    if new_col >= 0:
        initial = copy.deepcopy(initial)
        initial[row][col], initial[row][new_col] = initial[row][new_col], initial[row][col]
        target = (row, new_col)
    return target, initial

def right(target, initial):
    row, col = target
    new_col = col + 1
    if new_col <= 2:
        initial = copy.deepcopy(initial)
        initial[row][col], initial[row][new_col] = initial[row][new_col], initial[row][col]
        target = (row, new_col)
    return target, initial

def print_matrix(initial):
    for row in initial:
        print(" ".join(map(str, row)))
    print()

def target_find(arr):
    for i in range(3):
        for j in range(3):
            if arr[i][j] == 0:
                return (i, j)
    return None

def misplaced_tiles(state, goal):
    # Heuristic: tiles (excluding the blank) not in their goal position
    return sum(1 for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != goal[i][j])

sol = None

def hill_climbing(initial, goal, target, best_h):
    global sol
    h = misplaced_tiles(initial, goal)
    if h == 0:
        sol = initial
        return
    if h >= best_h:
        return  # no improvement: stuck at a local optimum
    moves = [
        up(target, initial),
        right(target, initial),
        left(target, initial),
        down(target, initial)
    ]
    # Greedily pick the neighbour with the lowest heuristic value
    best = min(moves, key=lambda m: misplaced_tiles(m[1], goal))
    hill_climbing(best[1], goal, best[0], h)

def main():
    initial = []
    goal = []
    print("Input initial state:")
    for i in range(3):
        initial.append(list(map(int, input(f"Enter row {i + 1}: ").split())))
    print("Input goal state:")
    for i in range(3):
        goal.append(list(map(int, input(f"Enter row {i + 1}: ").split())))
    init_pos = target_find(initial)
    hill_climbing(initial, goal, init_pos, float("inf"))
    print("\nSolution State:")
    if sol:
        print_matrix(sol)
    else:
        print("No solution found.")

if __name__ == "__main__":
    main()
Learning:
Working on this program helped us understand how using heuristics can significantly
improve problem-solving efficiency, especially in scenarios with constraints. Unlike
approaches that explore all possible options, Hill Climbing uses a focused strategy,
choosing paths that show immediate improvement. Through this task, we saw how
quickly it can lead to a solution, but also how it can fail by getting stuck in
less-than-ideal states. To address these drawbacks, we explored variations like
simulated annealing and random restarts—techniques designed to push the search out
of local traps and toward better outcomes.
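The random-restart idea can be sketched in a few lines (an illustrative addition; `solve` and `make_random_state` are hypothetical stand-ins for a local search routine such as the hill-climbing function above and a random state generator):

```python
def random_restart(solve, make_random_state, attempts=20):
    # Run the local search from several random starting states and
    # return the first successful result; None if every run got stuck.
    for _ in range(attempts):
        result = solve(make_random_state())
        if result is not None:
            return result
    return None
```

Each restart begins from a fresh configuration, so a search trapped in one basin of attraction gets another chance elsewhere in the state space.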
Conclusion:
Hill Climbing offered a smart, step-by-step way to tackle the 8-puzzle by rearranging
tiles based on heuristic feedback. This project sharpened our grasp of how local search
methods function and how they apply to real-world optimization challenges. However, its
tendency to settle for suboptimal answers highlights the need for more robust
strategies—such as A* or simulated annealing—when dealing with more complex or
large-scale problems.
Breadth-First Search (BFS) is a graph traversal method that explores a graph level by
level: starting from a source node, it visits every immediate neighbour before moving
further out, typically using a queue to track the frontier. Because nodes are expanded
in order of distance from the source, BFS finds shortest paths in unweighted graphs.
Code:
from queue import Queue

def bfs(graph, start_index, goal_index):
    open_list = Queue()
    visited = [0] * len(graph)
    open_list.put(start_index)
    visited[start_index] = 1
    order = []
    while not open_list.empty():
        node = open_list.get()
        order.append(node)
        if node == goal_index:
            break
        for i in range(len(graph)):
            if visited[i] == 0 and graph[node][i] == 1:
                open_list.put(i)
                visited[i] = 1
    print("BFS order:", order)

graph = [[0,1,1,0,0,0],
         [1,0,0,1,0,0],
         [1,0,0,0,1,0],
         [0,1,0,0,0,1],
         [0,0,1,0,0,1],
         [0,0,0,1,1,0]]

bfs(graph, 0, 5)
Depth-First Search (DFS) is a method for navigating through trees or graphs by diving
deep into each branch before moving to the next. It begins at a starting point (usually
the root or source node) and proceeds by visiting a node and then recursively (or via a
stack) exploring its child nodes one by one. This process continues until it reaches a
node with no unvisited neighbors, at which point it backtracks to explore alternative
paths from previous nodes. DFS is particularly effective for applications like detecting
cycles, performing topological sorts, or exploring all possible paths.
Code:
itoc = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F'}
ctoi = {j: i for i, j in itoc.items()}

def dfs(graph, visited, goal, start, path):
    visited[ctoi[start]] = 1
    path.append(start)
    if goal == start:
        print("Path:", path)
    else:
        for i in range(len(graph)):
            if graph[ctoi[start]][i] == 1 and not visited[i]:
                dfs(graph, visited, goal, itoc[i], path)
    # Backtrack so alternative paths can reuse this node
    path.pop()
    visited[ctoi[start]] = 0

graph = [[0,1,1,0,0,0],
         [1,0,0,1,0,0],
         [1,0,0,0,1,0],
         [0,1,0,0,0,1],
         [0,0,1,0,0,1],
         [0,0,0,1,1,0]]

visited = [0] * len(graph)
dfs(graph, visited, 'F', 'A', [])
Learning:
This program helped us gain hands-on experience with two fundamental graph traversal
techniques—BFS and DFS. We observed how BFS proceeds level by level, making it
ideal for identifying the shortest path in graphs without weights. In contrast, DFS delves
deep into each branch before moving on, which is advantageous in tasks like
backtracking and topological ordering.
Conclusion:
The implementation of both BFS and DFS successfully demonstrated different traversal
strategies. BFS allowed for systematic exploration across levels, while DFS followed
individual paths to their depth before reversing course. This practical exercise
strengthened our grasp of how these core algorithms function and their applicability in
various graph-based scenarios.
6. A* Algorithm:
Code:
from queue import PriorityQueue

def a_star(graph, heuristics, start, goal):
    open_list = PriorityQueue()
    open_list.put((heuristics[start], 0, start))
    came_from = {}
    came_from[start] = None
    g_score = {start: 0}
    closed_set = set()
    while not open_list.empty():
        f, g, current = open_list.get()
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        if current in closed_set:
            continue
        closed_set.add(current)
        for neighbor, cost in graph[current]:
            new_g = g + cost
            if neighbor not in g_score or new_g < g_score[neighbor]:
                g_score[neighbor] = new_g
                came_from[neighbor] = current
                new_f = new_g + heuristics.get(neighbor, 0)
                open_list.put((new_f, new_g, neighbor))
    return None

graph = {
    'X': [('Y', 2), ('Z', 4)],
    'Y': [('W', 3), ('V', 5)],
    'Z': [('U', 1), ('T', 7)],
    'W': [('R', 2)],
    'V': [('R', 6)],
    'U': [('R', 3)],
    'T': [('R', 1)],
    'R': []
}
# Note: A* guarantees the optimal path only when the heuristic never
# overestimates the true remaining cost.
heuristics = {
    'X': 8, 'Y': 6, 'Z': 5, 'W': 4, 'V': 3, 'U': 2, 'T': 1, 'R': 0
}

print("A* path:", a_star(graph, heuristics, 'X', 'R'))
The AO* (And-Or Star) algorithm is a best-first search technique designed to address
problems represented by AND-OR graphs. These graphs are commonly seen in
decision-making and planning, where solutions require managing multiple subgoals
(AND nodes) or choosing between alternatives (OR nodes). Unlike traditional search
algorithms, which explore a single path at a time, AO* is able to handle complex
problems by considering various options simultaneously. It uses a heuristic to estimate
the cost of reaching the goal and iteratively refines paths, discarding less promising
ones. This makes AO* particularly well-suited for tasks like automated planning,
diagnostic systems, and problem decomposition in AI, providing both optimality and
efficiency.
Code:
class AOStar:
    def __init__(self, graph, heuristic):
        self.graph = graph
        self.heuristic = heuristic
        self.solution_graph = {}

    def get_minimum_cost_child_nodes(self, node):
        # Score each AND/OR child group by the sum of its members' heuristics
        cost_list = []
        for child in self.graph.get(node, []):
            cost = 0
            node_list = []
            for c in child:
                cost += self.heuristic[c]
                node_list.append(c)
            cost_list.append((cost, node_list))
        if not cost_list:
            return 0, []
        return min(cost_list)

    def ao_star(self, node, backtracking=False):
        # Expand the cheapest child group, then recurse into its members
        cost, child_nodes = self.get_minimum_cost_child_nodes(node)
        self.heuristic[node] = cost
        self.solution_graph[node] = child_nodes
        for child in child_nodes:
            self.ao_star(child, backtracking=True)
        if not backtracking:
            self.print_solution()

    def print_solution(self):
        print("\nOptimal Solution Path:")
        print(self.solution_graph)
graph = {
'A': [['B', 'C'], ['D']],
'B': [['E'], ['F']],
'C': [['G'], ['H']],
'D': [['I']],
'E': [],
'F': [],
'G': [],
'H': [],
'I': []
}
heuristic = {
'A': 10,
'B': 6,
'C': 4,
'D': 8,
'E': 2,
'F': 4,
'G': 3,
'H': 2,
'I': 1
}
ao = AOStar(graph, heuristic)
ao.ao_star('A')
Learning:
The AO* algorithm demonstrates how to approach problems using AND-OR graphs,
where solutions require either satisfying multiple conditions (AND) or choosing among
alternatives (OR). Unlike traditional pathfinding methods, AO* operates on goal trees,
navigating them efficiently with heuristics to direct the search toward the most promising
solution, thereby avoiding unnecessary exploration. By recursively assessing
subproblems and expanding only the most pertinent paths, AO* emphasizes optimal
decision-making within structured problem domains. It highlights the significance of
well-designed heuristics, optimal substructure, and selective pruning, providing valuable
insights for creating goal-oriented, cost-effective, and scalable AI systems.
Conclusion:
In conclusion, the AO* algorithm proves to be an efficient and intelligent method for
solving problems in AND-OR graphs. Its ability to manage both AND and OR nodes
makes it versatile and effective across a wide range of applications, from gaming and
robotics to natural language processing.
The Minimum Color Graph Problem, also referred to as the Graph Coloring Problem, is
a well-known optimization challenge in computer science and combinatorics. The goal is
to assign the smallest number of colors to the vertices of a graph in such a way that no
two adjacent vertices share the same color. This problem is relevant in fields like
scheduling, register allocation, and frequency assignment. As an NP-hard problem,
there is no known efficient algorithm that can solve all cases in a timely manner.
Solutions typically rely on methods like backtracking, greedy algorithms, or heuristics to
achieve a valid coloring using the fewest possible colors.
def graph_coloring(adj_matrix):
    n = len(adj_matrix)
    colors = [-1] * n
    colors[0] = 0
    # Greedy: give each vertex the smallest colour unused by its neighbours
    for v in range(1, n):
        used = {colors[u] for u in range(n) if adj_matrix[v][u] == 1 and colors[u] != -1}
        colors[v] = next(c for c in range(n) if c not in used)
    return colors
adj_matrix = [
[0, 1, 1, 1],
[1, 0, 1, 0],
[1, 1, 0, 1],
[1, 0, 1, 0]
]
colors = graph_coloring(adj_matrix)
print("Vertex → Color")
for vertex, color in enumerate(colors):
print(f" {vertex + 1} → {color}")
Learning:
Graph coloring is a core concept in computer science and graph theory, widely applied
in areas like scheduling, resource allocation, and network design. To effectively learn
graph coloring, one must first grasp its foundational definition—assigning colors to
graph vertices so that no adjacent vertices share the same color. This requires a solid
understanding of graph structures, algorithmic strategies, and logical reasoning.
Mastery comes through consistent practice, hands-on implementation, and exploring
various algorithms, especially approximation methods, which are essential for tackling
large-scale, NP-complete instances. Leveraging educational materials such as
textbooks, interactive courses, and coding platforms further enhances comprehension
and practical skills.
Conclusion:
NumPy is the foundational Python library for numerical and scientific computing, built
around fast multi-dimensional arrays and vectorized mathematical operations.
Code:
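A minimal sketch of the features discussed in this section (multi-dimensional arrays, vectorized arithmetic, and broadcasting; the example values are invented for illustration):

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])   # 2-D array (2 rows, 3 columns)
print(a.shape)                          # (2, 3)
print(a * 2)                            # vectorized: every element doubled
row_means = a.mean(axis=1)              # per-row mean: [2. 5.]
centered = a - row_means[:, None]       # broadcasting a (2, 1) column against (2, 3)
print(centered)
```

The last line subtracts each row's mean from that row without any explicit loop, which is the kind of performance-friendly code NumPy encourages.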
Conclusion:
NumPy stands out as a fundamental tool in numerical and scientific computing with
Python. Its ability to handle complex data structures efficiently and perform high-speed
mathematical operations makes it indispensable for data analysts, scientists, and
engineers.
Learning:
Through this overview, we understand that NumPy not only boosts performance in
numerical tasks but also simplifies coding with powerful features like broadcasting and
multi-dimensional arrays. Mastering NumPy is a key step toward effective data analysis
and scientific computing in Python.
Pandas is a powerful Python library for data manipulation and analysis. It offers:
1. Data Structures: The two main structures are Series (1D) and DataFrame (2D),
which allow you to work with labelled data like Excel tables or SQL databases.
2. Data Handling: Pandas simplifies tasks like importing/exporting data, cleaning
missing values, filtering rows/columns, and transforming datasets.
3. Aggregation and Grouping: It allows you to easily group data by columns and apply
functions like sum, mean, or count to analyse large datasets.
Code:
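A minimal sketch of the three features listed above (the `city`/`sales` columns are invented for illustration):

```python
import pandas as pd

# 1. DataFrame construction from labelled columns
df = pd.DataFrame({
    "city": ["Delhi", "Delhi", "Mumbai", "Mumbai"],
    "sales": [100, None, 150, 250],
})

# 2. Data handling: clean a missing value
df["sales"] = df["sales"].fillna(0)

# 3. Aggregation and grouping: total sales per city
totals = df.groupby("city")["sales"].sum()
print(totals)
```

The same pattern (load, clean, group, aggregate) scales from this toy table to large real-world datasets.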
Conclusion:
Pandas plays a crucial role in data science by making data manipulation intuitive and
efficient. Its ability to handle structured data and perform complex operations with
minimal code makes it indispensable for analysts and developers.
Learning:
From this, we learn that Pandas greatly simplifies the data preparation process.
Mastering its functions can significantly improve productivity when working with large or
messy datasets.
This project focuses on automating the prediction of red wine quality using machine
learning techniques. Traditional quality assessment methods rely on expert tasters,
which are subjective and time-consuming. We use a dataset of 1,599 red wine samples
with 11 physicochemical features to develop a three-class classification model: low,
medium, and high quality. After preprocessing the data, we implemented and compared
five classification algorithms: Logistic Regression, k-Nearest Neighbors, Decision Tree,
Random Forest, and Support Vector Machine. Our analysis showed that Random
Forest achieved the highest accuracy (>85%) and revealed the most influential features
affecting wine quality. This project demonstrates how machine learning can assist
winemakers by offering a fast, consistent, and scalable method for evaluating wine
quality.
Key Features:
● Preprocessing of real-world wine data
● Multi-class classification (Low/Medium/High)
● Model comparison using accuracy metrics
● Feature importance analysis
● Visualization of model performance and data patterns
Outcome:
Random Forest outperformed other models and highlighted key chemical properties that
influence wine quality, offering a practical tool for winemakers.
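A minimal sketch of the pipeline described above (an illustrative addition: the `quality` column name and the three-class cut points are assumptions based on the standard UCI red-wine dataset, and the function is shown taking an already-loaded DataFrame):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_wine_model(df):
    # Bin the 0-10 quality score into three classes; the cut points
    # (<=4 low, 5-6 medium, >=7 high) are an assumed convention.
    df = df.copy()
    df["label"] = pd.cut(df["quality"], bins=[0, 4, 6, 10],
                         labels=["low", "medium", "high"])
    X = df.drop(columns=["quality", "label"])
    y = df["label"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    # Which physicochemical features drove the prediction
    importances = pd.Series(model.feature_importances_, index=X.columns)
    return model, acc, importances.sort_values(ascending=False)
```

With the real dataset one would call `train_wine_model(pd.read_csv(...))` on the red-wine CSV; the returned importances give the feature ranking mentioned in the outcome.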
Learnings:
Conclusion:
Machine learning can effectively predict wine quality using physicochemical data.
Among the tested models, Random Forest achieved the best performance (>85%
accuracy), proving its reliability for quality assessment. This approach provides a
scalable, objective alternative to traditional sensory evaluation.