AI Practical File

Practical # 1

Objective - Installation of and working with various AI tools such as Python / MATLAB

1. Setting Up the Environment

1. Install Python:

o Download and install Python from the official website. Ensure you check the box to add Python to your PATH during installation.

o Use the latest stable version (e.g., Python 3.10 or higher).

2. Install a Code Editor or IDE:

o Use tools like Visual Studio Code, PyCharm, or Jupyter Notebook for coding.

3. Install a Virtual Environment (Optional but Recommended):

o Create isolated environments to avoid conflicts between libraries.

o Use venv or conda:

python -m venv myenv
source myenv/bin/activate    # Linux/macOS
myenv\Scripts\activate       # Windows
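
If you prefer conda (Anaconda or Miniconda), a roughly equivalent setup, assuming conda is already installed, is:

conda create -n myenv python=3.10
conda activate myenv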

2. Installing AI Libraries

Common AI-related Python libraries include NumPy, pandas, Matplotlib, scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers.

You can install them all at once:

pip install numpy pandas matplotlib scikit-learn tensorflow torch transformers
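
As a quick sanity check that the installation worked, a short script along these lines (the file name check_install.py is only a suggestion) prints the installed versions:

# check_install.py - print the versions of the installed AI libraries
import numpy
import pandas
import sklearn
import matplotlib
import tensorflow as tf
import torch
import transformers

print("NumPy:", numpy.__version__)
print("pandas:", pandas.__version__)
print("scikit-learn:", sklearn.__version__)
print("Matplotlib:", matplotlib.__version__)
print("TensorFlow:", tf.__version__)
print("PyTorch:", torch.__version__)
print("Transformers:", transformers.__version__)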

3. Writing Your First AI Script

Example 1: A Simple Linear Regression Model

from sklearn.linear_model import LinearRegression
import numpy as np

# Data
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([1, 2, 3, 4, 5])

# Model
model = LinearRegression()
model.fit(X, y)

# Prediction
print("Prediction for 6:", model.predict([[6]]))

Example 2: A Basic Neural Network with TensorFlow

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Data: y = x^2, a non-linear relationship
X = [[0], [1], [2], [3], [4]]
y = [[0], [1], [4], [9], [16]]

# Model
model = Sequential([
    Dense(10, activation='relu', input_shape=(1,)),
    Dense(1)
])
model.compile(optimizer='adam', loss='mse')

# Training
model.fit(X, y, epochs=100, verbose=0)

# Prediction (an approximation, since the network and training run are small)
print("Prediction for 5:", model.predict([[5]]))

4. Learning and Debugging

1. Practice Small Projects:

o Build predictive models, text analysis tools, or simple neural networks.

2. Online Courses:

o Take courses on platforms like Coursera, Udemy, or edX.

3. Documentation:

o Use the official documentation of libraries like TensorFlow, PyTorch, or Scikit-Learn.

5. Managing Dependencies

To save and share your project’s environment, use:

pip freeze > requirements.txt

To recreate the environment:

pip install -r requirements.txt
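
The generated requirements.txt is a plain-text list of pinned package versions, roughly of the following form (the version numbers here are purely illustrative):

numpy==1.26.4
pandas==2.2.2
scikit-learn==1.4.2
matplotlib==3.8.4
tensorflow==2.16.1
torch==2.3.0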

Practical # 2

Objective - Program to solve a basic AI problem

Problem Description

We will predict house prices based on a single feature: house size (in square feet). This is a simple regression problem, and we'll use synthetic data for the demonstration.

# Import necessary libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# 1. Data Preparation
# Synthetic dataset: house size (sq ft) vs price ($1000s)
house_size = np.array([500, 750, 1000, 1250, 1500, 1750, 2000, 2250, 2500, 2750]).reshape(-1, 1)
house_price = np.array([50, 75, 100, 120, 150, 170, 200, 210, 250, 270])  # Prices in $1000s

# Split the dataset into training and testing sets (80% train, 20% test)
X_train, X_test, y_train, y_test = train_test_split(house_size, house_price, test_size=0.2, random_state=42)

# 2. Model Training
model = LinearRegression()
model.fit(X_train, y_train)

# 3. Predictions
y_pred = model.predict(X_test)

# 4. Evaluation
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse:.2f}")

# 5. Visualize Results
plt.scatter(house_size, house_price, color='blue', label='Actual Data')
plt.plot(house_size, model.predict(house_size), color='red', label='Model Prediction')
plt.xlabel("House Size (sq ft)")
plt.ylabel("House Price ($1000s)")
plt.title("House Price Prediction")
plt.legend()
plt.show()

# 6. Make a New Prediction
new_house_size = np.array([[1800]])  # Size of the new house in sq ft
predicted_price = model.predict(new_house_size)
print(f"Predicted price for an 1800 sq ft house: ${predicted_price[0]:.2f}k")

Output:

1. The Mean Squared Error gives a measure of how well the model predicts.

2. A graph shows the fit of the model.

3. A new prediction estimates the price of a house with a specified size.

Practical # 3

Objective - Implementation of BFS AI Searching Techniques

Problem: Pathfinding on a Graph

We’ll use a simple graph where nodes represent locations, and edges represent possible paths.

Graph Representation

We'll use an adjacency list for the graph.

# Sample graph as an adjacency list (edge weights are used by A* in Practical # 5)
graph = {
    'A': {'B': 1, 'C': 3},
    'B': {'A': 1, 'D': 1, 'E': 5},
    'C': {'A': 3, 'F': 2},
    'D': {'B': 1},
    'E': {'B': 5, 'F': 1},
    'F': {'C': 2, 'E': 1}
}

1. Breadth-First Search (BFS)

BFS explores all neighbors at the current depth before moving to nodes at the next level.

from collections import deque

def bfs(graph, start, goal):
    queue = deque([start])
    visited = set()
    path = {}

    while queue:
        current = queue.popleft()

        if current == goal:
            # Reconstruct the path
            final_path = []
            while current is not None:
                final_path.append(current)
                current = path.get(current)
            return list(reversed(final_path))

        visited.add(current)

        for neighbor in graph[current]:
            if neighbor not in visited and neighbor not in queue:
                queue.append(neighbor)
                path[neighbor] = current  # Keep track of the path

    return None  # If no path is found

# Test BFS
print("BFS Path:", bfs(graph, 'A', 'F'))

Practical # 4

Objective - Implementation of DFS AI Searching Techniques

1. Depth-First Search (DFS)

DFS explores as far as possible along each branch before backtracking.

def dfs(graph, start, goal, visited=None, path=None):
    if visited is None:
        visited = set()
    if path is None:
        path = []

    visited.add(start)
    path.append(start)

    if start == goal:
        return path

    for neighbor in graph[start]:
        if neighbor not in visited:
            result = dfs(graph, neighbor, goal, visited, path)
            if result:
                return result

    path.pop()  # Backtrack
    return None

# Test DFS
print("DFS Path:", dfs(graph, 'A', 'F'))

Practical # 5

Objective - Implementation of A* Search AI Searching Techniques

1. A* Search

A* Search uses heuristics to prioritize nodes, making it more efficient for pathfinding.

import heapq

def heuristic(node, goal):
    # Example heuristic (straight-line distance, etc.)
    return abs(ord(node) - ord(goal))

def a_star(graph, start, goal):
    priority_queue = []
    heapq.heappush(priority_queue, (0, start))  # (cost, node)
    g_costs = {start: 0}  # Cost from start to the node
    path = {}

    while priority_queue:
        _, current = heapq.heappop(priority_queue)

        if current == goal:
            # Reconstruct the path
            final_path = []
            while current is not None:
                final_path.append(current)
                current = path.get(current)
            return list(reversed(final_path))

        for neighbor, weight in graph[current].items():
            tentative_g_cost = g_costs[current] + weight
            if neighbor not in g_costs or tentative_g_cost < g_costs[neighbor]:
                g_costs[neighbor] = tentative_g_cost
                f_cost = tentative_g_cost + heuristic(neighbor, goal)
                heapq.heappush(priority_queue, (f_cost, neighbor))
                path[neighbor] = current  # Track the path

    return None  # If no path is found

# Test A* Search
print("A* Path:", a_star(graph, 'A', 'F'))

Sample Output

For the given graph:

• BFS Path: ['A', 'C', 'F']

• DFS Path: ['A', 'B', 'E', 'F']

• A* Path: ['A', 'C', 'F']

Practical # 6

Objective - Implementation of Minimax game playing techniques.

Here’s an implementation of basic game-playing techniques for a simple two-player game: Tic-Tac-Toe.

We'll demonstrate:

Minimax Algorithm

Game: Tic-Tac-Toe

Rules:

• Two players: Player 1 (X) and Player 2 (O).

• The goal is to align three symbols (X or O) in a row, column, or diagonal.

• Players take turns placing their symbols on a 3x3 grid.

1. Board Representation

# Tic-Tac-Toe board representation
def print_board(board):
    for row in board:
        print(" | ".join(row))
        print("-" * 5)

# Check if there are moves left
def is_moves_left(board):
    for row in board:
        if " " in row:
            return True
    return False

2. Minimax Algorithm

How It Works:

• The algorithm recursively simulates all possible moves.

• The maximizing player (X) tries to maximize the score.

• The minimizing player (O) tries to minimize the score.

# Evaluate the board for a winner
def evaluate(board):
    # Rows
    for row in board:
        if row[0] == row[1] == row[2] and row[0] != " ":
            return 10 if row[0] == "X" else -10
    # Columns
    for col in range(3):
        if board[0][col] == board[1][col] == board[2][col] and board[0][col] != " ":
            return 10 if board[0][col] == "X" else -10
    # Diagonals
    if board[0][0] == board[1][1] == board[2][2] and board[0][0] != " ":
        return 10 if board[0][0] == "X" else -10
    if board[0][2] == board[1][1] == board[2][0] and board[0][2] != " ":
        return 10 if board[0][2] == "X" else -10
    return 0

# Minimax function
def minimax(board, depth, is_maximizing):
    score = evaluate(board)

    if score == 10 or score == -10:
        return score
    if not is_moves_left(board):
        return 0

    if is_maximizing:
        best = -float("inf")
        for i in range(3):
            for j in range(3):
                if board[i][j] == " ":
                    board[i][j] = "X"
                    best = max(best, minimax(board, depth + 1, False))
                    board[i][j] = " "
        return best
    else:
        best = float("inf")
        for i in range(3):
            for j in range(3):
                if board[i][j] == " ":
                    board[i][j] = "O"
                    best = min(best, minimax(board, depth + 1, True))
                    board[i][j] = " "
        return best

3. Best Move Selection

# Find the best move
def find_best_move(board, is_maximizing):
    best_val = -float("inf") if is_maximizing else float("inf")
    best_move = (-1, -1)

    for i in range(3):
        for j in range(3):
            if board[i][j] == " ":
                board[i][j] = "X" if is_maximizing else "O"
                move_val = minimax(board, 0, not is_maximizing)
                board[i][j] = " "
                if (is_maximizing and move_val > best_val) or (not is_maximizing and move_val < best_val):
                    best_val = move_val
                    best_move = (i, j)

    return best_move

4. Main Function to Play

def play_tic_tac_toe():
    board = [[" " for _ in range(3)] for _ in range(3)]
    print("Welcome to Tic-Tac-Toe!")
    print_board(board)

    for turn in range(9):
        if turn % 2 == 0:  # Player X (AI)
            print("AI's turn:")
            row, col = find_best_move(board, True)
            board[row][col] = "X"
        else:  # Player O (human)
            print("Your turn (enter row and column from 0 to 2, e.g., 1 2):")
            row, col = map(int, input().split())
            if board[row][col] == " ":
                board[row][col] = "O"
            else:
                # Note: in this simple version an invalid move forfeits the turn
                print("Invalid move, try again!")
                continue

        print_board(board)

        if evaluate(board) == 10:
            print("AI wins!")
            return
        elif evaluate(board) == -10:
            print("You win!")
            return

    print("It's a draw!")

# Start the game
play_tic_tac_toe()

Practical # 7

Objective - Implementation of various knowledge representation techniques.

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD

# A two-node Bayesian network: Rain -> Traffic
model = BayesianNetwork([('Rain', 'Traffic')])

# P(Rain): probabilities of Rain's two states (0 and 1)
cpd_rain = TabularCPD('Rain', 2, [[0.7], [0.3]])

# P(Traffic | Rain): columns correspond to Rain = 0 and Rain = 1
cpd_traffic = TabularCPD('Traffic', 2, [[0.9, 0.4], [0.1, 0.6]],
                         evidence=['Rain'], evidence_card=[2])

model.add_cpds(cpd_rain, cpd_traffic)
print(model.check_model())  # True if the CPDs are consistent with the network
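
Once check_model() passes, the network can be queried. A minimal sketch using pgmpy's VariableElimination class (assuming the model defined above, and treating Rain = 1 as the "rain" state):

from pgmpy.inference import VariableElimination

# Exact inference over the network defined above
infer = VariableElimination(model)

# P(Traffic | Rain = 1): distribution of Traffic given that Rain is in state 1
result = infer.query(variables=['Traffic'], evidence={'Rain': 1})
print(result)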
