CS3491 AIML LAB
LIST OF EXPERIMENTS

1. Implementation of Uninformed search algorithms (BFS, DFS)
2. Implementation of Informed search algorithms (A*, memory-bounded A*)
3. Implement naive Bayes models
4. Implement Bayesian Networks
5. Build Regression models
6. Build decision trees and random forests
7. Build SVM models
8. Implement ensembling techniques
9. Implement clustering algorithms
10. Implement EM for Bayesian networks
11. Build simple NN models
12. Build deep learning NN models

Ex. No. 1.A        UNINFORMED SEARCH ALGORITHM - BFS
Date:

Aim:
To write a Python program to implement Breadth First Search (BFS).

Algorithm:
Step 1. Start
Step 2. Put any one of the graph's vertices at the back of the queue.
Step 3. Take the front item of the queue and add it to the visited list.
Step 4. Create a list of that vertex's adjacent nodes. Add those which are not in the visited list to the rear of the queue.
Step 5. Continue steps 3 and 4 till the queue is empty.
Step 6. Stop

Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = []  # List for visited nodes.
queue = []    # Initialize a queue

def bfs(visited, graph, node):  # function for BFS
    visited.append(node)
    queue.append(node)
    while queue:  # Creating loop to visit each node
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')  # function calling

Result:
Thus the Python program to implement Breadth First Search (BFS) was developed successfully.

Ex. No. 1.B        UNINFORMED SEARCH ALGORITHM - DFS
Date:

Aim:
To write a Python program to implement Depth First Search (DFS).

Algorithm:
Step 1. Start
Step 2. Put any one of the graph's vertices on top of the stack.
Step 3. Take the top item of the stack and add it to the visited list.
Step 4. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the top of the stack.
Step 5. Repeat steps 3 and 4 until the stack is empty.
Step 6. Stop

Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = set()  # Set to keep track of visited nodes of graph.

def dfs(visited, graph, node):  # function for dfs
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')

Result:
Thus the Python program to implement Depth First Search (DFS) was developed successfully.
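Note that the algorithm above is stated in terms of an explicit stack, while the program implements the traversal recursively (the call stack plays the role of the stack). A minimal sketch of the stack-based variant on the same graph, assuming a reversed() push order so the visiting order matches the recursive version:

def dfs_iterative(graph, start):
    visited = set()
    stack = [start]         # Step 2: put the starting vertex on the stack
    while stack:            # Step 5: repeat until the stack is empty
        node = stack.pop()  # Step 3: take the top item
        if node not in visited:
            print(node)
            visited.add(node)
            # Step 4: push unvisited neighbours onto the stack
            for neighbour in reversed(graph[node]):
                if neighbour not in visited:
                    stack.append(neighbour)

dfs_iterative(graph, '5')  # visits 5 3 2 4 8 7, like the recursive version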
Ex. No. 2.A        INFORMED SEARCH ALGORITHM - A* SEARCH
Date:

Aim:
To write a Python program to implement the A* search algorithm.

Algorithm:
Step 1: Create a priority queue and push the starting node onto the queue.
Step 2: Create a set to store the visited nodes.
Step 3: Repeat the following steps until the queue is empty:
    3.1: Pop the node with the lowest cost + heuristic from the queue.
    3.2: If the current node is the goal, return the path to the goal.
    3.3: If the current node has already been visited, skip it.
    3.4: Mark the current node as visited.
    3.5: Expand the current node and add its neighbors to the queue.
Step 4: If the queue is empty and the goal has not been found, return None (no path found).
Step 5: Stop

Program:
import heapq

class Node:
    def __init__(self, state, parent, cost, heuristic):
        self.state = state
        self.parent = parent
        self.cost = cost
        self.heuristic = heuristic

    def __lt__(self, other):
        return (self.cost + self.heuristic) < (other.cost + other.heuristic)

def astar(start, goal, graph):
    heap = []
    heapq.heappush(heap, (0, Node(start, None, 0, 0)))
    visited = set()
    while heap:
        (cost, current) = heapq.heappop(heap)
        if current.state == goal:
            path = []
            while current is not None:
                path.append(current.state)
                current = current.parent
            # Return reversed path
            return path[::-1]
        if current.state in visited:
            continue
        visited.add(current.state)
        for state, cost in graph[current.state].items():
            if state not in visited:
                heuristic = 0  # replace with your heuristic function
                heapq.heappush(heap, (current.cost + cost + heuristic,
                                      Node(state, current, current.cost + cost, heuristic)))
    return None  # No path found

# Example weighted graph: graph[u][v] is the cost of the edge u -> v
graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'B': 5, 'C': 1}
}
start = 'A'
goal = 'D'
result = astar(start, goal, graph)
print(result)

Result:
Thus the Python program for A* Search was developed and the output was verified successfully.

Ex. No. 2.B        INFORMED SEARCH ALGORITHM - MEMORY-BOUNDED A*
Date:

Aim:
To write a Python program to implement the memory-bounded A* search algorithm.

Algorithm:
Step 1: Create a priority queue and push the starting node onto the queue.
Step 2: Create a set to store the visited nodes.
Step 3: Set a counter to keep track of the number of nodes expanded.
Step 4: Repeat the following steps until the queue is empty or the node counter exceeds max_nodes:
    4.1: Pop the node with the lowest cost + heuristic from the queue.
    4.2: If the current node is the goal, return the path to the goal.
    4.3: If the current node has already been visited, skip it.
    4.4: Mark the current node as visited.
    4.5: Increment the node counter.
    4.6: Expand the current node and add its neighbors to the queue.
Step 5: If the queue is empty and the goal has not been found, return None (no path found).
Step 6: Stop

Program:
import heapq

class Node:
    def __init__(self, state, parent, cost, heuristic):
        self.state = state
        self.parent = parent
        self.cost = cost
        self.heuristic = heuristic

    def __lt__(self, other):
        return (self.cost + self.heuristic) < (other.cost + other.heuristic)

def astar(start, goal, graph, max_nodes):
    heap = []
    heapq.heappush(heap, (0, Node(start, None, 0, 0)))
    visited = set()
    node_counter = 0
    while heap and node_counter < max_nodes:
        (cost, current) = heapq.heappop(heap)
        if current.state == goal:
            path = []
            while current is not None:
                path.append(current.state)
                current = current.parent
            return path[::-1]
        if current.state in visited:
            continue
        visited.add(current.state)
        node_counter += 1  # count each expanded node against the memory bound
        for state, cost in graph[current.state].items():
            if state not in visited:
                heuristic = 0  # replace with your heuristic function
                heapq.heappush(heap, (current.cost + cost + heuristic,
                                      Node(state, current, current.cost + cost, heuristic)))
    return None  # No path found, or node limit reached

Result:
Thus the Python program for memory-bounded A* Search was developed and the output was verified successfully.
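Both programs above stub the heuristic out as 0, which makes A* behave like uniform-cost (Dijkstra) search. A minimal sketch of supplying a real heuristic for the example graph of Ex. No. 2.A, assuming a hand-made table h of estimated remaining costs to the goal 'D' (the values below are illustrative, chosen to be admissible, i.e. to never overestimate the true remaining cost):

h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}  # assumed estimates of remaining cost to 'D'

# Inside the expansion loop of astar(), the stub becomes a table lookup:
for state, cost in graph[current.state].items():
    if state not in visited:
        heuristic = h[state]  # consult the heuristic table instead of using 0
        heapq.heappush(heap, (current.cost + cost + heuristic,
                              Node(state, current, current.cost + cost, heuristic)))

With this change, astar('A', 'D', graph, max_nodes=10) still returns ['A', 'B', 'C', 'D'], but on larger graphs the heuristic lets the search reach the goal while expanding fewer of the nodes allowed by the memory budget.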
IMPLEMENT EM FOR BAYESIAN NETWORKS

Program:
        # Check whether the parameter estimates have converged
        if iteration > 0 and np.allclose(prev_probabilities, probabilities):
            break
        prev_probabilities = np.copy(probabilities)
    return probabilities

# Run the EM algorithm
probabilities = em_algorithm(nodes, parents, probabilities, data)

# Print the final parameter estimates
for node in nodes:
    print(node, probabilities[node])

Result:
Thus the Python program to Implement EM for Bayesian Networks was developed successfully.

ENSEMBLE TECHNIQUES IMPLEMENTATION

Aim:
To build an ensemble techniques implementation using a Python program.

Algorithm:
Step 1: In bagging, multiple models are trained on random subsets of the data, and their predictions are combined to make the final prediction.
Step 2: In boosting, multiple weak models are trained in sequence, each one correcting the errors of the previous model.
Step 3: In stacking, a meta-model is trained to combine the predictions of the base models to make the final prediction (see the sketch after the program below).
Step 4: In averaging, multiple models are trained independently and their predictions are averaged to make the final prediction.
Step 5: In blending, multiple models are trained independently and their predictions are combined using a weighted average.
Step 6: In an ensemble of ensembles, multiple ensembles are created using different techniques and combined to create a final prediction.

Program:
from sklearn.ensemble import VotingClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the iris dataset
data = load_iris()

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.3, random_state=42)

# Define the base models
model1 = DecisionTreeClassifier(random_state=42)
model2 = LogisticRegression(random_state=42)
model3 = KNeighborsClassifier()

# Define the ensemble models
ensemble1 = BaggingClassifier(base_estimator=model1, n_estimators=10, random_state=42)
ensemble2 = AdaBoostClassifier(base_estimator=model2, n_estimators=10, random_state=42)
ensemble3 = GradientBoostingClassifier(n_estimators=10, random_state=42)

# Define the voting classifier
voting_clf = VotingClassifier(estimators=[('bagging', ensemble1), ('adaboost', ensemble2), ('gradboost', ensemble3)])

# Train the voting classifier
voting_clf.fit(X_train, y_train)

# Make predictions on the test set
y_pred = voting_clf.predict(X_test)

# Calculate the accuracy of the ensemble model
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy: %.2f' % accuracy)

Output:
Accuracy: 1.00

Result:
The program has been executed successfully and the output has been verified.
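The program demonstrates bagging, boosting, and voting, but not the stacking mentioned in Step 3. A minimal sketch using scikit-learn's StackingClassifier on the same iris split (the choice of base models and of a logistic-regression meta-model is an illustrative assumption):

from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Base models make predictions; the meta-model learns how to combine them
stack_clf = StackingClassifier(
    estimators=[('tree', DecisionTreeClassifier(random_state=42)),
                ('knn', KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000))

stack_clf.fit(X_train, y_train)
print('Stacking accuracy: %.2f' % accuracy_score(y_test, stack_clf.predict(X_test)))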
IMPLEMENT CLUSTERING ALGORITHMS - KMEANS CLUSTERING

Aim:
To implement the k-means clustering algorithm using Python.

Algorithm:
Step 1: Start the program.
Step 2: Import the necessary modules.
Step 3: Create two arrays, namely x and y.
Step 4: Construct the k-means clusters using KMeans().
Step 5: Display the result.
Step 6: Stop the program.

Program:
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

x = [4, 5, 10, 4, 3, 11, 14, 6, 10, 12]
y = [21, 19, 24, 17, 16, 25, 24, 22, 21, 21]

data = list(zip(x, y))
inertias = []

# Fit k-means for k = 1..10 and record the inertia of each fit
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i)
    kmeans.fit(data)
    inertias.append(kmeans.inertia_)

plt.plot(range(1, 11), inertias, marker='o')
plt.title('Elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.show()

Result:
The program has been executed successfully and the output has been verified.

BUILD SIMPLE NN MODELS

Aim:
To write a Python program to build simple NN models.

Algorithm:
1. Define the input and output data.
2. Choose the number of layers and the number of neurons in each layer. This depends on the problem you are trying to solve.
3. Define the activation function for each layer. Common choices are ReLU, sigmoid, and tanh.
4. Initialize the weights and biases for each neuron in the network. This can be done randomly or using a pre-trained model.
5. Define the loss function and optimizer to be used during training. The loss function measures how well the model is doing, while the optimizer updates the weights and biases to minimize the loss.
6. Train the model on the input data using the defined loss function and optimizer. This involves forward propagation to compute the output of the model, and backpropagation to compute the gradients of the loss with respect to the weights and biases. The optimizer then updates the weights and biases based on the gradients.
7. Evaluate the performance of the model on new data using metrics such as accuracy, precision, recall, and F1 score.

Program:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Define the input and output data (the XOR truth table)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Define the model
model = Sequential()
model.add(Dense(4, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model to the data
model.fit(X, y, epochs=1000, batch_size=4)

# Evaluate the model on new data
test_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
predictions = model.predict(test_data)
print(predictions)

Result:
Thus the Python program to build simple NN Models was developed successfully.

DEEP LEARNING NN MODELS

Aim:
To write a Python program to implement deep learning NN models.

Algorithm:
1. Import the necessary libraries, such as numpy and keras.
2. Load or generate your dataset. This can be done using numpy or any other data manipulation library.
3. Preprocess your data by performing any necessary normalization, scaling, or other transformations.
4. Define your neural network architecture using the Keras Sequential API. Add layers to the model using the add() method, specifying the number of units, activation function, and input dimensions for each layer.
5. Compile your model using the compile() method. Specify the loss function, optimizer, and any evaluation metrics you want to use.
6. Train your model using the fit() method. Specify the training data, validation data, batch size, and number of epochs.
7. Evaluate your model using the evaluate() method. This will give you the loss and accuracy metrics on the test set.
8. Use your trained model to make predictions on new data using the predict() method, as in the short example after the program below.

Program:
# Import necessary libraries
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense

# Define the neural network model
model = Sequential()
model.add(Dense(units=64, activation='relu', input_dim=100))
model.add(Dense(units=10, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

# Generate some random data for training and testing
data = np.random.random((1000, 100))
labels = np.random.randint(10, size=(1000, 1))
one_hot_labels = keras.utils.to_categorical(labels, num_classes=10)

# Train the model on the data
model.fit(data, one_hot_labels, epochs=10, batch_size=32)

# Evaluate the model on a test set
test_data = np.random.random((100, 100))
test_labels = np.random.randint(10, size=(100, 1))
test_one_hot_labels = keras.utils.to_categorical(test_labels, num_classes=10)
loss_and_metrics = model.evaluate(test_data, test_one_hot_labels, batch_size=32)
print("Test loss:", loss_and_metrics[0])
print("Test accuracy:", loss_and_metrics[1])

Result:
Thus the Python program to implement deep learning NN Models was developed successfully.
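Step 8 of the algorithm calls for predict(), which the program stops short of. A minimal sketch of making predictions with the trained model above (new_data is an assumed placeholder for unseen inputs with the same 100 features):

# Generate a batch of unseen inputs with the same feature dimension
new_data = np.random.random((5, 100))

# predict() returns one softmax probability vector per input row
probabilities = model.predict(new_data)

# The predicted class is the index with the highest probability
predicted_classes = np.argmax(probabilities, axis=1)
print(predicted_classes)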
