Codeaiml

The document contains Python implementations of various AI and machine learning algorithms, including uninformed search (BFS, DFS), informed search (A*, memory-bounded A*), Naive Bayes models, Bayesian networks, regression models, decision trees, random forests, SVMs, ensembling techniques, clustering algorithms, expectation-maximization for Bayesian networks, simple neural networks, and deep neural networks. Each section gives a brief example of implementing the algorithm with libraries such as scikit-learn, TensorFlow, and pgmpy, and serves as a quick reference for practitioners applying these algorithms in their projects.


Below are compact Python implementations of each algorithm, with brief comments:

### 1. Uninformed Search Algorithms (BFS, DFS)

```python
from collections import deque

def bfs(graph, start):
    # Breadth-first search: visit nodes level by level starting from `start`.
    visited = set()
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.add(node)
            queue.extend(graph[node] - visited)
    return visited

def dfs(graph, start, visited=None):
    # Depth-first search: recursively visit unvisited neighbours.
    if visited is None:
        visited = set()
    visited.add(start)
    for next_node in graph[start] - visited:
        dfs(graph, next_node, visited)
    return visited

graph = {
    'A': {'B', 'C'},
    'B': {'A', 'D', 'E'},
    'C': {'A', 'F'},
    'D': {'B'},
    'E': {'B', 'F'},
    'F': {'C', 'E'}
}
print("BFS:", bfs(graph, 'A'))
print("DFS:", dfs(graph, 'A'))
```

### 2. Informed Search Algorithms (A*, Memory-Bounded A*)

```python
import heapq

def a_star(graph, start, goal, h):
    # A* search: expand the node with the lowest f = g + h until the goal is reached.
    open_set = []
    heapq.heappush(open_set, (0, start))
    came_from = {}
    g_score = {node: float('inf') for node in graph}
    g_score[start] = 0
    f_score = {node: float('inf') for node in graph}
    f_score[start] = h[start]

    while open_set:
        current = heapq.heappop(open_set)[1]
        if current == goal:
            return reconstruct_path(came_from, current)

        for neighbor in graph[current]:
            tentative_g_score = g_score[current] + graph[current][neighbor]
            if tentative_g_score < g_score[neighbor]:
                came_from[neighbor] = current
                g_score[neighbor] = tentative_g_score
                f_score[neighbor] = g_score[neighbor] + h[neighbor]
                heapq.heappush(open_set, (f_score[neighbor], neighbor))
    return None

def reconstruct_path(came_from, current):
    # Walk back through the predecessor map to rebuild the path from start to goal.
    total_path = [current]
    while current in came_from:
        current = came_from[current]
        total_path.append(current)
    total_path.reverse()
    return total_path

graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'D': 2, 'E': 5},
    'C': {'A': 4, 'F': 1},
    'D': {'B': 2},
    'E': {'B': 5, 'F': 1},
    'F': {'C': 1, 'E': 1}
}
h = {'A': 7, 'B': 6, 'C': 2, 'D': 5, 'E': 3, 'F': 0}
print("A* Path:", a_star(graph, 'A', 'F', h))
```
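
The snippet above covers A* only. As a memory-bounded complement, here is a minimal sketch of iterative-deepening A* (IDA*), which keeps only the current path in memory by repeatedly deepening an f-cost threshold. It reuses the `graph` and `h` defined above; the `ida_star` function is an illustrative addition rather than part of the original code.

```python
def ida_star(graph, start, goal, h):
    # Depth-first search bounded by an f = g + h threshold.
    def search(path, g, bound):
        node = path[-1]
        f = g + h[node]
        if f > bound:
            return f, None              # threshold exceeded; report the overshoot
        if node == goal:
            return f, list(path)        # goal found
        minimum = float('inf')
        for neighbor, cost in graph[node].items():
            if neighbor not in path:    # avoid revisiting nodes on the current path
                path.append(neighbor)
                t, found = search(path, g + cost, bound)
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
                path.pop()
        return minimum, None

    bound = h[start]
    while True:
        t, found = search([start], 0, bound)
        if found is not None:
            return found
        if t == float('inf'):
            return None                 # no path exists
        bound = t                       # raise the threshold to the smallest overshoot

print("IDA* Path:", ida_star(graph, 'A', 'F', h))
```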

### 3. Naive Bayes Models

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
y = [0, 0, 1, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Gaussian Naive Bayes assumes each feature is normally distributed within each class.
model = GaussianNB()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("Naive Bayes Accuracy:", accuracy_score(y_test, y_pred))
```

### 4. Bayesian Networks

```python
!pip install pgmpy
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([('A', 'B'), ('A', 'C')])

# Conditional probability tables: P(A), P(B|A) and P(C|A).
cpd_a = TabularCPD(variable='A', variable_card=2, values=[[0.6], [0.4]])
cpd_b = TabularCPD(variable='B', variable_card=2, values=[[0.7, 0.2], [0.3, 0.8]],
                   evidence=['A'], evidence_card=[2])
cpd_c = TabularCPD(variable='C', variable_card=2, values=[[0.9, 0.4], [0.1, 0.6]],
                   evidence=['A'], evidence_card=[2])
model.add_cpds(cpd_a, cpd_b, cpd_c)

infer = VariableElimination(model)
print(infer.query(variables=['B'], evidence={'A': 1}))
```

### 5. Regression Models

```python
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X = [[1], [2], [3], [4], [5]]
y = [1, 2, 3, 4, 5]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Ordinary least-squares linear regression on a toy 1-D dataset.
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("Linear Regression MSE:", mean_squared_error(y_test, y_pred))
```

### 6. Decision Trees and Random Forests

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
y = [0, 0, 1, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Single decision tree.
dt_model = DecisionTreeClassifier()
dt_model.fit(X_train, y_train)
y_pred = dt_model.predict(X_test)

print("Decision Tree Accuracy:", accuracy_score(y_test, y_pred))

# Random forest: an ensemble of decision trees trained on bootstrap samples.
rf_model = RandomForestClassifier()
rf_model.fit(X_train, y_train)
y_pred = rf_model.predict(X_test)

print("Random Forest Accuracy:", accuracy_score(y_test, y_pred))
```

### 7. SVM Models

```python
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
y = [0, 0, 1, 1, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Support vector classifier with the default RBF kernel.
model = svm.SVC()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("SVM Accuracy:", accuracy_score(y_test, y_pred))


```

### 8. Ensembling Techniques

```python
from sklearn.ensemble import VotingClassifier, BaggingClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
y = [0, 0, 1, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model1 = LogisticRegression()
model2 = DecisionTreeClassifier()
model3 = SVC(probability=True)

# Soft voting averages the predicted class probabilities of the three base models.
voting_model = VotingClassifier(estimators=[('lr', model1), ('dt', model2),
                                            ('svc', model3)], voting='soft')
voting_model.fit(X_train, y_train)
y_pred = voting_model.predict(X_test)

print("Voting Classifier Accuracy:", accuracy_score(y_test, y_pred))

# Bagging: train copies of the base estimator on bootstrap samples and combine them.
# (The keyword is `estimator` in scikit-learn >= 1.2; older releases used `base_estimator`.)
bagging_model = BaggingClassifier(estimator=DecisionTreeClassifier(), n_estimators=10)
bagging_model.fit(X_train, y_train)
y_pred = bagging_model.predict(X_test)

print("Bagging Classifier Accuracy:", accuracy_score(y_test, y_pred))

# AdaBoost: sequentially reweight training samples that earlier estimators misclassified.
adaboost_model = AdaBoostClassifier(estimator=DecisionTreeClassifier(), n_estimators=50)
adaboost_model.fit(X_train, y_train)
y_pred = adaboost_model.predict(X_test)

print("AdaBoost Classifier Accuracy:", accuracy_score(y_test, y_pred))
```

### 9. Clustering Algorithms

```python
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt

X, _ = make_blobs(n_samples=100, centers=3, n_features=2, random_state=0)

# K-means partitions the points into a fixed number of clusters around centroids.
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)

plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_, cmap='viridis')
plt.title('KMeans Clustering')
plt.show()

# DBSCAN groups points by density and labels sparse points as noise (-1).
dbscan = DBSCAN(eps=0.5, min_samples=5)
dbscan.fit(X)

plt.scatter(X[:, 0], X[:, 1], c=dbscan.labels_, cmap='viridis')
plt.title('DBSCAN Clustering')
plt.show()
```

### 10. EM for Bayesian Networks

```python
!pip install pgmpy
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import ExpectationMaximization

model = BayesianNetwork([('A', 'B'), ('A', 'C')])

import pandas as pd
data = pd.DataFrame(data={'A': [0, 0, 1, 1], 'B': [0, 1, 0, 1], 'C': [1, 0, 1, 0]})

# EM estimates the CPDs from the data; get_parameters() returns a list of CPDs.
em = ExpectationMaximization(model, data)
cpds = em.get_parameters()

for cpd in cpds:
    print(cpd)
```

### 11. Simple Neural Network Models

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

X = [[1], [2], [3], [4], [5]]
y = [0, 0, 1, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Standardise the inputs before feeding them to the network.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# One hidden layer followed by a sigmoid output for binary classification.
model = Sequential()
model.add(Dense(10, input_dim=1, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, verbose=0)

# Threshold the predicted probabilities at 0.5 to obtain class labels.
y_pred = (model.predict(X_test) > 0.5).astype("int32")

print("Simple NN Accuracy:", accuracy_score(y_test, y_pred))
```

### 12. Deep Learning NN Models

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
import numpy as np

X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# A deeper network with dropout layers for regularisation.
model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)

y_pred = (model.predict(X_test) > 0.5).astype("int32")

print("Deep Learning NN Accuracy:", accuracy_score(y_test, y_pred))
```

These implementations cover a range of machine learning and AI algorithms. You can
modify and expand these examples as needed for your specific use cases.
