
# Perceptron Learning Algorithm, Pocket Algorithm, and Delta Rule Learning Algorithm

## Step 1: Import Libraries

First, we need to import the necessary libraries.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.metrics import accuracy_score
```

## Step 2: Data Generation

We will use the `make_moons` function from `sklearn.datasets` to generate a nonlinearly separable dataset.

```python
# Generate a nonlinearly separable dataset
X, y = make_moons(n_samples=100, noise=0.2, random_state=42)

# Convert labels from {0, 1} to {-1, 1}
y = np.where(y == 0, -1, 1)

# Visualize the dataset
plt.figure(figsize=(8, 6))
plt.scatter(X[y == 1][:, 0], X[y == 1][:, 1], color='blue', label='Class +1')
plt.scatter(X[y == -1][:, 0], X[y == -1][:, 1], color='red', label='Class -1')
plt.title('Nonlinearly Separable Dataset')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()
plt.grid()
plt.show()
```
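Note that the linear models below learn a decision boundary through the origin, since no bias term is used. If you want an intercept, one option (not used in the rest of this walkthrough, so treat it as an optional variation) is to append a constant feature of 1 to every sample:

```python
# Optional: augment each sample with a constant feature so the learned
# weight vector also encodes an intercept (bias) term.
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
print(X_aug.shape)  # (100, 3): two original features plus the bias column
```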

## Step 3: Implement Learning Algorithms

### 1. Perceptron Learning Algorithm

```python
def perceptron(X, y, w0, max_iter=100):
    w = w0.copy()  # Copy so the caller's initial weights are not modified in place
    n = len(y)
    errors = []

    for _ in range(max_iter):
        errors_count = 0
        for i in range(n):
            if np.sign(np.dot(w, X[i])) != y[i]:
                w += y[i] * X[i]  # Update weights
                errors_count += 1
        errors.append(errors_count / n)
        if errors_count == 0:  # No misclassifications
            break
    return w, errors
```
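For reference, the update inside the inner loop is the classic perceptron rule: whenever a sample is misclassified, i.e. \( \mathrm{sign}(w \cdot x_i) \neq y_i \), the weights move toward that sample with \( w \leftarrow w + y_i x_i \); correctly classified samples leave \( w \) unchanged.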

### 2. Pocket Algorithm

```python
def pocket(X, y, w0, max_iter=100):
    w = w0.copy()  # Copy so the caller's initial weights are not modified in place
    best_w = w.copy()
    n = len(y)
    best_error = float('inf')
    errors = []

    for _ in range(max_iter):
        errors_count = 0
        for i in range(n):
            if np.sign(np.dot(w, X[i])) != y[i]:
                w += y[i] * X[i]  # Update weights
                errors_count += 1

        current_error = errors_count / n
        errors.append(current_error)

        if current_error < best_error:  # Keep the best weights
            best_w = w.copy()
            best_error = current_error

    return best_w, errors
```
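In other words, while the plain perceptron simply returns whatever weights it holds when the loop ends, the pocket algorithm keeps the weight vector whose pass over the data produced the lowest misclassification rate, roughly \( w_{\text{pocket}} \approx \arg\min_{w_t} \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}[\mathrm{sign}(w_t \cdot x_i) \neq y_i] \). This matters here because the moons dataset is not linearly separable, so the perceptron's final weights can be worse than ones it passed through earlier.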

### 3. Delta Rule Learning Algorithm

```python
def delta_rule(X, y, w0, learning_rate=0.1, max_iter=100):
    w = w0.copy()  # Copy so the caller's initial weights are not modified in place
    n = len(y)
    errors = []

    for _ in range(max_iter):
        for i in range(n):
            # Update the weights using the Delta Rule
            w += learning_rate * (y[i] - np.dot(w, X[i])) * X[i]

        # Calculate the empirical loss (mean squared error)
        loss = np.mean((y - np.dot(X, w))**2)
        errors.append(loss)

    return w, errors
```
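The update implemented above is the standard delta (Widrow–Hoff / LMS) rule, \( w \leftarrow w + \eta \, (y_i - w \cdot x_i) \, x_i \) with learning rate \( \eta \), which performs stochastic gradient descent on the squared error \( \frac{1}{n} \sum_{i=1}^{n} (y_i - w \cdot x_i)^2 \) that the function records after each epoch.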

## Step 4: Apply Algorithms to the Dataset

Now, we will apply each of the algorithms to the generated dataset.

```python
# Initialize parameters
np.random.seed(42)  # For reproducibility
w0 = np.random.rand(X.shape[1])  # Random initialization of weights

# Set maximum iterations
T_max = 100

# Apply Perceptron Algorithm
w_perceptron, errors_perceptron = perceptron(X, y, w0, max_iter=T_max)

# Apply Pocket Algorithm
w_pocket, errors_pocket = pocket(X, y, w0, max_iter=T_max)

# Apply Delta Rule Algorithm
learning_rate = 0.1
w_delta, errors_delta = delta_rule(X, y, w0, learning_rate=learning_rate, max_iter=T_max)
```
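As a quick sanity check (a minimal sketch; the variable names come from the cell above), you can print the learned weight vectors before plotting anything:

```python
# Inspect the learned weight vectors for a quick sanity check
print('Perceptron weights:', w_perceptron)
print('Pocket weights:    ', w_pocket)
print('Delta rule weights:', w_delta)
```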

## Step 5: Empirical Error Analysis

We will plot the empirical error evolution over iterations for each algorithm.

```python
plt.figure(figsize=(12, 8))
plt.plot(errors_perceptron, label='Perceptron', color='blue')
plt.plot(errors_pocket, label='Pocket', color='orange')
plt.plot(errors_delta, label='Delta Rule', color='green')
plt.title('Empirical Error Evolution Over Iterations')
plt.xlabel('Iterations')
plt.ylabel('Empirical Error')
plt.legend()
plt.grid()
plt.ylim(0, 1)
plt.show()
```
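If you also want the iteration at which each algorithm reached its lowest empirical error (useful for the iteration-count comparison in the next step), here is a small sketch using the recorded error lists:

```python
# Iteration index (0-based) at which each error curve reaches its minimum
for name, errs in [('Perceptron', errors_perceptron),
                   ('Pocket', errors_pocket),
                   ('Delta Rule', errors_delta)]:
    best_iter = int(np.argmin(errs))
    print(f'{name}: lowest error {min(errs):.3f} at iteration {best_iter}')
```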

## Step 6: Performance Comparison

Now we can compare the three algorithms based on:

- The number of iterations required.
- Classification accuracy.
- Progression of empirical error.

```python
# Classification accuracy
y_pred_perceptron = np.sign(np.dot(X, w_perceptron))
y_pred_pocket = np.sign(np.dot(X, w_pocket))
y_pred_delta = np.sign(np.dot(X, w_delta))

accuracy_perceptron = accuracy_score(y, y_pred_perceptron)
accuracy_pocket = accuracy_score(y, y_pred_pocket)
accuracy_delta = accuracy_score(y, y_pred_delta)

# Print results
print(f'Perceptron Accuracy: {accuracy_perceptron:.2f}, Iterations: {len(errors_perceptron)}')
print(f'Pocket Accuracy: {accuracy_pocket:.2f}, Iterations: {len(errors_pocket)}')
print(f'Delta Rule Accuracy: {accuracy_delta:.2f}, Iterations: {len(errors_delta)}')
```
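To see what those accuracies mean geometrically, you can also plot the three linear decision boundaries on top of the data. This is a sketch under the assumption that the weight vectors are 2-dimensional (i.e., the optional bias augmentation from Step 2 was not used), so each boundary is the line \( w_1 x_1 + w_2 x_2 = 0 \):

```python
# Plot each learned boundary w1*x1 + w2*x2 = 0 as a line over the data
plt.figure(figsize=(8, 6))
plt.scatter(X[y == 1][:, 0], X[y == 1][:, 1], color='blue', label='Class +1')
plt.scatter(X[y == -1][:, 0], X[y == -1][:, 1], color='red', label='Class -1')

x1_range = np.linspace(X[:, 0].min(), X[:, 0].max(), 100)
for name, w in [('Perceptron', w_perceptron), ('Pocket', w_pocket), ('Delta Rule', w_delta)]:
    if abs(w[1]) > 1e-12:  # Skip a (near-)vertical boundary to avoid division by zero
        plt.plot(x1_range, -w[0] / w[1] * x1_range, label=f'{name} boundary')

plt.title('Learned Decision Boundaries')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()
plt.grid()
plt.show()
```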

## Step 7: Conclusion
After executing the above code, you can analyze the outputs, the progression of
empirical error, and the accuracy of each algorithm. Here are points to consider
for your conclusion:

- **Performance Comparison**: Assess how quickly each algorithm converged, their final classification accuracy, and how the empirical error changed over iterations.
- **Stability**: Consider how stable each algorithm is, i.e., whether its empirical error settles to a low value or keeps oscillating across iterations.
- **Sensitivity to Parameters**: Discuss how changing parameters (like the learning
rate in the Delta Rule) could impact the results.

### Suggestions for Optimal T_max

- **Cross-Validation**: Use cross-validation to test different values of \( T_{max} \) and choose the one with the best validation-set performance.
- **Early Stopping**: Implement early stopping based on validation error to decide when to stop training, rather than relying solely on a fixed maximum number of iterations (a minimal sketch follows below).
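
As a rough illustration of the early-stopping idea, here is a minimal sketch for the delta rule, assuming you hold out part of the data as a validation set and track validation loss per epoch; the 70/30 split and the patience value are arbitrary choices for illustration, not part of the original code:

```python
# Hold out 30% of the data as a validation set (arbitrary split for illustration)
rng = np.random.default_rng(0)
idx = rng.permutation(len(y))
split = int(0.7 * len(y))
X_tr, y_tr = X[idx[:split]], y[idx[:split]]
X_val, y_val = X[idx[split:]], y[idx[split:]]

w = np.random.rand(X.shape[1])
best_w, best_val_loss, patience, bad_epochs = w.copy(), float('inf'), 10, 0

for epoch in range(T_max):
    for i in range(len(y_tr)):
        w += learning_rate * (y_tr[i] - np.dot(w, X_tr[i])) * X_tr[i]
    val_loss = np.mean((y_val - np.dot(X_val, w))**2)
    if val_loss < best_val_loss:
        best_w, best_val_loss, bad_epochs = w.copy(), val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # Stop once validation loss stops improving
            print(f'Early stopping at epoch {epoch}')
            break
```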

This code provides a comprehensive framework for implementing, testing, and comparing these three learning algorithms on a nonlinearly separable dataset influenced by noise. You can run this code in a Python environment with the necessary libraries installed. If you have any further questions or need clarification on specific sections, feel free to ask!
