
Perceptron Learning Algorithm for AND Gate

The AND gate returns 1 if and only if both inputs are 1; otherwise, it returns 0.

We can solve this problem using the perceptron learning algorithm, which learns the weights

and bias for a linear decision boundary to fit the desired outputs.

Truth Table for AND Gate:

| Input 1 | Input 2 | Output |
|---------|---------|--------|
|    0    |    0    |   0    |
|    0    |    1    |   0    |
|    1    |    0    |   0    |
|    1    |    1    |   1    |

The perceptron learning algorithm updates the weights (w1, w2) and bias (b) as follows:

1. Initialize the weights and bias to 0 or small random values.

2. For each input-output pair:

   - Calculate the weighted sum: z = w1*x1 + w2*x2 + b

   - Apply the step activation: y = 1 if z > 0, else 0

   - If the prediction (y) differs from the target (t), update:

     w1 = w1 + learning_rate * (t - y) * x1

     w2 = w2 + learning_rate * (t - y) * x2

     b = b + learning_rate * (t - y)

3. Repeat until all outputs match the targets or a maximum number of iterations is reached.
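The steps above can be sketched as a short training loop. This is a minimal illustration, not code from the original text: the function name, `max_epochs` cutoff, and learning rate are assumptions, and ties at z = 0 are classified as 0, matching the worked example below.

```python
def train_perceptron(data, learning_rate=1, max_epochs=20):
    """Learn weights (w1, w2) and bias b with the perceptron update rule."""
    w1 = w2 = b = 0  # step 1: initialize to zero
    for _ in range(max_epochs):
        errors = 0
        for (x1, x2), t in data:
            z = w1 * x1 + w2 * x2 + b   # weighted sum
            y = 1 if z > 0 else 0       # step activation
            if y != t:                  # step 2: update only on mistakes
                w1 += learning_rate * (t - y) * x1
                w2 += learning_rate * (t - y) * x2
                b += learning_rate * (t - y)
                errors += 1
        if errors == 0:                 # step 3: stop once every output matches
            break
    return w1, w2, b

# AND gate truth table as (inputs, target) pairs
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_data)
```

The learned weights depend on the order the pairs are visited, but any converged solution classifies all four rows of the truth table correctly.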
Example: Learning weights for the AND gate.

Inputs (x1, x2) -> Outputs (t):

(0, 0) -> 0, (0, 1) -> 0, (1, 0) -> 0, (1, 1) -> 1

Initial weights: w1 = 0, w2 = 0, b = 0, learning_rate = 1

Iteration 1:

- Input: (1, 1), Target: 1

  z = w1*1 + w2*1 + b = 0, so y = 0 (incorrect)

  Update: w1 = 0 + 1*(1-0)*1 = 1

          w2 = 0 + 1*(1-0)*1 = 1

          b = 0 + 1*(1-0) = 1

Repeat for other inputs until convergence.
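The arithmetic of this first update can be checked directly. A quick sketch, using the same zero initialization and learning rate as the example:

```python
# Reproduce Iteration 1 from the example: zero weights, input (1, 1), target 1.
w1, w2, b, lr = 0, 0, 0, 1
x1, x2, t = 1, 1, 1

z = w1 * x1 + w2 * x2 + b   # z = 0
y = 1 if z > 0 else 0       # y = 0, target is 1, so update

w1 += lr * (t - y) * x1     # w1 becomes 1
w2 += lr * (t - y) * x2     # w2 becomes 1
b += lr * (t - y)           # b becomes 1

print(w1, w2, b)            # 1 1 1
```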
