FEEDBACK NETWORKS
These kinds of networks allow signals to travel in both directions, that is, from
input to output as well as from output back to input.
Feedback neural networks, such as recurrent neural networks (RNNs), are characterized
by their ability to maintain a state that captures information about previous inputs.
This is achieved through recurrent connections that loop back from the output to the
input of the same layer or of previous layers.
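This loop-back behavior can be made concrete with a minimal sketch. The code below is illustrative only (the weight shapes, values, and input sequence are assumptions, not from the slides); it shows how the hidden state h is fed back into the same layer at each step, so the current output depends on previous inputs.

```python
# Minimal sketch of a feedback (recurrent) update; all values illustrative.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(3, 2))   # input-to-hidden weights (assumed shapes)
W_rec = rng.normal(size=(3, 3))  # recurrent (feedback) weights
h = np.zeros(3)                  # state capturing information about past inputs

for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    # W_rec @ h is the feedback connection: the previous state loops
    # back into the same layer at the next time step.
    h = np.tanh(W_in @ x + W_rec @ h)
    print(h)
```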
ASSOCIATIVE MEMORY NETWORK
❑These kinds of neural networks work on the basis of pattern association: they can
store different patterns and, when producing an output, return the stored pattern
that best matches the given input pattern.
❑These types of memories are also called Content-Addressable Memory (CAM).
❑Associative memory performs a parallel search across the stored patterns, treating them as data files.
❑Following are the two types of associative memories we can observe:
• Auto-associative memory: recalls a stored pattern from a noisy or incomplete version of the same pattern.
• Hetero-associative memory: recalls a stored pattern that is different from, but associated with, the input pattern.
EXAMPLE
Given an input vector x with missing entries, x = [0 0 1 0] ([x1 x2 x3 x4]) (binary),
initialize y = x = [0 0 1 0] ([y1 y2 y3 y4]).
Choose a unit yi (the order does not matter) and update its activation.
Take the i-th column of the weight matrix for the calculation.
In the next steps, check for every unit yi whether the network has converged…
For the next unit, we take the updated value via feedback (i.e., y = [1 0 1 0]).
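A minimal sketch of this asynchronous recall in code. The weight matrix below is an assumption: it stores the pattern [1 1 1 0] (W = sᵀs with zero diagonal for the bipolar form [1 1 1 −1]), which is consistent with the trace above, where y becomes [1 0 1 0] after the first unit is updated.

```python
# Discrete Hopfield-style autoassociative recall with asynchronous updates.
import numpy as np

# Assumed weights storing the pattern [1 1 1 0]; not given in the slides.
W = np.array([[ 0,  1,  1, -1],
              [ 1,  0,  1, -1],
              [ 1,  1,  0, -1],
              [-1, -1, -1,  0]])

x = np.array([0, 0, 1, 0])   # input pattern with missing entries
y = x.copy()                 # initialize y = x

for i in range(4):                # asynchronous: update one unit at a time
    net = x[i] + y @ W[:, i]      # i-th column of W plus external input
    if net > 0:
        y[i] = 1
    elif net < 0:
        y[i] = 0                  # net == 0 leaves y[i] unchanged
    print(f"after updating y{i+1}: y = {y}")  # converges to [1 1 1 0]
```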
STOCHASTIC NETWORKS
❑A stochastic neural network is a type of artificial neural network that incorporates
randomness in its operations.
❑In stochastic neural networks, the algorithm assigns a probability to each neuron
rather than a deterministic value.
❑A neuron fires only if its value passes the threshold.
❑Such a network is built by introducing random variation into the network, for
example through stochastic weights or stochastic activations.
❑This randomness can enhance learning efficiency and robustness, or enable the
network to model uncertainty (see the sketch below).
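A minimal sketch of one stochastic neuron (the weights reuse the numbers from the worked example later in this section; the Bernoulli draw is the source of randomness):

```python
# Stochastic binary neuron: the sigmoid gives the firing probability,
# and the neuron fires only if a random draw falls below it.
import numpy as np

rng = np.random.default_rng(42)

def stochastic_neuron(x, w, b):
    p_fire = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(active) = sigmoid(w.x + b)
    return 1 if rng.random() < p_fire else 0      # Bernoulli sample

x = np.array([3.0, 1.0])             # illustrative inputs
w = np.array([0.4, 0.6]); b = -0.2   # illustrative weights and bias
print([stochastic_neuron(x, w, b) for _ in range(10)])  # mostly 1s here
```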
EXAMPLES OF STOCHASTIC NEURAL NETWORKS
❑Boltzmann Machines: A generative stochastic network using energy-based models.
❑Dropout Regularization: During training, random neurons are "dropped" (set to
zero) to prevent overfitting.
❑Bayesian Neural Networks (BNNs): Extend neural networks to include probability
distributions for weights and biases, capturing epistemic uncertainty.
❑Stochastic Gradient Descent (SGD): While not a neural network type, it
incorporates randomness in optimization, underpinning many training processes.
Stochastic Gradient Descent (SGD) is a variant of the Gradient Descent algorithm that is
used for optimizing machine learning models.
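A minimal SGD sketch on a one-variable linear model (the data and learning rate are made up for illustration; picking a random sample each step is the stochastic part):

```python
# SGD on y = w*x with squared-error loss; one random sample per step.
import random

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs
w, lr = 0.0, 0.01

for step in range(1000):
    x, y = random.choice(data)     # random sample -> noisy gradient estimate
    grad = 2 * (w * x - y) * x     # d/dw of (w*x - y)^2
    w -= lr * grad                 # gradient step
print(round(w, 3))                 # approaches the true slope (about 2)
```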
EXAMPLE
A stochastic neural network has:
Input Layer: 2 neurons with inputs 𝑥1=3.0 and 𝑥2=1.0
Hidden Layer: 3 neurons, using stochastic activations based on the sigmoid function. For
each hidden neuron:
𝑃(active)=𝜎(𝑤1⋅𝑥1+𝑤2⋅𝑥2+𝑏)
where 𝑤1,𝑤2 are weights, and 𝑏 is the bias.
Assume the following:
Hidden Neuron 1: 𝑤1=0.4,𝑤2=0.6,𝑏=−0.2
Hidden Neuron 2: 𝑤1=−0.3,𝑤2=0.8,𝑏=0.1
Hidden Neuron 3: 𝑤1=0.5,𝑤2=−0.4,𝑏=0.3
Output Layer: 1 neuron that computes
y = 𝜎(𝑣1⋅h1 + 𝑣2⋅h2 + 𝑣3⋅h3)
where hi are the hidden activations and 𝑣i are the weights of the connections from
the hidden to the output layer: 𝑣1=1.0, 𝑣2=−1.5, 𝑣3=2.0
Ques(1): Compute the activation probabilities of the hidden neurons.
Ques(2): Compute the output at the output neuron.
Solution:
Step 1: Calculate activations for the hidden layer neurons
For each hidden neuron:
𝑧=𝑤1⋅𝑥1+𝑤2⋅𝑥2+𝑏
Hidden Neuron 1:
𝑧1=(0.4*3.0)+(0.6*1.0)−0.2=1.2+0.6−0.2=1.6
Hidden Neuron 2:
𝑧2=(−0.3*3.0)+(0.8*1.0)+0.1=−0.9+0.8+0.1=0.0
Hidden Neuron 3:
𝑧3=(0.5*3.0)+(−0.4*1.0)+0.3=1.5−0.4+0.3=1.4
Step 2: Apply the sigmoid to obtain the activation probabilities (Ques 1)
P1=𝜎(1.6)≈0.832, P2=𝜎(0.0)=0.5, P3=𝜎(1.4)≈0.802
Step 3: Compute the output (Ques 2)
Treating the activation probabilities as the expected hidden activations:
𝑧out=(1.0*0.832)+(−1.5*0.5)+(2.0*0.802)=0.832−0.75+1.604=1.686
Output=𝜎(1.686)≈0.844
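The numbers above can be checked with a short script (the output step assumes, as in Step 3, that the activation probabilities stand in for the hidden activations):

```python
# Reproduces the worked example: hidden activation probabilities, then output.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x1, x2 = 3.0, 1.0
hidden = [(0.4, 0.6, -0.2),    # (w1, w2, b) for hidden neurons 1..3
          (-0.3, 0.8, 0.1),
          (0.5, -0.4, 0.3)]
v = [1.0, -1.5, 2.0]           # hidden-to-output weights

p = [sigmoid(w1 * x1 + w2 * x2 + b) for (w1, w2, b) in hidden]
print(p)                                  # ~[0.832, 0.500, 0.802]  (Ques 1)

z_out = sum(vi * pi for vi, pi in zip(v, p))
print(sigmoid(z_out))                     # ~0.844                  (Ques 2)
```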
BOLTZMANN MACHINES: ENERGY AND PROBABILITY
From its energy function E(x), the Boltzmann machine defines a probability
distribution over binary patterns:
P(x) = exp(−E(x)) / Σx′ exp(−E(x′))
where the summation with respect to x′ is over all of the possible N-bit binary
values. The lower the energy of a state, the higher its probability; equivalently,
the higher the energy of a pattern x, the less likely it is that x is generated.
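A tiny sketch of this distribution, assuming the standard energy E(x) = −½ xᵀWx − bᵀx (the energy definition itself is not shown in these slides, and W, b below are illustrative):

```python
# Boltzmann distribution over all 2^N binary patterns for a toy N = 2 case.
import itertools
import numpy as np

W = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # symmetric weights, zero diagonal (assumed)
b = np.array([0.0, -0.5])       # biases (assumed)
states = [np.array(s) for s in itertools.product([0, 1], repeat=2)]

def energy(x):
    return -0.5 * x @ W @ x - b @ x

Z = sum(np.exp(-energy(x)) for x in states)   # sum over all N-bit patterns

for x in states:
    print(x, np.exp(-energy(x)) / Z)  # lower energy -> higher probability
```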
THANK YOU
Applications of Artificial Intelligence and Neural Networks (CS-538)