Aiml 3

The document discusses artificial neural networks, including their architecture types, perceptrons, gradient descent, backpropagation, and Bayesian belief networks. It provides definitions and examples of single-layer and multi-layer feedforward, recurrent, and mesh architectures, and also explains perceptrons, gradient descent, backpropagation, Bayesian belief networks, and naive Bayes classifiers with examples.


1. What is an artificial neuron? Discuss with a neat diagram.

An artificial neuron is the basic processing unit of an artificial neural network: it computes a weighted sum of its inputs and passes the result through an activation function to produce its output. Artificial neural networks (ANNs) built from such units provide a general, practical method for learning real-valued, discrete-valued, and vector-valued target functions from examples.

2. When to consider Neural Networks?

• Input is high-dimensional, discrete or real-valued (e.g., raw sensor input)
• Output is discrete or real-valued
• Output is a vector of values
• Possibly noisy data
• Form of target function is unknown
• Human readability of result is unimportant
Examples:
1. Speech phoneme recognition
2. Image classification
3. Financial prediction

3. Discuss the different types of architectures of artificial neural networks.


Single-Layer Feedforward Architecture
• This artificial neural network has just one input layer and a single neural layer, which is also the output layer.
• Figure illustrates a single-layer feedforward network composed of n inputs and m outputs.
• The information always flows in a single direction (thus, unidirectional), from the input layer to the output layer.
Multi-Layer Feedforward Architecture
• These feedforward artificial neural networks are composed of one or more hidden neural layers in addition to the input and output layers.
• Figure shows a multi-layer feedforward network composed of one input layer with n sample signals, two hidden neural layers consisting of n1 and n2 neurons respectively, and, finally, one output neural layer composed of m neurons representing the respective output values of the problem being analyzed. A forward-pass sketch follows below.
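As a rough illustration (an added sketch, not part of the original notes; the layer sizes and the sigmoid activation are assumptions), the forward pass of such a multi-layer feedforward network can be written in Python as:

    import numpy as np

    # Sketch of a forward pass: n inputs -> n1 hidden -> n2 hidden -> m outputs.
    # Layer sizes, random weights, and sigmoid activation are illustrative only.
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    n, n1, n2, m = 4, 5, 3, 2                  # example layer sizes
    W1, b1 = rng.normal(size=(n1, n)), np.zeros(n1)
    W2, b2 = rng.normal(size=(n2, n1)), np.zeros(n2)
    W3, b3 = rng.normal(size=(m, n2)), np.zeros(m)

    x = rng.normal(size=n)                     # one input sample
    h1 = sigmoid(W1 @ x + b1)                  # first hidden layer
    h2 = sigmoid(W2 @ h1 + b2)                 # second hidden layer
    y = sigmoid(W3 @ h2 + b3)                  # m output values
    print(y)

Information still flows strictly from inputs to outputs; only the number of weight matrices grows with the number of layers.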
Recurrent or Feedback Architecture
• In these networks, the outputs of some neurons are fed back as inputs to other neurons.
• Figure illustrates an example of a Perceptron network with feedback, where one of its output signals is fed back to the middle layer.
Mesh Architectures
• The main feature of networks with mesh structures is that they consider the spatial arrangement of neurons for pattern extraction: the spatial localization of the neurons is directly related to the process of adjusting their synaptic weights and thresholds.
• Figure illustrates an example of the Kohonen network, where its neurons are arranged within a two-dimensional space.

4. Explain a single perceptron with the AND function.


One type of ANN system is based on a unit called a perceptron. A perceptron is a single-layer neural network.
• A perceptron takes a vector of real-valued inputs, calculates a linear combination of these inputs, then outputs a 1 if the result is greater than some threshold and -1 otherwise.
• Given inputs x1 through xn, the output o(x1, . . . , xn) computed by the perceptron is

    o(x1, . . . , xn) = 1 if w0 + w1x1 + w2x2 + . . . + wnxn > 0, and -1 otherwise

• Here each wi is a real-valued constant, or weight, that determines the contribution of input xi to the perceptron output.
• (-w0) is a threshold that the weighted combination of inputs w1x1 + . . . + wnxn must surpass in order for the perceptron to output a 1.
Sometimes the perceptron function is written more compactly as

    o(x) = sgn(w · x), where sgn(y) = 1 if y > 0 and -1 otherwise,

with x the input vector augmented by the constant input x0 = 1 and w the weight vector.
Learning a perceptron involves choosing values for the weights w0, . . . , wn. Therefore, the space H of candidate hypotheses considered in perceptron learning is the set of all possible real-valued weight vectors.
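To make the AND example concrete (the weights below are one common choice, assumed for illustration rather than taken from the notes), a minimal Python sketch:

    # Minimal perceptron for the Boolean AND function.
    # w = [-0.8, 0.5, 0.5] is one assumed weight setting that realizes AND.
    def perceptron(x, w):
        """Output 1 if w0 + w1*x1 + ... + wn*xn > 0, else -1."""
        s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
        return 1 if s > 0 else -1

    w = [-0.8, 0.5, 0.5]  # w0 (threshold term), w1, w2
    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, perceptron(x, w))  # -1, -1, -1, 1  (AND, with -1 as "false")

With threshold w0 = -0.8, only the input (1, 1) pushes the weighted sum above zero, which is exactly the AND behavior.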

5. Discuss the gradient descent rule for training a linear unit.
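The notes do not include a worked answer here. As a hedged sketch of the standard (batch) gradient descent rule for a linear unit o = w · x, each weight is updated by Δwi = η Σd (td - od) xid over all training examples; in Python (the toy data and learning rate are assumptions):

    # Batch gradient descent (delta rule) for a linear unit o = w . x.
    def train_linear_unit(data, eta=0.05, epochs=1000):
        n = len(data[0][0])
        w = [0.0] * n
        for _ in range(epochs):
            delta = [0.0] * n
            for x, t in data:                  # accumulate over all examples
                o = sum(wi * xi for wi, xi in zip(w, x))
                for i in range(n):
                    delta[i] += eta * (t - o) * x[i]
            w = [wi + di for wi, di in zip(w, delta)]
        return w

    # Toy data consistent with t = 2*x1 - x2 (first component is a bias input).
    data = [((1.0, 1.0, 0.0), 2.0), ((1.0, 0.0, 1.0), -1.0), ((1.0, 1.0, 1.0), 1.0)]
    print(train_linear_unit(data))  # approaches w = (0, 2, -1)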


6. Derive the backpropagation rule for both Case 1 and Case 2.
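No derivation is reproduced in these notes. As a hedged summary of the standard result for sigmoid units: in Case 1, for an output unit k, the error term is δk = ok(1 - ok)(tk - ok); in Case 2, for a hidden unit h, δh = oh(1 - oh) Σk wkh δk, where the sum runs over the units receiving input from h. Each weight is then updated by Δwji = η δj xji.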

7. Write the stochastic gradient descent algorithm.
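Again as an assumed sketch rather than the notes' own answer: the stochastic (incremental) variant updates the weights after each individual example, wi ← wi + η (t - o) xi, which approximates the batch rule above for small η:

    # Stochastic (incremental) gradient descent for a linear unit.
    def train_linear_unit_sgd(data, eta=0.05, epochs=500):
        n = len(data[0][0])
        w = [0.0] * n
        for _ in range(epochs):
            for x, t in data:                  # update after every example
                o = sum(wi * xi for wi, xi in zip(w, x))
                w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
        return w

    data = [((1.0, 1.0, 0.0), 2.0), ((1.0, 0.0, 1.0), -1.0), ((1.0, 1.0, 1.0), 1.0)]
    print(train_linear_unit_sgd(data))  # also approaches w = (0, 2, -1)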

9. Apply backpropagation to the network given below.

1. Explain Bayesian belief networks with a suitable example.


The naive Bayes classifier makes significant use of the assumption that the values of the attributes a1, . . . , an are conditionally independent given the target value v. This assumption dramatically reduces the complexity of learning the target function.
A Bayesian belief network describes the probability distribution governing a set of variables by specifying a set of conditional independence assumptions along with a set of conditional probabilities.
Bayesian belief networks allow stating conditional independence assumptions that apply to subsets of the variables.
Notation
Consider an arbitrary set of random variables Y1, . . . , Yn, where each variable Yi can take on the set of possible values V(Yi).
Define the joint space of the set of variables Y to be the cross product V(Y1) x V(Y2) x . . . x V(Yn).
In other words, each item in the joint space corresponds to one of the possible assignments of values to the tuple of variables (Y1, . . . , Yn). The probability distribution over this joint space is called the joint probability distribution.
The joint probability distribution specifies the probability for each of the possible variable bindings for the tuple (Y1, . . . , Yn).
A Bayesian belief network describes the joint probability distribution for a set of variables.
Conditional Independence
Let X, Y, and Z be three discrete-valued random variables. X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y given a value for Z, that is, if

    (for all xi, yj, zk)  P(X = xi | Y = yj, Z = zk) = P(X = xi | Z = zk),

which is commonly abbreviated as P(X | Y, Z) = P(X | Z).
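As an added toy example (the network and the CPT numbers are assumptions for illustration), a chain-structured belief network Storm → Lightning → Thunder encodes the factorization P(S, L, T) = P(S) · P(L | S) · P(T | L):

    # Tiny belief network Storm -> Lightning -> Thunder; all numbers made up.
    P_S = {True: 0.1, False: 0.9}
    # P(L = l | S = s), indexed as [s][l]
    P_L_given_S = {True:  {True: 0.8,  False: 0.2},
                   False: {True: 0.05, False: 0.95}}
    # P(T = t | L = l), indexed as [l][t]
    P_T_given_L = {True:  {True: 0.95, False: 0.05},
                   False: {True: 0.1,  False: 0.9}}

    def joint(s, l, t):
        # Joint probability from the network's factorization.
        return P_S[s] * P_L_given_S[s][l] * P_T_given_L[l][t]

    print(joint(True, True, True))  # 0.1 * 0.8 * 0.95 = 0.076

The conditional independence assumptions (e.g., Thunder is independent of Storm given Lightning) are what let the joint distribution be specified by these small conditional probability tables instead of a table over all 2^3 variable bindings.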
2. Explain brute-force Bayes concept learning in detail.
3. Derive the equation for the least-squared error hypothesis.
4. Discuss the Naïve Bayes classifier with a suitable example.
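No worked answer appears for question 4 in this excerpt. As a hedged sketch, a naive Bayes classifier chooses the target value v maximizing P(v) · Πi P(ai | v), estimating each probability by frequency counts (the toy weather data below is an assumption):

    # Naive Bayes classifier sketch: argmax_v P(v) * product_i P(a_i | v),
    # with probabilities estimated from frequency counts on toy data.
    from collections import Counter, defaultdict

    def train_naive_bayes(examples):
        """examples: list of (attribute_tuple, target_value)."""
        priors = Counter(v for _, v in examples)
        cond = defaultdict(Counter)        # cond[(i, v)][a] = count of a_i = a given v
        for attrs, v in examples:
            for i, a in enumerate(attrs):
                cond[(i, v)][a] += 1
        total = len(examples)

        def classify(attrs):
            def score(v):
                p = priors[v] / total              # estimate of P(v)
                for i, a in enumerate(attrs):
                    p *= cond[(i, v)][a] / priors[v]   # estimate of P(a_i | v)
                return p
            return max(priors, key=score)
        return classify

    # Toy weather data: (outlook, wind) -> play?
    data = [(("sunny", "weak"), "yes"), (("sunny", "strong"), "no"),
            (("rain", "weak"), "yes"), (("rain", "strong"), "no"),
            (("sunny", "weak"), "yes")]
    classify = train_naive_bayes(data)
    print(classify(("sunny", "weak")))  # expected: "yes"

The conditional independence assumption described above is what justifies multiplying the per-attribute probabilities instead of estimating the full joint P(a1, . . . , an | v).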
