AAI Unit 2
AI TECHNOLOGIES
• An Artificial Neural Network (ANN) is an information processing
paradigm inspired by the brain. Like people, ANNs learn by
example. An ANN is configured for a specific application, such as
pattern recognition or data classification, through a learning
process. Learning largely involves adjustments to the synaptic
connections that exist between the neurons.
• Artificial Neural Networks (ANNs) are a type of machine learning
model that are inspired by the structure and function of the
human brain. They consist of layers of interconnected “neurons”
that process and transmit information.
• There are several different architectures for ANNs, each with its
own strengths and weaknesses. Some of the most common
architectures include:
• Feedforward Neural Networks: This is the simplest type of ANN
architecture, where the information flows in one direction from input to
output. The layers are fully connected, meaning each neuron in a layer
is connected to all the neurons in the next layer.
• Recurrent Neural Networks (RNNs): These networks have a “memory”
component, where information can flow in cycles through the network.
This allows the network to process sequences of data, such as time
series or speech.
• Convolutional Neural Networks (CNNs): These networks are designed
to process data with a grid-like topology, such as images. The layers
consist of convolutional layers, which learn to detect specific features
in the data, and pooling layers, which reduce the spatial dimensions of
the data.
• Autoencoders: These are neural networks that are used for unsupervised
learning. They consist of an encoder that maps the input data to a lower-
dimensional representation and a decoder that maps the representation back to
the original data.
• Generative Adversarial Networks (GANs): These are neural networks that are
used for generative modeling. They consist of two parts: a generator that learns
to generate new data samples, and a discriminator that learns to distinguish
between real and generated data.
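• As a rough structural illustration of several of these architectures, the hedged sketch below defines a small feedforward network, a CNN, an RNN, and an autoencoder with the Keras Sequential API. It assumes TensorFlow/Keras is available, and all layer sizes and input shapes are arbitrary example values, not taken from these notes.
```python
# Hedged sketch: minimal Keras definitions of several architectures listed above.
# Layer sizes and input shapes are illustrative assumptions only.
import tensorflow as tf
from tensorflow.keras import layers

# Feedforward network: fully connected layers, information flows input -> output.
feedforward = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# Convolutional network: convolution + pooling layers for grid-like data such as images.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# Recurrent network: a simple RNN layer keeps a hidden state across time steps.
rnn = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 8)),
    layers.SimpleRNN(32),
    layers.Dense(1),
])

# Autoencoder: encoder compresses the input, decoder reconstructs it.
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    layers.Dense(32, activation="relu"),      # encoder
    layers.Dense(784, activation="sigmoid"),  # decoder
])
```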
• The model of an artificial neural network can be specified by three entities:
• Interconnections
• Activation functions
• Learning rules
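• As a minimal sketch of how these three entities fit together, a single artificial neuron can be written as a weighted sum of inputs passed through an activation function, with a simple error-correction update standing in for the learning rule. This assumes NumPy, and the weights, inputs, and learning rate below are made-up illustrative values.
```python
# Minimal single-neuron sketch in NumPy.
# All numbers below are illustrative assumptions, not taken from these notes.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.2, 0.8])      # input signals
w = np.array([0.1, -0.4, 0.3])     # interconnection weights
b = 0.0                            # bias
lr = 0.1                           # learning rate
target = 1.0                       # desired output

# Activation function applied to the weighted sum of inputs.
y = sigmoid(np.dot(w, x) + b)

# Learning rule: nudge the weights to reduce the error (delta-rule style).
error = target - y
w = w + lr * error * x
b = b + lr * error

print(y, w, b)
```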
• Interconnections:
• Interconnection can be defined as the way processing elements
(neurons) in an ANN are connected to each other. Hence, the
arrangement of these processing elements and the geometry of their
interconnections are essential in an ANN.
These arrangements always have two layers that are common to all
network architectures: the input layer, which buffers the input signal,
and the output layer, which generates the output of the network. The
third layer is the hidden layer, whose neurons are kept in neither the
input layer nor the output layer. These neurons are hidden from the
people interfacing with the system and act as a black box to them.
Increasing the number of hidden layers and their neurons increases the
system's computational and processing power, but it also makes
training the system more complex.
• There exist five basic types of neuron connection architecture:
1. Single-layer feed-forward network
2. Multilayer feed-forward network
3. Single node with its own feedback
4. Single-layer recurrent network
5. Multilayer recurrent network
• 1. Single-layer feed-forward network
• In this type of network, there are only two layers, the input layer and
the output layer, but the input layer does not count because no
computation is performed in it. The output layer is formed by applying
different weights to the input nodes and taking the cumulative effect
per node. The neurons of this layer then collectively compute the
output signals.
• 2. Multilayer feed-forward network
• This network also has a hidden layer that is internal to the network and
has no direct contact with the external layer. The existence of one or
more hidden layers makes the network computationally stronger. It is a
feed-forward network because information flows through the input
function and the intermediate computations used to determine the
output Z. There are no feedback connections in which outputs of the
model are fed back into itself.
MULTILAYER FEED-FORWARD NETWORK
• 3. Single node with its own feedback
• When outputs can be directed back as inputs to nodes in the same
layer or a preceding layer, the result is a feedback network.
Recurrent networks are feedback networks with closed loops. The
simplest case is a single recurrent network in which one neuron has
feedback to itself.
SINGLE NODE WITH ITS OWN FEEDBACK
• 4. Single-layer recurrent network
• This is a single-layer network with a feedback connection, in which
a processing element's output can be directed back to itself, to
another processing element, or to both. A recurrent neural network
is a class of artificial neural networks where connections between
nodes form a directed graph along a sequence. This allows it to
exhibit dynamic temporal behavior for a time sequence. Unlike
feedforward neural networks, RNNs can use their internal state
(memory) to process sequences of inputs.
SINGLE-LAYER RECURRENT NETWORK
• 5. Multilayer recurrent network
• In this type of network, a processing element's output can be directed
to processing elements in the same layer and in the preceding layer,
forming a multilayer recurrent network. They perform the same task for
every element of a sequence, with the output being dependent on the
previous computations. Inputs are not needed at each time step. The
main feature of a Recurrent Neural Network is its hidden state, which
captures some information about a sequence.
MULTILAYER RECURRENT NETWORK
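• As a rough illustration of the hidden state that these recurrent networks rely on, the sketch below applies the standard recurrent update h_t = tanh(W_x x_t + W_h h_(t-1) + b) over a short sequence in plain NumPy. The sizes, weights, and inputs are made-up assumptions, not taken from these notes.
```python
# Hedged sketch of a single recurrent layer's hidden-state update in NumPy.
# Sizes, weights, and inputs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, steps = 4, 3, 5

W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden (feedback) weights
b = np.zeros(hidden_size)

h = np.zeros(hidden_size)                 # hidden state: the network's "memory"
sequence = rng.normal(size=(steps, input_size))

for x_t in sequence:
    # The new hidden state depends on the current input and the previous hidden state.
    h = np.tanh(W_x @ x_t + W_h @ h + b)
    print(h)
```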
IMPLEMENTING ARTIFICIAL NEURAL NETWORK TRAINING PROCESS IN PYTHON
Biological neuron: a synapse is able to increase or decrease the strength of the connection; this is where information is stored.
Artificial neuron: the artificial signals can be changed by weights, in a manner similar to the physical changes that occur in the synapses.
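• As a minimal sketch of such a training process in Python, a tiny two-layer network can be trained with gradient descent on the XOR problem using NumPy. The architecture, data, and hyperparameters below are illustrative choices, not taken from these notes.
```python
# Hedged sketch: training a tiny 2-layer neural network on XOR with NumPy.
# Architecture, data, and hyperparameters are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

rng = np.random.default_rng(42)
W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))
lr = 0.5

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Weight updates (gradient descent = the learning rule)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # should approach [0, 1, 1, 0]
```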
DIFFERENCE BETWEEN THE HUMAN BRAIN AND COMPUTERS IN TERMS OF HOW INFORMATION IS PROCESSED
Human brain: biological neurons compute slowly (several milliseconds per computation). Computer: artificial neurons compute fast (less than 1 nanosecond per computation).
Human brain: the brain represents information in a distributed way because neurons are unreliable and could die at any time. Computer: in computer programs, every bit has to function as intended, otherwise the program would crash.
Human brain: the brain changes its connectivity over time to represent new information and requirements imposed on us. Computer: the connectivity between the electronic components in a computer never changes unless we replace its components.
• Types of Reinforcement:
• 1. Positive: Positive Reinforcement is defined as when an event,
occurring because of a particular behavior, increases the strength
and frequency of the behavior.
• Advantages of positive reinforcement:
1. Maximizes performance
2. Sustains change for a long period of time
• Disadvantage: too much reinforcement can lead to an overload of
states, which can diminish the results.
• 2. Negative: Negative Reinforcement is defined as the strengthening
of behavior because a negative condition is stopped or avoided.
• Advantages of negative reinforcement:
1. Increases behavior
2. Provides defiance to a minimum standard of performance
• Disadvantage: it only provides enough to meet the minimum behavior.
• Elements of Reinforcement Learning
• i) Policy: Defines the agent’s behavior at a given time.
• ii) Reward Function: Defines the goal of the RL problem by
providing feedback.
• iii) Value Function: Estimates long-term rewards from a
state.
• iv) Model of the Environment: Helps in predicting future
states and rewards for planning.
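• To make these elements concrete, here is a hedged sketch of tabular Q-learning on a tiny made-up corridor environment: the Q-table serves as the value function, the epsilon-greedy choice over it is the policy, and the step function stands in for the reward function and the model of the environment. All states, rewards, and hyperparameters are illustrative assumptions, not content from these notes.
```python
# Hedged sketch: tabular Q-learning on a tiny 1-D corridor (illustrative assumptions only).
import numpy as np

n_states, n_actions = 5, 2          # states 0..4, actions: 0 = left, 1 = right
goal = n_states - 1
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

Q = np.zeros((n_states, n_actions))  # value function estimate

def step(state, action):
    """Environment model: move left/right, reward 1 only when reaching the goal."""
    next_state = max(0, state - 1) if action == 0 else min(goal, state + 1)
    reward = 1.0 if next_state == goal else 0.0
    done = next_state == goal
    return next_state, reward, done

for episode in range(500):
    state = 0
    done = False
    while not done:
        # Policy: epsilon-greedy over the current value estimates.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q toward reward + discounted best future value.
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print(Q.round(2))  # learned state-action values
```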
• Applications of Reinforcement Learning
• i) Robotics: Automating tasks in structured environments
like manufacturing.
• ii) Game Playing: Developing strategies in complex games
like chess.
• iii) Industrial Control: Real-time adjustments in operations
like refinery controls.
• iv) Personalized Training Systems: Customizing
instruction based on individual needs.
• Advantages and Disadvantages of Reinforcement Learning
• Advantages:
• 1. Reinforcement learning can be used to solve very complex problems
that cannot be solved by conventional techniques.
• 2. The model can correct the errors that occurred during the training
process.
• 3. In RL, training data is obtained via the direct interaction of the agent
with the environment.
• 4. Reinforcement learning can handle environments that are non-
deterministic, meaning that the outcomes of actions are not always
predictable. This is useful in real-world applications where the
environment may change over time or is uncertain.
• 5. Reinforcement learning can be used to solve a wide range
of problems, including those that involve decision making,
control, and optimization.
• 6. Reinforcement learning is a flexible approach that can be
combined with other machine learning techniques, such as
deep learning, to improve performance.
• Disadvantages:
• 1. Reinforcement learning is not preferable to use for solving
simple problems.
• 2. Reinforcement learning needs a lot of data and a lot of
computation.
• 3. Reinforcement learning is highly dependent on the quality of
the reward function. If the reward function is poorly designed,
the agent may not learn the desired behavior.
• 4. Reinforcement learning can be difficult to debug and interpret.
It is not always clear why the agent is behaving in a certain way,
which can make it difficult to diagnose and fix problems.
• Conclusion
• Reinforcement learning is a powerful technique for decision-
making and optimization in dynamic environments. Its
applications range from robotics to personalized learning
systems. However, the complexity of RL requires careful
design of reward functions and significant computational
resources. By understanding its principles and applications,
one can leverage RL to solve intricate real-world problems.
TRANSFER LEARNING
• How are they updated? Must be trained explicitly vs. learns from each interaction.