Applications of DL: (a) Computer vision: object detection, image classification, image segmentation. (b) Natural language processing: automatic text generation, language translation, sentiment analysis, speech recognition. (c) Reinforcement learning: game playing, robotics, control systems.
Challenges of DL: data availability, computational resources, time-consuming training, interpretability, overfitting.
Adv: high accuracy, automated feature engineering, scalability, flexibility, continual improvement.
Disadv: high computational requirements, low interpretability, overfitting, black-box nature.
Gradient descent types: batch, stochastic, mini-batch (contrasted in the sketch below).
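A minimal sketch contrasting the three variants on a least-squares problem; the toy data, learning rate, epoch count, and batch size of 16 are illustrative assumptions, not taken from the notes. The only difference between the three is how many samples feed each weight update.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # toy features (assumed for illustration)
y = X @ np.array([2.0, -1.0, 0.5])      # toy targets from a known linear rule

def gradient(w, Xb, yb):
    # gradient of the mean squared error of Xb @ w against yb
    return Xb.T @ (Xb @ w - yb) / len(yb)

def train(batch_size, lr=0.1, epochs=50):
    w = np.zeros(3)
    for _ in range(epochs):
        idx = rng.permutation(len(X))   # reshuffle each epoch
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            w -= lr * gradient(w, X[b], y[b])
    return w

print(train(batch_size=len(X)))  # batch: the whole dataset per update
print(train(batch_size=1))       # stochastic: one sample per update
print(train(batch_size=16))      # mini-batch: small batches per update
# all three should approach the true weights [2, -1, 0.5]
```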
Unit2: Feedforward network layers: an input layer, hidden layer(s), and an output layer, each made up of neurons.
Features of backpropagation: it is the gradient descent method as used in the case of a simple perceptron network with differentiable units. It differs from other networks in the process by which the weights are calculated during the learning period of the network. Training is done in three stages: feed-forward of the input training pattern; calculation and backpropagation of the error; updating of the weights.
Backpropagation algorithm: Step 1: inputs X arrive through the preconnected path. Step 2: the input is modeled using true weights W; weights are usually chosen randomly. Step 3: calculate the output of each neuron from the input layer, through the hidden layer(s), to the output layer. Step 4: calculate the error in the outputs: Error = Actual output - Desired output. Step 5: from the output layer, go back to the hidden layer and adjust the weights to reduce the error. Step 6: repeat the process until the desired output is achieved. (The sketch below walks through these steps.)
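A minimal numpy walk-through of Steps 1-6 on a toy XOR task; the network size, learning rate, and iteration count are assumptions for illustration, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Step 1: inputs X (toy XOR task, assumed) and desired outputs T
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Step 2: weights (and biases) chosen randomly
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(10000):                  # Step 6: repeat until outputs match
    # Step 3: feed-forward from input layer to hidden layer to output layer
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Step 4: error = actual output - desired output
    E = Y - T
    # Step 5: backpropagate from the output layer to adjust the weights
    dY = E * Y * (1 - Y)                # output-layer delta (sigmoid derivative)
    dH = (dY @ W2.T) * H * (1 - H)      # hidden-layer delta
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(Y.round(2))                       # should approach [[0], [1], [1], [0]]
```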
Types of backpropagation: Static backpropagation: a network designed to map static inputs to static outputs. Recurrent backpropagation: another network, used for fixed-point learning; activation in recurrent backpropagation is fed forward until a fixed value is reached.
Advantages: it is simple, fast, and easy to program; only the numbers of the input are tuned, not any other parameters; it is flexible and efficient; no need for users to learn any special functions.
Disadvantages: it is sensitive to noisy data and irregularities, and noisy data can lead to inaccurate results; performance is highly dependent on the input data; training can take a long time; the matrix-based approach is preferred over a mini-batch.
Principal component analysis (PCA): an unsupervised learning algorithm used for dimensionality reduction in machine learning. PCA works by considering the variance of each attribute, because a high-variance attribute shows a good split between the classes; hence it reduces the dimensionality. Some real-world applications of PCA: image processing, movie recommendation systems, optimizing the power allocation in various communication channels.
The PCA algorithm is based on some mathematical concepts: variance and covariance, eigenvalues and eigenvectors.
Terms used in the PCA algorithm: Dimensionality: the number of features or variables present in the given dataset, i.e. the number of columns in the dataset. Correlation: signifies how strongly two variables are related to each other; if one changes, the other variable also changes. Orthogonal: the variables are not correlated with each other, so the correlation between the pair of variables is zero. Eigenvectors: if there is a square matrix A and a non-zero vector v, then v is an eigenvector of A if Av is a scalar multiple of v, i.e. v satisfies the condition Av = λv. Eigenvector equation: the equation used to find the eigenvectors of any square matrix, Av = λv, where A is the given square matrix, v is an eigenvector of A, and λ is the scalar multiple (the eigenvalue). Applications of eigenvectors: PCA, solving differential equations, stability analysis, quantum mechanics, image compression. Covariance matrix: a matrix containing the covariance between each pair of variables. (The eigenvector equation is checked numerically in the sketch below.)
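A quick numerical check of Av = λv with numpy; the 2x2 matrix A is an arbitrary example, not from the notes.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # example symmetric matrix (assumed)

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are the vectors v

# Verify Av = λv for each eigenpair
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
print(eigvals)                        # -> [3. 1.] for this A
```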
Principal components in PCA: some properties of these principal components: each principal component must be a linear combination of the original features; the components are orthogonal, i.e. the correlation between a pair of them is zero; the importance of each component decreases when going from 1 to n, meaning PC 1 has the most importance and PC n the least.
Steps of the PCA algorithm: getting the dataset; representing the data in a structure; standardizing the data; calculating the covariance matrix of Z; calculating the eigenvalues and eigenvectors; sorting the eigenvectors; calculating the new features, i.e. the principal components; removing less important features from the new dataset. (These steps are sketched in numpy below.)
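A compact numpy sketch of the listed steps; the toy dataset and the choice to keep 2 components are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # toy dataset: 200 samples, 5 features

# Standardizing the data
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Calculating the covariance matrix of Z
C = np.cov(Z, rowvar=False)

# Calculating and sorting the eigenvalues/eigenvectors, largest variance first
eigvals, eigvecs = np.linalg.eigh(C)     # eigh: C is symmetric
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Calculating the new features (principal components) and removing the
# less important ones: keep only the top k components
k = 2
X_reduced = Z @ eigvecs[:, :k]
print(X_reduced.shape)                   # -> (200, 2)
```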
Applications of principal component analysis: PCA is mainly used as a dimensionality reduction technique in various AI applications such as computer vision, image compression, etc. It can also be used for finding hidden patterns when data has high dimensions. Some fields where PCA is used: finance, data mining, psychology, etc.

Unit3: Autoencoders: Autoencoders are neural networks that stack numerous non-linear transformations to reduce the input into a low-dimensional latent space (layers). They use an encoder-decoder system (see the sketch after the next list). Autoencoders may be used to reduce dimensionality when the latent space has fewer dimensions than the input.
Relation between autoencoder and PCA: similar objective, linearity vs. non-linearity, dimensionality reduction, interpretability.
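A minimal encoder-decoder sketch in PyTorch; the 784-dimensional input (e.g. a flattened 28x28 image), 32-dimensional latent space, and optimizer settings are illustrative assumptions.

```python
import torch
from torch import nn

# Encoder compresses the input into a low-dimensional latent space;
# decoder reconstructs the input from that code.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))

params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)        # stand-in batch; real data would go here
for _ in range(100):
    z = encoder(x)             # latent code, fewer dimensions than the input
    x_hat = decoder(z)         # reconstruction
    loss = loss_fn(x_hat, x)   # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```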
Denoising autoencoder: a type of artificial neural network used to learn efficient data codings in an unsupervised manner. The primary aim of a denoising autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction, by introducing a reconstruction constraint. How it works, key characteristics: noisy input data, architecture, training objective, noise removal. Applications: image, audio-signal, and data restoration (regularization, hyperparameters, evaluation). (The training objective is sketched below.)
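A sketch of the denoising objective: corrupt the input with noise and train the network to reconstruct the clean version. The stacked encoder-decoder, noise level of 0.3, and training settings are assumptions.

```python
import torch
from torch import nn

model = nn.Sequential(                     # encoder + decoder in one stack
    nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32),
    nn.ReLU(), nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x_clean = torch.rand(64, 784)              # stand-in clean batch
for _ in range(100):
    # noisy input data: the network only ever sees the corrupted version
    x_noisy = x_clean + 0.3 * torch.randn_like(x_clean)
    x_hat = model(x_noisy)
    # training objective: reconstruct the CLEAN input, so noise removal
    # is what the network is forced to learn
    loss = nn.functional.mse_loss(x_hat, x_clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```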
Contractive autoencoder: a type of neural network architecture used for unsupervised learning; it keeps the autoencoder structure and adds a contractive penalty. Objective function: reconstruction error plus the contractive penalty. Appli: noise robustness, data denoising, feature learning (hyperparameters, evaluation). (The objective is sketched below.)
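A sketch of the contractive objective: reconstruction error plus λ times the squared Frobenius norm of the encoder's Jacobian. For a sigmoid encoder h = σ(Wx + b) that norm has the closed form Σ_j (h_j(1 - h_j))² Σ_i W_ji²; the layer sizes and penalty weight λ are assumptions.

```python
import torch
from torch import nn

enc = nn.Linear(784, 32)                  # sigmoid encoder h = σ(Wx + b)
dec = nn.Linear(32, 784)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                       lr=1e-3)
lam = 1e-4                                # assumed penalty weight λ

x = torch.rand(64, 784)                   # stand-in batch
for _ in range(100):
    h = torch.sigmoid(enc(x))
    x_hat = dec(h)
    recon = nn.functional.mse_loss(x_hat, x)
    # Contractive penalty: closed-form ||J||_F^2 for a sigmoid layer,
    # sum_j (h_j(1-h_j))^2 * sum_i W_ji^2, averaged over the batch
    dh = (h * (1 - h)) ** 2               # (batch, 32)
    w2 = (enc.weight ** 2).sum(dim=1)     # (32,): sum over input dims
    contractive = (dh * w2).sum(dim=1).mean()
    loss = recon + lam * contractive      # objective: recon error + penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```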
Unit4: LeNet-5 is one of the earliest pre-trained models, proposed by Yann LeCun and others in 1998 in the research paper Gradient-Based Learning Applied to Document Recognition. They used this architecture for recognizing handwritten and machine-printed characters (the layer stack is sketched at the end of this unit). AlexNet is a pioneering convolutional neural network (CNN) used primarily for image recognition and classification tasks. It won the ImageNet Large Scale Visual Recognition Challenge in 2012, marking a breakthrough in deep learning. AlexNet's architecture, with its innovative use of convolutional layers and rectified linear units (ReLU), laid the foundation for modern deep learning models, advancing computer vision and pattern recognition applications.
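A PyTorch sketch of the LeNet-5 layer stack (two convolution-pooling stages, then three fully connected layers). The layer sizes follow the usual description of the 1998 architecture; ReLU and max-pooling here are modern stand-ins for the original tanh and average-pooling.

```python
import torch
from torch import nn

class LeNet5(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # 1x32x32 -> 6x28x28
            nn.ReLU(), nn.MaxPool2d(2),        # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),   # -> 16x10x10
            nn.ReLU(), nn.MaxPool2d(2))        # -> 16x5x5
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

net = LeNet5()
out = net(torch.rand(1, 1, 32, 32))    # expects 32x32 grayscale input
print(out.shape)                       # -> torch.Size([1, 10])
```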