
TECHNICAL SEMINAR

Topic:

Name :

Roll number :

Branch : CSE(AIML)

Year & Sem : IV - I

Academic year :

1. Introduction to Neural Networks


 A neural network is a machine learning (ML) model designed to
process data in a way that mimics the function and structure of the
human brain. Neural networks are intricate networks of
interconnected nodes, or artificial neurons, that collaborate to tackle
complicated problems.
 Also referred to as artificial neural networks (ANNs), neural nets or
deep neural networks, neural networks represent a type of deep
learning technology that's classified under the broader field of
artificial intelligence (AI).
 Neural networks are widely used in a variety of applications,
including image recognition, predictive modeling, decision-making
and natural language processing (NLP). Examples of significant
commercial applications over the past 25 years include
handwriting recognition for check processing, speech-to-
text transcription, oil exploration data analysis, weather
prediction and facial recognition.
 Neural networks are typically trained through empirical risk
minimization. This method is based on the idea of optimizing the
network's parameters to minimize the difference, or empirical risk,
between the predicted output and the actual target values in a given
dataset.[4] Gradient-based methods such as backpropagation are
usually used to estimate the parameters of the network.[4] During
the training phase, ANNs learn from labeled training data by
iteratively updating their parameters to minimize a defined loss
function.[5] This method allows the network to generalize to unseen
data; a minimal training-loop sketch is given below.
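A minimal sketch of this training loop follows. It is not part of the seminar material: the 2-4-1 network size, the sigmoid activation, the learning rate, and the XOR-style toy dataset are all illustrative assumptions. The example builds a tiny two-layer feedforward network with NumPy and trains it by gradient descent, using backpropagation to compute the gradients of the mean squared error (the empirical risk) with respect to every parameter.

# Minimal sketch (assumptions: a 2-4-1 network, sigmoid activations,
# learning rate 0.5, and an XOR-style toy dataset; none of this is
# specified in the seminar document).
import numpy as np

rng = np.random.default_rng(0)

# Small labeled dataset: inputs X and target outputs y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2-4-1 feedforward network.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5                                  # learning rate
for step in range(10_000):
    # Forward pass: compute predictions from the current parameters.
    h = sigmoid(X @ W1 + b1)              # hidden activations
    y_hat = sigmoid(h @ W2 + b2)          # network outputs

    # Empirical risk: mean squared error over the labeled dataset.
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass (backpropagation): propagate the error gradient
    # through the layers to obtain a gradient for every parameter.
    d_out = 2 * (y_hat - y) / len(X) * y_hat * (1 - y_hat)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_hid
    db1 = d_hid.sum(axis=0, keepdims=True)

    # Gradient descent update: step each parameter against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
print("predictions:", y_hat.round(2).ravel())

Each iteration runs a forward pass to compute predictions, measures the empirical risk over the labeled data, backpropagates the error gradient through both layers, and then moves each parameter a small step against its gradient, which is exactly the training procedure described in the point above.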
2. History of Neural Networks
 The history of artificial neural networks (ANNs) traces back to the
mid-20th century, drawing inspiration from biological neural
networks. In 1943, Warren McCulloch and Walter Pitts introduced
the first mathematical model of a neuron, which laid the
groundwork for future neural network research. Their work, which
used symbolic logic to describe neural processes, connected
computational models of neurons to Turing machines, showing
that these models could possess the same computational power
as traditional computing systems. This approach led to two
primary directions in research: one that sought to replicate
biological processes and another focused on the practical
application of neural networks to artificial intelligence. In 1949,
D.O. Hebb's learning hypothesis, known as Hebbian
learning, proposed a mechanism for unsupervised learning,
where the strength of connections between neurons increased
when they were activated together. This idea formed the
foundation for early computational models, such as the
perceptron, developed by Frank Rosenblatt in 1958. The
perceptron, a simple feedforward network, sparked significant
interest as an algorithm for pattern recognition.
 Despite early optimism, progress in neural network research
slowed in the 1960s, partly due to the limitations of single-layer
networks and the critiques presented in Marvin Minsky and
Seymour Papert's book Perceptrons (1969). Their work
highlighted that perceptrons could not solve certain problems,
such as the XOR function, effectively stalling research for a time.
However, significant strides were made in the late 1960s and 1970s, as Alexey
Ivakhnenko and colleagues introduced methods for training
deeper neural networks, which laid the groundwork for more
complex models. One of the notable advances was Shun'ichi
Amari’s work in 1967, which introduced the first deep learning
multilayer perceptron trained by stochastic gradient descent
(SGD). This technique would later become a cornerstone for
training deep neural networks, enabling the network to learn
complex, non-linear relationships. Despite these developments,
the practical application of deep neural networks remained
limited due to insufficient computational power and the lack of
effective training methods.
 The resurgence of neural networks in the 1980s and 1990s can
be attributed to advances in training algorithms, most notably the
backpropagation algorithm. Backpropagation, a method for efficiently
training multi-layer networks, had been discussed by Rosenblatt as
early as 1962, but it was developed into its modern form during the
1970s and 1980s through contributions from researchers such as Seppo
Linnainmaa and Paul Werbos, and was finally popularized in 1986 by
David Rumelhart and colleagues. The introduction of backpropagation significantly
enhanced the training of deep neural networks, allowing them to
adjust their weights by propagating error gradients backwards
through the layers. This development, combined with
improvements in hardware and algorithmic efficiency, renewed
interest in ANNs and set the stage for the breakthroughs of the
2010s. In particular, the development of AlexNet in 2012, a deep
convolutional neural network (CNN), demonstrated the power of
deep learning by winning the ImageNet competition, marking the
beginning of the modern deep learning era. This period saw a
significant increase in research and application of neural
networks, leading to advancements in image recognition, natural
language processing, and generative models like transformers
and diffusion models.
