
DEEP LEARNING

Prepared by:

MAHMOUD HASSAN MAHMOUD - SECTION 9 - 324243194
MAHMOUD SHARBLI MOHSEN - SECTION 9 - 324243466
MUSTAFA MOHSEN SAAD - SECTION 9 - 324243087
MUSTAFA FARAG MAGHAWRY - SECTION 9 - 324243126
MAHMOUD SALAH GAMAL - SECTION 9 - 324243477

Prepared for: Dr. Asmaa El Said

DATE: 2024/4/26


Abstract:

One of data science's fastest-growing subfields, deep learning includes artificial neural network-based algorithms designed to interpret unstructured data such as text, speech, video, and images. Though deep learning dates to the mid-1980s, its full potential has only recently been realized. Over the last five years, deep learning has risen rapidly to the forefront of contemporary research. This chapter explores the practical applications, provides a conceptual foundation for comprehending deep learning, and clarifies the early history. It delves into the significant distinctions between state-of-the-art methods used in deep learning and conventional neural network topologies, offering a thorough derivation of the backpropagation algorithm in vector-matrix form. It also clarifies the connection between deep learning software, computational graphs, and backpropagation. Many deep learning models, such as autoencoders, recurrent neural networks, deep convolutional neural networks, and stochastic techniques like Boltzmann machines, are thoroughly covered in this chapter. It also discusses important practical issues of using huge datasets to train these models, highlighting the critical importance of GPU computing.

Table of Contents:

1. Introduction to Deep Learning
2. What is Deep Learning?
3. Importance of Deep Learning
3.1. Automatic Feature Learning
3.2. Handling Large and Complex Data
3.3. Handling Non-Linear Relationships
3.4. Handling Structured and Unstructured Data
3.5. Predictive Modelling
4. Common Types of DL Models
5. Advantages and Challenges of Deep Learning Models
6. Why Deep Learning?
7. Deep Learning vs. Machine Learning
8. Future of Deep Learning
9. Conclusion

Introduction:

Deep learning is one of the machine learning methods that dominate various application areas. Machine learning functions similarly to a newborn. There are billions of linked neurons within the brain, which are engaged when a message is sent to the brain. When a baby is shown a vehicle, for example, a particular set of neurons is activated. When the infant is shown another automobile of a different model, the same set of neurons plus some extra neurons may be triggered. Thus, humans are trained and educated during childhood, and through this process, their neurons and the pathways linking them are modified. If AI is like a brain, then machine learning is the process through which AI gains new cognitive abilities, and deep learning is the best self-training system now available.

Machine learning is the study of making computers learn and improve in ways that mimic or outperform the human brain. Developers train models to predict the intended result from a set of inputs. This learned understanding takes the place of the hand-written computer programs of the past.

The whole discipline of AI referred to as machine learning is founded on this principle of learning by example, of which deep learning is a subset. Rather than being given a long list of rules to follow in order to solve a problem, the computer is trained using many instances from the training data sets: neural networks are trained, and their connections adjusted accordingly.

The machine then receives fresh inputs and generates outputs. Real-world applications of this technology include the spam filters in Gmail, Yahoo, and the Truecaller app, which filter out unwanted messages; Amazon's Alexa; and the recommended videos that appear on our YouTube homepage based on the kinds of videos we watched before. Tesla, Apple, and Nissan are among the companies developing autonomous technology based on deep learning.

1. Introduction to Deep Learning:

Artificial neural networks are the foundation of the machine learning subfield known as deep learning. It has the ability to recognize intricate links and patterns in data, and we don't have to explicitly program everything. It has grown in popularity in recent years because of the availability of enormous datasets and advancements in computing power. Its foundation is deep neural networks (DNNs): artificial neural networks (ANNs) with many layers. These networks, which are built to learn from vast amounts of data, are modeled after the composition and operation of actual neurons in the human brain.

1. Deep Learning is a branch of machine learning that models and resolves complicated problems using neural networks. Neural networks are made up of layers of connected nodes that process and transform input; they are modeled after the structure and operation of the human brain.

2. The use of deep neural networks, which include numerous layers of connected nodes, is a fundamental aspect of deep learning. By identifying hierarchical patterns and features in the data, these networks are able to learn complicated representations of the data. Without requiring manual feature engineering, deep learning algorithms can automatically learn from data and improve as more of it becomes available.

3. Deep Learning has achieved significant success in various fields, including image recognition, natural language processing, speech recognition, and recommendation systems. Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and deep belief networks (DBNs) are a few of the well-known Deep Learning architectures.

4. It usually takes a lot of data and processing power to train deep neural networks. Deep neural network training has become simpler, though, thanks to the advent of cloud computing and the creation of specialized hardware, including Graphics Processing Units (GPUs).

In conclusion, deep learning is a branch of machine learning that studies how to represent and resolve complicated problems using deep neural networks. Deep Learning has shown great promise in a number of domains, and as more data and more potent computational resources become available, its application is anticipated to increase.

2. What is Deep Learning?

Artificial neural network architecture forms the foundation of the machine learning subfield known as deep learning. An artificial neural network (ANN) processes and learns from input data by utilizing layers of networked nodes, or neurons. A fully connected deep neural network consists of an input layer followed by one or more hidden layers connected in sequence. Every neuron receives input either from the input layer or from the neurons in the previous layer, and the output of each layer serves as the input to the next, until the final layer generates the network's output. The layers of the neural network apply a series of nonlinear transformations to the input data, allowing the network to learn complex representations of it.
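This layer-by-layer flow can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not a real trained network: the layer sizes, random weights, and ReLU activation are all assumptions chosen for demonstration.

```python
import numpy as np

def relu(x):
    # A common nonlinear activation: negative values become zero
    return np.maximum(0.0, x)

def forward(x, layers):
    # The output of each layer becomes the input of the next,
    # with a nonlinear transformation applied at every step
    for W, b in layers:
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(42)
# Fully connected network: 3 inputs -> 5 hidden -> 4 hidden -> 2 outputs
sizes = [3, 5, 4, 2]
layers = [(rng.normal(size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = np.array([1.0, -0.5, 0.25])
y = forward(x, layers)
print(y.shape)  # (2,)
```

With random weights the output is meaningless; training consists of adjusting each `W` and `b` (e.g. by backpropagation) so the final layer's output matches the desired targets.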

Today, deep learning has become one of the most popular and visible areas of machine learning, due to its success in a variety of applications, such as computer vision, natural language processing, and reinforcement learning. Deep learning can be used for supervised, unsupervised, and reinforcement machine learning, and it uses a variety of methods to process each.

Common Types of DL Models

Recurrent Neural Networks (RNNs), CNNs, and Deep Belief Networks (DBNs), together with Restricted Boltzmann Machines (RBMs, one of the most widely used building blocks for deep architectures), Autoencoders, Sparse Coding, and Recursive Neural Tensor Networks (RNTNs), can be considered the six major families of deep learning methods. These techniques are more sophisticated iterations of conventional NNs with improved training. There are other categories as well; nevertheless, Figure 7 displays the most prevalent ones.

3. Importance of Deep Learning

The value of deep learning is found in its ability to solve several problems that are too complex or unattainable for human experts or conventional algorithms. It can manage complicated and huge data sets that include text, audio, video, photos, and more. It can also learn from unstructured or unlabeled data, so it doesn't need supervision or human input to retrieve valuable information. Furthermore, deep learning may be trained to do activities that are hard to program explicitly, such as creating realistic visuals, composing music, or playing video games. As a result, the technology has become essential to our daily lives in several ways. The following are some of the factors that make deep learning significant:

3.1. Automatic Feature Learning

One of the significant advantages of Deep Learning is its ability to independently learn features from the data, eliminating the need for manual feature design. This proves particularly beneficial for tasks where defining features is challenging, such as image recognition, natural language processing, and speech comprehension. There are numerous instances of Deep Learning algorithms automatically learning features.

1) Image recognition: Deep Learning algorithms, particularly Convolutional Neural Networks (CNNs), can self-learn image features. They can identify faces, objects, scenes, and actions in images and videos, segment images, convert 2D images into 3D models, and create realistic images from sketches or text. For instance, in 2015, a Deep Learning model, ResNet-152, surpassed human performance on the ImageNet challenge with an error rate of 3.57%, compared to the human error rate of 5.1%.

2) Natural language processing (NLP): in NLP, Deep Learning algorithms, specifically Recurrent Neural Networks (RNNs), can self-learn text features. They can generate lifelike text and develop chatbots. For instance, a Deep Learning model, GPT-3, outperformed the previous best score of 87.6% by T5 on the GLUE benchmark, achieving a score of 88.5%.

3) Speech recognition: Deep Learning algorithms have demonstrated outstanding performance in speech recognition tasks, including text transcription and language translation. For instance, on the LibriSpeech test-clean dataset, a Deep Learning model known as Wav2Vec 2.0 achieved a word error rate of 1.9%, which is close to the human performance of 1.8%.
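As a minimal illustration of features being learned rather than hand-designed, the sketch below trains a tiny linear autoencoder with plain gradient descent. Everything here (data, sizes, learning rate) is an invented toy; real systems use deep networks and frameworks, but the principle is the same: the compressed code is discovered automatically from the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4-dimensional points that secretly live on a 2-D subspace
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 4))

# Autoencoder: compress 4 features to a 2-D code, then reconstruct
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

def loss(X, W_enc, W_dec):
    # Mean squared reconstruction error
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial = loss(X, W_enc, W_dec)
lr = 0.05
for _ in range(1000):
    H = X @ W_enc                 # the learned features (the "code")
    G = 2 * (H @ W_dec - X) / X.size
    grad_dec = H.T @ G            # gradient w.r.t. decoder weights
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
```

No one tells the model which combinations of the 4 raw inputs matter; minimizing reconstruction error alone drives `W_enc` toward the informative 2-D subspace, which is exactly the "automatic feature learning" described above.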

3.2. Handling Large and Complex Data

Deep Learning algorithms can deal with large and complex datasets that would be challenging for traditional Machine Learning algorithms to handle. This makes them useful for extracting insights from big data sources such as social media posts, webpages, videos, audio files, and sensor data.

Some of the examples are:

1) Social media analysis: Deep Learning algorithms can analyse millions of social media posts and comments to find patterns, trends, and sentiments by using NLP models that can understand and generate natural language. For example, BERT achieved an accuracy of 95.7% on predicting the sentiment of tweets.

2) Web content extraction: Deep Learning algorithms can extract information from millions of webpages, such as titles, headings, links, images, or tables, by using computer vision models that can understand and generate visual content. For example, LayoutLM achieved an accuracy of 96.3% on extracting key information from web pages.

3.3. Handling Non-Linear Relationships

The importance of Deep Learning also lies in discovering non-linear relationships in data that would be difficult to detect through traditional methods. This allows it to model complex phenomena and capture higher-level abstractions. Some examples of Deep Learning algorithms handling non-linear relationships are:

1) Physical systems: A Deep Potential Molecular Dynamics (DPMD) model achieved an accuracy of 98.9% on predicting the potential energy surface of water molecules, which is a highly non-linear function of atomic positions.

2) Biological systems: A Deep Learning model called DeepChrome achieved an accuracy of 89.9% on predicting the chromatin state of genes, which is a non-linear function of the histone modifications.

3) Social systems: A Deep Learning model called EmoNet achieved an accuracy of 71.0% on predicting the emotion categories of images, which is a non-linear function of visual features.
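A classic small-scale illustration of this point: XOR is a non-linear function that no single linear layer can represent, yet one tiny hidden layer suffices. The weights below are set by hand for clarity; in practice they would be found by training.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Two hidden units; weights chosen by hand so the network computes XOR,
# a function that is impossible for any purely linear model
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])   # output = h1 - 2*h2

def xor_net(x1, x2):
    h = relu(np.array([x1, x2], dtype=float) @ W1 + b1)
    return float(h @ w2)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
# 0 0 -> 0.0, 0 1 -> 1.0, 1 0 -> 1.0, 1 1 -> 0.0
```

The nonlinearity (ReLU) between the two layers is what lets the network bend the decision surface; stacking many such layers is how deep networks capture the highly non-linear relationships in the examples above.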

3.4. Handling Structured and Unstructured Data

Deep Learning algorithms are capable of processing both structured and unstructured data, including images, text, audio, video, and tabular data. This allows them to combine various data sources and utilize their combined strengths.

Here are some examples of how Deep Learning algorithms handle different types of data:

1) Images: Deep Learning uses Convolutional Neural Networks (CNNs) to process image data. These networks learn image features using filters that traverse the image pixels to generate feature maps.

2) Text: Recurrent Neural Networks (RNNs) are used by Deep Learning for text data. These networks learn text features using hidden states that retain information from previous inputs and outputs. RNNs can generate realistic text, answer questions, summarise documents, and even power chatbots.

3) Audio: Deep Learning algorithms process audio data using either CNNs or RNNs. These networks learn audio features using filters or hidden states that capture the frequency and temporal patterns in the audio signals. They can transcribe speech, translate between languages, generate realistic speech and music, extract key features from audio signals, and synchronise lip movements with speech.

4) Video: Deep Learning algorithms process video data using either CNNs or RNNs. These networks learn video features using filters or hidden states that capture the spatial and temporal patterns in the video frames. They can recognise faces, objects, scenes, and actions in videos, segment videos into clips, reconstruct 3D models from 2D videos, and synthesise realistic videos from sketches or text.

5) Tabular: Feedforward Neural Networks (FNNs) are used by Deep Learning algorithms to process tabular data. These networks learn features from tabular data using fully connected layers that connect every node in one layer to every node in the next. FNNs can predict customer demand and behaviour, forecast sales and revenue, detect fraud and anomalies, optimise inventory and supply chains, recommend products and services, and more.
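The "filters that traverse the image pixels to generate feature maps" in item 1 can be written out directly. The sketch below applies a hand-made vertical-edge filter to a tiny synthetic image; in a real CNN the filter values would be learned, but the sliding-window mechanics are identical.

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image (stride 1, no padding);
    # each position yields one entry of the feature map
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# 5x5 image: a bright vertical stripe down the middle column
image = np.zeros((5, 5))
image[:, 2] = 1.0

# Vertical-edge filter: fires where brightness changes left-to-right
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

fmap = conv2d(image, kernel)
print(fmap[0])  # strong +/- responses on either side of the stripe
```

Because the same small kernel is reused at every position, a CNN needs far fewer parameters than a fully connected layer over the raw pixels, which is a key reason it scales to images and video.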

3.5. Predictive Modelling

Deep Learning is a powerful tool for predictive modelling, which involves using historical data to forecast future outcomes. This can aid businesses and organisations in making informed decisions and optimising their operations.

Here are some examples of predictive modelling:

1) Customer demand and behaviour: Deep Learning algorithms, specifically RNNs, can predict customer demand and behaviour by modelling temporal dependencies and variations over time. For instance, the DeepAR model achieved a 94.7% accuracy rate in predicting hourly electricity consumption in Irish households.

2) Sales and revenue: Deep Learning algorithms can forecast sales and revenue using CNNs, which learn features from spatial data like images or tabular data. They can model spatial correlations and patterns in sales and revenue across different regions or markets. For example, the WaveNet model achieved a 96.4% accuracy rate in predicting the daily sales of 1115 stores in Germany.

3) Inventory and supply chain: Deep Learning algorithms can optimise inventory and supply chains by using reinforcement learning to find the optimal policy or strategy that maximises the expected reward or minimises the expected cost. For example, the DQN model achieved a 98.2% accuracy rate in optimising the inventory management of a multi-echelon supply chain.
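The window-to-next-value setup behind forecasters like DeepAR can be shown with much simpler machinery. The sketch below fits a linear model to sliding windows of a toy seasonal series; the series, window length, and closed-form linear fit are all stand-ins for a real dataset and a real neural model, but the data preparation is the same.

```python
import numpy as np

# Toy historical data: a seasonal pattern with period 12 (e.g. monthly demand)
t = np.arange(120)
series = 10.0 + 3.0 * np.sin(2 * np.pi * t / 12)

# Turn the series into (past window -> next value) training pairs
window = 12
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# A linear forecaster fitted in closed form stands in for the neural net
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead forecast from the most recent window
next_value = series[-window:] @ w
```

On this perfectly periodic toy series the fit is exact; a neural forecaster earns its keep when the relationship between the window and the next value is noisy and non-linear, which is where the deep models cited above apply.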

Advantages and Challenges of Deep Learning Models

Deep learning models have garnered a lot of interest because of their many benefits, which include speech recognition, image processing and recognition, self-driving automobiles, and more. The ability of deep learning models to generate new features from a restricted set of characteristics in the training dataset is their primary advantage over machine learning (ML) technologies (Kotsiopoulos et al. 2021). These models cover a wide range of facets of human life and can also be used to generate new tasks to solve existing ones. Deep learning models have the potential to save a great deal of time when working with large datasets because these algorithms can produce features without the need for human involvement (Gupta et al. 2021).

Deep learning models, despite their benefits, face significant challenges. They lack transparency in decision-making and require complete algorithm revisions to correct errors, leading to longer training times due to the need for high-performance computing units, powerful GPUs, and vast storage. On the other hand, deep learning excels in handling unstructured data, optimizing parameters for improved precision, producing novel features from limited datasets, and addressing existing problems with full-cycle learning.

A key advantage of using the DL approach is that it can perform feature engineering on its own. In this method, the algorithm is not given any explicit instructions; rather, it automatically searches through the data for features that correlate and then combines them to facilitate faster learning. Because of its ability to handle massive data, DL scales extremely well. DL algorithms can be trained on a wide range of data formats while still producing insights relevant to the objectives of the training. For instance, DL algorithms can be utilized to identify correlations between social media activity, market research, and other factors in order to predict the future stock value of a particular firm.

There are a number of issues with DL models as well. In order to outperform alternative methods, deep learning needs access to a massive dataset; therefore, managing data is the key challenge that hinders DL in industrial implementations. Deep learning is currently limited in its applicability because of the extensive computing resources and training datasets it necessitates. It is also still a mystery how exactly DL models arrive at their conclusions, unlike in traditional ML, where we can trace back the reasoning behind a system's identification of a given image as representing a cat rather than a dog. To rectify errors in DL algorithms, the entire algorithm must be modified. Moreover, no universally applicable theory is available to help us choose the appropriate DL tools, as doing so requires knowledge of training methods, topology, and other features.

Why Deep Learning?

For most flavors of the old generations of learning algorithms, performance eventually plateaus. Deep learning is the first class of algorithms that is scalable in this sense: performance just keeps getting better as you feed the models more data.

Deep learning vs. machine learning


Artificial intelligence encompasses both machine learning and deep
learning. Machine learning, to put it briefly, is AI that can adapt on its
own with little assistance from humans. Artificial neural networks are
used in deep learning, a kind of machine learning, to mimic the way
the human brain learns.

Future of Deep Learning

As we enter an era of abundant data, deep learning's future is not just promising but essential for global advancement and problem-solving. It is a crucial tool in various fields, from science and engineering to the humanities and health. Deep learning's role in cybersecurity, surveillance, and quantum computing signifies its importance in the future. Its success in computer vision and AI, its extraction of meaningful features from data, and its tolerance for data variations position it as a foundation for future innovations. However, more understanding is needed to build deep learning networks that handle complex, high-dimensional data efficiently.

The growing interest and investment, particularly from giant tech companies (Google, Facebook, Apple), signal the value and potency of deep learning in the present and future. Although deep learning demands high computational power and constant training to generate reliable results, more work is yet to be done to ensure that deep learning networks are efficient and cost-effective in extracting and identifying distinct features from real-world data, mimicking the ability of biological intelligence. Therefore, when constructing a deep learning methodology, it is important to ensure that the model can deal with uncertainty, is scalable, and has transferable qualities so it can be implemented and applied to multiple problem systems (Zhang et al. 2020). Alongside the development of deep learning techniques, the availability of user-friendly hardware and software systems is a significant future prospect for deep learning.

Enhancing the performance of Deep Learning (DL) models in complex environments requires larger datasets. As we embrace artificial intelligence and deep learning, ethical frameworks are needed to guide their use and manage big data effectively. Limited training samples can hinder the recognition of certain activities, whereas larger datasets can improve model precision. Currently, there is a lack of publicly available, comprehensive, and standardized datasets for specific tasks like activity recognition, pose detection, and object detection across various construction sites and conditions.

Research combining deep learning and expert knowledge might be beneficial, since models can be dynamically enhanced with newly acquired data to create useful digital twins that aid maintenance decision-making. Even though physics-informed deep learning is currently exploring several avenues, there is no consensus or unification regarding these paths or how they might be applied in commercial settings. Further research is required to hone and integrate these methods, which could improve the models' capacity for generalization. The efficient composition and selection of training data sets is another matter that needs to be addressed in subsequent research. This holds particular significance in dynamic contexts with highly changeable operating conditions, because the training dataset may not accurately represent the entire spectrum of anticipated operating conditions. Decisions on whether new data should be added to training datasets and the algorithms updated, or whether the information is redundant and already present in the datasets used to train the algorithms, must be made on a continuous basis.

Conclusion:

Deep learning is one of the most active and quickly growing areas of computer science. Because of the many problems caused by the vast amount of complicated data, developing a good deep learning model for an application is getting harder and harder. Although deep learning is still in its infancy and has its share of challenges, it has been shown to have enormous learning potential, and research on future artificial intelligence is still in progress. The most important developments in deep learning and their applications to many fields have been discussed in this paper, which gives an overview of deep learning technology, a technology essential to data science and artificial intelligence. An overview of the history of ANNs is given at the outset, followed by a discussion of more recent developments in deep learning methods across numerous additional domains. Subsequently, we investigate deep neural network modeling and the fundamental methods within this domain. Additionally, we have provided a classification that takes into account the wide range of deep learning tasks and their many uses. Deep learning differs from traditional machine learning and data mining in that it can handle incredibly detailed representations of data from very large datasets. This has produced a great many amazing solutions to urgent real-world problems. Any deep learning technique must include data-driven modeling suited to the raw data's features in order to be successful. Prior to a system

References:

- Tutorials:
• GeeksforGeeks
• IBM
• MIT News
• TechTarget

- Scientific Papers (links):
• Springer
• MDPI
• ScienceDirect
• ScienceDirect
• Springer
• papers.ssrn
