
1. A Study to Find Facts Behind Preprocessing on Deep Learning Algorithms

Deep learning techniques frequently let a machine operate on learned approximations rather than explicit rules. Most deep learning algorithms emulate how neurons interconnect in the human brain to add artificial intelligence to a computer system. This paper outlines the building blocks needed to implement a deep learning algorithm, and it also examines in depth how important the preprocessing phase is for several deep learning-based applications.

Advances in science and technology have allowed researchers to create computer systems that carry out mathematical and analytical operations on their own. The precision and speed of a computer far exceed those of a human: a computer never grows weary, falls asleep while working, or takes time off. These are the usual justifications for using automated systems in place of people.

Several robotic applications use artificial intelligence to address specific problems. For instance, a line-following robot is taught to recognize and track a line painted in front of it. Similarly, robots for medical applications have been trained to distinguish scanned images from ordinary ones. At present, computers are designed to carry out narrowly specified tasks.

Deep learning algorithms can be trained without any preprocessing methods. This literature review summarizes the methods used with and without a preprocessing step, and it suggests that methods with preprocessing modules produce, on average, superior results for image, data, and signal classification algorithms.
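
The effect of a preprocessing module is easy to illustrate on a small scale. The sketch below is an illustration, not the paper's experiment: it trains the same small network on raw and on standardized features; the dataset and model choices are assumptions made for demonstration.

```python
# A minimal sketch: the same classifier trained on raw features and on
# standardized features, to show the effect of a preprocessing step.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Without preprocessing: feed raw pixel intensities to the network.
raw = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)

# With preprocessing: standardize each feature to zero mean, unit variance.
scaler = StandardScaler().fit(X_train)
scaled = MLPClassifier(max_iter=500, random_state=0).fit(
    scaler.transform(X_train), y_train)

print("raw accuracy:   ", raw.score(X_test, y_test))
print("scaled accuracy:", scaled.score(X_test, scaler.transform(X_test) is None or y_test) if False else scaled.score(scaler.transform(X_test), y_test))
```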


2. Diabetes detection using deep learning algorithms

Diabetes is a metabolic illness that affects a large number of individuals globally and is rising at an alarming rate. Early diagnosis is crucial for prompt treatment that can halt the progression of its complications. This research study presents the classification of diabetic and healthy heart rate variability (HRV) signals using deep learning architectures. According to Swapna G. (2018), the suggested classification approach diagnoses diabetes with an accuracy of 95.7%, which is quite high.

Diabetes is a condition in which the body's ability to metabolize blood sugar (glucose) is impaired. A diabetic may experience serious complications including stroke, heart attack, renal failure, and nerve damage. Statistics from 2017 indicate that 8.8% of the world's population has diabetes, a figure projected to rise to 9.9% by 2045.

A recurrent neural network (RNN) can extract dynamic temporal behavior from an input time sequence. Every node in a basic RNN has a directed (one-way) link to every other node, simulating a network of neurons. Nodes can be input nodes that accept data from outside the network, hidden nodes that alter data as it flows through them, or output nodes that produce results.
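
As a concrete illustration of such a network, here is a minimal recurrent classifier in Keras (the study reports GPU-enabled TensorFlow with Keras; the layer sizes and input shape below are assumptions, not the paper's values):

```python
# A minimal recurrent network for binary classification of fixed-length
# signal windows; layer sizes here are illustrative, not the paper's.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense

model = Sequential([
    # 1000 time steps, one HRV sample per step (shape is an assumption)
    SimpleRNN(32, input_shape=(1000, 1)),
    Dense(1, activation="sigmoid"),   # diabetic vs. healthy
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```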


Twenty participants from the diabetic and normal groups had their electrocardiograms (ECG) recorded for ten minutes while they lay relaxed on their backs. The Pan and Tompkins technique is used to extract the heart rate information from the ECG signals. Based on morphological characteristics including slope, amplitude, and width, this real-time algorithm can efficiently identify QRS complexes in an ECG signal.
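
A simplified sketch of the classic Pan-Tompkins stages follows, assuming a sampling rate fs; the original algorithm's adaptive two-threshold logic is reduced here to a fixed fraction of the peak.

```python
# Simplified Pan-Tompkins stages for QRS detection; the original uses an
# adaptive two-threshold scheme, reduced here to a fixed threshold.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_qrs(ecg, fs=360):
    # 1. Band-pass filter (~5-15 Hz) to emphasize the QRS energy band.
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2. Differentiate to capture the steep QRS slope.
    diff = np.diff(filtered)
    # 3. Square point-wise to make values positive and amplify large slopes.
    squared = diff ** 2
    # 4. Moving-window integration (~150 ms) to merge each QRS into one lobe.
    win = int(0.15 * fs)
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    # 5. Peak picking with a refractory period of ~200 ms between beats.
    peaks, _ = find_peaks(integrated,
                          height=0.3 * integrated.max(),
                          distance=int(0.2 * fs))
    return peaks  # sample indices of detected QRS complexes

# Demo on a synthetic signal: impulses at one-second intervals.
fs = 360
ecg = np.zeros(10 * fs)
ecg[::fs] = 1.0
print(detect_qrs(ecg, fs))
```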

In this study, we extract features with a CNN-LSTM-based deep learning network and pass them to an SVM for classification. LSTM can handle long-term dependencies in a data sequence. GPU-enabled TensorFlow with the Keras framework is used for all tests. In 5-fold cross-validation, the SVM stage outperformed the other network architectures in virtually all cases.
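
A minimal sketch of this pipeline, assuming Keras and scikit-learn; the layer sizes, window length, and dummy data are placeholders rather than the study's actual configuration:

```python
# Sketch: CNN-LSTM feature extractor feeding an SVM; values illustrative.
import numpy as np
from tensorflow.keras import Sequential, Model
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Dummy data stands in for labeled HRV windows: 120 windows, 1000 samples.
X = np.random.randn(120, 1000, 1).astype("float32")
y = np.random.randint(0, 2, size=120)

net = Sequential([
    Conv1D(16, 5, activation="relu", input_shape=(1000, 1)),
    MaxPooling1D(2),
    LSTM(32),                      # captures long-term dependencies
    Dense(1, activation="sigmoid"),
])
net.compile(optimizer="adam", loss="binary_crossentropy")
net.fit(X, y, epochs=1, verbose=0)   # real training would run far longer

# Reuse the penultimate layer's output as a learned feature vector.
extractor = Model(net.input, net.layers[-2].output)
features = extractor.predict(X, verbose=0)

# Classify the deep features with an SVM under 5-fold cross-validation.
scores = cross_val_score(SVC(kernel="rbf"), features, y, cv=5)
print("mean 5-fold accuracy:", scores.mean())
```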


3. Applying deep learning algorithms to enhance simulations of large-scale groundwater flow in IoTs

Using deep learning to enhance simulations of groundwater flow in IoT settings is an effective way to learn more about how aquifer systems behave. The approach can assess the spatial relationships of observed data, generate meshes, and display spatial distributions of in-situ measurements. Based on the findings of the numerical simulations, deep learning techniques prove highly computationally efficient for large-scale groundwater flow problems.

Groundwater level (GWL) is a straightforward and direct indicator of the accessibility and availability of groundwater. Modeling GWL is a difficult undertaking because GWL is an integrated response to several meteorological, topographic, and hydrogeological factors and their interactions. Even though many of groundwater's impacts on the environment and communities are indirect, it is clearly one of the most valuable and significant sources of water in the world.

AI models are intriguing because they can simulate and forecast GWL without requiring an in-depth understanding of the underlying topographical and hydro-geophysical characteristics. By monitoring GWL, hydrologists and hydrogeologists can learn a great deal about short- and long-term fluctuations in groundwater supply. Over the past 20 years, a significant number of researchers have investigated and reported on the use of AI in modeling GWL.
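
As a small illustration of this data-driven style (not drawn from any of the surveyed papers), the sketch below forecasts the next GWL reading purely from lagged observations; the synthetic series, lag length, and model choice are all assumptions:

```python
# Data-driven GWL forecasting from lagged observations alone: no explicit
# topographic or hydrogeological model is required. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for a monthly GWL series (meters below surface).
t = np.arange(360)
gwl = 12 + 1.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, t.size)

lags = 12  # predict next month from the previous 12 observations
X = np.stack([gwl[i:i + lags] for i in range(len(gwl) - lags)])
y = gwl[lags:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out months:", model.score(X_te, y_te))
```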


ML models in particular have the potential to be applied to GWL simulation. The purpose of this study is to close the knowledge gap surrounding the development and use of innovative AI models in groundwater modeling. The article's main focus is on the most recent advancements, innovations, limitations, and shortcomings of cutting-edge AI techniques for handling GWL.

The goal of the current survey is to provide an educational benchmark on the application of machine learning models to the simulation of GWL. According to the reviewed data, ten different ML model variants have been used globally for GWL modeling. The study covered the years 2008 through 2020, and all the articles gathered came from publications indexed in the Web of Science.

4. Design of Deep Learning Algorithm for IoT Application by Image-based Recognition

The Internet of Things (IoT) is an ecosystem made up of numerous devices and connections, numerous users, and enormous amounts of data. Deep learning is particularly well suited to these situations because it can handle "big data" challenges and the issues likely to follow from them. The main goal of this study is to compile in-depth survey information on the various IoT implementations with high recognition rates.

Robotics and artificial intelligence are emerging as promising fields of study with the potential to greatly enhance human life in terms of both quality and safety. Of the five human senses, only vision, hearing, and touch currently convey information to such systems; taste and smell do not. Observations of nature and daily life demonstrate the validity of this assertion. Today's robots are not toys or slaves but sophisticated companions for humans.

The suggested design uses statistical learning to extract features with the principal component analysis (PCA) approach, an unsupervised method for reducing the dimensionality of images in image-based recognition. Using this idea, a linear transformation is applied to collect visual attributes for recognition. These attributes can be gathered across transformed images to establish more accurate information about their extent.
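
A minimal sketch of PCA-based feature extraction for recognition, using scikit-learn and a stock digits dataset as a stand-in for the paper's IoT images; the component count is an arbitrary choice:

```python
# PCA feature extraction for image recognition; digits dataset is a
# stand-in for the paper's IoT bird-image set.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)       # each row is a flattened image
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA learns a linear transformation (unsupervised: labels are unused)
# that projects each image onto the directions of largest variance.
clf = make_pipeline(PCA(n_components=30), KNeighborsClassifier())
clf.fit(X_tr, y_tr)
print("accuracy on PCA features:", clf.score(X_te, y_te))
```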


The proposed architecture has been tested on an example IoT image dataset. The training set contains a total of 11,000 photos covering 200 distinct species of birds. Each image carries a target-frame label, component labels, and attributes to ease identification. All categories are organized into a four-layer structure.

The suggested research project ran several tests and found that PCA produces the best image-based recognition outcomes. The large degree of dispersion, or scatter, that occurs after projection helped with image recognition on the IoT. Future work will likely extend the investigation to linear discriminant analysis (LDA) for image feature extraction.


5. Applying deep learning algorithm to maintain social distance in public place through drone technology

The drone notifies the public and the neighboring police station of an emergency, and it also carries masks and drops them off at the appropriate locations. Traffic police seen nearby are given a water packet and a mask if necessary; if neither task is completed, the drone alerts the local police station and the general public.

The inspection procedure is carried out by an autonomous drone employed in the proposed system. The drone camera uses the YOLOv3 algorithm to determine whether social distance is being maintained and whether individuals in public are wearing masks. The drone completes this task autonomously.
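
The detector's output reduces to person bounding boxes, and the social-distance check itself is a pairwise comparison of box centers. A minimal sketch of that check follows (the pixel threshold and example boxes are made up; a deployed system would calibrate pixels to real-world distance):

```python
# Pairwise social-distance check on person bounding boxes produced by a
# detector such as YOLOv3. Threshold and boxes are illustrative values.
from itertools import combinations
from math import dist

def too_close(boxes, min_px=120):
    """Return index pairs of detections closer than min_px pixels.

    boxes: list of (x, y, w, h) person bounding boxes in pixels.
    """
    centers = [(x + w / 2, y + h / 2) for x, y, w, h in boxes]
    return [(i, j) for i, j in combinations(range(len(boxes)), 2)
            if dist(centers[i], centers[j]) < min_px]

# Example: three detected people; the first two violate the threshold.
people = [(100, 200, 40, 90), (160, 210, 42, 88), (500, 220, 40, 90)]
print(too_close(people))   # -> [(0, 1)]
```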

The suggested approach provides masks to those who are not wearing them and emphasizes the value of masks and social distancing. Just as the drone units obtain information from drone position details and store it in databases, the algorithm may be incorporated into public cameras so that the details can then be fetched by the camera unit. Thus, the suggested solution benefits society by reducing the transmission of the coronavirus and saving time.


Bug fixes can take software engineers a long time to complete. Here, we provide a way to correct these errors automatically: bug localization is performed with an autoencoder and a CNN to obtain a rank score, the application's source code is split into separate lines using tokens, and the Seq-GAN method is then used to generate candidate fixes for the potentially problematic lines.


References:

- Ranganathan, G. (2021). A Study to Find Facts Behind Preprocessing on Deep Learning Algorithms. Journal of Innovative Image Processing (JIIP). Retrieved July 30, 2022, from https://irojournals.com/iroiip/V3/I1/06.pdf
- Swapna, G. (2018, December). Diabetes detection using deep learning algorithms. ScienceDirect. Retrieved July 30, 2022, from https://www.sciencedirect.com/science/article/pii/S2405959518304624
- Hai, T. (2022, June 7). Applying deep learning algorithms to enhance simulations of large-scale groundwater flow in IoTs. ScienceDirect. Retrieved August 1, 2022, from https://www.sciencedirect.com/science/article/abs/pii/S156849462030238
- Jacob, J. (2021). Design of Deep Learning Algorithm for IoT Application by Image-based Recognition. Journal of ISMAC. Retrieved August 1, 2022, from https://irojournals.com/iroismac/V3/I3/08.pdf
- Ramadass, L. (2020, July 15). Applying deep learning algorithms to maintain social distance in public place through drone technology. Emerald Insight. Retrieved August 1, 2022, from https://www.emerald.com/insight/content/doi/10.1108/IJPCC-05-2020-0046/full/html
