Project 2
ABSTRACT
The stock market is a complex and dynamic system, and predicting stock prices is an
important task for investors and traders. With the rise of deep learning models, especially long
short-term memory (LSTM) models, stock price prediction has become more accurate and efficient. Our
article presents a new method of visualizing and forecasting stock prices using LSTM models. LSTM
models are well suited for modeling and forecasting sequences with long-term dependencies, a key
feature of financial time series data. Our approach is to train an LSTM model on historical stock
market data, which includes open, high, low and close prices, as well as trading volume. After
training, the LSTM model is used to predict future stock prices based on current market trends and
patterns. To assess the effectiveness of our approach, we compared the forecasting performance of
LSTM models with traditional time series models. Our results show that LSTM models outperform
traditional time series models in predicting stock prices. This finding is consistent with previous
studies that have also shown the effectiveness of LSTM models in predicting stock prices. In addition
to the predictive performance of LSTM models, we also introduce a new visualization technique that
can intuitively explain model predictions. Our visualization technique uses a scatterplot where the x-
axis represents the predicted stock price and the y-axis represents the actual stock price. Each data
point on the scatter plot corresponds to a time period for the stock market. Data points are color
coded based on the size of the difference between the predicted price and the actual price. This
visualization technique allows investors and traders to gain insight into the predictive performance of
a model and identify areas for improvement in the model. We apply our method to several publicly
traded stocks, including Apple, Amazon, and Tesla, to demonstrate its effectiveness. Our LSTM
model predicts these stocks with greater accuracy than traditional time series models, suggesting that
LSTM models can capture more complex patterns in stock market data. In summary, our method
provides a new way to visualize and predict stock prices using deep
LSTM models. The ability of LSTM models to capture long-term dependencies in time series data
makes them an invaluable tool for forecasting stock prices. Our visualization techniques provide an
intuitive way to interpret model predictions and identify areas for improvement. Our approach has
the potential to help investors and traders make informed stock market decisions.
KEYWORDS: Bitcoin, Deep Learning, GPU, Recurrent Neural Network, Long Short-Term
Memory, ARIMA.
CHAPTER-1
INTRODUCTION
1.1 GENERAL
Stock price fluctuations are uncertain, and there are many
interconnected reasons behind such behavior. Possible causes include
global economic data, changes in the unemployment rate, monetary
policies of influential countries, immigration policies, natural disasters, public
health conditions, and several others. All stock market stakeholders aim to
make higher profits and reduce risk through thorough market evaluation.
The major challenge is gathering this multifaceted information, putting it
together into one basket, and constructing a reliable model for accurate
predictions.
One possible reason for models not achieving the expected
outcome could lie in the variable selection process. There is a greater chance that
the developed model performs reasonably better if a good combination of
features is considered. One of the contributions of this study is selecting the
variables by looking meticulously at multiple aspects of the economy and their
potential impact on the broader markets. Moreover, a detailed justification of why
the specific explanatory variables are chosen in the present context is supplied
in Section 4.
Things became more interesting from the eighties onward because of
developments in data analysis tools and techniques. For instance, the spreadsheet
was invented to model financial performance, automated data collection became
a reality, and improvements in computing power helped predictive models
analyze data quickly and efficiently. Because of the availability of large-scale
data, advancements in technology, and the inherent problems associated with
classical time series models, researchers started to build models by unlocking the
power of artificial neural networks and deep learning techniques in the area of
sequential data modeling and forecasting. These methods are capable of learning
complex and non-linear relationships compared to traditional methods. They are
more efficient in extracting the most important information from the given input
variables.
There are many tools available for predicting the stock market, but it is an
exceedingly difficult task for humans to solve with traditional data analysis tools,
and only data analytics experts know how to use them. What about the rest of the
people; how can they predict the stock market? This is where our project comes in.
It is very simple to use, with a simple user interface, and the user does not need
any prior knowledge of stock market analysis. The user just types in the stock
name and the date for which they want the price, and the system returns the
predicted price along with basic information about the company and the forecast.
can help traders and investors to better understand the behavior of the stock
market and make more informed decisions.
3. Manage risk: By using LSTM models to forecast stock prices, investors can
identify potential market fluctuations and manage risk accordingly. This can
help minimize losses and maximize returns.
CHAPTER 2
LITERATURE REVIEW
[1] P. Li, C. Jing, T. Liang, M. Liu, Z. Chen and L. Guo, "Autoregressive
moving average modeling in the financial sector," 2015 2nd International
Conference on Information Technology, Computer, and Electrical
Engineering (ICITACEE), Semarang, Indonesia, 2015. doi: 10.1109/ICITACEE.2015.7437772
Abstract: Time series modelling has long been used to make forecasts in different
industries, with a variety of statistical models currently available. Methods for
analyzing changing patterns of stock prices have always been based on fixed time
series. Considering that these methods have ignored some crucial factors in stock
prices, we use an ARIMA model to predict stock prices, given the stock-trading
volume and exchange rate as independent variables, to achieve a more stable and
accurate prediction process. In this paper we introduce the modeling process and
give the estimated SSE (Shanghai Stock Exchange) Composite Index to show the
model's estimation; this work also informs the visualization of stocks in the present project.
Keywords: Biological system modeling; Autoregressive processes; Time series
analysis; Predictive models; Indexes; Computational modeling; Estimation; Time
Series; Statistical Modeling; ARIMA
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7437772&isnumber=7437747
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7783235&isnumber=7783169
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8500658&isnumber=8500355
[4] R. Fu, Z. Zhang and L. Li, "Using LSTM and GRU neural network
methods for traffic flow prediction," 2016 31st Youth Academic Annual
Conference of Chinese Association of Automation (YAC), Wuhan, China,
2016, pp. 324-328.
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7804912&isnumber=7804853
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7550882&isnumber=7550716
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8126078&isnumber=8125802
CHAPTER 3
PROPOSED SYSTEM
Limitations:
o The K-NN algorithm assumes similarity between the new case/data and the
available cases and puts the new case into the category that is most similar to
the available categories.
o The K-NN algorithm stores all the available data and classifies a new data point
based on similarity. This means that when new data appears, it can be easily
classified into a well-suited category using the K-NN algorithm.
o The K-NN algorithm can be used for regression as well as for classification, but
it is mostly used for classification problems.
o It is also called a lazy-learner algorithm because it does not learn from the
training set immediately; instead, it stores the dataset and performs an action on
it at the time of classification.
o At the training phase, the K-NN algorithm just stores the dataset; when it gets
new data, it classifies that data into the category most similar to the new data.
The working of K-NN can be explained on the basis of the algorithm below:
o Step-4: Among these k neighbours, count the number of data points in each
category.
o Step-5: Assign the new data point to the category for which the number of
neighbours is maximum.
Suppose we have a new data point and we need to put it into the required category.
o Next, we calculate the Euclidean distance between the data points. The
Euclidean distance is the distance between two points, as studied in geometry.
Below are some points to remember while selecting the value of K in the K-NN
algorithm:
o There is no particular way to determine the best value for "K", so we need to try
several values to find the best one. The most preferred value for K is 5.
o A very low value for K, such as K=1 or K=2, can be noisy and expose the model
to the effects of outliers.
o Large values for K are good, but they may cause some difficulties.
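To make these steps concrete, here is a minimal, hypothetical sketch using scikit-learn on synthetic data; it is not code from this project, and the dataset, K value, and split are illustrative assumptions only.

```python
# Hypothetical K-NN sketch on synthetic data (illustrative only, not the project's code).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic two-class dataset standing in for real feature data
X, y = make_classification(n_samples=500, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# K=5 is the commonly preferred starting value; Euclidean distance is the default metric
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)            # "training" only stores the data (lazy learner)

y_pred = knn.predict(X_test)         # find the 5 nearest neighbours, count categories, assign
print("Accuracy:", accuracy_score(y_test, y_pred))
```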
o It is simple to implement.
o The curve from the logistic function indicates the likelihood of something,
such as whether cells are cancerous or not, or whether a mouse is obese or not
based on its weight, etc.
o It maps any real value into another value within a range of 0 and 1.
o The value of the logistic regression must be between 0 and 1, which cannot
go beyond this limit, so it forms a curve like the "S" form. The S-form
curve is called the Sigmoid function or the logistic function.
The Logistic regression equation can be obtained from the Linear Regression
equation. We know the equation of the straight line can be written as:
o In Logistic Regression y can be between 0 and 1 only, so for this let's divide
the above equation by (1-y):
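The equations referred to above did not survive extraction; what follows is a reconstruction of the standard logit derivation that the passage describes, not the report's original figures.

```latex
% Straight-line (linear regression) form
y = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_n x_n
% Since y must lie between 0 and 1, divide by (1 - y); the ratio y/(1-y) ranges over (0, \infty)
% Taking the logarithm maps it onto (-\infty, \infty), giving the logistic regression equation
\log\left(\frac{y}{1-y}\right) = b_0 + b_1 x_1 + \dots + b_n x_n
% Solving for y yields the S-shaped sigmoid curve
y = \frac{1}{1 + e^{-(b_0 + b_1 x_1 + \dots + b_n x_n)}}
```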
On the basis of the categories, Logistic Regression can be classified into three types:
ARIMA is a powerful tool for predicting stock prices because it takes into
account the trend, seasonality, and random fluctuations in the data. The ARIMA
model consists of three components: the autoregressive component (AR), the
integrated component (I), and the moving average component (MA). The AR
component represents the relationship between past values and future values of
the time series. The MA component represents the relationship between past
errors and future values of the time series. The I component represents the
differencing of the time series to remove any trend or seasonality.
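In symbols, a standard ARIMA(p, d, q) formulation (added here for clarity; the notation is not taken from the original report) combines these three components as:

```latex
% y'_t is the series after differencing d times (the I component),
% \varepsilon_t is the forecast error at time t
y'_t = c + \underbrace{\phi_1 y'_{t-1} + \dots + \phi_p y'_{t-p}}_{\text{AR: past values}}
         + \underbrace{\theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}}_{\text{MA: past errors}}
         + \varepsilon_t
```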
The following are the steps involved in building an ARIMA model for
stock market prediction:
1. Data preparation: The first step is to prepare the time series data. This
involves cleaning the data, checking for missing values, and ensuring that the
data is stationary.
2. Stationarity: ARIMA models assume that the time series data is stationary.
Stationarity means that the mean, variance, and autocorrelation of the data are
constant over time. If the data is not stationary, we need to transform the data
to make it stationary. This can be achieved by taking the first or second
difference of the data.
5. Model Evaluation: Once we have selected the best ARIMA model, we need
to evaluate its performance. We can use statistical measures such as the Mean
Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Squared
Error (RMSE) to evaluate the accuracy of the model.
6. Forecasting: The final step is to use the ARIMA model to forecast future
values of the time series. We can use the forecasted values to make informed
investment decisions.
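A minimal sketch of these steps with statsmodels is shown below; the price series is simulated and the (p, d, q) order is only an assumed starting point, not the configuration used in this project.

```python
# Hypothetical sketch of the ARIMA workflow described above, using statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)))   # stand-in for daily closing prices

# Step 2: check stationarity with the ADF test; difference once if the p-value is high
p_value = adfuller(close.dropna())[1]
d = 0 if p_value < 0.05 else 1

# Order selection (steps 3-4) is skipped here; (p, d, q) = (5, d, 0) is just an assumed start
results = ARIMA(close, order=(5, d, 0)).fit()

# Step 5: evaluate with RMSE on the in-sample fit (a train/test split would be used in practice)
rmse = np.sqrt(np.mean((close - results.predict()) ** 2))
print("RMSE:", rmse)

# Step 6: forecast the next 30 values
print(results.forecast(steps=30))
```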
LSTM networks extend recurrent neural networks (RNNs) and were designed mainly to
deal with situations in which plain RNNs do not work well. An RNN is
an algorithm that processes the current input by taking into account the output of
previous steps (feedback) and storing it in its internal memory for a
brief amount of time (short-term memory). Of its many applications, the most
well-known ones are in the areas of non-Markovian speech control and
music composition. However, there are some drawbacks to RNNs. The first
is the failure to retain information over long periods of time: sometimes
data stored a considerable time ago is needed to determine the
present output, and RNNs are largely incapable of managing such "long-term
dependencies." The second issue is that there is no fine control over which
part of the context needs to be carried forward and which part of the past must
be forgotten. Other issues associated with RNNs are the exploding or
vanishing gradients (explained later) that occur when training an RNN through
backpropagation. Therefore, the Long Short-Term Memory (LSTM) network was introduced.
It was designed so that the vanishing-gradient problem
is almost entirely eliminated, while the training model is left unaffected. Long time lags
in specific problems are handled by LSTMs, which also deal with the effects
of noise, distributed representations, and continuous values. With LSTMs, there is no
need to keep a finite number of states fixed beforehand as required by the
hidden Markov model (HMM). LSTMs offer an extensive
range of parameters such as learning rates and input and output biases, so
no fine adjustments are needed. The effort to update each weight is
reduced to O(1) with LSTMs, similar to Back Propagation Through
Time (BPTT), which is a significant advantage.
In training a network, the primary objective is to reduce the loss (in terms of
cost or error) seen in the output of the network when training data is passed
through it. We determine the gradient of the loss with respect to a set of weights,
adjust the weights accordingly, and repeat this process until we arrive
at an optimal set of weights for which the loss is as low as possible. This is the idea
behind backpropagation. Sometimes the gradient becomes
very small. It is important to note that the gradient in one layer depends
on components of the following layers; if any component is tiny
(less than one), the resulting gradient will be even smaller. This is also known
as "the scaling effect. If this effect is multiplied by the rate of learning, which is a
tiny value that ranges from 0.1-to 0.001, this produces a lower value. This means
that the change in weights is minimal and produces nearly the same results as
before. If the gradients are significant because of the vast components and the
weights are changed to be higher than the ideal value. The issue is commonly
referred to as the issue of explosive gradients. To stop this effect of scaling, the
neural network unit was rebuilt such that the scale factor was set to one. The cell
was then enhanced by a number of gating units and was named the LSTM.
3.2.3 Architecture:
The main structural difference between RNNs and LSTMs is that the hidden layer
of an LSTM is a gated unit or cell. It has four layers that interact with one another
to produce the cell's output as well as the cell's state; both of these are passed on
to the next cell. Unlike RNNs, which have a single neural network layer of tanh,
LSTMs consist of three logistic sigmoid gates and one tanh layer. The gates were
added to limit the information that passes through the cell: they decide which
portion of the data is needed by the next cell and which parts should be discarded.
Each gate's output typically falls in the range 0-1, where "0" means "reject all"
and "1" means "include all."
Each LSTM cell has three inputs (x_t, h_(t-1), and C_(t-1)) and two outputs (h_t and C_t). At a
specific time t, h_t is the hidden state and C_t is the cell state or memory; x_t is the
current data point or input. The first sigmoid layer takes two inputs, h_(t-1) and x_t,
where h_(t-1) is the hidden state of the previous cell. It is known as the forget gate,
since its output selects how much information from the previous cell should be
retained. Its output is a number in [0, 1] that is multiplied (pointwise) with the
previous cell state.
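As an illustration of how such a gated network is typically assembled in code, here is a minimal Keras sketch; the window size, layer widths, training settings, and the simulated price series are assumptions for illustration rather than the exact architecture of this project.

```python
# Minimal Keras sketch of an LSTM price-prediction network (illustrative settings only).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

window = 60                                        # assumed look-back window of 60 closing prices
prices = np.cumsum(np.random.randn(1000)) + 100    # stand-in for a scaled closing-price series

# Build (samples, timesteps, features) windows: predict the next price from the previous 60
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])[..., None]
y = prices[window:]

model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(window, 1)),  # gated LSTM cells as described above
    LSTM(50),
    Dense(1),                                                   # regression output: next closing price
])
model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_price = model.predict(X[-1:])                 # one-step-ahead forecast
```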
3.2.5 Applications:
LSTM models have to be trained on a training dataset before being used in
real-world applications. Some of the most challenging applications are discussed below:
Machine translation: a text-based dataset that includes words and their translations
is cleaned first, and the relevant portion is used to build the model. An
encoder-decoder LSTM model converts the input sequences into fixed-length
vectors (encoding) and then decodes them into the translated output.
3.2.6 Drawbacks:
Everything in the world indeed has its advantages and disadvantages. LSTMs are
no exception, and they also come with a few disadvantages that are discussed
below:
1. They became popular because they address the problem of vanishing
gradients; however, they are not able to eliminate it entirely. The
issue is that data still has to be moved from cell to cell for evaluation.
Moreover, the cell has become quite complex with the additional
features (such as the forget gate) that are now part of the picture.
2. They require a lot of time and resources to be trained and made ready for real-
world applications. Technically speaking, they need high memory
bandwidth because of the linear layers present within each cell, which the
system is usually unable to supply. Therefore, in terms of hardware, LSTMs
become pretty inefficient.
3. With growing amounts of data, data-mining scientists are searching for
models that can retain past information for even longer periods of time than
LSTMs. The motivation behind developing such a model is the
human habit of dividing a particular chunk of information into smaller
parts to facilitate recollection.
The company tickers of S&P 500 list from Wikipedia is being saved and
abstraction of stock data in contradiction of every company ticker is being done.
Then close index of every company is taken into account and put it into one
data frame and try to find a connection between each company and then pre-
processing the data and creating different technical parameters built on stock price,
bulk and close worth and based on the movement of prices will progress
technical meters that will aid set a target percentage to foretell buy, sell, hold.
For each table row, the ticker is the table data; grab its .text and append the
ticker to a list. The list is saved using pickle and, if it changes, it is refreshed at
specific intervals. Saving the list of tickers avoids hitting Wikipedia again
and again every time the script is run.
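A rough sketch of this scraping-and-caching step is given below, assuming requests, BeautifulSoup, and pickle; the page structure and file name are assumptions, and the project's own code appears in the figures of this section.

```python
# Hypothetical sketch of scraping and caching the S&P 500 tickers from Wikipedia.
import pickle
import requests
import bs4

def save_sp500_tickers():
    resp = requests.get("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")
    soup = bs4.BeautifulSoup(resp.text, "html.parser")
    table = soup.find("table", class_="wikitable")    # first wikitable is the constituents table
    tickers = []
    for row in table.findAll("tr")[1:]:               # skip the header row
        ticker = row.findAll("td")[0].text.strip()    # first cell of each row is the ticker
        tickers.append(ticker)
    with open("sp500tickers.pickle", "wb") as f:      # cache so Wikipedia is not hit on every run
        pickle.dump(tickers, f)
    return tickers
```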
With the tickers of the 500 companies in hand, the stock price data of each
company is needed. The stock price data of the first 15 companies is extracted; each
company has around 6000 entries. For companies that were started after 2000 and
have empty values, the NaN entries are replaced by zero.
Fig 3.2.7.5: Code to compile the close index of every company into one data frame.
Fig 3.2.7.6: Output showing the close index of all companies together in one data frame.
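The compilation step of Fig 3.2.7.5 might look roughly like the following sketch, assuming each ticker's data has already been saved as a CSV with a Date index and an "Adj Close" column; the file names and paths are illustrative.

```python
# Rough sketch of joining every company's closing prices into one data frame.
import os
import pickle
import pandas as pd

with open("sp500tickers.pickle", "rb") as f:
    tickers = pickle.load(f)

main_df = pd.DataFrame()
for ticker in tickers:
    path = "stock_dfs/{}.csv".format(ticker)
    if not os.path.exists(path):              # skip companies whose data was not downloaded
        continue
    df = pd.read_csv(path, index_col="Date", parse_dates=True)
    df = df[["Adj Close"]].rename(columns={"Adj Close": ticker})
    main_df = df if main_df.empty else main_df.join(df, how="outer")

main_df.fillna(0, inplace=True)               # NaNs for companies listed after 2000 are set to zero
main_df.to_csv("sp500_joined_closes.csv")
```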
Fig 3.2.7.10 : Code to set trading conditions and data processing for labels.
Fig 3.2.7.11: Code to extract feature sets and map them to labels.
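The trading-condition and labelling logic behind Figs 3.2.7.10 and 3.2.7.11 can be approximated as below; the 2% threshold and 7-day horizon are assumed values for illustration, not necessarily those used in the project.

```python
# Approximate sketch of turning price movements into buy/sell/hold labels.
import pandas as pd

def buy_sell_hold(*future_changes, requirement=0.02):
    # 1 = buy, -1 = sell, 0 = hold, based on an assumed 2% target move
    for change in future_changes:
        if change > requirement:
            return 1
        if change < -requirement:
            return -1
    return 0

def make_labels(df, ticker, hm_days=7):
    # Percentage change of the ticker over each of the next hm_days days
    for i in range(1, hm_days + 1):
        df[f"{ticker}_{i}d"] = (df[ticker].shift(-i) - df[ticker]) / df[ticker]
    # Map the future-change columns to a single buy/sell/hold target label
    df[f"{ticker}_target"] = list(map(
        buy_sell_hold, *[df[f"{ticker}_{i}d"] for i in range(1, hm_days + 1)]
    ))
    return df
```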
RAM: 4 GB
Python is Interactive − You can actually sit at a Python prompt and interact with
the interpreter directly to write your programs.
History Of Python:
Python was developed by Guido van Rossum in the late eighties and early nineties at
the National Research Institute for Mathematics and Computer Science in the
Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++,
Algol-68, Smalltalk, and UNIX shell and other scripting languages.
Python is copyrighted. Like Perl, Python source code is now available under the
GNU General Public License (GPL).
System Environment
A broad standard library − The bulk of Python's library is very portable and
cross-platform compatible on UNIX, Windows, and Macintosh.
Interactive Mode − Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
Portable − Python can run on a wide variety of hardware platforms and has the
same interface on all platforms.
Extendable − You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more
efficient.
Databases − Python provides interfaces to all major commercial databases.
GUI Programming − Python supports GUI applications that can be created and
ported to many system calls, libraries and windows systems, such as Windows
MFC, Macintosh, and the X Window system of Unix.
Scalable − Python provides a better structure and support for large programs
than shell scripting. Apart from the above-mentioned features, Python has a big
list of good features, a few of which are listed below −
It supports functional and structured programming methods as well as OOP.
It can be used as a scripting language or can be compiled to byte-code for
building large applications.
It provides very high-level dynamic data types and supports dynamic type
checking.
It supports automatic garbage collection.
It can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java.
Python is available on a wide variety of platforms including Linux and Mac
OS.
Open a terminal window and type "python" to find out if it is already installed
and which version is installed.
Unix (Solaris, Linux, FreeBSD, AIX, HP/UX, SunOS, IRIX, etc.)
Win 9x/NT/2000
Macintosh (Intel, PPC, 68K)
OS/2
DOS (multiple versions)
PalmOS
Nokia mobile phones
Windows CE
Acorn/RISC OS
BeOS
Amiga
VMS/OpenVMS
QNX
VxWorks
Psion
Python has also been ported to the Java and .NET virtual machines
Getting Python:
The most up-to-date and current source code, binaries, documentation, news,
etc., is available on the official website of Python https://www.python.org/
Installing Python:
Here is a quick overview of installing Python on various platforms −
Unix and Linux Installation:
Windows Installation:
Run the downloaded file. This brings up the Python install wizard, which is
really easy to use. Just accept the default settings, wait until the install is
finished, and you are done.
Macintosh installation:
Recent Macs come with Python installed, but it may be several years out of
date.
Setting Up Path:
In Mac OS, the installer handles the path details. To invoke the Python
interpreter from any particular directory, you must add the Python directory to your
path.
To add the Python directory to the path for a particular session in Windows −
At the command prompt − type path %path%; C:\Python and press Enter.
Running Python:
1. Interactive Interpreter
You can start Python from UNIX, DOS, or any other system that provides you
a command line interpreter or shell window.
The interpreter accepts the following command-line options:
-O : generate optimized bytecode (resulting in .pyo files).
-S : do not run import site to look for Python paths on startup.
-v : verbose output (detailed trace on import statements).
-X : disable class-based built-in exceptions (just use strings); obsolete starting with version 1.6.
-c cmd : run a Python script sent in as the cmd string.
file : run a Python script from the given file.
A Python script can be executed at the command line by invoking the interpreter on
your script file.
You can run Python from a Graphical User Interface (GUI) environment as well, if
you have a GUI application on your system that supports Python.
UNIX − IDLE is the very first Unix IDE for Python.
Windows − PythonWin is the first Windows interface for Python and is an
IDE with a GUI.
Macintosh − The Macintosh version of Python along with the IDLE IDE is
available from the main website, downloadable as either MacBinary or
BinHex'd files.
If you are not able to set up the environment properly, then you can take help
from your system admin. Make sure the Python environment is properly set
up and working perfectly fine.
Python implementation was started in December 1989 by Guido van
Rossum. It is an open-source language that is easy to learn and easy to read, with a
very friendly and interactive environment for beginners. Its standard library is made up
of many functions that come with Python when it is installed. It is object-oriented
and functional, and it is easy to interface with C, Objective-C, Java, and FORTRAN.
Python is a very interactive language because it takes less time to provide results.
Python is often used for:
Web development
Scientific programming
Desktop GUIs
Network programming
Game programming.
Advantages of Python
MySql
MySQL is an open-source DBMS developed, supported, and distributed by
Oracle Corporation. MySQL is easy to use, extremely powerful, supported, and
secure. It is an ideal database solution for web sites because of its small size and
speed.
4. The MySQL database server is very fast, reliable, scalable, and easy to use.
The software system should meet its requirements and user expectations and should not
fail in an unacceptable manner. There are various types of tests; each test type
addresses a specific testing requirement.
Types Of Tests
Unit Testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is the
testing of individual software units of the application; it is done after the completion
of an individual unit and before integration. This is structural testing that relies on
knowledge of the unit's construction and is invasive. Unit tests perform basic tests at
the component level and test a specific business process, application, and/or system
configuration. Unit tests ensure that each unique path of a business process performs
accurately to the documented specifications and contains clearly defined inputs and
expected results.
Integration Testing
Functional Testing
System Testing
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.
White Box Testing is testing in which the software tester has
knowledge of the inner workings, structure, and language of the software, or at least
of its purpose. It is used to test areas that cannot be reached from a black-box
level.
Black Box Testing is testing the software without any knowledge of the inner
workings, structure, or language of the module being tested. Black-box tests, like most
other kinds of tests, must be written from a definitive source document, such as a
specification or requirements document. It is testing in which the software under test
is treated as a black box: you cannot "see" into it. The test provides inputs and
responds to outputs without considering how the software works.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.
Field testing will be performed manually and functional tests will be written
in detail.
Test objectives
Features to be tested
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
Acceptance Testing :
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
Unit Testing:
Unit testing focuses verification effort on the smallest unit of Software design
that is the module. Unit testing exercises specific paths in a module’s control
structure to ensure complete coverage and maximum error detection. This test
focuses on each module individually, ensuring that it functions properly as a unit.
Hence, the naming is Unit Testing.
During this testing, each module is tested individually and the module
interfaces are verified for consistency with the design specification. All important
processing paths are tested for the expected results. All error-handling paths are also
tested.
Integration Testing
Integration testing addresses the issues associated with the dual problems of
verification and program construction. After the software has been integrated, a set of
high-order tests is conducted. The main objective in this testing process is to take
unit-tested modules and build a program structure that has been dictated by the design.
1. Top-down Integration
Modules are integrated by moving downward through the control hierarchy,
beginning with the main program module. The modules subordinate to the main
program module are incorporated into the structure in either a depth-first or
breadth-first manner. In this method, the software is tested from the main module,
and individual stubs are replaced as the test proceeds downwards.
2. Bottom-up Integration
This method begins the construction and testing with the modules at the lowest
level in the program structure. Since the modules are integrated from the bottom up,
processing required for modules subordinate to a given level is always available and
the need for stubs is eliminated. The bottom up integration strategy may be
implemented with the following steps:
The low-level modules are combined into clusters that perform a specific
software sub-function.
A driver (i.e., a control program for testing) is written to coordinate test
case input and output.
The cluster is tested.
Drivers are removed and clusters are combined, moving upward in the
program structure.
The bottom-up approach tests each module individually; each module is then
integrated with a main module and tested for functionality.
User Acceptance of a system is the key factor for the success of any system.
The system under consideration is tested for user acceptance by constantly keeping
in touch with the prospective system users at the time of developing and making
changes wherever required. The system developed provides a friendly user interface
that can easily be understood even by a person who is new to the system.
Output Testing
After performing the validation testing, the next step is output testing of the
proposed system, since no system could be useful if it does not produce the required
output in the specified format. Asking the users about the format required by them tests
the outputs generated or displayed by the system under consideration. Hence the output
format is considered in 2 ways – one is on screen and another in printed format.
CHAPTER 4
APPLICATIONS
4.1 ARIMA APPLICATIONS
1. Natural Language Processing (NLP): LSTM models are used in NLP for tasks
such as language translation, sentiment analysis, and text generation. By
processing and analyzing the context of the text, LSTM models can generate more
accurate and meaningful results compared to traditional machine learning models.
2. Speech Recognition: LSTM models are used in speech recognition for tasks
such as voice recognition and speech-to-text conversion. By analyzing the context
and temporal patterns of speech, LSTM models can generate more accurate and
robust results compared to traditional models.
3. Image and Video Recognition: LSTM models are used in image and video
recognition for tasks such as object detection, facial recognition, and gesture
recognition. By analyzing the temporal patterns and context of the images or video
frames, LSTM models can generate more accurate and detailed results compared to
traditional models.
4. Health Care: LSTM models are used in health care for tasks such as medical
image analysis, disease diagnosis, and patient monitoring. By analyzing the
patient's medical history and identifying the temporal patterns and context of their
symptoms, LSTM models can generate more accurate and timely diagnosis and
treatment recommendations.
5. Financial Forecasting: LSTM models are used in finance for tasks such as stock
price prediction, risk management, and fraud detection. By analyzing the historical
patterns and context of financial data, LSTM models can generate more accurate
and reliable predictions and help investors make informed decisions.
6. Autonomous Driving: LSTM models are used in autonomous driving for tasks
such as object detection, lane recognition, and pedestrian detection. By analyzing
the temporal patterns and context of the driving environment, LSTM models can
help self-driving cars make more accurate and safe decisions.
CHAPTER 5
EXPERIMENTAL ANALYSIS
Fig 5.3: Output of the predicted stock entered in the text field; here it is AAPL.
Fig 5.4: Output of existing models, such as the KNN and ARIMA model accuracy on stocks.
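The predicted-versus-actual scatter visualization described in the abstract can be produced with a few lines of matplotlib; the arrays below are stand-ins, since the real values come from the trained model.

```python
# Sketch of the predicted-vs-actual scatter plot, colour-coded by absolute error.
import numpy as np
import matplotlib.pyplot as plt

actual = np.cumsum(np.random.randn(200)) + 150        # stand-in for actual closing prices
predicted = actual + np.random.randn(200) * 2         # stand-in for model predictions

error = np.abs(predicted - actual)
plt.scatter(predicted, actual, c=error, cmap="viridis")
plt.colorbar(label="Absolute error")
plt.xlabel("Predicted price")
plt.ylabel("Actual price")
plt.title("Predicted vs actual closing prices")
plt.show()
```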
CHAPTER 6
CONCLUSION AND FUTURE SCOPE
Through this study, it can be seen that deep learning algorithms have a
significant influence on modern technologies, especially for developing different time-
series-based prediction models. For stock price prediction, they can achieve higher
accuracy than other regression models. Among the different
deep learning models, both LSTM and BI-LSTM can be used for stock price
prediction with proper adjustment of their parameters. In developing any kind of
prediction model, the adjustment of these parameters is very important, as the
prediction accuracy depends significantly on them; the LSTM and
BI-LSTM models therefore also require proper parameter tuning. Using the same
parameters for both models, the BI-LSTM model generates a lower RMSE than the
LSTM model. Our proposed prediction model using BI-LSTM can therefore be used
by individuals and ventures for stock market forecasting. This can help investors
gain financial benefit while maintaining a sustainable environment in the stock
market. In future work, we plan to analyze data from more stock markets of
different categories to investigate the performance of our approach.