Decoding Stocks Patterns Using LSTM
Copyright © 2024 The Author(s): This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC BY-NC 4.0)
Zalak L. Thakker et al Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol., May-June-2024, 10 (3) : 306-310
Forget gate, and Output gate. These gates control the flow of information through the cell.

As data enters the LSTM network, the input gate determines which values from the input should be updated. The forget gate then plays a crucial role by deciding which information from the previous cell state should be discarded, effectively "forgetting" irrelevant data. Finally, the output gate selects the useful information that will be passed on to the next cell in the sequence. This gated structure enables LSTM networks to maintain long-term dependencies, making them highly effective for tasks that involve sequential data, such as natural language processing and time-series forecasting.

In developing the stock-prediction system, the initial step involves careful data selection. This process includes gathering a comprehensive dataset and dividing it into two distinct subsets: one for training the model and one for testing its performance. For our study, we designated 75% of the dataset for training, ensuring that the model has ample data to learn from. The remaining 25% was reserved for testing, allowing us to evaluate the model's effectiveness and its ability to generalize to unseen data. This approach to data allocation helps in building a robust and reliable prediction system by ensuring that the model is both well trained and thoroughly tested.

Pre-processing of data: In the pre-processing phase, we focus on selecting the attributes necessary for the algorithm and disregarding those that are not. The specific attributes chosen for this process are Trade Open, Trade High, Trade Low, and Trade Close. These attributes are pivotal because they provide crucial insight into trading patterns and market behavior, which is essential for accurate analysis. Once the relevant attributes are selected, we apply normalization to bring their values into a specific range. Normalization is a critical pre-processing step: it ensures that all attributes contribute equally to the analysis, preventing any single attribute from disproportionately influencing the model because of its scale. Scaling the values also speeds up the convergence of the algorithm and improves its overall performance. This methodical approach to pre-processing not only streamlines the dataset but also enhances the algorithm's ability to learn effectively from the data, leading to more accurate and reliable results in the subsequent stages of the study.

Prediction using LSTM: Within this framework, we employ the LSTM algorithm to predict stock values. First, the model is fitted on the training data. Then, during the testing phase, the predicted values are compared with the actual values to gauge the model's accuracy and efficacy.

The versatility of LSTM extends beyond stock prediction; it finds applications in diverse domains such as weather forecasting, natural language processing, speech and handwriting recognition, and general time-series prediction. LSTM's ability to handle sequential data, retain long-term dependencies, and mitigate the vanishing-gradient problem makes it particularly well suited for these tasks.

The script uses a neural network model consisting of three LSTM layers with 50 units each, followed by a dense layer with one unit. The input shape for the
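The gating mechanism described earlier can be sketched as a single LSTM time step in plain NumPy. This is an illustrative sketch, not the paper's implementation; the function and parameter names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold the stacked parameters for the
    input (i), forget (f), and output (o) gates and the candidate cell (g)."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b        # pre-activations for all four blocks, shape (4n,)
    i = sigmoid(z[0:n])               # input gate: which values to update
    f = sigmoid(z[n:2*n])             # forget gate: what to discard from the old cell state
    o = sigmoid(z[2*n:3*n])           # output gate: what to pass on to the next cell
    g = np.tanh(z[3*n:4*n])           # candidate cell values
    c = f * c_prev + i * g            # new cell state carries long-term memory
    h = o * np.tanh(c)                # new hidden state exposed to the next step
    return h, c
```

Because the cell state `c` is updated additively (forget old content, add gated new content), gradients can flow across many time steps, which is the source of LSTM's long-term memory.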
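The pre-processing steps (min-max normalization of the selected price columns and a 75/25 train/test split) can be sketched as below. The helper names are hypothetical, and the split is chronological, a common assumption for time-series data that the excerpt does not spell out.

```python
import numpy as np

def min_max_scale(X):
    """Scale each column to [0, 1]; returns the scaled data plus the
    per-column (min, max) needed to invert predictions back to price units.
    Assumes no column is constant (otherwise hi - lo would be zero)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo), lo, hi

def train_test_split_75_25(X):
    """Chronological 75/25 split: no shuffling, so the test set never
    leaks future information into training."""
    cut = int(len(X) * 0.75)
    return X[:cut], X[cut:]
```

In practice the four columns would be the Trade Open, High, Low, and Close attributes mentioned above.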
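The three-layer architecture described in the text (three LSTM layers of 50 units, then a one-unit dense layer) can be sketched with the Keras API. The 60-step lookback window and single input feature are assumptions, since the excerpt is cut off before stating the input shape.

```python
import tensorflow as tf

TIMESTEPS, FEATURES = 60, 1  # assumed lookback window and feature count, not from the paper

model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(50, return_sequences=True),  # 1st LSTM layer, 50 units
    tf.keras.layers.LSTM(50, return_sequences=True),  # 2nd LSTM layer, feeds full sequence onward
    tf.keras.layers.LSTM(50),                         # 3rd LSTM layer emits only the last hidden state
    tf.keras.layers.Dense(1),                         # one-unit output: the predicted value
])
model.compile(optimizer="adam", loss="mean_squared_error")
```

The intermediate layers use `return_sequences=True` so each stacked LSTM receives the full sequence; only the last LSTM collapses it to a single vector for the dense output.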
V. EXPERIMENTAL RESULTS
Stock Table:
Apple: