
2024 International Conference on Computing and Data Science (ICCDS-2024)

Analysis of Stock Market Predicting Future Trend using ML

V.P. Murugan, Department of Mathematics, Panimalar Engineering College, Varadharajapuram, Poonamallee, Chennai - 600 123. [email protected]
V.R. Thejeshwar, Department of CSE, Panimalar Engineering College, Varadharajapuram, Poonamallee, Chennai - 600 123. [email protected]
A. Vijayaraj, Department of IT, R.M.K. Engineering College, RSM Nagar, Kavaraipettai, Chennai - 601 206. [email protected]
K. Saravanan, Department of IT, R.M.K. Engineering College, RSM Nagar, Kavaraipettai, Chennai - 601 206. [email protected]
R. Megavannan, Faculty of Management, SRM Institute of Science and Technology, Kattankulathur. [email protected]
P. Rajeswari, Department of CSE, R.M.D. Engineering College, RSM Nagar, Kavaraipettai, Chennai - 601 206. [email protected]

979-8-3503-6533-7/24/$31.00 ©2024 IEEE | DOI: 10.1109/ICCDS60734.2024.10560379

Abstract-Predicting future movements in stock prices is a topic of intense interest in the world of Fintech. The non-stationary dynamics and complicated interplay of the stock market make effective stock profiling difficult. The majority of currently available methods either treat each stock individually or look only for basic, uniform patterns. In practice, there are many potential sources of stock market connection, and signs of underlying relationships are often hidden in elaborate graphs. We propose a Hierarchical Adaptive Temporal-Relational Interaction model that cascades dilated convolutions and gating routes to understand the regularities of dynamic transitions in the stock market. In particular, stock pair matching happens at each time stage rather than waiting for the last flattened representations, while relevant feature points and enhancements are determined taking time attenuation into account. Lastly, we optimize the stock representations using a regularized global cluster representation. The efficacy of our suggested model is demonstrated experimentally on three actual stock market datasets.

Keywords: Ensemble modelling, Time series, neural network, Investment decision, stock market.

I. INTRODUCTION

In the modern era machine learning plays a vital role in various fields such as agriculture [1,2] and health care [3,4]. Because of the market's growing value, securities trading has developed into an important component of the economic structure. Stock trend prediction, which seeks to automatically assess potential deviations in stock prices, is becoming increasingly popular among researchers and businesspeople. Statistical and ML-based time sequence techniques such as ARIMA, SVM, and Kalman Filters serve as the foundation for traditional approaches to stock trend prediction. Due to listed companies' extremely non-stationary dynamics and intricate interdependencies, this endeavor is intrinsically challenging. As artificial intelligence (AI) advances, a slew of deep neural models [5] have emerged; by interpreting intricate hidden signals in technical, fundamental, relational, and social media data, they may predict price movements.
Researchers used the Discrete Fourier Transform and SFM, for example, to uncover multi-frequency stock interchange patterns that had earlier been hidden inside an LSTM network. Although stock forecasting has made strides based on neural networks, most existing approaches still have trouble simultaneously modelling the detailed relational and temporal market environmental information. Recurrent Neural Networks (RNNs) are a popular choice; however, RNNs frequently have difficulty capturing intricate feature units across local time snippets and long-term dependencies. Despite these challenges, the temporal-relational model is useful for predicting stock market trends.

Figure 1. Temporal-relational views

In contrast to general time series problems, the interdependence of stock dynamics is not trivial. Unfortunately, most current research either considers each stock to be independent from all others, or bridges linked stocks based solely on heuristic methods [6] or predetermined homogeneous graph architectures [11]. In reality, the stock market's links stem from many other places. Figure 1 shows the price dynamics of many related stock pairs. Controlling inter-stock collective synergy in a large setting is difficult because a stock vertex can interact with multiple neighbors with dissimilar semantics.
In this paper, guided by Figure 1, we use three insights to reevaluate how relational model sources are represented and interact with one another.

(a) The multiplier effect of relationships. (b) Because of variations in dynamic amplitude and frequency, the appropriate stock in a stock market shifts with time; accordingly, signals in the temporal-relational duplex are mostly fixed, and stock estimates may be enhanced by comprehending the connections between them. (c) We have implicit auto-association: it appears difficult to assume that specific types of known relations adequately reflect numerous complicated market forces. In response to these challenges, we provide three original contributions:

• To improve the spread and consolidation of inter-stock correlation signals, we propose combining them into a single attribute.
• Two convergent methods of synchronizing temporal-relational data are examined: instead of waiting until compressed temporal representations are obtained, stocks interact with one another at each stage.
• We present a comprehensive analysis of the end-to-end manner in which all the mechanisms operate.

Comprehensive comparative studies demonstrate the efficacy of HATR-I: the average findings for ACC, AUC, F1, and MCC improve on HATR by a margin of (3.2%, 6.2%, 3.12%, 16.6%). In order to prove the efficiency of each part, we conduct exhaustive ablation studies that provide understanding of how the several relational data sources are incorporated.

II. RELATED WORK

An examination of numerical data reveals the widespread use of time series modelling to forecast stock market trends. While many classification and regression methods attempt to capture volatility patterns, determining appropriate technical characteristics necessitates considerable expertise. Typically, RNN-based models are used to incorporate sequential dependencies. DARNN, for example, uses attention mechanisms to improve LSTM by identifying critical conditions and input signals [7]. Meanwhile, HMG-TF combines a Transformer and a Gaussian prior to model stock price series. However, these approaches primarily focus on critical temporal moments regarding the distinct characteristics of individual stocks. Several studies have been conducted as researchers delve deeper into leveraging other indications for improved stock market predictions. Despite advancements, these methods frequently assume independence between stock movements, which limits their ability to resolve the complex interactions found in financial markets. GNNs have emerged as powerful tools in a variety of fields due to their ability to learn interrelationships on graphs. Although computationally demanding, early experiments with recurrent approaches successfully capture nearby features to supplement the representation of the target node [9]. Recent advances have resulted in the development of numerous methods, particularly in spectral or spatial categories, that use convolutional operators and attention mechanisms to analyze graph data. These approaches use spatial-based methods to propagate neighboring data by leveraging graph convolutions [10]. A. Vijayaraj and S. Indhuja [13] discussed the auditing node, which is used to gather data from every node and choose a reliable route to prevent traffic jams and increase data security. The results reported by M.S. Gayathri et al. [14] show that their proposal outperforms the others, that input errors have no detrimental effect on its effectiveness, and include a cost comparison analysis. R. Srinivasan and A. Vijayaraj [15] proposed new approaches based on mixed integer programming formulations; they also took into consideration the allocation of limited bandwidth. According to A. Vijayaraj et al. [16], congestion occurs close to the destination and is addressed by elliptical priority scheduling, in which a time slot is allocated for each packet's processing [17] and packets are sent using a priority algorithm.

III. PROPOSED METHOD

A. Predicting Stock Trends with Graphs
Stock prediction and future price activities are separated into two categories. Assuming S denotes a group of stocks, the resulting network G = (S, E, X) represents the various connections between the companies. Each node carries a time series composed of its historical stock market indicators. The feature matrix X is obtained from the input of all nodes, where Ds is the initial feature dimension.
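For concreteness, the sketch below shows one plausible way to hold the inputs just described: the stock set S, the edge set E, and the historical indicator matrix X with feature dimension Ds. It is an illustration only; the class name, tickers, relation labels, and shapes are assumptions, not part of the paper.

```python
import numpy as np

# Hypothetical container for the graph G = (S, E, X) described above.
# S: list of stock tickers; E: list of (source, target, relation) edges;
# X: historical indicator windows, one (T x Ds) matrix per stock.

class StockGraph:
    def __init__(self, tickers, edges, features):
        self.tickers = list(tickers)                  # S
        self.index = {t: i for i, t in enumerate(self.tickers)}
        self.edges = list(edges)                      # E, e.g. ("AAA", "BBB", "industry")
        self.features = np.asarray(features)          # X with shape (|S|, T, Ds)

    def adjacency(self, relation):
        """Binary adjacency matrix for a single relation type."""
        n = len(self.tickers)
        A = np.zeros((n, n))
        for src, dst, rel in self.edges:
            if rel == relation:
                A[self.index[src], self.index[dst]] = 1.0
        return A

# Example: 3 stocks, 30 trading days, Ds = 5 indicators (placeholders).
X = np.random.rand(3, 30, 5)
g = StockGraph(["AAA", "BBB", "CCC"],
               [("AAA", "BBB", "industry"), ("BBB", "CCC", "topicality")], X)
print(g.adjacency("industry").shape)   # (3, 3)
```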
Figure 2. Outline of HATR-I

B. Outline of HATR-I
Figure 2 depicts the structure of HATR-I, which forecasts future stock price movements using previous information and networks derived from various relationships. HATR-I contains three key elements. It begins with the creation of a multi-level temporal-relational block that captures various aspects of stock unpredictability and relationship. Following that, a scaling layer, guided by stock- and target-specific queries, combines embeddings from previous layers and generates standardized stock representations for the final forecast. A soft clustering layer is also included to identify common patterns.

C. Time-Relational and Hierarchical Building Blocks
We present the hierarchical temporal-relational building blocks that form the foundation of the multiscale embedding hierarchy. The Temporal Modelling module (T-Module) captures the sequential dynamics of each stock, whereas the Relational module (R-Module) transforms and distributes the impact of interconnected pairs within a multipart network with edge properties. At the topmost layer of each block, a time-wise operator is used to combine the embedded sequence and identify critical points in the process.
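A minimal sketch of the stacking idea described above is given below, assuming (for illustration) a simple temporal block and a simple relational block applied alternately at each level; the real T-Module and R-Module are defined in the following subsections, so the blocks here are merely stand-ins with assumed shapes.

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """Placeholder temporal operator: a same-length dilated 1-D convolution."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=2, dilation=2)
    def forward(self, x):                      # x: (stocks, time, dim)
        return torch.relu(self.conv(x.transpose(1, 2)).transpose(1, 2))

class RelationalBlock(nn.Module):
    """Placeholder relational operator: mix each stock with its graph neighbors."""
    def __init__(self, dim):
        super().__init__()
        self.mix = nn.Linear(dim, dim)
    def forward(self, x, adj):                 # adj: (stocks, stocks)
        neigh = torch.einsum("ij,jtd->itd", adj, x)   # aggregate neighbor features
        return torch.relu(self.mix(x + neigh))

class HierarchicalEncoder(nn.Module):
    """Stack temporal/relational blocks and keep every level for later aggregation."""
    def __init__(self, dim, levels=4):
        super().__init__()
        self.t_blocks = nn.ModuleList(TemporalBlock(dim) for _ in range(levels))
        self.r_blocks = nn.ModuleList(RelationalBlock(dim) for _ in range(levels))
    def forward(self, x, adj):
        per_level = []
        for t_blk, r_blk in zip(self.t_blocks, self.r_blocks):
            x = r_blk(t_blk(x), adj)           # temporal then relational at each scale
            per_level.append(x)
        return per_level                       # multi-scale embeddings

x = torch.randn(5, 30, 16)                     # 5 stocks, 30 days, 16 features
adj = (torch.rand(5, 5) > 0.5).float()
levels = HierarchicalEncoder(16)(x, adj)
print(len(levels), levels[0].shape)            # 4 torch.Size([5, 30, 16])
```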

Figure 3. Overview of the T-Module

D. Sequential Modeling with the T-Module
Figure 3 depicts the T-Module's L layers. A scaled dot product directs attention to analogous steps (keys), facilitating the enhancement of each step by aligning the queries Q, keys K and values V obtained from the matching input X:

$\mathrm{Att}_{self}(X) = \mathrm{softmax}\!\big(QK^{\top}/\sqrt{d}\big)\,V, \quad Q = XW_Q,\; K = XW_K,\; V = XW_V.$   (1)

In the row corresponding to the i-th time step, we consolidate all similarity coefficients of the sequence into a unified vector denoted as $\mathrm{Att}_{self}[i]$. Adding a residual connection, we update $\bar{X}_i$ by incorporating $\mathrm{Att}_{self}[i]$.
We then apply a GLU and a dilated CNN [11], which include skip connections, to capture the pattern. Specifically, the dilated convolution with dilation rate $d_l$ makes use of a kernel $K_f$ of size $2\omega+1$ and is expressed as:

$(\bar{X} *_{d_l} K_f)(t) = \sum_{j=-\omega}^{\omega} K_f(j)\, \bar{X}_{t - d_l j}.$   (2)

The accumulation of dilation rates is used to regulate the gap between skips. This allows for the progressive accumulation of information across different time scales, which decreases processing requirements and mitigates the data loss caused by downsampling. The efficacy of gating in RNN architectures has motivated its implementation here:

$\mathrm{GLU}(X) = (X * \Theta_1 + b_1) \odot \sigma(X * \Theta_2 + b_2).$   (3)

This uses two series of kernels, labeled 1 and 2, with biases $b_1$ and $b_2$. The sigmoid gate $\sigma$ regulates the data-to-noise ratio and is followed by element-wise multiplication. Stacking the gated dilated convolutions, where $d_l$ denotes the dilation of layer $l$, yields the layer-$l$ hidden state:

$h^{(l)}_t = \mathrm{GLU}\big([\,h^{(l-1)}_{t-\omega d_l}, \ldots, h^{(l-1)}_{t}\,] * K^{(l)}_f\big).$   (4)
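As a rough illustration of the two operations in this subsection, scaled dot-product self-attention over time steps (Eq. (1)) followed by a gated, dilated convolution (Eqs. (2)-(3)), the following PyTorch sketch can be used; layer sizes, kernel width, and the exact composition are assumptions rather than the authors' implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalSelfAttention(nn.Module):
    """Scaled dot-product attention across the time axis of one stock sequence."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
    def forward(self, x):                       # x: (batch, time, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        att = F.softmax(scores, dim=-1) @ v     # Eq. (1)-style attention
        return x + att                          # residual update of X̄

class GatedDilatedConv(nn.Module):
    """Causal dilated convolution with a GLU-style sigmoid gate (Eqs. (2)-(3))."""
    def __init__(self, dim, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = dilation * (kernel_size - 1)          # left padding keeps causality
        self.conv_a = nn.Conv1d(dim, dim, kernel_size, dilation=dilation)
        self.conv_b = nn.Conv1d(dim, dim, kernel_size, dilation=dilation)
    def forward(self, x):                       # x: (batch, time, dim)
        xt = F.pad(x.transpose(1, 2), (self.pad, 0))     # (batch, dim, time+pad)
        out = self.conv_a(xt) * torch.sigmoid(self.conv_b(xt))
        return out.transpose(1, 2) + x                   # skip connection

seq = torch.randn(8, 30, 32)                    # 8 stocks, 30 days, 32 features
h = GatedDilatedConv(32)(TemporalSelfAttention(32)(seq))
print(h.shape)                                  # torch.Size([8, 30, 32])
```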
E. The Relational Modeling R-Module
By merging stock graphs from different domain connections, we construct a single multiplex network with edge features. As illustrated in Figure 2, we employ a tensor of adjacency matrices, one for each relation type in R, in order to represent the cross-effects among stocks.

(1) Industry Graph (GI). In general, comparable-sector stocks have a distinct lead-lag connection, whereby certain stocks consistently surpass or fall short of their counterparts in return volatility [12]. Firm size is evaluated using the measures C and T, and the effect value of intra-industry stocks is determined using an indicator function that tests whether the ratio of these measures between the two firms exceeds one.

(2) Topicality Graph (GT). The context provided by first- and second-order links in Wikidata is extremely valuable. If stocks A and B are linked via an in-between entity M, this can indicate, for example, a supplier-consumer relationship; the corresponding scheme is A connecting to M and M connecting back to B. A large number of stockholders also share their portfolios and exchange opinions on various social media. We employ a financial lexicon for attitude recognition, drawing on [3], in order to discover linked bullish/bearish stock pairings. Next, we link the important neighboring nodes by using filtering approaches.

(3) Shareholding Graph (GS). Under Jon Hon's leadership, China's leading aviation producer employs syntactically similar patterns involving stockholders. Using this data, we create a stock graph by connecting stocks that share common investors.
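The sketch below illustrates how the three relation types described above could be assembled into the multiplex adjacency tensor used by the R-Module. The tickers and pair lists are made-up placeholders, and the final fusion rule merely anticipates the GF construction of the next subsection.

```python
import numpy as np

# Illustrative assembly of the multiplex relational tensor: one |S| x |S| adjacency
# matrix per relation type (industry, topicality, shareholding), stacked into a
# tensor of shape (|R|, |S|, |S|).

tickers = ["AAA", "BBB", "CCC", "DDD"]
idx = {t: i for i, t in enumerate(tickers)}

relation_pairs = {
    "industry":     [("AAA", "BBB")],                  # same sector
    "topicality":   [("BBB", "CCC"), ("AAA", "CCC")],  # linked via a shared entity
    "shareholding": [("CCC", "DDD")],                  # common major investors
}

def build_adjacency(pairs, n):
    A = np.zeros((n, n))
    for a, b in pairs:
        A[idx[a], idx[b]] = A[idx[b], idx[a]] = 1.0    # undirected link
    return A

tensor = np.stack([build_adjacency(p, len(tickers))
                   for p in relation_pairs.values()])   # (|R|, |S|, |S|)

# A fused graph GF can then be obtained, e.g., by OR-ing the relation slices
# and adding self-loops, as the fusion step in Section III-F suggests.
fused = (tensor.sum(axis=0) > 0).astype(float) + np.eye(len(tickers))
print(tensor.shape, fused.shape)   # (3, 4, 4) (4, 4)
```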

F. Attention Network for Multiplex Graphs
In the earlier HATR model, the adjacency matrices of the several stock graph types were combined using a concatenation operator after diffusion convolutions on them:

$h = \Theta_A\big[\oplus_{r=1}^{|R|} D_r F_r\big], \quad \forall r \in R.$   (5)

(1) Graph fusion. For every node, the three homogeneous stock graphs (GI, GT, and GS) come together to produce a single adjacency matrix GF. In GF, every node additionally receives a self-loop connection so that it retains its own information:

$\varepsilon^{F}_{i,j} = \mathbb{1}\big(\exists\, r \in R : \varepsilon^{r}_{i,j} \neq 0\big)\ \vee\ \mathbb{1}(i = j),$   (6)

where $\varepsilon^{F} \in \{0,1\}^{|S|\times|S|}$ and an entry is set whenever the pair of stocks is related under at least one relation type. In addition, we build an attribute matrix that summarizes the set of meta-relations describing the type of relationship between connected nodes:

$e^{F}_{i,j} = \frac{1}{2|R|}\sum_{r=1}^{|R|} \mathbb{1}\big(\varepsilon^{r}_{i,j} \neq 0\big)\, 2^{\,r-1}.$   (7)

(2) Graph diffusion. We provide a dual attention strategy that uses graph diffusion to produce an ideal abstraction by taking into account both node significance and semantics. Multi-headed attention is utilized to quantify node significance:

$h^{(k+1)}_i = \big\Vert_{m=1}^{M} \sum_{j \in \mathcal{N}(i)} \alpha^{m}_{ij}\, W^{m} h^{(k)}_j,$   (8)

$\alpha^{m}_{ij} = \mathrm{softmax}_{j \in \mathcal{N}(i)}\big(\mathrm{LeakyReLU}\big(a_m^{\top}[\,W^{m} h_i \,\Vert\, W^{m} h_j\,]\big)\big),$   (9)

where $\Vert$ denotes averaging the outputs of the M attention heads. Specifically, to measure the weight distribution over several neighbors, the projection of edge features is utilized in combination with the multiple heads:

$h^{(k+1)}_i = \big\Vert_{p=1}^{M} \sum_{j \in \mathcal{N}(i)} \beta^{p}_{ij}\, W_p h^{(k)}_j,$   (10)

$\beta^{p}_{ij} = \mathrm{softmax}_{j \in \mathcal{N}(i)}\big(g^{p}_{i,j}\big),$   (11)

$g^{p}_{i,j} = \mathrm{LeakyReLU}\big(e_{ij}\,(W^{p}_1 h_i + b^{p}_1)^{\top}(W^{p}_2 h_j + b^{p}_2)\big),$   (12)

where $e_{ij} = W_R[\varepsilon_{i,j}]$, with $W_R \in \mathbb{R}^{(2|R|+1)\times u}$, is an edge-attribute embedding that is randomly initialized within the range [-0.1, 0.1]. The attention coefficient of the p-th head at the k-th hop of propagation is normalized. New contextual vector information is provided to improve the target stock representation when the two low-dimensional diffusion message kinds that emerge are run concurrently:

$h^{att}_i = \mathrm{relu}\big(W_{\psi}\,[\,h^{node}_i \oplus h^{sem}_i\,] + b_{\psi}\big),$   (13)

where $\oplus$ denotes concatenation and $W_{\psi}$, $b_{\psi}$ are learnable parameters.
G. Coordination of the Temporal and Relational Strategies
Our attention is focused on two distinct approaches: final-stage relational matching and multi-stage relational (MSR) matching. As a result, peer influences on stocks vary according to the granularity and timing of their individual characteristics. These approaches are investigated further using graph theory and stock interactions. The temporal and relational streams are blended with a gating coefficient:

$h_s = W_{\psi}\,[\,\theta\, h^{(T)}_s \oplus (1-\theta)\, h^{(R)}_s\,] + b_{\psi},$   (14)

where $\theta \in [0,1]$ is the blending coefficient, $\oplus$ denotes concatenation, and $W_{\psi}$, $b_{\psi}$ set the width of the concatenated representation.

H. Classification Combination Across Several Scales
When it comes to time aggregation, it is critical to keep in mind that historical trends in a stock's price movement may not be a reliable indicator of its future course. In particular, the weight at the l-th layer of the p-th time step is given by a time-attenuated softmax:

$\lambda^{(l)}_p = \frac{\exp\big(q_s^{\top} W h^{(l)}_p\big)}{\sum_{p'} \exp\big(q_s^{\top} W h^{(l)}_{p'}\big)} \cdot \frac{1}{1 + \exp\big(-\kappa\,\Delta t_p\big)},$   (15)

where $W$ is a learned transformation matrix, $\Delta t_p$ is the time difference of step $p$ relative to the prediction time, and $\kappa$ is a scaling factor. To summarize the l-th layer embeddings, we use $z^{(l)} = \sum_p \lambda^{(l)}_p h^{(l)}_p$. To generate a stock-specific query vector, we apply an embedding layer built on the stock's ID:

$q_s = \mathrm{relu}\big(W_q E_s + b_q\big),$   (16)

and the layer-level weights are obtained with a second softmax attention:

$u^{(l)} = \tanh\big(W_z z^{(l)} + b_z\big), \qquad \beta_l = \frac{\exp\big(q_s^{\top} u^{(l)}\big)}{\sum_{l'} \exp\big(q_s^{\top} u^{(l')}\big)},$   (17)

so that $z_s = \sum_l \beta_l\, z^{(l)}$. The final representations of all stocks form the tensor $Z \in \mathbb{R}^{|S| \times u}$.
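Below is a compact sketch of the time-attenuated aggregation of Eqs. (15)-(17), under the assumption that Δt_p is the (non-positive) offset of step p from the prediction time, so older steps receive smaller weights; the decay constant and all shapes are illustrative choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def time_attenuated_summary(layer_states, query, time_deltas, kappa=0.5):
    """Aggregate per-layer step embeddings with time-attenuated attention.
    layer_states: list of (T, d) tensors, one per hierarchy level.
    query: (d,) stock-specific query vector (Eq. (16) output).
    time_deltas: (T,) offsets t_p - t_now (<= 0), so older steps are damped."""
    decay = torch.sigmoid(kappa * time_deltas)             # small for old steps
    summaries = []
    for h in layer_states:                                 # h: (T, d)
        lam = F.softmax(h @ query, dim=0) * decay          # attention x attenuation
        lam = lam / lam.sum()                              # renormalize (Eq. (15) style)
        summaries.append((lam.unsqueeze(1) * h).sum(0))    # z^(l): (d,)
    Z = torch.stack(summaries)                             # (L, d)
    beta = F.softmax(torch.tanh(Z) @ query, dim=0)         # layer-level weights (Eq. (17))
    return (beta.unsqueeze(1) * Z).sum(0)                  # final stock vector z_s

L, T, d = 4, 16, 32
states = [torch.randn(T, d) for _ in range(L)]
deltas = torch.arange(-T + 1, 1).float()                   # -15, ..., 0
print(time_attenuated_summary(states, torch.randn(d), deltas).shape)  # torch.Size([32])
```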
I. Soft Cluster Regularization
As an alternative to manually classifying stocks, our technique employs a "soft projection matrix" in which each row represents the possibility that a stock belongs to more than one category. This approach has two primary purposes: stocks are segmented into groups, which allows for an impartial evaluation of potential stock correlations; and shared cluster embeddings simplify the framework by controlling dynamic stock profiling. A softmax ensures that the assignment probabilities of each stock sum to one: $D_{i,\cdot} = \mathrm{Softmax}(W_c z_i)$. The i-th hidden cluster $c_i$ is then formed as a weighted summary over the $|S|$ stocks involved:

$c_i = \sum_{j=1}^{|S|} D_{j,i}\, z_j.$   (18)

We also investigate how stocks relate to the latent clusters that different aggregation features imply. Unlike GNN-based methods, which calculate the distance between two clusters directly from their items, the cluster-level proximity is expressed through the stock graph:

$P_{i,j} = \sum_{m=1}^{|S|} \sum_{n=1}^{|S|} D_{m,i}\, A^{F}_{m,n}\, D_{n,j}.$   (19)

Recall that the entry $A^{F}_{m,n} \in \{0,1\}$ belongs to the adjacency matrix of the fused stock graph GF. We then modify and combine the embeddings of connected clusters using the acquired proximity scores to produce updated clusters:

$c^{\prime}_i = \mathrm{relu}\Big(\sum_{j=1}^{K} P_{i,j}\, W_{c'}\, c_j\Big).$   (20)

Inverting the cluster-production process allows for the creation of regularized stock embeddings, combining the assignment matrix with the shared clusters:

$z^{c}_i = \sum_{j=1}^{K} D_{i,j}\, c^{\prime}_j.$   (21)
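The soft-clustering step of Eqs. (18)-(21) can be written in a few matrix products, as the hedged sketch below shows; the assignment and cluster transformation matrices are assumed to be learnable parameters supplied by the surrounding model, and the sizes are placeholders.

```python
import torch
import torch.nn.functional as F

def soft_cluster_regularize(z, adj, w_assign, w_cluster):
    """Sketch of the soft-clustering step.
    z: (N, d) stock embeddings, adj: (N, N) fused adjacency, K latent clusters."""
    D = F.softmax(z @ w_assign, dim=1)          # (N, K) soft assignments, rows sum to 1
    clusters = D.t() @ z                        # Eq. (18): (K, d) weighted summaries
    P = D.t() @ adj @ D                         # Eq. (19): cluster-level proximity (K, K)
    clusters = F.relu(P @ clusters @ w_cluster) # Eq. (20): proximity-aware cluster update
    return D @ clusters                         # Eq. (21): (N, d) regularized embeddings

N, d, K = 6, 32, 4
z = torch.randn(N, d)
adj = (torch.rand(N, N) > 0.5).float()
z_c = soft_cluster_regularize(z, adj, torch.randn(d, K), torch.randn(d, d))
print(z_c.shape)   # torch.Size([6, 32]); added to z before the prediction layer
```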
J. Prediction Layer
The framework combines the temporal-relational block representations with the soft-clustering regularization to determine the ultimate stock representation, $\tilde{z}_s = z_s + z^{c}_s$. The trend probability is then predicted with a two-layer feed-forward head:

$\hat{y}_s = \sigma\big(W^{g}_2\,\mathrm{relu}(W^{g}_1 \tilde{z}_s + b^{g}_1) + b^{g}_2\big),$   (22)

where $\sigma(\cdot)$ is the sigmoid function. The model is trained with the binary cross-entropy loss over all stocks and trading days:

$\mathcal{L} = -\sum_{s=1}^{|S|} \sum_{t} \big[\, y_{s,t}\log \hat{y}_{s,t} + (1 - y_{s,t})\log(1 - \hat{y}_{s,t})\,\big].$   (23)
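A minimal sketch of the prediction head and loss of Eqs. (22)-(23) follows: a two-layer MLP with a sigmoid output trained with binary cross-entropy. The hidden width and the batch of ten stocks are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

class TrendHead(nn.Module):
    """Two-layer MLP with sigmoid output for up/down trend probability (Eq. (22))."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, z_tilde):                    # z_tilde = z_s + z_s^c, shape (N, dim)
        return torch.sigmoid(self.net(z_tilde)).squeeze(-1)

head = TrendHead(dim=32)
z_tilde = torch.randn(10, 32)                      # 10 stocks
labels = torch.randint(0, 2, (10,)).float()        # 1 = price went up
probs = head(z_tilde)
loss = nn.functional.binary_cross_entropy(probs, labels)   # Eq. (23)
loss.backward()
print(float(loss))
```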

IV. RESULTS AND DISCUSSION

A. Dataset and Experimental Environment
HATR-I uses data from two primary datasets to conduct a more in-depth analysis. The first dataset comprises the 95 firms with the highest market capitalizations from the TOPIX-300 index of the Tokyo Stock Exchange; large-cap firms with strong liquidity on the Shenzhen and Shanghai stock exchanges are included in this index. The second dataset consists of stocks from the S&P 500 Composite Index that were consistently traded on the NASDAQ and NYSE exchanges from 2015 to 2020.
The same technical indicators (adjusted up, down, and closing prices) as well as trading volume, normalized in various ways, are used in both datasets. We employed a four-level T-Module representation hierarchy with 1-2-3-4 dilation rates in our studies; each layer uses three and thirty-two gated convolution kernels, respectively. For CSI and SPX, a 32-dimensional EQ embedding is used. We select K = 2 for the fixed diffusion stage in the R-Module, and for soft clustering, four attention heads are used at the node and semantic levels.
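For reference, the four reported metrics (ACC, AUC, F1, and MCC) can be computed from model outputs with scikit-learn as sketched below; the label and probability arrays are dummy placeholders, not results from the experiments.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             f1_score, matthews_corrcoef)

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])                    # realized up/down moves
y_prob = np.array([0.8, 0.3, 0.6, 0.7, 0.4, 0.2, 0.1, 0.55])   # predicted P(up)
y_pred = (y_prob >= 0.5).astype(int)                           # thresholded class labels

print("ACC:", accuracy_score(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_prob))
print("F1 :", f1_score(y_true, y_pred))
print("MCC:", matthews_corrcoef(y_true, y_pred))
```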

B. Overall Efficiency
We compare HATR-I to its predecessor and determine the primary hyperparameters for each baseline based on their respective publications; we then use grid search to fine-tune these parameters and maximize efficiency. Furthermore, we repurpose several regression techniques to classify stock price changes. We make individual predictions about the direction of various equities using numerical indicators as input to represent measurable stock features.

Figure 4: (a), (b), (c) Tests with the CSI dataset (experimental values of ACC, AUC, F1, and MCC for GCN, TGC, TGAT, SVM, RF, DA-RNN, SFM, TPA-LSTM, InceptionTime, HMG-TF, HATR, HATR-I and the improvement).
Figure 5: (a), (b), (c) Tests with the SPX dataset (experimental values of ACC, AUC, F1, and MCC for the same methods).
Figure 6: (a), (b), (c) Tests with the Topix dataset (experimental values of ACC, AUC, F1, and MCC for the same methods).

Figures 4, 5, and 6 show the comprehensive findings. HATR-I outperforms other models in stock prediction, with significant improvements (p < 0.05) across multiple datasets. On average, HATR-I outperforms the original HATR by approximately (4.26%, 7.28%, 3.16%, and 16.66%) in ACC, AUC, F1, and MCC. DA-RNN, HMG-TF, and TGAT are popular baseline methods that effectively capture stock relational or temporal signals; for our investigation, we used a replica of the T-Module. Traditional ML techniques like RF and SVM are unable to exploit external data. Moreover, strategies that capitalize on stock interconnectivity (e.g., TGC, TGAT, and HATR) usually yield superior returns compared with strategies that depend only on individual stock price fluctuations. Thanks to the attention mechanism, TGAT performs better than TGC.

C. Sensitivity to Parameters
Figure 6 depicts the variation in F1 scores across the CSI dataset. Notably, we see that when the stack has fewer than four layers, the score for a given pattern increases; conversely, as more filters are used to collect more feature patterns, the score tends to drop, possibly due to pattern redundancy. We then investigate the effect of varying the duration of the input time series. This phenomenon could be attributed to the need for a resilient sequence length when modeling highly non-stationary stock dynamics; when the length is set to 16, HATR-I consistently performs well across multiple dimensions. We also assess the various edge types according to the homogeneous stock graph evaluations that have been executed: in our paper, the three types of relations (GI, GT, and GS) generate nine distinct edge annotations ranging from 0 to 2|R| + 1.

Figure 7: Different components in HATR-I (ablation values of HATR-I versus noTemp, noRel, noScl, noStack, noHaks, noTrsq, noNdAtt, noSmAtt and noMview on CSI, SPX and Topix).

We investigate graph convolution experiments with different parameters, and we find that omitting the multi-head scheme when its value is 1 is effective. In most cases, increasing the number of attention heads improves performance, which is especially noticeable when the number of heads is less than four, though the effect fades thereafter. Figure 6(f) illustrates the effect of this change and shows that, when utilizing HATR-I, K = 2 is best across all datasets. Nodes can learn more about their representation from higher-order neighbors as the number of interaction layers rises; however, all nodes in the network may be impacted if this dynamic were to reverse as a result of further escalation.

V. CONCLUSION
The HATR-I framework is proposed in this research to forecast market trends. We contend that a stock's growth is suggested by historical time series patterns as well as by synchronized movements of related stocks. By stacking modules representing time at different scales, we gradually extract regularities associated with both quick and slow transitions. With the multiplex relational module we are able to understand the interdependencies across stocks. For real-time stock exchange, we investigate final-stage and multi-stage relational matching algorithms in depth. In addition, the global regularization for prediction is found automatically in the form of shared latent clusters. The importance and efficacy of HATR-I are experimentally demonstrated on three real-world stock market datasets. In future work, we plan to investigate richer representations that combine immediate information to discover how markets depend on one another over time.

REFERENCES

[1] S. S. Pandi, V. R. Chiranjeevi, T. Kumaragurubaran and K. P, "Improvement of Classification Accuracy in Machine Learning Algorithm by Hyper-Parameter Optimization," 2023 RMKMATE, Chennai, India, 2023, pp. 1-5, doi: 10.1109/RMKMATE59243.2023.10369177.
[2] S. S. Pandi, B. Kalpana, V. K. S and K. P, "Lung Tumor Volumetric Estimation and Segmentation using Adaptive Multiple Resolution Contour Model," 2023 RMKMATE, Chennai, India, 2023, pp. 1-4, doi: 10.1109/RMKMATE59243.2023.10369853.
[3] S. P. S, K. P and S. L. T A, "Projection of Plant Leaf Disease Using Support Vector Machine Algorithm," 2023 ICRASET, B G Nagara, India, 2023, pp. 1-6, doi: 10.1109/ICRASET59632.2023.10419981.
[4] K. P, V. K. S and S. P. S, "CNN and Edge-Based Segmentation for the Identification of Medicinal Plants," 2024 5th International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 2024, pp. 89-94, doi: 10.1109/ICICV62344.2024.00021.
[5] P. K, S. S. Pandi, T. Kumaragurubaran and V. Rahul Chiranjeevi, "Human Activity Recognitions in Handheld Devices Using Random Forest Algorithm," 2024 International Conference on Automation and Computation (AUTOCOM), Dehradun, India, 2024, pp. 159-163, doi: 10.1109/AUTOCOM60220.2024.10486087.
[6] G. Kavitha, A. Udhayakumar and D. Nagarajan, "Stock market trend analysis using hidden Markov models," arXiv preprint arXiv:1311.4771, 2013.
[7] Y. Qin, D. Song, H. Chen, W. Cheng, G. Jiang and G. W. Cottrell, "A dual-stage attention-based recurrent neural network for time series prediction," in IJCAI, 2017, pp. 2627-2633.
[8] Z. Hu, W. Liu, J. Bian, X. Liu and T. Liu, "Listening to chaotic whispers: A deep learning framework for news-oriented stock trend prediction," in WSDM, 2018, pp. 261-269.
[9] Y. Li, D. Tarlow, M. Brockschmidt and R. S. Zemel, "Gated graph sequence neural networks," in ICLR, 2016.
[10] J. Atwood and D. Towsley, "Diffusion-convolutional neural networks," in NIPS, 2016, pp. 1993-2001.
[11] F. Yu and V. Koltun, "Multi-scale context aggregation by dilated convolutions," in ICLR, 2016.
[12] A. W. Lo and A. C. MacKinlay, "When are contrarian profits due to stock market overreaction?" The Review of Financial Studies, vol. 3, no. 2, pp. 175-205, 2015.
[13] A. Vijayaraj and S. Indhuja, "Detection of malicious nodes to avoid data loss in wireless networks using elastic routing table," Third International Conference on Sensing, Signal Processing and Security (ICSSS), doi: 10.1109/SSPS.2017.8071646.
[14] M. S. Gayathri, S. Tamil Selvi, A. Vijayaraj and S. Ilavarasan, "Trinity tree construction for unattended web extraction," International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), doi: 10.1109/ICIIECS.2015.7193060.
[15] R. Srinivasan and A. Vijayaraj, "Mobile Communication Implementation Techniques to Improve Last Mile High Speed FSO Communication," International Conference on Web and Semantic Technology, Springer-Verlag Berlin Heidelberg, 2011.
[16] A. Vijayaraj, R. M. Suresh and K. Raghavi, "Packet classification using elliptical priority scheduling in wireless local area networks for congestion avoidance," International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), doi: 10.1109/ICIIECS.2015.7193255.
[17] P. Kumar, K. N. Manisha and M. Nivetha, "Market Basket Analysis for Retail Sales Optimization," 2024 Second International Conference on Emerging Trends in Information Technology and Engineering (ICETITE), Vellore, India, 2024, pp. 1-7, doi: 10.1109/ic-ETITE58242.2024.10493283.
