
Superhypergraph Neural Networks and Plithogenic Graph Neural Networks: Theoretical Foundations, pp. 577-653, in Takaaki Fujita, Florentin Smarandache: Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond. Fifth volume: Various SuperHyperConcepts (Collected Papers). Gallup, NM, United States of America – Guayaquil (Ecuador): NSIA Publishing House, 2025, 653 p.

Chapter 17
Superhypergraph Neural Networks and Plithogenic Graph Neural Networks:
Theoretical Foundations

Takaaki Fujita 1∗, Florentin Smarandache 2
1∗ Independent Researcher, Shinjuku, Shinjuku-ku, Tokyo, Japan. [email protected]
2 University of New Mexico, Gallup Campus, NM 87301, USA. [email protected]

Abstract: Hypergraphs extend traditional graphs by allowing edges to connect multiple nodes, while su-
perhypergraphs further generalize this concept to represent even more complex relationships. Neural networks,
inspired by biological systems, are widely used for tasks such as pattern recognition, data classification, and
prediction.
Graph Neural Networks (GNNs), a well-established framework, have recently been extended to Hyper-
graph Neural Networks (HGNNs), with their properties and applications being actively studied. The Plithogenic
Graph framework enhances graph representations by integrating multi-valued attributes, as well as membership
and contradiction functions, enabling the detailed modeling of complex relationships.
In the context of handling uncertainty, concepts such as Fuzzy Graphs and Neutrosophic Graphs have
gained prominence. It is well established that Plithogenic Graphs serve as a generalization of both Fuzzy Graphs
and Neutrosophic Graphs. Furthermore, the Fuzzy Graph Neural Network has been proposed and is an active
area of research.
This paper establishes the theoretical foundation for the development of SuperHyperGraph Neural Net-
works (SHGNNs) and Plithogenic Graph Neural Networks, expanding the applicability of neural networks to
these advanced graph structures. While mathematical generalizations and proofs are presented, future computa-
tional experiments are anticipated.
Keywords: hypergraph, superhypergraph, Neural Network, Neutrosophic Graph, Fuzzy Graph
MSC2010 (Mathematics Subject Classification 2010): 05C65 - Hypergraphs, 05C82 - Graph theory with
applications, 03E72 - Fuzzy set theory

1 Introduction
1.1 Hypergraphs and Superhypergraphs
Graph theory, a pivotal area of mathematics, focuses on understanding networks composed of vertices
(nodes) and edges (connections)[100, 102]. These mathematical structures effectively model relationships, de-
pendencies, and transitions among elements, making them versatile tools across various domains [45,58,95,156].
The foundational significance of graph theory has spurred its development and application in numerous
disciplines, including:
• Computational Sciences: Graphs are essential in designing circuits and optimizing computational work-
flows, as highlighted in recent studies on graph-based optimization techniques [40, 41, 405].
• Chemistry and Biology: Chemical graph theory models molecular structures and interactions [42, 380],
while bioinformatics leverages graphs to study protein structures and gene interactions [6, 373, 377].
• Project Management: Graphs are utilized to analyze workflows and dependencies, facilitating efficient
resource allocation and scheduling in project management frameworks [202, 296, 368].
• Probabilistic Modeling: Bayesian networks employ graph structures to represent conditional dependen-
cies among random variables [277, 418].
• Graph Databases: Modern data storage and retrieval systems increasingly rely on graph databases for
their ability to model complex relationships effectively [21, 22, 31, 141, 166, 261, 304].

A hypergraph is a generalization of a conventional graph, extending and abstracting concepts from graph
theory [51, 60, 152, 153, 164]. Hypergraphs have wide-ranging applications across fields such as machine learn-
ing, biology, social sciences, and graph database analysis, among others (e.g., [69, 85, 139, 187, 232, 403, 427,
443]). From a set-theoretic perspective, the hyperedge set of a hypergraph can be viewed as a subset of the
power set of its vertex set.
The concept of SuperHyperGraph has recently emerged as a more general extension of hypergraphs,
generating substantial research interest similar to that seen in the study of hypergraphs[126,130,340]. Numerous
investigations have been carried out in this field [122, 126, 128, 130, 170, 171, 340, 341, 343, 346, 351].
A Superhypergraph is a type of Superhyperstructure. It can be regarded as an extension of the concept
of an n-th-Power Set[331] applied to graphs. The definitions of Superhyperstructure and n-th Power Set are
provided below.
Definition 1.1 (𝑛-th powerset). (cf.[331, 352]) The 𝑛-th powerset of 𝐻, denoted 𝑃𝑛 (𝐻), is defined recursively
as:
𝑃1 (𝐻) = 𝑃(𝐻), 𝑃𝑛+1 (𝐻) = 𝑃(𝑃𝑛 (𝐻)) for 𝑛 ≥ 1.
Similarly, the 𝑛-th non-empty powerset of 𝐻, denoted 𝑃∗𝑛 (𝐻), is defined as:

𝑃1∗ (𝐻) = 𝑃∗ (𝐻), 𝑃∗𝑛+1 (𝐻) = 𝑃∗ (𝑃∗𝑛 (𝐻)).
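To make the recursion concrete, the following minimal Python sketch (the function names are ours, purely illustrative) iterates the power-set operator on a tiny base set. Since |𝑃𝑛(𝐻)| grows as an exponential tower (2, 2^2, 2^(2^2), ...), only very small 𝑛 and |𝐻| are tractable.

from itertools import combinations

def powerset(s):
    # Power set of an iterable, returned as a set of frozensets.
    items = list(s)
    return {frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)}

def nth_powerset(H, n):
    # P_1(H) = P(H); P_{k+1}(H) = P(P_k(H)).
    result = set(H)
    for _ in range(n):
        result = powerset(result)
    return result

H = {"x1", "x2"}
print(len(nth_powerset(H, 1)))  # 4  = 2^2
print(len(nth_powerset(H, 2)))  # 16 = 2^4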

Definition 1.2. (cf.[331, 352]) A SuperHyperStructure is a mathematical structure defined as a pair:

S = (𝑃∗𝑛 (𝐻), O),

where:
1. 𝑃∗𝑛 (𝐻) is the 𝑛-th non-empty powerset of 𝐻, which excludes the empty set.
2. O is a set of operations or relations, called SuperHyperOperators, defined on 𝑃∗𝑛 (𝐻).
Example 1.3 (Example of SuperHyperOperators). (cf.[331, 352]) A binary SuperHyperOperator ◦ can be de-
fined as:
◦ : 𝑃∗𝑛 (𝐻) × 𝑃∗𝑛 (𝐻) → 𝑃∗𝑛 (𝐻).
For example, given two elements 𝐴, 𝐵 ∈ 𝑃∗𝑛 (𝐻), their operation under ◦ might be defined as:

𝐴 ◦ 𝐵 = {𝐶 | 𝐶 = 𝑓 ( 𝐴, 𝐵) for some function 𝑓 }.

Other examples of Superhyperstructures include Superhyperalgebras[197, 198, 212, 213, 221, 299, 300,
331, 342], Superhypertopology[348, 349, 358, 407, 422], Superhyperfunctions[345, 350], and Superhypersoft
sets[126, 127, 265, 347, 360], all of which are well-known in this field. Therefore, research on hypergraphs
and superhypergraphs is significant from both mathematical and practical perspectives.
For reference, the relationships between Superhypergraphs are illustrated in Figure 1.
1.2 Graph Neural Networks
This subsection provides an overview of Graph Neural Networks. In recent years, fields such as machine
learning (cf. [28, 186, 273, 304, 405, 419]), artificial intelligence (cf. [5, 34, 321, 374]), and big data (cf. [49, 79,
200, 257]) have gained significant prominence. This paper focuses on neural networks, which play a pivotal role
in these domains.
A neural network is a computational model inspired by biological neural systems, designed for tasks
such as pattern recognition, data classification, and prediction [20, 25, 46, 223, 393, 411, 412]. Building upon this
foundation, a Graph Neural Network (GNN) extends neural networks to graph structures, enabling the modeling
of relationships between nodes, edges, and their associated features [94, 205, 269, 297, 316, 324, 386, 404, 429,
440, 447].
Building on this concept, Hypergraph Neural Networks (HGNNs) extend traditional Graph Neural Net-
works (GNNs) by leveraging hyperedges to capture higher-order relationships that involve multiple nodes simul-
taneously [70, 115, 181, 183, 204, 369, 401]. Related concepts include Hypernetworks, which have been studied
extensively in works such as [76, 167, 225, 363, 388]. Additionally, networks built on directed graphs, such as
Directed Graph Neural Networks [177–179,325,450], and those based on mixed graph structures, such as Mixed
Graph Neural Networks [163], are also well-known.
Given the wide range of applications studied in these areas, research into Graph Neural Networks is of
critical importance.

Fig. 1. Some Superhypergraphs Hierarchy.

1.3 Uncertain graphs


The concept of fuzzy sets was introduced in 1965 [430]. Fuzzy sets provide a framework for addressing
uncertainty in the real world and have been applied in various fields, including graph theory, algebra, topology,
and logic. Furthermore, extensions of fuzzy sets, such as neutrosophic sets [332, 334], have been developed to
handle even more complex forms of uncertainty.
These concepts for handling uncertainty are highly compatible with real-world applications[47,208,235,
263, 270, 278, 322]. For instance, neutrosophic sets extend fuzzy sets by introducing three membership degrees:
truth, indeterminacy, and falsity, making them particularly valuable in scenarios with incomplete or conflicting
information. Applications include:

• Healthcare Decision-Making: Neutrosophic sets assist in evaluating treatment options by balancing effec-
tiveness (truth), uncertainty (indeterminacy), and risk (falsity) when data is incomplete or contradictory
[29, 196].
• Social Network Analysis: They model relationships between users, such as trust, suspicion, and disagree-
ment, in social networks [108, 253, 309, 382].
• Fault Diagnosis in Engineering: Neutrosophic sets identify faults in mechanical systems by accounting
for uncertain and conflicting diagnostic evidence (cf.[155, 226, 326]).
• Market Analysis: Businesses use them to analyze customer preferences, integrating positive feedback
(truth), ambiguous responses (indeterminacy), and negative feedback (falsity) [43, 264, 312].

This paper examines various models of uncertain graphs, including Fuzzy, Intuitionistic Fuzzy, Neu-
trosophic, and Plithogenic Graphs. These models extend classical graph theory by incorporating degrees of
uncertainty, enabling a more nuanced analysis of ambiguous and complex relationships [120,121,123–127,129,
131, 132].
Examples of uncertain graph models include the following:
• Fuzzy Graph: A Fuzzy Graph utilizes membership functions to represent uncertainty in vertices and
edges, enabling more flexible modeling of relationships [8, 10, 12, 274, 306].
• Neutrosophic Graph: A Neutrosophic Graph extends Fuzzy Graphs by incorporating truth, indeterminacy,
and falsity degrees for vertices and edges, offering a richer data representation [26, 63, 192, 272, 371, 372,
420]. It is well known that Neutrosophic Graphs can generalize Fuzzy Graphs.

• Plithogenic Graph: The Plithogenic Graph framework models graphs with multi-valued attributes using
membership and contradiction functions, providing a detailed representation of complex relationships
[121, 338, 357]. It is widely recognized that Plithogenic Graphs can generalize Neutrosophic Graphs.
These concepts, including set-based approaches, are applied in decision-making [18] as well as in neural
networks [24, 112, 113, 416, 442] and machine learning[96, 142, 238, 246]. This highlights the importance of
studying concepts related to uncertain graphs.
For reference, the relationships between Uncertain graphs are illustrated in Figure 2 (cf. [126]). Since
Figure 2 is a highly simplified diagram, readers are encouraged to refer to the literature, such as [126], for further
details if necessary.

Fig. 2. Some Uncertain graphs Hierarchy(cf.[126]).

1.4 Our Contribution


This subsection highlights the key contributions of our work. While Graph Neural Networks (GNNs)
for hypergraphs have been extensively studied, no previous research has explored the development of GNNs
tailored to SuperHyperGraphs.
In this paper, we introduce the SuperHyperGraph Neural Network (SHGNN), a mathematical extension
of Hypergraph Neural Networks that leverages the unique structural properties of SuperHyperGraphs. Addition-
ally, we examine uncertain graph neural models, such as Neutrosophic Graph Neural Networks and Plithogenic
Graph Neural Networks, which address similar challenges. Importantly, we demonstrate that both Neutrosophic
and Plithogenic Graph Neural Networks serve as mathematical generalizations of Fuzzy Graph Neural Networks.
This work is theoretical in nature, focusing on establishing the mathematical framework for SHGNNs
and PGNNs. It does not include computational experiments or practical implementations. Therefore, we hope
that computational experiments will be conducted in the future by experts and readers alike. For precise defini-
tions and detailed notations, readers are encouraged to consult the relevant literature, such as [115].
In this paper, we conduct a theoretical examination of the relationships between Graph Neural Networks,
as illustrated in Figure 3. This diagram illustrates that the concept at the arrow’s origin is included in (and
generalized by) the concept at the arrow’s destination.
Although not directly related to the Graph Neural Networks discussed earlier, this paper also explores
several extended concepts in hypergraph theory, including Multilevel k-way Hypergraph Partitioning, Superhy-
pergraph Random Walk, and the Superhypergraph Turán Problem. As these investigations are limited to theo-
retical considerations, it is hoped that computational experiments and practical validations will be conducted in
the future as needed.

2 Preliminaries and Definitions


In this section, we provide a brief overview of the definitions and notations used throughout this paper.
While we aim to make the content accessible to readers from various backgrounds, it is not possible to cover
all relevant details comprehensively. Readers are encouraged to consult the referenced literature for additional
information as needed.

Fig. 3. Hierarchy of Some Neural Networks. This diagram illustrates that the concept at the arrow’s origin is included in (and
generalized by) the concept at the arrow’s destination.

2.1 Basic Graph Concepts


This subsection outlines foundational graph concepts. For a comprehensive understanding of graph
theory and notations, refer to [100–102, 158, 406]. Additionally, when discussing graph theory, basic set theory
concepts are often used. Readers are encouraged to consult references such as [117, 182, 201, 389] as needed.
Definition 2.1 (Graph). [102] A graph 𝐺 is a mathematical structure defined as an ordered pair 𝐺 = (𝑉, 𝐸),
where:
• 𝑉 (𝐺): the set of vertices (or nodes),
• 𝐸 (𝐺): the set of edges, which represent connections between pairs of vertices.
Definition 2.2 (Degree). [102] Let 𝐺 = (𝑉, 𝐸) be a graph. The degree of a vertex 𝑣 ∈ 𝑉, denoted deg(𝑣), is the
number of edges incident to 𝑣. For undirected graphs:

deg(𝑣) = |{𝑒 ∈ 𝐸 | 𝑣 ∈ 𝑒}|.

In directed graphs:
• The in-degree deg − (𝑣) is the number of edges directed into 𝑣.
• The out-degree deg+ (𝑣) is the number of edges directed out of 𝑣.
Definition 2.3 (Subgraph). [102] A subgraph 𝐺 ′ of a graph 𝐺 = (𝑉, 𝐸) is a graph 𝐺 ′ = (𝑉 ′ , 𝐸 ′ ) such that:
• 𝑉 ′ ⊆ 𝑉,
• 𝐸 ′ ⊆ 𝐸 ∩ {{𝑢, 𝑣} | 𝑢, 𝑣 ∈ 𝑉 ′ }.
Definition 2.4 (Self-loop in an Undirected Graph). In an undirected graph 𝐺 = (𝑉, 𝐸), a self-loop is an edge
that connects a vertex to itself. Formally, an edge 𝑒 ∈ 𝐸 is a self-loop if 𝑒 = {𝑣, 𝑣} for some 𝑣 ∈ 𝑉.
Definition 2.5 (Real numbers). (cf.[107, 303, 367]) The set of real numbers, denoted by R, is defined as the
unique complete ordered field. It satisfies the following:

• Field Axioms: R forms a field under addition and multiplication.


• Order Axioms: R is totally ordered and compatible with field operations.
• Completeness Axiom: Every non-empty subset of R that is bounded above has a least upper bound (supre-
mum).

Definition 2.6 (Undirected Weighted Graph). (cf.[66, 87, 259]) An undirected weighted graph 𝐺 = (𝑉, 𝐸, 𝑤) is
a graph where:

• 𝑉 is the set of vertices.
• 𝐸 ⊆ {{𝑢, 𝑣} | 𝑢, 𝑣 ∈ 𝑉, 𝑢 ≠ 𝑣} is the set of undirected edges.
• 𝑤 : 𝐸 → R+ is a weight function that assigns a non-negative weight to each edge 𝑒 ∈ 𝐸.
Each edge {𝑢, 𝑣} ∈ 𝐸 represents a bidirectional connection between 𝑢 and 𝑣, and the weight 𝑤({𝑢, 𝑣}) indicates
the strength, cost, or capacity of the connection.
2.2 Basic Definitions of Algorithm Complexity
This subsection introduces fundamental definitions for analyzing the algorithms described in later sec-
tions.
Definition 2.7 (Algorithms). [320] Algorithms are step-by-step, well-defined procedures or rules for solving a
problem or performing a task, often implemented in computing.
Definition 2.8 (Time Complexity). (cf.[283, 320]) The time complexity of an algorithm is the total amount of
computational time required to execute it, expressed as a function of the input size. Let 𝑇 (𝑛, 𝑚) denote the time
complexity for inputs of size 𝑛 and 𝑚. The total time complexity is defined as:

𝑇 (𝑛, 𝑚) = max{𝑇step1 (𝑛, 𝑚), 𝑇step2 (𝑛, 𝑚), . . . , 𝑇stepk (𝑛, 𝑚)},

where 𝑇stepi (𝑛, 𝑚) represents the time complexity of the 𝑖-th step of the algorithm.
Definition 2.9 (Space Complexity). (cf.[283, 320]) The space complexity of an algorithm is the total amount of
memory it requires, expressed as a function of the input size. This includes:
• Input space: memory required for storing the input data,
• Auxiliary space: additional memory for temporary variables and data structures used during computation.
Formally, the space complexity 𝑆(𝑛, 𝑚) is:

𝑆(𝑛, 𝑚) = 𝑆input (𝑛, 𝑚) + 𝑆auxiliary (𝑛, 𝑚).

Definition 2.10 (Big-O Notation). (cf.[283, 320]) Big-O notation provides an asymptotic upper bound on the
growth rate of a function. Let 𝑓 (𝑛) and 𝑔(𝑛) be functions that map non-negative integers to non-negative real
numbers. We write:
𝑓 (𝑛) ∈ 𝑂 (𝑔(𝑛))
if there exist positive constants 𝑐 > 0 and 𝑛0 ≥ 0 such that:

𝑓 (𝑛) ≤ 𝑐 · 𝑔(𝑛), ∀𝑛 ≥ 𝑛0 .

Readers may refer to standard lecture notes or introductory texts on algorithms for additional details as needed (cf. [1, 33, 86, 110, 173, 283, 320]).
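As a small worked illustration (ours, not from the original text): the function $f(n) = 3n + 5$ satisfies $f(n) \in O(n)$, since choosing $c = 4$ and $n_0 = 5$ gives
$$f(n) = 3n + 5 \le 3n + n = 4n = c \cdot g(n) \qquad \text{for all } n \ge n_0.$$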
2.3 Basic Graph Neural Network Concepts
Here are several definitions of Graph Neural Networks (GNNs). Readers may refer to the lecture notes
or the introduction for further details(cf.[3, 94, 111, 205, 269, 297, 316, 324, 415, 440]).
Definition 2.11. (cf.[32,135,260]) A matrix is a rectangular array of numbers, symbols, or expressions, arranged
in rows and columns. Formally, an 𝑚 × 𝑛 matrix 𝐴 is defined as:
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix},$$

where:
• 𝑚 is the number of rows,
• 𝑛 is the number of columns,
• 𝑎 𝑖 𝑗 represents the element in the 𝑖-th row and 𝑗-th column.

Definition 2.12 (Adjacency Matrix). (cf.[245, 414, 451]) The adjacency matrix of a graph 𝐺 = (𝑉, 𝐸) with
vertex set 𝑉 = {𝑣1, 𝑣2, . . . , 𝑣𝑛} and edge set 𝐸 is an 𝑛 × 𝑛 matrix 𝐴 = [𝑎𝑖𝑗], defined as:
$$a_{ij} = \begin{cases} 1 & \text{if } (v_i, v_j) \in E, \\ 0 & \text{otherwise.} \end{cases}$$

Definition 2.13 (Weight matrix). (cf.[276, 370]) A weight matrix is a matrix used in mathematical and compu-
tational models, particularly in neural networks, to represent the connection strengths between elements, such
as nodes in a graph or neurons in a layer.
Let X ∈ R𝑛×𝑑 be the input data matrix, where:
• 𝑛 is the number of data points (rows),
• 𝑑 is the number of features (columns).
The weight matrix W ∈ R𝑑× 𝑝 maps the input space to an output space, where:
• 𝑑 is the dimension of the input features,
• 𝑝 is the dimension of the output space.
The transformation is expressed as:
Z = XW,
where Z ∈ R𝑛× 𝑝 is the resulting matrix in the output space.
In the context of neural networks or graph models, the entries 𝑤 𝑖 𝑗 in W represent the weight or strength
of influence between the 𝑖-th input feature and the 𝑗-th output feature.
Definition 2.14 (Feature Vector). (cf.[50,233,387]) Let O be an object or observation, and let 𝐹 = { 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 }
be a set of features, where 𝑓𝑖 : O → R is a function mapping O to the real numbers R. A feature vector of O is
defined as:
x = [ 𝑓1 (O), 𝑓2 (O), . . . , 𝑓𝑛 (O)] ⊤ ∈ R𝑛 ,
where 𝑛 is the number of features, and x is an element of the 𝑛-dimensional real vector space R𝑛 .
Definition 2.15 (Dataset). (cf.[378]) A dataset is a finite set of data points. Formally, it is defined as:

𝐷 = {x𝑖 | x𝑖 ∈ X, 𝑖 = 1, 2, . . . , 𝑛},

where x𝑖 is the 𝑖-th data point in the input space X, and 𝑛 is the total number of data points.
Definition 2.16 (Normalization). (cf.[36, 72, 109, 262, 384]) Normalization is a process of scaling a set of val-
ues to fit within a specific range, typically [0, 1] or [−1, 1]. Given a dataset {𝑥1 , 𝑥2 , . . . , 𝑥 𝑛 }, normalization
transforms each value 𝑥𝑖 into a normalized value 𝑥𝑖′ using the formula:
$$x_i' = \frac{x_i - \min(x)}{\max(x) - \min(x)},$$
where:
• min(𝑥) = min{𝑥1 , 𝑥2 , . . . , 𝑥 𝑛 } is the minimum value in the dataset,
• max(𝑥) = max{𝑥1 , 𝑥2 , . . . , 𝑥 𝑛 } is the maximum value in the dataset.
If the range is [−1, 1], the transformation is adjusted as:
$$x_i' = 2 \cdot \frac{x_i - \min(x)}{\max(x) - \min(x)} - 1.$$
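A minimal Python sketch of both scalings (the helper name is ours, and the second call reproduces the [−1, 1] variant above):

import numpy as np

def min_max_normalize(x, low=0.0, high=1.0):
    # Min-max normalization of a 1-D array onto [low, high].
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    scaled = (x - x_min) / (x_max - x_min)   # maps onto [0, 1]
    return low + scaled * (high - low)       # rescale onto [low, high]

values = [3.0, 7.0, 10.0, 15.0]
print(min_max_normalize(values))             # [0.   0.333... 0.583... 1.  ]
print(min_max_normalize(values, -1.0, 1.0))  # [-1.  -0.333... 0.166... 1.  ]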

Definition 2.17 (Graph Neural Network (GNN)). (cf.[449, 453]) Let 𝐺 = (𝑉, 𝐸) be a graph, where 𝑉 =
{𝑣 1 , 𝑣 2 , . . . , 𝑣 𝑛 } is the set of vertices and 𝐸 ⊆ 𝑉 × 𝑉 is the set of edges. Each vertex 𝑣 𝑖 ∈ 𝑉 is associated
with a feature vector x𝑖 ∈ R𝑑 , and each edge (𝑣 𝑖 , 𝑣 𝑗 ) ∈ 𝐸 may optionally have a feature e𝑖 𝑗 ∈ R 𝑘 .
A Graph Neural Network (GNN) computes node representations $h_i^{(t)} \in \mathbb{R}^d$ at each layer $t$, using the graph structure and associated features.

Definition 2.18 (Key Components of Graph Neural Network). (cf.[449,453]) Several key components of Graph
Neural Networks are outlined below.
1. Node Initialization: At the initial layer (𝑡 = 0), the node representations are initialized as:
$$h_i^{(0)} = x_i, \qquad \forall v_i \in V.$$

2. Message Passing(cf.[48, 228]): At each layer 𝑡, messages are exchanged between connected nodes.
The messages received by a node 𝑣 𝑖 from its neighbors are computed as:
$$m_i^{(t+1)} = \sum_{v_j \in \mathcal{N}(i)} \phi_m\big(h_i^{(t)}, h_j^{(t)}, e_{ij}\big),$$

where:
• N (𝑖) is the set of neighbors of 𝑣 𝑖 ,
• 𝜙 𝑚 : R𝑑 × R𝑑 × R 𝑘 → R𝑑 is the message function.
3. Node Update: (cf.[206]) The representation of each node is updated using the received messages:
$$h_i^{(t+1)} = \phi_u\big(h_i^{(t)}, m_i^{(t+1)}\big),$$

where 𝜙𝑢 : R𝑑 × R𝑑 → R𝑑 is the update function.


4. Readout Function: For graph-level tasks, a global representation z𝐺 is computed by aggregating node
representations:
$$z_G = \phi_r\big(\{\, h_i^{(T)} \mid v_i \in V \,\}\big),$$

where 𝜙𝑟 is the readout function (e.g., summation, averaging, or max-pooling).


Example 2.19 (Readout Function Examples). (cf.[23, 55, 428]) A readout function 𝜙𝑟 computes a global repre-
sentation of a graph by aggregating node representations. Below are some commonly used examples:

Mean Readout Function: (cf.[307, 448]) The mean readout function computes the average of all node repre-
sentations:
$$\phi_r\big(\{\, h_i^{(T)} \mid v_i \in V \,\}\big) = \frac{1}{|V|} \sum_{v_i \in V} h_i^{(T)},$$
where $h_i^{(T)}$ is the final representation of node $v_i$ at the last layer $T$.

Max-Pooling Readout Function: (cf.[27, 301, 452]) The max-pooling readout function selects the maximum
value for each feature across all node representations:
$$\phi_r\big(\{\, h_i^{(T)} \mid v_i \in V \,\}\big) = \max_{v_i \in V} h_i^{(T)},$$

where the max operator is applied element-wise to the feature vectors.

Sum Readout Function: (cf.[89, 308]) The sum readout function aggregates all node representations by sum-
mation:
$$\phi_r\big(\{\, h_i^{(T)} \mid v_i \in V \,\}\big) = \sum_{v_i \in V} h_i^{(T)}.$$
This function is particularly useful when the graph size varies, as it preserves the total magnitude of features.
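The three readout functions differ only in the aggregation applied along the node axis. A minimal NumPy sketch (the function name is ours, purely illustrative):

import numpy as np

def readout(node_embeddings, mode="mean"):
    # Graph-level readout over the final node representations h_i^(T),
    # given as an array of shape (num_nodes, feature_dim).
    H = np.asarray(node_embeddings)
    if mode == "mean":   # average over nodes
        return H.mean(axis=0)
    if mode == "max":    # element-wise maximum over nodes
        return H.max(axis=0)
    if mode == "sum":    # preserves the total magnitude of features
        return H.sum(axis=0)
    raise ValueError(f"unknown readout mode: {mode}")

H_T = np.array([[1.0, 0.0], [0.5, 2.0], [1.5, 1.0]])
print(readout(H_T, "mean"))  # [1.  1. ]
print(readout(H_T, "max"))   # [1.5 2. ]
print(readout(H_T, "sum"))   # [3.  3. ]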
Definition 2.20 (General Framework). (cf.[449, 453]) The node update rule for all nodes at layer 𝑡 can be
expressed in matrix form:
$$H^{(t+1)} = \phi_u\big(H^{(t)}, A, W^{(t)}\big),$$

where:
• H (𝑡 ) ∈ R𝑛×𝑑 is the matrix of node representations,

• A ∈ R𝑛×𝑛 is the adjacency matrix,

• W (𝑡 ) are learnable weight matrices.


Definition 2.21 (Graph Convolutional Network). (cf.[54,80,446,449,453]) For a Graph Convolutional Network
(GCN), the propagation rule is:
$$H^{(t+1)} = \sigma\big(\hat{A} H^{(t)} W^{(t)}\big),$$

where:
• Â = D̃ −1/2 ÃD̃ −1/2 is the normalized adjacency matrix,
• Ã = A + I is the adjacency matrix with self-loops,
• D̃ is the diagonal degree matrix of Ã,
• 𝜎 is an activation function (e.g., ReLU).
To understand Graph Convolutional Networks intuitively, consider the following example.
Example 2.22 (Graph Convolutional Network). Imagine a social network(cf.[319]) where each person (node)
has an attribute such as their interest in a specific topic (e.g., sports, music, or technology). Edges between nodes
represent relationships or friendships between people. Each person also has initial attributes (node features),
such as a score representing their interest in these topics.
The goal of the GCN is to predict a person’s overall interest profile by combining their own features with
information from their friends (neighboring nodes).
At each layer of the GCN:
1. The node collects information from its neighbors. For example, a sports enthusiast might update their
profile based on their friends who are also interested in sports.
2. This information is aggregated using the normalized adjacency matrix Â, ensuring that contributions from
neighbors are weighted appropriately.

3. The aggregated information is then transformed using a learnable weight matrix W (𝑡 ) , and a non-linear
activation function 𝜎 is applied to introduce complexity to the model.
By stacking multiple layers of this process, each node gains a more comprehensive understanding of
its broader neighborhood in the graph. For instance, after two layers, a person’s profile reflects not only their
immediate friends’ interests but also those of their friends’ friends.
This process allows GCNs to effectively learn and propagate information over the graph structure, mak-
ing them powerful tools for tasks like node classification, graph classification, and link prediction.
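As a minimal illustration of the propagation rule in Definition 2.21 (the function and variable names are ours; the weights are random and untrained, so the output is not a meaningful prediction), one GCN layer over a toy social network can be sketched in NumPy as follows:

import numpy as np

def gcn_layer(A, H, W, activation=lambda z: np.maximum(z, 0.0)):
    # One GCN step: H' = sigma(A_hat H W), with A_hat = D~^{-1/2} A~ D~^{-1/2}.
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d_tilde = A_tilde.sum(axis=1)               # degrees of A~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d_tilde))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # normalized adjacency
    return activation(A_hat @ H @ W)

# Toy network of 4 people with 2-dimensional interest features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H0 = np.random.rand(4, 2)   # initial node features
W0 = np.random.rand(2, 3)   # learnable weight matrix (untrained here)
H1 = gcn_layer(A, H0, W0)
print(H1.shape)             # (4, 3)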
2.4 Hypergraph Concepts
A hypergraph extends the concept of a traditional graph by allowing edges, called hyperedges, to connect
any number of vertices, rather than being restricted to pairs[51,140,152–154]. This flexibility makes hypergraphs
highly effective for modeling complex relationships in various domains, such as computer science and biology
[114, 148, 195, 294]. The formal definitions are provided below.
Definition 2.23 (Hypergraph). [51, 60] A hypergraph is a pair 𝐻 = (𝑉 (𝐻), 𝐸 (𝐻)), where:
• 𝑉 (𝐻) is a nonempty set of vertices.
• 𝐸 (𝐻) is a set of subsets of 𝑉 (𝐻), called hyperedges. Each hyperedge 𝑒 ∈ 𝐸 (𝐻) can contain one or more
vertices.
In this paper, we restrict our discussion to finite hypergraphs.
Example 2.24 (Hypergraph). Let 𝐻 = (𝑉 (𝐻), 𝐸 (𝐻)) be a hypergraph with:

𝑉 (𝐻) = {𝑣 1 , 𝑣 2 , 𝑣 3 , 𝑣 4 }, 𝐸 (𝐻) = {{𝑣 1 , 𝑣 2 }, {𝑣 2 , 𝑣 3 , 𝑣 4 }, {𝑣 1 }}.

Here:
• 𝑉 (𝐻) is the set of vertices: 𝑣 1 , 𝑣 2 , 𝑣 3 , 𝑣 4 .

• 𝐸 (𝐻) is the set of hyperedges: {𝑣 1 , 𝑣 2 }, {𝑣 2 , 𝑣 3 , 𝑣 4 }, and {𝑣 1 }.
Proposition 2.25. A hypergraph is a generalized concept of a graph.
Proof. This is evident. □

Definition 2.26 (subhypergraph). [60] For a hypergraph 𝐻 = (𝑉 (𝐻), 𝐸 (𝐻)) and a subset 𝑋 ⊆ 𝑉 (𝐻), the
subhypergraph induced by 𝑋 is defined as:

$$H[X] = \big(X, \; \{\, e \cap X \mid e \in E(H) \,\}\big).$$

Additionally, the hypergraph obtained by removing the vertices in 𝑋 is denoted as:

𝐻 \ 𝑋 := 𝐻 [𝑉 (𝐻) \ 𝑋].

For further details on hypergraph notation and foundational concepts, refer to [60, 90].
2.5 SuperHyperGraph
A SuperHyperGraph is an advanced structure extending hypergraphs by allowing vertices and edges to
be sets. The definition is provided below [340, 341].
Definition 2.27 (SuperHyperGraph [126,340,341]). Let 𝑉0 be a finite set of base vertices. A SuperHyperGraph
is an ordered pair 𝐻 = (𝑉, 𝐸), where:
• 𝑉 ⊆ 𝑃(𝑉0 ) is a finite set of supervertices, each being a subset of 𝑉0 . That is, each supervertex 𝑣 ∈ 𝑉
satisfies 𝑣 ⊆ 𝑉0 .
• 𝐸 ⊆ 𝑃(𝑉) is the set of superedges, where each superedge 𝑒 ∈ 𝐸 is a subset of 𝑉, connecting multiple
supervertices.
Example 2.28 (SuperHyperGraph). Let 𝑉0 = {𝑥1 , 𝑥2 , 𝑥3 } be the base vertex set. Define the supervertices as:

𝑉 = {{𝑥1 , 𝑥2 }, {𝑥3 }, {𝑥1 }}.

Let the superedges be:


𝐸 = {{{𝑥1 , 𝑥2 }, {𝑥3 }}, {{𝑥1 }, {𝑥3 }}}.
Here:
• 𝑉 contains subsets of 𝑉0 : {𝑥1 , 𝑥2 }, {𝑥3 }, {𝑥1 }.
• 𝐸 contains relationships among these supervertices: {{𝑥1 , 𝑥2 }, {𝑥3 }} and {{𝑥1 }, {𝑥3 }}.
This SuperHypergraph extends the concept of a hypergraph by allowing supervertices (subsets of the
base vertex set) to participate in superedges.
Proposition 2.29. A superhypergraph is a generalized concept of a hypergraph.

Proof. This is evident. □

Proposition 2.30. A superhypergraph is a generalized concept of a graph.

Proof. This is evident. □


When expressed concretely, including hypergraphs, a superhypergraph can be represented as follows. In
this way, hypergraphs can be described and generalized using superhypergraphs.
Definition 2.31 (Expanded Hypergraph of a SuperHyperGraph). Given a SuperHyperGraph 𝐻 = (𝑉, 𝐸), the
Expanded Hypergraph 𝐻 ′ = (𝑉0 , 𝐸 ′ ) is defined as follows:
• The vertex set is 𝑉0 , the set of base vertices.

• For each superedge 𝑒 ∈ 𝐸, define the corresponding hyperedge 𝑒′ ∈ 𝐸′ by
$$e' = \bigcup_{v \in e} v,$$
where the 𝑣 ∈ 𝑉 are supervertices in 𝑒. Then
$$E' = \{\, e' \mid e \in E \,\}.$$

Example 2.32 (Expanded Hypergraph). Consider the SuperHyperGraph 𝐻 = (𝑉, 𝐸) defined as follows:
• The base vertex set is 𝑉0 = {𝑥1 , 𝑥2 , 𝑥3 }.
• The supervertices are:
𝑉 = {{𝑥1 , 𝑥2 }, {𝑥3 }, {𝑥1 }}.

• The superedges are:


𝐸 = {{{𝑥1 , 𝑥2 }, {𝑥3 }}, {{𝑥1 }, {𝑥3 }}}.

The Expanded Hypergraph 𝐻 ′ = (𝑉0 , 𝐸 ′ ) is constructed as follows:


• The vertex set remains 𝑉0 = {𝑥1 , 𝑥2 , 𝑥3 }, which is the base vertex set.
• For each superedge 𝑒 ∈ 𝐸, the corresponding hyperedge 𝑒′ is obtained by taking the union of all supervertices 𝑣 in 𝑒:
$$e' = \bigcup_{v \in e} v.$$
• The expanded edge set 𝐸′ is:
$$e_1' = \bigcup_{v \in \{\{x_1, x_2\}, \{x_3\}\}} v = \{x_1, x_2\} \cup \{x_3\} = \{x_1, x_2, x_3\},$$
$$e_2' = \bigcup_{v \in \{\{x_1\}, \{x_3\}\}} v = \{x_1\} \cup \{x_3\} = \{x_1, x_3\}.$$

Thus, the expanded edge set is:


𝐸 ′ = {{𝑥1 , 𝑥2 , 𝑥3 }, {𝑥1 , 𝑥3 }}.

To summarize:
• The Expanded Hypergraph 𝐻 ′ has the vertex set:

𝑉0 = {𝑥1 , 𝑥2 , 𝑥3 }.

• The edge set is:


𝐸 ′ = {{𝑥1 , 𝑥2 , 𝑥3 }, {𝑥1 , 𝑥3 }}.

This construction illustrates how the supervertices and superedges in a SuperHyperGraph are transformed into
vertices and edges in the corresponding Expanded Hypergraph.
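A minimal Python sketch of this expansion, reproducing Example 2.32 (the function name is ours, purely illustrative):

def expand_superhypergraph(superedges):
    # Expand each superedge (a collection of supervertices, i.e. frozensets of
    # base vertices) into a hyperedge over the base vertex set (Definition 2.31).
    expanded = []
    for e in superedges:
        e_prime = frozenset().union(*e)   # union of all supervertices in e
        expanded.append(e_prime)
    return expanded

# Example 2.32: V0 = {x1, x2, x3}
E = [
    [frozenset({"x1", "x2"}), frozenset({"x3"})],
    [frozenset({"x1"}), frozenset({"x3"})],
]
print(expand_superhypergraph(E))
# Two hyperedges: {x1, x2, x3} and {x1, x3} (element order may vary when printed).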
Theorem 2.33. The Expanded Hypergraph of a SuperHyperGraph generalizes a Hypergraph.
Proof. Let 𝐻 = (𝑉, 𝐸) be a SuperHyperGraph with 𝑉 as the set of supervertices, where each supervertex 𝑣 ∈ 𝑉
is a subset of a base vertex set 𝑉0 . Let 𝐻 ′ = (𝑉0 , 𝐸 ′ ) be the Expanded Hypergraph derived from 𝐻, where:
$$E' = \Big\{\, e' \;\Big|\; e' = \bigcup_{v \in e} v, \; e \in E \,\Big\}.$$

To prove that the Expanded Hypergraph 𝐻′ generalizes a Hypergraph, consider the following cases:

Case 1: SuperHyperGraph reduces to a Hypergraph. If each supervertex 𝑣 ∈ 𝑉 corresponds to exactly one
base vertex in 𝑉0 , then 𝑉 = 𝑉0 . In this case, each superedge 𝑒 ∈ 𝐸 is a subset of 𝑉0 , and the expansion rule:
$$e' = \bigcup_{v \in e} v$$
yields 𝑒′ = 𝑒. Therefore, 𝐻′ = (𝑉0, 𝐸′) is identical to the original Hypergraph 𝐻, showing that the Expanded
Hypergraph is equivalent to a Hypergraph when 𝐻 is already a Hypergraph.

Case 2: General SuperHyperGraph. When 𝐻 is a general SuperHyperGraph, each supervertex 𝑣 ∈ 𝑉 may


represent a subset of 𝑉0 . The expansion process aggregates all base vertices in 𝑉0 that are part of the supervertices
in each superedge 𝑒 ∈ 𝐸. This allows 𝐻 ′ = (𝑉0 , 𝐸 ′ ) to represent relationships among base vertices in 𝑉0 in a
way that subsumes the structure of a Hypergraph.
The Expanded Hypergraph 𝐻 ′ retains the flexibility to represent any Hypergraph by treating each vertex
𝑣 ∈ 𝑉 as a single base vertex in 𝑉0 . Simultaneously, it extends the concept of a Hypergraph by allowing vertices
in 𝐸 to represent subsets of base vertices, enabling more complex relational structures.
Since the Expanded Hypergraph 𝐻 ′ encompasses both the structure of Hypergraphs and the extended
relational complexity of SuperHyperGraphs, we conclude that the Expanded Hypergraph of a SuperHyperGraph
generalizes a Hypergraph. □
2.6 HGNN:Hypergraph Neural Network
The Hypergraph Neural Network is a concept designed to utilize the general Graph Neural Network at a
higher level, and it has been studied extensively across numerous frameworks and concepts[115, 229, 231, 236,
239, 239, 244, 410, 425, 426]. The definitions are provided below.
Definition 2.34 (Hypergraph Neural Network). [115] Let 𝐺 = (𝑉, 𝐸, 𝑊) be a hypergraph, where:
• 𝑉 = {𝑣 1 , 𝑣 2 , . . . , 𝑣 𝑛 } is the set of vertices.
• 𝐸 = {𝑒 1 , 𝑒 2 , . . . , 𝑒 𝑚 } is the set of hyperedges, where each hyperedge 𝑒 𝑖 ⊆ 𝑉 connects a subset of vertices.
• 𝑊 = diag(𝑤 1 , 𝑤 2 , . . . , 𝑤 𝑚 ) is a diagonal matrix of hyperedge weights, where 𝑤 𝑖 > 0 represents the
weight of hyperedge 𝑒 𝑖 .
The Hypergraph Neural Network (HGNN) is a neural network framework designed for representation
learning on hypergraphs. It utilizes the hypergraph structure to aggregate features from vertices and their con-
nections through hyperedges. The key components of HGNN are defined as follows:

Incidence Matrix The incidence matrix 𝐻 ∈ R𝑛×𝑚 of the hypergraph 𝐺 is defined as:
$$H_{ij} = \begin{cases} 1, & \text{if vertex } v_i \in e_j, \\ 0, & \text{otherwise.} \end{cases}$$

Vertex and Hyperedge Degrees The degree of a vertex 𝑣 𝑖 ∈ 𝑉 is defined as:


$$d(v_i) = \sum_{e_j \in E} H_{ij} \, w_j.$$
The degree of a hyperedge 𝑒𝑗 ∈ 𝐸 is defined as:
$$\delta(e_j) = \sum_{v_i \in V} H_{ij}.$$

Let 𝐷 𝑉 ∈ R𝑛×𝑛 and 𝐷 𝐸 ∈ R𝑚×𝑚 be the diagonal matrices of vertex degrees and hyperedge degrees,
respectively, where:
(𝐷 𝑉 )𝑖𝑖 = 𝑑 (𝑣 𝑖 ), (𝐷 𝐸 ) 𝑗 𝑗 = 𝛿(𝑒 𝑗 ).

Hypergraph Laplacian (cf.[75, 137]) The hypergraph Laplacian Δ is defined as:


$$\Delta = I - D_V^{-1/2} H W D_E^{-1} H^{\top} D_V^{-1/2},$$
where 𝐼 is the identity matrix.

Spectral Convolution on Hypergraph (cf.[38, 251]) The convolution operation in HGNN is performed in
the spectral domain using the hypergraph Laplacian. Given a feature matrix 𝑋 ∈ R𝑛×𝑑 , where each row 𝑥𝑖
represents the feature vector of vertex 𝑣 𝑖 , the output feature matrix 𝑌 ∈ R𝑛×𝑐 is computed as:
 
$$Y = \sigma\big( D_V^{-1/2} H W D_E^{-1} H^{\top} D_V^{-1/2} X \Theta \big),$$

where:
• 𝜎 is a nonlinear activation function (e.g., ReLU).
• Θ ∈ R𝑑×𝑐 is the learnable weight matrix.

Node Classification Task For a node classification task, let 𝑋 (0) be the input feature matrix. A multi-layer
HGNN can be defined recursively as:
 
$$X^{(l+1)} = \sigma\big( D_V^{-1/2} H W D_E^{-1} H^{\top} D_V^{-1/2} X^{(l)} \Theta^{(l)} \big),$$

where 𝑙 denotes the layer index, Θ (𝑙) is the learnable weight matrix for layer 𝑙, and 𝑋 (𝑙+1) is the feature matrix
output at layer 𝑙 + 1.

Output Layer In the final layer, the softmax function is applied to the output features to produce class proba-
bilities for each node:
𝑌ˆ = softmax(𝑋 (𝐿) ),
where 𝐿 is the total number of layers and 𝑌ˆ ∈ R𝑛×𝑐 contains the predicted probabilities for 𝑐 classes.
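A minimal NumPy sketch of a single HGNN layer as defined above, applied to the hypergraph of Example 2.24 (all names are ours; the weights are random and untrained, so the output is only a shape check):

import numpy as np

def hgnn_conv(Hinc, w, X, Theta, activation=lambda z: np.maximum(z, 0.0)):
    # One HGNN layer: Y = sigma(D_V^{-1/2} H W D_E^{-1} H^T D_V^{-1/2} X Theta).
    # Hinc: (n, m) incidence matrix, w: (m,) hyperedge weights,
    # X: (n, d) vertex features, Theta: (d, c) learnable weights.
    W = np.diag(w)
    d_v = Hinc @ w                   # vertex degrees d(v_i) = sum_j H_ij w_j
    d_e = Hinc.sum(axis=0)           # hyperedge degrees delta(e_j) = sum_i H_ij
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
    De_inv = np.diag(1.0 / d_e)
    return activation(Dv_inv_sqrt @ Hinc @ W @ De_inv
                      @ Hinc.T @ Dv_inv_sqrt @ X @ Theta)

# Example 2.24: 4 vertices, hyperedges {v1,v2}, {v2,v3,v4}, {v1}.
Hinc = np.array([[1, 0, 1],
                 [1, 1, 0],
                 [0, 1, 0],
                 [0, 1, 0]], dtype=float)
w = np.ones(3)
X = np.random.rand(4, 5)
Theta = np.random.rand(5, 2)
print(hgnn_conv(Hinc, w, X, Theta).shape)   # (4, 2)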
Proposition 2.35. A Hypergraph Neural Network can generalize a Classical Graph Neural Network.
Proof. This is evident from the definitions. □
2.7 Uncertain Graph
The concept of the Fuzzy Set, introduced approximately half a century ago, has spurred the development
of various graph theories aimed at modeling uncertainty[430]. In this section, we outline definitions for several
frameworks, including Fuzzy Graphs, Intuitionistic Fuzzy Graphs, Neutrosophic Graphs, and Single-Valued
Pentapartitioned Neutrosophic Graphs.
A Fuzzy Graph is frequently analyzed in the context of a Crisp Graph [121]. To provide a foundation,
we begin by presenting the definition of a Crisp Graph [121].
Definition 2.36 (Crisp Graph). (cf.[121]) A Crisp Graph 𝐺 = (𝑉, 𝐸) is defined as follows:
1. 𝑉: A non-empty finite set of vertices (or nodes).
2. 𝐸 ⊆ {{𝑢, 𝑣} | 𝑢, 𝑣 ∈ 𝑉 and 𝑢 ≠ 𝑣}: A set of unordered pairs of vertices, called edges. Each edge is
associated with exactly two vertices, referred to as its endpoints. An edge is said to connect its endpoints.

Special Cases
• A graph 𝐺 with 𝐸 = ∅ is called an edgeless graph.
Next, we introduce the concepts of Fuzzy Graph, Intuitionistic Fuzzy Graph, Neutrosophic Graph, Hes-
itant Fuzzy Graph, Quadripartitioned Neutrosophic Graph (QNG), and Single-Valued Pentapartitioned Neutro-
sophic Graph. Readers are encouraged to refer to survey papers (e.g., [121, 123]) for more detailed information
if needed.
Definition 2.37 (Unified Framework for Uncertain Graphs). (cf. [123]) Let 𝐺 = (𝑉, 𝐸) be a classical graph,
where 𝑉 is the set of vertices and 𝐸 is the set of edges. Depending on the type of graph, each vertex 𝑣 ∈ 𝑉 and
edge 𝑒 ∈ 𝐸 is associated with membership values to represent various degrees of truth, indeterminacy, falsity,
and other measures of uncertainty.

1. Fuzzy Graph (cf. [53, 136, 144, 267, 279, 306, 404])
• Each vertex 𝑣 ∈ 𝑉 is assigned a membership degree 𝜎(𝑣) ∈ [0, 1].

• Each edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 is assigned a membership degree 𝜇(𝑢, 𝑣) ∈ [0, 1].
2. Intuitionistic Fuzzy Graph (IFG) (cf. [9, 199, 383, 445])
• Each vertex 𝑣 ∈ 𝑉 has two values: 𝜇 𝐴 (𝑣) ∈ [0, 1] (degree of membership) and 𝜈 𝐴 (𝑣) ∈ [0, 1]
(degree of non-membership), satisfying 𝜇 𝐴 (𝑣) + 𝜈 𝐴 (𝑣) ≤ 1.
• Each edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 has two values: 𝜇 𝐵 (𝑢, 𝑣) ∈ [0, 1] and 𝜈 𝐵 (𝑢, 𝑣) ∈ [0, 1], with 𝜇 𝐵 (𝑢, 𝑣) +
𝜈 𝐵 (𝑢, 𝑣) ≤ 1.
3. Neutrosophic Graph (cf. [17, 65, 161, 188, 209, 341, 354])
• Each vertex 𝑣 ∈ 𝑉 is associated with a triplet 𝜎(𝑣) = (𝜎𝑇(𝑣), 𝜎𝐼(𝑣), 𝜎𝐹(𝑣)), where 𝜎𝑇(𝑣), 𝜎𝐼(𝑣), 𝜎𝐹(𝑣) ∈ [0, 1] and 𝜎𝑇(𝑣) + 𝜎𝐼(𝑣) + 𝜎𝐹(𝑣) ≤ 3.
• Each edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 is associated with a triplet 𝜇(𝑒) = (𝜇𝑇 (𝑒), 𝜇 𝐼 (𝑒), 𝜇 𝐹 (𝑒)).
4. Hesitant Fuzzy Graph (cf. [39, 146, 281, 286, 417])
• Each vertex 𝑣 ∈ 𝑉 is assigned a hesitant fuzzy set 𝜎(𝑣) ⊆ [0, 1].
• Each edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 is assigned a hesitant fuzzy set 𝜇(𝑒) ⊆ [0, 1].
5. Quadripartitioned Neutrosophic Graph (QNG) (cf. [190, 191, 193, 313, 327])
• Each vertex 𝑣 ∈ 𝑉 is associated with a quadripartitioned neutrosophic membership 𝜎(𝑣) = (𝜎1(𝑣), 𝜎2(𝑣), 𝜎3(𝑣), 𝜎4(𝑣)), where 𝜎1(𝑣), 𝜎2(𝑣), 𝜎3(𝑣), 𝜎4(𝑣) ∈ [0, 1] and 𝜎1(𝑣) + 𝜎2(𝑣) + 𝜎3(𝑣) + 𝜎4(𝑣) ≤ 4.
• Each edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 is associated with a quadripartitioned membership 𝜎(𝑒) = (𝜎1(𝑒), 𝜎2(𝑒), 𝜎3(𝑒), 𝜎4(𝑒)), satisfying:
𝜎1(𝑒) ≤ min{𝜎1(𝑢), 𝜎1(𝑣)},
𝜎2(𝑒) ≤ min{𝜎2(𝑢), 𝜎2(𝑣)},
𝜎3(𝑒) ≤ max{𝜎3(𝑢), 𝜎3(𝑣)},
𝜎4(𝑒) ≤ max{𝜎4(𝑢), 𝜎4(𝑣)}.
6. Single-Valued Pentapartitioned Neutrosophic Graph (cf. [91, 189, 191, 298])
• Each vertex 𝑣 ∈ 𝑉 is assigned a quintuple 𝜎(𝑣) = (𝜎1(𝑣), 𝜎2(𝑣), 𝜎3(𝑣), 𝜎4(𝑣), 𝜎5(𝑣)), where 𝜎1(𝑣), 𝜎2(𝑣), 𝜎3(𝑣), 𝜎4(𝑣), 𝜎5(𝑣) ∈ [0, 1] and 𝜎1(𝑣) + 𝜎2(𝑣) + 𝜎3(𝑣) + 𝜎4(𝑣) + 𝜎5(𝑣) ≤ 5.
• Each edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 is assigned a quintuple 𝜎(𝑒) = (𝜎1(𝑒), 𝜎2(𝑒), 𝜎3(𝑒), 𝜎4(𝑒), 𝜎5(𝑒)), satisfying:
𝜎1(𝑒) ≤ min{𝜎1(𝑢), 𝜎1(𝑣)},
𝜎2(𝑒) ≤ min{𝜎2(𝑢), 𝜎2(𝑣)},
𝜎3(𝑒) ≥ max{𝜎3(𝑢), 𝜎3(𝑣)},
𝜎4(𝑒) ≥ max{𝜎4(𝑢), 𝜎4(𝑣)},
𝜎5(𝑒) ≥ max{𝜎5(𝑢), 𝜎5(𝑣)}.

We provide examples of Fuzzy Graphs and Neutrosophic Graphs applied to real-world scenarios. These
examples demonstrate how Uncertain Graphs are well-known for their ability to model various phenomena in
the real world[7, 18, 64, 160, 192, 329].
Example 2.38 (Fuzzy Graph: Social Network with Varying Friendship Strengths). Consider a social network
where individuals are connected based on their friendships, with varying strengths (cf.[248,252,310,402]). This
can be modeled using a fuzzy graph, where vertices represent individuals, and edges represent friendships with
varying degrees of strength.

Definition: Let 𝐺 = (𝑉, 𝐸) be a fuzzy graph where:


• 𝑉 = {Alice, Bob, Carol, Dave} is the set of individuals.
• 𝐸 ⊆ 𝑉 × 𝑉 represents the friendships between individuals.

Membership Functions:
• Vertex Membership Degrees (𝜎(𝑣)): The membership degree of each vertex represents the individual’s
level of activity or influence in the social network:

𝜎(Alice) = 0.9 (Highly active user),


𝜎(Bob) = 0.7 (Active user),
𝜎(Carol) = 0.5 (Moderately active user),
𝜎(Dave) = 0.3 (Less active user).

• Edge Membership Degrees (𝜇(𝑢, 𝑣)): The membership degree of each edge represents the strength of the
friendship:
𝜇(Alice, Bob) = 0.8 (Strong friendship),
𝜇(Bob, Carol) = 0.6 (Moderate friendship),
𝜇(Carol, Dave) = 0.4 (Weak friendship),
𝜇(Alice, Dave) = 0.2 (Very weak friendship).

Alice is highly active in the network, engaging frequently, while Dave is the least active. Alice and Bob
share a strong friendship, while Carol and Dave have a weak connection.
This fuzzy graph allows for a nuanced analysis of social networks by modeling the varying strengths of
relationships and activity levels, aiding in tasks like community detection or recommendation systems (cf.[71,
93, 409, 413]).
Example 2.39 (Neutrosophic Graph: Disease Transmission Network with Uncertainty). In epidemiology, un-
derstanding the spread of disease through a population is crucial. A neutrosophic graph can model the uncer-
tainty in infection statuses and transmission probabilities (cf.[4, 270, 328]).

Definition: Let 𝐺 = (𝑉, 𝐸) be a neutrosophic graph where:


• 𝑉 = {Patient1, Patient2, Patient3, Patient4} represents individuals.
• 𝐸 ⊆ 𝑉 × 𝑉 represents potential transmission paths.

Membership Functions:
• Vertex Membership Triplets (𝜎(𝑣) = (𝜎𝑇 (𝑣), 𝜎𝐼 (𝑣), 𝜎𝐹 (𝑣))): Each vertex is assigned degrees of truth
(𝜎𝑇 ), indeterminacy (𝜎𝐼 ), and falsity (𝜎𝐹 ):

𝜎(Patient1) = (0.9, 0.1, 0.0) (Highly likely infected),


𝜎(Patient2) = (0.5, 0.4, 0.1) (Uncertain status),
𝜎(Patient3) = (0.2, 0.3, 0.5) (Possibly not infected),
𝜎(Patient4) = (0.0, 0.1, 0.9) (Highly likely not infected).

• Edge Membership Triplets (𝜇(𝑒) = (𝜇𝑇 (𝑒), 𝜇 𝐼 (𝑒), 𝜇 𝐹 (𝑒))): Each edge is assigned degrees of truth,
indeterminacy, and falsity:

𝜇(Patient1, Patient2) = (0.8, 0.1, 0.1) (High likelihood of transmission),


𝜇(Patient2, Patient3) = (0.4, 0.4, 0.2) (Uncertain transmission),
𝜇(Patient3, Patient4) = (0.1, 0.2, 0.7) (Low likelihood of transmission),
𝜇(Patient1, Patient4) = (0.2, 0.3, 0.5) (Possible but unlikely transmission).

Patient1 is highly likely infected and may transmit the disease to Patient2. The transmission between
Patient2 and Patient3 is uncertain. Patient4 is highly unlikely to be infected, with low chances of transmission
from others.
Neutrosophic graphs can aid in modeling uncertain infection and transmission dynamics, supporting
efforts in contact tracing, resource allocation, and risk assessment.
Proposition 2.40. Neutrosophic graphs can generalize Fuzzy Graphs.
Proof. This follows directly (cf.[355]). □

A Plithogenic Graph is a generalized graph based on the concept of a Plithogenic Set. This graph is
known for its ability to generalize structures such as Fuzzy Graphs and Neutrosophic Graphs described earlier.
The definition is provided below [338].
Definition 2.41. [145, 338, 339, 357, 364] Let 𝐺 = (𝑉, 𝐸) be a crisp graph where 𝑉 is the set of vertices and
𝐸 ⊆ 𝑉 × 𝑉 is the set of edges. A Plithogenic Graph 𝑃𝐺 is defined as:

𝑃𝐺 = (𝑃𝑀, 𝑃𝑁)
where:

1. Plithogenic Vertex Set 𝑃𝑀 = (𝑀, 𝑙, 𝑀𝑙, 𝑎𝑑𝑓 , 𝑎𝐶 𝑓 ):


• 𝑀 ⊆ 𝑉 is the set of vertices.
• 𝑙 is an attribute associated with the vertices.
• 𝑀𝑙 is the range of possible attribute values.
• 𝑎𝑑𝑓 : 𝑀 × 𝑀𝑙 → [0, 1] 𝑠 is the Degree of Appurtenance Function (DAF) for vertices.
• 𝑎𝐶 𝑓 : 𝑀𝑙 × 𝑀𝑙 → [0, 1] 𝑡 is the Degree of Contradiction Function (DCF) for vertices.
2. Plithogenic Edge Set 𝑃𝑁 = (𝑁, 𝑚, 𝑁𝑚, 𝑏𝑑𝑓 , 𝑏𝐶 𝑓 ):
• 𝑁 ⊆ 𝐸 is the set of edges.
• 𝑚 is an attribute associated with the edges.
• 𝑁𝑚 is the range of possible attribute values.
• 𝑏𝑑𝑓 : 𝑁 × 𝑁𝑚 → [0, 1] 𝑠 is the Degree of Appurtenance Function (DAF) for edges.
• 𝑏𝐶 𝑓 : 𝑁𝑚 × 𝑁𝑚 → [0, 1] 𝑡 is the Degree of Contradiction Function (DCF) for edges.

The Plithogenic Graph 𝑃𝐺 must satisfy the following conditions:

1. Edge Appurtenance Constraint: For all (𝑥, 𝑎), (𝑦, 𝑏) ∈ 𝑀 × 𝑀𝑙:

𝑏𝑑𝑓 ((𝑥𝑦), (𝑎, 𝑏)) ≤ min{𝑎𝑑𝑓 (𝑥, 𝑎), 𝑎𝑑𝑓 (𝑦, 𝑏)}

where 𝑥𝑦 ∈ 𝑁 is an edge between vertices 𝑥 and 𝑦, and (𝑎, 𝑏) ∈ 𝑁𝑚 × 𝑁𝑚 are the corresponding attribute
values.
2. Contradiction Function Constraint: For all (𝑎, 𝑏), (𝑐, 𝑑) ∈ 𝑁𝑚 × 𝑁𝑚:

𝑏𝐶 𝑓 ((𝑎, 𝑏), (𝑐, 𝑑)) ≤ min{𝑎𝐶 𝑓 (𝑎, 𝑐), 𝑎𝐶 𝑓 (𝑏, 𝑑)}

3. Reflexivity and Symmetry of Contradiction Functions:

𝑎𝐶 𝑓 (𝑎, 𝑎) = 0, ∀𝑎 ∈ 𝑀𝑙
𝑎𝐶 𝑓 (𝑎, 𝑏) = 𝑎𝐶 𝑓 (𝑏, 𝑎), ∀𝑎, 𝑏 ∈ 𝑀𝑙
𝑏𝐶 𝑓 (𝑎, 𝑎) = 0, ∀𝑎 ∈ 𝑁𝑚
𝑏𝐶 𝑓 (𝑎, 𝑏) = 𝑏𝐶 𝑓 (𝑏, 𝑎), ∀𝑎, 𝑏 ∈ 𝑁𝑚

Example 2.42. (cf.[121]) The following examples of Plithogenic Graphs are provided.

• When 𝑠 = 𝑡 = 1, 𝑃𝐺 is called a Plithogenic Fuzzy Graphs.


• When 𝑠 = 2, 𝑡 = 1, 𝑃𝐺 is called a Plithogenic Intuitionistic Fuzzy Graphs.
• When 𝑠 = 3, 𝑡 = 1, 𝑃𝐺 is called a Plithogenic Neutrosophic Graphs.
• When 𝑠 = 4, 𝑡 = 1, 𝑃𝐺 is called a Plithogenic quadripartitioned Neutrosophic Graphs (cf.[193,302,327]).
• When 𝑠 = 5, 𝑡 = 1, 𝑃𝐺 is called a Plithogenic pentapartitioned Neutrosophic Graphs (cf.[56, 92, 256]).
• When 𝑠 = 6, 𝑡 = 1, 𝑃𝐺 is called a Plithogenic hexapartitioned Neutrosophic Graphs (cf.[287]).
• When 𝑠 = 7, 𝑡 = 1, 𝑃𝐺 is called a Plithogenic heptapartitioned Neutrosophic Graphs (cf.[62, 271]).
• When 𝑠 = 8, 𝑡 = 1, 𝑃𝐺 is called a Plithogenic octapartitioned Neutrosophic Graphs.
• When 𝑠 = 9, 𝑡 = 1, 𝑃𝐺 is called a Plithogenic nonapartitioned Neutrosophic Graphs.
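The Edge Appurtenance Constraint of Definition 2.41 can be checked mechanically. The following minimal Python sketch (names and toy data are ours; the minimum over [0, 1]^s is taken component-wise, which is one common reading of the constraint) verifies it for a toy Plithogenic Neutrosophic Graph with 𝑠 = 3:

def satisfies_edge_appurtenance(adf, bdf, edge, attr_pair):
    # Check bdf((x, y), (a, b)) <= min{adf(x, a), adf(y, b)}, component-wise.
    (x, y), (a, b) = edge, attr_pair
    bound = [min(p, q) for p, q in zip(adf[(x, a)], adf[(y, b)])]
    return all(d <= u for d, u in zip(bdf[(edge, attr_pair)], bound))

# Toy degrees of appurtenance for two vertices and the edge between them (s = 3).
adf = {("x", "a"): (0.8, 0.3, 0.1), ("y", "b"): (0.6, 0.5, 0.2)}
bdf = {(("x", "y"), ("a", "b")): (0.5, 0.3, 0.1)}
print(satisfies_edge_appurtenance(adf, bdf, ("x", "y"), ("a", "b")))  # True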

2.8 Fuzzy Graph Neural Network (F-GNN)


In this subsection, we introduce the concept of the Fuzzy Graph Neural Network (F-GNN). A Fuzzy
Graph Neural Network (F-GNN) is a graph inference model that combines the principles of fuzzy logic and
graph neural networks (GNNs). It is specifically designed to address fuzzy and uncertain data within graph-
structured information (cf.[78, 116, 162, 224, 295, 392, 439, 442]). Below, we present the formal definition of
F-GNN.
Definition 2.43. [104] An F-GNN is defined as a quintuple:

F-GNN = (𝐺, F𝑉 , F𝐸 , R, D) ,

where:
• 𝐺 = (𝑉, 𝐸) is a graph where 𝑉 represents the set of vertices and 𝐸 represents the set of edges.
• F𝑉 and F𝐸 are the fuzzification functions for vertices and edges, respectively. These functions map vertex
and edge attributes to fuzzy membership values:

F𝑉 : X𝑉 → [0, 1] 𝑀 , F𝐸 : X𝐸 → [0, 1] 𝑀 ,

where 𝑀 is the number of fuzzy subsets, and X𝑉 and X𝐸 denote the attribute spaces for vertices and
edges.

• R represents the rule layer, which encodes fuzzy rules of the form:
$$\bigwedge_{i=1}^{N} \; \text{IF vertex } v_i \text{ satisfies } \mathcal{F}_V(v_i) \text{ THEN } \mathcal{D}(v_i) \text{ outputs the prediction},$$

where D is the defuzzification layer.


• D is the defuzzification function, which aggregates the outputs of the rule layer to produce a crisp output
for each vertex or edge.
Definition 2.44. [104] Given an input graph 𝐺 = (𝑉, 𝐸) with vertex features 𝑋𝑉 and edge features 𝑋𝐸 , F-GNN
operates as follows:
1. Fuzzification Layer: Each vertex 𝑣 ∈ 𝑉 and edge 𝑒 ∈ 𝐸 is fuzzified using membership functions:

F𝑉 (𝑣) = [𝜇1 (𝑣), 𝜇2 (𝑣), . . . , 𝜇 𝑀 (𝑣)] , F𝐸 (𝑒) = [𝜇1 (𝑒), 𝜇2 (𝑒), . . . , 𝜇 𝑀 (𝑒)] .

2. Rule Layer: A set of fuzzy rules is defined to aggregate neighborhood information. For example:

IF 𝑣 ∈ 𝐴𝑚 AND 𝑢 ∈ 𝐴𝑛 THEN 𝑦 𝑘 = 𝑓 𝑘 (𝑥 𝑣 , 𝑥𝑢 ),

where 𝐴𝑚 , 𝐴𝑛 are fuzzy subsets, 𝑥 𝑣 , 𝑥𝑢 are vertex features, and 𝑓 𝑘 is a trainable function.
3. Normalization Layer: The firing strength of each rule is normalized:
$$\hat{r}_k = \frac{r_k}{\sum_{j=1}^{K} r_j},$$

where 𝑟 𝑘 is the firing strength of the 𝑘-th rule.


4. Defuzzification Layer: The normalized rule outputs are aggregated to produce crisp predictions:
$$y = \sum_{k=1}^{K} \hat{r}_k \cdot f_k(x).$$
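A minimal NumPy sketch of the normalization and defuzzification steps (the function name and toy numbers are ours, purely illustrative):

import numpy as np

def defuzzify(firing_strengths, rule_outputs):
    # Normalize rule firing strengths and aggregate rule outputs into a crisp
    # prediction: r_hat_k = r_k / sum_j r_j,  y = sum_k r_hat_k * f_k(x).
    r = np.asarray(firing_strengths, dtype=float)
    r_hat = r / r.sum()                      # normalization layer
    outputs = np.asarray(rule_outputs)       # shape (K, output_dim)
    return r_hat @ outputs                   # defuzzification layer

r = [0.2, 0.5, 0.3]                          # firing strengths of K = 3 rules
f = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]     # outputs f_k(x) of each rule
print(defuzzify(r, f))                       # [0.35 0.65]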

Definition 2.45. [104] For a multi-layer F-GNN, the 𝑙-th layer is defined as:
   
$$H^{(l)} = \sigma\Big( f_\theta\big(H^{(l-1)}, A\big) + H^{(l-1)} \Big),$$

where:
• 𝐻 (𝑙) is the output of the 𝑙-th layer.
• 𝜎 is a non-linear activation function (e.g., ReLU).
• 𝐴 is the adjacency matrix of the graph.
• 𝑓 𝜃 is a trainable function.
The final output of the F-GNN is:
 
$$Y = \mathrm{Softmax}\big(H^{(L)}\big),$$

where 𝐿 is the number of layers in the F-GNN.


Theorem 2.46. A Fuzzy Graph Neural Network (F-GNN) generalizes a Graph Neural Network (GNN).

Proof. To prove this, we show that the definition of an F-GNN encompasses the definition of a GNN as a special
case.

1. Graph Structure: Both GNNs and F-GNNs operate on a graph 𝐺 = (𝑉, 𝐸), where 𝑉 is the set of vertices,
and 𝐸 ⊆ 𝑉 × 𝑉 is the set of edges. While GNNs use crisp edge connections, F-GNNs extend this by assigning
fuzzy membership values to vertices and edges through the fuzzification functions F𝑉 and F𝐸 :

F𝑉 : X𝑉 → [0, 1] 𝑀 , F𝐸 : X𝐸 → [0, 1] 𝑀 .

When 𝑀 = 1 and membership values are restricted to binary {0, 1}, the F-GNN reduces to a standard GNN,
where F𝑉 and F𝐸 represent crisp vertices and edges.

2. Message Passing: In a GNN, messages between nodes are exchanged using functions 𝜙 𝑚 and aggregated
at each node 𝑣 𝑖 as:
$$m_i^{(t+1)} = \sum_{v_j \in \mathcal{N}(i)} \phi_m\big(h_i^{(t)}, h_j^{(t)}, e_{ij}\big),$$

where N (𝑖) is the set of neighbors of 𝑣 𝑖 .


In an F-GNN, the message passing incorporates fuzzy membership values through the rule layer R,
which defines fuzzy rules such as:

IF 𝑣 𝑖 ∈ 𝐴𝑚 AND 𝑣 𝑗 ∈ 𝐴𝑛 THEN 𝑓 𝑘 (h𝑖 , h 𝑗 , e𝑖 𝑗 ),

where 𝐴𝑚 and 𝐴𝑛 are fuzzy subsets, and 𝑓 𝑘 is a trainable function. If fuzzy subsets 𝐴𝑚 and 𝐴𝑛 are crisp (e.g.,
𝐴𝑚 = 𝐴𝑛 = {1}), the F-GNN reduces to the standard message passing mechanism of a GNN.

3. Node Updates: In a GNN, node updates are defined as:


$$h_i^{(t+1)} = \phi_u\big(h_i^{(t)}, m_i^{(t+1)}\big),$$

where 𝜙𝑢 is a node update function.


In an F-GNN, node updates are governed by fuzzy rules and defuzzification, aggregating over normalized
firing strengths:
$$y = \sum_{k=1}^{K} \hat{r}_k \cdot f_k(h_i),$$
where 𝑟ˆ𝑘 is the normalized firing strength of the 𝑘-th fuzzy rule. If there is only one rule (𝐾 = 1) and no
fuzzification is applied, the F-GNN node update simplifies to the standard GNN node update.

4. Generalization: The fuzzification and defuzzification layers in an F-GNN extend the crisp operations of a
GNN by introducing degrees of membership, enabling the model to handle uncertainty and imprecision. When
these additional features are disabled (e.g., by setting 𝑀 = 1 and 𝐾 = 1), the F-GNN reduces exactly to a GNN.
Since every operation in a GNN is a special case of the corresponding operation in an F-GNN, we
conclude that the F-GNN generalizes the GNN. □

3 Result: SuperHypergraph Neural Network


In this section, we explore the SuperHyperGraph Neural Network.
3.1 SuperHypergraph Neural Network
In this subsection, we explore the definition and theoretical framework of the SuperHypergraph Neural
Network. This concept is a mathematical extension of the Hypergraph Neural Network. It is important to note
that this study is purely theoretical, with no practical implementation or testing conducted on actual systems.
Definition 3.1 (SuperHypergraph Neural Network). Let 𝐻 = (𝑉, 𝐸) be a SuperHyperGraph with base vertices
𝑉0 , and let 𝐻 ′ = (𝑉0 , 𝐸 ′ ) be its Expanded Hypergraph. Let 𝑋 ∈ R |𝑉0 | ×𝑑 be the feature matrix for the base
vertices. Define:

• The incidence matrix $H' \in \mathbb{R}^{|V_0| \times |E'|}$ with entries
$$H'_{ij} = \begin{cases} 1, & \text{if } v_i \in e'_j, \\ 0, & \text{otherwise.} \end{cases}$$
• The diagonal vertex degree matrix $D_V \in \mathbb{R}^{|V_0| \times |V_0|}$ with entries
$$(D_V)_{ii} = d_V(v_i) = \sum_{j=1}^{|E'|} H'_{ij} \, w(e'_j),$$
where 𝑤(𝑒′𝑗) is the weight of hyperedge 𝑒′𝑗.
• The diagonal hyperedge degree matrix $D_E \in \mathbb{R}^{|E'| \times |E'|}$ with entries
$$(D_E)_{jj} = d_E(e'_j) = \sum_{i=1}^{|V_0|} H'_{ij}.$$
The convolution operation in the SHGNN is defined as
$$Y = \sigma\big( D_V^{-1/2} H' W D_E^{-1} H'^{\top} D_V^{-1/2} X \Theta \big),$$
where:
• $Y \in \mathbb{R}^{|V_0| \times c}$ is the output feature matrix.
• $W \in \mathbb{R}^{|E'| \times |E'|}$ is the diagonal matrix of hyperedge weights.
• $\Theta \in \mathbb{R}^{d \times c}$ is the learnable weight matrix.
• 𝜎 is an activation function (e.g., ReLU [44, 234]).
Theorem 3.2. A SuperHypergraph Neural Network (SHGNN) inherently possesses the structure of a SuperHy-
perGraph 𝐻 = (𝑉, 𝐸), where:
1. The vertex set 𝑉 corresponds to the subsets of the base vertices 𝑉0 used in the SHGNN.
2. The edge set 𝐸 corresponds to the relationships (superedges) among the supervertices, as encoded in the
hyperedge-weighted incidence matrix 𝐻 ′ .

Proof. By definition, the SuperHyperGraph vertex set 𝑉 ⊆ 𝑃(𝑉0 ) consists of subsets of the base vertex set 𝑉0 . In
the SHGNN, the input feature matrix 𝑋 ∈ R |𝑉0 | ×𝑑 defines the features associated with each base vertex 𝑣 𝑖 ∈ 𝑉0 .
These features are subsequently aggregated and processed in layers, preserving the subset structure of 𝑉.
The edge set 𝐸 in a SuperHyperGraph is defined as 𝐸 ⊆ 𝑃(𝑉), connecting multiple supervertices. In the
SHGNN, the relationships between subsets (supervertices) are captured by the hyperedges 𝑒 ∈ 𝐸, represented
in the weighted incidence matrix 𝐻 ′ . The matrix 𝐻 ′ explicitly encodes whether a base vertex 𝑣 𝑖 ∈ 𝑉0 belongs
to a hyperedge 𝑒 ′𝑗 ∈ 𝐸 ′ , thereby maintaining the SuperHyperGraph’s structure.
The convolution operation in the SHGNN, defined as:
 
$$Y = \sigma\big( D_V^{-1/2} H' W D_E^{-1} H'^{\top} D_V^{-1/2} X \Theta \big),$$

propagates and updates features across the graph while preserving the structural relationships encoded in 𝐻.
This operation respects the adjacency relationships among subsets of 𝑉0 as defined by the superedges.
The SHGNN’s architecture, including its vertex and edge representations and layer-wise operations,
directly corresponds to the mathematical structure of a SuperHyperGraph 𝐻 = (𝑉, 𝐸). Therefore, the SHGNN
inherently possesses the structure of a SuperHyperGraph. □

Theorem 3.3. The Hypergraph Neural Network (HGNN) is a special case of the SuperHypergraph Neural
Network (SHGNN). Specifically, when all supervertices are singleton subsets of 𝑉0 , and all superedges connect
these singleton supervertices, the SHGNN reduces to the HGNN.

Proof. Assume that all supervertices are singletons, i.e.,

𝑉 = {{𝑣 𝑖 } | 𝑣 𝑖 ∈ 𝑉0 } .

Then, each superedge 𝑒 ∈ 𝐸 connects supervertices that correspond directly to base vertices in 𝑉0 .
For each superedge 𝑒 ∈ 𝐸, the corresponding hyperedge in the Expanded Hypergraph is
$$e' = \bigcup_{v \in e} v = \bigcup_{v \in e} \{v_i\} = \{\, v_i \mid v = \{v_i\} \in e \,\}.$$
Thus, the Expanded Hypergraph 𝐻′ = (𝑉0, 𝐸′) is identical to the original hypergraph defined over 𝑉0 with hyperedges 𝐸.
The convolution operation in SHGNN becomes
 
$$Y = \sigma\big( D_V^{-1/2} H W D_E^{-1} H^{\top} D_V^{-1/2} X \Theta \big),$$

which is exactly the convolution operation used in the Hypergraph Neural Network (HGNN) .
Therefore, the SHGNN reduces to the HGNN in this case, demonstrating that SHGNN generalizes
HGNN. □

Corollary 3.4. The Graph Convolutional Network (GCN) is a special case of the SHGNN when all hyperedges
connect exactly two vertices.

Proof. When all hyperedges 𝑒 ′𝑗 in the Expanded Hypergraph 𝐻 ′ satisfy |𝑒 ′𝑗 | = 2, the hypergraph Laplacian
simplifies to the graph Laplacian. Consequently, the SHGNN convolution operation reduces to the GCN opera-
tion. □

3.2 Algorithm for SuperHypergraph Neural Network (SHGNN)


We present a detailed algorithm for implementing the SuperHypergraph Neural Network (SHGNN),
along with an analysis of its time and space complexity. The algorithm is described below.

Algorithm 1: SuperHypergraph Neural Network Convolution
Input:
• SuperHyperGraph $H = (V, E)$ with base vertices $V_0$ (where $|V_0| = n$);
• Feature matrix $X \in \mathbb{R}^{n \times d}$;
• Hyperedge weights $w(e'_j)$ for each hyperedge $e'_j \in E'$;
• Weight matrix $\Theta \in \mathbb{R}^{d \times c}$;
• Activation function $\sigma$.
Output: Output feature matrix $Y \in \mathbb{R}^{n \times c}$
1. Expand the SuperHyperGraph to obtain the Expanded Hypergraph $H' = (V_0, E')$: for each superedge $e \in E$, set $e' \leftarrow \bigcup_{v \in e} v$ (expand to base vertices) and add $e'$ to $E'$.
2. Construct the incidence matrix $H' \in \mathbb{R}^{n \times m}$, where $m = |E'|$: initialize $H'$ as a sparse zero matrix; for $j \leftarrow 1$ to $m$, set $H'_{ij} \leftarrow 1$ for each vertex $v_i \in e'_j$.
3. Compute vertex degrees $D_V$: for $i \leftarrow 1$ to $n$, set $d_V(v_i) \leftarrow \sum_{j=1}^{m} H'_{ij} \cdot w(e'_j)$ and $(D_V)_{ii} \leftarrow d_V(v_i)$.
4. Compute hyperedge degrees $D_E$: for $j \leftarrow 1$ to $m$, set $d_E(e'_j) \leftarrow \sum_{i=1}^{n} H'_{ij}$ and $(D_E)_{jj} \leftarrow d_E(e'_j)$.
5. Normalize the incidence matrix: compute the diagonal matrices $D_V^{-1/2}$ and $D_E^{-1}$; for each non-zero element $H'_{ij}$, set $\tilde{H}_{ij} \leftarrow (D_V^{-1/2})_{ii} \cdot H'_{ij} \cdot w(e'_j) \cdot (D_E^{-1})_{jj}$.
6. Compute the intermediate matrix $M$: $S \leftarrow H'^{\top} D_V^{-1/2} X$ (sparse matrix multiplication); $M \leftarrow \tilde{H} \cdot S$ (sparse matrix multiplication).
7. Compute the output features: $Y \leftarrow \sigma(M \cdot \Theta)$; return $Y$.
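The following listing is a minimal NumPy/SciPy sketch of Algorithm 1; the function name shgnn_convolution and the data layout (superedges given as collections of supervertex index sets) are illustrative assumptions rather than part of the original formulation.

# A minimal sketch of Algorithm 1; names and data layout are assumptions.
import numpy as np
from scipy.sparse import lil_matrix

def shgnn_convolution(superedges, n, X, edge_weights, Theta,
                      sigma=lambda z: np.maximum(z, 0.0)):
    # superedges: list of superedges, each a list of supervertices (sets of base-vertex indices)
    # n: number of base vertices |V0|; X: (n, d) features; Theta: (d, c) weights
    # Step 1: expand each superedge to a hyperedge over the base vertices.
    expanded = [set().union(*[set(v) for v in e]) for e in superedges]
    m = len(expanded)
    # Step 2: sparse incidence matrix H' (n x m).
    H = lil_matrix((n, m))
    for j, e_prime in enumerate(expanded):
        for i in e_prime:
            H[i, j] = 1.0
    H = H.tocsr()
    w = np.asarray(edge_weights, dtype=float)
    # Steps 3-4: vertex degrees d_V = H' w, hyperedge degrees d_E = column sums of H'.
    d_V = np.asarray(H @ w).ravel()
    d_E = np.asarray(H.sum(axis=0)).ravel()
    Dv_inv_sqrt = np.where(d_V > 0, 1.0 / np.sqrt(np.maximum(d_V, 1e-12)), 0.0)
    De_inv = np.where(d_E > 0, 1.0 / np.maximum(d_E, 1e-12), 0.0)
    # Steps 5-7: Y = sigma( D_V^{-1/2} H' W D_E^{-1} H'^T D_V^{-1/2} X Theta ).
    S = H.T @ (Dv_inv_sqrt[:, None] * X)
    M = Dv_inv_sqrt[:, None] * (H @ (w[:, None] * De_inv[:, None] * S))
    return sigma(M @ Theta)

Every step above touches only the non-zero entries of the sparse incidence matrix, consistent with the complexity analysis given below.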

Theorem 3.5. Given a SuperHyperGraph 𝐻 = (𝑉, 𝐸), base vertices 𝑉0 , feature matrix 𝑋, weight matrix Θ, and
activation function 𝜎, the algorithm computes the output feature matrix 𝑌 according to the SHGNN convolution
operation:
\[
Y = \sigma\!\left( D_V^{-1/2} H' W D_E^{-1} H'^{\top} D_V^{-1/2} X \Theta \right),
\]

where 𝐻 ′ is the incidence matrix of the Expanded Hypergraph 𝐻 ′ = (𝑉0 , 𝐸 ′ ), 𝐷 𝑉 and 𝐷 𝐸 are the vertex and
hyperedge degree matrices, and 𝑊 is the diagonal matrix of hyperedge weights.
Proof. The algorithm follows the steps required to compute the SHGNN convolution operation:

1. Expansion to 𝐻 ′ : The algorithm correctly expands each superedge 𝑒 ∈ 𝐸 into a hyperedge 𝑒 ′ ∈ 𝐸 ′ by


taking the union of all base vertices in the supervertices of 𝑒. This ensures that 𝐻 ′ accurately represents
the Expanded Hypergraph.
2. Construction of 𝐻 ′ : By iterating over each hyperedge 𝑒 ′𝑗 and setting 𝐻𝑖′ 𝑗 = 1 for all 𝑣 𝑖 ∈ 𝑒 ′𝑗 , the incidence
matrix 𝐻 ′ is correctly constructed.
3. Degree Matrices 𝐷 𝑉 and 𝐷 𝐸 : The degrees are computed as per their definitions:
\[
d_V(v_i) = \sum_{j=1}^{m} H'_{ij} \cdot w(e'_j), \qquad d_E(e'_j) = \sum_{i=1}^{n} H'_{ij}.
\]

The diagonal matrices 𝐷 𝑉 and 𝐷 𝐸 are correctly populated with these degrees.
4. Normalization and Computation of $\tilde{H}$: The normalized incidence matrix $\tilde{H}$ is computed using the degrees and weights, matching the formula:
\[
\tilde{H}_{ij} = (D_V^{-1/2})_{ii} \cdot H'_{ij} \cdot w(e'_j) \cdot (D_E^{-1})_{jj}.
\]

5. Convolution Operation: The algorithm computes:
\[
Y = \sigma\!\left( \tilde{H} \cdot H'^{\top} D_V^{-1/2} X \Theta \right),
\]
which simplifies to:
\[
Y = \sigma\!\left( D_V^{-1/2} H' W D_E^{-1} H'^{\top} D_V^{-1/2} X \Theta \right),
\]

as per the SHGNN convolution definition.


6. Activation Function: The application of 𝜎 ensures the non-linear transformation is applied to the output.

Thus, each step of the algorithm correctly implements the corresponding mathematical operation in the
SHGNN convolution, ensuring correctness. □
Theorem 3.6. Let 𝑛 = |𝑉0 | be the number of base vertices, 𝑚 = |𝐸 ′ | be the number of hyperedges in the
Expanded Hypergraph, 𝑑 be the input feature dimension, 𝑐 be the output feature dimension, and nnz(𝐻 ′ ) be the
number of non-zero entries in the incidence matrix 𝐻 ′ . The time complexity of the algorithm is:

\[
O\!\left( |E| \cdot k \cdot s + \mathrm{nnz}(H') \cdot (d + 1) + n \cdot d \cdot c \right),
\]


where 𝑘 is the average number of supervertices per superedge, and 𝑠 is the average size of a supervertex.
Proof. We analyze the time complexity of each step in the algorithm:

1. Expansion to 𝐻 ′ :
• For each superedge $e \in E$, the expansion $e' = \bigcup_{v \in e} v$ involves $O(k s)$ operations, where $k$ is the average number of supervertices in $e$, and $s$ is the average size of a supervertex.
• Total time for this step: 𝑂 (|𝐸 | · 𝑘 · 𝑠).
2. Construction of 𝐻 ′ :
• For each hyperedge 𝑒 ′𝑗 , we iterate over its vertices 𝑣 𝑖 ∈ 𝑒 ′𝑗 and set 𝐻𝑖′ 𝑗 = 1.
• Time complexity: 𝑂 (nnz(𝐻 ′ )).
3. Compute 𝐷 𝑉 :
• For each vertex 𝑣 𝑖 , sum over hyperedges where 𝐻𝑖′ 𝑗 = 1.

• Time complexity: 𝑂 (nnz(𝐻 ′ )).
4. Compute 𝐷 𝐸 :
• For each hyperedge 𝑒 ′𝑗 , sum over vertices where 𝐻𝑖′ 𝑗 = 1.
• Time complexity: 𝑂 (nnz(𝐻 ′ )).
˜
5. Normalize 𝐻:
• Multiplying diagonal matrices and updating non-zero entries.
• Time complexity: 𝑂 (nnz(𝐻 ′ )).
−1/2
6. Compute 𝑆 = 𝐻 ′ ⊤ 𝐷 𝑉 𝑋:
• Sparse matrix-vector multiplication.
• Time complexity: 𝑂 (nnz(𝐻 ′ ) · 𝑑).
7. Compute 𝑀 = 𝐻˜ · 𝑆:
• Sparse matrix-vector multiplication.
• Time complexity: 𝑂 (nnz(𝐻 ′ ) · 𝑑).
8. Compute 𝑌 = 𝜎(𝑀 · Θ):
• Dense matrix multiplication: 𝑂 (𝑛 · 𝑑 · 𝑐).
• Activation function application: 𝑂 (𝑛 · 𝑐).

Adding up the time complexities:

\[
O\!\left( |E| \cdot k \cdot s + \mathrm{nnz}(H') \cdot (1 + d) + n \cdot d \cdot c \right).
\]


Thus, the time complexity of the algorithm is as stated. □


Theorem 3.7. The space complexity of the algorithm is:

\[
O\!\left( \mathrm{nnz}(H') + n \cdot (d + c) + m \cdot d + d \cdot c \right),
\]


where 𝑛, 𝑚, 𝑑, 𝑐, and nnz(𝐻 ′ ) are as previously defined.


Proof. We account for the space used by the algorithm:

1. Incidence Matrix 𝐻 ′ :
• Stored in sparse format.
• Space complexity: 𝑂 (nnz(𝐻 ′ )).
2. Degree Matrices 𝐷 𝑉 and 𝐷 𝐸 :
• Diagonal matrices.
• Space complexity: 𝑂 (𝑛 + 𝑚).
3. Feature Matrix 𝑋:
• Space complexity: 𝑂 (𝑛 · 𝑑).
4. Weight Matrix Θ:
• Space complexity: 𝑂 (𝑑 · 𝑐).
5. Intermediate Matrices 𝑆 and 𝑀:
• 𝑆 ∈ R𝑚×𝑑 : 𝑂 (𝑚 · 𝑑).
• 𝑀 ∈ R𝑛×𝑑 : 𝑂 (𝑛 · 𝑑).
6. Output Matrix 𝑌 :

• Space complexity: 𝑂 (𝑛 · 𝑐).

Adding up the space complexities:

\[
O\!\left( \mathrm{nnz}(H') + n + m + n \cdot d + m \cdot d + n \cdot c + d \cdot c \right).
\]


Simplifying, and noting that 𝑛 + 𝑚 is dominated by 𝑛 · 𝑑 and 𝑚 · 𝑑, we have:

\[
O\!\left( \mathrm{nnz}(H') + n \cdot (d + c) + m \cdot d + d \cdot c \right).
\]


Thus, the space complexity is as stated. □

Theorem 3.8. If the Expanded Hypergraph 𝐻 ′ is sparse, i.e., nnz(𝐻 ′ ) = 𝑂 (𝑛), then the algorithm operates in
linear time and space with respect to the number of vertices 𝑛.
Proof. When 𝐻 ′ is sparse, nnz(𝐻 ′ ) = 𝑂 (𝑛). Substituting this into the time and space complexities:

Time Complexity:
𝑂 (|𝐸 | · 𝑘 · 𝑠 + 𝑛 · (𝑑 + 1) + 𝑛 · 𝑑 · 𝑐) .
If |𝐸 | · 𝑘 · 𝑠 = 𝑂 (𝑛) (which holds if the average superedge and supervertex sizes are bounded), the total time
complexity becomes 𝑂 (𝑛 · 𝑑 · 𝑐).

Space Complexity:
𝑂 (𝑛 + 𝑛 · (𝑑 + 𝑐) + 𝑛 · 𝑑 + 𝑑 · 𝑐) = 𝑂 (𝑛 · (𝑑 + 𝑐) + 𝑑 · 𝑐) .
Thus, both time and space complexities are linear in 𝑛 when 𝐻 ′ is sparse and superedge/supervertex
sizes are bounded. □
3.3 𝑛-SuperHyperGraph Neural Network
A SuperHyperGraph can be generalized to an 𝑛-SuperHyperGraph. This is defined based on the concept
of the 𝑛-th powerset. The formal definition is provided below.
Definition 3.9 (Power Set). (cf.[97]) Let 𝑆 be a set. The power set of 𝑆, denoted by P (𝑆), is defined as the set
of all subsets of 𝑆, including the empty set and 𝑆 itself. Formally, we write:

P (𝑆) = {𝑇 | 𝑇 ⊆ 𝑆}.

The power set P (𝑆) contains 2 |𝑆 | elements, where |𝑆| represents the cardinality of 𝑆. This is because each
element of 𝑆 can either be included in or excluded from each subset.
Definition 3.10 (𝑛-th PowerSet (Recall)). (cf.[340, 352]) Let 𝐻 be a set representing a system or structure,
such as a set of items, a company, an institution, a country, or a region. The 𝑛-th PowerSet, denoted as P𝑛∗ (𝐻),
describes a hierarchical organization of 𝐻 into subsystems, sub-subsystems, and so forth. It is defined recursively
as follows:
1. Base Case:
P0∗ (𝐻) := 𝐻.

2. First-Level PowerSet:
P1∗ (𝐻) = P (𝐻),
where P (𝐻) is the power set of 𝐻.
3. Higher Levels: For 𝑛 ≥ 2, the 𝑛-th PowerSet is defined recursively as:

\[
\mathcal{P}_n^*(H) = \mathcal{P}\!\left( \mathcal{P}_{n-1}^*(H) \right).
\]

Thus, P𝑛∗ (𝐻) represents a nested hierarchy, where the power set operation P is applied 𝑛 times. Formally:

P𝑛∗ (𝐻) = P (P (· · · P (𝐻) · · · )),

where the power set operation P is repeated 𝑛 times.

Example 3.11 (𝑛-th PowerSet of a Simple Set). Let 𝐻 = {𝑎, 𝑏} be a set. The computation of P𝑛∗ (𝐻) for different
𝑛 is as follows:

1. Base Case (𝑛 = 0):


P0∗ (𝐻) = 𝐻 = {𝑎, 𝑏}.

2. First-Level PowerSet (𝑛 = 1):

P1∗ (𝐻) = P (𝐻) = {∅, {𝑎}, {𝑏}, {𝑎, 𝑏}}.

3. Second-Level PowerSet (𝑛 = 2):

P2∗ (𝐻) = P (P (𝐻)) = P ({∅, {𝑎}, {𝑏}, {𝑎, 𝑏}}) .

The elements of P2∗ (𝐻) are all subsets of P (𝐻), such as:

P2∗ (𝐻) = {∅, {∅}, {{𝑎}}, {{𝑏}}, {{𝑎, 𝑏}}, {∅, {𝑎}}, . . . , {∅, {𝑎}, {𝑏}, {𝑎, 𝑏}}}.

4. Third-Level PowerSet (𝑛 = 3):


P3∗ (𝐻) = P (P2∗ (𝐻)).
The elements of P3∗ (𝐻) are all subsets of P2∗ (𝐻), forming a higher-order hierarchy.

This process illustrates how the 𝑛-th PowerSet recursively expands the original set 𝐻 into increasingly
complex hierarchical structures.
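A tiny Python sketch can make this recursion concrete; the helper names powerset and iterated_powerset are illustrative assumptions, with frozensets standing in for nested subsets.

from itertools import chain, combinations

def powerset(s):
    # All subsets of s, returned as frozensets so they can be nested further.
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

def iterated_powerset(V0, n):
    # P^0(V0) = V0; P^{k+1}(V0) = P(P^k(V0)).
    current = set(V0)
    for _ in range(n):
        current = powerset(current)
    return current

V0 = {"a", "b"}
print(len(iterated_powerset(V0, 1)))  # 4 subsets, matching P(H) above
print(len(iterated_powerset(V0, 2)))  # 16 = 2^4 elements at the second level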
Theorem 3.12. The 𝑛-th power set generalizes the power set.
Proof. This is immediate from the definition, since $\mathcal{P}_1^*(H) = \mathcal{P}(H)$, so the case $n = 1$ recovers the ordinary power set. □
Definition 3.13 (𝑛-SuperHyperGraph). (cf.[340]) Let 𝑉0 be a finite set of base vertices. Define the 𝑛-th iterated
power set of 𝑉0 recursively as:
 
\[
\mathcal{P}^0(V_0) = V_0, \qquad \mathcal{P}^{k+1}(V_0) = \mathcal{P}\!\left( \mathcal{P}^k(V_0) \right),
\]

where P ( 𝐴) denotes the power set of set 𝐴.


An 𝑛-SuperHyperGraph is an ordered pair 𝐻 = (𝑉, 𝐸), where:
• 𝑉 ⊆ P 𝑛 (𝑉0 ) is the set of supervertices, which are elements of the 𝑛-th power set of 𝑉0 .
• 𝐸 ⊆ P 𝑛 (𝑉0 ) is the set of superedges, also elements of P 𝑛 (𝑉0 ).
Each supervertex 𝑣 ∈ 𝑉 can be:
• A single vertex (𝑣 ∈ 𝑉0 ),
• A subset of 𝑉0 (𝑣 ⊆ 𝑉0 ),
• A subset of subsets of 𝑉0 , up to 𝑛 levels (𝑣 ∈ P 𝑛 (𝑉0 )),
• An indeterminate or fuzzy set(cf.[430]),
• The null set (𝑣 = ∅).
Each superedge 𝑒 ∈ 𝐸 connects supervertices, potentially at different hierarchical levels up to 𝑛.
Theorem 3.14. [126] An 𝑛-SuperHyperGraph can generalize a superhypergraph.
Proof. This follows directly from the definition. Refer to [126] as needed for further details. □
Corollary 3.15. An 𝑛-SuperHyperGraph generalizes both hypergraphs and classical graphs.
Proof. The result follows directly. □
Theorem 3.16. [126] An 𝑛-SuperHyperGraph has a structure based on the 𝑛-th PowerSet.

Proof. This follows directly from the definition. Refer to [126] as needed for further details. □
Definition 3.17 (Expanded Hypergraph for 𝑛-SuperHyperGraph). Given an 𝑛-SuperHyperGraph 𝐻 = (𝑉, 𝐸),
the Expanded Hypergraph 𝐻 ′ = (𝑉0 , 𝐸 ′ ) is defined as follows:
• The vertex set is 𝑉0 , the base vertices.
• For each superedge 𝑒 ∈ 𝐸, the corresponding hyperedge 𝑒 ′ ∈ 𝐸 ′ is defined by recursively expanding all
elements to base vertices:
\[
e' = \mathrm{Expand}(e) = \bigcup_{v \in e} \mathrm{Expand}(v),
\]
where the expansion function Expand is defined recursively:
\[
\mathrm{Expand}(v) =
\begin{cases}
\{v\}, & \text{if } v \in V_0,\\
\bigcup_{u \in v} \mathrm{Expand}(u), & \text{if } v \subseteq \mathcal{P}^k(V_0),\ k \le n.
\end{cases}
\]
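The recursive Expand function translates almost directly into code; the sketch below uses frozensets for nested supervertices and is an illustrative rendering, not an implementation prescribed by the text.

def expand(v, V0):
    # Recursively flatten a (possibly nested) supervertex to its base vertices.
    if v in V0:
        return {v}                                    # base vertex
    return set().union(*(expand(u, V0) for u in v))   # set of lower-level elements

V0 = {1, 2, 3, 4}
superedge = {frozenset({1, 2}), frozenset({frozenset({3}), 4})}  # mixed nesting levels
e_prime = set().union(*(expand(v, V0) for v in superedge))
print(sorted(e_prime))  # [1, 2, 3, 4]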

Theorem 3.18. The Expanded Hypergraph for an 𝑛-SuperHyperGraph generalizes the Expanded Hypergraph
of a SuperHyperGraph.
Proof. Let 𝐻 = (𝑉, 𝐸) be an 𝑛-SuperHyperGraph and 𝐻 ′ = (𝑉0 , 𝐸 ′ ) its Expanded Hypergraph, where 𝑉0
represents the base vertices. By definition, for each superedge 𝑒 ∈ 𝐸, the corresponding hyperedge 𝑒 ′ ∈ 𝐸 ′ is
obtained through recursive expansion of all elements in 𝑒 to base vertices using the function Expand.
If 𝐻 is a SuperHyperGraph (i.e., 𝑛 = 1), each supervertex 𝑣 ∈ 𝑒 is either a base vertex or a subset of base
vertices. Thus, the expansion process simplifies to:
\[
e' = \bigcup_{v \in e} v,
\]

which matches the definition of the Expanded Hypergraph for a SuperHyperGraph.


For 𝑛 > 1, the recursive nature of Expand allows the expansion of 𝑛-nested supervertices into base
vertices. This generalization accommodates the additional levels of nesting present in 𝑛-SuperHyperGraphs,
ensuring the resulting hyperedges 𝑒 ′ in 𝐻 ′ are consistent with the definition of an Expanded Hypergraph.
Hence, the definition of the Expanded Hypergraph for 𝑛-SuperHyperGraphs subsumes that for Super-
HyperGraphs, making it a generalization. □
We consider the following network.
Definition 3.19 (Network for 𝑛-SuperHyperGraph). Let 𝑋 ∈ R |𝑉0 | ×𝑑 be the feature matrix for base vertices 𝑉0 ,
where 𝑥𝑖 ∈ R𝑑 is the feature vector of vertex 𝑣 𝑖 ∈ 𝑉0 .

Define the incidence matrix $H' \in \mathbb{R}^{|V_0| \times |E'|}$ of the Expanded Hypergraph $H'$ by:
\[
H'_{ij} =
\begin{cases}
1, & \text{if } v_i \in e'_j,\\
0, & \text{otherwise.}
\end{cases}
\]
Define the diagonal vertex degree matrix $D_V \in \mathbb{R}^{|V_0| \times |V_0|}$ and hyperedge degree matrix $D_E \in \mathbb{R}^{|E'| \times |E'|}$ by:
\[
(D_V)_{ii} = d_V(v_i) = \sum_{j=1}^{|E'|} H'_{ij}\, w(e'_j),
\qquad
(D_E)_{jj} = d_E(e'_j) = \sum_{i=1}^{|V_0|} H'_{ij}.
\]
Here, $w(e'_j)$ is the weight assigned to hyperedge $e'_j$.
The convolution operation in the $n$-SHGNN is defined as:
\[
Y = \sigma\!\left( D_V^{-1/2} H' W D_E^{-1} H'^{\top} D_V^{-1/2} X \Theta \right),
\]

where:

• 𝑌 ∈ R |𝑉0 | ×𝑐 is the output feature matrix.
• $W \in \mathbb{R}^{|E'| \times |E'|}$ is the diagonal matrix of hyperedge weights.
• Θ ∈ R𝑑×𝑐 is the learnable weight matrix.
• 𝜎 is an activation function (e.g., ReLU[175]).
Theorem 3.20. The SuperHyperGraph Neural Network (SHGNN) is a special case of the 𝑛-SHGNN when 𝑛 = 1.

Proof. When 𝑛 = 1, the 𝑛-SuperHyperGraph reduces to a standard SuperHyperGraph:

𝑉 ⊆ P (𝑉0 ), 𝐸 ⊆ P (𝑉).

The expansion operation simplifies to:



\[
\mathrm{Expand}(v) =
\begin{cases}
\{v\}, & \text{if } v \in V_0,\\
v, & \text{if } v \subseteq V_0.
\end{cases}
\]

Thus, the definitions and algorithms of 𝑛-SHGNN coincide with those of SHGNN. Therefore, SHGNN is a
special case of 𝑛-SHGNN when 𝑛 = 1. □

As algorithms for n-SuperHyperGraphs, the following two algorithms are considered.


Algorithm 2: Expanded Hypergraph Construction
Input: An 𝑛-SuperHyperGraph 𝐻 = (𝑉, 𝐸)
Output: Expanded Hypergraph 𝐻 ′ = (𝑉0 , 𝐸 ′ )
1 Initialize 𝐸 ′ = ∅;
2 foreach superedge 𝑒 ∈ 𝐸 do
3 𝑒 ′ ← Expand(𝑒);
4 Add 𝑒 ′ to 𝐸 ′ ;
5 end
6 return 𝐻 ′ = (𝑉0 , 𝐸 ′ );

Algorithm 3: 𝑛-SHGNN Convolution Operation


Input:
• Feature matrix 𝑋 ∈ R |𝑉0 | ×𝑑 .
• Expanded Hypergraph 𝐻 ′ = (𝑉0 , 𝐸 ′ ).
• Hyperedge weight matrix 𝑊.
• Learnable weight matrix Θ.
• Activation function 𝜎.
Output: Output feature matrix 𝑌 ∈ R |𝑉0 | ×𝑐
1 Compute incidence matrix 𝐻 ′ ;
2 Compute degree matrices 𝐷 𝑉 and 𝐷 𝐸 ;
3 Normalize matrices: $\hat{H} = D_V^{-1/2} H' W D_E^{-1}$;
4 Compute $Y = \sigma\!\left( \hat{H} H'^{\top} D_V^{-1/2} X \Theta \right)$;
5 return 𝑌 ;

Theorem 3.21. The 𝑛-SHGNN convolution algorithm correctly computes the output feature matrix 𝑌 as per the
convolution operation defined for 𝑛-SuperHyperGraphs.

Proof. The algorithm follows the steps of the convolution operation:


1. Constructs the Expanded Hypergraph 𝐻 ′ by expanding superedges 𝑒 to base vertices 𝑉0 .
2. Computes the incidence matrix 𝐻 ′ accurately.

3. Calculates degree matrices 𝐷 𝑉 and 𝐷 𝐸 according to their definitions.
ˆ
4. Performs normalization and computes 𝐻.
 
5. Computes the convolution 𝑌 = 𝜎 𝐻𝐻ˆ ′⊤ 𝐷 −1/2 𝑋Θ .
𝑉

Each step adheres to the mathematical definitions, ensuring correctness. □


Theorem 3.22. Let 𝑁 = |𝑉0 |, 𝑀 = |𝐸 |, 𝑑 be the feature dimension, 𝑐 be the output dimension, and 𝑘 be
the maximum size of expanded hyperedges. The time complexity of the 𝑛-SHGNN convolution algorithm is
𝑂 (𝑀 𝑘 𝑛 + 𝑁 𝑑𝑐).
Proof. We examine the complexity of each step in the algorithm.

• Expanded Hypergraph Construction:


– For each superedge 𝑒, Expand(𝑒) may involve up to 𝑘 𝑛 operations.
– Total time: 𝑂 (𝑀 𝑘 𝑛 ).
• Incidence Matrix Computation:
– Time proportional to the number of non-zero entries: 𝑂 (𝑁 𝑘 𝑛 ).
• Degree Matrices and Normalization:
– Time: 𝑂 (𝑁 + |𝐸 ′ |).
• Convolution Computation:
– Matrix multiplications involving sparse matrices.
– Time: 𝑂 (𝑁 𝑑𝑐).

Total time complexity is dominated by 𝑂 (𝑀 𝑘 𝑛 + 𝑁 𝑑𝑐). □

Theorem 3.23. The space complexity of the 𝑛-SHGNN convolution algorithm is 𝑂 (𝑁 𝑘 𝑛 + 𝑁 𝑑 + 𝑁𝑐).

Proof. We examine the complexity of each step in the algorithm.

• Incidence Matrix 𝐻 ′ :
– Space: 𝑂 (𝑁 𝑘 𝑛 ).
• Degree Matrices:
– Space: 𝑂 (𝑁 + |𝐸 ′ |).
• Feature Matrices:
– Input 𝑋: 𝑂 (𝑁 𝑑).
– Output 𝑌 : 𝑂 (𝑁𝑐).

Total space complexity is 𝑂 (𝑁 𝑘 𝑛 + 𝑁 𝑑 + 𝑁𝑐). □

3.4 Dynamic Superhypergraph Neural Network
In this subsection, we define the Dynamic Superhypergraph Neural Network, building upon the concept
of the Dynamic Hypergraph Neural Network [204]. A Dynamic Hypergraph Neural Network models evolving
relationships within hypergraphs, learning from time-varying node and hyperedge interactions to facilitate dy-
namic data analysis (cf. [172, 210, 240, 395, 400, 454]). The Dynamic Hypergraph Neural Network can also be
viewed as an extension of dynamic graph neural networks[118,159,237,361] to the domain of hypergraphs. The
definitions and theorems of related concepts are provided below.
Definition 3.24 (Dynamic Hypergraph). [204] A Dynamic Hypergraph at layer 𝑙 is represented as 𝐻𝑙 = (𝑉, 𝐸 𝑙 ),
where:
• 𝑉 is the set of vertices corresponding to data samples.
• 𝐸 𝑙 is the set of hyperedges at layer 𝑙, dynamically constructed based on the feature embeddings 𝑋𝑙 of the
vertices at layer 𝑙.
Hyperedges in 𝐸 𝑙 are constructed using clustering or nearest-neighbor methods to capture local and global
relationships among vertices.
Definition 3.25 (Dynamic Hypergraph Neural Network (DHGNN)). [204] A Dynamic Hypergraph Neural Net-
work (DHGNN) is a neural network architecture where each layer 𝑙 consists of:
• Dynamic Hypergraph Construction (DHG): Updates the hypergraph 𝐻𝑙 = (𝑉, 𝐸 𝑙 ) based on the feature
embeddings 𝑋𝑙 from the previous layer.
• Hypergraph Convolution (HGC): Performs feature aggregation from vertices to hyperedges and vice versa
to produce updated embeddings 𝑋𝑙+1 .
The output of the 𝑙-th layer is:
𝑋𝑙+1 = 𝜎 (𝑊𝑙 𝑋𝑙 + HGC(𝐻𝑙 , 𝑋𝑙 )) ,
where 𝑊𝑙 is a learnable weight matrix and 𝜎 is an activation function.

Definition 3.26. A Dynamic SuperHypergraph is a sequence of $n$-SuperHyperGraphs $\{H^{(l)} = (V^{(l)}, E^{(l)})\}_{l=0}^{L}$,
where each layer 𝑙 represents a SuperHyperGraph at a specific time or iteration, and:

• 𝑉 (𝑙) ⊆ P 𝑛 (𝑉0 ) is the set of supervertices at layer 𝑙, where 𝑉0 is the base set of vertices, and P 𝑛 (𝑉0 ) is
the 𝑛-th iterated power set of 𝑉0 .

• 𝐸 (𝑙) ⊆ P 𝑛 (𝑉0 ) is the set of superedges at layer 𝑙.

The evolution of the SuperHyperGraph from layer 𝑙 to 𝑙 + 1 may depend on the features or embeddings
of the supervertices at layer 𝑙.

Theorem 3.27. A Dynamic SuperHypergraph $\{H^{(l)} = (V^{(l)}, E^{(l)})\}_{l=0}^{L}$ generalizes the concept of a SuperHyperGraph $H = (V, E)$, as:
1. Each static layer 𝐻 (𝑙) is a valid SuperHyperGraph.
2. The sequence of layers allows for dynamic evolution, which extends the static structure of a single Super-
HyperGraph to include temporal or iterative dynamics.
Proof. We prove this theorem in two steps:
1. Static Layer Correspondence: By definition, each layer 𝐻 (𝑙) = (𝑉 (𝑙) , 𝐸 (𝑙) ) satisfies the properties of
a SuperHyperGraph:

• 𝑉 (𝑙) ⊆ P 𝑛 (𝑉0 ), ensuring that the vertices are subsets of the 𝑛-th iterated power set of the base vertex set
𝑉0 .
• 𝐸 (𝑙) ⊆ P 𝑛 (𝑉0 ), ensuring that the edges connect subsets of 𝑉 (𝑙) .

Thus, each individual 𝐻 (𝑙) is a valid SuperHyperGraph.
2. Dynamic Evolution: In a Dynamic SuperHypergraph, the evolution from layer 𝑙 to 𝑙 + 1 is governed
by transformations applied to the supervertices or superedges. These transformations can be defined using
feature propagation, embedding updates, or external conditions. This dynamic evolution introduces a temporal or
iterative dimension to the SuperHyperGraph structure, which cannot be captured by a static SuperHyperGraph.
A SuperHyperGraph 𝐻 = (𝑉, 𝐸) can be viewed as a special case of a Dynamic SuperHypergraph where
all layers 𝐻 (𝑙) are identical for 𝑙 = 0, . . . , 𝐿, and no evolution occurs between layers.
The Dynamic SuperHypergraph {𝐻 (𝑙) } generalizes the static SuperHyperGraph 𝐻 by adding a layer-
wise temporal or iterative structure. □

Theorem 3.28. A Dynamic SuperHypergraph generalizes a Dynamic Hypergraph.


Proof. A Dynamic Hypergraph is a special case of a Dynamic SuperHypergraph when 𝑛 = 0 or when the
supervertices are simply the base vertices 𝑉0 .
In a Dynamic Hypergraph, at each layer 𝑙, we have a hypergraph 𝐻 (𝑙) = (𝑉, 𝐸 (𝑙) ), where 𝑉 is a fixed
set of vertices, and 𝐸 (𝑙) is the set of hyperedges at layer 𝑙.
In a Dynamic SuperHypergraph, when we set 𝑛 = 0 and 𝑉 (𝑙) = 𝑉0 for all 𝑙, the supervertices reduce to
the base vertices, and the structure becomes a sequence of hypergraphs {𝐻 (𝑙) = (𝑉0 , 𝐸 (𝑙) )}, which is exactly a
Dynamic Hypergraph.
Therefore, Dynamic SuperHypergraphs generalize Dynamic Hypergraphs. □

Definition 3.29 (Dynamic SuperHypergraph Neural Network (DSHGNN)). A Dynamic SuperHypergraph Neu-
ral Network (DSHGNN) is a neural network where at each layer 𝑙, a new SuperHyperGraph 𝐻 (𝑙) = (𝑉 (𝑙) , 𝐸 (𝑙) )
is constructed based on the feature embeddings 𝑋 (𝑙) at that layer. The DSHGNN performs convolution opera-
tions on these dynamically constructed superhypergraphs.
Specifically, the output of layer 𝑙 is given by:
 
\[
X^{(l+1)} = \sigma\!\left( \left(D_V^{(l)}\right)^{-1/2} H'^{(l)} W^{(l)} \left(D_E^{(l)}\right)^{-1} H'^{(l)\top} \left(D_V^{(l)}\right)^{-1/2} X^{(l)} \Theta^{(l)} \right),
\]

where:
• 𝐻 (𝑙) = (𝑉 (𝑙) , 𝐸 (𝑙) ) is the SuperHyperGraph at layer 𝑙.

• 𝐻 ′(𝑙) is the incidence matrix of the Expanded Hypergraph 𝐻 ′(𝑙) = (𝑉0 , 𝐸 ′(𝑙) ).
• $D_V^{(l)}$ and $D_E^{(l)}$ are the degree matrices at layer $l$.

• 𝑊 (𝑙) is the diagonal hyperedge weight matrix at layer 𝑙.

• Θ (𝑙) is the learnable weight matrix at layer 𝑙.


• 𝜎 is an activation function.
Theorem 3.30. A Dynamic SuperHypergraph Neural Network has the structure of a Dynamic SuperHyper-
graph.

Proof. In a Dynamic SuperHypergraph Neural Network, at each layer 𝑙, a new SuperHyperGraph 𝐻 (𝑙) =
(𝑉 (𝑙) , 𝐸 (𝑙) ) is constructed based on the embeddings 𝑋 (𝑙) . The network updates the embeddings 𝑋 (𝑙) by per-
forming operations that involve the structure of 𝐻 (𝑙) .
Since the sequence of superhypergraphs {𝐻 (𝑙) } evolves over the layers of the network, and each 𝐻 (𝑙) is
a SuperHyperGraph, the network inherently operates on a Dynamic SuperHypergraph.
Therefore, the Dynamic SuperHypergraph Neural Network has the structure of a Dynamic SuperHyper-
graph. □

We present the algorithm for dynamically constructing the superhypergraph at each layer based on the
current feature embeddings.

Algorithm 4: Dynamic SuperHypergraph Construction (DSHC) at Layer 𝑙
Input:
• Current feature embeddings 𝑋 (𝑙) ∈ R |𝑉0 | ×𝑑 .
• Parameters: number of supervertices 𝑠, supervertex size 𝑘, number of superedges 𝑡, superedge size 𝑚.

Output: Dynamic SuperHyperGraph 𝐻 (𝑙) = (𝑉 (𝑙) , 𝐸 (𝑙) ).


1 1. Construct Supervertices;
2 Perform clustering (e.g., 𝑘-means) on 𝑋 (𝑙) to obtain 𝑠 clusters;
3 For each cluster 𝑐 𝑖 , form a supervertex 𝑣 𝑖 = {𝑣 𝑗 ∈ 𝑉0 | 𝑣 𝑗 belongs to 𝑐 𝑖 };
4 Set 𝑉 (𝑙) = {𝑣 1 , 𝑣 2 , . . . , 𝑣 𝑠 };
5 2. Construct Superedges;
6 Perform higher-level clustering or grouping on supervertices to form 𝑡 superedges;
7 For each group 𝑔𝑖 , form a superedge 𝑒 𝑖 = {𝑣 𝑗 ∈ 𝑉 (𝑙) | 𝑣 𝑗 belongs to 𝑔𝑖 };
8 Set 𝐸 (𝑙) = {𝑒 1 , 𝑒 2 , . . . , 𝑒 𝑡 };
9 return 𝐻 (𝑙) = (𝑉 (𝑙) , 𝐸 (𝑙) );
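A compact sketch of this construction is given below, using k-means from scikit-learn for both clustering levels; the two-stage scheme and all names are illustrative assumptions about one possible realization of the DSHC step, not the authors' prescribed procedure.

import numpy as np
from sklearn.cluster import KMeans

def dynamic_shg_construction(X, s, t, random_state=0):
    # X: (n, d) current embeddings; s: number of supervertices; t: number of superedges.
    # 1. Cluster base vertices into s supervertices.
    vertex_labels = KMeans(n_clusters=s, n_init=10, random_state=random_state).fit_predict(X)
    supervertices = [np.where(vertex_labels == i)[0].tolist() for i in range(s)]
    # 2. Cluster supervertex centroids into t superedges (higher-level grouping).
    centroids = np.stack([X[idx].mean(axis=0) for idx in supervertices])
    edge_labels = KMeans(n_clusters=t, n_init=10, random_state=random_state).fit_predict(centroids)
    superedges = [[supervertices[j] for j in np.where(edge_labels == g)[0]] for g in range(t)]
    return supervertices, superedges

X = np.random.rand(20, 8)                      # toy embeddings for 20 base vertices
V_l, E_l = dynamic_shg_construction(X, s=5, t=2)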

Theorem 3.31. The DSHGNN algorithm computes the feature embeddings 𝑋 (𝑙+1) at each layer 𝑙 correctly
according to the convolution operation defined for the dynamically constructed superhypergraph 𝐻 (𝑙) .
Proof. The DSHGNN algorithm follows these steps:

1. Dynamic SuperHypergraph Construction: The algorithm constructs 𝐻 (𝑙) based on 𝑋 (𝑙) , ensuring that the
supervertices 𝑉 (𝑙) and superedges 𝐸 (𝑙) capture the relationships inherent in the current feature embed-
dings.

2. Expanded Hypergraph Construction: The Expanded Hypergraph 𝐻 ′(𝑙) accurately reflects the connections
between base vertices 𝑉0 through the supervertices and superedges in 𝐻 (𝑙) .
3. Incidence Matrix and Degree Matrices: The incidence matrix $H'^{(l)}$ and the degree matrices $D_V^{(l)}$ and $D_E^{(l)}$ are computed correctly as per the definitions.
4. Convolution Operation: The convolution operation is performed exactly as defined, applying the appro-
priate normalization and combining the feature embeddings with the learnable parameters Θ (𝑙) .
5. Activation Function: The non-linear activation 𝜎 is applied to introduce non-linearity.

Thus, the algorithm correctly implements the DSHGNN convolution operation, ensuring that 𝑋 (𝑙+1) is
computed accurately at each layer. □

Theorem 3.32. Let 𝑛 = |𝑉0 | be the number of base vertices, 𝑠 be the number of supervertices, 𝑡 be the number
of superedges, 𝑑 be the feature dimension, and 𝑐 be the output dimension. The time complexity of the DSHGNN
algorithm at each layer is:
𝑂 (𝑛𝑑𝑘 + 𝑠𝑑𝑘 + 𝑡𝑠𝑘 + 𝑛𝑐) ,
where 𝑘 is the average size of supervertices and superedges.

Proof. We analyze the time complexity step by step.

Dynamic SuperHypergraph Construction


• Clustering to form supervertices: 𝑂 (𝑛𝑑) (e.g., 𝑘-means clustering).
• Forming superedges from supervertices: 𝑂 (𝑠𝑑) (clustering supervertices).

Expanded Hypergraph Construction
• For each superedge $e$, forming $e' = \bigcup_{v \in e} v$: $O(k^2)$ per superedge, assuming $k$ is the average size of $v$ and $e$.
• Total time: 𝑂 (𝑡𝑠𝑘).

Convolution Operation
• Multiplications involving sparse matrices 𝐻 ′(𝑙) : 𝑂 (nnz(𝐻 ′(𝑙) )𝑑).
• Since nnz(𝐻 ′(𝑙) ) ≈ 𝑛𝑘, total time: 𝑂 (𝑛𝑑𝑘).

Total Time Complexity Combining the above:


𝑂 (𝑛𝑑 + 𝑠𝑑 + 𝑡𝑠𝑘 + 𝑛𝑑𝑘 + 𝑛𝑐) = 𝑂 (𝑛𝑑𝑘 + 𝑠𝑑𝑘 + 𝑡𝑠𝑘 + 𝑛𝑐) .
Assuming 𝑠, 𝑡, and 𝑘 are much smaller than 𝑛, the dominant term is 𝑂 (𝑛𝑑𝑘). □
Theorem 3.33. The space complexity of the DSHGNN algorithm at each layer is:
 
\[
O\!\left( nd + sd + \mathrm{nnz}(H'^{(l)}) + dc \right),
\]

where nnz(𝐻 ′(𝑙) ) is the number of non-zero entries in the incidence matrix 𝐻 ′(𝑙) .
Proof. We account for the space used:

• Feature embeddings 𝑋 (𝑙) and 𝑋 (𝑙+1) : 𝑂 (𝑛𝑑).


• Supervertices and their embeddings: 𝑂 (𝑠𝑑).
• Incidence matrix 𝐻 ′(𝑙) : 𝑂 (nnz(𝐻 ′(𝑙) )).
• Weight matrices Θ (𝑙) : 𝑂 (𝑑𝑐).
Total space complexity:
\[
O\!\left( nd + sd + \mathrm{nnz}(H'^{(l)}) + dc \right). \qquad \square
\]

Theorem 3.34. The Dynamic Hypergraph Neural Network (DHGNN) is a special case of the Dynamic Super-
Hypergraph Neural Network (DSHGNN). Specifically, when all supervertices in DSHGNN are singleton subsets
of 𝑉0 (i.e., ∀𝑣 ∈ 𝑉 (𝑙) , 𝑣 = {𝑣 𝑖 } for some 𝑣 𝑖 ∈ 𝑉0 ), the DSHGNN reduces to the DHGNN.
Proof. When all supervertices are singletons:

𝑉 (𝑙) = {{𝑣 1 }, {𝑣 2 }, . . . , {𝑣 𝑛 }}.

Each supervertex corresponds directly to a base vertex in 𝑉0 . The superedges 𝐸 (𝑙) then connect these
singleton supervertices, effectively becoming hyperedges over 𝑉0 .
The Expanded Hypergraph 𝐻 ′(𝑙) has hyperedges 𝑒 ′ formed as:
\[
e' = \bigcup_{v \in e} v = \bigcup_{v \in e} \{v_i\} = \{v_i \mid \{v_i\} \in e\}.
\]

Thus, the Expanded Hypergraph 𝐻 ′(𝑙) is identical to the hypergraph used in DHGNN at layer 𝑙.
The convolution operation in DSHGNN becomes:
 
\[
X^{(l+1)} = \sigma\!\left( \left(D_V^{(l)}\right)^{-1/2} H'^{(l)} W^{(l)} \left(D_E^{(l)}\right)^{-1} H'^{(l)\top} \left(D_V^{(l)}\right)^{-1/2} X^{(l)} \Theta^{(l)} \right),
\]

which matches the convolution operation in DHGNN.


Therefore, DSHGNN reduces to DHGNN when supervertices are singletons, proving that DSHGNN
generalizes DHGNN. □

3.5 Multi-Graph Neural Networks and Their Generalization
Multi-Graph Neural Networks have been proposed in recent years[421]. However, we demonstrate that
they can be mathematically generalized within the framework of n-SuperHyperGraph Neural Networks. Below,
we present the relevant definitions and theorems, including related concepts.
Definition 3.35. (cf.[57]) A multi-graph is a generalization of a graph that allows multiple edges, also called
parallel edges, between the same pair of vertices. Formally, a multi-graph 𝐺 is defined as:

𝐺 = (𝑉, 𝐸, 𝜑),
where:
• 𝑉 is a finite set of vertices (nodes).
• 𝐸 is a finite set of edges.
• 𝜑 : 𝐸 → {{𝑢, 𝑣} | 𝑢, 𝑣 ∈ 𝑉 } is a mapping that associates each edge 𝑒 ∈ 𝐸 with an unordered pair of
vertices 𝑢, 𝑣 ∈ 𝑉. For directed multi-graphs, 𝜑(𝑒) maps to ordered pairs (𝑢, 𝑣).

Properties
• Parallel Edges: Unlike a simple graph, a multi-graph allows multiple edges between the same pair of
vertices.
• Loops: Depending on the context, a multi-graph may also allow edges that connect a vertex to itself,
called loops.
• Representation: Each edge 𝑒 is distinguished by its unique identity in 𝐸, even if it connects the same
vertices as another edge.
Theorem 3.36. An 𝑛-SuperHyperGraph generalizes a multi-graph.
Proof. To show that an 𝑛-SuperHyperGraph can generalize a multi-graph, we construct a mapping from a multi-
graph 𝐺 = (𝑉, 𝐸, 𝜑) to an 𝑛-SuperHyperGraph 𝐻 = (𝑉 ′ , 𝐸 ′ ) and demonstrate that the operations and represen-
tations in 𝐺 can be captured within 𝐻.
In the multi-graph 𝐺, the vertex set is 𝑉. In the 𝑛-SuperHyperGraph 𝐻, let the base vertex set 𝑉0
correspond directly to 𝑉. Thus, each vertex 𝑣 ∈ 𝑉 in 𝐺 is represented as a supervertex 𝑣 ∈ 𝑉0 ⊆ P 𝑛 (𝑉0 ) in 𝐻.
Each edge 𝑒 ∈ 𝐸 in the multi-graph 𝐺 is mapped to a superedge 𝑒 ′ ∈ 𝐸 ′ in 𝐻. Specifically:

𝑒 ′ = {𝑢, 𝑣}, where 𝜑(𝑒) = {𝑢, 𝑣}, and 𝑢, 𝑣 ∈ 𝑉0 .

For parallel edges, each edge 𝑒 in 𝐺 is assigned a unique identity and mapped to a distinct superedge in 𝐸 ′ .
Thus, 𝐸 ′ may contain multiple superedges connecting the same pair of vertices, replicating the parallel edge
property of a multi-graph.
If 𝐺 allows loops (edges connecting a vertex to itself), such edges 𝑒 ∈ 𝐸 can be mapped to superedges
𝑒 ′ = {𝑣, 𝑣} in 𝐻. This is valid in the 𝑛-SuperHyperGraph framework since 𝑣 ∈ 𝑉0 .
For 𝑛 > 1, the 𝑛-SuperHyperGraph structure provides additional hierarchical levels that are not utilized
in the basic mapping of a multi-graph. Thus, a multi-graph is a special case of an 𝑛-SuperHyperGraph where
𝑛 ≥ 1 and all supervertices and superedges reside at the base level (P 0 (𝑉0 ) = 𝑉0 ).
The construction above demonstrates that the vertex and edge structures of any multi-graph 𝐺 can be
faithfully represented within an 𝑛-SuperHyperGraph 𝐻. Additionally, the 𝑛-SuperHyperGraph framework sup-
ports the generalization to hierarchical and nested structures beyond what is possible in a multi-graph. Therefore,
𝑛-SuperHyperGraphs generalize multi-graphs. □
Definition 3.37. [421] A Multi-Graph Neural Network (MGNN) is an extension of Graph Neural Networks
(GNNs) designed to operate on multi-graphs. In a multi-graph, multiple edges (possibly of different types) are
allowed between the same pair of nodes. This structure enables the modeling of complex relationships in data
where interactions can occur through various channels or modalities.
Formally, let 𝐺 = (𝑉, 𝐸, 𝑇) be a multi-graph, where:
• 𝑉 is the set of nodes.
• 𝐸 ⊆ 𝑉 × 𝑉 × 𝑇 is the set of edges.

• 𝑇 is the set of edge types.
Each edge 𝑒 = (𝑢, 𝑣, 𝑡) ∈ 𝐸 represents an interaction of type 𝑡 ∈ 𝑇 between nodes 𝑢 and 𝑣.
In an MGNN, the message passing and aggregation functions are adapted to handle multiple edge types.
The node representation update typically involves aggregating messages over all edge types:

\[
\mathbf{h}_v^{(t+1)} = \phi\!\left( \mathbf{h}_v^{(t)},\ \bigoplus_{t' \in T} \bigoplus_{u \in \mathcal{N}_v^{t'}} \psi^{t'}\!\left( \mathbf{h}_u^{(t)}, \mathbf{h}_v^{(t)}, \mathbf{e}_{uv}^{t'} \right) \right),
\]
where:
• $\mathbf{h}_v^{(t)}$ is the representation of node $v$ at layer $t$.
• $\mathcal{N}_v^{t'}$ is the set of neighbors of node $v$ connected via edges of type $t'$.
• $\psi^{t'}$ is the message function for edge type $t'$.
• $\phi$ is the node update function.
• $\bigoplus$ denotes an aggregation operator (e.g., sum, mean, max).
• $\mathbf{e}_{uv}^{t'}$ is the feature of edge $(u, v, t')$.
Theorem 3.38. An 𝑛-SuperHyperGraph Neural Network (n-SHGNN) can generalize a Multi-Graph Neural
Network (MGNN).
Proof. To prove this theorem, we need to demonstrate that any MGNN can be represented as a special case of
an n-SHGNN for some appropriate 𝑛.

Mapping the Multi-Graph to an 𝑛-SuperHyperGraph Let 𝐺 = (𝑉, 𝐸, 𝑇) be a multi-graph, where multiple


edges of different types can exist between the same pair of nodes. We aim to construct an 𝑛-SuperHyperGraph
𝐻 = (𝑉 ′ , 𝐸 ′ ) such that the MGNN operations on 𝐺 can be emulated by an n-SHGNN operating on 𝐻.

Construction of the 𝑛-SuperHyperGraph


• Base Vertices: Let 𝑉0 = 𝑉, the original set of nodes in the multi-graph.
• Supervertices: For each edge type 𝑡 ∈ 𝑇, define a supervertex 𝑣 𝑡 at the first level of the power set (𝑛 = 1):
𝑣 𝑡 = {𝑣 ∈ 𝑉0 | 𝑣 participates in at least one edge of type 𝑡}.

• Superedges: For each edge 𝑒 = (𝑢, 𝑣, 𝑡) ∈ 𝐸, define a superedge 𝑒 ′ connecting the corresponding nodes
and the supervertex 𝑣 𝑡 :
𝑒 ′ = {𝑢, 𝑣, 𝑣 𝑡 }.
By constructing supervertices corresponding to each edge type and connecting them via superedges
to the nodes involved in edges of that type, we encapsulate the multi-graph’s multiple edge types within the
𝑛-SuperHyperGraph structure.
In the n-SHGNN, message passing can proceed as follows:
• Nodes exchange messages via superedges, which now represent the multi-graph’s edges along with their
types.
• The supervertex 𝑣 𝑡 serves as a mediator that allows nodes connected by edges of type 𝑡 to share informa-
tion specific to that edge type.
The MGNN’s handling of multiple edge types through type-specific message functions 𝜓 𝑡 can be repli-
cated in the n-SHGNN by defining superedges and supervertices that correspond to these types. The hierarchi-
cal structure of the 𝑛-SuperHyperGraph allows for the encapsulation of edge type information within the graph
topology.
For more complex multi-graphs or for edge types that have hierarchical relationships, a higher 𝑛 can be
chosen to capture the necessary levels of nesting. However, for standard MGNNs, setting 𝑛 = 1 suffices.
Since we can construct an 𝑛-SuperHyperGraph 𝐻 such that the MGNN operations on 𝐺 are equivalent
to n-SHGNN operations on 𝐻, it follows that an n-SHGNN can generalize an MGNN. □
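The construction used in this proof can be stated in a few lines of Python; the function multigraph_to_shg and the toy edge types below are illustrative assumptions, chosen only to show how typed (possibly parallel) edges become superedges of the form {u, v, v_t}.

def multigraph_to_shg(V, typed_edges):
    # typed_edges: iterable of (u, v, t) triples from the multi-graph.
    types = {t for (_, _, t) in typed_edges}
    # One supervertex per edge type: all vertices incident to an edge of that type.
    type_supervertex = {
        t: frozenset({x for (u, v, tt) in typed_edges if tt == t for x in (u, v)})
        for t in types
    }
    supervertices = set(V) | set(type_supervertex.values())
    superedges = [frozenset({u, v, type_supervertex[t]}) for (u, v, t) in typed_edges]
    return supervertices, superedges

V = {"a", "b", "c"}
E = [("a", "b", "cites"), ("a", "b", "coauthors"), ("b", "c", "cites")]
SV, SE = multigraph_to_shg(V, E)
print(len(SE))  # 3 superedges, one per typed edge, so parallel edges are preserved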

3.6 Revisiting Definitions for SHGNN
In this subsection, we revisit several definitions relevant to the SuperHyperGraph Neural Network (SHGNN).
Specifically, we briefly examine concepts such as the SuperHyperGraph Laplacian, SuperHyperGraph Convolu-
tion, SuperHyperGraph Clustering, and SuperHyperGraph Degree Centrality.
3.6.1 SuperHyperGraph Laplacian
The SuperHyperGraph Laplacian can be specifically defined as follows. We prove that it generalizes the
HyperGraph Laplacian. For clarity, the Graph Laplacian is a matrix representing a graph’s structure, used to
analyze connectivity and spectral properties (cf.[282, 438]).
Definition 3.39 (HyperGraph Laplacian). (cf.[75, 137]) Define the incidence matrix 𝐻 ∈ R𝑛×𝑚 of the hyper-
graph H by:

\[
H_{ij} =
\begin{cases}
1, & \text{if } v_i \in e_j,\\
0, & \text{otherwise.}
\end{cases}
\]
Define the diagonal vertex degree matrix $D_v \in \mathbb{R}^{n \times n}$ with entries:
\[
(D_v)_{ii} = d_v(v_i) = \sum_{j=1}^{m} H_{ij}\, w(e_j),
\]
where $w(e_j)$ is the weight assigned to hyperedge $e_j$.
Define the diagonal hyperedge degree matrix $D_e \in \mathbb{R}^{m \times m}$ with entries:
\[
(D_e)_{jj} = d_e(e_j) = \sum_{i=1}^{n} H_{ij}.
\]
The hypergraph Laplacian $L \in \mathbb{R}^{n \times n}$ is defined as:
\[
L = I - D_v^{-1/2} H W D_e^{-1} H^{\top} D_v^{-1/2},
\]
where 𝑊 ∈ R𝑚×𝑚 is the diagonal matrix of hyperedge weights 𝑤(𝑒 𝑗 ), and 𝐼 is the identity matrix.
Definition 3.40 (SuperHyperGraph Laplacian). To define the Laplacian for a SuperHyperGraph, we construct
the Expanded Hypergraph 𝐻 ′ = (𝑉0 , 𝐸 ′ ):
• The vertex set is 𝑉0 .
• For each superedge 𝑒 ∈ 𝐸, the corresponding hyperedge 𝑒 ′ ∈ 𝐸 ′ is:
\[
e' = \bigcup_{v \in e} v.
\]
Define the incidence matrix $H' \in \mathbb{R}^{|V_0| \times |E'|}$:
\[
H'_{ij} =
\begin{cases}
1, & \text{if } v_i \in e'_j,\\
0, & \text{otherwise.}
\end{cases}
\]
Define the diagonal vertex degree matrix $D_V \in \mathbb{R}^{|V_0| \times |V_0|}$:
\[
(D_V)_{ii} = d_V(v_i) = \sum_{j=1}^{|E'|} H'_{ij}\, w(e'_j).
\]
Define the diagonal hyperedge degree matrix $D_E \in \mathbb{R}^{|E'| \times |E'|}$:
\[
(D_E)_{jj} = d_E(e'_j) = \sum_{i=1}^{|V_0|} H'_{ij}.
\]
The SuperHyperGraph Laplacian $L \in \mathbb{R}^{|V_0| \times |V_0|}$ is defined as:
\[
L = I - D_V^{-1/2} H' W D_E^{-1} H'^{\top} D_V^{-1/2},
\]
where 𝑊 is the diagonal matrix of hyperedge weights 𝑤(𝑒 ′𝑗 ).
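A dense NumPy sketch of this Laplacian is shown below on a toy expanded incidence matrix; the function name and the example data are assumptions made only for illustration.

import numpy as np

def shg_laplacian(H, w):
    # H: (n, m) incidence matrix of the Expanded Hypergraph; w: (m,) hyperedge weights.
    d_V = H @ w                              # vertex degrees
    d_E = H.sum(axis=0)                      # hyperedge degrees
    Dv_inv_sqrt = np.diag(np.where(d_V > 0, 1.0 / np.sqrt(np.maximum(d_V, 1e-12)), 0.0))
    De_inv = np.diag(np.where(d_E > 0, 1.0 / np.maximum(d_E, 1e-12), 0.0))
    W = np.diag(w)
    return np.eye(H.shape[0]) - Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt

# Toy example: 4 base vertices, expanded hyperedges {v1, v2, v3} and {v3, v4}.
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
L = shg_laplacian(H, np.array([1.0, 2.0]))
print(np.round(L, 3))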

Theorem 3.41. The SuperHyperGraph Laplacian 𝐿 generalizes the hypergraph Laplacian. Specifically, when
all supervertices are singleton sets (i.e., 𝑉 = 𝑉0 ), the SuperHyperGraph Laplacian reduces to the hypergraph
Laplacian.

Proof. When 𝑉 = 𝑉0 , each supervertex 𝑣 ∈ 𝑉 is a singleton set {𝑣}. Consequently, each superedge 𝑒 ⊆ 𝑉
corresponds directly to a hyperedge in the hypergraph H = (𝑉, 𝐸).
In the Expanded Hypergraph 𝐻 ′ , each hyperedge 𝑒 ′ is:
\[
e' = \bigcup_{v \in e} v = \bigcup_{v \in e} \{v\} = e.
\]
Thus, 𝐻 ′ coincides with the incidence matrix 𝐻 of the hypergraph. The degree matrices 𝐷 𝑉 and 𝐷 𝐸
become 𝐷 𝑣 and 𝐷 𝑒 of the hypergraph.
Therefore, the SuperHyperGraph Laplacian 𝐿 reduces to:
\[
L = I - D_v^{-1/2} H W D_e^{-1} H^{\top} D_v^{-1/2},
\]
which is the hypergraph Laplacian. Hence, the SuperHyperGraph Laplacian generalizes the hypergraph
Laplacian. □
3.6.2 SuperHyperGraph Convolution
Define SuperHyperGraph Convolution and examine its relationship with HyperGraph Convolution. For
clarity, Graph Convolution is an operation aggregating node features and their neighbors’ information, capturing
graph structure for learning (cf.[390, 444, 455]).
Definition 3.42 (HyperGraph Convolution). (cf.[38, 251]) In Hypergraph Neural Networks, the convolution
operation aggregates information from hyperedges to vertices.
Given:

• Feature matrix 𝑋 ∈ R𝑛×𝑑 , where 𝑥𝑖 is the feature vector of vertex 𝑣 𝑖 .


• Learnable weight matrix Θ ∈ R𝑑×𝑐 .

The hypergraph convolution is defined as:


 
\[
Y = \sigma\!\left( D_v^{-1/2} H W D_e^{-1} H^{\top} D_v^{-1/2} X \Theta \right),
\]

where 𝜎 is an activation function (e.g., ReLU).

Definition 3.43. Let 𝑋 ∈ R |𝑉0 | ×𝑑 be the feature matrix for the base vertices 𝑉0 , where each row 𝑥𝑖 corresponds
to the feature vector of vertex 𝑣 𝑖 ∈ 𝑉0 . The convolution operation is defined as:
 
\[
Y = \sigma\!\left( D_V^{-1/2} H' W D_E^{-1} H'^{\top} D_V^{-1/2} X \Theta \right),
\]

where:
• 𝜎 is an activation function (e.g., ReLU).
• Θ ∈ R𝑑×𝑐 is a learnable weight matrix.
• Other matrices are as previously defined.
Theorem 3.44. The SuperHyperGraph convolution operation generalizes the hypergraph convolution. When
𝑉 = 𝑉0 , the SuperHyperGraph convolution reduces to the hypergraph convolution.

Proof. With 𝑉 = 𝑉0 and 𝐻 ′ = 𝐻, the convolution formula becomes:


 
\[
Y = \sigma\!\left( D_v^{-1/2} H W D_e^{-1} H^{\top} D_v^{-1/2} X \Theta \right),
\]

which is the hypergraph convolution formula. Thus, the SuperHyperGraph convolution generalizes the
hypergraph convolution. □

3.6.3 SuperHyperGraph Clustering
Define SuperHyperGraph Clustering and examine its relationship with HyperGraph Clustering[67, 138,
227, 230]. Note that graph clustering partitions a graph into groups of nodes (clusters) such that nodes within
the same cluster are highly connected [381, 391, 424].
Definition 3.45 (Graph Clustering). (cf.[243, 280]) Let 𝐺 = (𝑉, 𝐸, 𝑤) be a weighted graph, where:
• 𝑉 is the set of vertices,
• 𝐸 ⊆ 𝑉 × 𝑉 is the set of edges,
• 𝑤 : 𝐸 → R+ assigns a positive weight to each edge.
A clustering of the graph 𝐺 is a partition of the vertex set 𝑉 into 𝑘 disjoint subsets:

𝐶 = {𝐶1 , 𝐶2 , . . . , 𝐶 𝑘 },

such that:
Ð𝑘
1. 𝑖=1 𝐶𝑖 = 𝑉,
2. 𝐶𝑖 ∩ 𝐶 𝑗 = ∅ for 𝑖 ≠ 𝑗.
Each subset 𝐶𝑖 is called a cluster. The quality of the clustering is often measured by evaluating the edge weights
within clusters (intra-cluster similarity) and between clusters (inter-cluster dissimilarity).
Example 3.46 (Clustering a Simple Graph). Consider the graph 𝐺 = (𝑉, 𝐸, 𝑤) with:

𝑉 = { 𝐴, 𝐵, 𝐶, 𝐷, 𝐸 }, 𝐸 = {( 𝐴, 𝐵), ( 𝐴, 𝐶), (𝐵, 𝐶), (𝐵, 𝐷), (𝐶, 𝐸)},

and edge weights:

𝑤( 𝐴, 𝐵) = 1, 𝑤( 𝐴, 𝐶) = 2, 𝑤(𝐵, 𝐶) = 2, 𝑤(𝐵, 𝐷) = 1, 𝑤(𝐶, 𝐸) = 3.

A possible clustering is:


𝐶1 = { 𝐴, 𝐵, 𝐶}, 𝐶2 = {𝐷, 𝐸 }.
Evaluation:
• Intra-cluster weight (within 𝐶1 ):

𝑤( 𝐴, 𝐵) + 𝑤( 𝐴, 𝐶) + 𝑤(𝐵, 𝐶) = 1 + 2 + 2 = 5.

• Inter-cluster weight (between 𝐶1 and 𝐶2 ):

𝑤(𝐵, 𝐷) + 𝑤(𝐶, 𝐸) = 1 + 3 = 4.

This clustering balances high intra-cluster similarity and low inter-cluster dissimilarity, making it a good
partition.
Definition 3.47 (HyperGraph Clustering). (cf.[67, 138, 227, 230]) In hypergraph clustering, the goal is to parti-
tion the vertex set V into 𝑘 clusters {𝐶1 , 𝐶2 , . . . , 𝐶 𝑘 } that minimize the normalized cut:
\[
\mathrm{NCut}(\mathcal{C}) = \sum_{i=1}^{k} \frac{\mathrm{cut}(C_i, \overline{C_i})}{\mathrm{vol}(C_i)},
\]
where:
• $\mathrm{cut}(C_i, \overline{C_i}) = \sum_{e \in \mathcal{E}} w(e)\, \dfrac{|e \cap C_i| \cdot |e \cap \overline{C_i}|}{|e|}$.
• $\mathrm{vol}(C_i) = \sum_{v_j \in C_i} d_v(v_j)$.

Definition 3.48 (SuperHyperGraph clustering). A clustering of a SuperHyperGraph 𝐻 = (𝑉, 𝐸) is a partition
C = {𝐶1 , 𝐶2 , . . . , 𝐶 𝑘 } of the base vertex set 𝑉0 , where each cluster 𝐶𝑖 ⊆ 𝑉0 .
The normalized cut criterion for clustering in a SuperHyperGraph is defined using the Laplacian 𝐿 of
the Expanded Hypergraph 𝐻 ′ . The objective is to minimize:
\[
\mathrm{NCut}(\mathcal{C}) = \sum_{i=1}^{k} \frac{\mathrm{vol}(C_i, \overline{C_i})}{\mathrm{vol}(C_i)},
\]
where:
• $\mathrm{vol}(C_i) = \sum_{v_j \in C_i} d_V(v_j)$,
• $\mathrm{vol}(C_i, \overline{C_i}) = \sum_{v_j \in C_i,\ v_k \in \overline{C_i}} L_{jk}$,
• $\overline{C_i} = V_0 \setminus C_i$.
Theorem 3.49. The clustering methods for SuperHyperGraphs generalize those for hypergraphs. In particular,
spectral clustering using the SuperHyperGraph Laplacian reduces to hypergraph spectral clustering when 𝑉 =
𝑉0 .

Proof. In hypergraph spectral clustering, the Laplacian of the hypergraph is used to compute eigenvectors cor-
responding to the smallest non-zero eigenvalues, which are then used to partition the vertex set 𝑉0 .
For the SuperHyperGraph, when 𝑉 = 𝑉0 , the Laplacian 𝐿 becomes the hypergraph Laplacian. Therefore,
spectral clustering on the SuperHyperGraph reduces to spectral clustering on the hypergraph.
Hence, clustering methods in SuperHyperGraphs generalize those in hypergraphs. □
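In code, the spectral-clustering step referred to here amounts to an eigendecomposition of the Laplacian followed by k-means on the leading non-trivial eigenvectors; the sketch below, including the choice to skip the first eigenvector and the toy Laplacian, is an illustrative assumption rather than the authors' procedure.

import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(L, k, random_state=0):
    # L: symmetric (SuperHyper)graph Laplacian; k: number of clusters.
    eigvals, eigvecs = np.linalg.eigh(L)
    embedding = eigvecs[:, 1:k + 1]          # eigenvectors after the trivial one
    return KMeans(n_clusters=k, n_init=10, random_state=random_state).fit_predict(embedding)

# Toy Laplacian: three mutually connected vertices plus one isolated vertex.
L = np.array([[ 1.0, -0.5, -0.5,  0.0],
              [-0.5,  1.0, -0.5,  0.0],
              [-0.5, -0.5,  1.0,  0.0],
              [ 0.0,  0.0,  0.0,  0.0]])
labels = spectral_clusters(L, k=2)           # a 2-way partition of the 4 vertices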
3.6.4 Degree Centrality in Superhypergraph
We discuss the concept of degree centrality in a superhypergraph. Degree centrality measures the im-
portance of a node in a graph by counting the number of direct connections (edges) it has (cf.[37, 441]).
Definition 3.50 (degree centrality in hypergraph). [211, 220, 397] In hypergraphs, the degree centrality of a
vertex 𝑣 𝑖 is:
\[
C(v_i) = d_v(v_i) = \sum_{j=1}^{m} H_{ij}\, w(e_j).
\]

Definition 3.51 (degree centrality in superhypergraph). The degree centrality of a base vertex 𝑣 𝑖 ∈ 𝑉0 in super-
hypergraph is defined as:
\[
C(v_i) = d_V(v_i) = \sum_{j=1}^{|E'|} H'_{ij}\, w(e'_j).
\]

Theorem 3.52. The degree centrality defined for SuperHyperGraphs generalizes the degree centrality in hyper-
graphs. Specifically, when 𝑉 = 𝑉0 , the centrality measure reduces to the hypergraph degree centrality.

Proof. When 𝑉 = 𝑉0 , the degree centrality formula becomes:

\[
C(v_i) = \sum_{j=1}^{|E|} H_{ij}\, w(e_j),
\]

which is the standard degree centrality in hypergraphs.


Therefore, the SuperHyperGraph centrality measure generalizes the hypergraph centrality measure. □
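Given the expanded incidence matrix, this centrality is a single matrix-vector product; the toy matrix below is an assumed example.

import numpy as np

H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # 4 base vertices, 2 expanded hyperedges
w = np.array([1.0, 2.0])                                      # hyperedge weights
centrality = H @ w                                            # C(v_i) = sum_j H'_ij w(e'_j)
print(centrality)                                             # [1. 1. 3. 2.]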

3.6.5 𝑛-SuperHyperGraph Attention
We provide precise mathematical definitions of Hypergraph Attention and extend it to 𝑛-SuperHyperGraphs,
defining the 𝑛-SuperHyperGraph Attention mechanism. Note that graph Attention leverages attention mecha-
nisms to dynamically weigh neighbor nodes, enhancing message-passing efficiency and representation learning
in graph neural networks (cf.[61, 68, 318, 385, 398, 399]).
Definition 3.53 (Hypergraph Attention). [38,77,103,222,247,315,394] In Hypergraph Attention, we introduce
learnable attention coefficients to the incidence matrix to capture the importance of connections between vertices
and hyperedges.
For each vertex 𝑣 𝑖 and hyperedge 𝑒 𝑗 , we compute an attention coefficient 𝛼𝑖 𝑗 defined as:

\[
\alpha_{ij} = \frac{\exp\!\left( \sigma\!\left( a^{\top} [x_i \,\|\, u_j] \right) \right)}{\sum_{k \in \mathcal{E}_i} \exp\!\left( \sigma\!\left( a^{\top} [x_i \,\|\, u_k] \right) \right)},
\]

where:
• 𝜎 is a nonlinear activation function (e.g., LeakyReLU).

• 𝑎 ∈ R2𝑑 is a learnable weight vector.
• ∥ denotes vector concatenation.

• 𝑥𝑖′ = 𝑥𝑖 Θ and 𝑢 ′𝑗 = 𝑢 𝑗 Θ, where Θ ∈ R𝑑×𝑑 is a shared weight matrix.

• 𝑢 𝑗 is the feature representation of hyperedge 𝑒 𝑗 , typically defined as:

\[
u_j = \frac{1}{|e_j|} \sum_{v_k \in e_j} x_k.
\]

• E𝑖 = {𝑒 𝑗 ∈ E | 𝐻𝑖 𝑗 = 1} is the set of hyperedges incident to vertex 𝑣 𝑖 .

The attention-based incidence matrix 𝐻˜ has entries 𝐻˜ 𝑖 𝑗 = 𝛼𝑖 𝑗 .


The hypergraph attention convolution operation is then defined as:
\[
X' = \sigma\!\left( D_v^{-1} \tilde{H} W D_e^{-1} \tilde{H}^{\top} X \right).
\]
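A small NumPy sketch of these attention coefficients is given below; it uses LeakyReLU, a softmax over the hyperedges incident to each vertex, and (as a modeling assumption) the projected features when forming the hyperedge representations.

import numpy as np

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

def hypergraph_attention(H, X, Theta, a):
    # H: (n, m) incidence; X: (n, d) features; Theta: (d, d) shared weights; a: (2d,) attention vector.
    Xp = X @ Theta                                              # projected vertex features
    U = (H.T @ Xp) / np.maximum(H.sum(axis=0), 1.0)[:, None]    # hyperedge features u_j (mean of members)
    alpha = np.zeros_like(H)
    for i in range(H.shape[0]):
        incident = np.where(H[i] > 0)[0]
        if incident.size == 0:
            continue
        scores = np.array([leaky_relu(a @ np.concatenate([Xp[i], U[j]])) for j in incident])
        scores = np.exp(scores - scores.max())                  # numerically stable softmax
        alpha[i, incident] = scores / scores.sum()
    return alpha                                                # attention-based incidence matrix

rng = np.random.default_rng(0)
H = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)
alpha = hypergraph_attention(H, rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), rng.normal(size=(8,)))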

Definition 3.54 (𝑛-SuperHyperGraph Attention). In 𝑛-SuperHyperGraph Attention, we introduce attention co-


efficients between supervertices and superedges.
For each base vertex 𝑣 𝑖 ∈ 𝑉0 and superedge 𝑒 ′𝑗 ∈ E ′(𝑛) , we compute an attention coefficient 𝛼𝑖 𝑗 as:

\[
\alpha_{ij} = \frac{\exp\!\left( \sigma\!\left( a^{\top} [x_i \,\|\, u_j] \right) \right)}{\sum_{k \in \mathcal{E}_i} \exp\!\left( \sigma\!\left( a^{\top} [x_i \,\|\, u_k] \right) \right)},
\]

where:
• 𝑥𝑖 is the feature vector of base vertex 𝑣 𝑖 .
• 𝑢 𝑗 is the feature representation of superedge 𝑒 ′𝑗 , defined as an aggregation of features of the elements
(which can be supervertices or sets thereof) in 𝑒 ′𝑗 .

• E𝑖 is the set of superedges incident to base vertex 𝑣 𝑖 .


The attention-based incidence matrix $\tilde{H}^{(n)}$ has entries $\tilde{H}^{(n)}_{ij} = \alpha_{ij}$.
The $n$-SuperHyperGraph attention convolution operation is defined as:
\[
X' = \sigma\!\left( D_v^{-1} \tilde{H}^{(n)} W D_e^{-1} \tilde{H}^{(n)\top} X \right).
\]

Theorem 3.55. The 𝑛-SuperHyperGraph Attention mechanism generalizes the Hypergraph Attention mecha-
nism. Specifically, when 𝑛 = 1, the 𝑛-SuperHyperGraph Attention reduces to the standard Hypergraph Attention.

Proof. Consider the case when 𝑛 = 1. Then:

P 1 (𝑉0 ) = P (𝑉0 ),

so the supervertices V (1) ⊆ P (𝑉0 ).


However, to align with the standard hypergraph setting, we consider V (1) = 𝑉0 , and E (1) = {𝑒 𝑗 ⊆ 𝑉0 |
𝑒 𝑗 ≠ ∅}, which is exactly the set of hyperedges in a standard hypergraph.
In the attention mechanism, the attention coefficients 𝛼𝑖 𝑗 are computed between vertices 𝑣 𝑖 ∈ 𝑉0 and
hyperedges 𝑒 𝑗 ⊆ 𝑉0 .
Thus, when 𝑛 = 1, the 𝑛-SuperHyperGraph Attention reduces to the standard Hypergraph Attention
mechanism.
Therefore, the 𝑛-SuperHyperGraph Attention generalizes the Hypergraph Attention. □

4 Result: Uncertain Graph Neural Networks


In this section, we explore uncertain graph networks, including Fuzzy Graph Neural Networks, Neutro-
sophic Graph Neural Networks, and Plithogenic Graph Neural Networks.
4.1 Neutrosophic Graph Neural Network (N-GNN)
In this subsection, we define the concept of the Neutrosophic Graph Neural Network (N-GNN) and
demonstrate how it generalizes the Fuzzy Graph Neural Network (F-GNN). This framework extends the Fuzzy
Graph Neural Network by incorporating the structure of Neutrosophic Graphs. The following sections provide
the formal definitions and related theorems.
Definition 4.1 (Neutrosophic Graph Neural Network (N-GNN)). A Neutrosophic Graph Neural Network (N-
GNN) is a graph inference model that integrates neutrosophic logic into the framework of graph neural networks
to handle uncertain, indeterminate, and inconsistent data in graph-structured information. Formally, an N-GNN
is defined as a quintuple:
N-GNN = (𝐺, N𝑉 , N𝐸 , R 𝑁 , D 𝑁 ) ,
where:
• 𝐺 = (𝑉, 𝐸) is a graph with vertex set 𝑉 and edge set 𝐸.
• N𝑉 and N𝐸 are the neutrosophic fuzzification functions for vertices and edges, respectively. These func-
tions map vertex and edge attributes to neutrosophic membership triplets:

N𝑉 : X𝑉 → [0, 1] 3 , N𝐸 : X𝐸 → [0, 1] 3 ,

where each output is a triplet (𝜇𝑇 , 𝜇 𝐼 , 𝜇 𝐹 ) representing the degrees of truth-membership, indeterminacy-
membership, and falsity-membership.
• R 𝑁 represents the rule layer, which encodes neutrosophic rules to aggregate neutrosophic information
from neighboring nodes and edges.
• D 𝑁 is the neutrosophic defuzzification function, which aggregates the outputs of the rule layer to produce
crisp outputs for each vertex or edge.
Definition 4.2 (Operations in N-GNN). Given an input graph 𝐺 = (𝑉, 𝐸) with vertex features 𝑋𝑉 and edge
features 𝑋𝐸 , the N-GNN operates as follows:

1. Neutrosophic Fuzzification Layer: Each vertex 𝑣 ∈ 𝑉 and edge 𝑒 ∈ 𝐸 is fuzzified into neutrosophic
membership triplets using membership functions:

N𝑉 (𝑣) = (𝜇𝑇 (𝑣), 𝜇 𝐼 (𝑣), 𝜇 𝐹 (𝑣)) , N𝐸 (𝑒) = (𝜇𝑇 (𝑒), 𝜇 𝐼 (𝑒), 𝜇 𝐹 (𝑒)) .

2. Rule Layer: A set of neutrosophic rules is defined to aggregate neutrosophic information. For example:

\[
\text{IF } v \text{ has } (\mu_T^v, \mu_I^v, \mu_F^v) \text{ AND } u \text{ has } (\mu_T^u, \mu_I^u, \mu_F^u) \text{ THEN } y_k = f_k\!\left( \mathcal{N}_V(v), \mathcal{N}_V(u) \right),
\]

where 𝑓 𝑘 is a trainable function that operates on neutrosophic membership values.

3. Normalization Layer: The firing strength 𝑟 𝑘 of each rule is calculated and normalized:
\[
r_k = \mathrm{Comb}\!\left( \mathcal{N}_V(v), \mathcal{N}_V(u) \right), \qquad \hat{r}_k = \frac{r_k}{\sum_{j=1}^{K} r_j},
\]

where Comb is a combination function suitable for neutrosophic logic.


4. Defuzzification Layer: The normalized rule outputs are aggregated to produce crisp predictions:
\[
y = \sum_{k=1}^{K} \hat{r}_k \cdot f_k(x_v, x_u).
\]
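The toy sketch below walks one vertex pair through rule firing, normalization, and defuzzification; the min/max-style combination function and the scalar collapse of the triplet are illustrative assumptions, since the text leaves Comb and f_k as trainable or user-chosen components.

import numpy as np

def combine(tv, tu):
    # Illustrative firing strength from two (T, I, F) triplets.
    T, I, F = min(tv[0], tu[0]), max(tv[1], tu[1]), max(tv[2], tu[2])
    return max(T - 0.5 * I - F, 0.0)

def n_gnn_step(triplets, rules):
    # triplets: dict vertex -> (T, I, F); rules: list of (v, u, f_k) with f_k callable.
    r = np.array([combine(triplets[v], triplets[u]) for v, u, _ in rules])
    r_hat = r / r.sum() if r.sum() > 0 else np.full(len(r), 1.0 / len(r))   # normalization layer
    outputs = np.array([f(triplets[v], triplets[u]) for v, u, f in rules])
    return float(r_hat @ outputs)                                           # defuzzified crisp output

triplets = {"v": (0.9, 0.1, 0.0), "u": (0.7, 0.2, 0.1)}
rules = [("v", "u", lambda a, b: a[0] + b[0])]
print(n_gnn_step(triplets, rules))   # 1.6 for this single-rule example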

Definition 4.3 (Stacked N-GNN Architecture). For a multi-layer N-GNN, the 𝑙-th layer is defined as:
   
\[
H^{(l)} = \sigma\!\left( f_\theta^{(l)}\!\left( H^{(l-1)}, A \right) + H^{(l-1)} \right),
\]

where:
• 𝐻 (𝑙) is the output of the 𝑙-th layer.
• 𝜎 is a non-linear activation function (e.g., ReLU).
• 𝐴 is the adjacency matrix of the graph.
• $f_\theta^{(l)}$ is a trainable function incorporating neutrosophic operations.
The final output of the N-GNN is:
 
\[
Y = \mathrm{Softmax}\!\left( H^{(L)} \right),
\]

where 𝐿 is the number of layers in the N-GNN.


Theorem 4.4. The Neutrosophic Graph Neural Network (N-GNN) generalizes the Fuzzy Graph Neural Network
(F-GNN).

Proof. In an N-GNN, each vertex and edge is associated with a neutrosophic membership triplet (𝜇𝑇 , 𝜇 𝐼 , 𝜇 𝐹 ).
Consider the special case where the indeterminacy and falsity components are zero for all vertices and edges,
i.e., 𝜇 𝐼 (𝑣) = 0 and 𝜇 𝐹 (𝑣) = 0 for all 𝑣 ∈ 𝑉, and similarly for edges. Then, the neutrosophic membership reduces
to the fuzzy membership:
𝜇𝑇 (𝑣) = 𝜎(𝑣), ∀𝑣 ∈ 𝑉,
where 𝜎(𝑣) is the fuzzy membership degree in F-GNN. Under these conditions, the N-GNN operations reduce
to those of the F-GNN. Therefore, the N-GNN generalizes the F-GNN. □

Theorem 4.5. A Neutrosophic Graph Neural Network (N-GNN), as defined, has the structural properties of a
Neutrosophic Graph.

Proof. To prove this, we verify that the structure of the N-GNN satisfies the defining properties of a Neutro-
sophic Graph.

1. Vertices and Edges in Neutrosophic Graphs: In a Neutrosophic Graph 𝐺 = (𝑉, 𝐸), each vertex 𝑣 ∈ 𝑉
is associated with a triplet 𝜎(𝑣) = (𝜎𝑇 (𝑣), 𝜎𝐼 (𝑣), 𝜎𝐹 (𝑣)) where 𝜎𝑇 (𝑣), 𝜎𝐼 (𝑣), 𝜎𝐹 (𝑣) ∈ [0, 1] and 𝜎𝑇 (𝑣) +
𝜎𝐼 (𝑣) + 𝜎𝐹 (𝑣) ≤ 3. Similarly, each edge 𝑒 ∈ 𝐸 is associated with a triplet 𝜇(𝑒) = (𝜇𝑇 (𝑒), 𝜇 𝐼 (𝑒), 𝜇 𝐹 (𝑒))
satisfying the same constraints.
In the N-GNN, the neutrosophic fuzzification layer assigns triplets to vertices and edges:

N𝑉 (𝑣) = (𝜇𝑇 (𝑣), 𝜇 𝐼 (𝑣), 𝜇 𝐹 (𝑣)), N𝐸 (𝑒) = (𝜇𝑇 (𝑒), 𝜇 𝐼 (𝑒), 𝜇 𝐹 (𝑒)),

where 𝜇𝑇 , 𝜇 𝐼 , 𝜇 𝐹 ∈ [0, 1] and the sum constraint is explicitly ensured during the mapping process. Thus, the
first property of a Neutrosophic Graph is satisfied.

2. Neutrosophic Membership Consistency: In a Neutrosophic Graph, the membership of an edge depends
on the membership of its incident vertices. For instance:

𝜇𝑇 (𝑒) ≤ min{𝜎𝑇 (𝑢), 𝜎𝑇 (𝑣)}, 𝜇 𝐼 (𝑒) ≤ max{𝜎𝐼 (𝑢), 𝜎𝐼 (𝑣)}, 𝜇 𝐹 (𝑒) ≥ max{𝜎𝐹 (𝑢), 𝜎𝐹 (𝑣)},

for an edge 𝑒 = (𝑢, 𝑣).


In the N-GNN, during the aggregation step in the rule layer, the neutrosophic membership values for
edges are derived from the memberships of adjacent vertices according to neutrosophic logical rules. This
ensures that edge memberships are consistent with vertex memberships, satisfying the second property.

3. Propagation of Neutrosophic Membership: A Neutrosophic Graph allows the propagation of neutro-


sophic properties through its structure. In the N-GNN, the rule and aggregation layers propagate vertex and
edge memberships throughout the network while preserving the neutrosophic constraints.
Let R 𝑁 represent the rule layer and A 𝑁 represent the aggregation mechanism. For a vertex 𝑣, the output
neutrosophic triplet at layer 𝑙 is computed as:
 
\[
\sigma^{(l)}(v) = \mathcal{A}_N\!\left( \{ \mathcal{R}_N(\sigma^{(l-1)}(u), \mu^{(l-1)}(e)) \mid u \in \mathrm{neighbors}(v) \} \right),
\]

where 𝜎 (𝑙−1) (𝑢) and 𝜇 (𝑙−1) (𝑒) represent the triplets from the previous layer. This propagation mechanism
ensures that the neutrosophic graph structure is preserved across layers.

4. Defuzzification to Classical Graph Outputs: The defuzzification layer in the N-GNN converts neutro-
sophic triplets into crisp outputs while maintaining consistency with the original neutrosophic structure. This
aligns with the final output of a Neutrosophic Graph.
Each layer of the N-GNN maintains the structure and properties of a Neutrosophic Graph. Therefore, a
Neutrosophic Graph Neural Network inherently possesses the structure of a Neutrosophic Graph, as required. □

4.2 Plithogenic Graph Neural Network (P-GNN)
Next, we define the Plithogenic Graph Neural Network (P-GNN) and show how it generalizes both
N-GNN and F-GNN.
Definition 4.6 (Plithogenic Graph Neural Network (P-GNN)). A Plithogenic Graph Neural Network (P-GNN) is
a graph inference model that integrates plithogenic logic into the framework of graph neural networks to handle
data with degrees of appurtenance and contradiction in graph-structured information. Formally, a P-GNN is
defined as:
P-GNN = (𝐺, P𝑉 , P𝐸 , R 𝑃 , D 𝑃 ) ,
where:
• 𝐺 = (𝑉, 𝐸) is a graph with vertex set 𝑉 and edge set 𝐸.
• P𝑉 and P𝐸 are the plithogenic fuzzification functions for vertices and edges, respectively. These func-
tions map vertex and edge attributes to plithogenic membership values, which include degrees of appur-
tenance and contradiction.
• R 𝑃 represents the rule layer, which encodes plithogenic rules to aggregate plithogenic information from
neighboring nodes and edges.
• D 𝑃 is the plithogenic defuzzification function, which aggregates the outputs of the rule layer to produce
crisp outputs for each vertex or edge.
Definition 4.7 (Operations in P-GNN). Given an input graph 𝐺 = (𝑉, 𝐸) with vertex features 𝑋𝑉 and edge
features 𝑋𝐸 , the P-GNN operates as follows:

1. Plithogenic Fuzzification Layer: Each vertex 𝑣 ∈ 𝑉 and edge 𝑒 ∈ 𝐸 is fuzzified into plithogenic member-
ship values using degrees of appurtenance and contradiction.
2. Rule Layer: A set of plithogenic rules is defined to aggregate plithogenic information. For example:

IF 𝑣 has DAF 𝛼𝑣 AND 𝑢 has DAF 𝛼𝑢 AND DCF 𝛿 𝑣𝑢 THEN 𝑦 𝑘 = 𝑓 𝑘 (P𝑉 (𝑣), P𝑉 (𝑢)) ,

where 𝑓 𝑘 is a trainable function that operates on plithogenic membership values.

3. Normalization Layer: The firing strength 𝑟 𝑘 of each rule is calculated and normalized, taking into account
degrees of contradiction.
4. Defuzzification Layer: The normalized rule outputs are aggregated to produce crisp predictions.

Definition 4.8. For a multi-layer P-GNN, the $l$-th layer is defined similarly, incorporating plithogenic operations in $f_\theta^{(l)}$.
Theorem 4.9. The Plithogenic Graph Neural Network (P-GNN) generalizes both the Neutrosophic Graph Neu-
ral Network (N-GNN) and the Fuzzy Graph Neural Network (F-GNN).
Proof. In a P-GNN, each vertex and edge is associated with degrees of appurtenance and contradiction. Consider
the special case where the degrees of contradiction are zero for all vertices and edges, and the plithogenic
membership reduces to neutrosophic membership with degrees of truth, indeterminacy, and falsity. Under this
condition, the P-GNN reduces to an N-GNN.
Further, if we also set the indeterminacy and falsity components to zero, the neutrosophic membership
reduces to fuzzy membership, and the P-GNN reduces to an F-GNN.
Therefore, the P-GNN generalizes both the N-GNN and the F-GNN. □
Corollary 4.10. The Plithogenic Graph Neural Network can generalize the Hesitant Fuzzy Graph Neural Net-
work [162].
Proof. A Hesitant Fuzzy Set [375, 376] can be generalized by a Plithogenic Set. Similarly, a Hesitant Fuzzy
Graph can be generalized by a Plithogenic Graph. Therefore, following the same reasoning as for Neutrosophic
Graphs, the Plithogenic Graph Neural Network generalizes the Hesitant Fuzzy Graph Neural Network. □
Theorem 4.11. A Plithogenic Graph Neural Network (P-GNN), as defined, possesses the structural properties
of a Plithogenic Graph.
Proof. In a Plithogenic Graph 𝑃𝐺 = (𝑃𝑀, 𝑃𝑁), each vertex 𝑣 ∈ 𝑀 is associated with:
• An attribute 𝑙 and a set of possible values 𝑀𝑙.
• A Degree of Appurtenance Function (DAF) 𝑎𝑑𝑓 : 𝑀 × 𝑀𝑙 → [0, 1] 𝑠 .
• A Degree of Contradiction Function (DCF) 𝑎𝐶 𝑓 : 𝑀𝑙 × 𝑀𝑙 → [0, 1] 𝑡 .
Similarly, each edge 𝑒 ∈ 𝑁 is associated with:
• An attribute 𝑚 and a set of possible values 𝑁𝑚.
• A DAF 𝑏𝑑𝑓 : 𝑁 × 𝑁𝑚 → [0, 1] 𝑠 .
• A DCF 𝑏𝐶 𝑓 : 𝑁𝑚 × 𝑁𝑚 → [0, 1] 𝑡 .
The plithogenic fuzzification functions P𝑉 and P𝐸 in the P-GNN assign these plithogenic memberships, satis-
fying the structural requirements.
In a Plithogenic Graph, for all (𝑥, 𝑎), (𝑦, 𝑏) ∈ 𝑀 × 𝑀𝑙,

𝑏𝑑𝑓 ((𝑥𝑦), (𝑎, 𝑏)) ≤ min{𝑎𝑑𝑓 (𝑥, 𝑎), 𝑎𝑑𝑓 (𝑦, 𝑏)}.

In the rule layer R 𝑃 of the P-GNN, edge DAFs are computed based on vertex DAFs using logical rules, ensuring
this constraint.
Plithogenic graphs impose reflexivity and symmetry constraints:

𝑎𝐶 𝑓 (𝑎, 𝑎) = 0, ∀𝑎 ∈ 𝑀𝑙,
𝑎𝐶 𝑓 (𝑎, 𝑏) = 𝑎𝐶 𝑓 (𝑏, 𝑎), ∀𝑎, 𝑏 ∈ 𝑀𝑙,
𝑏𝐶 𝑓 (𝑚, 𝑚) = 0, ∀𝑚 ∈ 𝑁𝑚,
𝑏𝐶 𝑓 (𝑚, 𝑛) = 𝑏𝐶 𝑓 (𝑛, 𝑚), ∀𝑚, 𝑛 ∈ 𝑁𝑚.

The P-GNN enforces these constraints through its contradiction functions 𝑎𝐶 𝑓 and 𝑏𝐶 𝑓 , ensuring compliance.
The P-GNN propagates plithogenic properties through the rule layer R 𝑃 and defuzzification layer D 𝑃 ,
maintaining structural consistency.
The P-GNN satisfies all the defining properties of a Plithogenic Graph, thus proving the theorem. □

Theorem 4.12. In a P-GNN, the degrees of appurtenance and contradiction are preserved during the aggrega-
tion process across the network layers.
Proof. The plithogenic aggregation functions in the P-GNN operate as follows:
1. At layer 𝑙, the updated DAF for vertex 𝑣 is computed as:
 
\[
adf^{(l)}(v, l_v) = \mathcal{A}_P\!\left( \{ adf^{(l-1)}(u, l_u) \mid u \in \mathrm{neighbors}(v) \},\ \{ bdf^{(l-1)}(e, m_e) \mid e = (v, u) \} \right),
\]

where A 𝑃 is the plithogenic aggregation function.


2. The updated DCFs are computed analogously, ensuring contradiction information is preserved.
As A 𝑃 is closed under plithogenic operations, the degrees of appurtenance and contradiction remain valid.
Hence, the theorem is proven. □
Theorem 4.13. The P-GNN can model higher levels of uncertainty and contradiction compared to traditional
Graph Neural Networks (GNNs).

Proof. The P-GNN incorporates degrees of contradiction through the DCF, which traditional GNNs do not
explicitly model. Plithogenic logic extends beyond fuzzy and neutrosophic logic by introducing contradiction
degrees, enabling superior expressiveness.
Thus, the P-GNN’s ability to handle contradiction degrees allows it to model complex data with inherent
uncertainty and contradictions, thus proving the theorem. □

Theorem 4.14. Under certain conditions, the P-GNN converges to a stable solution that reflects the underlying
plithogenic graph structure.
Proof. The iterative updates in the P-GNN maintain the plithogenic constraints, ensuring boundedness and
stability. The use of contraction mappings in the aggregation functions ensures convergence to a fixed point
under suitable conditions. Thus, the P-GNN converges to a stable state that preserves the plithogenic properties,
confirming the theorem. □

The algorithm for the Plithogenic Graph Neural Network is described below. We also analyze its validity,
time complexity, and other relevant aspects.

Algorithm 5: Plithogenic Graph Neural Network (P-GNN)
Input: Graph 𝐺 = (𝑉, 𝐸); Vertex features 𝑋𝑉 ; Edge features 𝑋𝐸 ; Number of layers 𝐿
Output: Predictions 𝑌
1  foreach vertex 𝑣 ∈ 𝑉 do
2      Compute degrees of appurtenance and contradiction for 𝑣:
3      𝛼𝑣 ← DAF(𝑣)
4      𝛿𝑣 ← DCF(𝑣)
5  end
6  foreach edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 do
7      Compute degrees of appurtenance and contradiction for 𝑒:
8      𝛼𝑒 ← DAF(𝑒)
9      𝛿𝑒 ← DCF(𝑒)
10 end
11 Initialize vertex representations:
12     𝐻𝑣^(0) ← 𝑋𝑉(𝑣), ∀𝑣 ∈ 𝑉
13 for 𝑙 ← 1 to 𝐿 do
14     foreach vertex 𝑣 ∈ 𝑉 do
15         Aggregate messages from neighbors:
16             𝑚𝑣^(𝑙) ← Σ_{𝑢∈N(𝑣)} 𝛾𝑢𝑣 · 𝐻𝑢^(𝑙−1)
17         Update vertex representation:
18             𝐻𝑣^(𝑙) ← 𝜎( 𝑓𝜃^(𝑙)( 𝐻𝑣^(𝑙−1), 𝑚𝑣^(𝑙) ) )
19     end
20 end
21 Compute final predictions:
22     𝑌𝑣 ← Softmax( 𝐻𝑣^(𝐿) ), ∀𝑣 ∈ 𝑉

Remark 4.15 (Algorithm Explanation). A brief description of the algorithm is provided below.

• Input: The algorithm takes as input a graph 𝐺 = (𝑉, 𝐸), vertex features 𝑋𝑉 , edge features 𝑋𝐸 , and the
number of layers 𝐿.
• Degrees of Appurtenance and Contradiction: For each vertex and edge, compute the Degree of Appurte-
nance Function (DAF) and Degree of Contradiction Function (DCF) as defined in the plithogenic frame-
work.
• Message Passing: For each vertex 𝑣, aggregate messages from its neighbors N (𝑣), weighted by a coeffi-
cient 𝛾𝑢𝑣 that incorporates the degrees of appurtenance and contradiction:

𝛾𝑢𝑣 = Comb (𝛼𝑢 , 𝛿𝑢𝑣 ) ,

where Comb(·) is a combination function suitable for plithogenic logic.


• Update Rule: Update the vertex representations using a trainable function 𝑓𝜃^(𝑙) and an activation function 𝜎 (e.g., ReLU).
• Output: After 𝐿 layers, compute the final predictions using the Softmax function.
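The message-passing and update steps described in Remark 4.15 can be illustrated with a short Python sketch of a single propagation layer. The sketch is illustrative only: the combination function Comb is instantiated here as 𝛼𝑢 · (1 − 𝛿𝑢𝑣), and the trainable map 𝑓𝜃 is replaced by a fixed linear transformation followed by ReLU; these choices, and all function names, are assumptions made for the example rather than part of the algorithm.

import numpy as np

def comb(alpha_u, delta_uv):
    # Hypothetical combination function: high appurtenance and low
    # contradiction yield a large message weight (one possible choice).
    return alpha_u * (1.0 - delta_uv)

def pgnn_layer(H_prev, neighbors, alpha, delta, W):
    # H_prev    : (n, d) array of representations H^(l-1)
    # neighbors : dict mapping a vertex index v to a list of neighbor indices u
    # alpha     : (n,) degrees of appurtenance of the vertices
    # delta     : dict mapping (u, v) to the contradiction degree of edge (u, v)
    # W         : (2d, d) weight matrix standing in for the trainable f_theta
    n, d = H_prev.shape
    H_new = np.zeros_like(H_prev)
    for v in range(n):
        m_v = np.zeros(d)
        for u in neighbors.get(v, []):
            gamma_uv = comb(alpha[u], delta.get((u, v), 0.0))
            m_v += gamma_uv * H_prev[u]               # aggregate weighted messages
        z = np.concatenate([H_prev[v], m_v]) @ W       # combine with previous state
        H_new[v] = np.maximum(z, 0.0)                  # ReLU activation
    return H_new

Stacking 𝐿 such layers and applying a row-wise softmax to the final representations mirrors the prediction step of Algorithm 5.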

Theorem 4.16 (Algorithm Validity). The P-GNN algorithm correctly computes the predictions 𝑌 according to
the plithogenic logic framework.

Proof. The P-GNN algorithm integrates plithogenic logic into the message-passing framework of graph neural
networks. By computing the degrees of appurtenance (𝛼𝑣 , 𝛼𝑒 ) and contradiction (𝛿 𝑣 , 𝛿𝑒 ) for each vertex and
edge, the algorithm captures the plithogenic properties of the graph.
During message passing, the aggregation coefficient 𝛾𝑢𝑣 combines the appurtenance and contradiction
degrees using a suitable combination function. This ensures that messages are weighted appropriately based on
the plithogenic relationships between vertices.

The update rule incorporates the aggregated messages and the previous vertex representation, allowing
the model to learn complex patterns in the data. The use of activation functions and trainable parameters ensures
that the model can approximate any continuous function, according to the universal approximation theorem.
Therefore, the algorithm correctly implements the plithogenic logic within the graph neural network
framework, leading to accurate predictions 𝑌 . □

Theorem 4.17 (Time Complexity). The time complexity of the P-GNN algorithm is O (𝐿 · (|𝑉 |𝑑 + |𝐸 |𝑑)), where
|𝑉 | is the number of vertices, |𝐸 | is the number of edges, and 𝑑 is the dimensionality of the feature vectors.

Proof. The time complexity analysis is as follows:

• Degrees Computation:
– For vertices: Computing 𝛼𝑣 and 𝛿 𝑣 for all 𝑣 ∈ 𝑉 takes O (|𝑉 |) time.
– For edges: Computing 𝛼𝑒 and 𝛿𝑒 for all 𝑒 ∈ 𝐸 takes O (|𝐸 |) time.
• Initialization: Initializing 𝐻𝑣^(0) for all 𝑣 ∈ 𝑉 takes O(|𝑉|𝑑) time.
• Message Passing and Update (per layer):
– Aggregation: For each vertex 𝑣 ∈ 𝑉, aggregating messages from neighbors involves
𝑚𝑣^(𝑙) = Σ_{𝑢∈N(𝑣)} 𝛾𝑢𝑣 · 𝐻𝑢^(𝑙−1).
Assuming the average degree is 𝑘̄, this takes O(𝑘̄𝑑) time per vertex, totaling O(|𝑉|𝑘̄𝑑) per layer.
– Update: Updating 𝐻𝑣^(𝑙) for all 𝑣 ∈ 𝑉 takes O(|𝑉|𝑑) time per layer.
• Total per Layer: O (|𝑉 | 𝑘¯ 𝑑) (since 𝑘¯ is constant for sparse graphs, this simplifies to O (|𝑉 |𝑑)).
• Total for 𝐿 Layers: O (𝐿 · |𝑉 |𝑑)
• Overall Time Complexity: Including the degrees computation and message passing over 𝐿 layers:

O (|𝑉 | + |𝐸 | + 𝐿 · |𝑉 |𝑑) = O (𝐿 · |𝑉 |𝑑 + |𝐸 |)

For graphs where |𝐸 | is O (|𝑉 |) (sparse graphs), the complexity simplifies to O (𝐿 · |𝑉 |𝑑).

Theorem 4.18 (Space Complexity). The space complexity of the P-GNN algorithm is O (|𝑉 |𝑑 + |𝐸 |).

Proof. The space complexity analysis is as follows:


• Vertex Representations: Storing 𝐻𝑣^(𝑙) for all 𝑣 ∈ 𝑉 and all 𝑙 = 0, . . . , 𝐿 requires O(𝐿 · |𝑉|𝑑) space. However, if we overwrite 𝐻𝑣^(𝑙−1) with 𝐻𝑣^(𝑙) at each layer (i.e., do not store all previous layers), the space required reduces to O(|𝑉|𝑑).
• Degrees of Appurtenance and Contradiction: Storing 𝛼𝑣, 𝛿𝑣 for all 𝑣 ∈ 𝑉 requires O(|𝑉|) space. Similarly, storing 𝛼𝑒, 𝛿𝑒 for all 𝑒 ∈ 𝐸 requires O(|𝐸|) space.
• Aggregation Messages: Storing 𝑚𝑣^(𝑙) for all 𝑣 ∈ 𝑉 requires O(|𝑉|𝑑) space.
• Total Space Complexity: Combining the above, the total space complexity is:

O (|𝑉 |𝑑 + |𝐸 | + |𝑉 |) = O (|𝑉 |𝑑 + |𝐸 |)

Since |𝑉 |𝑑 generally dominates |𝑉 |, and for sparse graphs |𝐸 | is O (|𝑉 |), the overall space complexity
remains O (|𝑉 |𝑑).

4.3 Fuzzy Hypergraph Neural Network
The concept of a Fuzzy Hypergraph Neural Network integrates the principles of Hypergraph Neural
Networks and Fuzzy Neural Networks. It can also be understood as a neural network representation of a Fuzzy
Hypergraph. Similar to Fuzzy Graphs, extensive research has been conducted on Fuzzy Hypergraphs [11,16,52,
59, 98, 99, 284, 285, 396]. The relevant definitions and theorems are presented below.
Definition 4.19 (Fuzzy Hypergraph). [311] Let 𝑋 be a finite set of vertices, and let 𝐸 be a finite family of non-
trivial fuzzy subsets of 𝑋, where each fuzzy set 𝐴 ∈ 𝐸 is defined by a membership function 𝜇 𝐴 : 𝑋 → [0, 1]. A
pair 𝐻 = (𝑋, 𝐸) is called a Fuzzy Hypergraph if the following conditions are satisfied:
• 𝑋 = ⋃{supp(𝐴) | 𝐴 ∈ 𝐸}, where the support of a fuzzy set 𝐴 is defined as supp(𝐴) = {𝑥 ∈ 𝑋 | 𝜇𝐴(𝑥) > 0}.
• 𝐸 is the fuzzy edge set, consisting of fuzzy subsets of 𝑋.
The height of a fuzzy hypergraph 𝐻, denoted ℎ(𝐻), is defined as:

ℎ(𝐻) = max{ max_{𝑥∈𝑋} 𝜇𝐴(𝑥) | 𝐴 ∈ 𝐸 }.

A Fuzzy Hypergraph 𝐻 = (𝑋, 𝐸) is:


• Simple if 𝐸 contains no repeated fuzzy edges and, for any 𝐴, 𝐵 ∈ 𝐸 with 𝐴 ⊆ 𝐵, it follows that 𝐴 = 𝐵.
• Support Simple if 𝐴, 𝐵 ∈ 𝐸, 𝐴 ⊆ 𝐵, and supp( 𝐴) = supp(𝐵), then 𝐴 = 𝐵.
Definition 4.20 (Crisp Level Hypergraph of a Fuzzy Hypergraph). Let 𝐻 = (𝑋, 𝐸) be a Fuzzy Hypergraph. For
a threshold 𝑐 ∈ (0, 1], the 𝑐-cut (or 𝑐-level) of a fuzzy edge 𝐴 ∈ 𝐸 is defined as:

𝐴𝑐 = {𝑥 ∈ 𝑋 | 𝜇 𝐴 (𝑥) ≥ 𝑐}.

The 𝑐-level hypergraph 𝐻𝑐 = (𝑋𝑐, 𝐸𝑐) of 𝐻 is defined as:

𝑋𝑐 = ⋃{𝐴𝑐 | 𝐴 ∈ 𝐸}, 𝐸𝑐 = {𝐴𝑐 | 𝐴 ∈ 𝐸}.
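A minimal Python sketch of the 𝑐-cut construction, assuming each fuzzy edge is stored as a dictionary from vertices to membership values (all names and data are illustrative only):

def c_cut(fuzzy_edge, c):
    # c-level set of one fuzzy edge: keep the vertices with membership >= c.
    return {x for x, mu in fuzzy_edge.items() if mu >= c}

def level_hypergraph(fuzzy_edges, c):
    # Build the c-level hypergraph (X_c, E_c) from a family of fuzzy edges.
    E_c = [c_cut(A, c) for A in fuzzy_edges]
    X_c = set().union(*E_c) if E_c else set()
    return X_c, E_c

edges = [{"a": 0.9, "b": 0.4}, {"b": 0.7, "c": 0.2}]
print(level_hypergraph(edges, 0.5))   # X_c = {'a', 'b'}, E_c = [{'a'}, {'b'}]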

Theorem 4.21. (cf.[15, 268]) A Fuzzy Hypergraph generalizes both Fuzzy Graphs and (crisp) Hypergraphs.
Proof. A Fuzzy Graph 𝐺 = (𝑋, 𝐸, 𝜇𝑉 , 𝜇 𝐸 ) is a special case of a Fuzzy Hypergraph 𝐻 = (𝑋, 𝐸), where:
• The vertex membership function 𝜇𝑉 : 𝑋 → [0, 1] in 𝐺 corresponds to the vertex set 𝑋 in 𝐻.
• Each edge membership function 𝜇 𝐸 : 𝑋 × 𝑋 → [0, 1] in 𝐺 can be represented as a fuzzy subset 𝐴 ∈ 𝐸 in
𝐻, where 𝐴 ⊆ 𝑋 and 𝜇 𝐴 (𝑥) = max{𝜇 𝐸 (𝑥, 𝑦) | 𝑦 ∈ 𝑋 }.
Thus, a Fuzzy Graph is a Fuzzy Hypergraph where each edge connects at most two vertices.
A Hypergraph 𝐻 ∗ = (𝑋, 𝐸) is a special case of a Fuzzy Hypergraph 𝐻 = (𝑋, 𝐸), where:
• Each edge 𝐴 ∈ 𝐸 in 𝐻 ∗ is a crisp subset of 𝑋, corresponding to a fuzzy edge in 𝐻 with 𝜇 𝐴 (𝑥) ∈ {0, 1}
for all 𝑥 ∈ 𝑋.
• The membership function of each fuzzy edge 𝐴 in 𝐻 reduces to an indicator function, 𝜇 𝐴 (𝑥) = 1 if 𝑥 ∈ 𝐴,
and 𝜇 𝐴 (𝑥) = 0 otherwise.
Hence, a Hypergraph is a Fuzzy Hypergraph where all edges are crisp subsets. □
Definition 4.22 (Fuzzy incidence matrix). The fuzzy incidence matrix 𝐻 𝑓 ∈ R𝑛×𝑚 of the fuzzy hypergraph 𝐻
is defined by:
(𝐻 𝑓 )𝑖 𝑗 = 𝜇 𝐴 𝑗 (𝑥𝑖 ),
where 𝑥𝑖 ∈ 𝑋 and 𝐴 𝑗 ∈ 𝐸.
The fuzzy degree of a vertex 𝑥𝑖 ∈ 𝑋 is defined as:

𝑑(𝑥𝑖) = Σ_{𝑗=1}^{𝑚} (𝐻𝑓)𝑖𝑗 𝑤𝑗,

where 𝑤𝑗 is the weight of fuzzy hyperedge 𝐴𝑗.

The fuzzy degree of a hyperedge 𝐴𝑗 ∈ 𝐸 is defined as:

𝛿(𝐴𝑗) = Σ_{𝑖=1}^{𝑛} (𝐻𝑓)𝑖𝑗.

Let 𝐷 𝑉 ∈ R𝑛×𝑛 and 𝐷 𝐸 ∈ R𝑚×𝑚 be the diagonal matrices of fuzzy vertex degrees and fuzzy hyperedge
degrees, respectively:
(𝐷 𝑉 )𝑖𝑖 = 𝑑 (𝑥𝑖 ), (𝐷 𝐸 ) 𝑗 𝑗 = 𝛿( 𝐴 𝑗 ).
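These quantities can be computed directly from a membership table. The short NumPy sketch below, with values chosen purely for illustration, builds 𝐻𝑓 and the diagonal fuzzy degree matrices of Definition 4.22:

import numpy as np

# Rows of H_f are vertices x_1..x_3, columns are fuzzy hyperedges A_1, A_2.
H_f = np.array([[0.9, 0.0],
                [0.4, 0.7],
                [0.0, 0.2]])
w = np.array([1.0, 2.0])            # weights w_j of the fuzzy hyperedges

d_vertex   = H_f @ w                 # d(x_i)     = sum_j (H_f)_ij * w_j
delta_edge = H_f.sum(axis=0)         # delta(A_j) = sum_i (H_f)_ij

D_V = np.diag(d_vertex)              # diagonal fuzzy vertex-degree matrix
D_E = np.diag(delta_edge)            # diagonal fuzzy hyperedge-degree matrix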
Theorem 4.23. The fuzzy incidence matrix 𝐻 𝑓 can represent both a Fuzzy Hypergraph and a Hypergraph as
special cases.

Proof. Let 𝐻 = (𝑋, 𝐸) be a Fuzzy Hypergraph, where 𝑋 = {𝑥1 , 𝑥2 , . . . , 𝑥 𝑛 } is the set of vertices and 𝐸 =
{ 𝐴1 , 𝐴2 , . . . , 𝐴𝑚 } is the fuzzy edge set. Each fuzzy edge 𝐴 𝑗 is defined by a membership function 𝜇 𝐴 𝑗 : 𝑋 →
[0, 1]. The fuzzy incidence matrix 𝐻 𝑓 ∈ R𝑛×𝑚 is defined as:

(𝐻 𝑓 )𝑖 𝑗 = 𝜇 𝐴 𝑗 (𝑥𝑖 ),

where 𝜇 𝐴 𝑗 (𝑥𝑖 ) ∈ [0, 1] represents the degree of membership of vertex 𝑥𝑖 in the fuzzy edge 𝐴 𝑗 .
The rows of 𝐻 𝑓 correspond to the vertices 𝑥𝑖 ∈ 𝑋, and the columns correspond to the fuzzy edges
𝐴 𝑗 ∈ 𝐸. The support of each fuzzy edge 𝐴 𝑗 can be recovered as:

supp( 𝐴 𝑗 ) = {𝑥𝑖 ∈ 𝑋 | (𝐻 𝑓 )𝑖 𝑗 > 0}.

The vertex degrees 𝑑 (𝑥𝑖 ) and hyperedge degrees 𝛿( 𝐴 𝑗 ) are defined in terms of 𝐻 𝑓 , as shown in the definition of
the fuzzy incidence matrix. Thus, 𝐻 𝑓 fully encodes the structure of the Fuzzy Hypergraph.
A Hypergraph H = (𝑋, 𝐸) is a special case of a Fuzzy Hypergraph where all membership values are
binary, i.e., 𝜇 𝐴 𝑗 (𝑥𝑖 ) ∈ {0, 1}. In this case, the incidence matrix 𝐻 𝑓 reduces to the classical incidence matrix 𝐻,
where (𝐻)𝑖𝑗 = 1 if 𝑥𝑖 ∈ 𝐴𝑗, and (𝐻)𝑖𝑗 = 0 otherwise.
For binary 𝜇 𝐴 𝑗 (𝑥 𝑖 ), the support of each edge 𝐴 𝑗 is:

supp( 𝐴 𝑗 ) = {𝑥𝑖 ∈ 𝑋 | 𝜇 𝐴 𝑗 (𝑥𝑖 ) = 1},

which matches the standard definition of a hyperedge in a Hypergraph. The vertex and hyperedge degree defini-
tions also simplify to their classical counterparts:
𝑑(𝑥𝑖) = Σ_{𝑗=1}^{𝑚} (𝐻)𝑖𝑗, 𝛿(𝐴𝑗) = Σ_{𝑖=1}^{𝑛} (𝐻)𝑖𝑗.

The fuzzy incidence matrix 𝐻 𝑓 generalizes the classical incidence matrix 𝐻, allowing it to represent
both Fuzzy Hypergraphs and Hypergraphs. By setting 𝜇 𝐴 𝑗 (𝑥 𝑖 ) ∈ [0, 1], it represents a Fuzzy Hypergraph, and
by restricting 𝜇 𝐴 𝑗 (𝑥𝑖 ) to binary values, it represents a Hypergraph. □

Definition 4.24 (Fuzzy Hypergraph Laplacian). The fuzzy hypergraph Laplacian Δ 𝑓 is defined as:

Δ𝑓 = 𝐼 − 𝐷𝑉^{−1/2} 𝐻𝑓 𝑊 𝐷𝐸^{−1} 𝐻𝑓^⊤ 𝐷𝑉^{−1/2},

where 𝑊 = diag(𝑤 1 , 𝑤 2 , . . . , 𝑤 𝑚 ) is the diagonal matrix of fuzzy hyperedge weights, and 𝐼 is the identity
matrix.
Theorem 4.25. The Fuzzy Hypergraph Laplacian Δ 𝑓 generalizes the Hypergraph Laplacian 𝐿.

Proof. 1. Generalization Setup:
The fuzzy hypergraph Laplacian Δ 𝑓 is defined as:

Δ𝑓 = 𝐼 − 𝐷𝑉^{−1/2} 𝐻𝑓 𝑊 𝐷𝐸^{−1} 𝐻𝑓^⊤ 𝐷𝑉^{−1/2},

where 𝐻 𝑓 is the fuzzy incidence matrix, and 𝑊 is the diagonal matrix of fuzzy hyperedge weights. The hyper-
graph Laplacian 𝐿 is a special case of this construction, defined as:
𝐿 = 𝐼 − 𝐷𝑣^{−1/2} 𝐻 𝑊 𝐷𝑒^{−1} 𝐻^⊤ 𝐷𝑣^{−1/2}.

2. Connection Between 𝐻 and 𝐻 𝑓 :


The classical incidence matrix 𝐻 is binary, with entries:

𝐻𝑖𝑗 = 1 if 𝑣𝑖 ∈ 𝑒𝑗, and 𝐻𝑖𝑗 = 0 otherwise.

In contrast, the fuzzy incidence matrix 𝐻𝑓 allows entries (𝐻𝑓)𝑖𝑗 ∈ [0, 1], representing the degree of membership of vertex 𝑣𝑖 in hyperedge 𝑒𝑗. When 𝐻𝑓 is restricted to binary values, it coincides with 𝐻.
3. Generalization of Matrices:

• Vertex Degree Matrix: In the classical case, the diagonal vertex degree matrix 𝐷 𝑣 has entries:
(𝐷𝑣)𝑖𝑖 = Σ_{𝑗=1}^{𝑚} 𝐻𝑖𝑗 𝑤(𝑒𝑗).

In the fuzzy case, this generalizes to:

(𝐷𝑉)𝑖𝑖 = Σ_{𝑗=1}^{𝑚} (𝐻𝑓)𝑖𝑗 𝑤(𝑒𝑗),

allowing (𝐻𝑓)𝑖𝑗 to take non-binary values.

• Hyperedge Degree Matrix: Similarly, the hyperedge degree matrix 𝐷𝑒 generalizes to:

(𝐷𝐸)𝑗𝑗 = Σ_{𝑖=1}^{𝑛} (𝐻𝑓)𝑖𝑗.

4. Substitution in Δ 𝑓 :
Substituting the generalized 𝐻 𝑓 , 𝐷 𝑉 , and 𝐷 𝐸 into Δ 𝑓 , we recover the classical Laplacian 𝐿 when 𝐻 𝑓 is binary.
This shows that 𝐿 is a special case of Δ 𝑓 .
Since Δ 𝑓 reduces to 𝐿 under binary constraints on 𝐻 𝑓 and the associated matrices, Δ 𝑓 is a generalization
of 𝐿.
Thus, the Fuzzy Hypergraph Laplacian generalizes the Hypergraph Laplacian by extending the binary
incidence matrix to a fuzzy membership matrix, enabling the representation of partial or uncertain membership
relationships. □
Definition 4.26 (Fuzzy Hypergraph Neural Network). A Fuzzy Hypergraph Neural Network (F-HGNN) is a
neural network designed to operate on fuzzy hypergraphs. Given a fuzzy hypergraph 𝐻 = (𝑋, 𝐸) with fuzzy
incidence matrix 𝐻 𝑓 , vertex feature matrix 𝑋 ∈ R𝑛×𝑑 , and fuzzy hyperedge weight matrix 𝑊, the F-HGNN
performs convolution operations defined as:
 
𝑌 = 𝜎( 𝐷𝑉^{−1/2} 𝐻𝑓 𝑊 𝐷𝐸^{−1} 𝐻𝑓^⊤ 𝐷𝑉^{−1/2} 𝑋 Θ ),

where:

• 𝜎 is an activation function (e.g., ReLU).
• Θ ∈ R𝑑×𝑐 is the learnable weight matrix.
• 𝑌 ∈ R𝑛×𝑐 is the output feature matrix.
Definition 4.27 (Multi-Layer F-HGNN). For a multi-layer F-HGNN, the 𝑙-th layer’s output is computed as:
 
𝑋^(𝑙+1) = 𝜎( 𝐷𝑉^{−1/2} 𝐻𝑓 𝑊 𝐷𝐸^{−1} 𝐻𝑓^⊤ 𝐷𝑉^{−1/2} 𝑋^(𝑙) Θ^(𝑙) ),

where 𝑋 (0) is the input feature matrix, and Θ (𝑙) is the learnable weight matrix at layer 𝑙.
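With the same notation, one F-HGNN layer can be written in a few lines of NumPy. The sketch below is not a full training implementation: it assumes strictly positive fuzzy degrees (so the inverse square roots exist), uses ReLU as the activation 𝜎, and treats Θ as a given matrix rather than a learned parameter.

import numpy as np

def fhgnn_layer(H_f, w, X, Theta):
    # One F-HGNN convolution:
    # Y = sigma( D_V^{-1/2} H_f W D_E^{-1} H_f^T D_V^{-1/2} X Theta )
    W = np.diag(w)
    d_v = H_f @ w                          # fuzzy vertex degrees
    d_e = H_f.sum(axis=0)                  # fuzzy hyperedge degrees
    D_v_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
    D_e_inv = np.diag(1.0 / d_e)
    S = D_v_inv_sqrt @ H_f @ W @ D_e_inv @ H_f.T @ D_v_inv_sqrt
    return np.maximum(S @ X @ Theta, 0.0)  # ReLU as the activation sigma

Restricting 𝐻𝑓 to binary entries recovers the ordinary hypergraph convolution, in line with the reduction discussed below.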
Theorem 4.28. The Fuzzy Hypergraph Neural Network (F-HGNN) generalizes both the Hypergraph Neural
Network (HGNN) and the Fuzzy Graph Neural Network (F-GNN).
Proof. We will prove that:

1. When the fuzzy hypergraph reduces to a crisp hypergraph (i.e., membership functions 𝜇 𝐴 (𝑥) ∈ {0, 1}),
the F-HGNN reduces to the HGNN.
2. When the hyperedges are fuzzy edges connecting at most two vertices, the F-HGNN reduces to the F-
GNN.

Case 1: F-HGNN Reduces to HGNN


Assume that the fuzzy hypergraph 𝐻 = (𝑋, 𝐸) is crisp; that is, for all 𝐴 ∈ 𝐸 and 𝑥 ∈ 𝑋, the membership
functions 𝜇 𝐴 (𝑥) ∈ {0, 1}.
In this case, the fuzzy incidence matrix 𝐻 𝑓 becomes the standard incidence matrix 𝐻 of a hypergraph,
where (𝐻𝑓)𝑖𝑗 = 𝜇𝐴𝑗(𝑥𝑖) = 1 if 𝑥𝑖 ∈ 𝐴𝑗, and 0 otherwise.
Similarly, the fuzzy vertex degrees 𝑑 (𝑥𝑖 ) and hyperedge degrees 𝛿( 𝐴 𝑗 ) become the standard degrees in
a hypergraph.
Therefore, the F-HGNN convolution operation simplifies to:
 
𝑌 = 𝜎( 𝐷𝑉^{−1/2} 𝐻 𝑊 𝐷𝐸^{−1} 𝐻^⊤ 𝐷𝑉^{−1/2} 𝑋 Θ ),

which is exactly the convolution operation in the Hypergraph Neural Network (HGNN).
Case 2: F-HGNN Reduces to F-GNN
Assume that each fuzzy hyperedge 𝐴 𝑗 ∈ 𝐸 connects at most two vertices. This means that the supports
of 𝐴 𝑗 are such that | supp( 𝐴 𝑗 )| ≤ 2.
In this case, the fuzzy hypergraph reduces to a fuzzy graph, where edges are fuzzy and connect two
vertices. The fuzzy incidence matrix 𝐻 𝑓 becomes analogous to the adjacency representation in a fuzzy graph.
The convolution operation in F-HGNN becomes similar to that in Fuzzy Graph Neural Networks, where
messages are passed between connected vertices, weighted by the fuzzy membership degrees.
Therefore, the F-HGNN generalizes the F-GNN in this case.
Since F-HGNN reduces to HGNN when the fuzzy hypergraph is crisp, and reduces to F-GNN when
hyperedges connect at most two vertices, we conclude that F-HGNN generalizes both HGNN and F-GNN. □

Theorem 4.29. A Fuzzy Hypergraph Neural Network (F-HGNN) retains the structure of a Fuzzy Hypergraph.

Proof. The Fuzzy Hypergraph Neural Network (F-HGNN) operates on the fuzzy incidence matrix 𝐻 𝑓 of a Fuzzy
Hypergraph 𝐻 = (𝑋, 𝐸). All transformations, including convolution operations, rely on 𝐻 𝑓 , which encodes the
fuzzy edge membership functions 𝜇 𝐴 (𝑥) of 𝐴 ∈ 𝐸.
Since the operations preserve the relationships defined by 𝐻 𝑓 , the structure of the Fuzzy Hypergraph 𝐻
is inherently retained throughout the F-HGNN’s computations. □

Question 4.30. Is it possible to extend the concept by utilizing Neutrosophic Hypergraphs [13, 14, 19, 248, 249,
255] and Plithogenic Hypergraphs [258]?

5 Other SuperHyperGraph Concepts
In this section, we explore concepts related to SuperHyperGraphs that are not directly connected to the
topics discussed above.
5.1 Multilevel k-way Hypergraph Partitioning
Multilevel graph partitioning is an approach to divide a graph into smaller parts by iteratively coarsening,
partitioning, and refining it for optimization [81, 147, 216, 217]. In Hypergraph Theory, concepts such as Mul-
tilevel Hypergraph Partitioning [214, 215] and Multilevel k-way Hypergraph Partitioning[35, 218, 305, 317, 379]
are frequently studied. These concepts are well-known for their applications in fields like VLSI design. This
section considers the definition of Multilevel k-way n-SuperHyperGraph Partitioning.
Definition 5.1 (Multilevel 𝑘-way Hypergraph Partitioning). [218] Given a hypergraph 𝐻 = (𝑉, 𝐸), where 𝑉
is the set of vertices and 𝐸 is the set of hyperedges, and a positive integer 𝑘, the goal of multilevel 𝑘-way
hypergraph partitioning is to partition the vertex set 𝑉 into 𝑘 disjoint subsets {𝑉1 , 𝑉2 , . . . , 𝑉𝑘 }, such that:
1. The size of each subset satisfies the balancing constraint:
|𝑉|/(𝑘 · 𝑐) ≤ |𝑉𝑖| ≤ 𝑐 · |𝑉|/𝑘, ∀𝑖 ∈ {1, 2, . . . , 𝑘},
where 𝑐 ≥ 1 is the imbalance tolerance factor.
2. An objective function 𝑓 defined over the hyperedges 𝐸 is optimized. Common objectives include:
• Minimizing the hyperedge cut:
𝑓cut = Σ_{𝑒∈𝐸} (spanned partitions(𝑒) − 1),

where spanned partitions(𝑒) is the number of subsets 𝑉𝑖 spanned by the hyperedge 𝑒.


• Minimizing the sum of external degrees (SOED):
𝑓SOED = Σ_{𝑒∈𝐸} external degree(𝑒),

where external degree(𝑒) is the number of subsets 𝑉𝑖 that the hyperedge 𝑒 spans.
The multilevel 𝑘-way partitioning algorithm consists of three phases:
• Coarsening Phase: The hypergraph 𝐻 is iteratively coarsened into a series of smaller hypergraphs

𝐻 1 , 𝐻 2 , . . . , 𝐻ℓ

by merging vertices to reduce complexity.


• Initial Partitioning Phase: The smallest hypergraph 𝐻ℓ is directly partitioned into 𝑘 subsets using an
efficient partitioning algorithm.
• Uncoarsening Phase: The partitioning is progressively refined as it is projected back to the original
hypergraph 𝐻, using refinement algorithms such as FM or greedy approaches to optimize the objective
function while maintaining the balancing constraint.
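As a small illustration of the two objective functions in Definition 5.1, the following Python sketch evaluates a given 𝑘-way partition. External degree is read literally as the number of parts spanned by a hyperedge; other SOED conventions count only hyperedges spanning more than one part. All names and the example data are our own.

def spanned_partitions(edge, part):
    # 'part' maps each vertex to the index of the subset V_i containing it.
    return len({part[v] for v in edge})

def hyperedge_cut(edges, part):
    # f_cut = sum over hyperedges of (spanned partitions(e) - 1)
    return sum(spanned_partitions(e, part) - 1 for e in edges)

def soed(edges, part):
    # f_SOED = sum over hyperedges of the number of parts spanned by e
    return sum(spanned_partitions(e, part) for e in edges)

edges = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
part = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}
print(hyperedge_cut(edges, part), soed(edges, part))   # 1 4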
Definition 5.2 (Multilevel 𝑘-way 𝑛-SuperHyperGraph Partitioning). Given an 𝑛-SuperHyperGraph 𝐻 = (𝑉, 𝐸),
where 𝑉 is the set of supervertices and 𝐸 is the set of superedges, and a positive integer 𝑘, the goal of multilevel 𝑘-
way 𝑛-SuperHyperGraph Partitioning is to partition the supervertex set 𝑉 into 𝑘 disjoint subsets {𝑉1 , 𝑉2 , . . . , 𝑉𝑘 },
such that:
1. The size of each subset satisfies the balancing constraint:
|𝑉|/(𝑘 · 𝑐) ≤ |𝑉𝑖| ≤ 𝑐 · |𝑉|/𝑘, ∀𝑖 ∈ {1, 2, . . . , 𝑘},
where 𝑐 ≥ 1 is the imbalance tolerance factor.
2. An objective function 𝑓 defined over the superedges 𝐸 is optimized. Common objectives include:

• Minimizing the superedge cut:
𝑓cut = Σ_{𝑒∈𝐸} (spanned partitions(𝑒) − 1),

where spanned partitions(𝑒) is the number of subsets 𝑉𝑖 spanned by the superedge 𝑒.


• Minimizing the sum of external degrees (SOED):
𝑓SOED = Σ_{𝑒∈𝐸} external degree(𝑒),

where external degree(𝑒) is the number of subsets 𝑉𝑖 that the superedge 𝑒 spans.
The multilevel 𝑘-way partitioning algorithm consists of three phases:
• Coarsening Phase: The 𝑛-SuperHyperGraph 𝐻 is iteratively coarsened into a series of smaller 𝑛-SuperHyperGraphs

𝐻 1 , 𝐻 2 , . . . , 𝐻ℓ

by merging supervertices to reduce complexity.


• Initial Partitioning Phase: The smallest 𝑛-SuperHyperGraph 𝐻ℓ is directly partitioned into 𝑘 subsets
using an efficient partitioning algorithm.
• Uncoarsening Phase: The partitioning is progressively refined as it is projected back to the original 𝑛-
SuperHyperGraph 𝐻, using refinement algorithms to optimize the objective function while maintaining
the balancing constraint.
Theorem 5.3. The Multilevel 𝑘-way 𝑛-SuperHyperGraph Partitioning generalizes the Multilevel 𝑘-way Hyper-
graph Partitioning. Specifically, when 𝑛 = 1, the Multilevel 𝑘-way 𝑛-SuperHyperGraph Partitioning reduces to
the standard Multilevel 𝑘-way Hypergraph Partitioning.

Proof. To prove that the Multilevel 𝑘-way 𝑛-SuperHyperGraph Partitioning generalizes the Multilevel 𝑘-way
Hypergraph Partitioning, we need to show that when 𝑛 = 1, the definitions coincide.
1. At 𝑛 = 1, the 𝑛-SuperHyperGraph reduces to a Hypergraph:
• The 1-th iterated power set of 𝑉0 is P 1 (𝑉0 ) = P (𝑉0 ), the power set of 𝑉0 .
• However, in standard hypergraphs, the vertex set is 𝑉 = 𝑉0 , not 𝑉 ⊆ P (𝑉0 ). To align the definitions, we
consider only the elements of P 1 (𝑉0 ) that are singletons. That is, 𝑉 = 𝑉0 ⊆ P (𝑉0 ).
• The hyperedges 𝐸 ⊆ P (𝑉0 ), which matches the definition of hyperedges in a standard hypergraph.
2. Partitioning Definitions Align:
• The partitioning of supervertices 𝑉 into 𝑘 subsets {𝑉1 , 𝑉2 , . . . , 𝑉𝑘 } in the 𝑛-SuperHyperGraph becomes
the partitioning of vertices 𝑉0 when 𝑛 = 1.
• The balancing constraints and objective functions remain the same, as they are defined over 𝑉 and 𝐸,
which now correspond to 𝑉0 and 𝐸 of the hypergraph.
3. Algorithm Phases Correspond:
• Coarsening Phase: Merging supervertices in the 𝑛-SuperHyperGraph corresponds to merging vertices in
the hypergraph.
• Initial Partitioning Phase: Partitioning the smallest 𝑛-SuperHyperGraph aligns with partitioning the
coarsest hypergraph.
• Uncoarsening Phase: Refinement steps are analogous in both cases.
Therefore, when 𝑛 = 1, the Multilevel 𝑘-way 𝑛-SuperHyperGraph Partitioning reduces to the Multilevel
𝑘-way Hypergraph Partitioning, proving that the former generalizes the latter. □

5.2 Superhypergraph Random Walk
A Graph Random Walk is a discrete-time Markov chain where transitions between vertices follow edge-
based probabilities, modeling stochastic processes on graphs [83, 408]. These concepts have been extended to
hypergraphs, leading to the development of Hypergraph Random Walks[74,82,105,174,275]. In this subsection,
we extend Hypergraph Random Walks to the domain of Superhypergraphs. The related definitions and theorems
are provided below.
Definition 5.4 (Markov Chain). (cf.[30, 84, 157]) A Markov Chain is a mathematical framework used to model
stochastic processes where the future state depends solely on the current state and not on how it was reached.
Formally:

• State Space: The set of possible states is denoted by 𝑆 = {𝑠1 , 𝑠2 , . . . }, which may be finite or countable.
• Transition Rule: The process satisfies the property:

𝑃(𝑋𝑡+1 = 𝑠 𝑗 | 𝑋𝑡 = 𝑠𝑖 , 𝑋𝑡 −1 , . . . , 𝑋0 ) = 𝑃(𝑋𝑡+1 = 𝑠 𝑗 | 𝑋𝑡 = 𝑠𝑖 ).

• Transition Matrix: Probabilities of moving between states are organized in a matrix 𝑃 = [ 𝑝 𝑖 𝑗 ], with:
𝑝𝑖𝑗 = 𝑃(𝑋𝑡+1 = 𝑠𝑗 | 𝑋𝑡 = 𝑠𝑖), and Σ_{𝑗∈𝑆} 𝑝𝑖𝑗 = 1 ∀𝑖.

• Initial State Distribution: The process begins with probabilities 𝜋0 (𝑖) = 𝑃(𝑋0 = 𝑠𝑖 ).

Example 5.5 (Weather System (Markov Chain)). A simplified weather model predicts sunny (𝑆) or rainy (𝑅)
conditions based on current weather:

𝑃 = [ 0.9  0.1 ; 0.5  0.5 ].
If today is sunny, there is a 90% chance of sunshine tomorrow.
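The example can be checked numerically by propagating the state distribution with the transition matrix; a small NumPy sketch (illustrative only):

import numpy as np

P = np.array([[0.9, 0.1],     # row 0: today sunny -> P(sunny), P(rainy)
              [0.5, 0.5]])     # row 1: today rainy -> P(sunny), P(rainy)
pi = np.array([1.0, 0.0])      # start from a sunny day

for _ in range(10):
    pi = pi @ P                # pi_{t+1} = pi_t P
print(pi)                      # approaches the stationary distribution (5/6, 1/6)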
Definition 5.6 (Discrete-time Markov Chain). (cf.[88, 330, 423]) A Discrete-time Markov Chain (DTMC) is a
stochastic process {𝑋𝑡}_{𝑡=0}^{∞} defined on a discrete state space 𝑆 = {𝑠1, 𝑠2, . . . }, satisfying the Markov property,
which states that the probability of transitioning to the next state depends only on the current state and not on
the sequence of previous states. Formally:

𝑃(𝑋𝑡+1 = 𝑠 𝑗 | 𝑋𝑡 = 𝑠𝑖 , 𝑋𝑡 −1 = 𝑠 𝑘 , . . . , 𝑋0 = 𝑠 𝑚 ) = 𝑃(𝑋𝑡+1 = 𝑠 𝑗 | 𝑋𝑡 = 𝑠𝑖 ),
for all 𝑡 ≥ 0, 𝑠𝑖 , 𝑠 𝑗 ∈ 𝑆, and any sequence of states 𝑠 𝑚 , . . . , 𝑠 𝑘 , 𝑠𝑖 .
The dynamics of a DTMC are governed by a transition probability matrix 𝑃 = [ 𝑝 𝑖 𝑗 ], where

𝑝 𝑖 𝑗 = 𝑃(𝑋𝑡+1 = 𝑠 𝑗 | 𝑋𝑡 = 𝑠𝑖 ),
and Σ_{𝑗∈𝑆} 𝑝𝑖𝑗 = 1 for all 𝑖 ∈ 𝑆.

The initial distribution over the states is specified by a vector 𝜋0 , where 𝜋0 (𝑖) = 𝑃(𝑋0 = 𝑠𝑖 ).
Definition 5.7 (Hypergraph Random Walk). [73, 174] A Hypergraph Random Walk is a discrete-time Markov
process defined over the vertices of a hypergraph 𝐻 = (𝑉, 𝐸), with transition probabilities determined as follows:

1. Hyperedge Selection: Starting from the current vertex 𝑣 𝑡 ∈ 𝑉 at time 𝑡, a hyperedge 𝑒 ∈ 𝐸 containing 𝑣 𝑡
is selected with probability proportional to its weight 𝜔(𝑒) > 0. Formally, the selection probability is:

𝑃(𝑒 | 𝑣𝑡) = 𝜔(𝑒) / Σ_{𝑒′∋𝑣𝑡} 𝜔(𝑒′).

2. Vertex Selection within the Hyperedge: From the selected hyperedge 𝑒, a vertex 𝑣 𝑡+1 ∈ 𝑒 is chosen. This
selection can follow either:

a) Uniform Selection: Choose 𝑣 𝑡+1 uniformly at random from 𝑒, such that:

𝑃(𝑣𝑡+1 | 𝑒) = 1 / |𝑒|.

b) Weighted Selection: Choose 𝑣 𝑡+1 based on a vertex-specific weight 𝛾𝑒 (𝑣) > 0 within 𝑒, such that:

𝑃(𝑣𝑡+1 | 𝑒) = 𝛾𝑒(𝑣𝑡+1) / Σ_{𝑣∈𝑒} 𝛾𝑒(𝑣).

The full transition probability from 𝑣 𝑡 to 𝑣 𝑡+1 is then given by:


𝑃(𝑣𝑡+1 | 𝑣𝑡) = Σ_{𝑒∋𝑣𝑡,𝑣𝑡+1} 𝑃(𝑒 | 𝑣𝑡) · 𝑃(𝑣𝑡+1 | 𝑒).

This formulation generalizes random walks on graphs by accounting for hyperedges that can connect
more than two vertices.
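One step of such a walk can be sketched as follows, using uniform vertex selection within the chosen hyperedge; the data layout and names are assumptions made for the example:

import random

def hypergraph_rw_step(v, edges, omega):
    # edges : list of hyperedges, each a set of vertices
    # omega : list of positive hyperedge weights, aligned with edges
    incident = [i for i, e in enumerate(edges) if v in e]
    weights = [omega[i] for i in incident]
    # 1) pick a hyperedge containing v with probability proportional to omega(e)
    e_idx = random.choices(incident, weights=weights, k=1)[0]
    # 2) pick the next vertex uniformly at random within that hyperedge
    return random.choice(list(edges[e_idx]))

edges = [{1, 2, 3}, {2, 3, 4, 5}]
omega = [1.0, 2.0]
print(hypergraph_rw_step(2, edges, omega))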
Definition 5.8 (𝑛-SuperHyperGraph Random Walk). Let 𝐻 = (𝑉, 𝐸) be an 𝑛-SuperHyperGraph, where 𝑉 ⊆
P 𝑛 (𝑉0 ) is the set of supervertices, and 𝐸 ⊆ P 𝑛 (𝑉0 ) is the set of superedges. Here, P 𝑛 (𝑉0 ) denotes the 𝑛-th
iterated power set of the base set 𝑉0 .
An 𝑛-SuperHyperGraph Random Walk is a discrete-time stochastic process {𝑋𝑡}_{𝑡=0}^{∞} defined on the supervertices 𝑉, with transitions determined as follows:

1. Superedge Selection: Starting from the current supervertex 𝑣 𝑡 ∈ 𝑉 at time 𝑡, select a superedge 𝑒 ∈ 𝐸
containing 𝑣 𝑡 , with probability proportional to its weight 𝜔(𝑒) > 0:

𝑃(𝑒 | 𝑣𝑡) = 𝜔(𝑒) / Σ_{𝑒′∋𝑣𝑡} 𝜔(𝑒′).

2. Supervertex Selection within the Superedge: From the selected superedge 𝑒, select a supervertex 𝑣 𝑡+1 ∈ 𝑒
according to a probability distribution, which can be:
a) Uniform Selection: Choose 𝑣 𝑡+1 uniformly at random from 𝑒:

𝑃(𝑣𝑡+1 | 𝑒) = 1 / |𝑒|.

b) Weighted Selection: Choose 𝑣 𝑡+1 based on weights 𝛾𝑒 (𝑣) > 0:

𝑃(𝑣𝑡+1 | 𝑒) = 𝛾𝑒(𝑣𝑡+1) / Σ_{𝑣∈𝑒} 𝛾𝑒(𝑣).

The full transition probability from 𝑣 𝑡 to 𝑣 𝑡+1 is then:


𝑃(𝑣𝑡+1 | 𝑣𝑡) = Σ_{𝑒∋𝑣𝑡,𝑣𝑡+1} 𝑃(𝑒 | 𝑣𝑡) · 𝑃(𝑣𝑡+1 | 𝑒).

Theorem 5.9. The 𝑛-SuperHyperGraph Random Walk has the structure of an 𝑛-SuperHyperGraph.

Proof. Since the random walk is defined over supervertices 𝑉 ⊆ P 𝑛 (𝑉0 ) and utilizes superedges 𝐸 ⊆ P 𝑛 (𝑉0 )
for transitions, it inherently possesses the structure of an 𝑛-SuperHyperGraph. □
Corollary 5.10. The 𝑛-SuperHyperGraph Random Walk possesses the structure of a superhypergraph, hyper-
graph, and graph.

Proof. This follows directly from the above theorem. □


Theorem 5.11. The 𝑛-SuperHyperGraph Random Walk is a Discrete-time Markov Chain.

Proof. The process {𝑋𝑡 } satisfies the Markov property because the probability of transitioning to 𝑣 𝑡+1 depends
only on the current supervertex 𝑣 𝑡 and not on any previous supervertices 𝑣 𝑡 −1 , 𝑣 𝑡 −2 , . . .. The transition probabil-
ities 𝑃(𝑣 𝑡+1 | 𝑣 𝑡 ) are well-defined, and the process evolves in discrete time steps. Therefore, it is a Discrete-time
Markov Chain. □

Theorem 5.12. The 𝑛-SuperHyperGraph Random Walk generalizes the Hypergraph Random Walk.
Proof. When 𝑛 = 1, the 𝑛-SuperHyperGraph reduces to a standard hypergraph, and the 𝑛-SuperHyperGraph
Random Walk becomes equivalent to the Hypergraph Random Walk. Therefore, the 𝑛-SuperHyperGraph Ran-
dom Walk is a generalization of the Hypergraph Random Walk. □
Question 5.13. The concept of HyperRandom [149–151], which extends the idea of randomness, is well-known.
Can this be used to further extend the concept of Random Walk?
5.3 Superhypergraph Turán Problem
The Hypergraph Turán Problem [165, 219, 241] aims to determine the maximum number of edges in a
uniform hypergraph (cf.[184, 185, 203]) on 𝑛 vertices while avoiding a specific forbidden subhypergraph. This
concept is extended to superhypergraphs, and their characteristics are briefly examined. The relevant definitions
and theorems are presented below.
Definition 5.14 (Forbidden Graph). (cf.[106]) A forbidden graph 𝐹 is a graph that is not allowed as a subgraph
in a larger graph 𝐺. If 𝐺 contains 𝐹 as a subgraph, 𝐺 violates the specified constraints, often used in Turán-type
problems or graph property investigations.
Definition 5.15 (Hypergraph Turán Problem). [219] Let 𝐺 = (𝑉, 𝐸) be an 𝑟-uniform hypergraph, where 𝑉 is
the set of vertices and 𝐸 is the set of edges, with each edge being a subset of 𝑉 containing exactly 𝑟 vertices.
Let 𝐹 be any 𝑟-uniform hypergraph. A hypergraph 𝐺 is said to be 𝐹-free if 𝐺 does not contain 𝐹 as a
subhypergraph.
The Hypergraph Turán Number ex𝑟 (𝑛, 𝐹) is defined as the maximum number of edges in an 𝐹-free
𝑟-uniform hypergraph on 𝑛 vertices:

ex𝑟 (𝑛, 𝐹) = max{|𝐸 (𝐺)| : 𝐺 is an 𝐹-free 𝑟-uniform hypergraph with |𝑉 (𝐺)| = 𝑛}.

Furthermore, the Turán Density 𝜋(𝐹) of 𝐹 is given by:

𝜋(𝐹) = lim_{𝑛→∞} ex𝑟(𝑛, 𝐹) / (𝑛 choose 𝑟),

where (𝑛 choose 𝑟) denotes the number of all possible 𝑟-element subsets of 𝑛 vertices.
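For very small parameters the Turán number can be found by exhaustive search; the Python sketch below (exponential in the number of candidate edges, so purely illustrative, with names of our own choosing) tests 𝐹-freeness by trying every injective embedding of 𝐹:

from itertools import combinations, permutations

def contains(G_edges, G_verts, F_edges, F_verts):
    # Does G contain F as a subhypergraph? Try every injective vertex map.
    for image in permutations(G_verts, len(F_verts)):
        phi = dict(zip(F_verts, image))
        if all(frozenset(phi[x] for x in e) in G_edges for e in F_edges):
            return True
    return False

def turan_number(n, r, F_edges, F_verts):
    verts = tuple(range(n))
    candidates = [frozenset(e) for e in combinations(verts, r)]
    for m in range(len(candidates), 0, -1):          # largest edge count first
        for subset in combinations(candidates, m):
            if not contains(set(subset), verts, F_edges, F_verts):
                return m
    return 0

# ex_3(5, K_4^(3)): forbid the complete 3-uniform hypergraph on four vertices.
F_verts = tuple(range(4))
F_edges = [frozenset(e) for e in combinations(F_verts, 3)]
print(turan_number(5, 3, F_edges, F_verts))          # prints 7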
Definition 5.16 (𝑟-Uniform 𝑛-SuperHyperGraph). An 𝑛-SuperHyperGraph 𝐻 = (𝑉, 𝐸) is called 𝑟-uniform if
every superedge 𝑒 ∈ 𝐸 contains exactly 𝑟 supervertices, i.e., 𝑒 ⊆ 𝑉 and |𝑒| = 𝑟.
Definition 5.17 (𝑛-SuperHyperGraph Turán Problem). Let 𝐹 be an 𝑟-uniform 𝑛-SuperHyperGraph.
An 𝑟-uniform 𝑛-SuperHyperGraph 𝐺 = (𝑉, 𝐸) is said to be 𝐹-free if 𝐺 does not contain 𝐹 as a subgraph.
The 𝑛-SuperHyperGraph Turán Number ex𝑟𝑛 (𝑁, 𝐹) is defined as the maximum number of edges in an
𝐹-free 𝑟-uniform 𝑛-SuperHyperGraph 𝐺 with |𝑉 (𝐺)| = 𝑁:

ex𝑟𝑛 (𝑁, 𝐹) = max {|𝐸 (𝐺)| : 𝐺 is an 𝐹-free 𝑟-uniform 𝑛-SuperHyperGraph with |𝑉 (𝐺)| = 𝑁 } .

Furthermore, the 𝑛-SuperHyperGraph Turán Density 𝜋𝑛(𝐹) is defined as:

𝜋𝑛(𝐹) = lim_{𝑁→∞} ex𝑟𝑛(𝑁, 𝐹) / (𝑁 choose 𝑟),

where (𝑁 choose 𝑟) denotes the number of all possible 𝑟-element subsets of 𝑁 supervertices.
Theorem 5.18. An 𝑟-uniform hypergraph is a special case of an 𝑟-uniform 𝑛-SuperHyperGraph when 𝑛 = 0.

Proof. When 𝑛 = 0, we have P 0 (𝑉0 ) = 𝑉0 . Thus, the supervertices 𝑉 are exactly the base vertices 𝑉0 . The
superedges 𝐸 are subsets of 𝑉 containing exactly 𝑟 supervertices. Therefore, an 𝑟-uniform 0-SuperHyperGraph
𝐻 = (𝑉, 𝐸) is identical to an 𝑟-uniform hypergraph on the vertex set 𝑉0 . □

Theorem 5.19. Every 𝑟-uniform hypergraph can be represented as an 𝑟-uniform 𝑛-SuperHyperGraph for any
𝑛 ≥ 0.
Proof. Given an 𝑟-uniform hypergraph 𝐻 = (𝑉0 , 𝐸), we can construct an 𝑟-uniform 𝑛-SuperHyperGraph 𝐻 ′ =
(𝑉, 𝐸 ′ ) by setting 𝑉 = 𝑉0 ⊆ P 𝑛 (𝑉0 ) and 𝐸 ′ = 𝐸. Since the supervertices 𝑉 are the original vertices 𝑉0 , and the
superedges 𝐸 ′ are the same as 𝐸, 𝐻 ′ is an 𝑟-uniform 𝑛-SuperHyperGraph equivalent to 𝐻. □
Theorem 5.20. The 𝑛-SuperHyperGraph Turán Problem generalizes the Hypergraph Turán Problem.

Proof. When 𝑛 = 0, the 𝑛-SuperHyperGraph Turán Problem reduces to the classical Hypergraph Turán Problem
because the supervertices are the original vertices 𝑉0 , and the superedges are subsets of 𝑉0 of size 𝑟. There-
fore, the 𝑛-SuperHyperGraph Turán Problem includes the Hypergraph Turán Problem as a special case, thus
generalizing it. □

Theorem 5.21. For any 𝑟-uniform hypergraph 𝐹, the Hypergraph Turán Number ex𝑟 (𝑁, 𝐹) is less than or
equal to the 𝑛-SuperHyperGraph Turán Number ex𝑟𝑛 (𝑁, 𝐹 ′ ), where 𝐹 ′ is the corresponding 𝑟-uniform 𝑛-
SuperHyperGraph constructed from 𝐹.

Proof. Since every 𝑟-uniform hypergraph 𝐺 can be viewed as an 𝑟-uniform 𝑛-SuperHyperGraph 𝐺 ′ by treating
vertices as supervertices (as per the previous theorem), any 𝐹-free 𝑟-uniform hypergraph 𝐺 corresponds to an
𝐹 ′ -free 𝑟-uniform 𝑛-SuperHyperGraph 𝐺 ′ . However, the set of 𝑟-uniform 𝑛-SuperHyperGraphs includes more
general structures due to the hierarchical nature of supervertices. Therefore, there may exist 𝐹 ′ -free 𝑟-uniform
𝑛-SuperHyperGraphs with more edges than any 𝐹-free 𝑟-uniform hypergraph. Thus,

ex𝑟 (𝑁, 𝐹) ≤ ex𝑟𝑛 (𝑁, 𝐹 ′ ).

Corollary 5.22. The Turán Density of an 𝑟-uniform hypergraph 𝐹 satisfies:

𝜋(𝐹) ≤ 𝜋 𝑛 (𝐹 ′ ),

where 𝐹 ′ is the corresponding 𝑟-uniform 𝑛-SuperHyperGraph constructed from 𝐹.

Proof. This follows directly from the previous theorem and the definitions of Turán Densities:

𝜋(𝐹) = lim_{𝑁→∞} ex𝑟(𝑁, 𝐹) / (𝑁 choose 𝑟) ≤ lim_{𝑁→∞} ex𝑟𝑛(𝑁, 𝐹′) / (𝑁 choose 𝑟) = 𝜋𝑛(𝐹′).

Theorem 5.23. An 𝑛-SuperHyperGraph Turán Number can be strictly greater than the corresponding Hyper-
graph Turán Number.

Proof. Due to the additional complexity and hierarchical structure of supervertices in an 𝑛-SuperHyperGraph,
there are more possibilities for constructing 𝐹-free 𝑟-uniform 𝑛-SuperHyperGraphs with more edges than pos-
sible in the standard hypergraph case. Therefore, for certain 𝐹 and sufficiently large 𝑛, we have:

ex𝑟 (𝑁, 𝐹) < ex𝑟𝑛 (𝑁, 𝐹 ′ ).

5.4 Binary decision 𝑛-superhypertree
A Binary Decision Hypertree is a rooted acyclic graph representing Boolean function evaluations, branch-
ing on variables with outputs at leaves [168, 169]. This concept is extended to the superhyper framework. The
definitions and theorems are provided below.
Definition 5.24 (hyperdiagram). (cf. [168])
A hyperdiagram on a finite set 𝐺 = {𝑥1, 𝑥2, . . . , 𝑥𝑛} is an ordered pair 𝐻 = (𝐺, {𝐸𝑘}_{𝑘=1}^{𝑚}) where:
• For each 1 ≤ 𝑘 ≤ 𝑚, 𝐸 𝑘 ⊆ 𝐺 and |𝐸 𝑘 | ≥ 1.
Definition 5.25 (𝑛-Superhyperdiagram). Let 𝑉0 be a finite set of base elements. Define the 𝑛-th iterated power
set of 𝑉0 recursively as:

P^0(𝑉0) = 𝑉0,  P^{𝑘+1}(𝑉0) = P(P^𝑘(𝑉0)),

where P ( 𝐴) denotes the power set of set 𝐴.


An 𝑛-Superhyperdiagram is an ordered pair 𝐻 = (𝑉, {𝐸𝑘}_{𝑘=1}^{𝑚}) where:
• 𝑉 ⊆ P 𝑛 (𝑉0 ) is the set of supervertices.
• For each 1 ≤ 𝑘 ≤ 𝑚, 𝐸 𝑘 ⊆ 𝑉 is called a superedge (or hyperedge), with |𝐸 𝑘 | ≥ 1.
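The iterated power set underlying the supervertices in Definition 5.25 can be generated directly for very small base sets (its size grows as a tower of exponentials); a short Python sketch, with names of our own choosing:

from itertools import combinations

def power_set(elements):
    elements = list(elements)
    return [frozenset(c) for r in range(len(elements) + 1)
            for c in combinations(elements, r)]

def iterated_power_set(V0, n):
    level = list(V0)                 # P^0(V0) = V0
    for _ in range(n):
        level = power_set(level)     # P^(k+1)(V0) = P(P^k(V0))
    return level

V0 = ["a", "b"]
print(len(iterated_power_set(V0, 1)))   # 4  = |P(V0)|
print(len(iterated_power_set(V0, 2)))   # 16 = |P(P(V0))|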
Theorem 5.26. An 𝑛-Superhyperdiagram generalizes the hyperdiagram.

Proof. When 𝑛 = 0, the 𝑛-th iterated power set is P 0 (𝑉0 ) = 𝑉0 . Therefore, the supervertices 𝑉 ⊆ P 0 (𝑉0 ) = 𝑉0
are simply elements of the base set 𝑉0 .
Thus, when 𝑛 = 0, an 𝑛-Superhyperdiagram 𝐻 = (𝑉, {𝐸𝑘}_{𝑘=1}^{𝑚}) reduces to a hyperdiagram on 𝑉0, since 𝑉 = 𝑉0 and each 𝐸𝑘 ⊆ 𝑉.
Therefore, the concept of a hyperdiagram is a special case of an 𝑛-Superhyperdiagram when 𝑛 = 0.
Thus, 𝑛-Superhyperdiagrams generalize hyperdiagrams. □
Definition 5.27 (Binary Decision Hypertree). (cf. [169])
A Binary Decision Hypertree is a rooted tree constructed from a Boolean function 𝑓 where:
• Each node corresponds to a variable 𝑥𝑖 ∈ 𝑉0 .
• Each internal node has two outgoing edges representing 𝑥𝑖 = 1 and 𝑥𝑖 = 0.
• Leaves are labeled with the output of 𝑓 .
Definition 5.28 (Binary Decision 𝑛-Superhypertree). Let 𝑉0 be a finite set of variables. Consider a Boolean
function 𝑓 defined on 𝑉0 . A Binary Decision 𝑛-Superhypertree (BD𝑛SHT) is a rooted tree constructed as fol-
lows:
• Each node represents a supervertex 𝑣 ∈ P 𝑛 (𝑉0 ).
• Internal nodes are associated with testing a variable 𝑥𝑖 ∈ 𝑉0 .
• Each internal node has two outgoing edges:
– A solid directed edge representing the assignment 𝑥𝑖 = 1.
– A dashed directed edge representing the assignment 𝑥𝑖 = 0.
• Leaves are labeled with the output value of the function 𝑓 corresponding to the path from the root to the
leaf.
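Evaluation of such a tree simply follows the solid (𝑥𝑖 = 1) or dashed (𝑥𝑖 = 0) branch at each internal node until a leaf is reached. A tiny Python sketch for the two-variable function 𝑓(𝑥1, 𝑥2) = 𝑥1 ∧ 𝑥2 (purely illustrative; the representation is an assumption of the example):

# A node is either a leaf value (0 or 1) or a triple
# (variable, subtree_for_1, subtree_for_0), mirroring the solid/dashed edges.
AND_TREE = ("x1", ("x2", 1, 0), 0)

def evaluate(node, assignment):
    if not isinstance(node, tuple):
        return node                          # leaf: output of f on this path
    var, if_one, if_zero = node
    return evaluate(if_one if assignment[var] == 1 else if_zero, assignment)

print(evaluate(AND_TREE, {"x1": 1, "x2": 1}))   # 1
print(evaluate(AND_TREE, {"x1": 1, "x2": 0}))   # 0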
Theorem 5.29. A binary decision 𝑛-superhypertree generalizes the binary decision hypertree.

Proof. When 𝑛 = 0, the 𝑛-th iterated power set is P 0 (𝑉0 ) = 𝑉0 , so the supervertices are simply the base variables
𝑉0 .
In a binary decision hypertree, nodes correspond to variables 𝑥𝑖 ∈ 𝑉0 , and the tree represents the evalu-
ation of the Boolean function 𝑓 by branching on the assignments of these variables.
Therefore, when 𝑛 = 0, the binary decision 𝑛-superhypertree reduces to the binary decision hypertree.
Thus, the binary decision 𝑛-superhypertree generalizes the binary decision hypertree. □

6 Future Directions of this Research
This section highlights potential future directions for this research. A key objective is the practical im-
plementation and experimental validation of the SuperHyperGraph Neural Network (SHGNN). Through com-
putational experiments, we hope to discover related concepts that make the SHGNN more suitable for practical
applications.
Another promising avenue is the exploration of extensions to SuperHyperGraph Neural Networks in-
corporating Fuzzy sets [306, 430–437] and Neutrosophic sets [126, 133, 134, 332–337, 353, 356, 359]. This
includes developing and validating frameworks such as Fuzzy SuperHyperGraph Neural Networks and Neu-
trosophic SuperHyperGraph Neural Networks. These frameworks aim to generalize Fuzzy Neural Networks
[176, 242, 250, 365, 366] and Neutrosophic Neural Networks [194] by integrating the structural advantages of
hypergraphs, laying the groundwork for advanced representations and computations. Additionally, future re-
search could explore considerations involving Directed SuperHyperGraphs and their applications [126].
In addition to the concepts mentioned above, numerous frameworks for handling uncertainty, such as
Soft Set (Soft Graph) [127, 254, 266], hypersoft set[2, 119, 131, 180, 314, 323, 344], Rough Set (Rough Graph)
[288–293], Hyperfuzzy set[126,143,207,362], and Plithogenic Set (Plithogenic Graph) [121,132,338,339,357],
are well-known in the literature. Future research could explore how these concepts behave when applied to Graph
Neural Networks, Hypergraph Neural Networks, and SuperHyperGraph Neural Networks. Such investigations
could also shed light on whether these extensions result in more efficient and effective networks. This area holds
significant potential for advancing understanding and innovation.

Funding
This research received no external funding.

Acknowledgments
After publishing this paper as a preprint, we received valuable feedback on improving the manuscript
from Dr. Azza A. Taha. We would like to take this opportunity to express our heartfelt gratitude.
We also humbly extend our deepest thanks to everyone who has provided invaluable support, enabling
the successful completion of this work. Additionally, we sincerely appreciate all readers who have taken the time
to engage with this study. Finally, we extend our utmost respect and gratitude to the authors of the references
cited in this paper. Your significant contributions are greatly acknowledged.

Data Availability
This paper does not involve any data analysis.

Ethical Approval
This article does not involve any research with human participants or animals.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.

Disclaimer
This study primarily focuses on theoretical aspects, and its application to practical scenarios has not yet
been validated. Future research may involve empirical testing and refinement of the proposed methods. The
authors have made every effort to ensure that all references cited in this paper are accurate and appropriately
attributed. However, unintentional errors or omissions may occur. The authors bear no legal responsibility for
inaccuracies in external sources, and readers are encouraged to verify the information provided in the references
independently. Furthermore, the interpretations and opinions expressed in this paper are solely those of the
authors and do not necessarily reflect the views of any affiliated institutions.

References
[1] Scott Aaronson and Alex Arkhipov. The computational complexity of linear optics. In Proceedings of the forty-third
annual ACM symposium on Theory of computing, pages 333–342, 2011.
[2] Mujahid Abbas, Ghulam Murtaza, and Florentin Smarandache. Basic operations on hypersoft sets and hypersoft point.
Infinite Study, 2020.
[3] Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural
networks with random node initialization. arXiv preprint arXiv:2010.01179, 2020.

[4] Mohamed Abdel-Basset, Mai Mohamed, Mohamed Elhoseny, Le Hoang Son, Francisco Chiclana, and Abdel Nasser H.
Zaied. Cosine similarity measures of bipolar neutrosophic set for diagnosis of bipolar disorder diseases. Artificial
intelligence in medicine, 101:101735, 2019.
[5] Amina Adadi and Mohammed Berrada. Peeking inside the black-box: A survey on explainable artificial intelligence
(xai). IEEE Access, 6:52138–52160, 2018.
[6] Tero Aittokallio and Benno Schwikowski. Graph-based methods for analysing networks in cell biology. Briefings in
bioinformatics, 7(3):243–255, 2006.
[7] D. Ajay, S. John Borg, and P. Chellamani. Domination in pythagorean neutrosophic graphs with an application in fuzzy
intelligent decision making. In International Conference on Intelligent and Fuzzy Systems, pages 667–675, Cham, July
2022. Springer International Publishing.
[8] Muhammad Akram. Bipolar fuzzy graphs. Information sciences, 181(24):5548–5564, 2011.
[9] Muhammad Akram and Noura Omair Alshehri. Intuitionistic fuzzy cycles and intuitionistic fuzzy trees. The Scientific
World Journal, 2014(1):305836, 2014.
[10] Muhammad Akram and Bijan Davvaz. Strong intuitionistic fuzzy graphs. Filomat, 26(1):177–196, 2012.
[11] Muhammad Akram and Wieslaw A. Dudek. Intuitionistic fuzzy hypergraphs with applications. Inf. Sci., 218:182–193,
2013.
[12] Muhammad Akram, MG Karunambigai, K Palanivel, and S Sivasankar. Balanced bipolar fuzzy graphs. Journal of
advanced research in pure mathematics, 6(4):58–71, 2014.
[13] Muhammad Akram and Anam Luqman. Bipolar neutrosophic hypergraphs with applications. Journal of Intelligent &
Fuzzy Systems, 33(3):1699–1713, 2017.
[14] Muhammad Akram and Anam Luqman. Intuitionistic single-valued neutrosophic hypergraphs. OPSEARCH, 54:799
– 815, 2017.
[15] Muhammad Akram and Anam Luqman. Fuzzy hypergraphs and related extensions. In Studies in Fuzziness and Soft
Computing, 2020.
[16] Muhammad Akram and Anam Luqman. Hypergraphs for interval-valued structures. In Fuzzy Hypergraphs and Related
Extensions, pages 125–154. Springer, 2020.
[17] Muhammad Akram, Hafsa M Malik, Sundas Shahzadi, and Florentin Smarandache. Neutrosophic soft rough graphs
with application. Axioms, 7(1):14, 2018.
[18] Muhammad Akram, Danish Saleem, and Talal Al-Hawary. Spherical fuzzy graphs with application to decision-making.
Mathematical and Computational Applications, 25(1):8, 2020.
[19] Muhammad Akram, Sundas Shahzadi, and AB Saeid. Single-valued neutrosophic hypergraphs. TWMS Journal of
Applied and Engineering Mathematics, 8(1):122–135, 2018.
[20] Qeethara Al-Shayea. Artificial neural networks in medical diagnosis. International Journal of Research Publication
and Reviews, 2024.
[21] Md Tanvir Alam, Chowdhury Farhan Ahmed, Md Samiullah, and Carson K Leung. Mining frequent patterns from
hypergraph databases. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 3–15. Springer,
2021.
[22] Md. Tanvir Alam, Chowdhury Farhan Ahmed, Md. Samiullah, and Carson Kai-Sang Leung. Mining frequent patterns
from hypergraph databases. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2021.
[23] Eric Alcaide. Improving graph property prediction with generalized readout functions. arXiv preprint
arXiv:2009.09919, 2020.
[24] Ebrahem Ateatullah Algehyne, Muhammad Lawan Jibril, Naseh A Algehainy, Osama Abdulaziz Alamri, and Abdullah
Khaled J Alzahrani. Fuzzy neural network expert system with an improved gini index random forest-based feature
importance measure algorithm for early diagnosis of breast cancer in saudi arabia. Big Data Cogn. Comput., 6:13,
2022.
[25] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. arXiv preprint
arXiv:2006.05205, 2020.
[26] Mohammed Alqahtani, M Kaviyarasu, Anas Al-Masarwah, and M Rajeshwari. Application of complex neutrosophic
graphs in hospital infrastructure design. Mathematics, 12(5):719, 2024.
[27] Lilas Alrahis and Ozgur Sinanoglu. Graph neural networks for hardware vulnerability analysis-can you trust your gnn?
In 2023 IEEE 41st VLSI Test Symposium (VTS), pages 1–4. IEEE, 2023.
[28] Mohammed Alshikho, Maissam Jdid, and Said Broumi. Artificial intelligence and neutrosophic machine learning in
the diagnosis and detection of covid 19. Journal Prospects for Applied Mathematics and Data Analysis, 1(2), 2023.

[29] Farshad Andam, Ezzatollah Asgharizadeh, and Mohammadreza Taghizadeh-Yazdi. Designing a model for health-
care services supply chain performance evaluation using neutrosophic multiple attribute decision-making technique.
International Journal of Nonlinear Analysis and Applications, 15(9):307–318, 2024.
[30] Christophe Andrieu, A. Doucet, and Roman Holenstein. Particle markov chain monte carlo methods. Journal of the
Royal Statistical Society: Series B (Statistical Methodology), 72, 2010.
[31] Renzo Angles and Claudio Gutierrez. Survey of graph database models. ACM Computing Surveys (CSUR), 40(1):1–39,
2008.
[32] Howard Anton. Elementary linear algebra. 1970.
[33] Sanjeev Arora and Boaz Barak. Computational complexity: a modern approach. Cambridge University Press, 2009.
[34] Alejandro Barredo Arrieta, Natalia Dı́az Rodrı́guez, Javier Del Ser, Adrien Bennetot, Siham Tabik, A. Barbado, Sal-
vador Garcı́a, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera. Explainable
artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Inf. Fusion,
58:82–115, 2019.
[35] Cevdet Aykanat, B Barla Cambazoglu, and Bora Uçar. Multi-level direct k-way hypergraph partitioning with multiple
constraints and fixed vertices. Journal of Parallel and Distributed Computing, 68(5):609–625, 2008.
[36] Jimmy Lei Ba. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[37] Elisa C. Baek, Ryan Hyon, Karina López, Emily S. Finn, M. A. Porter, and Carolyn Parkinson. In-degree centrality in
a social network is linked to coordinated neural activity. Nature Communications, 13, 2022.
[38] Song Bai, Feihu Zhang, and Philip HS Torr. Hypergraph convolution and hypergraph attention. Pattern Recognition,
110:107637, 2021.
[39] Wenhui Bai, Juanjuan Ding, and Chao Zhang. Dual hesitant fuzzy graphs with applications to multi-attribute decision
making. International Journal of Cognitive Computing in Engineering, 1:18–26, 2020.
[40] Rassul Bairamkulov and Eby Friedman. Graphs in vlsi circuits and systems. In Graphs in VLSI, pages 59–100.
Springer, 2022.
[41] Rassul Bairamkulov and Eby G Friedman. Graphs in VLSI. Springer, 2023.
[42] Alexandru T Balaban. Applications of graph theory in chemistry. Journal of chemical information and computer
sciences, 25(3):334–343, 1985.
[43] Anuradha Banerjee, Basav Roychoudhury, and Bidyut Jyoti Gogoi. Determining rank in the market using a neutro-
sophic decision support system. Journal of Business Analytics, 3:138 – 157, 2020.
[44] Chaity Banerjee, Tathagata Mukherjee, and Eduardo Pasiliao Jr. The multi-phase relu activation function. In Proceed-
ings of the 2020 ACM Southeast Conference, pages 239–242, 2020.
[45] Jørgen Bang-Jensen and Gregory Z Gutin. Digraphs: theory, algorithms and applications. Springer Science &
Business Media, 2008.
[46] Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks.
Advances in neural information processing systems, 30, 2017.
[47] Veysi Başhan, Hakan Demirel, and Muhammet Gul. An fmea-based topsis approach under single valued neutrosophic
sets for maritime risk evaluation: the case of ship navigation safety. Soft Computing, 24(24):18749–18764, 2020.
[48] Ilyes Batatia, D’avid P’eter Kov’acs, Gregor N. C. Simm, Christoph Ortner, and Gábor Csányi. Mace: Higher order
equivariant message passing neural networks for fast and accurate force fields. ArXiv, abs/2206.07697, 2022.
[49] Kornelia M. Batko and Andrzej Ślęzak. The use of big data analytics in healthcare. Journal of Big Data, 9, 2022.
[50] G. Baudat and Fatiha Anouar. Feature vector selection and projection using kernels. Neurocomputing, 55:21–38, 2003.
[51] Claude Berge. Hypergraphs: combinatorics of finite sets, volume 45. Elsevier, 1984.
[52] Leonid S Bershtein and Alexander V Bozhenyuk. Fuzzy graphs and fuzzy hypergraphs. In Encyclopedia of Artificial
Intelligence, pages 704–709. IGI Global, 2009.
[53] Anushree Bhattacharya and Madhumangal Pal. A fuzzy graph theory approach to the facility location problem: A case
study in the indian banking system. Mathematics, 11(13):2992, 2023.
[54] Fanghui Bi, Tiantian He, Yuxuan Xie, and Xin Luo. Two-stream graph convolutional network-incorporated latent
feature analysis. IEEE Transactions on Services Computing, 16:3027–3042, 2023.
[55] Jakub Binkowski, Albert Sawczyn, Denis Janiak, Piotr Bielak, and Tomasz Kajdanowicz. Graph-level representations
using ensemble-based readout functions. In International Conference on Computational Science, pages 393–405.
Springer, 2023.

[56] Pranab Biswas, Surapati Pramanik, and Bibhas Chandra Giri. Single valued bipolar pentapartitioned neutrosophic set
and its application in madm strategy. 2022.
[57] Bela Bollobas. Modern graph theory. In Graduate Texts in Mathematics, 2002.
[58] John Adrian Bondy, Uppaluri Siva Ramachandra Murty, et al. Graph theory with applications, volume 290. Macmillan
London, 1976.
[59] Fateh Boutekkouk. Digital color image processing using intuitionistic fuzzy hypergraphs. Int. J. Comput. Vis. Image
Process., 11:21–40, 2021.
[60] Alain Bretto. Hypergraph theory. An introduction. Mathematical Engineering. Cham: Springer, 1, 2013.
[61] Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? arXiv preprint
arXiv:2105.14491, 2021.
[62] S Broumi and Tomasz Witczak. Heptapartitioned neutrosophic soft set. International Journal of Neutrosophic Science,
18(4):270–290, 2022.
[63] Said Broumi, Assia Bakali, Mohamed Talea, Florentin Smarandache, and V. Venkateswara Rao. Bipolar complex
neutrosophic graphs of type 1. viXra, 2018.
[64] Said Broumi, Swaminathan Mohanaselvi, Tomasz Witczak, Mohamed Talea, Assia Bakali, and Florentin Smaran-
dache. Complex fermatean neutrosophic graph and application to decision making. Decision Making: Applications in
Management and Engineering, 2023.
[65] Said Broumi, Mohamed Talea, Assia Bakali, and Florentin Smarandache. Interval valued neutrosophic graphs. Critical
Review, XII, 2016:5–33, 2016.
[66] A. Buck and James M. Keller. Evaluating path costs in multi-attributed fuzzy weighted graphs. 2019 IEEE Interna-
tional Conference on Fuzzy Systems (FUZZ-IEEE), pages 1–6, 2019.
[67] Samuel Rota Bulò and Marcello Pelillo. A game-theoretic approach to hypergraph clustering. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 35:1312–1327, 2009.
[68] Dan Busbridge, Dane Sherburn, Pietro Cavallo, and Nils Y Hammerla. Relational graph attention networks. arXiv
preprint arXiv:1904.05811, 2019.
[69] Derun Cai, Moxian Song, Chenxi Sun, Baofeng Zhang, Shenda Hong, and Hongyan Li. Hypergraph structure learning
for hypergraph neural networks. In IJCAI, pages 1923–1929, 2022.
[70] Derun Cai, Moxian Song, Chenxi Sun, Baofeng Zhang, linda Qiao, and Hongyan Li. Hypergraph structure learning
for hypergraph neural networks. In International Joint Conference on Artificial Intelligence, 2022.
[71] Yukun Cao and Yunfeng Li. An intelligent fuzzy-based recommendation system for consumer electronic products.
Expert Syst. Appl., 33:230–240, 2007.
[72] Matteo Carandini and David J Heeger. Normalization as a canonical neural computation. Nature reviews neuroscience,
13(1):51–62, 2012.
[73] Timoteo Carletti, Federico Battiston, Giulia Cencetti, and Duccio Fanelli. Random walks on hypergraphs. Physical
review E, 101(2):022308, 2020.
[74] Timotéo Carletti, Duccio Fanelli, and Renaud Lambiotte. Random walks and community detection in hypergraphs.
Journal of Physics: Complexity, 2, 2020.
[75] T-H Hubert Chan, Anand Louis, Zhihao Gavin Tang, and Chenzi Zhang. Spectral properties of hypergraph laplacian
and approximation algorithms. Journal of the ACM (JACM), 65(3):1–48, 2018.
[76] Vinod Kumar Chauhan, Jiandong Zhou, Ping Lu, Soheila Molaei, and David A Clifton. A brief review of hypernet-
works in deep learning. Artificial Intelligence Review, 57(9):250, 2024.
[77] Chaofan Chen, Zelei Cheng, Zuotian Li, and Manyi Wang. Hypergraph attention networks. In 2020 IEEE 19th
International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), pages 1560–
1565. IEEE, 2020.
[78] Haotian Chen and Jialiang Xie. Eeg-based tsk fuzzy graph neural network for driver drowsiness estimation. Informa-
tion Sciences, 679:121101, 2024.
[79] Hsinchun Chen, Roger H. L. Chiang, and Veda C. Storey. Business intelligence and analytics: From big data to big
impact. MIS Q., 36:1165–1188, 2012.
[80] Ke Cheng, Yifan Zhang, Xiangyu He, Weihan Chen, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition
with shift graph convolutional network. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), pages 180–189, 2020.
[81] Cédric Chevalier and Ilya Safro. Comparison of coarsening schemes for multilevel graph partitioning. In Learning
and Intelligent Optimization, 2009.

[82] Uthsav Chitra and Benjamin J. Raphael. Random walks on hypergraphs with edge-dependent vertex weights. In
International Conference on Machine Learning, 2019.
[83] Minsu Cho, Jungmin Lee, and Kyoung Mu Lee. Reweighted random walks for graph matching. In Computer Vision–
ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Pro-
ceedings, Part V 11, pages 492–505. Springer, 2010.
[84] Vaughn Climenhaga. Markov chains and mixing times. 2013.
[85] Martina Contisciani, Federico Battiston, and Caterina De Bacco. Inference of hyperedges and overlapping communities
in hypergraphs. Nature communications, 13(1):7229, 2022.
[86] Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to algorithms. MIT press,
2022.
[87] Chris Cornelis, Pieter M. M. De Kesel, and Etienne E. Kerre. Shortest paths in fuzzy weighted graphs. International
Journal of Intelligent Systems, 19, 2004.
[88] Bruce A Craig and Peter P Sendi. Estimation of the transition matrix of a discrete-time markov chain. Health eco-
nomics, 11(1):33–42, 2002.
[89] Ganqu Cui, Yufeng Du, Cheng Yang, Jie Zhou, Liang Xu, Xing Zhou, Xingyi Cheng, and Zhiyuan Liu. Evaluating
modules in graph contrastive learning. arXiv preprint arXiv:2106.08171, 2021.
[90] Qionghai Dai and Yue Gao. Mathematical foundations of hypergraph. In Hypergraph Computation, pages 19–40.
Springer, 2023.
[91] Suman Das, Rakhal Das, and Surapati Pramanik. Single valued pentapartitioned neutrosophic graphs. Neutrosophic
Sets and Systems, 50(1):225–238, 2022.
[92] Suman Das, Rakhal Das, and Binod Chandra Tripathy. Topology on rough pentapartitioned neutrosophic set. Iraqi
Journal of Science, 2022.
[93] Danilo Dell’Agnello, Anna Maria Fanelli, Corrado Mencar, and Massimo Minervini. Serendipitous fuzzy item recom-
mendation with profilematcher. In International Workshop on Fuzzy Logic and Applications, 2011.
[94] Ailin Deng and Bryan Hooi. Graph neural network-based anomaly detection in multivariate time series. In AAAI
Conference on Artificial Intelligence, 2021.
[95] Narsingh Deo. Graph theory with applications to engineering and computer science. Courier Dover Publications,
2016.
[96] Chinthaka Sajith Devinda and Anil Kumar. Application of fuzzy machine learning algorithm in agro-geography. 2020.
[97] Keith J. Devlin. Fundamentals of contemporary set theory. 1979.
[98] P. M. Dhanya, A. Sreekumar, M. Jathavedan, and P. B. Ramkumar. Algebra of morphological dilation on intuitionistic
fuzzy hypergraphs. International journal of scientific research in science, engineering and technology, 4:300–308,
2018.
[99] P. M. Dhanya, A. Sreekumar, M. Jathavedan, and P. B. Ramkumar. On constructing morphological erosion of intu-
itionistic fuzzy hypergraphs. The Journal of Analysis, 27:583 – 603, 2018.
[100] Reinhard Diestel. Graduate texts in mathematics: Graph theory.
[101] Reinhard Diestel. Graph theory 3rd ed. Graduate texts in mathematics, 173(33):12, 2005.
[102] Reinhard Diestel. Graph theory. Springer (print edition); Reinhard Diestel (eBooks), 2024.
[103] Kaize Ding, Jianling Wang, Jundong Li, Dingcheng Li, and Huan Liu. Be more with less: Hypergraph attention
networks for inductive text classification. arXiv preprint arXiv:2011.00387, 2020.
[104] Boyu Du, Jingya Zhou, Ling Liu, and Xiaolong She. Fl-gnn: Efficient fusion of fuzzy neural network and graph neural
network. In ECAI 2024, pages 1768–1775. IOS Press, 2024.
[105] Aurélien Ducournau and Alain Bretto. Random walks in directed hypergraphs and application to semi-supervised
image segmentation. Computer Vision and Image Understanding, 120:91–102, 2014.
[106] Zdenek Dvorák, Archontia C. Giannopoulou, and Dimitrios M. Thilikos. Forbidden graphs for tree-depth. Eur. J.
Comb., 33:969–979, 2012.
[107] Philip Ehrlich. Real numbers, generalizations of the reals, and theories of continua, volume 242. Springer Science &
Business Media, 2013.
[108] Reem Essameldin, Ahmed A. Ismail, and Saad Mohamed Darwish. Quantifying opinion strength: A neutrosophic
inference system for smart sentiment analysis of social media network. Applied Sciences, 2022.
[109] Pablo A Estévez, Michel Tesmer, Claudio A Perez, and Jacek M Zurada. Normalized mutual information feature
selection. IEEE Transactions on neural networks, 20(2):189–201, 2009.

[110] Shimon Even. Graph algorithms. Cambridge University Press, 2011.
[111] Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. Graph neural networks for social
recommendation. In The world wide web conference, pages 417–426, 2019.
[112] Juntao Fei and Lunhaojie Liu. Real-time nonlinear model predictive control of active power filter using self-feedback
recurrent fuzzy neural network estimator. IEEE Transactions on Industrial Electronics, 69:8366–8376, 2022.
[113] Juntao Fei, Zhe Wang, Xiao Liang, Zhilin Feng, and Yuncan Xue. Fractional sliding-mode control for microgyroscope
based on multilayer recurrent fuzzy neural network. IEEE Transactions on Fuzzy Systems, 30:1712–1721, 2022.
[114] Song Feng, Emily Heath, Brett Jefferson, Cliff Joslyn, Henry Kvinge, Hugh D Mitchell, Brenda Praggastis, Amie J
Eisfeld, Amy C Sims, Larissa B Thackray, et al. Hypergraph models of biological networks to identify genes critical
to pathogenic viral response. BMC bioinformatics, 22(1):287, 2021.
[115] Yifan Feng, Haoxuan You, Zizhao Zhang, R. Ji, and Yue Gao. Hypergraph neural networks. In AAAI Conference on
Artificial Intelligence, 2018.
[116] Alessio Ferone and Alfredo Petrosino. A neuro fuzzy approach for handling structured data. In Scalable Uncertainty
Management, 2008.
[117] Ronald C. Freiwald. An introduction to set theory and topology. 2014.
[118] Dongqi Fu and Jingrui He. Sdg: A simplified and dynamic graph neural network. In Proceedings of the 44th Interna-
tional ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2273–2277, 2021.
[119] Takaaki Fujita. Note for hypersoft filter and fuzzy hypersoft filter. Multicriteria Algorithms With Applications, 5:32–51,
2024.
[120] Takaaki Fujita. Note for neutrosophic incidence and threshold graph. SciNexuses, 1:97–125, 2024.
[121] Takaaki Fujita. A review of the hierarchy of plithogenic, neutrosophic, and fuzzy graphs: Survey and applications.
ResearchGate (Preprint), 2024.
[122] Takaaki Fujita. Short note of supertree-width and n-superhypertree-width. Neutrosophic Sets and Systems, 77:54–78,
2024.
[123] Takaaki Fujita. Survey of intersection graphs, fuzzy graphs and neutrosophic graphs. ResearchGate, July 2024.
[124] Takaaki Fujita. Survey of planar and outerplanar graphs in fuzzy and neutrosophic graphs. ResearchGate, July 2024.
[125] Takaaki Fujita. Survey of trees, forests, and paths in fuzzy and neutrosophic graphs. July 2024.
[126] Takaaki Fujita. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization:
Fuzzy, Neutrosophic, Soft, Rough, and Beyond. Biblio Publishing, 2025.
[127] Takaaki Fujita. A comprehensive discussion on fuzzy hypersoft expert, superhypersoft, and indetermsoft graphs.
Neutrosophic Sets and Systems, 77:241–263, 2025.
[128] Takaaki Fujita. Fundamental computational problems and algorithms for superhypergraphs. March 2025.
[129] Takaaki Fujita and Florentin Smarandache. Antipodal turiyam neutrosophic graphs. Neutrosophic Optimization and
Intelligent Systems, 5:1–13, 2024.
[130] Takaaki Fujita and Florentin Smarandache. A concise study of some superhypergraph classes. Neutrosophic Sets and
Systems, 77:548–593, 2024.
[131] Takaaki Fujita and Florentin Smarandache. A short note for hypersoft rough graphs. HyperSoft Set Methods in
Engineering, 3:1–25, 2024.
[132] Takaaki Fujita and Florentin Smarandache. Study for general plithogenic soft expert graphs. Plithogenic Logic and
Computation, 2:107–121, 2024.
[133] Takaaki Fujita and Florentin Smarandache. Uncertain automata and uncertain graph grammar. Neutrosophic Sets and
Systems, 74:128–191, 2024.
[134] Takaaki Fujita and Florentin Smarandache. Neutrosophic circular-arc graphs and proper circular-arc graphs. Neutro-
sophic Sets and Systems, 78:1–30, 2025.
[135] Francois Le Gall. Faster algorithms for rectangular matrix multiplication. 2012 IEEE 53rd Annual Symposium on
Foundations of Computer Science, pages 514–523, 2012.
[136] A Nagoor Gani and K Radha. On regular fuzzy graphs. 2008.
[137] Shenghua Gao, Ivor Wai-Hung Tsang, and Liang-Tien Chia. Laplacian sparse coding, hypergraph laplacian sparse
coding, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):92–104, 2012.
[138] Weihao Gao and Tao Mo. Hypergraph clustering with inhomogeneous partitions of hyperedges. 2017.

[139] Yue Gao, Yifan Feng, Shuyi Ji, and Rongrong Ji. Hgnn+: General hypergraph neural networks. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 45(3):3181–3199, 2022.
[140] Yue Gao, Zizhao Zhang, Haojie Lin, Xibin Zhao, Shaoyi Du, and Changqing Zou. Hypergraph learning: Methods and
practices. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(5):2548–2566, 2020.
[141] Fayed F. M. Ghaleb, Azza A. Taha, Maryam Hazman, Mahmoud Abd Ellatif, and Mona Abbass. On quasi cycles in
hypergraph databases. IEEE Access, 8:147560–147568, 2020.
[142] Meysam Gheisarnejad, Ardashir Mohammadzadeh, Hamed Farsizadeh, and Mohammad Hassan Khooban. Stabi-
lization of 5g telecom converter-based deep type-3 fuzzy machine learning control for telecom applications. IEEE
Transactions on Circuits and Systems II: Express Briefs, 69:544–548, 2022.
[143] Jayanta Ghosh and Tapas Kumar Samanta. Hyperfuzzy sets and hyperfuzzy group. Int. J. Adv. Sci. Technol, 41:27–37,
2012.
[144] Puspendu Giri, Somnath Paul, and Bijoy Krishna Debnath. A fuzzy graph theory and matrix approach (fuzzy gtma) to
select the best renewable energy alternative in india. Applied Energy, 358:122582, 2024.
[145] S Gomathy, D Nagarajan, S Broumi, and M Lathamaheswari. Plithogenic sets and their application in decision making.
Infinite Study, 2020.
[146] Zengtai Gong and Junhu Wang. Hesitant fuzzy graphs, hesitant fuzzy hypergraphs and fuzzy graph decisions. Journal
of Intelligent & Fuzzy Systems, 40(1):865–875, 2021.
[147] Bahareh Goodarzi, Farzad Khorasani, Vivek Sarkar, and Dhrubajyoti Goswami. High performance multilevel graph
partitioning on gpu. 2019 International Conference on High Performance Computing & Simulation (HPCS), pages
769–778, 2019.
[148] Sathyanarayanan Gopalakrishnan, Supriya Sridharan, Soumya Ranjan Nayak, Janmenjoy Nayak, and Swaminathan
Venkataraman. Central hubs prediction for bio networks by directed hypergraph-ga with validation to covid-19 ppi.
Pattern Recognition Letters, 153:246–253, 2022.
[149] Igor I Gorban. Hyper-random phenomena: definition and description. Information Theories and Applications,
15(3):203–211, 2008.
[150] Igor I Gorban. Randomness and Hyper-randomness. Springer, 2018.
[151] Igor I Gorban. The hyperrandom functions and their description. Radioelectronics and Communications Systems, 49(1):1–
9, 2006.
[152] Georg Gottlob, Nicola Leone, and Francesco Scarcello. Hypertree decompositions and tractable queries. In Proceed-
ings of the eighteenth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pages 21–32,
1999.
[153] Georg Gottlob, Nicola Leone, and Francesco Scarcello. Hypertree decompositions: A survey. In Mathematical Foun-
dations of Computer Science 2001: 26th International Symposium, MFCS 2001 Mariánské Lázne, Czech Republic,
August 27–31, 2001 Proceedings 26, pages 37–57. Springer, 2001.
[154] Georg Gottlob and Reinhard Pichler. Hypergraphs in model checking: Acyclicity and hypertree-width versus clique-
width. SIAM Journal on Computing, 33(2):351–378, 2004.
[155] Linfeng Gou and Yu Zhong. A new fault diagnosis method based on attributes weighted neutrosophic set. IEEE
Access, 7:117740–117748, 2019.
[156] Palash Goyal and Emilio Ferrara. Graph embedding techniques, applications, and performance: A survey. Knowledge-
Based Systems, 151:78–94, 2018.
[157] Peter J. Green. Markov chain monte carlo in practice. 1996.
[158] Jonathan L Gross, Jay Yellen, and Mark Anderson. Graph theory and its applications. Chapman and Hall/CRC, 2018.
[159] Mingyu Guan, Anand Padmanabha Iyer, and Taesoo Kim. Dynagraph: dynamic graph neural networks at scale. In
Proceedings of the 5th ACM SIGMOD Joint International Workshop on Graph Data Management Experiences &
Systems (GRADES) and Network Data Analytics (NDA), pages 1–10, 2022.
[160] Abhishek Guleria and Rakesh Kumar Bajaj. T-spherical fuzzy graphs: Operations and applications in various selection
processes. Arabian Journal for Science and Engineering, 45:2177 – 2193, 2019.
[161] Muhammad Gulistan, Naveed Yaqoob, Zunaira Rashid, Florentin Smarandache, and Hafiz Abdul Wahab. A study on
neutrosophic cubic graphs with real life applications in industries. Symmetry, 10(6):203, 2018.
[162] Xinyu Guo, Bingjie Tian, and Xuedong Tian. Hfgnn-proto: Hesitant fuzzy graph neural network-based prototypical
network for few-shot text classification. Electronics, 11(15):2423, 2022.
[163] Zhiwei Guo, Keping Yu, Alireza Jolfaei, Gang Li, Feng Ding, and Amin Beheshti. Mixed graph neural network-
based fake news detection for sustainable vehicular social networks. IEEE Transactions on Intelligent Transportation
Systems, 24(12):15486–15498, 2022.

[164] Tom Gur, Noam Lifshitz, and Siqi Liu. Hypercontractivity on high dimensional expanders. In Proceedings of the 54th
Annual ACM SIGACT Symposium on Theory of Computing, pages 176–184, 2022.
[165] Venkatesan Guruswami and Sai Sandeep. An algorithmic study of the hypergraph turán problem. ArXiv,
abs/2008.07344, 2020.
[166] Dae Geun Ha, Tae Wook Ha, Junghyuk Seo, and Myoung-Ho Kim. Index-based searching for isomorphic subgraphs
in hypergraph databases. Journal of KIISE, 2019.
[167] David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
[168] M Hamidi and A Borumand Saeid. Accessible single-valued neutrosophic graphs. Journal of Applied Mathematics
and Computing, 57:121–146, 2018.
[169] Mohammad Hamidi and Marzieh Rahmati. On binary decision hypertree (hyperdiagram). AUT Journal of Mathematics
and Computing, 5(2):117–130, 2024.
[170] Mohammad Hamidi, Florentin Smarandache, and Mohadeseh Taghinezhad. Decision Making Based on Valued Fuzzy
Superhypergraphs. Infinite Study, 2023.
[171] Mohammad Hamidi and Mohadeseh Taghinezhad. Application of Superhypergraphs-Based Domination Number in
Real World. Infinite Study, 2023.
[172] Jianlong Hao, Zhibin Liu, Qiwei Sun, Chen Zhang, and Jie Wang. A static-dynamic hypergraph neural network
framework based on residual learning for stock recommendation. Complex., 2024:5791802:1–5791802:12, 2024.
[173] Juris Hartmanis and Richard E Stearns. On the computational complexity of algorithms. Transactions of the American
Mathematical Society, 117:285–306, 1965.
[174] Koby Hayashi, Sinan G Aksoy, Cheong Hee Park, and Haesun Park. Hypergraph random walks, laplacians, and
clustering. In Proceedings of the 29th acm international conference on information & knowledge management, pages
495–504, 2020.
[175] Juncai He, Lin Li, Jinchao Xu, and Chunyue Zheng. Relu deep neural networks and linear finite elements. arXiv
preprint arXiv:1807.03973, 2018.
[176] Wei He and Yiting Dong. Adaptive fuzzy neural network control for a constrained robot using impedance learning.
IEEE Transactions on Neural Networks and Learning Systems, 29:1174–1186, 2018.
[177] Yixuan He, Quan Gan, David Wipf, Gesine D Reinert, Junchi Yan, and Mihai Cucuringu. Gnnrank: Learning global
rankings from pairwise comparisons via directed graph neural networks. In international conference on machine
learning, pages 8581–8612. PMLR, 2022.
[178] Yixuan He, Gesine Reinert, David Wipf, and Mihai Cucuringu. Robust angular synchronization via directed graph
neural networks. arXiv preprint arXiv:2310.05842, 2023.
[179] Yixuan He, Xitong Zhang, Junjie Huang, Benedek Rozemberczki, Mihai Cucuringu, and Gesine Reinert. Pytorch
geometric signed directed: a software package on graph neural networks for signed and directed graphs. In Learning
on Graphs Conference, pages 12–1. PMLR, 2024.
[180] R. Hema, R. Sudharani, and M. Kavitha. A novel approach on plithogenic interval valued neutrosophic hypersoft sets
and its application in decision making. Indian Journal Of Science And Technology, 2023.
[181] Nasimeh Heydaribeni, Xinrui Zhan, Ruisi Zhang, Tina Eliassi-Rad, and Farinaz Koushanfar. Hypop: Distributed
constrained combinatorial optimization leveraging hypergraph neural networks. ArXiv, abs/2311.09375, 2023.
[182] Karel Hrbacek and Thomas Jech. Introduction to set theory, revised and expanded. 2017.
[183] Chao Hu, Ruishi Yu, Binqi Zeng, Yu Zhan, Ying Fu, Quan Zhang, Rongkai Liu, and Heyuan Shi. Hyperattack:
Multi-gradient-guided white-box adversarial structure attack of hypergraph neural networks. ArXiv, abs/2302.12407,
2023.
[184] Shenglong Hu and Liqun Qi. Algebraic connectivity of an even uniform hypergraph. Journal of Combinatorial
Optimization, 24:564–579, 2012.
[185] Shenglong Hu and Liqun Qi. The laplacian of a uniform hypergraph. Journal of Combinatorial Optimization, 29:331–
366, 2015.
[186] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure
Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information process-
ing systems, 33:22118–22133, 2020.
[187] Jing Huang and Jie Yang. Unignn: a unified framework for graph and hypergraph neural networks. arXiv preprint
arXiv:2105.00956, 2021.
[188] Liangsong Huang, Yu Hu, Yuxia Li, PK Kishore Kumar, Dipak Koley, and Arindam Dey. A study of regular and
irregular neutrosophic graphs with real life applications. Mathematics, 7(6):551, 2019.

[189] S Satham Hussain, N Durga, Muhammad Aslam, G Muhiuddin, and Ganesh Ghorai. New concepts on quadriparti-
tioned neutrosophic competition graph with application. International Journal of Applied and Computational Mathe-
matics, 10(2):57, 2024.
[190] S Satham Hussain, N Durga, Rahmonlou Hossein, and Ghorai Ganesh. New concepts on quadripartitioned single-
valued neutrosophic graph with real-life application. International Journal of Fuzzy Systems, 24(3):1515–1529, 2022.
[191] S Satham Hussain, Hossein Rashmonlou, R Jahir Hussain, Sankar Sahoo, Said Broumi, et al. Quadripartitioned
neutrosophic graph structures. Neutrosophic Sets and Systems, 51(1):17, 2022.
[192] S Satham Hussain, Isnaini Rosyida, Hossein Rashmanlou, and Farshid Mofidnakhaei. Interval intuitionistic neutro-
sophic sets with its applications to interval intuitionistic neutrosophic graphs and climatic analysis. Computational and
Applied Mathematics, 40(4):121, 2021.
[193] Satham Hussain, Jahir Hussain, Isnaini Rosyida, and Said Broumi. Quadripartitioned neutrosophic soft graphs. In
Handbook of Research on Advances and Applications of Fuzzy Sets and Logic, pages 771–795. IGI Global, 2022.
[194] Yasmine M Ibrahim, Reem Essameldin, and Saad M Darwish. An adaptive hate speech detection approach using
neutrosophic neural networks for social media forensics. Computers, Materials & Continua, 79(1), 2024.
[195] Borislav Iordanov. Hypergraphdb: a generalized graph database. In Web-Age Information Management: WAIM 2010
International Workshops: IWGD 2010, XMLDM 2010, WCMT 2010, Jiuzhaigou Valley, China, July 15-17, 2010
Revised Selected Papers 11, pages 25–36. Springer, 2010.
[196] Alex Gabriel Lara Jacome, Elizabeth Mayorga Aldaz, Miguel Ramos Argilagos, and Darvin Manuel Ramı́rez Guerra.
Neutrosophic perspectives in healthcare decision-making: Navigating complexity with ethics, information, and collab-
oration. Neutrosophic Sets and Systems, 62(1):15, 2023.
[197] Sirus Jahanpanah and Roohallah Daneshpayeh. On derived superhyper be-algebras. Neutrosophic Sets and Systems,
57(1):21, 2023.
[198] Sirus Jahanpanah and Roohallah Daneshpayeh. An outspread on valued logic superhyperalgebras. Facta Universitatis,
Series: Mathematics and Informatics, 2024.
[199] Chiranjibe Jana, Tapan Senapati, Monoranjan Bhowmik, and Madhumangal Pal. On intuitionistic fuzzy g-subalgebras
of g-algebras. Fuzzy Information and Engineering, 7(2):195–209, 2015.
[200] Wasnaa Kadhim Jawad and Abbas M. Al-Bakry. Big data analytics: A survey. Iraqi Journal for Computers and
Informatics, 2022.
[201] Thomas Jech. Set theory: The third millennium edition, revised and expanded. Springer, 2003.
[202] Janice Jeffs and Benoît Mario Papillon. Globalization, the new economy and project management: a graph theory
perspective. The Journal of Modern Project Management, 7, 2019.
[203] Bukyoung Jhun, Minjae Jo, and Byungnam Kahng. Simplicial sis model in scale-free uniform hypergraph. Journal of
Statistical Mechanics: Theory and Experiment, 2019, 2019.
[204] Jianwen Jiang, Yuxuan Wei, Yifan Feng, Jingxuan Cao, and Yue Gao. Dynamic hypergraph neural networks. In
International Joint Conference on Artificial Intelligence, 2019.
[205] Weiwei Jiang and Jiayun Luo. Graph neural network for traffic forecasting: A survey. ArXiv, abs/2101.11174, 2021.
[206] Hayoung Jo and Seong-Whan Lee. Edge conditional node update graph neural network for multi-variate time series
anomaly detection. Information Sciences, page 121062, 2024.
[207] Young Bae Jun, Kul Hur, and Kyoung Ja Lee. Hyperfuzzy subalgebras of bck/bci-algebras. Annals of Fuzzy Mathe-
matics and Informatics, 2017.
[208] Ilanthenral Kandasamy, WB Vasantha, Jagan M Obbineni, and Florentin Smarandache. Sentiment analysis of tweets
using refined neutrosophic sets. Computers in Industry, 115:103180, 2020.
[209] Vasantha Kandasamy, K Ilanthenral, and Florentin Smarandache. Neutrosophic graphs: a new dimension to graph
theory. Infinite Study, 2015.
[210] Xiaojun Kang, Xinchuan Li, Hong Yao, Dan Li, Bo Jiang, Xiaoyue Peng, Tiejun Wu, Shihua Qi, and Lijun Dong.
Dynamic hypergraph neural networks based on key hyperedges. Inf. Sci., 616:37–51, 2022.
[211] Komal Kapoor, Dhruv Sharma, and Jaideep Srivastava. Weighted node degree centrality for hypergraphs. 2013 IEEE
2nd Network Science Workshop (NSW), pages 152–155, 2013.
[212] Abdullah Kargın and Memet Şahin. Superhyper groups and neutro–superhyper groups. 2023 Neutrosophic SuperHy-
perAlgebra And New Types of Topologies, page 25, 2023.
[213] Abdullah Kargın, Florentin Smarandache, and Memet Şahin. New Type Hyper Groups, New Type SuperHyper Groups
and Neutro-New Type SuperHyper Groups. Infinite Study, 2023.

[214] George Karypis. Multilevel hypergraph partitioning. In Multilevel Optimization in VLSICAD, pages 125–154. Springer,
2003.
[215] George Karypis, Rajat Aggarwal, Vipin Kumar, and Shashi Shekhar. Multilevel hypergraph partitioning: Application
in vlsi domain. In Proceedings of the 34th annual Design Automation Conference, pages 526–529, 1997.
[216] George Karypis and Vipin Kumar. Analysis of multilevel graph partitioning. Proceedings of the IEEE/ACM SC95
Conference, pages 29–29, 1995.
[217] George Karypis and Vipin Kumar. Multilevel graph partitioning schemes. In International Conference on Parallel
Processing, 1995.
[218] George Karypis and Vipin Kumar. Multilevel k-way hypergraph partitioning. In Proceedings of the 36th annual
ACM/IEEE design automation conference, pages 343–348, 1999.
[219] Peter Keevash. Hypergraph turan problems. Surveys in combinatorics, 392:83–140, 2011.
[220] Tamás Képes. The critical node detection problem in hypergraphs using weighted node degree centrality. PeerJ
Computer Science, 9, 2023.
[221] Huda E Khalid, Gonca Durmaz Gungor, and Muslim A Noah. Neutrosophic SuperHyper Bi-Topological Spaces: Extra
Topics. Infinite Study, 2024.
[222] Eun-Sol Kim, Woo Young Kang, Kyoung-Woon On, Yu-Jung Heo, and Byoung-Tak Zhang. Hypergraph attention
networks for multimodal learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recog-
nition, pages 14581–14590, 2020.
[223] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks.
Advances in neural information processing systems, 30, 2017.
[224] Dalibor Krleža and Krešimir Fertalj. Graph matching using hierarchical fuzzy graph neural networks. IEEE Transactions
on Fuzzy Systems, 25(4):892–904, 2016.
[225] David Krueger, Chin-Wei Huang, Riashat Islam, Ryan Turner, Alexandre Lacoste, and Aaron Courville. Bayesian
hypernetworks. arXiv preprint arXiv:1710.04759, 2017.
[226] Adarsh Kumar, C. P. Gandhi, Yuqing Zhou, He sheng Tang, and Jiawei Xiang. Fault diagnosis of rolling element
bearing based on symmetric cross entropy of neutrosophic sets. Measurement, 2020.
[227] Marius Leordeanu and Cristian Sminchisescu. Efficient hypergraph clustering. In International Conference on Artifi-
cial Intelligence and Statistics, 2012.
[228] Juanhui Li, Harry Shomer, Jiayu Ding, Yiqi Wang, Yao Ma, Neil Shah, Jiliang Tang, and Dawei Yin. Are message
passing neural networks really helpful for knowledge graph completion? In Annual Meeting of the Association for
Computational Linguistics, 2022.
[229] Kunhao Li, Zhenhua Huang, and Zhaohong Jia. Rahg: A role-aware hypergraph neural network for node classification
in graphs. IEEE Transactions on Network Science and Engineering, 10:2098–2108, 2023.
[230] Pan Li and Olgica Milenkovic. Inhomogeneous hypergraph clustering with applications. ArXiv, abs/1709.01249, 2017.
[231] Wei Li, Bin Xiang, Fan Yang, Yuchen Rong, Yanbin Yin, Jianhua Yao, and Han Zhang. scmhnn: a novel hypergraph
neural network for integrative analysis of single-cell epigenomic, transcriptomic and proteomic data. Briefings in
bioinformatics, 24(6), 2023.
[232] Xiaowei Liao, Yong Xu, and Haibin Ling. Hypergraph neural networks for hypergraph matching. In Proceedings of
the IEEE/CVF International Conference on Computer Vision, pages 1266–1275, 2021.
[233] Shinyoung Lim, Kwanyong Lee, Okhwan Byeon, and Taiyun Kim. Efficient iris recognition through improvement of
feature vector and classifier. ETRI Journal, 23, 2001.
[234] Guifang Lin and Wei Shen. Research on convolutional neural network based on improved relu piecewise activation
function. Procedia computer science, 131:977–984, 2018.
[235] Sheng-Wei Lin and Huai-Wei Lo. An fmea model for risk assessment of university sustainability: using a combined
itara with topsis-al approach based neutrosophic sets. Annals of Operations Research, pages 1–27, 2023.
[236] Dayun Liu, Xianghui Li, Liangliang Zhang, Xiaowen Hu, Jiaxuan Zhang, Zhirong Liu, and Lei Deng. Hgnnlda:
Predicting lncrna-drug sensitivity associations via a dual channel hypergraph neural network. IEEE/ACM Transactions
on Computational Biology and Bioinformatics, 20:3547–3555, 2023.
[237] Huaiyuan Liu, Donghua Yang, Xianzhang Liu, Xinglei Chen, Zhiyu Liang, Hongzhi Wang, Yong Cui, and Jun Gu.
Todynet: temporal dynamic graph neural network for multivariate time series classification. Information Sciences,
page 120914, 2024.
[238] Li Liu and Fanzhang Li. A survey on dynamic fuzzy machine learning. ACM Computing Surveys, 55:1 – 42, 2022.

[239] Luotao Liu, Feng Huang, Xuan Liu, Zhankun Xiong, Menglu Li, Congzhi Song, and Wen Zhang. Multi-view con-
trastive learning hypergraph neural network for drug-microbe-disease association prediction. In International Joint
Conference on Artificial Intelligence, 2023.
[240] Shengyuan Liu, Pei Lv, Yuzhen Zhang, Jie Fu, Junjin Cheng, Wanqing Li, Bing Zhou, and Mingliang Xu. Semi-
dynamic hypergraph neural network for 3d pose estimation. In International Joint Conference on Artificial Intelligence,
2020.
[241] Xizhi Liu and Dhruv Mubayi. A hypergraph turán problem with no stability. Combinatorica, 42:433–462, 2019.
[242] Yu-Ting Liu, Yang-Yin Lin, Shang-Lin Wu, Chun-Hsiang Chuang, and Chin-Teng Lin. Brain dynamics in predicting
driving fatigue using a recurrent self-evolving fuzzy neural network. IEEE Transactions on Neural Networks and
Learning Systems, 27:347–360, 2016.
[243] Yue Liu, Wenxuan Tu, Sihang Zhou, Xinwang Liu, Linxuan Song, Xihong Yang, and En Zhu. Deep graph clustering
via dual correlation reduction. In AAAI Conference on Artificial Intelligence, 2021.
[244] Zijian Liu, Yang Luo, Xitong Pu, Geyong Min, and Chunbo Luo. A multi-modal hypergraph neural network via
parametric filtering and feature sampling. IEEE Transactions on Big Data, 9:1365–1379, 2023.
[245] Han Lu, Quanxue Gao, Qianqian Wang, Ming Yang, and Wei Xia. Centerless multi-view k-means based on the
adjacency matrix. In AAAI Conference on Artificial Intelligence, 2023.
[246] Jie Lu, Guangzhi Ma, and Guangquan Zhang. Fuzzy machine learning: A comprehensive framework and systematic
review. IEEE Transactions on Fuzzy Systems, 32:3861–3878, 2024.
[247] Xiaoyi Luo, Jiaheng Peng, and Jun Liang. Directed hypergraph attention network for traffic forecasting. IET Intelligent
Transport Systems, 16(1):85–98, 2022.
[248] Anam Luqman, Muhammad Akram, and Florentin Smarandache. Complex neutrosophic hypergraphs: New social
network models. Algorithms, 12:234, 2019.
[249] Anam Luqman, Muhammad Akram, and Florentin Smarandache. Complex neutrosophic hypergraphs: new social
network models. Algorithms, 12(11):234, 2019.
[250] Xuejiao Ma, Yu Jin, and Qingli Dong. A generalized dynamic fuzzy neural network based on singular spectrum
analysis optimized by brain storm optimization for short-term wind speed forecasting. Appl. Soft Comput., 54:296–
312, 2017.
[251] Zhongtian Ma, Zhiguo Jiang, and Haopeng Zhang. Hyperspectral image classification using feature fusion hypergraph
convolution neural network. IEEE Transactions on Geoscience and Remote Sensing, 60:1–14, 2021.
[252] Rupkumar Mahapatra, Sovan Samanta, Madhumangal Pal, Tofigh Allahviranloo, and Antonios Kalampakas. A study
on linguistic z-graph and its application in social networks. Mathematics, 12(18):2898, 2024.
[253] Rupkumar Mahapatra, Sovan Samanta, Madhumangal Pal, and Qin Xin. Link prediction in social networks by neutro-
sophic graph. Int. J. Comput. Intell. Syst., 13:1699–1713, 2020.
[254] Pradip Kumar Maji, Ranjit Biswas, and A Ranjan Roy. Soft set theory. Computers & mathematics with applications,
45(4-5):555–562, 2003.
[255] Muhammad Aslam Malik, Ali Hassan, Said Broumi, Assia Bakali, Mohamed Talea, and Florentin Smarandache.
Isomorphism of bipolar single valued neutrosophic hypergraphs. Collected Papers. Volume IX: On Neutrosophic
Theory and Its Applications in Algebra, page 72, 2022.
[256] Rama Mallick and Surapati Pramanik. Pentapartitioned neutrosophic set and its properties. Neutrosophic Sets and
Systems, 35:49, 2020.
[257] J. Manyika. Big data: The next frontier for innovation, competition, and productivity. 2011.
[258] Nivetha Martin and Florentin Smarandache. Concentric plithogenic hypergraph based on plithogenic hypersoft sets –
a novel outlook. Neutrosophic Sets and Systems, 33:5, 2020.
[259] Sunil Mathew and MS Sunitha. Cycle connectivity in weighted graphs. Proyecciones (Antofagasta), 30(1):1–17, 2011.
[260] Keith R. Matthews. Elementary linear algebra. 1998.
[261] Justin J Miller. Graph database applications and concepts with neo4j. In Proceedings of the southern association for
information systems conference, Atlanta, GA, USA, volume 2324, pages 141–147, 2013.
[262] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adver-
sarial networks. ArXiv, abs/1802.05957, 2018.
[263] Mai Mohamed, Mohamed Abdel-Basset, Abdel-Nasser Hussien, and Florentin Smarandache. Using neutrosophic sets
to obtain pert three-times estimates in project management. Infinite Study, 2017.
[264] Mai Mohamed and Asmaa Elsayed. A novel multi-criteria decision making approach based on bipolar neutrosophic
set for evaluating financial markets in egypt. Multicriteria Algorithms with Applications, 2024.

[265] Mona Mohamed, Alaa Elmor, Florentin Smarandache, and Ahmed A Metwaly. An efficient superhypersoft framework
for evaluating llms-based secure blockchain platforms. Neutrosophic Sets and Systems, 72:1–21, 2024.
[266] Dmitriy Molodtsov. Soft set theory-first results. Computers & mathematics with applications, 37(4-5):19–31, 1999.
[267] John N Mordeson and Sunil Mathew. Advanced topics in fuzzy graph theory, volume 375. Springer, 2019.
[268] John N Mordeson and Premchand S Nair. Fuzzy graphs and fuzzy hypergraphs, volume 46. Physica, 2012.
[269] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin
Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference
on artificial intelligence, volume 33, pages 4602–4609, 2019.
[270] Norzieha Mustapha, Suriana Alias, Roliza Md Yasin, Ilyani Abdullah, and Said Broumi. Cardiovascular diseases risk
analysis using distance-based similarity measure of neutrosophic sets. 2021.
[271] M Myvizhi, Ahmed M Ali, Ahmed Abdelhafeez, and Haitham Rizk Fadlallah. MADM Strategy Application of Bipolar
Single Valued Heptapartitioned Neutrosophic Set. Infinite Study, 2023.
[272] S Narasimman, M Shanmugapriya, R Sundareswaran, Laxmi Rathour, Lakshmi Narayan Mishra, Vinita Dewangan,
and Vishnu Narayan Mishra. Identification of influential factors affecting student performance in semester examina-
tions in the educational institution using score topological indices in single valued neutrosophic graphs. Neutrosophic
Sets and Systems, 75:224–240, 2025.
[273] Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning
for knowledge graphs. Proceedings of the IEEE, 104(1):11–33, 2015.
[274] TM Nishad, Talal Ali Al-Hawary, and B Mohamed Harif. General fuzzy graphs. Ratio Mathematica, 47, 2023.
[275] Ya-Wei Niu, Cun-Quan Qu, Guang-Hui Wang, and Gui-Ying Yan. Rwhmda: random walk on hypergraph for microbe-
disease association prediction. Frontiers in microbiology, 10:1578, 2019.
[276] Oluseyi Olurotimi, Amir Dembo, and Thomas Kailath. Neural network weight matrix synthesis using optimal control
techniques. In Neural Information Processing Systems, 1989.
[277] Alessandro Pagano, Raffaele Giordano, and Ivan Portoghese. A pipe ranking method for water distribution network
resilience assessment based on graph-theory metrics aggregated through bayesian belief networks. Water Resources
Management, 36(13):5091–5106, 2022.
[278] Sunay P Pai and Rajesh S Prabhu Gaonkar. Modelling uncertainty using neutrosophic sets for precise risk assessment
of marine systems. International Journal of System Assurance Engineering and Management, pages 1–8, 2023.
[279] Madhumangal Pal, Sovan Samanta, and Ganesh Ghorai. Modern trends in fuzzy graph theory. Springer, 2020.
[280] Erlin Pan and Zhao Kang. Multi-view contrastive graph clustering. In Neural Information Processing Systems, 2021.
[281] Sakshi Dev Pandey, AS Ranadive, and Sovan Samanta. Bipolar-valued hesitant fuzzy graph and its application. Social
Network Analysis and Mining, 12(1):14, 2022.
[282] Jiahao Pang and Gene Cheung. Graph laplacian regularization for image denoising: Analysis in the continuous domain.
IEEE Transactions on Image Processing, 26:1770–1785, 2016.
[283] Christos H Papadimitriou. Computational complexity. In Encyclopedia of computer science, pages 260–265. 2003.
[284] R Parvathi, S Thilagavathi, and MG Karunambigai. Intuitionistic fuzzy hypergraphs. Cybernetics and Information
Technologies, 9(2):46–53, 2009.
[285] Rangasamy Parvathi, S. Thilagavathi, and M. G. Karunambigai. Operations on intuitionistic fuzzy hypergraphs. Inter-
national Journal of Computer Applications, 51:46–54, 2012.
[286] T Pathinathan, J Jon Arockiaraj, and J Jesintha Rosline. Hesitancy fuzzy graphs. Indian Journal of Science and
Technology, 8(35):1–5, 2015.
[287] Vasile Patrascu. Penta and hexa valued representation of neutrosophic information. arXiv preprint arXiv:1603.03729,
2016.
[288] Zdzisław Pawlak. Rough sets. International journal of computer & information sciences, 11:341–356, 1982.
[289] Zdzislaw Pawlak. Rough set theory and its applications to data analysis. Cybernetics & Systems, 29(7):661–688, 1998.
[290] Zdzisław Pawlak. Rough sets and intelligent data analysis. Information sciences, 147(1-4):1–12, 2002.
[291] Zdzislaw Pawlak, Jerzy Grzymala-Busse, Roman Slowinski, and Wojciech Ziarko. Rough sets. Communications of
the ACM, 38(11):88–95, 1995.
[292] Zdzislaw Pawlak, Lech Polkowski, and Andrzej Skowron. Rough set theory. KI, 15(3):38–39, 2001.
[293] Zdzislaw Pawlak, S. K. Michael Wong, Wojciech Ziarko, et al. Rough sets: probabilistic versus deterministic approach.
International Journal of Man-Machine Studies, 29(1):81–95, 1988.

[294] Nicole Pearcy, Jonathan J Crofts, and Nadia Chuzhanova. Hypergraph models of metabolism. International Journal
of Biological, Veterinary, Agricultural and Food Engineering, 8(8):752–756, 2014.
[295] Pramod Kumar Poladi and K. Sagar. Reinforcement learning and neuro-fuzzy gnn-based vertical handover decision
on internet of vehicles. Concurrency and Computation: Practice and Experience, 35, 2023.
[296] Stephen Pryke. Towards a social network theory of project governance. Construction Management and Economics,
23:927 – 939, 2005.
[297] Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, and Jie Tang. Gcc:
Graph contrastive coding for graph neural network pre-training. Proceedings of the 26th ACM SIGKDD International
Conference on Knowledge Discovery & Data Mining, 2020.
[298] Shio Gai Quek, Ganeshsree Selvachandran, D Ajay, P Chellamani, David Taniar, Hamido Fujita, Phet Duong,
Le Hoang Son, and Nguyen Long Giang. New concepts of pentapartitioned neutrosophic graphs and applications
for determining safest paths and towns in response to covid-19. Computational and Applied Mathematics, 41(4):151,
2022.
[299] Marzieh Rahmati and Mohammad Hamidi. Extension of g-algebras to superhyper g-algebras. Neutrosophic Sets and
Systems, 55(1):34, 2023.
[300] Marzieh Rahmati and Mohammad Hamidi. On strong super hyper eq algebras: A proof-of-principle study. Plithogenic
Logic and Computation, 2:29–36, 2024.
[301] Sajad Ramezani, Mauzama Firdaus, and Lili Mou. Claim-centric and sentiment guided graph attention network for
rumour detection. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language
Resources and Evaluation (LREC-COLING 2024), pages 3235–3241, 2024.
[302] M Ramya, Sandesh Murali, and R. Radha. Bipolar quadripartitioned neutrosophic soft set. 2022.
[303] Alfréd Rényi. Representations for real numbers and their ergodic properties. Acta Mathematica Academiae Scien-
tiarum Hungarica, 8:477–493, 1957.
[304] Kaspar Riesen and Horst Bunke. Iam graph database repository for graph based pattern recognition and machine
learning. In Structural, Syntactic, and Statistical Pattern Recognition: Joint IAPR International Workshop, SSPR &
SPR 2008, Orlando, USA, December 4-6, 2008. Proceedings, pages 287–297. Springer, 2008.
[305] Soufiane Rital, H. Cherifi, and Serge Miguet. K-way hypergraph partitioning and color image segmentation. 2006.
[306] Azriel Rosenfeld. Fuzzy graphs. In Fuzzy sets and their applications to cognitive and decision processes, pages 77–95.
Elsevier, 1975.
[307] Kashob Kumar Roy, Amit Roy, AKM Mahbubur Rahman, M Ashraful Amin, and Amin Ahsan Ali. Structure-aware
hierarchical graph pooling using information bottleneck. In 2021 International Joint Conference on Neural Networks
(IJCNN), pages 1–8. IEEE, 2021.
[308] Toni Sagayaraj and Carsten Eickhoff. Image-like graph representations for improved molecular property prediction.
arXiv preprint arXiv:2111.10695, 2021.
[309] A. A. Salama, Haitham A. El-Ghareeb, Ayman M. Manie, and Momen M Lotfy. Utilizing neutrosophic set in social
network analysis e-learning systems. 2014.
[310] AA Salama, A Haitham, A Manie, and M Lotfy. Utilizing neutrosophic set in social network analysis e-learning
systems. International Journal of Information Science and Intelligent System, 3(2):61–72, 2014.
[311] Sovan Samanta and Madhumangal Pal. Bipolar fuzzy hypergraphs. International Journal of Fuzzy Logic Systems,
2(1):17–28, 2012.
[312] Jimena Montes De Oca Sánchez, Myriam Paulina Barreno Sánchez, Miriam Janeth Pantoja Burbano, and Os-
manys Pérez Peña. Neutrosophic marketing strategy and consumer behavior. Neutrosophic Sets and Systems, 62:209–
216, 2023.
[313] S Satham Hussain, Durga Nagarajan, Hossein Rashmanlou, and Farshid Mofidnakhaei. Novel supply chain deci-
sion making model under m-polar quadripartitioned neutrosophic environment. Journal of Applied Mathematics and
Computing, pages 1–26, 2024.
[314] P Sathya, Nivetha Martin, and Florentin Smarandache. Plithogenic forest hypersoft sets in plithogenic contradiction
based multi-criteria decision making. Neutrosophic Sets and Systems, 73:668–693, 2024.
[315] Ramit Sawhney, Shivam Agarwal, Arnav Wadhwa, Tyler Derr, and Rajiv Ratn Shah. Stock selection via spatiotemporal
hypergraph attention network: A learning to rank approach. In Proceedings of the AAAI Conference on Artificial
Intelligence, volume 35, pages 497–504, 2021.
[316] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural
network model. IEEE transactions on neural networks, 20(1):61–80, 2008.
[317] Sebastian Schlag, Vitali Henne, Tobias Heuer, Henning Meyerhenke, Peter Sanders, and Christian Schulz. k-way
hypergraph partitioning via n-level recursive bisection. ArXiv, abs/1511.03137, 2015.

[318] Idan Schwartz, Seunghak Yu, Tamir Hazan, and Alexander G Schwing. Factor graph attention. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2039–2048, 2019.
[319] John Scott. What is social network analysis? Bloomsbury Academic, 2012.
[320] Robert Sedgewick and Kevin Wayne. Algorithms. Addison-wesley professional, 2011.
[321] Ranu Sewada, Ashwani Jangid, Piyush Kumar, and Neha Mishra. Explainable artificial intelligence (xai). international
journal of food and nutritional sciences, 2023.
[322] Gulfam Shahzadi, Muhammad Akram, Arsham Borumand Saeid, et al. An application of single-valued neutrosophic
sets in medical diagnosis. Neutrosophic sets and systems, 18:80–88, 2017.
[323] Francina Shalini. Trigonometric similarity measures of pythagorean neutrosophic hypersoft sets. Neutrosophic Systems
with Applications, 2023.
[324] Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural
network evaluation. ArXiv, abs/1811.05868, 2018.
[325] Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition with directed graph neural
networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7912–7921,
2019.
[326] Lilian Shi. Correlation coefficient of simplified neutrosophic sets for bearing fault diagnosis. Shock and Vibration,
2016:1–11, 2016.
[327] Xiaolong Shi, Saeed Kosari, Hossein Rashmanlou, Said Broumi, and S Satham Hussain. Properties of interval-valued
quadripartitioned neutrosophic graphs with real-life application. Journal of Intelligent & Fuzzy Systems, 44(5):7683–
7697, 2023.
[328] Pritpal Singh. A neutrosophic-entropy based clustering algorithm (nebca) with hsv color system: A special application
in segmentation of parkinson’s disease (pd) mr images. Computer methods and programs in biomedicine, 189:105317,
2020.
[329] S Sivasankar and Said Broumi. A new algorithm to determine the density of a balanced neutrosophic graph and its
application to enhance education quality. In Handbook of research on the applications of neutrosophic sets theory and
their extensions in education, pages 1–17. IGI Global, 2023.

[330] Damjan Škulj. Discrete time markov chains with interval probabilities. International journal of approximate reasoning,
50(8):1314–1329, 2009.
[331] F. Smarandache. Introduction to superhyperalgebra and neutrosophic superhyperalgebra. Journal of Algebraic Hyper-
structures and Logical Algebras, 2022.
[332] Florentin Smarandache. A unifying field in logics: Neutrosophic logic. In Philosophy, pages 1–141. American
Research Press, 1999.
[333] Florentin Smarandache. Definitions derived from neutrosophics. Infinite Study, 2003.
[334] Florentin Smarandache. Neutrosophic set – a generalization of the intuitionistic fuzzy set. International journal of pure
and applied mathematics, 24(3):287, 2005.
[335] Florentin Smarandache. A unifying field in logics: neutrosophic logic. Neutrosophy, neutrosophic set, neutrosophic
probability: neutrosophic logic. Neutrosophy, neutrosophic set, neutrosophic probability. Infinite Study, 2005.
[336] Florentin Smarandache. Neutrosophic physics: More problems, more solutions. 2010.
[337] Florentin Smarandache. n-valued refined neutrosophic logic and its applications to physics. Infinite study, 4:143–146,
2013.
[338] Florentin Smarandache. Plithogenic set, an extension of crisp, fuzzy, intuitionistic fuzzy, and neutrosophic sets-
revisited. Infinite study, 2018.
[339] Florentin Smarandache. Plithogeny, plithogenic set, logic, probability, and statistics. arXiv preprint arXiv:1808.03948,
2018.
[340] Florentin Smarandache. n-superhypergraph and plithogenic n-superhypergraph. Nidus Idearum, 7:107–113, 2019.
[341] Florentin Smarandache. Extension of HyperGraph to n-SuperHyperGraph and to Plithogenic n-SuperHyperGraph,
and Extension of HyperAlgebra to n-ary (Classical-/Neutro-/Anti-) HyperAlgebra. Infinite Study, 2020.
[342] Florentin Smarandache. History of superhyperalgebra and neutrosophic superhyperalgebra (revisited again). Neutro-
sophic Algebraic Structures and Their Applications, page 10, 2022.
[343] Florentin Smarandache. Introduction to the n-SuperHyperGraph-the most general form of graph today. Infinite Study,
2022.
[344] Florentin Smarandache. Practical applications of IndetermSoft Set and IndetermHyperSoft Set and introduction to
TreeSoft Set as an extension of the MultiSoft Set. Infinite Study, 2022.

[345] Florentin Smarandache. The SuperHyperFunction and the Neutrosophic SuperHyperFunction (revisited again), vol-
ume 3. Infinite Study, 2022.
[346] Florentin Smarandache. Decision making based on valued fuzzy superhypergraphs. 2023.
[347] Florentin Smarandache. Foundation of the superhypersoft set and the fuzzy extension superhypersoft set: A new
vision. Neutrosophic Systems with Applications, 11:48–51, 2023.
[348] Florentin Smarandache. New types of topologies and neutrosophic topologies. Neutrosophic Systems with Applica-
tions, 1:1–3, 2023.
[349] Florentin Smarandache. New types of topologies and neutrosophic topologies (improved version). Neutrosophic Sets
and Systems, 57(1):14, 2023.
[350] Florentin Smarandache. SuperHyperFunction, SuperHyperStructure, Neutrosophic SuperHyperFunction and Neutro-
sophic SuperHyperStructure: Current understanding and future directions. Infinite Study, 2023.
[351] Florentin Smarandache. Foundation of superhyperstructure & neutrosophic superhyperstructure. Neutrosophic Sets
and Systems, 63(1):21, 2024.
[352] Florentin Smarandache. Superhyperstructure & neutrosophic superhyperstructure, 2024. Accessed: 2024-12-01.
[353] Florentin Smarandache. Short introduction to standard and nonstandard neutrosophic set and logic. Neutrosophic Sets
and Systems, 77:395–404, 2025.
[354] Florentin Smarandache and Said Broumi. Neutrosophic graph theory and algorithms. IGI Global, 2019.
[355] Florentin Smarandache and NM Gallup. Generalization of the intuitionistic fuzzy set to the neutrosophic set. In
International Conference on Granular Computing, pages 8–42. Citeseer, 2006.
[356] Florentin Smarandache, WB Kandasamy, and K Ilanthenral. Applications of bimatrices to some fuzzy and neutrosophic
models. 2005.
[357] Florentin Smarandache and Nivetha Martin. Plithogenic n-super hypergraph in novel multi-attribute decision making.
Infinite Study, 2020.
[358] Florentin Smarandache, Memet Şahin, Derya Bakbak, Vakkas Uluçay, and Abdullah Kargın. Neutrosophic SuperHy-
perAlgebra and New Types of Topologies. Infinite Study, 2023.
[359] Florentin Smarandache and AA Salama. Neutrosophic crisp set theory. 2015.
[360] Florentin Smarandache, A. Saranya, A. Kalavathi, and S. Krishnaprakash. Neutrosophic superhypersoft sets. Neutro-
sophic Sets and Systems, 77:41–53, 2024.
[361] Chenguang Song, Yiyang Teng, Yangfu Zhu, Siqi Wei, and Bin Wu. Dynamic graph neural network for fake news
detection. Neurocomputing, 505:362–374, 2022.
[362] Seok-Zun Song, Seon Jeong Kim, and Young Bae Jun. Hyperfuzzy ideals in bck/bci-algebras. Mathematics, 5(4):81,
2017.
[363] Francesco Sorrentino. Synchronization of hypernetworks of coupled dynamical systems. New Journal of Physics,
14(3):033035, 2012.
[364] Fazeelat Sultana, Muhammad Gulistan, Mumtaz Ali, Naveed Yaqoob, Muhammad Khan, Tabasam Rashid, and Tauseef
Ahmed. A study of plithogenic graphs: applications in spreading coronavirus disease (covid-19) globally. Journal of
ambient intelligence and humanized computing, 14(10):13139–13159, 2023.
[365] Jinjun Tang, Fang Liu, Wenhui Zhang, Ruimin Ke, and Yajie Zou. Lane-changes prediction based on adaptive fuzzy
neural network. Expert Syst. Appl., 91:452–463, 2018.
[366] Jinjun Tang, Fang Liu, Yajie Zou, Weibin Zhang, and Yinhai Wang. An improved fuzzy neural network for traffic speed
prediction considering periodic characteristic. IEEE Transactions on Intelligent Transportation Systems, 18:2340–
2350, 2017.
[367] Trevor Tarr. Leibniz, Calculus, and The Hyperreal Numbers. PhD thesis, 2024.
[368] Sérgio Dinis Teixeira de Sousa, Isabel Lopes, and Eusébio Nunes. Graph theory approach to quantify uncertainty of
performance measures. 2015.
[369] Lev Telyatnikov, Maria Sofia Bucarelli, Guillermo Bernardez, Olga Zaghen, Simone Scardapane, and Pietro Lió. Hy-
pergraph neural networks through the lens of message passing: A common perspective to homophily and architecture
design. ArXiv, abs/2310.07684, 2023.
[370] Ankit Thakkar and Kinjal Chaudhari. Predicting stock trend using an integrated term frequency-inverse document
frequency-based feature weight matrix with neural networks. Appl. Soft Comput., 96:106684, 2020.
[371] P. Thirunavukarasu and R. Suresh. Annals of on regular complex neutrosophic graphs. 2017.
[372] P. Thirunavukarasu and R. Suresh. On regular complex neutrosophic graphs. 2017.

[373] Yuanyuan Tian, Richard C Mceachin, Carlos Santos, David J States, and Jignesh M Patel. Saga: a subgraph matching
tool for biological graphs. Bioinformatics, 23(2):232–239, 2007.
[374] Eric J. Topol. High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine,
25:44 – 56, 2019.
[375] Vicenç Torra. Hesitant fuzzy sets. International journal of intelligent systems, 25(6):529–539, 2010.
[376] Vicenç Torra and Yasuo Narukawa. On hesitant fuzzy sets and decision. In 2009 IEEE international conference on
fuzzy systems, pages 1378–1382. IEEE, 2009.
[377] Mirko Torrisi, Gianluca Pollastri, and Quan Le. Deep learning methods in protein structure prediction. Computational
and Structural Biotechnology Journal, 18:1301–1310, 2020.
[378] Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles
Gelada, Kevin Swersky, Pierre-Antoine Manzagol, et al. Meta-dataset: A dataset of datasets for learning to learn from
few examples. arXiv preprint arXiv:1903.03096, 2019.
[379] Aleksandar Trifunovic and William J Knottenbelt. A parallel algorithm for multilevel k-way hypergraph partitioning. In
Third International Symposium on Parallel and Distributed Computing/Third International Workshop on Algorithms,
Models and Tools for Parallel Computing on Heterogeneous Networks, pages 114–121. IEEE, 2004.
[380] Nenad Trinajstic. Chemical graph theory. CRC press, 2018.
[381] Anton Tsitsulin, John Palowitch, Bryan Perozzi, and Emmanuel Müller. Graph clustering with graph neural networks.
ArXiv, abs/2006.16904, 2020.
[382] Tran Manh Tuan, Pham Minh Chuan, Mumtaz Ali, Tran Thi Ngan, Mamta Mittal, and Le Hoang Son. Fuzzy and
neutrosophic modeling for link prediction in social networks. Evolving Systems, 10:629 – 634, 2018.
[383] Vakkas Ulucay and Memet Sahin. Intuitionistic fuzzy soft expert graphs with application. Uncertainty Discourse and
Applications, 1(1):1–10, 2024.
[384] D Ulyanov. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022,
2016.
[385] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph
attention networks. arXiv preprint arXiv:1710.10903, 2017.
[386] Guillaume Verdon, Trevor McCourt, Enxhell Luzhnica, Vikash Singh, Stefan Leichenauer, and Jack Hidary. Quantum
graph neural networks. arXiv preprint arXiv:1909.12264, 2019.
[387] Arvind Kumar Verma and Sunil Rajotia. Feature vector: a graph-based feature recognition methodology. International
Journal of Production Research, 42:3219 – 3234, 2004.
[388] Johannes Von Oswald, Christian Henning, Benjamin F Grewe, and João Sacramento. Continual learning with hyper-
networks. arXiv preprint arXiv:1906.00695, 2019.
[389] George Voutsadakis. Introduction to set theory. A Problem Based Journey from Elementary Number Theory to an
Introduction to Matrix Theory, 2021.
[390] Chu Wang, Babak Samari, and Kaleem Siddiqi. Local spectral graph convolution for point set feature learning. In
European Conference on Computer Vision, 2018.
[391] Chun Wang, Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, and Chengqi Zhang. Attributed graph clustering: A
deep attentional embedding approach. In International Joint Conference on Artificial Intelligence, 2019.
[392] Dan Wang, Xi ke Tian, Lu Li, Chao Yang, Langchun Xing, and Jun Shang. A fault diagnosis algorithm for distribution
networks based on graph convolutional neural networks. 2024 20th International Conference on Natural Computation,
Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), pages 1–5, 2024.
[393] Jia Wang and Zhenyuan Wang. Using neural networks to determine sugeno measures by statistics. Neural Networks,
10:183–195, 1997.
[394] Jianling Wang, Kaize Ding, Ziwei Zhu, and James Caverlee. Session-based recommendation with hypergraph attention
networks. In Proceedings of the 2021 SIAM international conference on data mining (SDM), pages 82–90. SIAM,
2021.
[395] Jingcheng Wang, Yong Zhang, Yun Wei, Yongli Hu, Xinglin Piao, and Baocai Yin. Metro passenger flow prediction via
dynamic hypergraph convolution networks. IEEE Transactions on Intelligent Transportation Systems, 22:7891–7903,
2021.
[396] Qian Wang and Zengtai Gong. An application of fuzzy hypergraphs and hypergraphs in granular computing. Inf. Sci.,
429:296–314, 2018.
[397] Qian Wang and Zengtai Gong. Structural centrality in fuzzy social networks based on fuzzy hypergraph theory.
Computational and Mathematical Organization Theory, 26:236 – 254, 2020.

[398] Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, and Tat-Seng Chua. Kgat: Knowledge graph attention network for
recommendation. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data
mining, pages 950–958, 2019.
[399] Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu. Heterogeneous graph attention
network. In The world wide web conference, pages 2022–2032, 2019.
[400] Yuge Wang, Xibei Yang, Qiguo Sun, Yuhua Qian, and Qihang Guo. Purity skeleton dynamic hypergraph neural
network. Neurocomputing, 610:128539, 2024.
[401] Yuxin Wang, Quan Gan, Xipeng Qiu, Xuanjing Huang, and David Paul Wipf. From hypergraph energy functions to
hypergraph neural networks. In International Conference on Machine Learning, 2023.
[402] Stanley Wasserman and Katherine Faust. Social network analysis: Methods and applications. 1994.
[403] Ramon Elias Weber, Caitlin Mueller, and Christoph Reinhart. A hypergraph model shows the carbon reduction poten-
tial of effective space use in housing. Nature Communications, 15(1):8327, 2024.
[404] Tong Wei, Junlin Hou, and Rui Feng. Fuzzy graph neural network for few-shot learning. In 2020 International joint
conference on neural networks (IJCNN), pages 1–8. IEEE, 2020.
[405] Yuxiang Wei, Shiqi Wang, and Yun Li. Graph theory based machine learning for analog circuit design. In 2023 28th
International Conference on Automation and Computing (ICAC), pages 1–6. IEEE, 2023.
[406] Douglas Brent West et al. Introduction to graph theory, volume 2. Prentice hall Upper Saddle River, 2001.
[407] Tomasz Witczak. Interior and closure in anti-minimal and anti-biminimal spaces in the frame of anti-topology. Neu-
trosophic Sets and Systems, 56(1):29, 2023.
[408] Wolfgang Woess. Random walks on infinite graphs and groups. Number 138. Cambridge university press, 2000.
[409] Dianshuang Wu, Guangquan Zhang, and Jie Lu. A fuzzy preference tree-based recommender system for personalized
business-to-business e-services. IEEE Transactions on Fuzzy Systems, 23:29–43, 2015.
[410] Hongjie Wu, Weizhong Lu, Meiling Qian, Yu Zhang, Yijie Ding, Jiawei Shen, Xiaoyi Chen, Haiou Li, and Qiming Fu.
Identification of membrane protein types based using hypergraph neural network. Current Bioinformatics, 2023.
[411] Jianxin Wu. Introduction to convolutional neural networks. National Key Lab for Novel Software Technology. Nanjing
University. China, 5(23):495, 2017.
[412] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey
on graph neural networks. IEEE transactions on neural networks and learning systems, 32(1):4–24, 2020.
[413] Ruxia Liang, Qian Zhang, and Jianqiang Wang. Hierarchical fuzzy graph attention network for group recommenda-
tion. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pages 1–6, 2021.
[414] Jun Xie, Qiguang Miao, Ruyi Liu, Wentian Xin, Lei Tang, Sheng Zhong, and Xuesong Gao. Attention adjacency
matrix based graph convolutional networks for skeleton-based action recognition. Neurocomputing, 440:230–239,
2021.
[415] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint
arXiv:1810.00826, 2018.
[416] Xiaolong Xu, Qinting Jiang, Peiming Zhang, Xuefei Cao, Mohammad Hossein Khosravi, Linss T. Alex, Lianyong Qi,
and Wanchun Dou. Game theory for distributed iov task offloading with fuzzy neural network in edge computing.
IEEE Transactions on Fuzzy Systems, 30:4593–4604, 2022.
[417] Zeshui Xu. Hesitant fuzzy sets theory, volume 314. Springer, 2014.
[418] Yong Bo Xuan, Chang Qiang Huang, and Wang Xi Li. Air combat situation assessment by gray fuzzy bayesian
network. Applied Mechanics and Materials, 69:114–119, 2011.
[419] Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings.
In International conference on machine learning, pages 40–48. PMLR, 2016.
[420] Naveed Yaqoob and Muhammad Akram. Complex neutrosophic graphs. Infinite Study, 2018.
[421] Ouyang Yi, Bin Guo, Xing Tang, Xiuqiang He, Jian Xiong, and Zhiwen Yu. Learning cross-domain representation
with multi-graph neural network. ArXiv, abs/1905.10095, 2019.
[422] Pairote Yiarayong. On 2-superhyperleftalmostsemihypergroups. Neutrosophic Sets and Systems, 51(1):33, 2022.
[423] G George Yin and Qing Zhang. Discrete-time Markov chains: two-time-scale methods and applications, volume 55.
Springer Science & Business Media, 2005.
[424] Hao Yin, Austin R. Benson, Jure Leskovec, and David F. Gleich. Local higher-order graph clustering. Proceedings of
the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017.

[425] Nan Yin, Li Shen, Huan Xiong, Bin Gu, Chong Chen, Xian-Sheng Hua, Siwei Liu, and Xiao Luo. Messages are never
propagated alone: Collaborative hypergraph neural network for time-series forecasting. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 46:2333–2347, 2023.
[426] Zhizhuo Yin, Kai Han, Pengzi Wang, and Xi Zhu. H3gnn: Hybrid hierarchical hypergraph neural network for person-
alized session-based recommendation. ACM Transactions on Information Systems, 42:1 – 30, 2023.
[427] Jean-Gabriel Young, Giovanni Petri, and Tiago P Peixoto. Hypergraph reconstruction from network data. Communi-
cations Physics, 4(1):135, 2021.
[428] Jiajun Yu, Zhihao Wu, Jinyu Cai, Adele Lu Jia, and Jicong Fan. Kernel readout for graph neural networks.
[429] Hao Yuan, Haiyang Yu, Shurui Gui, and Shuiwang Ji. Explainability in graph neural networks: A taxonomic survey.
IEEE transactions on pattern analysis and machine intelligence, 45(5):5782–5799, 2022.
[430] Lotfi A Zadeh. Fuzzy sets. Information and control, 8(3):338–353, 1965.
[431] Lotfi A Zadeh. A fuzzy-set-theoretic interpretation of linguistic hedges. 1972.
[432] Lotfi A Zadeh. Fuzzy sets and their application to pattern classification and clustering analysis. In Classification and
clustering, pages 251–299. Elsevier, 1977.
[433] Lotfi A Zadeh. Fuzzy sets versus probability. Proceedings of the IEEE, 68(3):421–421, 1980.
[434] Lotfi A Zadeh. Fuzzy logic, neural networks, and soft computing. In Fuzzy sets, fuzzy logic, and fuzzy systems: selected
papers by Lotfi A Zadeh, pages 775–782. World Scientific, 1996.
[435] Lotfi A Zadeh. Fuzzy sets and information granularity. In Fuzzy sets, fuzzy logic, and fuzzy systems: selected papers
by Lotfi A Zadeh, pages 433–448. World Scientific, 1996.
[436] Lotfi A Zadeh. A note on prototype theory and fuzzy sets. In Fuzzy sets, fuzzy logic, and fuzzy systems: Selected
papers by Lotfi A Zadeh, pages 587–593. World Scientific, 1996.
[437] Lotfi Asker Zadeh. Fuzzy sets as a basis for a theory of possibility. Fuzzy sets and systems, 1(1):3–28, 1978.
[438] Jin Zeng, Gene Cheung, Michael K. Ng, Jiahao Pang, and Cheng Yang. 3d point cloud denoising using graph laplacian
regularization of a low dimensional manifold model. IEEE Transactions on Image Processing, 29:3474–3489, 2018.
[439] Chun-Yang Zhang, Yue-Na Lin, C. L. Philip Chen, Hong-Yu Yao, Hai-Chun Cai, and Wu-Peng Fang. Fuzzy represen-
tation learning on graph. IEEE Transactions on Fuzzy Systems, 31:3358–3370, 2023.
[440] Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, and N. Chawla. Heterogeneous graph neural network.
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019.
[441] Junlong Zhang and Yubin Luo. Degree centrality, betweenness centrality, and closeness centrality in social network.
2017.
[442] Leijie Zhang, Ye Shi, Yu-Cheng Chang, and Chin-Teng Lin. Hierarchical fuzzy neural networks with privacy preser-
vation for heterogeneous big data. IEEE Transactions on Fuzzy Systems, 29(1):46–58, 2020.
[443] Yuanzhao Zhang, Maxime Lucas, and Federico Battiston. Higher-order interactions shape collective dynamics differ-
ently in hypergraphs and simplicial complexes. Nature communications, 14(1):1605, 2023.
[444] Yuhao Zhang, Peng Qi, and Christopher D. Manning. Graph convolution over pruned dependency trees improves
relation extraction. ArXiv, abs/1809.10185, 2018.
[445] Hua Zhao, Zeshui Xu, Shousheng Liu, and Zhong Wang. Intuitionistic fuzzy mst clustering algorithms. Computers &
Industrial Engineering, 62(4):1130–1140, 2012.
[446] Ling Zhao, Yujiao Song, Chao Zhang, Yu Liu, Pu Wang, Tao Lin, Min Deng, and Haifeng Li. T-gcn: A temporal graph
convolutional network for traffic prediction. IEEE Transactions on Intelligent Transportation Systems, 21:3848–3858,
2018.
[447] Wufan Zhao, Claudio Persello, and Alfred Stein. Extracting planar roof structures from very high resolution images
using graph neural networks. ISPRS Journal of Photogrammetry and Remote Sensing, 187:34–45, 2022.
[448] Chunhang Zheng and Kechao Cai. Genet: A graph neural network-based anti-noise task-oriented semantic communi-
cation paradigm. arXiv preprint arXiv:2403.18296, 2024.
[449] Xin Zheng, Yi Wang, Yixin Liu, Ming Li, Miao Zhang, Di Jin, Philip S Yu, and Shirui Pan. Graph neural networks for
graphs with heterophily: A survey. arXiv preprint arXiv:2202.07082, 2022.
[450] Guo Zhenyu and Zhang Wanhong. An efficient inference schema for gene regulatory networks using directed graph
neural networks. In 2023 42nd Chinese Control Conference (CCC), pages 6829–6834. IEEE, 2023.
[451] Luying Zhong, Jinbin Yang, Zhaoliang Chen, and Shiping Wang. Contrastive graph convolutional networks with
generative adjacency matrix. IEEE Transactions on Signal Processing, 71:772–785, 2023.

[452] Hongliang Zhou and Rik Sarkar. Leveraging graph machine learning for moonlighting protein prediction: A ppi
network and physiochemical feature approach. bioRxiv, pages 2023–11, 2023.
[453] Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and
Maosong Sun. Graph neural networks: A review of methods and applications. AI open, 1:57–81, 2020.
[454] Peng Zhou, Zongqian Wu, Xiangxiang Zeng, Guoqiu Wen, Junbo Ma, and Xiaofeng Zhu. Totally dynamic hypergraph
neural networks. In International Joint Conference on Artificial Intelligence, 2023.
[455] Hao Zhu and Piotr Koniusz. Simple spectral graph convolution. In International Conference on Learning Representa-
tions, 2021.
