superhyperalgorithm_V6
Takaaki Fujita
All content following this page was uploaded by Takaaki Fujita on 10 May 2025.
Abstract
An algorithm is a finite, well-defined computational procedure that transforms inputs into outputs through a
structured sequence of steps, guaranteeing termination and correctness. A multialgorithm comprises multiple
algorithms augmented with a selection mechanism that dynamically chooses the most appropriate procedure
based on input characteristics or contextual conditions. While these concepts have deep roots in computer
science and beyond, this paper introduces two novel generalizations: the Hyperalgorithm and the Super-
hyperalgorithm. By leveraging the mathematical frameworks of hyperstructures and superhyperstructures,
respectively, we extend the classical notion of computation to higher-order operations on sets and iterated pow-
ersets. We present formal definitions, illustrative examples, and a preliminary analysis of their computational
properties, laying the groundwork for a unified theory of higher-order algorithms.
1.1 Set and Powerset
In what follows, we employ the concepts of the powerset and the 𝑛-th powerset as fundamental building blocks
for our later constructions.
Definition 1.1 (Set). [30] A set is a collection of distinct objects, called elements, which are unambiguously
defined. If 𝐴 is a set and 𝑥 is an element of 𝐴, we write 𝑥 ∈ 𝐴. Sets are usually denoted by enclosing their
elements in curly braces.
Definition 1.2 (Subset). [30] For any two sets 𝐴 and 𝐵, 𝐴 is said to be a subset of 𝐵 (written 𝐴 ⊆ 𝐵) if every
element of 𝐴 is also an element of 𝐵:
∀𝑥 ∈ 𝐴, 𝑥 ∈ 𝐵.
If additionally 𝐴 ≠ 𝐵, then 𝐴 is called a proper subset of 𝐵, denoted 𝐴 ⊂ 𝐵.
Definition 1.3 (Empty Set). [30] The empty set, denoted ∅, is the unique set that contains no elements:
∀𝑥, 𝑥 ∉ ∅.
It follows that ∅ is a subset of every set.
Definition 1.4 (Base Set). [28] A base set 𝑆 is the underlying set from which more elaborate structures, such
as powersets and hyperstructures, are constructed. It is defined by
𝑆 = {𝑥 | 𝑥 belongs to a specified domain}.
All elements appearing in constructions like P (𝑆) or P𝑛 (𝑆) are drawn from 𝑆.
Definition 1.5 (Powerset). [28, 43] The powerset of a set 𝑆, denoted P (𝑆), is the collection of all subsets of
𝑆, including both ∅ and 𝑆 itself:
P (𝑆) = { 𝐴 | 𝐴 ⊆ 𝑆}.
Definition 1.6 (𝑛-th Powerset). (cf. [28,33,47,52,53]) The 𝑛-th powerset of a set 𝐻, denoted 𝑃𝑛 (𝐻), is defined
recursively by:
𝑃1 (𝐻) = P (𝐻), 𝑃𝑛+1 (𝐻) = P (𝑃𝑛 (𝐻)) for 𝑛 ≥ 1.
Similarly, the 𝑛-th nonempty powerset, denoted 𝑃𝑛∗ (𝐻), is given by:
𝑃1∗ (𝐻) = P ∗ (𝐻), 𝑃𝑛+1∗ (𝐻) = P ∗ (𝑃𝑛∗ (𝐻)) for 𝑛 ≥ 1,
where P ∗ (𝐻) denotes the powerset of 𝐻 with the empty set omitted.
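As a concrete illustration (not part of the formal development), the powerset and 𝑛-th powerset of a small finite base set can be computed directly; the sketch below uses Python frozensets so that sets may themselves serve as elements.

```python
from itertools import combinations

def powerset(s):
    """P(S): the collection of all subsets of s (Definition 1.5)."""
    items = list(s)
    return {frozenset(c)
            for r in range(len(items) + 1)
            for c in combinations(items, r)}

def nth_powerset(s, n):
    """P^n(S): iterate the powerset construction n times (Definition 1.6)."""
    current = frozenset(s)
    for _ in range(n):
        current = frozenset(powerset(current))
    return current
```

For |𝑆| = 2 this gives |P1 (𝑆)| = 4 and |P2 (𝑆)| = 16, matching the recursion; the sizes grow as an iterated exponential, which is why the later complexity results track them explicitly.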
1.2 Hyperstructure and Superhyperstructure
To establish a robust theoretical foundation for hyperstructures [2,3,25,38] and superhyperstructures [19–24,27],
we introduce key definitions that formalize their properties.
Definition 1.7 (Classical Structure). (cf. [47, 53]) A Classical Structure is a mathematical system based on a
nonempty set 𝐻 that is endowed with one or more classical operations satisfying a prescribed set of axioms.
A classical operation is a mapping
#0 : 𝐻 𝑚 → 𝐻,
where 𝑚 ≥ 1 and 𝐻 𝑚 denotes the 𝑚-fold Cartesian product of 𝐻. Typical examples include operations like
addition or multiplication in algebraic systems such as groups, rings, or fields.
A hyperoperation on a set 𝑆 is a mapping
◦ : 𝑆 × 𝑆 → P (𝑆),
which assigns to each pair of elements a subset of 𝑆 rather than a single element; a set equipped with such an operation is called a hyperstructure.
Definition 1.10 (SuperHyperOperations). [53] Let 𝐻 be a nonempty set and 𝑃(𝐻) its powerset. Define the
𝑛-th powerset 𝑃𝑛 (𝐻) recursively by 𝑃0 (𝐻) = 𝐻 and 𝑃 𝑘+1 (𝐻) = 𝑃(𝑃 𝑘 (𝐻)) for 𝑘 ≥ 0. An (𝑚, 𝑛)-SuperHyperOperation is a mapping
◦ (𝑚,𝑛) : 𝐻 𝑚 → 𝑃∗𝑛 (𝐻),
where 𝑃∗𝑛 (𝐻) denotes the 𝑛-th powerset of 𝐻 (either excluding the empty set, which we refer to as a classical-type
operation, or including it, known as a Neutrosophic-type operation). These operations serve as higher-order
generalizations of hyperoperations by capturing multi-level complexity through iterative powerset constructions.
Example 1.11 (Second-Order Social Connections). Let 𝐻 be a set of users in a social network, and for each
user 𝑥 let Friends(𝑥) ⊆ 𝐻 denote the set of 𝑥's friends. Define
◦ (2,2) : 𝐻 × 𝐻 −→ P 2 (𝐻)
by
𝑥 ◦ (2,2) 𝑦 = {Friends(𝑥), Friends(𝑦), Friends(𝑥) ∩ Friends(𝑦)}.
This SuperHyperOperation captures both first-order and second-order social connections in one higher-order
operation.
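To make this (2, 2)-SuperHyperOperation concrete, the following sketch evaluates it on a small, entirely hypothetical friendship table; the user names and the relation are illustrative assumptions, not data from the text.

```python
# Hypothetical friendship relation on a small user set H (illustrative only).
FRIENDS = {
    "ann": {"bob", "eve"},
    "bob": {"ann", "eve"},
    "eve": {"ann", "bob", "joe"},
    "joe": {"eve"},
}

def shop_22(x, y):
    """x o(2,2) y = {Friends(x), Friends(y), Friends(x) n Friends(y)},
    an element of P^2(H): a set whose members are subsets of H."""
    fx, fy = frozenset(FRIENDS[x]), frozenset(FRIENDS[y])
    return {fx, fy, fx & fy}
```

For instance, shop_22("ann", "bob") contains the common-friend set {eve} alongside the two individual friend sets.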
Definition 1.12 (𝑛-Superhyperstructure). (cf. [18, 23, 26, 47, 53]) An 𝑛-superhyperstructure generalizes a
hyperstructure by incorporating the 𝑛-th powerset of a base set. It is defined as:
SH𝑛 = (P𝑛 (𝑆), ◦),
where 𝑆 is the base set, P𝑛 (𝑆) is its 𝑛-th powerset, and ◦ is an operation on elements of P𝑛 (𝑆).
Example 1.13 (Cloud Infrastructure as a 3-Superhyperstructure). Let 𝑆 be the finite set of all physical servers
in a global cloud provider. Form the iterated powersets:
P 1 (𝑆) = {rack groupings}, P 2 (𝑆) = {data center assemblies}, P 3 (𝑆) = {regional partitions}.
This example illustrates a 3-superhyperstructure modeling hierarchical cloud infrastructure from servers up to
global regions.
1.3 Algorithm
An algorithm is a finite, well-defined computational procedure that transforms inputs into outputs through a
sequence of effective steps, ensuring termination and correctness [7, 11, 16, 34, 44, 46, 60].
Definition 1.14 (Algorithm). An algorithm is a finite, well-defined, and effective computational procedure
that transforms an input into an output after a finite number of steps. Formally, an algorithm may be regarded
as a total computable function
𝐴 : 𝐼 → 𝑂,
where 𝐼 is the set of admissible inputs and 𝑂 the set of outputs, and every step of the procedure is effective.
Example 1.15 (Euclidean Algorithm). (cf. [15, 37, 45]) Given two positive integers 𝑎 and 𝑏, repeatedly replace
the pair (𝑎, 𝑏) by (𝑏, 𝑎 mod 𝑏) until the second component becomes 0; the first component is then the answer.
This algorithm terminates in a finite number of steps and returns gcd(𝑎, 𝑏) as the output.
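The Euclidean algorithm just described can be sketched in a few lines; termination follows because the second argument strictly decreases at every iteration.

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return a
```

For example, gcd(48, 18) passes through the pairs (48, 18) → (18, 12) → (12, 6) → (6, 0) and returns 6.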
1.4 Multialgorithm and Parallel Algorithm
A multialgorithm is a collection of multiple algorithms with a selection mechanism that dynamically chooses the
most suitable one based on input characteristics or conditions [1, 8, 14, 17, 32, 54, 59, 61].
Definition 1.16 (Multialgorithm). A multialgorithm is a composite computational framework that consists
of a finite collection of algorithms and a selection mechanism that determines, for each input 𝑥 ∈ 𝐼, which
algorithm to apply to produce the output. Formally, let
A = {𝐴1 , 𝐴2 , . . . , 𝐴 𝑘 }
be a finite set of algorithms, where each 𝐴𝑖 : 𝐼 → 𝑂 is a total computable function. A selection function
𝜇 : 𝐼 → {1, 2, . . . , 𝑘 }
determines which algorithm to apply; the multialgorithm 𝑀 : 𝐼 → 𝑂 is then defined by
𝑀 (𝑥) = 𝐴 𝜇(𝑥) (𝑥) for all 𝑥 ∈ 𝐼.
The selection function 𝜇 may depend on criteria such as input size, structure, or performance requirements.
Example 1.17 (Hybrid Sorting Algorithm). (cf. [9, 13, 62]) A typical example of a multialgorithm is a hybrid
sorting algorithm that chooses between Insertion Sort and Merge Sort based on the size of the input array. For
an input array 𝑥 of length 𝑛, define the selection function by
𝜇(𝑥) = 1 if 𝑛 ≤ 𝑛0 , and 𝜇(𝑥) = 2 if 𝑛 > 𝑛0 ,
where 𝐴1 is Insertion Sort (efficient for small 𝑛) and 𝐴2 is Merge Sort (efficient for large 𝑛), with 𝑛0 as a
threshold value. The multialgorithm then computes:
𝑀 (𝑥) = 𝐴 𝜇 ( 𝑥 ) (𝑥).
This approach leverages the strengths of each algorithm based on the specific characteristics of the input.
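Example 1.17 can be sketched directly; the threshold 𝑛0 below is an assumed tuning constant, and the two component sorts are textbook implementations standing in for 𝐴1 and 𝐴2.

```python
N0 = 16  # threshold n0; an assumed tuning constant

def insertion_sort(a):            # A1: efficient for small inputs
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):                # A2: efficient for large inputs
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def mu(a):
    """Selection function: 1 for small arrays, 2 otherwise."""
    return 1 if len(a) <= N0 else 2

def hybrid_sort(a):
    """The multialgorithm M(x) = A_mu(x)(x)."""
    return insertion_sort(a) if mu(a) == 1 else merge_sort(a)
```

The selection function inspects only the input length, so the dispatch itself costs 𝑂(1).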
A related concept is parallel algorithms [6,34]. For reference, the mathematical definition of parallel algorithms
is provided below.
Definition 1.18 (Parallel Algorithm). Let 𝐼 be a set of inputs, and let 𝑂 be a set of outputs. A parallel algorithm
consists of:
• A finite set of processing units 𝑃 = {𝑃1 , 𝑃2 , . . . , 𝑃𝑚 }, where each 𝑃𝑖 executes a part of the computation
independently.
• A task decomposition function 𝐷 : 𝐼 → P (𝑆), where 𝑆 is the set of subtasks and 𝐷 (𝑥) partitions an
input 𝑥 ∈ 𝐼 into subtasks 𝑆1 , 𝑆2 , . . . , 𝑆 𝑘 for parallel execution.
• A set of local computations 𝐴𝑖 : 𝑆𝑖 → 𝑂 𝑖 performed by each processing unit 𝑃𝑖 , where 𝑂 𝑖 is the local
output space of 𝑃𝑖 .
• A result aggregation function 𝑅 : 𝑂 1 × 𝑂 2 × · · · × 𝑂 𝑚 → 𝑂 that combines local outputs to produce the
final result.
Example 1.19 (Parallel Merge Sort). (cf. [10, 12]) Consider an unsorted array 𝐴 of 𝑛 elements that needs to
be sorted. A parallel merge sort algorithm works as follows:
1. Task Decomposition: Divide the array 𝐴 into 𝑚 subarrays 𝐴1 , 𝐴2 , . . . , 𝐴𝑚 of (approximately) equal
size, where 𝑚 is the number of available processing units.
2. Local Computation: Each processor 𝑃𝑖 independently sorts its assigned subarray 𝐴𝑖 using a sequential
sorting algorithm (for example, merge sort).
3. Aggregation: In a parallel merging phase, the sorted subarrays are combined using a multiway merge
operation to produce the final sorted array.
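The three phases of Example 1.19 can be sketched as follows; this is an illustrative decomposition in which a thread pool stands in for the 𝑚 processing units (true parallel speedup in CPython would require process-based workers).

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def parallel_merge_sort(a, m=4):
    """Decompose into m chunks, sort each 'processor' locally, multiway-merge."""
    if not a:
        return []
    size = -(-len(a) // m)                        # ceiling division
    chunks = [a[i:i + size]                       # task decomposition D
              for i in range(0, len(a), size)]
    with ThreadPoolExecutor(max_workers=m) as pool:
        runs = list(pool.map(sorted, chunks))     # local computations A_i
    return list(heapq.merge(*runs))               # aggregation R: multiway merge
```

heapq.merge performs the multiway merge of the aggregation phase in a single pass over the sorted runs.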
2.1 Hyperalgorithm
Definition 2.1 (Hyperalgorithm). Let 𝐼 and 𝑂 be sets, let
A = { 𝐴𝑖 : 𝐼 → 𝑂 | 𝑖 ∈ 𝐽}
be a finite family of algorithms indexed by a finite set 𝐽, and let ◦ : P (𝑂) → P (𝑂) be a hyperoperation. The
Hyperalgorithm associated with A and ◦ is the map
𝐴 : 𝐼 → P (𝑂), 𝐴(𝑥) = ◦ ({ 𝐴𝑖 (𝑥) : 𝑖 ∈ 𝐽}).
Example 2.2 (Automotive Fault Diagnosis). Let 𝐼 be a set of observable symptoms and 𝑂 a set of candidate
faults, and let 𝐴1 be a rule-based diagnoser and 𝐴2 a data-driven diagnoser, combined by the union
hyperoperation. Then for any symptom profile 𝑥 ⊆ 𝐼,
𝐴(𝑥) = ◦ { 𝐴1 (𝑥), 𝐴2 (𝑥)} = 𝐴1 (𝑥) ∪ 𝐴2 (𝑥).
For example, if 𝑥 = {engine won’t start, battery light on}, then
𝐴1 (𝑥) = {alternator failure}, 𝐴2 (𝑥) = {dead battery, alternator failure},
so
𝐴(𝑥) = {dead battery, alternator failure}.
This Hyperalgorithm thus captures both rule-based and data-driven diagnostics, aggregating their outputs into
a combined set of candidate faults.
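Treating the two diagnosers as simple stand-ins, the union Hyperalgorithm of this example can be sketched as follows; the rule tables are hypothetical and exist only to reproduce the sets quoted above.

```python
def rule_based(symptoms):              # A1: stand-in rule-based diagnoser
    faults = set()
    if "battery light on" in symptoms:
        faults.add("alternator failure")
    return faults

def data_driven(symptoms):             # A2: stand-in data-driven diagnoser
    faults = set()
    if "engine won't start" in symptoms:
        faults |= {"dead battery", "alternator failure"}
    return faults

def hyper_diagnose(symptoms):
    """Hyperalgorithm with union as hyperoperation: A(x) = A1(x) | A2(x)."""
    return rule_based(symptoms) | data_driven(symptoms)
```

On 𝑥 = {engine won't start, battery light on} this returns {dead battery, alternator failure}, as in the example.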
Example 2.3 (House Price Interval Prediction as a Hyperalgorithm). Let
𝐼 = R𝑑
be the set of 𝑑-dimensional feature vectors describing residential properties, and let
𝑂=R
denote predicted sale prices (in thousands of dollars). Consider three regression models:
𝐴1 : 𝐼 → 𝑂, (linear regression)
𝐴2 : 𝐼 → 𝑂, (decision-tree regression)
𝐴3 : 𝐼 → 𝑂, (random-forest regression)
each of which is a total computable function on feature vectors. Aggregating their outputs with a hyperoperation,
for instance one returning the interval spanned by the minimum and maximum predictions, yields a
Hyperalgorithm 𝐴 : 𝐼 → P (𝑂) that outputs a set of candidate prices rather than a single point estimate.
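One way to realize this example in code is to take the interval spanned by the three predictions as the hyperoperation's output. The regressors below are trivial stand-ins (assumed formulas, not trained models), used only to show the aggregation step.

```python
def linear_model(x):    # A1: stand-in for a trained linear regression
    return 50.0 + 2.0 * sum(x)

def tree_model(x):      # A2: stand-in for a decision-tree regression
    return 60.0 if sum(x) > 10 else 40.0

def forest_model(x):    # A3: stand-in for a random-forest regression
    return 0.5 * (linear_model(x) + tree_model(x))

def price_interval(x):
    """Hyperalgorithm: map a feature vector to the interval [min, max]
    spanned by the three point predictions, a subset of O = R."""
    preds = [linear_model(x), tree_model(x), forest_model(x)]
    return (min(preds), max(preds))
```

The interval endpoints make the disagreement between the component models explicit, which a single point estimate cannot.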
Theorem 2.4 (Hyperalgorithm Generalizes Multialgorithm). Every multialgorithm is a special case of a
hyperalgorithm.
Proof. Let 𝑀 be a multialgorithm with algorithm family A and selection function 𝜇, and take ◦ to be the
hyperoperation that returns the singleton of the selected algorithm's output. Since the selection function 𝜇
picks a unique algorithm for each input, we have
𝐴(𝑥) = { 𝐴 𝜇 ( 𝑥 ) (𝑥)},
which is a singleton. Thus, the multialgorithm 𝑀 is recovered from 𝐴, showing that every multialgorithm is a
special case of a hyperalgorithm. □
Theorem 2.5 (Computability of Hyperalgorithms). Let 𝐼 and 𝑂 be countable sets, and let A = { 𝐴𝑖 : 𝐼 →
𝑂 | 𝑖 ∈ 𝐽} be a finite family of total computable functions. Suppose ◦ : P (𝑂) → P (𝑂) is a computable
hyperoperation on finite subsets of 𝑂. Then the Hyperalgorithm
𝐴 : 𝐼 −→ P (𝑂), 𝐴(𝑥) = ◦ ({ 𝐴𝑖 (𝑥) : 𝑖 ∈ 𝐽})
is a total computable function.
Proof. On input 𝑥 ∈ 𝐼:
1. For each 𝑖 ∈ 𝐽, compute 𝑦 𝑖 = 𝐴𝑖 (𝑥). Since each 𝐴𝑖 is total computable and 𝐽 is finite, this yields the
finite set {𝑦 𝑖 : 𝑖 ∈ 𝐽} ⊆ 𝑂.
2. Apply the computable hyperoperation ◦ to this finite set, producing ◦({𝑦 𝑖 }) ∈ P (𝑂).
Each step is a computable procedure on finite data, so 𝐴(𝑥) is computed in finite time. Hence 𝐴 is a total
computable function from 𝐼 to P (𝑂). □
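The two-step procedure in this proof translates directly into code; the sketch below builds a Hyperalgorithm from any finite family of total functions and any computable hyperoperation on finite subsets.

```python
def make_hyperalgorithm(algorithms, hyperop):
    """Return A(x) = hyperop({A_i(x) : i in J}), as in Theorem 2.5."""
    def A(x):
        outputs = frozenset(alg(x) for alg in algorithms)  # step 1: run each A_i
        return hyperop(outputs)                            # step 2: apply the hyperoperation
    return A

# Example: three arithmetic algorithms under the identity hyperoperation.
algs = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
A = make_hyperalgorithm(algs, lambda s: s)
```

Here A(3) is the finite subset {4, 6, 9} of 𝑂, computed in finitely many steps exactly as the proof prescribes.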
Theorem 2.6 (Monotonicity under Inclusion). Suppose the hyperoperation ◦ is monotone in the sense that
whenever 𝑆1 , 𝑆2 ⊆ 𝑂 satisfy 𝑆1 ⊆ 𝑆2 , we have ◦(𝑆1 ) ⊆ ◦(𝑆2 ). Let A 𝐽1 and A 𝐽2 be two finite algorithm families
with 𝐽1 ⊆ 𝐽2 , and let 𝐴 𝐽1 , 𝐴 𝐽2 be their corresponding Hyperalgorithms. Then for every 𝑥 ∈ 𝐼,
𝐴 𝐽1 (𝑥) ⊆ 𝐴 𝐽2 (𝑥).
Proof. Fix 𝑥 ∈ 𝐼. Since 𝐽1 ⊆ 𝐽2 , we have { 𝐴𝑖 (𝑥) : 𝑖 ∈ 𝐽1 } ⊆ { 𝐴𝑖 (𝑥) : 𝑖 ∈ 𝐽2 }, and monotonicity of ◦ yields
the claimed inclusion. Hence adding more component algorithms can only enlarge (or leave unchanged) the
Hyperalgorithm's output. □
Theorem 2.7 (Idempotence under Self-Application). If the hyperoperation ◦ satisfies ◦(◦(𝑆)) = ◦(𝑆) for every
finite 𝑆 ⊆ 𝑂, then the Hyperalgorithm 𝐴 is idempotent, in the sense that ◦(𝐴(𝑥)) = 𝐴(𝑥) for every 𝑥 ∈ 𝐼.
Proof. Write 𝑆𝑥 = { 𝐴𝑖 (𝑥) : 𝑖 ∈ 𝐽}, so that 𝐴(𝑥) = ◦(𝑆𝑥 ). Then
◦(𝐴(𝑥)) = ◦(◦(𝑆𝑥 )) = ◦(𝑆𝑥 ) = 𝐴(𝑥),
where the second equality uses the idempotence hypothesis ◦(◦(𝑆)) = ◦(𝑆). Thus the Hyperalgorithm is
unchanged by one additional round of hyperoperation. □
Theorem 2.8 (Time Complexity of Hyperalgorithms). Let 𝐼 be a set of inputs encoded in bit-strings of length
𝑛, and let A = { 𝐴𝑖 : 𝐼 → 𝑂 | 𝑖 = 1, . . . , 𝑘 } be a finite family of total computable component algorithms.
Suppose:
(i) each 𝐴𝑖 runs in time at most 𝑇𝑖 (𝑛) on inputs of length 𝑛; and
(ii) the hyperoperation ◦ can be applied to a 𝑘-element subset of 𝑂 in time 𝐻 (𝑘).
Then the Hyperalgorithm
𝐴(𝑥) = ◦ { 𝐴𝑖 (𝑥) : 𝑖 = 1, . . . , 𝑘 }
runs in time
𝑇total (𝑛) = ∑_{𝑖=1}^{𝑘} 𝑇𝑖 (𝑛) + 𝐻 (𝑘).
In particular, if 𝑘 is a fixed constant and each 𝑇𝑖 and 𝐻 are polynomial functions of 𝑛, then 𝐴 runs in polynomial
time.
Proof. On input 𝑥 of length 𝑛, we first compute each 𝑦 𝑖 = 𝐴𝑖 (𝑥) for 𝑖 = 1, . . . , 𝑘; this takes ∑_{𝑖=1}^{𝑘} 𝑇𝑖 (𝑛) steps
since the 𝐴𝑖 run sequentially. We then collect the finite set {𝑦 1 , . . . , 𝑦 𝑘 } and apply the hyperoperation ◦, which
by hypothesis takes 𝐻 (𝑘) steps. Summing these two phases yields the stated bound. □
Theorem 2.9 (Space Complexity of Hyperalgorithms). Under the same setup, suppose moreover that:
(i) each 𝐴𝑖 uses at most 𝑆𝑖 (𝑛) space on inputs of length 𝑛; and
(ii) applying ◦ to the 𝑘 stored outputs requires at most 𝑊 (𝑘) additional workspace.
Then 𝐴 runs in space ∑_{𝑖=1}^{𝑘} 𝑆𝑖 (𝑛) + 𝑊 (𝑘). If 𝑘 is fixed and each 𝑆𝑖 and 𝑊 are polynomial in 𝑛, then 𝐴 uses polynomial space.
Proof. To compute 𝐴(𝑥), we must store each intermediate result 𝑦 𝑖 = 𝐴𝑖 (𝑥), using at most ∑_{𝑖=1}^{𝑘} 𝑆𝑖 (𝑛) space.
Then applying ◦ may use an additional 𝑊 (𝑘) workspace (e.g. for temporary registers or indexing), but does not
overwrite the stored 𝑦 𝑖 . Finally, we produce the output hyper-set, whose storage is already accounted for within
∑_{𝑖} 𝑆𝑖 (𝑛). Therefore the total space required is the sum of the component storage plus 𝑊 (𝑘), as claimed. □
2.2 𝑛-Superhyperalgorithm
Definition 2.10 (𝑛-Superhyperalgorithm). Let 𝐼 and 𝑂 be sets, let
A = { 𝐴𝑖 : 𝐼 → 𝑂 | 𝑖 ∈ 𝐽}
be a finite family of algorithms, and let ◦ (𝑛) be an 𝑛-superhyperoperation on P𝑛 (𝑂). The 𝑛-superhyperalgorithm
associated with A and ◦ (𝑛) is the map
𝐴 (𝑛) : 𝐼 → P𝑛 (𝑂)
defined by
𝐴 (𝑛) (𝑥) = ◦ (𝑛) ({ 𝐴𝑖 (𝑥) : 𝑖 ∈ 𝐽}) for all 𝑥 ∈ 𝐼.
This definition generalizes the hyperalgorithm by operating within the richer structure of the 𝑛-th powerset. In
particular, when 𝑛 = 1 (so that P1 (𝑂) = P (𝑂)), the 𝑛-superhyperalgorithm coincides with the hyperalgorithm.
Example 2.11 (Automotive Fault Diagnosis via a 2-Superhyperalgorithm). Let 𝐼, 𝑂, 𝐴1 , and 𝐴2 be as in the
earlier fault-diagnosis example. For each symptom profile 𝑥, collect the two candidate-fault sets into
𝑆(𝑥) = { 𝐴1 (𝑥), 𝐴2 (𝑥)} ⊆ P 1 (𝑂), so that 𝑆(𝑥) ∈ P 1 (P 1 (𝑂)) = P2 (𝑂),
and apply a 2-superhyperoperation which returns, for any two singleton-sets 𝑈, 𝑉 ⊆ 𝑂, their union, intersection,
and symmetric difference. For 𝑥 = {engine won't start, battery light on} as before,
𝑆(𝑥) = { {alternator failure}, {dead battery, alternator failure} },
and
𝐴 (2) (𝑥) = { {dead battery, alternator failure}, {alternator failure}, {dead battery} },
explicitly listing the union, intersection, and symmetric difference of the two candidate-fault sets. This output
lies in P2 (𝑂) and demonstrates how a 2-Superhyperalgorithm encodes both individual diagnoses and their
combined relationships.
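The 2-superhyperoperation of this example is easy to evaluate mechanically; the sketch below reproduces the three sets listed above from the two candidate-fault sets.

```python
def shop_2(U, V):
    """2-superhyperoperation: {U union V, U intersect V, U symdiff V},
    an element of P^2(O)."""
    return {U | V, U & V, U ^ V}

A1x = frozenset({"alternator failure"})
A2x = frozenset({"dead battery", "alternator failure"})
result = shop_2(A1x, A2x)
```

result contains exactly the union, intersection, and symmetric difference quoted in the example.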
Example 2.12 (3-Superhyperalgorithm for Power Computation). Let 𝐼 = N and 𝑂 = N. Define two base
algorithms:
𝐴1 (𝑥) = 𝑥 2 , 𝐴2 (𝑥) = 𝑥 3 .
For each 𝑥 ∈ N, form the embedded singleton outputs
𝑆0 (𝑥) = {{𝑥 2 }, {𝑥 3 }} ⊆ P 1 (𝑂).
Second-level (2-super) step: apply the 2-superhyperoperation ◦ (2) (𝑆) = P (𝑆). Thus
𝑆1 (𝑥) = ◦ (2) (𝑆0 (𝑥)) = P ({{𝑥 2 }, {𝑥 3 }}) = {∅, {{𝑥 2 }}, {{𝑥 3 }}, {{𝑥 2 }, {𝑥 3 }}} ⊆ P 2 (𝑂).
Third-level (3-super) step: apply the 3-superhyperoperation ◦ (3) (𝑇) = P (𝑇). Hence the 3-Superhyperalgorithm
𝐴 (3) : 𝐼 → P 3 (𝑂) is
𝐴 (3) (𝑥) = ◦ (3) (𝑆1 (𝑥)) = P (𝑆1 (𝑥)) = {𝑈 : 𝑈 ⊆ 𝑆1 (𝑥)},
Thus 𝐴 (3) (2) is the collection of all 16 possible subcollections of {∅, {{4}}, {{8}}, {{4}, {8}}}, vividly illus-
trating how a 3-Superhyperalgorithm returns richly nested multilevel information about the inputs.
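The three-level construction of Example 2.12 can be checked mechanically; the sketch below embeds the two powers as nested singletons and applies the powerset operator twice, confirming the count of 16.

```python
from itertools import combinations

def powerset(s):
    """Powerset of a finite iterable, as a frozenset of frozensets."""
    items = list(s)
    return frozenset(frozenset(c)
                     for r in range(len(items) + 1)
                     for c in combinations(items, r))

def superhyper_3(x):
    """A^(3)(x): embed {x^2}, {x^3} as singletons, then apply P twice."""
    s0 = frozenset({frozenset({x ** 2}), frozenset({x ** 3})})
    s1 = powerset(s0)      # S1(x): the 4 subcollections, in P^2(O)
    return powerset(s1)    # A^(3)(x): 2^4 = 16 elements of P^3(O)
```

For 𝑥 = 2, the returned collection has 16 members, the subcollections of {∅, {{4}}, {{8}}, {{4}, {8}}}.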
Theorem 2.13 (𝑛-Superhyperalgorithm Generalizes Hyperalgorithm). Every hyperalgorithm is a special case
of an 𝑛-superhyperalgorithm.
Proof. Let 𝐴 : 𝐼 → P (𝑂) be a hyperalgorithm defined as in the previous section. Notice that P (𝑂) is naturally
embedded in P𝑛 (𝑂) for any 𝑛 ≥ 1 (since the first powerset is a subset of the 𝑛-th powerset). Define the mapping
𝜄 : P (𝑂) → P𝑛 (𝑂) by
𝜄(𝑆) = 𝑆 for every 𝑆 ∈ P (𝑂).
Then, define the 𝑛-superhyperalgorithm 𝐴 (𝑛) : 𝐼 → P𝑛 (𝑂) by
𝐴 (𝑛) (𝑥) = 𝜄( 𝐴(𝑥)) for all 𝑥 ∈ 𝐼.
For every 𝑥 ∈ 𝐼, we have 𝐴 (𝑛) (𝑥) = 𝐴(𝑥) (viewed as an element of P𝑛 (𝑂)). Therefore, the hyperalgorithm 𝐴
is embedded as a special case of the 𝑛-superhyperalgorithm 𝐴 (𝑛) . □
Theorem 2.14 (Computability of 𝑛-Superhyperalgorithms). Let 𝐼 and 𝑂 be countable sets, and let A = { 𝐴𝑖 :
𝐼 → 𝑂 | 𝑖 ∈ 𝐽} be a finite family of total computable functions. Suppose the 𝑛-superhyperoperation ◦ (𝑛) is
computable on finite collections of elements of P𝑛 (𝑂). Then the 𝑛-superhyperalgorithm 𝐴 (𝑛) : 𝐼 → P𝑛 (𝑂) is
a total computable function.
Proof. On input 𝑥 ∈ 𝐼:
1. Compute each 𝑦 𝑖 = 𝐴𝑖 (𝑥). Since 𝐽 is finite and each 𝐴𝑖 is total computable, this yields the finite multiset
{𝑦 𝑖 } ⊆ 𝑂.
2. Embed each 𝑦 𝑖 into P𝑛 (𝑂) by iterated singletons, forming an input in P𝑛 (𝑂).
3. Apply the computable 𝑛-superhyperoperation ◦ (𝑛) to this finite collection, producing 𝐴 (𝑛) (𝑥) ∈ P𝑛 (𝑂).
Each step is a computable procedure on finite data, so 𝐴 (𝑛) is a total computable function. □
Theorem 2.15 (Monotonicity under Inclusion). Suppose ◦ (𝑛) is monotone with respect to inclusion, and let
A1 ⊆ A2 be finite algorithm families with corresponding 𝑛-Superhyperalgorithms 𝐴1(𝑛) and 𝐴2(𝑛) . Then
𝐴1(𝑛) (𝑥) ⊆ 𝐴2(𝑛) (𝑥) for every 𝑥 ∈ 𝐼.
Proof. Fix 𝑥. Since A1 ⊆ A2 , we have { 𝐴𝑖 (𝑥) : 𝑖 ∈ A1 } ⊆ { 𝐴𝑖 (𝑥) : 𝑖 ∈ A2 } as subsets of 𝑂, and hence their
embeddings into P𝑛 (𝑂) satisfy componentwise inclusion. Monotonicity of ◦ (𝑛) implies
𝐴1(𝑛) (𝑥) ⊆ 𝐴2(𝑛) (𝑥). □
Theorem 2.16 (Flattening to Hyperalgorithms). Let 𝜑 𝑘 : P𝑛 (𝑂) → P 𝑛−𝑘 (𝑂) be the 𝑘-fold “flattening” map
given by successive union. If 𝐴 (𝑛) is an 𝑛-Superhyperalgorithm, then for each 𝑘 ≤ 𝑛, the composition
𝐴 (𝑛−𝑘) = 𝜑 𝑘 ◦ 𝐴 (𝑛) : 𝐼 → P 𝑛−𝑘 (𝑂)
is an (𝑛 − 𝑘)-Superhyperalgorithm.
Proof. Since 𝐴 (𝑛) (𝑥) ∈ P𝑛 (𝑂) for every 𝑥 ∈ 𝐼, applying 𝜑 𝑘 produces an element of P 𝑛−𝑘 (𝑂). The operation
◦ (𝑛−𝑘 ) on P 𝑛−𝑘 (𝑂) can be defined by ◦ (𝑛−𝑘 ) = 𝜑 𝑘 ◦ ◦ (𝑛) , and one checks directly that this satisfies the axioms
of an (𝑛 − 𝑘)-superhyperoperation. Hence 𝐴 (𝑛−𝑘 ) is a valid (𝑛 − 𝑘)-Superhyperalgorithm. □
Theorem 2.17 (Idempotence under Self-Application). If ◦ (𝑛) satisfies ◦ (𝑛) ({◦ (𝑛) (𝑆)}) = ◦ (𝑛) (𝑆) for all finite
𝑆 ⊆ P𝑛 (𝑂), then the associated Superhyperalgorithm 𝐴 (𝑛) is idempotent:
◦ (𝑛) ({ 𝐴 (𝑛) (𝑥)}) = 𝐴 (𝑛) (𝑥) for every 𝑥 ∈ 𝐼.
Theorem 2.18 (Composition of Superhyperalgorithms). Let 𝐴 (𝑛) : 𝐼 → P𝑛 (𝑂) be an 𝑛-Superhyperalgorithm,
and let 𝐵 (𝑛) : 𝑂 → P𝑛 (𝑃) be an 𝑛-Superhyperalgorithm on a further output set 𝑃, with 𝑛-superhyperoperation
◦ 𝑃(𝑛) . Then the composite map 𝐼 → P𝑛 (𝑃), obtained by applying 𝐵 (𝑛) to each output of 𝐴 (𝑛) and combining via ◦ 𝑃(𝑛) ,
is again an 𝑛-Superhyperalgorithm.
Proof. For each 𝑥 ∈ 𝐼, 𝐴 (𝑛) (𝑥) ⊆ P𝑛 (𝑂). Write Y = 𝐴 (𝑛) (𝑥). Then each 𝑦 ∈ Y ⊆ 𝑂 yields 𝐵 (𝑛) (𝑦) ⊆ P𝑛 (𝑃).
Collecting all outputs {𝐵 (𝑛) (𝑦) : 𝑦 ∈ Y} gives a finite subset of P𝑛 (𝑃), to which we apply ◦ 𝑃(𝑛) . This follows
the same pattern as the definition of an 𝑛-Superhyperalgorithm, so the composite is of the required form. □
Theorem 2.19 (Time Complexity Bound). Assume each component algorithm 𝐴𝑖 ∈ A runs in time at most
𝑇𝑖 (|𝑥|) on input 𝑥, and that the 𝑛-superhyperoperation ◦ (𝑛) on 𝑘 = |A| inputs runs in time 𝐻 (𝑛) (𝑘). Then the
𝑛-Superhyperalgorithm
𝐴 (𝑛) (𝑥) = ◦ (𝑛) { 𝐴𝑖 (𝑥)}
runs in time
𝑇total (𝑥) = ∑_{𝑖∈𝐽} 𝑇𝑖 (|𝑥|) + 𝐻 (𝑛) (|𝐽 |).
In particular, if |𝐽 | and 𝑛 are fixed constants and each 𝑇𝑖 and 𝐻 (𝑛) are polynomial, then 𝐴 (𝑛) runs in polynomial
time.
Proof. Computing each 𝐴𝑖 (𝑥) takes 𝑇𝑖 (|𝑥|), summing over 𝑖 ∈ 𝐽. Gathering these outputs into the input for ◦ (𝑛)
is 𝑂 (|𝐽 |), and applying the superhyperoperation takes 𝐻 (𝑛) (|𝐽 |). Adding these yields the stated bound. □
Theorem 2.20 (Space Complexity and Output-Size Bound). Let |𝑂| = 𝑚 be the size of the output domain.
Then |P𝑛 (𝑂)| ≤ 2^{2^{···^{2^𝑚}}} (an 𝑛-fold tower of 2's topped by 𝑚). Consequently, in the worst case, the space
required to represent 𝐴 (𝑛) (𝑥) is 𝑂 (|P𝑛 (𝑂)|). If each 𝐴𝑖 (𝑥) produces at most 𝑠 bits and ◦ (𝑛) produces an output
of size at most 𝑆(𝑘) for 𝑘 = |𝐽 | inputs, then the total space is
𝑆 (|𝐽 |) + ∑_{𝑖∈𝐽} 𝑠,
which is constant if 𝑛, |𝐽 |, and 𝑠 are fixed.
Proof. By definition, P 1 (𝑂) has at most 2^𝑚 elements, P 2 (𝑂) at most 2^{2^𝑚}, and so on, yielding the 𝑛-fold tower.
Representing 𝐴 (𝑛) (𝑥) thus requires space proportional to the size of its encoding within P𝑛 (𝑂), bounded by
this tower. If we restrict to fixed 𝑛 and small outputs, the space reduces to the sum of the sizes of each 𝐴𝑖 (𝑥)
plus the space to store ◦ (𝑛) ’s result. □
3 Conclusion and Future Work
In future work, we intend to:
• Implement and evaluate Hyperalgorithms and Superhyperalgorithms in practice, applying them to well-known
sorting, search, graph, and approximation algorithms.
• Analyze their computational complexity, proving time and space bounds in various settings.
• Explore further generalizations using advanced uncertainty models, including Fuzzy Sets [63–65],
Intuitionistic Fuzzy Sets [4, 5], Hyperfuzzy Sets [29, 31], Rough Sets [39–41], Soft Sets [35, 36],
Neutrosophic Sets [48–50], and Plithogenic Sets [51].
We anticipate that these investigations will deepen our understanding of higher-order computation and broaden
the applicability of Hyperalgorithms and Superhyperalgorithms across complex and uncertain domains.
Funding
This study did not receive any financial or external support from organizations or individuals.
Acknowledgments
We extend our sincere gratitude to everyone who provided insights, inspiration, and assistance throughout this
research. We particularly thank our readers for their interest and acknowledge the authors of the cited works
for laying the foundation that made our study possible. We also appreciate the support from individuals and
institutions that provided the resources and infrastructure needed to produce and share this paper. Finally, we
are grateful to all those who supported us in various ways during this project.
Data Availability
This research is purely theoretical, involving no data collection or analysis. We encourage future researchers
to pursue empirical investigations to further develop and validate the concepts introduced here.
Ethical Approval
As this research is entirely theoretical in nature and does not involve human participants or animal subjects, no
ethical approval is required.
Conflicts of Interest
The authors confirm that there are no conflicts of interest related to the research or its publication.
Disclaimer
This work presents theoretical concepts that have not yet undergone practical testing or validation. Future
researchers are encouraged to apply and assess these ideas in empirical contexts. While every effort has been
made to ensure accuracy and appropriate referencing, unintentional errors or omissions may still exist. Readers
are advised to verify referenced materials on their own. The views and conclusions expressed here are the
authors’ own and do not necessarily reflect those of their affiliated organizations.
References
[1] R Acciarri, C Adams, R An, J Anthony, J Asaadi, M Auger, L Bagby, S Balasubramanian, B Baller, C Barnes, et al. The pandora
multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the microboone detector. The
European Physical Journal C, 78(1):1–25, 2018.
[2] Sunday Adesina Adebisi and Adetunji Patience Ajuebishi. The order involving the neutrosophic hyperstructures, the construction
and setting up of a typical neutrosophic group. HyperSoft Set Methods in Engineering, 3:26–31, 2025.
[3] GR Amiri, R Mousarezaei, and S Rahnama. Soft hyperstructures and their applications. New Mathematics and Natural Computation,
pages 1–19, 2024.
[4] Krassimir Atanassov and George Gargov. Elements of intuitionistic fuzzy logic. part i. Fuzzy sets and systems, 95(1):39–52, 1998.
[5] Krassimir T Atanassov. Intuitionistic fuzzy sets. Springer, 1999.
[6] Guy E Blelloch. Programming parallel algorithms. Communications of the ACM, 39(3):85–97, 1996.
[7] Jack E Bresenham. Algorithm for computer control of a digital plotter. In Seminal graphics: pioneering efforts that shaped the field,
pages 1–6. 1998.
[8] Olacir R Castro, Aurora Pozo, Jose A Lozano, and Roberto Santana. An investigation of clustering strategies in many-objective
optimization: the i-multi algorithm as a case study. Swarm Intelligence, 11:101–130, 2017.
[9] You-Rong Chen, Chien-Chia Ho, Wei-Ting Chen, and Pei-Yin Chen. A low-cost pipelined architecture based on a hybrid sorting
algorithm. IEEE Transactions on Circuits and Systems I: Regular Papers, 71(2):717–730, 2023.
[10] Richard Cole. Parallel merge sort. SIAM Journal on Computing, 17(4):770–785, 1988.
[11] Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to algorithms. MIT press, 2022.
[12] Andrew Davidson, David Tarjan, Michael Garland, and John D Owens. Efficient parallel merge sort for fixed and variable length
keys. IEEE, 2012.
[13] Luciana De Micco, Mariano L Acosta, and Maximiliano Antonelli. Hybrid sorting algorithm implemented by high level synthesis.
IEEE Latin America Transactions, 18(02):430–437, 2019.
[14] Pedro S de Souza and Sarosh N Talukdar. Asynchronous organizations for multi-algorithm problems. In Proceedings of the 1993
ACM/SIGAPP symposium on Applied computing: states of the art and practice, pages 286–293, 1993.
[15] John D Dixon. The number of steps in the euclidean algorithm. Journal of number theory, 2(4):414–422, 1970.
[16] Shimon Even. Graph algorithms. Cambridge University Press, 2011.
[17] Julian Fierrez-Aguilar, Yi Chen, Javier Ortega-Garcia, and Anil K Jain. Incorporating image quality in multi-algorithm fingerprint
verification. In Advances in Biometrics: International Conference, ICB 2006, Hong Kong, China, January 5-7, 2006. Proceedings,
pages 213–220. Springer, 2005.
[18] Takaaki Fujita. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutro-
sophic, Soft, Rough, and Beyond. Biblio Publishing, 2025.
[19] Takaaki Fujita. Antihyperstructure, neutrohyperstructure, and superhyperstructure. Advancing Uncertain Combinatorics through
Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond, page 311, 2025.
[20] Takaaki Fujita. A concise review on various concepts of superhyperstructures. Preprint, 2025.
[21] Takaaki Fujita. Expanding horizons of plithogenic superhyperstructures: Applications in decision-making, control, and neuro
systems. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic,
Soft, Rough, and Beyond, page 416, 2025.
[22] Takaaki Fujita. Natural n-superhyper plithogenic language. Advancing Uncertain Combinatorics through Graphization, Hyperization,
and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond, page 294, 2025.
[23] Takaaki Fujita. Short note of superhyperstructures of partitions, integrals, and spaces. Advancing Uncertain Combinatorics through
Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond, page 384, 2025.
[24] Takaaki Fujita. Some types of hyperdecision-making and superhyperdecision-making. Advancing Uncertain Combinatorics through
Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond, page 221, 2025.
[25] Takaaki Fujita. A theoretical exploration of hyperconcepts: Hyperfunctions, hyperrandomness, hyperdecision-making, and beyond
(including a survey of hyperstructures). Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainiza-
tion: Fuzzy, Neutrosophic, Soft, Rough, and Beyond, 344(498):111, 2025.
[26] Takaaki Fujita. Theoretical interpretations of large uncertain and hyper language models: Advancing natural uncertain and hyper
language processing. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy,
Neutrosophic, Soft, Rough, and Beyond, page 245, 2025.
[27] Takaaki Fujita and Florentin Smarandache. Neutrosophic TwoFold SuperhyperAlgebra and Anti SuperhyperAlgebra. Infinite Study,
2025.
[28] Takaaki Fujita and Florentin Smarandache. Superhypergraph neural networks and plithogenic graph neural networks: Theoretical
foundations. 2025.
[29] Jayanta Ghosh and Tapas Kumar Samanta. Hyperfuzzy sets and hyperfuzzy group. Int. J. Adv. Sci. Technol, 41:27–37, 2012.
[30] Thomas Jech. Set theory: The third millennium edition, revised and expanded. Springer, 2003.
[31] Young Bae Jun, Kul Hur, and Kyoung Ja Lee. Hyperfuzzy subalgebras of bck/bci-algebras. Annals of Fuzzy Mathematics and
Informatics, 2017.
[32] EJC Kelkboom, Xuebing Zhou, Jeroen Breebaart, Raymond NJ Veldhuis, and Christoph Busch. Multi-algorithm fusion with template
protection. In 2009 IEEE 3rd international conference on biometrics: Theory, applications, and systems, pages 1–8. IEEE, 2009.
[33] H. E. Khalid, G. D. Gungor, and M. A. N. Zainal. Neutrosophic superhyper bi-topological spaces: Original notions and new insights.
Neutrosophic Sets and Systems, 51:33–45, 2022.
[34] F Thomson Leighton. Introduction to parallel algorithms and architectures: Arrays· trees· hypercubes. Elsevier, 2014.
[35] Pradip Kumar Maji, Ranjit Biswas, and A Ranjan Roy. Soft set theory. Computers & mathematics with applications, 45(4-5):555–562,
2003.
[36] Dmitriy Molodtsov. Soft set theory-first results. Computers & mathematics with applications, 37(4-5):19–31, 1999.
[37] Th Motzkin. The euclidean algorithm. 1949.
[38] Michal Novák, Štěpán Křehlík, and Kyriakos Ovaliadis. Elements of hyperstructure theory in uwsn design and data aggregation.
Symmetry, 11(6):734, 2019.
[39] Zdzisław Pawlak. Rough sets. International journal of computer & information sciences, 11:341–356, 1982.
[40] Zdzislaw Pawlak, Lech Polkowski, and Andrzej Skowron. Rough set theory. KI, 15(3):38–39, 2001.
[41] Zdzislaw Pawlak, S. K. Michael Wong, Wojciech Ziarko, et al. Rough sets: probabilistic versus deterministic approach. International
Journal of Man-Machine Studies, 29(1):81–95, 1988.
[42] Akbar Rezaei, Florentin Smarandache, and S. Mirvakili. Applications of (neutro/anti)sophications to semihypergroups. Journal of
Mathematics, 2021.
[43] Judith Roitman. Introduction to modern set theory, volume 8. John Wiley & Sons, 1990.
[44] Robert Sedgewick and Kevin Wayne. Algorithms. Addison-wesley professional, 2011.
[45] Jeffrey Shallit. Origins of the analysis of the euclidean algorithm. Historia Mathematica, 21(4):401–419, 1994.
[46] Steven S Skiena. The algorithm design manual, volume 2. Springer, 2008.
[47] F. Smarandache. Introduction to superhyperalgebra and neutrosophic superhyperalgebra. Journal of Algebraic Hyperstructures and
Logical Algebras, 2022.
[48] Florentin Smarandache. Neutrosophy: neutrosophic probability, set, and logic: analytic synthesis & synthetic analysis. 1998.
[49] Florentin Smarandache. A unifying field in logics: Neutrosophic logic. In Philosophy, pages 1–141. American Research Press,
1999.
[50] Florentin Smarandache. Degrees of membership > 1 and < 0 of the elements with respect to a neutrosophic offset. Neutrosophic
Sets and Systems, 12:3–8, 2016.
[51] Florentin Smarandache. Plithogeny, plithogenic set, logic, probability, and statistics. Infinite Study, 2017.
[52] Florentin Smarandache. Extension of hyperalgebra to superhyperalgebra and neutrosophic superhyperalgebra (revisited). In Inter-
national Conference on Computers Communications and Control, pages 427–432. Springer, 2022.
[53] Florentin Smarandache. Foundation of superhyperstructure & neutrosophic superhyperstructure. Neutrosophic Sets and Systems,
63(1):21, 2024.
[54] Kouichi Takahashi, Kazunari Kaizu, Bin Hu, and Masaru Tomita. A multi-algorithm, multi-timescale method for cell simulation.
Bioinformatics, 20(4):538–546, 2004.
[55] Souzana Vougioukli. Helix hyperoperation in teaching research. Science & Philosophy, 8(2):157–163, 2020.
[56] Souzana Vougioukli. Hyperoperations defined on sets of s -helix matrices. 2020.
[57] Souzana Vougioukli. Helix-hyperoperations on lie-santilli admissibility. Algebras Groups and Geometries, 2023.
[58] Thomas Vougiouklis. Hyperstructures and their representations. Hadronic Press, 1994.
[59] Fei Wang, Zhou Shi, Asim Biswas, Shengtian Yang, and Jianli Ding. Multi-algorithm comparison for predicting soil salinity.
Geoderma, 365:114211, 2020.
[60] Darrell Whitley. A genetic algorithm tutorial. Statistics and computing, 4:65–85, 1994.
[61] Xiaonan Wu, Borui Liao, Yaogang Su, and Shuang Li. Multi-objective and multi-algorithm operation optimization of integrated
energy system considering ground source energy and solar energy. International Journal of Electrical Power & Energy Systems,
144:108529, 2023.
[62] Ming Xu, Xianbin Xu, Fang Zheng, Yuanhua Yang, and Mengjia Yin. A hybrid sorting algorithm on heterogeneous architectures.
TELKOMNIKA (Telecommunication Computing Electronics and Control), 13(4):1399–1407, 2015.
[63] Lotfi A Zadeh. Fuzzy sets. Information and control, 8(3):338–353, 1965.
[64] Lotfi A Zadeh. Biological application of the theory of fuzzy sets and systems. In The Proceedings of an International Symposium
on Biocybernetics of the Central Nervous System, pages 199–206. Little, Brown and Comp. London, 1969.
[65] Lotfi Asker Zadeh. Fuzzy sets as a basis for a theory of possibility. Fuzzy sets and systems, 1(1):3–28, 1978.