1. Introduction
Patterns are the essence of information. Sometimes established, sometimes waiting to be discovered, patterns comprise a world of interpretations and assigned meanings. Patterns are how nature organizes itself. However, patterns are also the means to interpret and advance our understanding of nature. Patterns are the basis of our knowledge. Number systems assign a sense of quantity to the patterns formed by elementary symbols. Languages consist of rules governing the formation of patterns to create messages, descriptions, and models. Whatever the mechanism that associates patterns to produce a communication channel, patterns are often studied from an abstract perspective. The semantic meaning, if there is any, is not as important as the quantity of information a pattern can carry and how this information quantity varies as the same data set obeys different observation criteria, yielding different interpretations. In 2021, Duin [1] presented an extended treatment of the types of patterns and their crucial role in human knowledge development.
From a formal mathematical point of view, the partitioning of process descriptions into continuous groups was introduced by J. Feldman [2], who studied ergodic theory and proposed methods to assess the entropy of ergodic transformations. Ergodic theory has evolved as the field from which to study the statistical properties of dynamic systems. Valuable information about ergodic theory's history and development is found in [3,4].
When text decomposition is approached as the superposition of several periodic time functions or signals, typical models use sines, cosines, and wavelets as the primal components of the sum. Fourier and wavelet analyses led to the algorithmic cornerstone of periodic and stationary signal processing: the fast Fourier transform (FFT). Nevertheless, nonstationary phenomena escape the Fourier transform's capacity to represent them in the frequency domain. In the comprehensive introduction of their paper, Huang et al. [5] mention the disadvantages of other strategies for decomposing a nonstationary series of values. In the same paper, they introduced Empirical Mode Decomposition (EMD), an alternative method for studying nonstationary systems that has gained significant popularity in addressing engineering, mathematics, and science problems, as noted by Mahenswasri and Kumar [6]. In the search for a faster method, Neill [7] published a method that detects potentially relevant patterns based on a data sample.
Besides non-periodicity, another aspect that adds difficulty to the series-synthesizing process is nested pattern detection. Despite its promise, nested pattern detection poses formidable challenges. The inherent complexity of nested structures demands innovative approaches and specifically developed computerized tools that can disentangle overlapping patterns and discern hierarchical relationships. Traditional detection methods locate repeated symbolic sequences within a long text. Typically, once the localities of a repeated sequence are determined, the space occupied by these sequences is isolated, and the search for more repeated sequences continues over the remaining text. Thus, the procedure does not further search for repeated sequences within the repeated sequences already found; that is, it does not search for nested repeated sequences. We refer to this type of procedure as Zero-Depth text interpretation or direct decomposition. In contrast, if the inspection continues looking for repeated sequences within the longer sequences already found, we call the procedure a Nested Pattern Decomposition Model.
This paper explores nested pattern detection comprehensively, leaning on the capacity of a software platform named MoNet [8]. MoNet comprises tailored coding pseudo-languages designed to operate with complex data structures such as multidimensional orthogonal arrays and data trees. MoNet handles these structures as objects allocated at a single memory address, managing them as variables with identity; simultaneously, they can be partitioned according to desired segmenting criteria. MoNet's possibilities for defining math operations using complex structures—which may be considered tensors—as function arguments permit this system to administer multiple complex system experiments within the same environment.
The rest of this document presents two pseudo-algorithms for producing the nested repeated sequence decomposition of texts. This decomposition model was applied to three groups of chaotic systems: the May equation, the Lorenz equations, and four stock market indexes. The decompositions obtained allow for estimating the nested complexity of these processes. Finally, we draw some process characteristics from interpreting two graphic representations of the decomposition.
2. Interpretation Models for Unidimensional Descriptions: The Sequence Repetition Model
Consider a text ($T$) formed by a sequence of $N$ elementary characters ($c_i$), each referred to according to its position ($i$) in $T$. Then, the text ($T$) can be expressed as the following concatenation of $N$ characters:

$$T = c_1\, c_2 \cdots c_N. \qquad (1)$$
We define the decomposition process as the criteria for forming groups of characters that reduce $T$'s complexity. The process inspects the text ($T$) for sequences of characters that appear $r$ or more times. The term "symbol" refers to these sequences of characters; however, we sometimes use the term "sequence" to recall that symbols are strings of consecutive characters. Repeated sequences must appear at least $r$ times. The selection of the parameter $r$ allows the method to be precisely adjusted to the nature and conditions of the text processed. In any case, $r$ must be equal to or greater than two, since a sequence appearing only once is not considered a pattern and is thus regarded as noise. After submitting a text ($T$) to any process capable of retrieving the repeated char sequences, or symbols, we may represent the resulting set ($W$) of symbols, with their corresponding frequencies of occurrence ($F$), as follows:

$$W = \{w_1, w_2, \ldots, w_D\}, \qquad F = \{f_1, f_2, \ldots, f_D\}. \qquad (2)$$
Notice that the series of symbols and frequencies start at index $j = 1$. This means that the number of repeated symbols is $D$; equivalently, the symbolic diversity of $T$ is $D$.
When interpreting the text ($T$) at its maximum scale, sub-contexts are not considered; thus, there must not be overlapping positions occupied by any two symbols. Consequently, symbols ($w_j$) must not share any char position. Interpretations of the text ($T$) may differ when diverse criteria are used to select, within the text, the sets of contiguous characters to form a set of symbols ($W$). Based on the observer's choice—or the choice implied by the observer's criteria—a symbol ($w_j$) starting at the text position ($i$) and made of $\ell$ characters is as follows:

$$w_j = c_i\, c_{i+1} \cdots c_{i+\ell-1}. \qquad (3)$$
Additionally, not all chars contained in $T$ are necessarily part of a symbol ($w_j$) in the set ($W$). The chars not included in any $w_j$ constitute noise in terms of the interpretation. This consideration is consistent with the typical existence of non-synthesized fragments in any system's description. However, the standard objective of any description is to minimize the noise fraction.
For long texts and time series, the symbol diversity may be significant. In these cases, reducing the complexity of the system's representation is convenient to make a quantitative analysis feasible and controllable. To cope with this, we propose working with the set of symbols grouped by their length. Therefore, we define $G_\ell$ as the set of all repeated sequences ($w_j$) sharing the length $\ell$. Correspondingly, let $n_\ell$ be the number of appearances of each symbol category ($G_\ell$). Thus, the tendency of a process to repeat sequences of values according to their lengths can be characterized by the spectrogram ($S_n$), which we define as follows:

$$S_n = \{(\ell, n_\ell)\}, \quad \ell = 2, \ldots, \ell_{\max}, \qquad (4)$$

where $\ell_{\max}$ and $n_{\ell_{\max}}$ are the length and the frequency of the longest repeating sequence within the text ($T$).
The frequencies ($n_\ell$) are linearly related to the fraction of the text ($T$) occupied by symbols of each length category. Then, for convenience, we alternatively express the characteristic spectrogram ($S_p$) in terms of the probability ($p_\ell$) of encountering an $\ell$-long sequence within the text ($T$):

$$S_p = \{(\ell, p_\ell)\}, \qquad p_\ell = \frac{\ell\, n_\ell}{N}, \quad \ell = 2, \ldots, \ell_{\max}. \qquad (5)$$
We refer to the spectrograms $S_n$ and $S_p$ as the frequency-based and probability-based interpretations. We use the term representation to refer to the selections of char sequences to form sets of symbols ($W$ and $F$) obtained after some interpretations.
2.1. At-Scale Repeated Sequence Model (ASRSM)
This section aims to build a probability model—to be defined by Equations (6) and (7)—of the repetition of sequences of similar values in a series and of the tendency of the process to repeat same-length cycles. When the original data are a series of scalar numbers, a discretization procedure converts the values into a text formed with an alphabet of as many letters as the resolution of the discretizing process requests (see Appendix A.1 for details).
Developing a procedure to obtain the ASRSM of a long text is an interesting perspective given our expectation of dealing with complex processes that are usually oscillatory but non-periodic. The Discrete Fourier Transform (DFT) algorithm is the classical tool for studying oscillatory processes. However, the DFT is not well suited for low-frequency and non-periodic processes [9], and its complexity [10] may represent a barrier for series on the order of thousands of values.
The ASRSM identifies repeating sequences within the text ($T$). To ensure that the symbols (represented by the repeated sequences) capture relevant information for describing $T$, longer repeating sequences are prioritized over shorter sequences. Thus, when looking for the sequences that synthesize a larger fraction of the process, the ASRSM procedure looks for the longer patterns first. The remaining text is subject to repeated inspections until all the repeated sequences are localized.
Figure 1 illustrates the repeated sequence searching strategy with two nested loops.
Figure 2 presents a pseudocode (using the syntax of the C# programming language). The routine Lang1DimRepetitiveSymb() retrieves the ASRSM of the text ($T$). The values of $\ell$ and $n_\ell$ corresponding to the most extended repeating sequences detected when $T$ is observed at its largest scale are available from this routine.
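To make the search strategy concrete, the following is a minimal, self-contained sketch in C# (the language of the paper's pseudocode) of the greedy, longest-first search described above. It is our illustration, not MoNet's Lang1DimRepetitiveSymb() itself: the names (AsrsmSketch, AsrsmEntry), the boolean mask used to lock found positions, and the naive substring scan are assumptions; the actual routine also builds hash codes and richer data structures.

using System.Collections.Generic;

public static class AsrsmSketch
{
    public record AsrsmEntry(string Sequence, int Length, int Count);

    // Greedy, longest-first search: find non-overlapping sequences of length >= 2
    // appearing at least r times, mask their positions, and keep searching.
    public static List<AsrsmEntry> Decompose(string text, int r)
    {
        var result = new List<AsrsmEntry>();
        var masked = new bool[text.Length];               // positions already claimed
        for (int len = text.Length / r; len >= 2; len--)  // Outer Loop: lengths, longest first
        {
            bool found = true;
            while (found)                                 // Inner Loop: retry per length
            {
                found = false;
                var positions = new Dictionary<string, List<int>>();
                for (int i = 0; i + len <= text.Length; i++)       // Search Loop
                {
                    if (AnyMasked(masked, i, len)) continue;
                    string s = text.Substring(i, len);
                    if (!positions.TryGetValue(s, out var list))
                        positions[s] = list = new List<int>();
                    if (list.Count == 0 || i >= list[^1] + len)    // keep non-overlapping hits
                        list.Add(i);
                }
                foreach (var (seq, occurrences) in positions)
                {
                    if (occurrences.Count < r) continue;
                    foreach (int p in occurrences)                 // lock the found positions
                        for (int k = 0; k < len; k++) masked[p + k] = true;
                    result.Add(new AsrsmEntry(seq, len, occurrences.Count));
                    found = true;                                  // the text changed; re-scan
                    break;
                }
            }
        }
        return result;
    }

    private static bool AnyMasked(bool[] masked, int start, int len)
    {
        for (int k = 0; k < len; k++) if (masked[start + k]) return true;
        return false;
    }
}

On the 42-char example text of Section 2.3 below, with r = 2, this sketch first extracts the 15-char sequence CCACAECCACACEDD (appearing twice) and then CADCA (twice), leaving two unmatched chars as noise.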
Three loops dominate the algorithmic complexity ($C_{1Dim}$) of Lang1DimRepetitiveSymb(): the Outer Loop, the Inner Loop, and the Search Loop. The number of cycles of the Outer Loop is at most $N/r$. For the Inner Loop, the approximated average number of cycles grows with $N$; for the Search Loop, the approximated average number of cycles is $N/2$. Thus, the Search Loop growth order is $Order(N)$. Finally, computing the three loops' growth orders indicates that $C_{1Dim}$ is of the $Order(N^3)$. The algorithm uses hash procedures to code data structures and pass them as arguments. The complexity of these hash codes grows linearly, with $Order(N)$. Therefore, these hash codes do not alter the overall complexity ($C_{1Dim}$).
Some properties of the ASRSM become visible when graphing probability vs. sequence length. We start with an $N$-character-long text ($T$), formed by $d$ different chars distributed along $T$. By locating the $\ell$-long most extended non-overlapping sequence appearing $r$ times in $T$, we empirically compute the probability of $T$ containing an $\ell$-long sequence that appears $r$ times. At this point, we introduce the syntax $p(\ell, r)$ to refer to the likelihood of an $\ell$-long sequence appearing exactly $r$ times in an argument text. After scanning the text and identifying the length of the most extended sequence that appears $r$ times, we can write the probability of finding the sequence as the fraction it occupies in $T$. Thus,

$$p(\ell, r) = \frac{\ell\, r}{N}. \qquad (6)$$

For instance, a 15-char sequence repeated twice within a 42-char text occupies the fraction $p = (15 \cdot 2)/42 \approx 0.71$.
Every time Lang1DimRepetitiveSymb() ends the SearchLoop (see pseudocode in Figure 2), the locations of the found repeated sequence are locked from the text, leaving a shorter text to be processed. Following the computation of the probability $p(\ell, r)$, the search for repeated sequences continues with the remaining text of length $N_k$. Identifying the length ($\ell_k$) of the most extended repeated sequences at each search step ($k$) leads to determining the probability referred to the length of the remaining text corresponding to the step $k$. Thus,

$$p_k(\ell_k, r) = \frac{\ell_k\, r}{N_k}. \qquad (7)$$
The algorithm Lang1DimRepetitiveSymb() performs the SearchLoop identifying the most extended repeated sequences for the steps $k = 1, 2, 3, \ldots$, and, by successively using Equation (7), obtains the intermediate probabilities ($p_k$). The final remaining text, not represented by any repeated sequence, is an expression of the noise contained in the original text ($T$). The values of the probabilities ($p_k$) exhibit an attractive distribution that is convenient to show here. Figure 3 (left) shows the origin of the ramp-shaped probabilities ($p_k$). Figure 3 (right) is an actual graph showing the interpretation of the results ($p_k$).
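A short sketch of this bookkeeping, reusing the AsrsmSketch above, may clarify how the intermediate probabilities of Equation (7) arise; the accumulation scheme is our assumption, with each found sequence scored against the text still unprocessed at its step:

using System.Collections.Generic;

public static class RampSketch
{
    // Returns one p_k per extracted sequence, in the order the sequences were found.
    public static List<double> IntermediateProbabilities(string text, int r)
    {
        var probabilities = new List<double>();
        int remaining = text.Length;                  // N_k: chars still unprocessed at step k
        foreach (var e in AsrsmSketch.Decompose(text, r))
        {
            probabilities.Add((double)(e.Length * e.Count) / remaining);
            remaining -= e.Length * e.Count;          // lock the found positions
        }
        return probabilities;
    }
}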
2.2. Nested Repeated Sequence Decomposition Model (NRSDM)
Looking for symbols repeated within another symbol, in a nested fashion, alters the scope of the observer's view. Considering "nested contexts" is needed to assess the complexity of a system by integrating several scales of observation, thereby revealing the process's internal structure and allowing for an estimation of the structural complexity that a system ($T$) represents.
This section explains a procedure for obtaining these sets of nested sequences, which we name the Nested Repeated Sequence Decomposition Model (NRSDM). The developed routine is named LangFractalNestedDecomp(). It recursively calls itself and relies on the previously presented procedure Lang1DimRepetitiveSymb(). Figure 4 presents a pseudocode (using the syntax of the C# programming language). The routine LangFractalNestedDecomp() retrieves the NRSDM of the text ($T$) when observed as a fractal-like structure with sequences compounding at progressively deeper nesting levels.
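Before turning to its complexity, the following is a minimal sketch of the recursion just described, reusing the AsrsmSketch of Section 2.1. The tree representation (FractalNode and its fields) is our assumption; the actual MoNet routine additionally produces hash codes and manages the decomposition bookkeeping.

using System.Collections.Generic;

public class FractalNode
{
    public string Sequence;                        // the sequence this node represents
    public int Count;                              // appearances at the parent's level
    public int Depth;                              // nesting depth (full text = 0)
    public List<FractalNode> Children = new();     // repeated sequences found within Sequence
}

public static class NrsdmSketch
{
    // Recursively builds the nested decomposition tree: decompose the text, then
    // decompose each repeated sequence found, one nesting level deeper.
    public static FractalNode Decompose(string text, int r, int depth = 0)
    {
        var node = new FractalNode { Sequence = text, Count = 1, Depth = depth };
        foreach (var e in AsrsmSketch.Decompose(text, r))
        {
            var child = Decompose(e.Sequence, r, depth + 1);
            child.Count = e.Count;
            node.Children.Add(child);              // leaves contain no repeated sub-sequences
        }
        return node;
    }
}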
The algorithmic complexity ($C_{Nested}$) of LangFractalNestedDecomp() is estimated considering its two loops: the Recursive Loop and the Depth Decomp Loop. Due to the algorithm's recursive condition, we can estimate that the Depth Decomp Loop cycles as many times as the number of nodes forming the tree FractalDecompTree. For a regular tree of $D$ nesting levels (including the root depth and the depth of the leaves), with each node (except the leaves) divided into $B$ branches, the number of nodes is $(B^D - 1)/(B - 1)$. For instance, assuming a sixth-depth-level regular tree ($D = 6$) where any sequence is depicted by $B$ repeating sequences, the number of steps of LangFractalNestedDecomp() is $(B^6 - 1)/(B - 1)$. Comparing the complexities of the routines Lang1DimRepetitiveSymb() and LangFractalNestedDecomp() leads to the question of whether $C_{Nested}$ exceeds $C_{1Dim}$. At first glance, the former seems to be of a larger order; however, the parameters $\ell$ and $D$ limit each other. When the length of the repeated sequences found is large, meaning a large $\ell$, the depth of the tree, represented with the parameter $D$, tends to decrease. Therefore, in typical situations, we expect $C_{Nested} \approx C_{1Dim}$, since the spread ($B$) of the FractalDecompTree is considerably smaller than $N$, and $D$ is not much larger than 3. The routine LangFractalNestedDecomp() starts by decomposing the full text ($T$) with the routine Lang1DimRepetitiveSymb(). Even though Lang1DimRepetitiveSymb() runs several times afterward, these runs process text lengths that are only a fraction of the initial $N$. These considerations allow us to state that, in most practical applications, the algorithmic complexity of LangFractalNestedDecomp() is $Order(N^3)$. Finally, the algorithm LangFractalNestedDecomp() uses hash functions to code data structures and pass them as arguments. The complexity of these hash codes grows linearly, with $Order(N)$. Therefore, these hash codes do not alter the overall complexity ($C_{Nested}$).
2.3. An Example of the ASRSM and the NRSDM of a Tiny Text
This section presents the ASRSM and the NRSDM describing the four-value discrete-scale text CADCADCCACAECCACACEDDCADCABCCACAECCACACEDD. Obtaining the models uses the procedures
Lang1DimRepetitiveSymb() and
LangFractalNestedDecomp().
Figure 5 shows how the text (
) is decomposed by applying procedure
Lang1DimRepetitiveSymb() at depth zero to obtain the repetitive sequences (ASRSM(T)). In further stages, the method uses
LangFractalNestedDecomp() to control the recursive application of
Lang1DimRepetitiveSymb() to obtain the tree-like NRSDM(T).
Figure 5 also includes the final ASRSM(T) and NRSDM(T) hash codes.
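As a hypothetical, runnable counterpart to Figure 5, the demo below feeds the same tiny text to the sketches of the previous sections and prints the resulting tree; the output format is ours, not MoNet's. With r = 2, the sketches place CCACAECCACACEDD and CADCA at depth one and find CCACA and CA nested deeper.

using System;

public static class TinyTextDemo
{
    public static void Main()
    {
        var root = NrsdmSketch.Decompose(
            "CADCADCCACAECCACACEDDCADCABCCACAECCACACEDD", r: 2);
        Print(root);
    }

    // Prints each node indented by its nesting depth.
    static void Print(FractalNode node)
    {
        Console.WriteLine(new string(' ', 2 * node.Depth) +
                          $"depth={node.Depth} count={node.Count} seq={node.Sequence}");
        foreach (var child in node.Children) Print(child);
    }
}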
2.4. Graphical Representations of the NRSDM
We present the ASRSM, a procedure conceived to retrieve the longest character sequence that appears at least $r$ times. Once the text ($T$) is decomposed, each decomposition element—each repeated sequence found—is subject to the same procedure applied at a deeper nesting level. This recursive nested search repeats through the branches generated whenever the process finds new, more profound repeated sequences. The search for repeated sequences continues until the sequences found do not contain repeated sequences of two or more characters.
The ASRSM is a valid characterization; however, since every found sequence is isolated from the search space, the ASRSM does not account for the number of times shorter sequences appear contained in longer sequences. Thus, the ASRSM captures only the text's shallow characteristic patterns. To capture deeper characteristic patterns, we rely on the NRSDM, which not only expresses the tendency of the text-represented process to repeat state sequences but also offers a characterization of the system's structure by showing how deeply a sequence is nested within longer char sequences.
The NRSDM graphically describes a process's tendency to repeat strings of states—namely, sequences. The potential of this graphic object, which involves adding attributes like colors and transparencies to the bubbles representing sequences, is promising, and we look forward to further developing this area in future papers. In this document, we delve into two graphic representations of the NRSDM. The NRSDM produces information about several parameters describing the text ($T$); it is, therefore, a multidimensional structure. To appreciate the NRSDM structure in two-dimensional representations, we propose graphing the probability vs. sequence length of the repeated sequences found in the process. Different symbols sharing their lengths are grouped into sets with sequences of the same size. Even though the grouping around symbol lengths reduces the number of points whose probabilities form the graphs, there are still several ways of interpreting these probabilities. We present the following two graphic representations of the NRSDM:
MSIR: Multiscale Integral Representation. This comprises the sets of symbols sharing their lengths and disregarding the nesting depth where they appear;
NPR: Nested Pattern Representation. This regards the symbols’ nesting depth within the tree structure.
Since the range of probability values is expected to be widespread, the MSIR and NPR graph axes are logarithmic. Note that the text fraction that a set of sequences occupies—namely, the probability—is not independent of the sequence length. The relationship between the probability and sequence length shows graph points aligned in bands corresponding to an equal number of sequence repetitions. Figure 6 (left) illustrates the location of these equal-number-of-repetition bands for a 10,000-char-long text. Figure 6 (center and right) includes examples of the MSIR and NPR to show the "anatomy" helpful in interpreting these graphs.
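The band alignment follows directly from the probability definition; a short derivation, using Equation (5) as reconstructed above, reads:

% On log-log axes, points sharing a repetition count align on unit-slope lines:
\[
  p_\ell = \frac{\ell\, n_\ell}{N}
  \quad\Longrightarrow\quad
  \log p_\ell = \log \ell + \log \frac{n_\ell}{N},
\]
% so, for a fixed count n_l = n, the intercept log(n/N) selects one band; larger
% n shifts the band upward, producing the parallel bands of Figure 6 (left).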
3. Multiscale Integral Complexity (MSIC) and Nested Pattern Complexity (NPC)
After Shannon’s influential paper appeared in 1948 [
11], complexity has been assessed as the smallest number of symbols a string needs to convey a message. In Shannon’s paper, the symbols are bits, and the name given to the smallest number of symbols is information. Counting symbols in a string is a simple task. Simple counting is not the issue. The question is what is the order—or equivalently, the patterns—that the symbols should adopt to convey the same message most efficiently? The question transforms the counting symbols problem—not a problem at all—into a rather complex optimization problem that Kolmogorov [
12] later, in the context of algorithmic procedures, stated was non-solvable because there might always be a still unknown better way to organize the symbols in any message—steps in a code for the algorithmic case. Shannon did not explicitly explain how to order symbols. Nevertheless, he provided a math expression to quantify the goodness of expressing a message in binary code, which he named entropy. Shannon’s entropy formula retrieves zero entropy whenever there is a predictable order in the symbols forming the code. When there is absolute disorder, and the whole message is noise, the formula retrieves an entropy equal to one. Associating absolute order with zero entropy and total disorder with entropy = 1 is convenient; it gives us a sense of the system’s order compared with the minimum and maximum values of a system with the same number of components and two symbols in its description.
It is worth mentioning that the number of instances of each repeated sequence ($n_i$) is not a distribution based on the number of characters of the text ($T$): the sum of the $n_i$ is not equal to $T$'s length, nor is the summation of the occupied fractions $\ell_i\, n_i$ equal to $N$. Therefore, when computing the complexity of the process the text ($T$) represents, we cannot directly apply the Shannon–McMillan–Breiman theorem [13], which offers a way to calculate the entropy of ergodic processes.
In the context of this study, Shannon’s entropy measures the effectiveness of a symbol string describing a system while quantifying the noise and redundancy fractions contained. Adjusting Shannon’s expression for a symbolic diversity (
),
symbols of the type
(
), and
symbols used in the description, we obtain Expression (8), which returns the entropy—the complexity—of a unidimensional description as the Nested Pattern Complexity (NPC):
Although Shannon’s entropy expression was conceived for a series of coded signals, multidimensional descriptions might be transformed into unidimensional symbol strings. Consider, for example, saving a picture in a computer’s disc. Even though the image is a two-dimensional description, the information travels to the recording media as a unidimensional list of bits. Our intuition dictates that we can convert multidimensional descriptions based on orthogonal dimensions into unidimensional descriptions. When the description is a multifaceted array of symbols, we can decompose the description space progressively until we reach a unidimensional string of symbols. However, the fractal decomposition NRSDM representing the description of the text (
) is not an orthogonal structure. Therefore, we cannot directly apply Shannon’s entropy to compute the complexity of this fractal-like description. Arguably, the number of sequences located at each nesting depth is intimately related to the structural complexity of the process that the text represents. We need to find a term to name this property. A close term is “Nestedness”, which was created by biologists to categorize structures in ecological systems. However, it refers to the distribution of species concentrations in an ecosystem [
14], and its sense seems far from our objective.
In 1998, Bar-Yam [15] proposed complexity profiles to describe systems from several scaled points of view and to assess complexity as a function of the observation scale level. Bar-Yam's complexity profile depicts [16] the impact of the observation scale, retrieving the complexity associated with the system description realized at a specific observation scale. After applying LangFractalNestedDecomp(), the NRSDM describes the text ($T$) as a tree structure. The text description is modeled as significant components located immediately under the tree's root, containing deeper elements in a nested fashion until the most profound components—the leaves located at the deepest nesting level—are reached. This tree-shaped structure is, therefore, generally fractionally dimensioned between one and two. Consequently, we cannot use Equation (8) to directly assess the tree-shaped structure's complexity. Instead, we evaluate the complexity of this tree structure as the summation of the complexities of the nodes forming the tree. Computing the MSIC has the same numerical meaning as integrating Bar-Yam's complexity profile function of the text ($T$) description over its nested scales. Accordingly, we refer to the complexity of this fractal-like structure as the Multiscale Integral Complexity ($MSIC$), computed as the sum of the $NPC$ of the tree nodes, except those that are leaves:

$$MSIC(T) = \sum_{v\, \in\, \text{branch nodes}} NPC(v). \qquad (9)$$
Notice that the $MSIC$ value expresses the summation of $T$'s complexities, measured from all possible observation scales. Thus, texts with more than one nested level will show $MSIC$ values exceeding one but less than the number of branch nodes in the tree.
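Under our reading of Equations (8) and (9), both complexities can be computed over the FractalNode tree sketched in Section 2.2; interpreting $D$, $n_j$, and $M$ as a node's children, their counts, and the counts' total is our assumption:

using System;
using System.Linq;

public static class ComplexitySketch
{
    // NPC of one node (Equation (8)): entropy of the children's frequencies,
    // with the logarithm base equal to the node's symbolic diversity D.
    public static double Npc(FractalNode node)
    {
        int d = node.Children.Count;                         // symbolic diversity D
        if (d < 2) return 0.0;                               // a single symbol carries no entropy
        double m = node.Children.Sum(c => (double)c.Count);  // symbols used, M
        return -node.Children.Sum(c =>
        {
            double p = c.Count / m;
            return p * Math.Log(p, d);
        });
    }

    // MSIC (Equation (9)): sum of NPC over all branch (non-leaf) nodes of the tree.
    public static double Msic(FractalNode node) =>
        node.Children.Count == 0 ? 0.0 : Npc(node) + node.Children.Sum(Msic);
}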
4. Discretizing Numerical Series of Values
The proposed pattern detection method is a compound procedure that combines, among other aspects, setting scale limits, resolution adjustments, scale linearity changes, symbol sequence alignment, recursion, and optimization. It resembles a variational calculus applied to a discrete functional field. The approach consists of detecting repeating sequences in a long string of symbols and configuring a space of possible patterns that characterize the subject string. These prospective patterns are compared with each other to select the combination that best represents the subject string based on a defined interpretation objective.
Usually, the subject of study is a series of real numbers. One initial step in the pattern detection process is to build an $N$-long sequence of elementary symbols—or chars—that represents the original numerical value series, in the sense that this symbolic string resembles, at some scale, the "rhythm of variations" of the numerical series. We refer to this step as discretization. The quality of the resemblance between the symbolic string and the series of values achieved after discretization is the subject of evaluation and depends on several parameters discussed below. Once a prospective symbolic string represents the original series of values, the method analyzes the string to extract repeated groups of chars that may constitute patterns.
4.1. Discretizing Scales
The method developed relies on detecting repeated sequences of elementary symbols (chars) within a long string of symbols. When a series of real numbers represents the process studied, these values are discretized and converted to a sequence of elementary symbols. When discretizing the series of values, a type of scale may be chosen to "better absorb" the process's nonlinearities.
Appendix A.2 shows procedures for applying these discretizing scales.
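As an illustration of the simplest case, a linear discretizing scale can be sketched as follows; the mapping and the char alphabet are our assumptions, and the tangential and hyperbolic scales of Appendix A.2 would replace the linear bin computation:

using System.Linq;

public static class DiscretizerSketch
{
    // Maps each value to one of `resolution` chars ('A', 'B', ...) on a linear scale.
    public static string DiscretizeLinear(double[] series, int resolution)
    {
        double min = series.Min(), max = series.Max();
        double span = max - min;
        var chars = series.Select(v =>
        {
            int bin = span == 0 ? 0 : (int)((v - min) / span * resolution);
            if (bin == resolution) bin = resolution - 1;   // place the maximum in the top bin
            return (char)('A' + bin);
        });
        return new string(chars.ToArray());
    }
}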
4.2. Scale Parameter Selection
The scale parameters—type, resolution, inflection, and nonlinearity—should be selected to maximize, at least potentially, the extractable information—or negentropy—of the resulting discretized text. To measure the negentropy after applying a scale, we compute the difference between one (1), the number associated with the absence of patterns, and Shannon's entropy [11], based on the frequency of the characters comprising the discretized text. The well-known Shannon entropy refers to the entropy with two different symbol values (0 and 1). To generalize Shannon's entropy expression to a symbol diversity that might be greater than two, we follow the analysis presented by Febres and Jaffe [17], where the generalized expression must have the same symbolic diversity, or number of states, as the base of the logarithm in Shannon's entropy expression. In the context of this study, the diversity of chars included in the discretized text equals the scale resolution ($R$). Then, the base of the logarithm becomes $R$, corresponding to the base of the symbolic system that we are computing. Finally, the information ($I$) associated with a selected scale is as follows:

$$I = 1 + \sum_{j=1}^{R} \frac{f_j}{N} \log_R \frac{f_j}{N}, \qquad (10)$$

where $f_j$ is the frequency of the $j$-th char in the discretized text.
With our goal of selecting scale parameters to maximize the negentropy in mind, we reason that a gross scale, consisting of a few chars, will be incapable of differentiating some values, producing an organized text, yet with little information to offer. The only disadvantage we see in selecting a high resolution is the potential computational burden this selection may carry. Therefore, we adopt a resolution that does not increase the computation time beyond convenience. The choice of the type of scale (linear, tangential, or hyperbolic) is carefully considered, aiming to better "absorb" the process's nonlinearity. This nonlinearity "absorption" is reflected as an increase in the extractable information ($I$), defined in Expression (10). The other parameters—the scale's inflection and nonlinearity—are also selected to maximize the extractable information ($I$).
Selecting the scale to discretize the time series values is performed before the repeated sequence recognition process. Separating the problem of choosing an appropriate scale to discretize a numerical time series provides a relevant advantage: it does not augment the method's algorithmic complexity.
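A sketch of this prior selection step, reusing the DiscretizerSketch above and scoring each candidate resolution with Expression (10), could read as follows; the candidate range and the restriction to linear scales are our assumptions:

using System;
using System.Linq;

public static class ScaleSelectionSketch
{
    // Negentropy I = 1 + sum_j (f_j / N) log_R (f_j / N), per Expression (10).
    public static double Information(string text, int resolution)
    {
        double n = text.Length;
        return 1.0 + text.GroupBy(c => c)
                         .Sum(g => (g.Count() / n) * Math.Log(g.Count() / n, resolution));
    }

    // Tries linear scales of resolution 2..maxResolution and keeps the best-scoring one.
    public static (int Resolution, string Text) BestLinearScale(double[] series, int maxResolution)
    {
        return Enumerable.Range(2, maxResolution - 1)
            .Select(r => (Resolution: r, Text: DiscretizerSketch.DiscretizeLinear(series, r)))
            .OrderByDescending(t => Information(t.Text, t.Resolution))
            .First();
    }
}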
6. Results, Summary, and Discussion
We present a method based on an algorithm capable of detecting patterns in long texts. By recursively applying this algorithm, the procedure achieves the decomposition of the text into nested symbolic sequences, resulting in the NRSDM. The procedure functions on the basis of dimensionality reduction. Thus, we can think of the method as transforming a text object, whose dimensionality is of the order of the word diversity, into a lower-dimensional structure. The transformation follows a parameter set that defines a specific process interpretation. Among the most relevant of these parameters are the following:
The discretization scale: the resolution (alphabet size) and the scale's nonlinearity (tangential, hyperbolic, or linear);
Repetitions ($r$): the number of instances a sequence must appear to be considered a repeated sequence within the context of our method;
The text length ($N$): the text segment selected from the original full text.
Processing indefinitely long texts with these algorithms is prohibitive; their overall complexity is $Order(N^3)$. In our experiments, we used an Intel i7 processor. The 40,000-character texts of the Lorenz simulations were the longest texts we had time to process. To shorten the execution time for the 40,000-character texts, we had to increase the required number of repetitions ($r$).
The structure resulting from this transformation becomes the NRSDM we use to study the process by interpreting and representing it. In this document, we characterized three different types of time series: May's equation, the Lorenz equations, and several stock market price evolutions in time. We obtained two representations for each process: the MSIR and the NPR. The method and its resulting representations offer possibilities for further comparing processes' characteristics, extracting synthetic descriptions, and better understanding systems' behavior, thereby providing ways to enhance data for modeling and projection purposes.
The MSIR’s dimensionality equals the number of different sequence lengths found. These sequence lengths are the domain of the graphs shown in
Figure 6 (middle row),
Figure 9 and
Figure 11. Thus, each dot in these graphs may represent several sequences sharing their lengths. Hence, the number of dots is considerably less than the number of sequences. The dimensionality of the counterpart representation, the NPR, equals the number of different sequence lengths and the number of nesting depths found in the NRSDM decomposition. The domain of the graphs shown in
Figure 6 (bottom row),
Figure 10 and
Figure 12 represent the sequence lengths, while the size of the bubbles represents the nesting level of each sequence. Consequently, NPR graphs exhibit a denser cloud of bubbles than the MSIR.
The MSIR and the NPR are the pinnacle results of this study. These representations constitute visual fingerprints of the complex processes coded as unidimensional texts. Comparing the aspects of these graphs is already a possibility for contrasting processes. The structural distances among processes emerge in these graphs as noticeable differences. Focusing on the MSIR of the Lorenz equations, we observe a progressive dot-cloud shape change, from triangular for the 10,000-time-step simulations to an accentuated bow shape for the 40,000-time-step simulations. Since this progressive behavior occurred in two of the dimensions of the Lorenz process, we hypothesized that the "natural, or typical" shape of the MSIR of the Lorenz process is a bow-shaped cloud of dots, associating the triangular shape with the transient effects present while the process is still near non-natural initial conditions and far from its fully developed stable behavior. Turning our attention to the NPR graphs in Figure 10, we notice significant differences between the NPR of the Lorenz equations for the 10,000-time-step simulations versus those for the 20,000- and 40,000-time-step simulations. These last simulations exhibit multiple bubble clouds consistent with the graphic emergence of nested repeated sequence structures.
Inspecting the MSIR graphs for the stock market indexes in Figure 11, we note that GLD shows a triangular dot-cloud shape, while the SPX, WTI, and BTC show a bow dot-cloud shape. The NPR graphs for the stock market indexes in Figure 12 also show a single bubble cloud for GLD. Interestingly, of the four stock market indexes addressed, GLD shows the most dramatic and noticeable change in the records we used here. See the GLD market price and daily variations registered in Figure A2 in Appendix A.2; note the oscillation pattern change near TimeBase day 1900.
We did not perform a statistical analysis of the connections between the shapes of the MSIR and NPR graphs, as this would have gone beyond the scope of this study. Therefore, significant speculation remains about the system properties and conditions that may be derived from these graphs' shapes. However, these early results suggest we are just beginning to develop an analysis method with promising potential for studying complex systems.
Left for the end is the most direct result obtained from the method: the assessment of the complexities of the processes studied. The results clearly show the complexity ranking of these systems. GLD and BTC, for example, carry less information than the SPX and WTI do. These sorts of comparisons are a direct product of the method. Perhaps more important is the possibility of obtaining these complexity assessments for the two modes of systems described here: i) the synthetic math models for the dynamics of the May and Lorenz processes, and ii) the extended system response to the surrounding, mostly unknown, conditions represented by the stock market processes.