© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2023
This work is subject to copyright. All rights are solely and exclusively
licensed by the Publisher, whether the whole or part of the material is
concerned, specifically the rights of reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other
physical way, and transmission or information storage and retrieval,
electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The publisher, the authors, and the editors are safe to assume that the advice
and information in this book are believed to be true and accurate at the date
of publication. Neither the publisher nor the authors or the editors give a
warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The
publisher remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.
This Springer imprint is published by the registered company Springer
Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham,
Switzerland
Acknowledgments
The fact that it is possible today to present the CIB method based on an
extensive body of practice and multifaceted methodological research is due
to a growing community of method users and researchers, from which
numerous suggestions, inspirations, methodological innovations, and
critical questioning have emerged, thus helping the method mature. I would
like to express my gratitude to my colleagues worldwide who have been
inspired by the CIB method and shared their insights, experiences, and
criticism.
It is my pleasure to thank Dr. Diethard Schade, who initiated scenario
research at the Center for Technology Assessment, Prof. Georg Förster, my
comrade during my first walking attempts toward CIB, and Prof. Ortwin
Renn, who fostered the development of CIB at the University of Stuttgart
for 15 years by his continuous support.
I especially desire to thank my colleagues at the CIB-Lab of ZIRIUS at
the University of Stuttgart for the journey they have undertaken together
with me for so many years and who have contributed inestimably to the
further development and maturation of the CIB method. Their research,
motivation, wealth of ideas, and untiring support have always been an
inspiration and encouragement to me. Without them, this book would not
have come about in the way it has. I would also like to thank Dr. Wolfgang
Hauser, Dr. Hannah Kosow, Prof. Vanessa Schweizer, and M.A. Sandra
Wassermann for valuable comments on the book’s manuscript. All
remaining errors are mine.
Abbreviations
ABM Agent-based modelling
BASICS Battelle Scenario Inputs to Corporate Strategy (scenario method)
C Consistency score of a descriptor or scenario
CO2 Carbon dioxide
CIB Cross-Impact Balances (scenario method)
D Diversity score of a scenario portfolio
FAR Field Anomaly Relaxation (scenario method)
Gt C Gigatons (billions of tons) of carbon
IC Inconsistency score of a descriptor or scenario
ICS Significance threshold of a scenario inconsistency
IL Intuitive Logics (scenario method)
IPCC Intergovernmental Panel on Climate Change
KSIM Kane’s Simulation Model (simulation method)
m Number of matrices of a matrix ensemble
MINT A group of academic disciplines, consisting of mathematics,
informatics, natural sciences, and technology
N Number of descriptors of a cross impact matrix
OECD Organization for Economic Co-operation and Development
q Quorum applied in an ensemble evaluation
SD System Dynamics (simulation method)
SRES Special Report on Emission Scenarios
TIS Total impact score
Vi Number of states of descriptor i
Z Number of possible configurations of a morphological field
Contents
1 Introduction to CIB
References
2 The Application Field of CIB
2.1 Scenarios
2.2 Scenarios and Decisions
2.3 Classifying CIB
References
3 Foundations of CIB
3.1 Descriptors
3.2 Descriptor Variants
3.2.1 Completeness and Mutual Exclusivity of the Descriptor
Variants
3.2.2 The Scenario Space
3.2.3 The Need for Considering Interdependence
3.3 Coping with Interdependence: The Cross-Impact Matrix
3.4 Constructing Consistent Scenarios
3.4.1 The Impact Diagram
3.4.2 Discovering Scenario Inconsistencies Using Influence
Diagrams
3.4.3 Formalizing Consistency Checks: The Impact Sum
3.4.4 The Formalized Consistency Check at Work
3.4.5 From Arrows to Rows and Columns: The Matrix-Based
Consistency Check
3.4.6 Scenario Construction
3.5 How to Present CIB Scenarios
3.6 Key Indicators of CIB Scenarios
3.6.1 The Consistency Value
3.6.2 The Consistency Profile
3.6.3 The Total Impact Score
3.7 Data Uncertainty
3.7.1 Estimating Data Uncertainty
3.7.2 Data Uncertainty and the Robustness of Conclusions
3.7.3 Other Sources of Uncertainty
References
4 Analyzing Scenario Portfolios
4.1 Structuring a Scenario Portfolio
4.1.1 Perspective A: If-Then
4.1.2 Perspective B: Order by Performance
4.1.3 Perspective C: Portfolio Mapping
4.2 Revealing the Whys and Hows of a Scenario
4.2.1 How to Proceed
4.2.2 The Scenario-Specific Cross-Impact Matrix
4.3 Ex Post Consistency Assessment of Scenarios
4.3.1 Intuitive Scenarios
4.3.2 Reconstructing the Descriptor Field
4.3.3 Preparing the Cross-Impact Matrix
4.3.4 CIB Evaluation
4.4 Intervention Analysis
4.4.1 Analysis Example: Interventions to Improve Water Supply
4.4.2 The Cross-Impact Matrix and its Portfolio
4.4.3 Conducting an Intervention Analysis
4.4.4 Surprise-Driven Scenarios
4.5 Expert Dissent Analysis
4.5.1 Classifying Dissent
4.5.2 Rule-Based Decisions
4.5.3 The Sum Matrix
4.5.4 Delphi
4.5.5 Ensemble Evaluation
4.5.6 Group Evaluation
4.6 Storyline Development
4.6.1 Strengths and Weaknesses of CIB-Based Storyline
Development
4.6.2 Preparation of the Scenario-Specific Cross-Impact Matrix
4.6.3 Storyline Creation
4.7 Basic Characteristics of a CIB Portfolio
4.7.1 Number of Scenarios
4.7.2 The Presence Rate
4.7.3 The Portfolio Diversity
References
5 What if… Challenges in CIB Practice
5.1 Insufficient Number of Scenarios
5.2 Too Many Scenarios
5.2.1 Statistical Analysis
5.2.2 Diversity Sampling
5.2.3 Positioning Scenarios on a Portfolio Map
5.2.4 Further Procedures
5.3 Monotonous Portfolio
5.3.1 Unbalanced Judgment Sections
5.3.2 Unbalanced Columns
5.4 Bipolar Portfolio
5.4.1 Causes of Bipolar Portfolios
5.4.2 Special Approaches for Analyzing Bipolar Portfolios
5.5 Underdetermined Descriptors
5.6 Essential Vacancies
5.6.1 Resolving Vacancies by Expanding the Portfolio
5.6.2 Cause Analysis
5.7 Context-Dependent Impacts
References
6 Data in CIB
6.1 About Descriptors
6.1.1 Explanation of Term
6.1.2 Descriptor Types
6.1.3 Methodological Aspects
6.2 About Descriptor Variants
6.2.1 Explanation of Term
6.2.2 Types of Descriptor Variants
6.2.3 Methodological Aspects
6.2.4 Designing the Descriptor Variants
6.3 About Cross-impacts
6.3.1 Explanation of Term
6.3.2 Methodological Aspects
6.3.3 Data Uncertainty
6.4 About Data Elicitation
6.4.1 Self-Elicitation
6.4.2 Literature Review
6.4.3 Expert Elicitation (Written/Online)
6.4.4 Expert Elicitation (Interviews)
6.4.5 Expert Elicitation (Workshops)
6.4.6 Use of Theories or Previous Research as Data Collection
Sources
References
7 CIB at Work
7.1 Iran Nuclear Deal
7.2 Energy and Society
7.3 Public Health
7.4 IPCC Storylines
References
8 Reflections on CIB
8.1 Interpretations
8.1.1 Interpretation I (Time-Related): CIB in Scenario Analysis
8.1.2 Interpretation II (Unrelated to Time): CIB in Steady-State
Systems Analysis
8.1.3 Interpretation III: CIB in Policy Design
8.1.4 Classification of CIB as a Qualitative-Semiquantitative
Method of Analysis
8.2 Strengths of CIB
8.2.1 Scenario Quality
8.2.2 Traceability of the Scenario Consistency
8.2.3 Reproducibility and Revisability
8.2.4 Complete Screening of the Scenario Space
8.2.5 Causal Models
8.2.6 Knowledge Integration and Inter- and Transdisciplinary
Learning
8.2.7 Objectivity
8.2.8 Scenario Criticism
8.3 Challenges and Limitations
8.3.1 Time Resources
8.3.2 Aggregation Level and Limited Descriptor Number
8.3.3 System Boundary
8.3.4 Limits to the Completeness of Future Exploration
8.3.5 Discrete-Valued Descriptors and Scenarios
8.3.6 Trend Stability Assumption
8.3.7 Uncertainty and Residual Subjectivity in Data Elicitation
8.3.8 Context-Sensitive Influences
8.3.9 Consistency as a Principle of Scenario Design
8.3.10 Critical Role of Methods Expertise
8.3.11 CIB Does Not Study Reality but Mental Models of Reality
8.4 Unsuitable Use Cases: A Checklist
8.5 Alternative Methods
References
Appendix: Analogies
Physics
Network Analysis
Game Theory
Glossary
Cross-impact matrix (in the context of CIB)
Portfolio (in CIB)
Scenarios (in the context of CIB)
Index
List of Figures
Fig. 2.1 The scenario funnel
Fig. 3.2 Descriptor variants (alternative futures) for the descriptor “A.
Government”
Fig. 3.12 The influence diagram Fig. 3.11 with the impact sums of the
descriptors
Fig. 3.20 Consistency values of the descriptors calculated for the test
scenario
Fig. 4.13 Justification form for Descriptor variant “E3 Social cohesion:
Unrest”
Fig. 4.15 Scenario axes diagram and the “Somewhereland City mobility”
example
Fig. 4.16 Descriptors and descriptor variants of the “Somewhereland City”
intuitive mobility scenarios
Fig. 4.20 Corrected and extended scenario axes diagram according to the
result of the CIB analysis
Fig. 4.22 The descriptors and descriptor variants of the “Water supply”
intervention analysis
Fig. 4.29 Implementation of the “Global economic crises” wildcard into the
Somewhereland matrix
Fig. 4.30 The Somewhereland portfolio under the impact of the “Global
economic crises” wildcard
Fig. 4.34 Solutions of the “Resource economy” sum matrix, including all
scenarios with nonsignificant inconsistency
Fig. 4.38 The ensemble table of the “Resource economy” matrix ensemble
Fig. 4.42 Data basis for storyline development for Somewhereland scenario
no. 10
Fig. 4.48 The three fully consistent scenarios of the “Oil price” matrix
Fig. 4.49 Descriptor variant vacancies of the “Oil price” matrix (empty
squares)
Fig. 5.5 Nutshell III - Procedure for creating a selection with high scenario
distances (diversity sampling)
Fig. 5.6 Scenario selection according to the “max-min” heuristic (diversity
sampling)
Fig. 5.15 Intervention effects: worst case (dark shading) and best case (light
shading)
Fig. 5.16 Effect of dual interventions in the “Social sustainability” matrix
Fig. 5.25 Conditional cross-impact matrices (the top matrix is valid for E1
scenarios, that below for E2 scenarios)
Fig. 5.26 Mobility demand portfolio after consideration of context
dependencies
Fig. 6.2 “Group opinion” cross-impact matrix and portfolio after removing
the passive descriptor
Fig. 6.5 Portfolios with (top) and without (bottom) intermediary descriptor
D
Fig. 7.1 “Iran 1395” scenarios and their thematic core motifs. Own
illustration based on Ayandeban (2016)
Fig. 7.2 Map of German societies in 2050 and their CO2 emissions.
Modified from Pregger et al. (2020)
Fig. 7.3 Section of the network of impact relations between the factors
influencing the energy balance of an individual. Own representation based
on data from Weimer-Jehle et al. (2012)
Fig. 7.4 CIB analysis of obesity risks for children and adolescents for four
case examples. Data from Weimer-Jehle et al. (2012)
Fig. 7.5 Scenario axes diagram of the forty SRES emissions scenarios. Own
illustration based on data from Nakićenović et al. (2000)
Fig. 7.6 Initial phase of the emissions trajectories of the forty SRES
scenarios (own illustration based on data from Nakićenović et al. (2000)
(SRES scenario emissions) and IPCC (2014) (historical CO2 emissions))
Fig. 7.7 Number of SRES and CIB scenarios in four classes of carbon
intensity. Own illustration based on data from Schweizer and Kriegler
(2012)
Fig. A.1 Analogy of the equilibrium of forces: valleys as rest points for
heavy bodies
1. Introduction to CIB
Wolfgang Weimer-Jehle1
(1) ZIRIUS, University of Stuttgart, Stuttgart, Germany
Wolfgang Weimer-Jehle
Email: [email protected]
Scenarios
As mentioned, scenario construction is the most common application of
CIB thus far. It is therefore to this field of application that the descriptions
in this book refer, with a few exceptions. This focus is not intended to
disregard the value of CIB for qualitative systems analysis but is motivated
by the expectation that the transfer of the methodological descriptions and
considerations formulated here for scenario analysis to the field of
qualitative systems analysis will be straightforward.
To fulfill their function as instruments for preparing for the future,
scenarios must be well constructed. They must capture what we can
reasonably assume today about the future and the forces that will shape it.
Taken together, well-constructed scenarios must express the different
directions in which these forces can steer events. There have been differing
views about how best to achieve this purpose since the early days of
scenario making in the 1950s and 1960s, from which two distinct “scenario
cultures” developed.
Simply Thinking
Herman Kahn, the creator of the modern scenario concept, argued that the
most important thing is to “think about the problem” (Schnaars, 1987: 109),
in other words, to prepare scenarios without the use of formal construction
methods. From Kahn’s perspective, formal construction techniques are
perceived as a distraction and an impediment to inspiration, intuition, and
free thinking. Following Kahn’s approach, the intuitive logics (IL) method
emerged (Huss & Honton, 1987; Wilson, 1998), according to which
scenarios are designed “by gut feeling” in expert discussions.3 The first
groundbreaking successes of the scenario technique are due to this
approach,4 and it is by far the most widely used scenario methodology to
date, except probably in the area of scientific scenarios.
The Magical Number Seven Plus/Minus Two
Almost simultaneously with the preceding approach, however, another view
of scenario construction emerged, which emphasized the value of formal
methods in the collection of information and in actual scenario construction.
One of the founders of this school of thought is Olaf Helmer, co-developer
of the Delphi method for structured collection of expert assessments
(Dalkey & Helmer, 1963) and co-inspirer of the first cross-impact
techniques for formal analysis of expert judgments (Gordon & Hayward,
1968).
Advocates of formal scenario construction can draw on weighty
arguments from cognition research. In a 1956 essay that would become one
of the most frequently cited publications in psychology textbooks,5
American psychologist George Miller evaluated a series of cognition
experiments (Miller, 1956). He concluded that there is an upper limit to our
mental capacity to accurately and reliably process information about
simultaneously interacting elements6 and that this limit is seven plus or
minus two elements. The essay triggered extensive and continuing research
on the question, with the result that Miller’s “magical number” must be
regarded as optimistically high (Cowan, 2001).
The transfer of these findings of cognition research to the problem of
scenario construction is inevitable and sobering. If a scenario analysis
addresses ten factors that will define the future (a rather modest number),
then 90 potential interactions arise between these ten factors. If only about
half of the potential interactions actually matter (which, as we will see in
Sect. 6.3.2, is about average), persons attempting the mental construction of
a scenario will have to keep in mind and weigh approximately 45
interrelationships to extract from them a scenario that considers all relevant
interrelationships. Given the limits of our mental capacities shown by
cognition research, can we hope to do justice to this task by intuitive
scenario construction?
A challenge for mental scenario construction also arises from another
angle, that is, from the combinatorial weight of the task. Even if we content
ourselves with a rough analysis and grant each of the ten factors three
conceivable future developments, which we then must combine into
meaningful scenarios, this process results in 3 to the power of 10, i.e.,
approximately 59,000 combinatorial alternatives, each of which must be
considered a possible scenario until disproven. How many of these
alternatives can be evaluated by mental reflection, and how many relevant
scenarios with potentially massive implications go unnoticed when we
finally find ourselves at the end of our time resources after intuitively
identifying a few plausible combinations? Incidentally, as we will see later,
combinatorial spaces with 59,000 combinatorial alternatives are among the
lesser challenges faced in scenario analysis.
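The two quantities invoked above are easy to verify. A short sketch (the function names are illustrative; the numbers reproduce the text's example of ten factors with three variants each):

```python
# Combinatorial weight of mental scenario construction:
# N interacting factors yield N*(N-1) directed pairwise influences,
# and V alternative developments per factor yield V**N candidate scenarios.

def interaction_count(n_factors: int) -> int:
    """Number of directed factor-to-factor influences to keep in mind."""
    return n_factors * (n_factors - 1)

def scenario_space_size(n_factors: int, n_variants: int) -> int:
    """Number of combinatorial alternatives (possible raw scenarios)."""
    return n_variants ** n_factors

n, v = 10, 3
print(interaction_count(n))           # 90 potential interactions
print(interaction_count(n) // 2)      # 45, if about half actually matter
print(scenario_space_size(n, v))      # 59049, i.e., approximately 59,000
```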
However, the question of intuition-based versus formal construction of
scenarios is not the only fundamental controversy in the scenario
community. A second controversy is whether (or for what purposes)
scenarios should rely essentially on quantitative data or whether they should
also build substantially on qualitative bodies of knowledge.
Chapter 5 For most users, the desired result of a CIB analysis is probably
a manageable portfolio of perhaps 3–6 clearly different scenarios. Such a
result is in fact not atypical for a CIB analysis. However, CIB does not
return a result with standardized properties. Rather, the scenario portfolio it
generates is an expression of the systemic relationships formulated in the
cross-impact matrix. The consequent result from a system-analytical
perspective can be a small or large, diverse or rather monotonous scenario
portfolio—independently of the wishes and expectations of the user.
Chapter 5 therefore addresses the case in which the result of a CIB analysis
does not meet expectations. It describes using other or supplementary
analysis approaches to arrive at a result that meets one’s needs or at least at
an understanding why one’s expectations are at odds with the system
picture that was input into the CIB analysis.
Chapter 6 Now that it is clear how CIB functions in principle and what
can be achieved with it, it is time to take a closer look at the three central
data objects of the method: descriptors, descriptor variants, and cross-
impact data. Hidden beneath the surface of the technical application are
many differentiations and design decisions that can be handled well or
poorly, unconsciously or purposefully. Chapter 6 therefore presents four
“dossiers” that compile key information about these data objects and how to
collect them. The dossiers are designed to provide in-depth information and
can be skipped when reading the book for the first time. In that case,
however, the reader is advised to return to the chapter in a second pass.
Chapter 7 Theory is followed by a visit to the workshops. Chapter 7
outlines four selected studies in which CIB was used by different research
teams to analyze the future, to analyze systems or to critically review
existing scenarios. The selection of examples is intended to reveal the
thematic diversity of the application of the method. The examples also
make clear that it is precisely in disciplines with entrenched methodological
traditions that new perspectives can be gained by using this still young
method.
References
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage
capacity. Behavioral and Brain Sciences, 24, 87–185.
[Crossref]
Dalkey, N., & Helmer, O. (1963). An experimental application of the Delphi method to the use of
experts. Management Science, 9, 458–467.
[Crossref]
Dörner, D. (1997). The logic of failure: Recognizing and avoiding error in complex situations. Basic
Books.
Fink, A., Schlake, O., & Siebe, A. (2002). Erfolg durch Szenario-Management – Prinzip und
Werkzeuge der strategischen Vorausschau. Campus.
Gordon, T. J., & Hayward, H. (1968). Initial experiments with the cross impact matrix method of
forecasting. Futures, 1(2), 100–116.
[Crossref]
Gorenflo, D. W., & McConnell, J. (1991). The most frequently cited journal articles and authors in
introductory psychology textbooks. Teaching of Psychology, 18, 8–12.
[Crossref]
Huss, W. R. (1988). A move toward scenario analysis. International Journal of Forecasting, 4, 377–
388.
[Crossref]
Huss, W. R., & Honton, E. (1987). Alternative methods for developing business scenarios.
Technological Forecasting and Social Change, 31, 219–238.
[Crossref]
Kosow, H., & Gaßner, R. (2008). Methods of future and scenario analysis – Overview, assessment,
and selection criteria. DIE Studies 39, Deutsches Institut für Entwicklungspolitik.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for
processing information. The Psychological Review, 63, 81–97.
[Crossref]
Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards
socio-technical scenarios of the German energy transition–lessons learned from integrated energy
scenario building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0
[Crossref]
Read, C. (1920). Logic–deductive and inductive (4th ed.). Simpkin & Marshall.
Ringland, G. (2006). Scenario planning – Managing for the future. John Wiley.
Saaty, T. L., & Ozdemir, M. S. (2003). Why the magic number seven plus or minus two.
Mathematical and Computer Modelling, 38, 233–244.
[Crossref]
Schnaars, S. P. (1987). How to develop and use scenarios. Long Range Planning, 20(1), 105–114.
[Crossref]
Schweizer, V. J., & Kurniawan, J. H. (2016). Systematically linking qualitative elements of scenarios
across levels, scales, and sectors. Environmental Modelling & Software, 79, 322–333. https://doi.org/
10.1016/j.envsoft.2015.12.014
[Crossref]
Vögele, S., Hansen, P., Poganietz, W.-R., Prehofer, S., & Weimer-Jehle, W. (2017). Scenarios for
energy consumption of private households in Germany using a multi-level cross-impact balance
approach. Energy, 120, 937–946. https://doi.org/10.1016/j.energy.2016.12.001
[Crossref]
Vögele, S., Rübbelke, D., Govorukha, K., & Grajewski, M. (2019). Socio-technical scenarios for
energy intensive industries: The future of steel production in Germany in context of international
competition and CO2 reduction. Climatic Change, 1–16. (Also: STE Preprint 5/2017,
Forschungszentrum Jülich.)
Wack, P. (1985a). Scenarios–uncharted waters ahead. Harvard Business Review, 62(5), 73–89.
Wack, P. (1985b). Scenarios–shooting the rapids. Harvard Business Review, 63(6), 139–150.
Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T.,
Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their
usage for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/
10.1016/j.energy.2016.05.073
[Crossref]
Wilson, I. (1998). Mental maps of the future: An intuitive logics approach to scenario planning. In L.
Fahey & R. M. Randall (Eds.), Learning from the future: Competitive foresight scenarios (pp. 81–
108). Wiley.
Footnotes
1 Method development: Weimer-Jehle (2001). First method application: Förster (2002).
3 Schnaars (1987:106): “Scenario writing is a highly qualitative procedure. It proceeds more from
the gut than from the computer, although it may incorporate the results of quantitative models.
Scenario writing assumes that the future is not merely some mathematical manipulation of the past,
but the confluence of many forces, past, present and future that can best be understood by simply
thinking about the problem.”
4 This refers in particular to the Shell scenarios on the eve of the oil crisis (Wack, 1985a, 1985b).
7 Huss (1988:378), for example, reports the prevalence of this perspective among forecasters during
the 1980s.
8 The phrase is attributed to various individuals, including economist John Maynard Keynes and
philosopher Karl Popper. However, the oldest published source known to the author refers to the
British philosopher Carveth Read (1920:351) (“Better to be vaguely right than precisely wrong”).
10 Godet (1983), pages 181, 182, 189. Godet does not conclude from this view that quantitative
methods should be renounced; rather, he recommends a combination of qualitative and quantitative
methods for prognostics. These considerations are transferable to the field of scenario methodology.
12 E.g., Kosow and Gaßner (2008), Ringland (2006), Fink et al. (2002).
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
W. Weimer-Jehle, Cross-Impact Balances (CIB) for Scenario Analysis, Contributions to Management
Science
https://doi.org/10.1007/978-3-031-27230-1_2
2.1 Scenarios
Scenarios are a future research concept for dealing with future openness and
uncertainty. According to the definition by Michael Porter (1985), a
scenario is:
…an internally consistent view of what the future might turn out to
be—not a forecast, but one possible future outcome.
Scenarios thus assume that there are multiple possible futures and that it is
not possible to recognize in the present which of them will occur. From the
perspective of scenario technique, preparing for the future means dealing
with a variety of possible futures instead of—as in forecasting—focusing
on one expected future and aligning our actions specifically with this
expectation.
Figure 2.1 visualizes this concept by means of the so-called “scenario
funnel” (e.g., Kosow & Gaßner, 2008). Here, the development of a system
is sketchily represented by two quantitatively measurable key variables
(“Trend A” and “Trend B”). At time P (the present), the state of the trend
variables is known. In the future, the trend variables may evolve away from
their present state. The further we look into the future, the greater the
uncertainty about the state of the trend variables becomes, and the funnel
enclosing the possibility space of the system opens.
In the case of a forecast, one would rely on the center of the funnel. This
is appropriate when the opening of the funnel is narrow. Often, however, in
long-term decision problems, the opening of the funnel is so wide that the
different locations of the opening represent essentially different ideas of the
future and require different decisions. Then, it would be inappropriate to
focus on the center, and the wideness of the opening is better addressed by a
“portfolio” of different scenarios wisely distributed across the opening.
Figure 2.1 has only an illustrative function and makes the basic idea of
the scenario technique understandable. The front surface of the funnel is not
necessarily circular but can take on more complicated shapes. In reality,
more than two key variables are usually required to describe the future
development of a system in a meaningful way, often including qualitative
variables that cannot be measured on a numerical scale.
The development of the modern scenario concept is usually attributed to
Herman Kahn, who worked at the RAND Corporation in the 1950s, when
he advised the US government on military strategy issues (Kahn, 1960).
Roots of thought, however, can be traced further back historically (cf. von
Reibnitz, 1987). After a striking and economically momentous application
of the concept in the corporate sector by the Shell company shortly before
the first oil crisis (Wack, 1985a, 1985b), the use of scenario techniques
quickly spread in academia, business, and administration. As early as the
beginning of the 1980s, approximately 50% of the large US companies
responding to a survey confirmed that they used scenarios.
All users had had sufficiently positive experiences with the method to want
to continue using it (Linneman & Klein, 1983), and later surveys showed
further increases in usage rates. The high importance of the concept in
future research also is reflected in the use of terms in the literature. Textual
analyses of electronically recorded English-language books show that
“scenario” became a dominant term in foresight around 1994, surpassing
the frequencies of use of the competing terms “projection” and “forecast”
(Trutnevyte et al., 2016).
                      Environmental Scenario
                        I     II    III   IV
Planning Variant A     ++    ++    +     -
Planning Variant B     +     o     ++    o
Planning Variant C     --    --    -     ++
Planning Variant D     ++    o     -     --
Planning Variant E     o     -     --    --
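Tables of this kind are often read with a simple robustness heuristic: score each symbol, then prefer the planning variant whose worst case across all scenarios is best. A minimal sketch (the symbol-to-score mapping and the max-min rule are illustrative assumptions, not taken from the table's source):

```python
# Illustrative robustness reading of a variant-vs-scenario table.
# The numeric coding of the symbols is an assumption for this sketch.
SCORE = {"++": 2, "+": 1, "o": 0, "-": -1, "--": -2}

table = {  # rows: planning variants; columns: environmental scenarios I-IV
    "A": ["++", "++", "+", "-"],
    "B": ["+", "o", "++", "o"],
    "C": ["--", "--", "-", "++"],
    "D": ["++", "o", "-", "--"],
    "E": ["o", "-", "--", "--"],
}

def worst_case(variant: str) -> int:
    """Lowest score the variant receives in any scenario."""
    return min(SCORE[s] for s in table[variant])

# Max-min rule: choose the variant with the best worst case.
robust = max(table, key=worst_case)
print(robust, worst_case(robust))  # B is the only variant never scoring below 'o'
```

Under this reading, Variant B is the robust choice even though Variants A and D look better in individual scenarios.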
First, the factors that should be part of the scenarios and are able to
represent the most important system interrelationships are selected. These
factors are the nodes of the network and are called “descriptors” in CIB.4
Next, a small number of alternative futures (“descriptor variants”) are
formulated for each descriptor. These describe which future uncertainty (or
future openness) is assumed for the descriptor. The descriptor variants
represent the discrete states of the network nodes in Fig. 2.3. In this respect,
CIB follows the program of morphological analysis, a general method for
structuring possibility spaces (Zwicky, 1969).
In the third step, however, CIB takes a different approach than the
classical morphological analysis and turns, in the style of a cross-impact
analysis (CIA, Gordon & Hayward, 1968), to the influence relationships
between the descriptors, i.e., the arrows between the network nodes. To do
this, information is collected on whether the development of one descriptor
X influences which development prevails in another descriptor Y. This
information is then coded on an ordinal scale from strongly hindering to
strongly promoting. These relationships are referred to as “cross-impacts.”
Cross-impact analysis is the name given to a relatively broad group of
methods developed from the 1960s onward to examine qualitative
information on interacting events and trends in very different ways. With
the name “Cross-Impact Balances,” CIB places itself in this tradition, but
this particular variant of the name also points to a characteristic peculiarity
that distinguishes it from other cross-impact analyses: the use of impact
balances as its central analysis instrument.
Finally, in the fourth step, the collected information about the pair
relationships of the system is synthesized into coherent images of the
overall system, i.e., plausible network configurations, with the help of an
algorithmic procedure. The results are interpreted as consistent scenarios.
How exactly this synthesis step is performed will be the subject of Chap. 3.
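As a preview of that synthesis step, the core consistency test can be sketched in a few lines. The matrix data below are entirely hypothetical, and the balance rule shown (each chosen variant must receive the maximal impact sum among the variants of its descriptor) anticipates the formal definition given in Chap. 3:

```python
from itertools import product

# Toy cross-impact data (hypothetical). CIM[(src, variant)][dst] lists the
# influence of that source variant on each variant of dst, on the ordinal
# CIB scale (negative = hindering, positive = promoting).
VARIANTS = {"Economy": ["growth", "stagnation"],
            "Policy": ["active", "passive"],
            "Demand": ["high", "low"]}

CIM = {
    ("Economy", "growth"):     {"Policy": [1, -1],  "Demand": [2, -2]},
    ("Economy", "stagnation"): {"Policy": [-1, 1],  "Demand": [-2, 2]},
    ("Policy", "active"):      {"Economy": [2, -2], "Demand": [1, -1]},
    ("Policy", "passive"):     {"Economy": [-2, 2], "Demand": [-1, 1]},
    ("Demand", "high"):        {"Economy": [1, -1], "Policy": [0, 0]},
    ("Demand", "low"):         {"Economy": [-1, 1], "Policy": [0, 0]},
}

def impact_sums(scenario: dict, dst: str) -> list:
    """Summed influence of the chosen variants on each variant of dst."""
    sums = [0] * len(VARIANTS[dst])
    for src, var in scenario.items():
        if src != dst:
            row = CIM[(src, var)].get(dst, [0] * len(sums))
            sums = [a + b for a, b in zip(sums, row)]
    return sums

def is_consistent(scenario: dict) -> bool:
    """Consistent iff every chosen variant attains the maximal impact sum."""
    for dst, var in scenario.items():
        sums = impact_sums(scenario, dst)
        if sums[VARIANTS[dst].index(var)] < max(sums):
            return False
    return True

# Exhaustive screening of the scenario space (2 x 2 x 2 = 8 configurations).
consistent = [dict(zip(VARIANTS, combo))
              for combo in product(*VARIANTS.values())
              if is_consistent(dict(zip(VARIANTS, combo)))]
for s in consistent:
    print(s)
```

On this toy matrix, the exhaustive screening leaves only the two mutually reinforcing configurations standing, which is exactly the kind of filtering of the combinatorial space that CIB performs at full scale.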
Next—no longer part of the CIB analysis in a strict sense and yet its
objective—is the utilization of scenarios by individuals or organizations in
the context of their decision-making and their preparation for the future.
However, the different uses for scenarios in planning and decision-making
processes are not CIB-specific and therefore are not the subject of this
book. Here, reference must be made to the general scenario literature.
As a rule, different people are involved in different roles in a CIB
analysis. For further use in the text, four terms are introduced:
Core team: The core team consists of the person(s) who design, conduct, evaluate, and document
the CIB analysis.
Sources: The information on the main factors and interdependencies of a system required for a
CIB analysis can be obtained from the literature and/or by interviewing experts. The term
“(knowledge) sources” is used as an umbrella term for both resources of information acquisition.
Experts: When people play the role of knowledge sources for a CIB analysis, they are referred to
as the “experts.”
Target audience: The preparation of the CIB analysis aims to provide the “target audience” with
orientation to the system under study and thereby support goal setting or decision-making.
In practice, the roles may overlap. For instance, people from the target
audience of a CIB analysis also may have expertise on the issue under
investigation and contribute to the analysis by participating in the expert
panel. In the remainder of the text, these terms are therefore used as role
designations, irrespective of the personnel involved.
References
Alcamo, J. (2008). The SAS approach: Combining qualitative and quantitative knowledge in
environmental scenarios. In J. Alcamo (Ed.), Environmental futures–the practice of environmental
scenario analysis (Vol. 2, pp. 123–150). Elsevier.
[Crossref]
Fink, A., Schlake, O., & Siebe, A. (2002). Erfolg durch Szenario-Management – Prinzip und
Werkzeuge der strategischen Vorausschau. Campus Verlag.
Gordon, T. J., & Hayward, H. (1968). Initial experiments with the cross impact matrix method of
forecasting. Futures, 1(2), 100–116.
[Crossref]
Honton, E. J., Stacey, G. S., & Millett, S. M. (1985). Future scenarios–the BASICS computational
method, economics and policy analysis occasional paper (Vol. 44). Battelle Columbus Division.
Johansen, I. (2018). Scenario modelling with morphological analysis. Technological Forecasting and
Social Change, 126, 116–125.
[Crossref]
John, S. (2009). Bewertungen der Auswirkungen des demographischen Wandels auf die
Abwasserbetriebe Bautzen mit Hilfe der Szenarioanalyse. Dresdner Beiträge zur Lehre der
betrieblichen Umweltökonomie 34/09. University of Dresden.
Van ’t Klooster, S. A., & van Asselt, M. B. A. (2006). Practising the scenario-axes technique. Futures, 38(1),
15–30.
[Crossref]
Kosow, H., & Gaßner, R. (2008). Methods of future and scenario analysis – Overview, assessment,
and selection criteria. DIE Studies 39. Deutsches Institut für Entwicklungspolitik.
Linneman, R. E., & Klein, H. E. (1983). The use of multiple scenarios by U.S. industrial companies:
A comparison study 1977–1981. Long Range Planning, 16, 94–101.
[Crossref]
Trutnevyte, E., McDowall, W., Tomei, J., & Keppo, I. (2016). Energy scenario choices: Insights from
a retrospective review of UK energy futures. Renewable and Sustainable Energy Reviews, 55, 326–337.
[Crossref]
Wack, P. (1985a). Scenarios – uncharted waters ahead. Harvard Business Review, 63(5), 73–89.
Wack, P. (1985b). Scenarios – shooting the rapids. Harvard Business Review, 63(6), 139–150.
Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T.,
Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their
usage for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/10.1016/j.energy.2016.05.073
[Crossref]
Weimer-Jehle, W., Vögele, S., Hauser, W., Kosow, H., Poganietz, W.-R., & Prehofer, S. (2020).
Socio-technical energy scenarios: State-of-the-art and CIB-based approaches. Climatic Change, 162,
1723–1741. https://doi.org/10.1007/s10584-020-02680-y
[Crossref]
Wilson, I. (1998). Mental maps of the future: An intuitive logics approach to scenario planning. In L.
Fahey & R. M. Randall (Eds.), Learning from the future: Competitive foresight scenarios (pp. 81–
108). Wiley.
Zwicky, F. (1969). Discovery, invention, research through the morphological approach. Macmillan.
Footnotes
1 The example is based on a study of the impact of demographic changes on a wastewater company
(John, 2009).
4 The earliest use of the descriptor term in the scenario technique known to the author goes back to
Honton et al. (1985).
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
W. Weimer-Jehle, Cross-Impact Balances (CIB) for Scenario Analysis, Contributions to Management
Science
https://doi.org/10.1007/978-3-031-27230-1_3
3. Foundations of CIB
Wolfgang Weimer-Jehle1
(1) ZIRIUS, University of Stuttgart, Stuttgart, Germany
Wolfgang Weimer-Jehle
Email: [email protected]
3.1 Descriptors
Descriptors are the key topics that are used to compose the scenarios during
the analysis. Together, they should allow us to describe the system under
study and its most important internal interactions. The term originates in
librarianship and computer science to describe words that can be used to
index the content of texts or datasets. Since the 1980s, the term also has
been used in scenario techniques (Honton et al., 1985). In some cases, the
term “(scenario) factors” is used instead in the scenario literature (e.g.,
Gausemeier et al., 1998).
To identify the necessary descriptors, it can be helpful to adopt a
fictitious future perspective: Imagine that the target year of the scenario
analysis had been reached and that you, as a chronicler, were faced with the
task of concisely describing the “past” development and explaining it in
retrospect. Which topics would then appear to be particularly worth
mentioning? Which connections and cause-effect relationships would have
to be explained? In a limited text, it is not possible and not necessary to go
into every detail. However, the chronicler must dissect the system into
various partial developments to the extent that the developments that have
occurred can be made understandable.
Six descriptors are used for the “Somewhereland” demonstration
analysis. Somewhereland is a multiparty democracy. Which party governs
the country and thus shapes its political course is consequently an important
but open question. Because Somewhereland has many neighboring
countries with which it shares a variable history, the stance of its foreign
policy also is an essential part of the story to be told. Economic
development and the distribution of wealth also will contribute to shaping
how the country develops. Whether social cohesion is strengthened or
fractured will be the result but at the same time the cause of developments
in other areas. Finally, closely interwoven as a cultural undercurrent with all
these developments is the question of the social values that prevail in
Somewhereland. In summary, the analysis addresses the descriptor field
shown in Fig. 3.1.
Fig. 3.1 The descriptor field for the “Somewhereland” analysis
The selection of the descriptors is a first and decisive work step for the
analysis quality. Different procedures for the practical execution of this step
are described in Sect. 6.4.
A3 B1 C2 D1 E3 F1
is an abbreviation for the scenario:
N     vi = 2      vi = [3, 2, 3, 2, …]    vi = 3
5     32          108                     243
10    1024        7776                    59,049
15    32,768      839,808                 14.4 m
20    1.05 m      60.5 m                  3.49 bn
25    33.6 m      6.53 bn                 847 bn
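The combinatorial growth of the scenario space tabulated above is simply the product of the variant counts of the N descriptors. A minimal sketch (the function name is my own, not from the book):

```python
# Number of possible scenario configurations = product of the
# variant counts v_i of the N descriptors.
from math import prod

def scenario_space_size(variant_counts):
    """Number of distinct descriptor-variant combinations."""
    return prod(variant_counts)

print(scenario_space_size([2] * 5))          # 32
print(scenario_space_size([3, 2, 3, 2, 3]))  # 108
print(scenario_space_size([3] * 20))         # 3486784401, i.e., 3.49 bn
```

The rapid growth of these numbers is why exhaustive manual checking becomes infeasible beyond toy systems.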
Kosow et al. (2022), following Weitz et al. (2019), point out an
alternative form of representation (Fig. 3.6).
The evaluation of the data is identical in all cases. The difference lies
solely in the style of presentation.
Documentation of the justifications for the cross-impact ratings, as
indicated in Fig. 3.4, is not required for the technical procedure of scenario
construction using CIB. However, it is still recommended for several
reasons. First, the documentation is helpful for the core team itself to
understand the internal logic of the completed scenarios and to be able to
explain them to others. The documented arguments also make it easier for
third parties to independently understand the cross-impacts and thus the
foundation of the scenarios. This makes it easier for the target audience of
the analysis to convince themselves of the plausibility of the scenarios or to
offer more targeted and constructive criticism. Finally, it also can be
difficult for the core team itself after some time to remember in detail the
reasons for the cross-impact assessments. The documentation is then the
key to being able to understand and explain one’s own work even years
later.
All cross-impact assessments taken together form the cross-impact
matrix. In the case of Somewhereland, it takes the form shown in Fig. 3.7.
The cross-impact data used here were chosen by the author and are merely
illustrative. In practice, cross-impact data are usually collected through
literature review and/or expert elicitation (cf. Sect. 6.4).
Just as for Fig. 3.4, it is also true for the entire matrix that in CIB the descriptors and M2
descriptor variants in the rows are regarded as impact sources and in the columns as
impact targets.
Thus, the judgment cell marked by a small circle on the right in Fig. 3.7
describes the strongly promoting impact that social unrest (E3) would have
on the emergence of family orientation as a dominant social value (F3). It is
crucial to carefully observe the convention of rows as the source of impact
and columns as the target of impact because confusing the roles of rows and
columns leads to the reversal of cause and effect in the impact relationship
and thus to the corruption of the internal logic of the scenarios.
As a rule, the diagonal evaluation fields remain empty since they would
not describe interdependencies but the influence of a descriptor on itself. In
special cases, however, the diagonal fields also can be used to describe self-
influences.3 The CIB algorithm is able to handle this issue as well.
Optionally, the cross-impact matrix also can be printed with the
judgment sections consisting entirely of zeros omitted (Fig. 3.8). This can improve
clarity, especially for matrices that (unlike Somewhereland) have a high
proportion of empty judgment sections. In this book, cross-impact matrices
are generally displayed without empty judgment sections. However,
representations with visible empty judgment sections also are correct and
widespread in the literature.
Fig. 3.8 The cross-impact matrix printed without influence-free judgment sections
The extract from the cross-impact matrix in the upper left-hand corner
of Fig. 3.10 shows that if the descriptor “A. Government” is assigned the
variant “A2 ‘Prosperity party’” and the descriptor “B. Foreign policy” is
assigned the variant “B1 Cooperation,” the impact of A on B will be a
medium strength promotion (+2, see the highlighted judgment cell in the
upper left-hand corner of Fig. 3.10). This is represented graphically in Fig.
3.10 by a green arrow of medium strength. In terms of content, this
expresses the judgment that an economy-focused party is likely to seek
cooperation with other countries to promote trade relations and thus the
domestic economy. A reverse effect from B1 to A2 is not coded in the
matrix (cross-impact: 0).
Likewise, the cross-impact matrix can be used to graphically represent
all other descriptor relationships for the scenario under study. Figure 3.11
shows this result.
Fig. 3.12 The influence diagram of Fig. 3.11 with the impact sums of the descriptors
For instance, Descriptor E attains the impact sum +4, since the
promoting impacts +3 (from Descriptor C) and +3 (from Descriptor D) and the
hindering impact −2 (from Descriptor F) act on it. In total, this results in
+3 + 3 − 2 = +4.
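The impact-sum arithmetic can be sketched in a few lines; the variant labels and ratings below are illustrative stand-ins chosen to match the sum above, not the book's actual matrix entries:

```python
# cross_impact[(source_variant, target_variant)] holds the rating;
# absent pairs count as 0 (no influence). Values are assumed for
# illustration only.
cross_impact = {
    ("C3", "E1"): 3,   # assumed: promoting impact of C's active variant
    ("D2", "E1"): 3,   # assumed: promoting impact of D's active variant
    ("F1", "E1"): -2,  # assumed: hindering impact of F's active variant
}

def impact_sum(active_variants, target, matrix):
    """Sum all impacts the active variants exert on `target`."""
    return sum(matrix.get((src, target), 0) for src in active_variants)

print(impact_sum(["C3", "D2", "F1"], "E1", cross_impact))  # 4
```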
The plausibility argument for the impact diagram developed in Sect.
3.4.2 requires that for each descriptor, a variant is active that attracts as
many green arrows as possible and as few red arrows as possible, whereby
strong influences bear a higher weight than weak ones. Translated to the
concept of impact sums, this means that a scenario is plausible and
internally consistent if for each descriptor the impact sum is as high as
possible. In accordance with the earlier visual inspection of the impact
diagram in Fig. 3.11, a glance at the impact sums in Fig. 3.12 now shows
that the assumed variant for Descriptor C is particularly plausible and the
assumed variant for Descriptor D is strikingly implausible.
Next, we need to clarify how to interpret the “highest possible impact
sum” requirement. In CIB, the following understanding applies (Weimer-
Jehle, 2006):
The impact sum of a descriptor is not “as high as possible,” and its active variant is M3
therefore inconsistent with the rest of the scenario, if the impact sum could be increased
by changing to a different descriptor variant.
Fig. 3.15 Matrix-based calculation of impact sums for scenario [A2 B1 C3 D1 E1 F1]
For this purpose, all rows and columns belonging to the active
descriptor variants of the examined scenario are marked in the cross-impact
matrix. The intersection cells of the marked rows and columns (highlighted
in dark in Fig. 3.15) represent—if they do not carry the value 0—the impact
arrows shown in the corresponding impact diagram. Thus, entry “3” in the
intersection cell of Row F1 and Column A2 corresponds to the thick green
arrow drawn from descriptor box F to descriptor box A in Fig. 3.12. The
column sums of the intersection cells in Fig. 3.15 equal the impact sums for
the corresponding descriptor, as confirmed by comparison with Fig. 3.12.
The practical advantage of the matrix-based consistency check is that
the impact sums of the nonactive descriptor variants also can be easily
derived. For this purpose, only the rows, but not the columns, of the active
descriptor variants are marked, as shown in Fig. 3.16. The sum of all
marked rows then yields the impact balances,5 i.e., the compilation of the
impact sums of all descriptor variants, regardless of whether they are active
(bottom row in Fig. 3.16).
In the row “impact balances,” the impact sums of the active descriptor
variants (shown inverted) can now be easily compared with the impact
sums of the nonactive variants of the same descriptor. In this way, it can be
determined for which descriptors a nonactive descriptor variant would
achieve a higher impact sum than the active variant and thus indicate
inconsistency (a tie would be acceptable). For our test scenario, we already
know from the graphical consistency check in Sect. 3.4.4 what the result
will be. The matrix-based consistency check in Fig. 3.16 leads to the same
result: Descriptor D (and only Descriptor D) violates the consistency
condition because the nonactive descriptor variant D2 achieves a higher
impact sum of +7 than the active variant D1, which achieves only −7. All
other descriptors, however, satisfy the consistency condition in this
scenario: The active descriptor variant A2 achieves an impact sum of +3
and thus is higher than the impact sums of A1 (0) and A3 (−3). The same
applies to the impact sums of the variants of Descriptors B, C, E, and F.
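The consistency condition can be stated compactly in code. The sketch below uses an assumed two-descriptor toy matrix, not the Somewhereland data; same-descriptor pairs are simply absent from the matrix, mirroring the empty diagonal sections:

```python
# Toy two-descriptor system (assumed values for illustration).
variants_of = {"A": ["A1", "A2"], "B": ["B1", "B2"]}
matrix = {("A1", "B1"): 2, ("A1", "B2"): -2,
          ("A2", "B1"): -2, ("A2", "B2"): 2,
          ("B1", "A1"): 1, ("B1", "A2"): -1,
          ("B2", "A1"): -1, ("B2", "A2"): 1}

def impact_sum(active, target):
    # Diagonal (same-descriptor) pairs are absent and contribute 0.
    return sum(matrix.get((src, target), 0) for src in active)

def inconsistent_descriptors(scenario):
    """Return descriptors whose active variant is beaten by a nonactive
    variant with a strictly higher impact sum (a tie is acceptable)."""
    active = list(scenario.values())
    return [d for d, v in scenario.items()
            if any(impact_sum(active, w) > impact_sum(active, v)
                   for w in variants_of[d] if w != v)]

print(inconsistent_descriptors({"A": "A1", "B": "B1"}))  # [] -> consistent
print(inconsistent_descriptors({"A": "A1", "B": "B2"}))  # ['A', 'B']
```

In the second call, both descriptors are flagged: switching either variant would raise its impact sum, so the scenario is internally inconsistent.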
Of course, even for a small CIB matrix such as that of Somewhereland,
it would be unfeasible to obtain the solutions by manual consistency checks
of all descriptor variant combinations. This check must be executed in a
software-based manner. However, the matrix-based consistency check
makes it possible to convince oneself of the correctness of the calculated
scenarios quickly and easily with the help of paper and pencil. On the one
hand, this also can help the core team better understand the inner logic of
the identified scenarios. Moreover, the manual check also can be a
confidence-building tool to visualize the validity of the computer results to
the participants of the scenario exercise who are not familiar with the CIB
method or to the target audience of the scenario analysis. Because of this
possibility of retrospective validation without technical aids, CIB analysis
can avoid or at least mitigate black-box effects.
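A brute-force search of the kind such software performs can be sketched as follows, again over an assumed toy matrix rather than the Somewhereland data:

```python
# Exhaustive search: generate every variant combination and keep only
# those in which no descriptor's active variant is beaten by a rival.
from itertools import product

variants_of = {"A": ["A1", "A2"], "B": ["B1", "B2"]}
matrix = {("A1", "B1"): 2, ("A1", "B2"): -2,
          ("A2", "B1"): -2, ("A2", "B2"): 2,
          ("B1", "A1"): 1, ("B1", "A2"): -1,
          ("B2", "A1"): -1, ("B2", "A2"): 1}

def impact_sum(active, target):
    return sum(matrix.get((src, target), 0) for src in active)

def is_consistent(combo):
    # combo pairs up with the descriptors in dict order
    return all(impact_sum(combo, w) <= impact_sum(combo, v)
               for d, v in zip(variants_of, combo)
               for w in variants_of[d] if w != v)

portfolio = [c for c in product(*variants_of.values()) if is_consistent(c)]
print(portfolio)  # [('A1', 'B1'), ('A2', 'B2')]
```

For realistic matrices the same loop runs over millions of combinations, which is why the check must be software-based, while the paper-and-pencil variant described above remains useful for spot-checking individual solutions.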
Scenario No. 7
A. Government: A3 “social party”
B. Foreign policy: B1 cooperation
C. Economy: C3 dynamic
D. Distribution of wealth: D2 strong contrasts
E. Social cohesion: E1 social peace
F. Social values: F1 meritocratic
Fig. 3.17 The Somewhereland scenarios in tableau format with integrated descriptor listing
Fig. 3.18 The Somewhereland scenarios in tableau format with separate descriptor listing
In the tableau format, the scenarios are to be read vertically from top to
bottom. If neighboring scenarios match the variants of a descriptor, the
relevant table cells are merged to highlight the similarity of the scenarios at
this point and to increase the readability of the tableau by reducing the
amount of text. The order of the scenarios can be changed to bring similar
scenarios into proximity to each other. Such a sorting has already been
applied here.
The top row of the tableau contains scenario titles (or mottos) that
summarize the essence of the scenario. These titles are not a result of the
CIB evaluation but are created by interpreting the scenarios. Different
scenarios with similar characteristics can be combined into a scenario
family by means of a shared title. The individual scenarios within the
scenario family then present themselves as subvariants of the shared title.
Often a suitable title is suggested by reading the scenario. It can also be
helpful to look at the scenario’s impact diagram (or its tabular equivalent) to
gain an understanding of the cause-effect relationships in the scenario and
to draw inspiration for the choice of title. Occasionally, one also finds that
certain descriptor variants are exclusive to a single scenario or scenario
family and can therefore be considered characteristic features that suggest
an informative scenario title. For example, “C1 Shrinking economy” and
“E3 Unrest” occur only in Scenario no. 1 and were therefore the inspiration
for the title of Scenario no. 1 chosen in Figs. 3.17 and 3.18. “A2 ‘Prosperity
party’” and “E2 Tensions” occur only in scenario family no. 9/no. 10 and
therefore shape the view of these scenarios. A particularly thorough but
time-consuming way to find a title is to formulate a storyline for the
scenario and then condense the storyline into a title. The procedure for
storyline development is discussed in detail in Sect. 4.6.
Scenario no. 10 builds on the test scenario used to explain the CIB
consistency check in Figs. 3.12 and 3.14. This scenario follows the original
test scenario with respect to Descriptors A, B, C, and F but considers the
change in Descriptor D, the necessity of which was made clear in Fig. 3.12,
and adjusts for the consequential inconsistency in Descriptor E, which arose
after the correction of D in Fig. 3.14. Finally, with these changes, the
complete consistency of the scenario is achieved, which is evident by its
appearance on the solution list. The corresponding impact diagram can be
seen in Fig. 3.19.
In Fig. 3.19, several aspects are worth noting. As expected, the CIB
algorithm ensures that no descriptor in a consistent scenario has a
particularly poor impact sum. However, this does not mean, as Fig. 3.19
and the impact diagrams shown earlier might suggest, that impact sums in
consistent scenarios are generally nonnegative10 or that inconsistent
descriptors have generally negative impact sums. Since the CIB consistency
principle is designed as a comparative requirement for the impact sums, the
consistency of a descriptor is decided only by comparing its impact sum
with the impact sums that the other variants of the descriptor would
achieve. These are not directly visible in the impact diagram, so that the
level of the impact sum in the impact diagram is only a provisional
indication of whether there is consistency for the descriptor in question. In
the end, however, the impact sums of the alternative descriptor variants
must be calculated and compared.
Figure 3.19 further shows that impact diagrams of consistent scenarios
also can contain descriptors that are exposed to hindering (red) impacts.
However, the consistency condition also guarantees for these descriptors
that their active variant was a good choice despite the hindering impacts,
either because the hindering impacts are countered by sufficient promoting
impacts or because the alternative variants of the descriptor would perform
even worse or at least not better in their impact sums. For example, Fig.
3.19 shows that assumed dynamic economic development is hindered by
social tensions (which can cause a deterioration in investor confidence,
impair the consumer climate and disturb industrial peace in companies). On
the other hand, business-friendly policies of the government, cooperative
foreign relations and a widespread meritocratic culture provide so many
forces promoting dynamic economic development that the assumption of
weak economic development would be more questionable than the dynamic
economic development assumed in the scenario.
Fig. 3.20 Consistency values of the descriptors calculated for the test scenario
Inconsistency Scale
The inconsistency value of a descriptor or a scenario describes the result of
the consistency assessment from the opposite perspective: A scenario with a
consistency value of −4 has an inconsistency value of 4. By convention,
however, inconsistency values are never negative; i.e., a scenario
with a consistency value of +1 has an inconsistency value of 0 (cf. Fig.
3.21). This accounts for the understanding that the characterization
“zero inconsistency” expresses the absence of inconsistency and should
therefore generally encompass all consistent scenarios, regardless of their
varying degree of positive consistency. Scenarios with inconsistency values
of 0, 1, 2, etc. (i.e., consistency values ≥0, −1, −2, …), are abbreviated as
IC0 scenarios, IC1 scenarios, IC2 scenarios, and so on.
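Under this convention, the conversion from consistency to inconsistency values is a negation floored at zero, e.g.:

```python
def inconsistency(consistency_value):
    """Negate the consistency value and floor at zero, so that all
    consistent scenarios share inconsistency 0 (IC0)."""
    return max(0, -consistency_value)

print(inconsistency(-4))  # 4 -> an IC4 scenario
print(inconsistency(+1))  # 0 -> an IC0 scenario
print(inconsistency(-2))  # 2 -> an IC2 scenario
```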
This shows that the consistency values for the descriptors within a
scenario are highly scattered in both cases, and this is typical for CIB
scenarios. The consistency of a descriptor can be interpreted as an indicator
of the well-foundedness of its active variant. That is, the active variant for
wealth distribution in Scenario no. 10 (D2 Large contrasts) is a particularly
well-founded assumption within this scenario because it is by far a better
assumption than its opposite (D1 Balanced).
IC    N
1     < ca. 5
2     < ca. 17
3     < ca. 37
References
Gausemeier, J., Fink, A., & Schlake, O. (1998). Scenario management: An approach to develop
future potentials. Technological Forecasting and Social Change, 59, 111–130.
[Crossref]
Gordon, T. J., & Hayward, H. (1968). Initial experiments with the cross impact matrix method of
forecasting. Futures, 1(2), 100–116.
[Crossref]
Honton, E. J., Stacey, G. S., & Millett, S. M. (1985). Future scenarios–the BASICS computational
method, economics and policy analysis occasional paper (Vol. 44). Battelle Columbus Division.
Johansen, I. (2018). Scenario modelling with morphological analysis. Technological Forecasting and
Social Change, 126, 116–125.
[Crossref]
Kosow, H., Weimer-Jehle, W., León, C. D., & Minn, F. (2022). Designing synergetic and sustainable
policy mixes–a methodology to address conflictive environmental issues. Environmental Science and
Policy, 130, 36–46.
[Crossref]
Nash, J. F. (1951). Non-cooperative games. The Annals of Mathematics, 54, 286–295.
[Crossref]
Weimer-Jehle, W., Deuschle, J., & Rehaag, R. (2012). Familial and societal causes of juvenile
obesity–a qualitative model on obesity development and prevention in socially disadvantaged
children and adolescents. Journal of Public Health, 20(2), 111–124.
[Crossref]
Weitz, N., Carlsen, H., & Trimmer, C. (2019). SDG synergies: An approach for coherent 2030
agenda implementation. Stockholm Environment Institute Brief.
Footnotes
1 The term “cross-impact” places CIB in the tradition of “cross-impact analysis” first proposed in
the 1960s by Gordon and Helmer and by Gordon and Hayward (Gordon & Hayward, 1968). See Sect. 6.3.1.
2 Strictly speaking, the type of scale used in CIB is an interval scale with a discrete metric. It
presupposes more than an ordinal scale does because it assumes, for example, that the effect of “+2” is twice as
strong as the effect of “+1,” whereas an ordinal scale would assume only that the effect of “+2” is
stronger than the effect of “+1.”
3 An example of the use of diagonal fields is described in Weimer-Jehle et al. (2012). In this study, a
diagonal field is used to represent that the practice of sports promotes the enjoyment of physical
activity and that this further strengthens the inclination to engage in sports.
5 The term “impact sums” refers to the sum of impacts on a single descriptor variant, while the
“impact balance” consists of all impact sums of a descriptor.
6 Sect. 3.6 describes how to calculate and use gradual consistency ratings with the CIB consistency
test and how further additional information can be taken from this rating.
7 At the time of printing of this book, nearly all published CIB applications were carried out using
the freely available software ScenarioWizard (https://www.cross-impact.org). However, the algorithm
in its basic form does not pose great challenges to experienced programmers and allows self-
developed software solutions.
8 The scenario order of the scenario list shown in Table 3.4 is not identical with the result output of
the ScenarioWizard, as it has already been rearranged to prepare the following figures.
9 The tableau format for CIB scenarios is based on a proposal by Dipl.-Ing. Christian D. León.
10 However, they often are, and in fact always are, if the cross-impacts fulfill the so-called
standardization condition (Weimer-Jehle, 2009, cf. Sect. 6.3.2).
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
W. Weimer-Jehle, Cross-Impact Balances (CIB) for Scenario Analysis, Contributions to Management
Science
https://doi.org/10.1007/978-3-031-27230-1_4
Wolfgang Weimer-Jehle
Email: [email protected]
We further assume for the purpose of this analysis that the intended
readership of this Somewhereland analysis is not interested in the question
of the general development of the country but that their interest is focused
on the question of economic development. This means that Descriptor “G.
Global political trend” serves as our “if” descriptor and Descriptor “C.
Economy” is our “then” descriptor (target descriptor).
The Somewhereland-plus IC0 portfolio includes 16 scenarios
(compared to 10 for the basic Somewhereland matrix), listed in short format
in Table 4.1. An increase in the number of scenarios often occurs when an
autonomous descriptor is added (see Sect. 4.7.1).
Table 4.1 The scenarios of Somewhereland-plus in short format
A rating of 0 was assigned for conformity with the present state (i.e.,
Scenario no. 7) and a rating of 2 for clear deviation from the present. A
rating of 1 is used for descriptor variants that describe a gradual difference
from the present state. In principle, ratings can be made by the core team
but with higher legitimacy by a group of experts or stakeholders or by the
target audience. In any case, it is useful to document explanations for all
ratings.
Now, an index value can be determined for all scenarios of the portfolio
by calculating how many points each scenario accumulates based on the
ratings given in Fig. 4.3. For instance, Scenario no. 1 from Fig. 3.17
“Society in crisis” with the code [A1 B3 C1 D2 E3 F3] accumulates
2 + 2 + 2 + 0 + 2 + 2 = 10 points and is thus particularly different from the
present state of Somewhereland.
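The index calculation can be sketched directly; the ratings dictionary below contains only the six variants of Scenario no. 1, with the ratings quoted above:

```python
# Dissimilarity index: each descriptor variant carries a rating
# (0 = like the present, 1 = gradual change, 2 = clear deviation);
# a scenario's index is the sum over its active variants.
ratings = {"A1": 2, "B3": 2, "C1": 2, "D2": 0, "E3": 2, "F3": 2}

def dissimilarity_index(scenario_variants):
    return sum(ratings[v] for v in scenario_variants)

print(dissimilarity_index(["A1", "B3", "C1", "D2", "E3", "F3"]))  # 10
```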
The progression of dissimilarity to the present in the array of scenarios
also can be visualized by arranging the scenarios in a tableau in the order
shown in Fig. 4.4. The descriptor variants that were assigned a rating of 1 in
Fig. 4.3 are shaded light here, and the descriptor variants that were assigned
a rating of 0 are shaded dark. The descriptor variants that are particularly
unlike the present (the “terra incognita”) remain white. The result is shown
in Fig. 4.5. This form of scenario presentation makes it immediately
apparent in which areas we must prepare for change in the various
scenarios.
Scenario Axes
In the traditional scenario axes approach, two basic questions concerning
the future development of the system under study are identified based on
the participants’ understanding of the problem, and two diametrically
opposed future development options are formulated for each of the basic
questions. For a scenario analysis on international cooperation and its
possible direction, for example, the axes shown in Fig. 4.6 could be chosen.
Fig. 4.6 Example of a scenario axes analysis (The example draws from work done by the IPCC on
the future of global climate gas emissions (Nakićenović et al. 2000))
For each sector of the diagram and the respective combination of basic
developments, a scenario is then elaborated based on discussion. In Sector
I, for example, a scenario would be developed in which international trade
institutions are strengthened, national regulations are reduced, and free
trade agreements are established. In a scenario for Sector II, international
cooperation also would be strengthened, but with the main objective of
concluding international climate protection agreements and cooperatively
pursuing other environmental protection concerns.
Scenario axes is a simple and quick-to-use technique that is often
chosen when formal methods are not to be used and particular emphasis is
placed on creative scenario building.
Usually, the scenarios with a high probability and effect are selected as
particularly relevant for use in planning and decision-making, although care
also must be taken to ensure that the selected scenarios describe
substantially different futures. In addition, selected low-probability
scenarios can be included in the sense of an incident analysis if they have a
particularly high effect (“wildcards”).
Applications of this form of portfolio analysis can be found, for
example, in Aschenbrücker and Löscher (2013) and Sardesai et al. (2019).
4.2 Revealing the Whys and Hows of a Scenario
Since the CIB scenarios are derived from the data of the cross-impact
matrix by a simple algorithm, it is always possible to reconstruct the
reasons for the composition of a scenario from the matrix. This procedure
deepens one’s own understanding of the scenarios and makes it easier to
communicate the scenarios convincingly to others.
This table becomes even more focused on those impacts that account for
the consistency of a scenario if only the positive (green) impacts are noted
because the negative impacts could not prevail in the scenario and therefore
do not contribute to its justification (Fig. 4.12).
Fig. 4.13 Justification form for Descriptor variant “E3 Social cohesion: Unrest”
It is not uncommon for a subsequent CIB analysis to find meaningful
scenarios that a preceding intuitive scenario analysis missed. In
fact, this is reported in most cases where an intuitive scenario analysis has
been validated by CIB (Schweizer and Kriegler 2012; CfWI/Centre for
Workforce Intelligence 2014; Schweizer and O’Neill 2014; Kurniawan
2018).
4.4 Intervention Analysis
CIB can be used not only to construct or verify scenarios. Within the limits
of a qualitative analysis, it also can be used to investigate the response of
a system to external influences, for example, interventions. Here, we ask
whether a policy under consideration (the “intervention”) would change the
scenario portfolio favorably by pushing undesirable scenarios out of the
portfolio or by allowing desirable scenarios to enter the portfolio. In
contrast, an intervention would be considered counterproductive if the
opposite effect occurs. Ambivalent effects also can be detected by
intervention analysis if the intervention expands the portfolio at both the
desirable and undesirable edges.
The classic way to represent interventions in CIB is to view the
intervention as an action that enforces a particular descriptor variant. This
then implies that all other variants of the respective descriptor are forcibly
excluded. Technically, this is implemented by removing the descriptor
variants excluded by the intervention from the matrix and then recalculating
the scenarios.2 The workflow of a CIB intervention analysis is described in
detail in Nutshell I (Fig. 4.21).
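The technical step of forcing a variant can be sketched as follows; the descriptor and variant names are placeholders, not the book's water-supply data:

```python
# Intervention sketch: forcing a descriptor variant removes all rival
# variants of that descriptor before the portfolio is recomputed.
variants_of = {"D": ["D1", "D2"], "E": ["E1", "E2", "E3"]}

def apply_intervention(variants_of, descriptor, forced_variant):
    trimmed = dict(variants_of)
    trimmed[descriptor] = [forced_variant]  # rivals forcibly excluded
    return trimmed

print(apply_intervention(variants_of, "E", "E1"))
# {'D': ['D1', 'D2'], 'E': ['E1']}
```

The scenario search then runs unchanged on the trimmed variant lists, which is what produces the intervention portfolio.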
Fig. 4.21 Nutshell I—Workflow of a CIB intervention analysis
Fig. 4.22 The descriptors and descriptor variants of the “Water supply” intervention analysis
Fig. 4.23 “Water supply” cross-impact matrix (basic matrix)
Fig. 4.24 The “Water supply” portfolio without interventions (basic portfolio)
A closer comparison between the base portfolio in Fig. 4.24 and the
intervention portfolio in Fig. 4.26 shows that the intervention portfolio in
this case represents an extract from the base portfolio, i.e., all E1 scenarios
of the base portfolio have survived. In general, in intervention analysis, all
scenarios of the base portfolio that already carry the forced descriptor
variant will also occur in the intervention portfolio; beyond these, the
intervention portfolio may contain additional scenarios that did not occur
in the base portfolio. These additional scenarios describe configurations in
which reactions of the other descriptors would prevent the forced descriptor
variant from occurring if the system were left to its own choices. However,
the intervention causes these feedback effects to lose their preventive
power. This type of intervention scenario does not occur with the E1
intervention but does appear in the intervention case investigated next.3
Robustness Check
As always in CIB, it is recommended to check the significance of a finding
by examining portfolios with marginal inconsistency. With N = 6
descriptors, the IC1 portfolio must be considered according to Sect. 3.7. In
IC1, however, there are no additional scenarios for the matrix in Fig. 4.27,
so the dominance of the F3 scenarios also holds in IC1, and the
effectiveness conclusion for the A2 intervention is “IC1-robust” and thus
significant.
Under the conditions of the wildcard, the matrix shown in Fig. 4.29
produces an IC0 portfolio of three scenarios. These essentially describe two
types of futures, one of them in two variants (Fig. 4.30).
Fig. 4.30 The Somewhereland portfolio under the impact of the “Global economic crises” wildcard
Fig. 4.31 Different qualities of rating differences. Adapted from Jenssen and Weimer-Jehle (2012)
In Case (a) in Fig. 4.31, there is a less severe form of dissent. The expert
panel agreed that there is an impact, and there also was consensus on the
hindering character of the impact, i.e., on the sign of the cross-impact. Only
the strength of the impact was estimated differently. Things are different in
Case (b). There was no agreement on the fundamental question of whether
an impact exists at all or on its sign. Thus, there is a substantial divergence
of estimates and a stronger dissent than in Case (a).
Regardless of whether expert panels or literature reviews are used as
sources for cross-impact coding, judgment sections with rating differences
can be classified into the following categories based on Hummel (2017):
Category I (strength controversy): The knowledge sources consistently indicate that there is an
influence relationship between the descriptors and whether it is promoting or inhibiting. This
means that all rating sources lead to cross-impact values in the judgment cell and that the
different sources do not propose different signs for any cell. The strength ratings, on the
other hand, are assessed differently.
Category II (impact relevance controversy): One part of the knowledge sources sees a noticeable
impact between two descriptors and agrees on the sign of the impact (but possibly diverges in
the rating of the strength). The other part of the sources does not see an influence that is
strong enough to be considered.
Category III (severe dissent): The knowledge sources assume different directions of influence,
which leads to disagreement in the evaluation of the cells of the judgment section even
regarding the sign.
Fig. 4.34 Solutions of the “Resource economy” sum matrix, including all scenarios with
nonsignificant inconsistency
Here, it is useful that CIB allows a cross-impact matrix to be multiplied as a whole by
any positive integer without changing the IC0 portfolio of the matrix. The portfolios at a
nonzero IC level also result identically when the IC value is multiplied by the same factor.
The significance threshold for the inconsistencies also multiplies by the same factor. This
property is part of the so-called multiplication invariance of CIB.5
This property of CIB can be used here to produce a uniform rating scale
even after the matrices have been created. For this purpose, the matrix with
scale [−2… + 2] is multiplied by a factor of 3 and the matrix with scale
[−3… + 3] is multiplied by a factor of 2. Both matrices continue to have
their unchanged portfolios6 but are now both coded on the uniform scale
[−6… + 6] so they can be combined into a sum matrix.
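The rescaling step can be sketched as follows. The two matrices are invented toy examples; the factors are obtained from the least common multiple of the two scale bounds, exactly as in the text (3 for the [−2…+2] matrix, 2 for the [−3…+3] matrix):

```python
from math import lcm

# Two hypothetical cross-impact matrices coded on different scales
# (values invented for illustration).
m_scale2 = [[0, -2], [1, 2]]    # coded on the scale [-2 ... +2]
m_scale3 = [[3, -1], [0, -3]]   # coded on the scale [-3 ... +3]

# Common scale bound: lcm(2, 3) = 6; multiplication invariance guarantees
# that scaling by a positive integer leaves each IC0 portfolio unchanged.
common = lcm(2, 3)
f2, f3 = common // 2, common // 3   # factors 3 and 2

# After rescaling, both matrices live on [-6 ... +6] and can be summed
sum_matrix = [[f2 * a + f3 * b for a, b in zip(r2, r3)]
              for r2, r3 in zip(m_scale2, m_scale3)]
```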
4.5.4 Delphi
One possible aim of dissent analysis can be to make the rating differences
of a matrix ensemble the object of investigation as information about the
diverging system views of the experts instead of consolidating the rating
differences by means of decision rules or by forming a sum matrix. The first
step to this is to identify the substantial part of the rating differences and to
separate it from the part of the rating differences that came about rather by
chance, by misunderstandings or by lack of reflection, before then applying
the evaluation techniques described in Sects. 4.5.5 and 4.5.6 to all
substantial rating differences. Separating substantial and nonsubstantial
rating differences requires additional effort and is not an obligatory step. It
does, however, improve the quality of the dissent analysis, as it reduces the
risk of deriving artificial conclusions from apparent dissent. A widely used
method for examining dissent in expert surveys is the Delphi method (e.g.,
Häder and Häder 2000).
Expert Delphis are multistage survey procedures in which a group of
experts is first asked individually to assess a question, for example, by
which year a certain future technology can be expected to be ready for the
market (e.g., Ammon 2009). If clear divergences are identified in the
responses, then the experts involved in the controversy are given an
overview of the divergent assessments and their justifications, and they are
asked to reconsider their own assessment in light of the assessments and
justifications of the other experts and then either reaffirm or modify their
own assessment. The result is either a convergence of assessments or a
clearer formulation of the assessment dissent. Both are considered
legitimate results of a Delphi and a gain in knowledge (Fig. 4.35).
Fig. 4.37 Compilation of scenarios of the individual evaluations of the “Resource economy” matrix
ensemble
Fig. 4.38 The ensemble table of the “Resource economy” matrix ensemble
Sensitivity Analysis
An evaluation task comparable to the ensemble evaluation of expert dissent
arises when the project team carries out the cross-impact assessments itself
but anticipates the possibility of contrasting assessments of certain impact
relationships after studying the literature or consulting with experts. The
project team can then define a “baseline matrix” in which the assessment
variant judged to be likely is used for each uncertain impact relationship. In
addition, a matrix ensemble is created in which one of the uncertain impact
relationships is varied in each ensemble member, but the base matrix is
otherwise retained. The ensemble evaluation then reveals which of the
assessment uncertainties are associated with significant changes in the
portfolio and should therefore be considered critical uncertainties, and
which assessment uncertainties have no or only insignificant effects on the
portfolio.
Provided that not too many assessment uncertainties have been
identified, it may also be considered for a second analysis step to vary two
influence relationships in each ensemble member. This analysis allows the
study of combination effects.
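A minimal sketch of such an ensemble generator, with an invented baseline matrix and invented uncertain cells (both purely illustrative), varying one uncertain cell per member and, for the second analysis step, two cells per member:

```python
from itertools import combinations

# Hypothetical baseline matrix: (source_variant, target_variant) -> rating
baseline = {("A1", "B1"): 2, ("A1", "B2"): -2,
            ("C1", "B1"): 1, ("C1", "B2"): -1}

# Uncertain impact relationships: cell -> plausible alternative ratings
uncertain = {("A1", "B1"): [1, 3], ("C1", "B2"): [0]}

def one_way_ensemble(base, uncertain):
    """One ensemble member per alternative rating; each member deviates
    from the baseline matrix in exactly one uncertain cell."""
    members = []
    for cell, alternatives in uncertain.items():
        for value in alternatives:
            member = dict(base)
            member[cell] = value
            members.append(member)
    return members

def two_way_ensemble(base, uncertain):
    """Second analysis step: vary two uncertain cells simultaneously
    in each member to study combination effects."""
    members = []
    for (c1, alts1), (c2, alts2) in combinations(uncertain.items(), 2):
        for v1 in alts1:
            for v2 in alts2:
                member = dict(base)
                member[c1], member[c2] = v1, v2
                members.append(member)
    return members

ensemble = one_way_ensemble(baseline, uncertain)
pairs = two_way_ensemble(baseline, uncertain)
```

Each one-way member would then be evaluated like any single matrix, and its portfolio compared against the baseline portfolio to flag critical uncertainties.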
Examples of sensitivity analyses in CIB practice can be found in
Schweizer and Kriegler (2012) and Schweizer and O’Neill (2014).
The differences in the portfolios of the two groups (Fig. 4.40) can easily
be explained by the main dissensus: In Group (1), the hypothesis of a
conflict-promoting effect of foreign resource investments prevents the
occurrence of scenarios with permanently high resource investments and
resulting high resource supply. Such scenarios become possible only with
the counterhypothesis in Group (2). Neglecting resource efficiency also can
occur only in Group (2) because it presupposes a permanently high resource
supply and thus high resource investments, which indirectly implies, in the
long term, the negation of the hypothesis of a conflict-promoting effect of
high foreign resource investments, a hypothesis characteristic for Group (1).
Fig. 4.42 Data basis for storyline development for Somewhereland scenario no. 10
The items, apart from descriptor F, remain in alphabetical order because in
this scenario, a single rearrangement was sufficient to achieve a
satisfactory sequence. Often, more rearrangements are needed, which then
leads to more alphabetical mixing.
The sequence found is valid only for Scenario no. 10 and not for all
Somewhereland scenarios because each scenario has a different specific
cross-impact matrix.
As a literary text in the broadest sense, a storyline remains an individual
product, despite the strong framework provided by CIB analysis, and its
length and style also must depend on the intended purpose and target
audience. The following storyline for Somewhereland scenario no. 10 is
therefore only one possible example:
This shows that storyline texts become possible that no longer
immediately reveal their origin as the transcription of an algorithmically
constructed raw scenario. The result is a seemingly “normal”
argumentative–descriptive text, which is, however, in fact closely
prescribed by the scenario, the cross-impacts behind it, and their
explanatory text. The two reverse impacts from Fig. 4.45 (A2 → F1 and
D2 → F1) that run counter to the linear logic of the text are incorporated
into the storyline by interpreting them as a stabilization and reinforcement
of the already prevailing meritocratic values (descriptor F) that occurs in the
course of time.
It also can be seen that the storyline would have been much less dense if
there were no explanatory text for the cross-impacts. This underlines the
importance of the recommendation from Sect. 3.3 to document the
reasoning behind the cross-impact ratings together with the cross-impact
data, even if these have no direct function for the algorithmic scenario
construction process.
Fig. 4.46 Frequency distribution of the inconsistency value in the Somewhereland matrix
Fig. 4.48 The three fully consistent scenarios of the “Oil price” matrix
Fig. 4.49 Descriptor variant vacancies of the “Oil price” matrix (empty squares)
Fig. 4.51 Distance table of the Somewhereland portfolio with marking of an N/2 selection
References
Ammon, U. (2009). Delphi-Befragung. In S. Kühl, P. Strodtholz, & A. Taffertshofer (Eds.),
Handbuch Methoden der Organisationsforschung. Quantitative und qualitative Methoden (pp. 458–
476). VS Verlag für Sozialwissenschaften.
[Crossref]
Ayandeban. (2016). Future scenarios facing Iran in the coming year 1395 (March 2016–March 2017)
[in Persian]. Ayandeban Iran Futures Studies, www.ayandeban.com
Häder, M., & Häder, S. (Eds.). (2000). Die Delphi-Technik in den Sozialwissenschaften: Methodische
Forschungen und Innovative Anwendungen (ZUMA-Publikationen). Springer VS.
Honton, E. J., Stacey, G. S., & Millet, S. M. (1985). Future scenarios—The BASICS computational
method, economics and policy analysis occasional paper (Vol. 44). Battelle Columbus Division.
Hummel, E. (2017). Das komplexe Geschehen des Ernährungsverhaltens - Erfassen, Darstellen und
Analysieren mit Hilfe verschiedener Instrumente zum Umgang mit Komplexität. Dissertation,
University of Gießen.
Jenssen, T., & Weimer-Jehle, W. (2012). Mehr als die Summe der einzelnen Teile - Konsistente
Szenarien des Wärmekonsums als Reflexionsrahmen für Politik und Wissenschaft. Gaia, 21(4), 290–
299.
[Crossref]
Kopfmüller, J., Weimer-Jehle, W., Naegler, T., Buchgeister, J., Bräutigam, K.-R., & Stelzer, V.
(2021). Integrative scenario assessment as a tool to support decisions in energy transitions. Energies,
14, 1580. https://doi.org/10.3390/en14061580
[Crossref]
Kosow, H., León, C., & Schütze, M. (2013). Escenarios para el futuro - Lima y Callao 2040.
Escenarios CIB, storylines & simulación LiWatool. Scenario brochure of the LiWa project
(www.lima-water.de).
Le Roux, B., & Rouanet, H. (2009). Multiple correspondence analysis. SAGE Publications.
León, C., & Kosow, H. (2019). Wasserknappheit in Megastädten am Beispiel Lima. In J. L. Lozán,
S.-W. Breckle, W. Kuttler, & A. Matzarakis (Eds.), Warnsignal Klima: Die Städte (pp. 191–196).
Universität Hamburg.
Nakićenović, N., et al. (2000). Special report on emissions scenarios. Report of the
Intergovernmental Panel on Climate Change (IPCC). Cambridge University Press.
Petersen, J. L. (1997). Out of the blue. Wild cards and other big future surprises. The Arlington
Institute.
Ramirez, R., & Wilkinson, A. (2014). Rethinking the 2×2 scenario method: Grid or frames?
Technological Forecasting and Social Change, 86, 254–264.
[Crossref]
Saner, D., Blumer, Y. B., Lang, D. J., & Köhler, A. (2011). Scenarios for the implementation of EU
waste legislation at national level and their consequences for emissions from municipal waste
incineration. Resources, Conservation and Recycling, 57, 67–77.
[Crossref]
Sardesai, S., Kippenberger, J., & Stute, M. (2019). Whitepaper scenario planning for the generation
of future supply chains. Fraunhofer IML. https://doi.org/10.24406/iml-n-566073
[Crossref]
Schütze, M., Seidel, J., Chamorro, A., & León, C. (2018). Integrated modelling of a megacity water
system – The application of a transdisciplinary approach to the Lima metropolitan area. Journal of
Hydrology. https://doi.org/10.1016/j.jhydrol.2018.03.045
Schweizer, V. J., & Kriegler, E. (2012). Improving environmental change research with systematic
techniques for qualitative scenarios. Environmental Research Letters, 7(4), 044011.
[Crossref]
Schweizer, V. J., & O’Neill, B. C. (2014). Systematic construction of global socioeconomic pathways
using internally consistent element combinations. Climatic Change, 122, 431–445.
[Crossref]
Steinmüller, A., & Steinmüller, K. (2004). Wild cards. Wenn das Unwahrscheinliche eintritt.
Murmann Verlag.
van’t Klooster, S. A., & van Asselt, M. B. A. (2006). Practising the scenario-axes technique. Futures,
38(1), 15–30.
[Crossref]
Weimer-Jehle, W., Deuschle, J., & Rehaag, R. (2012). Familial and societal causes of juvenile
obesity - a qualitative model on obesity development and prevention in socially disadvantaged
children and adolescents. Journal of Public Health, 20(2), 111–124.
[Crossref]
Weimer-Jehle, W., Wassermann, S., & Fuchs, G. (2010). Erstellung von Energie- und Innovations-
Szenarien mit der Cross-Impact-Bilanzanalyse: Internationalisierung von Innovationsstrategien im
Bereich der Kohlekraftwerkstechnologie. 11. Symposium Energieinnovation, TU Graz, February 10-
12, 2010.
Footnotes
1 See, for instance, van’t Klooster and van Asselt (2006). For a critical perspective, see Ramirez and
Wilkinson (2014).
2 In the CIB software ScenarioWizard, the function “Force variant” can be used to perform an
intervention analysis.
3 This additional type of intervention scenario may or may not occur as a result of an intervention. In
no case, however, does it occur when intervening on an autonomous descriptor.
4 Still beyond wildcards are concepts that argue that system disruptions in reality are often triggered
by nonanticipatable events beyond the horizon of experience (“black swans,” Taleb 2010).
5 Invariance property IO-2, Weimer-Jehle (2006: 343). For a proof, see also Weimer-Jehle (2009),
Property VIII.
6 To be precise: After multiplication by the factor n, a matrix has the same IC0 portfolio as before,
and the ICn portfolio is the same as the former IC1 portfolio, and the IC(2n) portfolio is the same as
the former IC2 portfolio, and so on.
7 The creation of the ensemble individual evaluations and the intersection table is supported in the
ScenarioWizard software by the “Ensemble evaluation” function.
8 When analyzing marginally inconsistent scenarios in the group evaluation, it should be noted that
the significance threshold for inconsistencies according to Memo M5 (Sect. 4.5.3) depends on the
number of matrices in a group and may therefore differ from group to group if the groups are of
different sizes.
9 In didactics, the term sequencing stands for the creation of a learner-friendly, sequential order of
learning content. The term is adopted in CIB because the arrangement of the thematic components of
a scenario or storyline in an order that promotes understanding is an analogous didactic task.
10 An exception is a matrix containing strongly biased data, which leads to small portfolios (see
Sect. 5.3).
14 As an example, demonstrating that high scenario diversity and high presence of descriptor
variants do not automatically go hand in hand, we consider a matrix with 10 descriptors and two
variants for each descriptor. The portfolio consists of 11 scenarios: one scenario with the first variant
for each descriptor and 10 further scenarios, each with the second variant for one descriptor and the
first variant for the other descriptors. Then, a presence rate of 100% is achieved, and yet the portfolio
shows only minimal diversity because all scenarios are only marginal variations of the parent
scenario (Scenario no. 1).
15 In general, the binomial coefficient z!/(k!(z-k)!) indicates how many different ways there are to
select k scenarios from a portfolio of z scenarios. That is, there are 3 ways to select two scenarios
from the “Oil price” portfolio of 3 scenarios and 252 ways to form a set of 5 scenarios from the 10
Somewhereland scenarios. This is still feasible. However, there are more than 10 billion different ways to
select 10 scenarios from a portfolio of 50 scenarios.
W. Weimer-Jehle, Cross-Impact Balances (CIB) for Scenario Analysis, Contributions to Management
Science
https://doi.org/10.1007/978-3-031-27230-1_5
Wolfgang Weimer-Jehle
Email: [email protected]
Fig. 5.2 “Global socioeconomic pathways” matrix. Adapted and modified from Schweizer and
O’Neill (2014)
Figure 5.3 clearly shows that the descriptor variants play highly
different roles in the portfolio. Scenarios with the descriptor variant “B1 -
Low average income,” for example, are very rare, while scenarios with the
descriptor variant “E2 - Medium educational attainment” represent the
standard case. In contrast, the variants of descriptor "F. Quality of
governance" are distributed fairly evenly across the scenarios.
Thus, the reason for the weak presence of descriptor variant B1 lies
in its environment sensitivity and for that of E3 in its formative effect on
the environment.3
Figure 5.6 shows five scenarios that were selected from the 32 scenarios
in the “Global socioeconomic pathways” matrix using the “max-min”
heuristic.
Fig. 5.6 Scenario selection according to the “max-min” heuristic (diversity sampling)
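A possible implementation of the "max-min" heuristic can be sketched as follows; the miniature portfolio is invented (scenarios as tuples of variant numbers per descriptor), and the distance is the number of differing descriptors:

```python
from itertools import combinations

def hamming(s1, s2):
    """Number of descriptors in which two scenarios differ."""
    return sum(a != b for a, b in zip(s1, s2))

def max_min_selection(portfolio, k):
    """Greedy diversity sampling ('max-min' heuristic), k >= 2:
    start with the most distant pair, then repeatedly add the scenario
    whose minimum distance to the already selected scenarios is largest."""
    selected = list(max(combinations(portfolio, 2),
                        key=lambda pair: hamming(*pair)))
    while len(selected) < k:
        rest = [s for s in portfolio if s not in selected]
        selected.append(max(rest,
                            key=lambda s: min(hamming(s, t) for t in selected)))
    return selected

# Invented miniature portfolio for illustration
portfolio = [(1, 1, 1), (1, 1, 2), (2, 2, 2), (2, 1, 2)]
picked = max_min_selection(portfolio, 2)
```

On this toy portfolio, the most distant pair is (1, 1, 1) and (2, 2, 2), which is exactly what the first selection step returns.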
Method Comparison
A comparison of the results of the formal selection procedure with the
content-oriented selection procedure reveals similarities but also
differences. Scenarios no. 1 and no. 32 were nominated in agreement by
both procedures. However, the formal procedure failed to recognize that
Scenarios no. 8 and no. 30 could be assets to the selection although they
appear to contribute only moderately new aspects, if we judge by the formal
scenario distances. The content-driven procedure, in contrast, was blind to
the fact that although scenarios no. 1 and no. 2 arrive at similar index values
and therefore appear as neighbors on the portfolio map, the similar index
totals are reached in very different ways. Thus, behind this seemingly close
proximity are nevertheless strikingly different futures. A combination of
both methods therefore offers chances for additional insights.
Cluster Analysis
Cluster analysis enables one to systematically group large sets of scenarios
into families by sorting them according to similarities (Everitt et al. 2001).
In the resulting scenario families (“clusters”), certain descriptor variants are
the same for all members and thus define the future type for which the
cluster stands. The variants of the other descriptors are not uniform within
the cluster. Statistics offers different forms of cluster analysis. The widely
used statistics software package R, for example, provides several
procedures with which different cluster algorithms can be applied.5
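As an illustration of the idea, a plain agglomerative clustering with single linkage can be sketched in a few lines without statistical software; the miniature portfolio with two obvious scenario families is invented:

```python
from itertools import combinations

def hamming(s1, s2):
    """Number of descriptors in which two scenarios differ."""
    return sum(a != b for a, b in zip(s1, s2))

def single_linkage_clusters(scenarios, k):
    """Agglomerative clustering (single linkage) on scenario tuples:
    start with singleton clusters and repeatedly merge the two clusters
    with the smallest minimum pairwise Hamming distance until k remain."""
    clusters = [[s] for s in scenarios]
    while len(clusters) > k:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: min(hamming(a, b)
                                      for a in clusters[ij[0]]
                                      for b in clusters[ij[1]]))
        clusters[i] += clusters[j]   # merge the closest pair of clusters
        del clusters[j]
    return clusters

# Invented miniature portfolio with two clearly separated scenario families
portfolio = [(1, 1, 1, 1), (1, 1, 1, 2), (2, 2, 2, 2), (2, 2, 2, 1)]
families = single_linkage_clusters(portfolio, 2)
```

For larger portfolios, the same grouping would in practice be delegated to a statistics package, as the text notes.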
Correspondence Analysis
Correspondence analysis is a statistical technique for identifying latent
structures in datasets (Le Roux and Rouanet 2009). In CIB, it can be used to
determine best-fit dimensions for a portfolio map (Sect. 4.1.3). Pregger et
al. (2020) present an example application of this statistical method to order
a very large CIB portfolio (see Sect. 7.2). The statistical method of
multidimensional scaling (Kruskal and Wish, 1978) represents a similar
approach.
Fig. 5.9 Cross-impact matrix on the social development of an emerging country. Adapted and
modified from Cabrera Méndez et al. (2010) (Translation from Spanish by the author)
          C
        C1    C2
B  B1   −1     1
   B2    2    −2
Here, B1 prefers C2, and B2 prefers C1. Judgment sections of this regular
type introduce "if-then" mechanisms into the matrix, which interrelate with
one another and ultimately form the foundation for a pluralistic portfolio.
One-sidedly coded judgment sections, such as Section D → E in Fig. 5.9, are
usually rarer:
          E
        E1    E2
D  D1    2    −2
   D2    2    −2
This type of judgment section leads to a preference for one descriptor
variant (here E1) no matter which development occurs for D. The more
sections of this type with one-sided preference that occur in a descriptor
column, the more difficult it becomes for CIB to also find solutions with a
different descriptor variant. The descriptor is then predetermined, and if a
matrix contains a sufficient number of predetermined descriptors, the matrix
as a whole is also predetermined.
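The distinction between balanced and one-sidedly coded judgment sections can be checked mechanically. The sketch below encodes the two sections quoted above as rating matrices (rows = source variants, columns = target variants):

```python
def preferred_variants(section):
    """For each row (source variant) of a judgment section, the index of
    the target variant receiving the highest rating."""
    return [row.index(max(row)) for row in section]

def is_one_sided(section):
    """A judgment section is one-sidedly coded if every source variant
    prefers the same target variant."""
    return len(set(preferred_variants(section))) == 1

# The two judgment sections from Fig. 5.9 as quoted in the text
section_B_to_C = [[-1, 1], [2, -2]]   # B1 prefers C2, B2 prefers C1
section_D_to_E = [[2, -2], [2, -2]]   # both D1 and D2 prefer E1
```

Counting such one-sided sections per descriptor column gives a quick screening for predetermined descriptors before any scenario evaluation is run.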
Unbalanced judgment sections and their interpretation are discussed in
more detail in Sect. 6.3.2. It is important to note that the occurrence of
unbalanced sections in a matrix should not be interpreted per se as an
indication of flawed judgments. Such sections may occur in error in certain
cases. However, in other cases, they may represent the correct
implementation of a valid system insight (see Sect. 6.3.2, Paragraph
“Phantom variants”). For a CIB analysis, their presence in a matrix is
therefore simply a fact whose effect on the evaluation results must be
investigated from a technical perspective. After this investigation has been
conducted, the time has come to question their factual correctness.
In the matrix in Fig. 5.9, unbalanced judgment sections appear
unusually often, as Fig. 5.10 illustrates. In certain cases, the preferences of
the unbalanced sections change within a column and thus lose their power.
This is the case in Column B, where two unbalanced sections prefer B1 and
two other unbalanced sections prefer B2. In Columns C, E, F, and H,
however, the unbalanced sections cause a clear bias in favor of the
descriptor variants C1, E1, F1, and H2. Therefore, it is not surprising that
the single scenario of the matrix has just these characteristics and that the
matrix is unable to provide other solutions. The descriptors A and G are
weakly predetermined with only one unbalanced section. Nevertheless, the
only consistent scenario with the descriptor variants A1 and G2 follows this
gradual predetermination.
Fig. 5.10 Unbalanced judgment sections in the “Emerging country” matrix
Note
Column sums must not be confused with impact sums. The column
sum summarizes all cross-impact values of the column of a descriptor
variant, while the impact sum only sums up the cross-impact ratings of a
descriptor variant column that are active in a specific scenario.
Therefore, the column sums do not refer to a specific scenario but are a
general property of the matrix. In contrast, the impact sums are a
property of a specific scenario.
In the case of balanced cross-impact ratings, the column sums are low.
However, if the cross-impact ratings within a descriptor variant column
show a clear preference for one sign, the column sum accumulates to higher
positive or negative values. Unbalanced judgment sections are one possible
cause for an imbalance in a column. Another possible cause is, for instance,
that although there is a balanced mix of positive and negative values in a
column, the negative judgments are typically high and the positive
judgments are typically low.
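The distinction between column sums and impact sums can be sketched as follows; the mini-matrix is invented, with variants A1/A2 and B1/B2 of two source descriptors acting on a single target variant E1:

```python
# Hypothetical mini-matrix: (source_variant, target_variant) -> rating
CI = {
    ("A1", "E1"): 3, ("A2", "E1"): -1,   # sources: descriptor A (A1, A2)
    ("B1", "E1"): 2, ("B2", "E1"): -2,   # sources: descriptor B (B1, B2)
}

def column_sum(ci, target):
    """General matrix property: sum over ALL cross-impact values in the
    column of a descriptor variant, regardless of any scenario."""
    return sum(v for (src, tgt), v in ci.items() if tgt == target)

def impact_sum(ci, target, active_sources):
    """Scenario-specific: only the ratings of the variants active in a
    particular scenario contribute."""
    return sum(ci.get((src, target), 0) for src in active_sources)

col = column_sum(CI, "E1")                # 3 - 1 + 2 - 2 = 2
imp = impact_sum(CI, "E1", ["A1", "B2"])  # 3 - 2 = 1 (scenario A1, B2)
```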
Figure 5.11 reveals massive imbalances for several columns of the
“Emerging country” matrix. In particular, for descriptors C, E, and H, there
is a bias in favor of C1, E1, and H2. The cause in this case is the
accumulation of unbalanced judgment sections, which we have previously
noted.
In our example, we see that descriptors B and D, which are not
systematically predetermined by unbalanced judgment sections, are not
otherwise subject to significant bias in the cross-impact ratings and
accordingly have low-impact sums. Nevertheless, even these descriptors do
not escape the predetermination of the matrix because all other descriptors
are predetermined and therefore exert predetermined influences on B and D.
Thus, the monotonicity of the portfolio has been successfully explained
from a technical perspective, and the unbalanced judgment sections have
been identified as the central cause.6 As a next step, the identified causes of
monotonicity can be questioned. It can be discussed whether the cross-
impact ratings that were identified as the cause of the monotonicity are
justified with respect to content (and the monotonicity of the portfolio is
therefore to be accepted as a valid result) or whether the ratings must be
revised and a new evaluation conducted.
Fig. 5.14 Cross-impact matrix “Social sustainability” with sorted descriptor variants
Single Intervention
If we first attempt an intervention at A2 (increasing economic output; for
the technical implementation of the intervention analysis, see Sect. 4.4.3),
the portfolio shown in Table 5.3 results (printed in short format).
Table 5.3 Portfolio after intervention on “A. Economic performance”
A B C D E F G H
No. 1 2 1 1 1 1 1 1 1
No. 2 2 2 2 2 2 2 2 2
A B C D E F G H
No. 1 1 1 1 2 2 1 1 1
No. 2 2 2 2 2 2 2 2 2
A B C D E F G H
No. 1 1 1 1 1 1 2 1 1
No. 2 2 2 1 1 1 2 1 2
No. 3 2 2 2 2 2 2 2 2
The new Scenario no. 3 again only acknowledges that the intervention
does not disturb an already successful network state. Here, the new
Scenarios no. 1 and no. 2 express the effect of the intervention on the
former Scenario no. 1 (“everything decreases”) in Fig. 5.13. The split of the
old Scenario no. 1 into two new scenarios means that the effect of the
intervention is uncertain, and the two scenarios provide information about
the extent of the uncertainty. In the worst case, the intervention could be
ineffective except for the local effect on F (new Scenario no. 1). In the best
case, in contrast, it could succeed in tipping even three other “dominoes”: A
(economic performance), B (innovation ability) and H (education).
Intervention analysis is thus able to produce statements about the effect of an intervention
on two different scales:
1. Range of the effect in the system
2. Certainty or uncertainty of the effect.
A B C D E F G H
No. 1 2 2 1 1 1 1 1 2
No. 2 2 2 2 1 1 1 2 2
No. 3 2 2 2 2 2 2 2 2
However, even in the unfavorable case (new Scenario no. 1), we have
an effect on at least two further descriptors (plus the local effect on the
intervened descriptor), and in the more favorable case (new Scenario no. 2),
we even have an effect on four further descriptors.
The intervention analysis thus yields a rather differentiated statement
about the suitability of the descriptors for an intervention aimed at reversing
a downward spiral in social sustainability. The effects of the interventions
are summarized in Fig. 5.15, where the intervention effect is measured by
the number of descriptors tipped. Descriptors A, B, C, E, and G are not very
suitable for intervention because they do not have any systemic effect.
Descriptor D promises at least a weak systemic effect. The effect of an
intervention on F is uncertain—worse than D in the worst case but much
better than D in the best case. However, the best choice we could make
according to the present analysis would be the intervention on H
(education): here lies the greatest potential among all interventions in the
favorable case and a significant advance even in the unfavorable case.
Fig. 5.15 Intervention effects: worst case (dark shading) and best case (light shading)
Dual Interventions
According to the results of the present analysis, even the most favorable
stand-alone intervention is not capable of completely reversing a downward
spiral in the “Social sustainability” matrix. Only partial successes could be
achieved. We therefore next ask whether a simultaneous intervention on
two descriptors might be sufficient to force the impact network into a
homogeneous “everything increases” state. In the portfolio, this would be
expressed by having the old Scenario no. 2 from Fig. 5.13 as the only
solution.
With eight descriptors, there are 28 ways to conduct an intervention on
two descriptors. As in the paragraph “single intervention,” the effect is
again assessed according to the maximum and minimum number of
descriptors that could be tipped. Complete success corresponds to tipping
all eight descriptors (“domino intervention”). The results for all 28 possible
dual interventions are summarized in Fig. 5.16.
Fig. 5.16 Effect of dual interventions in the “Social sustainability” matrix
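The count of 28 follows from the binomial coefficient described in Footnote 15; enumerating the candidate pairs for a dual intervention is then a one-liner:

```python
from itertools import combinations
from math import comb

descriptors = ["A", "B", "C", "D", "E", "F", "G", "H"]

# All unordered pairs of descriptors that could be intervened on together
dual_interventions = list(combinations(descriptors, 2))

# The number of pairs equals the binomial coefficient "8 choose 2"
n_pairs = len(dual_interventions)
```

Each pair would then be evaluated like a single intervention, with both descriptors reduced to their forced variants before the portfolio is recalculated.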
Problem Solution
The cause of the problem described above is that D is an underdetermined
descriptor. The influences described in Column D may be correct in
themselves. However, their incompleteness creates a distorted picture of the
descriptor. The necessary inclusion of the possibility that political
developments in BigCountry might take different paths than specified by
Column D is most easily performed by deleting all entries in the column of
the underdetermined descriptor. The admission, expressed in this way, that
the behavior of descriptor D cannot be inferred on the basis of our
descriptor field leads to a better system analysis than holding on to the
illusion that a statement about D could be made with the existing
descriptors in this matrix. After the entries in Column D of Fig. 5.18 are
deleted, the result is an expanded portfolio (Fig. 5.20):
Lessons Learned
Problems caused by underdetermined descriptors are not limited to analyses
in which, as here, the influence of large systems on small systems must be
considered, although such problems then occur frequently. However, it is
generally necessary when compiling a descriptor field, or at the latest after
completion of the cross-impact matrix, to consider whether one or more
descriptors of the descriptor field are predominantly determined by
influences outside the descriptor field and then to proceed according to the
recommended procedure.
The inadequate handling of underdetermined descriptors is not
uncommon in CIB practice and often results in plausible scenario
alternatives being overlooked. Special attention to this point is therefore a
rewarding investment in analysis quality.
Figure 5.21 shows that the column sums in this example correspond
exactly to the pattern of descriptor variant presence in the IC0 and IC1
portfolios and thus provide a plausible explanation for the presences and
vacancies of descriptor E. The descriptor variant with by far the highest
column sum (E3) is precisely the descriptor variant that is the only one
already occurring in IC0. In IC1, the descriptor variants E2 and E4 enter the
portfolio, which is easily understandable since these are the two descriptor
variants with column sums in the middle range. The significantly vacant
descriptor variant E1, shaded in Fig. 5.21, has by far the lowest column
sum, indicating a preponderance of negative values in this column. It is
this dominance of negative values in the E1 column that
prevents the CIB algorithm from finding scenarios capable of pushing the
impact sum of E1 into a range high enough to satisfy the consistency
condition.
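The column-sum screening described above can be sketched in a few lines: sum each variant's column of ratings and flag the variant with the lowest sum as a candidate for a non-systemic vacancy explanation. The ratings below are illustrative placeholders, not the actual "Oil price" matrix data.

```python
# Sketch of a column-sum screening for vacancy candidates.
# The ratings below are illustrative, not the actual "Oil price" matrix values.
columns = {
    "E1 Oil price low":       [-2, -3, -1, -2],
    "E2 Oil price moderate":  [ 0,  1, -1,  1],
    "E3 Oil price high":      [ 2,  3,  1,  2],
    "E4 Oil price very high": [ 1,  0,  1, -1],
}

# Sum each variant's column of cross-impact ratings.
col_sums = {variant: sum(cells) for variant, cells in columns.items()}

# A strongly negative column sum is a candidate explanation for a vacancy
# of that variant, without any systemic cause having to be presumed.
suspect = min(col_sums, key=col_sums.get)
print(col_sums, "vacancy candidate:", suspect)
```

With these placeholder numbers, the variant with the clearly negative column sum is flagged, mirroring the role of E1 in the discussion above.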
This outcome clarifies that no systemic causes need be presumed for the
significant vacancy of “E1 Oil price low.” This is sufficiently explained by
the considerable preponderance of negative values in the column of this
descriptor variant and the disadvantage it thus experiences compared to all
other variants of the descriptor. The content-oriented interpretation of this
formal finding is that in compiling the descriptor variants, that is, in
collecting the ad hoc conceivable developments for the other descriptors,
the authors of the matrix identified numerous potential causes of higher oil
prices but few potential causes of a low oil price. Thus, the authors of the
matrix expressed an implicit skepticism about the possibility of a low oil
price in advance of the analysis, which then found its consequent
expression in the composition of the portfolio.
Following the formal vacancy analysis, a content-related examination of
the vacancies can be conducted. Here, for example, the question can be
discussed whether the one-sidedness of the hindering influences is justified
from the standpoint of resource economics or whether important factors that
would have spoken in favor of the vacant descriptor variant are possibly
missing in the descriptor field.
In practice, many vacancies can be explained by the column sums.
However, in cases where vacancies occur despite balanced column sums,
systemic explanations must be sought. Typical systemic causes for
vacancies are, for example, vacancies of other descriptors, which disable
potentially promoting influences on the investigated vacancy (thus creating
“corollary vacancies”), or antagonistic relations between potential
promoters of the vacant descriptor variants, which means that in each
scenario only a part of the promoters can become active. Additionally,
constellations in which a descriptor variant effectively acts against its
potential supporters and/or in favor of its potential antagonists can lead to
systemic vacancies despite balanced column sums.
Fig. 5.25 Conditional cross-impact matrices (the top matrix is valid for E1 scenarios, that below for
E2 scenarios)
References
Cabrera Méndez, A. A., Puig López, G., & Valdez Alejandre, F. J. (2010). Análisis al plan nacional de
desarrollo - una visión prospectiva. XV Congreso internacional de contaduría, administración e
informática, México.
Carlsen, H. C., Eriksson, E. A., Dreborg, K. H., Johansson, B., & Bodin, Ö. (2016). Systematic
exploration of scenario space. Foresight, 18, 59–75.
Everitt, B. S., Landau, S., & Leese, M. (2001). Cluster analysis. Arnold.
Hummel, E. (2017). Das komplexe Geschehen des Ernährungsverhaltens - Erfassen, Darstellen und
Analysieren mit Hilfe verschiedener Instrumente zum Umgang mit Komplexität. Dissertation,
University of Gießen.
Kruskal, J. B., & Wish, M. (1978). Multidimensional scaling. Sage University Paper Series on
Quantitative Applications in the Social Sciences, 07–011. Sage Publications.
Le Roux, B., & Rouanet, H. (2009). Multiple correspondence analysis. Sage Publications.
Lord, S., Helfgott, A., & Vervoort, J. M. (2016). Choosing diverse sets of plausible scenarios in
multidimensional exploratory futures techniques. Futures, 77, 11–27.
Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards
socio-technical scenarios of the German energy transition - lessons learned from integrated energy
scenario building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0
Renn, O., Deuschle, J., Jäger, A., & Weimer-Jehle, W. (2007). Leitbild Nachhaltigkeit - Eine
normativ-funktionale Konzeption und ihre Umsetzung. VS-Verlag.
Renn, O., Deuschle, J., Jäger, A., & Weimer-Jehle, W. (2009). A normative-functional concept of
sustainability and its indicators. International Journal of Global Environmental Issues, 9(4), 291–
317.
Schweizer, V. J., & O’Neill, B. C. (2014). Systematic construction of global socioeconomic pathways
using internally consistent element combinations. Climatic Change, 122, 431–445.
Tietje, O. (2005). Identification of a small reliable and efficient set of consistent scenarios. European
Journal of Operational Research, 162, 418–432.
Footnotes
1 Scenarios nos. 2, 6, 8, 11, 13, and 15 have a mutual distance of 3 or higher.
2 Another method of statistical evaluation is, for example, to examine the frequency of descriptor
variant combinations in the portfolio.
3 In a more in-depth form of this analysis, the descriptor column (and then the descriptor variant
row) of the rare variant is not set to 0 as a whole. Rather, only one judgment section of the column
(or judgment group of the row) is set to 0 in several successive evaluation runs. In this way, it can be
determined which relationships in particular contribute to the rarity of the examined descriptor
variant.
4 Tietje (2005) presented three different scenario selection procedures developed on the basis of
morphological fields, including the “max-min-selection” procedure used here. The procedures were
originally intended for use with the consistency matrix method. However, they can be applied to CIB
scenarios without difficulty because CIB also uses morphological fields. Diverging from Tietje’s
suggestion, however, all scenarios of the portfolio are treated as basically equivalent here.
6 As a technical check of whether the explanation found for the monotonicity of our example
portfolio is sufficient, all unbalanced judgment sectors can be modified, e.g., by clearing them all or
by converting the signs in the unbalanced sectors so that sectors with regular sign patterns are
created. If the reevaluation then yields a broader portfolio, the monotonicity can be unambiguously
attributed to the unbalanced judgment sectors. Actually, in the example matrix, this test evaluation
results in a breakup of the monotonicity.
7 The only exception is the impact of “A. Economic performance” on “E. Social integration,” where
it can already be seen in the matrix Fig. 5.12 that a counteracting impact is coded, since it is assumed
that people become more egocentric in a booming economy and consequently care less about one
another. However, this single exception is too isolated to have a significant effect on the system.
8 This is not a peculiarity of the case discussed here. It is always the case that when all values in a
column are deleted, all previous scenarios are retained, but new scenarios can be added (but do not
necessarily have to be added).
9 In this way, the cause analysis for the vacancies of a portfolio is also a suitable means of at least
partially checking the quality of the cross-impact data.
W. Weimer-Jehle, Cross-Impact Balances (CIB) for Scenario Analysis, Contributions to Management
Science
https://doi.org/10.1007/978-3-031-27230-1_6
6. Data in CIB
Wolfgang Weimer-Jehle1
(1) ZIRIUS, University of Stuttgart, Stuttgart, Germany
Wolfgang Weimer-Jehle
Email: [email protected]
This chapter addresses the three fundamental data objects needed for CIB
analysis: descriptors, descriptor variants, and cross-impact data. The basic
aspects of their role were described in the method description in Chap. 3. These
basic aspects will be deepened here by “dossiers” on the data objects. The
dossiers discuss terminology, types, methodological requirements, and
elicitation procedures commonly used in practice.
Two people in the group take on a special role: Daniel does not let any other
group member influence his opinion. The matrix column assigned to him
(Column B) is therefore empty, which means that he is represented by an autonomous
descriptor.
Alec, in contrast, is a new group member whose social standing within the
group is still low. His opinion is noted, but no one is influenced by it.
Conversely, he orients himself only to his special reference persons in the group.
In this case, the corresponding row of the matrix is empty, and the
corresponding descriptor is to be classified as a passive descriptor.
All other persons (Anna, Philipp, Sarah, Laura) are represented by
descriptors with cross-impact entries in their rows and in their columns
(systemic descriptors). They assume a dual role in that they are objects of
influence and at the same time participate in shaping the group’s opinion-
forming process.
The scenario table shows which subgroups with homogeneous opinions can
form. For better clarity, the approvals in Fig. 6.1 are presented in light print,
whereas the disapprovals appear in gray. One solution type consists of Sarah
holding an opinion that differs from the rest of the group (scenarios 1 and 2). In
the second type, two camps are formed consisting of Daniel and Alec on the one
hand and Anna, Philipp, Sarah, and Laura on the other (scenarios 3 and 4). For
each solution, there is also an inverse solution in which the yes and no positions
are reversed.2
A special characteristic of passive descriptors is that they can be deleted
from the matrix without consequences for the scenarios with respect to the
remaining descriptors. This is illustrated by removing the descriptor “E. Alec”
from the matrix in Fig. 6.1 and then re-evaluating the matrix.
The result shown in Fig. 6.2 makes clear that the camp formations follow the
same pattern as in the original matrix. The only difference is that the scenarios
now no longer include a statement about Alec.
Fig. 6.2 “Group opinion” cross-impact matrix and portfolio after removing the passive descriptor
Passive descriptors can thus be removed from the matrix without changing
the nature of the portfolio. Motives for removing passive descriptors can be to
make the matrix and the results clearer and easier to understand or, in the case of
large matrices, to reduce the computing time required to evaluate the matrix.
Passive descriptors should be retained, however, if they make important content-
oriented statements, for example, because they are a direct part of the research
question to be addressed by the CIB analysis.
The correct variant of a removed passive descriptor in the context of a
specific scenario obtained from the shortened matrix can be determined
subsequently by inserting the scenario into the complete matrix according to the
scheme shown in Fig. 3.16 and determining the descriptor variant of maximum
impact balance for the passive descriptor. Figure 6.3 shows this procedure for
scenario no. 3 from Fig. 6.2. The scenario does not contain any information
about Alec because the passive descriptor “E. Alec” was temporarily removed
from the matrix. Inserting the scenario [Anna: yes, Daniel: no, Philipp: yes,
Sarah: yes, Laura: yes] into the original matrix (Fig. 6.1) and calculating the
impact balances shows that “no” is the consistent descriptor variant of “Alec” in
this case.
Fig. 6.3 Post hoc determination of the consistent variant of a passive descriptor
The comparison with scenario no. 3 of the portfolio of the complete matrix
(Fig. 6.1) shows that this result is correct.
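The post hoc procedure of Fig. 6.3 can be sketched as follows: sum the impacts of the scenario's active variants on each variant of the removed passive descriptor and select the variant with the maximum impact balance. The names mirror the group-opinion example, but the ratings are illustrative placeholders, not the values of Fig. 6.1.

```python
# Sketch of the post hoc determination of a passive descriptor's variant.
# impacts[(source_variant, target_variant)] holds a cross-impact rating;
# the ratings here are illustrative, not those of Fig. 6.1.
impacts = {
    ("Anna: yes",  "Alec: yes"):  2, ("Anna: yes",  "Alec: no"): -2,
    ("Daniel: no", "Alec: yes"): -1, ("Daniel: no", "Alec: no"):  1,
    ("Sarah: yes", "Alec: yes"): -2, ("Sarah: yes", "Alec: no"):  2,
}

# A scenario obtained from the shortened matrix (passive descriptor removed).
scenario = ["Anna: yes", "Daniel: no", "Sarah: yes"]

def impact_balance(target):
    # Impact sum exerted on `target` by the scenario's active variants.
    return sum(impacts.get((src, target), 0) for src in scenario)

# The consistent variant of the passive descriptor is the one with the
# maximum impact balance in this scenario context.
consistent = max(["Alec: yes", "Alec: no"], key=impact_balance)
print(consistent)
```

With these placeholder ratings, the scenario's combined influence favors one of the two candidate variants, which is exactly the selection rule applied in Fig. 6.3.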
Fig. 6.5 Portfolios with (top) and without (bottom) intermediary descriptor D
Number of Descriptors
The number of descriptors used for a CIB analysis is not prescribed by the
method. In principle, it results from a trade-off between the goals of adequacy
and completeness of the analysis and that of limiting the effort required to
perform the analysis. Too few descriptors endanger the quality of
the analysis because relevant aspects are not represented by descriptors and thus
neglected or because complex aspects, which would be better broken down by
several descriptors, must be aggregated into a single descriptor. However, too
many descriptors can also jeopardize analysis quality. Since usually only limited
time resources are available for an analysis, the attention and care that can be
devoted to the individual descriptor and its interconnections decrease as the
number of descriptors increases, with the risk that the analysis builds on
insufficiently understood descriptors and interrelationships.
Practice shows that the number of descriptors used in CIB studies varies
widely. An evaluation of 84 published CIB studies on different topics found a
minimum descriptor number of 3 and a maximum descriptor number of 43, with
both extremes being exceptions.3 In most method applications, descriptor fields
of between 9 and 15 descriptors are found (see Statistics Box).
Aggregation Level
CIB analyses are generally conducted at a relatively high aggregation level due
to the limited number of descriptors. For example, economic development in
Somewhereland is represented by overall economic growth, whereas a more
detailed analysis could differentiate by economic sector or region, for example.
Similarly, the topic of societal values could be considered in a more
differentiated way than in the Somewhereland matrix if distinctions were made
according to prevailing value attitudes in different social milieus or different age
cohorts. Generally, the more differentiated the descriptors are, the more reliably
cross-impacts can be assessed, and the more detailed and well-founded the
resulting scenarios are. In the end, the time resources that are available
determine how many descriptors can be used and which aggregation level must
be chosen for the analysis.
In any case, care should be taken to ensure that the level of aggregation chosen is M10
approximately uniform across the entire descriptor field. If certain subsystems are fine-grained
while other subsystems are described in a highly aggregated way, biases may arise in the
assessment of the influences of the disaggregated descriptors on the highly aggregated system
parts when a standardized impact rating scale is used. Due to their disproportionately high
number, the disaggregated descriptors can develop an exaggerated influence on the impact
balances.4
Documentation
For each descriptor, a written definition should be formulated: the “descriptor
essay.” This definition is helpful for the core team because it prevents
uncontrolled changes in its interpretation of the descriptor during the analysis
process. Without a detailed written version of the descriptor definition,
misunderstandings among participants and diverging interpretations can occur
unnoticed. This risk increases further if in the course of the process additional
persons who did not participate in descriptor selection and definition become
involved, for example, an expert panel for the estimation of the cross-impact
data. Furthermore, even after the scenario analysis has been completed, the
descriptor essays are of great importance for the comprehensibility, evaluability,
and usability of the results by the target audience of the study.
No general rule can be proposed for the size of descriptor essays because it
necessarily depends on the complexity of the topic, the number of descriptors,
and the level of ambition and willingness to read of the intended audience. In
practice, essays of approximately half a page to one page of text per descriptor
are frequently found.
CIB does not require a specific descriptor type. All types and mixed
descriptor fields, as in the Somewhereland matrix, can be used without further
ado. However, the evaluation algorithm assumes all descriptors to be nominal
descriptors—the type of descriptor that has the fewest requirements for the
measurement properties of a descriptor. In other words, CIB treats all descriptors
as nominal descriptors regardless of their actual type. Therefore, CIB
fundamentally generates qualitative scenarios and can be classified as a
qualitative scenario methodology in terms of its product (see Sect. 8.1 for more
details). Ordinal or quantified descriptors can also be used. However, their
special properties are not assumed and not used during the evaluation.
Additionally, the order of the descriptor variants in the matrix is irrelevant for
the evaluation. That is, changing the order of descriptor variants for an ordinal
descriptor would appear unnatural to the reader but would not lead to different
calculation results. Likewise, the assigned numbers in the variant definitions
of quantified descriptors are not used in CIB evaluation, although they may play
a role later in the interpretation and exploitation of the scenarios.
This implies that CIB also interprets the variants of quantified (“ratio”) descriptors as qualities. M11
For example, dynamic economic growth as an economic state has a different quality than
stagnation, and CIB explores the implications of this difference in quality. Within a CIB
analysis, the numerical variants specified for quantified descriptors have only the illustrative
function of making this quality difference comprehensible and assessable.
Vacant descriptor variants: Vacant variants (“vacancies”) are descriptor variants that are
completely missing in the portfolio because they are not used by any scenario of the
portfolio. The CIB analysis thus expresses the assessment that serious obstacles stand in
the way of realizing the vacant descriptor variants. The extent of these obstacles is
expressed by whether the vacancies continue to exist in the IC1 portfolio or even in
higher IC levels. Examples of vacant descriptor variants in the portfolio of the “Oil
price” matrix (Fig. 4.48) are the descriptor variants “C3 Weak world tensions,” “E4 Very
high oil price,” and other descriptor variants.
Robust descriptor variants: Robust descriptor variants are variants that are shared by all
scenarios of the portfolio. This implies that all other variants of the relevant descriptor
are vacant. Robust descriptor variants denote developments which, according to the
results of the CIB analysis, are insensitive to the future uncertainty of the other
descriptors and must therefore be regarded as likely developments. At this point, the
scenario analysis moves exceptionally close to a prognostic statement. The significance
with which this statement can be made depends on whether the descriptor variant in
question is also robust in the IC1 portfolio or even in higher IC levels. An example of a
robust descriptor variant is “E3 High oil price” in the IC0 portfolio of the “Oil price”
matrix (Fig. 4.48).
Characteristic descriptor variants: Characteristic descriptor variants are variants that
occur in only a single scenario of the portfolio. They indicate that very specific
prerequisites are required for their activation, which are provided only by specific
scenarios, or that the influences of the characteristic descriptor variant on the other
descriptors determine the scenario in a unique way. If the characteristic descriptor
variant is also particularly meaningful for the research question of the analysis, it
provides an attractive option for the scenario’s title. Examples of characteristic
descriptor variants are “C1 Shrinking economy” and “E3 Unrest” in the portfolio of the
Somewhereland matrix (Fig. 3.17). Characteristic variants are more noteworthy in large
portfolios because in small portfolios many descriptor variants inevitably have this
status.
Regular descriptor variants: All other descriptor variants, i.e., those that are neither
vacant, characteristic, nor robust, are termed regular variants. Their distinguishing
feature is that they occur in more than one, but not all, scenarios of the portfolio.
Regular descriptor variants should be the normal case in a diverse portfolio. Only if this
type plays a major role in the portfolio can the ideal of a portfolio rich in varying future
motifs be realized.
Definition
For maximum clarity for the cross-impact elicitation and interpretation of
results, the definition of a descriptor variant should not only be reflected by the
abbreviated name by which it is represented in the cross-impact matrix but also
be accompanied by documentation as a textual and, if appropriate, quantitative
explanation (Fig. 6.8). This information can be included in the descriptor essay.
Fig. 6.8 Example of descriptor variants and their definitions
Completeness
Completeness (exhaustiveness) means that the descriptor variants taken together
must represent all possible futures of the descriptor that are relevant for the
analysis. In the course of the data processing, CIB will assign exactly one
variant to each descriptor for each scenario. The case in which a descriptor does
not adopt any of the offered variants and thus refers to a development outside
the variant spectrum is not provided for in CIB. To avoid this methodological
“constraint” leading to an inappropriate narrowing of the scenario space, a well-
considered decision about the range of the possible must be made when defining
the variants for each descriptor.5
Mutual Exclusivity
Mutual exclusivity means that no conceivable real-world development should be
assignable to more than one variant of a descriptor simultaneously. The reason
for this requirement is again that CIB assigns exactly one variant to each
descriptor during scenario construction. Thus, for example, the variants of a
descriptor for economic growth may not be formulated as “below 2%/a,” “1–
3%/a,” and “above 3%/a,” because it is not clear to which descriptor variant an
economic growth of 1.5%/a is to be assigned.
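Such an exclusivity defect in numeric variant definitions can be detected mechanically by checking the variant intervals for pairwise overlap. A minimal sketch, with interval bounds taken from the flawed example above (open/closed ends simplified):

```python
# Sketch: detect overlapping numeric variant definitions of one descriptor.
# Bounds follow the flawed growth-rate example; ends are treated as open.
variants = {
    "below 2%/a": (float("-inf"), 2.0),
    "1-3%/a":     (1.0, 3.0),
    "above 3%/a": (3.0, float("inf")),
}

def overlaps(a, b):
    # A nonempty open intersection means some growth rate fits both variants.
    return max(a[0], b[0]) < min(a[1], b[1])

names = list(variants)
defects = [(names[i], names[j])
           for i in range(len(names))
           for j in range(i + 1, len(names))
           if overlaps(variants[names[i]], variants[names[j]])]
print(defects)
```

The check reports the pair "below 2%/a" and "1-3%/a" as ambiguous: a growth of 1.5%/a fits both, which is precisely the defect described above.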
Of course, such obvious errors in the design of the descriptor variants do not
play a significant role in practice. More relevant are hidden exclusivity
deficiencies, for example, when the variants of a descriptor “A. Development of
household incomes” are formulated as follows:
A1: Decreasing household incomes
A2: Increasing household incomes
A3: Rising inequality of household incomes
A4: Declining inequality of household incomes
The exclusivity defect in this case is that two dimensions are mixed in one
descriptor that are actually independent topics, namely, the level and distribution
of household income. In reality, it is quite possible for the household incomes of
a group of persons to increase in both level and inequality at the same time. To
account for this case, CIB would have to include both A2 and A3 in the same
scenario during scenario construction, which is technically impossible. It is
therefore necessary in this case to introduce two separate descriptors for the
level and inequality of household incomes.
Absence of Overlap
The definitions of the descriptor variants do not only have to be coordinated
within one descriptor. Methodological difficulties can also arise between the
variants of different descriptors if care is not taken to ensure absence of overlap .
Absence of overlap means that the variants of different descriptors should avoid
making statements about the same topic. Admittedly, it is hardly to be expected that
anyone would deliberately introduce two descriptors for the same subject. In
practice, the danger of creating overlaps is more likely to arise where the
documentation on the descriptors and their variants describes concomitants of the
actual main developments. For example, the definitions of the Somewhereland
descriptors “A. Government” and “B. Foreign Policy” shown in Fig. 6.9 contain
textual elements that violate the requirement of nonoverlapping descriptors.
Fig. 6.9 Incorrect definition of descriptor variants due to overlap of topics
The example shows that the two descriptors address sufficiently different
topics with respect to the party in power and the style of foreign policy and that
it is therefore justified to define separate descriptors for the topics. Basically, it
is useful to make the intended scenarios as concrete and substantial as possible
by a detailed elaboration of the descriptor variants. However, in this case, a
methodological defect occurs: the fact that the question of the role of
multilateral agreements is addressed in the definitions of both descriptors causes
an overlap. The consequence is that free play between descriptors A and B is no
longer possible, since A1 and B1 can no longer be combined without causing the
definition texts to contradict one another. The CIB analysis can thus no longer
explore whether a government of the “Patriots party” can find its way to a
cooperative foreign policy under certain circumstances without coming into
conflict with the definitional texts, since this combination is ruled out ad hoc by
an aside.
To leave the CIB analysis maximum freedom to investigate a research
question, it is therefore best to avoid overlapping descriptor definitions
altogether. In the case under discussion, this would mean addressing statements
about the attitude toward multilateral agreements in only one of the two
descriptors. At a minimum, however, one must avoid categorical statements and
instead associate a relative preference with the descriptor variant (e.g., the “Patriots
party” prefers a rejection of multilateral agreements, and a cooperative foreign
policy prefers participation in multilateral agreements).
The counter model with respect to the choice of central descriptor variants is
the decision to focus on the periphery of the spectrum of possibilities (Fig. 6.10,
right). Here, only a coarse definition of the middle range is accepted to afford
access to the outskirts of the spectrum of possibilities. In this way, CIB analysis
can also provide extreme scenarios. In Fig. 6.10, for example, one would argue
that the descriptor variant “A2 moderate” describes an economic development
that, despite the considerable range of developments it includes, is essentially
within the bounds of historical experience and represents more or less a
continuation of existing socioeconomic structures. In contrast, each peripheral
variant would lead in its own way to massive change or even disruption. The
price to be paid for the greater “colorfulness” of an analysis that includes the
peripheral part of the possibility space is that the scenarios using the central
descriptor variants lead to vague and not very informative pictures of the future.
However, at the cost of increased workload in matrix generation, the
advantages of both approaches can be combined by introducing five variants for
the descriptor in Fig. 6.10.
These considerations are not limited to (quantitative) ratio descriptors. The
corresponding design options are also open to ordinal and nominal descriptors.
Rating Interval
For the cross-impact ratings, the interval [–3…+3] is mainly used in the
examples of this book. However, this interval is not stipulated by the method. In
the literature, one also finds application examples using larger or smaller scales,
for example, [–5…+5]10 or [–2…+2]11. Basically, CIB is operable even if only
the sign of the influence is provided without a strength rating, which
corresponds to the interval [–1…+1].
The rating interval is therefore a matter of choice. As mentioned, from a
technical perspective, CIB does not prescribe an interval. The choice of interval
is a trade-off between the goal of providing the scenario construction procedure
with as much information as available about the differences in strength between
influences (a goal that is promoted better by large intervals) and the insight that
qualitative knowledge about influence relationships—which is often the basis
for cross-impact ratings—does not usually justify fine gradations. Intervals that
are too large only lead to a stronger experience of uncertainty for the experts and
pseudoaccuracy in the rating results.
Practical experience shows that the interval [–3…+3] is often perceived as
usable and at the same time sufficient for recording existing knowledge about
strength relations. In a sample of 64 CIB studies with documented rating
intervals, the interval [–3…+3] was used in 49 cases (77%). With this interval,
CIB follows older traditions with other methods based on expert judgments.12
Nevertheless, individual circumstances can also suggest a different choice. For
example, a coarser strength scale may be preferred if a study requires an
assessment of relationships that are particularly difficult and uncertain to
evaluate.
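As noted, CIB remains operable with sign-only ratings. A minimal sketch of coarsening a judgment section from [–3…+3] to the [–1…+1] convention, using illustrative values:

```python
# Sketch: coarsening an illustrative [-3..+3] judgment section to the
# sign-only convention [-1..+1]; strength information is discarded.
section = [[ 3, -2],
           [-1,  2]]

sign = lambda x: (x > 0) - (x < 0)  # maps any rating to -1, 0, or +1
signs = [[sign(v) for v in row] for row in section]
print(signs)
```

The coarsened section retains only the direction of each influence, which is exactly the information content of the [–1…+1] interval.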
Cross-impact matrices may be densely or sparsely filled. There is no methodological necessity M12
to fill as many judgment sections as possible. One of the essential tasks in creating the cross-
impact matrix is to decide which relationships are to be omitted as less relevant and to focus
the view on the influencing relationships that are actually key.
Thus, cross-impact ratings should be strictly limited to direct influences, and it should be left M13
to the CIB algorithm to account for the indirect effects they imply.
Notice: Changing the descriptor field can convert direct influences into
indirect ones or vice versa
As described above, an indirect influence relationship can be identified by
the appearance of a third descriptor as an intermediate element in the detailed
verbal formulation of the impact process. However, this means that the
classification of an influence relationship as direct or indirect is only valid in
relation to a specific descriptor field. If the descriptor “A. Government” were
removed from the Somewhereland descriptor field, the influence relationship
from F to B described above would become a direct effect for the reduced
Somewhereland matrix. Conversely, the subsequent addition of a new descriptor
to Somewhereland would raise the question of whether some of the mechanisms
previously correctly coded as direct influences in the matrix might now be
classified as indirect and thus should be broken down into their components,
which then lead through the new descriptor as an intermediate link.
Implementation Hints
Limiting the cross-impact assessment to direct influences is a challenging task.
Experience has shown that experts may find it difficult to reliably maintain the
distinction between direct and indirect influences. Careful preparation of the
expert panel for this issue is therefore important for quality assurance. However,
the most important tool for self-control by the experts or for subsequent quality
control by the core team is a written explanation of the coded influence
relationships with a precise description of the assumed impact process.
Descriptions that mention other descriptors in addition to the source and target
descriptor of the influence when describing the assumed impact process suggest
a coding error.
The same consideration can just as well be described by its opposite, i.e., its
dampening effect on the risk of decreasing innovative capacity (Table 6.5).
Table 6.5 Coding an influence relationship using negative impacts
For if, as in this case, only the alternatives that innovation capacity increases
or decreases are offered, then to state that MINT education helps prevent
innovation capacity from decreasing is equivalent to stating that such an
education promotes the increase in innovation capacity. Not surprisingly,
explicitly mentioning both sides of the influence effect, i.e., promoting and
inhibiting, is also a way of expressing the same consideration (Table 6.6).
Table 6.6 Coding an influence relationship using mixed impacts
The three alternative codings of the same consideration formulated here are
methodologically equivalent. The distance between B1 and B2 is always the
same (i.e., 2 points), and CIB evaluation would result in identical portfolios for
all three variants. The equivalence of the three cross-impact codings (a), (b), and
(c) is the consequence of the following general law in CIB:
Uniform addition of any number in all cells of a judgment group does not change the portfolio of a matrix (“addition invariance”).14 The reason for this is that the CIB algorithm only reacts to the difference between two impact sums, and this does not change when a judgment group is uniformly shifted in its score level.
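This addition invariance is easy to verify numerically. The following sketch uses made-up ratings for a single judgment group (target variants B1 and B2 influenced by one source variant, loosely following the tables above) and confirms that a uniform shift leaves the impact-sum difference unchanged:

```python
# Toy check of "addition invariance" (illustrative ratings, not the book's
# matrix): uniformly shifting all cells of a judgment group changes every
# impact sum by the same amount, so the differences the CIB algorithm
# evaluates, and hence the scenario portfolio, stay the same.

# Judgment group: ratings given by a source variant to target variants B1, B2
group = {"B1": 3, "B2": 1}

def shift(group, c):
    """Uniformly add the constant c to all cells of the judgment group."""
    return {variant: rating + c for variant, rating in group.items()}

def diff(group):
    """Difference of the impact sums received by B1 and B2 (single source)."""
    return group["B1"] - group["B2"]

print(diff(group))             # difference for the original coding
print(diff(shift(group, -2)))  # same difference after a uniform shift
```

The same reasoning carries over cell by cell when several source descriptors contribute to the impact sums, since each judgment group is shifted independently.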
(C2)
The exclusive use of negative cross-impacts in the style of coding (b).
(C3)
The balanced use of positive and negative cross-impacts in coding (c).
With all the conventions mentioned, it is in principle possible to perform a
correct CIB analysis. In application practice, however, cross-impact matrices
with an (at least) approximately balanced use of positive and negative cross-
impact ratings dominate, and this book follows this practice.
At the beginning of the cross-impact assessment of each descriptor column, it is helpful to first ask oneself which is the strongest impact in that column and to rate it +3 or –3. Afterward, this impact serves as a reference point for the other cross-impact ratings in the column.
The original judgment section, limited to A1 and A2, may have come about
through deliberate exclusion of the conceivable descriptor variant A3, for
example because a regression in environmental legislation was considered
highly unlikely or because it would contradict the premises of the study.17 The
narrowing of the options for descriptor B by the one-sidedness of the judgment
section would then simply be the logical consequence of a deliberate restriction
of the variant spectrum of descriptor A.
Descriptor variants that are conceivable in principle but excluded from the
analysis for good reason, such as A3, which are an unspoken part of the
spectrum of possibilities and must be supplemented mentally to capture the idea
of a judgment section, are referred to as “phantom variants.”
What is needed is sensitivity to the occurrence of one-sided cross-impact
ratings. One-sided judgment sections emerging during expert elicitation should
be discussed with the experts and the implications explained. If there are good
reasons and if the one-sidedness is confirmed by the experts in knowledge of the
implications, the one-sided judgment section should be accepted. If one-sided
judgment sections result in strongly predetermined descriptors, consideration
can also be given to simplifying the matrix by deleting the devalued descriptor
variants.
Examples from practice in which expert judgment led to strongly
predetermined cross-impact matrices include Weimer-Jehle et al. (2010) and
Cabrera Méndez et al. (2010).
6.4.1 Self-Elicitation
In the simplest case, the necessary data objects (descriptors, descriptor variants,
cross-impact ratings) are determined partly or entirely by the core team. This
assumes that the core team has sufficient expertise in the entire relevant subject
area.
Advantages
The main advantages of this elicitation method are its procedural simplicity, low required effort, and independence from the need for external experts to contribute to data elicitation. The method is therefore particularly suitable for studies with limited resources.
Disadvantages
The main disadvantages are the increased risk of overlooking important perspectives as a result of “groupthink” and possibly of the team lacking competence sufficient to represent all the required areas of knowledge. Thus, the disadvantages of the process lie in the increased risk of subjectivity and limited scientific legitimacy. These concerns carry less weight if the explicit purpose of the analysis is to express the core team’s system view through scenarios (e.g., in the case of stakeholder scenarios), if the thematic range of the object of analysis is well covered by the competencies of the team, or if the matrix is only prepared for illustrative or teaching purposes (such as in the case of the “Somewhereland” matrix).
Examples
Examples of self-elicitation of descriptors/variants and/or cross-impact data in
CIB studies are Saner et al. (2011), Slawson (2015), Saner et al. (2016),
Weimer-Jehle et al. (2016), Schweizer and Kurniawan (2016), Regett et al.
(2017), and Zimmermann et al. (2017).
Descriptor Screening
A literature review for descriptor screening consists of evaluating the literature to identify topics named as relevant influencing factors for the subject of the analysis. To objectify the literature selection, it is advisable to document the search procedures (e.g., databases used, search criteria). Recently, the use of
software tools for qualitative content mining has occasionally been observed
(e.g., Kalaitzi et al., 2017; Sardesai et al., 2018). A special form of review is the
evaluation of expert statements in media reports on the subject of the scenario
study (Ayandeban, 2016).
Descriptor Variants
Obtaining descriptor variants through literature reviews can be based on studies
that make future predictions about the descriptors. In the best case, the
descriptors themselves have already been the subject of scenario studies, or
controversial forecasts on the descriptor topic can be collected to capture the
spectrum of possible futures for the descriptors (Nutshell V, Fig. 6.11).
Cross-impact Data
The derivation of cross-impact data from the literature by the core team consists
of identifying passages that make statements about descriptor relationships and
coding the qualitative or quantitative statements on a cross-impact rating scale.
To reduce coding uncertainty, several persons trained in the CIB method can be
asked to code the literature passages independently. Comparison of the results
then makes it possible to assess intercoder reliability.
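One common way to quantify intercoder reliability is Cohen’s kappa, which corrects raw agreement for chance. The sketch below is illustrative only: the ratings are invented, and kappa is just one of several possible agreement measures.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected chance agreement from each coder's marginal rating frequencies
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codings of six literature passages on the -3..+3 scale
coder_1 = [2, 3, -1, 0, 2, -2]
coder_2 = [2, 2, -1, 0, 3, -2]
print(round(cohens_kappa(coder_1, coder_2), 2))
```

Values near 1 indicate strong agreement beyond chance; low values flag passages whose coding should be reconciled before entering the matrix.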
Assessment
Advantages
The advantage of literature-based elicitation is its utilization of a broad knowledge base and—if sought—the scientific legitimacy of the elicitation results. This advantage is strengthened by the transparent reference to recognized expert knowledge and by recording and balancing scientific dissent. Nevertheless, a subjective component arises in the interpretation of the text passages and in the selection of literature. However, this aspect can be mitigated by formalized and documented search strategies and by employing multiple coders. Another advantage is the independence from external experts and from the effort to recruit them to contribute to the data elicitation.
Disadvantages
If performed carefully, including by using the objectivity enhancement measures described above, this form of data elicitation can be highly time-consuming. A fundamental difficulty with literature-based surveys can also arise for CIB studies that investigate developments whose interrelationship has not yet been addressed in the literature but about which knowledge is nevertheless available from experts. Such “uncharted territory” is a particularly attractive field of application for CIB. However, this means that the necessity may arise to switch to other elicitation methods for parts of the matrix.
Examples
Descriptors, descriptor variants, and/or cross-impact data have been collected in
many CIB studies through a literature review, e.g., Weimer-Jehle et al. (2011),
Schweizer and Kriegler (2012), Meylan et al. (2013), Centre for Workforce
Intelligence (2014), Schweizer and O’Neill (2014), Ayandeban (2016),
Shojachaikar (2016), Musch and von Streit (2017), Kalaitzi et al. (2017),
Sardesai et al. (2018), and Mitchell (2018).
Descriptor Screening
For descriptor screening, the expert panel receives information about the subject
of the scenario analysis and is asked to suggest descriptors in writing or online.
A structured selection of the interviewees can be used to pursue an appropriate
representation of interdisciplinary perspectives, scientific controversies or (in
the case of stakeholders) different interests. A public online survey can also be
considered a special form of elicitation (Mowlaei et al., 2016). Expert elicitation
can also be used as a validation step after a literature review (Meylan et al.,
2013; Pregger et al., 2020).
Descriptor Ranking
If the expert panel is surveyed again after a list of possible descriptors has been
compiled and asked to assess the relevance of the proposed descriptors, for example by assigning weight scores, this form of survey can also be used for
descriptor ranking and thus for the final selection of the descriptors to be
considered in the CIB analysis. An obvious option is to select the descriptors
with the highest average scores. However, other approaches can also be found in
the literature for transforming expert judgments into a selection. Schweizer and
O’Neill (2014), for example, surveyed a group of experts online and asked each
individually to mark the most important descriptors on a list. All descriptors
nominated by at least 25% of the experts were then used for the CIB analysis.
Pregger et al. (2020) sent a list of descriptor suggestions to experts in different
disciplines and asked them to assign 0–10 points for each descriptor suggestion.
The researchers then evaluated separately according to discipline which
descriptors best represented the perspectives of each discipline.
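The nomination-share rule reported for Schweizer and O’Neill (2014) can be operationalized as in the following sketch (descriptor names and votes are invented for illustration):

```python
# Sketch of a nomination-share selection rule: keep every descriptor marked
# as important by at least a given share of the experts (25% in the example
# reported above). All names and votes below are hypothetical.

votes = {  # expert -> descriptors marked as most important
    "E1": {"GDP", "Energy prices", "Regulation"},
    "E2": {"GDP", "Innovation"},
    "E3": {"GDP", "Regulation"},
    "E4": {"Energy prices"},
}

def select_by_share(votes, threshold=0.25):
    """Keep descriptors nominated by at least `threshold` of the experts."""
    n = len(votes)
    counts = {}
    for marked in votes.values():
        for d in marked:
            counts[d] = counts.get(d, 0) + 1
    return sorted(d for d, c in counts.items() if c / n >= threshold)

print(select_by_share(votes))
```

Raising the threshold shrinks the descriptor list; the appropriate cutoff remains a project-specific design decision.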
Descriptor Variants
Experts may also be asked to suggest alternative futures for the descriptors. An
approach that can be combined with the literature-based approach is one in
which the core team prepares written drafts for the definition of the descriptor
variants after a literature review and asks experts to comment on the drafts
(Pregger et al., 2020). This variant reduces the workload for the expert panel. In
addition, the drafts produced by the core team can incorporate all project
requirements and design decisions (such as the number of descriptor variants or
whether to prioritize central or peripheral variants) from the outset, thereby
providing guidance to the experts for their comments.
Cross-impact Data
The alternative to using literature as “written expert knowledge” is to approach
experts directly and ask them to assess the cross-impacts. For the written survey,
experts are first selected and approached on the basis of their proven expertise in
the field (for example, in the form of publications or project experience). In the
case of an interdisciplinary descriptor field, expert selection must reflect the
disciplinary range and should also cover the most important expert controversies
in the individual knowledge domains, if any. To avoid subjective survey results,
several experts should be included for each discipline. The experts receive the
following:
A brief description of the project and the intended use of the scenarios.
A working guide to the assessment task, including a cursory outline of how cross-impact data are used by CIB.
Definitions of descriptors and their variants (“descriptor essays”).
Blank forms for entering cross-impact ratings and explanations. It is
convenient to offer blank forms in different formats to accommodate the
different working habits of the experts (printout, spreadsheet file, scw-file).20
It is advisable to ask the experts not only for the cross-impact ratings but
also for brief justifications, i.e., explanations of the impact mechanism they
assume. For this purpose, the work instructions should use a negative and a positive example to sensitize the experts not merely to paraphrase the cross-impact rating but to formulate an actual explanation for the coding.21 To limit
the workload for the experts, they should not be asked to provide reasons for
each individual judgment cell but only to provide explanations at the judgment
section level. It is also helpful to ask the experts for a self-assessment of their
confidence in each judgment section.
Judgment explanations are not technically required for CIB evaluation.
However, they can significantly increase the quality of the analysis, as they
make the experts’ cross-impact ratings more comprehensible for the core team
and for third parties and enrich the subsequent scenarios with substance because
such explanations can be used in the scenario descriptions and storylines. They
also serve quality assurance purposes, as they make it easier to identify technical
rating errors (for example, sign errors; see Sect. 6.3.2) and to address assessment
dissent. However, providing judgment explanations means considerable additional work for the expert panel, which must be considered when scoping the assessment task and when announcing the expected time commitment to the invited experts.
In the simplest case, the responses of different experts are summarized and
evaluated by forming a sum matrix (cf. Sect. 4.5.3). However, a careful
examination of controversial assessments (if any) is preferable. Procedures for
handling expert dissent are described in Sect. 4.5. As mentioned, it is also
advisable for the core team to compare the cross-impact ratings of the experts
with the explanations (if available) to identify technical errors of judgment and
to be able to correct them in consultation with the experts concerned.
Fig. 6.13 Two ways to construct a cross-impact matrix by expert group elicitation
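The formation of a sum matrix and the screening for dissent described above can be sketched as follows (toy ratings; the dissent threshold is an arbitrary choice for illustration):

```python
# Sketch (hypothetical numbers): combining several experts' ratings for the
# same cross-impact cells into a sum matrix, and flagging cells with a wide
# spread of ratings for discussion with the panel (cf. Sect. 4.5).

# Three experts' ratings for two cells (source variant, target variant)
ratings = {
    ("A1", "B1"): [2, 3, -1],    # expert 3 disagrees in sign
    ("A1", "B2"): [-1, -2, -1],
}

sum_matrix = {cell: sum(r) for cell, r in ratings.items()}
dissent = {cell: max(r) - min(r) for cell, r in ratings.items()}

# Cells whose rating spread exceeds an (arbitrary) threshold need follow-up
flagged = [cell for cell, spread in dissent.items() if spread >= 3]
print(sum_matrix)
print(flagged)
```

Simply summing, as the text notes, is the minimal option; examining the flagged cells with the experts concerned is preferable when resources allow.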
Assessment
Advantages
One advantage of this elicitation technique is the potentially high number of experts who can be involved and, due to the limited effort required, the low barrier to participation for them. Regardless of the form chosen, expert-based elicitation generally also has the advantage that the experts can project their system knowledge onto interdependencies on which they and others have not yet commented in publications, thus giving the CIB analysis access to genuinely new insights. Furthermore, expert dissent can usually be better identified and mapped in the CIB analysis by working with a group of experts than by interpreting the literature.
Disadvantages
There are also disadvantages. Expert knowledge in published written form has usually already been subjected to a quality check by other experts, which is not the case with expert statements acquired by direct questioning. Furthermore, expert assessments are also subject to the risk of being uncertain judgments made in the absence of reliable knowledge. Another disadvantage of using this elicitation method alone is the risk of basing the analysis on perhaps one-sided opinions by experts who were selected “by chance,” especially if the number of experts interviewed is small.
Examples
Descriptor lists have been collected using this elicitation technique, for instance,
by Wachsmuth (2015), Ayandeban (2016), and Mowlaei et al. (2016). Cross-
impact data from written surveys have been used by Förster (2002), Renn et al.
(2007, 2009), and Jenssen and Weimer-Jehle (2012), among others.
Descriptor/Variant Screening
Experts are approached, receive information about the subject of the scenario
analysis, and are asked to make suggestions for descriptors and descriptor
variants in an interview. A structured selection of the interviewees allows for an
adequate representation of interdisciplinary perspectives, scientific controversies
or (in the case of stakeholders) different interests.
A special form of descriptor screening by expert interview was used by
Uraiwong. Based on the interviews, the interviewees’ mental models of the
problem under study were formulated (“multistakeholder mental model
analysis”). The descriptors for the CIB analysis were then selected from the
factors used in the mental models (Uraiwong, 2013).
Cross-impact Data
Experts can also be asked for cross-impact assessments during interviews. If the
respondents are willing to participate in a sufficiently long interview or in a
series of interviews, the questioning can aim at filling out the entire matrix.
Interviewing several experts thus leads to an ensemble of matrices that can be
compared and either evaluated individually or conflated before evaluation.
However, interviews can also be used to prepare partial matrices (see Figs. 6.12
and 6.13). Each interview should then target the part of the matrix for which the
interview partner is particularly competent.
A variant of interview-based elicitation that is worth considering but has rarely been used is to ask the experts not directly for cross-impact ratings but
for a verbal description of the interrelationships. The transcribed descriptions
then provide a similar source for the system interrelationships as literature
quotations, and the core team can code the descriptions after the interview. The
advantage of this approach is that it spares the experts the unfamiliar coding task
and enables them to articulate their expertise in a way that is familiar to them.
The coding can be conducted by CIB experts who, unlike most interviewees,
have a precise idea of how different coding patterns play out in the CIB
algorithm. A negative aspect of this elicitation form, however, is the partial loss
of legitimacy for the matrix, which is only indirectly expert-based, and the risk
of not exactly reflecting the expert intentions with the coding. It also requires
experience on the part of the interviewer to continuously ensure during the
interview that the descriptions offered by the experts contain sufficient
substance for the subsequent coding.
In CIB practice, interviews are predominantly used in the form of face-to-
face interviews.22 However, there are also cases of telephone interviews and,
recently, online interviews.23 If the pool of experts is sufficiently large, one can also consider conducting the interviews not with individuals but with small groups of, for example, three experts in the same field of knowledge. Here, the
advantage is that discussions occur between the interviewees and thus richer and
provisionally validated justifications for the assessments can be expected.
Moreover, misjudgments due to misunderstandings or insufficiently reflected
reasoning are less likely. However, it should be noted that group discussions
may lead to extended interview times.
Assessment
Advantages
One advantage of interviews compared to workshops (Sect. 6.4.5) is that it is generally easier to obtain the participation of experts, as they have to invest less time than for a workshop. Interviews also have the advantage over the written survey that uncertainties about the assessment task or about the meaning of the descriptors and descriptor variants can be clarified immediately by the interviewer. Conversely, the interviewer can also ask immediate questions if the cross-impact assessment raises concerns that there is a sign error or that the ratings would have assumedly unintended effects on the evaluation (for example, if judgment sections are coded one-sidedly; see Sect. 5.3.1). A significant advantage is that the interviewer can directly work toward providing sufficient explanations and directly ensure that the explanations are comprehensible and not limited to a mere paraphrase of the fact of the influence. Another advantage over workshop-based expert elicitation (see below) cited in the literature is that separate interviews prevent experts from influencing one another, thus avoiding “groupthink” risks.24
Disadvantages
The disadvantage of using this elicitation method alone is the lack of critical questioning of the assessments by other experts. Additionally, due to the sequential interviewing of different experts, it can occur that clarifications of the task or of the definitions of descriptors and variants made during an interview create an interview situation that differs from that of preceding interviews, so that the interview results are founded on an inconsistent basis. In contrast, respondents in a workshop benefit equally from all clarifications made during the elicitation process. The quality of the interview results depends sensitively on the interviewer’s ability to recognize discomfort, uncertainties, and misinterpretations of the interviewee regarding the method and judgment task and his or her capability to resolve these issues. As a further disadvantage, the considerable time, coordination, and, often, travel effort for the core team associated with the numerous interviews must be considered.
Examples
Descriptors, descriptor variants, and/or cross-impact data through expert
interviews were collected by Schneider and Gill (2016), Schmid et al. (2017),
Brodecki et al. (2017), Musch and von Streit (2017), Pregger et al. (2020), and
Oviedo-Toral et al. (2021) among others. The special form in which information
about relationships was collected verbally and then coded by the core team was
used by Meylan et al. (2013) for part of the cross-impact data.
Descriptor Screening
In a workshop, after receiving preliminary written information about the
scenario analysis, experts are asked to propose descriptors and descriptor
variants. A structured selection of the participants can promote an adequate
representation of interdisciplinary perspectives, scientific controversies or (in
the case of stakeholders) different interests. The expert workshop can also be
used as a validation step after a literature review or a written or online survey or
interviews. Creativity techniques can also be used to stimulate working
procedures (for example, Biß et al., 2017; Ernst et al., 2018).
Descriptor Ranking
Descriptor ranking can be conducted in workshops by asking the expert panel to
rank a proposed list of descriptors by importance to the analysis through
discussion or by assigning points. This process can be followed by the final
selection of descriptors.
Cross-impact Data
The expert workshop is also a frequently used means to elicit cross-impact data.
To this end, the expert panel discusses descriptor interdependencies in a plenary
session or small groups to elaborate on the cross-impact ratings. If working in
small groups, the alternative procedures shown in Fig. 6.13 are also applicable.
As with interviews, one may consider requesting and documenting only verbal descriptions of the interrelationships in workshops and having them coded by method experts afterward.
Assessment
Advantages
One advantage of workshops over written surveys or interviews is the opportunity for discussion between experts and the resulting more intensive critical scrutiny and increased obligation to provide sound reasons for the assessments made. In this way, subjectivity can be reduced and poorly substantiated assessments can be filtered out. Another advantage is the direct encounter of different perspectives on the system under investigation. Whereas in a written survey or in interviews the multidisciplinary views of the problem are initially only juxtaposed and only later synthesized during the evaluation process, in an interdisciplinary workshop there is often a fruitful direct exchange among individuals with differing perspectives already during matrix creation, and frequently a genuinely new view of the problem emerges. The rationality ethos of face-to-face discourse can also encourage stakeholder participants to engage in fact-based as opposed to interest-driven discussion of the interrelationships. Participants in CIB workshops often report that this form of interdisciplinary system reflection was perceived as a new and inspiring experience, triggered a consideration of previously unreflected-upon aspects, and was thus also understood as benefiting the participants.
Disadvantages
Attracting recognized experts for one or more workshop dates can be challenging, and difficulties in doing so can result in compromises in workshop staffing, with consequences for the quality of the results. In the worst case, deficiencies in the discussion culture can undermine the inherent strengths of workshop-based elicitation. While proponents of this elicitation method hope that the assessments will benefit from the “wisdom of the group,” i.e., that the judgmental ability of a group is higher than that of the individual participants, skeptics worry that “groupthink,” i.e., fixation on one viewpoint due to conformity pressure within the group, will result in alternatives being ignored that might be equally well or better justified than the agreed-on alternatives.26 To compensate for this possible disadvantage of workshops, moderators must be careful to address “sidelined” discussion contributions and ensure that positions are only discarded on the basis of factual arguments.
Number of Participants
The choice of the number of experts requires finding a balance between
covering the thematic range of the matrix through expert competence, reducing
subjectivity in the expert statements, and creating a fruitful discussion
atmosphere. According to studies that provide information on the number of experts in workshops for the creation of a CIB matrix, a range of 5–40 experts is common, with the work usually divided among small groups when participant numbers are in the upper range.27 In most of the consulted studies,
participant numbers ranged from 7 to 18, with a median of 12. Fink et al. (2002)
recommended a group size of 8–13 as the optimal compromise between
groupthink risks and socialization effort.
Time Management
Estimating the time required to create a cross-impact matrix in an expert
workshop is challenging because the time needed can depend heavily on
individual project conditions. Determinants of the time required include the
proportion of zero-valued influence relationships in the matrix, how many
variants the descriptors have, and how difficult or controversial the impact
assessments prove to be for the expert panel. Nevertheless, it is inevitable in
workshop planning to make assumptions about the working speed. Empirical
data from CIB practice can serve as a guide. However, only a small number of
published CIB studies provide information on the time required for matrix
preparation. Thus, only a few indications are available. Based on the available
data and own experience, an average working speed of approximately 1.5–7 min
per judgment section can be estimated. All judgment sections of a matrix,
including the empty ones, are included in the average calculation, so that for a matrix with N descriptors, N(N – 1) judgment sections are always assumed. Both
the lower value and the upper value of the mentioned time interval seem to be
exceptions to the norm.28 In most cases, the working speed in the reviewed
studies was in the range of 4–5 min per judgment section. However, it is
generally true that time pressure reduces assessment quality. It can be expected
that the more time one can spend on the assessments, the higher the data quality.
These orientation values are average values and only apply if the experts
have the required interdependency knowledge at hand and only have to mentally
reflect before applying their knowledge to the given case for coding. If
assessments must first be deduced, extrapolated from analogies or debated,
considerably longer working times may be required for individual influence
relationships.
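These orientation values translate into a simple planning calculation, sketched below (the 4.5 min default reflects the 4–5 min range reported above; actual speeds vary by project and must be checked in a pretest):

```python
# Back-of-the-envelope planning aid for the rule of thumb above: a matrix
# with N descriptors has N * (N - 1) judgment sections (empty ones included),
# and observed working speeds cluster around 4-5 minutes per section.

def workshop_minutes(n_descriptors, minutes_per_section=4.5):
    sections = n_descriptors * (n_descriptors - 1)
    return sections * minutes_per_section

for n in (7, 10, 12):
    print(f"{n} descriptors: about {workshop_minutes(n):.0f} min")
```

For larger matrices, the estimate quickly exceeds a single workshop day, which underlines the value of a backup plan of the kind discussed next.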
Since the working time can only be approximately estimated and deviations
may occur, it is advisable to prepare a backup plan for use in the event of
unexpectedly long time requirements. Possible approaches are as follows:
Prearrangement of an optional additional appointment for another workshop.
Column-by-column division of the matrix into several parts and switching to
parallel processing in small groups as soon as a lack of time becomes
apparent in the course of the workshop.
Scheduling the processing of matrix parts, for which an elicitation alternative
is available if necessary (such as a literature-based elicitation), at the end of
the workshop.
However, in the interest of a uniform elicitation procedure, these approaches
should be used only when necessary.
General Recommendations
Pretest
Expert time is usually a scarce resource, and disruptions in the elicitation
process should be avoided as much as possible. The quality of the resulting data
can also be impaired if weaknesses in the elicitation concept only become
apparent during implementation. For all expert-based elicitation methods, it
therefore makes sense to conduct a pretest. This means working through the
matrix in advance on a trial basis using the chosen elicitation method with
people who are not involved in the project. In this way, deficiencies in work
instructions, process planning, or descriptor essays can be identified in advance,
and an initial project-specific estimate of the working speed can be obtained.
Combining Elicitation Methods
In addition to the use of a single elicitation method, there are numerous
instances in the literature where methods have been combined to collect cross-
impact matrices.29 Possible reasons for method combinations include:
The chosen elicitation method is the literature-based collection of cross-
impact data. However, for certain influence relationships, no literature
references can be found. Therefore, the respective relationships must be
elicited with an expert-based method.
A workshop-based creation of the matrix is planned. However, one expert
with essential expertise is unable to attend the workshop. This expert will
therefore be interviewed in advance about the parts of the matrix that are
assigned to his or her field of expertise, and the expert’s assessments and
justifications are subsequently introduced into the workshop.
The core team plans to conduct a written expert survey about the cross-impact
data. Some experts are familiar with the CIB method, while others are not.
The latter group is offered the option of being interviewed as an alternative to
completing a written survey so they can receive assistance during the
assessment procedure if needed.
In the first step, the cross-impact data are obtained by a written survey in a
group of experts. Comparison of the returned matrices reveals sufficient
agreement for many judgment sections but also substantially divergent results
for some judgment sections. In a subsequent workshop with the expert panel,
the divergent assessments are discussed, and a final assessment is developed
(see Sect. 4.5.4).
Format combinations can create potential for optimizing expert-based matrix preparation. A possible disadvantage, however, is that the elicitation method can affect the results, so that method combinations can lead to different matrix parts resting on unequal foundations.
Iteration and Scenario Validation
For expert-based data elicitation, an iterative repetition of the procedure can be
used as a quality assurance tool. After the cross-impact matrix has been created,
the scenarios are calculated and fed back to the expert panel in a validation
workshop. The goal here is to have the experts evaluate the scenarios regarding
their plausibility. Since the scenarios were constructed from the system
perspectives of the expert panel, it is to be expected that the panel will generally
endorse the scenarios as plausible. If, however, doubts are expressed about the
plausibility of certain scenarios, there are three possible causes:
1. The criticized scenario correctly corresponds to the system view of the expert panel. However, the experts cannot mentally reconstruct the complex reasons for the form of the scenario, and thus, there is no spontaneous “recognition” of their own system view.
2. The criticized scenario is based in part on cross-impacts that do not correctly reflect the system view of the expert panel because mistakes were made during coding or because the strength relations between the cross-impacts were not set appropriately.
3. During expert elicitation, irresolvable controversial assessments may have arisen for certain influence relationships. Therefore, when creating the matrix, one of the proposals was followed, while the others were disregarded. Consequently, the scenarios may no longer be plausible for the experts whose assessments were disregarded. If instead an averaging of the controversial assessments was performed, it is possible, though not inevitable, that the “compromise scenarios” would appear somewhat implausible to both sides.
When a scenario is criticized, it must first be clarified which case applies.
This clarification is conducted by reading out from the matrix the influences for
and against the criticized scenario as well as the influences for and against the
alternative scenario that was found more plausible by the expert panel. It is also
valuable if explanatory texts for the cross-impacts are collected during the
matrix creation, which can subsequently be used for the review step (cf. Sect.
4.2). Then, the expert panel can either trace and appreciate the reasons that led to
the construction of the scenario in its original form (case 1), or they can
specifically identify which influences coded in the matrix led to the
misconstruction (case 2). The cross-impact matrix can then be revised in a
targeted manner, or the plausibility critique can be linked concretely to specific
assessment controversies (case 3).
Ultimately, available resources and time constraints as well as the
willingness of the expert panel to participate determine whether an iteration step
can be envisaged. The basic process is illustrated for a fictitious example in
Nutshell VI (Fig. 6.14).
Examples
Expert workshops on data collection have been used, for example, by Schütze et
al. (2018), Lambe et al. (2018), Venjakob et al. (2017), Biß et al. (2017), Ernst et
al. (2018), Musch and von Streit (2017), Hummel (2017), Wachsmuth (2015),
Drakes et al. (2017, 2020), Mphahlele (2012), Weimer-Jehle et al. (2012), Fuchs
et al. (2008), and Aretz and Weimer-Jehle (2004). The special form of the core
team coding independently based on verbal interdependency descriptions can be
found, for example, in Schneider and Gill (2016) and Kurniawan (2018).
Assessment
Advantages: The advantage of using this method of data collection is the
reduced risk that subjectivity in the opinions of the core group or experts will
shape or distort the analysis. The responsibility for the quality of the data
collected in this way is delegated to the authors of the previous study or to the
theoretical concept used. Furthermore, the amount of effort required for this
form of data collection is usually low.
Disadvantages: The main disadvantage of the format is that it is rarely
applicable. In addition, the orientation to preliminary research or a theoretical
framework also means the renunciation of one’s own perspective and an
independent view of the problem.
Examples
Renn et al. (2007, 2009), Kemp-Benedict et al. (2014), Centre for Workforce
Intelligence (2014), Shojachaikar (2016), Hummel (2017), Schmid et al. (2017),
Kurniawan (2018), Mitchell (2018).
References
Aretz, A., & Weimer-Jehle, W. (2004). Cross Impact Methode. In Der Beitrag der deutschen
Stromwirtschaft zum europäischen Klimaschutz. Forum für Energiemodelle und energiewirtschaftliche
Systemanalyse, Hrsg, LIT-Verlag.
Ayandeban. (2016). Future scenarios facing Iran in the coming year 1395 (March 2016–March 2017) [in
Persian]. Ayandeban Iran Futures Studies. www.ayandeban.com
Biß, K., Ernst, A., Gillessen, B., Gotzens, F., Heinrichs, H., Kunz, P., Schumann, D., Shamon, H., Többen,
J., Vögele, S., & Hake, J. -F. (2017). Multimethod design for generating qualitative energy scenarios. STE
Preprint 25/2017, Forschungszentrum Jülich.
Blanchet, D. (1991). Estimating the relationship between population growth and aggregate economic
growth in developing countries: Methodological problems. In Consequences of rapid population growth in
developing countries. Taylor & Francis.
Brodecki, L., Fahl, U., Tomascheck, J., Wiesmeth, M., Gutekunst, F., Siebenlist, A., Salah, A., Baumann,
M., Brethauer, L., Horn, R., Hauser, W., Sonnberger, M., León, C., Pfenning, U., & O’Sullivan, M. (2017).
Analyse der Energie-Autarkiepotenziale für Baden-Württemberg mittels Integrierter
Energiesystemmodellierung. BWPLUS Report, State of Baden-Württemberg.
Cabrera Méndez, A. A., Puig López, G., & Valdez Alejandre, F. J. (2010). Análisis al plan nacional de
desarrollo - una visión prospectiva. XV Congreso internacional de contaduría, administración e informática,
México.
Centre for Workforce Intelligence. (2014). Scenario generation—Enhancing scenario generation and
quantification. CfWI technical paper series no. 7. See also: Willis G, Cave S, Kunc M (2018) Strategic
Workforce Planning in Healthcare: A multi-methodology approach. European Journal of Operational
Research, 267, 250–263.
Drakes, C., Cashman, A., Kemp-Benedict, E., & Laing, T. (2020). Global to small island; a cross-scale
foresight scenario exercise. Foresight, 22(5/6), 579–598. https://doi.org/10.1108/FS-02-2020-0012
Drakes, C., Laing, T., Kemp-Benedict, E., & Cashman, A. (2017). Caribbean Scenarios 2050—
CoLoCarSce Report. CERMES Technical Report No. 82.
Enzer, S. (1980). INTERAX—An interactive model for studying future business environments.
Technological Forecasting and Social Change, 17, Part I: 141–159; Part II: 211–242.
Ernst, A., Biß, K., Shamon, H., Schumann, D., & Heinrichs, H. (2018). Benefits and challenges of
participatory methods in qualitative energy scenario development. Technological Forecasting and Social
Change, 127, 245–257.
Fink, A., Schlake, O., & Siebe, A. (2002). Erfolg durch Szenario-Management—Prinzip und Werkzeuge
der strategischen Vorausschau. Campus.
Gordon, T. J. (1994). Cross-impact method. In The millennium project: Futures research methodology.
ISBN: 978-0981894119.
Gordon, T. J., & Hayward, H. (1968). Initial experiments with the cross impact matrix method of
forecasting. Futures, 1(2), 100–116.
Honton, E. J., Stacey, G. S., & Millet, S. M. (1985). Future scenarios—The BASICS computational method,
economics and policy analysis occasional paper (Vol. 44). Batelle Columbus Division.
Jenssen, T., & Weimer-Jehle, W. (2012). Mehr als die Summe der einzelnen Teile—Konsistente Szenarien
des Wärmekonsums als Reflexionsrahmen für Politik und Wissenschaft. GAIA, 21(4), 290–299.
Johansen, I. (2018). Scenario modelling with morphological analysis. Technological Forecasting and
Social Change, 126, 116–125.
Kalaitzi, D., Matopoulos, A., et al. (2017). Next generation technologies for networked Europe—Report on
trends and key factors. EU-Programm Mapping the path to future supply chains, NEXT-NET Project report
D2.1.
Kane, J. (1972). A primer for a new cross impact language-KSIM. Technological Forecasting and Social
Change, 4, 129–142.
Kemp-Benedict, E., de Jong, W., & Pacheco, P. (2014). Forest futures: Linking global paths to local
conditions. In P. Katila, G. Galloway, W. de Jong, P. Pacheco, & G. Mery (Eds.), Forest under pressure—
Local responses to global issues. Part IV—Possible future pathways. IUFRO World Series Vol. 32.
Kosow, H., Weimer-Jehle, W., León, C. D., & Minn, F. (2022). Designing synergetic and sustainable policy
mixes—A methodology to address conflictive environmental issues. Environmental Science and Policy,
130, 36–46.
Kunz, P. (2018). Discussion of methodological extensions for cross-impact-balance studies. STE preprint
01/2018, Forschungszentrum Jülich.
Kurniawan, J. H. (2018). Discovering alternative scenarios for sustainable urban transportation. In 48th
Annual conference of the Urban Affairs Association, April 4-7, 2018, Toronto, Canada.
Lambe, F., Carlsen, H., Jürisoo, M., Weitz, N., Atteridge, A., Wanjiru, H., & Vulturius, G. (2018).
Understanding multi-level drivers of behaviour change—A Cross-impact Balance analysis of what
influences the adoption of improved cookstoves in Kenya (SEI working paper). Stockholm Environment
Institute.
Lee, H., & Geum, Y. (2017). Development of the scenario-based technology roadmap considering layer
heterogeneity: An approach using CIA and AHP. Technology Forecasting & Social Change, 117, 12–24.
https://doi.org/10.1016/j.techfore.2017.01.016
Lloyd, E. A., & Schweizer, V. J. (2014). Objectivity and a comparison of methodological scenario
approaches for climate change research. Synthese, 191(10), 2049–2088.
Meylan, G., Seidl, R., & Spoerri, A. (2013). Transitions of municipal solid waste management. Part I:
Scenarios of Swiss waste glass-packaging disposal. Resources, Conservation and Recycling, 74, 8–19.
Mitchell, R. E. (2018). The human dimensions of climate risk in Africa’s low and lower-middle income
countries. Master thesis, University of Waterloo.
Mowlaei, M., Talebian, H., Talebian, S., Gharari, F., Mowlaei, Z., & Hassanpour, H. (2016). Scenario
building for Iran short-time future—Results of Iran Futures Studies Project. Finding futures in
uncertainties. In 6th International postgraduate conference, Department of Applied Social Sciences, Hong
Kong Polytechnic University, September 22-24, 2016.
Mphahlele, M. I. (2012). Interactive scenario analysis technique for forecasting E-skills development.
Dissertation. Tshwane University of Technology, Pretoria, South Africa.
Musch, A. -K., & von Streit, A. (2017). Szenarien, Zukunftswünsche, Visionen—Ergebnisse der
partizipativen Szenarienkonstruktion in der Modellregion Oberland. INOLA report no. 7, Ludwig-
Maximilians University.
Nakićenović, N., et al. (2000). Special Report on Emissions Scenarios. Report of the Intergovernmental
Panel on Climate Change (IPCC). Cambridge University Press.
National Research Council. (1986). Population growth and economic development: Policy questions.
National Academy Press.
Oviedo-Toral, L.-P., François, D. E., & Poganietz, W.-R. (2021). Challenges for energy transition in
poverty-ridden regions—The case of rural Mixteca, Mexico. Energies, 14, 2596. https://doi.org/10.3390/
en14092596
Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards socio-
technical scenarios of the German energy transition—Lessons learned from integrated energy scenario
building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0
Regett, A., Zeiselmair, A., Wachinger, K., & Heller, C. (2017). Merit Order Netz-Ausbau 2030. Teil 1:
Szenario-Analyse—potenzielle zukünftige Rahmenbedingungen für den Netzausbau. Project report,
Forschungsstelle für Energiewirtschaft (FfE).
Renn, O., Deuschle, J., Jäger, A., & Weimer-Jehle, W. (2007). Leitbild Nachhaltigkeit—Eine normativ-
funktionale Konzeption und ihre Umsetzung. VS-Verlag.
Renn, O., Deuschle, J., Jäger, A., & Weimer-Jehle, W. (2009). A normative-functional concept of
sustainability and its indicators. International Journal of Global Environmental Issues, 9(4), 291–317.
Rhyne, R. (1974). Technological forecasting within alternative whole futures projections. Technological
Forecasting and Social Change, 6, 133–162.
Saner, D., Beretta, C., Jäggi, B., Juraske, R., Stoessel, F., & Hellweg, S. (2016). FoodPrints of households.
International Journal of Life Cycle Assessment, 21, 654–663. https://doi.org/10.1007/s11367-015-0924-5
Saner, D., Blumer, Y. B., Lang, D. J., & Köhler, A. (2011). Scenarios for the implementation of EU waste
legislation at national level and their consequences for emissions from municipal waste incineration.
Resources, Conservation and Recycling, 57, 67–77.
Sardesai, S., Kamphues, J., Parlings, M., et al. (2018). Next generation technologies for networked Europe
—Report on future scenarios generation. EU-Programm Mapping the path to future supply chains, NEXT-
NET project report D2.2.
Schmid, E., Pechan, A., Mehnert, M., & Eisenack, K. (2017). Imagine all these futures: On heterogeneous
preferences and mental models in the German energy transition. Energy Research & Social Science, 27,
45–56. https://doi.org/10.1016/j.erss.2017.02.012
Schneider, M., & Gill, B. (2016). Biotechnology versus agroecology—Entrenchments and surprise at a
2030 forecast scenario workshop. Science and Public Policy, 43, 74–84. https://doi.org/10.1093/scipol/
scv021
Schütze, M., Seidel, J., Chamorro, A., & León, C. (2018). Integrated modelling of a megacity water system
—The application of a transdisciplinary approach to the Lima metropolitan area. Journal of Hydrology.
https://doi.org/10.1016/j.jhydrol.2018.03.045
Schweizer, V. J., & Kriegler, E. (2012). Improving environmental change research with systematic
techniques for qualitative scenarios. Environmental Research Letters, 7.
Schweizer, V. J., & Kurniawan, J. H. (2016). Systematically linking qualitative elements of scenarios
across levels, scales, and sectors. Environmental Modelling & Software, 79, 322–333. https://doi.org/10.
1016/j.envsoft.2015.12.014
Schweizer, V. J., & O’Neill, B. C. (2014). Systematic construction of global socioeconomic pathways using
internally consistent element combinations. Climatic Change, 122, 431–445.
Shojachaikar, A. (2016). Qualitative but systematic envisioning of socio-technical transitions: Using cross-
impact balance method to construct future scenarios of transitions. In International sustainability
transitions conference, September 6-9, 2016.
Slawson, D. (2015). A qualitative cross-impact balance analysis of the hydrological impacts of land use
change on channel morphology and the provision of stream channel services. In Proceedings of the
international conference on river and stream restoration—Novel approaches to assess and rehabilitate
modified rivers. June 30–July 2, 2015, Wageningen, The Netherlands (pp. 350–354).
Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677–680.
Tori, S., te Boveldt, G., Keseru, I., & Macharis, C. (2020). City-specific future urban mobility scenarios—
Determining the impacts of emerging urban mobility environments. Horizon 2020 project “Sprout”
Delivery 3.1. https://sprout-civitas.eu/
Uraiwong, P. (2013). Failure analysis of malfunction water resources project in the Northeastern Thailand
—Integrated mental models and project life cycle approach. Kochi University of Technology.
Venjakob, J., Schüver, D., & Gröne, M. -C. (2017). Leitlinie Nachhaltige Energieinfrastrukturen,
Teilprojekt Transformation und Vernetzung von Infrastrukturen. Project report “Energiewende Ruhr”,
Wuppertal Institut für Klima, Umwelt, Energie.
Vergara-Schmalbach, J. C., Fontalvo Herrera, T., & Morelos Gómez, J. (2012). Aplicación de la Planeación
por Escenarios en Unidades Académicas: Caso Programa de Administración Industrial. Escenarios, 10(1),
40–48.
Vergara-Schmalbach, J. C., Fontalvo Herrera, T., & Morelos Gómez, J. (2014). La planeación por
escenarios aplicada sobre políticas urbanas: El caso del mercado central de Cartagena (Colombia). Revista
Facultad de Ciencias Económicas, XXII(1), 23–33.
Vögele, S., Rübbelke, D., Govorukha, K., & Grajewski, M. (2019). Socio-technical scenarios for energy
intensive industries: The future of steel production in Germany in context of international competition and
CO2 reduction. STE Preprint 5/2017, Forschungszentrum Jülich.
Wachsmuth, J. (2015). Cross-sectoral integration in regional adaptation to climate change via participatory
scenario development. Climatic Change, 132, 387–400. https://doi.org/10.1007/s10584-014-1231-z
Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T.,
Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their usage
for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/10.1016/j.
energy.2016.05.073
Weimer-Jehle, W., Deuschle, J., & Rehaag, R. (2012). Familial and societal causes of juvenile obesity—A
qualitative model on obesity development and prevention in socially disadvantaged children and
adolescents. Journal of Public Health, 20(2), 111–124.
Weimer-Jehle, W., Wassermann, S., & Fuchs, G. (2010). Erstellung von Energie- und Innovations-
Szenarien mit der Cross-Impact-Bilanzanalyse: Internationalisierung von Innovationsstrategien im Bereich
der Kohlekraftwerkstechnologie. 11. Symposium Energieinnovation, TU Graz, February 10–12, 2010.
Weimer-Jehle, W., Wassermann, S., & Kosow, H. (2011). Konsistente Rahmendaten für Modellierungen
und Szenariobildung im Umweltbundesamt. Gutachten für das Umweltbundesamt (UBA), UBA-Texte
20/2011, Dessau-Roßlau.
Wiek, A., Keeler, L. W., Schweizer, V., & Lang, D. J. (2013). Plausibility indications in future scenarios.
International Journal of Foresight and Innovation Policy, 9, 133–147.
Footnotes
1 Secondary autonomous descriptors (descriptors that only receive influences from autonomous
descriptors) and secondary passive descriptors (descriptors that only influence passive descriptors) can also
be defined. Secondary autonomous descriptors become autonomous descriptors when the original
autonomous descriptors are removed from the matrix. Similarly, secondary passive descriptors become
passive descriptors when the original passive descriptors are removed.
2 Inverse solutions occur regularly when CIB matrices consist entirely of descriptors with two variants and
all judgment sections are exchange-symmetric, i.e., show the same numbers after swapping the order of the
variants for both descriptors.
3 Three descriptors can be found in Vergara-Schmalbach et al. (2012, 2014). Forty-three descriptors were
used by Weimer-Jehle et al. (2012).
4 A method for embedding disaggregated subsystems in a CIB matrix is discussed by Kunz (2018, Chapter
III.3).
5 For the question of choosing the realistic range for the descriptor variants, see Sect. 6.2.4.
6 When using the CIB software ScenarioWizard, there is a limit of nine variants per descriptor (at the time
of printing). As the statistics box “Descriptor variant numbers” shows, this does not lead to any practical
restriction.
7 In 1966, prior to the first scientific publication on the cross-impact method, Theodore Gordon and Olaf
Helmer developed the method’s basic concept and applied it for the first time in a game entitled “Future,”
which the Kaiser Aluminum and Chemical Company then distributed as a promotional gift (Gordon, 1994).
8 E.g., Kane (1972, “KSIM”), Enzer (1980, “INTERAX”), Honton et al. (1985, “BASICS”).
9 For the consistency matrix method, see Rhyne (1974), von Reibnitz (1987), Johansen (2018).
11 The rating interval [–2…+2] has been used by Wachsmuth (2015), Schmid et al. (2017), and Tori et al.
(2020), among others.
13 MINT: A group of academic disciplines, consisting of mathematics, informatics, natural sciences, and
technology.
14 Invariance property IO-1, Weimer-Jehle (2006: 343). For a proof see Weimer-Jehle (2009), Property XI.
15 The judgment group shown above could just as well have been rated [B1 B2] = [+1 0] to avoid double
negation.
16 This basic openness does not prevent the descriptor variants from being regarded as of varying
likelihood. However, a descriptor variant with probability 0 would hardly be a meaningful element of a CIB
analysis.
17 For example, the scenario study could be commissioned by a ministry of the environment that rules out
weakening its environmental legislation as a policy option and therefore wishes to examine only the effects
of different levels of strengthening environmental legislation.
21 For a cross-impact assessment [A3 Education focus: MINT -> B3 Economic growth: high = +2], for
example, the justification “MINT education promotes economic development” would be a paraphrase,
since it merely repeats in words what the cross-impact rating already expresses on its own. In contrast, an
explanation should reveal the reasoning of the rater regarding the score; e.g., it could read “MINT-oriented
education would, in the long run, increase the available human resources for corporate R&D departments
and applied university research, thus supporting the innovation-based ‘business model’ of
Somewhereland’s export economy.” (MINT education focuses on mathematics, informatics, natural
sciences, and technology)
22 For instance, Hummel (2017), Brodecki et al. (2017, in combination with a workshop as elicitation
format), Pregger et al. (2020).
23 Schmid et al. (2017). Schweizer and O’Neill (2014) also used telephone interviews among other forms
for cross-impact elicitation.
26 On the risk of groupthink in group-based scenario processes and the ability of CIB to avoid it, see
Lloyd and Schweizer (2014).
27 Among the CIB studies reviewed by the author in which matrices were elicited by expert workshops, 11
studies included information on the number of participants in the workshops.
28 The lower value of the interval (1.5 min per processing field) occurred, for example, in a study in which
all descriptors had only two variants, and in which the standardized judgment sections could mostly be
assumed to be internally antisymmetric, so that predominantly only one rating per judgment section was
required (Weimer-Jehle et al., 2012).
29 E.g., Weimer-Jehle et al. (2011), Meylan et al. (2013), Schweizer and O’Neill (2014), Brodecki et al.
(2017).
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
W. Weimer-Jehle, Cross-Impact Balances (CIB) for Scenario Analysis, Contributions to Management
Science
https://doi.org/10.1007/978-3-031-27230-1_7
7. CIB at Work
Wolfgang Weimer-Jehle1
(1) ZIRIUS, University of Stuttgart, Stuttgart, Germany
Wolfgang Weimer-Jehle
Email: [email protected]
From the results, the authors of the study were able to draw conclusions
regarding the relationship between societal and energy development. The
most unfavorable types of society for climate protection were found
between the poles of “inertia” and “market,” i.e., societies that achieve
moderate economic growth but do so on the basis of outdated technical
structures. Low climate gas emissions are found in the center and in the
middle bottom of the map. The lowest emissions, however, occurred for
less desirable scenarios, namely, for societies that implement an energy
transition driven by constraints under strong pressure, for example, when
they lose access to their sources of fossil fuels and whose demographic and
economic development is already impaired by such stress.3
Fig. 7.4 CIB analysis of obesity risks for children and adolescents for four case examples. Data from
Weimer-Jehle et al. (2012)
The reference case (a) shows how the influence network reacts when
social and individual context factors are set randomly. In this case, a clear
weight tendency emerges only for a few context cases. For most context
cases, the result is the medium risk class; i.e., for a group of individuals
with identical context conditions, a mixed weight trend would be expected.
Matters are different if we do not set the social context randomly but align it
with the conditions in Germany (and other OECD countries) at the time of
the study. Then, there are only a few case groups with certain weight loss,
and the scenario space is dominated by context cases with certain weight
gain and those with mixed weight developments (see analysis (b) in Fig.
7.4). However, if we turn our attention specifically to groups of individuals
who enjoy favorable individual conditions, clearly obesogenic scenarios can
be largely avoided despite the unfavorable societal context, and almost all
case groups remain at the intermediate risk level (analysis (c) in Fig. 7.4).
Only if certain key societal conditions were also reformed in a health-
promoting manner could the juvenile obesity risk be safely repressed, at
least for advantaged groups of individuals (analysis (d) in Fig. 7.4).
Unlike most CIB studies, the CIB analysis of juvenile obesity risks did
not examine long-term future developments but, rather, described a current
issue and its network of causes in the form of a system analysis. This
approach opens the possibility of comparing the statements of the CIB
analysis with reality and thus verifying the validity of the analysis results.
For this purpose, the study authors used the empirical data of the project
part “affected persons’ perspective.” The individual circumstances of the
participants in the affected persons surveys were anonymously entered into
the cross-impact matrix, and the analysis result of each individual case for
the weight tendency was compared with actual findings. The CIB analysis
achieved a rate of 90% accurate assessments.
This comparison covers only the first decade of the time horizon of the
scenarios, and the SRES scenarios have at least the merit that the actual course
lay within the scenario bundle. Nevertheless, it is significant that the mass of
SRES scenarios clearly underestimated the actual emissions course, because the
attention of politics and the public was naturally directed above all to the
scenarios in the middle range as the supposedly “most probable” or “most
credible” scenarios.
Under the impression of the actual emissions trajectory in the first
decade, Vanessa Schweizer and Elmar Kriegler from the Climate Decision
Making Center at Carnegie Mellon University (Pittsburgh) examined how
the internal consistency of the SRES scenarios presents itself from the
perspective of a CIB analysis (Schweizer & Kriegler, 2012). To this end,
they examined the six key drivers of the carbon intensity of energy
production.5 The guiding question of the study was which scenarios would
likely have emerged in 2000 if CIB had been used as the scenario
methodology in the SRES study. Therefore, in creating their cross-impact
matrix, Schweizer and Kriegler were careful only to use information that
was available to the team of authors of the SRES analysis.
When these data were finally evaluated using the CIB method,
surprising differences emerged between the SRES scenarios and the CIB-
based scenarios. The CIB analysis rated more than 70% of the SRES
scenarios as significantly inconsistent. Conversely, the CIB analysis
revealed numerous scenarios that were not captured in the SRES analysis.
The authors of the study noted as particularly significant that the additional
scenarios suggested by CIB were, for the most part, carbon-intensive
scenarios, precisely the type of scenarios that the SRES analysis had
identified but classified as peripheral to the possibility space. Moreover, the
higher- and high-carbon intensity scenarios in the CIB analysis were
characterized by particularly high average consistency (the key quality
measure for scenarios). In addition, the carbon-intensive scenarios were
also particularly robust in the sensitivity test, in which the authors varied
the cross-impact ratings that they considered uncertain.
Figure 7.7 shows how the SRES scenarios and the CIB scenarios are
distributed over the carbon intensity. This distribution reveals that the CIB
analysis assigned much higher importance to the category of high-emissions
scenarios than the SRES analysis did and that this (as we know today, crucial)
category was explored particularly intensively with scenarios.
The CIB analysis thus correctly recognized that the play of forces among
the drivers of carbon intensity has more opportunities for evolution toward
“high carbon worlds” than toward low-carbon futures and that the carbon-
intensive scenarios should therefore be at the center, not on the margins, of
consideration.
Fig. 7.7 Number of SRES and CIB scenarios in four classes of carbon intensity. Own illustration
based on data from Schweizer and Kriegler (2012)
CIB analysis can only classify scenarios on a coarse grid of carbon
intensity. However, it succeeded in doing this with a more accurate focus
than the SRES study because of its attention to interrelationships between
drivers. The climate simulations conducted in the years after the SRES study
would not have resulted in different estimates of the maximum and minimum
climate change if they had been based on the CIB scenarios.
However, the focus of the scenarios would have shifted significantly within
this range. Thus, had CIB been available at that time, the bulk of
climate scenarios would have provided much more serious advance warning
of the extent of impending climate change in the absence of climate policy
early in the twenty-first century than the SRES scenarios were able to do.
Vanessa Schweizer, the lead author of this study, later wondered what
difference it would have made if the SRES author team had had the CIB
analysis tool, which did not become widely available until 6 years after the
SRES report, and summarized (Schweizer, 2020): “Perhaps, global climate
policy commitments as seen in the Paris [Climate Protection] Agreement
could have materialized at the turn of the century, rather than 15–20 years
later.”
References
Ayandeban. (2016). Future scenarios facing Iran in the coming year 1395 (March 2016–March 2017)
[in Persian]. Ayandeban Iran Futures Studies. www.ayandeban.com
BMELV/BMG. (2008). In Form. Der Nationale Aktionsplan zur Prävention von Fehlernährung,
Bewegungsmangel, Übergewicht und damit zusammenhängenden Krankheiten, Bundesministerium
für Ernährung, Landwirtschaft und Verbraucherschutz und Bundesministerium für Gesundheit
Berlin, Germany.
Deuschle, J., & Weimer-Jehle, W. (2016). Übergewicht bei Kindern und Jugendlichen—Analyse
eines Gesundheitsrisikos. In L. Benighaus, O. Renn, & C. Benighaus (Eds.), Gesundheitsrisiken im
gesellschaftlichen Diskurs (pp. 66–98). EHV Academic Press.
IPCC. (2014). Climate Change 2014 – Impacts, adaptation, and vulnerability. Assessment Report 5,
Report of Working Group II. Intergovernmental Panel on Climate Change, Geneva. https://www.ipcc.
ch/report/ar5/wg2. Accessed 28 March 2019.
Maziak, W., Ward, K. D., & Stockton, M. B. (2008). Childhood obesity. Are we missing the big
picture? Obesity Reviews, 9, 35–42.
Nakićenović, N., Alcamo, J., Davis, G., de Vries, B., Fenhann, J., Gaffin, S., Gregory, K., Grübler,
A., Jung, T. Y., Kram, T., La Rovere, E. L., Michaelis, L., Mori, S., Morita, T., Pepper, W., Pitcher,
H., Price, L., Riahi, K., Roehrl, A., Rogner, H.-H., Sankovski, A., Schlesinger, M., Shukla, P., Smith,
S., Swart, R., van Rooijen, S., Victor, N., & Dadi, Z. (2000). Special report on emissions scenarios.
Report of the Intergovernmental Panel on Climate Change (IPCC). Cambridge University Press.
OECD. (2017). Obesity update 2017. Organisation for Economic Co-operation and Development.
www.oecd.org/health/obesity-update.htm
Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards
socio-technical scenarios of the German energy transition—lessons learned from integrated energy
scenario building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0
[Crossref]
Schippl, J., Grunwald, A., & Renn, O. (Eds.). (2017). Die Energiewende verstehen – orientieren –
gestalten. Erkenntnisse aus der Helmholtz-Allianz ENERGY-TRANS. Nomos Verlag.
Schweizer, V. J., & Kriegler, E. (2012). Improving environmental change research with systematic
techniques for qualitative scenarios. Environmental Research Letters, 7(4), 044011.
[Crossref]
Vögele, S., Hansen, P., Kuckshinrichs, W., Schürmann, K., Schenk, O., Pesch, T., Heinrichs, H., &
Markewitz, P. (2013). Konsistente Zukunftsbilder im Rahmen von Energieszenarien. STE Research
Report 3/2013, Forschungszentrum Jülich.
Vögele, S., Hansen, P., Poganietz, W.-R., Prehofer, S., & Weimer-Jehle, W. (2017). Scenarios for
energy consumption of private households in Germany using a multi-level cross-impact balance
approach. Energy, 120, 937–946. https://doi.org/10.1016/j.energy.2016.12.001
Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T.,
Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their
usage for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/
10.1016/j.energy.2016.05.073
Weimer-Jehle, W., Deuschle, J., & Rehaag, R. (2012). Familial and societal causes of juvenile
obesity—a qualitative model on obesity development and prevention in socially disadvantaged
children and adolescents. Journal of Public Health, 20(2), 111–124.
Footnotes
1 http://www.ayandeban.com.
8. Reflections on CIB
Wolfgang Weimer-Jehle1
(1) ZIRIUS, University of Stuttgart, Stuttgart, Germany
Email: [email protected]
8.1 Interpretations
Technically, the CIB algorithm uses cross-impact data to provide a
consistency evaluation of a proposed combination of descriptor variants.
The relevance of CIB analysis and its results for decision-making arises
from the interpretations we assign to the descriptors, descriptor variants,
cross-impact data, consistency criterion, and scenarios derived from the
evaluation. Since the variety of usage types of CIB analysis requires case-
specific interpretations, three different interpretations will be discussed
below, each appropriate for a particular usage type.
CIB scenarios of actors’ policy choices describe in which policy mixes all
actors have made an individually optimal policy choice to achieve their goals,
so that no actor within the policy mix has an immediate motive to change his
or her strategy.
CIB occupies a special position in the spectrum of analysis methods, since it
enables a causal system analysis based on qualitative system descriptions
without entering the fully quantified domain of real-valued variables and
operations.
Operating completely on the qualitative level has thus far only been achieved
by methods that avoid the use of causal information, at the cost of lower
system-analytical power (for example, the consistency matrix, cf. Sect. 8.5).
In summary, CIB also requires experts (serving as sources of qualitative
knowledge) to take a step out of the purely qualitative realm by coding their
knowledge on the interval-scaled cross-impact scale, either directly or as
mediated by the core team. The resulting data, in the form of small integers,
are then further processed by the method (appropriate to the nature of the
data) at a moderate level of quantification. The realm of full quantification
(real numbers and real number operations) is not entered by CIB at any
point in the procedure. Thus, CIB cannot be classified as a fully qualitative
scenario and analysis method. However, among customary causal-system-analytic
scenario methods, it comes closest to this ideal type. It comes even closer to
this ideal if it is applied without strength evaluation in the cross-impact
rating (i.e., using the rating scale [−1 … +1]), a methodological option that
is possible but not common.
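The moderate level of quantification described above can be made concrete with a short sketch of CIB's consistency test. The descriptors, variants, and integer ratings below are invented for illustration and are not taken from an actual CIB study; only the evaluation rule (each descriptor's active variant must attain the maximum impact sum within its impact balance) follows the method.

```python
# Minimal sketch of the CIB consistency test on a toy system.
# "Economy" and "Policy" and all ratings are invented for illustration.

from itertools import product

VARIANTS = {
    "Economy": ["growth", "stagnation"],
    "Policy":  ["reform", "status quo"],
}

# cross_impact[(src_desc, src_var)][(tgt_desc, tgt_var)] = small integer
# rating: positive = promoting, negative = hindering.
cross_impact = {
    ("Economy", "growth"):     {("Policy", "reform"): 2,   ("Policy", "status quo"): -2},
    ("Economy", "stagnation"): {("Policy", "reform"): -1,  ("Policy", "status quo"): 1},
    ("Policy", "reform"):      {("Economy", "growth"): 1,  ("Economy", "stagnation"): -1},
    ("Policy", "status quo"):  {("Economy", "growth"): -1, ("Economy", "stagnation"): 1},
}

def impact_sum(scenario, desc, variant):
    """Sum of the ratings received by `variant` from the active variants
    of all other descriptors (the column restricted to active rows)."""
    return sum(
        cross_impact.get((d, v), {}).get((desc, variant), 0)
        for d, v in scenario.items() if d != desc
    )

def is_consistent(scenario):
    """Consistent iff every active variant attains the maximum impact
    sum within its descriptor's impact balance."""
    for desc, active in scenario.items():
        balance = {v: impact_sum(scenario, desc, v) for v in VARIANTS[desc]}
        if balance[active] < max(balance.values()):
            return False
    return True

# Screen the complete scenario space (all variant combinations).
consistent = [
    dict(zip(VARIANTS, combo))
    for combo in product(*VARIANTS.values())
    if is_consistent(dict(zip(VARIANTS, combo)))
]
```

On this toy matrix, the screening yields the two self-reinforcing variant combinations and rejects the two mixed ones.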
⇒ Research questions of low complexity: For research questions for which the
answer can also be inferred by intuition or simple reasoning, the effort of a
CIB analysis is usually not justified. One example is a system that tends to
be bipolar such that a portfolio of a consistently favorable and a
consistently unfavorable scenario can be expected without further ado. Another
example is a system that has few and weak interdependencies such that there is
little constraint on how the descriptor variants can be combined.

⇒ Research questions in which quantification of all factors and relationships
is possible: Systems whose factors and interrelationships can be described by
quantitative variables and mathematical formulas without the need to exclude
important aspects should be investigated by mathematical analysis and modeling
rather than CIB analysis (see Sect. 8.5). The role of CIB is not to replace
adequate mathematical modeling but to provide a system-analytical method for
cases in which adequate mathematical modeling is impossible.

⇒ Research questions that include no asymmetry of causal relationships: For
systems whose factors are not connected by asymmetric causal relationships
(such as “A promotes B, but B does not promote A,” or, more severely, “A
promotes B, but B hinders A”) but in which all relationships between the
factors are reciprocal (“A and B promote one another,” or “A and B hinder one
another”), the use of the simpler consistency matrix method instead of CIB can
be considered (see Sect. 8.5).

⇒ Research questions for which sufficient knowledge about the relationships is
not available: The result of a CIB analysis is based on expert knowledge about
system interrelationships (either collected directly or indirectly by using
the research literature, in other words, expert knowledge documented in
writing). The quality and relevance of CIB analysis results presuppose the
quality and relevance of the underlying expert knowledge. CIB does not
“magically purify” unsubstantiated opinions: it is a case of “garbage in,
garbage out.” To avoid overestimating the substance of analyses based on
unreliable data, it should always be critically questioned whether sufficient,
albeit qualitative, knowledge about the essential system interrelationships is
available before a CIB analysis is conducted. Occasional knowledge gaps can be
addressed by sensitivity analysis. If, in contrast, a significant proportion
of the influencing relationships cannot be soundly assessed, consideration
should be given to abandoning the analysis.

⇒ Research questions that cannot be meaningfully treated at an aggregate
level: CIB analyses are usually performed using approximately 5–20
descriptors. In most cases, several subsystems are included, e.g., politics,
economy, society. Therefore, only a few descriptors are assigned to each
subsystem, and each aspect can only be addressed in a relatively aggregated
and generalized way. Thus, there is usually little room for detailed
differentiation within the various topics. Questions that require a high
degree of differentiation are therefore usually difficult to address in a CIB
analysis.

⇒ Research questions for which many interdependencies cannot be described,
even approximately, by bilateral influence relationships: The capture of
system interrelationships in the form of bilateral influence relationships is
a conceptual core element of all cross-impact analyses and thus also of CIB
analysis. Systems that cannot be adequately captured by this type of
conceptualization are difficult or impossible to access by the CIB approach.
If there are only a few interrelationships with a more complex structure, CIB
can still be applied by using special procedures (see Sect. 5.7). However, for
systems whose interrelationships escape the cross-impact concept to a large
extent, CIB analysis is not recommended.

⇒ Research questions on unstable systems with frequently changing trend
directions: CIB in its basic form identifies self-stabilizing trend
combinations or state combinations. It thus identifies the equilibrium states
or (if the descriptor variants are trends) the dynamic equilibria of a system.
Systems that do not tend to occupy steady states or present dynamic equilibria
for extended periods are therefore less suitable for CIB analysis in its basic
form. At a minimum, a dynamic interpretation of the CIB consistency algorithm
would be required (succession analysis; see Appendix).
8.5 Alternative Methods
The optimal choice of method for a scenario analysis requires a
comparative consideration of the alternatives. This section lists the most
important method alternatives to CIB and describes possible reasons that
could suggest the choice of a different method. The assessments refer to the
case in which a scenario analysis is to be performed with the aim of
creating qualitative scenarios.
The diversity of commonly used scenario methods requires that a selection be
made for presentation. Discussed are methods (1) that, like CIB, are used to
exploit qualitative knowledge for building scenarios and therefore might be of
interest as alternatives to CIB, (2) that have more favorable properties than
CIB in at least one respect (but are less favorable in other respects), and
(3) for which a considerable body of documented method applications exists, so
that assessments can be made on the basis of sufficient experience. For
certain methods, different variants exist. In these
cases, the methods discussed below must be understood as “family
representatives,” since it would be beyond the purpose of this book to
discuss all method variants individually.
References
Carlsen, H., Klein, R. J. T., & Wikman-Svahn, P. (2017). Transparent scenario
development. Nature Climate Change, 7, 613.
Drakes, C., Cashman, A., Kemp-Benedict, E., & Laing, T. (2020). Global to small Island; a cross-
scale foresight scenario exercise. Foresight, 22(5/6), 579–598. https://doi.org/10.1108/FS-02-2020-
0012
Drakes, C., Laing, T., Kemp-Benedict, E., & Cashman, A. (2017). Caribbean scenarios 2050 -
CoLoCarSce report. CERMES Technical Report No. 82.
Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up.
Brookings Institution Press.
Girod, B., Wiek, A., Mieg, H., et al. (2009). The evolution of the IPCC’s emissions scenarios.
Environmental Science & Policy, 12, 103–118.
Grunwald, A. (2013). Modes of orientation provided by futures studies: Making sense of diversity
and divergence. European Journal of Futures Research, 2, 30.
Honton, E. J., Stacey, G. S., & Millett, S. M. (1985). Future scenarios—The BASICS computational
method, economics and policy analysis occasional paper (Vol. 44). Battelle Columbus Division.
Inayatullah, S. (1990). Deconstructing and reconstructing the future: Predictive, cultural and critical
epistemology. Futures, 22, 116–141.
Inayatullah, S. (1998). Causal layered analysis: Poststructuralism as method. Futures, 30, 815–829.
Johansen, I. (2018). Scenario modelling with morphological analysis. Technological Forecasting and
Social Change, 126, 116–125.
Kane, J. (1972). A primer for a new cross impact language-KSIM. Technological Forecasting and
Social Change, 4, 129–142.
Kemp-Benedict, E. (2012). Telling better stories - strengthening the story in story and simulation.
Environmental Research Letters, 7, 041004.
Kosow, H. (2016). The best of both worlds? An exploratory study on forms and effects of new
qualitative-quantitative scenario methodologies. Dissertation, University of Stuttgart, Germany.
Kosow, H., Weimer-Jehle, W., León, C. D., & Minn, F. (2022). Designing synergetic and sustainable
policy mixes - a methodology to address conflictive environmental issues. Environmental Science
and Policy, 130, 36–46.
Lloyd, E. A., & Schweizer, V. J. (2014). Objectivity and a comparison of methodological scenario
approaches for climate change research. Synthese, 191(10), 2049–2088.
Musch, A.-K., & von Streit, A. (2017). Szenarien, Zukunftswünsche, Visionen - Ergebnisse der
partizipativen Szenarienkonstruktion in der Modellregion Oberland. INOLA report no. 7, Ludwig-
Maximilians University, Munich, Germany.
Panula-Ontto, J. (2019). The AXIOM approach for probabilistic and causal modeling with expert
elicited inputs. Technological Forecasting and Social Change, 138, 292–308.
Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards
socio-technical scenarios of the German energy transition - lessons learned from integrated energy
scenario building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0
Prehofer, S., Kosow, H., Naegler, T., Pregger, T., Vögele, S., & Weimer-Jehle, W. (2021). Linking
qualitative scenarios with quantitative energy models. Knowledge integration in different
methodological designs. Energy, Sustainability and Society, 11, 25. https://doi.org/10.1186/s13705-
021-00298-1
Scheele, R., Kearney, N. M., Kurniawan, J. H., & Schweizer, V. J. (2018). What scenarios are you
missing? Poststructuralism for deconstructing and reconstructing organizational futures. In H.
Krämer & M. Wenzel (Eds.), How organizations manage the future - theoretical perspectives and
empirical insights. Springer International Publishing, Chapter 8. https://doi.org/10.1007/978-3-319-
74506-0_8
Schneider, M., & Gill, B. (2016). Biotechnology versus agroecology - entrenchments and surprise at
a 2030 forecast scenario workshop. Science and Public Policy, 43, 74–84. https://doi.org/10.1093/
scipol/scv021
Schweizer, V. J., & Kriegler, E. (2012). Improving environmental change research with systematic
techniques for qualitative scenarios. Environmental Research Letters, 7, 044011.
Schweizer, V. J., & O’Neill, B. C. (2014). Systematic construction of global socioeconomic pathways
using internally consistent element combinations. Climatic Change, 122, 431–445.
Schweizer, V. J., Scheele, R., & Kosow, H. (2018, June). Practical poststructuralism for confronting
wicked problems. In 9th International Congress on Environmental Modelling and Software, Fort
Collins.
Sterman, J. D. (2000). Business dynamics. Systems thinking and modeling for a complex world. Irwin
McGraw-Hill.
Vögele, S., Poganietz, W.-R., & Mayer, P. (2019). How to deal with non-linear pathways towards
energy futures. Concept and application of the cross-impact balance analysis.
Technikfolgenabschätzung in Theorie und Praxis, 29(3).
Weimer-Jehle, W., Wassermann, S., & Kosow, H. (2011). Konsistente Rahmendaten für
Modellierungen und Szenariobildung im Umweltbundesamt. Expert report for the German Federal
Environment Agency (UBA), UBA Report 20/2011, Dessau-Roßlau, Germany.
Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T.,
Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their
usage for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/
10.1016/j.energy.2016.05.073
Weimer-Jehle, W., Vögele, S., Hauser, W., Kosow, H., Poganietz, W.-R., & Prehofer, S. (2020).
Socio-technical energy scenarios: State-of-the-art and CIB-based approaches. Climatic Change.
https://doi.org/10.1007/s10584-020-02680-y
Wiek, A., Keeler, L. W., Schweizer, V., & Lang, D. J. (2013). Plausibility indications in future
scenarios. Int. J. Foresight and Innovation Policy, 9, 133–147.
Footnotes
1 For the concept of plausibility and its application to scenarios, see Wiek et al. (2013) and Schmidt-
Scheele (2020).
2 This conclusion presupposes certain system properties, in particular the ergodicity of the system,
i.e., its ability to effectively search its complete state space for stable states.
3 The following description of the strengths, problems, and limitations of CIB analysis draws
heavily on the method discussion in Weimer-Jehle et al. (2020).
4 However, the complete screening of the possibility space is not a unique feature of CIB. It is also
applied by the consistency matrix method (Rhyne, 1974; von Reibnitz, 1987). The difference
between the two methods lies in the type of data analyzed. Consistency matrix analysis uses
correlational information about system interdependencies, whereas CIB uses causal information.
5 Examples of CIB studies with large scenario sets include Schweizer & O’Neill, 2014 and Pregger
et al., 2020.
6 Leading theorists of poststructuralism were Michel Foucault, Jacques Derrida and others.
7 Somewhat more modest is the time requirement of the consistency matrix method, which uses
correlational instead of causal data. This approach assumes symmetry in the system relationships,
and therefore, only half of the matrix must be filled in.
8 Very large matrices can be solved by using approximation algorithms, in particular the Monte
Carlo method integrated in the ScenarioWizard software (see Appendix, Network Analysis section).
9 One way to look at series of trend changes in CIB is succession analysis. However, this type of
analysis is not covered in depth in this book.
10 Here, “justifiably” means that the experts acknowledge the arguments that CIB makes for the
scenario but still reject the scenario.
Appendix: Analogies
CIB’s concept of conducting a qualitative scenario or system analysis is
basically plausible in itself, and the broad reception of the method in
scenario practice can be interpreted as a form of empirical confirmation of
the concept’s validity. Nevertheless, it can be questioned whether the CIB
algorithm can be theoretically justified. Since there is no general theory of
social systems from which a derivation could be considered, the pursuit of
theoretical justification means looking for analogous systems analysis
concepts in other research domains.
For a scenario heuristic that seeks its legitimacy first in practical
success, the search for theoretical analogies to the CIB algorithm is
surprisingly rewarding, all the more because these analogies consist of not
only conceptual similarities but also strict mathematical equivalences. The
presence of these analogies confirms that the CIB algorithm can be
understood as a transfer of general systems analytic principles to the field of
qualitative scenario analysis. From a practical standpoint, the analogies are
significant because each sheds new light on CIB and entails the possibility
of understanding CIB and its algorithm in a different way. Moreover, as will
be shown by a discussion of the game-theoretic analogy of CIB, analogies
can also inspire new fields of application for the method.
Nevertheless, these considerations are primarily theoretical in nature.
The necessary excursions into the theoretical foundations of other
disciplines may be less inspiring for more practically oriented readers who
may prefer to skip this part of the Appendix.
Physics
The analogy to the theory of equilibrium of forces in physics is the oldest
known CIB analogy and was noted in the original publication of CIB as a
means to support the plausibility of the CIB algorithm.1 The analogy uses
the metaphor of a heavy body moving under the effect of gravity in a
“terrain profile” and ending up at the lowest point of the terrain and
stabilizing there (Fig. A.1).
Fig. A.1 Analogy of the equilibrium of forces: valleys as rest points for heavy bodies
When the impact balances are drawn with positive values downward, they
form terrain profiles in which the descriptors prefer deep positions in the
manner of heavy bodies by seeking the descriptor variants of high impact
sums (Fig. A.2). The impact balances are negatively plotted in this diagram
because high impact sums in CIB indicate consistency, just as valleys in
physics indicate stability.
Fig. A.2 “Terrain profiles” of Somewhereland descriptors in the case of an inconsistent scenario
However, it can also occur that the deformation of the terrain profiles in
the course of the transition of the unstable descriptors into valleys is so
strong that previously stable descriptors now become unstable. Then, the
respective descriptors must move further until finally a state is reached in
which all descriptors are simultaneously positioned in valleys.
Network Analysis
Networks are used in complexity research to study complexity phenomena
through mathematical experiments. One concept used for this purpose is
Boolean networks or Kauffman networks. They are named after Stuart
Kauffman, a pioneer in complexity research and cybernetic biology.
Kauffman proposed this form of network as a conceptual model for the self-
organized (“autocatalytic”) emergence of primordial life from concatenated
protein synthesis reactions, for cell differentiation processes, and generally
for order–disorder phase transitions (Kauffman 1969a, 1969b, 1993).2
In Boolean networks, each node can be active or inactive, and the
network evolves in generational steps. According to a set of rules, the
decision of which state a node will adopt in the next generation is
determined by the current states of the nodes it is connected to. Thus, a rule
could be, for instance, that node no. 87 is active in the next generation if (1)
node no. 12 is currently active and (2) either node no. 35 or node no. 92 is
currently active.
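The generational update of such a network can be sketched in a few lines. The rule for node no. 87 mirrors the example in the text; the rules for the remaining nodes are arbitrary fillers, invented only to make the network complete.

```python
# Sketch of a synchronous Boolean (Kauffman) network step.
# Node 87's rule follows the example in the text: active next generation
# iff node 12 is active and node 35 or node 92 is active.
# The other rules are invented fillers.

rules = {
    87: lambda s: s[12] and (s[35] or s[92]),
    12: lambda s: s[87],
    35: lambda s: not s[92],
    92: lambda s: s[35],
}

def step(state):
    """Next generation: every node is updated simultaneously from the
    current states of the nodes it is connected to."""
    return {node: bool(rule(state)) for node, rule in rules.items()}

state = {12: True, 35: True, 87: False, 92: False}
state = step(state)   # node 87 becomes active: 12 is on and 35 is on
```

A consistent CIB scenario corresponds to a fixed point of such an update, i.e., a state that `step` maps onto itself.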
The analogy between this form of network analysis and CIB arises
because CIB can be understood as a generalized Boolean network (Weimer-
Jehle 2008).3 This view is possible because the inference from impact
balances as to which descriptor variant should be active can be exactly
expressed by Boolean rules. For instance, the decision of which variant of
the Somewhereland descriptor “B. Foreign Policy” is activated in a
consistent scenario can be exactly formulated by the Boolean rules shown
in Table A.1.
Table A.1 Representation of the Somewhereland descriptor column “B. Foreign Policy” in Boolean
rules. (Just like the cross-impact matrix, the rule set allows two B variants in certain cases)
Game Theory
Game theory is a branch of mathematics that studies decision problems for
groups of interacting individuals or organizations. The fields of application
of game theory include, among others, the analysis of social conflicts and
economic competition.
The type of “games” in which an analogy to scenario development and
CIB arises concerns the case where a number of players each have a choice
among individual game strategies. A player gains or loses depending not only
on his or her own choice of strategy but also on the strategies chosen by the
other players. A favorable strategy
can become unfavorable if a fellow player changes his or her strategy and
vice versa. A “payoff table” regulates the success of each player for each
possible strategy combination of all players. Game theory then seeks
answers to the question of which rational strategy choices can be expected
from players when each strives to optimize his or her own net gain.
Here, an analogy with CIB’s task of constructing consistent scenarios
from cross-impact data emerges (Fig. A.5). In this analogy, the descriptors
take the role of the players, and the descriptor variants represent the
strategic options of the players. The cross-impact matrix regulates which
gains (positive impacts) or losses (negative impacts) accrue to a player from
the decisions of fellow players, depending on the player’s own game
strategy. While in game theory each player wants to maximize his or her
win–loss balance, CIB assumes that each descriptor activates the variant
that maximizes its balance of promotions and inhibitions (the impact sum).
Fig. A.5 Analogy between CIB and game theory
CIB thus represents a game in which each player has a small number of
different game strategies at his or her disposal and in which each player is
in an individual game relationship with every other player. For each game
relationship, the player’s game outcome depends on his or her own strategy
decision and that of the fellow players according to fixed rules (the
respective judgment sector of the cross-impact matrix). The outcome is
determined separately for each player relationship and is collected or paid
out by a bank. Thus, a player’s overall success depends on choosing a
strategy that is successful with respect to as many fellow players (and their
strategies) as possible. CIB is not a zero-sum game in which A can only win
what B loses. Instead, depending on the nature of the player relationships
(i.e., the judgment sections), joint wins, joint losses, one-sided indifference,
and competitive wins and losses can occur.
The question now is how game theory proceeds to determine the
strategy configurations to which the players are likely to converge in the
end (i.e., the consistent scenarios from the CIB perspective). The central
concept for this process is Nash equilibria, named after the mathematician
John Forbes Nash (Nash 1950, 1951).4 A Nash equilibrium is characterized
by the fact that no player can unilaterally improve his or her outcome by
changing his or her strategy. If this is the case for all players
simultaneously, it is in the self-interest of each player to maintain his or her
strategy, and a stable strategy configuration of the game is found.
Mathematically speaking, the players thereby implement a discrete
multiobjective optimization because each player pursues his or her own
profit goal and each has a finite number of alternative actions available for
this purpose.
Thus, game theory solves its problem in formally the same way that
CIB pursues its task as a scenario method. In CIB, a scenario is consistent if
no descriptor can unilaterally improve its impact sum by switching to
another variant, i.e., if each descriptor achieves its maximum net promotion
in the given environment of the other descriptor variants. Thus, we can say
that CIB solves the task of scenario construction in a way that is consistent
with general mathematical concepts of discrete-value multiobjective
optimization.
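A minimal sketch of this correspondence, assuming an invented two-player game with additive pairwise payoffs (the form the cross-impact matrix presupposes): the brute-force search for Nash equilibria below has exactly the structure of CIB's screening of the scenario space for consistent scenarios.

```python
# Sketch of the game-theoretic reading of CIB on an invented game.
# Players "A" and "B", their strategies, and all payoffs are illustrative.

from itertools import product

STRATEGIES = {"A": ["a1", "a2"], "B": ["b1", "b2"]}

# pairwise_payoff[(player, strategy)][(other, other_strategy)] = payoff
# accruing to `player` from that single relationship (not zero-sum).
pairwise_payoff = {
    ("A", "a1"): {("B", "b1"): 2,  ("B", "b2"): -1},
    ("A", "a2"): {("B", "b1"): 0,  ("B", "b2"): 1},
    ("B", "b1"): {("A", "a1"): 1,  ("A", "a2"): -2},
    ("B", "b2"): {("A", "a1"): -1, ("A", "a2"): 2},
}

def payoff(profile, player):
    """Total payoff of `player`: the sum over all game relationships
    (the analogue of CIB's impact sum of the chosen variant)."""
    own = profile[player]
    return sum(
        pairwise_payoff.get((player, own), {}).get((other, s), 0)
        for other, s in profile.items() if other != player
    )

def is_nash(profile):
    """Nash equilibrium: no player can improve by deviating alone."""
    for player in profile:
        best = max(
            payoff({**profile, player: alt}, player)
            for alt in STRATEGIES[player]
        )
        if payoff(profile, player) < best:
            return False
    return True

equilibria = [
    dict(zip(STRATEGIES, combo))
    for combo in product(*STRATEGIES.values())
    if is_nash(dict(zip(STRATEGIES, combo)))
]
```

Renaming "player" to "descriptor" and "payoff" to "impact sum" turns this equilibrium search into CIB's scenario construction, which is the substance of the analogy.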
Importantly, CIB’s game-theoretic analogy has a practical dimension in
addition to the theoretical support provided for its formal approach. Since
CIB determines its scenarios in a way that corresponds to Nash equilibria in
game theory, CIB can be used to address certain game-theoretic problems.
For example, CIB can be used to study policy design problems among
actors with conflicting goals, provided that the synergy and conflict
relationships can be formulated in terms of a cross-impact matrix, i.e., that
the synergy and conflict effects between the actors are at least
approximately additive (cf. Kosow et al. 2022).5
Glossary
Cross-impact matrix (in the context of CIB)
Autonomous descriptor: Autonomous descriptors exert influence but are not in
turn influenced by any other descriptor. Consequently, their column in the
cross-impact matrix is empty. Autonomous descriptors are suitable to represent
the external conditions of a system.

Column sum (of a descriptor variant): The sum of all cross-impact ratings in
the matrix column of a descriptor variant. In contrast to the impact sum, all
column entries are added when calculating the column sum, not only the column
entries that belong to the rows of the active descriptor variants. Therefore,
the column sum has a fixed value for each descriptor variant, while the impact
sums are scenario-specific.

Connectivity (of a matrix): The share of nonempty judgment sections in the
total number of judgment sections.

Cross-impact: A single entry in the cross-impact matrix indicating the
influence of one descriptor variant (row) on another (column).

Cross-impact matrix (in CIB): A matrix whose rows and columns are formed by
the variants of all descriptors and in which the direct promoting and
hindering influences between descriptor variants are noted in the form of
positive or negative strength ratings.

Descriptor: Descriptors are the key elements of a system that are required to
sufficiently characterize the system’s state or evolution and whose mutual
influence relationships are capable of explaining that state or evolution.

Descriptor field: The list of all descriptors of a cross-impact matrix.

Descriptor variant: Alternative states or developments (also termed, for
instance, alternative futures or descriptor categories) that a descriptor can
adopt.

Active descriptor variant: Descriptor variants that occur in a scenario are
referred to as active descriptor variants of the scenario (e.g., “in scenario
no. 1, the variant X3 is active for descriptor X”).

Driver descriptor: A descriptor that is not itself a target descriptor but
influences one or more target descriptors.

Ensemble (matrix ensemble): A collection of cross-impact matrices with
matching descriptors and descriptor variants but differing cross-impact
ratings. For example, a matrix ensemble can be the result of several experts
with (partly) different system views independently creating matrices.

Impact balance (of a descriptor): The impact balance of a descriptor consists
of the impact sums of all variants of this descriptor.

Impact sum of a descriptor variant (with respect to a scenario): Sum of all
cross-impact values in the matrix column of a descriptor variant, whereby only
the rows of the active descriptor variants are included in the summation.
Therefore, the impact sum of a descriptor variant is only valid for a specific
scenario and, unlike the column sum, is not a general property of the
descriptor variant.

Intermediary descriptor: A descriptor that is not a target descriptor and not
a driver descriptor but is influenced by them and influences one or more
driver descriptor(s).

Judgment cell: A matrix cell containing a single cross-impact rating (cf.
Fig. 3.7).

Judgment group: A horizontal row of judgment cells that summarizes the
influence of one descriptor variant on all variants of another descriptor
(cf. Fig. 3.7).

Judgment section: A rectangular section from the cross-impact matrix
containing the impacts of all variants of one descriptor on all variants of
another descriptor (cf. Fig. 3.7).

Passive descriptor: A descriptor that receives influences from other
descriptors but does not itself exert influences on other descriptors. This is
expressed in the cross-impact matrix by an empty descriptor row.

Predetermined descriptor: A descriptor is referred to as predetermined if the
cross-impact ratings in its column one-sidedly favor a certain descriptor
variant.

Predetermined matrix: Predetermined matrices are matrices that contain enough
predetermined descriptors to be constrained to a unique solution (scenario).

Scenario space: The scenario space consists of all combinatorially possible
(consistent and inconsistent) scenarios.

Sparse matrix: Cross-impact matrices with a high proportion of judgment cells
carrying the value zero. Matrices with more than approx. 70% empty cells can
be considered unusually sparse.

Specific cross-impact matrix: Reduced form of the cross-impact matrix for
describing the interdependencies within a specific scenario. The specific
cross-impact matrix is derived from the full cross-impact matrix by removing
the rows and columns of all nonactive descriptor variants from the matrix.

Systemic descriptor: A descriptor that receives influences from and exerts
influences on other descriptors.

Target descriptor: A descriptor that directly represents the research question
to be addressed by the CIB analysis.

Underdetermined descriptor: A descriptor that is subject to minor influences
from the other descriptors of a cross-impact matrix but whose major
determinants lie outside the descriptor field of the matrix (see Sect. 5.5).
Index
A
Absolute cross-impact 191–192, 195
Addition invariance 185
Application examples 219–230
Autonomous descriptor 46
B
Boolean networks 260
Boolean rules 260
C
Calibration of strength ratings 186
Central descriptor variant 175–176
CIB miniature
emerging country 132
global socioeconomic pathways 121
group opinion 159
mobility demand 152
oil price 113
resource economy 87
SmallCountry 145
social sustainability 136
Somewhereland 29
Somewhereland-City 71, 72
Somewhereland plus 55
water supply 77
Climate protection 227–230
Column sum 134, 148
Conservative scenarios 175–176
Consistency
consistency check 32–36
consistency profile 47–48
consistency value 44–47, 267
IC0 111
IC1 111
inconsistency value 46
marginal consistency 48, 268
Consistent scenarios
number of scenarios 107
Context-dependent impact 150–155
Context sensitive influence 247
Cross-impact 25–30, 177–194, 265
absolute cross-impact 191–192, 195
balance 183
calibration 186
compensation principle 186
context dependency 150–155
context-sensitive influence 247
cross-impact query 26
cross-impact statement 26
definition 177
dissent categories 85
double negation 187–188
empty judgment section 179
expert workshop 206–213
indirect influence 180–182
interviews 204–206
inverse coding 182–183
judgment cell 29, 266
judgment group 29, 266
judgment section 29, 266
key dissent 98
literature review 196–199
rating interval 26, 178, 179
rating scale 26
self-elicitation 195
sign balance 183
sign error 187–188
standardization 186
Cross-impact balances (CIB)
aggregation level 244
algorithm 30–40
algorithm (impact diagram) 34–35
algorithm (table) 37–39
causal model 241
challenges and limitations 243–248
CIB as a qualitative/semiquantitative method 236–238
consistency 247
consistency profile 47–48
consistency value 44–47
construction transparency 239–240
context-sensitive influence 247
game theoretical interpretation 262–264
impact balance 38, 44, 266
impact sum 34–35, 38, 266
inconsistency value 46
interpretations 233–234
key characteristics 44–49
knowledge integration 241–242
mental models as an object of analysis 248
network analogy 260–262
objectivity 242
physical analogy 258–260
policy design 235–236
scenario quality 239
screening of the scenario space 240
steady state system analysis 234–235
strengths 239–243
system boundary 244–245
time management 209
time-related interpretation of CIB 233–234
time required 243–244
time-unrelated interpretation of CIB 234–235
total impact score 49
Cross-impact matrix 25–30, 265
addition invariance 185
column sum 134, 148
conditional matrix 153
connectivity 179, 265
ensemble 204, 266
mean value matrix 91
multiplication invariance 91
sparse matrix 109–110, 266
specific matrix 101–102, 266
sum matrix 86–92
D
Data elicitation
based on previous work 213–214
expert workshop 206–213
interviews 204–206
iteration 211
literature review 196–199
method combination 210, 211
self-elicitation 195
theory-based 213–214
written/online 199–203
Descriptor 265
aggregation level 165, 166
ambivalent descriptor 138
autonomous descriptor 46, 158, 265
classification 167–171
definition 157, 265
descriptor field 22, 265
documentation 166
driver descriptor 162, 266
essay 166
intermediary descriptor 162, 266
nominal descriptor 169
number of descriptors 164, 165
ordinal descriptor 169
passive descriptor 158, 266
predetermined descriptors 188–189, 266
ratio descriptor 169
screening 196, 199, 204, 207
systemic descriptor 158, 267
target descriptor 54, 161, 162, 267
types 167–171
underdetermined descriptor 143–147, 267
variants 22–23
Descriptor variant 166–177
absence of overlap 173–174
central variant 175–176
characteristic variant 171
completeness 172
definition 166, 167
gradation 175
mutual exclusivity 172, 173
peripheral variant 175–176
phantom variant 189–191
rare descriptor variant 123
regular variant 171
robust variant 170
scales of measurement 168–169
vacant variant 147–150, 170
Dissent
categories 85
Distance table 115
Diversity 114–117
Diversity sampling 124–127, 268
Domino intervention 143
Double negation 187–188
Driver descriptor 162
E
Elicitation
pretest 210
Empty judgment section 179
Energy 222–223
Ensemble 87, 204, 266
ensemble evaluation 267
ensemble table 95, 96
Excursus
alternative scenario selection approaches 127
low descriptor variant frequency 123–124
Extreme scenarios 175–176
F
Feedback 211
G
Game theory 235–236
H
Health prevention 224–227
I
IC0 111
IC1 111
Impact
direct impact 29
impact diagram 32, 268
indirect impact 29
Impact balance 38, 44, 266
Impact sum 34–35, 38, 266
Inconsistency
frequency distribution 110–111
IC classes 110–111
inconsistency value 46
marginal inconsistency 50, 268
significant inconsistency 49–51
Indirect influence 180–182
Interdependence 25–30
Intermediary descriptor 162
Intervention 76–82
Intervention analysis 73–100
J
Judgment cell 29, 266
Judgment group 29, 266
Judgment section 29, 266
M
Marginal consistency 48
Marginal stability 48
Mean value matrix 91
Memo
M 1 - cross-impact query 26
M 2 - impact source and target 28
M 3 - condition of consistency 35
M 4 - significant inconsistency 50
M 5 - significant inconsistency in sum matrices 90
M 6 - multiplication invariance 91
M 7 - intervention analysis 141
M 8 - underdetermined descriptors 147
M 9 - descriptors (definition) 158
M10 - uniform aggregation level 166
M11 - qualitative interpretation of CIB descriptors 170
M12 - empty judgment section 180
M13 - direct and indirect influences 181
M14 - addition invariance 185
M15 - calibration of strength ratings 187
M16 - interpretation of CIB solutions as scenarios 234
M17 - interpreting CIB solutions as network configurations 235
M18 - CIB and Nash equilibria 236
M19 - unique character of CIB 238
Morphological analysis 24
N
Nash equilibrium 235–236, 263
Network analysis 16
Nuclear deal Iran 220–221
Nutshell
Nutshell I - intervention analysis 75
Nutshell II - group evaluation 99
Nutshell III - diversity sampling 126
Nutshell IV - context-dependent impact 151
Nutshell V - using subscenarios as descriptor variants 197
Nutshell VI - scenario validation 212
O
Overlap, absence of 173–174
P
Peripheral descriptor variant 175–176
Portfolio 107–117, 267
bipolar portfolio 136–143
distance table 115
diversity 114–117
diversity sampling 268
diversity score 115, 267
full presence 267
IC0 portfolio 53, 267
IC1 portfolio 53, 267
IC2 portfolio 53, 267
monotonous portfolio 131–135
number of scenarios 108
portfolio volume 108, 267
presence rate 111–112, 267
structuring a portfolio 54–64
Post-structuralism 242–243
Presence rate 111–112, 267
Pretest 210
Q
Quality assurance 186
R
Rating interval 26, 178, 179
S
Scenario 24, 268
axes diagram 60–64, 130
conservative scenario 175–176
consistent scenario 30–40, 268
construction 40
distance 114, 268
diversity sampling 268
extreme scenario 175–176
fully consistent scenario 268
inconsistent scenarios 34–35, 268
list format 41
marginally inconsistent scenario 49–51
number of scenarios 107–111
scenario axes 57–64
scenario portfolio 107–117
short format 41
surprise-driven scenarios 82–84
tableau format 41
validation 211
verification 219
Scenario axes 57–64, 130
Scenario space 24–25, 266
Scenario succession 262
Sensitivity analysis 96–97
Sign balance 183–185
Sign error 187–188
Somewhereland
cross-impact matrix 28
descriptors 22
Standardization 186
Statistics box
connectivity 180
cross-impact ratings 186–187
expansion factor 111
judgment uncertainty 193
number of descriptors 164, 165
number of descriptor variants 175
number of scenarios 108
portfolio diversity 116
zero-value density and scenario count 109
Storylines 227–230
Succession analysis 2, 250, 262
Sum matrix 86–92
Surprises 82–84
T
Tableau format 41
Target descriptor 54, 161, 162, 267
Time management 209
Total impact score (TIS) 49, 268
V
Vacancy 147–150
Validation 211
Variant 23–24
completeness 23–24
exclusivity 23–24
W
Wildcard 82
Footnotes
1 Weimer-Jehle W (2006) Cross-Impact Balances: A System-Theoretical Approach to Cross-Impact
Analysis. Technological Forecasting and Social Change, 73(4):334-361.
2 Kauffman SA (1969a) Metabolic stability and epigenesis in randomly constructed genetic nets.
Journal of Theoretical Biology 22:437–467.
Kauffman SA (1969b) Homeostasis and differentiation in random genetic control networks.
Nature 224:177–178.
Kauffman SA (1993) The Origins of Order, Oxford University Press, New York, Oxford.
3 Weimer-Jehle W (2008) Cross-Impact Balances - Applying pair interaction systems and multi-
value Kauffman nets to multidisciplinary systems analysis. Physica A, 387(14):3689-3700.
4 Nash JF (1950) Equilibrium points in n-person games. Proceedings of the National Academy of
Sciences, 36(1):48-49.
Nash JF (1951) Non-Cooperative Games. The Annals of Mathematics 54:286-295.
5 Kosow H, Weimer-Jehle W, León CD, Minn F (2022) Designing synergetic and sustainable policy
mixes - a methodology to address conflictive environmental issues. Environmental Science and
Policy 130:36–46.