
Contributions to Management Science

The series Contributions to Management Science contains research publications in all fields of business and management science. These publications are primarily monographs and multiple-author works containing new research results; selected conference-based publications are also considered. The focus of the series lies in presenting the development of the latest theoretical and empirical research across different viewpoints.
This book series is indexed in Scopus.
Wolfgang Weimer-Jehle

Cross-Impact Balances (CIB) for Scenario Analysis
Fundamentals and Implementation

Wolfgang Weimer-Jehle
ZIRIUS, University of Stuttgart, Stuttgart, Germany

ISSN 1431-1941 e-ISSN 2197-716X
Contributions to Management Science
ISBN 978-3-031-27229-5 e-ISBN 978-3-031-27230-1
https://doi.org/10.1007/978-3-031-27230-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively
licensed by the Publisher, whether the whole or part of the material is
concerned, specifically the rights of reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other
physical way, and transmission or information storage and retrieval,
electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice
and information in this book are believed to be true and accurate at the date
of publication. Neither the publisher nor the authors or the editors give a
warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The
publisher remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.
This Springer imprint is published by the registered company Springer
Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham,
Switzerland
Acknowledgments
The fact that it is possible today to present the CIB method based on an
extensive body of practice and multifaceted methodological research is due
to a growing community of method users and researchers, from which
numerous suggestions, inspirations, methodological innovations, and
critical questioning have emerged, thus helping the method mature. I would
like to express my gratitude to my colleagues worldwide who have been
inspired by the CIB method and shared their insights, experiences, and
criticism.
It is my pleasure to thank Dr. Diethard Schade, who initiated scenario research at the Center for Technology Assessment; Prof. Georg Förster, my companion during my first steps toward CIB; and Prof. Ortwin Renn, who fostered the development of CIB at the University of Stuttgart for 15 years through his continuous support.
I especially wish to thank my colleagues at the CIB-Lab of ZIRIUS at the University of Stuttgart for the journey we have undertaken together for so many years and for their inestimable contributions to the further development and maturation of the CIB method. Their research, motivation, wealth of ideas, and untiring support have always been an inspiration and encouragement to me. Without them, this book would not have come about in the way it has. I would also like to thank Dr. Wolfgang Hauser, Dr. Hannah Kosow, Prof. Vanessa Schweizer, and M.A. Sandra Wassermann for valuable comments on the book’s manuscript. All remaining errors are mine.
Abbreviations
ABM Agent-based modelling
BASICS Battelle Scenario Inputs to Corporate Strategy (scenario method)
C Consistency score of a descriptor or scenario
CO2 Carbon dioxide
CIB Cross-Impact Balances (scenario method)
D Diversity score of a scenario portfolio
FAR Field Anomaly Relaxation (scenario method)
Gt C Gigatons (billions of tons) of carbon
IC Inconsistency score of a descriptor or scenario
ICS Significance threshold of a scenario inconsistency
IL Intuitive Logics (scenario method)
IPCC Intergovernmental Panel on Climate Change
KSIM Kane’s Simulation Model (simulation method)
m Number of matrices of a matrix ensemble
MINT A group of academic disciplines consisting of mathematics, informatics, natural sciences, and technology
N Number of descriptors of a cross-impact matrix
OECD Organisation for Economic Co-operation and Development
q Quorum applied in an ensemble evaluation
SD System Dynamics (simulation method)
SRES Special Report on Emissions Scenarios
TIS Total impact score
Vi Number of states of descriptor i
Z Number of possible configurations of a morphological field
Contents
1 Introduction to CIB
References
2 The Application Field of CIB
2.1 Scenarios
2.2 Scenarios and Decisions
2.3 Classifying CIB
References
3 Foundations of CIB
3.1 Descriptors
3.2 Descriptor Variants
3.2.1 Completeness and Mutual Exclusivity of the Descriptor
Variants
3.2.2 The Scenario Space
3.2.3 The Need for Considering Interdependence
3.3 Coping with Interdependence: The Cross-Impact Matrix
3.4 Constructing Consistent Scenarios
3.4.1 The Impact Diagram
3.4.2 Discovering Scenario Inconsistencies Using Influence
Diagrams
3.4.3 Formalizing Consistency Checks: The Impact Sum
3.4.4 The Formalized Consistency Check at Work
3.4.5 From Arrows to Rows and Columns: The Matrix-Based
Consistency Check
3.4.6 Scenario Construction
3.5 How to Present CIB Scenarios
3.6 Key Indicators of CIB Scenarios
3.6.1 The Consistency Value
3.6.2 The Consistency Profile
3.6.3 The Total Impact Score
3.7 Data Uncertainty
3.7.1 Estimating Data Uncertainty
3.7.2 Data Uncertainty and the Robustness of Conclusions
3.7.3 Other Sources of Uncertainty
References
4 Analyzing Scenario Portfolios
4.1 Structuring a Scenario Portfolio
4.1.1 Perspective A: If-Then
4.1.2 Perspective B: Order by Performance
4.1.3 Perspective C: Portfolio Mapping
4.2 Revealing the Whys and Hows of a Scenario
4.2.1 How to Proceed
4.2.2 The Scenario-Specific Cross-Impact Matrix
4.3 Ex Post Consistency Assessment of Scenarios
4.3.1 Intuitive Scenarios
4.3.2 Reconstructing the Descriptor Field
4.3.3 Preparing the Cross-Impact Matrix
4.3.4 CIB Evaluation
4.4 Intervention Analysis
4.4.1 Analysis Example: Interventions to Improve Water Supply
4.4.2 The Cross-Impact Matrix and its Portfolio
4.4.3 Conducting an Intervention Analysis
4.4.4 Surprise-Driven Scenarios
4.5 Expert Dissent Analysis
4.5.1 Classifying Dissent
4.5.2 Rule-Based Decisions
4.5.3 The Sum Matrix
4.5.4 Delphi
4.5.5 Ensemble Evaluation
4.5.6 Group Evaluation
4.6 Storyline Development
4.6.1 Strengths and Weaknesses of CIB-Based Storyline
Development
4.6.2 Preparation of the Scenario-Specific Cross-Impact Matrix
4.6.3 Storyline Creation
4.7 Basic Characteristics of a CIB Portfolio
4.7.1 Number of Scenarios
4.7.2 The Presence Rate
4.7.3 The Portfolio Diversity
References
5 What if… Challenges in CIB Practice
5.1 Insufficient Number of Scenarios
5.2 Too Many Scenarios
5.2.1 Statistical Analysis
5.2.2 Diversity Sampling
5.2.3 Positioning Scenarios on a Portfolio Map
5.2.4 Further Procedures
5.3 Monotonous Portfolio
5.3.1 Unbalanced Judgment Sections
5.3.2 Unbalanced Columns
5.4 Bipolar Portfolio
5.4.1 Causes of Bipolar Portfolios
5.4.2 Special Approaches for Analyzing Bipolar Portfolios
5.5 Underdetermined Descriptors
5.6 Essential Vacancies
5.6.1 Resolving Vacancies by Expanding the Portfolio
5.6.2 Cause Analysis
5.7 Context-Dependent Impacts
References
6 Data in CIB
6.1 About Descriptors
6.1.1 Explanation of Term
6.1.2 Descriptor Types
6.1.3 Methodological Aspects
6.2 About Descriptor Variants
6.2.1 Explanation of Term
6.2.2 Types of Descriptor Variants
6.2.3 Methodological Aspects
6.2.4 Designing the Descriptor Variants
6.3 About Cross-impacts
6.3.1 Explanation of Term
6.3.2 Methodological Aspects
6.3.3 Data Uncertainty
6.4 About Data Elicitation
6.4.1 Self-Elicitation
6.4.2 Literature Review
6.4.3 Expert Elicitation (Written/Online)
6.4.4 Expert Elicitation (Interviews)
6.4.5 Expert Elicitation (Workshops)
6.4.6 Use of Theories or Previous Research as Data Collection
Sources
References
7 CIB at Work
7.1 Iran Nuclear Deal
7.2 Energy and Society
7.3 Public Health
7.4 IPCC Storylines
References
8 Reflections on CIB
8.1 Interpretations
8.1.1 Interpretation I (Time-Related): CIB in Scenario Analysis
8.1.2 Interpretation II (Unrelated to Time): CIB in Steady-State
Systems Analysis
8.1.3 Interpretation III: CIB in Policy Design
8.1.4 Classification of CIB as a Qualitative-Semiquantitative
Method of Analysis
8.2 Strengths of CIB
8.2.1 Scenario Quality
8.2.2 Traceability of the Scenario Consistency
8.2.3 Reproducibility and Revisability
8.2.4 Complete Screening of the Scenario Space
8.2.5 Causal Models
8.2.6 Knowledge Integration and Inter- and Transdisciplinary
Learning
8.2.7 Objectivity
8.2.8 Scenario Criticism
8.3 Challenges and Limitations
8.3.1 Time Resources
8.3.2 Aggregation Level and Limited Descriptor Number
8.3.3 System Boundary
8.3.4 Limits to the Completeness of Future Exploration
8.3.5 Discrete-Valued Descriptors and Scenarios
8.3.6 Trend Stability Assumption
8.3.7 Uncertainty and Residual Subjectivity in Data Elicitation
8.3.8 Context-Sensitive Influences
8.3.9 Consistency as a Principle of Scenario Design
8.3.10 Critical Role of Methods Expertise
8.3.11 CIB Does Not Study Reality but Mental Models of Reality
8.4 Unsuitable Use Cases: A Checklist
8.5 Alternative Methods
References
Appendix: Analogies
Physics
Network Analysis
Game Theory
Glossary
Cross-impact matrix (in the context of CIB)
Portfolio (in CIB)
Scenarios (in the context of CIB)
Index
List of Figures
Fig. 2.1 The scenario funnel

Fig. 2.2 A classification of scenarios and scenario methods

Fig. 2.3 Example of a qualitative network of interacting nodes

Fig. 2.4 Workflow of a CIB analysis

Fig. 3.1 The descriptor field for the “Somewhereland” analysis

Fig. 3.2 Descriptor variants (alternative futures) for the descriptor “A. Government”

Fig. 3.3 Compilation of descriptors and their variants for Somewhereland

Fig. 3.4 Cross-impact assessment of the influence between two descriptors

Fig. 3.5 Representing cross-impact data without use of numbers

Fig. 3.6 Representation of cross-impact data following Weitz et al. (2019)

Fig. 3.7 The cross-impact matrix of Somewhereland

Fig. 3.8 The cross-impact matrix printed without influence-free judgment sections

Fig. 3.9 One of 486 scenarios for Somewhereland

Fig. 3.10 Consulting the cross-impact matrix on the influence relationship A2 → B1

Fig. 3.11 The impact diagram of scenario [A2 B1 C3 D1 F1 E1]

Fig. 3.12 The influence diagram of Fig. 3.11 with the impact sums of the descriptors

Fig. 3.13 Demonstration of inconsistency of descriptor variant D1

Fig. 3.14 Consequential inconsistency in Descriptor E after adjustment of Descriptor D

Fig. 3.15 Matrix-based calculation of impact sums for scenario [A2 B1 C3 D1 E1 F1]

Fig. 3.16 Matrix-based calculation of the complete impact balances of a scenario

Fig. 3.17 The Somewhereland scenarios in tableau format with integrated descriptor listing

Fig. 3.18 The Somewhereland scenarios in tableau format with separate descriptor listing

Fig. 3.19 Impact diagram of Somewhereland scenario no. 10

Fig. 3.20 Consistency values of the descriptors calculated for the test scenario

Fig. 3.21 Comparison of consistency and inconsistency scales

Fig. 3.22 Two examples of consistency profiles

Fig. 4.1 The Somewhereland-plus matrix

Fig. 4.2 Somewhereland-plus portfolio arranged according to the “if-then” perspective

Fig. 4.3 Evaluation of descriptor variants according to their dissimilarity with the present state of Somewhereland (example values)

Fig. 4.4 Arrangement of Somewhereland scenarios on a performance scale

Fig. 4.5 Somewhereland scenario tableau, ordered by dissimilarity to the present

Fig. 4.6 Example of a scenario axes analysis (The example draws from work done by the IPCC on the future of global climate gas emissions (Nakićenović et al. 2000))

Fig. 4.7 Two-dimensional rating of the Somewhereland descriptor variants

Fig. 4.8 A portfolio map of the Somewhereland scenarios

Fig. 4.9 Ordering the portfolio according to probability and effect

Fig. 4.10 Elucidating the background of Somewhereland Scenario no. 1 “Society in crisis”

Fig. 4.11 The specific cross-impact matrix of Somewhereland Scenario no. 1

Fig. 4.12 Positive part of a scenario-specific cross-impact matrix

Fig. 4.13 Justification form for Descriptor variant “E3 Social cohesion: Unrest”

Fig. 4.14 Illustration of the justifications of a descriptor variant

Fig. 4.15 Scenario axes diagram and the “Somewhereland City mobility” example

Fig. 4.16 Descriptors and descriptor variants of the “Somewhereland City” intuitive mobility scenarios

Fig. 4.17 “Somewhereland City” cross-impact matrix

Fig. 4.18 “Somewhereland City” CIB analysis results

Fig. 4.19 Consistency critique of “The Unfinished” intuitive scenario

Fig. 4.20 Corrected and extended scenario axes diagram according to the result of the CIB analysis

Fig. 4.21 Nutshell I—Workflow of a CIB intervention analysis

Fig. 4.22 The descriptors and descriptor variants of the “Water supply” intervention analysis

Fig. 4.23 “Water supply” cross-impact matrix (basic matrix)

Fig. 4.24 The “Water supply” portfolio without interventions (basic portfolio)

Fig. 4.25 “Water supply” cross-impact matrix with intervention at E1

Fig. 4.26 The IC0 portfolio of the E1 intervention matrix

Fig. 4.27 “Water supply” cross-impact matrix with intervention at A2

Fig. 4.28 The IC0 portfolio of the A2 intervention matrix

Fig. 4.29 Implementation of the “Global economic crises” wildcard into the Somewhereland matrix

Fig. 4.30 The Somewhereland portfolio under the impact of the “Global economic crises” wildcard

Fig. 4.31 Different qualities of rating differences. Adapted from Jenssen and Weimer-Jehle (2012)

Fig. 4.32 Example of a matrix ensemble and its sum matrix

Fig. 4.33 Fully consistent solutions of the sum matrix

Fig. 4.34 Solutions of the “Resource economy” sum matrix, including all scenarios with nonsignificant inconsistency

Fig. 4.35 Procedure of a Delphi survey

Fig. 4.36 Dissent management using the Delphi method

Fig. 4.37 Compilation of scenarios of the individual evaluations of the “Resource economy” matrix ensemble

Fig. 4.38 The ensemble table of the “Resource economy” matrix ensemble

Fig. 4.39 Key dissent of the “Resource economy” matrix ensemble

Fig. 4.40 Nutshell II—Dissent analysis by group evaluation

Fig. 4.41 Specific cross-impact matrix for Somewhereland scenario no. 10

Fig. 4.42 Data basis for storyline development for Somewhereland scenario no. 10

Fig. 4.43 Data basis for the development of a storyline in graphical representation

Fig. 4.44 Example of a perfectly sequenceable specific matrix

Fig. 4.45 Improved descriptor order for Somewhereland scenario no. 10

Fig. 4.46 Frequency distribution of the inconsistency value in the Somewhereland matrix

Fig. 4.47 The “Oil price” cross-impact matrix (Weimer-Jehle 2006)

Fig. 4.48 The three fully consistent scenarios of the “Oil price” matrix

Fig. 4.49 Descriptor variant vacancies of the “Oil price” matrix (empty squares)

Fig. 4.50 Distance table of the “Oil price” portfolio

Fig. 4.51 Distance table of the Somewhereland portfolio with marking of an N/2 selection

Fig. 5.1 IC1 portfolio of the “Oil price” matrix

Fig. 5.2 “Global socioeconomic pathways” matrix. Adapted and modified from Schweizer and O’Neill (2014)

Fig. 5.3 Occurrence frequencies of the descriptor variants in the “Global socioeconomic pathways” portfolio

Fig. 5.4 Exploring the future space through scenario selection

Fig. 5.5 Nutshell III—Procedure for creating a selection with high scenario distances (diversity sampling)

Fig. 5.6 Scenario selection according to the “max-min” heuristic (diversity sampling)

Fig. 5.7 Evaluation of descriptor variants according to the criteria of social and economic development (exemplary data)

Fig. 5.8 Portfolio map of the “Global socioeconomic pathways” portfolio

Fig. 5.9 Cross-impact matrix on the social development of an emerging country. Adapted and modified from Cabrera Méndez et al. (2010) (translation from Spanish by the author)

Fig. 5.10 Unbalanced judgment sections in the “Emerging country” matrix

Fig. 5.11 Column sums of the “Emerging country” matrix

Fig. 5.12 Example of a cross-impact matrix on social sustainability. Adapted and modified from Renn et al. (2007)

Fig. 5.13 A bipolar portfolio

Fig. 5.14 Cross-impact matrix “Social sustainability” with sorted descriptor variants

Fig. 5.15 Intervention effects: worst case (dark shading) and best case (light shading)

Fig. 5.16 Effect of dual interventions in the “Social sustainability” matrix

Fig. 5.17 Fictitious neighboring countries BigCountry and SmallCountry

Fig. 5.18 Economic-social development of the fictitious country SmallCountry

Fig. 5.19 Portfolio of the “SmallCountry” matrix according to conventional evaluation

Fig. 5.20 “SmallCountry” portfolio when considering the underdetermination of descriptor D

Fig. 5.21 Column sums of descriptor “E. Oil price”

Fig. 5.22 Conventional (left) and context-dependent impact on B (right)

Fig. 5.23 Nutshell IV—Processing context-dependent impacts

Fig. 5.24 Context-dependent impacts in the “Mobility demand” matrix

Fig. 5.25 Conditional cross-impact matrices (the top matrix is valid for E1 scenarios, the one below for E2 scenarios)

Fig. 5.26 Mobility demand portfolio after consideration of context dependencies

Fig. 5.27 Flawed scenario logic due to neglect of context dependency

Fig. 6.1 “Group opinion” matrix and its portfolio

Fig. 6.2 “Group opinion” cross-impact matrix and portfolio after removing the passive descriptor

Fig. 6.3 Post hoc determination of the consistent variant of a passive descriptor

Fig. 6.4 Descriptor roles in the “Water supply” matrix

Fig. 6.5 Portfolios with (top) and without (bottom) intermediary descriptor D

Fig. 6.6 Examples of state descriptors and trend descriptors

Fig. 6.7 Examples of nominal, ordinal, and ratio descriptors

Fig. 6.8 Example of descriptor variants and their definitions

Fig. 6.9 Incorrect definition of descriptor variants due to overlap of topics

Fig. 6.10 Examples of central and peripheral descriptor variants

Fig. 6.11 Nutshell V—Using subscenarios as descriptor variants

Fig. 6.12 Partitioning of the assessment task according to knowledge domains

Fig. 6.13 Two ways to construct a cross-impact matrix by expert group elicitation

Fig. 6.14 Nutshell VI—Workflow in a scenario validation workshop

Fig. 7.1 “Iran 1395” scenarios and their thematic core motifs. Own illustration based on Ayandeban (2016)

Fig. 7.2 Map of German societies in 2050 and their CO2 emissions. Modified from Pregger et al. (2020)

Fig. 7.3 Section of the network of impact relations between the factors influencing the energy balance of an individual. Own representation based on data from Weimer-Jehle et al. (2012)

Fig. 7.4 CIB analysis of obesity risks for children and adolescents for four case examples. Data from Weimer-Jehle et al. (2012)

Fig. 7.5 Scenario axes diagram of the forty SRES emissions scenarios. Own illustration based on data from Nakićenović et al. (2000)

Fig. 7.6 Initial phase of the emissions trajectories of the forty SRES scenarios (own illustration based on data from Nakićenović et al. (2000) (SRES scenario emissions) and IPCC (2014) (historical CO2 emissions))

Fig. 7.7 Number of SRES and CIB scenarios in four classes of carbon intensity. Own illustration based on data from Schweizer and Kriegler (2012)

Fig. 8.1 Qualitative system model and its semiquantitative representation

Fig. 8.2 Comparative classification of CIB as a qualitative-semiquantitative analysis method

Fig. A.1 Analogy of the equilibrium of forces: valleys as rest points for heavy bodies

Fig. A.2 “Terrain profiles” of Somewhereland descriptors in the case of an inconsistent scenario

Fig. A.3 “Terrain profiles” in a consistent scenario

Fig. A.4 Somewhereland as a dynamic network

Fig. A.5 Analogy between CIB and game theory

List of Tables
Table 2.1 Performance matrix for the evaluation of planning variants. Own depiction based on Fink et al. (2002)

Table 2.2 Types and functions of scenarios

Table 3.1 Number of scenarios in scenario spaces of different sizes

Table 3.2 Seven-part cross-impact rating scale

Table 3.3 A Somewhereland scenario in list format

Table 3.4 The 10 Somewhereland scenarios in short format

Table 3.5 Significance of inconsistency classes depending on the number of descriptors

Table 4.1 The scenarios of Somewhereland-plus in short format

Table 5.1 Portfolio of the “Global socioeconomic pathways” matrix

Table 5.2 Only solution of the “Emerging country” matrix

Table 5.3 Portfolio after intervention on “A. Economic performance”

Table 5.4 Portfolio after intervention on “D. Social engagement”

Table 5.5 Portfolio after intervention on “F. Equity of chances”

Table 5.6 Portfolio after intervention on “H. Education”

Table 5.7 Portfolios of the two conditional matrices “Mobility demand”

Table 6.1 Descriptor variant classification

Table 6.2 Example of an inverse coding

Table 6.3 Correction of an inverse coding

Table 6.4 Coding an influence relationship using positive impacts

Table 6.5 Coding an influence relationship using negative impacts

Table 6.6 Coding an influence relationship using mixed impacts

Table 6.7 Example of a double negation

Table 6.8 A judgment section contributing to predetermination

Table 6.9 A phantom variant

Table 6.10 Using absolute cross-impacts

Table 6.11 Coding a text passage

Table 7.1 Descriptor field of the CIB analysis “Iran 1395”

Table A.1 Representation of the Somewhereland descriptor column “B. Foreign Policy” in Boolean rules. (Just like the cross-impact matrix, the rule set allows two B variants in certain cases)
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
W. Weimer-Jehle, Cross-Impact Balances (CIB) for Scenario Analysis, Contributions to Management Science
https://doi.org/10.1007/978-3-031-27230-1_1

1. Introduction to CIB
Wolfgang Weimer-Jehle1
(1) ZIRIUS, University of Stuttgart, Stuttgart, Germany

Wolfgang Weimer-Jehle
Email: [email protected]

Keywords Cross-Impact Balances – CIB – Scenario – Foresight – Qualitative impact network

Dealing sensibly with future indeterminacy and uncertainty is increasingly important in a world where organizations at all levels of society must make
long-term, high-stakes decisions and where these decisions must prove their
correctness in an increasingly turbulent environment. Because they enable
us to identify the scope for action and examine the prospects of our
strategies, plans, and decisions, scenarios, i.e., sketches of alternative
futures, have emerged in recent decades as a key tool for systematic
preparation for an unknown future.
Scenarios are generated by different actors for different purposes using
different methods. The methods range from simply thinking about the
future to complex mathematical simulations. Surprisingly, there is a rather
limited stock of methods for the middle range between mental reflection
and mathematical simulation. This is all the more surprising when we
realize that in our preparations for the future, we often must cope with
“systems” that, on the one hand, are too complex to be penetrated by mental
reflection but that, on the other hand, we understand (at least in part) only
qualitatively, making a credible mathematical simulation difficult.
In this middle ground between simple and mathematically treatable
questions about the future, cross-impact balances (CIB) has established
itself since its publication (Weimer-Jehle, 2006) as a method for the
algorithmic construction of qualitative scenarios and for qualitative systems
analysis. With its help, scenarios and systems analyses have been produced
on the topics of waste, the working world, education, biotechnology,
energy, societal change, health and health infrastructure, industry and
services, information technology, innovation, climate, management,
mobility and transport, sustainability, politics, risk and security, urban and
spatial planning, technology management, behavior change, and water
supply. This range of application fields underscores that CIB has been
accepted as a generic method of analysis despite its origin in energy
economics research: CIB was originally developed in 2001 as a scenario
tool in a study by the Center for Technology Assessment in Baden-
Württemberg on the liberalization of European electricity markets.1

About this Book


In the meantime, an extensive and steadily growing body of literature on
CIB applications, methodological research, and methodological reflections
has emerged.2 However, a cohesive presentation of the method and its most
important evaluation and interpretation approaches is lacking. This is a
disadvantage for application practice because, on the one hand, CIB
analysis is, contrary to its simple appearance, by no means without
difficulties and pitfalls, while on the other hand, it offers much more
analytical potential than mere scenario construction, which has been the
focus of application practice in the past. Optimal use of the CIB method,
including an exhaustive data evaluation and an appropriate interpretation of
the results, requires a thorough understanding of the method and a sensible
selection and application of evaluation approaches.
The present volume is intended to provide this comprehensive
presentation and, as an introduction, is also aimed at users who have little or
no experience with CIB. It is intended as a guide for them in their first
applications of the method, providing them with the tools required for solid
basic use and for the interpretation of results. It refrains, however, from
discussing in-depth methodological issues or addressing the more complex
analytical procedures that have only recently been developed, such as
hybrid scenarios (Weimer-Jehle et al., 2016), the combination of CIB with
structure-seeking statistical procedures (Pregger et al., 2020), or the
coupling of CIB analyses performed at different regional levels (Schweizer
& Kurniawan, 2016; Vögele et al., 2017, 2019). Additionally, the CIB
concept of scenario succession and the related interpretation issues and
analysis opportunities are only briefly discussed in an appendix. These and
other advanced issues and possible applications of the CIB method are to be
addressed in a subsequent volume.

Scenarios
As mentioned, scenario construction is the most common application of
CIB thus far. It is therefore to this field of application that the descriptions
in this book refer, with a few exceptions. This focus is not intended to
disregard the value of CIB for qualitative systems analysis but is motivated
by the expectation that the transfer of the methodological descriptions and
considerations formulated here for scenario analysis to the field of qualitative systems analysis will be straightforward.
To fulfill their function as instruments for preparing for the future,
scenarios must be well constructed. They must capture what we can
reasonably assume today about the future and the forces that will shape it.
Taken together, well-constructed scenarios must express the different
directions in which these forces can steer events. There have been differing
views about how best to achieve this purpose since the early days of
scenario making in the 1950s and 1960s, from which two distinct “scenario
cultures” developed.

Simply Thinking
Herman Kahn, the creator of the modern scenario concept, argued that the
most important thing is to “think about the problem” (Schnaars, 1987: 109),
in other words, to prepare scenarios without the use of formal construction
methods. From Kahn’s perspective, formal construction techniques are
perceived as a distraction and an impediment to inspiration, intuition, and
free thinking. Following Kahn’s approach, the intuitive logics (IL) method
emerged (Huss & Honton, 1987; Wilson, 1998), according to which
scenarios are designed “by gut feeling” in expert discussions.3 The first
groundbreaking successes of the scenario technique are due to this
approach,4 and it is by far the most widely used scenario methodology to
date, except probably in the area of scientific scenarios.
The Magical Number Seven Plus/Minus Two
Almost simultaneously with the preceding approach, however, another view
of scenario construction emerged, which emphasized the value of formal
methods in the collection of information and in actual scenario construction.
One of the founders of this school of thought is Olaf Helmer, co-developer
of the Delphi method for structured collection of expert assessments
(Dalkey & Helmer, 1963) and co-inspirer of the first cross-impact
techniques for formal analysis of expert judgments (Gordon & Hayward,
1968).
Advocates of formal scenario construction can draw on weighty
arguments from cognition research. In a 1956 essay that would become one
of the most frequently cited publications in psychology textbooks,5
American psychologist George Miller evaluated a series of cognition
experiments (Miller, 1956). He concluded that there is an upper limit to our
mental capacity to accurately and reliably process information about
simultaneously interacting elements6 and that this limit is seven plus or
minus two elements. The essay triggered extensive and continuing research
on the question, with the result that Miller’s “magical number” must be
regarded as optimistically high (Cowan, 2001).
The transfer of these findings of cognition research to the problem of
scenario construction is inevitable and sobering. If a scenario analysis
addresses ten factors that will define the future (a rather modest number),
then 90 potential interactions arise between these ten factors. If only about
half of the potential interactions actually matter (which, as we will see in
Sect. 6.3.2, is about average), persons attempting the mental construction of
a scenario will have to keep in mind and weigh approximately 45
interrelationships to extract from them a scenario that considers all relevant
interrelationships. Given the limits of our mental capacities shown by
cognition research, can we hope to do justice to this task by intuitive
scenario construction?
A challenge for mental scenario construction also arises from another
angle, that is, from the combinatorial weight of the task. Even if we content
ourselves with a rough analysis and grant each of the ten factors three
conceivable future developments, which we then must combine into
meaningful scenarios, this process results in 3 to the power of 10, i.e.,
approximately 59,000 combinatorial alternatives, each of which must be
considered a possible scenario until disproven. How many of these
alternatives can be evaluated by mental reflection, and how many relevant
scenarios with potentially massive implications go unnoticed when we
finally find ourselves at the end of our time resources after intuitively
identifying a few plausible combinations? Incidentally, as we will see later,
combinatorial spaces with 59,000 combinatorial alternatives are among the
lesser challenges faced in scenario analysis.
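
The magnitudes quoted here are easy to verify. The following Python fragment (purely illustrative, not part of the CIB method itself) reproduces both numbers:

N, variants = 10, 3
pairwise_interactions = N * (N - 1)   # 90 directed influence relationships among 10 factors
scenario_space = variants ** N        # 3**10 = 59,049 combinatorial alternatives
print(pairwise_interactions, scenario_space)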
However, the question of intuition-based versus formal construction of
scenarios is not the only fundamental controversy in the scenario
community. A second controversy is whether (or for what purposes)
scenarios should rely essentially on quantitative data or whether they should
also build substantially on qualitative bodies of knowledge.

If You Can’t Count It, It Doesn’t Count


This phrase represents the viewpoint that analyses of the future should
focus on quantitative methods (e.g., mathematical system models) and
quantitative data.7 The use of qualitative methods is perceived as a loss of
mathematical rigor, and qualitative information as a “gateway” for data that
in the worst case are ill-defined or ambiguous to interpret or that, it is argued, are too often put forward without a solid evidential base. Many
advocates of this perspective acknowledge that unquantifiable factors can
have an important influence on future events, particularly in systems whose
evolution is shaped by human decisions. Nevertheless, they consider the
disadvantage of the loss of rigor when opening the analysis to qualitative
aspects to be more serious than the disadvantage of not taking such factors
into account.

Better Approximately Right than Precisely Wrong


This phrase summarizes the counterperspective.8 Its advocates argue that
the rigor of mathematical methods and quantitative data are meaningless
and produce pseudoprecision if essential factors are excluded because they
are incompatible with the preferred analysis technique. From this
perspective, an approximate but, in essence, accurate picture of the problem
is considered more helpful than a picture that is drawn in detail but
misleading because of insufficient problem scoping.
Action research teaches that ignoring important problem aspects in
favor of “working convenience” is not uncommon among decision-makers.
In The Logic of Failure, German psychologist Dietrich Dörner analyzed
experiments on the behavioral patterns of decision-makers. He found
various patterns that can easily lead to failure when one is dealing with
complex real-world problems. Among typical causes of failed problem-
solving, he found the tendency to tailor the problem view to one’s familiar
arsenal of methods (rather than the other way around), noting critically:
“We do not solve the problems we are supposed to solve, but the problems
we can solve.”9
Many perceptive future researchers have long been aware that this
danger also exists in their own research domain. The old master of French
future research, Michel Godet, a declared friend of mathematical methods,
nevertheless deplored in his article Reducing the Blunders in Forecasting
the tendency of his professional colleagues to exclude the poorly
quantifiable factors from consideration to the disadvantage of forecast
reliability and wrote of the “… dangers of excessive quantification (the
ever-present tendency to concentrate on things which are quantifiable to the
detriment of those which are not)…”.10

CIB and the Concept of “Mechanical Reasoning”


It is pointless to argue about which positions in these discourses are better-
founded. However, it is not the case that the truth lies somewhere in the
middle and that “everyone is a little bit right.” Rather, in future research, we
encounter an immense variety of very different topics. The challenge lies in
recognizing anew from case to case which arguments carry the greater
weight in the application at hand.
At its core, CIB analysis consists of collecting qualitative information
on “cross-impacts,” i.e., the influence relationships between scenario
factors, and coding these relationships using an ordinal scale. A software-
supported simple balance algorithm is then applied en masse to determine
which system developments form a self-stabilizing trend network and can
thus be accepted as consistent scenarios. Therefore, with respect to the
previously described discourse, the application area of CIB is research
questions that (a) are too complex for exclusively mental treatment and at
the same time (b) urgently require the inclusion of qualitative knowledge.
When synthesizing the overall system picture, CIB combines the collected
partial information about the pair relationships between system elements
and assembles them into coherent constructions.
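
The balance principle itself can be sketched compactly in code. The following Python miniature is only an illustrative sketch under invented data: the two descriptors, their variants, and all cross-impact values are hypothetical, and published CIB applications use the ScenarioWizard software rather than hand-written scripts. For every combination of descriptor variants, the sketch totals the impacts each variant receives from the rest of the scenario and accepts the combination as a consistent scenario if every chosen variant attains the maximal impact sum within its descriptor:

from itertools import product

# Hypothetical descriptors and their variants
descriptors = {"A": ["A1", "A2"], "B": ["B1", "B2"]}

# Invented cross-impact judgments on an ordinal scale (-3 ... +3):
# cim[(source descriptor, source variant)][(target descriptor, target variant)]
cim = {
    ("A", "A1"): {("B", "B1"): 2, ("B", "B2"): -2},
    ("A", "A2"): {("B", "B1"): -1, ("B", "B2"): 1},
    ("B", "B1"): {("A", "A1"): 1, ("A", "A2"): -1},
    ("B", "B2"): {("A", "A1"): -2, ("A", "A2"): 2},
}

def impact_sum(scenario, target, variant):
    # Sum of the influences that `variant` of descriptor `target`
    # receives from the variants chosen for all other descriptors.
    return sum(cim[(d, v)].get((target, variant), 0)
               for d, v in scenario.items() if d != target)

def is_consistent(scenario):
    # Balance check: every chosen variant must attain the maximal
    # impact sum among the variants of its own descriptor.
    return all(
        impact_sum(scenario, d, chosen)
        >= max(impact_sum(scenario, d, v) for v in descriptors[d])
        for d, chosen in scenario.items()
    )

# Exhaustively screen the scenario space (feasible for small matrices)
for combo in product(*descriptors.values()):
    scenario = dict(zip(descriptors, combo))
    if is_consistent(scenario):
        print("consistent:", scenario)

With this toy matrix, the screening returns the two self-reinforcing combinations [A1 B1] and [A2 B2]; real applications differ only in the number of descriptors, variants, and judgments to be balanced.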
Correspondingly, American knowledge integration researcher Vanessa
Schweizer compares CIB analysis with the procedure of mechanical
reasoning (Schweizer, 2020). She refers to a discourse initiated by the
psychologist Paul Meehl in 1954 and conducted for decades on how case
predictions in psychology can best be obtained from case data (e.g., “Does
therapy X have prospects of being successful in case Y?” or “Will offender
Z recidivate or rather not?”). Are such case predictions best made through
expert assessments and case conferences? Or are they better based on
formal (today, we would say algorithmic) evaluation of case data, i.e.,
mechanical reasoning? In psychology, the evidence points to the superiority
of mechanical reasoning, and Schweizer refers to the analogy to the process
of creating and evaluating scenarios either through intuitive-discursive
processes in expert workshops (“case conferences”) or through CIB
(“mechanical reasoning”).
Human reasoning is without question much deeper, more multifaceted
and more adaptive than any form of mechanical reasoning. However, this
comes at the price that far fewer factors and alternatives can be considered.
Whenever the mass application of simple reflections yields more benefit
than a small number of selectively deepened reflections, mechanical
reasoning is likely to have an advantage over human reasoning. This is
especially true for combinatorial problems, i.e., problems that are
characterized by a large number of successive forks, each with several
alternatives. Combinatorial problems arise, for example, in chess, where
computer programs today can defeat any human opponent, and also in
scenario construction.

How to Work with This Book


The structure of this book follows the concept of a self-study course.
Chapter 2 presents a short introduction of the scenario technique and the
position of CIB with respect to this technique.

Chapter 3 presents the technical basics of CIB. We follow step by step how the workflow of a simple CIB analysis is structured and
describe the concepts that are applied in it. The core of the chapter is a
detailed explanation of the CIB algorithm, which, as experience shows,
requires some time to understand despite its structural simplicity and the
absence of complex mathematics. The reader should persist in this effort
until success is attained because only a detailed understanding of how CIB
evaluates cross-impact data allows access to the full application potential of
the method and a solid interpretation of its results.

Chapter 4 After “basic training” has been completed, Chap. 4 takes a detailed look at the application of the method by presenting various analysis
procedures with examples. This presentation of the spectrum of possible
analyses is important to overcoming the narrow focus on scenario
generation, which remains observable in CIB practice, and bringing the
other, diverse analysis opportunities offered by the method to the reader’s
attention.

Chapter 5 For most users, the desired result of a CIB analysis is probably
a manageable portfolio of perhaps 3–6 clearly different scenarios. Such a
result is in fact not atypical for a CIB analysis. However, CIB does not
return a result with standardized properties. Rather, the scenario portfolio it
generates is an expression of the systemic relationships formulated in the
cross-impact matrix. The consequent result from a system-analytical
perspective can be a small or large, diverse or rather monotonous scenario
portfolio—independently of the wishes and expectations of the user.
Chapter 5 therefore addresses the case in which the result of a CIB analysis
does not meet expectations. It describes using other or supplementary
analysis approaches to arrive at a result that meets one’s needs or at least at
an understanding why one’s expectations are at odds with the system
picture that was input into the CIB analysis.

Chapter 6 Now that it is clear how CIB functions in principle and what
can be achieved with it, it is time to take a closer look at the three central
data objects of the method: descriptors, descriptor variants, and cross-
impact data. Hidden beneath the surface of the technical application are
many differentiations and design decisions that can be handled well or
poorly, unconsciously or purposefully. Chapter 6 therefore presents four
“dossiers” that compile key information about these data objects and how to
collect them. The dossiers are designed to provide in-depth information and
may be skipped on a first reading of the book. However, the reader is then advised to return to this chapter in a second pass.
Chapter 7 Theory is followed by a visit to the workshops. Chapter 7
outlines four selected studies in which CIB was used by different research
teams to analyze the future, to analyze systems or to critically review
existing scenarios. The selection of examples is intended to reveal the
thematic diversity of the application of the method. The examples also
make clear that it is precisely in disciplines with entrenched methodological
traditions that new perspectives can be gained by using this still young
method.

Chapter 8 The final chapter is dedicated to reflection. Based on the literature and the author’s own practice, the chapter describes CIB’s
strengths but also the challenges it poses. The limitations of CIB are also
addressed because a comprehensive understanding of the method has only
been acquired when it has also been understood in which cases the use of
this method is not promising.
Some of the book’s content is presented in forms that require
explanation. Where it seems useful, statistical analyses of method practice
are presented in statistics boxes. These boxes are intended to enable the
reader to position his or her own application within the spectrum of method
applications. Nutshells are compact, self-contained presentations of selected
analytical procedures. Memos (M for short) highlight important principles
that one should remain aware of in method practice.
Several small examples of cross-impact matrices are used to illustrate
methodological issues. Occasionally, these matrices are adopted from the
literature, which is indicated accordingly. Mostly, however, CIB miniatures
developed especially for didactic purposes are used. These miniatures do
not claim to treat their respective topics substantially, and the analysis
results they present should not be misunderstood as solid statements about
the subject. The purpose of the miniatures is exclusively to demonstrate the
method. The use of abstract matrices would have prevented these demonstrations from being misunderstood as content-oriented statements about the topics of the miniatures, but it would also have reduced the miniatures’ vividness.
The practical implementation of CIB analysis requires specialized
computer software. At the time of printing this book, the free software
ScenarioWizard is used almost exclusively in the published literature.11 All
CIB evaluations in this book were performed using Version 4.4 of this
software. As an invitation to reproduce the method demonstrations, the
ScenarioWizard project files for all miniatures used in the book are
provided at https://cross-impact.org/miniatures/miniatures.htm.
Since this book is only intended as a support for CIB users and not as a
general textbook about scenario methods, it makes little mention of other
methods, and comparative methodological considerations are limited to a
brief discussion of method alternatives in Sect. 8.5. The sole purpose of the
text is to explain the CIB method, and it avoids burdening readers with
explanations that are not necessary for this purpose. For more detailed
information on other methods and for comparative analyses, I refer the
reader to the general literature on scenario techniques and to more
specifically focused research literature.
For the same reason, there is little discussion in the book of how
scenarios, once created, can be used to prepare for the future by
organizations or academics. This question is not CIB-specific and therefore
does not require a CIB-specific discussion, and there are many good
descriptions of the question in the general scenario literature.12 The task of
this book essentially ends with the completion of scenario analysis in the
narrow sense, i.e., the development and analysis of scenarios.

References
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage
capacity. Behavioral and Brain Sciences, 24, 87–185.
[Crossref]

Dalkey, N., & Helmer, O. (1963). An experimental application of the Delphi method to the use of
experts. Management Science, 9, 458–467.
[Crossref]

Dörner, D. (1997). The logic of failure: Recognizing and avoiding error in complex situations. Basic Books.

Fink, A., Schlake, O., & Siebe, A. (2002). Erfolg durch Szenario-Management – Prinzip und
Werkzeuge der strategischen Vorausschau. Campus.

Förster, G. (2002). Szenarien einer liberalisierten Stromversorgung. Akademie für


Technikfolgenabschätzung.

Godet, M. (1983). Reducing the blunders in forecasting. Futures, 15, 181–192.


[Crossref]

Gordon, T. J., & Hayward, H. (1968). Initial experiments with the cross impact matrix method of
forecasting. Futures, 1(2), 100–116.
[Crossref]

Gorenflo, D. W., & McConnell, J. (1991). The Most frequently cited journal articles and authors in
introductory psychology textbooks. Teaching of Psychology, 18, 8–12.
[Crossref]

Huss, W. R. (1988). A move toward scenario analysis. International Journal of Forecasting, 4, 377–
388.
[Crossref]

Huss, W. R., & Honton, E. (1987). Alternative methods for developing business scenarios.
Technological Forecasting and Social Change, 31, 219–238.
[Crossref]

Kosow, H., & Gaßner, R. (2008). Methods of future and scenario analysis – Overview, assessment,
and selection criteria. DIE Studies 39, Deutsches Institut für Entwicklungspolitik.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for
processing information. The Psychological Review, 63, 81–97.
[Crossref]

Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards
socio-technical scenarios of the German energy transition–lessons learned from integrated energy
scenario building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0
[Crossref]

Read, C. (1920). Logic–deductive and inductive (4th ed.). Simpkin & Marshall.

Ringland, G. (2006). Scenario planning – Managing for the future. John Wiley.

Saaty, T. L., & Ozdemir, M. S. (2003). Why the magic number seven plus or minus two.
Mathematical and Computer Modelling, 38, 233–244.
[Crossref]

Schnaars, S. P. (1987). How to develop and use scenarios. Long Range Planning, 20(1), 105–114.
[Crossref]

Schweizer, V. J. (2020). Reflections on cross-impact balances, a systematic method constructing


global socio-technical scenarios for climate change research. Climatic Change, 162, 1705–1722.
[Crossref]

Schweizer, V. J., & Kurniawan, J. H. (2016). Systematically linking qualitative elements of scenarios
across levels, scales, and sectors. Environmental Modelling & Software, 79, 322–333. https://doi.org/10.1016/j.envsoft.2015.12.014
[Crossref]

Vögele, S., Hansen, P., Poganietz, W.-R., Prehofer, S., & Weimer-Jehle, W. (2017). Scenarios for
energy consumption of private households in Germany using a multi-level cross-impact balance
approach. Energy, 120, 937–946. https://doi.org/10.1016/j.energy.2016.12.001
[Crossref]

Vögele, S., Rübbelke, D., Govorukha, K., & Grajewski, M. (2019). Socio-technical scenarios for energy intensive industries: The future of steel production in Germany in context of international competition and CO2 reduction. Climatic Change, 1–16. (Also: STE Preprint 5/2017, Forschungszentrum Jülich.)

Wack, P. (1985a). Scenarios: Uncharted waters ahead. Harvard Business Review, 63(5), 73–89.

Wack, P. (1985b). Scenarios: Shooting the rapids. Harvard Business Review, 63(6), 139–150.

Weimer-Jehle, W. (2001). Verfahrensbeschreibung Szenariokonstruktion im Projekt Szenarien eines


liberalisierten Strommarktes. Akademie für Technikfolgenabschätzung in Baden-Württemberg.

Weimer-Jehle, W. (2006). Cross-impact balances: A system-theoretical approach to cross-impact


analysis. Technological Forecasting and Social Change, 73(4), 334–361.
[Crossref]

Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T.,
Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their
usage for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/10.1016/j.energy.2016.05.073
[Crossref]

Wilson, I. (1998). Mental maps of the future: An intuitive logics approach to scenario planning. In L. Fahey & R. M. Randall (Eds.), Learning from the future: Competitive foresight scenarios (pp. 81–108). Wiley.

Footnotes
1 Method development: Weimer-Jehle (2001). First method application: Förster (2002).

2 See the CIB bibliography at www.cross-impact.org/english/CIB_e_Pub.htm

3 Schnaars (1987:106): “Scenario writing is a highly qualitative procedure. It proceeds more from
the gut than from the computer, although it may incorporate the results of quantitative models.
Scenario writing assumes that the future is not merely some mathematical manipulation of the past,
but the confluence of many forces, past, present and future that can best be understood by simply
thinking about the problem.”

4 This refers in particular to the Shell scenarios on the eve of the oil crisis (Wack, 1985a, 1985b).

5 According to Gorenflo and McConnell (1991).


6 According to the interpretation of Saaty and Ozdemir (2003). Specifically, the cognitive tasks in
the experiments analyzed by Miller were, for example, the identification of n discriminable stimuli
and the ability to correctly reproduce n items from a read-out list.

7 Huss (1988:378), for example, reports the prevalence of this perspective among forecasters during
the 1980s.

8 The phrase is attributed to various individuals, including economist John Maynard Keynes and
philosopher Karl Popper. However, the oldest published source known to the author refers to the
British philosopher Carveth Read (1920:351) (“Better to be vaguely right than precisely wrong”).

9 Dörner (1997), own translation from German edition.

10 Godet (1983), pages 181, 182, 189. Godet does not conclude from this view that quantitative methods should be renounced but, rather, recommends a combination of qualitative and quantitative methods for prognostics. These considerations are transferable to the field of scenario methodology.

11 Available at: https://www.cross-impact.org/english/CIB_e_ScW.htm

12 E.g., Kosow and Gaßner (2008), Ringland (2006), Fink et al. (2002).
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
W. Weimer-Jehle, Cross-Impact Balances (CIB) for Scenario Analysis, Contributions to Management Science
https://doi.org/10.1007/978-3-031-27230-1_2

2. The Application Field of CIB


Wolfgang Weimer-Jehle1
(1) ZIRIUS, University of Stuttgart, Stuttgart, Germany

Wolfgang Weimer-Jehle
Email: [email protected]

Keywords Cross-Impact Balances – CIB – Scenario – Foresight – Qualitative impact network

2.1 Scenarios
Scenarios are a future research concept for dealing with future openness and
uncertainty. According to the definition by Michael Porter (1985), a
scenario is:

…an internally consistent view of what the future might turn out to
be—not a forecast, but one possible future outcome.

Scenarios thus assume that there are multiple possible futures and that it is
not possible to recognize in the present which of them will occur. From the
perspective of scenario technique, preparing for the future means dealing
with a variety of possible futures instead of—as in forecasting—focusing
on one expected future and aligning our actions specifically with this
expectation.
Figure 2.1 visualizes this concept by means of the so-called “scenario
funnel” (e.g., Kosow & Gaßner, 2008). Here, the development of a system
is sketchily represented by two quantitatively measurable key variables
(“Trend A” and “Trend B”). At time P (the present), the state of the trend
variables is known. In the future, the trend variables may evolve away from
their present state. The further we look into the future, the greater the
uncertainty about the state of the trend variables becomes, and the funnel
enclosing the possibility space of the system opens.

Fig. 2.1 The scenario funnel

In the case of a forecast, one would rely on the center of the funnel. This
is appropriate when the opening of the funnel is narrow. Often, however, in
long-term decision problems, the opening of the funnel is so wide that the
different locations of the opening represent essentially different ideas of the
future and require different decisions. Then, it would be inappropriate to
focus on the center, and the wideness of the opening is better addressed by a
“portfolio” of different scenarios wisely distributed across the opening.
Figure 2.1 has only an illustrative function and makes the basic idea of
the scenario technique understandable. The front surface of the funnel is not
necessarily circular but can take on more complicated shapes. In reality,
more than two key variables are usually required to describe the future
development of a system in a meaningful way, often including qualitative
variables that cannot be measured on a numerical scale.
The development of the modern scenario concept is usually attributed to
Herman Kahn, who worked at the RAND Corporation in the 1950s, when
he advised the US government on military strategy issues (Kahn, 1960).
Its intellectual roots, however, can be traced further back historically (cf. von
Reibnitz, 1987). After a striking and economically momentous application
of the concept in the corporate sector by the Shell company shortly before
the first oil crisis (Wack, 1985a, 1985b), the use of scenario techniques
quickly spread in academia, business, and administration. As early as the
beginning of the 1980s, approximately 50% of the large US companies responding to a corporate survey confirmed that they used scenarios.
All users had had sufficiently positive experiences with the method to want
to continue using it (Linneman & Klein, 1983), and later surveys showed
further increases in usage rates. The high importance of the concept in
future research also is reflected in the use of terms in the literature. Textual
analyses of electronically recorded English-language books show that
“scenario” became a dominant term in foresight around 1994, surpassing
the frequencies of use of the competing terms “projection” and “forecast”
(Trutnevyte et al., 2016).

2.2 Scenarios and Decisions


Scenarios are instruments for preparing for the future. As an example of the
role that scenarios can play here, we consider their use in the development
of long-term planning and decision-making. Preparation for the future is
necessary here because planning must be aligned with its environment to be
successful, not just at the time of planning but for the entire planning
period.
For example, given the very long-term investments involved, the
planning of an urban drainage system cannot be based solely on current
needs. Rather, planning must consider that, in the long term, population
level, age structure, lifestyles, and consumption habits may change, changes
may occur in the amount and structure of industrial and commercial activity
in the settlement, and legal standards, such as environmental requirements,
also may change.1 Only if these and other environmental conditions and
their future uncertainty are adequately considered can we hope for robust
planning.
For short-term planning, it can be assumed for simplicity that the current environment will continue to exist. For medium-term planning, a change in the planning environment must be assumed, but the changes are still largely foreseeable (predictable). In the case of long-term planning, however, as in the case of urban drainage, this no longer applies. The planning environment can change so much that the changes lead to completely different environmental conditions. Trends and structures can reverse, and it is not clearly foreseeable in which direction.
This uncertainty cannot be eliminated by scenarios, but it can be made manageable. For this purpose, the environmental uncertainty is represented by a series of scenarios, and one can then analyze how a proposed planning variant would perform in each scenario. This leads to a performance matrix (Table 2.1).
Table 2.1 Performance matrix for the evaluation of planning variants. Own depiction based on Fink
et al. (2002)

                      Environmental Scenario
                      I     II    III   IV
Planning Variant A    ++    ++    +     -
Planning Variant B    +     o     ++    o
Planning Variant C    --    --    -     ++
Planning Variant D    ++    o     -     --
Planning Variant E    o     -     --    --

Planning success is represented here in simplified form by a five-part ordinal scale [--, -, o, +, ++]. Decision-makers can now use the performance matrix to assess which planning variant best meets their objectives. Planning variants D and E can be discarded immediately, since planning variants A and B offer alternatives that promise at least the same or better performance in all scenarios.2 Risk-affine decision-makers will opt for planning variant A, since a good result can be expected for most environmental scenarios. Risk-averse decision-makers, however, will refrain from planning variant A because of its result in environmental scenario IV and prefer planning variant B instead because planning failure (i.e., negative performance) is not to be expected in any environmental scenario.
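The dominance screening described in footnote 2 is mechanical enough to be automated. The following minimal Python sketch reproduces Table 2.1 with the ordinal grades mapped to the integers -2 to +2; the variable names are illustrative only and not part of the method:

# Pareto screening of the planning variants from Table 2.1.
# Ordinal grades [--, -, o, +, ++] are mapped to the integers -2 ... +2.
performance = {
    "A": [+2, +2, +1, -1],
    "B": [+1,  0, +2,  0],
    "C": [-2, -2, -1, +2],
    "D": [+2,  0, -1, -2],
    "E": [ 0, -1, -2, -2],
}

def dominates(p, q):
    # p dominates q if p performs at least as well in every scenario
    # and strictly better in at least one.
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

admissible = [v for v in performance
              if not any(dominates(performance[w], performance[v])
                         for w in performance if w != v)]
print(admissible)  # ['A', 'B', 'C']; variants D and E are dominated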
However, as mentioned above, a robustness test for planning variants is
only one example of the possible uses of scenarios. Scenarios can have very
different functions in preparing for the future. They can aim at knowledge
enhancement, organizational learning, goal formation or decision
preparation (Kosow & Gaßner, 2008). In each case, the focus is on different
questions (Table 2.2).
Table 2.2 Types and functions of scenarios

Explorative scenarios
  Focus: What can happen?
  Function: Expanding the understanding of trends and interrelationships in the system under study. Strategy development and assessment of environmental influences on planning. Testing the robustness of planning.

Normative scenarios
  Focus: What should happen?
  Function: Development and concretization of objectives. Reflection on the desirability and feasibility of certain developments.

Backcasting scenarios
  Focus: What do we need to do to make … happen?
  Function: Identification of the actions needed to realize a targeted development path.

Policy analysis scenarios (strategy scenarios)
  Focus: What happens if …?
  Function: Development of several scenarios, each assuming a different decision today. Comparison of the strengths and weaknesses of the alternative paths.

Communication scenarios
  Focus: How can I make my idea of the future understandable to others?
  Function: Promote shared understanding about problems and perspectives. Organizational learning.

2.3 Classifying CIB


The position of CIB in the field of scenario methods has already been
indicated in the introduction: CIB is particularly suitable for the analysis of
complex future developments that can be understood only with the help of
qualitative information (Fig. 2.2). Here, CIB has a unique position in some
respects (cf. Sect. 8.1).
Fig. 2.2 A classification of scenarios and scenario methods

For other requirements, other methods are available: Scenarios for quantitative systems are often created with the help of mathematical models, for example, scenarios for traffic flows in a city assuming different variants of road construction projects. Scenarios whose subject matter can be described entirely or partially by qualitative factors and which are created quickly and without a sophisticated methodological framework are often developed using the Intuitive Logics (Wilson, 1998) or Scenario Axes methods (van’t Klooster & van Asselt, 2006). An example would be a citizen
panel on urban development where citizens are asked to express their ideas,
preferences, and concerns in a series of alternative district scenarios.
Approaches to mental processing of quantitative systems (Sector IV) do not
play a significant role in scenario practice.3
For the field in the upper right ("Sector II"), in which the need to include qualitative information coincides with the need to deal with complexity, only a few methods are established in broad application practice. The classical method for this profile of requirements is
the Consistency Matrix method (aka Field Anomaly Relaxation) developed
around 1970 (Rhyne, 1974; von Reibnitz, 1987; Johansen, 2018), and CIB
can be understood as its further development. While the consistency matrix
collects and utilizes information about which developments are mutually
exclusive, CIB goes beyond this and works with a qualitative causal model
of the system under investigation, which allows deeper access to the what
and why of system behavior (Weimer-Jehle, 2009).
CIB can be understood as a form of qualitative network analysis. The
system under study is conceived as an array of network nodes, between
which a network of bilateral influence relationships operates (Fig. 2.3). The
network is qualitative because only a limited number of discrete states are
assigned to each node, with the influence relationships between nodes
determining which states are active in the nodes. The arrows between the
nodes abbreviate a matrix indicating which state of the source node
promotes or hinders which state of the destination node. The configurations
of active states that can occur under the influence of the interactions are the
scenarios of the network behavior.
Fig. 2.3 Example of a qualitative network of interacting nodes

CIB’s general approach to analyzing a qualitative network consists of four steps, as shown in Fig. 2.4, using the example of a simple future analysis on the topic of societal development.
Fig. 2.4 Workflow of a CIB analysis

First, the factors that should be part of the scenarios and are able to
represent the most important system interrelationships are selected. These
factors are the nodes of the network and are called “descriptors” in CIB.4
Next, a small number of alternative futures (“descriptor variants”) are
formulated for each descriptor. These describe which future uncertainty (or
future openness) is assumed for the descriptor. The descriptor variants
represent the discrete states of the network nodes in Fig. 2.3. In this respect,
CIB follows the program of morphological analysis, a general method for
structuring possibility spaces (Zwicky, 1969).
In the third step, however, CIB takes a different approach than the
classical morphological analysis and turns, in the style of a cross-impact
analysis (CIA, Gordon & Hayward, 1968), to the influence relationships
between the descriptors, i.e., the arrows between the network nodes. To do
this, information is collected on whether a development in one descriptor X influences which development prevails in another descriptor Y. This information is then coded on an ordinal scale from strongly hindering to strongly promoting. These relationships are referred to as "cross-impacts."
Cross-impact analysis is the name given to a relatively broad group of
methods developed from the 1960s onward to examine qualitative
information on interacting events and trends in very different ways. With
the name “Cross-Impact Balances,” CIB places itself in this tradition, but
with the special name variant, it also refers to a characteristic peculiarity
that distinguishes it from other cross-impact analyses: the use of impact
balances as its central analysis instrument.
Finally, in the fourth step, the collected information about the pair
relationships of the system is synthesized into coherent images of the
overall system, i.e., plausible network configurations, with the help of an
algorithmic procedure. The results are interpreted as consistent scenarios.
How exactly this synthesis step is performed will be the subject of Chap. 3.
Next—no longer part of the CIB analysis in a strict sense and yet its
objective—is the utilization of scenarios by individuals or organizations in
the context of their decision-making and their preparation for the future.
However, the different uses for scenarios in planning and decision-making
processes are not CIB-specific and therefore are not the subject of this
book. Here, reference must be made to the general scenario literature.
As a rule, different people are involved in different roles in a CIB
analysis. For further use in the text, four terms are introduced:

Core team: The core team consists of the person(s) who design, conduct, evaluate, and document the CIB analysis.

Sources: The information on the main factors and interdependencies of a system required for a CIB analysis can be obtained from the literature and/or by interviewing experts. The term "(knowledge) sources" is used as an umbrella term for both resources of information acquisition.

Experts: When people play the role of knowledge sources for a CIB analysis, they are referred to as the "experts."

Target audience: The preparation of the CIB analysis aims to provide the "target audience" with orientation to the system under study and thereby support goal setting or decision-making.

In practice, the roles may overlap. For instance, people from the target
audience of a CIB analysis also may have expertise on the issue under
investigation and contribute to the analysis by participating in the expert
panel. In the remainder of the text, these terms are therefore used as role
designations, irrespective of the personnel involved.
References
Alcamo, J. (2008). The SAS approach: Combining qualitative and quantitative knowledge in
environmental scenarios. In J. Alcamo (Ed.), Environmental futures–the practice of environmental
scenario analysis (Vol. 2, pp. 123–150). Elsevier.

Fink, A., Schlake, O., & Siebe, A. (2002). Erfolg durch Szenario-Management – Prinzip und
Werkzeuge der strategischen Vorausschau. Campus verlag.

Gordon, T. J., & Hayward, H. (1968). Initial experiments with the cross impact matrix method of
forecasting. Futures, 1(2), 100–116.

Honton, E. J., Stacey, G. S., & Millet, S. M. (1985). Future scenarios–the BASICS computational method. Economics and policy analysis occasional paper (Vol. 44). Battelle Columbus Division.

Johansen, I. (2018). Scenario modelling with morphological analysis. Technological Forecasting and
Social Change, 126, 116–125.

John, S. (2009). Bewertungen der Auswirkungen des demographischen Wandels auf die
Abwasserbetriebe Bautzen mit Hilfe der Szenarioanalyse. Dresdner Beiträge zur Lehre der
betrieblichen Umweltökonomie 34/09. University of Dresden.

Kahn, H. (1960). On thermonuclear war. Oxford University Press.

van’t Klooster, S. A., & van Asselt, M. B. A. (2006). Practising the scenario-axes technique. Futures, 38(1),
15–30.

Kosow, H., & Gaßner, R. (2008). Methods of future and scenario analysis – Overview, assessment, and selection criteria. DIE Studies 39. Deutsches Institut für Entwicklungspolitik.

Linneman, R. E., & Klein, H. E. (1983). The use of multiple scenarios by U.S. industrial companies:
A comparison study 1977–1981. Long Range Planning, 16, 94–101.

Porter, M. E. (1985). Competitive advantage. Free Press.

von Reibnitz, U. (1987). Szenarien - Optionen für die Zukunft. McGraw-Hill.

Rhyne, R. (1974). Technological forecasting within alternative whole futures projections. Technological Forecasting and Social Change, 6, 133–162.

Trutnevyte, E., McDowall, W., Tomei, J., & Keppo, I. (2016). Energy scenario choices: Insights from a retrospective review of UK energy futures. Renewable and Sustainable Energy Reviews, 55, 326–337.

Wack, P. (1985a). Scenarios - uncharted waters ahead. Harvard Business Review, 62(5), 73–89.
Wack, P. (1985b). Scenarios - shooting the rapids. Harvard Business Review, 63(6), 139–150.

Weimer-Jehle, W. (2009). Szenarienentwicklung mit der Cross-Impact-Bilanzanalyse. In J. Gausemeier (Ed.), Vorausschau und Technologieplanung (pp. 435–454). HNI-Verlagsschriftenreihe 265.

Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T., Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their usage for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/10.1016/j.energy.2016.05.073

Weimer-Jehle, W., Vögele, S., Hauser, W., Kosow, H., Poganietz, W.-R., & Prehofer, S. (2020). Socio-technical energy scenarios: State-of-the-art and CIB-based approaches. Climatic Change, 162, 1723–1741. https://doi.org/10.1007/s10584-020-02680-y

Wilson, I. (1998). Mental maps of the future: An intuitive logics approach to scenario planning. In L.
Fahey & R. M. Randall (Eds.), Learning from the future: Competitive foresight scenarios (pp. 81–
108). Wiley.

Zwicky, F. (1969). Discovery, invention, research through the morphological approach. Macmillan.

Footnotes
1 The example is based on a study of the impact of demographic changes on a wastewater company
(John, 2009).

2 In short, only Pareto-optimal planning variants must be considered.

3 However, both estimative ("intuitive") and computational quantifications of qualitative scenarios partly play a role in the elaboration of qualitative "raw scenarios" into comprehensive pictures of the future (see, e.g., Alcamo, 2008 or Weimer-Jehle et al., 2016, 2020).

4 The earliest use of the descriptor term in the scenario technique known to the author goes back to
Honton et al. (1985).

3. Foundations of CIB

Keywords: Cross-Impact Balances – CIB – Scenario – Algorithm – Qualitative systems analysis – QSA – Consistency – Morphological analysis – ScenarioWizard

This chapter provides a step-by-step description of how to conduct a CIB analysis. The data objects used in CIB—descriptors, descriptor variants,
and cross-impact ratings—are introduced and CIB’s data evaluation
algorithm is explained. The goal of the chapter is to make the technical
process of applying the method and the concept of consistent scenarios, as
practiced in CIB, understandable. Many methodological and practical issues
are left aside for the time being and are addressed in subsequent
chapters. The process of a CIB analysis is outlined in this chapter in the
context of a simple demonstration. The object of the demonstration is the
fictitious country “Somewhereland” and the question, already briefly
addressed in Sect. 2.3, regarding which societal futures are plausible for this
country in, say, 20 years’ time under the effect of the interdependencies of
political, economic, and social developments.

3.1 Descriptors
Descriptors are the key topics that are used to compose the scenarios during
the analysis. Together, they should allow us to describe the system under
study and its most important internal interactions. The term originates in librarianship and computer science, where it describes words that can be used to index the content of texts or datasets. Since the 1980s, the term also has been used in scenario techniques (Honton et al., 1985). In some cases, the
term “(scenario) factors” is used instead in the scenario literature (e.g.,
Gausemeier et al., 1998).
To identify the necessary descriptors, it can be helpful to adopt a
fictitious future perspective: Imagine that the target year of the scenario
analysis had been reached and that you, as a chronicler, were faced with the
task of concisely describing the “past” development and explaining it in
retrospect. Which topics would then appear to be particularly worth
mentioning? Which connections and cause-effect relationships would have
to be explained? In a limited text, it is not possible and not necessary to go
into every detail. However, the chronicler must dissect the system into
various partial developments to the extent that the developments that have
occurred can be made understandable.
Six descriptors are used for the “Somewhereland” demonstration
analysis. Somewhereland is a multiparty democracy. Which party governs
the country and thus shapes its political course is consequently an important
but open question. Because Somewhereland has many neighboring countries with which it shares an eventful history, the stance of its foreign policy also is an essential part of the story to be told. Economic development and the distribution of wealth also will contribute to shaping how the country develops. Whether social cohesion is strengthened or fractured will be a result of developments in other areas but at the same time a cause of them. Finally, closely interwoven as a cultural undercurrent with all
these developments is the question of the social values that prevail in
Somewhereland. In summary, the analysis addresses the descriptor field
shown in Fig. 3.1.
Fig. 3.1 The descriptor field for the “Somewhereland” analysis

The selection of the descriptors is a first and decisive work step for the
analysis quality. Different procedures for the practical execution of this step
are described in Sect. 6.4.

3.2 Descriptor Variants


Openness of the future means that the descriptors could generally develop in different directions. Only then does the scenario approach make sense.
If the future were foreseeable for all descriptors, the system would be
predictable, and it would be unnecessary to develop different scenarios for
the system. The openness of the future for each descriptor is captured in
CIB by assigning a number of different developments to each descriptor as
“descriptor variants.” Figure 3.2 shows an example of possible variants for
the Somewhereland descriptor “A. Government.”
Fig. 3.2 Descriptor variants (alternative futures) for the descriptor “A. Government”

It is assumed for Somewhereland that three political parties compete democratically for power, which they can win and often maintain for a long time in a majority electoral system without coalitions. The parties
time in a majority electoral system without coalitions. The parties
characterize their programmatic emphases by the catchwords “patriotic,”
“prosperity,” and “social,” but they also pursue the other political goals to a
lesser extent in each case. The programmatic emphases of the parties are
placed in quotation marks because they are self-attributions.
The choice of variants defines the range of possible futures that are
considered realistic for the descriptor and relevant for the analysis.
Typically, 2–4 variants are assigned to each descriptor, although the number
of variants may vary from descriptor to descriptor, and occasionally, more
variants are used. For special uses, individual descriptors with only one
variant may be considered. In the “Somewhereland” example, the descriptor
variants shown in Fig. 3.3 are used.

Fig. 3.3 Compilation of descriptors and their variants for Somewhereland


The alphabetical labeling of descriptors (A, B, C, …) combined with the numbering of descriptor variants (B1, B2, B3, …) used in the example is applied throughout this book. However, this is not a binding convention in CIB.

3.2.1 Completeness and Mutual Exclusivity of the Descriptor Variants
CIB assumes that the range of possible futures of a descriptor is fully
described, albeit qualitatively, by its set of descriptor variants. The structure
in Fig. 3.3 thus means, for example, that the future rise of a fourth party,
which is insignificant today, is ruled out at least for the time period under
consideration. Otherwise, the possibility of this development would have to
be considered by inserting an additional descriptor variant for the fourth
party. Only then could the subsequent CIB analysis examine whether and
under which circumstances this fourth option is “activated.”
On the other hand, since it would be excessive and impracticable to cover every even remotely conceivable development with an additional descriptor variant, a restriction to the alternatives that are actually considered relevant and significantly different is unavoidable. This requires dexterity and an awareness that at this point the arena of possibilities in which the analysis will take place is determined.
To uniquely represent each possible future of the system under study by
exactly one scenario, the variants of each descriptor also must be defined in
a mutually exclusive manner. This means that for the development of a
descriptor that is to be described by the descriptor variant X, no other
variant of the same descriptor may be applicable at the same time.
Together, completeness and mutual exclusivity mean that each relevant
future development of a descriptor must always correspond to one and only
one of its variants.

3.2.2 The Scenario Space


Defining a scenario based on a set of descriptors and descriptor variants
means selecting one variant for each descriptor from its range of variants.
For example:

A3 B1 C2 D1 E3 F1
is an abbreviation for the scenario:

A. Government A3 "social party"
B. Foreign policy B1 cooperation
C. Economy C2 stagnant
D. Distribution of wealth D1 balanced
E. Social cohesion E3 unrest
F. Social values F1 meritocratic

With the descriptors and descriptor variants defined, the number of scenarios that can be constructed from a combinatorial point of view also is fixed, forming the scenario space from which we can select scenarios. The number of scenarios Z in the scenario space is:

$$ Z = V_1 \cdot V_2 \cdot V_3 \cdot \ldots \cdot V_N $$

where Vi is the number of variants of descriptor i and N is the number of descriptors. Table 3.1 shows some examples of scenario spaces for different numbers of descriptors N.
Table 3.1 Number of scenarios in scenario spaces of different sizes (m = million, bn = billion)

N    Vi = 2    Vi = [3,2,3,2,…]    Vi = 3
5    32        108                 243
10   1024      7776                59,049
15   32,768    839,808             14.3 m
20   1.05 m    60.5 m              3.49 bn
25   33.6 m    6.53 bn             847 bn
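These figures are easy to reproduce. The following minimal Python sketch applies the formula for Z to rows of Table 3.1 and to the Somewhereland example introduced in Sect. 3.3; the function name is illustrative:

import math

def scenario_space_size(variant_counts):
    # Z = V1 * V2 * ... * VN
    return math.prod(variant_counts)

print(scenario_space_size([2] * 10))            # 1024  (N = 10, Vi = 2)
print(scenario_space_size([3, 2] * 5))          # 7776  (N = 10, Vi = [3,2,3,2,...])
print(scenario_space_size([3, 3, 3, 2, 3, 3]))  # 486   (Somewhereland, Sect. 3.3)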

Capturing possibility spaces through a list of central characteristics and their optional variants is a general reflection technique called morphological analysis (Zwicky, 1969). It is widely used in scenario
morphological analysis (Zwicky, 1969). It is widely used in scenario
analysis (Rhyne, 1974; Honton et al., 1985; von Reibnitz, 1987;
Gausemeier et al., 1998; Johansen, 2018). Thus, this first step of the CIB
analysis is not CIB-specific. Characteristic of CIB, however, is how
scenarios are selected from the scenario space.

3.2.3 The Need for Considering Interdependence


CIB emphasizes the perspective that it would be too simplistic to see a
credible scenario in every combination of descriptor variants. This would
imply that the developments of the descriptors are all completely
independent of each other, so that, for example, the future of descriptor “E.
Social cohesion” is completely open, regardless of the direction in which,
for example, descriptors “D. Distribution of wealth” or “F. Social values”
develop. However, in reality, social cohesion in a society certainly depends
to some extent on whether or not society’s wealth is distributed according to
widely accepted criteria. Blindly combining descriptor variants would mean
ignoring such interrelationships, and a scenario thus easily loses the central
quality that distinguishes a good scenario from an arbitrary imagination of
the future and that also is at the heart of the definition of the term
“scenario” (cf. Sect. 2.1): its inner logic, its consistency. The next step on
the way to creating credible scenarios must therefore be to address the
interdependencies between the various descriptors and their variants.
Unlike the traditional consistency matrix method (cf. Sect. 8.5),
however, CIB is not limited to a correlational understanding of
interdependence (which developments can occur together and which
cannot). Instead, CIB works with a causal understanding of interdependence
(who influences whom, and how).

3.3 Coping with Interdependence: The Cross-Impact Matrix
From a combinatorial point of view, the scenario space for Somewhereland is easy to describe: When the variants of all descriptors are freely combined, there are 3 · 3 · 3 · 2 · 3 · 3 = 486 possible scenarios. For larger numbers of descriptors, the number of scenarios increases rapidly, as Table 3.1 shows.
As explained in Sect. 3.2, to ensure the internal consistency of the scenarios, it is necessary to examine the interdependencies between the descriptors, and this means first identifying them. In CIB, this is done by assessing the "cross-impacts" between the descriptor variants, i.e., by assessing the influence that the development of one descriptor exerts on the development of another descriptor.1
M1: The cross-impact x → y answers the question: If development x were to occur for descriptor X, would this promote or hinder development y for descriptor Y?

For the rating of the cross-impacts, an integer scale is used, which is mostly chosen from −3 (strongly hindering) to +3 (strongly promoting) (Table 3.2).2 However, other rating intervals, for example, from −2 to +2, also are possible in CIB.
Table 3.2 Seven-part cross-impact rating scale

Rating scale for cross-impact judgments
−3  Strongly hindering influence
−2  Hindering influence
−1  Weakly hindering influence
 0  No influence
+1  Weakly promoting influence
+2  Promoting influence
+3  Strongly promoting influence

A cross-impact assessment brings together all cross-impact ratings that relate the influence of one descriptor on another descriptor. Ideally, the
deliberations underlying the ratings are documented along with the cross-
impact values. For the influence of descriptor “A. Government” on
descriptor “B. Foreign Policy,” the cross-impact assessment might look like
Fig. 3.4. Here, among other things, the judgment is coded that a government
that places patriotic issues in the foreground of political discourse would
have (in Somewhereland) a hindering effect of medium strength on the
emergence of a cooperative style in foreign policy (entry −2 in the upper
left of the number field in Fig. 3.4).

Fig. 3.4 Cross-impact assessment of the influence between two descriptors

It also is possible to express the cross-impact ratings by cumulating plus and minus signs in the matrix cells (Fig. 3.5) instead of using the numerical values −3 … +3.
Fig. 3.5 Representing cross-impact data without use of numbers

Kosow et al. (2022), following Weitz et al. (2019), point out another
alternative form of representation (Fig. 3.6).

Fig. 3.6 Representation of cross-impact data following Weitz et al. (2019)

The evaluation of the data is identical in all cases. The difference lies
solely in the style of presentation.
Documentation of the justifications for the cross-impact ratings, as
indicated in Fig. 3.4, is not required for the technical procedure of scenario
construction using CIB. However, it is still recommended for several
reasons. First, the documentation is helpful for the core team itself to
understand the internal logic of the completed scenarios and to be able to
explain them to others. The documented arguments also make it easier for
third parties to independently understand the cross-impacts and thus the
foundation of the scenarios. This makes it easier for the target audience of
the analysis to convince themselves of the plausibility of the scenarios or to
offer more targeted and constructive criticism. Finally, it also can be
difficult for the core team itself after some time to remember in detail the
reasons for the cross-impact assessments. The documentation is then the
key to being able to understand and explain one’s own work even years
later.
All cross-impact assessments taken together form the cross-impact
matrix. In the case of Somewhereland, it takes the form shown in Fig. 3.7.
The cross-impact data used here were chosen by the author and are merely
illustrative. In practice, cross-impact data are usually collected through
literature review and/or expert elicitation (cf. Sect. 6.4).

Fig. 3.7 The cross-impact matrix of Somewhereland

M2: Just as for Fig. 3.4, it is also true for the entire matrix that in CIB the descriptors and descriptor variants in the rows are regarded as impact sources and in the columns as impact targets.

Thus, the judgment cell marked by a small circle on the right in Fig. 3.7
describes the strongly promoting impact that social unrest (E3) would have
on the emergence of family orientation as a dominant social value (F3). It is
crucial to carefully observe the convention of rows as the source of impact
and columns as the target of impact because confusing the roles of rows and
columns leads to the reversal of cause and effect in the impact relationship
and thus to the corruption of the internal logic of the scenarios.
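One way to keep this convention straight is to let the data structure mirror the reading direction. The following Python fragment is a sketch of such a layout, not a prescribed format; it contains only the two ratings explicitly quoted in the text for Fig. 3.4, with all other entries omitted:

# Rows are impact sources, columns are impact targets:
# cim[source_variant][target_variant] holds the rating "source -> target".
cim = {
    "A1": {"B1": -2},  # "patriotic" government hinders cooperative foreign policy
    "A2": {"B1": +2},  # "Prosperity party" promotes cooperative foreign policy
}

# The reading direction matters: this is the impact of A2 on B1, not the
# impact of B1 on A2 (the latter is coded as 0 in the Somewhereland matrix).
print(cim["A2"]["B1"])  # +2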
As a rule, the diagonal evaluation fields remain empty since they would
not describe interdependencies but the influence of a descriptor on itself. In
special cases, however, the diagonal fields also can be used to describe self-
influences.3 The CIB algorithm is able to handle this issue as well.
Optionally, the cross-impact matrix also can be printed without the
judgment sections completely filled with zeros (Fig. 3.8). This can improve
clarity, especially for matrices that (unlike Somewhereland) have a high
proportion of empty judgment sections. In this book, cross-impact matrices
are generally displayed without empty judgment sections. However,
representations with visible empty judgment sections also are correct and
widespread in the literature.

Fig. 3.8 The cross-impact matrix printed without influence-free judgment sections

Only direct influences should be considered when assessing cross-impacts. The consideration that social peace in Somewhereland encourages
investments and thus has a beneficial effect on economic development
describes a direct relationship that should be coded. In contrast, the
consideration that social peace makes it easier for people to develop a
meritocratic attitude, which in turn subsequently has a favorable effect on
the electoral prospects of the “Prosperity party,” would be an indirect
influence of social cohesion on government, because this chain of reasoning
leads via a third descriptor, namely from Descriptor E to Descriptor F and
only then in a second step from Descriptor F to Descriptor A.
Such indirect relationships are considered automatically by the analysis
algorithm if their components are coded as direct impacts in the matrix. The
additional coding of the indirect influence would therefore lead to a double
counting of the effect. Thus, the correct way to express the previously
described train of thought in CIB is to code both parts of the indirect effect
path separately: promoting meritocracy through social peace (E1 → F1) and
promoting the electoral prospects of the “Prosperity party” through
meritocracy (F1 → A2). For a detailed discussion of indirect influences, see
Sect. 6.3.2.
The completed cross-impact matrix is a formal representation of the
system view of the experts who made the assessments. It can only be as
good and realistic as the experts’ understanding of the system, and other
experts may arrive at different assessments. The role of a CIB analysis is to
capture the systemic implications of the interdependencies laid out in the
matrix, whether or not the coded system view is contestable.

3.4 Constructing Consistent Scenarios


Having defined the space in which scenarios can be searched by specifying
the descriptors and their variants, and having formulated a conceptual
system model by collecting the cross-impacts that will be the database for
distinguishing between consistent and inconsistent scenarios, we turn to the
heart of the CIB method: scenario construction using the CIB algorithm.
The idea of CIB can be demonstrated graphically or in tabular form. We
start with the graphical visualization of the approach.

3.4.1 The Impact Diagram


Without considering the interdependencies between the descriptors, any
combination of descriptor variants could be considered a scenario. As
mentioned in Sect. 3.3, in the case of the Somewhereland example, this
would be 486 scenarios. With the help of the cross-impact matrix, it is now
possible to assess for each of these scenarios whether it is in line with the
interdependencies formulated in the matrix, or whether contradictions, i.e.,
inconsistencies, appear. Figure 3.9 shows one of these 486 scenarios: the scenario [A2 B1 C3 D1 E1 F1], i.e., a scenario in which for descriptor A the variant A2 is active, for descriptor B the variant B1 is active, and so on.

Fig. 3.9 One of 486 scenarios for Somewhereland


The formal cross-checking between this scenario and the cross-impact
matrix is done in CIB by looking up for each descriptor pair which
influence relationship exists between their active descriptor variants. For
the influence relationship between the descriptor boxes “A. Government”
and “B. Foreign Policy,” this is shown in Fig. 3.10.

Fig. 3.10 Consulting the cross-impact matrix on the influence relationship A2 → B1

The extract from the cross-impact matrix in the upper left-hand corner
of Fig. 3.10 shows that if the descriptor “A. Government” is assigned the
variant “A2 ‘Prosperity party‘” and the descriptor “B. Foreign policy” is
assigned the variant “B1 Cooperation,” the impact of A on B will be a
medium strength promotion (+2, see the highlighted judgment cell in the
upper left-hand corner of Fig. 3.10). This is represented graphically in Fig.
3.10 by a green arrow of medium strength. In terms of content, this
expresses the judgment that an economy-focused party is likely to seek
cooperation with other countries to promote trade relations and thus the
domestic economy. A reverse effect from B1 to A2 is not coded in the
matrix (cross-impact: 0).
Likewise, the cross-impact matrix can be used to graphically represent
all other descriptor relationships for the scenario under study. Figure 3.11
shows this result.

Fig. 3.11 The impact diagram of scenario [A2 B1 C3 D1 F1 E1]

It is important for understanding CIB to keep in mind that the diagram shown in Fig. 3.11 applies exclusively to the scenario studied here. Each of the 486 combinatorial scenarios of Somewhereland has its own impact diagram.

3.4.2 Discovering Scenario Inconsistencies Using Impact Diagrams
The impact diagram allows us to assess the consistency of each descriptor variant and of the scenario as a whole. Descriptor "C. Economy" with its variant "C3 Dynamic" attracts numerous green arrows. This means that many pro-economic developments take place in the scenario under study, that is, an economy-focused government, the cooperative relationship with neighboring countries, a society that is at peace with itself, and a culture that values performance. Overall, the large number of green arrows arriving at "C3 Dynamic economy" indicates that the assumption made here is plausible and understandable against the background of the other descriptor variants active in the scenario—provided one accepts the system description encoded in the matrix.
The situation is different for descriptor “D. Distribution of wealth,”
which is assigned the variant “D1 Balanced” in the scenario under study.
There is not a single green arrow pointing to this descriptor, whereas there
are three red arrows. This expresses that, although the assumption
“Balanced distribution of wealth” is part of the scenario under study,
actually nothing in the scenario speaks in favor of this assumption, but
much speaks against it: Pro-business policy of the government and
meritocracy in society tend to fuel income and wealth contrasts in the
population (at least in Somewhereland), and even though dynamic
economic growth brings wealth gains for many, the gains turn out to be
higher for the wealthy than for the lower income groups. Thus, it is largely
incomprehensible why a balanced distribution of wealth should be assumed
in this environment, and the bundle of incoming red arrows at descriptor D
is the formal expression of the lack of plausibility for this component of the
scenario. The negative result of this plausibility check is anticipated in Fig.
3.11 by drawing the questionable descriptor variant “D1 Balanced” for
descriptor D against a red background.
Thus, according to this form of graphical plausibility check, three descriptors in Fig. 3.11 (A, B, and C) are assigned unambiguously plausible descriptor variants. Two descriptors (E and F) receive mixed impacts. However, since the green arrows predominate, the selected descriptor variants appear acceptable in these cases as well. One descriptor (D), however, is clearly at odds with its environment and therefore—as a part of this particular scenario—implausible.
In CIB, the entire scenario must thus be discarded. It does not form an
intact fabric of mutually supporting assumptions about the descriptor
futures, and the scenario is therefore discredited as a logical construct. Just
as in mathematics, where proofs are valid only if no link in the chain of
reasoning fails, CIB accepts a scenario as a consistent solution to the cross-
impact matrix only if no logical weakness is shown at any descriptor.
Rather, in a consistent scenario, a variant must be active for each descriptor
that is understandable in light of the influences of the other descriptors. As
will be shown, this is typically true for only a very small part of the possible
combinations of descriptor variants.

3.4.3 Formalizing Consistency Checks: The Impact Sum


In view of the high number of scenarios to be checked in real applications, a
consistency check is ultimately practicable only if it can be performed with
software support. This requires a formalization of the described visual
inspection of the impact diagrams. The key to this formalization is the
impact sum, which is obtained in the impact diagram for each descriptor by
summing up all the influences acting on the descriptor, considering the
signs and strength ratings. The impact sum expresses the net balance of the
promoting and hindering influences on the descriptor. In Fig. 3.12, the
impact sums are shown below each descriptor.

Fig. 3.12 The impact diagram of Fig. 3.11 with the impact sums of the descriptors

For instance, Descriptor E attains the impact sum +4, since the promoting impacts +3 (Descriptor C) and +3 (Descriptor D) and the hindering impact −2 (Descriptor F) are acting on it. In total, this results in +3 + 3 − 2 = +4.
The plausibility argument for the impact diagram developed in Sect.
3.4.2 requires that for each descriptor, a variant is active that attracts as
many green arrows as possible and as few red arrows as possible, whereby
strong influences bear a higher weight than weak ones. Translated to the
concept of impact sums, this means that a scenario is plausible and
internally consistent if for each descriptor the impact sum is as high as
possible. In accordance with the above visual inspection of the impact diagram in Fig. 3.11, a glance at the impact sums in Fig. 3.12 now shows that the assumed variant for Descriptor C is particularly plausible and the assumed variant for Descriptor D is strikingly implausible.
Next, we need to clarify how to interpret the “highest possible impact
sum” requirement. In CIB, the following understanding applies (Weimer-
Jehle, 2006):
M3: The impact sum of a descriptor is not "as high as possible," and its active variant is therefore inconsistent with the rest of the scenario, if the impact sum could be increased by changing to a different descriptor variant.

If even one of the descriptors has an inconsistent descriptor variant, then the scenario as a whole must be rejected as inconsistent. Only if the impact
sum for none of the descriptors can be increased by changing the active
variant is the consistency of the scenario confirmed and the scenario
accepted as a valid solution of the matrix.4
Thus, CIB imposes a comparative criterion on the impact sums (“strong
consistency”). Absolute criteria, such as the impact sum must be at least 0
for all descriptors (“weak consistency”), are possible but can lead to
scenarios that appear questionable against the background of the strong
consistency principle and are rarely used in practice.
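To make the criterion concrete, the following Python sketch expresses the impact sum and the strong consistency test (M3) for a single scenario. The data layout (variant-keyed dictionaries, as in the fragment shown in Sect. 3.3) is an illustrative assumption, not the implementation of the ScenarioWizard software:

def impact_sum(matrix, scenario, descriptor, variant):
    # Net balance of all impacts acting on `variant` of `descriptor`,
    # given the active variants of the other descriptors in `scenario`.
    # Layout: matrix[source_variant][target_variant] -> rating.
    return sum(matrix.get(src_variant, {}).get(variant, 0)
               for src_descriptor, src_variant in scenario.items()
               if src_descriptor != descriptor)

def is_consistent(matrix, variants, scenario):
    # Strong consistency (M3): for every descriptor, no alternative variant
    # may achieve a higher impact sum than the active one (ties are allowed).
    for descriptor, active in scenario.items():
        active_sum = impact_sum(matrix, scenario, descriptor, active)
        for alternative in variants[descriptor]:
            if impact_sum(matrix, scenario, descriptor, alternative) > active_sum:
                return False
    return True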

3.4.4 The Formalized Consistency Check at Work


Figure 3.13 shows that the impact sum for Descriptor D can actually be
increased in our test scenario by changing the active variant for this
descriptor. At the top, the part of Fig. 3.12 responsible for the impact sum
of Descriptor D in the test scenario is shown. The bottom of Fig. 3.13
shows for comparison how the impact sum for Descriptor D changes if—
with an otherwise unchanged scenario—D2 (large contrasts in wealth
distribution) is assumed to be active instead of D1: The impact sum for
Descriptor D increases due to the change of the active variant from −7 to
+7. This expresses that the assumption of large contrasts in wealth
distribution is more favored by the variants of the other descriptors in this
scenario than by the originally assumed descriptor variant.

Fig. 3.13 Demonstration of inconsistency of descriptor variant D1

This observation proves that no consistent variant was selected for Descriptor D in the test scenario [A2 B1 C3 D1 E1 F1]. Thus, the scenario is rejected, and the consistency check of the test scenario has produced a clear answer.
It is noteworthy that the local recovery of consistency at an inconsistent
descriptor by replacing the inconsistent descriptor variant by its consistent
counterpart does not necessarily mean that the modified scenario is now
automatically consistent. In the example, the modified scenario is actually
inconsistent because the improvement made to Descriptor D has caused a
new inconsistency elsewhere (Fig. 3.14). The variant D2 now selected for
Descriptor D fits into the picture as long as the rest of the scenario is
assumed to continue. However, the new variant D2 calls into question
whether the assumption “E1 Social cohesion: Social peace” is still tenable,
since large contrasts in wealth undermine social peace in Somewhereland
(according to the matrix author). This interdependence of the plausibility of
the individual components of a scenario underscores the complexity of the
task of finding a thoroughly consistent scenario, and this underlines the
special quality of the few scenarios that are completely flawless from the
point of view of the CIB consistency check.

Fig. 3.14 Consequential inconsistency in Descriptor E after adjustment of Descriptor D

3.4.5 From Arrows to Rows and Columns: The Matrix-Based Consistency Check
The consistency check based on the impact diagrams is illustrative and
plausible. However, for larger systems, impact diagrams become confusing
when the number of arrows rises. A further drawback is that an impact diagram shows only the impact sums of the active descriptor variants. The comparison with the impact sums of the nonactive variants, which is decisive for the consistency assessment, cannot be made directly but requires, as in Fig. 3.13, a comparison with further impact diagrams showing the descriptors with modified variants.
application practice it is necessary to perform a consistency assessment by
hand, for example, to verify the results of the software-based evaluation for
illustration, this is usually better done based on a tabular implementation of
the CIB algorithm. Figure 3.15 shows how the calculation of the impact
sums known from Fig. 3.12 can be carried out in tabular form.

Fig. 3.15 Matrix-based calculation of impact sums for scenario [A2 B1 C3 D1 E1 F1]
For this purpose, all rows and columns belonging to the active
descriptor variants of the examined scenario are marked in the cross-impact
matrix. The intersection cells of the marked rows and columns (highlighted
in dark in Fig. 3.15) represent—if they do not carry the value 0—the impact
arrows shown in the corresponding impact diagram. Thus, entry “3” in the
intersection cell of Row F1 and Column A2 corresponds to the thick green
arrow drawn from descriptor box F to descriptor box A in Fig. 3.12. The
column sums of the intersection cells in Fig. 3.15 equal the impact sums for
the corresponding descriptor, as confirmed by comparison with Fig. 3.12.
The practical advantage of the matrix-based consistency check is that
the impact sums of the nonactive descriptor variants also can be easily
derived. For this purpose, only the rows, but not the columns, of the active
descriptor variants are marked, as shown in Fig. 3.16. The sum of all
marked rows then yields the impact balances,5 i.e., the compilation of the
impact sums of all descriptor variants, regardless of whether they are active
(bottom row in Fig. 3.16).

Fig. 3.16 Matrix-based calculation of the complete impact balances of a scenario

In the row “impact balances,” the impact sums of the active descriptor
variants (shown inverted) can now be easily compared with the impact
sums of the nonactive variants of the same descriptor. In this way, it can be
determined for which descriptors a nonactive descriptor variant would
achieve a higher impact sum than the active variant and thus indicate
inconsistency (a tie would be acceptable). For our test scenario, we already
know from the graphical consistency check in Sect. 3.4.4 what the result
will be. The matrix-based consistency check in Fig. 3.16 leads to the same
result: Descriptor D (and only Descriptor D) violates the consistency
condition because the nonactive descriptor variant D2 achieves a higher
impact sum of +7 than the active variant D1, which achieves only −7. All
other descriptors, however, satisfy the consistency condition in this
scenario: The active descriptor variant A2 achieves an impact sum of +3
and thus is higher than the impact sums of A1 (0) and A3 (−3). The same
applies to the impact sums of the variants of Descriptors B, C, E, and F.
Of course, even for a small CIB matrix such as that of Somewhereland, it would be infeasible to obtain the solutions by manual consistency checks of all descriptor variant combinations. This check must be executed in a software-based manner. However, the matrix-based consistency check makes it possible to convince oneself of the correctness of the calculated scenarios quickly and easily with the help of paper and pencil. This can help the core team better understand the inner logic of the identified scenarios. Moreover, the manual check also can be a confidence-building tool to visualize the validity of the computer results to participants of the scenario exercise who are not familiar with the CIB method or to the target audience of the scenario analysis. Because of this possibility of retrospective validation without technical aids, CIB analysis can avoid or at least mitigate black-box effects.
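The tabular procedure translates directly into array arithmetic. The following Python sketch uses a small hypothetical three-descriptor system (the ratings are invented for illustration and are not the Somewhereland data); the impact balance of a scenario results as the column sums over the rows of the active variants:

import numpy as np

variant_names = ["A1", "A2", "B1", "B2", "C1", "C2"]
cim = np.array([
    # A1  A2  B1  B2  C1  C2   (columns: impact targets)
    [  0,  0, +2, -2, +1, -1],  # row A1: impacts exerted by A1
    [  0,  0, -1, +1,  0,  0],  # row A2
    [ +1, -1,  0,  0, +2, -2],  # row B1
    [  0,  0,  0,  0, -3, +3],  # row B2
    [ +2, -2, +1, -1,  0,  0],  # row C1
    [ -1, +1,  0,  0,  0,  0],  # row C2
])

# Mark the rows of the active variants and sum them column by column.
active = [variant_names.index(v) for v in ("A1", "B1", "C1")]
impact_balance = cim[active].sum(axis=0)
print(dict(zip(variant_names, impact_balance.tolist())))
# {'A1': 3, 'A2': -3, 'B1': 3, 'B2': -3, 'C1': 3, 'C2': -3}
# Each active variant outperforms its alternative, so this toy scenario is consistent.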

3.4.6 Scenario Construction


The described consistency analysis (“CIB algorithm”) allows us to check
each proposed scenario for its internal consistency and its consistency with
the cross-impact matrix. The result of the check is, in simple terms, a “yes”
(scenario is consistent and can be used) or a “no” (scenario has at least one
flaw and should be discarded).6 Strictly speaking, the CIB algorithm is
therefore a scenario consistency assessment tool, not a scenario
construction tool.
However, this assessment tool also is the key to scenario construction:
The construction is done in the standard CIB procedure by checking all
descriptor variant combinations of the scenario space and identifying the
few combinations that pass the consistency check as “solutions of the
matrix,” i.e., as consistent scenarios. Since the consistency check is
formulated in a programmable way by introducing the impact sums, cross-
impact matrices with millions or billions of variant combinations can be
evaluated.7
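A minimal sketch of this exhaustive search in Python, reusing the functions impact_sum and is_consistent from the sketch in Sect. 3.4.3 together with the data layout assumed there:

from itertools import product

def scenario_portfolio(matrix, variants):
    # Check every combination of descriptor variants and keep the few
    # combinations that pass the consistency check as solutions of the matrix.
    descriptors = list(variants)
    portfolio = []
    for combination in product(*(variants[d] for d in descriptors)):
        scenario = dict(zip(descriptors, combination))
        if is_consistent(matrix, variants, scenario):
            portfolio.append(scenario)
    return portfolio

# For Somewhereland, `variants` would map "A" to ["A1", "A2", "A3"], etc.;
# product() then generates all 486 combinations, of which 10 survive.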
In the Somewhereland example, 10 of the 486 combinations of
descriptor variants pass the consistency check. These 10 successful
scenarios will be referred to below as the “scenario portfolio” of the matrix
and will be considered in detail in the next subchapter.

3.5 How to Present CIB Scenarios


As mentioned above, 10 of the 486 combinatorially possible scenarios of
the Somewhereland matrix prove to be successful in the CIB consistency
test and are considered solutions of the Somewhereland matrix (“consistent
scenarios”). These scenarios can be presented in several ways. The obvious
way is to simply list the descriptors together with their active variants (“list
format”). Table 3.3 shows this for one of the consistent scenarios of the
Somewhereland matrix.
Table 3.3 A Somewhereland scenario in list format

Scenario No. 7
A. Government A3 “social party”
B. Foreign policy B1 cooperation
C. Economy C3 dynamic
D. Distribution of wealth D2 strong contrasts
E. Social cohesion E1 social peace
F. Social values F1 meritocratic

However, this format is cumbersome for larger scenario portfolios and makes it difficult to perceive the differences between scenarios. The "short format" is more compact:
Scenario No. 7: [A3 B1 C3 D2 E1 F1].
Using the “short format,” all 10 consistent scenarios of the
Somewhereland matrix resulting from the consistency check of the 486
descriptor variant combinations can be reported in a concise form (Table
3.4).
Table 3.4 The 10 Somewhereland scenarios in short format

1. [A1 B3 C1 D2 E3 F3]   or, even more compactly: [1 3 1 2 3 3]
2. [A1 B3 C2 D1 E1 F3]   [1 3 2 1 1 3]
3. [A1 B3 C2 D1 E1 F2]   [1 3 2 1 1 2]
4. [A1 B2 C2 D1 E1 F2]   [1 2 2 1 1 2]
5. [A3 B2 C2 D1 E1 F2]   [3 2 2 1 1 2]
6. [A3 B1 C2 D1 E1 F2]   [3 1 2 1 1 2]
7. [A3 B1 C3 D2 E1 F1]   [3 1 3 2 1 1]
8. [A3 B2 C3 D2 E1 F1]   [3 2 3 2 1 1]
9. [A2 B2 C3 D2 E2 F1]   [2 2 3 2 2 1]
10. [A2 B1 C3 D2 E2 F1]  [2 1 3 2 2 1]

The numbering of the scenarios can basically be chosen freely because the order of listing implies no judgment of the scenarios.8 It also is important to be aware that neither the order of the descriptors in the matrix nor the order of the variants within a descriptor has an effect on the algorithm and thus on the formation of the scenarios. Both the order of the descriptors and the order of their variants can be chosen freely.
However, reading the content of a scenario portfolio listed in short
format is tedious. The tableau format often used in CIB studies is better
suited for this, either with integrated (Fig. 3.17) or separate descriptor
listing (Fig. 3.18).9

Fig. 3.17 The Somewhereland scenarios in tableau format with integrated descriptor listing

Fig. 3.18 The Somewhereland scenarios in tableau format with separate descriptor listing

In the tableau format, the scenarios are to be read vertically from top to
bottom. If neighboring scenarios match the variants of a descriptor, the
relevant table cells are merged to highlight the similarity of the scenarios at
this point and to increase the readability of the tableau by reducing the
amount of text. The order of the scenarios can be changed to bring similar scenarios into proximity to each other; such a sorting has already been applied here.
The top row of the tableau contains scenario titles (or mottos) that
summarize the essence of the scenario. These titles are not a result of the
CIB evaluation but are created by interpreting the scenarios. Different
scenarios with similar characteristics can be combined into a scenario
family by means of a shared title. The individual scenarios within the
scenario family then present themselves as subvariants of the shared title.
Often a suitable title is suggested by reading the scenario. It can also be
helpful to look at the scenario’s impact diagram (or its tabular equivalent) to
gain an understanding of the cause-effect relationships in the scenario and
to draw inspiration for the choice of title. Occasionally, one also finds that
certain descriptor variants are exclusive to a single scenario or scenario
family and can therefore be considered characteristic features that suggest
an informative scenario title. For example, “C1 Shrinking economy” and
“E3 Unrest” occur only in Scenario no. 1 and were therefore the inspiration
for the title of Scenario no. 1 chosen in Figs. 3.17 and 3.18. “A2 ‘Prosperity
party’” and “E2 Tensions” occur only in scenario family no. 9/no. 10 and
therefore shape the view of these scenarios. A particularly thorough but
time-consuming way to find a title is to formulate a storyline for the
scenario and then condense the storyline into a title. The procedure for
storyline development is discussed in detail in Sect. 4.6.
Scenario no. 10 builds on the test scenario used to explain the CIB
consistency check in Figs. 3.12 and 3.14. This scenario follows the original
test scenario with respect to Descriptors A, B, C, and F but considers the
change in Descriptor D, the necessity of which was made clear in Fig. 3.12,
and adjusts for the consequential inconsistency in Descriptor E, which arose
after the correction of D in Fig. 3.14. Finally, with these changes, the
complete consistency of the scenario is achieved, which is evident by its
appearance on the solution list. The corresponding impact diagram can be
seen in Fig. 3.19.

Fig. 3.19 Impact diagram of Somewhereland scenario no. 10

In Fig. 3.19, several aspects are worth noting. As expected, the CIB
algorithm ensures that no descriptor in a consistent scenario has a
particularly poor impact sum. However, this does not mean, as Fig. 3.19
and the impact diagrams shown earlier might suggest, that impact sums in
consistent scenarios are generally nonnegative10 or that inconsistent
descriptors have generally negative impact sums. Since the CIB consistency
principle is designed as a comparative requirement for the impact sums, the
consistency of a descriptor is decided only by comparing its impact sum
with the impact sums that the other variants of the descriptor would
achieve. These are not directly visible in the impact diagram, so that the
level of the impact sum in the impact diagram is only a provisional
indication of whether there is consistency for the descriptor in question. In
the end, however, the impact sums of the alternative descriptor variants
must be calculated and compared.
Figure 3.19 further shows that impact diagrams of consistent scenarios
also can contain descriptors that are exposed to hindering (red) impacts.
However, the consistency condition also guarantees for these descriptors
that their active variant was a good choice despite the hindering impacts,
either because the hindering impacts are countered by sufficient promoting
impacts or because the alternative variants of the descriptor would perform
even worse or at least not better in their impact sums. For example, Fig.
3.19 shows that assumed dynamic economic development is hindered by
social tensions (which can cause a deterioration in investor confidence,
impair the consumer climate and disturb industrial peace in companies). On
the other hand, business-friendly policies of the government, cooperative
foreign relations and a widespread meritocratic culture provide so many
forces promoting dynamic economic development that the assumption of
weak economic development would be more questionable than the dynamic
economic development assumed in the scenario.

3.6 Key Indicators of CIB Scenarios


As described, the core function of CIB is to confirm or refute the
consistency of a scenario. However, CIB generates additional information
as part of this review process, which can also be useful for understanding
and assessing the scenario.

3.6.1 The Consistency Value


The consistency value is the most important key indicator of a scenario in CIB practice. Its sign determines the two-valued statement "consistent" or "not consistent." In addition, it allows for a gradation of the consistency assessment on an integer scale. Consistency values can be calculated for individual descriptors as well as for the whole scenario.

Descriptor Consistency Values


As explained in Sect. 3.4.3, the decisive consistency criterion in CIB is that
the impact sum of an active descriptor variant is not exceeded by any
impact sum of a nonactive variant of the same descriptor. It is therefore
natural to define the degree of consistency by the difference by which the
impact sum of the active variant exceeds the impact sums of the nonactive
variants of the same descriptor. We can read these differences from the
impact balances of the descriptors. The impact balances of the already
known test scenario [A2 B1 C3 D1 E1 F1] are shown in Fig. 3.16 and are
summarized in Fig. 3.20 (active descriptor variants and their impact sums
are printed inverted).

Fig. 3.20 Consistency values of the descriptors calculated for the test scenario

In the case of Descriptor A, the impact sum of the active descriptor
variant A2 is +3. For the nonactive variants, the impact sums amount to 0
for A1 and −3 for A3. The advantage of the active impact sum over the
highest impact sum of all nonactive variants is +3 − Max{0, −3} = +3 −
0 = +3. This advantage is called the consistency value of Descriptor A, or
consistency of A for short. The consistency values for the other descriptors
are calculated accordingly. The consistency value for Descriptor D is
negative, which indicates its inconsistency.
In summary, this definition means that the consistency value for
consistent descriptors is at least 0. The higher the consistency value is, the
clearer the plausibility advantage of the active descriptor variant over its
alternatives. Negative consistency values denote inconsistent descriptors
because negative values indicate that a change in the active descriptor
variant leads to a gain for the impact sum of the descriptor in question.
Accordingly, the more negative this value is, the more pronounced the lack
of plausibility.
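
For readers who want to experiment with this calculation rule, the following Python sketch computes impact sums and descriptor consistency values. The data layout (a dictionary of cross-impact ratings keyed by source and target variant) and all function names are illustrative assumptions, not the data format of any particular CIB software.

# Assumed layout:
# cim[(source_descriptor, source_variant)][(target_descriptor, target_variant)]
# holds one cross-impact rating; missing entries count as 0.

def impact_sum(cim, scenario, target, variant):
    """Sum of the impacts that the active variants of all other
    descriptors exert on one variant of the target descriptor."""
    return sum(
        cim.get((d, v), {}).get((target, variant), 0)
        for d, v in scenario.items() if d != target
    )

def descriptor_consistency(cim, scenario, variants, target):
    """Advantage of the active variant's impact sum over the highest
    impact sum among the nonactive variants of the same descriptor."""
    active = impact_sum(cim, scenario, target, scenario[target])
    rivals = [impact_sum(cim, scenario, target, v)
              for v in variants[target] if v != scenario[target]]
    if not rivals:  # single-variant (e.g., forced) descriptor: trivially consistent
        return 0
    return active - max(rivals)

# Descriptor A in the test scenario: active impact sum +3, nonactive
# impact sums 0 and -3, hence consistency +3 - max(0, -3) = +3.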

Scenario Consistency Values


The result of the consistency value calculation for the descriptors also is the
basis for the consistency evaluation of the scenario as a whole. As already
stated in Sect. 3.4.2, CIB requires that the inner logic of the scenario must
not have any local flaws. The scenario must therefore be evaluated
according to the descriptor with the lowest consistency, and the consistency
value for the scenario as a whole is thus defined as the minimum of the
consistency values of all descriptors. For the test scenario, the descriptor
consistencies CD take the following values, according to Fig. 3.20:
$$ {C}_D=\left[3,1,11,-14,5,3\right]. $$
The consistency value of the scenario [A2 B1 C3 D1 E1 F1] is therefore
$$ {C}_S=\operatorname{Min}\left\{{C}_D\right\}=-14, $$
which identifies it as a clearly inconsistent scenario. A scenario is
considered consistent if it consists exclusively of consistent descriptors and
thus has a scenario consistency value of CS = 0 or higher.
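
Building on the sketch above (same assumed names and data layout), the scenario consistency value is simply the minimum over the descriptors:

def scenario_consistency(cim, scenario, variants):
    """CS = minimum descriptor consistency; CS >= 0 marks a consistent
    scenario, a negative value an inconsistent one."""
    # Simplification: autonomous descriptors (empty matrix columns) are
    # not filtered out here; they contribute a consistency of 0.
    return min(descriptor_consistency(cim, scenario, variants, d)
               for d in scenario)

# For the test scenario, the descriptor consistencies are
# [3, 1, 11, -14, 5, 3], so scenario_consistency(...) returns -14.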

Nonconsideration of Autonomous Descriptors


Descriptors that are not under the influence of other descriptors and
therefore have an empty matrix column represent external influences on the
system in CIB (cf. Sect. 6.1.2). Accordingly, their impact balance
necessarily consists of zero values, regardless of the scenario. Hence, the
impact balances of autonomous descriptors do not contain any information
about scenario consistency. Autonomous descriptors are therefore
disregarded in determining the scenario consistency value. Somewhereland
does not include any autonomous descriptor.

Inconsistency Scale
The inconsistency value of a descriptor or a scenario describes the result of
the consistency assessment from the opposite perspective: A scenario with a
consistency value of −4 has an inconsistency value of 4. Inconsistency
values, however, do not have negative values by convention, i.e., a scenario
with a consistency value of +1 has the inconsistency value of 0 (cf. Fig.
3.21). This is to account for the understanding that the characterization
“zero inconsistency” expresses the absence of inconsistency and should
therefore generally encompass all consistent scenarios, regardless of their
varying degree of positive consistency. Scenarios with inconsistency values
of 0, 1, 2, etc. (i.e., consistency values ≥0, −1, −2, …), are abbreviated as
IC0 scenarios, IC1 scenarios, IC2 scenarios, and so on.

Fig. 3.21 Comparison of consistency and inconsistency scales

Thus, inconsistency values do not contain any new information
compared to the consistency values. They merely provide an additional
linguistic option to avoid the use of negative indicator values when
describing inconsistent scenarios.
Global Inconsistency
In addition to the concept of inconsistency described here, the concept of
“global inconsistency” also has been introduced. It is rarely used in CIB
practice. Unlike ordinary inconsistency, it is not based on the most
inconsistent descriptor alone but adds up the inconsistency values of all
inconsistent descriptors of a scenario (Weimer-Jehle, 2006).
scenarios with only one inconsistent descriptor (as in the case of the test
scenario in Fig. 3.20), there is accordingly no difference between global or
ordinary (local) inconsistency. Consistent scenarios are characterized by the
global inconsistency value 0, just as in the case of ordinary inconsistency.
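
Both scale conventions reduce to two short formulas; a minimal sketch, again under the assumptions of the earlier code:

def inconsistency(consistency):
    """Flip the scale; by convention, inconsistency is never negative."""
    return max(0, -consistency)

def global_inconsistency(descriptor_consistencies):
    """Sum of the inconsistency values of all inconsistent descriptors."""
    return sum(inconsistency(c) for c in descriptor_consistencies)

cds = [3, 1, 11, -14, 5, 3]        # test scenario, cf. Fig. 3.20
print(inconsistency(min(cds)))     # ordinary (local) inconsistency: 14
print(global_inconsistency(cds))   # global inconsistency: also 14, since
                                   # only one descriptor is inconsistent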

3.6.2 The Consistency Profile


The set of descriptor consistency values of a scenario is called the scenario
consistency profile. Thus, the descriptor consistencies [3, 1, 11, −14, 5, 3]
of the test scenario [A2 B1 C3 D1 E1 F1], which can be obtained from Fig.
3.20, constitute its consistency profile. In this example, the central message
of the consistency profile is quite obvious, namely, that the scenario in
question is inconsistent and that descriptor D is responsible for it. However,
the consistency profile also provides insightful information for consistent
scenarios. Figure 3.22 shows the consistency profiles of the consistent
Somewhereland scenarios no. 3 and no. 10, taken from Table 3.4. The
profiles were computed according to the calculation scheme shown in Fig.
3.16 and the calculation rules for the descriptor consistencies (Fig. 3.20).

Fig. 3.22 Two examples of consistency profiles

This shows that the consistency values for the descriptors within a
scenario are highly scattered in both cases, and this is typical for CIB
scenarios. The consistency of a descriptor can be interpreted as an indicator
of the well-foundedness of its active variant. That is, the active variant for
wealth distribution in Scenario no. 10 (D2 Large contrasts) is a particularly
well-founded assumption within this scenario because it clearly
outperforms the opposite assumption (D1 Balanced).

Consistency Profile and Scenario Stability


On the other hand, both scenarios shown in Fig. 3.22 include descriptors
with zero consistency. These descriptors are marginally consistent, and the
foundation of the corresponding scenario assumptions is less convincing: At
least one other assumption for these descriptors would be similarly
plausible. One possible interpretation of the descriptor consistency score is
that systems are presumably most likely to be vulnerable at the low-
consistency descriptors, in particular at the marginally consistent
descriptors, where they can most easily lose their stability due to internal or
external perturbations. They could be the “breaking points” of the system
state, where the system begins to move away from its previous stable state
when perturbed and begins to search for another stable system state.
Therefore, marginally consistent descriptors also are referred to as
marginally stable.

Consistency Profile and Judgment Uncertainty


For a similar reason, the consistency profiles also indicate which cross-
impact judgments require the most attention when critically reviewing the
role of data uncertainty in the scenario construction process. For example,
in Scenario no. 3, data uncertainty in the columns of Descriptors A, D, and
E is of little relevance because the consistency values for the descriptors
are so high that consistency for these descriptors would not be jeopardized
even if individual cross-impact judgments had to be slightly revised. In
contrast, in the columns of Descriptors B and F, even minor judgment
revisions could cause the consistency of Scenario no. 3 to be lost.
As practice shows, it is the rule that CIB scenarios contain one or more
marginally stable descriptors. Scenarios without marginally stable
descriptors are rather an exception. Following the definition of scenario
consistency CS in Sect. 3.6.1, this also leads to the conclusion that it is the
rule that consistent scenarios carry a consistency value of 0 and that
scenarios with higher consistency values are rather rare. This is also true for
the Somewhereland scenarios, of which 9 of the 10 scenarios have scenario
consistency CS = 0. The fact that the presence of marginally stable
descriptors is the normal case in CIB scenarios can be taken as an implicit
commentary of the CIB method on the nature of complex systems,
according to which the vast majority of these systems are usually in system
states that have at least one vulnerability to their stability. From the CIB
perspective, robustly stable complex systems must be understood as
exceptions.

3.6.3 The Total Impact Score


The total impact score of a scenario is defined as the sum of the impact
sums of all active descriptor variants. For our familiar test scenario [A2 B1
C3 D1 E1 F1], the total impact score can be taken from Fig. 3.15:
$$ \mathrm{TIS}=3+2+10-7+4+2=+14 $$
The same result is obtained by adding all intersection cells in Fig. 3.15.
The addition of the strength values of all arrows printed in the impact
diagram Fig. 3.11, considering their signs, also leads to the same result.
The latter calculation method also conveys the meaning of the total
impact score: Scenarios in which there are numerous and strong promoting
impacts between the descriptors (green arrows in Fig. 3.11) and only a few
hindering impacts (red arrows) show a strong internal logic of the scenario.
The total impact scores are high in these cases. Conversely, scenarios with a
low total impact score correspond to impact diagrams in which few or weak
green arrows and/or numerous and strong red arrows, i.e., hindering
impacts, are active, indicating a weak inner logic of the scenario.
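
In the notation of the earlier sketches, the total impact score is one more summation (an illustration under the same assumptions, not an official implementation):

def total_impact_score(cim, scenario):
    """Sum of the impact sums of all active descriptor variants; high
    values indicate a strong internal logic of the scenario."""
    return sum(impact_sum(cim, scenario, d, v)
               for d, v in scenario.items())

# For the test scenario, this reproduces TIS = +14 (cf. Fig. 3.15).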
Similar to the consistency value, the total impact score can be calculated
for all scenarios, regardless of whether they are consistent or inconsistent.
While the consistency value, as a “local” indicator, assesses scenarios
according to their weakest point, the total impact score supplements this
assessment by a “global” indicator, assessing the entire scenario. Scenarios
of high consistency also tend to have a relatively high total impact score.
However, the correlation is not strict, and therein lies the added value of the
total impact score as a separate metric.
In CIB, however, it is primarily the consistency value that is decisive for
the scenario assessment, which is why the total impact score is usually used
only as a supplement for the comparison of scenarios with the same
consistency value. In a group of scenarios with the same consistency, the
total impact score can be used to determine which of them has the higher
overall logical strength.

3.7 Data Uncertainty


In principle, CIB categorically distinguishes between consistent and
inconsistent scenarios. However, it must be kept in mind that the
consistency assessment of the scenarios is based on the cross-impact data
and that these are usually the result of expert estimates. Therefore, cross-
impact data usually cannot be considered exact and unquestionable, but
some uncertainty must be assumed for them. The main reason for
uncertainty is that the cross-impact values usually do not directly express
evidence. Instead, the experts must perform a translation of their body of
knowledge to the cross-impact rating scale, and this translation is fraught
with uncertainty, even if the underlying knowledge is reliable. In addition,
the use of an integer rating scale for the strength assessments leads to
rounding errors even from a purely technical point of view, since an
intermediate strength rating (“strength lies between 2 and 3”) that is
perceived as appropriate by the experts must be rounded up or down when
assessing the impacts.
The consequence is that the impact balances, which decide the scenario
consistency, are also sums of uncertain values and thus uncertain
themselves. Therefore, there are good reasons not to interpret the
consistency condition as a mathematically sharp cutoff in practice but to
introduce a significance threshold for inconsistency. An inconsistency
below the significance threshold is to be regarded as a marginal
inconsistency, and the respective scenarios cannot be discarded with
certainty.

3.7.1 Estimating Data Uncertainty


The uncertainty margin may vary from case to case. However, there is
empirical evidence of the range of uncertainty that can be considered
typical. It is provided by systematic comparisons of the assessments of
different experts on the same influence relationship. This is described in
more detail in Sect. 6.3.3. As a result, empirical data suggest that a scenario
of N descriptors can be discarded as surely inconsistent only if its
inconsistency value exceeds the significance threshold
$$ {IC}_S\approx \frac{1}{2}\ \sqrt{N-1}\quad \left(\mathrm{M}4\right) $$
The resulting orientation values are summarized in Table 3.5.
Table 3.5 Significance of inconsistency classes depending on the number of descriptors

IC | Significantly inconsistent if
1 | N < ca. 5
2 | N < ca. 17
3 | N < ca. 37

In the Somewhereland matrix with N = 6 descriptors, IC1 scenarios are
therefore not significantly inconsistent but marginally inconsistent, while
IC2 scenarios must be considered significantly inconsistent.
These orientation values apply to cross-impact ratings with average
estimation uncertainty. In particular cases, for example, in the case of
system relationships that are particularly difficult to assess, other values
may be appropriate.
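
The orientation values in Table 3.5 follow directly from formula (M4); a minimal sketch:

import math

def significance_threshold(n_descriptors):
    """Empirical orientation value: inconsistencies at or below this
    level are marginal, only larger ones clearly significant."""
    return 0.5 * math.sqrt(n_descriptors - 1)

print(significance_threshold(6))   # Somewhereland, N = 6: ~1.12, so
                                   # IC1 is marginal, IC2 significant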

3.7.2 Data Uncertainty and the Robustness of Conclusions


The insight that marginally inconsistent scenarios cannot be discarded with
certainty must be considered when interpreting the findings of a CIB
analysis. Findings of any kind that are based only on the IC0 scenarios can
initially only be interpreted as indications. A finding can be considered
significant (i.e., robust against data uncertainty) only when it is also
confirmed in the set of scenarios with marginal inconsistency. This means,
for example:
– To obtain a comprehensive idea of the range of possible futures,
marginally inconsistent scenarios also can be considered.
– Conversely, due to data uncertainty, it cannot be ruled out that an IC0
scenario has achieved this status only because of rating uncertainties.
Therefore, it is legitimate to review the consistent scenarios and to
exclude scenarios that are classified as implausible for good reason after
a thorough assessment, despite the formal consistency qualification.
– Any findings derived from the set of consistent scenarios, such as
  – Descriptor variant X does not occur in the scenario set and is therefore
  not part of the plausible future space.
  – Descriptor variant X occurs only together with variant Y.
  – Descriptor variant X is part of all scenarios and is therefore assumed to
  be a highly probable development.
can be considered significant findings only if they are also confirmed
in the set of marginally inconsistent scenarios.
Uncertainties resulting from expert dissent must furthermore be
considered separately in the interpretation (cf. Sect. 4.5).

3.7.3 Other Sources of Uncertainty


In addition to cross-impact rating uncertainties, there are other reasons for
not categorically excluding slightly inconsistent scenarios from
consideration. CIB conceptually assumes that the behavior of all descriptors
is determined exclusively by their mutual influence relationships. In fact,
however, at least weak external influences of factors excluded from the CIB
analysis almost always exist. These external influences, if considered,
would have resulted in small changes in the impact balances, which then
also could have caused slight shifts in the consistency values of the
scenarios. As in the case of the rating uncertainties, this means that the
consistency condition CS ≥ 0 should be interpreted as a soft threshold.

References
Gausemeier, J., Fink, A., & Schlake, O. (1998). Scenario management: An approach to develop
future potentials. Technological Forecasting and Social Change, 59, 111–130.

Gordon, T. J., & Hayward, H. (1968). Initial experiments with the cross impact matrix method of
forecasting. Futures, 1(2), 100–116.

Honton, E. J., Stacey, G. S., & Millet, S. M. (1985). Future scenarios–the BASICS computational
method. Economics and policy analysis occasional paper (Vol. 44). Battelle Columbus Division.

Johansen, I. (2018). Scenario modelling with morphological analysis. Technological Forecasting and
Social Change, 126, 116–125.

Kosow, H., Weimer-Jehle, W., León, C. D., & Minn, F. (2022). Designing synergetic and sustainable
policy mixes–a methodology to address conflictive environmental issues. Environmental Science and
Policy, 130, 36–46.

Nash, J. F. (1951). Non-cooperative games. The Annals of Mathematics, 54, 286–295.

Rhyne, R. (1974). Technological forecasting within alternative whole futures projections.
Technological Forecasting and Social Change, 6, 133–162.

von Reibnitz, U. (1987). Szenarien–Optionen für die Zukunft. McGraw-Hill.

Weimer-Jehle, W. (2006). Cross-impact balances: A system-theoretical approach to cross-impact
analysis. Technological Forecasting and Social Change, 73(4), 334–361.

Weimer-Jehle, W. (2009). Properties of cross-impact balance analysis. arXiv, 0912.5352v1.

Weimer-Jehle, W., Deuschle, J., & Rehaag, R. (2012). Familial and societal causes of juvenile
obesity–a qualitative model on obesity development and prevention in socially disadvantaged
children and adolescents. Journal of Public Health, 20(2), 111–124.

Weitz, N., Carlsen, H., & Trimmer, C. (2019). SDG synergies: An approach for coherent 2030
agenda implementation. Stockholm Environment Institute Brief.

Zwicky, F. (1969). Discovery, invention, research through the morphological approach. Macmillan.

Footnotes
1 The term “cross-impact” places CIB in the tradition of “cross-impact analysis” first proposed in
the 1960s by Gordon and Helmer and Gordon and Hayward (Gordon & Hayward, 1968). See Sect. 6.3.1.

2 Strictly speaking, the type of scale used in CIB is an interval scale with a discrete metric. It is more
presupposing than an ordinal scale because it assumes, for example, that the effect of “+2” is twice as
strong as the effect of “+1,” whereas an ordinal scale would assume only that the effect of “+2” is
stronger than the effect of “+1.”

3 An example of the use of diagonal fields is described in Weimer-Jehle et al. (2012). In this study, a
diagonal field is used to represent that the practice of sports promotes the enjoyment of physical
activity and that this further strengthens the inclination to engage in sports.

4 Mathematically, the search for consistent scenarios in CIB is equivalent to discrete-valued
multiobjective optimization: multiobjective optimization because all descriptors should
simultaneously achieve the highest possible impact sums; discrete-valued because each descriptor
has the freedom to choose only one option from a set of enumerable options. The solutions to this
optimization task are referred to in mathematics as “Nash equilibria”. Cf. Nash (1951).

5 The term “impact sums” refers to the sum of impacts on a single descriptor variant, while the
“impact balance” consists of all impact sums of a descriptor.

6 Sect. 3.6 describes how to calculate and use gradual consistency ratings with the CIB consistency
test and what additional information can be derived from this rating.

7 At the time of printing of this book, nearly all published CIB applications were carried out using
the freely available software ScenarioWizard (https://www.cross-impact.org). However, the algorithm
in its basic form does not pose great challenges to experienced programmers and allows self-
developed software solutions.

8 The scenario order of the scenario list shown in Table 3.4 is not identical with the result output of
the ScenarioWizard, as it has already been rearranged to prepare the following figures.

9 The tableau format for CIB scenarios is based on a proposal by Dipl.-Ing. Christian D. León.

10 However, they often are, and in fact always are, if the cross-impacts fulfill the so-called
standardization condition (Weimer-Jehle, 2009, cf. Sect. 6.3.2).

4. Analyzing Scenario Portfolios


Wolfgang Weimer-Jehle
ZIRIUS, University of Stuttgart, Stuttgart, Germany
Email: [email protected]

Keywords Cross-Impact Balances – CIB – Scenario – Scenario portfolio –
Portfolio analysis – Qualitative systems analysis – QSA

The basic function of CIB is to create a portfolio of scenarios that illustrate
the range of possible future developments of the system. The scenarios
taken together are referred to as the scenario portfolio of the matrix. The
workflow of this fundamental analysis is described in Chap. 3. For many
use cases, this immediate result of the CIB analysis is sufficient, and the
analysis is then completed at this point. The next step would be the
utilization of the analysis results by the target audience, which, however, is
not the subject of this book.
A further goal can also be to gain insights into the system under study
through CIB analysis. This requires an analytical and interpretative
examination of the scenario portfolio and the matrix from which it is
derived. In this case, the scenario construction is followed by the work step
of portfolio analysis.
In support of this case, this chapter describes a set of typical
analysis methods that can be used to pursue the goal of translating CIB
results into system insights by processing the results in such a way that they
provide input for system reflection. These analysis methods are not part of
the methodological core of CIB, the presentation of which ended with
Chap. 3. Rather, they are supplementary auxiliary methods designed to dock
on the specific capabilities and requirements of CIB but could in principle
be designed differently without conflicting with the CIB methodology. The
analysis methods described in this chapter are therefore to be understood as
suggestions and examples.
Portfolios are classified according to the degree of inconsistency of their
scenarios (cf. Sect. 3.6.1). If nothing else is noted, the term “portfolio” in
the following always refers to the IC0 portfolio of a matrix, i.e., to all
scenarios with an inconsistency value of 0 (IC0 scenarios, fully consistent
scenarios). In various cases, however, it also will be appropriate to consider
the IC1 portfolio. This consists of all scenarios with a maximum
inconsistency value of 1, i.e., all IC0 and IC1 scenarios. Accordingly, the
IC2 portfolio consists of all IC0, IC1, and IC2 scenarios.

4.1 Structuring a Scenario Portfolio


The typically hoped-for outcome of standard CIB analysis is a manageable
but not too small set of scenarios containing a number of distinctly different
future narratives. The Somewhereland portfolio may well be considered an
ideal type in this regard. In practice, the result of a CIB analysis often
corresponds to this type, but by no means always. Guidance on what to do
if the portfolio does not match the ideal type is given in Chap. 5. For the
present section, however, we assume the case that the evaluation result
sufficiently meets the specific needs of the analysis regarding the number
and variety of scenarios and that the task now is to gain maximum insight
from the portfolio.
Just as the individual scenario is a narrative about one possible future,
the portfolio is an implicit narrative about the future space of a system as a
whole. The goal of working with the portfolio is to capture this implicit
narrative and make it explicit. What we capture in the process, however,
depends on the perspective from which we view and organize the portfolio.
Our perspective is expressed in the methodological tools we use to search
for order and structure in the portfolio. Section 4.1 therefore describes
various methods (“perspectives”) for structuring a portfolio that helps us
“read” the portfolio’s messages. Specific project requirements, however,
may make it necessary to develop other perspectives not described here.
4.1.1 Perspective A: If-Then
A frequent task for a scenario analysis is the question of under which
environmental conditions a target descriptor, which is of special interest
to the target audience of the analysis, could develop favorably or unfavorably.
In this case, an ordering system in the form of an “if-then” question is
required for the portfolio that uncovers the response of the target descriptor
to its environment.
The starting point of the analysis is the decision of which descriptor in
the “if” role and which descriptor in the “then” role can be expected to yield
the best possible insight in terms of the analysis goals. It is often (but not
always) favorable to choose an autonomous descriptor (if available) that
represents external conditions for the system as the “if” descriptor.
Again, the “Somewhereland” matrix is used as an example. Since the
Somewhereland matrix does not have an autonomous descriptor, it is
expanded for the purpose of this section, and a descriptor for the global
political trend is added. The expanded “Somewhereland-plus” matrix is
shown in Fig. 4.1. Somewhereland is exposed to the global trend but, as a
medium-sized country, exerts no significant influence on it in return.
Accordingly, the new Descriptor G exerts influences (on Descriptors A, B,
and C), and its row therefore contains data, but its column is empty.
Fig. 4.1 The Somewhereland-plus matrix

We further assume for the purpose of this analysis that the intended
readership of this Somewhereland analysis is not interested in the question
of the general development of the country but that their interest is focused
on the question of economic development. This means that Descriptor “G.
Global political trend” serves as our “if” descriptor and Descriptor “C.
Economy” is our “then” descriptor (target descriptor).
The Somewhereland-plus IC0 portfolio includes 16 scenarios
(compared to 10 for the basic Somewhereland matrix), listed in short format
in Table 4.1. An increase in the number of scenarios often occurs when an
autonomous descriptor is added (see Sect. 4.7.1).
Table 4.1 The scenarios of Somewhereland-plus in short format

No. 1 [A2 B2 C2 D2 E2 F1 G1]   No. 9 [A1 B2 C2 D1 E1 F2 G2]
No. 2 [A1 B3 C1 D1 E1 F2 G1]   No. 10 [A3 B2 C2 D1 E1 F2 G2]
No. 3 [A1 B3 C1 D2 E3 F3 G1]   No. 11 [A1 B3 C2 D1 E1 F2 G2]
No. 4 [A3 B1 C3 D2 E1 F1 G2]   No. 12 [A1 B3 C2 D1 E1 F3 G2]
No. 5 [A3 B2 C3 D2 E1 F1 G2]   No. 13 [A1 B3 C1 D2 E3 F3 G2]
No. 6 [A2 B1 C3 D2 E2 F1 G2]   No. 14 [A3 B1 C3 D2 E1 F1 G3]
No. 7 [A2 B2 C3 D2 E2 F1 G2]   No. 15 [A2 B1 C3 D2 E2 F1 G3]
No. 8 [A3 B1 C2 D1 E1 F2 G2]   No. 16 [A3 B1 C3 D1 E1 F2 G3]

From the perspective of an “if-then” analysis, the analytical interest can
be summarized in the question of how the framework conditions influence
the target descriptor and what uncertainties exist for this relationship. To
organize the portfolio with this question in mind, the portfolio is sorted first
by the framework conditions (here, the autonomous descriptor “G. Global
political trend”) and then by the target descriptor (here, “C. Economic
performance”). The result is shown in Fig. 4.2.
Fig. 4.2 Somewhereland-plus portfolio arranged according to the “if-then” perspective
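
Technically, this arrangement is a simple two-key sort. A sketch, assuming the scenarios are held as Python dictionaries mapping descriptor letters to variant numbers (an illustrative representation, not a prescribed format):

portfolio = [
    {"A": 2, "B": 2, "C": 2, "D": 2, "E": 2, "F": 1, "G": 1},  # no. 1
    {"A": 1, "B": 3, "C": 1, "D": 1, "E": 1, "F": 2, "G": 1},  # no. 2
    # ... remaining Somewhereland-plus scenarios from Table 4.1
]

# Sort first by the "if" descriptor G, then by the target descriptor C:
portfolio.sort(key=lambda s: (s["G"], s["C"]))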
This particular way of ordering the portfolio makes the answer to our
“if-then” question visible:
– Under conditions of heightened global tensions (G1), Somewhereland’s
economic performance could decline or stagnate but not grow
dynamically. The case of declining economic performance can be
expected if Somewhereland decides under pressure from global
developments to follow the path of nationally oriented, confrontational
politics.
– In the status quo case (G2), nothing can be ruled out regarding economic
development. Declining economic performance under G2 conditions is
characteristically correlated with the occurrence of social unrest. Under
the same conditions, stagnation is associated with societies in which the
wealth gap is small. Dynamic growth, on the other hand, thrives under
these conditions, specifically on the soil of meritocratic societies.
– In the case of a global balance of interests (G3), the factors and system
interrelations considered in the matrix point to a secure economic growth
phase in Somewhereland. The unifying element of all scenarios
belonging to this case is cooperative relations with neighbors.
Domestically, societies can be similarly configured, as in the case of
dynamic growth under global status quo conditions. However, the
favorable global conditions also allow dynamic growth in this case (and
only in this case) for a less meritocratic value system in society.
In order to validate the results, they should also be checked against the
IC1 portfolio of the “Somewhereland-plus” matrix (cf. Sect. 3.7.2). The
results described should not be misunderstood as sound political and
economic analysis. They should be viewed merely as an illustration of the
“if-then” perspective.
Ordering the portfolio using an autonomous descriptor has a good
chance of revealing latent messages of the portfolio only if the autonomous
descriptor exerts a substantial influence on the target descriptor, either
directly by cross-impacting the target descriptor or indirectly by influencing
essential drivers of the target descriptor. In more extensive analyses,
multiple autonomous descriptors and/or multiple target descriptors also may
occur. The described procedure can then be used accordingly.
The “if-then” perspective also can be applied to matrices without
autonomous descriptors if the system contains a descriptor with
outstandingly high formative power on the system that can be used instead.

4.1.2 Perspective B: Order by Performance


The scenarios in the portfolio also can be sorted according to how they
perform against a criterion deemed relevant by the target audience.
Examples of these criteria are as follows:
– The desirability of the scenario for a particular group of people
– The compatibility of the scenario with a particular guiding principle or
plan
– The (dis)similarity of the scenario to the present
– The (dis)similarity of the scenario to a given reference future, and others
The evaluation of the performance against the selected criterion can be
done by individual assessment of each scenario. However, especially if a
large number of scenarios are to be evaluated, a formalized evaluation
procedure is preferable. For this purpose, the first step is to evaluate the
performance of the descriptor variants (rather than the performance of a
scenario as a whole). If we wish to evaluate the Somewhereland scenarios
of Fig. 3.17 with respect to the criterion “dissimilarity to the present” and
assume, for example, that the present state of Somewhereland can best be
described by Scenario no. 7 “The principle of hope” with the code [A3 B1
C3 D2 E1 F1], the evaluation of the descriptor variants shown in Fig. 4.3
could be used.
Fig. 4.3 Evaluation of descriptor variants according to their dissimilarity with the present state of
Somewhereland (example values)

A rating of 0 was assigned for conformity with the present state (i.e.,
Scenario no. 7) and a rating of 2 for clear deviation from the present. A
rating of 1 is used for descriptor variants that describe a gradual difference
from the present state. In principle, ratings can be made by the core team
but with higher legitimacy by a group of experts or stakeholders or by the
target audience. In any case, it is useful to document explanations for all
ratings.
Now, an index value can be determined for all scenarios of the portfolio
by calculating how many points each scenario accumulates based on the
ratings given in Fig. 4.3. For instance, Scenario no. 1 from Fig. 3.17
“Society in crisis” with the code [A1 B3 C1 D2 E3 F3] accumulates
2 + 2 + 2 + 0 + 2 + 2 = 10 points and is thus particularly different from the
present state of Somewhereland.
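
Formally, the index is a lookup-and-sum. The small sketch below reproduces the worked example; the rating table is an assumption read off Fig. 4.3, and only the values actually used in the example are confirmed by the text:

# Dissimilarity ratings per descriptor variant (0 = like the present
# state, 2 = clearly different), as in Fig. 4.3.
dissimilarity = {
    "A1": 2, "A2": 1, "A3": 0,
    "B1": 0, "B2": 1, "B3": 2,
    "C1": 2, "C2": 1, "C3": 0,
    "D1": 2, "D2": 0,
    "E1": 0, "E2": 1, "E3": 2,
    "F1": 0, "F2": 1, "F3": 2,
}

def performance_index(scenario_code):
    return sum(dissimilarity[v] for v in scenario_code)

# Scenario no. 1 "Society in crisis":
print(performance_index(["A1", "B3", "C1", "D2", "E3", "F3"]))   # -> 10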
The progression of dissimilarity to the present in the array of scenarios
also can be visualized by arranging the scenarios in a tableau in the order
shown in Fig. 4.4. The descriptor variants that were assigned a rating of 1 in
Fig. 4.3 are shaded light here, and the descriptor variants that were assigned
a rating of 0 are shaded dark. The descriptor variants that are particularly
unlike the present (the “terra incognita”) remain white. The result is shown
in Fig. 4.5. This form of scenario presentation makes it immediately
apparent in which areas we must prepare for change in the various
scenarios.

Fig. 4.4 Arrangement of Somewhereland scenarios on a performance scale

Fig. 4.5 Somewhereland scenario tableau, ordered by dissimilarity to the present


An application example of this methodology for assessing the
compatibility of CIB scenarios with a guiding principle was presented by
Kopfmüller et al. (2021). A corresponding application according to the
criterion of group desirability was described by Ayandeban (2016).

4.1.3 Perspective C: Portfolio Mapping


The object of this perspective is to arrange the scenarios of a portfolio on a
map in such a way that similar scenarios are located adjacent to each other
and dissimilar scenarios are located far from each other. Ultimately, the goal
is to move away from looking at individual scenarios and instead strive to
discern the overarching structure of the future space. From this perspective,
the individual scenarios are “test drillings” in the future space, which
individually have limited meaning but together provide an informative
picture.
This analysis technique has its roots in a traditional intuition-based
scenario method, scenario axes,1 which is outlined first before describing its
application to CIB analysis.

Scenario Axes
In the traditional scenario axes approach, two basic questions concerning
the future development of the system under study are identified based on
the participants’ understanding of the problem, and two diametrically
opposed future development options are formulated for each of the basic
questions. For a scenario analysis on international cooperation and its
possible direction, for example, the axes shown in Fig. 4.6 could be chosen.
Fig. 4.6 Example of a scenario axes analysis (The example draws from work done by the IPCC on
the future of global climate gas emissions (Nakićenović et al. 2000))

For each sector of the diagram and the respective combination of basic
developments, a scenario is then elaborated based on discussion. In Sector
I, for example, a scenario would be developed in which international trade
institutions are strengthened, national regulations are reduced, and free
trade agreements are established. In a scenario for Sector II, international
cooperation also would be strengthened, but with the main objective of
concluding international climate protection agreements and cooperatively
pursuing other environmental protection concerns.
Scenario axes is a simple and quick-to-use technique that is often
chosen when formal methods are not to be used and particular emphasis is
placed on creative scenario building.

Using Scenario Axes Diagrams in CIB Analysis


The fundamental difference between CIB and the traditional scenario axes
method is that the scenarios in CIB are not developed intuitively based on
predefined axes. Instead, they are determined algorithmically from the
cross-impact data. Subsequently, however, the scenario axes approach also
can be used in CIB as an auxiliary technique for ordering the portfolio. Just
as in the traditional way of using scenario axes, the first task in applying
this technique in CIB is to determine the dimensions according to which the
axes are to be spanned. This requires a prior understanding of the most
fundamental trends of the system. At the same time, this choice also may
express the special problem view that motivated the CIB analysis.
As an example, we again use the Somewhereland matrix with its 10
scenarios and ask ourselves whether Somewhereland will in the future see
itself more as a collection of individuals making claims against society, or
more as a community to which individuals are committed. Second, we ask
whether society will succeed in remaining economically, politically, and
socially functional or whether trends toward dysfunctionality will prevail.
Before the CIB scenarios can be positioned on these two axes, all
descriptor variants must be evaluated as described in Sect. 4.1.2, except that
this must now be done for two criteria. The ratings express whether the
descriptor variant corresponds to an axis topic (+1) or is rather to be
assigned to the opposite pole (−1). A possible result of this assessment task
is shown in Fig. 4.7. The value 0 is assigned if the descriptor variant is not
clearly related to the axis motif.
Fig. 4.7 Two-dimensional rating of the Somewhereland descriptor variants

The descriptor variant “E1 Social peace” is rated +1 on the “Functional
vs. dysfunctional” index, as social peace indicates the functioning of social
institutions and intact social balancing mechanisms and cooperation in
society. The descriptor variant “E3 Unrest” expresses the opposite. Unrest
is both a consequence and an expression of the failure of social processes of
cooperation, reconciliation of interests, and understanding. This descriptor
variant is therefore assigned a value of −1.
For the example analysis, only the direction of the correlation between
descriptor variants and axis topics is assessed with the rating interval
[−1 … +1]. However, in the case of a substantial difference in the relevance
of the descriptor variants for the axis topics, a more differentiated rating
interval (e.g., −2 … +2 or −3 … +3) can be used, as is also the practice in
the process of cross-impact assessments.
Based on the ratings of the descriptor variants in Fig. 4.7, the scenarios
can now be located on the axes diagram by forming indices for both
dimensions. Scenario no. 6 “Cozy society” with the code [A3 B1 C2 D1 E1
F2] achieves an index value of +1 + 0 + 0 + 1 + 0 + 1 = +3 on the
“Collectivism vs. individualism” dimension and an index value of
0 + 1 + 0 + 0 + 1 + 0 = +2 on the “Functional vs. dysfunctional” dimension.
It is therefore plotted in the axis diagram in Fig. 4.8 at the position [+3, +2].
The “Protectionism” scenario has the same coordinates as one of the three
scenarios from the “Us vs. them” group. In this case, the scenario groups
marked in gray overlap.
Fig. 4.8 A portfolio map of the Somewhereland scenarios
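
The coordinates follow the same rating-and-sum mechanics as in Sect. 4.1.2. The tuple ratings below are those derived in the text for the variants of Scenario no. 6; ratings for the remaining variants would be filled in analogously from Fig. 4.7:

# (collectivism vs. individualism, functional vs. dysfunctional) per variant
axis_ratings = {
    "A3": (1, 0), "B1": (0, 1), "C2": (0, 0),
    "D1": (1, 0), "E1": (0, 1), "F2": (1, 0),
}

def map_position(scenario_code):
    xs, ys = zip(*(axis_ratings[v] for v in scenario_code))
    return sum(xs), sum(ys)

# Scenario no. 6 "Cozy society" lands at [+3, +2] in Fig. 4.8:
print(map_position(["A3", "B1", "C2", "D1", "E1", "F2"]))   # -> (3, 2)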

According to the evaluations of the descriptor variants, scenarios can
reach maximum coordinates of −3 … +3 on the horizontal as well as on the
vertical axis, since for both dimensions three descriptors were assessed as
relevant for the axis topic. This margin is fully exploited by the
Somewhereland scenarios, and they thus cover the maximum span in
relation to the two axes.
For a correct interpretation of the axis diagram, the limitations of a
summated index should be noted. For example, the statement that Scenario
no. 8 (upper marker in the “The principle of hope” group in Fig. 4.8)
describes “the most functional society” among the Somewhereland
scenarios would be an overinterpretation of the data. The functionality of a
society is too complex an issue to be measured validly by simply counting a
number of the society’s features. A more correct description is that Scenario
no. 8 combines the highest number of functional descriptor variants among
all consistent scenarios.
The 10 Somewhereland scenarios also could have been arranged by
direct interpretation on this or another axis diagram. Occasionally, manual
interpretation may even lead to better results because it can be more
individualized to the peculiarities of each scenario than the formal
procedure. The advantage of the formal method via the rating of the
descriptor variants is that it also can process large numbers of scenarios on
the basis of a standardized assessment. In any case, the success of the
procedure stands and falls with an inspired choice of axis topics. The
identification of meaningful axis topics can be supported by statistical
methods, such as correspondence analysis (Le Roux and Rouanet, 2009).

Special form of the Scenario Axes Diagram: Probability vs. Effect
A special form of application of the scenario axes method arises when the
scenario probability and the relevance or effect of the scenario on a
planning or decision are chosen as axis topics (Fig. 4.9). The resulting
diagram consists of only one quadrant, since both axes are positive
quantities. The diagram can help to select from a portfolio those scenarios
that require special attention. However, it is usually necessary to estimate
the probability and effect of the scenarios through an individual assessment
for each scenario, since CIB does not make a direct statement about these
quantities. Therefore, this diagram is most suitable for medium-sized
portfolios. For small portfolios, a selection process is unnecessary, and for
large portfolios, an individual assessment of the scenarios would be time-
consuming.

Fig. 4.9 Ordering the portfolio according to probability and effect

Usually, the scenarios with a high probability and effect are selected as
particularly relevant for use in planning and decision-making, although care
also must be taken to ensure that the selected scenarios describe
substantially different futures. In addition, selected low-probability
scenarios can be included in the sense of an incident analysis if they have a
particularly high effect (“wildcards”).
Applications of this form of portfolio analysis can be found, for
example, in Aschenbrücker and Löscher (2013) and Sardesai et al. (2019).
4.2 Revealing the Whys and Hows of a Scenario
Since the CIB scenarios are derived from the data of the cross-impact
matrix by a simple algorithm, it is always possible to reconstruct the
reasons for the composition of a scenario from the matrix. This procedure
deepens one’s own understanding of the scenarios and makes it easier to
communicate the scenarios convincingly to others.

4.2.1 How to Proceed


An effective way to elucidate the scenario logic is the determination of the
intersection cells of the active descriptor variants in the cross-impact
matrix. For this purpose, all rows and columns of the active descriptor
variants of a scenario are marked in the matrix. The intersection cells of the
row and column markers are then highlighted. The intersection cells of a
column indicate which impacts lead to the activation of the respective
descriptor variant and which counterinfluences were overcome. Figure 4.10
shows the procedure for Somewhereland Scenario no. 1 “Society in crisis”
with the scenario code [A1 B3 C1 D2 E3 F3]. Active impacts (intersection
cells) are printed light on dark. Impacts that stabilize the scenario are
marked in green and destabilizing impacts in red.
Fig. 4.10 Elucidating the background of Somewhereland Scenario no. 1 “Society in crisis”

By studying the intersection cells in Column C1, we recognize that the
declining economic performance (C1) in this scenario is due to the
conflictual foreign policy and social unrest. The influence of family-
oriented social values, which work slightly against declining economic
performance, cannot prevail against these effects. The arguments behind the
other scenario components can be identified in a similar way.
A corresponding analysis also could be performed using influence
diagrams. However, influence diagrams of large matrices are often highly
twisted and difficult to read.

4.2.2 The Scenario-Specific Cross-Impact Matrix


The “flow of forces” within a scenario can be summarized in a more
compact way by using the scenario-specific cross-impact matrix. For this
purpose, all rows and columns that do not belong to the active descriptor
variants of the scenario are removed from the cross-impact matrix. Figure
4.10 thus becomes the diagram shown in Fig. 4.11.

Fig. 4.11 The specific cross-impact matrix of Somewhereland Scenario no. 1

This table becomes even more focused on those impacts that account for
the consistency of a scenario if only the positive (green) impacts are noted
because the negative impacts could not prevail in the scenario and therefore
do not contribute to its justification (Fig. 4.12).

Fig. 4.12 Positive part of a scenario-specific cross-impact matrix
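
Both reductions are plain filter operations on the matrix. A sketch reusing the dictionary layout assumed in the earlier code sketches:

def scenario_specific_matrix(cim, scenario):
    """Drop all rows and columns that do not belong to the active
    descriptor variants of the scenario (cf. Fig. 4.11)."""
    active = set(scenario.items())
    return {src: {tgt: w for tgt, w in row.items() if tgt in active}
            for src, row in cim.items() if src in active}

def positive_part(specific_cim):
    """Keep only the promoting (green) impacts, which carry the
    scenario's justification (cf. Fig. 4.12)."""
    return {src: {tgt: w for tgt, w in row.items() if w > 0}
            for src, row in specific_cim.items()}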


The explanatory power of the scenario justifications can be further
increased if explanatory notes for the cross-impact data also are collected
during the creation of the cross-impact matrix, and these are then included
in the scenario justifications, as shown for Descriptor E in Fig. 4.13.

Fig. 4.13 Justification form for Descriptor variant “E3 Social cohesion: Unrest”

This tabular representation can alternatively be implemented in the
shape of a diagram (Fig. 4.14). In contrast to the influence diagram
introduced in Chap. 3, which lists the incoming and outgoing influences for
all descriptors, here, only the influences on the descriptor whose variant is
to be justified (here Descriptor “E. Social cohesion”) are recorded.
Fig. 4.14 Illustration of the justifications of a descriptor variant

A special application for elucidating the background of a scenario arises
when cross-impact data have been collected through expert interviews or
workshops, and the scenarios are then discussed with the experts for
validation. Clarifying the mechanisms behind the scenarios makes it easier
to explain them and, if their plausibility is in doubt, to identify the elements
of the matrix that are responsible for the doubted scenario components. This
process is discussed in more detail in Sect. 6.4.5, Paragraph “General
recommendations—Iteration and scenario validation.”

4.3 Ex Post Consistency Assessment of Scenarios


Even though scenario construction is the most common use for CIB, the
consistency check of a given scenario is the real core function of the
method. It is the mass application of this validation step to all possible
scenarios of a scenario space and the sorting out of inconsistent scenarios
that establishes the construction function of CIB. However, there also are
use cases in which only the consistency check and not the scenario
construction is the required functionality of CIB. This is the case, for
example, when scenarios already exist from other project parts or
preliminary projects and it is now necessary to check their internal
consistency ex post. Examples from application practice for the use of this
function of CIB are Saner et al. (2011), Schweizer and Kriegler (2012),
CfWI/Centre for Workforce Intelligence (2014), and Kurniawan (2018).
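
The mass application of the consistency test described above can be sketched as a brute-force enumeration, adequate for small matrices and again using the hypothetical helpers introduced earlier:

from itertools import product

def compute_portfolio(cim, variants, max_inconsistency=0):
    """Test every variant combination; max_inconsistency=0 yields the
    IC0 portfolio, 1 the IC1 portfolio, and so on."""
    names = sorted(variants)
    for combo in product(*(variants[d] for d in names)):
        scenario = dict(zip(names, combo))
        if scenario_consistency(cim, scenario, variants) >= -max_inconsistency:
            yield scenario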
The illustrative example below is inspired by Kurniawan’s study, which
addresses mobility concepts for Singapore. However, descriptors and cross-
impact data have been partially simplified and modified for demonstration
purposes.

4.3.1 Intuitive Scenarios


The example addresses the mobility future of “Somewhereland City,” the
fictitious capital of Somewhereland. It assumes that intuitive scenarios have
already been created in scenario workshops and now need to be checked for
consistency. A prerequisite for using CIB for consistency checking is that
the intuitive scenarios are formulated in terms of descriptors and their
alternative future outcomes or that it is possible to bring the scenarios into
this structure. For the demonstration, we assume that the intuitive scenario
process resulted in the scenarios shown in Fig. 4.15.
Fig. 4.15 Scenario axes diagram and the “Somewhereland City mobility” example

4.3.2 Reconstructing the Descriptor Field


To verify the internal consistency of the intuitive scenarios, a table is first
created that summarizes the main topic areas of the scenarios as descriptors
and the differences between the scenarios as descriptor variants. The table
resulting from the intuitive scenarios of Fig. 4.15 is shown in Fig. 4.16.
Fig. 4.16 Descriptors and descriptor variants of the “Somewhereland City” intuitive mobility
scenarios

The intuitive scenarios thus correspond to the scenario codes:
– “Departure to yesterday” [A3 B2 C1 D2 E2 F2]
– “Digital paradise” [A3 B3 C2 D3 E1 F3]
– “Good old days” [A1 B1 C1 D2 E2 F1]
– “The unfinished” [A2 B3 C4 D1 E1 F2]

4.3.3 Preparing the Cross-Impact Matrix


The next step consists of creating a cross-impact matrix based on the
identified descriptors. This requires assessments of the interrelationships
between the descriptor variants. However, in intuitive scenario building
processes, one cannot usually expect that these interrelationships will be
systematically documented in the scenario descriptions during the scenario
workshops. Instead, at this point, it is usually necessary to go beyond
reading the scenarios and ask not only how things might develop (which is
the subject of the intuitive scenarios) but also why they would develop that
way. Thus, it is necessary to reconstruct the undocumented hypotheses
about system interrelationships that were used during the scenario
workshops when putting together the components of the scenarios. For this
purpose, the participants of the scenario workshops can be interviewed,
discussion transcripts or audio recordings can be analyzed, and literature
sources named by the workshop participants as the basis for their
discussions can be evaluated. Figure 4.17 shows an example of a cross-
impact matrix that could result from this process.

Fig. 4.17 “Somewhereland City” cross-impact matrix

4.3.4 CIB Evaluation


The next step consists of evaluating the matrix and comparing the CIB
results with the intuitive scenarios. The evaluation of the matrix results in
nine consistent (IC0) scenarios, which are listed in Fig. 4.18.
Fig. 4.18 “Somewhereland City” CIB analysis results

Comparison of the CIB scenarios with the results of the intuitive
scenario process leads to the following findings:
I. The intuitive scenarios “Departure to Yesterday,” “Good Old Days,”
and “Digital Paradise” are confirmed by the CIB analysis, as they also
occur in the CIB portfolio (CIB scenarios nos. 4, 3, and 9).
II. The intuitive scenario “The Unfinished” is not part of the CIB
portfolio. This means that it is criticized by the CIB analysis as
inconsistent, that is, incompatible with the internal relationships of the
system as collected in the matrix. The consistency check shown in
Fig. 4.19 follows the procedure described in Fig. 3.16 and specifically
criticizes the choice of the descriptor variant “F2. Municipal
governance: Cross-party project policies” as inconsistent. According
to the impacts indicated in the matrix, the rise of a digital economy
and the increasing prevalence of virtual lifestyles should pave the way
for participatory approaches to municipal governance (because of
changes in the educational and milieu stratification of the urban
population), as suggested by CIB scenario no. 7. If the inconsistency
found had been below the significance threshold, the scenario could
still have been accepted. However, according to Fig. 4.19, the
descriptor inconsistency with IC5 is far beyond the range of marginal
inconsistency.
III. In addition to the intuitively found scenarios, the CIB analysis
identifies five additional scenarios of the highest quality level IC0. In
part, these additional scenarios consist of minor variations of the
scenarios already represented in the axis diagram. For example,
Scenario no. 5 differs from Scenario no. 4 “Departure to Yesterday” in
only one descriptor. Such variations generally do not introduce
fundamentally new ideas into the scenario analysis and can be
disregarded for compact consideration.

Fig. 4.19 Consistency critique of “The Unfinished” intuitive scenario


However, one scenario (Scenario no. 1) differs from all intuitive
scenarios by at least three descriptors (i.e., at least half the number of
descriptors). It thus introduces a significant innovation to the intuitive
portfolio. This scenario describes the emergence of a temporary coexistence
of old and new structures in Somewhereland City. While lifestyles, mobility
behavior and municipal administration are already following new digitally
influenced paradigms, economic structures are lagging behind and for the
time being remain in the old patterns. This coexistence is temporarily
stabilized by the fact that the new lifestyles greatly reduce motorized
private transport, which conflicts less with the logistical needs of the local
industries. In this way, a fragile truce is created between the two seemingly
incompatible subtrends. This variation of a medium-term urban future was
not addressed by any of the intuitive scenarios, but the systematic
combinatorial search of the CIB analysis revealed it as a coherent
possibility.
Figure 4.20 then shows the scenario axes diagram corrected and
expanded according to the CIB consistency check.
Fig. 4.20 Corrected and extended scenario axes diagram according to the result of the CIB analysis

The case that a subsequent CIB analysis finds meaningful scenarios that
were missed by a preceding intuitive scenario analysis is not uncommon. In
fact, this is reported in most cases where an intuitive scenario analysis has
been validated by CIB (Schweizer and Kriegler 2012; CfWI/Centre for
Workforce Intelligence 2014; Schweizer and O’Neill 2014; Kurniawan
2018).
4.4 Intervention Analysis
CIB can be used not only to construct or verify scenarios. It also can be used
—within the limits of a qualitative analysis—to investigate the response of
a system to external influences, for example, interventions. Here, we ask
whether a policy under consideration (the “intervention”) would change the
scenario portfolio favorably by pushing undesirable scenarios out of the
portfolio or by allowing desirable scenarios to enter the portfolio. In
contrast, an intervention would be considered counterproductive if the
opposite effect occurs. Ambivalent effects also can be detected by
intervention analysis if the intervention expands the portfolio at both the
desirable and undesirable edges.
The classic way to represent interventions in CIB is to view the
intervention as an action that enforces a particular descriptor variant. This
then implies that all other variants of the respective descriptor are forcibly
excluded. Technically, this is implemented by removing the descriptor
variants excluded by the intervention from the matrix and then recalculating
the scenarios.2 The workflow of a CIB intervention analysis is described in
detail in Nutshell I (Fig. 4.21).
Fig. 4.21 Nutshell I—Workflow of a CIB intervention analysis
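
In the data layout of the earlier sketches, an intervention is implemented exactly as described: the excluded variants are deleted before the scenarios are recalculated (illustrative code with hypothetical names):

def apply_intervention(variants, descriptor, forced_variant):
    """Remove all alternatives of one descriptor so that the scenario
    search must activate the forced variant in every scenario."""
    pruned = dict(variants)
    pruned[descriptor] = [forced_variant]
    return pruned

# e.g., enforce variant E1 and recompute the portfolio:
# new_scenarios = list(compute_portfolio(cim, apply_intervention(variants, "E", "E1")))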

As an example, the water supply of a fictitious large city in an emerging
country is examined below. The example is inspired by a study on the water
supply of Lima/Peru (Kosow et al. 2013, Schütze et al. 2018, León and
Kosow 2019). However, the matrix used by Kosow, León, and Schütze was
greatly simplified for the demonstration purpose of this chapter. The
analysis below therefore reflects only a fraction of the issues addressed in
the original matrix.

4.4.1 Analysis Example: Interventions to Improve Water Supply
The fictitious city studied in our example is characterized by unorganized
immigration through rural exodus and the resulting formation of informal
settlements on the outskirts of the city. A considerable part of the poorer
population is not connected to the public water network and must be
supplied by tanker trucks. Shortages, hygiene problems, and additional
impoverishment due to the expensive form of supply are the result. The
development of additional water sources is difficult and cost-intensive in
the arid environment of the city, which also is hydrologically stressed by
climate change. The development of scenarios and an intervention analysis
based on them are to provide impulses for a discourse on measures with
policy-makers, administrators, and stakeholders.

4.4.2 The Cross-Impact Matrix and its Portfolio


Six descriptors from the areas of water governance, socioeconomic
environment and water infrastructure are used for the demonstrator matrix
(Figs. 4.22 and 4.23). Descriptor “F. Water network connection rate” is the
target descriptor for which a specific variant is sought, namely, descriptor
variant “F3 Increasing water network connection rate.”

Fig. 4.22 The descriptors and descriptor variants of the “Water supply” intervention analysis
Fig. 4.23 “Water supply” cross-impact matrix (basic matrix)

The CIB evaluation of the matrix results in an IC0 portfolio of six
scenarios (Fig. 4.24).

Fig. 4.24 The “Water supply” portfolio without interventions (basic portfolio)

The portfolio covers the range of possibilities from complete failure
(F1, Scenarios no. 1 and 2) to success (F3, Scenario no. 6), in each case
showing plausible contextual developments.

4.4.3 Conducting an Intervention Analysis


The subject of the intervention analysis is the question of which
intervention in the system could ensure a favorable development of the
target descriptor through its direct and indirect effects. Based on the
previous portfolio, it must be assumed that success (F3) is possible (since
F3 occurs in the portfolio) but not certain (since F1 and F2 also occur in the
portfolio). Thus, an optimal intervention would change the portfolio so that
only F3 occurs. A moderately successful intervention would at least
eliminate the particularly undesirable F1 variant (decreasing coverage of the
water network) from the portfolio.

Compilation of the Intervention Options


An intervention, i.e., the forcing of descriptor variant X as a means to
produce the desired descriptor variant Y, must meet two conditions to be a
meaningful option for controlling the system: It must be (1) feasible in the
sense that a way is available to produce descriptor variant X by
proportionate means and, if necessary, to support it against possible
counterreactions of the system; (2) it must be reasonable to expect that
forcing descriptor variant X will move the system in the desired direction,
i.e., toward descriptor variant Y. The assessment of the first condition must
be made by expert judgment. The assessment of the second condition can
be supported by CIB intervention analysis.
The obvious intervention option is, of course, direct intervention on the
target descriptor itself. The question of which indirect intervention could
best influence a target descriptor arises only when no proportionate
approach to direct intervention is discernible. In the example, we assume
that no acceptable way could be found to directly control the connection
rate and that expert consultation revealed that interventions to enforce E1
(Integrated and participatory catchment management) or A2 (Public water
company with autonomy from government) could be feasible and
potentially effective options for intervention. The presumption of
effectiveness is therefore tested for both proposals through an intervention
analysis.

Testing a Proposed Intervention: E1


First, the intervention in favor of E1 is examined. This is done by deleting
the descriptor variants E2 and E3 from the matrix. The CIB algorithm now
has no choice in descriptor E and must necessarily resort to E1 in scenario
construction. This is equivalent in effect to a sufficiently strong intervention
in the system to enforce E1, for example, through legislation. Figure 4.25
shows the cross-impact matrix modified for the intervention analysis.

Fig. 4.25 “Water supply” cross-impact matrix with intervention at E1

The evaluation of the E1 intervention matrix leads to a portfolio of three
scenarios (Fig. 4.26). Not all scenarios have the target variant F3, but the
particularly unfavorable descriptor variant F1 is avoided throughout.
Accordingly, the intervention can be described as effective to a certain
extent but not as meeting the objective.
Fig. 4.26 The IC0 portfolio of the E1 intervention matrix

A closer comparison between the base portfolio in Fig. 4.24 and the
intervention portfolio in Fig. 4.26 shows that the intervention portfolio in
this case represents an extract from the base portfolio, i.e., all E1 scenarios
of the base portfolio have survived. In general, in intervention analysis, all
scenarios of the base portfolio that already carry the forced descriptor
variant also will occur in the intervention portfolio; in addition, there
may be further scenarios in the intervention portfolio that did not occur
in the base portfolio. These additional scenarios describe configurations in
which reactions of the other descriptors would prevent the forced descriptor
variant from occurring if the system were left to its own choices. However,
the intervention causes these feedback effects to lose their preventive
power. This type of intervention scenario does not occur with the E1
intervention but does appear in the intervention case investigated next.3

Testing a Proposed Intervention: A2


Next, the intervention in favor of A2 “Public water company with
autonomy from government” is examined. The corresponding intervention
matrix is shown in Fig. 4.27. The descriptor variants A1 and A3 are
removed from the matrix to prevent them from competing with A2.
Fig. 4.27 “Water supply” cross-impact matrix with intervention at A2

The evaluation of this matrix leads to two scenarios (Fig. 4.28). In
contrast to the intervention at E1, this intervention portfolio not only
consists of the intervention-compliant part of the basic portfolio, but a new
scenario also has been added in the form of scenario A2-2.
Fig. 4.28 The IC0 portfolio of the A2 intervention matrix

The A2 intervention has the desired effect. The intervention portfolio
avoids not only F1 but also F2 and thus contains only the desired F3
scenarios. Thus, this intervention is considered promising and preferable to
the E1 intervention.

Robustness Check
As always in CIB, it is recommended to check the significance of a finding
by examining portfolios with marginal inconsistency. With N = 6
descriptors, the IC1 portfolio must be considered according to Sect. 3.7. In
IC1, however, there are no additional scenarios for the matrix in Fig. 4.27,
so the dominance of the F3 scenarios also is given in IC1, and the
effectiveness conclusion for the A2 intervention is “IC1-robust” and thus
significant.

Side Effect Control


When assessing interventions, the possibility of unintended side effects of
the intervention also must be considered, and their harmful effects (if any)
must be weighed against the beneficial effects, i.e., the promotion of a
target descriptor variant (F3 in this case). The intervention analysis also can
provide suggestions for this. From a CIB perspective, an intervention acts
by narrowing down the range of possibilities for the target descriptor. As a
rule, however, the intervention also brings about a change in the possibility
space for other descriptors, which must then be interpreted as a side effect.
In the example of the A2 intervention portfolio, a side effect is that the
descriptor variant “B1 Reduced (noncost-covering) tariffs” is eliminated
from the portfolio as a result of the intervention, and the variant B2 “Cost-
covering water tariffs” now appears throughout (see Figs. 4.24 and 4.28). In
this case, cost-covering tariffs mean an increase in tariffs compared with the
B1 alternative, which can be associated with social burdens and resistance
to the reform program and thus with political costs for the intervening
authority.
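
A simple way to screen for such side effects programmatically is to compare which descriptor variants occur before and after the intervention. The following sketch assumes the portfolio representation of the hypothetical helpers introduced in Sect. 4.4; the function name is illustrative.

```python
def side_effects(base_portfolio, intervention_portfolio):
    """Compare which descriptor variants occur before and after an
    intervention; variants that vanish or newly appear hint at side
    effects (e.g., B1 vanishing under the A2 intervention above)."""
    before = {pair for s in base_portfolio for pair in s.items()}
    after = {pair for s in intervention_portfolio for pair in s.items()}
    return {"eliminated": before - after, "added": after - before}
```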

Alternative Forms of Intervention Analysis


The form of intervention examination discussed thus far is the classic form
of intervention analysis in CIB. For interventions that are not suited to this
approach, there are alternatives for conducting an intervention analysis with
CIB.
For example, interventions may be proposed that are not capable of
enforcing a descriptor variant with certainty but rather have the effect of
gradually promoting the intervened descriptor variant. The categorical
deletion of the alternative descriptor variants in the matrix would then be an
unrealistic idealization of the intervention effect. A better approach here
would be to introduce an additional descriptor “Intervention” with a single
descriptor variant that exerts a gradual promotion on the intervened variant
and/or a gradual inhibition on its competitor variants, thus introducing a
gradual intervention-induced change into the system behavior.
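
A minimal sketch of this alternative, building on the data layout assumed in the Sect. 4.4 sketch, is given below; the descriptor name "INT", all parameter names, and the example strengths are illustrative assumptions.

```python
def add_gradual_intervention(matrix, variants, promoted, inhibited=(),
                             strength=2):
    """Add a single-variant descriptor "INT" whose only variant gradually
    promotes the intervened variant (and optionally inhibits competitors)
    instead of enforcing it outright. All names/values are illustrative."""
    new_variants = {**variants, "INT": ["INT1"]}
    row = {promoted: strength}
    for competitor in inhibited:
        row[competitor] = -strength
    new_matrix = {**matrix, ("INT", "INT1"): row}
    return new_matrix, new_variants

# e.g., promote F3 and dampen F1 (hypothetical strengths):
# m2, v2 = add_gradual_intervention(water_matrix, water_variants,
#                                   ("F", "F3"), inhibited=[("F", "F1")])
```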
Other types of interventions also may affect the interrelationships
between descriptors. That is, certain cross-impacts may emerge, disappear,
or change because of the intervention. In this case, the intervention can be
mapped in the matrix by modifying the affected cross-impact data.

4.4.4 Surprise-Driven Scenarios


In CIB, the technique of intervention analysis is not only used to study
deliberate interventions in the system. It also can be used to analyze the
consequences of “surprises” acting on the system.
The consideration of possible surprises, also called “wildcards”, is a
traditional element of scenario analysis. Wildcards are conceivable future
events whose probability is considered low from today’s perspective but
whose impact in case of occurrence would be so high that it is nevertheless
advisable to consider them (Petersen 1997; Steinmüller and Steinmüller
2004; SwissFuture 2007). Wildcard considerations are therefore often
carried out following the actual scenario creation as a supplementary
robustness consideration.4
The procedure of intervention analysis described in Sect. 4.4.3 also can
be applied to wildcard analysis if the surprise event can be expressed by the
assumption that for a descriptor (or several descriptors), a certain variant is
certain to occur and/or other variants are certain to be excluded. For
Somewhereland, for example, the wildcard “Global economic crisis” could
mean that for descriptor “C. Economy,” variant “C1 Shrinking” is likely to
be active, and the other variants of C are to be regarded as unlikely to occur.
Figure 4.29 shows the matrix shortened accordingly by the descriptor
variants C2 and C3.
Fig. 4.29 Implementation of the “Global economic crisis” wildcard into the Somewhereland matrix
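
Expressed with the hypothetical helpers from the Sect. 4.4 sketch, such a wildcard run could look as follows; somewhereland_matrix and somewhereland_variants are assumed placeholder names for the data behind Fig. 4.29.

```python
# "Global economic crisis" wildcard in the notation of the Sect. 4.4
# sketch (somewhereland_matrix / somewhereland_variants are placeholders):
crisis_variants = intervene(somewhereland_variants, "C", "C1")  # drop C2, C3
crisis_portfolio = portfolio(somewhereland_matrix, crisis_variants)
```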

Under the conditions of the wildcard, the matrix shown in Fig. 4.29
produces an IC0 portfolio of three scenarios. These essentially describe two
types of futures, one of them in two variants (Fig. 4.30).
Fig. 4.30 The Somewhereland portfolio under the impact of the “Global economic crisis” wildcard

4.5 Expert Dissent Analysis


To ensure data quality, CIB studies often use several knowledge sources
for the assessment of the influence relationships, e.g., several literature
references that make statements on a certain relationship or several experts
who are asked to assess a relationship. If the sources come to similar
conclusions, the respective cross-impact data can be considered confirmed.
However, sources sometimes convey substantially different opinions. If the
differences in assessment persist even after the opposing arguments have
been exchanged and considered and are thus regarded as genuine dissent,
the question arises as to how to deal with the divergent suggestions for
cross-impact values. Dealing with dissent can mean two things: first,
finding an appropriate way to decide which data to include in the analysis
when there are divergent proposals; and/or second, understanding dissent
itself as a special sort of information about the system and extracting
conclusions from the dissent data.

4.5.1 Classifying Dissent


Dissent on cross-impact assessments can have different qualities. Figure
4.31 shows two examples from a CIB study on the interdependent
preconditions of sustainable heat consumption (Jenssen and Weimer-Jehle
2012). In this case, 12 experts provided written assessments of the influence
relationships. The example concerns the influence of social value change on
the use of renewable heat energy and the influence of oil price development
on the informedness of energy consumers.

Fig. 4.31 Different qualities of rating differences. Adapted from Jenssen and Weimer-Jehle (2012)

In Case (a) in Fig. 4.31, there is a less severe form of dissent. The expert
panel agreed that there is an impact, and there also was consensus on the
hindering character of the impact, i.e., on the sign of the cross-impact. Only
the strength of the impact was estimated differently. Things are different in
Case (b). There was no agreement on the fundamental question of whether
an impact exists at all or on its sign. Thus, there is a substantial divergence
of estimates and a stronger dissent than in Case (a).
Regardless of whether expert panels or literature reviews are used as
sources for cross-impact coding, judgment sections with rating differences
can be classified into the following categories based on Hummel (2017):

Category I (strength rating controversy): The knowledge sources consistently indicate that there is an influence relationship between the descriptors and whether it is promoting or inhibiting. This means that all sources lead to cross-impact values in the judgment cell and that the different sources do not propose different signs for any cell. The strength ratings, on the other hand, are assessed differently.

Category II (impact relevance controversy): One part of the knowledge sources sees a noticeable impact between two descriptors and agrees on the sign of the impact (but possibly diverges in the rating of the strength). The other part of the sources does not see an influence that is strong enough to be considered.

Category III (severe dissent): The knowledge sources assume different directions of influence, which leads to disagreement in the evaluation of the cells of the judgment section even regarding the sign.

When cross-impact matrices completed by different experts on the same
topic are examined statistically, it can be seen that typically approximately
70% of the assessments turn out to be very coherent (a maximum of one
point of deviation in the difference between two cell values), while
approximately 14% of the assessments show clear differences that indicate
a fundamentally divergent conception of the influence relationship (three or
more points of deviation in the difference between two cell values) (Cf.
statistics box, Sect. 6.3.3). To deal with a set of matrices from different
sources and partly divergent cross-impact ratings (“matrix ensemble”),
different techniques are available, which are described below. They differ in
their effort but also in their ability to adequately process dissent and to use
it as a special kind of information.

4.5.2 Rule-Based Decisions


Here, rating differences are resolved by the core team during a review of the
collected data. The core team sets clear rules in advance on how rating
differences are to be brought to a decision. For example, it can be decided
according to defined criteria which sources of knowledge are considered
decisive in the case of rating differences. The criteria can be, among others:
Domain-specific assignment of different competence ratings of the
sources: In the case of Somewhereland, for example, a knowledge Source
A could be assigned in advance a high competence for economic issues
but a lower competence for social issues, while the competence
assessment for Source B would be the reverse. In the case of differences
between Source A and Source B on the effects on economic descriptors,
the rating of Source A could then be assigned a higher weight, while for
effects on social descriptors, in the case of doubt, the rating of Source B
could weigh more heavily.
Experts can be asked to provide a section-by-section self-assessment of
the certainty of their ratings. In case of doubt, ratings that are self-
assessed as reliable can then be assigned a higher weight.
For an example of rule-based resolution of rating differences in a CIB
analysis, see Hummel (2017).

4.5.3 The Sum Matrix


The simplest approach for merging multiple ratings is to form the sum
matrix from all individual matrices. The sum matrix aggregates the ratings
of the individual matrices cell by cell in an additive manner and thus
reflects the agreements or disagreements among the individual ratings with
its cell values: High positive or negative values in the sum matrix refer to
relationships that have been consistently coded as strong by many sources
and consistently coded in sign. In contrast, relationships that are either
frequently coded as insignificant or weak, or for which there is broad
disagreement among the knowledge sources about the sign, are expressed
by low values in the sum matrix.
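
In the nested-dictionary representation assumed in the Sect. 4.4 sketch, this cell-by-cell aggregation can be sketched in a few lines; the function name is illustrative.

```python
def sum_matrix(ensemble):
    """Cell-by-cell additive aggregation of an ensemble of cross-impact
    matrices, in the nested-dictionary layout assumed in Sect. 4.4."""
    total = {}
    for m in ensemble:
        for source, row in m.items():
            cell = total.setdefault(source, {})
            for target, rating in row.items():
                cell[target] = cell.get(target, 0) + rating
    return total
```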

Example: A Matrix Ensemble about Resource Economics


Figure 4.32 shows an example on the topic of resource economics to
illustrate the aggregation of four individual expert matrices into a sum
matrix. The example refers to a fictitious resource that is essential for global
economic development but scarce, which is extracted by foreign mining
companies in underdeveloped countries and utilized in industrialized
countries. For example, it is assumed that each member of a group of four
experts has independently prepared his or her own matrix and that this
matrix ensemble is subsequently evaluated by the core team.
Fig. 4.32 Example of a matrix ensemble and its sum matrix

Consensus and Dissent in the Matrix Ensemble


In some cases, the ratings of the expert panel coincide completely, such as
on the question of the effect of a stagnating global economy (A1) on
international politics (B).
In some cases, the expert ratings agree on the basic mode of action but
differ in their assessment of the strength of impact (rating difference
Category I). An example is the impact of D1 on A. In other cases, some
experts, in contrast to other members of the expert panel, did not see any
significant impact (impact of C on A, Category II). In both cases, the sum
matrix shows values that are comparable to the case of a consensual
assumption of medium impact strengths.
Special attention is required if the assessments in the expert panel on an
influence relationship differ so fundamentally that there is disagreement on
the sign (Category III). In the example in Fig. 4.32, this is the case for the
effect of high foreign investments in resource exploration and extraction
(D3) on international political relations (B). At this point, the different
matrices reflect fundamentally incompatible conceptions:
Model of reality α (proposed by Expert I and Expert II): “Strong economic
linkages between the resource-consuming countries and the capital-poor part
of the resource-exporting countries (in the form of investments in one
direction and exports in the other direction) create an interdependence of
interests that promotes understanding.”

Model of reality β (proposed by Expert III and Expert IV): “Economic
relations in the form of resource investments by industrialized consumer
countries in capital-weak resource countries frequently lead to neocolonial
structures and then promote long-term tensions between the countries
involved.”

Accordingly, the two camps code the judgment row “D3 Exploration/extraction
investments: Massive” on descriptor “B. International politics” (B1—Balance
of interests, B2—Tensions, B3—Conflicts) with opposite signs.

The disagreement of the expert panel on the fundamental nature of this
influence relationship causes the cell values in question to be close to zero
in the sum matrix, and as a result neither idea plays a significant role in
shaping the scenarios.

Evaluation of the Sum Matrix


The sum matrix can be evaluated as an ordinary cross-impact matrix. As
explained in Sect. 3.3, CIB does not require a specific rating interval, and
the sum matrix formally corresponds to a cross-impact matrix in which a
wider interval has been used—in the example of Fig. 4.32 with an interval
[−12…+12] instead of the usual rating interval [−3…+3]. In this case, the
evaluation results in three scenarios of the highest quality level IC0 (Fig.
4.33).
Fig. 4.33 Fully consistent solutions of the sum matrix

Significance of Inconsistencies in Sum Matrices


As in the case of single matrices, it may be advisable to extend the portfolio
by allowing scenarios with marginal (nonsignificant) inconsistency. The
small portfolio in Fig. 4.33, for example, contains only polar developments
for descriptors A and B, and there may be interest in considering additional
scenarios with moderate developments, if such can be constructed without
significant inconsistency.
An orientation value for the significance threshold of inconsistencies for
single matrices is given in Sect. 3.7. For sum matrices, it must be
considered that their impact balances do not simply result from the sum of
N-1 potentially imprecise impact ratings, as in the case of a single matrix,
but that each single impact value of the sum matrix is in turn composed of
several contributions from the ensemble matrices. If the ensemble consists
of m ensemble matrices, each value for the impact balance of the sum
matrix results from the summation of m·(N-1) individual impact ratings,
each of which is subject to rating uncertainty. The transfer of the rule given
in Sect. 3.7 to the conditions of a sum matrix therefore means that the
significance threshold for inconsistencies here is (for matrices with average
rating uncertainties):

ICS = ½ · √(m · (N − 1))

In the example in Fig. 4.32, N − 1 = 3 and m = 4, resulting in a
significance threshold for inconsistency of ICS = 1.7. The inclusion of IC1
scenarios, and perhaps even IC2 scenarios, is therefore justified in this case.
The evaluation up to IC2 yields five scenarios, which also now contain a
larger proportion of mean developments (Fig. 4.34).

Fig. 4.34 Solutions of the “Resource economy” sum matrix, including all scenarios with
nonsignificant inconsistency
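
As a worked check, the following snippet reproduces the threshold computed above; the closed-form reading of the rule as ICS = ½√(m·(N−1)) is taken from the numbers given in the text, and the function name is an illustrative assumption.

```python
import math

def ic_significance_threshold(n_descriptors, m_matrices=1):
    """ICS = 0.5 * sqrt(m * (N - 1)) for matrices with average rating
    uncertainties; m = 1 recovers the single-matrix rule of Sect. 3.7."""
    return 0.5 * math.sqrt(m_matrices * (n_descriptors - 1))

print(round(ic_significance_threshold(4, 4), 1))  # N - 1 = 3, m = 4 -> 1.7
```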

Sum Matrix Construction in the Case of Nonuniform Rating Scales
If the rating scale was not settled in advance when a matrix ensemble was
created by a group of experts, the sum matrix may have to be formed from
matrices with nonuniform scales. For example, one part of the experts may
use the scale [−2…+2] and another part the scale [−3…+3]. Thus, since
the strongest impacts were coded once with a strength rating of 2 and in the
other case with a strength rating of 3, it is not appropriate to add the
matrices without prior preparation.

Here, it is useful that CIB allows a cross-impact matrix to be multiplied as a whole by
any positive integer without changing the IC0 portfolio of the matrix. The portfolios at a
nonzero IC level also result identically when the IC value is multiplied by the same factor.
The significance threshold for the inconsistencies also multiplies by the same factor. This
property is part of the so-called multiplication invariance of CIB.5

This property of CIB can be used here to produce a uniform rating scale
even after the matrices have been created. For this purpose, the matrix with
scale [−2…+2] is multiplied by a factor of 3 and the matrix with scale
[−3…+3] is multiplied by a factor of 2. Both matrices continue to have
their unchanged portfolios6 but are now both coded on the uniform scale
[−6…+6] so they can be combined into a sum matrix.
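
In code, this harmonization amounts to a uniform rescaling of each matrix before summation; a sketch in the notation assumed earlier (scale_matrix and the matrices m_a, m_b are illustrative names).

```python
def scale_matrix(matrix, factor):
    """Multiply every rating by a positive integer; by the multiplication
    invariance of CIB, the IC0 portfolio is unchanged."""
    return {source: {target: factor * rating
                     for target, rating in row.items()}
            for source, row in matrix.items()}

# Bring a [-2..+2] matrix and a [-3..+3] matrix to the common scale
# [-6..+6] before summation (sum_matrix from the sketch above):
# harmonized = sum_matrix([scale_matrix(m_a, 3), scale_matrix(m_b, 2)])
```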

Sum Matrix vs. Mean Value Matrix


Instead of the sum matrix, the use of a mean value matrix also can be
considered. It is formed from the sum matrix by dividing each cell value by
the number of individual matrices and then rounding. The mean value
matrix has the advantage that the value interval of the matrix corresponds to
the original evaluation interval, and the data of the mean value matrix can
thus be easily compared with the data in the individual matrices.
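
A corresponding sketch for the mean value matrix, reusing the hypothetical sum_matrix helper introduced above:

```python
def mean_matrix(ensemble):
    """Divide each sum-matrix cell by the ensemble size and round; the
    rounding is what can make its portfolio deviate slightly from that
    of the sum matrix. (Python's round() breaks ties toward even
    numbers; no particular tie-breaking rule is prescribed here.)"""
    total, m = sum_matrix(ensemble), len(ensemble)
    return {source: {target: round(value / m)
                     for target, value in row.items()}
            for source, row in total.items()}
```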
Due to the multiplication invariance of CIB discussed above, the sum
matrix and the mean value matrix should in principle lead to the same
portfolios. However, the necessary rounding of the division results means
that the portfolio of the mean value matrix can deviate from that of the sum
matrix. However, the results of both matrices remain similar, i.e., the
scenarios of the sum matrix are at most slightly inconsistent in the mean
value matrix and vice versa. Nevertheless, this means that the portfolio of
the sum matrix reflects the original data more accurately than the portfolio
of the mean value matrix.

Summary: Interpreting the Sum Matrix


High values in the sum matrix occur only when many sources agree on the
existence, direction and high strength of the impact. Medium values, on the
other hand, can come about either when all sources agreed that a medium
impact existed or because some of the sources did not assign an impact and
some assumed a strong impact. A zero impact score can express consensus
that there is no impact, or it can be the result of a controversial assessment
of the cross-impact sign and thus an expression of a massive (Category III)
dissent.
Basically, these properties of the sum matrix make sense for CIB
analysis. After all, it is reasonable that the scenario construction is mainly
based on influences on which there is agreement among experts, and a
strong impact strength rating is attributed to them—both prerequisites for
the emergence of high values in the sum matrix. The property of the sum
matrix that influences with a divergent assessment of the cross-impact sign
result in a weak or disappearing value in the sum matrix also leads to a very
simple but basically sensible handling of dissent in the CIB evaluation: It is
a rational and responsible approach not to consider influences in scenario
construction for which not even the direction of the effect can be reliably
clarified.
However, information is lost in the summation process: The decision to
de facto disregard divergent ratings by leaving them to be obliterated in the
sum matrix is pragmatic and justifiable. But it also means foregoing the use
of the full information content of the ensemble data. More time-consuming,
but also more productive, are the ensemble evaluation procedure and the
group evaluation procedure described in Sects. 4.5.5 and 4.5.6.

4.5.4 Delphi
One possible aim of dissent analysis can be to make the rating differences
of a matrix ensemble the object of investigation as information about the
diverging system views of the experts instead of consolidating the rating
differences by means of decision rules or by forming a sum matrix. The first
step to this is to identify the substantial part of the rating differences and to
separate it from the part of the rating differences that came about rather by
chance, by misunderstandings or by lack of reflection, before then applying
the evaluation techniques described in Sects. 4.5.5 and 4.5.6 to all
substantial rating differences. Separating substantial and nonsubstantial
rating differences requires additional effort and is not an obligatory step. It
does, however, improve the quality of the dissent analysis, as it reduces the
risk of deriving artificial conclusions from apparent dissent. A widely used
method for examining dissent in expert surveys is the Delphi method (e.g.,
Häder and Häder 2000).
Expert Delphis are multistage survey procedures in which a group of
experts is first asked individually to assess a question, for example, by
which year a certain future technology can be expected to be ready for the
market (e.g., Ammon 2009). If clear divergences are identified in the
responses, then the experts involved in the controversy are given an
overview of the divergent assessments and their justifications, and they are
asked to reconsider their own assessment in light of the assessments and
justifications of the other experts and then either reaffirm or modify their
own assessment. The result is either a convergence of assessments or a
clearer formulation of the assessment dissent. Both are considered
legitimate results of a Delphi and a gain in knowledge (Fig. 4.35).

Fig. 4.35 Procedure of a Delphi survey

Delphi methods can be conducted in the classic form by collecting
expert judgments in writing in the first and second rounds. A variant of the
classic Delphi is the Group Delphi (Niederberger and Renn 2018). Here, the
second round of elicitation, i.e., the processing of controversies, is not
conducted in another written round but in an expert workshop. Typically,
the controversies are negotiated there in small groups, and the group results
are discussed in plenary. It is expected that the direct exchange of
arguments in the workshop will lead to a better mutual appreciation of the
controversial points of view and thus to an improved quality of the
consensus finding or the justifications of dissent compared to the purely
written procedure.
In the cross-impact elicitation process, the Delphi method can be used
to work through divergent cross-impact ratings from written surveys or
interviews (Fig. 4.36). It can be decided individually for each survey from
which threshold a rating difference is considered to require processing. A
criterion can be, for example, a rating difference greater than one point or a
difference in sign. The choice of criterion also must consider the number of
cases to be processed and the time resources available for processing the
cases.
Fig. 4.36 Dissent management using the Delphi method

As a result, we cannot expect to find consensus on all divergent ratings.
However, this will generally succeed for some of the divergences. For the
cases of genuine dissent, at least a more precise understanding will be
achieved, which will facilitate their further treatment in the analysis
process. It is then advisable to resort to one of the two evaluation
procedures described below.

4.5.5 Ensemble Evaluation


Ensemble evaluation is a technique to explicitly acknowledge cross-impact
rating dissent in the evaluation and not hide it by using a sum matrix.
Instead, each expert matrix is evaluated individually, and then the different
portfolios are contrasted and compared. The conceptual difference from the
sum matrix is thus that the integration of the individual expert views in the
sum matrix takes place before the matrix evaluation at the level of the
cross-impact data, whereas in the ensemble evaluation, it takes place after
the matrix evaluation at the level of the scenarios.
In ensemble evaluation, the integration of the expert views consists of
searching for scenarios that are valid for several matrices at the same time,
i.e., are part of their portfolios. The first parameter of this analysis is the
quorum q, i.e., the minimum number of ensemble matrices that must lead to
the same scenario in the evaluation to make this scenario count as a solution
of the ensemble evaluation. The second parameter is the inconsistency score
IC that is allowed in the evaluations of the individual matrices. In the
example of the four “Resource economy” expert matrices in (Fig. 4.32), the
ensemble evaluation proceeds as follows:

Step 1: Individual Evaluation of the Expert Matrices


An individual evaluation of all matrices for the highest consistency class
IC0 yields the following picture: Matrix I (created by Expert I) results in
four scenarios. Matrix II leads to five scenarios. Matrix III and Matrix IV
each yield three scenarios. However, since the individual portfolios partially
overlap, these results are based on only seven different scenarios, which are
listed in Fig. 4.37.

Fig. 4.37 Compilation of scenarios of the individual evaluations of the “Resource economy” matrix
ensemble

The ensemble portfolio represents the raw result of the ensemble
evaluation. It can be interpreted in the same way as an ordinary scenario
portfolio if the uncertainty as to which expert will be right in his or her
assessments is understood as part of the uncertainty about the future.
However, the ensemble evaluation has further objectives, which are
described in the following.

Step 2: Compiling the Ensemble Table


Judging from the rating differences among the expert matrices in our
example, it is not to be expected that all experts would agree with all
scenarios in the ensemble portfolio. The ensemble table in Fig. 4.38 shows
in which expert matrices the scenarios of the ensemble portfolio achieve
consistency.7

Fig. 4.38 The ensemble table of the “Resource economy” matrix ensemble

Step 3: Analyzing the Ensemble Table


If, in our example, a scenario must be valid for all expert matrices (i.e., to
be consistent in all four matrices, q = 4), then one scenario becomes the
focus of attention. Based on the matrices, it can be assumed that all
members of the expert panel accept Scenario no. 1 as a plausible future. If
we lower the requirements and require consistent scenarios in only three of
the four matrices (q = 3), no further solutions arise in this ensemble.
For q = 2, i.e., for the requirement that scenarios are acceptable for at
least half of the matrices, five more scenarios are found. Of these, three
scenarios (nos. 2, 3, and 4) are legitimated by Matrices I and II, and two
scenarios (nos. 5 and 6) are legitimated by Matrices III and IV. This
indicates an increased potential for understanding between Expert I and
Expert II on the one hand and between Expert III and Expert IV on the
other.
Finally, the ensemble evaluation for q = 1 shows that another alternative
emerges from Matrix II with Scenario no. 7, which, however, is not
supported by any other matrix at the IC0 level.
By allowing moderate inconsistency (IC1), the overlaps between the
individual portfolios can be expanded. Then, for our ensemble, two
scenarios are acceptable for all matrices, and five scenarios reach a “three-
fourths majority” among the matrices.
The main results of an ensemble evaluation are thus i) the identification
of scenarios that are supported, if not by all, then at least by some of the
group members, and ii) the determination of scenarios that are characteristic
for individual group members or certain subgroups.
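
The quorum logic of Steps 1–3 can be condensed into a few lines, reusing the hypothetical portfolio() function from the Sect. 4.4 sketch; all names are illustrative.

```python
from collections import Counter

def ensemble_portfolio(matrices, variants, quorum=1, max_ic=0):
    """Scenarios that reach IC <= max_ic in at least `quorum` ensemble
    matrices (reusing the hypothetical portfolio() from Sect. 4.4)."""
    support = Counter()
    for m in matrices:
        for scenario in portfolio(m, variants, max_ic):
            support[tuple(sorted(scenario.items()))] += 1
    return {scenario: n for scenario, n in support.items() if n >= quorum}

# q = 4: scenarios all four expert matrices accept; q = 2: at least half
# shared = ensemble_portfolio([m1, m2, m3, m4], variants, quorum=4)
```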

Sensitivity Analysis
An evaluation task comparable to the ensemble evaluation of expert dissent
arises when the project team carries out the cross-impact assessments itself
but anticipates the possibility of contrasting assessments of certain impact
relationships after studying the literature or consulting with experts. The
project team can then define a “baseline matrix” in which the assessment
variant judged to be likely is used for each uncertain impact relationship. In
addition, a matrix ensemble is created in which one of the uncertain impact
relationships is varied in each ensemble member, but the base matrix is
otherwise retained. The ensemble evaluation then reveals which of the
assessment uncertainties are associated with significant changes in the
portfolio and should therefore be considered critical uncertainties, and
which assessment uncertainties have no or only insignificant effects on the
portfolio.
Provided that not too many assessment uncertainties have been
identified, it may also be considered for a second analysis step to vary two
influence relationships in each ensemble member. This analysis allows the
study of combination effects.
Examples of sensitivity analyses in CIB practice can be found in
Schweizer and Kriegler (2012) and Schweizer and O’Neill (2014).

4.5.6 Group Evaluation


Group evaluation as an option for analyzing the rating differences can be
regarded as an intermediate form between the sum matrix and the ensemble
evaluation. While the sum matrix offers no opportunity to address the rating
divergences, the ensemble evaluation affords each rating difference a
chance to express itself in the evaluation results. The group evaluation takes
a middle course here, in that it focuses on a few dissent topics that are
assessed as particularly significant but treats all other rating differences in
the logic of a sum matrix.

Step 1: Identification of the Key Dissent


In the “Resource economy” ensemble, the judgment section “Impact of D
on B” shows by far the greatest differences in ratings, and these can be
classified as Type III controversies. Some of the ratings within a cell range
from +3 to −3 and thus show the maximum possible discrepancy (Fig.
4.39). As described in Sect. 4.5.3 in the paragraph “Consensus and dissent
in the matrix ensemble,” the different ratings are based on fundamentally
conflicting ideas about the cause-effect relationship that is at work between
these descriptors. Compared to this dissent, all other rating differences
between the ensemble matrices are of secondary importance. Although
there is another Type III controversy for the B2 → D1 judgment cell, the
score differences here are much smaller than in the case of the D3 → B
controversy. The D3 → B controversy is therefore classified as the key
dissent of the ensemble.

Fig. 4.39 Key dissent of the “Resource economy” matrix ensemble

Step 2: Grouping the Matrices Along the Key Dissent


The matrices can be grouped according to how the authors of the matrices
relate to the key dissent. Under Group (1), we include all matrices that
describe high investments of consumer countries in exporting countries as
conflict-promoting (Matrices I and II). Group (2) includes all matrices that
represent the opposite position (Matrices III and IV).
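
If the key dissent can be pinned down to the sign of a representative cell, the grouping itself is mechanical. The following sketch assumes the nested-dictionary matrix layout used earlier and is meant as an illustration, not as a substitute for the substantive classification of the matrices.

```python
def split_by_key_dissent(matrices, source, target):
    """Group ensemble matrices by the sign they assign to a representative
    key-dissent cell, e.g. source = ("D", "D3"), target = ("B", "B2")."""
    camp_a, camp_b = [], []
    for m in matrices:
        rating = m.get(source, {}).get(target, 0)
        (camp_a if rating >= 0 else camp_b).append(m)
    return camp_a, camp_b
```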

Step 3: Group Sum Matrix Building and Evaluation


Each matrix group is combined into a separate sum matrix and evaluated.
The procedure is described in Nutshell II (Fig. 4.40). The portfolios of the
sum matrices are compared. Shared scenarios are identified, and the
exclusive scenarios of the groups are interpreted against the background of
the key dissent.

Fig. 4.40 Nutshell II—Dissent analysis by group evaluation

The differences in the portfolios of the two groups (Fig. 4.40) can easily
be explained by the key dissent: In Group (1), the hypothesis of a
conflict-promoting effect of foreign resource investments prevents the
occurrence of scenarios with permanently high resource investments and
resulting high resource supply. Such scenarios become possible only with
the counterhypothesis in Group (2). Neglecting resource efficiency also can
occur only in Group (2) because it presupposes a permanently high resource
supply and thus high resource investments, which indirectly implies, in the
long term, the negation of the hypothesis of a conflict-promoting effect of
high foreign resource investments, a hypothesis characteristic for Group (1).

Comparing the Results of the Group Evaluation and the Ensemble Evaluation
The comparison of Fig. 4.40 with Figs. 4.37 and 4.38 shows that the results
of the group evaluation for the example largely correspond for this case to
the result of the ensemble evaluation. The group evaluation also points to
Scenario no. 1 (identical to Scenario no. 1 of the ensemble evaluation) as
the consensus scenario. All scenarios of both groups also appear in the
ensemble evaluation. Conversely, almost all scenarios of the ensemble
evaluation also are found in the results of the evaluation of the group
matrices.
The exception is Scenario no. 7 of the ensemble evaluation, which does
not appear in the group evaluation, at least not in the consistency level IC0
used here. This is understandable because Scenario no. 7 is not a consensus
in Group (1), which consists of Matrices I and II, but is supported by only
Matrix II and rejected by Matrix I (cf. Figure 4.38). Since the sum matrix of
Group (1) consists of a compromise of Matrices I and II, it is easily
understandable that Scenario no. 7 does not achieve complete consistency
in this sum matrix.
However, the scenario is only weakly inconsistent and would occur if
the evaluation were extended to IC1.8
The broad agreement between the ensemble evaluation and the group
evaluation in this example does not necessarily occur in every case when
ensemble and group evaluations are compared. It expresses that there is no
other serious dissent in our example apart from the key dissent described
and that the division into two groups is therefore sufficient for a dissent
analysis.
With a sufficiently high number of matrices, it also is possible to
process more than one key dissent by group evaluation. If two key dissents
are identified, up to four matrix groups may be needed, and in the case of
three key dissents up to eight matrix groups, provided that each key dissent
can be pinned down to two polar positions. However, a smaller number of
groups may be sufficient for the dissent analysis if the dissent positions are
correlated, i.e., if the experts who share a common view on one field of
dissent also take shared positions on other fields of dissent.
In summary, it can be noted that dissent analysis in CIB offers far-
reaching opportunities to identify and formulate expert controversies and to
analyze their systemic consequences. Expert dissent, when processed with
these techniques, is not a “confounding factor” for analysis but a kind of
information about the system and its perceptions.

4.6 Storyline Development


For many purposes, the scenarios produced directly by CIB are sufficient
for the intended use. For other uses, where communication with laypersons
or with nonparticipants in the scenario process is the primary focus, it may
be necessary to elaborate the terse “raw scenarios” into narrative texts about
the respective system futures, the storylines. The narrator takes on the role
of a fictitious chronicler who, in the target year of the scenarios, explains
“retrospectively” the developments and contexts that have led to the
“current” state of the system. For the assignment of a motto for the
storyline, one might reflect on what title the chronicler would have chosen
for his or her essay to summarize the essence of this piece of “history” in a
concise manner.

4.6.1 Strengths and Weaknesses of CIB-Based Storyline Development
CIB offers ambivalent conditions for the creation of storylines. Favorably,
CIB does not merely provide a simple list of the individual elements of a
scenario. Instead, the cross-impact matrix provides the storyline creator
with a rationale for the scenarios. It provides rich background information
on how the individual scenario elements relate to each other, why the
scenario just happened to fall into place the way it did, and what resistance
there might be to the developments described in the scenario. If the cross-
impact ratings were accompanied by explanatory text when the cross-
impact matrix was created, these can further deepen the storylines.
On the other hand, CIB’s approach of understanding systems as
networks of interacting elements is a complication. Potentially, every
development is at the same time the cause and consequence of other
developments. Since CIB, at least in its basic form, makes no statement
about the temporal sequence in which a scenario builds up as a self-
reinforcing network of developments, there is no immediate possibility of
assigning a linear logic to a CIB scenario.
Texts, and thus storylines, on the other hand, are linear in structure, and
each sentence should be comprehensible from what has been said thus far.
Retrospective justifications cannot always be avoided in argumentative
texts. However, they always disrupt the narrative flow and must therefore
be avoided as much as possible. Due to this linearity, a simple text is hardly
able to adequately represent the complexity of a network. The elaboration
of a CIB scenario into a storyline can therefore usually not be done without
friction, since the network logic of the CIB scenario must be brought into a
linear logic that is unnatural for it—it must be sequenced.9

4.6.2 Preparation of the Scenario-Specific Cross-Impact Matrix
The procedure for translating a CIB scenario into a storyline is
demonstrated below, using Somewhereland scenario no. 10 “Prosperity in a
divided society” (scenario code [A2 B1 C3 D2 E2 F1], see Fig. 3.17). The
key to transforming the scenario into a storyline is the cross-impact matrix
as a database of system interrelationships. However, for understanding the
scenario logic, only the part of the matrix that addresses the
interrelationships between the active descriptor variants of the scenario is
needed. In other words, we need to deal with only the scenario-specific
cross-impact matrix (Fig. 4.41, see Sect. 4.2.2).

Fig. 4.41 Specific cross-impact matrix for Somewhereland scenario no. 10

Moreover, not all entries in the specific cross-impact matrix need to be
considered for the construction of a simple storyline; primarily, the positive
values are relevant because although data on overcoming resistance to
successful developments also are useful for a detailed storyline, they can be
excluded in a simple storyline such as the one to be developed here. The
most important thing is to understand what gives the scenario its shape, and
this is explained by the positive impacts, not the negative ones.
Furthermore, in the simplest case, we can limit ourselves to noting only
between which developments influences become effective and leave the
strength information aside to further simplify the narrative. For very
extensive scenarios, for which the consideration of all positive cross-
impacts would lead to confusing storylines, it also can be considered to skip
weak positive cross-impacts and to address only the strong influences on a
descriptor. However, this is not necessary in the present simple example. On
the other hand, it is generally a useful addition to quote the textual
explanations for the cross-impacts in keywords, if such were documented
during the cross-impact elicitation. The result for Somewhereland scenario
no. 10 is shown in Fig. 4.42.

Fig. 4.42 Data basis for storyline development for Somewhereland scenario no. 10

For small cross-impact matrices, as in the example, a graphical
implementation of this database also can be considered (Fig. 4.43), again
omitting the distinction of the influence strengths and the display of the
negative cross-impacts.
Fig. 4.43 Data basis for the development of a storyline in graphical representation

4.6.3 Storyline Creation


Now the storyline can be created. At the beginning of the storyline creation
process, the key decision is in which order the descriptors should be
narrated. Ideally, a descriptor should appear in the storyline only if the
descriptors that determine it have already been discussed previously in the
storyline. The ideal case of a “well narratable” scenario is therefore when a
descriptor order can be found by reordering, in which the specific matrix of
the scenario contains only entries above the diagonal (Fig. 4.44).
Fig. 4.44 Example of a perfectly sequenceable specific matrix

If the narrative sequence can be based on a perfectly sequenceable
matrix, then each new descriptor that enters the plot during the narrative can
be fully explained by the descriptors already introduced. However, only in
exceptional cases can a sequence be found for the descriptors that leads to
this ideal case. Usually, one must be content with finding an order that
causes as few “subdiagonal impacts” as possible. The specific matrix for
Somewhereland scenario no. 10, as presented in Fig. 4.42, is poorly suited
for storyline development because it contains four subdiagonal entries.
However, by a simple rearrangement (moving descriptor F to the beginning
of the sequence), the number of subdiagonal entries can be halved (Fig.
4.45).

Fig. 4.45 Improved descriptor order for Somewhereland scenario no. 10

In addition, of the two remaining subdiagonal entries, one entry is a
direct neighbor of the diagonal (effect of A2 on F1). Such diagonal-near
entries are less disturbing for the narrativity of the scenario, since the
descriptors A and F coupled in this way can be treated directly one after the
other and therefore in narrative unity in the storyline.
In this simple example, it was quite easy to see that the sequencing of
the specific matrix would benefit from moving descriptor F to the top of the
descriptor sequence. With larger matrices, it is often less obvious which
descriptors would be best placed at the beginning of the narrative sequence.
Promising candidates are then the descriptors that show particularly
numerous and strong cross-impact values in their row of the specific matrix.
Even higher priority should be given to descriptors that do not contain
column entries in the specific matrix. They can always be positioned at the
beginning, and descriptors without row entries can always be placed at the
end of the sequencing without disturbing the narrative flow.
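
The search for an order with few subdiagonal impacts can also be supported by a simple greedy heuristic, sketched below in the notation assumed earlier; this is one possible heuristic, not a procedure prescribed by CIB.

```python
def narrative_order(descriptors, positive_impacts):
    """Greedy sequencing heuristic: repeatedly place the descriptor with
    the fewest positive impacts still incoming from unplaced descriptors,
    keeping 'subdiagonal' (backward-pointing) impacts to a minimum.
    positive_impacts is a set of (source, target) descriptor pairs."""
    remaining, order = set(descriptors), []
    while remaining:
        nxt = min(sorted(remaining),
                  key=lambda d: sum((s, d) in positive_impacts
                                    for s in remaining if s != d))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# e.g., narrative_order(list("ABCDEF"), impacts_of_scenario_10)
```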
Through the found narrative sequence, the network of descriptor
relations can be unrolled step by step into a linear form, and this in such a
way that each descriptor can be largely explained by the descriptors
preceding it in the storyline. Thus, for Somewhereland scenario no. 10, the
sequence follows:

F – A – B – C – D – E

The fact that the items, apart from descriptor F, are ordered
alphabetically is because in this scenario, a single rearrangement was
sufficient to achieve a satisfactory sequence. Often, more rearrangements
are needed, which then leads to more alphabetical mixing.
The sequence found is valid only for Scenario no. 10 and not for all
Somewhereland scenarios because each scenario has a different specific
cross-impact matrix.
As a literary text in the broadest sense, a storyline remains an individual
product, despite the strong framework provided by CIB analysis, and its
length and style also must depend on the intended purpose and target
audience. The following storyline for Somewhereland scenario no. 10 is
therefore only one possible example:
This shows that storyline texts become possible that no longer
immediately reveal their origin as the transcription of an algorithmically
constructed raw scenario. The result is a seemingly “normal”
argumentative–descriptive text, which is, however, in fact closely
prescribed by the scenario, the cross-impacts behind it, and their
explanatory text. The two reverse impacts from Fig. 4.45 (A2 → F1 and
D2 → F1) that run counter to the linear logic of the text are incorporated
into the storyline by interpreting them as a stabilization and reinforcement
of the already prevailing meritocratic values (descriptor F) that occurs in the
course of time.
It also can be seen that the storyline would have been much less dense if
there were no explanatory text for the cross-impacts. This underlines the
importance of the recommendation from Sect. 3.3 to document the
reasoning behind the cross-impact ratings together with the cross-impact
data, even if these have no direct function for the algorithmic scenario
construction process.

4.7 Basic Characteristics of a CIB Portfolio


In this subchapter, the formal aspects of scenario portfolios are discussed.
The most obvious feature of a portfolio is the number of its scenarios. Since
CIB does not generate a fixed number of scenarios but suggests that the
number of scenarios should depend on the system interdependencies
encoded in the matrix, large differences are found here from application to
application.
In addition to the number of scenarios, two other aspects shape the
appearance of a portfolio, namely, (i) the completeness with which the
portfolio covers the future space as represented by the set of descriptor
variants included in the matrix and (ii) the diversity of scenarios.

4.7.1 Number of Scenarios


CIB does not provide a fixed number of consistent scenarios from the
matrix evaluation. The example “Somewhereland” leads to 10 scenarios,
and portfolios of this size are not atypical. In some cases, however, there
may be many more or fewer, and in rare cases, there may be no solution at
all. The reason for this is that the number of scenarios primarily depends on
the interconnections between the descriptors. From a methodological point
of view, this is a desirable property because the number of consistent
scenarios can be understood as part of the assessment that the CIB analysis
makes about the system: The scenario count expresses the degree of future
openness of the system or, conversely, its degree of determinacy.
From a practical perspective, too, a high or low number of scenarios is
not necessarily a good or bad thing. A high number of scenarios can be an
inconvenience for one purpose but a prerequisite for another.
Based solely on the statistical features of the matrix and its
interdependencies, i.e., without evaluation of the matrix using the CIB
algorithm, it is generally not possible to reliably infer the number of
scenarios, and the information about the “scenario-proneness” of a matrix is
hidden in its structural features. Only a few approaches are known to read
this cipher message.10 One of them is described below.

Scenario Counts in Practice


In general, the number of scenarios (the “portfolio volume”) shows a wide
variance in CIB practice, ranging from cases with only one scenario11 to
cases where project peculiarities intentionally lead to several million
scenarios.12 However, neither is typical. According to an analysis of 120
matrices taken from application practice, the scenario count for IC0 is
between 3 and 15 scenarios in 50% of the cases (median: 5) and between 6
and 70 scenarios for IC1 (median: 15).
In approximately 2/3 of the application cases, the IC0 portfolio led to
the range of 3–30 scenarios, which is well suited for usual scenario analysis
purposes. For approximately 75% of the cases, either IC0 or IC1 falls
within this favorable range so that the scenario portfolio can be used
without further action. For the remaining 25% of the cases, Chap. 5 shows
ways to deal with the result.
Sparse Matrices: A Prerequisite for Large Scenario Portfolios
Whether the number of scenarios for a matrix turns out to be high or low
cannot be reliably predicted from the features of the matrix. Instead, this
becomes apparent only through the CIB evaluation. However, with the
proportion of zero values in the matrix, there is an indicator that shows in
advance whether there is a chance of a high number of scenarios.
Cells with zeros give the influenced descriptor more freedom in how it
adjusts to the influencing descriptor. This opens additional opportunities for
consistent descriptor variant combination, and a high proportion of empty
cells in matrices (“sparse matrices”) should therefore favor a high number
of scenarios.
The statistics box shows that these considerations are confirmed in
practice. Neither low nor high zero value frequencies necessarily lead to a
high number of scenarios. For each zero value frequency, there is a range
for the portfolio volume. The lower limit of the range is always in the low
single digits, regardless of the zero value frequency (except for very high
proportions of empty cells in the range above 80%). However, there is a
clear effect of zero value frequency on the upper bound of the range. The
sparser the matrix is populated with nonzero cell values, the higher the
possible number of scenarios.
As a rule of thumb, one can assume that the upper end of the range for
the number of scenarios approximately doubles per 10% higher proportion
of empty cells, although this rule loses its validity at very high zero value
frequencies and is replaced by a progressively increasing slope rate. In the
extreme case of a matrix completely filled with zeros, the number of
scenarios is equal to the number of descriptor variant combinations of the
scenario space.
For five of the 120 matrices examined, there was no consistent scenario,
which is why they were not included in the statistics box. This corresponds
to a share of approximately 4% in the sample of practice matrices. Section
5.1 proposes ways to deal with matrices with no or few solutions.
One special cause for a high proportion of empty cells in cross-impact
matrices is the presence of autonomous descriptors, i.e., descriptors that
represent external conditions of the system and therefore have no cross-
impact values in their columns. They also fit into the described correlation
and often lead to an above-average number of scenarios.

Frequency Distribution of the Inconsistency Value


The expansion of the number of scenarios because of the transition to
higher inconsistency classes is as case-dependent as the number of “fully
consistent” IC0 scenarios. For a concrete matrix, the effect of increasing the
allowed inconsistency can be read from the matrix’s frequency distribution
of inconsistency values. It depicts how many descriptor variant
combinations have a given inconsistency value. In the case of
Somewhereland, this results in the frequency distribution of the 486
descriptor variant combinations shown in Fig. 4.46.13

Fig. 4.46 Frequency distribution of the inconsistency value in the Somewhereland matrix
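
Such a frequency distribution can be tabulated by brute force over the scenario space, reusing the hypothetical inconsistency() helper from the Sect. 4.4 sketch:

```python
from collections import Counter
from itertools import product

def ic_histogram(matrix, variants):
    """Frequency of the inconsistency value over all descriptor variant
    combinations (cf. Fig. 4.46), reusing inconsistency() from the
    Sect. 4.4 sketch."""
    names = sorted(variants)
    counts = Counter()
    for combo in product(*(variants[d] for d in names)):
        counts[inconsistency(matrix, variants, dict(zip(names, combo)))] += 1
    return dict(sorted(counts.items()))
```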

During CIB analysis, the focus of consideration is first on the fully
consistent scenarios (IC0), since they are perfect solutions of the
consistency condition. However, due to unavoidable judgmental
uncertainties when creating the cross-impact matrix, there are also reasons
to consider scenarios of moderate inconsistency. Up to which limit this is
advisable is discussed in Sects. 3.7.1 and 6.3.3.
In any case, the potential relevance of the scenarios with IC > 0 raises
the question of how much the scenario portfolio expands if IC1 or even IC2
is allowed. In the case of Somewhereland, five additional scenarios are
added to the 10 IC0 scenarios if the IC1 class is admitted. The portfolio
volume thus expands by the factor 15/10 = 1.5. This expansion factor is
within the typical range for CIB matrices. As the statistics of the matrices of
published CIB studies show, the expansion factor in the majority of
application cases is between 1.2 and 4.5. This implies that the vast majority
of descriptor variant combinations are found in higher inconsistency
classes, which are significantly inconsistent.

4.7.2 The Presence Rate


In addition to the number of scenarios, a further important feature of
portfolios is how fully they exploit the range of potential futures of the
scenario space, as expressed by the various descriptor variants in the cross-
impact matrix. It may be the case that one portfolio concentrates on certain
subspaces of the scenario space and leaves a large portion of the space
unused, while for other portfolios, the scenarios are more or less evenly
distributed across the scenario space. Neither is right or wrong per se.
Rather, it is a neutral statement that the CIB analysis makes about a system,
expressing how open the future of the system appears to be from the
perspective of the method.
A simple yet instructive indicator of the extent to which a portfolio
exploits the range of the scenario space is the proportion of available
descriptor variants that are taken up by the portfolio. This proportion is
referred to as the presence rate. The Somewhereland portfolio in Fig. 3.17
achieves a presence rate of 100% (“full presence”) since all descriptor
variants included in the cross-impact matrix are addressed in at least one
scenario of the portfolio. However, full presence is not the rule. In practice,
the average presence rate for IC0 portfolios is typically in the range of
60%-90%. Missing descriptor variants (“vacancies”) indicate that certain
parts of the scenario space are not touched by the portfolio because they
seem to be inaccessible due to the interdependencies in the system.
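As a sketch of the bookkeeping behind this indicator: if scenarios are encoded as tuples of variant indices (one entry per descriptor), the presence rate is a simple set count. The encoding below mimics a five-descriptor system with 16 variants, as in the “Oil price” matrix discussed next, but the variant indices themselves are illustrative.

variants = [3, 3, 3, 3, 4]    # number of variants per descriptor A..E
portfolio = [                 # scenarios as variant-index tuples (illustrative)
    (0, 1, 2, 0, 1),
    (2, 0, 1, 0, 1),
    (2, 0, 1, 2, 1),
]

# (descriptor, variant) pairs occurring in at least one scenario
present = {(d, s[d]) for s in portfolio for d in range(len(variants))}
total = sum(variants)
print(f"Presence rate: {len(present)}/{total} = {len(present) / total:.2%}")
# -> Presence rate: 9/16 = 56.25%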

To obtain a closer look at this indicator, we investigate the “Oil price” matrix (Fig. 4.47). The example analyzes the factors influencing the world
market oil price and their interdependencies. It is taken from Weimer-Jehle
(2006), where this matrix is in turn taken in modified form from an earlier
study (Honton et al. 1985). The example is thus to be understood as a
retrospective analysis of oil market dynamics in the 1980s.
Fig. 4.47 The cross-impact “Oil price” matrix (Weimer-Jehle 2006)

The matrix leads to three fully consistent (IC0) scenarios, as shown in Fig. 4.48.

Fig. 4.48 The three fully consistent scenarios of the “Oil price” matrix

In the IC0 portfolio of the “Oil price” matrix, 9 of the 16 descriptor variants in the matrix are represented, resulting in a presence rate of 56.3%.
The vacancies are not only quantitatively but also qualitatively substantial:
For “E. Oil price,” of all descriptors the very one at the focus of interest in the analysis, only one of four descriptor variants is available in IC0.
When higher inconsistency classes are included in the portfolio, the vacancies disappear step by step: Admitting the IC1 scenarios to the evaluation yields thirteen additional scenarios that introduce six of the missing variants. In the portfolio of IC2 scenarios
with a further 38 scenarios, all 16 descriptor variants of the matrix are
finally found (Fig. 4.49).

Fig. 4.49 Descriptor variant vacancies of the “Oil price” matrix (empty squares)

The decision regarding up to which inconsistency classes the extension of the portfolio is justifiable is discussed in Sect. 3.7.

4.7.3 The Portfolio Diversity


Variety in the alternative pictures of the future presented by a scenario
portfolio is created not only by a high utilization of the available descriptor
variants. It is also crucial that the descriptor variants are combined in a
varied manner in the scenarios. The presence rate as an indicator of
comprehensiveness of descriptor futures in the portfolio must therefore be
supplemented by a second indicator that measures the combinatorial
portfolio diversity.14

The Distance Table


Access to the assessment of portfolio diversity is provided by the distance
table. It shows how different (distant) the scenarios of the portfolio are from
each other. In the simplest case, distance between two scenarios is
calculated as the number of descriptors with different variants. The lowest
possible distance is 1 (when a pair of scenarios differ only in a single
descriptor and otherwise match). The maximum possible distance is equal
to the number of descriptors N (when two scenarios are completely
different). The distance table of the portfolio “Oil price” (Fig. 4.48) is
shown in Fig. 4.50. For example, the distance between Scenarios no. 2 and
no. 3 takes the value of 1 because Scenarios no. 2 and no. 3 differ in one
descriptor (Descriptor D) but match in all other descriptors.

Fig. 4.50 Distance table of the “Oil price” portfolio
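Computing such a table takes only a few lines once scenarios are encoded as variant-index tuples. The sketch below reuses the illustrative encoding from the presence rate example; only the distance pattern (3, 4, and 1) mirrors Fig. 4.50.

portfolio = [
    (0, 1, 2, 0, 1),
    (2, 0, 1, 0, 1),
    (2, 0, 1, 2, 1),
]

def distance(a, b):
    """Number of descriptors on which two scenarios differ."""
    return sum(x != y for x, y in zip(a, b))

for i, a in enumerate(portfolio, start=1):
    print(f"no. {i}:", [distance(a, b) for b in portfolio])
# no. 1: [0, 3, 4]
# no. 2: [3, 0, 1]
# no. 3: [4, 1, 0]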

Measuring Portfolio Diversity


A useful indicator of scenario diversity in the portfolio is the maximum
number of portfolio scenarios that keep a certain minimum distance from
each other and can therefore all be considered sufficiently different from
each other. By choosing the minimum distance, we can specify how strictly
we want to understand the “sufficiently different” requirement. For
example, we can require that “sufficiently different” should mean that at
least half of the descriptors carry different variants (N/2 criterion). Then, we
can generally expect that the future narratives implied by the scenarios are
not just minor variations but clearly different in character.
For the “Oil price” matrix with five descriptors, the N/2 criterion means
that scenarios must meet at least a distance of 3 to satisfy the minimum
distance condition. A look at the distance table immediately shows that two
pairs of scenarios can be found in this simple portfolio that satisfy the N/2
criterion (Scenarios no. 1 and no. 2) or even overfulfill the criterion
(Scenarios no. 1 and no. 3). Thus, both scenario pairs are a distance-
controlled selection of the “Oil price” portfolio in terms of the N/2
minimum distance condition.
However, the full set of three scenarios fails to satisfy the minimum
distance condition. Since two scenarios can be found that satisfy the
minimum distance condition, but not three scenarios, the diversity score of
the “Oil price” IC0 portfolio based on the N/2 criterion is D = 2.
The Somewhereland portfolio achieves a higher diversity score. For this
portfolio with six descriptors, the N/2 criterion means that scenarios must
meet a minimum distance of 3. With this requirement, when all possible
scenario selections are sampled, four selections with five scenarios can be
found for the Somewhereland portfolio, but none can be found with six
scenarios. Figure 4.51 shows the Somewhereland distance table with
Scenarios no. 1, no. 2, no. 5, no. 7, and no. 9 marked. All intersection cells
of these scenarios (shown inverted) carry values of at least 3, confirming
that they form an N/2 selection. Thus, the diversity score of the
Somewhereland portfolio, based on the N/2 criterion, is D = 5.

Fig. 4.51 Distance table of the Somewhereland portfolio with marking of an N/2 selection
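For small portfolios, the diversity score can be determined exactly by checking every possible selection, as in the following sketch for the N/2 criterion. This exhaustive route is precisely what becomes infeasible for large portfolios (see footnote 15), which motivates the estimation procedure referred to in Sect. 5.2.2.

from itertools import combinations
from math import ceil

portfolio = [(0, 1, 2, 0, 1), (2, 0, 1, 0, 1), (2, 0, 1, 2, 1)]
dmin = ceil(len(portfolio[0]) / 2)        # N/2 criterion: 3 for N = 5

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

score = 1                                 # a single scenario always qualifies
for k in range(2, len(portfolio) + 1):
    if any(all(distance(a, b) >= dmin for a, b in combinations(sel, 2))
           for sel in combinations(portfolio, k)):
        score = k
print("Diversity score D =", score)       # -> D = 2, as for the "Oil price" portfolio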

Typical Diversity Scores


The higher the diversity score of a portfolio, the more strikingly different
future narratives can be drawn from it. The examples given earlier show a
portfolio with a low diversity score and a portfolio with a high diversity
score. In principle, all possible scenario selections must be examined to
determine the diversity score, but this reaches technical limits for large
portfolios.15 Therefore, in Sect. 5.2.2, a procedure is described with which
the diversity score of a portfolio can be estimated with little effort and
which is also supported by the CIB software ScenarioWizard.
Using the diversity score as an indicator, it is possible to examine what
portfolio diversity can generally be expected in a CIB analysis. As a rule,
the scores are higher for large portfolios than for small portfolios. This also
implies that they are generally higher for IC1 portfolios than for IC0
portfolios. Furthermore, higher diversity scores result if instead of N/2 a
lower requirement for scenario distances is applied, for example, N/3.
Even for this weaker requirement, the scenarios resulting from diversity
sampling will still be significantly different from each other, although in
some cases less different than when the N/2 condition is applied (see
statistics box).

The diversity score is a concise answer of the CIB analysis to the question of how open or how closed a system is in its future development.
Thus, where a specific matrix and its portfolio come to lie in the frequency
distribution shown in the statistics box is not random but rather places the
system under investigation in terms of its openness to the future in the range
of all CIB analyses recorded in the statistics.
The diversity score should therefore not be rashly interpreted as an
indicator of whether a CIB analysis has “succeeded.” When analyzing a
system whose development is in fact channeled in a narrow part of the
possibility space, a valid CIB analysis should reflect that system
characteristic and report a low diversity score. In contrast, a failed CIB
analysis would be suspected if a system assumed to be very open to the
future yields only a low diversity score. In this case, it should be questioned
whether the ad hoc assumption of future openness is justified; if this
assumption is still made, it should be considered whether some aspects
essential for future openness were not included in the CIB analysis.

References
Ammon, U. (2009). Delphi-Befragung. In S. Kühl, P. Strodtholz, & A. Taffertshofer (Eds.), Handbuch Methoden der Organisationsforschung. Quantitative und qualitative Methoden (pp. 458–476). VS Verlag für Sozialwissenschaften.

Aschenbrücker, A., & Löscher, M. (2013). Szenario-gestützte Identifikation von externen Bedrohungspotenzialen in der Medikamentenversorgungskette. IPRI-Praxis Nr. 2, Stuttgart.

Ayandeban. (2016). Future scenarios facing Iran in the coming year 1395 (March 2016–March 2017) [in Persian]. Ayandeban Iran Futures Studies, www.ayandeban.com

CfWI/Centre for Workforce Intelligence. (2014). Scenario generation - enhancing scenario generation and quantification. CfWI technical paper series no. 7. See also: Willis, G., Cave, S., & Kunc, M. (2018). Strategic workforce planning in healthcare: A multi-methodology approach. European Journal of Operational Research, 267, 250–263.

Häder, M., & Häder, S. (Eds.). (2000). Die Delphi-Technik in den Sozialwissenschaften: Methodische Forschungen und innovative Anwendungen (ZUMA-Publikationen). Springer VS.

Honton, E. J., Stacey, G. S., & Millet, S. M. (1985). Future scenarios—The BASICS computational method. Economics and policy analysis occasional paper (Vol. 44). Battelle Columbus Division.

Hummel, E. (2017). Das komplexe Geschehen des Ernährungsverhaltens - Erfassen, Darstellen und Analysieren mit Hilfe verschiedener Instrumente zum Umgang mit Komplexität. Dissertation, University of Gießen.

Jenssen, T., & Weimer-Jehle, W. (2012). Mehr als die Summe der einzelnen Teile - Konsistente Szenarien des Wärmekonsums als Reflexionsrahmen für Politik und Wissenschaft. Gaia, 21(4), 290–299.

Kopfmüller, J., Weimer-Jehle, W., Naegler, T., Buchgeister, J., Bräutigam, K.-R., & Stelzer, V. (2021). Integrative scenario assessment as a tool to support decisions in energy transitions. Energies, 14, 1580. https://doi.org/10.3390/en14061580

Kosow, H., León, C., & Schütze, M. (2013). Escenarios para el futuro - Lima y Callao 2040. Escenarios CIB, storylines & simulación LiWatool. Scenario brochure of the LiWa project (www.lima-water.de).

Kurniawan, J. H. (2018). Discovering alternative scenarios for sustainable urban transportation. In 48th annual conference of the Urban Affairs Association, April 4–7, 2018, Toronto, Canada.

Le Roux, B., & Rouanet, H. (2009). Multiple correspondence analysis. SAGE Publications. http://www.ebook.de/de/product/10546753/brigitte_le_roux_henry_rouanet_multiple_correspondence_analysis.html

León, C., & Kosow, H. (2019). Wasserknappheit in Megastädten am Beispiel Lima. In J. L. Lozán, S.-W. Breckle, W. Kuttler, & A. Matzarakis (Eds.), Warnsignal Klima: Die Städte (pp. 191–196). Universität Hamburg.

Nakićenović, N., et al. (2000). Special report on emissions scenarios. Bericht des Intergovernmental Panel on Climate Change (IPCC). Cambridge University Press.

Niederberger, M., & Renn, O. (2018). Das Gruppendelphi. Springer VS.

Petersen, J. L. (1997). Out of the blue. Wild cards and other big future surprises. The Arlington Institute.

Ramirez, R., & Wilkinson, A. (2014). Rethinking the 2×2 scenario method: Grid or frames? Technological Forecasting and Social Change, 86, 254–264.

Saner, D., Blumer, Y. B., Lang, D. J., & Köhler, A. (2011). Scenarios for the implementation of EU waste legislation at national level and their consequences for emissions from municipal waste incineration. Resources, Conservation and Recycling, 57, 67–77.

Sardesai, S., Kippenberger, J., & Stute, M. (2019). Whitepaper scenario planning for the generation of future supply chains. Fraunhofer IML. https://doi.org/10.24406/iml-n-566073

Schütze, M., Seidel, J., Chamorro, A., & León, C. (2018). Integrated modelling of a megacity water system - The application of a transdisciplinary approach to the Lima metropolitan area. Journal of Hydrology. https://doi.org/10.1016/j.jhydrol.2018.03.045

Schweizer, V. J., & Kriegler, E. (2012). Improving environmental change research with systematic techniques for qualitative scenarios. Environmental Research Letters, 7(4), 044011.

Schweizer, V. J., & O’Neill, B. C. (2014). Systematic construction of global socioeconomic pathways using internally consistent element combinations. Climatic Change, 122, 431–445.

Steinmüller, A., & Steinmüller, K. (2004). Wild cards. Wenn das Unwahrscheinliche eintritt. Murmann Verlag.

SwissFuture (Ed.). (2007). WildCards. Magazin für Zukunftsmonitoring 2/2007.

Taleb, N. N. (2010). The black swan: The impact of the highly improbable. Random House Publishing.

van’t Klooster, S. A., & van Asselt, M. B. A. (2006). Practising the scenario-axes technique. Futures, 38(1), 15–30.

Weimer-Jehle, W. (2006). Cross-impact balances: A system-theoretical approach to cross-impact analysis. Technological Forecasting and Social Change, 73(4), 334–361.

Weimer-Jehle, W. (2009). Properties of cross-impact balance analysis. arXiv:0912.5352v1.

Weimer-Jehle, W., Deuschle, J., & Rehaag, R. (2012). Familial and societal causes of juvenile obesity - a qualitative model on obesity development and prevention in socially disadvantaged children and adolescents. Journal of Public Health, 20(2), 111–124.

Weimer-Jehle, W., Wassermann, S., & Fuchs, G. (2010). Erstellung von Energie- und Innovations-Szenarien mit der Cross-Impact-Bilanzanalyse: Internationalisierung von Innovationsstrategien im Bereich der Kohlekraftwerkstechnologie. 11. Symposium Energieinnovation, TU Graz, February 10–12, 2010.

Footnotes
1 See, for instance, van’t Klooster and van Asselt (2006). For a critical perspective, see Ramirez and
Wilkinson (2014).

2 In the CIB software ScenarioWizard, the function “Force variant” can be used to perform an
intervention analysis.

3 This additional type of intervention scenario may or may not occur as a result of an intervention. In
no case, however, does it occur when intervening on an autonomous descriptor.

4 Still beyond wildcards are concepts that argue that system disruptions in reality are often triggered
by nonanticipatable events beyond the horizon of experience (“black swans,” Taleb 2010).

5 Invariance property IO-2, Weimer-Jehle (2006: 343). For a proof, see also Weimer-Jehle (2009),
Property VIII.
6 To be precise: After multiplication by the factor n, a matrix has the same IC0 portfolio as before,
and the ICn portfolio is the same as the former IC1 portfolio, and the IC(2n) portfolio is the same as
the former IC2 portfolio, and so on.

7 The creation of the ensemble individual evaluations and the intersection table is supported in the
ScenarioWizard software by the “Ensemble evaluation” function.

8 When analyzing marginally inconsistent scenarios in the group evaluation, it should be noted that
the significance threshold for inconsistencies according to Memo M5 (Sect. 4.5.3) depends on the
number of matrices in a group and may therefore differ from group to group if the groups are of
different sizes.

9 In didactics, the term sequencing stands for the creation of a learner-friendly, sequential order of
learning content. The term is adopted in CIB because the arrangement of the thematic components of
a scenario or storyline in an order that promotes understanding is an analogous didactic task.

10 An exception is a matrix containing strongly biased data, which leads to small portfolios (see
Sect. 5.3).

11 Weimer-Jehle et al. (2010).

12 Weimer-Jehle et al. (2012).

13 The calculations can be performed, e.g., using the ScenarioWizard software.

14 As an example, demonstrating that high scenario diversity and high presence of descriptor
variants do not automatically go hand in hand, we consider a matrix with 10 descriptors and two
variants for each descriptor. The portfolio consists of 11 scenarios: one scenario with the first variant
for each descriptor and 10 further scenarios, each with the second variant for one descriptor and the
first variant for the other descriptors. Then, a presence rate of 100% is achieved, and yet the portfolio
shows only minimal diversity because all scenarios are only marginal variations of the parent
scenario (Scenario no. 1).
15 In general, the binomial coefficient z!/(k!(z-k)!) indicates how many different ways there are to
select k scenarios from a portfolio of z scenarios. That is, there are 3 ways to select two scenarios
from the “Oil price” portfolio of 3 scenarios and 252 ways to form a set of 5 scenarios from the 10
Somewhereland scenarios. This is feasible. However, there are more than 10 billion different ways to
select 10 scenarios from a portfolio of 50 scenarios.

5. What if… Challenges in CIB Practice



Many CIB users would probably consider a portfolio of moderate size consisting of very different scenarios an ideal result of a CIB analysis. Such
a portfolio would explore the future space of a system, and the number of
scenarios would facilitate in-depth engagement with each identified
scenario. In this sense, the Somewhereland portfolio can be understood as
an ideal case. With its ten scenarios, it addresses very different futures and
makes use of all descriptor variants of the matrix in at least one scenario. At
the same time, it is not so extensive that a thorough examination of each
scenario would be impossible.
Portfolios of this quality are not exceptional as a direct result of CIB
evaluation. However, they are also not a matter of course. The IC0 portfolio
can also yield very few, very many, or insufficiently diverse scenarios and
thus pose problems for the intended utilization of the scenarios. One
solution may be to repeat the evaluation with more favorable parameters or
to use complementary methods, such as cluster analysis, to arrive at a
portfolio that still possesses the desired properties. In certain cases,
however, a better approach is to acknowledge that it is precisely in the
unexpected shape of the portfolio that a message is conveyed to us by CIB
about the system under study.
The main task of this chapter is to discuss the most important cases of
“nonideal” portfolios and to describe suitable procedures for dealing with or
interpreting such results. In addition, Sect. 5.7 discusses the special case of
context-sensitive impacts, a challenge that occasionally arises when
creating a cross-impact matrix.

5.1 Insufficient Number of Scenarios


Cross-impact matrices with a small number of scenarios are not the rule, but
they are not rare in CIB practice. According to the statistics shown in Sect.
4.7.1, approximately 22% of cross-impact matrices found in the literature
produce at most two consistent scenarios for IC0, and approximately 1/3 of
the matrices produce at most three scenarios. Approximately 4% of the
matrices found in CIB studies yield no scenarios at all in IC0. Approaches
for dealing with matrices that yield too few or no scenarios are therefore
essential parts of the CIB analysis toolkit.
An example of a cross-impact matrix with few IC0 solutions is the “Oil price” matrix (Fig. 4.47). In Sects. 4.7.2 and 4.7.3, we have already seen
that this matrix yields only three fully consistent scenarios, which in fact
address only two distinctly different futures. A considerable proportion
(approx. 44%) of the descriptor variants of the matrix are not represented in
the portfolio. The question arises whether this narrowing of the future space
is validly justified by the coded system relationships or whether the
narrowness of the IC0 portfolio must be regarded as artificial.
In Sect. 3.6, inconsistency classes were introduced as the central instrument with which CIB controls whether conclusions can be considered significant or whether they could be caused by common rating uncertainties. Scenarios that are excluded from the IC0 portfolio but enter the portfolio when marginal inconsistencies are allowed are generally considered to be scenarios of sufficient consistency and qualified for use. In the case of the “Oil price” matrix with 5 descriptors, Eq. M4 in Sect. 3.7.1 recommends allowing IC1, which leads to 16 scenarios (Fig. 5.1).
Fig. 5.1 IC1 portfolio of the “Oil price” matrix

Now we have arrived at a diverse portfolio that remains manageable in terms of volume. The presence rate of the descriptor variants has increased
from 56.3% (9 of 16) to 93.8% (15 of 16). The diversity score of the
portfolio (N/2 criterion; see Sect. 4.7.3) has tripled from D = 2 to D = 6.1
However, the effect of a switch to the IC1 portfolio on the number of
scenarios varies from matrix to matrix. The expansion factor, i.e., the ratio
of the IC1 scenario count to the IC0 scenario count, can assume very
different values, as the statistics box “Expansion factor” in Sect. 4.7.1
shows.
The additional view on the scenarios of marginal inconsistency is
generally advisable to complete an analysis. If it turns out that extending the
portfolio would not add any fundamentally new future types but merely
provide variations on the types already represented in IC0, the decision can
be made to rely exclusively on the IC0 portfolio.

5.2 Too Many Scenarios


A challenge for the users of the portfolio can also arise if the IC0 evaluation
leads to a high number of scenarios. It is then no longer possible to deal
with all scenarios in detail, and limiting the analysis to selected scenarios
could be associated with the risk of a potentially consequential
incompleteness of the future analysis.
An example of a matrix with a relatively high number of scenarios in
IC0 is shown in Fig. 5.2. The matrix addresses possible futures of global
socioeconomic development. The example is based on a study by
Schweizer and O’Neill (2014) but has been simplified and modified for
demonstration purposes. Many global relationships cannot be formulated in
a generalized way because of the strong heterogeneity among world
regions. Therefore, the matrix shows a relatively high proportion of zeros.
According to Sect. 4.7.1, this is a typical cause for a high number of
scenarios, which is confirmed in this case.

Fig. 5.2 “Global socioeconomic pathways” matrix. Adapted and modified from Schweizer and
O’Neill (2014)

The IC0 portfolio of this matrix includes 32 scenarios as shown in Table 5.1.
Table 5.1 Portfolio of the “Global socioeconomic pathways” matrix

No. 01 [A3 B1 C1 D3 E1 F1 G1]
No. 02 [A1 B2 C1 D2 E2 F1 G1]
No. 03 [A2 B2 C1 D2 E2 F1 G1]
No. 04 [A1 B2 C1 D2 E2 F2 G1]
No. 05 [A2 B2 C1 D2 E2 F2 G1]
No. 06 [A3 B2 C1 D2 E2 F2 G1]
No. 07 [A2 B2 C1 D2 E2 F3 G1]
No. 08 [A3 B2 C1 D2 E2 F3 G1]
No. 09 [A1 B2 C2 D1 E2 F1 G2]
No. 10 [A1 B2 C1 D2 E2 F1 G2]
No. 11 [A2 B2 C1 D2 E2 F1 G2]
No. 12 [A3 B2 C1 D2 E2 F1 G2]
No. 13 [A2 B2 C2 D2 E2 F1 G2]
No. 14 [A3 B2 C2 D2 E2 F1 G2]
No. 15 [A2 B2 C3 D2 E2 F1 G2]
No. 16 [A3 B2 C3 D2 E2 F1 G2]
No. 17 [A1 B2 C2 D1 E2 F2 G2]
No. 18 [A3 B2 C3 D1 E2 F2 G2]
No. 19 [A1 B2 C1 D2 E2 F2 G2]
No. 20 [A2 B2 C1 D2 E2 F2 G2]
No. 21 [A3 B2 C1 D2 E2 F2 G2]
No. 22 [A2 B2 C2 D2 E2 F2 G2]
No. 23 [A3 B2 C2 D2 E2 F2 G2]
No. 24 [A3 B2 C3 D2 E2 F2 G2]
No. 25 [A3 B2 C2 D1 E2 F3 G2]
No. 26 [A3 B3 C2 D1 E2 F3 G2]
No. 27 [A2 B2 C1 D2 E2 F3 G2]
No. 28 [A3 B2 C1 D2 E2 F3 G2]
No. 29 [A3 B2 C2 D2 E2 F3 G2]
No. 30 [A3 B2 C3 D2 E2 F1 G3]
No. 31 [A3 B2 C3 D2 E2 F2 G3]
No. 32 [A1 B3 C3 D1 E3 F3 G3]

In principle, several ways are open in CIB evaluation to reduce the complexity of a large portfolio to a manageable level. Three methods are
discussed in detail below. In Sect. 5.2.4, further information on additional
approaches is provided. Each of these methods provides a complete view of
the portfolio with respect to one particular aspect but remains incomplete
with respect to other aspects. In practical application, the challenge is to
identify which approach is most fruitful in the case at hand.

5.2.1 Statistical Analysis


Statistical analyses facilitate an overview of a portfolio and thus provide
general insights into the future space that the portfolio describes. However,
the results of such analyses must be interpreted against the background of
the special properties of CIB analysis.
The simplest way to statistically analyze a portfolio is to count the
frequencies with which the different descriptor variants occur in the
portfolio.2 Figure 5.3 illustrates this approach for the “Global
socioeconomic pathways” portfolio.
Fig. 5.3 Occurrence frequencies of the descriptor variants in the “Global socioeconomic pathways”
portfolio
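Counting these occurrence frequencies is straightforward once the portfolio is available in the short variant-index format. The sketch below illustrates the bookkeeping using the first three scenarios of Table 5.1 (zero-based indices); extending it to all 32 scenarios only means completing the list.

from collections import Counter

portfolio = [                 # first three scenarios of Table 5.1, zero-based
    (2, 0, 0, 2, 0, 0, 0),    # No. 01 [A3 B1 C1 D3 E1 F1 G1]
    (0, 1, 0, 1, 1, 0, 0),    # No. 02 [A1 B2 C1 D2 E2 F1 G1]
    (1, 1, 0, 1, 1, 0, 0),    # No. 03 [A2 B2 C1 D2 E2 F1 G1]
]

counts = Counter((d, s[d]) for s in portfolio for d in range(len(portfolio[0])))
for (d, v), n in sorted(counts.items()):
    print(f"{chr(65 + d)}{v + 1}: {n}/{len(portfolio)}")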

Figure 5.3 clearly shows that the descriptor variants play highly
different roles in the portfolio. Scenarios with the descriptor variant “B1 -
Low average income,” for example, are very rare, while scenarios with the
descriptor variant “E2 - Medium educational attainment” represent the
standard case. In contrast, the descriptor variants “F. Quality of
governance” are distributed fairly evenly across the scenarios.

Interpreting the Frequency Data


The natural interpretation of frequencies in CIB refers to the status of
scenarios as possibilities. An interpretation as probabilities, in contrast, is
subject to strong preconditions, as explained below. The low frequency of B1, for
example, means that there are only few ways to assemble the descriptor
variants of the matrix into a coherent picture such that a low average
income finds its place in it. In other words, the presence of B1 in a scenario
reveals a good deal about what other components of the scenario must look
like. This could be because B1 can only be caused by a few very specific
combinations of environmental conditions (represented by the other
descriptors) or because the occurrence of B1 has extremely formative
consequences for the other descriptors (see Excursus).
Excursus: Understanding the Causes of Low Descriptor Variant
Frequencies
Descriptor variants may have a noticeably low frequency in the portfolio
because their presence in a scenario either
Requires a very specific environment as a precondition or
Exerts a highly determinant effect on their environment
or because both causes are mixed. Both possible causes are different in
nature and shed different light on the rare descriptor variant. Therefore, it
may be of interest to clarify which cause is decisive in the present case.
The cause of a weak presence can be clarified by a simple
supplementary analysis with two steps. In the first step, the descriptor
column of the rare descriptor variant is set to 0 (i.e., Column B in the
case of descriptor variant B1) and the descriptor expression frequency is
recalculated. If the frequency increases significantly as a result,
environment sensitivity has been verified. In the second step, the row of
the relevant descriptor variant (here, B1) is now set to 0 (with the
column restored). Significantly increased frequencies indicate a
formative effect of the rare descriptor variant. In the example “Global
socioeconomic pathways,” this test yields the following frequency data
for the rare descriptor variants B1 and E3:

      Basic matrix   Descriptor column cleared   Descriptor variant row cleared
B1    3.1%           20%                         3.3%
E3    3.1%           1.5%                        24.4%

Thus, the reason for the weak presence of descriptor variant B1 lies
in its environment sensitivity and for that of E3 in its formative effect on
the environment.3
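In code, this two-step test amounts to two small filters over the matrix, assuming the dictionary representation used in the earlier sketches, with ratings keyed by (source descriptor, source variant, target descriptor, target variant); the helper names are illustrative.

def clear_column(impact, target_descr):
    """Step 1: remove every rating that acts on the given descriptor."""
    return {k: v for k, v in impact.items() if k[2] != target_descr}

def clear_row(impact, source_descr, source_var):
    """Step 2: remove every rating exerted by the given descriptor variant."""
    return {k: v for k, v in impact.items()
            if (k[0], k[1]) != (source_descr, source_var)}

# Usage: recompute the portfolio and the variant frequencies once with
# clear_column(impact, 1) and once with clear_row(impact, 1, 0), i.e.,
# Column B and Row B1 in the example, and compare the resulting frequencies.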

Requirements for a Probabilistic Interpretation of Frequency Data
For proper interpretation of frequency statistics, it must be noted that
frequency data as a rule must not be interpreted rashly as probability data.
CIB constructs the scenario portfolio as a list of possibilities without
evaluating their probability of occurrence (cf. Sect. 8.1). An interpretation
of frequencies as probabilities would presuppose that all scenarios of the
portfolio are assumed to be equally probable. However, this cannot be
assumed without special justification. As a rule, it will not be appropriate to
assume that all portfolio scenarios are equally probable.

5.2.2 Diversity Sampling


Diversity sampling adopts a fundamentally different approach to dealing
with large portfolios. The goal here is to work out, as a message of the
portfolio, how different the future of the system can be. This goal can be
achieved through examples, provided that the scenarios chosen as examples
are as different from one another as possible, thus making the diversity
potential of the portfolio tangible, and as long as the scenario selection is
not misinterpreted to mean “nothing else can happen.”
The immediate aim of this approach is therefore to make a small, highly
heterogeneous selection of scenarios from the portfolio. A complete
representation of the future space of the portfolio is neither intended nor
possible. In addition, since the diversity approach explicitly aims at
presenting examples, it is not harmful that there are generally several
alternative ways to define reasonable selection sets for a portfolio, while
only one of them is actually used. Thus, in the end, unlike in statistics,
scenario selection involves only a few scenarios, but - again, unlike the
statistical approach - it examines them in detail.
Figure 5.4 shows a schematic future space for a system in the target year
of the scenario analysis. For the sake of simplicity, the future space is
plotted in two dimensions (these could be, for example, desirability and
similarity of the scenarios to the present).
Fig. 5.4 Exploring the future space through scenario selection

The future space is initially well represented by a comprehensive portfolio (fine crosses in Fig. 5.4a). An improper selection of five scenarios
could leave broad parts of the future space unrepresented by favoring a
certain part of the future space (bold crosses in Fig. 5.4b). The selection
shown in Fig. 5.4c is more suitable: although it naturally captures the future
space less well than the full portfolio, it still gives a reasonably accurate
impression of the shape of the future space. The key to better representation
in Fig. 5.4c is that the distance between the selected scenarios is much
greater than in Fig. 5.4b. This forces the use of all available space when
distributing the selection scenarios. Diversity among the scenarios of a
selection is consequently a selection criterion that ensures a good
exploration of the future space.
For scenarios based on descriptors and descriptor variants, the “max-
min” selection heuristic developed by Tietje (2005) can be used, which
results in well-diversified scenario sets.4 This procedure, which is
implemented in the CIB software ScenarioWizard, is described in Nutshell
III (Fig. 5.5). In the following, it is applied to the 32 scenarios of the
“Global socioeconomic pathways” matrix.
Fig. 5.5 Nutshell III - Procedure for creating a selection with high scenario distances (diversity
sampling)
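The following is a greedy sketch of the max-min idea, not necessarily Tietje's exact procedure as implemented in ScenarioWizard: it seeds the selection with the most distant pair of scenarios and then repeatedly adds the scenario whose smallest distance to the scenarios already selected is largest.

from itertools import combinations

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def max_min_selection(portfolio, k):
    """Greedily pick k scenarios with large pairwise distances."""
    selection = list(max(combinations(portfolio, 2),
                         key=lambda pair: distance(*pair)))
    while len(selection) < k:
        rest = [s for s in portfolio if s not in selection]
        # Add the scenario farthest (in min-distance terms) from the selection
        selection.append(max(rest, key=lambda s: min(distance(s, t)
                                                     for t in selection)))
    return selection

Because ties between equally distant candidates are broken arbitrarily, a call such as max_min_selection(portfolio, 5) on the 32 scenarios of Table 5.1 yields one well-spread selection among several possible ones and need not coincide with Fig. 5.6.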

Figure 5.6 shows five scenarios that were selected from the 32 scenarios
in the “Global socioeconomic pathways” matrix using the “max-min”
heuristic.
Fig. 5.6 Scenario selection according to the “max-min” heuristic (diversity sampling)

Each of these scenarios differs from every other scenario in the selection in at least four descriptors, the majority of the seven descriptors.
As mentioned, there is usually more than one equivalent way to perform
scenario selection. Working with a scenario subset, prepared, e.g., by
diversity sampling, is therefore only a suitable approach for dealing with
large portfolios if it is sufficient for the objective of the scenario study to
address the range of possible futures on the basis of examples.

Excursus: Alternative Scenario Selection Approaches


In addition to the “max-min” procedure (Tietje 2005) discussed above,
other selection procedures have been described in the scenario literature
that can be used as alternatives depending on the needs of the CIB
analysis and the characteristics of the descriptors and their variants, even
though these selection procedures were originally developed for use with
other scenario methods. Two examples from recent research are as
follows:
SDA (“Scenario Diversity Analysis,” Carlsen et al. 2016): The SDA
procedure allows for mathematically optimal diversity in scenario
selection but is designed for ordinal or quantified descriptor variants. In
CIB analyses, therefore, the procedure can only be used if no nominal
(i.e., rank-free) descriptors are used that have more than two variants
(see Sect. 6.2.2 for the differences between nominal, ordinal, and ratio
descriptors).
OLDFAR (“Optimized Linear Diversity Field Anomaly Relaxation,”
Lord et al. 2016): OLDFAR is particularly suitable as a selection
procedure if the participants in a scenario analysis have different
perceptions of what constitutes the diversity of a scenario set. OLDFAR
searches for a scenario selection that satisfies all diversity perceptions of
the participants to the greatest extent possible (Pareto optimality).

5.2.3 Positioning Scenarios on a Portfolio Map


The procedure described in Sect. 5.2.2 is based on scenario distances and
thus on a formal criterion. The portfolio mapping procedure introduced in
Sect. 4.1.3, in contrast, offers a way to perform scenario selection that is
more determined by content-related perspectives. As a first step in creating
a portfolio map, we must define two content-related criteria that seem
suitable for typifying the scenarios. In the analysis “Global socioeconomic
pathways,” for example, we could define i) an index for social development
and ii) an index for economic development and thus distinguish the
scenarios according to whether social and economic developments occur
together or whether one development takes place at the expense of the
other.
As described in Sect. 4.1.3, the next step is to evaluate all descriptor
variants according to the two criteria. The evaluations could be made, for
example, as shown in Fig. 5.7.
Fig. 5.7 Evaluation of descriptor variants according to the criteria of social and economic
development (exemplary data)

According to Sect. 4.1.3, the coordinates of a scenario in the axis diagram are determined by summing the index points of all active descriptor variants of the scenario. This results in the diagram shown in Fig. 5.8.

Fig. 5.8 Portfolio map of the “Global socioeconomic pathways” portfolio
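As a sketch of this coordinate computation: each variant carries index points on both criteria, and a scenario's map position is the sum over its active variants. The index points below are hypothetical stand-ins, not the values of Fig. 5.7.

social   = {(0, 0): -1, (0, 1): 1, (1, 0): -1, (1, 1): 0, (1, 2): 2}
economic = {(0, 0): -2, (0, 1): 2, (1, 0): 0, (1, 1): 1, (1, 2): 2}

def position(scenario):
    """Map coordinates: summed index points of the active variants."""
    active = list(enumerate(scenario))
    return (sum(social[k] for k in active), sum(economic[k] for k in active))

print(position((1, 2)))   # -> (3, 4): social 1 + 2, economic 2 + 2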

Because certain scenarios arrive at the same index values in different ways, the diagram contains only 22 points for 32 scenarios. The diagram shows that social and economic development, in the examined matrix, cannot occur completely detached from one another but that one
development can be ahead of the other to a limited extent. Four future types
can be defined, and one scenario can be determined as representative for
each:
Common success for both development types (e.g., no. 32)
Common failure for both development types (e.g., no. 1)
Unbalanced development with a slight advantage of economic
development (e.g., no. 30)
Unbalanced development with the advantage of social development (e.g.,
no. 8)
Between each of these scenarios lies a transition area occupied by
numerous other scenarios.

Method Comparison
A comparison of the results of the formal selection procedure with the
content-oriented selection procedure reveals similarities but also
differences. Scenarios no. 1 and no. 32 were nominated in agreement by
both procedures. However, the formal procedure failed to recognize that Scenarios no. 8 and no. 30 could be assets to the selection because, judged by the formal scenario distances alone, they appear to contribute only moderately new aspects. The content-driven procedure, in contrast, was blind to the fact that although Scenarios no. 1 and no. 2 arrive at similar index values
and therefore appear as neighbors on the portfolio map, the similar index
totals are reached in very different ways. Thus, behind this seemingly close
proximity are nevertheless strikingly different futures. A combination of
both methods therefore offers chances for additional insights.

5.2.4 Further Procedures


Other common statistical methods for ordering large portfolios are cluster
analysis and correspondence analysis. Both methods are only briefly
discussed here. The reader is referred to the literature for more detailed
information.

Cluster Analysis
Cluster analysis enables one to systematically group large sets of scenarios
into families by sorting them according to similarities (Everitt et al. 2001).
In the resulting scenario families (“clusters”), certain descriptor variants are
the same for all members and thus define the future type for which the
cluster stands. The variants of the other descriptors are not uniform within
the cluster. Statistics offers different forms of cluster analysis. The widely
used statistics software package R, for example, provides several
procedures with which different cluster algorithms can be applied.5
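As an illustration of the idea in Python (with scipy assumed available, in place of the R procedures mentioned above, and with illustrative scenario encodings): hierarchical clustering on the pairwise Hamming distances groups the scenarios into a chosen number of families.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

portfolio = np.array([        # illustrative variant-index encodings
    [2, 0, 0, 2, 0, 0, 0],
    [0, 1, 0, 1, 1, 0, 0],
    [1, 1, 0, 1, 1, 0, 0],
    [2, 1, 2, 1, 1, 0, 1],
    [2, 1, 2, 1, 1, 1, 2],
    [0, 2, 2, 0, 2, 2, 2],
])

dist = pdist(portfolio, metric="hamming")   # share of differing descriptors
tree = linkage(dist, method="average")      # agglomerative clustering
families = fcluster(tree, t=3, criterion="maxclust")
print(families)                             # family label for each scenario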
Correspondence Analysis
Correspondence analysis is a statistical technique for identifying latent
structures in datasets (Le Roux and Rouanet 2009). In CIB, it can be used to
determine best-fit dimensions for a portfolio map (Sect. 4.1.3). Pregger et
al. (2020) present an example application of this statistical method to order
a very large CIB portfolio (see Sect. 7.2). The statistical method of
multidimensional scaling (Kruskal and Wish 1978) represents a similar
approach.

5.3 Monotonous Portfolio


Occasionally, a cross-impact matrix leads to a portfolio that essentially
points to only one type of future, either because there is only one scenario
or because there are several scenarios whose variants match for almost all
descriptors. A result of this type is in tension with the scenario concept
because it expresses a negation of the basic hypothesis of the openness of
the future. Instead, the result implies that the system under study has
predictable development trends and would therefore be more appropriately
handled by forecasting methods.
Before arriving at this conclusion, however, a critical review of the
results is appropriate. Sect. 5.1 recommends repeating the evaluation in
such cases, permitting marginal inconsistencies. If the inclusion of marginal
inconsistencies leads to an expansion of the portfolio and a greater variety
of future types, the one-sidedness of the IC0 portfolio may be interpreted as
an artifact, and the expanded portfolio may be used.
The case is different if the monotonicity of the portfolio holds even if
marginally inconsistent scenarios are admitted. Then, the monotonicity
must be accepted as a valid statement about the matrix and the system
description formulated in it. The coded interdependencies fix the system in
a hermetic state from which it can break out only with difficulty. The goal
of the analysis must no longer be to work toward a pluralistic portfolio in
disregard of this finding. Rather, the goal must now be to understand how
this one-sidedness results from the system description and what insights
about the system can be gained from it.
As an example of the case of monotonicity implied by the cross-impact
data, we examine a matrix on the social development of an emerging
country (Fig. 5.9). The matrix is based on a study by Cabrera Méndez et al.
(2010) but has been significantly condensed for use as an example.

Fig. 5.9 Cross-impact matrix on the social development of an emerging country. Adapted and
modified from Cabrera Méndez et al. (2010) (Translation from Spanish by the author)

The matrix has a single solution in IC0, which depicts a prosperous society that is centralized, well-organized, authoritarian, and not concerned
with ecological issues (Table 5.2).
Table 5.2 Only solution of the “Emerging country” matrix

A. Economic growth A1 Growing
B. Economy B2 Based on knowledge
C. Urbanization C1 Big cities
D. Political focus D1 Economic growth and employment
E. State resources E1 In good order
F. Rule of law and security F1 Freedom & rule of law & strong institutions
G. Equal opportunities G2 Determinism
H. Environmental sustainability H2 Overexploitation
The matrix contains eight descriptors. According to Sect. 3.7, the
threshold for marginal inconsistencies is therefore 0.5√7 ≈ 1.3, and thus,
IC1 scenarios may be considered. However, in IC1 (and in IC2 and IC3),
the matrix does not yield any further scenarios. Thus, the monotonicity of
the portfolio is highly significant, and the next question is which properties
of the matrix lead to this outcome.
First, due to the low quota of zero values in the matrix, it is to be
expected from the outset that the number of scenarios for this matrix is low
(cf. Sect. 4.7.1). However, the matrix also contains clear indications that the
coded interrelations massively prefer certain descriptor variants and thus
prevent a plurality of solutions. Matrices with this property are referred to
as “predetermined matrices” and will be considered in more detail in the
following sections.

5.3.1 Unbalanced Judgment Sections


Usually, the coding of a judgment section in the cross-impact matrix turns
out in such a way that the descriptor variants of the impact source prefer
different variants of the target descriptor, as for example in the judgment
section B → C in Fig. 5.9:

         C
       C1   C2
B  B1  −1    1
   B2   2   −2

Here, B1 prefers C2, and B2 prefers C1. The judgment sections of this
regular type introduce “if-then” mechanisms into the matrix, which
interrelate with one another and ultimately form the foundation for a
pluralistic portfolio. Rarer are usually one-sided coded judgment sections,
such as Section D → E in Fig. 5.9:

         E
       E1   E2
D  D1   2   −2
   D2   2   −2
This type of judgment section leads to a preference for one descriptor
variant (here E1) no matter which development occurs for D. The more
sections of this type with one-sided preference that occur in a descriptor
column, the more difficult it becomes for CIB to also find solutions with a
different descriptor variant. The descriptor is then predetermined, and if a
matrix contains a sufficient number of predetermined descriptors, the matrix
as a whole is also predetermined.
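Screening a matrix for such one-sided sections can be automated. The sketch below, in the dictionary convention of the earlier sketches, calls a judgment section unbalanced if every source variant gives its highest rating to the same target variant (ties default to the first variant, which is adequate for a rough screening).

def section(impact, src, tgt, n_src_var, n_tgt_var):
    """Ratings of the judgment section src -> tgt as a list of rows."""
    return [[impact.get((src, v, tgt, w), 0) for w in range(n_tgt_var)]
            for v in range(n_src_var)]

def is_unbalanced(rows):
    """True if all source variants prefer the same target variant."""
    preferred = {max(range(len(row)), key=row.__getitem__) for row in rows}
    return len(preferred) == 1 and any(any(row) for row in rows)

print(is_unbalanced([[-1, 1], [2, -2]]))   # B -> C section above: False
print(is_unbalanced([[2, -2], [2, -2]]))   # D -> E section above: True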
Unbalanced judgment sections and their interpretation are discussed in
more detail in Sect. 6.3.2. It is important to note that the occurrence of
unbalanced sections in a matrix should not be interpreted per se as an
indication of flawed judgments. Such sections may occur in error in certain
cases. However, in other cases, they may represent the correct
implementation of a valid system insight (see Sect. 6.3.2, Paragraph
“Phantom variants”). For a CIB analysis, their presence in a matrix is
therefore simply a fact whose effect on the evaluation results must be
investigated from a technical perspective. After this investigation has been
conducted, the time has come to question their factual correctness.
In the matrix in Fig. 5.9, unbalanced judgment sections appear
unusually often, as Fig. 5.10 illustrates. In certain cases, the preferences of
the unbalanced sections change within a column and thus lose their power.
This is the case in Column B, where two unbalanced sections prefer B1 and
two other unbalanced sections prefer B2. In Columns C, E, F, and H,
however, the unbalanced sections cause a clear bias in favor of the
descriptor variants C1, E1, F1, and H2. Therefore, it is not surprising that
the single scenario of the matrix has just these characteristics and that the
matrix is unable to provide other solutions. The descriptors A and G are
weakly predetermined with only one unbalanced section. Nevertheless, the
only consistent scenario with the descriptor variants A1 and G2 follows this
gradual predetermination.
Fig. 5.10 Unbalanced judgment sections in the “Emerging country” matrix

The more predetermined descriptors there are, the less maneuvering


room there is for the descriptors that are not directly predetermined because
the predetermined descriptors are also fixed in how they affect the other
descriptors. Thus, we are justified in closely associating the monotonicity of
the portfolio in the case of this matrix with its high proportion of
unbalanced judgment sections.

5.3.2 Unbalanced Columns


Unbalanced judgment sections are only one way in which predetermination
of descriptors can occur. More general evidence of predetermination can be
found by examining the column sums of the cross-impact matrix (Fig. 5.11).
For example, Column A1 in Fig. 5.9 contains, from top to bottom, the 14
cross-impact values −2, 2, 2, −2, −2, 2, 2, −2, 3, −2, 3, 1, −1, 2. This
corresponds to a column sum of +6.
Fig. 5.11 Column sums of the “Emerging country” matrix

Note
Column sums must not be confused with impact sums. The column
sum summarizes all cross-impact values of the column of a descriptor
variant, while the impact sum only sums up the cross-impact ratings of a
descriptor variant column that are active in a specific scenario.
Therefore, the column sums do not refer to a specific scenario but are a
general property of the matrix. In contrast, the impact sums are a
property of a specific scenario.
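In code, the distinction drawn in this note is the difference between summing over all source variants and summing only over the variants active in one scenario (dictionary convention as in the earlier sketches; the function names are illustrative).

def column_sum(impact, tgt, tgt_var):
    """All ratings found in the column of one descriptor variant."""
    return sum(v for (si, sv, ti, tv), v in impact.items()
               if (ti, tv) == (tgt, tgt_var))

def impact_sum(impact, scenario, tgt, tgt_var):
    """Only the ratings exerted by the variants active in 'scenario'."""
    return sum(impact.get((si, scenario[si], tgt, tgt_var), 0)
               for si in range(len(scenario)) if si != tgt)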

In the case of balanced cross-impact ratings, the column sums are low.
However, if the cross-impact ratings within a descriptor variant column
show a clear preference for one sign, the column sum accumulates to higher
positive or negative values. Unbalanced judgment sections are one possible
cause for an imbalance in a column. Another possible cause is, for instance,
that although there is a balanced mix of positive and negative values in a
column, the negative judgments are typically high and the positive
judgments are typically low.
Figure 5.11 reveals massive imbalances for several columns of the
“Emerging country” matrix. In particular, for descriptors C, E, and H, there
is a bias in favor of C1, E1, and H2. The cause in this case is the
accumulation of unbalanced judgment sections, which we have previously
noted.
In our example, we see that descriptors B and D, which are not
systematically predetermined by unbalanced judgment sections, are not
otherwise subject to significant bias in the cross-impact ratings and
accordingly have low column sums. Nevertheless, even these descriptors do
not escape the predetermination of the matrix because all other descriptors
are predetermined and therefore exert predetermined influences on B and D.
Thus, the monotonicity of the portfolio has been successfully explained
from a technical perspective, and the unbalanced judgment sections have
been identified as the central cause.6 As a next step, the identified causes of
monotonicity can be questioned. It can be discussed whether the cross-
impact ratings that were identified as the cause of the monotonicity are
justified with respect to content (and the monotonicity of the portfolio is
therefore to be accepted as a valid result) or whether the ratings must be
revised and a new evaluation conducted.

5.4 Bipolar Portfolio


The bipolar portfolio is another type of a less pluralistic portfolio. This
portfolio type is not monotonic in the sense of Sect. 5.3 but prototypically
contains two scenarios, describing two opposing system states. Transitional
states in the form of further scenarios with mixed properties do not exist in
the case of a bipolar portfolio. If there are additional scenarios, each of
them is clearly assigned to the environment of one of the two bipolar
scenarios.
An example of a cross-impact matrix with a bipolar portfolio is the
“social sustainability” matrix shown in Fig. 5.12. This matrix addresses the
interactions of certain sustainability aspects relevant for social stability. The
descriptors are a selection from a descriptor field developed by Renn et al.
(2007, 2009).
Fig. 5.12 Example of a cross-impact matrix on social sustainability. Adapted and modified from
Renn et al. (2007)

The IC0 portfolio of this matrix comprises two completely opposite scenarios (Fig. 5.13). An identical portfolio is also found in IC1, IC2, and
even IC3. Therefore, this finding is highly significant.
Fig. 5.13 A bipolar portfolio

The scenarios in Fig. 5.13 describe a black-and-white picture of the future. Either everything develops well, or everything develops poorly,
because if things develop well in one area, this development supports
favorable developments in other areas and vice versa. The scenarios in Fig.
5.13 may well be the correct answer to the system interrelationships
formulated in the matrix, and one may conclude that nothing else was to be
expected on the basis of the problem definition. After all, one should not
expect a surprise when asking a question with an obvious answer.
Nevertheless, it must be admitted that the result of the CIB analysis
could also have been easily arrived at intuitively and that therefore only a
limited gain in knowledge is associated with the analysis. It may therefore
be desirable to recognize the proneness of a research topic to a bipolar
portfolio already at the problem definition stage and, in this light, to
question whether a CIB analysis, with the considerable amount of labor it
requires, is justified in this case. In the following, we therefore discuss i)
which problem features can justify the assumption that a CIB analysis is
likely to lead to a bipolar portfolio and ii) which analysis opportunities CIB
provides for bipolar systems in addition to the determination of the
portfolio, which is not very rewarding in such cases.

5.4.1 Causes of Bipolar Portfolios


As a cause of the bipolarity of the portfolio in the present example, it has
already been suggested that favorable developments (or unfavorable ones)
mutually support one another, and the system thus tends to synchronize
either in one or the other “pure form.” Generalized, this means that the
descriptor variants of the example matrix can be divided into two groups
such that positive impacts dominate within both groups and negative
impacts prevail between the two groups.
The fact that this is indeed the case in our example matrix can be clearly
seen if the descriptor variants are sorted according to whether they belong
to the “decreasing” or “increasing” type, irrespective of the descriptor to
which they belong. Figure 5.14 shows that the unshaded quadrants on the
top left and bottom right, which reflect the internal impacts between the
“decreasing” variants and between the “increasing” variants, respectively,
are almost completely positive, while the impacts between the groups
(shaded quadrants on the top right and bottom left) are almost completely
negative.7

Fig. 5.14 Cross-impact matrix “Social sustainability” with sorted descriptor variants

Thus, if the descriptor variants can be sorted into two “confrontational groups” and there are no or few ambivalent descriptors that escape this
grouping, an inclination of the matrix toward a bipolar portfolio can be
expected.
However, the pattern of confrontational descriptor variant groups should
not be apparent only after the cross-impact matrix has been prepared,
because at that point, the greater part of the work of a CIB analysis would
already have been done. Therefore, it is better to reflect directly after the
selection of the descriptors and their variants whether it will be in their
nature to group confrontationally. The task here is first to develop a
conjecture regarding the two confrontational groups and then to make a
judgment of the descriptor field regarding the proportion to which the
descriptors can be classified into these groups and how many of them will
evade this structure and present themselves as ambivalent descriptors (that,
for example, receive inhibitions from descriptor variants of both groups or
are strongly promoted out of Group A but themselves rather promote
descriptor variants of Group B). The higher the proportion of ambivalent
descriptors is, the less likely it is that the portfolio is bipolar.
This anticipation would probably have been easy in the case of the
descriptor field on social sustainability. However, in other cases,
anticipation may prove difficult for descriptor variants whose membership
in confrontational groups is not already suggested by their names, as in the
“Social sustainability” matrix. The preliminary attempt to identify
confrontational groups can therefore fail, and a bipolar pattern will only
reveal itself unexpectedly in the further course of the CIB analysis. In this
case, however, the analysis can be said to have produced a gain in
knowledge. This gain does not consist in the portfolio, which is not very
informative, but in the insight that, contrary to the first impression, the
examined system has turned out to be bipolar and in how the descriptor
variants are attributed to the confrontational groups.

5.4.2 Special Approaches for Analyzing Bipolar Portfolios


The basic evaluation step of a CIB analysis - the calculation of the scenario
portfolio - is often not very productive in the case of a bipolar system.
Nevertheless, even in this case, the cross-impact matrix contains extensive
information about the system under investigation. The challenge here is to
develop analytical approaches beyond scenario construction that can be
used to gain additional insight into the system from the collected data.
Which approaches are most suitable for this purpose must be determined on
a case-by-case basis. However, the intervention analysis described in Sect.
4.4 often proves to be fruitful for the analysis of bipolar portfolios, although
it is applied for this purpose in a special form, as outlined in the following.
For an intervention analysis, we start from the basic finding for our
“Social sustainability” example that the network of descriptors, if left to
itself, can only be in the homogeneous state of “everything increases” or
“everything decreases.” We now ask ourselves what would be the effect of
an intervention that forces a selected descriptor into the “increasing” state.
Obviously, this intervention would have no effect if the network is already
in the “everything increases” state. However, what would be the effect in
the other case, i.e., if the network is in the “everything decreases” state and
is now subjected to this intervention? Would only the intervened descriptor
switch and the rest of the network remain in its original state? This outcome
would be expected in the case of an intervention on a descriptor without
much influence in the network. Alternatively, would the descriptor forced to
the “increasing” state be able to drag a few more descriptors, or perhaps all
the other descriptors, along in a chain reaction? Then, the intervention
would have acted at a pivotal point of the system and would have exerted
leverage. This question raises the prospect of arriving at useful conclusions
about the system’s behavior, despite the bipolar nature of the portfolio.
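Before turning to the results, it may help to see how such an intervention
can be implemented in code. The following Python fragment is a minimal
sketch, not the book's reference procedure (for that, see Sect. 4.4.3): the
nested-list matrix structure, the helper names, and the convention of
pinning the intervened descriptor to a variant and exempting it from the
consistency check are all assumptions made for this illustration.

from itertools import product

# Assumed data structure: cim[i][vi][j] is a list holding the impacts of
# variant vi of descriptor i on the variants of descriptor j (unused for i == j).

def balances(cim, scenario, j):
    # Impact sums received by each variant of descriptor j in the scenario.
    return [sum(cim[i][scenario[i]][j][vj]
                for i in range(len(cim)) if i != j)
            for vj in range(len(cim[j]))]

def is_consistent(cim, scenario, fixed=frozenset()):
    # CIB consistency condition: every non-intervened descriptor must be in a
    # variant whose impact balance is maximal.
    for j in range(len(cim)):
        if j in fixed:  # intervened descriptors are exempt from the check
            continue
        b = balances(cim, scenario, j)
        if b[scenario[j]] < max(b):
            return False
    return True

def portfolio(cim, intervention=None):
    # All consistent scenarios; 'intervention' pins descriptors to variants,
    # e.g. {0: 1} forces descriptor A into its second variant.
    intervention = intervention or {}
    choices = [[intervention[j]] if j in intervention else range(len(cim[j]))
               for j in range(len(cim))]
    return [s for s in product(*choices)
            if is_consistent(cim, s, frozenset(intervention))]

With the two variants per descriptor of the “Social sustainability” matrix
encoded as 0 (“decreasing”) and 1 (“increasing”), the intervention of
Table 5.3 would correspond to portfolio(cim, intervention={0: 1}).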

Single Intervention
If we first attempt an intervention at A2 (increasing economic output; for
the technical implementation of the intervention analysis, see Sect. 4.4.3),
the portfolio shown in Table 5.3 results (printed in short format).
Table 5.3 Portfolio after intervention on “A. Economic performance”

      A B C D E F G H
No. 1 2 1 1 1 1 1 1 1
No. 2 2 2 2 2 2 2 2 2

Not much has changed. Scenario no. 2 in Fig. 5.13 (“everything increases”)
continues to exist because the intervention here is superfluously trying to
work toward a condition that already exists, i.e., it does not cause any
disturbance in the system. Scenario no. 1 in Fig. 5.13, in contrast, can
no longer continue to exist in unchanged form. At least for descriptor A, the
state must change as a direct expression of the intervention. However,
beyond that, there is no effect. The position of Descriptor A in the impact
network is obviously not strong enough to carry other descriptors along
when its state changes. The same minimal effect is found for interventions
on Descriptors B, C, E, and G. Obviously, the multiple stabilizing forces
harden the network state to such an extent that most single interventions
achieve only local effects in the network, that is, on the intervened
descriptor itself.
Only the interventions on Descriptors D (social engagement), F (equity
of chances), and H (education) are more interesting. The intervention on D
results as shown in Table 5.4.
Table 5.4 Portfolio after intervention on “D. Social engagement”

      A B C D E F G H
No. 1 1 1 1 2 2 1 1 1
No. 2 2 2 2 2 2 2 2 2

An intervention to promote social engagement thus proves to be somewhat
more effective than the previous interventions because it induces
at least one consequential improvement, namely, for Descriptor E (social
integration). A look at the cross-impact matrix in Fig. 5.12 makes this result
understandable enough: Descriptor D is by far the strongest influence on
Descriptor E, stronger than all the other influences combined.
The systemic effect of an intervention on F (equity of chances) is even
stronger and more complex (Table 5.5).
Table 5.5 Portfolio after intervention on “F. Equity of chances”

      A B C D E F G H
No. 1 1 1 1 1 1 2 1 1
No. 2 2 2 1 1 1 2 1 2
No. 3 2 2 2 2 2 2 2 2

The new Scenario no. 3 again only acknowledges that the intervention
does not disturb an already successful network state. Here, the new
Scenarios no. 1 and no. 2 express the effect of the intervention on the
former Scenario no. 1 (“everything decreases”) in Fig. 5.13. The split of the
old Scenario no. 1 into two new scenarios means that the effect of the
intervention is uncertain, and the two scenarios provide information about
the extent of the uncertainty. In the worst case, the intervention could be
ineffective except for the local effect on F (new Scenario no. 1). In the best
case, in contrast, it could succeed in tipping even three other “dominoes”: A
(economic performance), B (innovation ability) and H (education).

Intervention analysis is thus able to produce statements about the effect of an intervention
on two different scales (M7):
1. The range of the effect in the system
2. The certainty or uncertainty of the effect.

However, the intervention on H (education) has the most far-reaching
effect. Here, too, there are three scenarios, and thus, an uncertain effect of
the intervention within certain limits (Table 5.6).
Table 5.6 Portfolio after intervention on “H. Education”

      A B C D E F G H
No. 1 2 2 1 1 1 1 1 2
No. 2 2 2 2 1 1 1 2 2
No. 3 2 2 2 2 2 2 2 2

However, even in the unfavorable case (new Scenario no. 1), we have
an effect on at least two further descriptors (plus the local effect on the
intervened descriptor), and in the more favorable case (new Scenario no. 2),
we even have an effect on four further descriptors.
The intervention analysis thus yields a rather differentiated statement
about the suitability of the descriptors for an intervention aimed at reversing
a downward spiral in social sustainability. The effects of the interventions
are summarized in Fig. 5.15, where the intervention effect is measured by
the number of descriptors tipped. Descriptors A, B, C, E, and G are not very
suitable for intervention because they do not have any systemic effect.
Descriptor D promises at least a weak systemic effect. The effect of an
intervention on F is uncertain—worse than D in the worst case but much
better than D in the best case. However, the best choice we could make
according to the present analysis would be the intervention on H
(education): here lies the greatest potential among all interventions in the
favorable case and a significant advance even in the unfavorable case.

Fig. 5.15 Intervention effects: worst case (dark shading) and best case (light shading)

Dual Interventions
According to the results of the present analysis, even the most favorable
stand-alone intervention is not capable of completely reversing a downward
spiral in the “Social sustainability” matrix. Only partial successes could be
achieved. We therefore next ask whether a simultaneous intervention on
two descriptors might be sufficient to force the impact network into a
homogeneous “everything increases” state. In the portfolio, this would be
expressed by having the old Scenario no. 2 from Fig. 5.13 as the only
solution.
With eight descriptors, there are 28 ways to conduct an intervention on
two descriptors. As in the paragraph “single intervention,” the effect is
again assessed according to the maximum and minimum number of
descriptors that could be tipped. Complete success corresponds to tipping
all eight descriptors (“domino intervention”). The results for all 28 possible
dual interventions are summarized in Fig. 5.16.
Fig. 5.16 Effect of dual interventions in the “Social sustainability” matrix
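A scan of this kind is easy to automate. The sketch below builds on the
portfolio helper given earlier and is again only an illustration under
assumptions: two variants per descriptor, variant index 1 encoding the
“increasing” state, and the pre-existing “everything increases” solution
excluded when measuring the effect.

from itertools import combinations

def effect_range(cim, pair):
    # Worst and best number of descriptors tipped into the "increasing" state
    # (variant index 1) by a dual intervention on the given descriptor pair.
    sols = portfolio(cim, intervention={j: 1 for j in pair})
    tipped = [sum(s) for s in sols if sum(s) < len(cim)]  # drop "everything increases"
    if not tipped:  # only "everything increases" survives: full domino effect
        return len(cim), len(cim)
    return min(tipped), max(tipped)

# for pair in combinations(range(8), 2):  # the 28 pairs of an 8-descriptor matrix
#     print(pair, effect_range(cim, pair))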

For 10 of the 28 combinations, there is only a local effect on the two
directly intervened descriptors. In 11 combinations, a moderate systemic
effect emerges. In seven combinations, the desired full success occurs: in
each of these dual interventions, a complete domino effect sets in, and the
“everything increases” scenario is the only consistent solution of the matrix.
All scenarios that contain unfavorable developments are suppressed by the
interventions.
It is striking that descriptor D (social engagement) is involved in five of
the seven domino interventions. The analysis thus assigns this descriptor a
key role in the effort to break a downward social spiral. It is also striking in
Fig. 5.16 that the effects of the 11 partially successful combinations are all
in the weak to medium effect range, and no combination is “largely
successful.” This suggests that once success is achieved for a critical set of
six descriptors, a chain reaction is “ignited” that runs until the system is
fully resynchronized.
This sample analysis does not claim to make valid statements about
social systems. The results presented here depend on the ratings in the
cross-impact matrix, and important descriptors were omitted in the interest
of clarity. The lesson to be learned from this chapter is methodological: it
has been shown that CIB analysis provides opportunities to analyze cross-
impact data and reach conclusions that go beyond the determination of the
scenario portfolio. This becomes particularly valuable when the portfolio
itself—as in the case of a bipolar portfolio—is not very insightful.
An analysis of single and dual interventions to stimulate a desired
system state can be found in research practice, for example, in Hummel
(2017).

5.5 Underdetermined Descriptors


As a rule, CIB assumes that the behavior of a descriptor is either entirely
determined by the other descriptors or that the descriptor is autonomous
(empty descriptor column, cf. Sect. 6.1.2), in which case its behavior is
completely indeterminate from the perspective of the analysis and entirely
guided by external influences. A particular challenge, however, is the
intermediate case in which a descriptor is subject to influences from the
other descriptors, yet we recognize that these influences are not decisive.
Rather, the key determinants for that descriptor lie outside the descriptor
field. This case does not fit easily into the logic of a CIB analysis and
requires special attention.
A meaningful way of handling such descriptors, which are underdetermined
from the perspective of the descriptor field, is demonstrated in the
following example. Using a descriptor field that is simplified compared to
the Somewhereland matrix, we examine the economic and social development
of a fictitious small country, called SmallCountry. The descriptors used are
foreign policy, economic performance, and social cohesion.
In the course of this fictitious analysis, the (fictitious) experts might
suggest that it would be a useful extension to also consider the influence of
the much larger neighboring country, BigCountry (Fig. 5.17).
Fig. 5.17 Fictitious neighboring countries BigCountry and SmallCountry

Economic relations with BigCountry are of great importance to
SmallCountry (although the reverse is not true because of the disparity in
size between the two countries), and in the political discourse in
BigCountry, there is a dispute between the two leading parties about how to
shape foreign economic relations in general, including those with
SmallCountry. The question of what political direction BigCountry will
take in the future therefore has a significant impact on SmallCountry’s
economic future. Conversely, how cooperative or uncooperative
SmallCountry is toward BigCountry may also be noted in passing in
BigCountry’s political discourse, although other issues are more significant
in the minds of BigCountry’s voters.
These considerations are coded in the row and column of descriptor D
in a matrix (Fig. 5.18). Judgment sector D → A expresses the significant
influence of BigCountry’s political development (and its subsequent trade
policy) on SmallCountry’s economy. Judgment sector A → D represents the
presumed weak influence of SmallCountry’s foreign policy attitude on
BigCountry’s electoral choices. The other entries in the matrix describe the
internal interrelationships in SmallCountry, as they have already been
assumed for the Somewhereland matrix.

Fig. 5.18 Economic-social development of the fictitious country SmallCountry

The coded impact of external developments (in BigCountry) on the system
under study (SmallCountry) in Row D is not a critical matter. However, as
we will soon see, the coded back effect on BigCountry in Column D causes
complications.

Effects of Column D Coding


Figure 5.19 shows the portfolio of the “SmallCountry” matrix. It can be
seen that there is a strict coupling of SmallCountry’s foreign policy and
BigCountry’s voting behavior: whenever SmallCountry chooses a conflict-
oriented foreign policy, BigCountry’s citizens prefer their nationalist-
oriented party, and vice versa. This fixed correlation in Fig. 5.19 is neither
accidental nor due to the small portfolio of three scenarios. Given the
coding of Column D, it would always occur this way, regardless of how
SmallCountry’s internal interrelationships are coded and regardless of how
many scenarios result: because descriptor A is the only determinant of
descriptor D in the “SmallCountry” matrix, the CIB algorithm always
induces D to align with A.
Fig. 5.19 Portfolio of the “SmallCountry” matrix according to conventional evaluation

Critical Review of the Results


The strict coupling of descriptor D (administration of BigCountry) to
descriptor A is not objectionable from a technical perspective because it
correctly reflects the impacts documented in the matrix. From a content
perspective, however, it
is unsatisfactory. It is hardly plausible to assume that the electorate in
BigCountry would orient itself exclusively on the events of its small and,
from their viewpoint, insignificant neighbor when determining which path
its country should take into the future. Much more significant will be
domestic issues in BigCountry, for example, those concerning economic
development, social relations, the question of participation in wealth,
education and upward mobility, urban-rural conflicts and others. If foreign
relations enter into electoral decisions at all, then (in a country that views
itself as a global player), it is more likely to be in terms of relations with
global opponents or allies. In this light, the notion that BigCountry’s
electoral decisions are conclusively determined by the affairs of its small
neighbor SmallCountry appears to be a caricature of BigCountry’s political
reality. The crucial shortcoming of the “SmallCountry” analysis thus far is
that the matrix in Fig. 5.18 is unable to generate, in addition to the scenarios
in Fig. 5.19, the very plausible case that political developments in
BigCountry will take different paths than the secondary influences from
SmallCountry would suggest.

Problem Solution
The cause of the problem described above is that D is an underdetermined
descriptor. The influences described in Column D may be correct in
themselves. However, their incompleteness creates a distorted picture of the
descriptor. The necessary inclusion of the possibility that political
developments in BigCountry might take different paths than specified by
Column D is most easily performed by deleting all entries in the column of
the underdetermined descriptor. The admission, expressed in this way, that
the behavior of descriptor D cannot be inferred on the basis of our
descriptor field leads to a better system analysis than holding on to the
illusion that a statement about D could be made with the existing
descriptors in this matrix. After the entries in Column D of Fig. 5.18 are
deleted, the result is an expanded portfolio (Fig. 5.20):

Fig. 5.20 “SmallCountry” portfolio when considering the underdetermination of descriptor D
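In software terms, the correction amounts to clearing the column of the
underdetermined descriptor before evaluation. A minimal sketch on the data
structure assumed in the earlier sketches (the helper name is hypothetical):

import copy

def clear_column(cim, j):
    # Copy of the matrix in which no descriptor influences descriptor j.
    # j thereby becomes autonomous: all its variants receive an impact
    # balance of 0, so every variant of j is admissible.
    out = copy.deepcopy(cim)
    for i in range(len(out)):
        if i != j:
            for vi in range(len(out[i])):
                out[i][vi][j] = [0] * len(cim[j])
    return out

Re-evaluating portfolio(clear_column(cim, 3)) (with descriptor D at index 3)
then yields the expanded portfolio, in line with footnote 8: all previous
scenarios are retained, and new ones may appear.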

All of the original scenarios are also represented in the expanded
portfolio (now as scenarios nos. 1, 2, and 4). However, three new scenarios
are added,8 which address the possibility of an asynchrony of the political
paradigms of the neighboring countries and discuss how the conditions in
SmallCountry may adjust to this asynchrony. Thus, by deleting all values in
the underdetermined descriptor column, we have succeeded in correcting
the deficit of the portfolio Fig. 5.19 and avoiding the obscuring of a
plausible part of the solution space.

In the case of underdetermined descriptors, it is therefore recommended to document the
impacts in the relevant descriptor column for informational purposes (such as sector
A → D in the example) but to delete them before evaluation (M8).

Lessons Learned
Problems caused by underdetermined descriptors are not limited to analyses
in which, as here, the influence of a large system on a small system must be
considered, although they occur frequently in such settings. It is generally
necessary, when compiling a descriptor field or at the latest after
completion of the cross-impact matrix, to consider whether one or more
descriptors of the descriptor field are predominantly determined by
influences outside the descriptor field and, if so, to proceed according to
the recommended procedure.
The inadequate handling of underdetermined descriptors is not
uncommon in CIB practice and often results in plausible scenario
alternatives being overlooked. Special attention to this point is therefore a
rewarding investment in analysis quality.

5.6 Essential Vacancies


The presence of as many different descriptor variants as possible in the
portfolio is, as described in Sect. 4.7.2, a measure of a portfolio’s diversity
and its ability to paint a multifaceted picture of the future space of the
system. However, it is exceptional if a portfolio achieves a presence rate of
100%, i.e., uses all descriptor variants presented by the matrix. It is more
typical that some descriptor variants are missing (“vacancies”).
As long as the vacancies are not numerous, this is not necessarily a
problem. However, it is critical if descriptor variants are missing that would
have been essential for the analysis. For example, it can be an issue if the
vacancies involve a target descriptor, which limits the ability to discuss
under what circumstances the alternative futures of the target descriptor
might occur. Even beyond the target descriptors, the absence of particularly
interesting descriptor variants may be perceived as unwelcome. Thus, in a
manner similar to that discussed in previous chapters, the question arises
whether the essential vacancies are artificial and should be resolved or
whether they should be accepted as a valid observation and their causes
investigated.

5.6.1 Resolving Vacancies by Expanding the Portfolio


The first approach to addressing essential vacancies is already known from
the treatment of small portfolios (cf. Sect. 5.1). First, it should be checked
whether the vacancy in question continues to exist when allowing marginal
inconsistencies. If the missing descriptor variant is found in the extended
portfolio, one can consider using the extended portfolio completely or, if the
IC0 portfolio is satisfactory apart from the essential vacancy, selecting one
or more scenarios with the vacant descriptor variant from the extended
portfolio and placing them alongside the scenarios of the IC0 portfolio with
an explanatory comment.
An instance of artificial vacancies can be observed in the example “Oil
price” from Sect. 5.1. The IC0 portfolio in Fig. 4.48 shows only scenarios
with the oil price “high.” However, the oil price is the target descriptor for
this analysis, as already indicated by the title of the matrix. Its restriction to
only one of four possible descriptor variants is therefore particularly
unsatisfactory. For this matrix, the issue can be largely resolved by
extending the analysis to the IC1 portfolio (Fig. 5.1). Now, three out of four
oil price descriptor variants are represented in the portfolio. For one
descriptor variant (low oil price), however, the vacancy remains and is
therefore significant.

5.6.2 Cause Analysis


If the vacancy continues to exist in the extended portfolio, it must be
accepted as a valid property of the matrix. The focus in dealing with the
vacancy then shifts from attempting to eliminate it toward investigating its
causes. This serves to clarify whether the vacancy is a well-justified
consequence of the system interrelationships or whether it reflects flaws in
the selection of the descriptors and their variants or flaws in the coding of
the cross-impact data.9 Moreover, a satisfactory explanation of the
vacancy is necessary for a comprehensible documentation of the
scenario analysis because it helps justify the absence of the corresponding
scenarios to the target audience of the analysis.
The elementary diagnostic tool for clarifying a vacancy resorts to a
method already used in Sect. 5.3 to clarify the causes of monotonous
portfolios: the column sum. In vacancy analysis, the column sum of the
vacant descriptor variant helps one understand whether a one-sided cross-
impact evaluation of the descriptor variant provokes the vacancy or whether
the vacancy is a consequence of systemic interplay between the descriptors.
The column sums of the descriptor “E. Oil price” in our example are
recorded in Fig. 5.21.
Fig. 5.21 Column sums of descriptor “E. Oil price”
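On the data structure of the earlier sketches, this diagnostic reduces to a
simple aggregation (a sketch; the function name is hypothetical):

def column_sums(cim, j):
    # Sum of all impacts received by each variant of descriptor j, aggregated
    # over every variant row of every other descriptor. A strongly negative
    # value flags a variant that is structurally disadvantaged in the matrix.
    return [sum(cim[i][vi][j][vj]
                for i in range(len(cim)) if i != j
                for vi in range(len(cim[i])))
            for vj in range(len(cim[j]))]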

Figure 5.21 shows that the column sums in this example correspond
exactly to the pattern of descriptor variant presence in the IC0 and IC1
portfolios and thus provide a plausible explanation for the presences and
vacancies of descriptor E. The descriptor variant with by far the highest
column sum (E3) is precisely the descriptor variant that is the only one
already occurring in IC0. In IC1, the descriptor variants E2 and E4 enter the
portfolio, which is easily understandable since these are the two descriptor
variants with column sums in the middle range. The column sum of the
significantly vacant descriptor variant E1, shaded in Fig. 5.21, has by far
the lowest column sum, indicating a preponderance of negative values in
this column. It is this dominance of negative values in the E1 column that
prevents the CIB algorithm from finding scenarios capable of pushing the
impact sum of E1 into a range high enough to satisfy the consistency
condition.
This outcome clarifies that no systemic causes need be presumed for the
significant vacancy of “E1 Oil price low.” It is sufficiently explained by
the considerable preponderance of negative values in the column of this
descriptor variant and the disadvantage it thus experiences compared to all
other variants of the descriptor. The content-oriented interpretation of this
formal finding is that in compiling the descriptor variants, that is, in
collecting the ad hoc conceivable developments for the other descriptors,
the authors of the matrix identified numerous potential causes of higher oil
prices but few potential causes of a low oil price. Thus, the authors of the
matrix expressed an implicit skepticism about the possibility of a low oil
price in advance of the analysis, which then found its consequent
expression in the composition of the portfolio.
Following the formal vacancy analysis, a content-related examination of
the vacancies can be conducted. Here, for example, the question can be
discussed whether the one-sidedness of the hindering influences is justified
from the standpoint of resource economics or whether important factors that
would have spoken in favor of the vacant descriptor variant are possibly
missing in the descriptor field.
In practice, many vacancies can be understood from the column sums.
However, in cases where vacancies occur despite balanced column sums,
systemic explanations must be sought. Typical systemic causes for
vacancies are, for example, vacancies of other descriptors, which disable
potentially promoting influences on the investigated vacancy (thus creating
“corollary vacancies”), or antagonistic relations between potential
promoters of the vacant descriptor variants, which means that in each
scenario only a part of the promoters can become active. Additionally,
constellations in which a descriptor variant effectively acts against its
potential supporters and/or in favor of its potential antagonists can lead to
systemic vacancies despite balanced column sums.

5.7 Context-Dependent Impacts


CIB is based on dissecting the causal relationships of a system into pairwise
relationships. This is usually possible to a sufficient degree. Occasionally,
however, this approach cannot capture all the interrelationships of a system.
This is the case when context-dependent influences have to be captured,
i.e., when the influence of one descriptor A on another descriptor B depends
on the state of a third descriptor C (Fig. 5.22).

Fig. 5.22 Conventional (left) and context-dependent impact on B (right)

Influence relationships with weak context dependency are not uncommon
in CIB practice. Usually, one pragmatically decides to omit this
aspect. In contrast, influence relationships with strong context dependency
are rather rare in practice. However, they do occur, and ignoring
strong context dependency could lead to substantial inaccuracy in the
results of the analysis. In these cases, the complication must be addressed
and incorporated into the analysis using appropriate procedures. A simple
procedure for dealing with context-dependent influences is described in
Nutshell IV (Fig. 5.23).

Fig. 5.23 Nutshell IV - Processing context-dependent impacts

For illustration, this procedure is applied below to an analysis of future
mobility demand in private transport (viewed from the perspective of 2020)
and its dependence on the oil price and choice of drive technology (Fig.
5.24).

Fig. 5.24 Context-dependent impacts in the “Mobility demand” matrix

The descriptor variant “E1 Vehicle drive: Combustion” refers to a car
fleet that is predominantly powered by fossil fuels, whereas “E2 Vehicle
drive: Electric” refers to a car fleet powered by renewable energy. Two
influences are coded in the matrix which, on closer inspection, are not
generally valid but implicitly presuppose that a certain variant is active for
another descriptor (shaded judgment section in Fig. 5.24): The argument
that a high oil price increases mobility costs and thus dampens mobility
demand (A → D) at first sounds plausible in the 2020s because it
corresponds to the structures existing at that time. However, the future is to
be considered, and descriptor E suggests the possibility of a structural
change. If the descriptor “E. Vehicle drive” is set to “E2 Electric,” the oil
price is largely irrelevant for mobility costs and thus also for mobility
demand. Thus, the formulated impact of A → D is not true in every case but
only if for descriptor E the variant E1 (“Combustion”) is active. Otherwise,
it would be more accurate to assume no impact for A → D.
For the influence A → D, there is thus a context-dependent impact that
depends on a third descriptor: Descriptor E. The same is true for the
influence of D → A, where an increase in oil demand and thus in the oil
price is assumed due to a growing demand for mobility. This influence is
also plausible only under Condition E1 and would have to be omitted under
Condition E2.
The simplest way to address this type of context dependency of impacts,
as described in Nutshell IV, is to formulate two variants of the cross-impact
matrix, each claiming only conditional validity (Fig. 5.25).

Fig. 5.25 Conditional cross-impact matrices (the top matrix is valid for E1 scenarios, that below for
E2 scenarios)

Each conditional matrix is first evaluated separately, which results in the
portfolios shown in Table 5.7.
Table 5.7 Portfolios of the two conditional matrices “Mobility demand”

Matrix “E1”              Matrix “E2”
SI1: [A1 B1 C1 D1 E1]    SII1: [A1 B1 C1 D1 E1]
SI2: [A2 B2 C2 D1 E1]    SII2: [A2 B1 C1 D1 E1]
SI3: [A2 B2 C2 D1 E2]    SII3: [A2 B2 C2 D2 E2]
SI4: [A1 B1 C1 D2 E2]
SI5: [A2 B2 C2 D2 E2]


Since matrix “E1” is only valid for E1 scenarios and matrix “E2” is only
valid for E2 scenarios, only the part of the solutions that fits their validity
condition is significant in each portfolio. Therefore, only this part is
included in the final solution set. The matrix “E1” (left scenario list)
delivers the SI1 and SI2 scenarios (both E1 scenarios). For matrix “E2,”
scenario SII3 from the right scenario list is valid and adopted (the only E2
scenario in this list). Together, this results in the portfolio shown in Fig.
5.26. In these scenarios, and only in these scenarios, exactly the influence
relations A → D and D → A are incorporated, which correspond to the state
of descriptor E in the scenario.

Fig. 5.26 Mobility demand portfolio after consideration of context dependencies
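The bookkeeping behind this selection is simple enough to sketch in code,
assuming the portfolio helper from the earlier sketches, one conditional
matrix per variant of the context descriptor, and that descriptor's index e
(all of these names and indices are assumptions of the illustration):

def conditional_portfolio(matrices, e):
    # matrices[v] is the cross-impact matrix valid for scenarios in which
    # descriptor e takes variant v; from each conditional portfolio, only the
    # scenarios satisfying that validity condition are kept.
    result = []
    for v, cim in enumerate(matrices):
        result += [s for s in portfolio(cim) if s[e] == v]
    return result

# conditional_portfolio([cim_E1, cim_E2], e=4) would reproduce the selection
# of SI1 and SI2 (from matrix "E1") and SII3 (from matrix "E2") in Table 5.7.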

In the conventional approach, we would have had to decide whether to
consider the case of fossil mobility or that of e-mobility as a reference when
coding the influences A → D and D → A and then accept that the chosen
coding of these relationships is not correct for all scenarios. As can be seen
from the scenario lists above, this would not have been without
consequences. For example, if the fictitious authors of the study had
decided in 2020 to choose the current structures as the coding reference,
they would have worked exclusively with the E1 matrix. The E1 matrix
generates the correct E1 scenarios SI1 and SI2 as well as the correct E2
scenario SI5 (we can see that this scenario is correct because it also appears
in the right column of the portfolio of the E2 matrix, labeled SII3).
In addition, however, the portfolio of the E1 matrix also shows the two
E2 scenarios SI3 and SI4, which are incorrect from the perspective of the E2
matrix. Scenario SI3 in Fig. 5.27 provides an idea of the flaws that can arise
in the scenario logic if context dependency is not considered.

Fig. 5.27 Flawed scenario logic due to neglect of context dependency

In this scenario, the stagnating demand for mobility despite growth in
prosperity results from the argument that the high oil price dampens the
demand for mobility. This would be a relevant point if vehicles were
powered by gasoline or diesel. However, since this scenario involves
electric mobility, this reasoning is inapplicable, and the argumentative basis
of the scenario is therefore flawed at this point.
Since the cross-impact matrix does not offer a natural place to note the
context dependency of an impact, it is recommended, when eliciting cross-
impact data in interviews or workshops, to document context-dependently
formulated impact assessments separately and to process this information
afterward during the evaluation in the manner described above.

References
Cabrera Méndez, A. A., Puig López, G., & Valdez Alejandre, F. J. (2010). Análisis al plan nacional de
desarrollo - una visión prospectiva. XV Congreso internacional de contaduría, administración e
informática, México.

Carlsen, H. C., Eriksson, E. A., Dreborg, K. H., Johansson, B., & Bodin, Ö. (2016). Systematic
exploration of scenario space. Foresight, 18, 59–75.
[Crossref]

Everitt, B. S., Landau, S., & Leese, M. (2001). Cluster analysis. Arnold.

Hummel, E. (2017). Das komplexe Geschehen des Ernährungsverhaltens - Erfassen, Darstellen und
Analysieren mit Hilfe verschiedener Instrumente zum Umgang mit Komplexität. Dissertation,
University of Gießen.

Kruskal, J. B., & Wish, M. (1978). Multidimensional scaling, Sage University paper series on
quantitative application in the social sciences, 07–011. Sage Publications.

Le Roux, B., & Rouanet, H. (2009). Multiple correspondence analysis. Sage Publications.

Lord, S., Helfgott, A., & Vervoort, J. M. (2016). Choosing diverse sets of plausible scenarios in
multidimensional exploratory futures techniques. Futures, 77, 11–27.
[Crossref]

Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards
socio-technical scenarios of the German energy transition - lessons learned from integrated energy
scenario building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0
[Crossref]

Renn, O., Deuschle, J., Jäger, A., & Weimer-Jehle, W. (2007). Leitbild Nachhaltigkeit - Eine
normativ-funktionale Konzeption und ihre Umsetzung. VS-Verlag.

Renn, O., Deuschle, J., Jäger, A., & Weimer-Jehle, W. (2009). A normative-functional concept of
sustainability and its indicators. International Journal of Global Environmental Issues, 9(4), 291–
317.
[Crossref]

Schweizer, V. J., & O’Neill, B. C. (2014). Systematic construction of global socioeconomic pathways
using internally consistent element combinations. Climatic Change, 122, 431–445.
[Crossref]

Tietje, O. (2005). Identification of a small reliable and efficient set of consistent scenarios. European
Journal of Operational Research, 162, 418–432.
[Crossref]

Footnotes
1 Scenarios nos. 2, 6, 8, 11, 13, and 15 have a mutual distance of 3 or higher.

2 Another method of statistical evaluation is, for example, to examine the frequency of descriptor
variant combinations in the portfolio.

3 In a more in-depth form of this analysis, the descriptor column (and then the descriptor variant
row) of the rare variant is not set to 0 as a whole. Rather, only one judgment section of the column
(or judgment group of the row) is set to 0 in several successive evaluation runs. In this way, it can be
determined which relationships in particular contribute to the rarity of the examined descriptor
variant.

4 Tietje (2005) presented three different scenario selection procedures developed on the basis of
morphological fields, including the “max-min-selection” procedure used here. The procedures were
originally intended for use with the consistency matrix method. However, they can be applied to CIB
scenarios without difficulty because CIB also uses morphological fields. Diverging from Tietje’s
suggestion, however, all scenarios of the portfolio are treated as basically equivalent here.

5 Available at: www.r-project.org

6 As a technical check of whether the explanation found for the monotonicity of our example
portfolio is sufficient, all unbalanced judgment sectors can be modified, e.g., by clearing them all or
by converting the signs in the unbalanced sectors so that sectors with regular sign patterns are
created. If the reevaluation then yields a broader portfolio, the monotonicity can be unambiguously
attributed to the unbalanced judgment sectors. Actually, in the example matrix, this test evaluation
results in a breakup of the monotonicity.
7 The only exception is the impact of “A. Economic performance” on “E. Social integration,” where
it can already be seen in the matrix Fig. 5.12 that a counteracting impact is coded, since it is assumed
that people become more egocentric in a booming economy and consequently care less about one
another. However, this single exception is too isolated to have a significant effect on the system.

8 This is not a peculiarity of the case discussed here. It is always the case that when all values in a
column are deleted, all previous scenarios are retained, but new scenarios can be added (but do not
necessarily have to be added).

9 In this way, the cause analysis for the vacancies of a portfolio is also a suitable means of at least
partially checking the quality of the cross-impact data.

6. Data in CIB

Keywords Cross-impact balances – CIB – Scenario – Descriptor – Descriptor variant – Cross-impact judgment – Expert elicitation

This chapter addresses the three fundamental data objects needed for CIB
analysis: descriptors, descriptor variants, and cross-impact data. The basic
aspects of their role were described in the method description in Chap. 3. These
basic aspects will be deepened here by “dossiers” on the data objects. The
dossiers discuss terminology, types, methodological requirements, and
elicitation procedures commonly used in practice.

6.1 About Descriptors


6.1.1 Explanation of Term
The term descriptor originates in the technical language of library and
information science, where descriptors are used to index the content of
documents or information stored in databases to ensure that it can be found. A
typical dictionary definition would be “an identification word or keyword that
characterizes the content of information...” (Duden, 1994, own translation).
The term “descriptor” has appeared as part of the technical language in
scenario studies since the 1980s (Honton et al., 1985). The term is used in a
figurative sense. Just as descriptors in library science have the task of making
the essential contents of a text accessible, descriptors in scenario analysis fulfill
the same task for a special type of text: hypothetical descriptions of the future.
In CIB, descriptors characterize the core elements of a system that are required to sufficiently
describe its evolution and whose mutual influence relationships are capable of explaining that
evolution (M9).

6.1.2 Descriptor Types


CIB descriptors can be distinguished according to different typologies.
Familiarity with these typologies is not necessary to correctly conduct a CIB
analysis. However, knowledge of them helps one obtain a reflective view of the
structure of a CIB matrix, identify possible deficits, anticipate the effects of
design decisions on the success of the analysis, and interpret analysis results.
Two typologies are particularly useful: (1) a formal typology based on the
descriptor’s position in the influence network and (2) a content-oriented
typology that distinguishes the roles of the descriptors in the knowledge
generation envisaged by the analysis.

Formal Typology: Classification by Interdependence Type


Technically, descriptors differ according to whether they influence other
descriptors and whether they are influenced by other descriptors. This results in
three types:

Systemic descriptors Descriptors of this type exert influence on other
descriptors and at the same time are influenced by other descriptors. This is the
most common case in CIB analysis. For example, all descriptors of the
Somewhereland matrix are systemic descriptors. Systemic descriptors are the
foundation of the interdependent operations of a system, and only these
descriptors are able to initiate complex and nonintuitive system behavior.
Autonomous descriptors Autonomous descriptors exert influence but are not
in turn influenced by any other descriptor. This type can be recognized by an
empty descriptor column in the cross-impact matrix. Autonomous descriptors
are suitable for representing the external conditions of a system, for example,
global developments when local systems are considered.
Passive descriptors Passive descriptors are characterized by the fact that they
receive influences but do not influence other descriptors. In the cross-impact
matrix, this is expressed by empty descriptor rows. These descriptors align
themselves with the system activities without intervening in them.1
In addition, as a transitional form between autonomous and systemic
descriptors, there is the intermediate type underdetermined descriptors, which
was discussed in detail in Sect. 5.5.
The case in which a descriptor is neither influencing nor being influenced is
irrelevant because it represents a factor that is completely unconnected to the
rest of the system and of no use to the analysis. It can be discarded.
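The formal typology can be read directly off the structure of the matrix. A
small sketch on the data conventions used in the sketches of the previous
chapter (the function name and return labels are hypothetical):

def descriptor_type(cim, j):
    # Classify descriptor j by whether its row (influence exerted) and its
    # column (influence received) carry any nonzero impact entries.
    exerts = any(v != 0 for vj in range(len(cim[j]))
                 for i in range(len(cim)) if i != j
                 for v in cim[j][vj][i])
    receives = any(v != 0 for i in range(len(cim)) if i != j
                   for vi in range(len(cim[i]))
                   for v in cim[i][vi][j])
    if exerts and receives:
        return "systemic"
    if exerts:
        return "autonomous"  # empty descriptor column
    if receives:
        return "passive"     # empty descriptor row
    return "isolated"        # unconnected; can be discarded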
Figure 6.1 shows a matrix that describes an opinion-forming process. The
members of a social group form their opinion on a given question in the form of
agreement (yes) or disagreement (no). In this process, group members are
usually influenced by the opinions of other group members, with certain group
members exerting greater influence than others. Some group members may also
show a preference for taking the opposite position to the opinion of certain other
members.

Fig. 6.1 “Group opinion” matrix and its portfolio

Two people in the group take on a special role: Daniel does not let any other
group member influence his opinion. The matrix Column B assigned to him is
therefore empty, which means that he is represented by an autonomous
descriptor.
Alec, in contrast, is a new group member whose social reputation within the
group remains low. His opinion is noted. However, no one is influenced by it.
Conversely, he orients himself only to his special reference persons in the group.
In this case, the corresponding row of the matrix is empty, and the
corresponding descriptor is to be classified as a passive descriptor.
All other persons (Anna, Philipp, Sarah, Laura) are represented by
descriptors with cross-impact entries in their rows and in their columns
(systemic descriptors). They assume a dual role in that they are objects of
influence and at the same time participate in shaping the group’s opinion-
forming process.
The scenario table shows which subgroups with homogeneous opinions can
form. For better clarity, the approvals in Fig. 6.1 are presented in light print,
whereas the disapprovals appear in gray. One solution type consists of Sarah
holding an opinion that differs from the rest of the group (scenarios 1 and 2). In
the second type, two camps are formed consisting of Daniel and Alec on the one
hand and Anna, Philipp, Sarah, and Laura on the other (scenarios 3 and 4). For
each solution, there is also an inverse solution in which the yes and no positions
are reversed.2
A special characteristic of passive descriptors is that they can be deleted
from the matrix without consequences for the scenarios with respect to the
remaining descriptors. This is illustrated by removing the descriptor “E. Alec”
from the matrix in Fig. 6.1 and then re-evaluating the matrix.
The result shown in Fig. 6.2 makes clear that the camp formations follow the
same pattern as in the original matrix. The only difference is that the scenarios
now no longer include a statement about Alec.

Fig. 6.2 “Group opinion” cross-impact matrix and portfolio after removing the passive descriptor
Passive descriptors can thus be removed from the matrix without changing
the nature of the portfolio. Motives for removing passive descriptors can be to
make the matrix and the results clearer and easier to understand or, in the case of
large matrices, to reduce the computing time required to evaluate the matrix.
Passive descriptors should be retained, however, if they make important content-
oriented statements, for example, because they are a direct part of the research
question to be addressed by the CIB analysis.
The correct variant of a removed passive descriptor in the context of a
specific scenario obtained from the shortened matrix can be determined
subsequently by inserting the scenario into the complete matrix according to the
scheme shown in Fig. 3.16 and determining the descriptor variant of maximum
impact balance for the passive descriptor. Figure 6.3 shows this procedure for
scenario no. 3 from Fig. 6.2. The scenario does not contain any information
about Alec because the passive descriptor “E. Alec” was temporarily removed
from the matrix. Inserting the scenario [Anna: yes, Daniel: no, Philipp: yes,
Sarah: yes, Laura: yes] into the original matrix (Fig. 6.1) and calculating the
impact balances shows that “no” is the consistent descriptor variant of “Alec” in
this case.
Fig. 6.3 Post hoc determination of the consistent variant of a passive descriptor
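On top of the balances helper from the earlier sketches, this post hoc
completion is a one-liner (a sketch; note that the impact balance of
descriptor j ignores the scenario's own entry at position j, so a placeholder
value may stand there):

def complete_passive(cim, scenario, j):
    # Variant of the (previously removed) passive descriptor j with maximal
    # impact balance, given the states of all other descriptors.
    b = balances(cim, scenario, j)
    return max(range(len(b)), key=lambda vj: b[vj])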

The comparison with scenario no. 3 of the portfolio of the complete matrix
(Fig. 6.1) shows that this result is correct.

Content-Oriented Typology: Classification by Roles in Terms of Content
In terms of content, descriptors can be distinguished according to the role they
play in answering the research question of a CIB analysis:
Target descriptors Target descriptors are all descriptors that provide a direct
answer to the research question. For example, the central question underlying
the matrix Fig. 4.23 in Sect. 4.4 is the wish to gain knowledge about the future
of the water network connection rate of a large city in an emerging country.
This matrix has a single target descriptor: Descriptor “F. Coverage of the water
network.” The question of which descriptors are to be considered as target
descriptors of a matrix does not result from formal aspects of the matrix but is
decided from the perspective of the target audience of the study.
Driver descriptors Driver descriptors are all descriptors that are not
themselves target descriptors but exert influence on the target descriptor(s).
They are a necessary component of the descriptor field because they explain
under which circumstances the target descriptor(s) adopt one or another
descriptor variant.
Intermediary descriptors Descriptors of this type are not target descriptors
themselves and have no direct influence on them. They are included in the
matrix because they are necessary to explain the interaction between the
driver descriptors and possible indirect feedback effects of the target
descriptor(s) on the drivers. Thus, they are indirectly relevant for the behavior
of the target descriptor(s).

Figure 6.4 illustrates the attribution of these descriptor roles to the
descriptors of the “Water supply” matrix familiar from Sect. 4.4. The research
question places descriptor F in the center of the analysis (target descriptor,
printed inversely). Descriptors A, B, C, and E, as shown by their entries in
Column F, have an influence on the target descriptor (horizontal hatching) and
are driver descriptors. Descriptor D has no entry in Column F (diagonal
hatching) but receives influences from Descriptors A, E, and F and passes them
on to driver descriptor C. This qualifies Descriptor D as an intermediary
descriptor.

Fig. 6.4 Descriptor roles in the “Water supply” matrix
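A rough automation of this role attribution based only on direct influences
is sketched below. It is deliberately incomplete: the notion of an
intermediary descriptor additionally requires indirect relevance for the
target descriptors, which a full implementation would verify by tracing
influence paths.

def influences_directly(cim, i, j):
    # True if some variant of descriptor i has a nonzero impact on descriptor j.
    return any(v != 0 for vi in range(len(cim[i])) for v in cim[i][vi][j])

def roles(cim, targets):
    # Content-oriented roles relative to a set of target descriptor indices.
    out = {}
    for j in range(len(cim)):
        if j in targets:
            out[j] = "target"
        elif any(influences_directly(cim, j, t) for t in targets):
            out[j] = "driver"
        else:
            out[j] = "candidate intermediary"  # indirect relevance still to check
    return out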


That intermediary descriptors can be important for the logical coherence of
the scenarios despite their lack of direct influence on the target descriptors can
be seen when the intermediary descriptor D in Fig. 6.4 is removed and the
resulting portfolio is compared with the original portfolio (Fig. 6.5).

Fig. 6.5 Portfolios with (top) and without (bottom) intermediary descriptor D
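The removal experiment itself can be expressed with a small helper on the
familiar data structure (a sketch; re-evaluating the shortened matrix then
yields the reduced portfolio):

import copy

def drop_descriptor(cim, j):
    # Matrix with the row and the column of descriptor j removed.
    return [[[copy.deepcopy(cim[i][vi][k]) for k in range(len(cim)) if k != j]
             for vi in range(len(cim[i]))]
            for i in range(len(cim)) if i != j]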

It can be noted that omitting the intermediary descriptor leads to an
expansion of the portfolio from six to nine scenarios. All scenarios in the
original matrix remain consistent in our example. However, without the
mediation of the intermediary descriptor, the new portfolio shows three
additional scenarios that would have been discarded as inconsistent if the
intermediary descriptor had been considered.
The missing intermediary descriptor also affects the analysis results for
Target descriptor F (coverage of the water network), although there is no direct
effect of Descriptor D on the target descriptor. Scenario no. 8 of the new
portfolio suggests that increasing network coverage would be possible even with
strong population growth in the informal settlements. The unabridged matrix, in
contrast, denies this possibility (Fig. 6.5, top), arguing that increasing water
network coverage lowers poverty and, as a consequence (through declining birth
rates and increasing opportunities to leave the informal settlement and move to
better housing locations), also dampens demographic development in informal
settlements. The matrix can only establish this indirect effect if the intermediary
descriptor “D. Urban poverty” is part of the descriptor field.
The restriction of the descriptor field to target and driver descriptors by
neglecting relevant intermediary descriptors is in practice a frequent cause of
matrices with insufficient explanatory power. As a substitute, in a matrix
without intermediary descriptors, all effects leading via them would have to be
considered mentally when assigning cross-impact ratings, which often succeeds
only partially and reduces the quality of the CIB analysis.

6.1.3 Methodological Aspects


To provide a viable basis for CIB analysis, descriptors must meet certain
methodological requirements. These requirements are discussed below.

Completeness of the Descriptor Field


Taken together, the descriptors should enable substantial statements about the
subject of the scenario analysis. This ability arises directly from the target
descriptors but also indirectly from the driver descriptors and, if necessary, the
intermediary descriptors (cf. Sect. 6.1.2). To ensure the system-analytic
completeness of the descriptor field, it should be considered whether the
behavior of each descriptor (except the autonomous descriptors) can be credibly
explained by the other descriptors. Otherwise, the procedure described in Sect.
5.5 should be followed.

Number of Descriptors
The number of descriptors used for a CIB analysis is not prescribed by the
method. In principle, it results from a trade-off between the goals of adequacy
and completeness of the analysis and that of limiting the effort required to
perform the analysis. Too few descriptors endanger the quality of the analysis
because relevant aspects are not represented by descriptors and are thus
neglected, or because complex aspects, which would be better broken down into
several descriptors, must be aggregated into a single descriptor. However, too
many descriptors can also jeopardize analysis quality. Since usually only limited
time resources are available for an analysis, the attention and care that can be
devoted to the individual descriptor and its interconnections decrease as the
number of descriptors increases, with the risk that the analysis builds on
insufficiently understood descriptors and interrelationships.
Practice shows that the number of descriptors used in CIB studies varies
widely. An evaluation of 84 published CIB studies on different topics found a
minimum descriptor number of 3 and a maximum descriptor number of 43, with
both extremes being exceptions.3 In most method applications, descriptor fields
of between 9 and 15 descriptors are found (see Statistics Box).

Aggregation Level
CIB analyses are generally conducted at a relatively high aggregation level due
to the limited number of descriptors. For example, economic development in
Somewhereland is represented by overall economic growth, whereas a more
detailed analysis could differentiate by economic sector or region, for example.
Similarly, the topic of societal values could be considered in a more
differentiated way than in the Somewhereland matrix if distinctions were made
according to prevailing value attitudes in different social milieus or different age
cohorts. Generally, the more differentiated the descriptors are, the more reliably
cross-impacts can be assessed, and the more detailed and well-founded the
resulting scenarios are. In the end, the time resources that are available
determine how many descriptors can be used and which aggregation level must
be chosen for the analysis.

In any case, care should be taken to ensure that the level of aggregation chosen is
approximately uniform across the entire descriptor field (M10). If certain subsystems are fine-grained
while other subsystems are described in a highly aggregated way, biases may arise in the
assessment of the influences of the disaggregated descriptors on the highly aggregated system
parts when a standardized impact rating scale is used. Due to their disproportionately high
number, the disaggregated descriptors can develop an exaggerated influence on the impact
balances.4

Documentation
For each descriptor, a written definition should be formulated: the “descriptor
essay.” This definition is helpful for the core team because it prevents
uncontrolled changes in its interpretation of the descriptor during the analysis
process. Without a detailed written version of the descriptor definition,
misunderstandings among participants and diverging interpretations can occur
unnoticed. This risk increases further if in the course of the process additional
persons who did not participate in descriptor selection and definition become
involved, for example, an expert panel for the estimation of the cross-impact
data. Furthermore, even after the scenario analysis has been completed, the
descriptor essays are of great importance for the comprehensibility, evaluability,
and usability of the results by the target audience of the study.
No general rule can be proposed for the length of descriptor essays because it
necessarily depends on the complexity of the topic, the number of descriptors,
and the intended audience's level of ambition and willingness to read. In
practice, essays of approximately half a page to one page of text per descriptor
are frequently found.

6.2 About Descriptor Variants


6.2.1 Explanation of Term
In CIB, descriptor variants are the various discrete states that a descriptor can
assume and that individually outline an alternative future development for this
descriptor. Other equivalent terms for descriptor variants are also used in
the CIB literature or have been proposed by scholars:
Descriptor states
Descriptor values
Descriptor levels
Descriptor categories
Descriptor possibilities
Descriptor conditions
Alternative futures of a descriptor
The term “descriptor variants” is used in this book because it is rather
general and does not imply a certain type of CIB application. Some of the
alternatives are tied to specific use cases. For instance, the term “descriptor
states,” which was originally used when the CIB method was introduced
(Weimer-Jehle, 2006), is suitable for scenarios that describe the future at a
particular point in time, rather than trends. “Descriptor values” are rather
appropriate for quantitative descriptors, while “descriptor levels” imply at least
ordinal descriptors. It must also be kept in mind that the CIB method is used not
only for scenario construction but also for general qualitative systems analysis,
where labels such as “alternative futures” are not always appropriate, for
example, when the research question implies no reference to time. As mentioned
before, this book uses the neutral term “variants,” which is not tied to the type of
application or to any particular form of temporal interpretation of the descriptors
(alternative neutral terms would be descriptor categories, possibilities, or
conditions). The convention adopted here does not imply any criticism of the use of
other synonyms in specific contexts in the literature.

6.2.2 Types of Descriptor Variants


Similar to the descriptors, different typologies can be applied to the descriptor
variants. For CIB, two typologies are of particular importance. As mentioned
above, descriptor variants can be formulated to describe either states or
developments (trends), and both can adopt the form of qualitative or quantitative
characterizations, depending on topic. Qualitative characterizations can also be
further distinguished according to whether a ranking between descriptor variants
is apparent. Descriptor variants can also be differentiated according to their
degree of presence in the portfolio. These three typologies are discussed below.

State Descriptors Versus Trend Descriptors


State descriptors describe an issue as it might be in the target year of the
scenario analysis. Thus, state descriptors are suitable for creating a “snapshot”
of the system under consideration in the target year. No direct statement is made
about the path to the target year, i.e., about the time between today and the target
year, although the final state of a development will usually also entail
implications about the development itself.
Trend descriptors, in contrast, address the period from today to the target
year and characterize the pathway up to that point. Figure 6.6 shows examples
of both types of descriptors. Both express the same fact in different ways.

Fig. 6.6 Examples of state descriptors and trend descriptors

Both forms are possible from a methodological perspective and common in
CIB practice. However, state-related descriptor variants are always based on an
implicit recourse to the trend-related perspective. If, for example, the influence
of energy prices on energy efficiency is to be assessed in a state-related way, it
makes no sense to take literally the cross-impact question “Does state ‘A1—
high energy price in 2050’ promote state ‘B1—high energy efficiency
development in 2050’?”. Energy prices in 2050 do not in any way influence
energy efficiency in the same year—it is far too late for that in 2050. Rather,
energy efficiency in 2050 is a consequence of the energy price development in
the period before 2050 and the efficiency investments triggered by it in the
period before 2050.
This consideration shows that trend descriptors are the more natural form to
use in CIB analyses when developing scenarios. However, state descriptors are
also manageable, with the understanding that the path belonging to the final
state is always meant as well. However, it is advisable to proceed as uniformly
as possible within a matrix to avoid having to constantly change perspectives
during the cross-impact assessment and later when interpreting the scenario
results. The design of the descriptors of the Somewhereland example deserves
criticism in this respect.

Descriptor Variants: Scales of Measurement


Regardless of the choice of state or trend descriptors, descriptors and their
variants differ in how their description is made. For example, if we consider the
Somewhereland descriptor “C. Economy,” the descriptor addresses a topic that
is generally described quantitatively, such as by gross domestic product.
Following measurement theory, a quantitative descriptor is termed a “ratio
descriptor” because its value is characterized by its ratio to a unit. In contrast,
the Somewhereland descriptor “F. Social values” cannot be adequately described
by numerical values but must be characterized textually by qualitative
descriptions. Descriptors of this type are termed “nominal descriptors.” In
between, there is the type of ordinal descriptor whose variants cannot be defined
numerically but to which a natural ranking can be attributed (cf. Table 6.1 and
Fig. 6.7).
Table 6.1 Descriptor variant classification

Nominal descriptors: Nominal descriptors are purely qualitative: each variant of the descriptor has its own particular identity. The descriptor variants cannot be placed in an unambiguous order for objective reasons. Rather, by their nature, they stand side by side as equal members of a group (the group of possible futures of the descriptor). This does not preclude that, subjectively, the descriptor variants may seem more or less desirable.

Ordinal descriptors: Ordinal descriptors differ from nominal descriptors in that they have a natural order. In the example in Fig. 6.7, the order of the descriptor variants “social peace,” “tensions,” and “unrest” reflects a steady progression, which is why it would be unnatural to place the variants in the order “tensions,” “social peace,” “unrest,” for example. Ordinal descriptors, however, do not presuppose that the distances between the variants are equivalent or that there is any information at all about the valence of the distances. Ordinal descriptors can therefore be used (unlike ratio descriptors) even when no metric is available to measure the distances between the variants of a descriptor.

Ratio descriptors: Ratio descriptors are quantitative in nature and estimated by comparison to a unit of measurement. The valence of their variants is expressed by numbers (not necessarily by a single number as in Fig. 6.7), which not only suggests an order of descriptor variants (as in the case of ordinal descriptors) but also clearly defines the distances between the descriptor variants.

Note: Nominal and ordinal descriptors are also referred to as categorical descriptors. Based on Stevens’ (1946) theory of scales of measurement.

Fig. 6.7 Examples of nominal, ordinal, and ratio descriptors

CIB does not require a specific descriptor type. All types and mixed
descriptor fields, as in the Somewhereland matrix, can be used without further
ado. However, the evaluation algorithm assumes all descriptors to be nominal
descriptors—the type of descriptor that has the fewest requirements for the
measurement properties of a descriptor. In other words, CIB treats all descriptors
as nominal descriptors regardless of their actual type. Therefore, CIB
fundamentally generates qualitative scenarios and can be classified as a
qualitative scenario methodology in terms of its product (see Sect. 8.1 for more
details). Ordinal or quantified descriptors can also be used; however, their
special properties are neither assumed nor used during the evaluation. The
order of the descriptor variants in the matrix is likewise irrelevant for the
evaluation: changing the order of the variants of an ordinal descriptor would
appear unnatural to the reader but would not lead to different calculation
results. Similarly, the numbers assigned in the variant definitions of quantified
descriptors are not used in the CIB evaluation, although they may play a role
later in the interpretation and exploitation of the scenarios.

This implies that CIB also interprets the variants of quantified (“ratio”) descriptors as qualities. M11
For example, dynamic economic growth as an economic state has a different quality than
stagnation, and CIB explores the implications of this difference in quality. Within a CIB
analysis, the numerical variants specified for quantified descriptors have only the illustrative
function of making this quality difference comprehensible and assessable.

Descriptor Variant Classification According to Occurrence in the Portfolio
Another perspective on descriptor variants arises from the way they occur in the
scenario portfolio. In addition to the “regular case” of descriptor variants
occurring in changing combinations in multiple scenarios, descriptor variants
can also exhibit other qualities of presence.

Vacant descriptor variants: Vacant variants (“vacancies”) are descriptor variants that are completely missing in the portfolio because they are not used by any scenario of the portfolio. The CIB analysis thus expresses the assessment that serious obstacles stand in the way of realizing the vacant descriptor variants. The extent of these obstacles is expressed by whether the vacancies continue to exist in the IC1 portfolio or even in higher IC levels. Examples of vacant descriptor variants in the portfolio of the “Oil price” matrix (Fig. 4.48) are the descriptor variants “C3 Weak world tensions,” “E4 Very high oil price,” and other descriptor variants.

Robust descriptor variants: Robust descriptor variants are variants that are shared by all scenarios of the portfolio. This implies that all other variants of the relevant descriptor are vacant. Robust descriptor variants denote developments which, according to the results of the CIB analysis, are insensitive to the future uncertainty of the other descriptors and must therefore be regarded as likely developments. At this point, the scenario analysis moves exceptionally close to a prognostic statement. The significance with which this statement can be made depends on whether the descriptor variant in question is also robust in the IC1 portfolio or even in higher IC levels. An example of a robust descriptor variant is “E3 High oil price” in the IC0 portfolio of the “Oil price” matrix (Fig. 4.48).

Characteristic descriptor variants: Characteristic descriptor variants are variants that occur in only a single scenario of the portfolio. They indicate that very specific prerequisites are required for their activation, which are only provided by specific scenarios, or that the influences of the characteristic descriptor variant on the other descriptors determine the scenario in a unique way. If the characteristic descriptor variant is also particularly meaningful for the research question of the analysis, it provides an attractive option for the scenario’s title. Examples of characteristic descriptor variants are “C1 Shrinking economy” and “E3 Unrest” in the portfolio of the Somewhereland matrix (Fig. 3.17). Characteristic variants are more noteworthy in large portfolios because in small portfolios many descriptor variants inevitably have this status.

Regular descriptor variants: All other descriptor variants, i.e., those that are neither vacant, characteristic, nor robust, are termed regular variants. Their distinguishing feature is that they occur in more than one, but not all, scenarios of the portfolio. Regular descriptor variants should be the normal case in a diverse portfolio. Only if this type plays a major role in the portfolio can the ideal of a portfolio rich in varying future motifs be realized.
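This occurrence-based classification is straightforward to operationalize when screening a portfolio. The following minimal sketch (illustrative only and not part of the CIB method; it assumes each scenario is represented as a mapping from descriptors to their selected variants) assigns each variant to one of the four types:

# Illustrative sketch: classifying descriptor variants by their occurrence
# in a scenario portfolio (vacant / robust / characteristic / regular).
# The data layout is an assumption made for this example.

def classify_variants(portfolio, variants):
    """portfolio: list of scenarios, each a dict {descriptor: chosen variant}.
    variants: dict {descriptor: list of all defined variants}."""
    n = len(portfolio)
    result = {}
    for descriptor, variant_list in variants.items():
        for v in variant_list:
            count = sum(1 for scenario in portfolio
                        if scenario[descriptor] == v)
            if count == 0:
                result[(descriptor, v)] = "vacant"          # in no scenario
            elif count == n:
                result[(descriptor, v)] = "robust"          # in all scenarios
            elif count == 1:
                result[(descriptor, v)] = "characteristic"  # in exactly one
            else:
                result[(descriptor, v)] = "regular"
    return result

portfolio = [{"E": "E3", "C": "C1"},
             {"E": "E3", "C": "C2"},
             {"E": "E3", "C": "C2"}]
variants = {"E": ["E1", "E2", "E3"], "C": ["C1", "C2"]}
print(classify_variants(portfolio, variants))
# -> E1, E2 vacant; E3 robust; C1 characteristic; C2 regular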

6.2.3 Methodological Aspects


The variants of a descriptor must fulfill three basic requirements to enable a
methodologically correct CIB analysis: within a descriptor, the variants must be
complete and mutually exclusive, and the variants of different descriptors must
be formulated and defined in a manner that is free of overlap.

Definition
For maximum clarity for the cross-impact elicitation and interpretation of
results, the definition of a descriptor variant should not only be reflected by the
abbreviated name by which it is represented in the cross-impact matrix but also
be accompanied by documentation as a textual and, if appropriate, quantitative
explanation (Fig. 6.8). This information can be included in the descriptor essay.
Fig. 6.8 Example of descriptor variants and their definitions

Completeness
Completeness (exhaustiveness) means that the descriptor variants taken together
must represent all possible futures of the descriptor that are relevant for the
analysis. In the course of the data processing, CIB will assign exactly one
variant to each descriptor for each scenario. The case in which a descriptor does
not adopt any of the offered variants and thus refers to a development outside
the variant spectrum is not provided for in CIB. To avoid this methodological
“constraint” leading to an inappropriate narrowing of the scenario space, a well-
considered decision about the range of the possible must be made when defining
the variants for each descriptor.5

Mutual Exclusivity
Mutual exclusivity means that no conceivable real-world development should be
assignable to more than one variant of a descriptor simultaneously. The reason
for this requirement is again that CIB assigns exactly one variant to each
descriptor during scenario construction. Thus, for example, the variants of a
descriptor for economic growth may not be formulated as “below 2%/a,” “1–
3%/a,” and “above 3%/a,” because it is not clear to which descriptor variant an
economic growth of 1.5%/a is to be assigned.
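For quantitatively defined variants, both requirements can even be checked mechanically. The following minimal sketch (a hypothetical helper, not part of the CIB method; the half-open interval convention and the relevant domain are assumptions made for illustration) tests the variant ranges of a descriptor for overlaps and gaps:

# Illustrative sketch: checking the variant ranges of a quantitative
# descriptor for completeness and mutual exclusivity. Ranges are treated
# as half-open intervals [low, high) in %/a; domain bounds are assumed.

def check_ranges(ranges, domain=(-10.0, 10.0)):
    issues = []
    ranges = sorted(ranges)
    for (lo1, hi1), (lo2, hi2) in zip(ranges, ranges[1:]):
        if lo2 < hi1:    # two variants claim the same values
            issues.append(f"overlap between [{lo1}, {hi1}) and [{lo2}, {hi2})")
        elif lo2 > hi1:  # some values are claimed by no variant
            issues.append(f"gap between {hi1} and {lo2}")
    if ranges[0][0] > domain[0]:
        issues.append("lower end of the domain not covered")
    if ranges[-1][1] < domain[1]:
        issues.append("upper end of the domain not covered")
    return issues

# The faulty gradation from the text: "below 2%/a", "1-3%/a", "above 3%/a"
print(check_ranges([(-10.0, 2.0), (1.0, 3.0), (3.0, 10.0)]))
# -> ["overlap between [-10.0, 2.0) and [1.0, 3.0)"]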
Of course, such obvious errors in the design of the descriptor variants do not
play a significant role in practice. More relevant are hidden exclusivity
deficiencies, for example, when the variants of a descriptor “A. Development of
household incomes” are formulated as follows:
A1: Decreasing household incomes
A2: Increasing household incomes
A3: Rising inequality of household incomes
A4: Declining inequality of household incomes
The exclusivity defect in this case is that two dimensions are mixed in one
descriptor that are actually independent topics, namely, the level and distribution
of household income. In reality, it is quite possible for the household incomes of
a group of persons to increase in both level and inequality at the same time. To
account for this case, CIB would have to include both A2 and A3 in the same
scenario during scenario construction, which is technically impossible. It is
therefore necessary in this case to introduce two separate descriptors for the
level and inequality of household incomes.

Absence of Overlap
The definitions of the descriptor variants do not only have to be coordinated
within one descriptor. Methodological difficulties can also arise between the
variants of different descriptors if care is not taken to ensure absence of overlap .
Absence of overlap means that the variants of different descriptors should avoid
making statements about the same topic. Surely it is hardly to be expected that
someone would consider introducing two descriptors for the same subject. In
practice, the danger of creating overlaps is more likely to arise where the
documentation on the descriptors and their variants describe concomitants of the
actual main developments. For example, the definitions of the Somewhereland
descriptors “A. Government” and “B. Foreign Policy” shown in Fig. 6.9 contain
textual elements that violate the requirement of nonoverlapping descriptors.
Fig. 6.9 Incorrect definition of descriptor variants due to overlap of topics

The example shows that the two descriptors address sufficiently different
topics with respect to the party in power and the style of foreign policy and that
it is therefore justified to define separate descriptors for the topics. Basically, it
is useful to make the intended scenarios as concrete and substantial as possible
by a detailed elaboration of the descriptor variants. However, in this case, a
methodological defect occurs: the fact that the question of the role of
multilateral agreements is addressed in the definitions of both descriptors causes
an overlap. The consequence is that free play between descriptors A and B is no
longer possible, since A1 and B1 can no longer be combined without causing the
definition texts to contradict one another. The CIB analysis can thus no longer
explore whether a government of the “Patriots party” can find its way to a
cooperative foreign policy under certain circumstances without coming into
conflict with the definitional texts, since this combination is ruled out ad hoc by
an aside.
To leave the CIB analysis maximum freedom to investigate a research
question, it is therefore best to avoid overlapping descriptor definitions
altogether. In the case under discussion, this would mean addressing statements
about the attitude toward multilateral agreements in only one of the two
descriptors. At the very least, one must avoid categorical statements and instead
associate a relative preference with the descriptor variant (e.g., the “Patriots
party” prefers a rejection of multilateral agreements, and a cooperative foreign
policy favors participation in multilateral agreements).

6.2.4 Designing the Descriptor Variants


By choosing the descriptor variants, decisions are made that exert a large effect
on the character of the scenarios. How finely graded are the descriptor variants?
Do we use the analysis to track down possible extreme developments and thus
strive for scenarios that represent future alternatives in a particularly accentuated
way? Or do we prefer to focus on the most likely future range? Do we orient
ourselves to what is possible according to experience, or do we also want to
address the possibility of transcending historical experience when determining
the range of descriptor variants?

Gradation of the Descriptor Variants


With the usual numbers of descriptor variants in CIB, the range of possibilities
for a descriptor can normally only be explored in a coarse manner. While there
is no limit to the number of descriptor variants in CIB from a methodological
perspective, the amount of work required to create the cross-impact matrix
increases approximately quadratically with the number of descriptor variants.
Moreover, if the gradation of the descriptor variants is too fine, it becomes
increasingly difficult to determine the differences between their effect on other
descriptors by small integer cross-impact values. Ultimately, the decision on the
number of descriptor variants results from a trade-off between the gain of
additional refinement and the associated effort.6 Practice shows that most users
prefer a coarse gradation (see Statistics Box “Number of Variants per
Descriptor”).
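The approximately quadratic growth of the effort can be made concrete by a simple cell count (a minimal sketch; it assumes, for the sake of the count, a full matrix in which every ordered pair of distinct descriptors forms a judgment section and all descriptors have the same number of variants):

# Illustrative sketch: number of cross-impact cells to be assessed in a
# full matrix, assuming n descriptors with v variants each.

def judgment_cells(n_descriptors, variants_per_descriptor):
    sections = n_descriptors * (n_descriptors - 1)  # ordered descriptor pairs
    return sections * variants_per_descriptor ** 2  # v x v cells per section

print(judgment_cells(6, 2))  # -> 120 cells
print(judgment_cells(6, 4))  # -> 480 cells: doubling v quadruples the effort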

Range of the Descriptor Variants: Conservative Scenarios vs. Extreme Scenarios
A formative design decision is also made when choosing the range of the descriptor
variants. This choice determines the extent to which the analysis can penetrate
into the area of futures that appear rather unlikely on an ad hoc basis, i.e.,
whether the analysis tends to focus more on conservative scenarios or more on
extreme scenarios.
The decision to restrict the analysis to the range of probable developments
and to omit the more extreme possibilities is not objectionable from a technical
perspective (“central descriptor variants,” Fig. 6.10 left). The marginal areas of
the possibility space are excluded from consideration, and thus, the possibility is
deliberately accepted that the real development could, under certain
circumstances, lead outside the scope of consideration. In return, one achieves a
more refined consideration of the area assumed to be likely. However, the
decision to focus on the center of the possibility space must be considered when
interpreting the results of the analysis. In other words, if extreme developments
were excluded from the spectrum of descriptor variants, it would be a circular
argument to conclude at the end of the analysis from the inevitable absence of
extreme scenarios in the portfolio that the system under investigation tends
toward calm development.

Fig. 6.10 Examples of central and peripheral descriptor variants

The counter model with respect to the choice of central descriptor variants is
the decision to focus on the periphery of the spectrum of possibilities (Fig. 6.10,
right). Here, only a coarse definition of the middle range is accepted to afford
access to the outskirts of the spectrum of possibilities. In this way, CIB analysis
can also provide extreme scenarios. In Fig. 6.10, for example, one would argue
that the descriptor variant “A2 moderate” describes an economic development
that, despite the considerable range of developments it includes, is essentially
within the bounds of historical experience and represents more or less a
continuation of existing socioeconomic structures. In contrast, each peripheral
variant would lead in its own way to massive change or even disruption. The
price to be paid for the greater “colorfulness” of an analysis that includes the
peripheral part of the possibility space is that the scenarios using the central
descriptor variants lead to vague and not very informative pictures of the future.
However, at the cost of increased workload in matrix generation, the
advantages of both approaches can be combined by introducing five variants for
the descriptor in Fig. 6.10.
These considerations are not limited to (quantitative) ratio descriptors. The
corresponding design options are also open to ordinal and nominal descriptors.

Plausibility of Descriptor Variants


Overall, there is a trade-off between the goals of making scenarios interesting
and making them plausible (Biß et al., 2017). The goal of creating interesting
scenarios that highlight possible trend breaks and associated opportunities and
challenges of the future is better served by extreme scenarios. In contrast, the
goal of creating unquestionably plausible and realistic scenarios is more securely
achieved with conservative scenarios. According to Wiek et al. (2013), the
plausibility of descriptor variants (and thus of scenarios) can be based on three
justifications:
1. The development described by the descriptor variant has already manifested itself in the past and thus proved to be a conceivable development.
2. The development described by the descriptor variant has never occurred in the system under study but has occurred in a comparable system (for example, in another country).
3. There is no historical example of the development described by the descriptor variant, either in the investigated system or in a comparable one. However, there are robust arguments for assuming its possibility (e.g., a “proof of concept” in technological developments).
In addition, further factors, which lie not only in the context of observation
but also in the cognitive disposition of the observing persons, determine
subjective plausibility perception (Schmidt-Scheele, 2020). Ultimately, the
needs of the target audience of the scenario study determine which objectives
must guide the design of the descriptor variants.
6.3 About Cross-impacts
The preparation of the cross-impact matrix is the centerpiece of data acquisition
in CIB analysis. This step is often the most time-consuming of the entire
analysis process. Here, piece by piece, the system picture is created from which
conclusions are subsequently drawn during data evaluation. Figuratively
speaking, CIB scenarios must be thought of as picture puzzles that the
CIB algorithm assembles from the individual cross-impact “puzzle pieces.”
Each individual puzzle piece, i.e., each individual cross-impact value, has the
potential to strengthen or undermine the credibility of a scenario. Consistent care
in obtaining, justifying, and documenting cross-impact judgments is therefore
essential to the quality of a CIB analysis.

6.3.1 Explanation of the Term


The term “cross-impact ” originates in a 1968 paper by American foresight
researchers T.J. Gordon and H. Hayward, in which they proposed a
mathematical simulation procedure to estimate how the probabilities of
sociotechnical events change once some of them have occurred.7 Gordon and
Hayward referred to the influence that an event has on the probability of events
that have not yet occurred as “cross-impacts.” Subsequently, the term was
adopted by numerous other researchers examining the reciprocal influence of
events and trends from various perspectives.8 In CIB, the term “cross-impact” is
used to emphasize that here—in contrast to the traditional consistency matrix
method9—causal information is evaluated rather than correlation information.

6.3.2 Methodological Aspects


Underlying the relatively simple work instructions for estimating cross-impact
ratings formulated in Sect. 3.3 are several questions about the details of the
assessment task. Good quality of the cross-impact matrix requires that the
assessments conform to the method with respect to these details, and quality also
benefits if all persons involved in data collection are familiar with the design
options that the method allows and capable of using them in a meaningful way.
The most important design options and methodological requirements are
discussed below.

Rating Interval
For the cross-impact ratings, the interval [–3…+3] is mainly used in the
examples of this book. However, this interval is not stipulated by the method. In
the literature, one also finds application examples using larger or smaller scales,
for example, [–5…+5]10 or [–2…+2]11. Basically, CIB is operable even if only
the sign of the influence is provided without a strength rating, which
corresponds to the interval [–1…+1].
The rating interval is therefore a matter of choice. As mentioned, from a
technical perspective, CIB does not prescribe an interval. The choice of interval
is a trade-off between the goal of providing the scenario construction procedure
with as much information as available about the differences in strength between
influences (a goal that is promoted better by large intervals) and the insight that
qualitative knowledge about influence relationships—which is often the basis
for cross-impact ratings—does not usually justify fine gradations. Intervals that
are too large only lead to a stronger experience of uncertainty for the experts and
pseudoaccuracy in the rating results.
Practical experience shows that the interval [–3…+3] is often perceived as
usable and at the same time sufficient for recording the existing knowledge about
strength relations. In a sample of 64 CIB studies with documented rating
intervals, the interval [–3…+3] was used in 49 cases (77%). With this interval,
CIB follows older traditions of other methods based on expert judgments.12
Nevertheless, individual circumstances can also suggest a different choice. For
example, a coarser strength scale may be preferred if a study requires the
assessment of relationships that are particularly difficult and uncertain to
evaluate.

Empty Judgment Sections and the Omission of Very Weak Influences


CIB does not require that all or any particular portion of the judgment sections
actually be filled with impact ratings. Rather, the degree to which the matrix is
filled should result solely from the peculiarities of the system under study and
may vary.
For instance, in the Somewhereland matrix in Fig. 3.7, 7 of the 15 judgment
sections are empty, and only eight of them are completely or partially filled with
cross-impact values. The connectivity (i.e., the share of nonempty judgment
sections in the total number of such sections) is thus approximately 53% in this
case.
Whether a system is characterized by many or few interconnections between
the descriptors already represents a first statement about the system, and from
CIB’s point of view, systems can display the entire range from strong to weak
connectivity. It is true that weak connectivity may have consequences for the
analysis results, particularly for the number of scenarios (see Sect. 4.7.1).
However, this should not be viewed as a sign of a poorly designed cross-impact
matrix but rather as a logical reaction of the method to a certain system property.
In practice, typical connectivity is between 40 and 70%, with a median value of
approximately 50%.

Experts with little or no experience in compiling cross-impact matrices
occasionally assume intuitively that it is desirable to fill the matrix as densely as
possible. This erroneously assumed requirement can then lead to the coding of
very weak influence relationships that would have been better omitted. Such
coding distorts the rating scale and can lead to inappropriate scenario portfolios.

Cross-impact matrices may be densely or sparsely filled. There is no methodological necessity M12
to fill as many judgment sections as possible. One of the essential tasks in creating the cross-
impact matrix is to decide which relationships are to be omitted as less relevant and to focus
the view on the influencing relationships that are actually key.

An explicit reference to the permissibility of blank judgment sections may
therefore be useful as part of the working instructions for an expert panel.

Ensuring Coding Quality: Avoid Coding Indirect Influences


What Are Indirect Influences?
An indirect influence is an influence that does not proceed directly from
descriptor X to descriptor Y but from descriptor X to descriptor Z and then from
descriptor Z to descriptor Y. The presence of indirect influences is the rule in
interconnected systems, including cross-impact matrices. For example, in
the Somewhereland matrix, societal values have no direct influence on foreign
policy (the F → B judgment section in Fig. 3.8 is empty). However, there is an
indirect influence because societal values affect the voting decisions of
Somewhereland citizens and thus which party runs the government, and this
decision in turn determines the spirit of foreign policy as each party pursues its
foreign policy agenda with its own policy style. That this reasoning is not a
direct influence of F (societal values) on B (foreign policy) but, rather, an
indirect one can be seen from the fact that the described train of thought includes
another descriptor (in this case, “A. Government”).
Why Is It a Problem to Code Indirect Influences in the Matrix Together with
Direct Influences?
Indirect effects play a key role in understanding complex systems, and CIB
automatically accounts for all indirect effects and chains of effects of any length
through its evaluation algorithm. This capability makes CIB a powerful systems
analytic tool. Conversely, however, this means that only direct influence
relationships may be coded in the cross-impact matrix for two reasons:
1. If we were to consider the above reasoning as an invitation to also code the
F → B judgment section itself, this would lead to double-counting of the
effect because CIB would then take into account both the direct F → B
pathway and additionally construct the indirect F → A → B influence
pathway by its algorithm.
2. Moreover, coding the indirect influence in the form of a direct influence
would limit the flexibility of the CIB analysis because the directly coded
influence would assume the unconditional validity of the chain of effects F
→ A → B and hide the fact that additional influences of other descriptors
acting on A could lead to an election result that is “surprising” from
Descriptor F’s point of view and thus also to a foreign policy that does not
fit Descriptor F’s perspective. In other words, in reality, indirect effects can
be interrupted by the influence of other forces on the intermediate link, and
the analysis should be able to account for this possible effect. This type of
reasoning is prevented in CIB when indirect influence is “hardwired” in the
matrix as seemingly direct influence.

Thus, cross-impact ratings should be strictly limited to direct influences, and it should be left M13
to the CIB algorithm to account for the indirect effects they imply.

Notice: Changing the descriptor field can convert direct influences into
indirect ones or vice versa
As described above, an indirect influence relationship can be identified by
the appearance of a third descriptor as an intermediate element in the detailed
verbal formulation of the impact process. However, this means that the
classification of an influence relationship as direct or indirect is only valid in
relation to a specific descriptor field. If the descriptor “A. Government” were
removed from the Somewhereland descriptor field, the influence relationship
from F to B described above would become a direct effect for the reduced
Somewhereland matrix. Conversely, the subsequent addition of a new descriptor
to Somewhereland would raise the question of whether some of the mechanisms
previously correctly coded as direct influences in the matrix might now be
classified as indirect and thus should be broken down into their components,
which then lead through the new descriptor as an intermediate link.
Implementation Hints
Limiting the cross-impact assessment to direct influences is a challenging task.
Experience has shown that experts may find it difficult to reliably maintain the
distinction between direct and indirect influences. Careful preparation of the
expert panel for this issue is therefore important for quality assurance. However,
the most important tool for self-control by the experts or for subsequent quality
control by the core team is a written explanation of the coded influence
relationships with a precise description of the assumed impact process.
Descriptions that mention other descriptors in addition to the source and target
descriptor of the influence when describing the assumed impact process suggest
a coding error.
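Such textual checks can be partially automated. The following minimal sketch (a hypothetical screening helper, not part of the CIB method; the descriptor names and the example sentence are constructed for illustration) flags justification texts that mention descriptors other than the source and the target of the judgment section:

# Illustrative sketch: flagging justification texts that mention a third
# descriptor - a heuristic hint at the coding of an indirect influence.

def flag_possible_indirect(justification, source, target, all_descriptors):
    others = [d for d in all_descriptors if d not in (source, target)]
    return [d for d in others if d.lower() in justification.lower()]

text = ("Societal values shape voting decisions and thus which party runs "
        "the government, which in turn sets the style of foreign policy.")
print(flag_possible_indirect(text, "Social values", "Foreign policy",
                             ["Government", "Social values", "Foreign policy"]))
# -> ["Government"]: the described impact apparently runs via a third descriptor

A match is, of course, only a signal for human review, not proof of a coding error.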

Ensuring Coding Quality: Avoiding Inverse Coding


Inverse coding occurs when cross-impact assessments do not answer the
question “What impact does A have on B?” but instead answer the question
“What inference can I make for B if I assume A?” For example, Table 6.2 shows
a cross-impact assessment expressing an inverse (inference-based) train of
thought.
Table 6.2 Example of an inverse coding

The underlying consideration here is that the success of technology X is only
conceivable if the government promotes this technology and that therefore the
success of technology X would clearly indicate that policy-makers previously
decided to promote technology X and not competing technology Y.
This argumentation may be correct in itself. However, it is inversely coded
in the judgment section A → B and thus incorrect, since the causal relationship
runs in the opposite direction. An indication of the inverse logic of the
argumentation lies in the fact that it draws a conclusion about an event (the
technology promotion) that occurs temporally before the event that gives rise to
the conclusion (the technology success). Correctly, this insight into the connection
between technology promotion and technology success must be coded in the
judgment section B → A (Table 6.3).
Table 6.3 Correction of an inverse coding
Reversing causal logic through inverse coding diminishes the ability of CIB
analysis to correctly represent and analyze the consequences of causal
relationships in the system. Occasionally, meaningful scenarios nevertheless
emerge on the basis of inverse coding. However, at the latest, when secondary
analyses are performed that take a closer look at the causal relationships in the
system, such as intervention analysis (cf. Sect. 4.4), inverse coding frequently
leads to flawed analysis results.
As with indirect influence coding, inverse coding may be applied
unconsciously. It is therefore not sufficient to draw the attention of the experts
who are to perform the cross-impact assessment to the inadmissibility of inverse
coding at the beginning of data elicitation. Careful data collection also requires a
subsequent review of the explanations provided by the experts for the impact
assessments.

Ensuring Coding Quality: Balancing Positive and Negative Cross-impacts
Technically, there is always more than one equally valid way to code an
influence relationship using cross-impacts. For example, the consideration that a
good MINT education13 of school graduates has a positive impact on the
innovation capacity of the economy in the long run can be expressed with
exclusive use of positive cross-impacts (Table 6.4).
Table 6.4 Coding an influence relationship using positive impacts

The same consideration can just as well be described by its opposite, i.e., by its
dampening effect on the risk of decreasing innovation capacity (Table 6.5).

For if, as in this case, only the alternatives that innovation capacity increases
or decreases are offered, then to state that MINT education helps prevent
innovation capacity from decreasing is equivalent to stating that such an
education promotes the increase in innovation capacity. Not surprisingly,
explicitly mentioning both sides of the influence effect, i.e., promoting and
inhibiting, is also a way of expressing the same consideration (Table 6.6).
Table 6.6 Coding an influence relationship using mixed impacts
The three alternative codings of the same consideration formulated here are
methodologically equivalent. The distance between B1 and B2 is always the
same (i.e., 2 points), and CIB evaluation would result in identical portfolios for
all three variants. The equivalence of the three cross-impact codings (a), (b), and
(c) is the consequence of the following general law in CIB:

Uniform addition of any number in all cells of a judgment group does not change the portfolio M14
of a matrix (“addition invariance”).14 The reason for this is that the CIB algorithm only reacts
to the difference between two impact sums, and this does not change when a judgment group is
uniformly shifted in its score level.

In this way, judgment group (b) is obtained by subtracting 2 points in all
cells of judgment group (a). Judgment group (c) is in turn created by adding one
point in all cells of judgment group (b) or, equivalently, by subtracting one point
in all cells of judgment group (a).
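The invariance is easy to verify directly. The following minimal sketch (with assumed illustrative numbers rather than the exact values of Tables 6.4, 6.5, and 6.6) compares the impact-sum difference between two target variants for a coding and for the same coding uniformly shifted by –2 points; the difference, which is the only quantity the CIB algorithm reacts to, remains unchanged:

# Illustrative sketch of addition invariance: uniformly shifting a
# judgment group leaves all impact-sum differences unchanged.

def impact_sum_difference(groups, chosen, alternative):
    """groups: one judgment group per influencing descriptor, each a list
    of impacts on the variants of the target descriptor."""
    s_chosen = sum(g[chosen] for g in groups)
    s_alternative = sum(g[alternative] for g in groups)
    return s_chosen - s_alternative

coding_a = [[3, 1], [2, 0]]    # purely positive coding style
coding_b = [[1, -1], [0, -2]]  # the same groups, each cell shifted by -2
print(impact_sum_difference(coding_a, 0, 1))  # -> 4
print(impact_sum_difference(coding_b, 0, 1))  # -> 4: identical verdict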
It may depend on personal mindsets whether experts prefer to express their
insights in the form of promotion or hindrance or by mixtures. Given the
indifference of the CIB algorithm to coding style, at first glance, how to express
oneself could be left to the experts’ discretion.

Comparability as a Criterion for the Coding Style


However, the goal of comparability of cross-impact assessments argues against
awarding the experts this discretion. If several experts assess the same
relationship or if one expert wants to compare another expert’s assessment with
his or her own system view, it should be easy to perceive whether the system
views of both experts are essentially the same or whether there is a
disagreement. However, this perception is more difficult if the cross-impact
ratings are not made according to a consistent coding style. For instance, the two
assessments according to (a) and (b) could easily be perceived as apparent
dissent, although both assessments are actually based on the same system view
and CIB draws the same conclusions from both.

Conventions to Ensure Comparability of Cross-impact Ratings


Therefore, to keep cross-impact ratings directly comparable in projects where
several experts perform independent assessments, it is recommended to agree on
a convention for coding style in advance. Possible conventions are as follows:
(C1) The exclusive use of positive cross-impacts in the style of coding (a).
(C2) The exclusive use of negative cross-impacts in the style of coding (b).
(C3) The balanced use of positive and negative cross-impacts in the style of coding (c).
With all the conventions mentioned, it is in principle possible to perform a
correct CIB analysis. In application practice, however, cross-impact matrices
with an (at least) approximately balanced use of positive and negative cross-
impact ratings dominate, and this book follows this practice.

“Standardization” as a Strict but also Restrictive Instrument to Balance Positive and Negative Cross-impacts
To support a balanced use of positive and negative cross-impacts, the CIB
methodology originally included the recommendation that the cross-impacts in
each individual judgment group be chosen so that their cross-sum is zero
(Weimer-Jehle, 2006). This “standardization ,” which is also followed by most
of the matrices in this book, ensures an optimal balance of positive and negative
codings, not only in the matrix as a whole but also in each of its parts. However,
it considerably narrows the options in assigning cross-impact ratings and may
force experts to choose different numbers in individual cells than they would
have initially. CIB practice has shown that most users prefer to dispense with
strict standardization and instead strive for an approximate balance of positive
and negative cross-impacts in the matrix as a whole.
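Whether a completed matrix follows the strict standardization rule can be checked mechanically, for example with the following minimal sketch (a hypothetical helper; judgment groups are assumed to be given as lists of impact values):

# Illustrative sketch: finding judgment groups that violate the strict
# standardization rule (cross-sum of each group equal to zero).

def unstandardized_groups(groups):
    return [i for i, group in enumerate(groups) if sum(group) != 0]

print(unstandardized_groups([[-1, 1], [2, -2], [3, -2]]))
# -> [2]: only the third group has a nonzero cross-sum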

Ensuring Coding Quality: Calibrating Strength Ratings


Cross-impact data represent impact strengths on a relative scale. There is no
absolute metric with which to determine what impact strength 3 means, and the
CIB algorithm does not require such a metric. The key to a meaningful strength
rating is comparison. A promoting impact should be rated +3 if it is judged to be
approximately equivalent to a previously coded impact of that strength on the
same descriptor. A hindering impact may be rated –3 if its encounter with an
impact rated +3 would result in an approximate neutralization of both impacts.
A promoting impact may be rated +2 if it is considered equivalent to the
combined effect of two other impacts previously rated +1, and so on. Impacts
that are judged too weak to be equated with other impacts rated +/–1 should be
set to 0. Performing a number of random cross-comparisons of this type within a
descriptor column provides an opportunity for quality assurance and validation
of the cross-impact strength calibration.

At the beginning of the cross-impact assessment of each descriptor column, it is helpful to first M15
ask oneself which is the strongest impact in that column and to rate it +3 or –3. Afterward, this
impact serves as a reference point for the other cross-impact ratings in the column.

Ensuring Coding Quality: Sign Errors and Double Negations


In application practice, the core team is occasionally confronted with the fact
that experts set incorrect signs for the cross-impacts. These signs can be
obviously nonsensical, or the sign error can become apparent by comparing the
rating with the textual explanations provided by the experts. In the worst case,
the sign error only becomes apparent after the matrix evaluation in the form of
implausible scenarios. A frequent cause of sign errors is the issue of “double
negation.”
A double negation occurs when it is necessary to assess how a factor affects
the nonoccurrence of an event. A negative impact on a descriptor variant of the
type “X does not occur” expresses the same idea as a positive impact on “X
occurs,” but it does so in a way that can easily lead to confusion. Table 6.7
shows a correctly formulated judgment group for the belief that an increase in
innovation activity helps prevent the absence of economic growth.
Table 6.7 Example of a double negation

However, persons unfamiliar with cross-impact assessments may find it
confusing that an economy-promoting impact is expressed by a negative cross-
impact value. In fact, however, a negative impact in cell B2 does not express
that the impact works against the economy but that it works against harming the
economy (double negation).
Thinking in double negations, which can be necessary for certain judgment
cells, is generally not difficult for trained persons. However, it can be a
considerable challenge for untrained experts. In interactive elicitation
procedures, the interviewers can intervene directly by inquiring. In written
surveys, control is only possible by checking the consistency between
explanatory texts and cross-impact ratings and contacting the expert
and inquiring if necessary. If it turns out in interviews or workshops that the
respondents do not become accustomed to this difficulty and that coding errors
due to double negations generate a permanent burden, it can be decided as a
workaround to only query the part of the judgment cells that does not imply a
double negation and to have the core team conduct any complementary ratings
that may be necessary due to the chosen coding style.15
However, sign errors can also occur as mistakes by people who are familiar
with cross-impact assessments. When checking the quality of the completed
cross-impact matrix, when searching for the causes of dissenting expert
assessments or when searching for the causes of scenarios that appear
implausible, it is therefore generally useful to pay particular attention to
conspicuous sign decisions in judgment cells with double negation.

Ensuring Coding Quality: Predetermined Descriptors


Each variant of a descriptor is to be regarded as an ad hoc possible development;
otherwise, it would be pointless to include it in the analysis. The perspective of
CIB is therefore to regard the behavior of each descriptor as basically open to all
its variants.16 Whether and in which combinations the individual descriptor
variants then actually occur in the scenarios is not to be anticipated but, rather,
emerges through the analysis.
However, certain patterns of cross-impact ratings can limit this openness at
an early stage by pushing a descriptor toward a certain variant from the
beginning. It can then become difficult or, in extreme cases, impossible for CIB
to find constellations that allow this descriptor to take a different path, and the
result for this descriptor is thus largely or even entirely predestined. The
descriptor is then termed a “predetermined descriptor” (cf. Sect. 5.3 on the
effects of predetermined descriptors on the portfolio). The judgment section
shown in Table 6.8 presents a typical pattern in which cross-impact ratings
contribute to the predetermination of a descriptor (here, descriptor B):
Table 6.8 A judgment section contributing to predetermination

By this judgment section, Descriptor variant B2 is preferred in any case,
independent of which variant prevails for Descriptor A in a scenario. The
difference is only in how strong this preference is. Descriptor variant B1 can
thus in no way draw an advantage for its impact balance from the influence of
A, and as far as descriptor A is concerned, B1 is always at a disadvantage
compared to B2.
As long as the one-sided impact of descriptor A remains an exception among
the influences on descriptor B, and other descriptors in contrast to A act with an
unbiased pattern on B, consequences do not necessarily follow. However, the
more other descriptors also act unilaterally in favor of B2, the less likely it
becomes that scenarios with Descriptor variant B1 can still occur in the
portfolio. The matrix considered in Sect. 5.3 is an example of a matrix with
many strongly predetermined descriptors.
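One-sided judgment sections of this kind can be detected automatically when screening a matrix, for example with the following minimal sketch (a hypothetical screening helper; the example values merely mimic the pattern of Table 6.8):

# Illustrative sketch: detecting target variants that receive the maximum
# rating in every row of a judgment section - a pattern contributing to
# the predetermination of the target descriptor.

def one_sided_favorites(section):
    """section: rows = variants of the source descriptor, columns =
    variants of the target descriptor."""
    n_columns = len(section[0])
    return [j for j in range(n_columns)
            if all(row[j] == max(row) for row in section)]

print(one_sided_favorites([[-1, 1], [-3, 3]]))
# -> [1]: the second target variant is favored by every source variant

As discussed below, such a finding is not necessarily a coding error; it may reflect a valid system insight or a deliberately restricted variant spectrum.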

Phantom Variants as a Cause of Bias


The use of one-sided rating patterns should not generally be regarded as
incorrect use of the method. These patterns may in fact occasionally arise due to
a lack of methodological experience and should then be questioned by the
interviewers with reference to the predetermination effects. In other cases,
however, they will express valid system insights and thus must be accepted
despite their limiting effect on the diversity of the portfolio. The resulting
narrowing of the future space is then meaningful and a valid aspect of the
system analysis.
In the preceding example, there may be good reasons for the rating and the
associated predetermination. An intensification of environmental legislation,
whether moderate or significant, should in any case result in environmental
benefits. In this case, the deeper background of the one-sided impact of
Descriptor A is a consequence of the fact that the conceivable spectrum has not
been exhausted by the variants of Descriptor A. From a general perspective, a
descriptor variant, such as “A3 environmental legislation is moderately
softened,” that would round out the range of possible futures of this descriptor is
missing. If this descriptor variant were included, the judgment section would
appear less one-sided and more in line with the usual pattern (Table 6.9).
Table 6.9 A phantom variant

The original judgment section, limited to A1 and A2, may have come about
through deliberate exclusion of the conceivable descriptor variant A3, for
example because a regression in environmental legislation was considered
highly unlikely or because it would contradict the premises of the study.17 The
narrowing of the options for descriptor B by the one-sidedness of the judgment
section would then simply be the logical consequence of a deliberate restriction
of the variant spectrum of descriptor A.
Descriptor variants that are conceivable in principle but excluded from the
analysis for good reason, such as A3, which are an unspoken part of the
spectrum of possibilities and must be supplemented mentally to capture the idea
of a judgment section, are referred to as “phantom variants.”
What is needed is sensitivity to the occurrence of one-sided cross-impact
ratings. One-sided judgment sections emerging during expert elicitation should
be discussed with the experts and the implications explained. If there are good
reasons and if the one-sidedness is confirmed by the experts in knowledge of the
implications, the one-sided judgment section should be accepted. If one-sided
judgment sections result in strongly predetermined descriptors, consideration
can also be given to simplifying the matrix by deleting the devalued descriptor
variants.
Examples from practice in which expert judgment led to strongly
predetermined cross-impact matrices include Weimer-Jehle et al. (2010) and
Cabrera Méndez et al. (2010).

Ensuring Coding Quality: Absolute Cross-impacts


A regular cross-impact on a descriptor (for example, from the rating interval
[–3…+3]) can always be overruled by other influences acting on the same
descriptor. However, CIB offers the opportunity to address the special case
where an influence is “absolute,” i.e., an influence that definitively decides the
outcome for the influenced descriptor without allowing influences from other
descriptors to challenge the decision. Absolute cross-impacts can adopt two
roles:
Absolute activator: The occurrence of a descriptor variant (for example, of
A1) will, under all circumstances, entail the occurrence of a certain variant of
another descriptor (for example, of B2), regardless of what other influences
are exerted on B by other descriptors.
Absolute deactivator: The occurrence of a descriptor variant (for example,
of A1) will, under all circumstances, prevent the occurrence of a certain
variant of another descriptor (for example, of C3), regardless of what other
influences are exerted on C by other descriptors.
If an expert panel formulates an absolute impact as part of its understanding
of the system, ways must be sought to implement this insight through
appropriate cross-impact ratings. CIB provides the possibility to express such
relationships. However, this requires leaving the normal range for cross-impact
ratings and resorting to the artifice of a “sufficiently high” cross-impact. For
cross-impact matrices of usual size, cross-impact values of +99 for absolute
activators and –99 for absolute deactivators are sufficient. The example shown
in Table 6.10 illustrates the use of absolute cross-impacts. It concerns the effect
of a legislative intervention on the product design and the energy consumption
of household appliances.
Table 6.10 Using absolute cross-impacts

Consumption-based taxation (A2) affects the market prospects of appliances
with higher energy consumption and thus makes it more important for appliance
manufacturers to consider whether they should focus on corresponding product
designs. However, it remains conceivable that this impact could be compensated
by other factors, such as strong growth in consumer purchasing power, which
would make the taxation less critical to the purchase decision, or an increasing
tendency toward comfort-oriented consumer expectations. Thus, since the effect
of A2 is in principle compensable, it makes sense to express it through
conventional cross-impacts within the normal rating interval.
The situation is different in the case of a ban on high-consumption devices
(A3). The effect of a legal ban is not compensable; rather, it is decisive.
Therefore, inappropriate scenarios could arise if a ban’s effect were expressed
“only” through compensable cross-impacts. Here, the use of an absolute cross-
impact is the better choice.
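The arithmetic behind this artifice is simple. In the following minimal sketch (illustrative values; the net regular impacts and variant labels are assumptions, not taken from Table 6.10), the absolute deactivator of –99 dominates the impact balance regardless of which regular impacts from [–3…+3] are added:

# Illustrative sketch: an absolute deactivator (-99) cannot be outweighed
# by any combination of regular impacts in the interval [-3...+3].

def favored_variant(impact_sums):
    """Index of the target variant with the highest impact sum."""
    return max(range(len(impact_sums)), key=lambda i: impact_sums[i])

regular = [3, -2]   # net regular impacts on the two variants of B
ban = [-99, 0]      # absolute deactivator on variant B1, triggered by A3
sums = [r + b for r, b in zip(regular, ban)]
print(sums)                   # -> [-96, -2]
print(favored_variant(sums))  # -> 1: the banned variant can never prevail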

Avoiding Conflicts Between Absolute Cross-impacts


When using multiple absolute cross-impacts in a matrix, care must be taken to
avoid coding contradictions. For example, it would contradict the concept of an
absolute cross-impact if a descriptor variant A1 acted as an absolute activator in
favor of B2 and at the same time D3 was assumed to be an absolute deactivator
for the same descriptor variant B2. This would lead to the conflict that in all
scenarios that combine A1 and D3, B2 would be both absolutely demanded and
forbidden. Similarly, it would be a conflict if two absolute impacts within a
scenario called for both B2 and B3.
The simplest way to avoid conflicts between absolute impacts is to use no
more than one absolute cross-impact in each descriptor column. If it is necessary
to implement several absolute impacts on one descriptor, their conflict potential
must be carefully considered.
Examples of the use of absolute impacts in CIB studies can be found in
Meylan et al. (2013) and Kosow et al. (2022).

6.3.3 Data Uncertainty


In Sect. 3.7, the question was raised of how high the uncertainty margin of
cross-impact ratings should be assumed to be. This may vary from case to case.
Nevertheless, one can ask which uncertainty range can be regarded as typical.
Empirical evidence on this question is provided by CIB
studies in which groups of experts were asked to construct a cross-impact matrix
in such a way that each member of the expert panel made his or her own
independent cross-impact assessments. By comparing these matrices, we can
assess how much agreement there is, in general, between the cross-impact
ratings of different experts on the same issue. Since the consistency assessment
in CIB is based on the differences between impact sums, it is particularly
informative to examine the similarities and discrepancies of the rating difference
between two matrix cells of a judgment group.
For example, the scores of cells A1 → B1 and A1 → B2 in the
Somewhereland matrix in Fig. 3.7 have the values [–2, +1]. The cell difference
is therefore 3, i.e., the impact of A1 on B2 is three points higher than the impact
on B1. If another expert were to evaluate this relationship with the values [–1,
+1], then this would correspond to a cell difference of 2, and the estimates of the
cell differences of the two experts would thus differ by 1 point at this point of
the matrix.
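Expressed as a small computation (the second expert's rating is the hypothetical one from the text above):

# Illustrative sketch: deviation between the cell differences of two
# experts for the same judgment group.

def cell_difference(judgment_group, i, j):
    return judgment_group[j] - judgment_group[i]

expert_1 = [-2, +1]  # ratings A1 -> B1 and A1 -> B2 from Fig. 3.7
expert_2 = [-1, +1]  # the second, hypothetical assessment
print(abs(cell_difference(expert_1, 0, 1)
          - cell_difference(expert_2, 0, 1)))  # -> 1 point deviation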
The empirical data (see statistics box) show that in rare cases, there are
strong deviations between the cell differences (deviation of 3 points or more in
approximately 16% of the cases). The respective codings can then hardly be
based on even approximately matching ideas about a descriptor relationship.
Instead, the experts in these cases likely have clearly distinguishable reality
models about the relationship in question.
This part of the deviation distribution is therefore not a consequence of
coding uncertainty, but it indicates expert dissent. The occurrence of divergent
reality models in an expert panel is by no means an “operational accident” in
CIB but rather part of the analysis intention, for example, when different
stakeholder groups are interviewed and the aim is to capture their different
perspectives. However, such cases should then be handled with special dissent
procedures (see Sect. 4.5).
In contrast, the part of the deviation distribution that refers to small but very
frequent deviations represents the regular coding uncertainty. In approximately
84% of the cases, the cell differences vary by a maximum of 2 points. Due to
their frequency, this part of the rating uncertainty cannot be regarded as an
exception but must be understood as a regular element of the assessment
process. It must be expected even if the experts have an idea of an influence
relationship that is in principle in agreement. For this part of the deviation
distribution, the frequency data yield a mean value of approximately 0.5 points
as the average uncertainty of the difference between two matrix cell values.

Since the consistency value of a scenario is the sum of N – 1 differences of
two cross-impact values (where N is the number of descriptors), the coding
uncertainty causes a mean uncertainty of the consistency value of 0.5
$$ \sqrt{N-1} $$. This reflects the fact that independent uncertainties in sums
only increase with the square root of the number of summands.18
Therefore, a scenario can be discarded as surely inconsistent only if its
inconsistency value exceeds the significance threshold given in Sect. 3.7.1.
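As a numerical illustration (the matrix size is chosen here merely for round numbers): for a matrix with N = 10 descriptors, the mean uncertainty of the consistency value amounts to $$ 0.5\sqrt{10-1}=1.5 $$ points. Consistency differences of this magnitude between scenarios can therefore arise from regular coding uncertainty alone.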

6.4 About Data Elicitation


The collection of descriptors and their variants and the elicitation of cross-
impact data is a preparatory procedural step and not part of the CIB
methodology in a strict sense. Therefore, neither orientation nor directions are
provided by CIB for this task. However, this task corresponds to the usual
elicitation exercises in the social sciences, and it also arises in the same or
similar way for other methods of future research.19 CIB can therefore draw on
an established body of proven procedures for data collection. In addition, new
data collection and elicitation procedures have been developed in CIB studies.
This chapter provides an overview of the most important data collection and
elicitation procedures used in CIB studies, including an assessment of their
strengths and weaknesses. It aims to provide guidance for the reader’s choice of
an elicitation format adapted to individual project needs.

6.4.1 Self-Elicitation
In the simplest case, the necessary data objects (descriptors, descriptor variants,
cross-impact ratings) are determined partly or entirely by the core team. This
assumes that the core team has sufficient expertise in the entire relevant subject
area.

Advantages: The main advantages of this elicitation method are its procedural simplicity, low required effort, and independence from the need for external experts to contribute to data elicitation. The method is therefore particularly suitable for studies with limited resources.

Disadvantages: The main disadvantages are the increased risk of overlooking important perspectives as a result of “groupthink” and the possibility that the team lacks sufficient competence to represent all the required areas of knowledge. The disadvantages of the process thus lie in the increased risk of subjectivity and limited scientific legitimacy. These concerns carry less weight if the explicit purpose of the analysis is to express the core team’s system view through scenarios (e.g., in the case of stakeholder scenarios), if the thematic range of the object of analysis is well covered by the competencies of the team, or if the matrix is only prepared for illustrative or teaching purposes (such as in the case of the “Somewhereland” matrix).

Examples
Examples of self-elicitation of descriptors/variants and/or cross-impact data in
CIB studies are Saner et al. (2011), Slawson (2015), Saner et al. (2016),
Weimer-Jehle et al. (2016), Schweizer and Kurniawan (2016), Regett et al.
(2017), and Zimmermann et al. (2017).

6.4.2 Literature Review


One way for the core team to place data elicitation on a more objective footing
and to step beyond the boundaries of their own competencies is to review the
relevant literature. A literature review can be applied for all data objects used in
CIB analysis, i.e., descriptors, descriptor variants, and cross-impact data.

Descriptor Screening
A literature review for descriptor screening consists of evaluating the literature
to identify topics that are named as relevant influencing factors for the subject of
the analysis. To objectify the literature selection, it is advisable to document the
search procedures (e.g., databases used, search criteria). Recently, the use of
software tools for qualitative content mining has occasionally been observed
(e.g., Kalaitzi et al., 2017; Sardesai et al., 2018). A special form of review is the
evaluation of expert statements in media reports on the subject of the scenario
study (Ayandeban, 2016).

Descriptor Variants
Obtaining descriptor variants through literature reviews can be based on studies
that make future predictions about the descriptors. In the best case, the
descriptors themselves have already been the subject of scenario studies, or
controversial forecasts on the descriptor topic can be collected to capture the
spectrum of possible futures for the descriptors (Nutshell V, Fig. 6.11).

Fig. 6.11 Nutshell V. Using subscenarios as descriptor variants

Cross-impact Data
The derivation of cross-impact data from the literature by the core team consists
of identifying passages that make statements about descriptor relationships and
coding the qualitative or quantitative statements on a cross-impact rating scale.
To reduce coding uncertainty, several persons trained in the CIB method can be
asked to code the literature passages independently. Comparison of the results
then makes it possible to assess intercoder reliability.
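
To make this check concrete, the following minimal Python sketch (not part of any CIB tool) computes two common agreement measures for two coders who rated the same judgment cells independently. The ratings are hypothetical, and note that Cohen’s kappa, as computed here, treats the ordinal rating scale as purely nominal.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of judgment cells rated identically by both coders."""
    hits = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return hits / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance, treating ratings as nominal categories."""
    n = len(coder_a)
    p_obs = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_chance = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical ratings of eight judgment cells on the [-3 ... +3] scale:
a = [2, -1, 0, 0, 3, -2, 1, 0]
b = [2, -1, 0, 1, 3, -2, 0, 0]
print(percent_agreement(a, b))          # 0.75
print(round(cohens_kappa(a, b), 2))     # 0.68
```

Low agreement values would indicate that the coding instructions or the literature passages leave too much room for interpretation.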

Coding Literature Quotations: An Example from Practice


A CIB analysis by Schweizer and Kriegler (2012) on the drivers of global
climate gas emissions can serve as an example of coding literature statements on
descriptor relationships. The study and its results are described in more detail in
Sect. 7.4.
Schweizer and Kriegler’s analysis includes the two descriptors “Global
population growth” and “Global economic development.” For the coding of the
judgment section “Global population growth influences global economic
development,” Schweizer and Kriegler referred to a meta-study that summarized
research on this topic (Nakićenović et al., 2000: 120):

Prior to 1980, the overwhelming majority of studies showed no significant correlation between population growth and economic growth (National Research Council 1986). Recent correlation studies, however, suggest a statistically significant, but weak, inverse relationship for the 1970s and 1980s, despite no correlation being established previously (Blanchet 1991).

From this statement, Schweizer and Kriegler derived the cross-impact ratings shown in Table 6.11.
Table 6.11 Coding a text passage
The assessment in the literature quotation that no correlation could be proven for earlier times (i.e., times of low population) was expressed by Schweizer and Kriegler as an empty judgment group in the top row.
negative cross-impacts on the lower right are based directly on the literature,
including the estimation of a low influence strength. The positive cross-impacts
at the bottom left resulted as the “other side of the coin” from the negative cross-
impacts, as the authors followed the standardization rule (see Sect. 6.3.2).
For relationships for which a scientific controversy was identified,
Schweizer and Kriegler also prepared alternative coding and tested the
robustness of the scenario results by repeating the evaluation after exchanging
the baseline coding with the alternative coding (cf. Sect. 4.5.5, paragraph
“Sensitivity analysis”).

Assessment

Advantages
The advantage of literature-based elicitation is its utilization of a broad knowledge base and—if sought—the scientific legitimacy of the elicitation results. This advantage is strengthened by the transparent reference to recognized expert knowledge and by recording and balancing scientific dissent. Nevertheless, a subjective component arises in the interpretation of the text passages and in the selection of literature. However, this aspect can be mitigated by formalized and documented search strategies and by employing multiple coders. Another advantage is the independence from external experts and from the effort to recruit them to contribute to the data elicitation.

Disadvantages
If performed carefully, including by using the objectivity enhancement measures described above, this form of data elicitation can be highly time-consuming.
A fundamental difficulty with literature-based surveys can also arise for CIB studies that investigate developments whose interrelationships have not yet been addressed in the literature but for which knowledge is nevertheless available from experts. Such “uncharted territory” is a particularly attractive field of application for CIB. However, this means that the necessity may arise to switch to other elicitation methods for parts of the matrix.

Examples
Descriptors, descriptor variants, and/or cross-impact data have been collected in
many CIB studies through a literature review, e.g., Weimer-Jehle et al. (2011),
Schweizer and Kriegler (2012), Meylan et al. (2013), Centre for Workforce
Intelligence (2014), Schweizer and O’Neill (2014), Ayandeban (2016),
Shojachaikar (2016), Musch and von Streit (2017), Kalaitzi et al. (2017),
Sardesai et al. (2018), and Mitchell (2018).

6.4.3 Expert Elicitation (Written/Online)


Experts (or stakeholders deemed to have expertise in the details of group
interests or local conditions) are asked to provide their knowledge for one or
more phases of the analysis.

Descriptor Screening
For descriptor screening, the expert panel receives information about the subject
of the scenario analysis and is asked to suggest descriptors in writing or online.
A structured selection of the interviewees can be used to pursue an appropriate
representation of interdisciplinary perspectives, scientific controversies or (in
the case of stakeholders) different interests. A public online survey can also be
considered a special form of elicitation (Mowlaei et al., 2016). Expert elicitation
can also be used as a validation step after a literature review (Meylan et al.,
2013; Pregger et al., 2020).

Descriptor Ranking
If the expert panel is surveyed again after a list of possible descriptors has been
compiled and asked to assess the relevance of the proposed descriptors, for
example by assigning weight scores, this form of survey can also be used for
descriptor ranking and thus for the final selection of the descriptors to be
considered in the CIB analysis. An obvious option is to select the descriptors
with the highest average scores. However, other approaches can also be found in
the literature for transforming expert judgments into a selection. Schweizer and
O’Neill (2014), for example, surveyed a group of experts online and asked each
individually to mark the most important descriptors on a list. All descriptors
nominated by at least 25% of the experts were then used for the CIB analysis.
Pregger et al. (2020) sent a list of descriptor suggestions to experts in different
disciplines and asked them to assign 0–10 points for each descriptor suggestion.
The researchers then evaluated separately according to discipline which
descriptors best represented the perspectives of each discipline.
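
The two selection rules just described can be stated compactly in code. The following Python sketch is only an illustration of the rules; all descriptor names, votes, and scores are hypothetical and not taken from the cited studies.

```python
# Rule of Schweizer and O'Neill (2014): keep every descriptor nominated
# by at least 25% of the experts.
nominations = {"Population growth": 7, "Economic development": 9,
               "Governance quality": 3, "Technology diffusion": 2}
n_experts = 10
shortlist = [d for d, votes in nominations.items() if votes / n_experts >= 0.25]
print(shortlist)   # ['Population growth', 'Economic development', 'Governance quality']

# Simple average-score rule: keep the k descriptors with the highest mean
# of the 0-10 point scores assigned by the experts.
scores = {"Population growth": [8, 9, 7], "Economic development": [9, 8, 9],
          "Governance quality": [5, 6, 4], "Technology diffusion": [3, 4, 2]}
k = 2
ranked = sorted(scores, key=lambda d: sum(scores[d]) / len(scores[d]), reverse=True)
print(ranked[:k])  # ['Economic development', 'Population growth']
```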

Descriptor Variants
Experts may also be asked to suggest alternative futures for the descriptors. An
approach that can be combined with the literature-based approach is one in
which the core team prepares written drafts for the definition of the descriptor
variants after a literature review and asks experts to comment on the drafts
(Pregger et al., 2020). This variant reduces the workload for the expert panel. In
addition, the drafts produced by the core team can incorporate all project
requirements and design decisions (such as the number of descriptor variants or
whether to prioritize central or peripheral variants) from the outset, thereby
providing guidance to the experts for their comments.

Cross-impact Data
The alternative to using literature as “written expert knowledge” is to approach
experts directly and ask them to assess the cross-impacts. For the written survey,
experts are first selected and approached on the basis of their proven expertise in
the field (for example, in the form of publications or project experience). In the
case of an interdisciplinary descriptor field, expert selection must reflect the
disciplinary range and should also cover the most important expert controversies
in the individual knowledge domains, if any. To avoid subjective survey results,
several experts should be included for each discipline. The experts receive the
following:
A brief description of the project and the intended use of the scenarios.
A working guide to the assessment task, including a cursory outline that
explains how cross-impact data are used by CIB.
Definitions of descriptors and their variants (“descriptor essays”).
Blank forms for entering cross-impact ratings and explanations. It is
convenient to offer blank forms in different formats to accommodate the
different working habits of the experts (printout, spreadsheet file, scw-file).20
It is advisable to ask the experts not only for the cross-impact ratings but
also for brief justifications, i.e., explanations of the impact mechanism they
assume. For this, the work instructions should use a negative and a positive
example to sensitize the experts not merely to paraphrase the cross-impact
rating but to formulate an actual explanation for the coding.21 To limit
the workload for the experts, they should not be asked to provide reasons for
each individual judgment cell but only to provide explanations at the judgment
section level. It is also helpful to ask the experts for a self-assessment of their
confidence in each judgment section.
Judgment explanations are not technically required for CIB evaluation.
However, they can significantly increase the quality of the analysis, as they
make the experts’ cross-impact ratings more comprehensible for the core team
and for third parties and enrich the subsequent scenarios with substance because
such explanations can be used in the scenario descriptions and storylines. They
also serve quality assurance purposes, as they make it easier to identify technical
rating errors (for example, sign errors; see Sect. 6.3.2) and to address assessment
dissent. However, providing judgment explanations means considerable
additional work for the expert panel, which must be considered when sizing
the assessment task and when announcing the required time to the experts
being invited.
In the simplest case, the responses of different experts are summarized and
evaluated by forming a sum matrix (cf. Sect. 4.5.3). However, a careful
examination of controversial assessments (if any) is preferable. Procedures for
handling expert dissent are described in Sect. 4.5. As mentioned, it is also
advisable for the core team to compare the cross-impact ratings of the experts
with the explanations (if available) to identify technical errors of judgment and
to be able to correct them in consultation with the experts concerned.
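
As an illustration of the simplest aggregation case, the following Python sketch forms a sum matrix by elementwise addition of the experts’ individual matrices. The matrices are hypothetical miniature examples; real applications operate on the judgment cells of a full cross-impact matrix.

```python
def sum_matrix(matrices):
    """Elementwise sum of equally shaped cross-impact matrices."""
    rows, cols = len(matrices[0]), len(matrices[0][0])
    return [[sum(m[i][j] for m in matrices) for j in range(cols)]
            for i in range(rows)]

# Two hypothetical 3x3 expert matrices (the diagonal is unused in CIB):
expert_1 = [[0, 2, -1], [1, 0, 2], [-2, 1, 0]]
expert_2 = [[0, 1, -2], [2, 0, 1], [-1, 1, 0]]
print(sum_matrix([expert_1, expert_2]))
# [[0, 3, -3], [3, 0, 3], [-3, 2, 0]]
```

As noted above, a careful examination of controversial assessments remains preferable to purely mechanical summation.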

Partitioning the Matrix for Expert Elicitation


In the simplest case, all experts are asked to fill in the complete matrix. This
assumes that the areas of knowledge addressed in the matrix are not too diverse,
i.e., that all experts have the ability to make sound judgments for the entire
matrix. In addition, the matrix should not be too large; i.e., a realistic estimate of
the time required for processing should be weighed against the expected
willingness of the experts to invest their time.
The alternative to the complete processing of the matrix by all experts is the
division of the matrix and the assignment of individual matrix parts to experts
particularly competent in the required themes. It is recommended that the matrix
be divided into columns (instead of rows) since the cross-impacts in CIB are
always added to impact balances in columns. Therefore, it is essential above all
that the strength ratios of the cross-impacts within a descriptor column are valid.
Achieving this aim is made easier if the ratings within a column are made by the
same expert or experts.
The Somewhereland matrix for separate processing could be partitioned, for
example, as shown in Fig. 6.12. By partitioning the matrix, one can ensure that
the assessments are performed specifically by experts particularly suited for this
purpose. In addition, the workload for each expert can be limited in this way,
especially for larger matrices. The difference between expert elicitation with full
matrices or with partial matrices is visualized in Fig. 6.13. Of course, assigning
exactly one expert to each partition, as shown in Fig. 6.13, is to be understood as
an example. As a rule, it is preferable to have each partition assessed by more
than one expert. Techniques for combining the individual assessments into an
overall matrix or for using the individual assessments directly are described in
Sect. 4.5.
Fig. 6.12 Partitioning of the assessment task according to knowledge domains

Fig. 6.13 Two ways to construct a cross-impact matrix by expert group elicitation
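
To make the assignment logic concrete, the following Python sketch distributes whole descriptor columns among experts, so that all strength ratios within a column come from the same source. Descriptor names and expert labels are hypothetical and do not reproduce the actual Somewhereland matrix.

```python
descriptors = ["Government", "Economy", "Values", "Education", "Welfare"]

# Hypothetical column-wise partition: each expert rates the full columns
# of the descriptors closest to his or her field of knowledge.
partition = {"Expert A": ["Government", "Values"],
             "Expert B": ["Economy", "Welfare"],
             "Expert C": ["Education"]}

# Each expert assesses how every other descriptor influences "their" columns:
for expert, columns in partition.items():
    for col in columns:
        sources = [d for d in descriptors if d != col]
        print(f"{expert} rates the influences {sources} -> {col}")
```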

Assessment

Advantages
One advantage of this elicitation technique is the potentially high number of experts who can be involved and, due to the limited effort required, the low barrier to participation for them. Regardless of the form chosen, expert-based elicitation generally also has the advantage that the experts can project their system knowledge onto interdependencies on which they and others have not yet commented in publications, thus giving the CIB analysis access to genuinely new insights. Furthermore, expert dissent can usually be better identified and mapped in the CIB analysis by working with a group of experts than by interpreting the literature.

Disadvantages
Expert knowledge in published written form has usually already been subjected to a quality check by other experts, which is not the case with expert statements acquired by direct questioning. Furthermore, expert assessments are also subject to the risk of being uncertain judgments made in the absence of reliable knowledge. Another disadvantage of using this elicitation method alone is the risk of basing the analysis on perhaps one-sided opinions by experts who were selected “by chance,” especially if the number of experts interviewed is small.

Examples
Descriptor lists have been collected using this elicitation technique, for instance,
by Wachsmuth (2015), Ayandeban (2016), and Mowlaei et al. (2016). Cross-
impact data from written surveys have been used by Förster (2002), Renn et al.
(2007, 2009), and Jenssen and Weimer-Jehle (2012), among others.

6.4.4 Expert Elicitation (Interviews)


An alternative to written or online-based expert elicitation is personal interviews
of the experts by members of the core team. This form of elicitation is also
suitable in principle for all data objects used in CIB. However, due to the usually
large time requirements for the core team, this elicitation method will generally
only be used for the part of the data collection for which it promises the greatest
advantages under the individual project circumstances.

Descriptor/Variant Screening
Experts are approached, receive information about the subject of the scenario
analysis, and are asked to make suggestions for descriptors and descriptor
variants in an interview. A structured selection of the interviewees allows for an
adequate representation of interdisciplinary perspectives, scientific controversies
or (in the case of stakeholders) different interests.
A special form of descriptor screening by expert interview was used by
Uraiwong. Based on the interviews, the interviewees’ mental models of the
problem under study were formulated (“multistakeholder mental model
analysis”). The descriptors for the CIB analysis were then selected from the
factors used in the mental models (Uraiwong, 2013).

Cross-impact Data
Experts can also be asked for cross-impact assessments during interviews. If the
respondents are willing to participate in a sufficiently long interview or in a
series of interviews, the questioning can aim at filling out the entire matrix.
Interviewing several experts thus leads to an ensemble of matrices that can be
compared and either evaluated individually or conflated before evaluation.
However, interviews can also be used to prepare partial matrices (see Figs. 6.12
and 6.13). Each interview should then target the part of the matrix for which the
interview partner is particularly competent.
A variant of interview-based elicitation that is worth considering but has
rarely been used is to ask the experts not directly for cross-impact ratings but
for a verbal description of the interrelationships. The transcribed descriptions
then provide a similar source for the system interrelationships as literature
quotations, and the core team can code the descriptions after the interview. The
advantage of this approach is that it spares the experts the unfamiliar coding task
and enables them to articulate their expertise in a way that is familiar to them.
The coding can be conducted by CIB experts who, unlike most interviewees,
have a precise idea of how different coding patterns play out in the CIB
algorithm. A negative aspect of this elicitation form, however, is the partial loss
of legitimacy for the matrix, which is only indirectly expert-based, and the risk
of not exactly reflecting the expert intentions with the coding. It also requires
experience on the part of the interviewer to continuously ensure during the
interview that the descriptions offered by the experts contain sufficient
substance for the subsequent coding.
In CIB practice, interviews are predominantly used in the form of face-to-
face interviews.22 However, there are also cases of telephone interviews and,
recently, online interviews.23 If the pool of experts is sufficiently large, one can
also consider conducting the interviews not with individuals but with small
groups of, for example, three experts in the same field of knowledge. Here, the
advantage is that discussions occur between the interviewees and thus richer and
provisionally validated justifications for the assessments can be expected.
Moreover, misjudgments due to misunderstandings or insufficiently reflected
reasoning are less likely. However, it should be noted that group discussions
may lead to extended interview times.

Assessment

Advantages
One advantage of interviews compared to workshops (Sect. 6.4.5) is that it is generally easier to obtain the participation of experts, as they have to invest less time than for a workshop. Interviews also have the advantage over the written survey that uncertainties about the assessment task or about the meaning of the descriptors and descriptor variants can be clarified immediately by the interviewer. Conversely, the interviewer can also ask immediate questions if the cross-impact assessment raises concerns that there is a sign error or that the ratings would have presumably unintended effects on the evaluation (for example, if judgment sections are coded one-sidedly; see Sect. 5.3.1). A significant advantage is that the interviewer can directly work toward obtaining sufficient explanations and directly ensure that the explanations are comprehensible and not limited to a mere paraphrase of the fact of the influence. Another advantage over workshop-based expert elicitation (see below) cited in the literature is that separate interviews prevent experts from influencing one another, thus avoiding “groupthink” risks.24

Disadvantages
The disadvantage of using this elicitation method alone is the lack of critical questioning of the assessments by other experts. Additionally, due to the sequential interviewing of different experts, it can occur that clarifications of the task or of the definitions of descriptors and variants made during an interview create an interview situation that differs from that of preceding interviews, so that the interview results are founded on an inconsistent basis. In contrast, respondents in a workshop benefit equally from all clarifications made during the elicitation process. The quality of the interview results depends sensitively on the interviewer’s ability to recognize discomfort, uncertainties, and misinterpretations of the interviewee regarding the method and judgment task and his or her capability to resolve these issues. As a further disadvantage, the considerable time, coordination, and, often, travel effort for the core team associated with the numerous interviews must be considered.
Examples
Descriptors, descriptor variants, and/or cross-impact data through expert
interviews were collected by Schneider and Gill (2016), Schmid et al. (2017),
Brodecki et al. (2017), Musch and von Streit (2017), Pregger et al. (2020), and
Oviedo-Toral et al. (2021) among others. The special form in which information
about relationships was collected verbally and then coded by the core team was
used by Meylan et al. (2013) for part of the cross-impact data.

6.4.5 Expert Elicitation (Workshops)


As an alternative to written or interview-based expert surveys, expert elicitation
can also be conducted in workshops. One challenge of workshop-based
elicitation is the task of creating and maintaining a fruitful discussion
atmosphere. Certain domains of knowledge should not claim sovereignty of
interpretation over the system under study, nor should individuals with strong
opinions be permitted to dominate discussion. Moderators must therefore strive
to foster a hierarchy-free space in which the better argument prevails.25 In
addition to competence criteria, it can therefore be useful when selecting experts
for a workshop to also consider the discussion culture of prospective experts and
to avoid strong hierarchical differences in the expert group, which can have a
disturbing effect if the higher-ranking persons tend to assert their status in
discussion. A productive working atmosphere also requires the moderators to act
in a balancing manner during the workshop by restraining those who act in a
dominant fashion and actively encouraging reserved participants to share their
views.

Descriptor Screening
In a workshop, after receiving preliminary written information about the
scenario analysis, experts are asked to propose descriptors and descriptor
variants. A structured selection of the participants can promote an adequate
representation of interdisciplinary perspectives, scientific controversies or (in
the case of stakeholders) different interests. The expert workshop can also be
used as a validation step after a literature review or a written or online survey or
interviews. Creativity techniques can also be used to stimulate working
procedures (for example, Biß et al., 2017; Ernst et al., 2018).

Descriptor Ranking
Descriptor ranking can be conducted in workshops by asking the expert panel to
rank a proposed list of descriptors by importance to the analysis through
discussion or by assigning points. This process can be followed by the final
selection of descriptors.

Cross-impact Data
The expert workshop is also a frequently used means to elicit cross-impact data.
To this end, the expert panel discusses descriptor interdependencies in a plenary
session or small groups to elaborate on the cross-impact ratings. If working in
small groups, the alternative procedures shown in Fig. 6.13 are also applicable.
As with interviews, it may be considered for workshops to only request and
document verbal descriptions of interrelationships and afterward have them
coded by method experts.

Assessment

Advantages
One advantage of workshops over written surveys or interviews is the opportunity for discussion between experts and the resulting more intensive critical scrutiny and increased obligation to provide sound reasons for the assessments made. In this way, subjectivity can be reduced and poorly substantiated assessments can be filtered out. Another advantage is the direct encounter of different perspectives on the system under investigation. Whereas in a written survey or in interviews the multidisciplinary views of the problem are initially only juxtaposed and only later synthesized during the evaluation process, in an interdisciplinary workshop there is often a fruitful direct exchange among individuals with differing perspectives already during matrix creation, and frequently a genuinely new view of the problem emerges. The rationality ethos of face-to-face discourse can also encourage stakeholder participants to engage in fact-based as opposed to interest-driven discussion of the interrelationships. Participants in CIB workshops often report that this form of interdisciplinary system reflection was perceived as a new and inspiring experience, triggered a consideration of previously unreflected-upon aspects, and was thus also understood as benefiting the participants.

Disadvantages
Attracting recognized experts for one or more workshop dates can be challenging, and difficulties in doing so can result in compromises in workshop staffing, with consequences for the quality of the results. In the worst case, deficiencies in the discussion culture can undermine the inherent strengths of workshop-based elicitation. While proponents of this elicitation method hope that the assessments will benefit from the “wisdom of the group,” i.e., that the judgmental ability of a group is higher than that of the individual participants, skeptics worry that “groupthink,” i.e., fixation on one viewpoint due to conformity pressure within the group, will result in alternatives being ignored that might be equally well or better justified than the agreed-on alternatives.26 To compensate for this possible disadvantage of workshops, moderators must be careful to address “sidelined” discussion contributions and ensure that positions are only discarded on the basis of factual arguments.
Number of Participants
The choice of the number of experts requires finding a balance between
covering the thematic range of the matrix through expert competence, reducing
subjectivity in the expert statements, and creating a fruitful discussion
atmosphere. According to studies that provide information on the number of
experts in workshops for the creation of a CIB matrix, a range of 5–40 experts is
common, whereby the work is usually divided among small groups for
participant numbers in the upper range.27 In most of the consulted studies,
participant numbers ranged from 7 to 18, with a median of 12. Fink et al. (2002)
recommended a group size of 8–13 as the optimal compromise between
groupthink risks and socialization effort.

Time Management
Estimating the time required to create a cross-impact matrix in an expert
workshop is challenging because the time needed can depend heavily on
individual project conditions. Determinants of the time required include the
proportion of zero-valued influence relationships in the matrix, how many
variants the descriptors have, and how difficult or controversial the impact
assessments prove to be for the expert panel. Nevertheless, it is unavoidable in
workshop planning to make assumptions about the working speed. Empirical
data from CIB practice can serve as a guide. However, only a small number of
published CIB studies provide information on the time required for matrix
preparation. Thus, only a few indications are available. Based on the available
data and own experience, an average working speed of approximately 1.5–7 min
per judgment section can be estimated. All judgment sections of a matrix,
including the empty ones, are included in the average calculation, so that for a
matrix with N descriptors, N(N – 1) judgment sections are always assumed. Both
the lower value and the upper value of the mentioned time interval seem to be
exceptions to the norm.28 In most cases, the working speed in the reviewed
studies was in the range of 4–5 min per judgment section. However, it is
generally true that time pressure reduces assessment quality. It can be expected
that the more time one can spend on the assessments, the higher the data quality.
These orientation values are average values and apply only if the experts
have the required interdependency knowledge at hand and merely need to
reflect briefly before applying their knowledge to the given case for coding. If
assessments must first be deduced, extrapolated from analogies, or debated,
considerably longer working times may be required for individual influence
relationships.
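
For workshop planning, the orientation values translate into a simple arithmetic estimate, sketched below in Python; the descriptor number and the minutes per judgment section are hypothetical inputs to be replaced by project-specific values.

```python
def workshop_minutes(n_descriptors, minutes_per_section=4.5):
    """Rough total assessment time for a full matrix, excluding breaks."""
    sections = n_descriptors * (n_descriptors - 1)  # N(N - 1) judgment sections
    return sections * minutes_per_section

# A 7-descriptor matrix has 7 * 6 = 42 judgment sections:
print(workshop_minutes(7))       # 189.0 minutes, i.e., roughly three hours
print(workshop_minutes(7, 7.0))  # 294.0 minutes at the slow end of the range
```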
Since the working time can only be approximately estimated and deviations
may occur, it is advisable to prepare a backup plan for use in the event of
unexpectedly long time requirements. Possible approaches are as follows:
Prearrangement of an optional additional appointment for another workshop.
Column-by-column division of the matrix into several parts and switching to
parallel processing in small groups as soon as a lack of time becomes
apparent in the course of the workshop.
Scheduling the processing of matrix parts, for which an elicitation alternative
is available if necessary (such as a literature-based elicitation), at the end of
the workshop.
However, in the interest of a uniform elicitation procedure, these approaches
should be used only when necessary.

General Recommendations
Pretest
Expert time is usually a scarce resource, and disruptions in the elicitation
process should be avoided as much as possible. The quality of the resulting data
can also be impaired if weaknesses in the elicitation concept only become
apparent during implementation. For all expert-based elicitation methods, it
therefore makes sense to conduct a pretest. This means working through the
matrix in advance on a trial basis using the chosen elicitation method with
people who are not involved in the project. In this way, deficiencies in work
instructions, process planning, or descriptor essays can be identified in advance,
and an initial project-specific estimate of the working speed can be obtained.
Combining Elicitation Methods
In addition to the use of a single elicitation method, there are numerous
instances in the literature where methods have been combined to collect cross-
impact matrices.29 Possible reasons for method combinations include:
The chosen elicitation method is the literature-based collection of cross-
impact data. However, for certain influence relationships, no literature
references can be found. Therefore, the respective relationships must be
elicited with an expert-based method.
A workshop-based creation of the matrix is planned. However, one expert
with essential expertise is unable to attend the workshop. This expert will
therefore be interviewed in advance about the parts of the matrix that are
assigned to his or her field of expertise, and the expert’s assessments and
justifications are subsequently introduced into the workshop.
The core team plans to conduct a written expert survey about the cross-impact
data. Some experts are familiar with the CIB method, while others are not.
The latter group is offered the option of being interviewed as an alternative to
completing a written survey so they can receive assistance during the
assessment procedure if needed.
In the first step, the cross-impact data are obtained by a written survey in a
group of experts. Comparison of the returned matrices reveals sufficient
agreement for many judgment sections but also substantially divergent results
for some judgment sections. In a subsequent workshop with the expert panel,
the divergent assessments are discussed, and a final assessment is developed
(see Sect. 4.5.4).
Format combinations can create potential for optimizing expert-based matrix
preparation. A possible disadvantage, however, is that the elicitation method
can affect the results: method combinations can lead to different matrix parts
being based on unequal foundations.
Iteration and Scenario Validation
For expert-based data elicitation, an iterative repetition of the procedure can be
used as a quality assurance tool. After the cross-impact matrix has been created,
the scenarios are calculated and fed back to the expert panel in a validation
workshop. The goal here is to have the experts evaluate the scenarios regarding
their plausibility. Since the scenarios were constructed from the system
perspectives of the expert panel, it is to be expected that the panel will generally
endorse the scenarios as plausible. If, however, doubts are expressed about the
plausibility of certain scenarios, there are three possible causes:
1. The criticized scenario correctly corresponds to the system view of the expert panel. However, the experts cannot mentally reconstruct the complex reasons for the form of the scenario, and thus, there is no spontaneous “recognition” of their own system view.
2. The criticized scenario is based in part on cross-impacts that do not correctly reflect the system view of the expert panel because mistakes were made during coding or because the strength relations between the cross-impacts were not set appropriately.
3. During expert elicitation, irresolvable controversial assessments may have arisen for certain influence relationships. Therefore, when creating the matrix, one of the proposals was followed, while the others were disregarded. Consequently, the scenarios may no longer be plausible for the experts whose assessments were disregarded. If instead an averaging of the controversial assessments was performed, it would be not inevitable but possible that the “compromise scenarios” would appear somewhat implausible to both sides.
First, when a scenario is criticized, it must be clarified which case applies.
This clarification is conducted by reading out from the matrix the influences for
and against the criticized scenario as well as the influences for and against the
alternative scenario that was found more plausible by the expert panel. It is also
valuable if explanatory texts for the cross-impacts are collected during matrix
creation, as these can subsequently be used for the review step (cf. Sect. 4.2).
Then, the expert panel can either trace and appreciate the reasons that led to
the construction of the scenario in its original form (case 1), identify which
influences coded in the matrix led to the misconstruction so that the
cross-impact matrix can be specifically revised (case 2), or concretely link the
plausibility critique to specific assessment controversies (case 3).
Ultimately, available resources and time constraints as well as the
willingness of the expert panel to participate determine whether an iteration step
can be envisaged. The basic process is illustrated for a fictitious example in
Nutshell VI (Fig. 6.14).

Fig. 6.14 Nutshell VI. Workflow in a scenario validation workshop

Examples
Expert workshops on data collection have been used, for example, by Schütze et
al. (2018), Lambe et al. (2018), Venjakob et al. (2017), Biß et al. (2017), Ernst et
al. (2018), Musch and von Streit (2017), Hummel (2017), Wachsmuth (2015),
Drakes et al. (2017, 2020), Mphahlele (2012), Weimer-Jehle et al. (2012), Fuchs
et al. (2008), and Aretz and Weimer-Jehle (2004). The special form of the core
team coding independently based on verbal interdependency descriptions can be
found, for example, in Schneider and Gill (2016) and Kurniawan (2018).

6.4.6 Use of Theories or Previous Research as Data Collection Sources
In certain cases, the descriptors and their variants result from the task without
the need for a separate screening. This may be the case, for example, when the
goal of the CIB analysis is to explore the interdependencies of a list of factors
already specified in another study or other part of the project (e.g., in Renn et al.,
2007, 2009) or when the descriptor field developed in another study can be
adopted (e.g., in Vögele et al., 2019; Mitchell, 2018). Other opportunities for
this type of data collection arise when the descriptors are derived from a
theoretical concept (Schmid et al., 2017; Shojachaikar, 2016) or when CIB is
used as a supplementary or validation method after a conventional scenario
process (Centre for Workforce Intelligence, 2014; Kurniawan, 2018).
When using CIB as a validation method, in the best case, the cross-impact
data can also be obtained from the process documents or the process observation
of a previous scenario project. Theory-based cross-impact data, in contrast, can
rarely be obtained. There is one example of this approach in the literature
(Shojachaikar, 2016). Furthermore, cross-impact data can also be systematically
extracted if there are quantitative descriptors in the matrix that are connected by
a mathematical relationship. In this case, the cross-impacts of the respective
descriptor columns can be determined by computation (Weimer-Jehle, 2006). An
example of this procedure can be found in Aretz and Weimer-Jehle (2004).
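
The following Python sketch illustrates the basic idea only and does not reproduce the computation procedure of Weimer-Jehle (2006): if the quantitative effects of the variants of one descriptor on a variant of another can be computed from a known mathematical relationship, they can be mapped proportionally onto the integer rating scale. All descriptor variants and effect values are hypothetical.

```python
def to_rating(effect, max_abs_effect, scale=3):
    """Map a computed effect proportionally onto the [-scale ... +scale] rating scale."""
    return round(scale * effect / max_abs_effect)

# Hypothetical computed effects of three variants of a quantitative
# descriptor X on a target variant of Y (in arbitrary model units):
effects = {"X1: low": -1.8, "X2: medium": 0.4, "X3: high": 2.7}
max_abs = max(abs(e) for e in effects.values())
ratings = {v: to_rating(e, max_abs) for v, e in effects.items()}
print(ratings)  # {'X1: low': -2, 'X2: medium': 0, 'X3: high': 3}
```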

Assessment

Advantages
The advantage of using this method of data collection is the reduced risk that subjectivity in the opinions of the core group or experts will shape or distort the analysis. The responsibility for the quality of the data collected in this way is delegated to the authors of the previous study or to the theoretical concept used. Furthermore, the amount of effort required for this form of data collection is usually low.

Disadvantages
The main disadvantage of the format is that it is rarely applicable. In addition, the orientation to preliminary research or a theoretical framework also means the renunciation of one’s own perspective and of an independent view of the problem.

Examples
Renn et al. (2007, 2009), Kemp-Benedict et al. (2014), Centre for Workforce
Intelligence (2014), Shojachaikar (2016), Hummel (2017), Schmid et al. (2017),
Kurniawan (2018), Mitchell (2018).

References
Aretz, A., & Weimer-Jehle, W. (2004). Cross Impact Methode. In Der Beitrag der deutschen
Stromwirtschaft zum europäischen Klimaschutz. Forum für Energiemodelle und energiewirtschaftliche
Systemanalyse, Hrsg, LIT-Verlag.

Ayandeban. (2016). Future scenarios facing Iran in the coming year 1395 (March 2016–March 2017) [in
Persian]. Ayandeban Iran Futures Studies. www.ayandeban.com

Biß, K., Ernst, A., Gillessen, B., Gotzens, F., Heinrichs, H., Kunz, P., Schumann, D., Shamon, H., Többen,
J., Vögele, S., & Hake, J. -F. (2017). Multimethod design for generating qualitative energy scenarios. STE
Preprint 25/2017, Forschungszentrum Jülich.

Blanchet, D. (1991). Estimating the relationship between population growth and aggregate economic
growth in developing countries: Methodological problems. In Consequences of rapid population growth in
developing countries. Taylor & Francis.

Brodecki, L., Fahl, U., Tomascheck, J., Wiesmeth, M., Gutekunst, F., Siebenlist, A., Salah, A., Baumann,
M., Brethauer, L., Horn, R., Hauser, W., Sonnberger, M., León, C., Pfenning, U., & O’Sullivan, M. (2017).
Analyse der Energie-Autarkiepotenziale für Baden-Württemberg mittels Integrierter
Energiesystemmodellierung. BWPLUS Report, State of Baden-Württemberg.

Cabrera Méndez, A. A., Puig López, G., & Valdez Alejandre, F. J. (2010). Análisis al plan nacional de
desarrollo—una visión prospectiva. XV Congreso internacional de contaduría, administración e informática,
México.

Centre for Workforce Intelligence. (2014). Scenario generation—Enhancing scenario generation and
quantification. CfWI technical paper series no. 7. See also: Willis G, Cave S, Kunc M (2018) Strategic
Workforce Planning in Healthcare: A multi-methodology approach. European Journal of Operational
Research, 267, 250–263.

Chuvychkina, I. (2017). Die Perspektiven des Energiedialoges EU-Russland—eine wissenschaftliche
Szenarioanalyse. Dissertation. Universität Bremen.

Drakes, C., Cashman, A., Kemp-Benedict, E., & Laing, T. (2020). Global to small island; a cross-scale
foresight scenario exercise. Foresight, 22(5/6), 579–598. https://doi.org/10.1108/FS-02-2020-0012
[Crossref]

Drakes, C., Laing, T., Kemp-Benedict, E., & Cashman, A. (2017). Caribbean Scenarios 2050—
CoLoCarSce Report. CERMES Technical Report No. 82.

Duden. (1994). Das Große Fremdwörterbuch. Duden-Verlag.

Enzer, S. (1980). INTERAX—An interactive model for studying future business environments.
Technological Forecasting and Social Change, 17, Part I: 141–159; Part II: 211–242.
[Crossref]

Ernst, A., Biß, K., Shamon, H., Schumann, D., & Heinrichs, H. (2018). Benefits and challenges of
participatory methods in qualitative energy scenario development. Technological Forecasting and Social
Change, 127, 245–257.
[Crossref]

Fink, A., Schlake, O., & Siebe, A. (2002). Erfolg durch Szenario-Management—Prinzip und Werkzeuge
der strategischen Vorausschau. Campus.

Förster, G. (2002). Szenarien einer liberalisierten Stromversorgung. Akademie für
Technikfolgenabschätzung.

Fuchs, G., Fahl, U., Pyka, A., Staber, U., Vögele, S., & Weimer-Jehle, W. (2008). Generating innovation
scenarios using the cross-impact methodology (Discussion-Papers Series No. 007-2008). Department of
Economics, University of Bremen.

Gordon, T. J. (1994). Cross-impact method. In The millenium project: Futures research methodology.
ISBN: 978-0981894119.

Gordon, T. J., & Hayward, H. (1968). Initial experiments with the cross impact matrix method of
forecasting. Futures, 1(2), 100–116.
[Crossref]

Honton, E. J., Stacey, G. S., & Millet, S. M. (1985). Future scenarios—The BASICS computational method,
economics and policy analysis occasional paper (Vol. 44). Batelle Columbus Division.

Hummel, E. (2017). Das komplexe Geschehen des Ernährungsverhaltens—Erfassen, Darstellen und
Analysieren mit Hilfe verschiedener Instrumente zum Umgang mit Komplexität. Dissertation, University of
Gießen.

Jenssen, T., & Weimer-Jehle, W. (2012). Mehr als die Summe der einzelnen Teile—Konsistente Szenarien
des Wärmekonsums als Reflexionsrahmen für Politik und Wissenschaft. GAIA, 21(4), 290–299.
[Crossref]

Johansen, I. (2018). Scenario modelling with morphological analysis. Technological Forecasting and
Social Change, 126, 116–125.
[Crossref]

Kalaitzi, D., Matopoulos, A., et al. (2017). Next generation technologies for networked Europe—Report on
trends and key factors. EU-Programm Mapping the path to future supply chains, NEXT-NET Project report
D2.1.

Kane, J. (1972). A primer for a new cross impact language-KSIM. Technological Forecasting and Social
Change, 4, 129–142.
[Crossref]

Kemp-Benedict, E., de Jong, W., & Pacheco, P. (2014). Forest futures: Linking global paths to local
conditions. In P. Katila, G. Galloway, W. de Jong, P. Pacheco, & G. Mery (Eds.), Forest under pressure—
Local responses to global issues. Part IV—Possible future pathways. IUFRO World Series Vol. 32.

Kosow, H., Weimer-Jehle, W., León, C. D., & Minn, F. (2022). Designing synergetic and sustainable policy
mixes—A methodology to address conflictive environmental issues. Environmental Science and Policy,
130, 36–46.
[Crossref]

Kunz, P. (2018). Discussion of methodological extensions for cross-impact-balance studies. STE preprint
01/2018, Forschungszentrum Jülich.

Kurniawan, J. H. (2018). Discovering alternative scenarios for sustainable urban transportation. In 48th
Annual conference of the Urban Affairs Association, April 4-7, 2018, Toronto, Canada

Lambe, F., Carlsen, H., Jürisoo, M., Weitz, N., Atteridge, A., Wanjiru, H., & Vulturius, G. (2018).
Understanding multi-level drivers of behaviour change—A Cross-impact Balance analysis of what
influences the adoption of improved cookstoves in Kenya (SEI working paper). Stockholm Environment
Institute.

Lee, H., & Geum, Y. (2017). Development of the scenario-based technology roadmap considering layer
heterogeneity: An approach using CIA and AHP. Technology Forecasting & Social Change, 117, 12–24.
https://doi.org/10.1016/j.techfore.2017.01.016
[Crossref]

Lloyd, E. A., & Schweizer, V. J. (2014). Objectivity and a comparison of methodological scenario
approaches for climate change research. Synthese, 191(10), 2049–2088.
[Crossref]

Meylan, G., Seidl, R., & Spoerri, A. (2013). Transitions of municipal solid waste management. Part I:
Scenarios of Swiss waste glass-packaging disposal. Resources, Conservation and Recycling, 74, 8–19.
[Crossref]

Mitchell, R. E. (2018). The human dimensions of climate risk in Africa’s low and lower-middle income
countries. Master thesis, University of Waterloo.

Mowlaei, M., Talebian, H., Talebian, S., Gharari, F., Mowlaei, Z., & Hassanpour, H. (2016). Scenario
building for Iran short-time future—Results of Iran Futures Studies Project. Finding futures in
uncertainties. In 6th International postgraduate conference, Department of Applied Social Sciences, Hong
Kong Polytechnic University, September 22-24, 2016.

Mphahlele, M. I. (2012). Interactive scenario analysis technique for forecasting E-skills development.
Dissertation. Tshwane University of Technology, Pretoria, South Africa.

Musch, A. -K., & von Streit, A. (2017). Szenarien, Zukunftswünsche, Visionen—Ergebnisse der
partizipativen Szenarienkonstruktion in der Modellregion Oberland. INOLA report no. 7, Ludwig-
Maximilians University.

Nakićenović, N., et al. (2000). Special Report on Emissions Scenarios. Report of the Intergovernmental
Panel on Climate Change (IPCC). Cambridge University Press.

National Research Council. (1986). Population growth and economic development: Policy questions.
National Academy Press.

Oviedo-Toral, L.-P., François, D. E., & Poganietz, W.-R. (2021). Challenges for energy transition in
poverty-ridden regions—The case of rural Mixteca, Mexico. Energies, 14, 2596. https://doi.org/10.3390/
en14092596
[Crossref]

Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards socio-
technical scenarios of the German energy transition—Lessons learned from integrated energy scenario
building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0
[Crossref]

Regett, A., Zeiselmair, A., Wachinger, K., & Heller, C. (2017). Merit Order Netz-Ausbau 2030. Teil 1:
Szenario-Analyse—potenzielle zukünftige Rahmenbedingungen für den Netzausbau. Project report,
Forschungsstelle für Energiewirtschaft (FfE).

Renn, O., Deuschle, J., Jäger, A., & Weimer-Jehle, W. (2007). Leitbild Nachhaltigkeit—Eine normativ-
funktionale Konzeption und ihre Umsetzung. VS-Verlag.

Renn, O., Deuschle, J., Jäger, A., & Weimer-Jehle, W. (2009). A normative-functional concept of
sustainability and its indicators. International Journal of Global Environmental Issues, 9(4), 291–317.
[Crossref]

Rhyne, R. (1974). Technological forecasting within alternative whole futures projections. Technological
Forecasting and Social Change, 6, 133–162.
[Crossref]

Saner, D., Beretta, C., Jäggi, B., Juraske, R., Stoessel, F., & Hellweg, S. (2016). FoodPrints of households.
International Journal of Life Cycle Assessment, 21, 654–663. https://doi.org/10.1007/s11367-015-0924-5
[Crossref]

Saner, D., Blumer, Y. B., Lang, D. J., & Köhler, A. (2011). Scenarios for the implementation of EU waste
legislation at national level and their consequences for emissions from municipal waste incineration.
Resources, Conservation and Recycling, 57, 67–77.
[Crossref]

Sardesai, S., Kamphues, J., Parlings, M., et al. (2018). Next generation technologies for networked Europe
—Report on future scenarios generation. EU-Programm Mapping the path to future supply chains, NEXT-
NET project report D2.2.

Schmid, E., Pechan, A., Mehnert, M., & Eisenack, K. (2017). Imagine all these futures: On heterogeneous
preferences and mental models in the German energy transition. Energy Research & Social Science, 27,
45–56. https://doi.org/10.1016/j.erss.2017.02.012
[Crossref]

Schmidt-Scheele, R. (2020). The plausibility of future scenarios. Conceptualising an unexplored criterion
in scenario planning. Transcript Independent Academic Publishing. See also: Scheele, R. (2019). Applause
for scenarios!? An explorative study of ‘plausibility’ as assessment criterion in scenario planning.
Dissertation, University of Stuttgart.

Schneider, M., & Gill, B. (2016). Biotechnology versus agroecology—Entrenchments and surprise at a
2030 forecast scenario workshop. Science and Public Policy, 43, 74–84. https://doi.org/10.1093/scipol/
scv021
[Crossref]

Schütze, M., Seidel, J., Chamorro, A., & León, C. (2018). Integrated modelling of a megacity water system
—The application of a transdisciplinary approach to the Lima metropolitan area. Journal of Hydrology.
https://doi.org/10.1016/j.jhydrol.2018.03.045

Schweizer, V. J., & Kriegler, E. (2012). Improving environmental change research with systematic
techniques for qualitative scenarios. Environmental Research Letters, 7.

Schweizer, V. J., & Kurniawan, J. H. (2016). Systematically linking qualitative elements of scenarios
across levels, scales, and sectors. Environmental Modelling & Software, 79, 322–333. https://doi.org/10.
1016/j.envsoft.2015.12.014
[Crossref]

Schweizer, V. J., & O’Neill, B. C. (2014). Systematic construction of global socioeconomic pathways using
internally consistent element combinations. Climatic Change, 122, 431–445.
[Crossref]

Shojachaikar, A. (2016). Qualitative but systematic envisioning of socio-technical transitions: Using cross-
impact balance method to construct future scenarios of transitions. In International sustainability
transitions conference, September 6-9, 2016.

Slawson, D. (2015). A qualitative cross-impact balance analysis of the hydrological impacts of land use
change on channel morphology and the provision of stream channel services. In Proceedings of the
international conference on river and stream restoration—Novel approaches to assess and rehabilitate
modified rivers. June 30–July 2, 2015, Wageningen, The Netherlands (pp. 350–354).

Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677–680.
[Crossref]

Tori, S., te Boveldt, G., Keseru, I., & Macharis, C. (2020). City-specific future urban mobility scenarios—
Determining the impacts of emerging urban mobility environments. Horizon 2020 project “Sprout”
Delivery 3.1. https://sprout-civitas.eu/

Uraiwong, P. (2013). Failure analysis of malfunction water resources project in the Northeastern Thailand
—Integrated mental models and project life cycle approach. Kochi University of Technology.

Venjakob, J., Schüver, D., & Gröne, M. -C. (2017). Leitlinie Nachhaltige Energieinfrastrukturen,
Teilprojekt Transformation und Vernetzung von Infrastrukturen. Project report “Energiewende Ruhr”,
Wuppertal Institut für Klima, Umwelt, Energie.

Vergara-Schmalbach, J. C., Fontalvo Herrera, T., & Morelos Gómez, J. (2012). Aplicación de la Planeación
por Escenarios en Unidades Académicas: Caso Programa de Administración Industrial. Escenarios, 10(1),
40–48.
[Crossref]

Vergara-Schmalbach, J. C., Fontalvo Herrera, T., & Morelos Gómez, J. (2014). La planeación por
escenarios aplicada sobre políticas urbanas: El caso del mercado central de Cartagena (Columbia). Revista
Facultad de Ciencias Económicas, XXII(1), 23–33.
[Crossref]

Vögele, S., Rübbelke, D., Govorukha, K., & Grajewski, M. (2019). Socio-technical scenarios for energy
intensive industries: The future of steel production in Germany in context of international competition and
CO2 reduction. STE Preprint 5/2017, Forschungszentrum Jülich.

von Reibnitz, U. (1987). Szenarien—Optionen für die Zukunft. McGraw-Hill.

Wachsmuth, J. (2015). Cross-sectoral integration in regional adaptation to climate change via participatory
scenario development. Climatic Change, 132, 387–400. https://doi.org/10.1007/s10584-014-1231-z
[Crossref]

Weimer-Jehle, W. (2006). Cross-impact balances: A system-theoretical approach to cross-impact analysis.


Technological Forecasting and Social Change, 73(4), 334–361.
[Crossref]

Weimer-Jehle, W. (2009). Properties of cross-impact balance analysis. arXiv:0912.5352v1.

Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T.,
Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their usage
for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/10.1016/j.
energy.2016.05.073
[Crossref]

Weimer-Jehle, W., Deuschle, J., & Rehaag, R. (2012). Familial and societal causes of juvenile obesity—A
qualitative model on obesity development and prevention in socially disadvantaged children and
adolescents. Journal of Public Health, 20(2), 111–124.
[Crossref]

Weimer-Jehle, W., Wassermann, S., & Fuchs, G. (2010). Erstellung von Energie- und Innovations-
Szenarien mit der Cross-Impact-Bilanzanalyse: Internationalisierung von Innovationsstrategien im Bereich
der Kohlekraftwerkstechnologie. 11. Symposium Energieinnovation, TU Graz, February 10–12, 2010.

Weimer-Jehle, W., Wassermann, S., & Kosow, H. (2011). Konsistente Rahmendaten für Modellierungen
und Szenariobildung im Umweltbundesamt. Gutachten für das Umweltbundesamt (UBA), UBA-Texte
20/2011, Dessau-Roßlau.

Wiek, A., Keeler, L. W., Schweizer, V., & Lang, D. J. (2013). Plausibility indications in future scenarios.
International Journal of Foresight and Innovation Policy, 9, 133–147.
[Crossref]

Zimmermann, T., Gößling-Reisemann, S., & Isenmann, R. (2017). Ermittlung von
Ressourcenschonungspotenzialen in der Nichteisenmetallindustrie durch eine Zukunftsanalyse nach der
Delphi-Methode. UBA Report, project DelphiNE.

Footnotes
1 Secondary autonomous descriptors (descriptors that only receive influences from autonomous
descriptors) and secondary passive descriptors (descriptors that only influence passive descriptors) can also
be defined. Secondary autonomous descriptors become autonomous descriptors when the original
autonomous descriptors are removed from the matrix. Similarly, secondary passive descriptors become
passive descriptors when the original passive descriptors are removed.

2 Inverse solutions occur regularly when CIB matrices consist entirely of descriptors with two variants and
all judgment sections are exchange-symmetric, i.e., show the same numbers after swapping the order of the
variants for both descriptors.

3 Three descriptors can be found in Vergara-Schmalbach et al. (2012, 2014). Forty-three descriptors were
used by Weimer-Jehle et al. (2012).

4 A method for embedding disaggregated subsystems in a CIB matrix is discussed by Kunz (2018, Chapter
III.3).

5 For the question of choosing the realistic range for the descriptor variants, see Sect. 6.2.4.

6 When using the CIB software ScenarioWizard, there is a limit of nine variants per descriptor (at the time
of printing). As the statistics box “Descriptor variant numbers” shows, this does not lead to any practical
restriction.

7 In 1966, prior to the first scientific publication on the cross-impact method, Theodore Gordon and Olaf
Helmer developed the method’s basic concept and applied it for the first time in a game entitled “Future,”
which the Kaiser Aluminum and Chemical Company then distributed as a promotional gift (Gordon, 1994).
8 E.g., Kane (1972, “KSIM”), Enzer (1980, “INTERAX”), Honton et al. (1985, “BASICS”).

9 For the consistency matrix method, see Rhyne (1974), von Reibnitz (1987), Johansen (2018).

10 An example of the interval [–5…+5] is found in Lee and Geum (2017).

11 The rating interval [–2…+2] has been used by Wachsmuth (2015), Schmid et al. (2017), and Tori et al.
(2020), among others.

12 E.g., BASICS (Honton et al., 1985).

13 MINT: A group of academic disciplines, consisting of mathematics, informatics, natural sciences, and
technology.

14 Invariance property IO-1, Weimer-Jehle (2006: 343). For a proof see Weimer-Jehle (2009), Property XI.

15 The judgment group shown above could just as well have been rated [B1 B2] = [+1 0] to avoid double
negation.

16 This basic openness does not prevent the descriptor variants from being regarded as of varying
likelihood. However, a descriptor variant with probability 0 would hardly be a meaningful element of a CIB
analysis.

17 For example, the scenario study could be commissioned by a ministry of the environment that rules out
weakening its environmental legislation as a policy option and therefore wishes to examine only the effects
of different levels of strengthening environmental legislation.

18 According to the Gaussian law of error propagation.


19 For example, Morphological Analysis, Field Anomaly Relaxation, BASICS, ScenarioManagement, and
others.

20 scw files are project files of the CIB software ScenarioWizard.

21 For a cross-impact assessment [A3 Education focus: MINT -> B3 Economic growth: high = +2], for
example, the justification “MINT education promotes economic development” would be a paraphrase,
since it merely repeats in words what the cross-impact rating already expresses on its own. In contrast, an
explanation should reveal the reasoning of the rater regarding the score; e.g., it could read “MINT-oriented
education would, in the long run, increase the available human resources for corporate R&D departments
and applied university research, thus supporting the innovation-based ‘business model’ of
Somewhereland’s export economy.” (MINT education focuses on mathematics, informatics, natural
sciences, and technology)

22 For instance, Hummel (2017), Brodecki et al. (2017, in combination with a workshop as elicitation
format), Pregger et al. (2020).

23 Schmid et al. (2017). Schweizer and O’Neill (2014) also used telephone interviews among other forms
for cross-impact elicitation.

24 Lloyd and Schweizer (2014), Chuvychkina (2017).

25 Cf. discourse rules according to Jürgen Habermas.

26 On the risk of groupthink in group-based scenario processes and the ability of CIB to avoid it, see
Lloyd and Schweizer (2014).

27 Among the CIB studies reviewed by the author in which matrices were elicited by expert workshops, 11
studies included information on the number of participants in the workshops.

28 The lower value of the interval (1.5 min per processing field) occurred, for example, in a study in which
all descriptors had only two variants, and in which the standardized judgment sections could mostly be
assumed to be internally antisymmetric, so that predominantly only one rating per judgment section was
required (Weimer-Jehle et al., 2012).
29 E.g., Weimer-Jehle et al. (2011), Meylan et al. (2013), Schweizer and O’Neill (2014), Brodecki et al.
(2017).

7. CIB at Work
Wolfgang Weimer-Jehle
ZIRIUS, University of Stuttgart, Stuttgart, Germany
Email: [email protected]

Keywords Cross-Impact Balances – CIB – Scenario – Application – Iran nuclear deal – Energy transition – Health prevention – Obesity risk – Climate change

This chapter describes CIB application practice through selected examples.
It includes systems and scenario analyses in the fields of politics, energy
supply, public health, and climate protection, thus illustrating that CIB is
not bound to specific subject areas but can be used in practice as a generic
tool for qualitative system and scenario analysis.
The selection of examples also illustrates that a CIB analysis can have
different objectives, even if the limited number of case studies presented
does not allow for completeness. The first example, which concerns the Iran
nuclear deal, represents the classical objective in which CIB is used to
illuminate the range of possible futures for a system with a limited number
of scenarios. The description of the scenario study on energy supply that
follows complements the presentation of the classic application with an
example in which CIB analysis yielded a large number of scenarios and
statistical methods were required to extract the key messages from the
portfolio. Subsequent studies on public health and climate protection show
that there can also be other objectives for CIB analysis. In the analysis of
juvenile obesity, in contrast to the classical application of the method, the
aim is not to look into the future but to understand a complex system and to
assess the possible effect of interventions without reference to time. In the
final example, on climate protection, the task for CIB was to subject
existing policy advice scenarios to a critical consistency check and to
identify possible one-sidedness.
When selecting the case studies, care was taken to consider applications
in which it was possible to monitor the success of the CIB analysis in one
way or another. Such monitoring is usually difficult, since CIB is often used
to construct long-term scenarios, and success can then “only” be measured
by whether the target audience of the study perceives the results as useful. In
the case of the selected analyses on the Iran nuclear deal, public health, and
climate protection, however, it was possible to assess the quality of the
results. The Iran nuclear deal scenarios were designed as short-term
scenarios, and their outcome statements can be compared today with
subsequent real developments. The study on obesity prevention was not
designed as a future analysis but as a system analysis, and its results could
be compared with empirical data. The CIB analysis on climate protection
was retrospective. Therefore, its results could be measured against actual
past developments. Although such verification opportunities are the
exception in CIB practice, these examples provide an impression of the
potential of CIB analysis to validly address complex real-world problems.
An extensive bibliography of published CIB studies in various subject
areas can be found at www.cross-impact.org.

7.1 Iran Nuclear Deal


The Iranian futures research institute Ayandeban1 publishes an
annual forecast for the coming year. The forecast for the period March
2016–March 2017 (the year 1395 according to the Islamic calendar) was
prepared using CIB analysis (Ayandeban, 2016). The international conflict
over Iran’s nuclear program, regional crises, and economic and domestic
problems made an analytical look into the country’s near future both urgent
and difficult. In the first step, the Ayandeban team identified 203 factors
that were considered to pose challenges for the country in the coming year
by evaluating expert statements, media analysis, and a public survey. From
this list, seven factors that were deemed particularly influential and
uncertain were selected for CIB analysis (Table 7.1).
Table 7.1 Descriptor field of the CIB analysis “Iran 1395”

(A) Economic growth Iran


(B) Disagreement among high-level governance actors
(C) Nuclear deal stability
(D) United States 2016 election results
(E) Regional conflicts with Saudi Arabia
(F) Global oil price
(G) Inflation rate Iran

The cross-impact matrix of this scenario analysis is documented in
Ayandeban (2016). The resulting scenario portfolio is depicted in Fig. 7.1.
The consistency of the scenarios decreases from the inside to the outside.
The size of the dots indicates the total impact score, i.e., the cumulative
strength of the internal stabilization forces in a scenario. The color indicates
whether the scenario was rated desirable for Iran (green), not desirable
(red), or ambivalent (yellow). Closely related scenarios are connected by
lines. White dots represent scenarios that were not considered in detail by
the authors of the Iran 1395 study.
Fig. 7.1 “Iran 1395” scenarios and their thematic core motifs. Own illustration based on Ayandeban
(2016)

From an Iranian perspective, the analysis results were alarming. Only
unfavorable or, at best, ambivalent scenarios emerged as consistent and
marginally inconsistent scenarios. Favorable scenarios were completely
absent (core and inner ring in Fig. 7.1). Moreover, it was precisely the
unfavorable scenarios that exhibited a particular inner strength. It was only
when the Ayandeban team also considered scenarios of lower consistency
and thus lower credibility (outer ring in Fig. 7.1) that they were able to
identify favorable scenarios for the country.
The actual events that followed proved that the 2016 CIB analysis was
realistic in emphasizing a pessimistic outlook. The nuclear deal quickly
came under pressure in the period that followed and was finally unilaterally
terminated by the USA. Regional conflicts remained virulent, and economic
development corresponded more to the country’s fears than its hopes.
Ultimately, the situation largely corresponded to one of the two closely
related “Red Situation” scenarios in the core segment of Fig. 7.1 and thus to
a scenario family recommended for special attention according to CIB’s
two most important scenario quality criteria: consistency value and total
impact score.
The medium- and longer-term development of the country was not the
subject of the analysis. Therefore, no statements are made about it in
these results.

7.2 Energy and Society


To curb global climate change, energy production worldwide must be
placed on a new footing. The demand for energy must be reduced through
efficiency measures, perhaps also by reducing certain energy-intensive
activities, and the remaining energy demand must be supplied largely by
carbon-neutral energy sources, such as solar energy, wind power, or
biomass. Many countries are therefore pursuing plans to reform the
production, distribution, and use of energy carriers and to contribute to
global climate protection with this “energy transition.”
However, this task is by no means a purely technical one. The energy
transition can only succeed if society is willing and able to provide the
necessary resources, if the education systems produce a sufficient number
of specialists trained to meet the needs of the energy transition, if policy-
makers set the necessary course with farsightedness, perseverance, and a
sense of proportion, and if new forms of governance and decision-making
are developed between all stakeholders in international, national, and local
politics, business and civil society. Last but not least, citizens must also
accept the changes; in the best case, they participate actively and are willing
and able to adjust their personal approach to energy use to the new
circumstances.
In short, whether and how the energy system can be successfully
reformed will also depend on how societies develop in the coming decades.
That societies will change is clear enough. Demographic change, the impact
of digitalization, changes in the world of work, an increasingly complex
international environment, changes in the values and lifestyles of the
population, the growing need to become more responsive to changing forms
of global crises—all these factors make it certain that the societies of the
future will appear different from those of today, but they also make it
uncertain how such societies will appear.
The Energy-Trans research consortium2 brought together experts in the
fields of technology, social sciences, economy, behavioral sciences, legal
sciences, and systems sciences to jointly consider the prerequisites for a
successful energy transition in Germany. The consortium formed an
interdisciplinary working group to explore the different paths that societal
change in Germany could take by 2050 and which energy consumption and
energy supply structures would then presumably emerge in each of these
hypothetical societies (Pregger et al., 2020; Vögele et al., 2013, 2017;
Weimer-Jehle et al., 2016).
In Pregger et al.’s study, 28 key areas of uncertainty in societal
development were identified and discussed among various experts. The
study examined, for example, how the education system in Germany will
develop, whether the welfare state of the future will be oriented more
toward liberalistic ideas or more toward the Scandinavian welfare models,
which trends will prevail in the media landscape, and in which direction
societal values will change. International issues were also considered, for
example, the future of the EU and global political trends. In all cases,
several conceivable trends were formulated together with domain experts,
and the mutual influences between the trends were assessed and
summarized in a cross-impact matrix. In addition, how each of these
developments would directly or indirectly affect the energy system was
discussed.
The data obtained on the essential societal future uncertainties and their
interdependencies were evaluated by CIB analysis. Due to the enormous
complexity of the question, this resulted in approximately 1700 societal
scenarios. These were located on a map using statistical methods in such a
way that similar societies were plotted adjacent to one another (a minimal
sketch of such a mapping follows below). The analysis revealed that how
German society might appear in 2050 hinges on three key uncertainties:
How willing and able are people as a society to change, or how strongly
will they try—technically, economically, and culturally—to hold on to
what they have for as long as possible?
How much do people want a strong state, or how much do people wish to
liberalize and deregulate the various subsystems of the society, such as
the world of work, the education system, the welfare state, and the economy?
How much is the understanding of a good life changing? Which goals are
individuals trying to achieve? Are these goals predominantly
materialistic, postmaterialistic, or oriented toward securing the future for
coming generations?
Only a few scenarios described societies that answered one or more of
these questions unambiguously. Mixed forms were typical for all three key
uncertainties, and the specific nature of the scenarios lay in the details of
these mixed forms.
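
A mapping of this kind can be approximated, for example, with multidimensional scaling. The following minimal sketch assumes scenarios encoded as vectors of variant indices and a simple Hamming-type distance; it only illustrates the principle and is not the statistical procedure actually used in the study:

import numpy as np
from sklearn.manifold import MDS

def scenario_map(scenarios):
    # scenarios: array of shape (n_scenarios, n_descriptors) of variant indices
    s = np.asarray(scenarios)
    # Hamming-type distance: share of descriptors in which two scenarios differ
    dist = (s[:, None, :] != s[None, :, :]).mean(axis=2)
    # Project onto a two-dimensional map so that similar scenarios lie close together
    return MDS(n_components=2, dissimilarity="precomputed").fit_transform(dist)
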
The societal scenarios contained sufficient energy-economy information
to inform an energy model and to obtain approximate estimates of how
much energy demand would arise in each scenario and what energy mix
that society would rely on. Thus, an approximate value for climate-
damaging CO2 emissions could be established for each scenario, and the
societies could be evaluated according to how compliant they would be
with national climate protection goals (Fig. 7.2).
Fig. 7.2 Map of German societies in 2050 and their CO2 emissions. Modified from Pregger et al.
(2020)

From the results, the authors of the study were able to draw conclusions
regarding the relationship between societal and energy development. The
most unfavorable types of society for climate protection were found
between the poles of “inertia” and “market,” i.e., societies that achieve
moderate economic growth but do so on the basis of outdated technical
structures. Low climate gas emissions are found in the center and in the
lower middle of the map. The lowest emissions, however, occurred for
less desirable scenarios, namely, for societies that implement an energy
transition driven by constraints under strong pressure, for example, when
they lose access to their sources of fossil fuels and whose demographic and
economic development is already impaired by such stress.3

7.3 Public Health


Overweight in children and adolescents is among the most widely discussed health
issues (BMELV/BMG, 2008). Approximately one-sixth of children in
OECD countries are overweight, and the OECD expects the figures to
continue increasing in the future (OECD, 2017). A good understanding of
the complex causes is important for effective preventive measures (Maziak
et al., 2008).
In a project within the German government’s prevention research
program, a team from the University of Hannover, the Katalyse Institute,
and the University of Stuttgart researched the social and societal
background of obesity in socially disadvantaged children and adolescents in
2009–2012. The project adopted two different approaches to gain a
comprehensive picture of the causes of juvenile obesity. First, the affected
person’s perspective was elaborated in the form of interviews with affected
young people and their relatives. Second, an expert perspective was
collected and evaluated with a CIB analysis. For the CIB analysis,
interdisciplinary expert workshops were conducted, and a qualitative risk
model was created to obtain information useful for the design of prevention
programs (Deuschle & Weimer-Jehle, 2016; Weimer-Jehle et al., 2012).
A cross-impact matrix was formulated by 18 experts from the fields of
nutrition, sport, medicine, sociology, psychology, therapy and prevention
practice, and youth culture. The necessary data were collected in four 1-day
and one 2-day workshops. The resulting cross-impact matrix contains 13
individual and 8 societal context factors. Societal context factors include,
for example, the quality of the food supply or the destructuring of everyday
family life due to work, school, or social constraints. Examples of
individual context factors are education, gender, and genetic/epigenetic
overweight disposition. In addition to these context factors, which establish
the setting for individual weight dynamics, the matrix also contains 22
individual factors, such as physical activity, media use, or parental role
models, which react to the context factors and influence one another. Figure
7.3 shows a section of the influence network. For reasons of clarity, only
strongly promoting (+) and strongly inhibiting (–) influences are shown in
the diagram.
Fig. 7.3 Section of the network of impact relations between the factors influencing the energy
balance of an individual. Own representation based on data from Weimer-Jehle et al. (2012)

For the evaluation of the influence network, different combinations of
health-promoting and overweight-promoting individual and societal context
factors were examined. In each case, it was observed whether the CIB
analysis generated exclusively weight gain scenarios, exclusively weight
loss scenarios, or mixtures of the two. Depending on the result, the cases
were sorted into three risk classes (high, low, and medium).
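
The sorting rule itself can be sketched in a few lines (all names are hypothetical; the actual evaluation in the study involved the full CIB machinery):

def risk_class(weight_tendencies):
    # weight_tendencies: the weight outcomes ("gain"/"loss") of all consistent
    # scenarios found for one combination of context factors
    outcomes = set(weight_tendencies)
    if outcomes == {"gain"}:
        return "high"    # every consistent scenario shows weight gain
    if outcomes == {"loss"}:
        return "low"     # every consistent scenario shows weight loss
    return "medium"      # mixed weight developments
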
Figure 7.4 shows the results for four cases. In each case, scenarios were
determined for approximately eight million individual case groups, and the
risk class for each case group was recorded. The shares of the risk classes in
the individual case groups are indicated (black: risk class “high,” gray: risk
class “medium,” and white: risk class “low”).

Fig. 7.4 CIB analysis of obesity risks for children and adolescents for four case examples. Data from
Weimer-Jehle et al. (2012)

The reference case (a) shows how the influence network reacts when
social and individual context factors are set randomly. In this case, a clear
weight tendency emerges only for a few context cases. For most context
cases, the result is the medium risk class; i.e., for a group of individuals
with identical context conditions, a mixed weight trend would be expected.
Matters are different if we do not set the social context randomly but align it
with the conditions in Germany (and other OECD countries) at the time of
the study. Then, there are only a few case groups with certain weight loss,
and the scenario space is dominated by context cases with certain weight
gain and those with mixed weight developments (see analysis (b) in Fig.
7.4). However, if we turn our attention specifically to groups of individuals
who enjoy favorable individual conditions, clearly obesogenic scenarios can
be largely avoided despite the unfavorable societal context, and almost all
case groups remain at the intermediate risk level (analysis (c) in Fig. 7.4).
Only if certain key societal conditions were also reformed in a health-
promoting manner could the juvenile obesity risk be safely repressed, at
least for advantaged groups of individuals (analysis (d) in Fig. 7.4).
Unlike most CIB studies, the CIB analysis of juvenile obesity risks did
not examine long-term future developments but, rather, described a current
issue and its network of causes in the form of a system analysis. This
approach opens the possibility of comparing the statements of the CIB
analysis with reality and thus verifying the validity of the analysis results.
For this purpose, the study authors used the empirical data of the project
part “affected persons’ perspective.” The individual circumstances of the
participants in the affected persons surveys were anonymously entered into
the cross-impact matrix, and the analysis result of each individual case for
the weight tendency was compared with actual findings. The CIB analysis
achieved a rate of 90% accurate assessments.

7.4 IPCC Storylines


Global climate change is considered one of the greatest challenges of our
time. Climate research tries to estimate the future extent of climate change
by model calculations so that the world can prepare itself by taking
countermeasures. However, since future climate development will depend
on how global greenhouse gas emissions continue to develop and this
development is in the hands of mankind, climate science can only make “if-
then” statements, and it must start from emission scenarios as basic
assumptions for its calculations.
The quality of these emission scenarios is thus crucial for the relevance
of the climate analyses that are based on them. Unrealistically low emission
scenarios would falsely translate to mild climate change in the climate
models and thus lead to a deceptive all-clear signal. The world would lose
important time to prepare for actual climate change and fail to take
necessary countermeasures. Unrealistically high emission scenarios, in
contrast, would cause the models to paint too bleak a picture of the climate
future and thus conjure unfounded expectations of catastrophe—with
consequences that could range from welfare-threatening overreactions to a
loss of credibility for climate science and a subsequent standstill in
preparations for actual climate change.
The task of determining a realistic range of emission scenarios is
therefore of crucial importance and by no means trivial. The task goes far
beyond climate science as a research discipline. Demographic and
economic developments in a heterogeneous world must be considered, as
well as technological innovations and political, social, and cultural changes,
because these factors will have a significant impact on energy demand and
technological change. In recognition of the fundamental importance of this
analytical task, the Intergovernmental Panel on Climate Change (IPCC),
jointly established by the United Nations Environment Programme and the World
Meteorological Organization, published a Special Report on Emission Scenarios (SRES) in
2000, in which the most important drivers of global climate gas emissions
were investigated in detail. The presumed future developments of the
drivers were discussed in expert panels and translated into 40 emissions
scenarios (Nakićenović et al., 2000). According to the “scenario axes”
method, two global future uncertainties were chosen that are likely to
strongly shape the world of tomorrow. The authors chose the key
questions (1) whether the world in the future will be oriented more toward
economic or more toward ecological paradigms and (2) whether the world
will be characterized more by global or more by regional structures. This
approach resulted in four scenario families (A1, A2, B1, B2), each containing
several scenarios. The A1 scenario family was also further subdivided into
three subgroups (Fig. 7.5). Each dot in the figure represents a scenario. Its
shade indicates the magnitude of the scenario’s cumulative global CO2
emissions over the 1990–2100 period. Light dots correspond to low-
emissions scenarios. Dark dots represent high-emissions scenarios.
Fig. 7.5 Scenario axes diagram of the forty SRES emissions scenarios. Own illustration based on
data from Nakićenović et al. (2000)

The CO2 emissions of the scenarios up to 2100 were then estimated
using energy models. Figure 7.6 shows the emissions trajectories of the
forty scenarios for the first 20 years and compares them retrospectively with
the actual emissions trajectory up to 2010 (circles).4 The actual emissions
initially ran at the upper boundary of the scenario bundle, and it can be
assumed that this course would probably have continued along the initial trend
had it not been for the exceptional occurrence of the world financial crisis
of 2007–2009 and the associated economic slump. Even so, the actual
emissions dynamics were only captured by a few outliers among the
scenarios.
Fig. 7.6 Initial phase of the emissions trajectories of the forty SRES scenarios (own illustration
based on data from Nakićenović et al. (2000) (SRES scenario emissions) and IPCC (2014) (historical
CO2 emissions))

This comparison only covers the first decade of the time horizon of the
scenarios, and the SRES scenarios have at least the merit that the actual course was
within the scenario bundle. Nevertheless, it was significant that the mass of
SRES scenarios clearly underestimated the actual emissions course because
the view of politics and the public was naturally directed above all to the
scenarios in the middle range as the supposedly “most probable” or “most
credible” scenarios.
In light of the actual emissions trajectory in the first
decade, Vanessa Schweizer and Elmar Kriegler from the Climate Decision
Making Center at Carnegie Mellon University (Pittsburgh) examined how
the internal consistency of the SRES scenarios presents itself from the
perspective of a CIB analysis (Schweizer & Kriegler, 2012). To this end,
they examined the six key drivers of the carbon intensity of energy
production.5 The guiding question of the study was which scenarios would
likely have emerged in 2000 if CIB had been used as the scenario
methodology in the SRES study. Therefore, in creating their cross-impact
matrix, Schweizer and Kriegler were careful only to use information that
was available to the team of authors of the SRES analysis.
When these data were finally evaluated using the CIB method,
surprising differences emerged between the SRES scenarios and the CIB-
based scenarios. The CIB analysis rated more than 70% of the SRES
scenarios as significantly inconsistent. Conversely, the CIB analysis
revealed numerous scenarios that were not captured in the SRES analysis.
The authors of the study noted as particularly significant that the additional
scenarios suggested by CIB were, for the most part, carbon-intensive
scenarios, precisely the type of scenarios that the SRES analysis had
identified but classified as peripheral to the possibility space. Moreover, the
higher- and high-carbon intensity scenarios in the CIB analysis were
characterized by particularly high average consistency (the key quality
measure for scenarios). In addition, the carbon-intensive scenarios were
also particularly robust in the sensitivity test, in which the authors varied
the cross-impact ratings that they considered uncertain.
Figure 7.7 shows how the SRES scenarios and the CIB scenarios are
distributed over the carbon intensity. This distribution reveals that the CIB
analysis assigned a much higher importance to the category of high-
emissions scenarios than the SRES analysis and that this (as we know
today, crucial) category is particularly intensively explored with scenarios.
The CIB analysis thus correctly recognized that the play of forces among
the drivers of carbon intensity has more opportunities for evolution toward
“high carbon worlds” than toward low-carbon futures and that the carbon-
intensive scenarios should therefore be at the center, not on the margins, of
consideration.

Fig. 7.7 Number of SRES and CIB scenarios in four classes of carbon intensity. Own illustration
based on data from Schweizer and Kriegler (2012)
CIB analysis can only classify scenarios on a coarse grid of carbon
intensity. However, it succeeded in doing this with a more accurate focus
than the SRES study because of its attention to interrelationships between
drivers. The climate simulations conducted in the years subsequent to the SRES
study would not have resulted in different estimates of the maximum and
minimum climate change if they had been based on the CIB scenarios.
However, the focus of the scenarios would have shifted significantly within
this range. Thus, had CIB been available at that time, the bulk of
climate scenarios would have provided much more serious advance warning
of the extent of impending climate change in the absence of climate policy
early in the twenty-first century than the SRES scenarios were able to do.
Vanessa Schweizer, the lead author of this study, later wondered what
difference it would have made if the SRES author team had had the CIB
analysis tool, which did not become widely available until 6 years after the
SRES report, and summarized (Schweizer, 2020): "Perhaps, global climate
policy commitments as seen in the Paris [Climate Protection] Agreement
could have materialized at the turn of the century, rather than 15–20 years
later."

References
Ayandeban. (2016). Future scenarios facing Iran in the coming year 1395 (March 2016–March 2017)
[in Persian]. Ayandeban Iran Futures Studies. www.ayandeban.com

BMELV/BMG. (2008). In Form. Der Nationale Aktionsplan zur Prävention von Fehlernährung,
Bewegungsmangel, Übergewicht und damit zusammenhängenden Krankheiten, Bundesministerium
für Ernährung, Landwirtschaft und Verbraucherschutz und Bundesministerium für Gesundheit,
Berlin, Germany.

Deuschle, J., & Weimer-Jehle, W. (2016). Übergewicht bei Kindern und Jugendlichen—Analyse
eines Gesundheitsrisikos. In L. Benighaus, O. Renn, & C. Benighaus (Eds.), Gesundheitsrisiken im
gesellschaftlichen Diskurs (pp. 66–98). EHV Academic Press.

IPCC. (2014). Climate Change 2014 – Impacts, adaptation, and vulnerability. Assessment Report 5,
Report of Working Group II. Intergovernmental Panel on Climate Change, Geneva. https://www.ipcc.ch/report/ar5/wg2. Accessed 28 March 2019.

Maziak, W., Ward, K. D., & Stockton, M. B. (2008). Childhood obesity. Are we missing the big
picture? Obesity Reviews, 9, 35–42.

Nakićenović, N., Alcamo, J., Davis, G., de Vries, B., Fenhann, J., Gaffin, S., Gregory, K., Grübler,
A., Jung, T. Y., Kram, T., La Rovere, E. L., Michaelis, L., Mori, S., Morita, T., Pepper, W., Pitcher,
H., Price, L., Riahi, K., Roehrl, A., Rogner, H.-H., Sankovski, A., Schlesinger, M., Shukla, P., Smith,
S., Swart, R., van Rooijen, S., Victor, N., & Dadi, Z. (2000). Special report on emissions scenarios.
Report of the Intergovernmental Panel on Climate Change (IPCC). Cambridge University Press.

OECD. (2017). Obesity update 2017. Organisation for Economic Co-operation and Development.
www.oecd.org/health/obesity-update.htm

Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards
socio-technical scenarios of the German energy transition—lessons learned from integrated energy
scenario building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0

Schippl, J., Grunwald, A., & Renn, O. (Eds.). (2017). Die Energiewende verstehen – orientieren –
gestalten. Erkenntnisse aus der Helmholtz-Allianz ENERGY-TRANS. Nomos Verlag.

Schweizer, V. J. (2020). Reflections on cross-impact balances, a systematic method constructing
global socio-technical scenarios for climate change research. Climatic Change, 162, 1705–1722.

Schweizer, V. J., & Kriegler, E. (2012). Improving environmental change research with systematic
techniques for qualitative scenarios. Environmental Research Letters, 7(4), 044011.

Vögele, S., Hansen, P., Kuckshinrichs, W., Schürmann, K., Schenk, O., Pesch, T., Heinrichs, H., &
Markewitz, P. (2013). Konsistente Zukunftsbilder im Rahmen von Energieszenarien. STE Research
Report 3/2013, Forschungszentrum Jülich.

Vögele, S., Hansen, P., Poganietz, W.-R., Prehofer, S., & Weimer-Jehle, W. (2017). Scenarios for
energy consumption of private households in Germany using a multi-level cross-impact balance
approach. Energy, 120, 937–946. https://doi.org/10.1016/j.energy.2016.12.001

Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T.,
Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their
usage for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/10.1016/j.energy.2016.05.073

Weimer-Jehle, W., Deuschle, J., & Rehaag, R. (2012). Familial and societal causes of juvenile
obesity—a qualitative model on obesity development and prevention in socially disadvantaged
children and adolescents. Journal of Public Health, 20(2), 111–124.

Footnotes
1 http://www.ayandeban.com.

2 Helmholtz-Allianz "ENERGY-TRANS" (2011–2016). For more information, see Schippl et al. (2017).
3 Pregger et al. (2020), Supplementary Materials.

4 Fossil and other emissions.

5 Based on the Kaya Identity.



8. Reflections on CIB
Wolfgang Weimer-Jehle
ZIRIUS, University of Stuttgart, Stuttgart, Germany
Email: [email protected]

Keywords Cross-Impact Balances – CIB – Scenario – Interpretation – Strength – Challenge – Limitation – Qualitative method – Semiquantitative method – Unsuitable use case – Alternative method

This chapter summarizes the interpretations, strengths, challenges, and
limitations of CIB analysis. For readers interested primarily in the method's
technical implementation, these reflections are of less importance and can
be disregarded. However, these considerations help develop a more
thorough understanding of CIB, which is key to determining whether CIB
can be profitably applied to a particular analysis task and, if so, being able
to soundly interpret the results of the analysis.

8.1 Interpretations
Technically, the CIB algorithm uses cross-impact data to provide a
consistency evaluation of a proposed combination of descriptor variants.
The relevance of CIB analysis and its results for decision-making arises
from the interpretations we assign to the descriptors, descriptor variants,
cross-impact data, consistency criterion, and scenarios derived from the
evaluation. Since the variety of usage types of CIB analysis requires case-
specific interpretations, three different interpretations will be discussed
below, each appropriate for a particular usage type.

8.1.1 Interpretation I (Time-Related): CIB in Scenario Analysis
The predominant use of CIB in current practice aims at creating future
scenarios, whereby a scenario is understood as a combination of trends or
end states (implying trends) from different areas. The CIB algorithm then
aims to select scenarios in which each trend (i.e., each active descriptor
variant) is—on balance—promoted as strongly as possible by the other
trends and thus stabilized. Thus, a CIB scenario is defined as follows:

A CIB scenario describes a self-stabilizing (self-reinforcing) combination of trend
developments (optionally expressed in terms of a combination of end states). (M16)

This “mechanical” property of self-stabilization corresponds to the
formal property of consistency, which in turn is a precondition for the
plausibility of scenarios.1
CIB does not make any direct statement about the probability of
occurrence for a scenario. Instead, it claims that consistent scenarios, once
they have occurred, have a particularly good chance of lasting as a trend
combination for a significant period of time due to their self-stabilizing
capability. In contrast, from the CIB perspective, it can be expected that
inconsistent trends will soon change, mostly with consequences for other
trends, so that an inconsistent trend pattern will soon dissipate step by step.
In this interpretation, CIB generates an indirect qualitative statement
about the probability of the scenarios in the sense that consistent trend
combinations, once realized, are, as a rule, more stable than inconsistent
trend combinations. Thus, we should expect to find consistent (self-
stabilizing) trend combinations more frequently in reality than inconsistent
ones.2 Conversely, inconsistent trend combinations are not meaningless in
this interpretation. Instead, they are plausible as transient states of the
system. The system passes through them quickly until it has found a stable
(consistent) trend combination again, which can last for a longer period of
time, that is, until the occurrence of trend-breaking events or until a change
in the rules of the system.
Thus, the fundamental peculiarity of the probability interpretation in
CIB is that CIB, at least in its basic form, does not make a statement about
the probability of occurrence of a scenario. However, it does make a
qualitative statement about the probability of persistence of a scenario that
has already occurred.

8.1.2 Interpretation II (Unrelated to Time): CIB in Steady-State Systems Analysis
CIB can also be used to analyze systems for which the research focus is not
their temporal development but the question of which states a system can
assume and how a system reacts to external interventions. Such analysis
adopts a steady-state systems-analytic perspective in which time as a
category plays only a peripheral role. An example of a steady-state systems-
analytic CIB application is the analysis of the network of causes of juvenile
obesity described in Sect. 7.3.
Applications of this type obviously require different interpretations of
the descriptors, descriptor variants, cross-impacts, and evaluation results
than a future analysis does. Here, we understand the cross-impacts as the
system forces exerted on one another by the network nodes of the system
and the solutions of the cross-impact matrix as system states in which the
system forces are in equilibrium, bringing the system (temporarily) to rest.
This understanding leads to the following interpretation of the consistent
scenarios:

In a steady-state system analysis, CIB examines qualitative (discrete-valued) networks,
and the results of the CIB analysis describe stable network configurations. (M17)

8.1.3 Interpretation III: CIB in Policy Design


The consistency criterion in CIB requires for each descriptor of a scenario
that the impact sum of its active variant is not exceeded by the impact sums
of the nonactive variants of the same descriptor (cf. Sect. 3.4). In other
words, for each descriptor, an optimal choice has been made with respect to
the impact sum, with the active variants of the other descriptors determining
which choice is optimal.
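
Expressed in Python-like form, the criterion is compact. The sketch below assumes a hypothetical data layout in which cim[i][j][u][v] holds the rating of variant u of descriptor i on variant v of descriptor j; names and structure are illustrative and not taken from any CIB software:

def impact_sums(cim, scenario, j, n_variants_j):
    # Impact balance of descriptor j: each variant of j collects the ratings
    # it receives from the active variants of all other descriptors.
    sums = [0] * n_variants_j
    for i, active_i in enumerate(scenario):
        if i == j:
            continue  # a descriptor does not rate itself
        for v in range(n_variants_j):
            sums[v] += cim[i][j][active_i][v]
    return sums

def is_consistent(cim, scenario, n_variants):
    # A scenario (list of active variant indices) is consistent if, for every
    # descriptor, no inactive variant attains a higher impact sum than the
    # active variant.
    for j, active_j in enumerate(scenario):
        sums = impact_sums(cim, scenario, j, n_variants[j])
        if max(sums) > sums[active_j]:
            return False
    return True
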
This design of the consistency criterion suggests a game-theoretical
analogy of CIB analysis (cf. Appendix). In game theory, actors (“players”)
are considered, each of whom has an individual set of optional game
strategies at his or her disposal. The success of these strategies in terms of
payoffs or losses depends on which strategies the other players choose. The
players’ search for their optimal strategy comes to an end, and the mix of
player strategies becomes stable when no player can increase his or her own
payoff by unilaterally changing his or her strategy (Nash equilibrium, Nash,
1951).
This analogy enables the application of CIB to determine consistent
policy-mix designs in the presence of conflicting objectives. The
descriptors represent the objectives of the actors in a strategic field. The
descriptor variants represent the possible strategies (“policies”) of the
actors, and the cross-impacts describe how the policy choice for one
objective would improve or diminish the prospects of the policy options for
another objective. If these improvements or degradations can be added to
the overall success of a policy, the CIB scenarios accurately describe the
Nash equilibria of the actors’ policy choices. Thus,

CIB scenarios of actors’ policy choices describe in which policy mixes all actors have
made an individually optimal policy choice to achieve their goals, so that no actor within
the policy mix has an immediate motive to change his or her strategy. (M18)

The use of CIB to develop consistent policy mixes is a new and
developing application field of the CIB method. A detailed description is
provided by Kosow et al. (2022).
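
The analogy can be made concrete in code. The sketch below restates the consistency test of Sect. 8.1.3 in game-theoretical vocabulary; payoff_of is a hypothetical helper returning the impact sum a “player” (descriptor) j would obtain with a given variant, the other players’ choices held fixed:

def is_nash_equilibrium(payoff_of, scenario, n_variants):
    # Nash condition: no player can raise his or her payoff (impact sum)
    # by unilaterally switching to another strategy (descriptor variant).
    for j, active_j in enumerate(scenario):
        best = max(payoff_of(j, v, scenario) for v in range(n_variants[j]))
        if payoff_of(j, active_j, scenario) < best:
            return False
    return True

With payoff_of(j, v, scenario) defined as the impact sum of variant v of descriptor j, this test coincides with the consistency check sketched above.
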

8.1.4 Classification of CIB as a Qualitative-Semiquantitative Method of Analysis
The classification of CIB in the qualitative to quantitative method spectrum
requires a differentiated view, since the different phases of method
implementation have a different character. Since CIB aims to provide an
analysis framework for Sector II problems (Fig. 2.2), it must be judged
primarily by how well it meets the requirements of this problem type.
The primary requirement of this problem type is that qualitative
information about a system can be taken up and qualitative scenarios can be
created on this basis. This requirement is fulfilled by CIB. The initial data
for creating a cross-impact matrix are (usually) qualitative (textual)
statements about system interrelationships, and the result of CIB analysis is
a portfolio of qualitative scenarios. While numbers may be introduced
in the definition of descriptor variants, which then become part of the
scenario descriptions, these quantitative statements are only optional and,
when provided, are descriptive and indicative only. They are in no way used
by the scenario construction algorithm. Thus, as far as the actual input and
output of CIB is concerned, it would have to be classified as a qualitative
method. The fact that its central result, the portfolio, can optionally be made
the subject of quantitative secondary analyses, as described in this book,
does not affect the classification of the method because the secondary
analysis of the portfolio is not part of the CIB method. Rather, the
secondary analysis, if conducted, is the second part of a method
combination.
However, for the classification of the method, it is also relevant how the
processing of the input to the output is designed, i.e., in
which form qualitative input enters the analysis as data and which type of
“mechanical reasoning” is applied in the data processing. CIB starts from a
qualitative system model consisting of the textual-verbal discussions of
descriptor interdependencies. After their coding on the cross-impact rating
scale, a cross-impact matrix filled with small integers emerges as a
semiquantitative representation of the qualitative system model (Fig. 8.1).
This matrix is the starting point for the evaluation algorithm.
Fig. 8.1 Qualitative system model and its semiquantitative representation
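
For illustration, the coding step can be pictured as a simple lookup from verbal judgments to small integers; the verbal anchors and the scale bounds [−3 … +3] below are one common choice among several and serve only as a hypothetical example:

RATING_SCALE = {
    "strongly restricting": -3,
    "restricting": -2,
    "weakly restricting": -1,
    "no influence": 0,
    "weakly promoting": +1,
    "promoting": +2,
    "strongly promoting": +3,
}

# Coding a textual judgment then reduces to a lookup, e.g.:
# cim[i][j][u][v] = RATING_SCALE["weakly promoting"]
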

In the data processing step (the consistency algorithm), CIB adds
small integers and compares integer sums. However, small integers are
objects in the gray area between qualitative and quantitative objects. This
status is underlined by the fact that small integers and integer operations as
used in CIB can be formulated by fully verbal descriptions and process
instructions without using numbers (cf. Appendix, Table A.1). In this
respect, small integers and integer operations are categorically different
from fully quantified (real-valued) numbers and operations. Consequently,
for the core of the CIB method, the cross-impact data and the CIB
algorithm, the term “semiquantitative” is an appropriate designation, and
for the overall process of CIB analysis, the term “qualitative-
semiquantitative” is appropriate (Fig. 8.2).

Fig. 8.2 Comparative classification of CIB as a qualitative-semiquantitative analysis method

In contrast, common mathematical modeling methods, such as system
dynamics (SD), and traditional cross-impact methods, such as KSIM (Kane,
1972) and BASICS (Honton et al., 1985), belong to the genuinely
quantitative level, at least in their data processing, which operates on
real-valued variables and operations (cf. Sect. 8.5).

CIB occupies a special position in the spectrum of analysis methods, since it enables a
causal system analysis based on qualitative system descriptions without entering the fully
quantified domain of real-valued variables and operations. (M19)

Operating completely on the qualitative level has thus far been achieved
only by methods that avoid the use of causal information, at the price of
lower system-analytical power (for example, the consistency matrix, cf.
Sect. 8.5).
In summary, CIB also requires experts (serving as sources of qualitative
knowledge) to take a step out of the purely qualitative realm by coding their
knowledge on the interval-scaled cross-impact scale, either directly or as
mediated by the core team. The resulting data, in the form of small integers,
are then further processed by the method (appropriate to the nature of the
data) at a moderate level of quantification. The realm of full quantification
(real numbers and real number operations) is not entered by CIB at any
point in the procedure. Thus, CIB cannot be classified as a fully qualitative
scenario and analysis method. However, it comes closest to this ideal type
of all customary causal-system-analytic scenario methods. It comes even
closer to this ideal if it is applied without strength evaluation in the cross-
impact rating (i.e., using the rating scale [−1 … +1]), a methodological
option that is possible but not common.

8.2 Strengths of CIB


Like any method, CIB has specific capabilities and strengths. Yet it also
faces challenges, limitations, and methodological difficulties. Both sides
must be kept in mind when deciding on a method application, the research
design, and the interpretation of the results.3
of the specific strengths of the CIB method.

8.2.1 Scenario Quality


The quality of the internal logic of a scenario is already by definition a
prerequisite for qualifying a future narrative as a scenario (cf. Sect. 2.1). In
the literature, it is often observed as an argument in favor of CIB that its
scenarios exhibit high narrative coherence and consistency (for example,
Girod et al., 2009; Schweizer & Kriegler, 2012; Weimer-Jehle et al., 2016)
and that CIB is capable of establishing internal consistency even for
complex scenarios (Lloyd & Schweizer, 2014: 2064). With its consistency
check, CIB offers assistance from which, according to Kemp-Benedict,
even the best scenario teams can benefit (Kemp-Benedict, 2012:2).

8.2.2 Traceability of the Scenario Consistency


However, high scenario quality is of limited use if it only proves itself in
opaque calculations and is not independently auditable for the target
audience of the scenario analysis. Without authentic plausibility perception
by the users, the chances of the scenarios being accepted as a basis for
decision-making are low. It is therefore an advantage that CIB establishes
scenario consistency in a way that can be understood by nonexperts
(Wachsmuth, 2015). Although scenario calculations in CIB usually must be
software-based due to time considerations, any scenario presented as a
solution by the software can be validated in terms of the CIB consistency
criterion without effort and without requiring special mathematical
knowledge. In this way, reservations against a software-supported creation
of scenarios can be reduced. By means of the impact balances of the
scenarios, it can always be easily understood how each individual piece of
knowledge documented in the matrix affects the composition of the
scenarios or why it does not do so if other, predominant influences point in
a different direction. In the same way, it always remains comprehensible for
the expert participants of a CIB analysis that all collected pieces of
knowledge were considered in scenario construction and had an appropriate
chance of making a difference.

8.2.3 Reproducibility and Revisability


Especially in science, the reproducibility of results is an essential quality
criterion. The formalization of scenario construction, as occurs in CIB and
other algorithmic scenario methods, such as the consistency matrix, meets
this requirement. If the cross-impact data are documented along with the
scenarios, any person competent in the method can repeat the evaluation at
any later time and verify the technical correctness of the scenarios. If the
person seeking to reproduce an evaluation has a partially different system
view, he or she can modify the matrix in parts, recalculate the scenarios and
compare the outcome to the original results. This is not possible with
discussion-based scenario methods.
Furthermore, in the event of a subsequent change in the assessments of
certain descriptors and contexts (for example, as a result of newly acquired
information), the scenario generation can be revised and updated without
great effort by revising the relevant parts of the cross-impact matrix and
repeating the scenario calculation. Nonformalized scenario exercises, in
contrast, would have to be completely repeated at much greater expense of
effort if essential assessments need to be updated.

8.2.4 Complete Screening of the Scenario Space


While nonformal scenario methods rely on intuitive recognition of essential
scenarios—with important alternatives easily overlooked—CIB
systematically reviews the entire scenario space and thus has a greater
potential to discover unexpected futures (Lloyd & Schweizer, 2014;
Schweizer, 2020). Specifically, preparation for the future in the context of
risk analysis relies on not only extending ad hoc intuition (which is also
possible in a well-managed intuitive scenario process) but also
fundamentally moving beyond ad hoc intuition.4 Several scenario studies
are available in which portfolios were compared that were created on the
same topic with the intuitive scenario method IL (Intuitive Logics) and with CIB. In all cases,
CIB found consistent scenarios that were not identified in the intuitive
scenario construction (Schweizer & Kriegler, 2012; CfWI, 2014;
Kurniawan, 2018).
The ability to produce large scenario sets when appropriate to the nature
of the system under study contributes to the potential of CIB to explore
scenario spaces more fully than discursive scenario methods.5 Discursive
scenario methods can produce only small scenario sets, regardless of the
possibly higher multiplicity of future options for a system. For systems with
many distinct possible futures, this limitation inevitably leads to an
incomplete representation of the future space.
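
The complete screening itself can be pictured as a brute-force enumeration of the combinatorial space. The naive sketch below reuses the is_consistent function sketched in Sect. 8.1.3; production CIB software employs far more efficient search strategies:

from itertools import product

def screen_scenario_space(cim, n_variants):
    # Enumerate every combination of descriptor variants and keep those
    # that pass the CIB consistency test.
    consistent = []
    for scenario in product(*(range(k) for k in n_variants)):
        if is_consistent(cim, list(scenario), n_variants):
            consistent.append(scenario)
    return consistent
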

8.2.5 Causal Models


CIB provides not only bare scenarios but also, in the form of
the cross-impact matrix, a qualitative and semiquantitative causal model of
the system under study that can explain the causal relationships in each
scenario. This ability enriches the scenario storylines and makes them easier
to understand and more amenable to critical review. The matrix or, in the
case of dissent, the matrix ensemble represents, assuming a valid elicitation
process, the mental reality models of the experts whose knowledge,
directly or via publications, was used to construct the cross-impact
matrix. Thus, even apart from its use as a database for scenario building, a
cross-impact matrix represents a valuable product in its own right. Causal
models in CIB are generally not monocausal; rather, and in line with reality,
they are multicausal. That is, a particular development usually does not
occur as the result of the influence of a single other development but is
generally caused by several parallel influences that prevail over conflicting
influences.
The foundation of CIB scenarios on a causal model is the deeper reason
why CIB analysis allows for numerous additional analyses in addition to
constructing the portfolio as the primary product, such as the identification
of the weak points of scenario stability by the consistency profile (cf. Sect.
3.6.2), the intervention analysis (Sect. 4.4), or the study of critical
phenomena (“tipping points,” cf. Sect. 5.4.2). These additional analyses
enable additional insights into the system under investigation.

8.2.6 Knowledge Integration and Inter- and Transdisciplinary Learning
Knowledge stocks from very different knowledge domains and expressed in
very different knowledge forms can contribute to a CIB analysis, including
knowledge stocks that are formulated qualitatively. The exercise of
examining the descriptor interrelationships relates the different bodies of
knowledge to one another, and the “metalanguage” of cross-impact ratings
can be used to discuss these interrelationships across disciplines (Weimer-
Jehle, 2015). Importantly, knowledge holders who do not have modeling
experience can also participate in a CIB analysis (Kemp-Benedict, 2015).
Transdisciplinary research designs pose an even greater challenge to the
accessibility of an analysis method than interdisciplinary work. They
require particularly careful and target group-adapted preparation. However,
experience shows that this is possible with CIB and that this method can
develop special opportunities in bringing together inter- and
transdisciplinary actors and supporting them in developing a shared and
consistent picture of the future despite their different perspectives
(Venjakob et al., 2017). CIB can thus be used for participatory modeling
provided the goal is to develop a conceptual model rather than a
quantitative simulation.
In this way, CIB acts as a catalyst for cross-disciplinary exchange and as
a discourse-structuring communication procedure and can promote genuine
knowledge integration (Weimer-Jehle, 2015; Prehofer et al., 2021). During
the creation of the cross-impact matrix, the discussion of the pair
relationships often also raises novel interdisciplinary questions, for the
assessment of which the subject experts involved must enter “new
territory.” This introduces new perspectives to the disciplines involved and
identifies gaps in knowledge and the need for further research.
8.2.7 Objectivity
Subjectivity is a natural component of scenarios when they are designed to
express stakeholder visions. However, it is a detriment to quality when fact-
based analysis is needed. Scenario construction that is entirely free of
subjective elements would be an unrealistic goal, regardless of the
construction method. However, reducing subjectivity and increasing
objectivity in scenario construction is a reasonable goal, and achieving this
goal through CIB is a recurring argument in the literature for the use of this
method (Lloyd & Schweizer, 2014; Wachsmuth, 2015; Carlsen et al., 2017).
It is argued that CIB, in concert with suitable supplementary methods, can
help scenario authors “be more scientific and neutral” (Carlsen et al., 2017)
by providing well-defined step-by-step procedures, greater transparency,
and the recognition and systematic exploration of uncertainties and that “…
[f]rom a purely philosophical perspective, the CIB method clearly promotes
an increase of objectivity…” (Lloyd & Schweizer, 2014: 2085) compared
with intuition-based methods. One way in which subjectivity can manifest
in narratives is through wishful thinking, and several authors note the
ability of CIB to limit this problem (Musch & von Streit, 2017; Schweizer,
2020).

8.2.8 Scenario Criticism


Scenarios are used by organizations as an instrument for preparing for the
future. One peril here is that the scenarios are inevitably shaped by the
specific world views, expectations, and convictions that dominate in an
organization, and thus a genuine “stepping out” of one’s own world of
thought and an unbiased view of the future is only possible to a limited
extent—with potentially serious consequences for an organization’s ability
to anticipate structurally different futures and adapt to disruption.
To overcome narrow perspectives (not only in looking at the
future), applying the concept of poststructuralism is recommended in
the humanities and social sciences.6 In it, “texts” in the most general sense
are broken down into their constituent parts (“deconstruction”), and the
supposed “self-evident” assumptions behind them are identified; the latter
are confronted by countertheses, and then, the constituent parts are
reassembled (“reconstruction,” Inayatullah, 1990, 1998). The potential
usefulness of a poststructuralist perspective in scenario building has been
discussed for a long time, but its feasibility has been skeptically assessed
thus far.
CIB opens up new opportunities here and can be understood as a
method of practicing poststructuralism in scenario analysis (Scheele et al.,
2018; Schweizer et al., 2018). CIB’s ability to deconstruct existing
scenarios by challenging their latent underlying cross-impact matrix and
then reconstructing a modified future space by rerunning the modified
matrix translates the poststructuralist program into a feasible procedure and
can lead to previously overlooked scenarios (Schweizer & Kriegler, 2012;
CfWI, 2014; Schweizer & O’Neill, 2014; Kurniawan, 2018).

8.3 Challenges and Limitations


Despite the strengths of the CIB method, it is important not to lose sight of
the difficulties that CIB can cause during application and its methodological
limitations. These challenges are mainly related to the time the method
requires, data uncertainties and the specific perspective of the method in
conceptualizing systems as qualitative networks.

8.3.1 Time Resources


A solid CIB analysis requires significant time resources. The selection of
descriptors and their variants and, especially, the compilation of cross-
impact data, usually require substantial research and expertise as well as
thorough documentation. Subsequent data evaluation, in contrast, usually
consumes only a small part of the total time needed. The insight gained
from “quick-and-dirty” analyses is typically very limited and far from what
a thorough analysis would provide.
Participants inexperienced in the CIB method are therefore occasionally
surprised by the effort involved, especially in matrix creation, and are
tempted to perceive this effort as disproportionate. However, it must be
asked how a sound picture of the future should be arrived at without fully
reflecting system relationships. There is no way around recognizing that for
a scenario consisting of N factors, N(N-1) influence relationships are
potentially relevant; for a modest matrix with N = 10 descriptors, this already amounts to 90 relationships. To do less than consider all N(N-1) influence relationships one by one, at least discussing them qualitatively, necessarily means not taking a full view of the system under study and creating the scenarios partially “blind”. Thus, the reason for the
considerable time involved is the nature of the task, not the way CIB
approaches it. CIB is merely the “messenger” delivering this unpleasant
truth.7

8.3.2 Aggregation Level and Limited Descriptor Number


CIB analysis takes place on a highly aggregated level, since the number of descriptors must be kept small in order to limit the data collection effort. This
also often forces one to adopt a highly generalized perspective when
considering the influence relationships, which can be perceived as
unsatisfactory by domain experts. There is little that can be done to remedy
this problem. Although it would be methodologically manageable for CIB
to process matrices of almost any size,8 the effort required to create very
large matrices is impractical for many projects. In the end, CIB is a
methodological option that offers the advantage of system-analytical
processing of qualitative information without taking the step into genuine
quantitative modeling. However, for practical reasons, it is difficult for CIB
to do this in a highly differentiated way.

8.3.3 System Boundary


Conceptually, CIB assumes that the descriptor field forms a closed network
of influences. In other words, the behavior of all descriptors should be
explained by the influences of the other descriptors (with the exception of
autonomous descriptors). Thus, the descriptors form a closed “arena” in
which the descriptors “negotiate” their states by means of their interactions.
In fact, however, in CIB practice, we regularly face the problem that no
completely closed system of descriptors can be achieved. If we are
interested in understanding the behavior of certain target descriptors, the
first thing we must do is to include all of their important drivers in the list
of descriptors. Then, however, the behavior of the drivers must also be
explained, and therefore, the drivers of the drivers would also have to be
included in the descriptor field, and so on. This is obviously impractical, and closing the influence network is therefore a desirable but practically
unattainable ideal. It nevertheless represents a necessary, albeit
counterfactual, working hypothesis for CIB, as for other forms of systems
analysis.

8.3.4 Limits to the Completeness of Future Exploration


CIB selects scenarios from the combinatorial space of descriptor variants.
The experts who define the descriptor variants thereby also define the
boundaries of what CIB can explore as a possibility space. If the experts set
too narrow a range of descriptor variants, this narrowness inevitably
translates to the CIB scenarios, and relevant future developments can be
overlooked. The CIB algorithm does not define the boundaries of the
considered possibility space and does not check the appropriateness of the
range of descriptor variants. It merely explores the given possibility space
but does so completely while searching for “islands of consistency” in it.
Thus, the ability of CIB to generate surprise scenarios does not arise
because the algorithm could present variants for individual descriptors in
the scenarios that step outside the given spectrum of descriptor variants,
which is impossible. Rather, CIB’s ability to generate unexpected scenarios,
which has been demonstrated many times in practice (including by
Schweizer & Kriegler, 2012; CfWI, 2014; Schneider & Gill, 2016, and
Kurniawan, 2018), is due to CIB’s ability to find surprising ways to
combine the supplied descriptor variants into consistent arrangements.

8.3.5 Discrete-Valued Descriptors and Scenarios


CIB works with discrete-valued descriptors. Thus, it always leads to rather
coarsely graded scenarios because the scenarios cannot represent fine
differentiations due to the intentionally small number of variants per
descriptor. CIB scenarios are designed to identify futures of clearly different
quality and thus to reveal fundamental alternatives for system development.
Discrete-valued descriptors and scenarios are sufficient for this task. CIB is
generally not a suitable tool for tasks that require fine-grained consideration
of the transitional forms between the fundamental future alternatives.

8.3.6 Trend Stability Assumption


In scenario building, CIB identifies self-stabilizing bundles of trends and
interprets them as trend scenarios. This approach assumes that the trends in
the various descriptors persist sufficiently long to exert effects on the other
descriptors. CIB thus implicitly assumes systems in which the interacting
trends in the different areas (apart from the short transient phase after
disturbances or after changes to the system architecture) are in dynamic
equilibrium for a longer period of time, resulting in steady system
development. In contrast, systems whose temporal development will
foreseeably be characterized by permanently changing trends are difficult to
describe meaningfully by CIB, and CIB analysis, at least in its basic form,
can contribute little to the understanding of such systems.9 However,
preliminary considerations regarding the capture of changing trend
situations with CIB analyses have been published (e.g., Vögele et al., 2019).

8.3.7 Uncertainty and Residual Subjectivity in Data Elicitation


Assessing cross-impacts can be challenging for complex or little-studied
interrelationships. This can lead to expert mistakes and expert dissent. It can
also occur that it only becomes apparent during matrix creation that there is
no reliable knowledge about certain interrelationships and that only
speculation is possible. In this case, it is an (imperfect) option to express the lack of knowledge by (i) zeroing out the unclear relationships, (ii) using several matrix variants for different assumptions and clearly documenting the speculative component, or, in the worst case, (iii) terminating the analysis if too many and too substantial knowledge gaps become apparent.
Although, as previously stated, CIB increases the objectivity of scenario
generation compared to traditional intuitive methods, there remains a
certain degree of subjectivity in the choice of descriptors, the choice of their
variants, in the cross-impact ratings, and (in the case of large portfolios) in
the selection of scenarios. In assigning cross-impact ratings, another
difficulty is that there is no absolute scale for strength ratings, and relative
strength ratings are required. This difficulty can easily lead to controversial
assessments.

8.3.8 Context-Sensitive Influences


As discussed in Sect. 5.7, CIB usually assumes that system
interrelationships consist of a network of bilateral influence relationships,
i.e., that the impact of A on B depends only on the state of A and the state
of B. When an influence is context sensitive, i.e., its strength or even its
sign depends on the state of a third descriptor, the influence relationship can
no longer be expressed by a simple cross-impact value, and special
procedures are required to correctly account for the relationship in the
analysis, as described in Sect. 5.7. Context-sensitive influence relationships
therefore always represent a complication in CIB. A system with many
context-sensitive relationships may therefore be unsuitable for CIB
analysis.
8.3.9 Consistency as a Principle of Scenario Design
CIB may exclude scenarios for good reasons if they assume internally
inconsistent descriptor variant combinations. Nevertheless, discussion of
such scenarios can be fruitful in developing visions of the future, and their
exclusion for formal reasons can narrow the cognitive range of a discourse
on the future (Musch & von Streit, 2017). To avoid this negative effect on a
vision discourse, participants can be encouraged to formulate visions
beyond consistency considerations, and CIB can then be used to identify
inconsistencies within visions. The identified inconsistencies can be
interpreted as barriers to implementation, and CIB can thus contribute to
constructive commentary on such visions (Weimer-Jehle et al. 2011: 31).
Furthermore, inconsistent scenarios can also function as transition states
for a system (cf. Sect. 8.1, Section “Interpretation I”), and their
consideration can help one understand how systems move from one stable
state to another. This type of consideration goes beyond the focus on
consistent scenarios currently prevalent in CIB practice.

8.3.10 Critical Role of Methods Expertise


A lack of methodological expertise can severely compromise the quality of
a CIB analysis by causing inappropriate selection of descriptors and
variants, the elicitation of cross-impact data with invalid procedures (or a
failure to interpret them in a method-compliant manner), or inadequate
analysis and interpretation of results. Expertise is generally required with
any method. However, CIB analysis is particularly sensitive because the
method is only fully standardized in the core step (data evaluation), whereas
the other steps (data elicitation, analysis, and interpretation of results) have
thus far been less standardized. As a result, method application is more
dependent on the expertise of the core team than is the case with thoroughly
standardized methods.
For good expert-based data elicitation, it is also essential that not only
the core team but also the participating experts receive an introduction to
the method tailored to their role, enabling them to understand the
significance and impact of their statements in the analysis process.
Otherwise, inaccurate expert assessments (Kosow, 2016), and possibly also
resentment against the method, can occur (Drakes et al., 2017, 2020;
Venjakob et al., 2017).
8.3.11 CIB Does Not Study Reality but Mental Models of Reality
The descriptor field and cross-impact assessments and hence the results of a
CIB analysis do not represent reality per se but, rather, the mental reality
models of the experts who create the cross-impact matrix or provide
information for it. From a critical perspective, future research—and thus
also CIB—is not concerned with the future per se but with present
conceptions of the future (Grunwald, 2013).
The ultimate validity measure for the CIB method is therefore not
whether the calculated scenarios are “realistic” and whether one of them
actually occurs. The sole validity measure for a CIB analysis is whether the
method has correctly captured the ideas of the knowledge sources about the
future and composed them into scenarios consistently and in the greatest
possible variety. Conversely, a failure would not be indicated by the fact
that a nondiscovered future occurs but by the fact that the experts
justifiably10 perceive the scenarios as incompatible with their mental
models or if the experts can specify a scenario compatible with their mental
models that the CIB analysis has not discovered.
Provided that the recorded knowledge is of sufficient quality, relevance, and completeness, however, a process-related validity claim such as the one raised by CIB analysis also translates into relevance of the analysis results for practical preparation for the future, as the application examples in Sects. 7.1, 7.3, and 7.4 demonstrate.

8.4 Unsuitable Use Cases: A Checklist


Based on the preceding discussion of the challenges and limitations of the
CIB method, we can summarize which features of a use case should be
taken as an indication that CIB is unsuitable for the case:

⇒ Research questions of low complexity: For research questions for which the answer can also be inferred by intuition or simple reasoning, the effort of a CIB analysis is usually not justified. One example is a system that tends to be bipolar such that a portfolio of a consistently favorable and a consistently unfavorable scenario can be expected without further ado. Another example is a system that has few and weak interdependencies such that there is little constraint on how the descriptor variants can be combined.

⇒ Research questions in which quantification of all factors and relationships is possible: Systems whose factors and interrelationships can be described by quantitative variables and mathematical formulas, without the need to exclude important aspects, should be investigated by mathematical analysis and modeling rather than CIB analysis (see Sect. 8.5). The role of CIB is not to replace adequate mathematical modeling but to provide a system-analytical method for cases in which adequate mathematical modeling is impossible.

⇒ Research questions that include no asymmetry of causal relationships: For systems whose factors are not connected by asymmetric causal relationships (such as “A promotes B, but B does not promote A,” or, more severely, “A promotes B, but B hinders A”) but in which all relationships between the factors are reciprocal (“A and B promote one another,” or “A and B hinder one another”), the simpler consistency matrix method can be considered instead of CIB (see Sect. 8.5).

⇒ Research questions for which sufficient knowledge about the relationships is not available: The result of a CIB analysis is based on expert knowledge about system interrelationships (either collected directly or indirectly by using the research literature, in other words, expert knowledge documented in writing). The quality and relevance of CIB analysis results presuppose the quality and relevance of the underlying expert knowledge. CIB does not “magically purify” unsubstantiated opinions: it is a case of “garbage in, garbage out.” To avoid overestimating the substance of analyses based on unreliable data, it should always be critically questioned whether sufficient, albeit qualitative, knowledge about the essential system interrelationships is available before a CIB analysis is conducted. Occasional knowledge gaps can be addressed by sensitivity analysis. If, in contrast, a significant proportion of the influence relationships cannot be soundly assessed, consideration should be given to abandoning the analysis.

⇒ Research questions that cannot be meaningfully treated at an aggregate level: CIB analyses are usually performed using approximately 5–20 descriptors. In most cases, several subsystems are included, e.g., politics, economy, and society. Therefore, only a few descriptors are assigned to each subsystem, and each aspect can only be addressed in a relatively aggregated and generalized way. Thus, there is usually little room for detailed differentiation within the various topics. Questions that require a high degree of differentiation are therefore usually difficult to address in a CIB analysis.

⇒ Research questions for which many interdependencies cannot be described, even approximately, by bilateral influence relationships: The capture of system interrelationships in the form of bilateral influence relationships is a conceptual core element of all cross-impact analyses and thus also of CIB analysis. Systems that cannot be adequately captured by this type of conceptualization are difficult or impossible to access by the CIB approach. If there are only a few interrelationships with a more complex structure, CIB can still be applied by using special procedures (see Sect. 5.7). However, for systems whose interrelationships escape the cross-impact concept to a large extent, CIB analysis is not recommended.

⇒ Research questions on unstable systems with frequently changing trend directions: CIB in its basic form identifies self-stabilizing trend combinations or state combinations. It thus identifies the equilibrium states or (if the descriptor variants are trends) the dynamic equilibria of a system. Systems that do not tend to occupy steady states or present dynamic equilibria for extended periods are therefore less suitable for CIB analysis in its basic form. At a minimum, a dynamic interpretation of the CIB consistency algorithm would be required (succession analysis; see Appendix).
8.5 Alternative Methods
The optimal choice of method for a scenario analysis requires a
comparative consideration of the alternatives. This section lists the most
important method alternatives to CIB and describes possible reasons that
could suggest the choice of a different method. The assessments refer to the
case in which a scenario analysis is to be performed with the aim of
creating qualitative scenarios.
The diversity of commonly used scenario methods requires that a
selection be made for presentation. Discussed are methods (1) that, like CIB, are used to exploit qualitative knowledge for building scenarios and therefore might be of interest as alternatives to CIB, (2) that have more favorable properties than CIB in at least one respect (but are less favorable in other respects), and (3) for which a considerable amount of documented
method applications exists so that assessments can be made on the basis of
sufficient experience. For certain methods, different variants exist. In these
cases, the methods discussed below must be understood as “family
representatives,” since it would be beyond the purpose of this book to
discuss all method variants individually.

Method: System Dynamics
Advantages compared to CIB: Generates time series of the system variables. Free design of system interrelationships (system perspective: stocks and flows). Also suitable for disaggregated systems and medium numbers of system variables (more than CIB).
To consider: Quantitative simulation: variables and interdependencies must be quantified, and qualitative results are only obtained by qualitative interpretation of the quantitative results. Mathematical modeling competence recommended (low-threshold “beginner platforms” are available). High time expenditure. No inherent exploration of the future space: scenarios must be initiated by parameter variations.
References: Forrester (1968), Sterman (2000)

Method: Agent-based modeling
Advantages compared to CIB: Generates time series of the system variables. Free design of system interrelationships (system perspective: interacting and learning actors with individual goals). Also suitable for disaggregated systems and high numbers of system variables.
To consider: Own algorithm design needed. Usually quantitative simulation; however, qualitative/semiquantitative modeling is possible. Mathematical modeling competence recommended (low-threshold “beginner platforms” are available). In general, high time expenditure. No inherent exploration of the future space: scenarios must be initiated by parameter variations.
References: Epstein and Axtell (1996), Taylor (2014)

Method: Consistency matrix/field anomaly relaxation (FAR)
Advantages compared to CIB: Easy to communicate, even to laypersons. Less data elicitation effort than in CIB, since only half of the consistency matrix must be filled in.
To consider: No causal system analysis possible: only correlational data on system relationships are used. Often many scenarios; in general, post-processing is required (e.g., through cluster analysis).
References: Rhyne (1974), von Reibnitz (1987), Johansen (2018)

Method: BASICS
Advantages compared to CIB: Probability-based path analysis based on a cross-impact matrix.
To consider: Qualitative information is processed quantitatively using real-valued operations. In addition to semiquantitative input (the CI matrix), quantitative input is required (a priori probabilities). Since the analysis is designed as a path analysis, the exploration of the future space is limited (maximum number of scenarios: 2 × number of descriptor variants).
References: Honton et al. (1985), Götze (1993). For a recent method variant (“AXIOM”), see also Panula-Ontto (2019).

Method: KSIM
Advantages compared to CIB: Generates time series of the system variables. Low elicitation effort, as only a cross-impact matrix without descriptor variants is needed.
To consider: Quantitative simulation: variables must be quantified (KSIM uses a standardized dynamic model predefined by the method and parameterized by the cross-impact matrix). Qualitative results are only obtained by qualitative interpretation of the quantitative results. No inherent exploration of the future space: scenarios must be initiated by parameter variations.
References: Kane (1972)

References
Carlsen, H., Klein, R. J. T., & Wikman-Svahn, P. (2017). Transparent scenario development. Nature Climate Change, 7, 613.

CfWI/Centre for Workforce Intelligence. (2014). Scenario generation - Enhancing scenario generation and quantification. CfWI technical paper series no. 7.

Drakes, C., Cashman, A., Kemp-Benedict, E., & Laing, T. (2020). Global to small island; a cross-scale foresight scenario exercise. Foresight, 22(5/6), 579–598. https://doi.org/10.1108/FS-02-2020-0012

Drakes, C., Laing, T., Kemp-Benedict, E., & Cashman, A. (2017). Caribbean scenarios 2050 - CoLoCarSce report. CERMES Technical Report No. 82.

Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up. Brookings Institution Press.

Forrester, J. W. (1968). Principles of systems. Pegasus Communications.

Girod, B., Wiek, A., Mieg, H., et al. (2009). The evolution of the IPCC’s emissions scenarios. Environmental Science & Policy, 12, 103–118.

Götze, U. (1993). Szenariotechnik in der strategischen Unternehmensplanung. Springer Fachmedien Wiesbaden.

Grunwald, A. (2013). Modes of orientation provided by futures studies: Making sense of diversity and divergence. European Journal of Futures Research, 2, 30.

Honton, E. J., Stacey, G. S., & Millet, S. M. (1985). Future scenarios - The BASICS computational method. Economics and policy analysis occasional paper (Vol. 44). Batelle Columbus Division.

Inayatullah, S. (1990). Deconstructing and reconstructing the future: Predictive, cultural and critical epistemology. Futures, 22, 116–141.

Inayatullah, S. (1998). Causal layered analysis: Poststructuralism as method. Futures, 30, 815–829.

Johansen, I. (2018). Scenario modelling with morphological analysis. Technological Forecasting and Social Change, 126, 116–125.

Kane, J. (1972). A primer for a new cross-impact language - KSIM. Technological Forecasting and Social Change, 4, 129–142.

Kemp-Benedict, E. (2012). Telling better stories - Strengthening the story in story and simulation. Environmental Research Letters, 7, 041004.

Kemp-Benedict, E. (2015). GoLoCarSce scenario development workshop agenda. The global-local Caribbean climate change adaption and mitigation scenarios project. Stockholm Environment Institute.

Kosow, H. (2016). The best of both worlds? An exploratory study on forms and effects of new qualitative-quantitative scenario methodologies. Dissertation, University of Stuttgart, Germany.

Kosow, H., Weimer-Jehle, W., León, C. D., & Minn, F. (2022). Designing synergetic and sustainable policy mixes - A methodology to address conflictive environmental issues. Environmental Science and Policy, 130, 36–46.

Kurniawan, J. H. (2018). Discovering alternative scenarios for sustainable urban transportation. In 48th Annual Conference of the Urban Affairs Association, April 4–7, 2018, Toronto, Canada.

Lloyd, E. A., & Schweizer, V. J. (2014). Objectivity and a comparison of methodological scenario approaches for climate change research. Synthese, 191(10), 2049–2088.

Musch, A.-K., & von Streit, A. (2017). Szenarien, Zukunftswünsche, Visionen - Ergebnisse der partizipativen Szenarienkonstruktion in der Modellregion Oberland. INOLA report no. 7, Ludwig-Maximilians University, Munich, Germany.

Nash, J. F. (1951). Non-cooperative games. The Annals of Mathematics, 54, 286–295.

Panula-Ontto, J. (2019). The AXIOM approach for probabilistic and causal modeling with expert elicited inputs. Technological Forecasting and Social Change, 138, 292–308.

Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards socio-technical scenarios of the German energy transition - Lessons learned from integrated energy scenario building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0

Prehofer, S., Kosow, H., Naegler, T., Pregger, T., Vögele, S., & Weimer-Jehle, W. (2021). Linking qualitative scenarios with quantitative energy models: Knowledge integration in different methodological designs. Energy, Sustainability and Society, 11, 25. https://doi.org/10.1186/s13705-021-00298-1

Rhyne, R. (1974). Technological forecasting within alternative whole futures projections. Technological Forecasting and Social Change, 6, 133–162.

Scheele, R., Kearney, N. M., Kurniawan, J. H., & Schweizer, V. J. (2018). What scenarios are you missing? Poststructuralism for deconstructing and reconstructing organizational futures. In H. Krämer & M. Wenzel (Eds.), How organizations manage the future - Theoretical perspectives and empirical insights (Chapter 8). Springer International Publishing. https://doi.org/10.1007/978-3-319-74506-0_8

Schmidt-Scheele, R. (2020). The plausibility of future scenarios: Conceptualising an unexplored criterion in scenario planning. Transcript Independent Academic Publishing, Bielefeld. See also: Scheele, R. (2019). Applause for scenarios!? An explorative study of ‘plausibility’ as assessment criterion in scenario planning. Dissertation, University of Stuttgart, Germany.

Schneider, M., & Gill, B. (2016). Biotechnology versus agroecology - Entrenchments and surprise at a 2030 forecast scenario workshop. Science and Public Policy, 43, 74–84. https://doi.org/10.1093/scipol/scv021

Schweizer, V. J. (2020). Reflections on cross-impact balances, a systematic method constructing global socio-technical scenarios for climate change research. Climatic Change, 162, 1705–1722.

Schweizer, V. J., & Kriegler, E. (2012). Improving environmental change research with systematic techniques for qualitative scenarios. Environmental Research Letters, 7, 044011.

Schweizer, V. J., & O’Neill, B. C. (2014). Systematic construction of global socioeconomic pathways using internally consistent element combinations. Climatic Change, 122, 431–445.

Schweizer, V. J., Scheele, R., & Kosow, H. (2018, June). Practical poststructuralism for confronting wicked problems. In 9th International Congress on Environmental Modelling and Software, Fort Collins.

Sterman, J. D. (2000). Business dynamics: Systems thinking and modeling for a complex world. Irwin McGraw-Hill.

Taylor, S. (2014). Agent-based modeling and simulation. Palgrave Macmillan.

Venjakob, J., Schüver, D., & Gröne, M.-C. (2017). Leitlinie Nachhaltige Energieinfrastrukturen, Teilprojekt Transformation und Vernetzung von Infrastrukturen. Project report „Energiewende Ruhr“, Wuppertal Institut für Klima, Umwelt, Energie, Wuppertal, Germany.

Vögele, S., Poganietz, W.-R., & Mayer, P. (2019). How to deal with non-linear pathways towards energy futures: Concept and application of the cross-impact balance analysis. Technikfolgenabschätzung in Theorie und Praxis, 29(3).

von Reibnitz, U. (1987). Szenarien - Optionen für die Zukunft. McGraw-Hill.

Wachsmuth, J. (2015). Cross-sectoral integration in regional adaptation to climate change via participatory scenario development. Climatic Change, 132, 387–400. https://doi.org/10.1007/s10584-014-1231-z

Weimer-Jehle, W., Wassermann, S., & Kosow, H. (2011). Konsistente Rahmendaten für Modellierungen und Szenariobildung im Umweltbundesamt. Expert report for the German Federal Environment Agency (UBA), UBA-Report 20/2011, Dessau-Roßlau, Germany.

Weimer-Jehle, W. (2015). Cross-Impact Analyse. In M. Niederberger & S. Wassermann (Eds.), Methoden der Experten- und Stakeholdereinbindung in der sozialwissenschaftlichen Forschung (pp. 17–34). VS-Verlag.

Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T., Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their usage for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/10.1016/j.energy.2016.05.073

Weimer-Jehle, W., Vögele, S., Hauser, W., Kosow, H., Poganietz, W.-R., & Prehofer, S. (2020). Socio-technical energy scenarios: State-of-the-art and CIB-based approaches. Climatic Change. https://doi.org/10.1007/s10584-020-02680-y

Wiek, A., Keeler, L. W., Schweizer, V., & Lang, D. J. (2013). Plausibility indications in future scenarios. International Journal of Foresight and Innovation Policy, 9, 133–147.

Footnotes
1 For the concept of plausibility and its application to scenarios, see Wiek et al. (2013) and Schmidt-
Scheele (2020).

2 This conclusion presupposes certain system properties, in particular the ergodicity of the system,
i.e., its ability to effectively search its complete state space for stable states.
3 The following description of the strengths, problems, and limitations of CIB analysis draws
heavily on the method discussion in Weimer-Jehle et al. (2020).

4 However, the complete screening of the possibility space is not a unique feature of CIB. It is also
applied by the consistency matrix method (Rhyne, 1974; von Reibnitz, 1987). The difference
between the two methods lies in the type of data analyzed. Consistency matrix analysis uses
correlational information about system interdependencies, whereas CIB uses causal information.

5 Examples of CIB studies with large scenario sets include Schweizer & O’Neill, 2014 and Pregger
et al., 2020.

6 Leading theorists of poststructuralism were Michel Foucault, Jacques Derrida and others.

7 Somewhat more modest is the time requirement of the consistency matrix method, which uses
correlational instead of causal data. This approach assumes symmetry in the system relationships,
and therefore, only half of the matrix must be filled in.

8 Very large matrices can be solved by using approximation algorithms, in particular the Monte
Carlo method integrated in the ScenarioWizard software (see Appendix, Network Analysis section).

9 One way to look at series of trend changes in CIB is succession analysis. However, this type of
analysis is not covered in depth in this book.

10 Here, “justifiably” means that the experts acknowledge the arguments that CIB makes for the
scenario but still reject the scenario.
Appendix: Analogies
CIB’s concept of conducting a qualitative scenario or system analysis is
basically plausible in itself, and the broad reception of the method in
scenario practice can be interpreted as a form of empirical confirmation of
the concept’s validity. Nevertheless, it can be questioned whether the CIB
algorithm can be theoretically justified. Since there is no general theory of
social systems from which a derivation could be considered, the pursuit of
theoretical justification means looking for analogous systems analysis
concepts in other research domains.
For a scenario heuristic that seeks its legitimacy first in practical
success, the search for theoretical analogies to the CIB algorithm is
surprisingly rewarding, all the more because these analogies involve not only conceptual similarities but also strict mathematical equivalences. The
presence of these analogies confirms that the CIB algorithm can be
understood as a transfer of general systems analytic principles to the field of
qualitative scenario analysis. From a practical standpoint, the analogies are
significant because each sheds new light on CIB and entails the possibility
of understanding CIB and its algorithm in a different way. Moreover, as will
be shown by a discussion of the game-theoretic analogy of CIB, analogies
can also inspire new fields of application for the method.
Nevertheless, these considerations are primarily theoretical in nature.
The necessary excursions into the theoretical foundations of other
disciplines may be less inspiring for more practically oriented readers who
may prefer to skip this part of the Appendix.

Physics
The analogy to the theory of equilibrium of forces in physics is the oldest
known CIB analogy and was noted in the original publication of CIB as a
means to support the plausibility of the CIB algorithm.1 The analogy uses
the metaphor of a heavy body moving under the effect of gravity in a
“terrain profile” and ending up at the lowest point of the terrain and
stabilizing there (Fig. A.1).
Fig. A.1 Analogy of the equilibrium of forces: valleys as rest points for heavy bodies

It can be shown that the CIB algorithm corresponds mathematically to the case in which several interacting bodies (one per descriptor) seek their rest point in a terrain profile. The interactions between the bodies determine how each body shapes the terrain profile for the other bodies. All bodies come to rest when each body is in a valley (i.e., a deep
position), and, at the same time, a body’s deep position causes the positions
of all other bodies to be deep.
In CIB terms, this analogy is expressed as follows. To assess the
consistency of a scenario, its impact balances are determined. The impact balances for the Somewhereland descriptor variant combination [A1 B3 C2 D2 E3 F1] can be easily calculated with the help of the matrix shown in Fig. 3.7. When the impact balances are drawn with positive values downward, they
form terrain profiles in which the descriptors prefer deep positions in the
manner of heavy bodies by seeking the descriptor variants of high impact
sums (Fig. A.2). The impact balances are negatively plotted in this diagram
because high impact sums in CIB indicate consistency, just as valleys in
physics indicate stability.

Fig. A.2 “Terrain profiles” of Somewhereland descriptors in the case of an inconsistent scenario

Descriptors A, B, D, and E are placed in valleys and are consistent. In contrast, descriptors C and F are not in valleys, which signifies instability in
physics and inconsistency in CIB for these two descriptors.
The terrain profiles after the transition of the unstable (inconsistent)
descriptors to the valleys are shown in Fig. A.3. Because of the new
position of two descriptors and the associated change in their impacts on
other descriptors, the terrain profiles have changed. In this case, however,
the deformation does not cause the valleys to shift, and the new scenario
[A1 B3 C1 D2 E3 F3] thus consists exclusively of descriptors in valleys and
is therefore consistent.
Fig. A.3 “Terrain profiles” in a consistent scenario

However, it can also occur that the deformation of the terrain profiles in
the course of the transition of the unstable descriptors into valleys is so
strong that previously stable descriptors now become unstable. Then, the
respective descriptors must move further until finally a state is reached in
which all descriptors are simultaneously positioned in valleys.
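
To make the valley metaphor concrete, the balance computation can be sketched in a few lines of Python. The following is only an illustration under assumptions: the dictionary-based matrix encoding and the toy two-descriptor data are hypothetical and do not reproduce the Somewhereland matrix of Fig. 3.7.

# Minimal sketch of the CIB impact-balance ("valley") check.
# cib[(d1, v1)][(d2, v2)] holds the influence of variant v1 of
# descriptor d1 on variant v2 of descriptor d2 (one judgment cell).
cib = {
    ("A", 1): {("B", 1): 2, ("B", 2): -2},
    ("A", 2): {("B", 1): -1, ("B", 2): 1},
    ("B", 1): {("A", 1): 1, ("A", 2): -1},
    ("B", 2): {("A", 1): -2, ("A", 2): 2},
}
variants = {"A": [1, 2], "B": [1, 2]}

def impact_balance(descriptor, scenario):
    # Impact sums of all variants of `descriptor` under `scenario`,
    # a dict mapping each descriptor to its active variant.
    return {v: sum(cib[(d, av)].get((descriptor, v), 0)
                   for d, av in scenario.items() if d != descriptor)
            for v in variants[descriptor]}

scenario = {"A": 1, "B": 2}
for d in variants:
    balance = impact_balance(d, scenario)
    in_valley = balance[scenario[d]] == max(balance.values())
    print(d, balance, "in a valley" if in_valley else "not in a valley")

For this toy matrix, the scenario [A1 B2] fails the valley test for both descriptors, whereas [A1 B1] and [A2 B2] pass it; the latter two are the consistent scenarios of the toy system.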

Network Analysis
Networks are used in complexity research to study complexity phenomena
through mathematical experiments. One concept used for this purpose is
Boolean networks or Kauffman networks. They are named after Stuart
Kauffman, a pioneer in complexity research and cybernetic biology.
Kauffman proposed this form of network as a conceptual model for the self-
organized (“autocatalytic”) emergence of primordial life from concatenated
protein synthesis reactions, for cell differentiation processes, and generally
for order–disorder phase transitions (Kauffman 1969a, 1969b, 1993).2
In Boolean networks, each node can be active or inactive, and the
network evolves in generational steps. According to a set of rules, the
decision of which state a node will adopt in the next generation is
determined by the current states of the nodes it is connected to. Thus, a rule
could be, for instance, that node no. 87 is active in the next generation if (1)
node no. 12 is currently active and (2) either node no. 35 or node no. 92 is
currently active.
The analogy between this form of network analysis and CIB arises
because CIB can be understood as a generalized Boolean network (Weimer-
Jehle 2008).3 This view is possible because the inference from impact
balances as to which descriptor variant should be active can be exactly
expressed by Boolean rules. For instance, the decision of which variant of
the Somewhereland descriptor “B. Foreign Policy” is activated in a
consistent scenario can be exactly formulated by the Boolean rules shown
in Table A.1.
Table A.1 Representation of the Somewhereland descriptor column “B. Foreign Policy” in Boolean
rules. (Just like the cross-impact matrix, the rule set allows two B variants in certain cases)

Active variant of descriptor B - Activation condition:
B1 - If E1, then not A1; if E2, then A2
B2 - If E1, then not A2; if not E1, then A2
B3 - Not A2

Thus, B3 is always allowed as an active descriptor variant if the scenario does not contain A2. B2 is allowed if and only if either E1 is
present and A2 is not present or if E2 or E3 and A2 are present at the same
time. By inserting appropriate combinations for A and E into
Somewhereland’s cross-impact matrix, one can easily convince oneself that
these rules in fact ensure that the corresponding B variant reaches the
maximum impact sum.
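
As a small illustration, the rules of Table A.1 can be written directly as Boolean predicates. In the following hypothetical sketch, a and e stand for the active variant numbers of descriptors A and E (e.g., a == 2 means that A2 is active):

# The rule set of Table A.1 expressed as Boolean predicates (a sketch;
# the function and variable names are illustrative only).

def b1_allowed(a, e):
    # B1: if E1, then not A1; if E2, then A2
    return (e != 1 or a != 1) and (e != 2 or a == 2)

def b2_allowed(a, e):
    # B2: if E1, then not A2; if not E1, then A2
    return (a != 2) if e == 1 else (a == 2)

def b3_allowed(a, e):
    # B3: not A2
    return a != 2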
CIB is referred to as a generalized Boolean net because, unlike the nodes of a Boolean net, the nodes of CIB (the descriptors) are not limited to being either on or off but can adopt more than two states.
The search for consistent scenarios is performed from the network
perspective of CIB, as shown in Fig. A.4 for Somewhereland. We start with
an arbitrary combination of descriptor variants, for example, the scenario
[A2 B2 C1 D2 E3 F1]. In Fig. A.4, this scenario is labeled Generation 1
(G1). Inserting this scenario into the cross-impact matrix yields the impact
balances listed in Column G1 on the right. These results show that the G1
scenario generates network forces that do not promote the G1 scenario itself
but, rather, as evidenced by the maximum preferred descriptor variants, aim
for a different network state that deviates from the G1 scenario for two
descriptors (A and C). This modified scenario [A1 B2 C3 D2 E3 F1] is the
Generation 2 (G2) state of the network.
Fig. A.4 Somewhereland as a dynamic network

Now, the newly emerging impact balances again refer to another scenario (G3), and thus, the scenario evolves from generation to generation
until it finally (in this case, in G5) reaches a configuration in which it
reproduces itself. From here on, the network figuratively treads water, and
the scenario remains unchanged from generation to generation. That is, a
consistent scenario has been found. In this case, it is the Somewhereland
scenario no. 2 from Table 3.4.
This scenario succession procedure also has a practical application in
CIB. Using the scenario succession technique, consistent scenarios can be
found even if the cross-impact matrix is too large to fully test all
combinatorial possibilities. Then, in a Monte Carlo procedure, a large
number of random starting configurations are iterated, as shown in Fig. A.4,
until a consistent scenario is found.
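
The succession procedure and its Monte Carlo use can likewise be sketched in a few lines of Python. The sketch below reuses the hypothetical cib/variants encoding from the Physics section above and is not taken from the ScenarioWizard implementation, which may proceed differently. Note that the synchronous update used here can run into cycles, which the step limit guards against.

import random

def impact_balance(cib, variants, descriptor, scenario):
    # Impact sums of all variants of `descriptor` under `scenario`.
    return {v: sum(cib[(d, av)].get((descriptor, v), 0)
                   for d, av in scenario.items() if d != descriptor)
            for v in variants[descriptor]}

def succession(cib, variants, scenario, max_steps=100):
    # Iterate generation by generation: every descriptor moves to its
    # maximum-impact-sum variant (ties broken arbitrarily) until the
    # scenario reproduces itself, i.e., a consistent scenario is found.
    for _ in range(max_steps):
        nxt = {}
        for d in variants:
            balance = impact_balance(cib, variants, d, scenario)
            nxt[d] = max(balance, key=balance.get)
        if nxt == scenario:
            return scenario            # fixed point: consistent scenario
        scenario = nxt
    return None                        # cycling or step limit reached

def monte_carlo_search(cib, variants, trials=1000, seed=1):
    # Random restarts for matrices too large for full enumeration.
    rng = random.Random(seed)
    found = set()
    for _ in range(trials):
        start = {d: rng.choice(vs) for d, vs in variants.items()}
        fixed = succession(cib, variants, start)
        if fixed is not None:
            found.add(tuple(sorted(fixed.items())))
    return found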

Game Theory
Game theory is a branch of mathematics that studies decision problems for
groups of interacting individuals or organizations. The fields of application
of game theory include, among others, the analysis of social conflicts and
economic competition.
The type of “games” in which an analogy to scenario development and
CIB arises concerns the case where a number of players each have a choice
among individual game strategies. Depending on the choice of strategy, a
player gains or loses, depending not only on his or her choice of strategy
but also on the strategies chosen by the other players. A favorable strategy
can become unfavorable if a fellow player changes his or her strategy and
vice versa. A “payoff table” regulates the success of each player for each
possible strategy combination of all players. Game theory then seeks
answers to the question of which rational strategy choices can be expected
from players when each strives to optimize his or her own net gain.
Here, an analogy with CIB’s task of constructing consistent scenarios
from cross-impact data emerges (Fig. A.5). In this analogy, the descriptors
take the role of the players, and the descriptor variants represent the
strategic options of the players. The cross-impact matrix regulates which
gains (positive impacts) or losses (negative impacts) accrue to a player from
the decisions of fellow players, depending on the player’s own game
strategy. While in game theory each player wants to maximize his or her
win–loss balance, CIB assumes that each descriptor activates the variant
that maximizes its balance of promotions and inhibitions (the impact sum).
Fig. A.5 Analogy between CIB and game theory

CIB thus represents a game in which each player has a small number of
different game strategies at his or her disposal and in which each player is
in an individual game relationship with every other player. For each game
relationship, the player’s game outcome depends on his or her own strategy
decision and that of the fellow players according to fixed rules (the
respective judgment section of the cross-impact matrix). The outcome is
determined separately for each player relationship and is collected or paid
out by a bank. Thus, a player’s overall success depends on choosing a
strategy that is successful with respect to as many fellow players (and their
strategies) as possible. CIB is not a zero-sum game in which A can only win
what B loses. Instead, depending on the nature of the player relationships
(i.e., the judgment sections), joint wins, joint losses, one-sided indifference,
and competitive wins and losses can occur.
The question now is how game theory proceeds to determine the
strategy configurations to which the players are likely to converge in the
end (i.e., the consistent scenarios from the CIB perspective). The central
concept for this process is Nash equilibria, named after the mathematician
John Forbes Nash (Nash 1950, 1951).4 A Nash equilibrium is characterized
by the fact that no player can unilaterally improve his or her outcome by
changing his or her strategy. If this is the case for all players
simultaneously, it is in the self-interest of each player to maintain his or her
strategy, and a stable strategy configuration of the game is found.
Mathematically speaking, the players thereby implement a discrete
multiobjective optimization because each player pursues his or her own
profit goal and each has a finite number of alternative actions available for
this purpose.
Thus, game theory solves its problem in formally the same way that
CIB pursues its task as a scenario method. In CIB, a scenario is consistent if
no descriptor can unilaterally improve its impact sum by switching to
another variant, i.e., if each descriptor achieves its maximum net promotion
in the given environment of the other descriptor variants. Thus, we can say
that CIB solves the task of scenario construction in a way that is consistent
with general mathematical concepts of discrete-value multiobjective
optimization.
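
Read this way, finding all consistent scenarios amounts to exhaustively screening the strategy space for pure-strategy Nash equilibria. The following brute-force sketch, again based on the hypothetical cib/variants encoding used in the earlier sketches, keeps every scenario in which no descriptor ("player") can unilaterally raise its impact sum ("payoff") by switching variants ("strategies"):

from itertools import product

def impact_sum(cib, scenario, d, v):
    # Payoff of player d for strategy v, given the other players' moves.
    return sum(cib[(d2, av)].get((d, v), 0)
               for d2, av in scenario.items() if d2 != d)

def nash_equilibria(cib, variants):
    # All variant combinations without a profitable unilateral deviation,
    # i.e., exactly the consistent scenarios of the cross-impact matrix.
    names = sorted(variants)
    result = []
    for combo in product(*(variants[d] for d in names)):
        scenario = dict(zip(names, combo))
        if all(impact_sum(cib, scenario, d, scenario[d])
               >= impact_sum(cib, scenario, d, v)
               for d in names for v in variants[d]):
            result.append(scenario)
    return result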
Importantly, CIB’s game-theoretic analogy has a practical dimension in
addition to the theoretical support provided for its formal approach. Since
CIB determines its scenarios in a way that corresponds to Nash equilibria in
game theory, CIB can be used to address certain game-theoretic problems.
For example, CIB can be used to study policy design problems among
actors with conflicting goals, provided that the synergy and conflict
relationships can be formulated in terms of a cross-impact matrix, i.e., that
the synergy and conflict effects between the actors are at least
approximately additive (cf. Kosow et al. 2022).5

Glossary
Cross-impact matrix (in the context of CIB)
Autonomous descriptor: Autonomous descriptors exert influence but are not in turn influenced by any other descriptor. Consequently, their column in the cross-impact matrix is empty. Autonomous descriptors are suitable to represent the external conditions of a system.

Column sum (of a descriptor variant): The sum of all cross-impact ratings in the matrix column of a descriptor variant. In contrast to the impact sum, all column entries are added when calculating the column sum, not only the column entries that belong to the rows of the active descriptor variants. Therefore, the column sum has a fixed value for each descriptor variant, while the impact sums are scenario-specific.

Connectivity (of a matrix): The share of nonempty judgment sections in the total number of judgment sections.

Cross-impact: A single entry in the cross-impact matrix indicating the influence of one descriptor variant (row) on another (column).

Cross-impact matrix (in CIB): A matrix whose rows and columns are formed by the variants of all descriptors and in which the direct promoting and hindering influences between descriptor variants are noted in the form of positive or negative strength ratings.

Descriptor: Descriptors are the key elements of a system that are required to sufficiently characterize the system’s state or evolution and whose mutual influence relationships are capable of explaining that state or evolution.

Descriptor field: The list of all descriptors of a cross-impact matrix.

Descriptor variant: Alternative states or developments (also termed, for instance, alternative futures or descriptor categories) that a descriptor can adopt.

Active descriptor variant: Descriptor variants that occur in a scenario are referred to as active descriptor variants of the scenario (e.g., “in scenario no. 1, the variant X3 is active for descriptor X”).

Driver descriptor: A descriptor that is not itself a target descriptor but influences one or more target descriptors.

Ensemble (matrix ensemble): A collection of cross-impact matrices with matching descriptors and descriptor variants but differing cross-impact ratings. For example, a matrix ensemble can result from the independent creation of matrices by several experts with (partly) different system views.

Impact balance (of a descriptor): The impact balance of a descriptor consists of the impact sums of all variants of this descriptor.

Impact sum of a descriptor variant (with respect to a scenario): Sum of all cross-impact values in the matrix column of a descriptor variant, whereby only the rows of the active descriptor variants are included in the summation. Therefore, the impact sum of a descriptor variant is only valid for a specific scenario and, unlike the column sum, is not a general property of the descriptor variant.

Intermediary descriptor: A descriptor that is not a target descriptor and not a driver descriptor but is influenced by them and influences one or more driver descriptor(s).

Judgment cell: A matrix cell containing a single cross-impact rating (cf. Fig. 3.7).

Judgment group: A horizontal row of judgment cells that summarizes the influence of one descriptor variant on all variants of another descriptor (cf. Fig. 3.7).

Judgment section: A rectangular section of the cross-impact matrix containing the impacts of all variants of one descriptor on all variants of another descriptor (cf. Fig. 3.7).

Passive descriptor: A descriptor that receives influences from other descriptors but does not itself exert influences on other descriptors. This is expressed in the cross-impact matrix by an empty descriptor row.

Predetermined descriptor / predetermined matrix: A descriptor is referred to as predetermined if the cross-impact ratings in its column one-sidedly favor a certain descriptor variant. Predetermined matrices are matrices that contain enough predetermined descriptors to be constrained to a unique solution (scenario).

Scenario space: The scenario space consists of all combinatorially possible (consistent and inconsistent) scenarios.

Sparse matrix: Cross-impact matrices with a high proportion of judgment cells carrying the value zero. Matrices with more than approx. 70% empty cells can be considered unusually sparse.

Specific cross-impact matrix: Reduced form of the cross-impact matrix for describing the interdependencies within a specific scenario. The specific cross-impact matrix is derived from the full cross-impact matrix by removing the rows and columns of all nonactive descriptor variants from the matrix.

Systemic descriptor: A descriptor that receives influences from and exerts influences on other descriptors.

Target descriptor: A descriptor that directly represents the research question to be addressed by the CIB analysis.

Underdetermined descriptor: A descriptor that is subject to minor influences from the other descriptors of a cross-impact matrix but whose major determinants lie outside the descriptor field of the matrix (see Sect. 5.5).

Portfolio (in CIB)


Diversity score of a portfolio (with respect to a minimum distance d): The maximum number of scenarios that can be selected from a portfolio such that the selected scenarios differ from one another in at least d descriptors.

Ensemble evaluation: Evaluation of a matrix ensemble. The resulting portfolio contains all scenarios that are consistent in at least q matrices of the ensemble; q is the quorum of the ensemble evaluation.

Full presence portfolio: Portfolio in which all descriptor variants listed in the cross-impact matrix are active in at least one scenario.

IC0 portfolio: Portfolio of scenarios with inconsistency value 0.

IC1 portfolio: Portfolio of scenarios with inconsistency value ≤1.

IC2 portfolio: Portfolio of scenarios with inconsistency value ≤2.

Portfolio: The collection of all scenarios resulting from the CIB evaluation of a matrix or matrix ensemble.

Portfolio volume: The number of scenarios in a portfolio.

Presence rate: The total number of descriptor variants occurring in a portfolio divided by the total number of descriptor variants listed in the cross-impact matrix. The presence rate is usually specified as a percentage.

Vacancy rate: Percentage of descriptor variants missing in a portfolio out of the total number of variants listed in the cross-impact matrix. Complementary value to the presence rate.

Scenarios (in the context of CIB)


Consistency value of a descriptor: The consistency value of a descriptor with respect to a given scenario is the difference between the impact sum of the active descriptor variant and the highest impact sum of all nonactive variants of the same descriptor.

Consistency value of a scenario: The consistency value of the scenario is the minimum of the consistency values of all descriptors (without consideration of the autonomous descriptors, if any). The consistency value is 0 or positive for consistent scenarios and negative for inconsistent scenarios.

Consistent scenario: A scenario of consistency value 0 or higher.

Distance (between two scenarios): The number of descriptors that take different variants in two scenarios is termed the distance between these scenarios.

Diversity sampling (with respect to minimum distance d): Diversity sampling aims at selecting a set of scenarios in which each pair of scenarios differs in at least d descriptors.

Fully consistent scenario: Synonymous with the term “consistent scenario.” The term is usually used when the difference from the marginally inconsistent scenarios is to be emphasized.

Impact diagram: Graphical representation of the influence relationships between the active descriptor variants of a scenario (see, for example, Fig. 3.11).

Inconsistent scenario: A scenario of negative consistency value.

Marginal inconsistency: An inconsistency value below the significance threshold.

Marginally consistent scenario: A scenario with consistency value 0 and thus the lowest consistency value sufficient for classification as a (fully) consistent scenario.

Marginally inconsistent scenario (or slightly inconsistent scenario): A scenario with nonsignificant inconsistency, that is, with a small negative consistency value below the significance threshold.

Marginally stable descriptor: A descriptor with descriptor consistency 0. Scenarios can be destabilized by external disturbances most easily at the marginally stable descriptors.

Scenario (in CIB): A combination of descriptor variants that assigns to each descriptor of a cross-impact matrix exactly one of its variants. The term “scenario” does not indicate whether the combination is consistent or not.

Total impact score (TIS): The total impact score of a scenario is the sum of the impact sums of all active descriptor variants. The total impact score is a measure of the overall internal strength of the scenario logic, while the consistency value measures the logical strength of the scenario at its weakest point.

Index
A
Absolute cross-impact 191–192, 195
Addition invariance 185
Application examples 219–230
Autonomous descriptor 46
B
Boolean networks 260
Boolean rules 260
C
Calibration of strength ratings 186
Central descriptor variant 175–176
CIB miniature
emerging country 132
global socioeconomic pathways 121
group opinion 159
mobility demand 152
oil price 113
resource economy 87
SmallCountry 145
social sustainability 136
Somewhereland 29
Somewhereland-City 71, 72
Somewhereland plus 55
water supply 77
Climate protection 227–230
Column sum 134, 148
Conservative scenarios 175–176
Consistency
consistency check 32–36
consistency profile 47–48
consistency value 44–47, 267
IC0 111
IC1 111
inconsistency value 46
marginal consistency 48, 268
Consistent scenarios
number of scenarios 107
Context-dependent impact 150–155
Context sensitive influence 247
Cross-impact 25–30, 177–194, 265
absolute cross-impact 191–192, 195
balance 183
calibration 186
compensation principle 186
context dependency 150–155
context sensitive influence 247
cross-impact query 26
cross-impact statement 26
definition 177
dissent categories 85
double negation 187–188
empty judgment section 179
expert workshop 206–213
indirect influence 180–182
interviews 204–206
inverse coding 182–183
judgment cell 29, 266
judgment group 29, 266
judgment section 29, 266
key dissent 98
literature review 196–199
rating interval 26, 178, 179
rating scale 26
self-elicitation 195
sign balance 183
sign error 187–188
standardization 186
Cross-impact balances (CIB)
aggregation level 244
algorithm 30–40
algorithm (impact diagram) 34–35
algorithm (table) 37–39
causal model 241
challenges and limitations 243–248
CIB as a qualitative/semiquantitative method 236–238
consistency 247
consistency profile 47–48
consistency value 44–47
construction transparency 239–240
context sensitive influence 247
game theoretical interpretation 262–264
impact balance 38, 44, 266
impact sum 34–35, 38, 266
inconsistency value 46
interpretations 233–234
key characteristics 44–49
knowledge integration 241–242
mental models as an object of analysis 248
network analogy 260–262
objectivity 242
physical analogy 258–260
policy design 235–236
scenario quality 239
screening of the scenario space 240
steady state system analysis 234–235
strengths 239–243
system boundary 244–245
time management 209
time-related interpretation of CIB 233–234
time required 243–244
time-unrelated interpretation of CIB 234–235
total impact score 49
Cross-impact matrix 25–30, 265
addition invariance 185
column sum 134, 148
conditional matrix 153
connectivity 179, 265
ensemble 204, 266
mean value matrix 91
multiplication invariance 91
sparse matrix 109–110, 266
specific matrix 101–102, 266
sum matrix 86–92
D
Data elicitation
based on previous work 213–214
expert workshop 206–213
interviews 204–206
iteration 211
literature review 196–199
method combination 210, 211
self-elicitation 195
theory-based 213–214
written/online 199–203
Descriptor 265
aggregation level 165, 166
ambivalent descriptor 138
autonomous descriptor 46, 158, 265
classification 167–171
definition 157, 265
descriptor field 22, 265
documentation 166
driver descriptor 162, 266
essay 166
intermediary descriptor 162, 266
nominal descriptor 169
number of descriptors 164, 165
ordinal descriptor 169
passive descriptor 158, 266
predetermined descriptors 188–189, 266
ratio descriptor 169
screening 196, 199, 204, 207
systemic descriptor 158, 267
target descriptor 54, 161, 162, 267
types 167–171
underdetermined descriptor 143–147, 267
variants 22–23
Descriptor variant 166–177
absence of overlap 173–174
central variant 175–176
characteristic variant 171
completeness 172
definition 166, 167
gradation 175
mutual exclusivity 172, 173
peripheral variant 175–176
phantom variant 189–191
rare descriptor variant 123
regular variant 171
robust variant 170
scales of measurement 168–169
vacant variant 147–150, 170
Dissent
categories 85
Distance table 115
Diversity 114–117
Diversity sampling 124–127, 268
Domino intervention 143
Double negation 187–188
Driver descriptor 162
E
Elicitation
pretest 210
Empty judgment section 179
Energy 222–223
Ensemble 87, 204, 266
ensemble evaluation 267
ensemble table 95, 96
Excursus
alternative scenario selection approaches 127
low descriptor variant frequency 123–124
Extreme scenarios 175–176
F
Feedback 211
G
Game theory 235–236
H
Health prevention 224–227
I
IC0 111
IC1 111
Impact
direct impact 29
impact diagram 32, 268
indirect impact 29
Impact balance 38, 44, 266
Impact sum 34–35, 38, 266
Inconsistency
frequency distribution 110–111
IC classes 110–111
inconsistency value 46
marginal inconsistency 50, 268
significant inconsistency 49–51
Indirect influence 180–182
Interdependence 25–30
Intermediary descriptor 162
Intervention 76–82
Intervention analysis 73–100
J
Judgment cell 29, 266
Judgment group 29, 266
Judgment section 29, 266
M
Marginal consistency 48
Marginal stability 48
Mean value matrix 91
Memo
M 1 - cross-impact query 26
M 2 - impact source and target 28
M 3 - condition of consistency 35
M 4 - significant inconsistency 50
M 5 - significant inconsistency in sum matrices 90
M 6 - multiplication invariance 91
M 7 - intervention analysis 141
M 8 - underdetermined descriptors 147
M 9 - descriptors (definition) 158
M10 - uniform aggregation level 166
M11 - qualitative interpretation of CIB descriptors 170
M12 - empty judgment section 180
M13 - direct and indirect influences 181
M14 - addition invariance 185
M15 - calibration of strength ratings 187
M16 - interpretation of CIB solutions as scenarios 234
M17 - interpreting CIB solutions as network configurations 235
M18 - CIB and Nash equilibria 236
M19 - unique character of CIB 238
Morphological analysis 24
N
Nash equilibrium 235–236, 263
Network analysis 16
Nuclear deal Iran 220–221
Nutshell
Nutshell I - intervention analysis 75
Nutshell II - group evaluation 99
Nutshell III - diversity sampling 126
Nutshell IV - context-dependent impact 151
Nutshell V - using subscenarios as descriptor variants 197
Nutshell VI - scenario validation 212
O
Overlap, absence of 173–174
P
Peripheral descriptor variant 175–176
Portfolio 107–117, 267
bipolar portfolio 136–143
distance table 115
diversity 114–117
diversity sampling 268
diversity score 115, 267
full presence 267
IC0 portfolio 53, 267
IC1 portfolio 53, 267
IC2 portfolio 53, 267
monotonous portfolio 131–135
number of scenarios 108
portfolio volume 108, 267
presence rate 111–112, 267
structuring a portfolio 54–64
Post-structuralism 242–243
Presence rate 111–112, 267
Pretest 210
Q
Quality assurance 186
R
Rating interval 26, 178, 179
S
Scenario 24, 268
axes diagram 60–64, 130
conservative scenario 175–176
consistent scenario 30–40, 268
construction 40
distance 114, 268
diversity sampling 268
extreme scenario 175–176
fully consistent scenario 268
inconsistent scenarios 34–35, 268
list format 41
marginally inconsistent scenario 49–51
number of scenarios 107–111
scenario axes 57–64
scenario portfolio 107–117
short format 41
surprise-driven scenarios 82–84
tableau format 41
validation 211
verification 219
Scenario axes 57–64, 130
Scenario space 24–25, 266
Scenario succession 262
Sensitivity analysis 96–97
Sign balance 183–185
Sign error 187–188
Somewhereland
cross-impact matrix 28
descriptors 22
Standardization 186
Statistics box
connectivity 180
cross-impact ratings 186–187
expansion factor 111
judgment uncertainty 193
number of descriptors 164, 165
number of descriptor variants 175
number of scenarios 108
portfolio diversity 116
zero-value density and scenario count 109
Storylines 227–230
Succession analysis 2, 250, 262
Sum matrix 86–92
Surprises 82–84
T
Tableau format 41
Target descriptor 54, 161, 162, 267
Time management 209
Total impact score (TIS) 49, 268
V
Vacancy 147–150
Validation 211
Variant 23–24
completeness 23–24
exclusivity 23–24
W
Wildcard 82

Footnotes
1 Weimer-Jehle W (2006) Cross-Impact Balances: A System-Theoretical Approach to Cross-Impact Analysis. Technological Forecasting and Social Change 73(4):334–361.

2 Kauffman SA (1969) Metabolic stability and epigenesis in randomly constructed genetic nets. Journal of Theoretical Biology 22:437–467.
Kauffman SA (1993) The Origins of Order. Oxford University Press, New York, Oxford.

3 Weimer-Jehle W (2008) Cross-Impact Balances: Applying pair interaction systems and multi-value Kauffman nets to multidisciplinary systems analysis. Physica A 387(14):3689–3700.

4 Nash JF (1950) Equilibrium points in n-person games. Proceedings of the National Academy of Sciences 36(1):48–49.
Nash JF (1951) Non-Cooperative Games. The Annals of Mathematics 54:286–295.

5 Kosow H, Weimer-Jehle W, León CD, Minn F (2022) Designing synergetic and sustainable policy mixes: a methodology to address conflictive environmental issues. Environmental Science and Policy 130:36–46.
