Introduction To fs/QCA
• Depending on the type of sets involved, QCA is distinguished into three specific techniques:
• Crisp-Set QCA (cs/QCA): The sets used in the analysis are conventional Boolean sets, and each case
either belongs to a set or does not.
• Multi-Value QCA (mv/QCA): Conditions may take multiple values on a nominal scale, allowing for
more than two categories per condition.
• Fuzzy-Set QCA (fs/QCA): Variables are transformed into fuzzy sets in which cases may have partial
membership in addition to full membership and full non-membership, allowing for scores between 0 and 1.
A distinction is made between points of "full membership" and "full non-membership" in a set, with a
crossover point for cases that are neither clearly inside nor clearly outside the set.
DSS Lab
Steps of fs/QCA
• Calibration of raw data using three anchor points for each set
• Necessity analysis for identifying necessary conditions
  • Assignment of causal conditions and outcome
  • Calculation of the consistency of the subset relation between the outcome
    and each causal condition, and XY plot diagrams
• Truth Table analysis for identifying sufficient configurations
  • Assignment of causal conditions and outcome
  • Initial creation of the Truth Table
  • Classification of cases across Truth Table rows
  • Consistency and frequency calculation for every possible logical combination
    of causal conditions
Steps of fs/QCA
Fuzzy sets are calibrated by the researcher based on theoretical and substantive knowledge (Ragin, 2008), taking
into account the concept, definition, and labeling of each set. The final result is the detailed membership
calibration of the cases in sets, with scores ranging from 0 to 1.
There are two main calibration methods: the Direct Method and the Indirect Method.
• In the Direct Method, the researcher specifies the values of an interval scale that correspond to the three
qualitative breakpoints that structure a fuzzy set: full membership, full non-membership, and the
crossover point. These three benchmarks are then used to transform the original interval-scale values into
fuzzy membership scores.
• In the Indirect Method, the external standard used is the researcher’s qualitative assessment of the degree
to which cases with given scores on an interval scale are members of the target set. The researcher assigns
each case to one of six categories and then uses a simple estimation technique to rescale the original
measure so that it conforms to these qualitative assessments. The end product of both methods is the fine-grained
calibration of the degree of membership of cases in sets, with scores ranging from 0.0 to 1.0.
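As a sketch of the Direct Method, the snippet below applies the log-odds transformation commonly associated with Ragin's approach; the anchor values (3, 5, 7) and the function name are illustrative assumptions, not taken from the original text. Log odds of ±3 correspond approximately to membership scores of 0.95 and 0.05.

```python
import math

def calibrate_direct(x, full_out, crossover, full_in):
    """Direct-method calibration sketch: map a raw interval-scale value x
    to a fuzzy membership score via log odds, using three anchor points."""
    # log odds of +3 / -3 correspond to memberships of ~0.95 / ~0.05
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_in - crossover)
    else:
        log_odds = 3.0 * (x - crossover) / (crossover - full_out)
    return 1.0 / (1.0 + math.exp(-log_odds))

# hypothetical anchors: full non-membership at 3, crossover at 5, full membership at 7
for raw in (3, 5, 7):
    print(round(calibrate_direct(raw, 3, 5, 7), 2))  # 0.05, 0.5, 0.95
```

Values at the three anchors reproduce the three qualitative breakpoints, and intermediate raw values receive smoothly graded scores in between.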
Calibration
In contrast to the Direct Method, the Indirect Method is based on grouping cases into broad categories according to
their degree of membership in the sets under study.
The researcher performs an initial classification of cases at different levels of membership, assigns them preliminary
membership scores, and then refines those scores using a simple estimation technique to rescale the original
measure.
The first and most important step of the Indirect Method is to classify the cases qualitatively, according to their
presumed degree of membership in the target set. This qualitative classification can be preliminary and open to
revision; however, it should be based as much as possible on existing theoretical and substantive knowledge.
Both methods give fine-grained membership scores, based either on qualitative anchor points (Direct Method) or
qualitative groupings (Indirect Method).
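A minimal sketch of the Indirect Method's rescaling step, under stated assumptions: the raw scores and six-category preliminary assignments are hypothetical, and a cubic least-squares fit stands in for the fractional logit regression often used as the "simple estimation technique".

```python
import numpy as np

# hypothetical raw interval-scale measures and the researcher's preliminary
# six-category membership assignments (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
raw = np.array([1.0, 2.5, 4.0, 5.5, 7.0, 9.0])
prelim = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])

# simple estimation technique: cubic least-squares fit (a stand-in for a
# fractional logit regression), then clip predictions to the [0, 1] interval
coeffs = np.polyfit(raw, prelim, deg=3)
calibrated = np.clip(np.polyval(coeffs, raw), 0.0, 1.0)
print(calibrated.round(2))
```

The fitted curve smooths the researcher's coarse six-category judgments into fine-grained membership scores that still respect the qualitative ordering.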
Calibration
Mendel & Korjani, "A new method for calibrating the fuzzy sets used in fsQCA," Information Sciences 468 (2018)
Membership function
[Figure: S-shaped membership function over raw values 0–10, with anchors x1 = 3, x2 = 5, x3 = 7 corresponding to membership scores of 0.05 (full non-membership), 0.50 (crossover), and 0.95 (full membership).]
Necessary conditions
The consistency measure for necessary conditions assesses the degree to which
the empirical information at hand is in line with the statement of necessity, i.e.,
how far the outcome can be considered a subset of the condition. As in the case
of sufficiency, with fuzzy sets, the parameter takes into account both how many
cases deviate from the pattern of necessity and how strongly they deviate.
The formulas for consistency of necessity, on the one hand, and coverage of
sufficiency, on the other, are mathematically identical but have different
substantive interpretations.
The coverage measure for necessary conditions is better interpreted as a
measure of the relevance of a necessary condition. High values indicate
relevance, whereas low values indicate trivialness. Conditions that pass the
consistency test as a necessary condition should not be deemed to be relevant
necessary conditions unless they also obtain a high value in the relevance
measure.
The coverage measure for necessity captures only one source of trivialness,
though. It detects whether the outcome set is much smaller than the condition
set but is not capable of capturing whether both the condition and the outcome
are (close to) universal sets.
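The consistency and coverage measures for necessity described above can be sketched as follows; the membership scores are illustrative, and the formulas follow the standard fuzzy-set definitions (sum of minima divided by the sum of outcome or condition memberships, respectively).

```python
def necessity_consistency(x, y):
    """Degree to which outcome Y is a subset of condition X:
    sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

def necessity_coverage(x, y):
    """Relevance of a necessary condition:
    sum(min(x_i, y_i)) / sum(x_i). Low values indicate trivialness."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# hypothetical membership scores for five cases
x = [0.9, 0.8, 0.7, 0.9, 0.6]  # condition
y = [0.8, 0.6, 0.7, 0.5, 0.4]  # outcome
print(round(necessity_consistency(x, y), 3))  # 1.0: Y is fully inside X
print(round(necessity_coverage(x, y), 3))     # 0.769
```

In this toy data every outcome score is below the condition score, so consistency is perfect, while coverage below 1 reflects the condition set being larger than the outcome set.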
Sufficient conditions
Consistency provides a numerical expression for the degree to which the
empirical information deviates from a perfect subset relation. This information
plays a crucial role when deciding which Truth Table rows can be interpreted as
sufficient conditions and can thus be included in the logical minimization process.
A causal condition can be:
• Necessary and sufficient, if it is the only condition that produces the outcome
• Necessary but not sufficient, if it is included in all combinations associated with the outcome but cannot
in itself lead to the outcome
• Sufficient but not necessary, if it is by itself capable of producing the outcome, but there are
other conditions or combinations of conditions that are also associated with the outcome
• Neither sufficient nor necessary, if it produces the outcome only in conjunction with other conditions
(INUS conditions). Thus, there may be paths leading to the outcome that do not include the
condition at all, or that include the negation of the condition
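A sketch of the standard fuzzy-set consistency measure for sufficiency (sum of minima over the sum of condition memberships), with illustrative scores; note that it mirrors the necessity coverage formula, with the roles of condition and outcome swapped.

```python
def sufficiency_consistency(x, y):
    """Degree to which condition X is a subset of outcome Y:
    sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# hypothetical membership scores for three cases
x = [0.3, 0.6, 0.8]  # condition
y = [0.5, 0.9, 0.7]  # outcome
print(round(sufficiency_consistency(x, y), 3))  # 1.6 / 1.7 ≈ 0.941
```

The third case deviates from the subset pattern (0.8 > 0.7), and the measure penalizes it in proportion to how strongly it deviates, exactly as the text describes.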
Causal Complexity In Set-Theoretic Methods
Three elements render the specific form of causality in QCA particularly relevant: equifinality refers to the
characteristic that various (combinations of) conditions imply the same outcome; conjunctural causation
draws our attention to the fact that conditions do not necessarily exert their impact on the outcome in
isolation from one another, but sometimes have to be combined in order to reveal causal patterns;
asymmetrical causation implies that both the occurrence and the non-occurrence of social phenomena require
separate analysis and that the presence and absence of conditions might play crucially different roles in
bringing about the outcome.
These aspects also enable us to analyze INUS and SUIN conditions with the help of QCA. INUS conditions are
defined as insufficient but necessary parts of a condition which is itself unnecessary but sufficient for the
result; SUIN conditions refer to sufficient, but unnecessary, parts of a factor that by itself is insufficient, but
necessary, for the result.
Necessity analysis
A causal condition can be claimed to be necessary for the occurrence of the outcome, when it can be
proved that the outcome fuzzy-set membership scores are a subset of the membership scores of this causal
condition.
In other words, the fuzzy-set membership scores of the outcome should be consistently smaller than or
equal to the membership scores of the causal condition considered as necessary.
The conditions that will be identified as necessary should be taken into account as necessary conditions for
the outcome, and therefore they should be present in every combination of causal conditions that lead to
the outcome (Ragin, 2009).
In order to argue that a causal condition is almost always necessary for an outcome, the consistency of the
corresponding subset relation should be high (consistency > 0.9).
In addition to consistency, the coverage of this relation should be greater than 0.5, as a consistently
necessary condition with very low overall coverage can be considered empirically irrelevant (Ragin, 2006).
Truth Table analysis
The key tool for systematic analysis of causal complexity is the “Truth Table.” Crisp Truth Tables list the logically
possible combinations of dichotomous causal conditions along with the outcome exhibited by the cases
conforming to each combination of causal conditions.
In a Truth Table, the rows (each representing a different combination of causal conditions) may be numerous,
for the number of causal combinations is an exponential function of the number of causal conditions (number
of combinations = 2^k, where k is the number of causal conditions). In effect, a crisp Truth Table turns k
presence/absence causal conditions into 2^k configurations.
The Truth Table approach considers all logically possible combinations of conditions, covering both their
presence and their absence. Thus, the Truth Table approach allows for the possibility that different causal
recipes may operate when a given condition is present versus absent.
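The 2^k enumeration can be sketched directly; the three condition names below are hypothetical.

```python
from itertools import product

conditions = ["A", "B", "C"]  # k = 3 hypothetical causal conditions

# every logically possible configuration of presence (1) / absence (0)
rows = list(product([0, 1], repeat=len(conditions)))
print(len(rows))  # 2**3 = 8
for row in rows:
    print(dict(zip(conditions, row)))
```

Each emitted dictionary is one Truth Table row; with ten conditions the same loop would already yield 1,024 rows, which is why the table grows so quickly.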
Truth Table analysis
The goal of Truth Table construction is to identify explicit connections between combinations of causal
conditions and outcomes. Using the Truth Table, it is possible to assess the sufficiency of all logically possible
combinations of presence/absence conditions (the 2^k causal configurations) that can be constructed from a
given set of k causal conditions. The combinations that pass the sufficiency test are then logically simplified in a
bottom-up fashion.
Truth Tables also discipline the process of learning about cases and the effort to generalize about them. It is
often difficult to identify causal ingredients that must be absent when studying only positive instances of an
outcome.
fs/QCA constructs a conventional Boolean Truth Table from fuzzy-set data and then uses this table to unravel
causal complexity. This technique takes full advantage of the gradations in set membership central to the
constitution of fuzzy sets and is not predicated upon a dichotomization of fuzzy membership scores.
Truth Table analysis
The Truth Table Algorithm is the central tool for analyzing sufficient conditions and consists of three steps.
First, the data matrix is converted into a Truth Table. Second, each Truth Table row is classified either as a
logical remainder, as consistent for the outcome of interest, or as not consistent. Third, the Truth Table is
logically minimized.
The Truth Table Algorithm can be applied to both crisp and fuzzy sets. Note, however, that dichotomizing fuzzy
sets and executing a crisp-set analysis leads to different results. The outcome and the non-occurrence of the
outcome have to be analyzed separately.
The Truth Table Algorithm does not reliably identify necessary conditions; necessity should be analyzed in a separate step.
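The first two steps can be sketched as follows: each case is assigned to the single Truth Table row in which its fuzzy membership exceeds 0.5. The data and function names below are illustrative assumptions.

```python
def row_of_case(memberships):
    """Step 1 sketch: assign a case to the Truth Table row in which its
    fuzzy membership exceeds 0.5 (condition coded 1 if membership > 0.5)."""
    return tuple(1 if m > 0.5 else 0 for m in memberships)

def row_membership(memberships, row):
    """Fuzzy membership of a case in a given row: the minimum, over
    conditions, of the membership (if present) or its negation (if absent)."""
    return min(m if bit else 1 - m for m, bit in zip(memberships, row))

# hypothetical cases with fuzzy memberships in two conditions
cases = {"c1": [0.8, 0.3], "c2": [0.7, 0.6], "c3": [0.2, 0.9]}
for name, ms in cases.items():
    row = row_of_case(ms)
    print(name, row, round(row_membership(ms, row), 2))
```

Row consistencies computed over these memberships would then drive step 2 (classifying rows as consistent, inconsistent, or logical remainders) before minimization.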
Types of solutions in fs/QCA
Depending on the treatment of simplifying assumptions (counterfactuals) in fs/QCA, Truth Table analysis yields three different
solutions:
The causal recipes included in these solutions may differ more or less from one another, but they are always equivalent in terms of
logical truth and never contain contradictory information (Ragin & Sonnett, 2005; Ragin, 2008).
• Complex Solution
The Complex Solution does not allow any simplifying assumptions in the analysis. This makes it difficult to reduce
the complexity of the solution terms, so it achieves the least simplification of the data, especially when
there is a relatively large number of causal conditions. Accordingly, this solution is recommended when the number of causal
conditions is not very high.
Types of solutions in fs/QCA
• Parsimonious Solution
The Parsimonious Solution includes all simplifying assumptions (counterfactuals), whether they are based on easy or
difficult counterfactuals, and reduces the solution terms (causal recipes) to include as few conditions as possible.
The terms contained in this solution cannot be left out of any other solution to the Truth Table. Decisions about logical
remainders are made automatically, without taking into account theoretical or empirical knowledge of whether a
simplifying assumption is meaningful. Given such strong assumptions, a Parsimonious Solution should be used only if these
assumptions about logical remainders are fully justified.
• Intermediate Solution
The Intermediate Solution includes only easy counterfactuals as simplifying assumptions to reduce complexity. Thus, it
should not include assumptions that are inconsistent with the theoretical or empirical knowledge of the researcher.
The Intermediate Solution can be interpreted as the Complex Solution reduced by conditions that run counter to the
researcher's fundamental theoretical or empirical knowledge. The reliability of the Intermediate Solution depends on the
quality of the counterfactuals used in the minimization. When simplifying assumptions are used correctly, the
Intermediate Solution is the main reference for interpreting QCA results (Ragin, 2008).