IMPLEMENTING ISO 31000

Working Paper · August 2016
DOI: 10.13140/RG.2.1.3968.7923

David Slater, Cardiff University


Cambrensis, 9/14/2012


Implementing ISO 31000 –
Handling “Uncertainty on Objectives”

Abstract

Enterprise Risk Management, following the ISO 31000 framework, sets out to address
all risks to a business (not just technical and safety risks) on a consistent
basis. Because the standard defines risk differently from the classic QRA approaches
(as the “effect of uncertainty on objectives”), dependency modeling is an ideal tool to use, as it has
many added benefits over current “recipes” that are less rigorous, useful, or effective. This
note sets out an approach which meets all the requirements of the new standard.

Context
Guidance on the Standard (PR4GM4, 2011) lists the benefits that the standard seeks to
provide for enterprises:

• understanding risk and its potential (uncertain) impact on objectives;
• providing information for decision-making;
• facilitating the selection of treatment options;
• identifying the main factors contributing to risk and weak links of a system or organization;
• enabling risk comparisons with other systems, technologies or approaches;
• facilitating communication about risks and uncertainties;
• helping set priorities.

Let’s start with understanding -

Risk and Uncertainty

Anyone running an endeavour - be it a government, army, utility, bureau, infrastructure,
company, project or village fête – will sooner or later need to look at risk.
Though pivotal to the survival of any enterprise, Risk is perhaps the least understood of all
executive disciplines. Most of us shy away even from thinking about it. It sounds horribly
negative – all to do with disasters, unintended consequences and failure. We're
uncomfortable with it and much happier with goals and objectives.
Risk is a term used in different contexts to have different meanings. It's used in finance,
engineering, warfare, politics, project management, insurance, IT, economics, business,
psychology and everyday life.
But what is it? There are as many definitions as there are authors on the subject. Some are
obscure, some vague, some are descriptive and others numerical.
Here are a few:

DS ISO31000 0.1 ©Cambrensis 2012
1. The effect of uncertainty on objectives - ISO 31000 (2009) /ISO Guide 73
2. The potential that a given threat will exploit vulnerabilities of an asset or group
of assets and thereby cause harm to the organization - often described as IT risk.
3. (Probability of Disaster) x (Cost of Disaster) - the expected utility.
4. A state of uncertainty where some of the possibilities involve a loss, catastrophe, or
other undesirable outcome - [see Hubbard 2009]
A wide choice, but, to most of us, risk is more about uncertainty.
Imagine you are an investor, and an applicant comes to you with a business-plan. Suppose
it's a novel enterprise in an untried market whose success depends on many things, and you
suspect that the applicant, in his enthusiasm might be entertaining unjustified optimism. You
would rightly be concerned with the possibility of some unforeseen or ill-understood
circumstance that could overturn all the assumptions behind the business model and take
the applicant by surprise, and thereby cause the enterprise to fail.
All this leads to the following observations.

Risk is about goals


You may need to think about it, but with no goals there is no risk, and the term risk only
makes sense in terms of those goals.
For instance if your goal is to stay alive then a dangerous cliff might be a risk, but if you are
bent on suicide then the cliff could be an opportunity, and a safety net might be a threat.

Goals depend on other things for their success


Pick a goal, any goal – like throwing a successful party. This might depend on friends being
available on the day; maybe on the weather; on there being no industrial action preventing
people travelling; on the reliability of the sound system; on the caterers turning up; on you or
some close relative not being taken ill at the last moment; on there not being some sudden
National emergency … You get the picture.

Risk is about “things we cannot control, predict or understand”


We cannot control all the things we depend on. If we could control everything we depend on
there would be no risks. Everything would go according to plan and nothing would take us
by surprise. But we can't. It gets worse. We can't even predict accurately which
dependencies will let us down. If we could predict accurately then at least we wouldn't be
taken by surprise and we could make adjustments to compensate – like the insurer of the
gunpowder factory.
Finally, if we cannot even fully understand all the things that our goal depends on then we
certainly can't control or predict them, and in a sense our risks are even greater.
Risk then is about goals and how their achievement depends on things we may not be able
to control, predict or understand. Let’s try a new definition of terms:

Risk: The degree to which the chances of achieving our goals are affected by things we
cannot control, predict or understand
Dependency: Anything that you rely upon to achieve or sustain your goal or desired state

These definitions fit well with the conceptualisation of risk as suggested by the ISO 31000
standard.

The Process of Risk Management
ISO 31000 sets out a framework for the process below:-

Fig 1 - The ISO 31000 Process

The “Identify Risks” Box is defined as the process of research, recognition and
registration of risk.

GOAL: To identify the reasons why the objectives of the system or organization may not be
achieved – in other words, it’s DEPENDENCIES!

The “Analyse Risks” Box is concerned with both the assessment of the effectiveness
and reliability of “Controls” and the consequences of their failure.

GOAL: To obtain estimates of the level of risk; which in turn, depends on the adequacy and
effectiveness of existing controls (protection or mitigation) on its critical dependencies, or
their consequences (resilience).

This involves answering the following questions:

• What are the existing controls in relation to a particular risk?
• Are these controls able to handle the risk so as to maintain a tolerable level?
• In practice, do the controls work as expected, and can their effectiveness be
demonstrated?
• What is the nature, scale and type of impact that may occur?
• How do these rank in terms of significance?

The “Treat Risks” Box is then about utilizing the insights from the risk analysis to prioritize
actions, to choose the most cost-effective measures to address them based on corporate
criteria, and then to ensure they are monitored and managed more intelligently and
effectively in the light of these insights.

Finally, the importance of communication and consultation is emphasized, to which could
be added the importance of documenting the process to facilitate audit, learning and the
development of corporate memory as continuous improvement.

To this should be added the recording and utilization of interruption/upset and near miss
incidents to inform and improve the effectiveness of reliance on critical dependencies.

The standard then goes on to detail a selection of evaluation techniques which are suitable.

This selection does not currently include what is potentially the most useful approach, one
which is proving increasingly useful and is utilized as a basis (The Open Group, 2012) for
recording and exchanging digital risk information within and among organizations.

Below is set out an outline of how it could be used.

Dependency Modeling
Overview
Most people can define their main goals in an enterprise. It is then relatively easy to identify
various other things on whose contributions success depends. In turn, these can - like the
proverbial fleas - depend “ad infinitum” on yet other dependencies in a fractal-like way. To
help ensure we achieve our goal, we need to identify this critical network of dependencies
and make sure that they deliver.

But at some point they move outside our sphere of influence and control. These
uncontrollable, but critical dependencies are what, in today’s complicated world, catch us out
more and more frequently. The recent financial chaos is a classic example.

This new analytical methodology, “Dependency Analysis”, allows you to capture
and manage dependencies (influences) quantitatively, and provides measures of the
probability of their status. It is essentially about building a Bayesian Belief Net, which in
practice is as simple as quantitative “mind mapping”.

These influences can then be expanded into a set of dependencies and analyzed using
dependency analysis as to their probable status and the sensitivity of the goal to each
one.

Changes in the status of any one of these influences can be instantly translated into a new
“Risk picture” (it is even possible to do this in real time, as in “watching the shop”).
The potential of the tool can be further illustrated by looking at a model of the Fukushima
dependencies –

Figure 2 – Fukushima’s Failure – Likely Cause?

From an analysis point of view, the tool allows us to set the top “goal” to fail, and it provides a
ranked list of the likelihood that the cause will be one of the identified dependencies. If we apply
dependency analysis to identify the critical dependencies “most likely” to be a cause of not
meeting our goal, we get a series of possible contributors, ranked in order of likelihood.

Figure 3 – Possible causes of failure, ranked in order of likelihood

The Blackett review (OCSA, 2012) implied the Japanese dismissed the overtopping as
inconceivable. In fact an 11 metre tsunami was deemed possible, but not probable enough
to justify (more expensive?) mitigation measures.
If we accept that we can never be 100% certain, then this analysis is a most useful
presentation of potential failures/causes ranked by probability/significance. It also, crucially,
allows interaction of interdependent models from systems of systems.
The approach has been developed through research funding from CPNI (Centre for the
Protection of National Infrastructure) and the TSB (Technology Strategy Board).

The protocols for exchanging risk information securely are being rolled out by The Open
Group as a global standard. A series of workshops is currently helping customize the tool for
specific applications, from Cyber security to Scenario building.

Because of its simple elegance and intuitive ease of use, this has provided a powerful new
addition to the tools available for addressing the problems that today’s complex systems
throw at us.

As a Vehicle for achieving ISO 31000
1. Establish the Context

“Dependency modeling starts with understanding and defining our Objectives.”(Context)


An objective must have all of the following characteristics:

• It is desirable or necessary - a sort of goal. We deal in goals, not problems. To mix
good and bad outcomes would require the logic to include NOT, NOR, NAND etc. and
confuse the already overloaded user.
• It is abstract, not material - i.e. to "keep out the bad guys", not "an access control
system", because you can have an access control system but still not keep out the
bad guys. Keeping out the bad guys is the goal. This distinction also deals with
congenital box-tickers.
• It is precisely defined. In other words we're not using "abstract" to mean "vague". A
well-defined goal would be along the lines of "to prevent attackers or their agents
from entering company premises, or introducing malicious software into company
computers or networks, at any time between midnight January 1st 2011 and midnight
January 1st 2012, or to detect their presence in a sufficiently timely manner as to be
able to react in such a way as to prevent any losses as catalogued in list 1".
• It must have at least two, distinct, named, ordered states, indexed in increasing order
of desirability. This is necessary to allow the automatic inference of whether a goal
has succeeded or failed, and to measure risk.
• These states must have meaningful, statistical probabilities which, if the goal has
immediate dependencies, will be predicated on the states of those immediate
dependencies.
• Values of 0 or 1 are meaningful permitted probabilities for goals with dependencies,
but not for leaf dependencies, since in the latter case they are equivalent respectively
to non-existence or non-dependence.
• Leaf goals are assumed to be statistically independent. Any statistical
interdependence must be explicitly expressed by cross-linking a common cause,
such as one or more extra, common goals.
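The characteristics above can be sketched as a tiny data structure. This is my own minimal illustration, not code from any dependency-modeling tool; the class and field names are hypothetical.

```python
# Sketch of a goal with distinct, named, ordered states (increasing
# desirability), each carrying a meaningful probability.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    states: list          # ordered, e.g. ["failed", "achieved"]
    probabilities: list   # one probability per state, summing to 1

    def p_success(self):
        # The last state is the most desirable, i.e. success
        return self.probabilities[-1]

# Illustrative numbers only
keep_out = Goal("keep out the bad guys", ["failed", "achieved"], [0.02, 0.98])
print(keep_out.p_success())
```

Ordering the states by desirability is what lets a tool infer success or failure automatically, as the bullet list requires.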

2. Identify the Risks

To set a framework for the analysis, an objective-driven work breakdown structure (ODWBS)
approach is used. We define a Top “Goal” (an entity in a value chain) as a coherent set
of aspirations, and can assign the necessary performance requirements for its
successful operation as “Dependencies”. At some point these fall outside the scope of
our direct control (a “Leaf” dependency). The status of this leaf could be the
responsibility of another business unit, e.g. a “downstream” raw materials supplier, or
service provider. These supplier, or dependency, functions can then be subsequently
modeled, or their status notified to the “upstream” model. This is illustrated in the Annex
example – Home Alone. The analysis can then be extended at will, both upstream and
downstream, and the connections (identified dependencies) are made automatically
wherever they occur. Annex 1 shows an illustrative application to an objective everyone
can relate to (my home’s OK).

3. Analyze the Risks

The previous section has provided us with a quantified “mind map” of critical dependencies,
the failure of any one of which can cause us to fail to meet our objectives. This Cause
can therefore result in Consequences for the business. The impact on the business can
then be modeled by identifying the sequence of events that follows. One convenient way,
borrowed from incident analysis (Slater, 2012), Reason’s Swiss cheese (Reason) and
“Bow ties” (Slater), is shown below.

Fig. 4 – Conventional accident sequence (Reason, Bow Ties): a Cause passes the
Protective Barriers to become a Failure Event, whose Consequences are limited by the
Mitigating Barriers

There is a great deal of confusion about the terms used by different people in any formal
study of Risk. Almost every industry and new “Standard” reinvents a glossary that that
particular group (but probably no other?) recognizes as definitive. We thus have no
alternative but to try and avoid ambiguity by defining the terms yet again as we move
through the development of an issue.

• Unexpected (Uncertain) Incidents, or perturbations, in systems can lead to failure
to meet our objectives.
• A potential Threat is a Hazard, which if realized (with a probability of occurrence,
F1) can Cause such a failure of a critical dependency.
• The initiating cause exploits a Vulnerability (the probability of which is the
probability of the occurrence times the probabilities of failure on demand (PFD_P’s) of
any protective “Barriers”).
• This results in an Event, which is usually identified as the point at which control of
the system is lost.
• Loss of control can lead to a number of outcomes, with different consequences,
and with different probabilities of occurrence.
• These are the Event probability (Vulnerability) times the probabilities of Failure on
Demand (PFD_M’s) of any Mitigation “Barriers” (the system’s Resilience).
• An event can also be the Cause of another similar sequence, and so on if it keeps
interacting and initiating further events – Domino effects.
• The “effectiveness”, the product of all these individual PFD’s (“Layers of
Protection (LOPA)”), is a measure of the “System Integrity Level” (= F1 × ΠPFD’s)

• The effectiveness of the protective Barriers (e.g. ABS) defines the System’s
Vulnerability (= ΠPFD_P’s), and its Resilience (= 1/ΠPFD_M’s) is due to the
mitigating measures (e.g. seat belts and air bags).
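The barrier arithmetic in the terms above can be sketched numerically. The frequencies and PFD values below are illustrative only, not taken from the paper:

```python
# Layers-of-protection arithmetic: an initiating-event probability F1
# multiplied through the probabilities of failure on demand (PFDs) of
# the protective and mitigating barrier layers.
from math import prod

f1 = 0.1                      # probability the hazard is realised (F1)
pfd_protective = [0.1, 0.05]  # protective barriers (prevention)
pfd_mitigating = [0.2]        # mitigating barriers (resilience)

vulnerability = prod(pfd_protective)            # Π PFD_P
event_probability = f1 * vulnerability          # chance control is lost
resilience = 1 / prod(pfd_mitigating)           # 1 / Π PFD_M
system_integrity = f1 * prod(pfd_protective + pfd_mitigating)  # F1 × Π PFD

print(vulnerability, event_probability, system_integrity)
```

Note how each extra barrier multiplies another PFD into the chain, so the final event probability falls geometrically with the number of independent layers.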

This effectively summarizes the terms commonly encountered but rarely defined in Risk
studies.

Probability of Failure

We can utilize the dependency model to give a measure of the risk (the uncertainty) that our
objective will not be achieved. The protection is then the additional dependencies that can
be added as a “back up” (e.g. the spare TV in the Home Alone example). The standard
refers to these as the “Controls”.

From the Bow ties and Barriers model, it is normal to look for three types of “Control”:-

• “Hard” engineered or system devices, e.g. pressure relief valves;
• Warning or alarm systems which rely on intervention or response; and
• “Soft” measures which rely on observing procedures, or training.

These can be modeled as separate systems with hard and soft dependencies but linked to
our top model to modify the leaf dependency failure probability. Annex 2 shows how one
system can have dependencies in common which can be automatically updated in every
model so linked.

Fig. 5 – “Controls” (Barriers 1-3) modifying the probability of a Cause leading to a Failure
Event

Consequences or Impact

In Dependency modeling, the “Goal” can have any number of states, from the bare two (on or
off) to a series of intermediary states reflecting partial degradation or reduction in
effectiveness. An example is a power station which can supply, say, 50% or 25% of the
output on which a customer depends. This is calculated from a condition table in which
combinations of input states are defined, rather than just the basic and/or logic. Thus the
Failure event can have a range of expectation probabilities (inverse uncertainties) of
different defined impacts on the business, depending on the different states of the causes
and barriers.

We can even add a state of “Uncertainty”, which we can represent as a third band (On, Off
and Don’t Know), sometimes known as an “Italian Flag” representation.

The idea of multiple failure mode specification is described in Annex 3
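A condition-table node of the kind just described can be sketched as follows. The two generating units, their availabilities, and the table itself are my own illustrative assumptions, not the paper's example:

```python
# Sketch of a CUSTOM-type (condition-table) node: a power station goal
# with three ordered output states rather than plain AND/OR logic.
from itertools import product

p_unit_up = [0.95, 0.90]  # illustrative availabilities of two units

# Condition table: combination of unit states -> output state
table = {(True, True): "100%", (True, False): "50%",
         (False, True): "50%", (False, False): "0%"}

# Sum the probability mass landing in each output state
out = {"0%": 0.0, "50%": 0.0, "100%": 0.0}
for u1, u2 in product([True, False], repeat=2):
    p = (p_unit_up[0] if u1 else 1 - p_unit_up[0]) * \
        (p_unit_up[1] if u2 else 1 - p_unit_up[1])
    out[table[(u1, u2)]] += p

print(out)
```

The table replaces a single AND/OR gate, which is exactly what allows the intermediate "partial degradation" states the text describes.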

4. Record and communicate the Results

The dependency model has been created in the cloud and is a permanent display of the
Risk Analysis. It is a shared, editable, interrogatable record of the nature, identity and status
of the critical dependencies of the system / Business Unit / Organization, visible if so wished
across the whole organization.

In practice each level would have its own model, drilling down to outlying dependencies only
where appropriate. This could easily be extended to supply chains.

5. Monitoring and Management

The Open Group Protocol (The Open Group, 2012) specifies a data format (an XML file) that
can be imported into a dependency model as a leaf dependency. This supplies the status data
for the model, and can be obtained from the cloud if the URL is known, or can be a dedicated
sensor for that particular application (e.g. power or not from the local distribution board). Now
we have bypassed the need to estimate the probability of having a power supply (or not!):
we know, and all the relevant models are updated instantaneously.
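The idea of a live status feed replacing an estimated probability can be sketched as below. The Open Group protocol's actual XML schema is not reproduced here; the element names (`leafStatus`, `dependency`, `state`) are hypothetical, purely to illustrate the mechanism:

```python
# Importing a leaf's actual status (hypothetical XML layout) instead of
# estimating its probability: a known state becomes a certainty.
import xml.etree.ElementTree as ET

feed = """<leafStatus>
  <dependency>mains_power</dependency>
  <state>on</state>
</leafStatus>"""

root = ET.fromstring(feed)
leaf = root.findtext("dependency")
# "on" means success with certainty (1.0); anything else, failure (0.0)
p_success = 1.0 if root.findtext("state") == "on" else 0.0
print(leaf, p_success)
```

In a real deployment the `feed` string would be fetched from the sensor's URL, and the resulting 1.0 or 0.0 would overwrite the leaf's a priori probability in every linked model.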

This can give management their customary “dashboard” display of key dependencies (Risks)
and their actual status; only now we can give it to them in real time, right across the
organization, globally if necessary, through the cloud!

6. Incident reporting and Organizational Learning

Just as we can use dependency modeling in reverse (i.e. set the top goal as failed and look
for the most likely cause, as in the Fukushima example), we can use real performance data to
calibrate our “Barrier” effectiveness probabilities and institute actions on a focused, justified
basis, as the extent and mode of performance improvement can be demonstrated.

In a recent application to a Blast furnace, this technique proved itself in diagnosing effects
which were known anecdotally but could not be justified by conventional analytical
techniques.

Finally the model can be utilized in a “what if” mode to assess the effectiveness of different
emergency planning scenarios to increase the resilience of the organization.

Conclusions

We believe that Dependency modeling offers Risk Managers a new, more powerful tool
than those currently specified, which meets the spirit and aspirations of the standard, and can
be developed into a truly “Enterprise Wide” Risk Management system of immediate use and
benefit to hard-pressed management who perhaps do not have time to read all the reports
and the ring binders.

DS

Bibliography
OCSA. (2012). Blackett Review of Low Probability, High Consequence Risks. London: UK Gov't.

PR4GM4. (2011). Risk Assessment Techniques ISO 31000.

Reason, J. (n.d.).

Slater, D. (n.d.).

Slater, D. (2012). Rooting out the Cause.

The Open Group. (2012). Dependency Analysis. The Open Group.

The Open Group. (2012). Protocol for exchange of trusted digital risk information. The Open Group.

Annex 1 – Example Application:
Home Alone!

Interdependencies in a home

There is a main goal, called home OK, and it depends upon the provision of entertainment,
heating and nutrition.

Entertainment (by which we merely mean the ability to watch TV) in turn depends on the
availability of TV, an antenna and electricity.

Electricity in turn depends on mains supply and wiring. (Notice that after its first appearance,
electricity is shown with three dots, meaning that the dependencies of electricity are the same
as before, but for compactness are not repeated.)

The rest of the diagram should be more or less self evident.

Only the names of the interdependent entities are shown, neither their nature nor all the types
of relationships, but nevertheless these details are present in the model and can be viewed and
changed. We'll come back to this later.

Notice that all the entities in the boxes are goals in some sense, in other words they are
something we must have, desire or need. They can also represent things we already have and
the goal is not to lose them. None of them represents a problem. With this formulation
problems are merely unsatisfied goals.

Terminology
Notice that some of the entities have dependencies - i.e. entities joined to their right - the
things they most immediately depend on.

For instance entertainment has the dependencies TV, antenna and electricity. Conversely
electricity etc. have entertainment as a dependent. The main goal, home ok can also be
thought of as the root (as in a tree), while lesser goals with dependencies, such as
entertainment are sometimes called branches. Finally some entities (e.g. mains supply) don't
have dependencies at all, so to extend the tree metaphor they are said to be leaves.

Leaves are very important. Sometimes they are called uncontrollables to emphasize that they
are usually beyond our control. For the same reason they can be described as given. So
although some of our goals depend on a mains electricity supply, nevertheless this is
something we are simply "given", and as far as we are concerned it is an uncontrollable
because we can't change its properties.

It turns out that all risk is due either to uncontrollables or to incomplete information and we
shall return to this when we come to measure risk.

(Note that some authors use a dynastic metaphor whereby if C has a dependency P then P is
said to be a parent of C, while C is a child of P. In other words dependency = parent and
dependent = child.)

Probability of achieving goals


The probabilities of achieving all the goals are shown as coloured bands in this diagram:

Probabilities of achieving all the goals

This model is a simple one where each entity has only two possible states - success shown in
green and failure shown in red. The widths of the colour bands show the probabilities that
each entity will be found in these states. It is possible to have as many states as we like, but
for simplicity in this model all entities have just two.

States are always arranged in increasing order of desirability, with the first state representing
failure and the last state meaning success, and any in-between states, if any, representing
some intermediate condition.

Data for the model


These bands result from calculations based on just two pieces of information supplied as part
of the model:

• Base probabilities for the leaves
• Branch relationships

Leaf base probabilities

A leaf has a priori probabilities specified for each state. For instance mains supply may have
two states called (say) off and on with probabilities of (say) 0.1% and 99.9% respectively,
meaning that during a particular period (of perhaps a week) there is a 0.1% probability of a
non-trivial power outage and a 99.9% probability of uninterrupted service. We can't change
the properties of the mains supply, so it's given and uncontrollable.

(Note that sometimes we will use percentages and sometimes decimal fractions to express
probabilities. So 0.5% could also be written as 0.005.)

Branch relationships

There are three kinds of branch relationships, called AND-type, OR-type and CUSTOM-type.
The AND and OR types are indicated by small words above the links.

The branch called electricity is an AND-type. By this we mean that for electricity to be in its
on state it needs both mains supply AND wiring to be working (i.e. to be in their success
state).

The branch TV has an OR-type relationship. We have a functioning TV if either the main TV
OR the spare TV, or both are working.

It is important to grasp that AND-type relationships enormously increase risk while OR-type
relationships greatly reduce it. In other words all relationships are not born equal.

There is an infinite number of possible relationship types, and all the ones that are not AND
or OR types are lumped together under the heading CUSTOM-type.

But we can make a lot of headway with just two states for each entity, and just the two types
of relationship, AND and OR.
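The two branch types can be sketched directly. The entity names follow the Home Alone example above, but the leaf probabilities below are illustrative assumptions, not the model's actual data:

```python
# Two-state AND/OR propagation: each value is a probability of success.

def p_and(children):
    """AND-type branch: succeeds only if every child succeeds."""
    p = 1.0
    for c in children:
        p *= c
    return p

def p_or(children):
    """OR-type branch: fails only if every child fails."""
    q = 1.0
    for c in children:
        q *= (1.0 - c)
    return 1.0 - q

# Leaf base probabilities of success (given, uncontrollable)
mains_supply, wiring = 0.999, 0.995
main_tv, spare_tv, antenna = 0.98, 0.90, 0.99

electricity = p_and([mains_supply, wiring])      # needs both
tv = p_or([main_tv, spare_tv])                   # either will do
entertainment = p_and([tv, antenna, electricity])

print(f"P(entertainment succeeds) = {entertainment:.4f}")
```

Notice the asymmetry the text points out: the AND branch is always weaker than its weakest child, while the OR branch is always stronger than its strongest one.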

What if something changes?


The model is completely under our control so we can experiment with making changes and
see what difference it makes. That is one of the ways we come to understand the risks to our
enterprise.

For instance we can specify that the spare TV is not working. Here's how the top part of the
diagram changes:

With spare TV not working

We see that the red sections of home OK, entertainment and TV have all increased a bit.

The spare TV was an uncontrollable. We can also specify that one of our goals is not
achieved. For instance if instead of spare TV we specify that entertainment was found to be
not achieved, then the picture changes very dramatically:

Entertainment not achieved

Now we see that our main goal home OK fails. But looking more closely we see that other
entities, for instance electricity and antenna have increased probabilities of failing too. Why
is this?

It's because the model has automatically adjusted itself to take account of the fact that
entertainment failed. It provides in effect an explanation totally consistent with all the
available data. In other words it's almost as if the model is trying to explain the likely causes
of the failure of entertainment.

This is made possible because it has a Bayesian engine to recalculate all the statistics.
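This Bayesian recalculation can be sketched by brute-force enumeration over the same Home Alone fragment (leaf probabilities again illustrative). Conditioning on the observed failure of entertainment raises the posterior probability that electricity was down:

```python
# Condition a tiny AND/OR net on observed evidence (entertainment
# failed) by enumerating all leaf states and renormalizing.
from itertools import product

leaves = {"mains": 0.999, "wiring": 0.995,
          "main_tv": 0.98, "spare_tv": 0.90, "antenna": 0.99}

def entertainment_ok(s):
    electricity = s["mains"] and s["wiring"]      # AND
    tv = s["main_tv"] or s["spare_tv"]            # OR
    return tv and s["antenna"] and electricity    # AND

post_elec_fail = 0.0   # mass where electricity failed AND evidence holds
evidence = 0.0         # total mass consistent with the evidence
for states in product([True, False], repeat=len(leaves)):
    s = dict(zip(leaves, states))
    p = 1.0
    for name, ok in s.items():
        p *= leaves[name] if ok else 1.0 - leaves[name]
    if not entertainment_ok(s):                   # evidence: failed
        evidence += p
        if not (s["mains"] and s["wiring"]):      # electricity down too
            post_elec_fail += p

prior_elec_fail = 1 - 0.999 * 0.995
posterior = post_elec_fail / evidence
print(prior_elec_fail, posterior)
```

The posterior failure probability of electricity jumps well above its prior, which is exactly the "explanation of likely causes" behaviour described above; a real tool uses a proper Bayesian engine rather than enumeration, which scales exponentially.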

Annex 2 – El Metodo
“Consider a Business unit (functionality) that provides IP-connectivity between various
locations.” Its manager defines the “objective” or goal of this functionality, and models what
he is in turn dependent on to meet his obligation.

Examples:

• IP-connectivity needs to be provided for at least 99.9% of the time.
• IP-connectivity services need to comply with the Data Protection Act.
• Annual revenue of the IP-connectivity Unit should be at least € R.

A manager can only meet his “goal” if he can rely on his dependencies being fulfilled, either:

• within his own scope of control (internal dependencies),
• or by some other business unit or functionality (external dependencies)”.

Below is a screenshot of the simple iDEPEND model of this case. We can see that he is
dependent on his network infrastructure working, and on his “boxes” functioning correctly and
reliably (maintained and serviceable); which requires, for example, the routers to be
available, and an electric power supply connected and working.

The System for providing IP connectivity.

Fig. 1: Linking paragons (Obligations) to Goals and Dependencies in unit IP connect

In turn, this depends on the glass fibre network providers doing their job (twice, for both an
A-net and a B-net). These units can now be modeled in the same way, with the paragons
A Net OK and B Net OK being common dependencies of the two separate systems: IP
connect and the Glass Fibre provider systems.

Fig. 2: Linking Obligations and Expectations Between Functionalities in the separate systems
(the A-net and B-net Glass Fibre system models, each including optical transponders)

In the Dutch paper, “Figure 2 illustrates the linking of expectations of the functionality ‘IP-
connect’ with obligations of the functionality ‘Glass’. Not only does this provide insight into
functional dependencies, it also provides a means to automate risk management, as we
shall see in the next section”.
iDepend clearly accomplishes these requirements and more.

Results and Discussion

In the modeling sequence, the software prompts the user to specify not only the gate logic
(And/Or) for the dependencies, but also the probabilities of the dependency paragons being
in the states required (expectations). This is accomplished by typing in a dialogue box, or
estimating from experience using a “slider”. The availability of A Net and B Net is
automatically updated from the other business unit models, and the Maintenance
performance from the SLA’s.
But the power is an external feed, which can be estimated from experience; however, because
it is readily available from a website or a direct feed from the supply distribution board, it can
be updated in real time as an actual state (on/off), not an estimated probability.

The software can then produce three reports:

• Failure Probabilities,
• Failure Modes, and
• A 3-point sensitivity presentation of the effect of the critical dependencies.

Failure Probabilities

This is displayed in the application as a red/green bar chart, overlaid on the
respective paragons (red means failure, green success). This display is “live” and responds
immediately to any change(s) in the linked paragons involved.

It will also indicate, quantitatively, the key vulnerabilities of the system(s), by setting the top
paragon to fail and observing the failure status of key leaves. The display can then generate
a “Modality” report, which can indicate which dependencies have a direct, unattenuated
effect on the System performance. In the illustration below, it can be seen that maintenance
is a critical Mode 1 dependency.

Failure Mode and Effect Reports

Figure 3 – Failure Modes and Probabilities for IP Connect

Sensitivity Report

This report is a very helpful display of three crucial insights:

1. The predicted (or actual?) probability of the system meeting its objective (99.5%
availability);
2. The relative impact of the critical leaves on the overall failure; and
3. The extent to which improving the performance of a key leaf can improve total
system behaviour.

This is shown below for our three systems:

Figure 4 – the Ranking of Importance and Scopes for the key dependencies of IP Connect

The overall probability of meeting the performance targets for this system is shown as 86%,
with the two most important dependencies being Maintenance and the availability of the
hardware, which makes sense.

Further, the analysis suggests that improving the reliability of the maintenance SLA could
increase the overall performance of the system to over 90%.

On the other hand, the intuitive nervousness about the effect of an interruption of the power
supply has been shown to be effectively addressed by the provision of the alternative UPS
back-up. Improving the reliability of the UPS could reduce this anxiety still further.
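The 3-point sensitivity idea can be sketched as follows: pin one leaf to certain failure, its current estimate, and certain success, and recompute the top-level probability each time. The toy model and leaf values below are illustrative assumptions, not the worked IP Connect figures:

```python
# Illustrative 3-point sensitivity sketch (not the actual iDepend report).
# Hypothetical model: the system needs its network AND maintenance AND
# power, where power is mains OR a UPS back-up.

def system_success(leaves):
    power = 1.0 - (1.0 - leaves["mains"]) * (1.0 - leaves["ups"])
    return leaves["net"] * leaves["maintenance"] * power

def three_point(leaves, name):
    """Top-level success with one leaf pinned to fail / as-is / succeed."""
    results = {}
    for value, label in [(0.0, "fail"), (leaves[name], "current"), (1.0, "succeed")]:
        trial = dict(leaves, **{name: value})
        results[label] = system_success(trial)
    return results

leaves = {"net": 0.97, "maintenance": 0.92, "mains": 0.98, "ups": 0.90}
print(three_point(leaves, "maintenance"))
```

The spread between the "fail" and "succeed" points shows how much leverage a given leaf has over the whole system, which is exactly the comparison the sensitivity report presents graphically.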

Again, it clearly meets the Dutch requirement:

“Ideally, Unit managers will communicate the probabilities of their obligations to the units
that depend on them”,

“For business units within a single company this is feasible because the required
transparency is readily achievable. Communication of probabilities between business units in
different companies (supply chains) may need more coordination. However, the method still
works, although unit managers may need to second guess or assure the probabilities of
external expectations themselves.

Note that if all managers in a value chain really cooperate, they could see the effects that
their risk treatment decisions have in other areas. Also, risk mitigation can be optimized as
risks that are run in a specific unit might well be mitigated by controls implemented in
another scope”.

But the software will do this for them; implemented as a cloud-based portal, it is instantly
updated and accessible across the whole organization. Thus, a chain (or better, a web) of
functionalities can be modeled and the corresponding risks computed automatically.
The potential of the tool can be further illustrated by looking at a model of the Fukushima
dependencies:

Figure 5 - Fukushima Reactors Safe – iDEPEND illustrative example

Annex 3 - Failure Modes
Take a look at this model.

Two ways of organizing a filling-station forecourt

It shows two ways of organizing a garage filling station. Each way is shown as a
different goal.

The garage has four appliances: two pumps and two pay-points (tills). The first setup is
called Islands, where the appliances are organized as two islands, each with a pump and a
till. Each island is an AND-relationship of a pump and a till, while Islands is an OR-
relationship of the two islands.

In the second setup, called Functions, the plumbing and wiring are slightly different so that
either till works with either pump. It is organized by functionality, and there are two functions
called Pumps and Tills. Pumps is satisfied if one or more pumps is working, so it is an OR-
relationship of the two pumps, and Tills is similar for the tills. Functions, however, is an AND-
relationship of Pumps and Tills.

As the diagram shows, Functions is more likely to succeed than Islands, which some people
find surprising. A good way to help people understand why this happens is to show the
failure modes.

These are the four failure modes for Islands:

Four failure modes for Islands

The entities with red borders fail. The failure is caused by the uncontrollables on the right
failing. For instance, the first failure mode is caused by Pump-1 and Pump-2 failing: as a result
both islands fail, and hence Islands fails. These are the two failure modes for Functions:

Two failure modes for Functions

The obvious difference is that Islands has more failure modes than Functions. Moreover,
every failure mode of Functions is also a failure mode of Islands, and so those modes occur
with the same probability in both models.

The fact that Islands has more ways to go wrong is pretty convincing as an explanation,
even for someone with a non-technical background.
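The surprising gap can also be checked numerically. A minimal sketch, assuming each appliance works independently with the given probability:

```python
# Success probabilities of the two forecourt layouts, assuming each
# appliance works independently with the given probability.

def islands(p_pump1, p_till1, p_pump2, p_till2):
    island1 = p_pump1 * p_till1               # AND: its own pump and till
    island2 = p_pump2 * p_till2
    return 1 - (1 - island1) * (1 - island2)  # OR of the two islands

def functions(p_pump1, p_till1, p_pump2, p_till2):
    pumps = 1 - (1 - p_pump1) * (1 - p_pump2)  # OR: at least one pump
    tills = 1 - (1 - p_till1) * (1 - p_till2)  # OR: at least one till
    return pumps * tills                       # AND of the two functions

p = 0.9
print(islands(p, p, p, p))    # ≈ 0.9639
print(functions(p, p, p, p))  # ≈ 0.9801
```

With all four appliances at 0.9, Functions beats Islands (about 0.9801 versus 0.9639), because any working pump can be paired with any working till.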

But what's a failure mode?

It's worth saying what we mean by a failure mode.

If M is a set of one or more uncontrollables and G is a goal, then M is a failure mode of G if
and only if:

 G inevitably fails if every uncontrollable in M fails, but

 G does not inevitably fail if only a proper subset of the uncontrollables in M fails.

The number of uncontrollables in the smallest failure mode M is an important measure of
risk. The phrase "belt and braces" is an everyday expression suggesting a failure mode of
size 2.

If M consists of just one uncontrollable then M is a single-point failure. This is normally
considered a serious risk, because a single item going wrong can cause a disaster.

The larger the number of uncontrollables in the smallest failure mode, the more things have
to go wrong for the goal to fail, and the more resilient and less risky the enterprise.
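Under this definition, failure modes are what reliability engineers call minimal cut sets, and for small models they can be enumerated by brute force. A sketch (the function names are illustrative, not part of any tool):

```python
from itertools import combinations

def failure_modes(uncontrollables, goal_fails):
    """Enumerate minimal failure modes (cut sets) by brute force.

    goal_fails(failed) -> True if the goal fails when exactly the
    uncontrollables in `failed` have failed.
    """
    modes = []
    for size in range(1, len(uncontrollables) + 1):
        for combo in combinations(uncontrollables, size):
            m = frozenset(combo)
            # keep m only if it causes failure and contains no smaller mode
            if goal_fails(m) and not any(mode <= m for mode in modes):
                modes.append(m)
    return modes

# Islands fails when both islands are down, i.e. each island has lost
# its pump or its till.
def islands_fails(failed):
    island1_down = bool({"pump1", "till1"} & failed)
    island2_down = bool({"pump2", "till2"} & failed)
    return island1_down and island2_down

modes = failure_modes(["pump1", "till1", "pump2", "till2"], islands_fails)
print(len(modes))  # 4, all of size 2
```

Because subsets are generated in increasing size, the minimality test only has to check that no already-found mode is contained in the candidate; the four modes found here match the four failure modes shown for Islands above.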

Custom types

Earlier we mentioned that in addition to AND-type and OR-type relationships we could have
CUSTOM-types.

The nature of the relationship is specified by a conditional probability table. Here's the one
for entertainment on the home OK model.

                                 prob of state
 TV    antenna   electricity    no     yes
 no    no        no             [1]    [0]
 no    no        yes            [1]    [0]
 no    yes       no             [1]    [0]
 no    yes       yes            [1]    [0]
 yes   no        no             [1]    [0]
 yes   no        yes            [1]    [0]
 yes   yes       no             [1]    [0]
 yes   yes       yes            [0]    [1]

This is a typical table for an AND-relationship. All the elements have just two states, called
no and yes. The probability of the yes state is zero unless TV, antenna and electricity are
all yes.

The probabilities in each row must add up to 1.

Here is the table for TV, with dependencies main TV and spare TV:

 main TV    spare TV    no     yes
 no         no          [1]    [0]
 no         yes         [0]    [1]
 yes        no          [0]    [1]
 yes        yes         [0]    [1]

This is a typical OR-relationship.

Now there is no reason to limit ourselves to just these two cases. Here is an example
capturing the idea that, even when TV, antenna and electricity are all present, there is still
only a 95% chance that we have entertainment.

                                 prob of state
 TV    antenna   electricity    no        yes
 no    no        no             [1]       [0]
 no    no        yes            [1]       [0]
 no    yes       no             [1]       [0]
 no    yes       yes            [1]       [0]
 yes   no        no             [1]       [0]
 yes   no        yes            [1]       [0]
 yes   yes       no             [1]       [0]
 yes   yes       yes            [0.05]    [0.95]

We can model a wide range of relationships by cunningly specifying the conditional
probability table of entities.
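A sketch of how such a table might be evaluated (an illustrative Python model, not the tool's implementation): represent the CPT as a map from parent states to the probability of yes, then average over the independent parent states.

```python
from itertools import product

# Hypothetical "noisy AND" CPT for entertainment: P(yes) is 0 everywhere
# except when TV, antenna and electricity are all "yes", where it is 0.95.
cpt = {states: 0.0 for states in product(["no", "yes"], repeat=3)}
cpt[("yes", "yes", "yes")] = 0.95

def marginal_yes(cpt, parent_p_yes):
    """P(entertainment = yes), averaging the CPT over independent parents."""
    total = 0.0
    for states, p_yes in cpt.items():
        weight = 1.0  # probability of this combination of parent states
        for state, p in zip(states, parent_p_yes):
            weight *= p if state == "yes" else (1.0 - p)
        total += weight * p_yes
    return total

# e.g. parents succeed with probabilities 0.9, 0.8 and 0.99
print(marginal_yes(cpt, [0.9, 0.8, 0.99]))  # 0.95 * 0.9 * 0.8 * 0.99 ≈ 0.677
```

The same evaluation handles plain AND and OR tables as special cases, which is why a single CUSTOM mechanism can subsume both.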

