12.1 T H E B I B A M O D E L
Consider integrity policies that label subjects and objects with elements from a lattice
(L, ≤) of integrity levels and that prohibit the corruption of ‘clean’ high-level entities by
‘dirty’ low-level entities. Information may only flow downwards in the integrity lattice.
As in the BLP model, we will only contemplate information flows caused directly by
access operations. ‘Clean’ and ‘dirty’ are used as shorthand for high integrity and low
integrity. The concrete meaning of integrity levels would depend on the given application.
The Biba model [35] formalizes this type of integrity policy. It is a state machine model
similar to BLP, and we will use the mathematical notations introduced in the previous
chapter. The assignment of integrity levels to subjects and objects is given by the functions
fS : S → L and fO : O → L. Unlike BLP, there is no single high-level integrity policy.
Instead, there are a variety of approaches. Some even yield mutually incompatible policies.
Simple integrity property (no write-up). If subject s can modify (Alter) object o, then
fS (s) ≥ fO (o).
Integrity ∗-property (no read-down). If subject s can read (Observe) object o, then s can have
write access to some other object o′ only if fO(o′) ≤ fO(o).
These two integrity properties are the dual of the mandatory BLP policies and are the
basis for claims that integrity is the dual of confidentiality.
Subject low watermark property. Subject s can read (Observe) an object o at any integrity
level. The new integrity level of the subject is inf(fS (s), fO (o)), where fS (s) and fO (o) are the
integrity levels before the operation.
Object low watermark property. Subject s can modify (Alter) an object o at any integrity
level. The new integrity level of the object is inf(fS (s), fO (o)), where fS (s) and fO (o) are the
integrity levels before the operation.
These are examples of policies with dynamically changing access rights. As integrity levels
can only be lowered, there is a danger that all subjects and objects eventually sink to
the lowest integrity level. Note that organizations working with MLS policies observed a
corresponding tendency. Objects had a way of percolating up to System High over time.
Invoke property. Subject s1 can invoke subject s2 only if fS(s2) ≤ fS(s1).
Subjects are only allowed to invoke tools at a lower level. Otherwise, a dirty subject
could use a clean tool to access, and contaminate, a clean object (see Section 6.3.6).
Alternatively, we may use tools for this very purpose: dirty subjects may have access to
clean objects, but only if they use a clean tool to do so (controlled invocation). This
tool may perform a number of consistency checks to ensure that objects remain clean.
Integrity protection mechanisms in operating systems that use protection rings (Section
5.6.4) fall into this category. In this scenario, a more privileged subject should not use
less privileged tools and we get the
Ring property. A subject s can read objects at all integrity levels. It can only modify objects
o with fO(o) ≤ fS(s); it can invoke a subject s′ only if fS(s) ≤ fS(s′).
Quite obviously, the last two properties are contradictory. It will depend on the
application which property is more appropriate.
12.2 C H I N E S E W A L L M O D E L
The Chinese Wall model by Brewer and Nash captures access rules in a consultancy
business. Analysts have to avoid conflicts of interest when dealing with different clients
[44]. Informally, conflicts arise because clients are direct competitors in the same market
or because of ownership relations between companies. Analysts are governed by a security
policy that forbids them from gaining access to information that conflicts with information
they already hold.
The state set of the Bell–LaPadula model needs some slight adaptations to address this
policy.
• The objects are items of information. Each object refers to a single company. The set
of objects is denoted by O.
• All objects concerning the same company are collected in a company data set. The
function y : O → C gives the company data set of each object.
• Conflict of interest classes indicate which companies are in competition. The function
x : O → P(C) gives the conflict of interest class for each object, i.e. the set of all
companies that should not learn about the contents of the object.
• The security label of an object o is the pair (x(o), y(o)).
• Sanitized information has been purged of sensitive details and is not subject to access
restrictions. The security label of a sanitized object is (∅, y(o)).
Conflicts of interest arise not only from objects currently accessed but also from objects
that have been accessed in the past. We therefore need a data structure that records the
history of the subjects’ actions. This purpose is served by a Boolean S × O matrix N,
with
Ns,o = TRUE, if the subject s has had access to object o,
Ns,o = FALSE, if the subject s has never had access to object o.
Setting Ns,o = FALSE for all s ∈ S and all o ∈ O gives an initial state that fulfils the
security properties below.
The first security policy deals with direct information flow. We want to prevent a subject
from being exposed to a conflict of interest. Therefore, access is granted only if the object
requested belongs to a company data set already held by the subject, or to an entirely
different conflict of interest class.
ss-property. A subject s is permitted to access an object o only if for all objects o′ with
Ns,o′ = TRUE, y(o) = y(o′) or y(o) ∉ x(o′).
On its own, this property does not guarantee compliance with the stated security policy.
Indirect information flow is still possible. Consider the following example (Figure 12.1).
Two competitors, A and B, have their accounts with the same Bank. Analyst_A, dealing
with A and the Bank, updates the Bank portfolio with sensitive information about
company A. Analyst_B, dealing with company B and the Bank, now has access to
information about a competitor’s business. Therefore, write access is also regulated.
∗-property. A subject s is granted write access to an object o only if s has no read access to
an object o′ with y(o) ≠ y(o′) and x(o′) ≠ ∅.
[Figure 12.1: Indirect information flow – Analyst_A reads company A’s objects and writes to
the Bank’s data set; Analyst_B reads the Bank’s data set and writes to company B’s, so
information about A could reach its competitor B.]
In contrast to the BLP model, where the assignment of access rights is usually assumed to
be static, we have here a model where access rights have to be reassigned in every state
transition.
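The two Chinese Wall properties can be captured in a few lines of code. The sketch below is
illustrative only (the names are not from Brewer and Nash): the history matrix N is kept as a
set of (subject, object) pairs, and every recorded access is treated as read access when the
∗-property is checked.

class ChineseWall:
    def __init__(self, y, x):
        self.y = y        # y: object -> company data set
        self.x = x        # x: object -> conflict of interest class (empty for sanitized objects)
        self.N = set()    # history: (subject, object) pairs with N_{s,o} = TRUE

    def held(self, s):
        # objects the subject has already had access to
        return [o for s2, o in self.N if s2 == s]

    def ss_ok(self, s, o):
        # ss-property: o lies in a data set already held by s, or outside the
        # conflict class of every object s has accessed
        return all(self.y[o] == self.y[o2] or self.y[o] not in self.x[o2]
                   for o2 in self.held(s))

    def star_ok(self, s, o):
        # *-property: s may write to o only if every object held by s either belongs
        # to o's company data set or is sanitized
        return all(self.y[o2] == self.y[o] or not self.x[o2]
                   for o2 in self.held(s))

    def read(self, s, o):
        if not self.ss_ok(s, o):
            return False
        self.N.add((s, o))
        return True

wall = ChineseWall(y={'a1': 'A', 'b1': 'B', 'bank1': 'Bank'},
                   x={'a1': {'A', 'B'}, 'b1': {'A', 'B'}, 'bank1': {'Bank'}})
wall.read('Analyst_A', 'a1')               # True
print(wall.ss_ok('Analyst_A', 'b1'))       # False: b1's data set B lies in a1's conflict class
print(wall.star_ok('Analyst_A', 'bank1'))  # False: writing would leak unsanitized data about A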
12.3 T H E C L A R K – W I L S O N M O D E L
Clark and Wilson address the security requirements of commercial applications [66].
They argue that these requirements are predominantly about (data) integrity, i.e. about
preventing unauthorized modification of data, fraud, and errors. This is a rather wide
definition of integrity. In fact, the authors even include issues of concurrency control,
which are beyond our scope of security. Integrity requirements are divided into two parts:
• internal consistency, which refers to properties of the internal state of a system and
can be enforced by the computing system;
• external consistency, which refers to the relation of the internal state of a system to
the real world and has to be enforced by means outside the computing system, e.g. by
auditing.
The Clark–Wilson model uses programs as an intermediate layer between subjects and
objects (data items). Subjects are authorized to execute certain programs. Data items can
be accessed through specific programs. Defining the set of programs that may access data
of a certain type is a general mechanism in software engineering (see abstract data types,
object-oriented programming), which can be gainfully employed in constructing secure
systems. It is testimony to the influence of the BLP model that Clark and Wilson write
about ‘labelling subjects and objects with programs instead of security levels’.
The following points are considered in the Clark–Wilson model (see Figure 12.2):
• Subjects have to be identified and authenticated.
• Objects can be manipulated only by a restricted set of programs.
• Subjects can execute only a restricted set of programs.
• A proper audit log has to be maintained.
• The system has to be certified to work properly.
In a formalization of this model, the data items governed by the security policy are called
constrained data items (CDIs). Inputs to the system are captured as unconstrained data
items (UDIs).
[Figure 12.2: Access control in the Clark–Wilson model – an authenticated and authorized
user may manipulate CDIs only through TPs.]
Conversion of UDIs to CDIs is a critical part of the system which cannot
be controlled solely by the security mechanisms in the system. CDIs can be manipulated
only by transformation procedures (TPs). The integrity of an item is checked by integrity
verification procedures (IVPs).
Security properties are defined through five certification rules, suggesting the checks
that should be conducted so that the security policy is consistent with the application
requirements:
CR1 IVPs must ensure that all CDIs are in a valid state at the time the IVP is run
(integrity check on CDIs).
CR2 TPs must be certified to be valid, i.e. valid CDIs must always be transformed into
valid CDIs; each TP is certified to access a specific set of CDIs.
CR3 The access rules must satisfy any separation-of-duties requirements.
CR4 All TPs must write to an append-only log.
CR5 Any TP that takes a UDI as input must either convert the UDI into a CDI or reject
the UDI and perform no transformation at all.
Four enforcement rules describe the security mechanisms within the computer system that
should enforce the security policy. These rules have some similarity with discretionary
access control in the BLP model.
ER1 For each TP, the system must maintain and protect the list of entries (CDIa,CDIb,
…) giving the CDIs the TP is certified to access (capability of the TP).
ER2 For each user the system must maintain and protect the list of entries (TP1,TP2,
…) specifying the TPs the user can execute (capability of the user).
ER3 The system must authenticate each user requesting to execute a TP.
ER4 Only a subject that may certify an access rule for a TP may modify the respective
entry in the list. This subject must not have execute rights on that TP.
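As a rough illustration of ER1–ER3, together with the logging demanded by CR4, the sketch
below mediates every TP invocation through two protected relations. All names and data are
invented for the example; a real system would also have to protect the relations and the log
themselves.

certified = {                      # ER1: TP -> set of CDIs it is certified to access
    'post_payment': {'ledger', 'account_balance'},
}
authorized = {                     # ER2: user -> set of TPs the user may execute
    'alice': {'post_payment'},
}
audit_log = []                     # CR4: append-only log of TP invocations

def run_tp(user, tp, cdis, authenticated):
    if not authenticated:                          # ER3: the user must be authenticated
        raise PermissionError('user not authenticated')
    if tp not in authorized.get(user, set()):      # ER2: user/TP pairing
        raise PermissionError('user may not execute this TP')
    if not set(cdis) <= certified.get(tp, set()):  # ER1: TP/CDI pairing
        raise PermissionError('TP not certified for these CDIs')
    audit_log.append((user, tp, tuple(cdis)))      # CR4: record the invocation
    # ... the TP would now transform the CDIs from one valid state to another (CR2) ...

run_tp('alice', 'post_payment', ['ledger'], authenticated=True)   # permitted and logged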
The Clark–Wilson model is a framework and guideline (‘model’) for formalizing security
policies rather than a model of a specific security policy. It stresses the importance of
management approval of the processes and of the security policies to be followed in an
organization. The model refers to this step as certification.
12.4 T H E H A R R I S O N – R U Z Z O – U L L M A N M O D E L
The Bell–LaPadula model does not state policies for changing access rights or for the
creation and deletion of subjects and objects. The Harrison–Ruzzo–Ullman (HRU)
model defines authorization systems that address these issues [114]. In the HRU model,
there is:
• a set of subjects S;
• a set of objects O;
• a set of access rights R;
• an access matrix M = (Ms,o)s∈S,o∈O; the entry Ms,o ⊆ R specifies the rights subject s has
on object o.
There exist six primitive operations for manipulating the set of subjects, the set of objects,
and the access matrix:
• enter r into Ms,o
• delete r from Ms,o
• create subject s
• delete subject s
• create object o
• delete object o
Commands have the format
command c(x1, . . . , xk)
if r1 in Ms1,o1 and . . . and rm in Msm,om
then op1; . . . ; opn
end
The indices s1 , . . . , sm and o1 , . . . , om are subjects and objects that appear in the parameter
list (x1 , . . . , xk ). The conditions check whether particular access rights are present. The
list of conditions may be empty. If all conditions hold, the sequence of basic operations
is executed. Each command contains at least one operation. For example, the command
command create_file(s, f )
create f
enter o into Ms,f
enter r into Ms,f
enter w into Ms,f
end
is used by subject s to create a new file f so that s is the owner of the file (access right o)
and has read and write permission to the file (access rights r and w). The owner s of file
f grants read access to another subject p with
command grant_read(s, p, f )
if o in Ms,f
then enter r in Mp,f
end
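A direct transliteration of these two commands into executable form might look as follows;
representing the access matrix as a dictionary from (subject, object) pairs to sets of rights
is a choice made for this sketch, not part of the model.

M = {}   # access matrix: (subject, object) -> set of rights

def create_file(s, f):
    # create object f and give its creator s ownership, read and write rights
    M[(s, f)] = {'o', 'r', 'w'}

def grant_read(s, p, f):
    # s grants p read access to f, provided s owns f
    if 'o' in M.get((s, f), set()):
        M.setdefault((p, f), set()).add('r')

create_file('alice', 'report')
grant_read('alice', 'bob', 'report')
print(M[('bob', 'report')])   # {'r'}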
An authorization system is defined by a set of commands and by its state, captured by the
access matrix. The effect of a command is recorded as a change to the access matrix. We
denote the modified access control matrix by M′. The HRU model can capture security
policies regulating the allocation of access rights. To verify that a system complies with
such a policy, you have to check that there exists no way for undesirable access rights to
be granted.
A state of an authorization system is said to leak the right r if there exists a command c that
adds the right r into a position of the access matrix M that previously did not contain r.
More formally, there exist s and o so that r ∉ Ms,o but r ∈ M′s,o.
A state of an authorization system, i.e. an access matrix M, is said to be safe with respect to
the right r if no sequence of commands can transform M into a state that leaks r.
Verifying compliance with a security policy in the HRU model thus comes down to
verifying safety properties (see also Section 12.6). The following theorem holds.
Theorem. Given an authorization system with access matrix M and a right r, verifying the
safety of M with respect to the right r is an undecidable problem [114].
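In practice an automated check can therefore only be approximate. The sketch below, an
illustration rather than a decision procedure, enumerates command sequences up to a fixed
depth and reports a leak if any reachable state enters r into a previously empty cell; a
negative answer says nothing about longer command sequences.

from itertools import product
import copy

def leaks(M0, commands, values, right, max_depth=3):
    # commands: list of (function, arity) pairs; each function mutates an access
    # matrix of the form {(subject, object): set of rights} in place
    frontier = [M0]
    for _ in range(max_depth):
        successors = []
        for M in frontier:
            for cmd, arity in commands:
                for params in product(values, repeat=arity):
                    M2 = copy.deepcopy(M)
                    cmd(M2, *params)
                    if any(right in rights and right not in M.get(cell, set())
                           for cell, rights in M2.items()):
                        return True        # some command sequence leaks the right
                    successors.append(M2)
        frontier = successors
    return False                           # no leak found within max_depth steps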
You now find yourself in the unenviable position of not being able to tackle the safety
problem in its full generality. You have to restrict the HRU model to have a better chance
of success. For example, you could only allow mono-operational systems in which each
command contains a single operation; for such systems the safety problem becomes decidable [114].
Limiting the size of the authorization system is another way of making the safety problem
tractable.
Theorem. The safety problem for arbitrary authorization systems is decidable if the number
of subjects is finite [155].
These results on the decidability of the safety problem reveal glimpses of the third design
principle (Section 3.4.3). If you design complex systems that can only be described
by complex models, it becomes difficult to find proofs of security. In the worst case
(undecidability), there does not exist a universal algorithm that verifies security for all
problem instances. If you want verifiable security properties, you are better off limiting
the complexity of the security model. Such a model may not describe all desirable security
properties, but you may gain efficient methods for verifying ‘security’. In turn, you would
have to address by other means those security requirements that the model does not capture.
12.5 I N F O R M A T I O N - F L O W M O D E L S
As an example, consider the assignment IF x=0 THEN y:=1; from above. Let x and
y be binary variables, with y initially set to 0 and both values of x equally likely. If y = 0
still holds after the assignment, x must have been 1; if y = 1, x must have been 0. We get
Hy(x) = 0. Indeed, after performing the assignment and observing y, we know the exact value
of x. All information in x has flowed to y. If x can take the values 0, 1, 2 with equal
probability, we get q(0) = 2/3, q(1) = 1/3,
p(0|0) = p(1|1) = p(2|1) = 0, p(1|0) = p(2|0) = 1/2, p(0|1) = 1, and Hy(x) = 2/3.
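These values can be checked numerically. The short sketch below assumes the standard
definition of the equivocation, Hy(x) = Σy q(y) Σx −p(x|y) log2 p(x|y), a uniformly
distributed x and the deterministic assignment from the example; it reproduces the two
results above.

from math import log2

def equivocation(xs, prog):
    # H_y(x) for uniformly distributed x and the deterministic assignment y = prog(x)
    groups = {}
    for x in xs:                      # group the x values by the observed output y
        groups.setdefault(prog(x), []).append(x)
    h = 0.0
    for group in groups.values():
        q = len(group) / len(xs)      # q(y): probability of observing this y
        p = 1 / len(group)            # p(x|y): uniform over the x values mapped to this y
        h += q * sum(-p * log2(p) for _ in group)
    return h

print(equivocation([0, 1], lambda x: 1 if x == 0 else 0))     # 0.0: y reveals x completely
print(equivocation([0, 1, 2], lambda x: 1 if x == 0 else 0))  # 0.666...: 2/3 of a bit remains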
The security policy of an information-flow model permits information flow from an object
with label c1 to an object with label c2 only if c1 ≤ c2; any information flow violating this
rule is illegal.
A system is called secure if there is no illegal information flow. The advantage of such a
model is that it covers all kinds of information flow. The disadvantage is that it becomes
more difficult to design secure systems. For example, it has been shown that checking
whether a given system is secure in the information-flow model is an undecidable problem.
An information flow may be possible in theory (and therefore should be detected in a static
analysis) but will never occur during execution. Therefore, static analysis tends to produce
overly restrictive systems.
12.6 E X E C U T I O N M O N I T O R S
The previous two sections have shown that certain security problems are undecidable.
There cannot be a general algorithm that solves all instances of these problems. Now
our theoretical investigations will follow a different route. We will start from the typical
access control mechanisms in use today and characterize the policies these mechanisms
can enforce. After all, a policy is useful in practice only if it can be enforced reasonably
efficiently. We consider three classes of security policies [205]:
• access control policies – define restrictions on the operations principals can perform
on objects;
• information-flow policies – restrict what principals can infer about objects from observ-
ing system behaviour (see Section 12.5);
• availability policies – restrict principals from denying others the use of a resource.
Access control should prevent insecure behaviour of a target system. The mechanisms
deployed today in firewalls, operating systems, middleware architectures such as CORBA,
or in web services have in common that they monitor the execution of that target system
and step in if an execution step is prohibited by the given security policy. The term
execution monitoring (EM) was introduced in [205] for enforcement mechanisms that
monitor the execution steps of a target system and terminate the target’s execution if a
violation of the security policy is about to occur.
Execution monitors have two important limitations. First, they do not have a model
of the target system, so they cannot predict the outcomes of possible continuations
of the execution they are observing. Compilers and theorem-provers, for example,
work by analyzing a static representation of the target and can deduce information
about all of its possible executions. These methods are therefore not EM mechanisms.
Secondly, EM mechanisms cannot modify a target before executing it. In-line reference
monitors and reflection in object-oriented systems thus do not fall into the execution
monitor category.
Let Σ denote a set of executions. A security policy p that can be enforced by an execution
monitor must be specified by a predicate p̂ on individual executions: Σ satisfies the policy
precisely when p̂(σ) holds for every execution σ ∈ Σ. Security policies of this form are
called properties.
Not every security policy is a property. Some security policies cannot be defined as a
predicate on individual executions. For example, to check compliance with information-
flow policies you have to show that a given execution is indistinguishable from another
which is guaranteed to contain no information flow (see the discussion at the end of
Section 12.5).
Two kinds of properties are commonly distinguished:
• Safety properties – nothing bad can happen. (The ‘safety’ property of access matrices
in the HRU model meets this description.)
• Liveness properties – something good will happen eventually.
There exists a close relationship between safety and the policies that can be enforced
by execution monitors. We formally define safety properties by characterizing their
complements. In the definition, the first i steps of a sequence σ ∈ will be denoted by
σ [..i]. A property is called a safety property if, for every finite or infinite execution σ ,
σ ∈
/ ⇒ ∃i(∀τ ∈ : σ [..i]τ ∈
/ )
holds [143]. If an execution σ is unsafe, the execution has to have some point of no return
i after which it is no longer possible to revert to a safe continuation of the execution.
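A small concrete instance may help. For the policy ‘a file must never be written after it has
been deleted’, the prefix delete followed by write is such a point of no return: no continuation
can repair it. The predicate below, an invented example rather than one from [143], decides
this property for a single execution given as a list of steps.

def satisfies(execution):
    # example safety property: no 'write' step may occur after a 'delete' step
    deleted = False
    for step in execution:
        if step == 'delete':
            deleted = True
        elif step == 'write' and deleted:
            return False
    return True

print(satisfies(['write', 'delete']))           # True: nothing bad has happened
print(satisfies(['delete', 'write', 'read']))   # False: every extension of ['delete', 'write']
                                                # also violates the property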
If the set of executions for a security policy is not a safety property, then there exists
an unsafe execution that could be extended by future steps into a safe execution. As
discussed above, such properties (policies) do not have an enforcement mechanism from
EM. So, if a policy is not a safety property, it is not EM enforceable. Put the other way
round, execution monitors enforce security policies that are safety properties. However,
not all safety properties have EM enforcement mechanisms. This leads us to the following
classification.
• Information-flow policies do not define sets of executions that are properties; thus,
information flow cannot be a safety property and in turn cannot be enforced by EM.
• Availability policies define properties but not safety properties; any partial execution
could be extended so that the principal would get access to the resource in the end.
• Availability policies that refer to a maximum waiting time (MWT) [100] are safety
properties; once an execution has waited beyond the MWT, any extension will naturally
also be in violation of the availability policy; MWT policies cannot be enforced by EM
as they refer to time.
• Access control policies define safety properties; partial executions ending with an
unacceptable operation being attempted will be prohibited.
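Putting the pieces together, an execution monitor for a safety property only ever needs to
inspect the prefix it has seen so far. The sketch below enforces the same example property as
above and terminates the target just before the first step that would make the prefix unsafe;
the names are illustrative, not taken from [205].

class PolicyViolation(Exception):
    pass

def monitored_run(steps, ok):
    # execution monitoring: check every prospective prefix and terminate the target
    # before a step that would violate the policy is carried out
    history = []
    for step in steps:
        if not ok(history + [step]):
            raise PolicyViolation(f'step {step!r} rejected after {history}')
        history.append(step)   # only now is the step actually executed
    return history

def no_write_after_delete(execution):
    # the same example safety property as above: no 'write' after a 'delete'
    return not any(s == 'write' and 'delete' in execution[:i]
                   for i, s in enumerate(execution))

monitored_run(['write', 'delete', 'read'], no_write_after_delete)   # runs to completion
monitored_run(['delete', 'write'], no_write_after_delete)           # raises PolicyViolation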
12.7 F U R T H E R R E A D I N G
Surveys of research on security models are given in [146] and [166]. The
original paper by Clark and Wilson is highly recommended reading [66]. An
implementation of the Clark–Wilson model using capabilities is described in
[136]. A slight extension of the Biba model providing mandatory integrity
controls that can be used to implement Clark–Wilson is proposed in [152].