Electronic Notes in Theoretical Computer Science 238 (2010) 41–62

www.elsevier.com/locate/entcs

Reasoning about QoS Contracts in the Probabilistic Duration Calculus

Dimitar P. Guelev2,3
Institute of Mathematics and Informatics
Bulgarian Academy of Sciences
Sofia, Bulgaria

Dang Van Hung1,4


International Institute of Software Technology
The United Nations University
Macau, P. R. China

Abstract
The notion of contract was introduced to component-based software development in order to facilitate
the semantically correct composition of components. We extend the form of this notion which is based
on designs to capture probabilistic requirements on execution time. We show how reasoning about such
requirements can be done in an infinite-interval-based system of probabilistic duration calculus.

Keywords: components, contracts, quality of service, duration calculus

Introduction
Combining off-the-shelf and dedicated components has become an established ap-
proach to achieving reuse, modularity, productivity and reliability. Contracts facili-
tate the correct use of components. A contract is a collection of requirements which
are written in terms of the component interface. Contract requirements should be
satisfied by implementations of the component, provided that the items imported
from other components also satisfy the requirements appearing in the contract for
them. Four levels of contracts have been identified in [1]. These are the syntactical

1 This work has been partially supported by the research project No. 60603037 granted by the National
Natural Science Foundation of China.
2 Work on this paper was done during D. Guelev's visit to UNU/IIST in August-September 2007.
3 Email: [email protected]
4 Email: [email protected]

1571-0661/$ – see front matter © 2010 Published by Elsevier B.V.


doi:10.1016/j.entcs.2010.06.004

level, the behavioural level, the synchronization level and the quality of service level.
Quality of Service (QoS) is a collective term for non-functional requirements such
as worst-case and average execution time, and the consumption of resources such
as memory, power, bandwidth, etc.
Component models are built around appropriate formalisations of the notions
of interface, contract, component composability, composition, etc. A contract theory
for components based on the notion of design from [12] has been proposed in [13,14]
and has become known as the rCOS model. Since designs capture input-output
relations, this model is mostly about the functional requirements on components
and leaves out the QoS level from [1]. In our previous work we extended the rCOS
model to capture requirements on timing and resources [3,11]. We have considered
hard requirements, where, e.g., missing a deadline is regarded as fatal. We used
the Duration Calculus (DC ) as our notation. QoS is mainly concerned with soft
requirements, where, e.g., missing a deadline by little and not too often is tolerable.
Handling requirements on the QoS involves reasoning about probability.
In this paper we extend designs to capture probabilistic requirements on exe-
cution time and develop a technique to reason about QoS of real-time embedded
systems using an infinite-interval-based system of probabilistic DC (PDC ) which
was proposed in [10] as an extension of a corresponding system of Probabilistic
Interval Temporal Logic with infinite intervals (PITL). PDC with infinite intervals
subsumes the systems of PDC from [17,5,9] and has a relatively complete
proof system to support formal reasoning. The fitness of (non-probabilistic) DC
for reasoning about real-time systems has been asserted by numerous case studies
[25,4,21,2,16]. Since DC is interval-based, reasoning about the behaviour of whole
method executions, including their execution time, is relatively straightforward in
DC . By using a probabilistic extension of DC we are able to enjoy this advantage
when reasoning about QoS requirements too.

1 Preliminaries
We consider only the extended set of real numbers R ∪ {∞} as the flow of
time in PITL and PDC . In order to facilitate the description of repetitive behaviour,
we include a least-fixed-point operator for non-probabilistic formulas, which was
introduced in [18] and studied in [8]. ITL with infinite intervals [23,20,21,22] is
the underlying non-probabilistic logic of PITL and PDC . It extends the syntax of
predicate logic by a binary modality (.; .), known as chop. 5 Non-logical symbols are
divided into rigid and flexible depending on whether their meaning is required to
be the same at all reference intervals or not. Individual variables are rigid.
An interpretation of a vocabulary L is a function I on L which maps the symbols
from L to members of R, functions and predicates on R, according to the type and
arity of symbols. I(s) takes an interval from Ĩ as an additional argument in case s

5 Many authors write chop as ϕψ instead of (ϕ; ψ).



is flexible. We use the sets of intervals


Ifin = {[τ1 , τ2 ] : τ1 , τ2 ∈ R, τ1 ≤ τ2 }, Iinf = {[τ, ∞] : τ ∈ R} and Ĩ = Ifin ∪ Iinf .
Given σ1 ∈ Ifin and σ2 ∈ Ĩ such that max σ1 = min σ2 , σ1 ; σ2 stands for σ1 ∪
σ2 . Given an interpretation I, the values Iσ (t) of terms t at intervals σ ∈ Ĩ are
defined in the usual way, with the reference interval being an additional argument
for flexible symbols. Satisfaction |= is defined with respect to an interpretation and
a reference interval. Flexible relation symbols are interpreted predicates which take
the reference interval as an argument too. The clauses for ⊥, ⇒ and ∃ are the usual
ones. The clause for (.; .) is

I, σ |= (ϕ; ψ) iff I, σ1 |= ϕ and I, σ2 |= ψ for some σ1 ∈ Ifin and σ2 ∈ Ĩ


such that σ1 ; σ2 = σ.
0, ∞, + and = are mandatory in ITL vocabularies and always have the usual
interpretation. A mandatory flexible constant ℓ always evaluates to the length of
the reference interval. Infix notation for + and =, and ¬, ∧, ⇒, ⇔ and ∀ are used
in the usual way. ITL-specific abbreviations include
(ϕ1; . . . ; ϕn−1; ϕn) ≜ (ϕ1; . . . (ϕn−1; ϕn) . . .)
◇ϕ ≜ (⊤; ϕ; ⊤) ∨ (⊤; ϕ),   □ϕ ≜ ¬◇¬ϕ.
◇ and □ bind more tightly and (.; .) binds less tightly than the boolean connectives.
A complete proof system for ITL with infinite intervals with respect to an ap-
propriate abstract domain of durations was presented in [22].
Vocabularies for DC with infinite intervals additionally include state variables
P, Q, . . .; state expressions S are boolean combinations of state variables with the
logical constants written as 0 and 1, and in turn appear as the argument of duration
terms ∫S, which are the DC-specific construct in the syntax of DC terms. Formulas
in DC are as in ITL. State variables evaluate to piece-wise constant functions of
type R → {0, 1}. The value Iτ(S) of state expression S at time τ is defined using
I(P)(τ) for the involved state variables P in the usual way. Values of duration
terms are defined by the equality

Iσ(∫S) = ∫_{min σ}^{max σ} Iτ(S) dτ

Iσ(∫S) can be ∞ for σ ∈ Iinf.
The expression ⌈S⌉ abbreviates ℓ ≠ 0 ∧ ∫¬S = 0, and ℓ can be viewed as an
abbreviation for ∫1 in DC.
Axioms and rules for DC (with infinite intervals) which are complete relative to
validity in real-time ITL (with infinite intervals), have been presented in [15] (resp.
[10].)
The least-fixed-point operator. If ϕ1, . . . , ϕn have no negative occurrences of the
propositional variables X1, . . . , Xn and i ∈ {1, . . . , n}, then μi X1 . . . Xn.ϕ1, . . . , ϕn
is well-formed and I, σ |= μi X1 . . . Xn.ϕ1, . . . , ϕn iff σ ∈ Ai, where A1, . . . , An are
the smallest subsets of Ĩ which satisfy the equalities
Ai = {σ ∈ Ĩ : I[λσ.σ ∈ A1/X1, . . . , λσ.σ ∈ An/Xn], σ |= ϕi}, i = 1, . . . , n,
where I[λσ.σ ∈ A1/X1, . . . , λσ.σ ∈ An/Xn] is I with each Xj interpreted as the
characteristic function of Aj.

Iteration, also known as Kleene star, can be defined using μ by the equivalence
ϕ* ≜ μ1 X. ℓ = 0 ∨ (ϕ; X). I, σ |= ϕ* can be defined independently by the condition:
min σ = max σ or σ = σ1; . . . ; σn and I, σi |= ϕ, i = 1, . . . , n, for some n < ω,
σ1, . . . , σn ∈ Ĩ.
Axioms and rules for μ and ∗ in DC were proposed in [18,8,7].
Higher-order quantifiers. We use ∃ on flexible constants and state variables with
the usual meaning, in order to describe the semantics of local variables. The de-
ductive power of some axioms and rules for this usage has been studied in [24,8,7].
Probabilistic ITL and DC with infinite intervals (PITL) extends the syntax of
ITL terms by probability terms of the form p(ϕ) where ϕ is a formula. Formula
syntax is as in ITL, with μ and higher-order quantifiers included. A PITL model is
based on a collection of interpretations of a given vocabulary L. Each interpretation
is meant to describe a possible behaviour of the modelled system. Consider a non-
empty set W, a function I on W into the set of the ITL interpretations of L and
a function P of type W × R × 2W → [0, 1]. Let I w and P w abbreviate I(w) and
λτ, X.P (w, τ, X), respectively, for all w ∈ W. I w and P w , w ∈ W, are intended
to represent the set of behaviours and the associated probability distributions for
every τ ∈ R in the PITL models for L.

Definition 1.1 Let τ ∈ R. Then w ≡τ v iff
• I w(s) = I v(s) for all rigid symbols s ∈ L, except possibly the individual variables;
• I w(s)(σ, d1, . . . , d#s) = I v(s)(σ, d1, . . . , d#s) for all flexible s ∈ L, all d1, . . . , d#s ∈ R
and all σ ∈ Ĩ such that max σ ≤ τ;
• P w(τ′, X) = P v(τ′, X) for all X ⊆ W and all τ′ ≤ τ.

Clearly ≡τ is an equivalence relation on W for all τ ∈ R. Members of W which


are τ -equivalent model the same behaviour up to time τ . If τ1 > τ2 , then ≡τ1 ⊂≡τ2
and w ≡∞ v holds iff P w = P v and I w and I v agree on all symbols, except possibly
some individual variables. [w]≡τ is the set of those v ∈ W which represent the
probabilistic branching of w from time τ onwards.

Definition 1.2 A general PITL model for L is a tuple of the form ⟨W, I, P⟩ where
W, I and P are as above and satisfy the following requirements for every w ∈ W:
• W is closed under variants of interpretations. If w ∈ W, x is an individual
variable from L and a ∈ R, then there is a v ∈ W such that P v = P w and
I v = (I w )ax , where (I w )ax maps x to a and is the same as I w on other symbols.
• The functions P w are probability measures. For every w ∈ W and τ ∈ R the
function λX.P w (τ, X) is a probability measure on the boolean algebra ⟨2W, ∩, ∪, ∅, W⟩.
Furthermore λX.P w (τ, X) is required to be concentrated on [w]≡τ : P w (τ, X) =
P w (τ, X ∩ [w]≡τ ) for all X ⊆ W.

Informally, the probability for a behaviour in X ⊆ [w]≡τ to be chosen is P w (τ, X).


Satisfaction |= is defined in PITL with respect to a model M = ⟨W, I, P⟩, a w ∈ W,

and a σ ∈ Ĩ. If ψ is a sentence, then


[[ψ]]M,w,σ = {v ∈ [w]≡max σ : M, v, [min σ, ∞] |= ψ}.
This means that [[ψ]]M,w,σ consists of the interpretations v which are max σ-equivalent
to w and satisfy ψ at the infinite interval starting at min σ. In case ψ has free vari-
ables x1 , . . . , xn , M, v, [min σ, ∞] |= ψ should be evaluated with I w (x1 ), . . . , I w (xn )
as the values of x1 , . . . , xn , in order to preserve the intended meaning. Then [[ψ]]M,w,σ
consists of those v ∈ [w]≡max σ which satisfy the condition
(∀v′ ∈ W)(P v′ = P v ∧ I v′ = (I v)^{I w(x1), . . . , I w(xn)}_{x1, . . . , xn} ⇒ M, v′, [min σ, ∞] |= ψ).
Using this notation, term values wσ (t) of probability terms t can be defined by
putting
wσ (p(ψ)) = P w (max σ, [[ψ]]M,w,σ ).
Values of terms of other forms are defined as in (non-probabilistic) ITL.
The probability functions λX.P w(τ, X) for w ∈ W and τ ∈ T in general PITL
models M = ⟨W, I, P⟩ are needed only inasmuch as they provide values for prob-
ability terms. That is why we accept structures of the form ⟨W, P, I⟩ with their
probability functions λX.P w(τ, X) defined just on the (generally smaller) alge-
bras ⟨{[[ψ]]M,w,σ : ψ ∈ L, σ ∈ Ĩ, max σ = τ}, ∩, ∪, ∅, [w]≡τ⟩ as general PITL models
too.
PITL is a conservative extension of ITL. Axioms and a proof rule which extend
the proof system for ITL with infinite intervals to a system for PITL were shown in
[10] to be complete with respect to a generalisation of the R-based semantics, where
R is replaced by an abstract domain and the probability measures are required to
be only finitely additive.
The probability functions λX.P w (τ, X) need not be related to each other in
general models for PITL, whereas applications typically lead to models with an
origin of time τ0 = min T and a distinguished w0 ∈ W such that [w0 ]≡τ0 = W and
λX.P w0 (τ0 , X) can be regarded as the global probability function. Then, given an
arbitrary w ∈ W and τ ∈ R, the probability function λX.P w (τ, X) should represent
conditional probability, the condition being τ -equivalence with w. Hence we should
have

(1)   P w0(τ, A) = ∫_{w ∈ [w0]≡τ} P w(τ′, A) d(λX.P w0(τ, X)).

The following rules enable approximating (1) with arbitrary precision in PITL
proofs:
ϕ ⇒ ¬(ϕ; ℓ ≠ 0)
(P)
ℓ = 0 ∧ p(ϕ ∧ p(ψ) < x; ⊤) = 0 ⇒ p((ϕ; ⊤) ∧ ψ) ≥ x · p(ϕ; ⊤)
ϕ ⇒ ¬(ϕ; ℓ ≠ 0)
(P)
ℓ = 0 ∧ p(ϕ ∧ p(ψ) > x; ⊤) = 0 ⇒ p((ϕ; ⊤) ∧ ψ) ≤ x · p(ϕ; ⊤)
The proof system for PITL from [10] is minimal. Using the abbreviations
ϕ^h_l ≜ ϕ ∧ ℓ ≥ l ∧ ℓ ≤ h

and

[ETϕ ∈ [l, h]]x ≜ ℓ = 0 ∧ p(ϕ; ⊤) = 1 ⇒ p(ϕ^h_l; ⊤) = x,

we can write the derived rule

α ⇒ ¬(α; ℓ ≠ 0),  β ⇒ ¬(β; ℓ ≠ 0),  [ETα ∈ [l1, h1]]x1,  [ETβ ∈ [l2, h2]]x2
(Seq)
ℓ = 0 ∧ p(α; β; ⊤) = 1 ⇒ p(α^{h1}_{l1}; β^{h2}_{l2}; ⊤) = x1 · x2 ,

which is particularly important to our examples.
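
To convey the numerical content of (Seq) informally: when the two stages are independent and each of them falls in its interval with the probabilities x1 and x2 supplied by the premises, the sequential composition falls in both intervals with probability x1 · x2. The following Python sketch checks this by simulation; the stage-duration distributions are hypothetical examples of ours and are not part of the calculus.

import random

def estimate_seq_probability(sample_alpha, sample_beta, l1, h1, l2, h2, n=100_000):
    """Estimate the probability that the first stage lasts within [l1, h1]
    and the second within [l2, h2], together with the product x1 * x2."""
    joint = in1 = in2 = 0
    for _ in range(n):
        a, b = sample_alpha(), sample_beta()
        hit1, hit2 = (l1 <= a <= h1), (l2 <= b <= h2)
        joint += hit1 and hit2
        in1 += hit1
        in2 += hit2
    return joint / n, (in1 / n) * (in2 / n)

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical stage-duration distributions (not from the paper).
    sample_alpha = lambda: random.expovariate(1 / 2.0)   # mean 2 time units
    sample_beta = lambda: random.uniform(0.0, 5.0)
    joint, product = estimate_seq_probability(sample_alpha, sample_beta, 1, 3, 2, 4)
    print("estimated joint probability:", round(joint, 4))
    print("x1 * x2:                    ", round(product, 4))

Under independence the two printed numbers agree up to sampling error, which is exactly the multiplicativity that (Seq) makes available inside PITL.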


The system of probabilistic DC (PDC ) with infinite intervals which we use in
this paper is obtained by adding state variables and duration terms to PITL in the
way used to obtain (non-probabilistic) DC from ITL. The axioms and rules for DC
with infinite intervals are complete for PDC relative to validity in PITL models
based on R.

2 A toy concurrent programming language and its semantics in DC with infinite intervals

We propose a toy language to illustrate our approach. It is shaped after that from
[6] and has a restricted form of method call, in order to set the stage for the use
components and contracts.
Programs consist of components, which import and/or export methods. Their
syntax is:

component ::= component name


{import method }∗
{export method }∗
end name

method ::= name(parameter list)[ code];

The part code is required only for exported methods. It has the syntax
D.P. Guelev, D. Van Hung / Electronic Notes in Theoretical Computer Science 238 (2010) 41–62 47

code ::= stop                                          (thread) termination statement
       | return[ e]                                    return control and possibly a value
       | X                                             continuation
       | (x := e; code)                                assignment
       | (delay r; code)                               delay by the specified amount of time
       | (call [x :=]name(parameter list); code)       call a method and possibly obtain a value
       | if b then code else code                      conditional statement
       | letrec code where X : code; . . . ; X : code  mutual recursion statement
       | var x; code                                   local variable declaration
       | code ∥ code                                   parallel composition

We do not allow var to occur in the scope of other control statements. Assign-
ments are atomic. Parameters are passed by value. A mutual recursion statement
can trigger an infinite computation. Components are passive. The active part of a
program is just a piece of code, typically a collection of concurrently running inter-
leaved threads. The syntax of control statements deliberately makes tail-recursion
the only expressible form of repetitive behaviour. We give no details on the type
system and tacitly assume an appropriately many-sorted system of DC .
The execution of code can be described in terms of the values of its signals, vari-
ables and parameters as functions of time. The semantic function [[.]] defined below
maps every piece of code to a DC formula which defines the set of its observable
behaviours. We model each program variable v by a corresponding pair of flexible
constants v and v  , which denote the value of v at the beginning and at the end of
the reference interval and therefore satisfy the axiom ∀x¬(v  = x; v = x) where x is
a rigid individual variable. We model methods m which return a value by a corre-
sponding flexible function symbol. A formula of the form v  = m(e1 , . . . , en ) means
that the reference interval describes a complete invocation of m with e1 , . . . , en as
the input parameters and v  as the value. We use a flexible predicate symbol for
methods which return no value. We use dedicated state variables R and W to indi-
cate that the thread is currently running, or has terminated, respectively. Building
on the work from [19,6], we use a state variable N to mark computation time,
which, unlike the time consumed by the execution of delay statements, waiting for
the reaction of the environment, etc., is regarded as Negligible, in order to simplify
calculations. R, W and N satisfy the axioms

T(R, W) ≜ ⌈R ⇒ N⌉ ∧ ⌈R ⇒ ¬W⌉ ∧ □¬(⌈W⌉; ⌈¬W⌉),



which express that computation time is negligible, a process can never be both
running and terminated, and, once terminated, is never re-activated. A dedicated
pair of state variables R and W describes the status of each thread. N marks
negligible time for all threads. The formulas

K(V) ≜ ⋀_{x∈V} x′ = x   and   KR(V) ≜ K(V) ∧ ⌈R⌉.

mean that the variables from V preserve their values. KR (V ) additionally means
that the thread is active throughout the reference interval. The clauses below define
[[.]]V , where V is the set of program variables which are in the scope in the given
code.

[[stop]]V ≜ ⌈W⌉
[[return e]]V ≜ (⌈¬R⌉; KR(V) ∧ r′ = e)
[[return]]V ≜ ⌈¬R⌉
[[X]]V ≜ X
[[(C1; C2)]]V ≜ ([[C1]]V; [[C2]]V)
[[x := e]]V ≜ (⌈¬R⌉; KR(V \ {x}) ∧ x′ = e)
[[delay r]]V ≜ ⌈¬R⌉ ∧ r = ∫¬N
[[if b then C1 else C2]]V ≜ (⌈¬R⌉; (b ∧ KR(V); [[C1]]V) ∨ (¬b ∧ KR(V); [[C2]]V))
[[call v := m(e1, . . . , en)]]V ≜ (⌈¬R⌉; K(V \ {v}) ∧ v′ = m(e1, . . . , en))
[[call m(e1, . . . , en)]]V ≜ (⌈¬R⌉; K(V) ∧ m(e1, . . . , en))
[[letrec C where X1 : C1; . . . Xn : Cn]]V ≜ μn+1 X1 . . . Xn Y.[[C1]]V, . . . , [[Cn]]V, [[C]]V
[[var v; C]]V ≜ ∃v∃v′(□((⌈¬R⌉ ⇒ v′ = v) ∧ ∀x¬(v′ = x; v ≠ x)) ∧ [[C]]V∪{v})

[[(C1 ∥ C2)]]V ≜ ∃R1∃R2∃W1∃W2 ( ⌈W ⇔ W1 ∧ W2⌉ ∧ ⌈R ⇔ R1 ∨ R2⌉ ∧
                                ⌈¬(R1 ∧ R2)⌉ ∧
                                T(R1, W1) ∧ [R1/R, W1/W][[C1]]V ∧
                                T(R2, W2) ∧ [R2/R, W2/W][[C2]]V )

[[export m(p1, . . . , pn) code]] ≜ □∀p1 . . . ∀pn∀r′(r′ = m(p1, . . . , pn) ⇔ [[code]]∅),

if m returns a value;
[[export m(p1, . . . , pn) code]] ≜ □∀p1 . . . ∀pn(m(p1, . . . , pn) ⇔ [[code]]∅),

if m returns no value. The semantics of a component is the conjunction of the


formulas [[export m(p1 , . . . , pn ) code]] for its exported methods. Declarations of
imported methods carry only typing information.
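
As an informal illustration of how the semantic function [[.]]V can be mechanised, here is a small Python sketch that translates a fragment of the language into DC formulas rendered as strings. The AST classes and the textual rendering of ⌈.⌉, chop and KR are our own simplifications; the sketch follows the shape of the clauses above but is not part of the paper's definitions.

from dataclasses import dataclass
from typing import Union

@dataclass
class Stop:
    pass

@dataclass
class Assign:
    var: str
    expr: str

@dataclass
class Delay:
    amount: str

@dataclass
class SeqComp:
    first: "Code"
    rest: "Code"

Code = Union[Stop, Assign, Delay, SeqComp]

def keep(vars_kept: set) -> str:
    """Render K_R(V'): the listed variables are unchanged and the thread runs."""
    eqs = " ∧ ".join(f"{v}' = {v}" for v in sorted(vars_kept)) or "true"
    return f"({eqs} ∧ ⌈R⌉)"

def semantics(code: Code, scope: set) -> str:
    if isinstance(code, Stop):
        return "⌈W⌉"
    if isinstance(code, Assign):
        return f"(⌈¬R⌉; {keep(scope - {code.var})} ∧ {code.var}' = {code.expr})"
    if isinstance(code, Delay):
        return f"(⌈¬R⌉ ∧ {code.amount} = ∫¬N)"
    if isinstance(code, SeqComp):
        return f"({semantics(code.first, scope)}; {semantics(code.rest, scope)})"
    raise TypeError(code)

if __name__ == "__main__":
    prog = SeqComp(Assign("x", "x + 1"), SeqComp(Delay("5"), Stop()))
    print(semantics(prog, {"x", "y"}))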

3 Reasoning about timed programs in PDC: pattern and examples
Let C be a piece of code. Then the formula [[C]]V contains the flexible function and
relation symbols for the methods with calls in C. Let m be such a method; let m
return no value for the sake of simplicity. Let B be the body of m. By replacement
of equivalents we can derive
[[export m(p1 , . . . , pn ) B]] ∧ [[C]]V ⇒ [[[B]]∅ /m][[C]]V ,
where the substitution [[[B]]∅ /m] distributes over the boolean connectives, chop and
quantifiers and [[[B]]∅ /m]m(e1 , . . . , en ) is defined as [e1 /p1 , . . . , en /pn ][[B]]∅ . Assume
that the satisfaction of a requirement Req C written in DC by C is expressed as the
implication
[[C]]V ⇒ Req C
and, according to a contract, m is supposed to satisfy a requirement Req m , that is,
[[B]]∅ ⇒ Req m
is valid for every acceptable B. Then the formula
[Req m /m][[C]]V ⇒ Req C
states that C would satisfy Req C , provided that the imported implementation of m
satisfies Req m.
This setting enables reasoning about the probability distribution of the execution
time of code that calls imported methods too. Let C and m be as above. Then the
probability for C to terminate within d time units can be expressed as the PDC
term

p([[C]]V ∧ ∫¬N ≤ d; ⊤),

where we use ∫¬N to measure only non-negligible execution time spent on the
execution of delay or by other processes. Now let Fm be a rigid function symbol
such that Fm (x) denotes a lower bound for the probability for m to terminate
within time x. Let Pm be the precondition for the successful execution of m. Let p
abbreviate p1 , . . . , pn . Then

∀x(ℓ = 0 ⇒ p(Pm(p) ∧ m(p) ∧ ∫¬N > x; ⊤) < 1 − Fm(x))  ⊢PITL

p([[C]]V ∧ ∫¬N ≤ d; ⊤) ≥ c
means that the probability for C to terminate within d time units is at least c. The
correspondence between the assumption on the execution time of m and the derived
estimate of the execution time of C can be expressed even more accurately, if we

make d and Fm parameters in an appropriate expression FC in place of c:



∀x(ℓ = 0 ⇒ p(Pm(p) ∧ m(p) ∧ ∫¬N > x; ⊤) < 1 − Fm(x))  ⊢PITL

p([[C]]V ∧ ∫¬N > d; ⊤) < 1 − FC(d, Fm).
In general FC represents a mapping from distributions to distributions, but if the
form of Fm is known up to numerical parameters such as mean and variance, then
FC can be defined as a mapping from their numerical domains instead of the space
of distributions.
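
As a toy illustration of FC as a mapping from distributions to distributions, the following Python sketch assumes (our assumption, not the paper's) that C performs a fixed amount d0 of non-negligible local work followed by a single call of the imported method m, so that a lower bound FC(t) = Fm(t − d0) for t ≥ d0 is obtained from the contract bound Fm.

from typing import Callable

Distribution = Callable[[float], float]   # t -> lower bound on P(time <= t)

def fc_from_fm(fm: Distribution, d0: float) -> Distribution:
    """Lower bound for the execution-time distribution of C given F_m."""
    def fc(t: float) -> float:
        # C can finish within t only if m finishes within t - d0.
        return fm(t - d0) if t >= d0 else 0.0
    return fc

if __name__ == "__main__":
    # Hypothetical contract bound for the imported method m: uniform on [0, 10].
    fm = lambda t: min(1.0, max(0.0, t / 10.0))
    fc = fc_from_fm(fm, d0=2.0)
    print([round(fc(t), 2) for t in (0, 2, 7, 12, 20)])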

Example 3.1 Consider downloading e-mail, which consists of establishing a con-


nection with a server, followed by the actual download. Let the code C for this task call
two imported methods, connect() and getMail():
var ok; call ok := connect(); if ok then (call getMail (); stop) else stop
Let Fconnect (t) be the probability for connecting within time t. Let the amount of
the e-mail be probabilistically distributed too and the probability for downloading it
in time t be FgetMail (t). Then lower bounds FC for the distribution of the execution
time of C satisfy the formula:

ℓ = 0 ⇒ p([[call ok := connect()]] ∧ ∫¬N > t; ⊤) < 1 − Fconnect(t),

ℓ = 0 ⇒ p(ok ∧ [[call getMail()]] ∧ ∫¬N > t; ⊤) < 1 − FgetMail(t)

⊢PITL  p([[C]] ∧ ∫¬N > t; ⊤) < 1 − FC(t).
Since the time for connecting and the quantity of e-mail to download can be assumed
independent,
(2)   FC(t) = ∫_0^t Fconnect(t − y) dFgetMail(y).
FC can be derived in PITL only approximately, because PITL does not capture
taking the limits involved in the definition of the integral in (2). This corresponds to
the established use of numerical approximations for distributions. Except for some
thoroughly studied distributions, cumulative probability functions seldom have a
closed form. Using contracts makes it natural to work with lower bounds and not
exact probabilities. The latter may as well not exist. This makes approximations
satisfactory. To derive such approximations for (2) in PITL, we find a sequence Ak ,
k = 0, 1, . . ., of terms involving Fconnect , FgetMail and t such that

p([[C]] ∧ ∫¬N > t; ⊤) < 1 − Ak

for all k can be derived in PITL and, by the definition of the integral in (2), limk Ak = FC(t). Taking
this limit briefly takes us outside PITL. The part of the derivation within PITL is
a formalisation of a standard calculation. Let

(3)   ϕ^{t2}_{t1} ≜ ϕ ∧ ∫¬N > t1 ∧ ∫¬N ≤ t2.

Every method call can terminate at most once. This implies the validity of the
formulas connect ⇒ ¬(connect; ℓ ≠ 0) and getMail ⇒ ¬(getMail; ℓ ≠ 0) and enables
an application of Seq to derive

ℓ = 0 ∧ p(connect; getMail; ⊤) = 1 ⇒
p(connect^{(l+1)t/k}_{lt/k}; getMail^{(m+1)t/k}_{mt/k}; ⊤) =
(Fconnect((l+1)t/k) − Fconnect(lt/k)) · (FgetMail((m+1)t/k) − FgetMail(mt/k))

for all l, m ∈ {0, . . . , k − 1}. Now by a repeated application of the PITL axiom P+,
and using that Fconnect(0) = 0, we obtain

p((connect; getMail) ∧ ∫¬N ≤ t; ⊤)
   ≤ Σ_{l+m≤k−1} p(connect^{(l+1)t/k}_{lt/k}; getMail^{(m+1)t/k}_{mt/k}; ⊤)
     + Σ_{l+m=k} p(connect^{(l+1)t/k}_{lt/k}; getMail^{(m+1)t/k}_{mt/k}; ⊤)
   = Sk + Bk,

where Sk = Σ_{m≤k−1} Fconnect((k−m+1)t/k) · (FgetMail((m+1)t/k) − FgetMail(mt/k)),
Bk ≤ max_{l≤k−1} (Fconnect((l+1)t/k) − Fconnect(lt/k)), and therefore limk Bk = 0. By
the definition of the Stieltjes integral, we have limk Sk = FC(t). Hence we can take Ak
to be the expression on the right of ≤ above.
Note that Seq was formulated with ϕ^{t2}_{t1} standing for ϕ ∧ ℓ ≥ t1 ∧ ℓ ≤ t2, but it
applies to (3) as well.
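
The Stieltjes sums used above can also be evaluated numerically. The following Python sketch computes a lower Stieltjes sum for FC(t) = ∫_0^t Fconnect(t − y) dFgetMail(y); the two distributions in the example run are hypothetical stand-ins for the contract bounds, and any nondecreasing functions into [0, 1] could be used instead.

def stieltjes_sum(f_connect, f_get_mail, t, k):
    """Partition [0, t] into k pieces and form a lower Stieltjes sum for F_C(t)."""
    total = 0.0
    for m in range(k):
        y_low, y_high = m * t / k, (m + 1) * t / k
        # Using t - y_high under-approximates F_connect on each piece, which keeps
        # the sum a lower bound, in the spirit of contract-style estimates.
        total += f_connect(t - y_high) * (f_get_mail(y_high) - f_get_mail(y_low))
    return total

if __name__ == "__main__":
    # Hypothetical lower bounds: connecting within [0, 2], downloading within [0, 8].
    f_connect = lambda x: min(1.0, max(0.0, x / 2.0))
    f_get_mail = lambda x: min(1.0, max(0.0, x / 8.0))
    for k in (4, 16, 64, 256):
        print(k, round(stieltjes_sum(f_connect, f_get_mail, 6.0, k), 4))

As k grows the printed values increase towards FC(6), mirroring the convergence of Sk described above.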

Example 3.2 Consider attempting to download 5 files in quick succession. With


a server which allows at most 4 files to be downloading simultaneously, the 5th
request can be cancelled by the browser due to a timeout. We are interested in
the probability of cancellation. Here follows an extremely simplified variant of the
relevant browser code:

(4)
letrec X where                                              1
  X : if userRequest then                                   2
        (userRequest := false;                              3
         call handle := requestDownload(url, timeout);      4
         if handle != null                                  5
           then (call download(handle); stop)               6
           else (call signalTimeout(url); stop)             7
        )                                                   8
      else X                                                9

A separate process is assumed to indicate the arrival of a new download request by


setting the shared variable userRequest and placing the URL in the shared variable

url . Let
α(R, W, T) ≜ ( (⌈¬R⌉; KR(V) ∧ ¬userRequest)*;
               ⌈¬R⌉; KR(V) ∧ userRequest; ⌈¬R⌉;
               KR(V \ {userRequest}) ∧ userRequest′ = false ) ∧ ∫¬N = T

and

β(R, W, T) ≜ ( ⌈¬R⌉;
               KR(V \ {handle}) ∧ handle′ = requestDownload(url, timeout);
               ⌈¬R⌉; KR(V) ∧ handle ≠ null; ⌈¬R⌉;
               KR(V) ∧ download(handle) ∧ ∫¬N = T; ⌈W⌉ )

According to the semantics of (4), α(R, T ) describes the repeated execution of lines
2-3 and 9 until userRequest becomes true with T denoting the overall execution time,
and β(R, W, T ) corresponds to the execution of lines 4-6, with R and W describing
the status of the thread created in order to complete the requested download, and T
denoting the download time. The scenario of launching the five downloads involves
six threads: one for each download and one to keep the system ready for further
requests. Let R1 , . . . , R6 , W1 , . . . , W6 describe the status of the six threads. Then
the scenario can be described by the formula
(5)   ∃R1 . . . ∃R6∃W1 . . . ∃W6 ( ⌈W ⇔ ⋀_{i=1}^{6} Wi⌉ ∧ ⌈R ⇔ ⋁_{i=1}^{6} Ri⌉ ∧
                                   ⋀_{1≤i<j≤6} ⌈¬(Ri ∧ Rj)⌉ ∧ ⋀_{i=1}^{6} T(Ri, Wi) ∧ γ )

where γ describes the concurrent execution of the six threads and is written using
the additional abbreviations:
αi(T) ≜ α(Ri+1 ∨ . . . ∨ R6, Wi+1 ∧ . . . ∧ W6, T) and βi(T) ≜ β(Ri, Wi, T).
With these abbreviations γ can be written as
(α(R, W, T0);
  (β1(D1) ∧
    (α1(T1);
      (β2(D2) ∧
        (α2(T2);
          (β3(D3) ∧
            (α3(T3);
              (β4(D4) ∧
                (α4(T4); ξ ∧ η)))))))))

Here Ti denotes the time between launching the ith and the i + 1st download
and Di denotes the duration of the ith download, i = 1, . . . , 5. The formulas ξ and
η denote

(⌈¬R5⌉; KR5(V \ {handle}) ∧ handle′ = requestDownload(url, timeout) ∧ ∫¬N = x; ⊤)

and

((⌈¬R6⌉; KR6(V) ∧ ¬userRequest)*; ⊤),

and correspond to the thread for the 5th download and the thread for subsequent
user requests after the 5th download request. The occurrences of ⊤ in them mark
future behaviour which is not specified in our scenario. The semantics of letrec
implies (5); this can be established using the validity of μX.ϕ ⇔ [μX.ϕ/X]ϕ. As-
suming that the rate of downloading is the limiting factor for the working of the
entire system, which allows us to ignore time taken for dialog, computation and by
requestDownload for the first four downloads, the 5th download becomes cancelled
in case x exceeds timeout, which is equivalent to
timeout + Σ_{i=1}^{4} Ti < min_{1≤i≤4} ( Di + Σ_{j=1}^{i−1} Tj ).

Let F(l, t) be a lower bound for the probability for download to complete a download
of length l within time t. It can be assumed that F(al, at) = F(l, t) for all a > 0 and
that F(l, t) = 0 in case l/t exceeds the top transmission rate v. Let li be the length
of the ith download, i = 1, . . . , 5. Let li > v(T1 + T2 + T3 + T4 ) for i = 1, . . . , 4, that
is, none of the downloads can be over before all of them have been launched, for the
sake of simplicity. Then the probability Pi for i ∈ {1, . . . , 4} to be the first download
to complete, and to complete before the timeout for the pending 5th download is
at least
∫_{{q1, . . . , q4 : li − qi ≤ lj − qj, j = 1, . . . , 4}} F(li − qi, timeout) · ∏_{k=1}^{4} ∂F(qk, Σ_{s=k}^{4} Ts)/∂qk  dq1 . . . dq4,

The probability for the 5th download not to be cancelled is P1 + . . . + P4 . Ap-


proximations of the above integral can be derived in PITL using Seq much like in
Example 3.1.
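
Alternatively, the probability of interest can be estimated directly from the cancellation condition above by simulation. The following Python sketch does this under a hypothetical download-time distribution; it is only a numerical check of the scenario, not a PITL derivation, and the numbers Ti, timeout and the sampler are illustrative choices of ours.

import random

def p_not_cancelled(sample_d, times, timeout, n=100_000):
    """Cancelled iff timeout + T1+...+T4 < min_i (D_i + T_1+...+T_{i-1})."""
    t1, t2, t3, t4 = times
    launch_offsets = [0.0, t1, t1 + t2, t1 + t2 + t3]   # starts of downloads 1..4
    request5 = t1 + t2 + t3 + t4                        # when the 5th is requested
    ok = 0
    for _ in range(n):
        first_completion = min(off + sample_d() for off in launch_offsets)
        ok += first_completion <= request5 + timeout    # a slot frees in time
    return ok / n

if __name__ == "__main__":
    random.seed(1)
    sample_d = lambda: random.uniform(20.0, 60.0)       # hypothetical D_i
    print(p_not_cancelled(sample_d, times=(1.0, 1.0, 1.0, 1.0), timeout=30.0))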
Using a contract in which the execution time of download is approximated by a
distribution depending just on the amount of data to transmit is too crude. A more
accurate calculation is possible by taking the amount of competing traffic into account,
but the form of contract that we propose does not enable it.

4 Probabilistic timed designs


A design ⟨P, R⟩, usually written as P ⊢ R, describes a computation by a precondition
P and an input-output relation R. P constrains the initial values v of the variables, and
R is a relation between v and the final values v′ of the variables, which holds if v
initially satisfy P. A probabilistic timed design ⟨P, R, F⟩ additionally includes an
execution time distribution F. F(v, t) is a lower bound for the probability for the
computation to terminate within time t, provided that P(v) holds. A hard bound
d on execution time can be expressed by an F satisfying F(d′) = 1 for all d′ ≥ d.

4.1 Describing designs in PDC

The property of method m encoded by ⟨P, R, F⟩ can be written as the PITL for-
mulas
m ⇒ (P(v) ⇒ R(v, v′)) and ℓ = 0 ⇒ p(P(v) ∧ m ∧ ℓ > t; ⊤) < 1 − F(v, t).
The first one is for the functional behaviour of m. The second one states that if P(v)
holds, then m takes more than t time units with probability less than 1 − F(v, t).
F is just a lower bound, because an exact probability need not exist.
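
For illustration, a probabilistic timed design ⟨P, R, F⟩ can be represented as a simple record; in the Python sketch below the concrete predicates are placeholders of ours, and hard_deadline shows one way to express the hard bound d mentioned above as an F.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProbTimedDesign:
    pre: Callable[[dict], bool]                  # P(v)
    rel: Callable[[dict, dict], bool]            # R(v, v')
    time_bound: Callable[[dict, float], float]   # F(v, t), a lower bound

def hard_deadline(d: float) -> Callable[[dict, float], float]:
    """F expressing a hard bound d: F(v, t) = 1 for t >= d, else 0."""
    return lambda v, t: 1.0 if t >= d else 0.0

if __name__ == "__main__":
    # Hypothetical design: doubles x, always terminates within 5 time units.
    design = ProbTimedDesign(
        pre=lambda v: v["x"] >= 0,
        rel=lambda v, w: w["x"] == 2 * v["x"],
        time_bound=hard_deadline(5.0),
    )
    print(design.pre({"x": 3}), design.rel({"x": 3}, {"x": 6}), design.time_bound({}, 7.0))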

4.2 Refinement of probabilistic timed designs

Design D1 = ⟨P1, R1, F1⟩ refines design D2 = ⟨P2, R2, F2⟩, written D1 ⊑ D2, if

∀x(P2(x) ⇒ P1(x)), ∀x∀x′(R1(x, x′) ⇒ R2(x, x′)), and ∀x∀t(F2(x, t) ≤ F1(x, t)).
This means that D1 has a weaker or equivalent precondition and a stronger or
equivalent input-output relation, and on average terminates at least as fast as D2.
Obviously if D1 ⊑ D2, then
P1(v) ∧ m ⇒ R1(v, v′) and ∀x(ℓ = 0 ⇒ p(P1(v) ∧ m ∧ ℓ > x; ⊤) < 1 − F1(v, x))
entail
P2(v) ∧ m ⇒ R2(v, v′) and ∀x(ℓ = 0 ⇒ p(P2(v) ∧ m ∧ ℓ > x; ⊤) < 1 − F2(v, x)).
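
The three refinement conditions can be checked mechanically on sampled data. The following Python sketch does so for designs represented as records with fields pre, rel and F; since it only samples the domains, it can refute a claimed refinement but not prove it, and the example designs are hypothetical.

from types import SimpleNamespace

def refines(d1, d2, vals, pairs, times) -> bool:
    pre_ok = all(d1.pre(v) for v in vals if d2.pre(v))                    # P2(x) => P1(x)
    rel_ok = all(d2.rel(v, w) for v, w in pairs if d1.rel(v, w))          # R1 => R2
    time_ok = all(d2.F(v, t) <= d1.F(v, t) for v in vals for t in times)  # F2 <= F1
    return pre_ok and rel_ok and time_ok

if __name__ == "__main__":
    # Hypothetical designs over a single variable x.
    d1 = SimpleNamespace(pre=lambda v: True,   rel=lambda v, w: w == v + 1,
                         F=lambda v, t: min(1.0, t / 5.0))
    d2 = SimpleNamespace(pre=lambda v: v >= 0, rel=lambda v, w: w > v,
                         F=lambda v, t: min(1.0, t / 10.0))
    print(refines(d1, d2, vals=range(-3, 4),
                  pairs=[(i, i + 1) for i in range(5)], times=[1.0, 2.5, 7.0]))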

5 Probabilistic timed contracts


The execution time of a method depends on the execution times of the methods
which have calls in its body.

Definition 5.1 [component declaration] A component declaration is a pair ⟨Mi, Me⟩
where Mi and Me are disjoint sets of declarations for imported and exported meth-
ods, respectively.

Definition 5.2 [probabilistic timed contract] Let ⟨Mi, Me⟩ be a component dec-
laration and Vm be the set of the valuations for the variables of declaration m,
m ∈ Mi ∪ Me. The tuple C = ⟨Dm : m ∈ Mi ∪ Me⟩ is a contract for ⟨Mi, Me⟩, if
the Dm are of the form ⟨Pm, Rm, Fm⟩ where
(i) ⟨Pm, Rm⟩ is a (non-probabilistic) design for m, m ∈ Mi ∪ Me.
(ii) Fm is a variable of type Vm × R+ → [0, 1] for method declarations m ∈ Mi
and is meant to denote a distribution of the execution time of implementations of
m.
(iii) For declarations m ∈ Me, Fm is an expression for the distribution of the ex-
ecution time of an implementation of declaration m, as in probabilistic designs, in
terms of Fn, n ∈ Mi.

We denote {n ∈ Mi : Fn occurs in Fm } by Ci,m . Semantically, if m ∈ Me , then



the type of Fm is

( ∏_{n∈Ci,m} (Vn × R+ → [0, 1]) ) → (Vm × R+ → [0, 1]).

Syntactically we assume that Fm is an expression such as, e.g., (2). A contract C is


meant to express that if the methods m ∈ Mi satisfy their corresponding designs Dm
and the distribution variables Fm are assigned lower bounds for the distributions
of their execution times, then the methods from Me satisfy their corresponding
designs and the expressions Fm evaluate to lower bounds for the distributions of their
execution times too. If Ci,m = ∅ then ⟨Pm, Rm, Fm⟩ is essentially a probabilistic
timed design.

Definition 5.3 [refinement of probabilistic timed contracts] Let C and C′ be proba-
bilistic timed contracts for ⟨Mi, Me⟩ and ⟨Mi′, Me′⟩, respectively. Let C = ⟨⟨Pm, Rm, Fm⟩ :
m ∈ Mi ∪ Me⟩ and C′ = ⟨⟨Pm′, Rm′, Fm′⟩ : m ∈ Mi′ ∪ Me′⟩. Then, C′ refines C, written
C′ ⊑ C, if
(i) Mi′ ⊆ Mi, Me′ ⊇ Me;
(ii) ⟨Pm, Rm⟩ ⊑ ⟨Pm′, Rm′⟩ for m ∈ Mi′, ⟨Pm′, Rm′⟩ ⊑ ⟨Pm, Rm⟩ for m ∈ Me;
(iii) Fm(v, t) ≤ Fm′(v, t) for m ∈ Me, v ∈ Vm, t ∈ R+, regardless of the values of
Fn, n ∈ Mi′.

5.1 Composing probabilistic timed contracts


Let A^k = ⟨Mi^k, Me^k⟩ and C^k = ⟨⟨Pm^k, Rm^k, Fm^k⟩ : m ∈ Mi^k ∪ Me^k⟩, k = 1, 2, be two
component declarations and probabilistic timed contracts for them, respectively. A^1
and A^2 are composable, if Me^1 ∩ Me^2 = ∅. C^1 and C^2 are composable, if A^1 and A^2
are composable, and Dm^k ⊑ Dm^{2−k} for m ∈ Me^k ∩ Mi^{2−k}, k = 1, 2. The composition of
C^1 and C^2, written C^1 ∪ C^2, is ⟨⟨Pm, Rm, Fm⟩ : m ∈ Mi^1 ∪ Me^1 ∪ Mi^2 ∪ Me^2⟩ where:
(i) Pm(v) ≜ Pm^1(v) ∧ Pm^2(v), Rm(v, v′) ≜ Rm^1(v, v′) ∧ Rm^2(v, v′) and Fm = Fm^1 = Fm^2
for m ∈ Mi^1 ∩ Mi^2;
(ii) Pm = Pm^k and Rm = Rm^k for m ∈ Me^k ∪ (Mi^k \ Mi^{2−k}), k = 1, 2;
(iii) Fm = Fm^k for m ∈ Mi^k \ Mi^{2−k}, k = 1, 2.
i i
To facilitate the understanding, we first define Fm, m ∈ Me^1 ∪ Me^2, in case C^1 and
C^2 allow no circular dependency between the methods, that is, if there is no sequence
m0, . . . , m2s−1 such that mr ∈ Me^1 ∩ Mi^2 for r = 1, 3, . . . , 2s − 1, mr ∈ Me^2 ∩ Mi^1
for r = 0, 2, . . . , 2s − 2 and mr ∈ Ci,m_{r+1 mod 2s}^{2−(r mod 2)}, r = 0, . . . , 2s − 1. Given that there
is no circular dependency, we can define the dependency depth of m from C^1 ∪ C^2 as
the length s of the longest sequence of the form m1, . . . , ms such that m1 ∈ Ci,m
and mr+1 ∈ Ci,m_r^k, where k is such that mr ∈ Me^k, for r = 1, . . . , s − 1, and we can
define Fm by induction on the dependency depth of m by the clauses:
Fm = Fm^k for m ∈ Me^k of dependency depth 0;
Fm = [Fn/Fn^k : n ∈ Ci,m^k]Fm^k for m ∈ Me^k of nonzero dependency depth.
Note that the substitution replaces Fn^k with the expression for it from C^{2−k}, in case
n ∈ Me^{2−k}. Otherwise Fn^k is not affected by this substitution.

If there are circular dependencies between C^1 and C^2, then the Fm's for the
exported methods in C^1 ∪ C^2 should be a solution of the system of equations
Xm = [Xn/Fn^k : n ∈ Ci,m^k]Fm^k.
Solving it without restrictions on Fm^k can be hard 6, but if the Fm^k are monotonic, then
Xm can be obtained as the limits of the sequences Xm^j, j < ω, where Xm^0 ≡ 0 and
Xm^{j+1} = [Xn^j/Fn^k : n ∈ Ci,m^k]Fm^k for m ∈ Me^k.
Observe that Xm^0 ≡ 0 implies that Xm^1 would give non-zero termination probability
only to runs of m with no calls to other imported methods; Xm^2 would give non-
zero probability for runs with calls to imported methods which themselves lead to
no further calls, etc. Since the Fm^k are meant to be under-approximations, and the
monotonicity of Fm^k entails Xm^s ≤ Xm^{s+1} ≤ limj Xm^j for all s < ω, Xm^s can be used as
Fm instead of limj Xm^j for sufficiently large s, to achieve a crude, but less expensive
approximation.
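
The iteration Xm^{j+1} = [Xn^j/Fn^k : n ∈ Ci,m^k]Fm^k can be prototyped directly. The following Python sketch runs a few iterations for two mutually dependent exported methods; the expressions standing in for Fm^k are hypothetical (each method does one time unit of work and then, with probability 1/2, calls the other), so the printed values only illustrate the monotone convergence discussed above.

def iterate(fm_exprs, depth, times):
    """fm_exprs maps a method name to a function (env, t) -> bound, where env
    maps the names it depends on to the current approximations X^j_n."""
    approx = {m: (lambda t: 0.0) for m in fm_exprs}           # X^0_m == 0
    for _ in range(depth):
        # X^{j+1}_m substitutes the previous approximations into F^k_m.
        approx = {m: (lambda t, m=m, env=dict(approx): fm_exprs[m](env, t))
                  for m in fm_exprs}
    return {m: [round(approx[m](t), 4) for t in times] for m in fm_exprs}

if __name__ == "__main__":
    exprs = {
        "m": lambda env, t: 0.0 if t < 1 else 0.5 + 0.5 * env["n"](t - 1),
        "n": lambda env, t: 0.0 if t < 1 else 0.5 + 0.5 * env["m"](t - 1),
    }
    for j in (1, 2, 3, 6):
        print(j, iterate(exprs, j, times=[0.5, 1, 2, 3, 10]))

Each additional iteration accounts for runs with one more level of imported calls, so for a fixed time point the bounds grow monotonically towards the limit, and any Xm^s can be used as a cheaper, cruder Fm.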

Concluding remarks
Here we focused just on soft requirements on execution time, but we believe that the
approach can be used to capture other QoS requirements involving probability as
well. The notion of QoS originated from telecommunications. Our examples come
from everyday use of the Internet and need no expertise to understand. However, we
believe that our technique would work just as well in other areas such as embedded
systems.

References
[1] Antoine Beugnard, Jean-Marc Jézéquel, Noël Plouzeau, and Damien Watkins. Making Components
Contract Aware. Computer, 32(7):38–45, 1999.

[2] Dang Van Hung. Modelling and Verification of Biphase Mark Protocols in Duration Calculus Using
PVS/DC− . In Proceedings of the 1998 International Conference on Application of Concurrency to
System Design (CSD’98), pages 88–98. IEEE Computer Society Press, March 1998.

[3] Dang Van Hung. Toward a formal model for component interfaces for real-time systems. In Tiziana
Margaria and Mieke Massink, editors, Proceedings of the 10th international workshop on Formal
methods for industrial critical systems, pages 106 – 114. ACM Press, 2005.

[4] Dang Van Hung and Wang Ji. On The Design of Hybrid Control Systems Using Automata Models. In
Proceedings of FST TCS 1996, volume 1180 of LNCS, pages 156–167. Springer, 1996.

[5] Dang Van Hung and Zhou Chaochen. Probabilistic Duration Calculus for Continuous Time. Formal
Aspects of Computing, 11(1):21–44, 1999.

[6] Dimitar P. Guelev and Dang Van Hung. Prefix and Projection onto State in Duration Calculus. In
Proceedings of TPTS’02, volume 65(6) of ENTCS. Elsevier Science, 2002.

[7] Dimitar P. Guelev and Dang Van Hung. On the Completeness and Decidability of Duration Calculus
with Iteration. Theoretical Computer Science, 337:278–304, 2005.

[8] Dimitar P. Guelev. A Complete Fragment of Higher-order Duration μ-calculus. In Proceedings of FST
TCS 2000, volume 1974 of LNCS, pages 264–276. Springer, 2000.

6 In practice Fm^k can be non-monotonic: increasing the execution time of an imported method may indeed
shorten the execution time of code which would abort if an imported method misses a deadline.

[9] Dimitar P. Guelev. Probabilistic Neighbourhood Logic. In Mathai Joseph, editor, Proceedings of
FTRTFT 2000, volume 1926 of LNCS, pages 264–275. Springer, 2000. A proof-complete version is
available as UNU/IIST Technical Report 196 from http://www.iist.unu.edu.

[10] Dimitar P. Guelev. Probabilistic Interval Temporal Logic and Duration Calculus with Infinite
Intervals: Complete Proof Systems. Logical Methods in Computer Science, 3(3), 2007. URL:
http://www.lmcs-online.org/.

[11] Hung Ledang and Dang Van Hung. Concurrency and Schedulability Analysis in Component-based
Real-Time System Development. In Proceedings of the 1st IEEE & IFIP International Symposium on
Theoretical Aspects of Software Engineering. IEEE Computer Society Press, 2007.

[12] C.A.R. Hoare and He Jifeng. Unifying Theories of Programming. Prentice Hall, 1998.

[13] He Jifeng, Li Xiaoshan, and Liu Zhiming. A Theory of Reactive Components. In Liu Zhiming and
Luis Barbosa, editors, Proceedings of the International Workshop on Formal Aspects of Component
Software (FACS 2005), volume 160 of ENTCS, pages 173–195. Elsevier, 2006.

[14] He Jifeng, Xiaoshan Li, and Zhiming Liu. A refinement calculus of object systems. Theoretical
Computer Science, 365(1-2):109–142, 2006.

[15] Michael R. Hansen and Zhou Chaochen. Semantics and Completeness of Duration Calculus. In Real-
Time: Theory and Practice, volume 600 of LNCS, pages 209–225. Springer, 1992.

[16] Li Li and He Jifeng. A Denotational Semantics of Timed RSL using Duration Calculus. In Proceedings
of RTCSA’99, pages 492–503. IEEE Computer Society Press, 1999.

[17] Liu Zhiming, A. P. Ravn, E. V. Sørensen, and Zhou Chaochen. A Probabilistic Duration Calculus.
In H. Kopetz and Y. Kakuda, editors, Dependable Computing and Fault-tolerant Systems Vol. 7:
Responsive Computer Systems, pages 30–52. Springer, 1993.

[18] Paritosh K. Pandya. Some extensions to Mean-Value Calculus: Expressiveness and Decidability. In
Proceedings of CSL’95, volume 1092 of LNCS, pages 434–451. Springer, 1995.

[19] Paritosh K. Pandya and Dang Van Hung. Duration Calculus of Weakly Monotonic Time. In Proceedings
of FTRTFT’98, volume 1486 of LNCS, pages 55–64. Springer, 1998.

[20] Paritosh K. Pandya, Wang Hanping, and Xu Qiwen. Towards a Theory of Sequential Hybrid Programs.
In D. Gries and W.-P. de Roever, editors, Proceedings of IFIP Working Conference PROCOMET’98,
pages 336–384. Chapman & Hall, 1998.

[21] Gerardo Schneider and Xu Qiwen. Towards a Formal Semantics of Verilog Using Duration Calculus. In
Anders P. Ravn and Hans Rischel, editors, Proceedings of FTRTFT’98, volume 1486 of LNCS, pages
282–293. Springer, 1998.

[22] Wang Hanpin and Xu Qiwen. Completeness of Temporal Logics over Infinite Intervals. Discrete Applied
Mathematics, 136(1):87–103, 2004.

[23] Zhou Chaochen, Dang Van Hung, and Li Xiaoshan. A Duration Calculus with Infinite Intervals.
In Horst Reichel, editor, Fundamentals of Computation Theory, volume 965 of LNCS, pages 16–41.
Springer, 1995.

[24] Zhou Chaochen, Dimitar P. Guelev, and Zhan Naijun. A Higher-order Duration Calculus. In Millennial
Perspectives in Computer Science, pages 407–416. Palgrave, 2000.

[25] Zheng Yuhua and Zhou Chaochen. A Formal Proof of a Deadline Driven Scheduler. In Proceedings of
FTRTFT’94, volume 863 of LNCS, pages 756–775. Springer, 1994.

A Proof systems
A.1 Proof system for ITL with infinite intervals
The following axioms and rules have been shown to form a complete proof system for ITL with infinite
intervals when added to a Hilbert-style proof system for classical first-order predicate logic and appropriate
axioms about an abstract domain of durations in [22]:

(A1) (ϕ; ψ) ∧ ¬(χ; ψ) ⇒ (ϕ ∧ ¬χ; ψ),  (ϕ; ψ) ∧ ¬(ϕ; χ) ⇒ (ϕ; ψ ∧ ¬χ)
(A2) ((ϕ; ψ); χ) ⇔ (ϕ; (ψ; χ))
(R) (ϕ; ψ) ⇒ ϕ,  (ψ; ϕ) ⇒ ϕ   if ϕ is rigid
(B) (∃xϕ; ψ) ⇒ ∃x(ϕ; ψ),  (ψ; ∃xϕ) ⇒ ∃x(ψ; ϕ)   if x has no free occurrences in ψ
(L1) (ℓ = x; ϕ) ⇒ ¬(ℓ = x; ¬ϕ),  (ϕ; ℓ = x ∧ x ≠ ∞) ⇒ ¬(¬ϕ; ℓ = x)
(L2) ℓ = x + y ∧ x ≠ ∞ ⇔ (ℓ = x; ℓ = y)
(L3) ϕ ⇒ (ℓ = 0; ϕ),  ϕ ∧ ℓ ≠ ∞ ⇒ (ϕ; ℓ = 0)
(S1) (ℓ = x ∧ ϕ; ψ) ⇒ ¬(ℓ = x ∧ ¬ϕ; χ)
(P1) ¬(ℓ = ∞; ϕ)
(P2) (ϕ; ℓ = ∞) ⇒ ℓ = ∞
(P3) (ϕ; ℓ = ∞) ⇒ ℓ = ∞
(N) from ϕ infer ¬(¬ϕ; ψ), and from ϕ infer ¬(ψ; ¬ϕ)
(Mono) from ϕ ⇒ ψ infer (ϕ; χ) ⇒ (ψ; χ), and from ϕ ⇒ ψ infer (χ; ϕ) ⇒ (χ; ψ)
Using the first-order logic axiom
(∃r) [t/x]ϕ ⇒ ∃xϕ
is correct only if no variable in t becomes bound due to the substitution, and either t is rigid or (.; .) does
not occur in ϕ.

A.2 Axioms and rules for DC with infinite intervals


The axioms and rules below were proposed for DC with finite intervals and have been shown to be complete
relative to validity in real-time ITL in [15].
(DC1) ∫0 = 0
(DC2) ∫1 = ℓ
(DC3) ∫S ≥ 0
(DC4) ∫S1 + ∫S2 = ∫(S1 ∨ S2) + ∫(S1 ∧ S2)
(DC5) (∫S = x; ∫S = y) ⇒ ∫S = x + y
(DC6) ∫S1 = ∫S2   if S1 and S2 are propositionally equivalent
(IR1) from [ℓ = 0/A]ϕ and ϕ ⇒ [A ∨ (A; ⌈S⌉ ∨ ⌈¬S⌉)/A]ϕ infer [⊤/A]ϕ
(IR2) from [ℓ = 0/A]ϕ and ϕ ⇒ [A ∨ (⌈S⌉ ∨ ⌈¬S⌉; A)/A]ϕ infer [⊤/A]ϕ
The completeness proof from [15] involves two theorems which can be derived using the rules IR1 and
IR2, instead of the rules themselves. The second of these theorems does not hold for infinite intervals and
therefore we modify it appropriately:
(T1) ℓ = 0 ∨ (⌈S⌉; ⊤) ∨ (⌈¬S⌉; ⊤)
(T2) ℓ = 0 ∨ ℓ = ∞ ∨ (⊤; ⌈S⌉) ∨ (⊤; ⌈¬S⌉)
DC1-DC6, T1 and the infinite-interval variant of T2 form a relatively complete proof system for DC with
infinite intervals.

A.3 Proof system for PITL


PITL is a conservative extension of ITL. Adding the axioms and a proof rule below to the proof system for
ITL leads to a system which is complete for PITL with respect to a generalisation of the R-based semantics,
where R is replaced by an abstract domain and the probability measures are required to be only finitely
additive.
Extensionality
(P;) (ℓ = x; p(ψ) = y) ⇒ p((ℓ = x; ψ)) = y
(P∞) ℓ = ∞ ⇒ (ϕ ⇔ p(ϕ) = 1)
(P≤) from (ϕ; ℓ = ∞) ⇒ (ψ ⇒ χ) infer ϕ ∧ ℓ < ∞ ⇒ p(ψ) ≤ p(χ)
Arithmetics of probabilities
(P⊥) p(⊥) = 0
(P⊤) p(⊤) = 1
(P+) p(ϕ) + p(ψ) = p(ϕ ∨ ψ) + p(ϕ ∧ ψ)

A.4 Useful theorems and derived rules for PITL


All the theorems and rules below except P; are valid in general PITL models. P; is valid in PITL models
with global probability.
(P≤∞) from (ϕ; ℓ = ∞) ∨ (ϕ ∧ ℓ = ∞) ⇒ (ψ ⇒ χ) infer ϕ ⇒ p(ψ) ≤ p(χ)
(PITL1) from ϕ ⇒ ψ infer p(ϕ) ≤ p(ψ);  from ϕ ⇔ ψ infer p(ϕ) = p(ψ)
(PITL2) p(ϕ) + p(¬ϕ) = 1
(PITL3) p(ϕ) < p(ψ) ⇒ p(ψ ∧ ¬ϕ) ≠ 0
(PITL4) p(ϕ) = p(ϕ ∧ ℓ = ∞)
(PITL5) p(ϕ) ≤ 1
(PITL6) from ϕ infer p(ϕ) = 1;  from ¬ϕ infer p(ϕ) = 0
(PITL7) (ϕ; ⊤) ⇒ p(ϕ; ⊤) = 1
(PITL8) p(ϕ) = 1 ∧ p(ψ) = x ⇒ p(ϕ ∧ ψ) = x
(PITL9) p(ϕ ⇒ ψ) = 1 ⇒ (p(ϕ) = 1 ⇒ p(ψ) = 1)
        p(ϕ ⇒ ψ) = 1 ⇒ (p(ψ) = 0 ⇒ p(ϕ) = 0)
(PITL10) p(ϕ) + p(ψ) > 1 ⇒ p(ϕ ∧ ψ) > 0
(P;) from ϕ ⇒ ¬(ϕ; ℓ ≠ 0) infer (ϕ; p(ψ) = x) ⇒ p(ϕ; ψ) = x
Here follow the proofs of the above PITL theorems and derived rules. The purely ITL parts are skipped
and marked “ITL” for the sake of brevity.
P≤∞:
1  (ϕ; ℓ = ∞) ⇒ (ψ ⇒ χ)                                           assumption, ITL
2  ϕ ∧ ℓ < ∞ ⇒ p(ψ) ≤ p(χ)                                        1, P≤
3  ℓ = ∞ ∧ ϕ ⇒ (p(ψ) = 0 ∧ p(χ) = 0)
               ∨ (p(ψ) = 0 ∧ p(χ) = 1)
               ∨ (p(ψ) = 1 ∧ p(χ) = 1)                             assumption, P∞, PITL2
4  ϕ ∧ ℓ = ∞ ⇒ p(ψ) ≤ p(χ)                                        3, ITL
5  ℓ < ∞ ∨ ℓ = ∞                                                  ITL
6  ϕ ⇒ p(ψ) ≤ p(χ)                                                2, 4, 5
PITL1:
1  ϕ ⇒ ψ                                                           assumption
2  (⊤; ℓ = ∞) ∨ (⊤ ∧ ℓ = ∞) ⇒ (ϕ ⇒ ψ)                              1, ITL
3  p(ϕ) ≤ p(ψ)                                                     2, P≤∞
The second rule PITL1 is proved by two applications of the first.


PITL2:
1  ϕ ∧ ¬ϕ ⇔ ⊥                                                      ITL
2  p(ϕ ∧ ¬ϕ) = p(⊥)                                                1, PITL1
3  p(ϕ ∧ ¬ϕ) = 0                                                   2, P⊥
4  ϕ ∨ ¬ϕ ⇔ ⊤                                                      ITL
5  p(ϕ ∨ ¬ϕ) = p(⊤)                                                4, PITL1
6  p(ϕ ∨ ¬ϕ) = 1                                                   5, P⊤
7  p(ϕ) + p(¬ϕ) = p(ϕ ∨ ¬ϕ) + p(ϕ ∧ ¬ϕ)                            P+
8  p(ϕ) + p(¬ϕ) = 1                                                2, 6, 7, ITL
PITL3:
1  p(ψ) ≤ p(ϕ ∨ ψ)                                                 P≤∞
2  p(ϕ) + p(ψ ∧ ¬ϕ) = p(ϕ ∧ (ψ ∧ ¬ϕ)) + p(ϕ ∨ (ψ ∧ ¬ϕ))            P+
3  p(ϕ) + p(ψ ∧ ¬ϕ) = p(ϕ ∨ ψ)                                     2, PITL1, P⊥
4  p(ϕ) < p(ψ) ⇒ p(ϕ) < p(ϕ ∨ ψ)                                   1
5  p(ϕ) < p(ψ) ⇒ p(ψ ∧ ¬ϕ) ≠ 0                                     3, 4
PITL4 is obtained by applying P≤∞ to the ITL theorems
(⊤; ℓ = ∞) ∨ (⊤ ∧ ℓ = ∞) ⇒ (ϕ ⇒ ϕ ∧ ℓ = ∞)
and
(⊤; ℓ = ∞) ∨ (⊤ ∧ ℓ = ∞) ⇒ (ℓ = ∞ ∧ ϕ ⇒ ϕ).
PITL5:
1  ϕ ⇒ ⊤                                                           ITL
2  p(ϕ) ≤ p(⊤)                                                     1, PITL1
3  p(ϕ) ≤ 1                                                         2, P⊤
PITL6, first rule:
1  ⊤ ⇒ ϕ                                                           assumption
2  p(⊤) ≤ p(ϕ)                                                     1, PITL1
3  1 ≤ p(ϕ)                                                         2, P⊤
4  p(ϕ) ≤ 1                                                         PITL5
5  p(ϕ) = 1                                                         3, 4
PITL6, second rule:
1  ¬ϕ                                                               assumption
2  p(¬ϕ) = 1                                                        1, PITL6
3  p(ϕ) = 0                                                         PITL2
PITL7:
1  (ϕ; ⊤; ℓ = ∞) ∨ ((ϕ; ⊤) ∧ ℓ = ∞) ⇒ (⊤ ⇒ (ϕ; ⊤))                 ITL
2  (ϕ; ⊤) ⇒ p(ϕ; ⊤) = 1                                             P≤∞

PITL8:

1 p(ϕ) = 1 ∧ p(ψ) = x ⇒ p(ϕ ∧ ψ) + p(ϕ ∨ ψ) = 1 + x P+


2 ϕ ⇒ (ϕ ∨ ψ) ITL
3 p(ϕ) ≤ p(ϕ ∨ ψ) 2, PITL1
4 p(ϕ) = 1 ⇒ p(ϕ ∨ ψ) = 1 3, PITL5
5 p(ϕ) = 1 ∧ p(ψ) = x ⇒ p(ϕ ∧ ψ) = x 1, 4
PITL9:
1  p(ϕ ⇒ ψ) = 1 ∧ p(ϕ) = 1 ⇒ p((ϕ ⇒ ψ) ∧ ϕ) = 1                    PITL8
2  (ϕ ⇒ ψ) ∧ ϕ ⇒ ψ                                                 ITL
3  p((ϕ ⇒ ψ) ∧ ϕ) ≤ p(ψ)                                           2, PITL1
4  p(ψ) ≤ 1                                                         PITL5
5  p(ϕ ⇒ ψ) = 1 ⇒ (p(ϕ) = 1 ⇒ p(ψ) = 1)                             1−4
1  p((ϕ ⇒ ψ) ⇒ (¬ψ ⇒ ¬ϕ)) = 1                                      PITL6
2  p(ϕ ⇒ ψ) = 1 ⇒ p(¬ψ ⇒ ¬ϕ) = 1                                   1, PITL9
3  p(¬ψ ⇒ ¬ϕ) = 1 ⇒ (p(¬ψ) = 1 ⇒ p(¬ϕ) = 1)                        PITL9
4  p(¬ψ ⇒ ¬ϕ) = 1 ⇒ (p(¬ψ) = 0 ⇒ p(¬ϕ) = 0)                        3, PITL2
5  p(ϕ ⇒ ψ) = 1 ⇒ (p(¬ψ) = 0 ⇒ p(¬ϕ) = 0)                          2, 4
PITL10:

1 p(ϕ) + p(ψ) > 1 ⇒ p(ϕ ∧ ψ) + p(ϕ ∨ ψ) > 1 P+


2 p(ϕ ∨ ψ) ≤ 1 PITL5
3 p(ϕ) + p(ψ) > 1 ⇒ p(ϕ ∧ ψ) > 0 1, 2
P;:
1   (ϕ; p(ψ) = x) ⇒ ∃t((ϕ ∧ ℓ = t; ⊤) ∧ (ℓ = t; p(ψ) = x))                            ITL
2   (ϕ ∧ ℓ = t; ⊤) ⇒ p(ϕ ∧ ℓ = t; ⊤) = 1                                              PITL7
3   (ℓ = t; p(ψ) = x) ⇒ p(ℓ = t; ψ) = x                                               P;
4   p(ϕ ∧ ℓ = t; ⊤) = 1 ∧ p(ℓ = t; ψ) = x ⇒ p(ϕ ∧ ℓ = t; ψ) = x                       PITL8, PITL1, ITL
5   (ϕ ∧ ℓ = t; ψ) ⇒ (ϕ; ψ)                                                           ITL
6   p(ϕ ∧ ℓ = t; ψ) = x ⇒ p(ϕ; ψ) ≥ x                                                 5, PITL1
7   ∃t((ϕ ∧ ℓ = t; ⊤) ∧ (ℓ = t; p(ψ) = x)) ⇒ ∃t(p(ϕ; ψ) ≥ x)                          2−6, ITL
8   ∃t(p(ϕ; ψ) ≥ x) ⇔ p(ϕ; ψ) ≥ x                                                     ITL
9   (ϕ; p(ψ) = x) ⇒ p(ϕ; ψ) ≥ x                                                       1, 7, 8
10  (ϕ; p(ψ) = x) ⇔ (ϕ; p(¬ψ) = 1 − x)                                                PITL2, ITL
11  (ϕ; p(¬ψ) = 1 − x) ⇒ p(ϕ; ¬ψ) ≥ 1 − x                                             like 1−9, but with ¬ψ as ψ
12  (p(ϕ; ψ) > x ∧ p(ϕ; ¬ψ) ≥ 1 − x) ∨ (p(ϕ; ψ) ≥ x ∧ p(ϕ; ¬ψ) > 1 − x)
    ⇒ p((ϕ; ψ) ∧ (ϕ; ¬ψ)) > 0                                                         PITL10
13  (ϕ; ψ) ∧ (ϕ; ¬ψ) ∧ ¬(ϕ ∧ (ϕ; ℓ ≠ 0); ⊤) ⇒ ⊥                                       ITL
14  p((ϕ; ψ) ∧ (ϕ; ¬ψ) ∧ ¬(ϕ ∧ (ϕ; ℓ ≠ 0); ⊤)) = 0                                    13, PITL6
15  p(¬(ϕ ∧ (ϕ; ℓ ≠ 0); ⊤)) = 1                                                       assumption, PITL6
16  p((ϕ; ψ) ∧ (ϕ; ¬ψ)) = 0                                                           14, 15, PITL8
17  p(ϕ; ¬ψ) ≥ 1 − x ∧ p(ϕ; ψ) ≥ x ⇒ p(ϕ; ψ) ≤ x ∧ p(ϕ; ¬ψ) ≤ 1 − x                   12, 16, ITL
18  (ϕ; p(ψ) = x) ⇒ p(ϕ; ψ) = x                                                       9, 11, 17, ITL

A.5 The rule Seq

In the proof of the admissibility of Seq below, ϕ ∧ ℓ ≥ l ∧ ℓ ≤ h is abbreviated by ϕ^h_l.
1 ( = 0 ∧ p(α; β; ) = 1; ) ⇒ p(p(α; β; ) = 1 ∧ = 0; ) = 1 PITL7
2 (α; p(β; ) = x ∧ = 0) ⇒ p(α; β; ) = x P; , assumptions
3 ∃x((α; ) ⇒ (α; p(β; ) = x ∧ = 0; )) ITL
0 1
(α; ) ⇒
B p(α; β; ) = x∧ C
B C
4 p(α; β; ) = 1 ⇒ ∃x B C 2, 3, ITL
@ (α; p(β; ) = x ∧ = 0; )∧ A
p(α; β; ) = 1)
5 p(α; β; ) = 1 ⇒ ∃x((α; ) ⇒ (α; p(β; ) = 1 ∧ = 0; )) 4, ITL
6 p(α; β; ) = 1 ∧ (α; ) ⇒ (α; p(β; ) = 1 ∧ = 0; ) 5, ITL
7 p(p(α; β; ) = 1 ∧ (α; ) ⇒ (α; p(β; ) = 1 ∧ = 0; )) = 1 6, PITL6
8 p(p(α; β; ) = 1; ) = 1 ∧ p(α; ) = 1 ⇒
p(α; p(β; ) = 1 ∧ = 0; ) = 1 7, PITL9
9 = 0 ∧ p(α; β; ) = 1 ∧ p(α; ) = 1 ⇒
p(α; p(β; ) = 1 ∧ = 0; ) = 1 1, 8
10 p(α; β; ) = 1 ⇒ p(α; ) = 1 PITL9
11 = 0 ∧ p(α; β; ) = 1 ⇒ p(α; p(β; ) = 1 ∧ = 0; ) = 1 9, 10
12 p(α; p(β; ) = 1 ∧ = 0; ) = 1 ∧ p(α; p(β; ) = 1 ∧ = 0; ) = x ⇒
p((α; p(β; ) = 1 ∧ = 0; ) ∧ (α; p(β; ) = 1 ∧ = 0; )) = x PITL8
13 p((α; p(β; ) = 1 ∧ = 0; ) ∧ (α; p(β; ) = 1 ∧ = 0; )) = 0 PITL6
14 p(α; p(β; ) = 1 ∧ = 0; ) = 1 ⇒ p(α; p(β; ) = 1 ∧ = 0; ) = 0 12, 13
15 = 0 ∧ p(α; β; ) = 1 ⇒ p(α; p(β; ) = 1 ∧ = 0; ) = 0 11, 14
16 = 0 ⇒ (p(βlh2 ; ) = x2 ⇒ p(β; ) = 1) [ETβ ∈ [l2 , h2 ]]x2
2
17 αh
l ⇒α
1
ITL
1
18 (αh h2
l ; p(βl ; ) = x2 ∧ = 0; ) ⇒ (α; p(β; ) = 1 ∧ = 0; )
1
16, 17, ITL
1 2
19 = 0 ∧ p(α; β; ) = 1 ⇒ p(αh h2
l ; p(βl ; ) = x2 ∧ = 0; ) = 0
1
15, 18, PITL6, PITL9
1 2 !
(αh h2
l ; p(βl ; ) = x2 ∧ = 0; ) ⇔
1
20 ¬(α ∧ (α; = 0); ) ⇒ 1 2 ITL
(αh h2
l ; ) ∧ (α; p(βl ; ) = x2 ∧ = 0; )
1
1 2

21 p(¬(α ∧ (α; = 0); )) = 1 assumption, PITL6


22 p(αh h2
l ; p(βl ; ) = x2 ∧ = 0; ) = 0 ⇔
1
1 2
p((αh h2
l ; ) ∧ (α; p(βl ; ) = x2 ∧ = 0; )) = 0
1
20, 21, PITL9
1 2
23 α ∧ p(α; βlh2 ; ) = x2 ⇒ ¬(α; p(βlh2 ; ) = x2 ∧ = 0) P;
2 2
24 ¬(α; p(βlh2 ; ) = x2 ) ∧ α ⇒ (α; p(βlh2 ; ) = x2 ∧ = 0) ITL
2 2
25 α ∧ p(α; βlh2 ; ) = x2 ⇒ (α; p(βlh2 ; ) = x2 ∧ = 0) 23, 24
2 2
26 p((αh h2
l ; ) ∧ (α; p(βl ; ) = x2 ∧ = 0; )) = 0 ⇒
1
1 2
p((α ; ) ∧ (α ∧ p(α; βlh2 ; ) = x2 ; )) = 0
h1
l1 25, PITL9, PITL6
2
27 ¬(α ∧ (α; = 0); ) ⇒
((αh h2 h1 h2
l ; ) ∧ (α ∧ p(α; βl ; ) = x2 ; ) ⇔ (αl ∧ p(α; βl ; ) = x2 ; )) ITL
1
1 2 1 2
28 p((αh h2
l ; ) ∧ (α ∧ p(α; βl ; ) = x2 ; )) = 0 ⇔
1
1 2
p(αh
l1
1
∧ p(α; βlh2 ; ) = x2 ; ) = 0 21, 27, PITL9
2
29 = 0 ∧ p(α; β; ) = 1 ⇒ p(αh h2
l ∧ p(α; βl ; ) = x2 ; ) = 0
1
19, 22, 26, 28
1 2

30 = 0 ∧ p(αh h2
l ∧ p(α; βl ; ) = x2 ; ) = 0 ⇒
1
1 2
p((αh h2 h1
l ; ) ∧ (α; βl ; )) = x2 .p(αl ; )
1
P , P , assumptions
1 2 1
31 ¬(α ∧ (α; = 0); ) ⇒ ((αh h2 h1 h2
l ; ) ∧ (α; βl ; ) ⇔ (αl ; βl ; )) ITL
1
1 2 1 2
32 (αh1 h2 h1 h2
l ; ) ∧ (α; βl ; ) ⇔ (αl ; βl ; ) assumption, 31
1 2 1 2
33 p((αh
l1 ; )
1
∧ (α; βlh2 ; )) = p(αh h2
l1 ; βl2 ; )
1
32, PITL1
2
34 = 0 ∧ p(α; ) = 1 ⇒ p(αh
l ; ) = x1
1
[ETα ∈ [l1 , h1 ]]x1
1
35 = 0 ∧ p(α; β; ) = 1 ⇒ p(αh h2
l ; βl ; ) = x2 .x1
1
10, 29, 30, 33, 34
1 2
