Extending & Automating Basic Probability Theory With Propositional Computability Logic
1 Introduction
Classical probability theory[2] is formulated using sets. Unfortunately, the
language of sets lacks expressiveness and is, in a sense, a low-level 'assembly
language' of probability theory. In this paper, we develop a 'high-level'
approach to classical probability theory based on propositional computability
logic[1] (CoL). Unlike other formalisms such as sets, classical logic, and linear
logic, computability logic is built on the notion of events/games, which is
central to probability theory. Therefore, CoL is a perfect place to begin the
study of automating probability theory.
To be specific, CoL is well-suited to describing complex (sequential/parallel)
experiments and events, and is more expressive than set operations. In
contrast, classical probability theory – based on ∩, ∪, etc. – is designed to
represent mainly simple/additive events – the events that occur under a
single experiment.
Naturally, we also need to talk about composite/multiplicative events – events
that occur under two different experiments. Developing probability along
this line requires a new, more powerful language. For example, consider the
following events E1, E2:
E1: toss a coin two times (events 1 and 2) and get H, T in that order.
E2: toss two dice (which we call 1, 2) and get at least one 5.
Suppose a formalism has the notions of △, ▽ (sequential-and/or) and ∧, ∨
(parallel-and/or). Then E1 would be written as H^d △ T^d. Similarly, E2 would
be written concisely as (5^1 ∨ 5^2). The formalism of classical probability theory
fails to represent the above events in a concise way.
Computability logic[1] provides a formal and consistent way to represent
a wide class of experiments and events. In particular, multiplicative
experiments (sequential and parallel experiments) as well as additive
experiments (choice-AND, choice-OR) can be represented in this formalism.
In this approach, every event is ultimately mapped to an event space – a set
of the form

{(a1, . . . , an), (b1, . . . , bn), . . .}

where each a, b is an atomic event. As we shall see, such an event can also be
represented as an event formula in set normal form of the form

(a1 ∧ . . . ∧ an) ⊔ (b1 ∧ . . . ∧ bn) ⊔ . . .

In other words, every event E is mapped to a set {E1, . . . , En} of mutually
exclusive points. This mapping makes it much easier to compute the probability
of E. That is, p(E) = p(E1) + . . . + p(En).
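This computation is easy to mechanize. The sketch below is a hypothetical representation (not from the text): a point is modeled as a tuple of atomic events, and `p_point` is an assumed helper giving the probability of a single point.

```python
from fractions import Fraction

def p_event(event_space, p_point):
    """p(E) = p(E1) + ... + p(En) over the mutually exclusive points of E."""
    return sum(p_point(pt) for pt in event_space)

# Example: the event space of 4^d ⊔ 5^d for a fair die d is {(4,), (5,)},
# and each point has probability 1/6.
E_space = {(4,), (5,)}
print(p_event(E_space, lambda pt: Fraction(1, 6)))  # 1/3
```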
As mentioned earlier, a(t) represents an atomic event. In addition,
a(t)∗ = {a(t)}
(¬A)∗ = U − A∗
The choice-OR event A ⊔ B represents the event in which exactly one of
event A and event B happens under a single experiment. For example, 4^d ⊔ 5^d
represents the event that we get either 4 or 5 when a die d is tossed. This
operation corresponds to the set union operation.
(A ⊔ B)∗ = A∗ ∪ B ∗
(A ⊓ B)∗ = A∗ ∩ B ∗
(A ∧ B)∗ = A∗ × B ∗
A × B = {(a1, . . . , am, b1, . . . , bn) | (a1, . . . , am) ∈ A and (b1, . . . , bn) ∈ B}
For example, let A = {(0, 1), (1, 2)} and B = {0, 1}. Then A × B =
{(0, 1, 0), (0, 1, 1), (1, 2, 0), (1, 2, 1)}.
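This product can be sketched directly. Note one representational assumption: every point, including an atomic one, is written as a tuple, so that concatenation flattens exactly as in the definition.

```python
def space_product(A, B):
    """A x B: concatenate every point of A with every point of B."""
    return {a + b for a in A for b in B}

A = {(0, 1), (1, 2)}
B = {(0,), (1,)}  # the atomic points 0 and 1, written as 1-tuples
result = space_product(A, B)
# result contains the four points (0,1,0), (0,1,1), (1,2,0), (1,2,1)
print(sorted(result))
```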
The parallel-OR event A ∨ B represents the event in which at least one of
event A and event B happens under two different experiments. For example,
(4^1 ⊔ 5^1) ∨ (4^2 ⊔ 5^2) represents the event that we get at least one 4 or one 5
when two dice are tossed.
(5) p(A ∧ B) = p(A)p(B ∥ A) = p(B)p(A ∥ B) % parallel-and
For example, suppose two coins are tossed. Now, p(H^1 ∧ H^2) = 1/4.
In addition, suppose a coin and a die are tossed simultaneously. Then
p(H ∧ 6) = p(H)p(6) = 1/12.
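Under the independence of the two experiments assumed here, rule (5) reduces to a plain product; a minimal sketch for the fair coin and fair die:

```python
from fractions import Fraction

# p(A ∧ B) = p(A) * p(B ∥ A); with independent experiments, p(B ∥ A) = p(B).
p_H = Fraction(1, 2)  # head on a fair coin
p_6 = Fraction(1, 6)  # six on a fair die
p_H_and_6 = p_H * p_6
print(p_H_and_6)  # 1/12
```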
% Below, p computes the probability of an event space rather than an
event formula.
3 Examples
Let us consider the following event E, where E = roll a die and get 4 or 5,
i.e., E = 4^d ⊔ 5^d. The probabilities of E and E ∨ E are p(E) = 1/3 and
p(E ∨ E) = 1 − (2/3)^2 = 5/9.
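A minimal sketch of the computation, assuming a fair die and treating E ∨ E as two independent rolls:

```python
from fractions import Fraction

p_E = Fraction(2, 6)            # E = 4^d ⊔ 5^d on a fair die
p_E_par_E = 1 - (1 - p_E) ** 2  # at least one of two independent rolls hits E
print(p_E, p_E_par_E)  # 1/3 5/9
```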
As another example, (H^1 ∧ H^2) ⊔ (H^1 ∧ T^2) ⊔ (T^1 ∧ H^2) represents the event
that at least one head comes up when two coins are tossed. Now, it is easy
to see that p((H^1 ∧ H^2) ⊔ (H^1 ∧ T^2) ⊔ (T^1 ∧ H^2)) = 3/4.
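This value can be checked by brute-force enumeration of the underlying event space, a sketch assuming two fair coins:

```python
from fractions import Fraction
from itertools import product

# All points of the two-coin experiment, each with probability 1/4.
space = list(product("HT", repeat=2))
# Points of (H^1 ∧ H^2) ⊔ (H^1 ∧ T^2) ⊔ (T^1 ∧ H^2): at least one head.
event = [pt for pt in space if "H" in pt]
print(Fraction(len(event), len(space)))  # 3/4
```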
As a last example, suppose two dice are tossed and we get 6 from one
die. What is the probability that we get 5 from the other die? This kind
of problem is very cumbersome to represent and solve in classical probability
theory. Fortunately, it can be represented and solved from the above formula
in a concise way. It is shown below:
% computing the following probability requires converting the event to
its event space.
% computing the following does not require converting the event to its
event space.
4 The Bayes Rule
Suppose an experiment involves k disjoint events A1, . . . , Ak. Understanding
∩ as ⊓, the Bayes rule can be written as

p(Ai | B) = p(Ai ⊓ B) / (p(A1 ⊓ B) + . . . + p(Ak ⊓ B))

or

p(Ai | B) = p(Ai ⊓ B) / (p(A1)p(B | A1) + . . . + p(Ak)p(B | Ak))

Similarly, for the parallel conditional ∥:

p(Ai ∥ B) = p(Ai ∧ B) / (p(A1 ∧ B) + . . . + p(Ak ∧ B))

or

p(Ai ∥ B) = p(Ai ∧ B) / (p(A1)p(B ∥ A1) + . . . + p(Ak)p(B ∥ Ak))
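Both versions share the same computational shape, normalizing one joint term over the sum of all alternatives. A sketch with hypothetical argument names, where the priors p(Aj) and likelihoods p(B | Aj) (or p(B ∥ Aj)) are assumed given:

```python
from fractions import Fraction

def bayes(i, priors, likelihoods):
    """p(Ai | B) or p(Ai ∥ B): joint[i] / (joint[0] + ... + joint[k-1]),
    where joint[j] = p(Aj) * p(B | Aj) (resp. p(Aj) * p(B ∥ Aj))."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    return joint[i] / sum(joint)

# Two equally likely alternatives; B is three times as likely under A1.
print(bayes(0, [Fraction(1, 2)] * 2, [Fraction(3, 4), Fraction(1, 4)]))  # 3/4
```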
That is, we need two versions of the Bayes rule, and it is crucial to apply
the correct version to get the correct answer. As a well-known example of
the Bayes rule, consider the problem of sending 0 or 1 over a noisy channel.
Let r(0) be the event that a 0 is received, t(0) the event that a 0 is
transmitted, r(1) the event that a 1 is received, and t(1) the event that a
1 is transmitted. Now the question is: what is the probability of 0 having
been transmitted, given that 0 is received? In solving this kind of problem,
it is more natural to use the Bayes rule on ∧ rather than on ⊓, because
transmission and reception are two different experiments, so
p(t(0) ∩ r(0)) = p(t(0) ⊓ r(0)) = 0.
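For concreteness, here is a sketch with assumed numbers (not given in the text): equally likely source bits and a channel that flips a bit with probability 1/10.

```python
from fractions import Fraction

# Assumed channel model: p(t(0)) = p(t(1)) = 1/2, flip probability 1/10.
p_t = [Fraction(1, 2), Fraction(1, 2)]     # priors p(t(0)), p(t(1))
p_r0 = [Fraction(9, 10), Fraction(1, 10)]  # p(r(0) ∥ t(0)), p(r(0) ∥ t(1))

# Bayes rule on ∧: p(t(0) ∥ r(0)) = joint[0] / (joint[0] + joint[1]).
joint = [p * l for p, l in zip(p_t, p_r0)]
print(joint[0] / sum(joint))  # 9/10
```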
5 Conclusion
Computability logic [1] provides a formal and consistent way to represent
a wide class of experiments and events. For this reason, we believe that
probability theory based on computability logic is an interesting alternative
to traditional probability theory and uncertainty reasoning.
References
[1] G. Japaridze, "Propositional computability logic II", ACM Transactions
on Computational Logic, vol. 7, no. 2, pp. 331–362, 2006.