Probabilistic Reasoning
1. Yomlata Abera……………………………………………….RT10001/13
2. Ayantu Bullo ……………………….………………..............RU/5728/12
3. Bedri Abdi ………………….………………………………..RU/4771/12
4. Aisha Roba.…………………………………………………..RU/2135/11
Table of Contents
Causes of Uncertainty........................................................................................................................... 6
Nodes ................................................................................................................................................. 8
Discrete .......................................................................................................................................... 8
Continuous ..................................................................................................................................... 8
Links .................................................................................................................................................. 9
Conclusion........................................................................................................................................... 10
References ........................................................................................................................................... 11
Table of Figures
Figure 1: Venn diagram for conditional Probability ............................................................................ 3
Introduction
Probabilistic reasoning is a way of knowledge representation in which we apply the concept of
probability to indicate uncertainty in knowledge. In this document we discuss the concept of
probabilistic reasoning, starting from its definition and moving on to methods for handling
uncertainty, namely Bayes' theorem and Bayesian networks. We also cover the causes of
uncertainty and the need for probabilistic reasoning.
Probabilistic Reasoning
In Artificial Intelligence and Cognitive Science, probabilistic approaches are critical for
reasoning and for making decisions, both simple and complex [1].
Since probabilistic reasoning uses probability and related terms, let's first understand some
common terms:
Probability: Probability can be defined as the chance that an uncertain event will occur. It is the
numerical measure of the likelihood that an event will occur. The value of a probability always
lies between 0 and 1, where 0 represents an impossible event and 1 a certain one.
P(S∨T) = P(S) + P(T) - P(S∧T), where P(S∨T) is the probability that either S or T happens and
P(S∧T) is the probability that both S and T happen [3].
We can find the probability of an uncertain event by using the below formula.
P (¬A) + P (A) = 1
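The two identities above can be checked numerically. The following sketch is our own illustration (not from the cited sources), using a fair six-sided die as the sample space:

```python
# Checking the complement rule and the inclusion-exclusion formula
# on a fair six-sided die (illustrative example).
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}

def p(event):
    """Probability of an event, given as a subset of the sample space."""
    return Fraction(len(event & sample_space), len(sample_space))

S = {2, 4, 6}  # "the roll is even"
T = {4, 5, 6}  # "the roll is at least 4"

# Inclusion-exclusion: P(S or T) = P(S) + P(T) - P(S and T)
assert p(S | T) == p(S) + p(T) - p(S & T)

# Complement rule: P(A) + P(not A) = 1
A = S
assert p(A) + p(sample_space - A) == 1
```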
Sample space: The collection of all possible events is called sample space.
Random variables: Random variables are used to represent the events and objects in the real
world.
Prior probability: The prior probability of an event is the probability computed before observing
new information.
Posterior probability: The probability that is calculated after all evidence or information has been
taken into account. It is a combination of the prior probability and the new information.
Conditional probability: the probability that an event occurs given that another event has already
happened [2].
It can be explained using the Venn diagram below: B is the event that has occurred, so the sample
space is reduced to set B, and the probability of event A given that B has already occurred is
obtained by dividing the probability P(A⋀B) by P(B):
P(A|B) = P(A⋀B) / P(B)
Figure 1: Venn diagram for conditional Probability
Example:
In a class, 70% of the students like English and 40% of the students like both English and
Mathematics. What percent of the students who like English also like Mathematics?
Solution:
P(Mathematics | English) = P(English ⋀ Mathematics) / P(English) = 0.4 / 0.7 ≈ 0.57
Hence, about 57% of the students who like English also like Mathematics.
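The same conditional-probability calculation can be sketched in code, using the percentages from the example above:

```python
# Conditional probability: P(Math | English) = P(English and Math) / P(English)
p_english = 0.7           # 70% of students like English
p_english_and_math = 0.4  # 40% like both English and Mathematics

p_math_given_english = p_english_and_math / p_english
print(f"{p_math_given_english:.0%}")  # 57%
```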
The most popular knowledge representations that we use in our day-to-day activity are first-order
logic and propositional logic with certainty, which means we are sure about the predicates. With
this knowledge representation we might write A→B, meaning that if A is true then B is true. But
consider a situation where we are not sure whether A is true or not; then we cannot express this
statement. This situation is called uncertainty [2].
So to represent uncertain knowledge, where we are not sure about the predicates, we need uncertain
reasoning or probabilistic reasoning.
We use probability in probabilistic reasoning because it provides a way to handle the uncertainty.
Probabilistic modelling provides a framework for understanding what learning is: the
probabilistic framework specifies how to represent and manipulate uncertainty about models and
predictions. Predictions play a significant role in scientific data analysis, and machine learning,
automation, cognitive computing, and artificial intelligence all rely heavily on them [3].
While probabilistic reasoning models the likelihood (or degree of uncertainty) of particular
relations between concepts, or of concept membership; fuzzy reasoning caters for degrees of truth.
The true–false dichotomy of classical reasoning is replaced by the ability to specify that a concept
is true to a certain degree. For example, we may use probabilistic geospatial reasoning to express
how certain we are that the statement “Orewa is in Auckland” is true (because perhaps our data
has come from an unreliable source), and we may use fuzzy geospatial reasoning to express that
Orewa is somewhere around the border of Auckland, and by some definitions may be considered
in Auckland, and by others out of Auckland (it is considered part of greater Auckland) [4].
All statistical reasoning is probabilistic, but not all probabilistic reasoning is statistical. In many
contexts people routinely make probabilistic judgments about events that are unique, singular, or
one of a kind, and for which no relevant statistics exist. In short, there is a necessity for non-
enumerative conceptions of probability. For example, we cannot play the world over again 1000
times to tabulate the number of occasions on which the defendant committed the crime or a witness
reported an event that actually occurred.
History and Proposal of Probabilistic Reasoning
The term "probabilistic logic" was first used in a paper by Nils Nilsson published in 1986, where
the truth values of sentences are probabilities. The proposed semantical generalization induces a
probabilistic logical entailment, which reduces to ordinary logical entailment when the
probabilities of all sentences are either 0 or 1. This generalization applies to any logical system for
which the consistency of a finite set of sentences can be established.
The central concept in the theory of subjective logic is opinions about some of the propositional
variables involved in the given logical sentences. A binomial opinion applies to a single
proposition and is represented as a 3-dimensional extension of a single probability value to express
probabilistic and epistemic uncertainty about the truth of the proposition. For the computation of
derived opinions based on a structure of argument opinions, the theory proposes operators for the
various logical connectives, such as multiplication (AND), comultiplication (OR), division
(UN-AND) and co-division (UN-OR) of opinions, as well as conditional deduction (MP),
abduction (MT), and Bayes' theorem.
The approximate reasoning formalism proposed by fuzzy logic can be used to obtain a logic in
which the models are the probability distributions and the theories are the lower envelopes. In such
a logic the question of the consistency of the available information is strictly related to that of the
coherence of partial probabilistic assignments, and therefore to Dutch book phenomena.
Markov logic networks implement a form of uncertain inference based on the maximum entropy
principle—the idea that probabilities should be assigned in such a way as to maximize entropy, in
analogy with the way that Markov chains assign probabilities to finite state machine transitions.
Systems such as Pei Wang's Non-Axiomatic Reasoning System (NARS) or Ben Goertzel's
Probabilistic Logic Networks (PLN) add an explicit confidence ranking, as well as a probability,
to atoms and sentences. The rules of deduction and induction incorporate this uncertainty, thus
side-stepping difficulties in purely Bayesian approaches to logic (including Markov logic), while
also avoiding the paradoxes of Dempster–Shafer theory. The implementation of PLN attempts to
use and generalize algorithms from logic programming, subject to these extensions.
In the field of probabilistic argumentation, various formal frameworks have been put forward. The
framework of "probabilistic labeling", for example, refers to probability spaces where a sample
space is a set of labeling of argumentation graphs. In the framework of "probabilistic
argumentation systems" probabilities are not directly attached to arguments or logical sentences.
Instead it is assumed that a particular subset W of the variables V involved in the sentences defines
a probability space over the corresponding sub-σ-algebra. This induces two distinct probability
measures with respect to V, which are called degree of support and degree of possibility,
respectively. Degrees of support can be regarded as non-additive probabilities of provability,
which generalizes the concepts of ordinary logical entailment (for V = { }) and classical posterior
probabilities (for V = W). Mathematically, this view is compatible with the Dempster–Shafer
theory.
The theory of evidential reasoning also defines non-additive probabilities of probability (or
epistemic probabilities) as a general notion for both logical entailment (provability) and
probability. The idea is to augment standard propositional logic by considering an epistemic
operator K that represents the state of knowledge that a rational agent has about the world.
Probabilities are then defined over the resulting epistemic universe Kp of all propositional
sentences p, and it is argued that this is the best information available to an analyst. From this
view, Dempster–Shafer theory appears to be a generalized form of probabilistic reasoning.
Causes of Uncertainty
Following are some leading causes of uncertainty in the real world:
Information obtained from unreliable sources.
Experimental errors.
Equipment faults.
Temperature variation and climate change.
Probabilistic reasoning is needed in the following situations:
When there are unpredictable outcomes.
When specifications or possibilities of predicates become too large to handle.
When an unknown error occurs during an experiment.
In probabilistic reasoning, there are two ways to solve problems with uncertain knowledge:
Bayes' theorem and Bayesian networks [2].
Bayes' Theorem
Bayes' theorem provides a formula to calculate the probability of an event given the probabilities
of other events: for events A and B with P(B) > 0,
P(A|B) = P(B|A) P(A) / P(B).
Bayes' theorem describes the probability of occurrence of an event related to a given condition,
so it is also considered a case of conditional probability. It is also known as the formula for the
probability of "causes". For example, suppose we have to calculate the probability that a blue ball
was taken from the second of three different bags of balls, where each bag contains balls of three
different colours, viz. red, blue, and black. In this case, the probability of the event is calculated
depending on other conditions, which is known as conditional probability.
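As a sketch of the bag example, the code below applies Bayes' theorem to compute the probability that a blue ball came from the second bag. The ball counts per bag are assumed for illustration, since the original example does not give them:

```python
# Bayes' theorem for the bag example:
#   P(Bag2 | Blue) = P(Blue | Bag2) * P(Bag2) / P(Blue)
# The share of blue balls in each bag is an assumed illustration.

p_bag = [1/3, 1/3, 1/3]              # each bag is picked with equal probability
p_blue_given_bag = [2/6, 3/6, 1/6]   # assumed fraction of blue balls per bag

# Total probability of drawing a blue ball (law of total probability)
p_blue = sum(pb * pbg for pb, pbg in zip(p_bag, p_blue_given_bag))

# Bayes' theorem for the second bag (index 1)
p_bag2_given_blue = p_blue_given_bag[1] * p_bag[1] / p_blue
print(round(p_bag2_given_blue, 3))  # 0.5
```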
Bayesian Network
A Bayesian network is a directed acyclic graph model which helps us represent probabilistic data.
A Bayesian network contains two basic components: a directed acyclic graph whose nodes are
random variables, and a conditional probability table attached to each node. When designing a
Bayesian network, we keep the local probability table at each node.
A Bayesian network (BN) is a probabilistic graphical model for representing knowledge about an
uncertain domain, where each node corresponds to a random variable and each edge represents the
conditional probability for the corresponding random variables.
Bayesian networks are probabilistic because they are built from probability distributions and also
use the laws of probability for prediction and anomaly detection, for reasoning and diagnostics,
for decision making under uncertainty, and for time series prediction [4].
Nodes
In many Bayesian networks, each node represents a variable such as someone's height, age, or
gender. A variable might be discrete, such as Gender = {Female, Male}, or might be continuous,
such as someone's age.
In Bayes Server each node can contain multiple variables. We call nodes with more than one
variable multi-variable nodes.
The nodes and links form the structure of the Bayesian network, and we call this the structural
specification.
Discrete
A discrete variable is one with a set of mutually exclusive states such as Gender = {Female, Male}.
Continuous
Bayes Server supports continuous variables with Conditional Linear Gaussian (CLG)
distributions. This simply means that continuous distributions can depend on each other (are
multivariate) and can also depend on one or more discrete variables.
Although Gaussians may seem restrictive at first, in fact CLG distributions can model complex
non-linear (even hierarchical) relationships in data. Bayes Server also supports Latent variables
which can model hidden relationships (automatic feature extraction, similar to hidden layers in a
deep neural network) [1].
Links
Links are added between nodes to indicate that one node directly influences the other. When a link
does not exist between two nodes, this does not mean that they are completely independent, as
they may be connected via other nodes. They may however become dependent or independent
depending on the evidence that is set on other nodes.
Although links in a Bayesian network are directed, information can flow both ways (according to
strict rules described later).
Structural learning
Bayes Server includes a structural learning algorithm for Bayesian networks, which can
automatically determine the required links from data.
Note that structural learning is often not required, as there are many well-known structures that
can solve many problems [2].
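The ideas above, nodes holding local probability tables and links expressing direct influence, can be sketched with a minimal two-node network. The structure (Rain → WetGrass) and all probabilities below are our own illustrative example, not taken from Bayes Server or the cited sources:

```python
# A minimal two-node Bayesian network: Rain -> WetGrass.
# Each node stores a local (conditional) probability table, and the
# joint distribution is the product of the local tables.

p_rain = {True: 0.2, False: 0.8}                     # P(Rain)
p_wet_given_rain = {True:  {True: 0.9, False: 0.1},  # P(WetGrass | Rain=True)
                    False: {True: 0.2, False: 0.8}}  # P(WetGrass | Rain=False)

def joint(rain, wet):
    """P(Rain = rain, WetGrass = wet) as a product of the local tables."""
    return p_rain[rain] * p_wet_given_rain[rain][wet]

# Inference by enumeration: information flows against the link direction
# when we compute P(Rain = True | WetGrass = True).
p_wet = joint(True, True) + joint(False, True)
p_rain_given_wet = joint(True, True) / p_wet
print(round(p_rain_given_wet, 3))  # 0.529
```

Note how observing the child (WetGrass) updates our belief about the parent (Rain) from 0.2 to about 0.53, even though the link points the other way.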
Conclusion
Probabilistic reasoning uses probability and related terms. Probability can be defined as the chance
that an uncertain event will occur. Probabilistic reasoning is a way of knowledge representation in
which we apply the concept of probability to indicate uncertainty in knowledge. There are several
causes of uncertainty in the real world: information obtained from unreliable sources, experimental
errors, equipment faults, temperature variation, and climate change. The other point we have
discussed in this document is the need for probabilistic reasoning: it is needed when there are
unpredictable outcomes, when specifications or possibilities of predicates become too large to
handle, and when an unknown error occurs during an experiment. Methods for solving uncertainty
are divided into two: Bayes' theorem and Bayesian networks [4].
References
[1] D. N. Jeevanandam, "AI concepts for beginners: The Importance of Probabilistic Reasoning in AI,"
INDIAai, [Online]. Available: https://indiaai.gov.in/article/the-importance-of-probabilistic-
reasoning-in-ai. [Accessed 22 May 2022].
[3] P. Mishra, "Probabilistic Reasoning & Artificial Intelligence," Study.com, [Online]. Available:
https://study.com/academy/lesson/probabilistic-reasoning-artificial-intelligence.html. [Accessed 22
May 2022].