Class: Modelling Bayesian Networks and Probabilistic Inference
Answer the questions in the videos. In your notebook, explain in your own words the following
concepts and how/where to use them:
Bayesian Network
They are graphical models for reasoning under uncertainty; in other words, they are graphical
representations of a joint probability distribution, where the nodes represent random variables
and the arcs represent direct dependencies between them (usually causal relationships). The
network also encodes the quantitative strength of these connections, which is used for obtaining
and updating probabilistic beliefs.
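As a minimal sketch of this idea (not part of the course material), a small network can be
represented in plain Python by listing each node's parents; the Rain/Sprinkler/WetGrass names
below are hypothetical:

```python
# Minimal sketch of a Bayesian network structure: each node maps to its parents.
# The Rain/Sprinkler/WetGrass names are illustrative, not from the course.
parents = {
    "Rain":      [],                    # root node (no parents)
    "Sprinkler": ["Rain"],              # Rain -> Sprinkler
    "WetGrass":  ["Rain", "Sprinkler"], # Rain -> WetGrass, Sprinkler -> WetGrass
}

# The arcs of the DAG follow directly from the parent lists.
arcs = [(p, child) for child, ps in parents.items() for p in ps]
print(arcs)  # [('Rain', 'Sprinkler'), ('Rain', 'WetGrass'), ('Sprinkler', 'WetGrass')]
```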
Types of nodes
Nodes can be divided according to the type of variable they represent, continuous or discrete,
where the latter can be further divided into (see the sketch after this list):
Boolean nodes: Represent propositions and take only the values True or False.
Ordered values: Represent levels or categories for a variable (e.g., low, medium, high).
Integer values: Values taken from a discrete range (e.g., age from 1 to 120).
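A small illustrative sketch (the variable names and domains are hypothetical, not from the
course) of how these value types could be declared in Python:

```python
# Hypothetical discrete node domains illustrating the three value types above.
domains = {
    "Fever":    [True, False],               # Boolean node: a proposition
    "Severity": ["low", "medium", "high"],   # ordered values: levels/categories
    "Age":      list(range(1, 121)),         # integer values from a range (1 to 120)
}
```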
Moreover, nodes can also be classified according to their position in the Bayesian network
(see the sketch after this list):
Parent Node: Node at the beginning (tail) of an arc; it exerts a causal influence on the node
at the end of the arc.
Child Node: Node at the end (head) of an arc; it is the causal effect of its parent node(s).
Ancestor Node: Node above a given node, from which that node can be reached by following a
path of one or more arcs.
Descendant Node: Node below a given node, which can be reached from that node by following a
path of one or more arcs.
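As a minimal sketch (reusing the hypothetical Rain/Sprinkler/WetGrass structure from above),
ancestors and descendants can be found by following arcs backwards or forwards through the DAG:

```python
# Sketch: finding ancestors and descendants by following arcs in the DAG.
parents = {
    "Rain":      [],
    "Sprinkler": ["Rain"],
    "WetGrass":  ["Rain", "Sprinkler"],
}
# Invert the parent lists to get each node's children.
children = {n: [c for c, ps in parents.items() if n in ps] for n in parents}

def ancestors(node):
    """All nodes reachable by repeatedly following arcs backwards (child -> parent)."""
    result, stack = set(), list(parents[node])
    while stack:
        p = stack.pop()
        if p not in result:
            result.add(p)
            stack.extend(parents[p])
    return result

def descendants(node):
    """All nodes reachable by repeatedly following arcs forwards (parent -> child)."""
    result, stack = set(), list(children[node])
    while stack:
        c = stack.pop()
        if c not in result:
            result.add(c)
            stack.extend(children[c])
    return result

print(ancestors("WetGrass"))  # {'Rain', 'Sprinkler'}
print(descendants("Rain"))    # {'Sprinkler', 'WetGrass'}
```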
Furthermore, when updating our beliefs, the nodes can also be divided according to the role
they take in this process: evidence nodes, whose values are observed (instantiated), and query
nodes, whose posterior probabilities we want to compute.
Network Complexity
The complexity of a network is inversely related to its compactness. Networks with many nodes
and arcs are more complex, since they depend on a larger number of parameters (and
relationships) to specify the probabilities of the nodes; in particular, the size of a node's
conditional probability table grows with the number and cardinality of its parents, as shown
in the sketch below.
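A short sketch of this dependence on the number of parameters, assuming discrete nodes and
hypothetical cardinalities:

```python
# Sketch: the number of free parameters in a node's conditional probability table
# grows with the product of its parents' cardinalities (values are hypothetical).
from math import prod

def cpt_parameters(node_cardinality, parent_cardinalities):
    """Free parameters for one node: (k - 1) per distinct parent instantiation."""
    return (node_cardinality - 1) * prod(parent_cardinalities)

# A Boolean node with no parents needs 1 parameter,
# the same node with 5 Boolean parents needs (2 - 1) * 2**5 = 32.
print(cpt_parameters(2, []))               # 1
print(cpt_parameters(2, [2, 2, 2, 2, 2]))  # 32
```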
Probabilistic inference
Also known as belief updating or conditioning, it is performed via a flow of information
through the network. It is the process of obtaining the posterior probability distribution of
some query nodes given the observed values of some evidence nodes (e.g., P(A | B = b)).
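As a minimal sketch with hypothetical numbers, belief updating on a two-node network A -> B
reduces to Bayes' rule:

```python
# Sketch of belief updating on a two-node network A -> B (hypothetical numbers):
# prior P(A), conditional P(B | A); query the posterior P(A | B = true).
p_a = 0.3                                # P(A = true)
p_b_given_a = {True: 0.9, False: 0.2}    # P(B = true | A)

# Joint probabilities via the chain rule, then condition on the evidence B = true.
joint_true  = p_a * p_b_given_a[True]          # P(A = true,  B = true)
joint_false = (1 - p_a) * p_b_given_a[False]   # P(A = false, B = true)
posterior = joint_true / (joint_true + joint_false)
print(posterior)  # P(A = true | B = true) ~ 0.659
```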
Conditional tables
A conditional probability table (CPT) specifies the conditional probability distribution of a
discrete node given its parents. To build it, we first gather all possible combinations of the
values of the parent nodes (each combination is an instantiation); for each distinct
instantiation, we then specify the probability that the child takes each of its values (see
the sketch below).
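A small sketch of such a table in Python, with hypothetical probabilities for a WetGrass node
whose parents are Rain and Sprinkler:

```python
from itertools import product

# Sketch: a conditional probability table (CPT) as a dictionary keyed by the
# parent instantiation (Rain, Sprinkler). Numbers are illustrative, not from the course.
parent_domains = {"Rain": [True, False], "Sprinkler": [True, False]}

# P(WetGrass = true | Rain, Sprinkler) for every parent instantiation.
cpt_wet_grass = {
    (True,  True):  0.99,
    (True,  False): 0.80,
    (False, True):  0.90,
    (False, False): 0.05,
}

# Check that one probability is specified for each distinct instantiation.
for instantiation in product(*parent_domains.values()):
    assert instantiation in cpt_wet_grass, f"missing row for {instantiation}"
```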
Types of reasoning
Diagnostic: Reasoning from symptoms to cause. Having a child node as the evidence
node, one can obtain the probabilities for its ancestors (queries).
Predictive: Reasoning from cause to symptoms. Having a parent node as the evidence
node, one can obtain the probabilities for its descendants (queries).
Intercausal: Reasoning between causes of a common symptom. Also called explaining away, it
arises in a v-structure: knowing that the symptom is present and that one cause is present
lowers the probability of the other cause (see the sketch after this list).
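The sketch below illustrates diagnostic and intercausal reasoning by brute-force enumeration
of the joint distribution on a hypothetical Burglary/Earthquake -> Alarm v-structure; the
numbers are illustrative only:

```python
from itertools import product

# Hypothetical v-structure: Burglary -> Alarm <- Earthquake (illustrative numbers).
p_burglary = 0.01
p_earthquake = 0.02
p_alarm = {  # P(Alarm = true | Burglary, Earthquake)
    (True,  True):  0.95,
    (True,  False): 0.94,
    (False, True):  0.29,
    (False, False): 0.001,
}

def joint(b, e, a):
    """Full joint probability P(B = b, E = e, A = a) via the chain rule."""
    pb = p_burglary if b else 1 - p_burglary
    pe = p_earthquake if e else 1 - p_earthquake
    pa = p_alarm[(b, e)] if a else 1 - p_alarm[(b, e)]
    return pb * pe * pa

def posterior_burglary(evidence):
    """P(Burglary = true | evidence) by enumerating the full joint distribution."""
    num = den = 0.0
    for b, e, a in product([True, False], repeat=3):
        world = {"Burglary": b, "Earthquake": e, "Alarm": a}
        if all(world[k] == v for k, v in evidence.items()):
            p = joint(b, e, a)
            den += p
            if b:
                num += p
    return num / den

# Diagnostic reasoning: the alarm (symptom) raises belief in burglary (cause) ...
print(posterior_burglary({"Alarm": True}))                      # ~0.58
# ... but also learning an earthquake occurred "explains away" the burglary.
print(posterior_burglary({"Alarm": True, "Earthquake": True}))  # ~0.03
```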
Udacity Answers: