Lecture 7 - Probabilistic Reasoning (Updated)
Probabilistic Reasoning
1. Quantifying Uncertainty
2. Bayesian Networks
3. Probabilistic Reasoning Over Time
CCIS@UTAS CSDS3203 2
Acting Under Uncertainty
CCIS@UTAS CSDS3203 3
Acting Under Uncertainty, Cont’d
CCIS@UTAS CSDS3203 4
Acting Under Uncertainty, Cont’d
• Probability theory
◦ A tool for dealing with degrees of belief in relevant sentences.
◦ Summarizes the uncertainty that comes from our laziness and ignorance.
• Uncertainty and rational decisions
◦ An agent requires preferences among the different possible outcomes of various plans.
◦ Utility theory: captures how useful an outcome is.
− Every state has a degree of usefulness (utility).
− Higher utility is preferred.
◦ Decision theory: preferences (utility theory) combined with probabilities.
− Decision theory = probability theory + utility theory.
− An agent is rational if and only if it chooses the action that yields the highest expected utility, averaged over all the possible outcomes of the action.
− This is the principle of maximum expected utility (MEU).
CCIS@UTAS CSDS3203 5
Basic Probability Notation
• For our agent to represent and use probabilistic information, we need a formal
language.
• Sample space: the set of all possible worlds.
◦ The possible worlds are mutually exclusive and exhaustive.
• A fully specified probability model associates a numerical probability P(ω) with each possible world.
• The basic axioms of probability theory say that every possible world has a probability between 0 and 1 and that the total probability of the set of possible worlds is 1:
0 ≤ P(ω) ≤ 1 for every ω,  and  ∑_{ω ∈ Ω} P(ω) = 1
CCIS@UTAS CSDS3203 6
Basic Probability Notation, Cont’d
• Conditional or posterior probability: given evidence that an event has happened, the degree of belief in a new event.
◦ Defined in terms of unconditional probabilities.
• Probability of a given b:
P(a | b) = P(a ∧ b) / P(b)
• Can also be written as (the product rule):
P(a ∧ b) = P(a | b) P(b)
• Example: rolling two fair dice, what is the probability of doubles given that the first die shows 5?
P(doubles | Die1 = 5) = P(doubles ∧ Die1 = 5) / P(Die1 = 5)
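A quick way to check the value of this example is to enumerate the 36 equally likely outcomes of two dice. The short Python sketch below is our own illustration, not part of the slides:

    from itertools import product

    outcomes = list(product(range(1, 7), repeat=2))   # the 36 equally likely worlds
    p_joint = sum(1 for d1, d2 in outcomes if d1 == 5 and d1 == d2) / len(outcomes)  # P(doubles ∧ Die1=5)
    p_die1_5 = sum(1 for d1, _ in outcomes if d1 == 5) / len(outcomes)               # P(Die1=5)
    print(p_joint / p_die1_5)   # 0.1666... = 1/6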
CCIS@UTAS CSDS3203 7
Random Variables and Probability Distribution
CCIS@UTAS CSDS3203 8
Inference Using Full Joint Distribution, Cont’d
• Start with the joint distribution
              toothache               ¬toothache
           catch      ¬catch       catch      ¬catch
cavity     0.108      0.012        0.072      0.008
¬cavity    0.016      0.064        0.144      0.576
CCIS@UTAS CSDS3203 9
Inference Using Full Joint Distribution, Cont’d
• Start with the joint distribution
              toothache               ¬toothache
           catch      ¬catch       catch      ¬catch
cavity     0.108      0.012        0.072      0.008
¬cavity    0.016      0.064        0.144      0.576
CCIS@UTAS CSDS3203 10
Normalization
              toothache               ¬toothache
           catch      ¬catch       catch      ¬catch
cavity     0.108      0.012        0.072      0.008
¬cavity    0.016      0.064        0.144      0.576
• We can also compute the distribution of a query variable given some evidence.
• What is the probability distribution of Cavity given toothache?
• General idea: we compute the distribution of a query variable (Cavity) by fixing the evidence variables (toothache) and summing over the hidden variables (Catch); see the sketch below.
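The following short Python sketch (ours; it simply encodes the joint table above) carries out exactly this computation, summing over the hidden variable Catch and then normalizing:

    # Full joint distribution P(Cavity, Toothache, Catch) from the table above
    joint = {
        # (cavity, toothache, catch): probability
        (True,  True,  True):  0.108, (True,  True,  False): 0.012,
        (True,  False, True):  0.072, (True,  False, False): 0.008,
        (False, True,  True):  0.016, (False, True,  False): 0.064,
        (False, False, True):  0.144, (False, False, False): 0.576,
    }

    # P(Cavity | toothache): fix the evidence, sum out the hidden variable Catch, normalize
    unnorm = {c: sum(p for (cav, ache, _), p in joint.items() if cav == c and ache)
              for c in (True, False)}
    alpha = 1.0 / sum(unnorm.values())
    print({c: alpha * p for c, p in unnorm.items()})   # ≈ {True: 0.6, False: 0.4}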
CCIS@UTAS CSDS3203 11
Normalization, Cont’d
              toothache               ¬toothache
           catch      ¬catch       catch      ¬catch
cavity     0.108      0.012        0.072      0.008
¬cavity    0.016      0.064        0.144      0.576
CCIS@UTAS CSDS3203 12
Normalization, Exercise
              toothache               ¬toothache
           catch      ¬catch       catch      ¬catch
cavity     0.108      0.012        0.072      0.008
¬cavity    0.016      0.064        0.144      0.576
CCIS@UTAS CSDS3203 13
Inference Using Full Joint Distribution
• Let X be all the variables. Typically we want:
◦ The joint distribution of the query variables Y
◦ Given specific values e for the evidence variables E
• Let the hidden variables be H = X − Y − E
• The required summation of joint entries is done by summing out the hidden variables (a generic sketch follows below):
◦ P(Y | E = e) = α P(Y, E = e) = α ∑_h P(Y, E = e, H = h)
• The terms in the summation are joint entries because Y, E, and H together exhaust the set of random variables.
• Obvious problems:
◦ The worst-case time complexity is O(d^n), where d is the largest arity (e.g., 2 in the case of Boolean variables) and n is the number of variables.
◦ Space complexity is O(d^n) to store the joint distribution.
◦ How do we obtain the probabilities for the O(d^n) entries in the first place?
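As a concrete (and deliberately naive) illustration of this recipe, the following Python sketch is our own generic helper, not from the slides; it enumerates all O(d^n) entries of a full joint table:

    from itertools import product

    def joint_ask(query, evidence, variables, joint):
        """P(query | evidence) by summing out the hidden variables of a full joint table.

        variables: dict {name: list of values}; joint: dict mapping a tuple of values
        (in the same variable order) to its probability. This is the O(d^n) enumeration
        the slide warns about -- fine only for tiny examples.
        """
        names = list(variables)
        dist = {}
        for qval in variables[query]:
            total = 0.0
            for combo in product(*variables.values()):
                world = dict(zip(names, combo))
                if world[query] == qval and all(world[k] == v for k, v in evidence.items()):
                    total += joint[combo]
            dist[qval] = total
        alpha = 1.0 / sum(dist.values())
        return {v: alpha * p for v, p in dist.items()}

    # Example: the toothache table, with tuples ordered (Cavity, Toothache, Catch)
    variables = {"Cavity": [True, False], "Toothache": [True, False], "Catch": [True, False]}
    joint = {(True, True, True): 0.108, (True, True, False): 0.012,
             (True, False, True): 0.072, (True, False, False): 0.008,
             (False, True, True): 0.016, (False, True, False): 0.064,
             (False, False, True): 0.144, (False, False, False): 0.576}
    print(joint_ask("Cavity", {"Toothache": True}, variables, joint))  # ≈ {True: 0.6, False: 0.4}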
CCIS@UTAS CSDS3203 14
Independence
• Independence is the knowledge that the occurrence of one event does not affect the probability of another event (a small numeric illustration follows below). For example,
◦ One's dental problems have nothing to do with the weather.
◦ Coin flips are independent.
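For instance, independence lets a joint probability factor into a product of marginals. The numbers below are a tiny sketch of ours: P(toothache) = 0.2 comes from the joint table above, while the weather marginal is an assumed value used only for illustration:

    p_toothache = 0.2      # 0.108 + 0.012 + 0.016 + 0.064, from the joint table above
    p_cloudy = 0.5         # assumed marginal for the weather (illustration only)
    # If Toothache and Weather are independent, the joint is just the product:
    p_toothache_and_cloudy = p_toothache * p_cloudy   # 0.1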
CCIS@UTAS CSDS3203 15
Bayes’ Rule and Its Use
• Bayes' rule is derived from the product rule:
• P(a, b) = P(a | b) P(b) and P(a, b) = P(b | a) P(a)
• Equating the two right-hand sides and dividing by P(b), we get:
• P(a | b) = P(b | a) P(a) / P(b)
• Often, we perceive as evidence the effect of some unknown cause and we would like to determine that cause. In that case Bayes' rule becomes:
• P(cause | effect) = P(effect | cause) P(cause) / P(effect)
• The conditional probability 𝑃(𝑒𝑓𝑓𝑒𝑐𝑡|𝑐𝑎𝑢𝑠𝑒) quantifies the relationship in the causal
direction, whereas 𝑃(𝑐𝑎𝑢𝑠𝑒|𝑒𝑓𝑓𝑒𝑐𝑡) describes the diagnostic direction.
CCIS@UTAS CSDS3203 16
Bayes’ Rule Example
• A doctor knows that the disease meningitis causes a patient to have a stiff neck, say,
70% of the time. The doctor also knows some unconditional facts: the prior probability
that any patient has meningitis is 1/50,000, and the prior probability that any patient
has a stiff neck is 1%. Letting 𝑠 be the proposition that the patient has a stiff neck and
𝑚 be the proposition that the patient has meningitis, what is the probability of
meningitis given that someone has a stiff neck?
CCIS@UTAS CSDS3203 17
Bayes’ Rule Solution
• A doctor knows that the disease meningitis causes a patient to have a stiff neck, say,
70% of the time. The doctor also knows some unconditional facts: the prior probability
that any patient has meningitis is 1/50,000, and the prior probability that any patient
has a stiff neck is 1%. Letting 𝑠 be the proposition that the patient has a stiff neck and
𝑚 be the proposition that the patient has meningitis, what is the probability of
meningitis given that someone has a stiff neck?
• P(s | m) = 0.7
• P(m) = 1/50,000
• P(s) = 0.01
• P(m | s) = P(s | m) P(m) / P(s) = (0.7 × 1/50,000) / 0.01 = 0.0014
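A one-line Python check of this arithmetic (our own verification, not part of the slide):

    p_s_given_m, p_m, p_s = 0.7, 1 / 50_000, 0.01
    print(p_s_given_m * p_m / p_s)   # 0.0014 -- probability of meningitis given a stiff neck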
CCIS@UTAS CSDS3203 18
Outline
1. Quantifying Uncertainty
2. Bayesian Networks
3. Probabilistic Reasoning Over Time
CCIS@UTAS CSDS3203 19
What is a Bayesian Network?
CCIS@UTAS CSDS3203 20
Example
• Dr. Khamis Juma Al Sabti is out of the country for a conference. Before
leaving he installed a new alarm system in his house. The alarm can go off
for one of two causes, either the house is burgled or there is an
earthquake. If the alarm goes off, one of Dr. Khamis's siblings (Moza
and Salim) will call. Note that Moza and Salim will not confer with each
other before calling.
CCIS@UTAS CSDS3203 21
Bayesian Network: Alarm Example
[Network diagram: Burglary and Earthquake are the parents of Alarm]
CCIS@UTAS CSDS3203 22
Exercise
CCIS@UTAS CSDS3203 23
Conditional Probability Tables (CPTs)
CCIS@UTAS CSDS3203 24
CPTs: Alarm Example
Burglary: P(B)
    true 0.001, false 0.999
Earthquake: P(E)
    true 0.002, false 0.998
Alarm: P(A | B, E)
    B   E     true    false
    t   t     0.70    0.30
    t   f     0.01    0.99
    f   t     0.70    0.30
    f   f     0.01    0.99
Salim Calls: P(S | A)
    A     true    false
    t     0.90    0.10
    f     0.05    0.95
Moza Calls: P(M | A)
    A     true    false
    t     0.70    0.30
    f     0.01    0.99
CCIS@UTAS CSDS3203 25
Exercise
• Define probability tables for the Bayesian network we have created in the
Exercise in slide 21. You can give any plausible probability values.
CCIS@UTAS CSDS3203 26
The Semantics of Bayesian Networks
P(X_1, X_2, …, X_n) = ∏_{i=1}^{n} P(X_i | Parents(X_i))
• For example, we calculate the probability of B = true, E = true, A = false, M = false, S = false:
P(b, e, ¬a, ¬m, ¬s)
= P(¬m | ¬a) P(¬s | ¬a) P(¬a | b, e) P(b) P(e)
= 0.99 × 0.95 × 0.30 × 0.001 × 0.002
= 5.64 × 10^−7
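The same chain-rule computation can be written directly from the CPTs on slide 25. The Python sketch below is our own illustration (the data structures and names are ours, not part of the slides):

    # CPTs from the alarm example on slide 25 (probability that each variable is TRUE)
    P_B, P_E = 0.001, 0.002
    P_A = {(True, True): 0.70, (True, False): 0.01,
           (False, True): 0.70, (False, False): 0.01}   # P(A = true | B, E)
    P_S = {True: 0.90, False: 0.05}                      # P(S = true | A)
    P_M = {True: 0.70, False: 0.01}                      # P(M = true | A)

    def pr(p_true, value):
        """P(X = value), given the probability that X is true."""
        return p_true if value else 1.0 - p_true

    def joint(b, e, a, s, m):
        """P(B=b, E=e, A=a, S=s, M=m): the product of each node given its parents."""
        return (pr(P_B, b) * pr(P_E, e) * pr(P_A[(b, e)], a)
                * pr(P_S[a], s) * pr(P_M[a], m))

    print(joint(True, True, False, False, False))   # ≈ 5.64e-07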
CCIS@UTAS CSDS3203 27
Probabilistic Inference Using Bayesian Networks
CCIS@UTAS CSDS3203 28
Probabilistic Inference Using Bayesian Networks
CCIS@UTAS CSDS3203 29
Inference through Enumeration in Bayesian Networks
• This approach involves calculating the probability for a specific variable or a group of
variables within a Bayesian network.
• The process entails listing all possible value combinations for the network's variables and computing the exact probability for each combination.
• To compute the probability distribution of a query variable Y given the evidence E, accounting for the hidden variables H, the following formula is applied (a code sketch follows below):
P(Y | E) = α ∑_{h ∈ H} P(Y, E, h)
• where α is a normalizing constant ensuring that the probabilities sum to 1.
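As an illustration, the self-contained Python sketch below (our own code, reusing the CPT values from slide 25) computes P(Earthquake | M = true) for the alarm network by brute-force enumeration over the hidden variables B, A, and S:

    from itertools import product

    # CPTs from slide 25: probability that each variable is TRUE given its parents
    P_B, P_E = 0.001, 0.002
    P_A = {(True, True): 0.70, (True, False): 0.01,
           (False, True): 0.70, (False, False): 0.01}
    P_S = {True: 0.90, False: 0.05}
    P_M = {True: 0.70, False: 0.01}

    def pr(p_true, value):               # P(X = value) from P(X = true)
        return p_true if value else 1.0 - p_true

    def joint(b, e, a, s, m):            # chain rule over the network
        return (pr(P_B, b) * pr(P_E, e) * pr(P_A[(b, e)], a)
                * pr(P_S[a], s) * pr(P_M[a], m))

    # Fix the evidence M = true, sum out the hidden variables B, A, S, then normalize
    unnorm = {e: sum(joint(b, e, a, s, True) for b, a, s in product((True, False), repeat=3))
              for e in (True, False)}
    alpha = 1.0 / sum(unnorm.values())
    print({e: round(alpha * p, 4) for e, p in unnorm.items()})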
CCIS@UTAS CSDS3203 30
Exercise
CCIS@UTAS CSDS3203 31
Homework
• Given that both a burglary and an earthquake have occurred, what is the probability distribution that Moza calls?
Challenging:
• Given that both Salim and Moza call, calculate the probability distribution of Burglary.
CCIS@UTAS CSDS3203 32
Computational Complexity of Enumeration-Based Inference
• While enumeration is a valuable technique for calculating probabilities in Bayesian
networks, its computational demands increase with the network's complexity due to the
growing number of variable combinations.
• Complexity Considerations
◦ Single-Connection Networks:
− Time and space complexity is linear (𝑂(𝑛)), making enumeration tractable.
− Defined as networks where a single undirected path exists between any two nodes.
◦ Multiple-Connection Networks
− Complexity can skyrocket to exponential levels in the worst case (O(c^n)), where c is a constant and n is the number of nodes.
− These are networks with several undirected paths between nodes.
• Strategies for Complex Networks
◦ To manage complex networks more effectively, we must employ more efficient computational
strategies, such as sampling methods.
CCIS@UTAS CSDS3203 33
Sampling as an Approximation Technique in Bayesian Networks
CCIS@UTAS CSDS3203 34
Car Insurance Bayesian Network
Sampling methods become useful when dealing with large, complex networks like the car insurance network shown here.
[Figure: car insurance Bayesian network with nodes Age, SocioEcon, GoodStudent, ExtraCar, Mileage, RiskAversion, VehicleYear, SeniorTrain, DrivingSkill, MakeModel, Ruggedness, Accident, Theft, OwnDamage, Cushioning, OwnCost, OtherCost]
CCIS@UTAS CSDS3203 35
Example
[Figure: the alarm network (Burglary, Earthquake, Alarm, Salim Calls, Moza Calls) with all of its CPTs, as on slide 25, and an empty sample row B, E, A, S, M below it]
CCIS@UTAS CSDS3203 36
Example
Sample B from its CPT P(B): P(B = true) = 0.001, P(B = false) = 0.999
Sample so far:   B   E   A   S   M   (no values yet)
CCIS@UTAS CSDS3203 37
Example
Sample E from its CPT P(E): P(E = true) = 0.002, P(E = false) = 0.998
Sample so far:   B = false
CCIS@UTAS CSDS3203 38
Example
Sample A from its CPT P(A | B, E):
    B   E     true    false
    t   t     0.70    0.30
    t   f     0.01    0.99
    f   t     0.70    0.30
    f   f     0.01    0.99
Sample so far:   B = false, E = true
CCIS@UTAS CSDS3203 39
Example
Sample S from its CPT P(S | A):
    A     true    false
    t     0.90    0.10
    f     0.05    0.95
Sample so far:   B = false, E = true, A = true
CCIS@UTAS CSDS3203 40
Example
Sample M from its CPT P(M | A):
    A     true    false
    t     0.70    0.30
    f     0.01    0.99
Sample so far:   B = false, E = true, A = true, S = true
CCIS@UTAS CSDS3203 41
Example
Completed sample:   B = false, E = true, A = true, S = true, M = false
CCIS@UTAS CSDS3203 42
Generated Samples
The previous slides showed how we generate a single sample. The process is repeated many times to generate a large number of samples.
    B       E       A       S       M
    false   true    true    true    false
    false   true    true    true    true
    false   true    true    false   false
    true    true    true    true    false
    true    false   true    false   false
    false   false   false   true    true
    true    true    true    false   false
    false   false   true    true    false
    …
CCIS@UTAS CSDS3203 43
Using Samples to Estimate Probabilities
To use sampling to calculate the probability distribution of an earthquake given that Moza calls (i.e., P(E | M = true)), follow these steps (a code sketch follows below):
1. Collect a large number of samples using the method described in the previous slides, say 10,000 samples.
2. Filter Samples: Go through the 10,000 samples and keep only those where Moza calls (i.e., M = true).
3. Count Relevant Samples: Count the number of filtered samples where the earthquake occurred (i.e., E = true).
4. Calculate Probability: Calculate the conditional probability P(E = true | M = true) by dividing the count of relevant samples by the total number of samples where Moza calls. The formula for the probability is:
P(E = true | M = true) = (number of samples with E = true and M = true) / (total number of samples where M = true)
Repeat steps 3 and 4 with E = false.
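A compact end-to-end sketch of this procedure (prior sampling plus rejection of non-matching samples) is shown below. It is our own illustration, reusing the CPT values from slide 25; the function and variable names are ours:

    import random

    # CPTs from slide 25: probability that each variable is TRUE given its parents
    P_B, P_E = 0.001, 0.002
    P_A = {(True, True): 0.70, (True, False): 0.01,
           (False, True): 0.70, (False, False): 0.01}
    P_S = {True: 0.90, False: 0.05}
    P_M = {True: 0.70, False: 0.01}

    def prior_sample():
        """Sample every variable in topological order, each from its own CPT."""
        b = random.random() < P_B
        e = random.random() < P_E
        a = random.random() < P_A[(b, e)]
        s = random.random() < P_S[a]
        m = random.random() < P_M[a]
        return b, e, a, s, m

    def estimate_E_given_m(n=100_000):
        """Rejection-sampling estimate of P(E = true | M = true)."""
        kept = [smp for smp in (prior_sample() for _ in range(n)) if smp[4]]  # keep M = true
        if not kept:
            return float("nan")
        return sum(1 for b, e, a, s, m in kept if e) / len(kept)

    print(estimate_E_given_m())   # noisy estimate; it improves with more samples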
CCIS@UTAS CSDS3203 44
Outline
1. Quantifying Uncertainty
2. Bayesian Networks
3. Probabilistic Reasoning Over Time
CCIS@UTAS CSDS3203 45
Probabilistic Inference Over Time
• To integrate time, we introduce a time-indexed variable X for the event of interest, such that X_t is the event at the current time step, X_{t+1} is the event at the next step, and so on.
• For predicting future events, we will utilize Markov models that leverage the time-structured variable X.
CCIS@UTAS CSDS3203 46
Why Inference Over Time?
• Inference over time is crucial in many fields that require modeling sequential or time-dependent phenomena.
• Examples:
◦ Weather Forecasting: Predicting weather conditions helps in planning agricultural activities,
construction projects, and disaster preparedness.
◦ Healthcare Monitoring: Continuous monitoring of patient vitals can predict critical health events,
allowing for timely intervention.
◦ Text Prediction: In applications like email or messaging, predicting the next word or sentence helps
in faster and more efficient communication.
◦…
CCIS@UTAS CSDS3203 47
The Markov Assumption
• Markov Assumption Basics: The Markov assumption posits that the current state
depends only on a limited, fixed number of previous states, simplifying the prediction
process.
• Practical Necessity: Considering all past data (e.g., a year's weather) for predictions is
impractical due to computational limitations and diminishing relevance of older data.
• Application Example: In weather forecasting, using the Markov assumption allows the
consideration of only recent data (e.g., the previous few days) rather than the entire
historical record.
• Simplification and Efficiency: By applying the Markov assumption, predictions become more computationally feasible and manageable, although they might be less precise.
• Specific Model Use: Markov models often utilize data from the most recent event (e.g.,
using today’s weather to predict tomorrow's) to efficiently forecast future states.
CCIS@UTAS CSDS3203 48
Markov Chains
CCIS@UTAS CSDS3203 49
Transition Model: Example
                      Tomorrow (X_t+1)
                      sun      rain
Today (X_t)    sun    0.8      0.2
               rain   0.3      0.7
CCIS@UTAS CSDS3203 50
First Markov Chain
Given the transition model in the previous slide we can calculate the probability of a
sequence of events, such as observing two sunny days followed by four rainy days.
◦ Assume that the initial probability of both events (rainy or sunny) is 0.5.
P(X_0 = sun, X_1 = sun, X_2 = rain, X_3 = rain, X_4 = rain, X_5 = rain)
= 0.5 × 0.8 × 0.2 × 0.7 × 0.7 × 0.7 = 0.02744
CCIS@UTAS CSDS3203 51
Inference with Markov Chains
• Given some sequence X of length t, we can compute how probable the sequence is under a Markov chain model using the following formula (a code sketch follows below):
P(X) = P(x_1) ∏_{i=2}^{t} P(x_i | x_{i−1})
• A key property of a (first-order) Markov chain is that the probability of each x_i depends only on the value of x_{i−1}.
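As a small sketch of ours applying this formula to the sun/rain transition model above (with the 0.5/0.5 initial distribution assumed on the previous slide):

    # Transition model P(X_t+1 | X_t) from the example; 0.5/0.5 initial distribution
    transition = {"sun": {"sun": 0.8, "rain": 0.2},
                  "rain": {"sun": 0.3, "rain": 0.7}}
    initial = {"sun": 0.5, "rain": 0.5}

    def sequence_probability(states):
        """P(x_1) times the product of P(x_i | x_{i-1}) for a first-order Markov chain."""
        p = initial[states[0]]
        for prev, cur in zip(states, states[1:]):
            p *= transition[prev][cur]
        return p

    print(sequence_probability(["sun", "sun", "rain", "rain", "rain", "rain"]))  # 0.02744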
CCIS@UTAS CSDS3203 52
Hidden Markov Models
• In some applications, observations we make are influenced by hidden (to us) states.
• Here we need a model that allows us to reason about the outcomes of hidden states.
◦ We can observe the events generated by the states, but not the states themselves.
• Hidden Markov models (HMMs) are probabilistic models that involve underlying states
that are not directly observable, which influence observable events.
• Application:
◦ In speech recognition: spoken words (hidden states) are inferred from sound waves (observations).
◦ In Web search: user engagement with the results is a hidden state which is inferred from clickthrough logs (observations).
CCIS@UTAS CSDS3203 53
Using HMMs
• Consider having a camera outside your home that records people carrying umbrellas.
People carry umbrellas both on sunny and rainy days. However, some people never
carry umbrellas, regardless of the weather. In this scenario, observing someone with or
without an umbrella is the observable event, while the actual weather condition (sunny
or rainy) represents the hidden state.
[Figure: sensor model relating the hidden weather state (X_t) to the observation E_t (umbrella or no umbrella)]
CCIS@UTAS CSDS3203 54
Sensor Markov Assumption
• The sensor Markov assumption assumes that the observable evidence depends solely
on the corresponding hidden state.
• In our example, it's assumed that carrying an umbrella is dictated only by the weather
condition.
• Limitation: the assumption may not capture all relevant factors, as some individuals carry umbrellas irrespective of the weather, based on personal habits or preferences.
CCIS@UTAS CSDS3203 55
HMMs A Two Layers View
A hidden Markov model can be depicted as a two-layer Markov chain, where the top
layer, variable 𝑋, represents the hidden state, and the bottom layer, variable 𝐸,
represents the observable evidence.
[Diagram: hidden-state chain X_0 → X_1 → X_2 → X_3 → X_4, with each hidden state X_t emitting an observation E_t]
CCIS@UTAS CSDS3203 56
Inference on HMMs
Hidden Markov models facilitate several key tasks:
• Filtering: Computes the current state's probability distribution based on all observations up to and including the present, such as determining if it is raining today based on the umbrella observations so far (see the sketch after this list).
• Prediction: Estimates future state probabilities using past and present observations.
• Smoothing: Determines past state probabilities using data up to the present, like
predicting yesterday’s weather from today’s umbrella sightings.
• Most Likely Explanation: Identifies the most probable sequence of events based on
observed data.
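To make filtering concrete, here is a minimal forward-algorithm sketch of ours. The transition model is taken from the sun/rain example earlier; the sensor probabilities P(umbrella | weather) are assumed values for illustration only, since the slides do not give them:

    # Transition model from the sun/rain example; sensor model values are ASSUMED
    transition = {"sun": {"sun": 0.8, "rain": 0.2},
                  "rain": {"sun": 0.3, "rain": 0.7}}
    sensor = {"sun": 0.2, "rain": 0.9}    # assumed P(umbrella = true | weather)
    prior = {"sun": 0.5, "rain": 0.5}

    def forward(observations):
        """Filtering: P(X_t | e_1..e_t) after each umbrella observation (True/False)."""
        belief = dict(prior)
        for umbrella in observations:
            # Predict: push the current belief through the transition model
            predicted = {s: sum(belief[p] * transition[p][s] for p in belief) for s in belief}
            # Update: weight by the sensor model, then normalize
            weighted = {s: (sensor[s] if umbrella else 1 - sensor[s]) * predicted[s]
                        for s in predicted}
            alpha = 1.0 / sum(weighted.values())
            belief = {s: alpha * w for s, w in weighted.items()}
        return belief

    print(forward([True, True, False]))   # belief about the weather after three observations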
CCIS@UTAS CSDS3203 57
Recommended Reading
For more on the topics covered in this lecture please refer to the following
sources:
• Russell-Norvig Book (Russell & Norvig, 2020): Sections 12.1 – 12.5
(Quantifying Uncertainty), Sections 13.1 – 13.3 (Probabilistic
Reasoning), Sections 14.1 – 14.3 (Probabilistic Reasoning Over
Time).
CCIS@UTAS CSDS3203 58
References
• Russell, S. J., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
CCIS@UTAS CSDS3203 59