Inference in Graphical Models


§ Inference in graphical models involves determining the probabilities of certain variables given observed data (evidence).


§ It is a core task used for prediction, decision-making, and learning in probabilistic
models.

Inference Tasks:
1. Marginal Inference:
Compute the probability of a subset of variables, marginalizing over others.
2. Conditional Inference:
Determine the probability of variables given certain evidence.
3. MAP Inference (Maximum A Posteriori):
Find the most probable assignment to the variables given evidence.

§ Parameters (e.g., θ) are often treated as global latent variables that influence the
entire dataset.
§ Maximum A Posteriori (MAP) Inference is used to find the most probable
parameters given the observed data.

θ̂_MAP = argmax_θ P(θ | X)


§ Using Bayes rule:

P(θ | X) ∝ P(X | θ) P(θ),   where P(θ) is the prior.

A conjugate prior is a prior distribution that, when combined with a likelihood function from a particular family, results in a posterior distribution that belongs to the same family as the prior.
§ Prior: We assume the probability of heads p follows a Beta distribution:

p ∼ Beta(α₀, β₀),    f(p | α₀, β₀) = p^(α₀−1) (1−p)^(β₀−1) / B(α₀, β₀)

where B(α₀, β₀) is the Beta function, a normalization constant.
§ Likelihood: The likelihood of observing H heads and T tails in N coin flips, assuming each coin flip comes from a Bernoulli distribution with parameter p:

P(data | p) = p^H (1 − p)^T

§ Posterior: The posterior also follows a Beta distribution:

P(p | data) = p^(H+α₀−1) (1 − p)^(T+β₀−1) / B(H + α₀, T + β₀)

§ Take the derivative of log P(p | data) with respect to p and solve for p:

p_MAP = argmax_p log P(p | data) = (H + α₀ − 1) / (H + T + α₀ + β₀ − 2)
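As an illustration, here is a minimal sketch of this MAP estimate in Python; the flip data and the Beta(2, 2) prior are made-up values, not from the slides.

```python
import numpy as np

def beta_bernoulli_map(heads, tails, alpha0, beta0):
    """MAP estimate of the heads probability under a Beta(alpha0, beta0) prior."""
    return (heads + alpha0 - 1) / (heads + tails + alpha0 + beta0 - 2)

# Hypothetical coin-flip data: 7 heads, 3 tails, with a Beta(2, 2) prior.
flips = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 0])
H, T = flips.sum(), len(flips) - flips.sum()
print(beta_bernoulli_map(H, T, alpha0=2.0, beta0=2.0))  # (7+2-1)/(10+2+2-2) = 0.666...
```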
• Belief Propagation (BP) is an algorithm used for performing inference on graphical
models, specifically in Bayesian networks and Markov networks.
• It calculates marginal probabilities of nodes by passing messages between them.
• BP returns the exact marginal probabilities for any tree-structured graph (without undirected loops).
• Loopy BP: if the graph has cycles (ignoring the edge directions), BP results are approximate and may not converge or yield the exact marginals.

[Figure: a chain X1 — X2 — X3 — ··· — Xi — ··· — XN-2 — XN-1 — XN.]

Pr(x₁) = (1/Z) Σ_{x₂} ··· Σ_{x_N} ψ_{1,2}(x₁, x₂) ψ_{2,3}(x₂, x₃) ··· ψ_{N−1,N}(x_{N−1}, x_N)

       = (1/Z) Σ_{x₂} ψ_{1,2}(x₁, x₂) Σ_{x₃} ψ_{2,3}(x₂, x₃) ··· Σ_{x_{N−1}} ψ_{N−2,N−1}(x_{N−2}, x_{N−1}) Σ_{x_N} ψ_{N−1,N}(x_{N−1}, x_N)

The innermost sum defines the message μ_{x_N→x_{N−1}}(x_{N−1}):

       = (1/Z) Σ_{x₂} ψ_{1,2}(x₁, x₂) Σ_{x₃} ψ_{2,3}(x₂, x₃) ··· Σ_{x_{N−1}} ψ_{N−2,N−1}(x_{N−2}, x_{N−1}) μ_{x_N→x_{N−1}}(x_{N−1})

Eliminating one variable at a time, each step producing the next message (e.g., μ_{x_{N−1}→x_{N−2}}(x_{N−2})):

       ⋮

       = (1/Z) Σ_{x₂} ψ_{1,2}(x₁, x₂) μ_{x₃→x₂}(x₂)

       = (1/Z) μ_{x₂→x₁}(x₁)
[Figure: the same chain, with forward messages μ_{x₁→x₂}(x₂), μ_{x₂→x₃}(x₃), …, μ_{x_{N−1}→x_N}(x_N) and backward messages μ_{x_N→x_{N−1}}(x_{N−1}), …, μ_{x₂→x₁}(x₁); node Xi receives μ_{x_{i−1}→x_i}(x_i) and μ_{x_{i+1}→x_i}(x_i).]

Pr(x_i) = (1/Z) μ_{x_{i−1}→x_i}(x_i) μ_{x_{i+1}→x_i}(x_i)
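A minimal sketch of this forward/backward message passing for a chain MRF with pairwise potentials only; the potential table and chain length are made up for illustration.

```python
import numpy as np

def chain_marginal(psis, i):
    """Marginal of node i (0-indexed) in a chain with pairwise potentials.

    psis[k] is a (K, K) table for the potential between nodes k and k+1.
    """
    N = len(psis) + 1
    K = psis[0].shape[0]
    # Backward messages: mu_bwd[k] = message from node k+1 into node k.
    mu_bwd = [None] * N
    msg = np.ones(K)
    for k in range(N - 2, -1, -1):
        msg = psis[k] @ msg          # sum over x_{k+1}
        mu_bwd[k] = msg
    # Forward messages: mu_fwd[k] = message from node k-1 into node k.
    mu_fwd = [None] * N
    msg = np.ones(K)
    for k in range(1, N):
        msg = psis[k - 1].T @ msg    # sum over x_{k-1}
        mu_fwd[k] = msg
    left = mu_fwd[i] if i > 0 else np.ones(K)
    right = mu_bwd[i] if i < N - 1 else np.ones(K)
    p = left * right
    return p / p.sum()               # normalize by Z

# Toy chain of 4 binary nodes with the same attractive potential on every edge.
psi = np.array([[2.0, 1.0], [1.0, 2.0]])
print(chain_marginal([psi, psi, psi], i=1))
```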
In general, with two passes over the tree we can answer any marginal distribution: one pass in each direction, caching all the messages.

[Figure: a tree with root X1, children X2 and X3, and leaves X4 and X5 below X3, shown three times: the tree, the upward pass, and the downward pass.]

Pr(x₃) = (1/Z) μ_{x₄→x₃}(x₃) μ_{x₅→x₃}(x₃) μ_{x₁→x₃}(x₃)
Every loop of 4 or more nodes has a chord!

[Figure: an undirected graph over X1–X9 with edge potentials such as ψ_{2,3}(x₂, x₃) and ψ_{2,5}(x₂, x₅).]

A Chordal Graph (also known as a Triangulated Graph) is an undirected graph in which all cycles of four or more vertices have a chord.

A chord is an edge that connects two non-adjacent vertices in a cycle, effectively breaking the cycle into smaller ones.
[Figure: the chordal graph over X1–X9 and its junction tree. The clique nodes are S = {X2, X3, X5}, O = {X3, X5, X7}, T = {X3, X6, X7}, R = {X6, X7, X8}, with X1, X4, and X9 attached; the assigned edge potentials are ψ_{2,3}, ψ_{2,5}, ψ_{5,7}, ψ_{3,6}, ψ_{6,7}, ψ_{6,8}, ψ_{7,8}.]

A junction tree is a tree where every node represents a clique in the graph.
• Running intersection property: if variable X appears in cliques S and T, it must appear in every node along the path between them. For example, X3 must appear in O.
• Each potential function must be assigned to exactly one clique in the tree.
• We can run a similar belief propagation on the junction tree!
[Figure: the same junction tree S — O — T — R, with X1 attached to S through the potential ψ₁(X₁, X₂) and the messages passed between the cliques.]

m_{R→T}(X₆, X₇) = Σ_{X₈} ψ_R(X₆, X₇, X₈)

m_{T→O}(X₃, X₇) = Σ_{X₆} ψ_T(X₃, X₆, X₇) m_{R→T}(X₆, X₇)

m_{O→S}(X₃, X₅) = Σ_{X₇} ψ_O(X₃, X₅, X₇) m_{T→O}(X₃, X₇)

m_{S→X₁}(X₂) = Σ_{X₃, X₅} ψ_S(X₂, X₃, X₅) m_{O→S}(X₃, X₅)

P(X₁) = (1/Z) Σ_{X₂} ψ₁(X₁, X₂) m_{S→X₁}(X₂)

P(X₃) = ?   Construct the maximal cliques in the graph and use the clique beliefs:

b(S) = ψ_S(X₂, X₃, X₅) m_{O→S}(X₃, X₅)
b(O) = ψ_O(X₃, X₅, X₇) m_{T→O}(X₃, X₇)
b(T) = ψ_T(X₃, X₆, X₇) m_{R→T}(X₆, X₇)

P(X₃) = (1/Z) Σ_{X₂, X₅, X₆, X₇} b(S) · b(O) · b(T)
[Figure: a graph over nodes A–I and one of its junction trees, with cliques {C,B,H}, {D,C,B,G}, {C,D,F}, {E,B,A}, {D,E,B,G}, {E,I,D}.]

The minimum tree-width of a graph is the minimum, over all possible triangulations of the graph, of the size of the largest maximal clique minus one.

Exact inference algorithms are exponential in the tree-width of the underlying graph.

Finding the treewidth of a graph is NP-complete, but we can decide in linear time whether a graph has tree-width ≤ K (for a fixed K).
[Figure, repeated across three slides: an LSTM-based sequence model applied token-by-token to the citation "Warren, D. H. D. (1976). Generating Conditional Plans and Programs. In Proceedings of the Summer Conference on AI and Simulation of Behavior, Edinburgh.", with an LSTM cell per token.]
A factor graph is a bipartite graph with a node for each random variable and each factor.
There is an edge between a factor and each variable that participates in that factor.

Different factorizations correspond to different graphs

§ Loopy Belief Propagation (LBP) is an extension of BP used on graphs with cycles (loops).

§ Messages are passed iteratively between nodes in a loopy graph.

§ LBP is not guaranteed to converge or provide exact results, but works well in practice for many problems.

[Figure: a loopy graph: a cycle over nodes X1–X8.]
§ Initialize messages m_{i→j}(x_j) between all pairs of connected nodes i and j in the graph.
§ Typically, messages are initialized to uniform distributions or random values.

§ Message Passing
§ For each edge (i, j) in the graph, update the message from node i to node j (see the sketch after this list):

m_{i→j}(x_j) = Σ_{x_i} ψ_{i,j}(x_i, x_j) Π_{k ∈ Neighbors(i)\j} m_{k→i}(x_i)

§ Stopping Condition
§ Messages converge (i.e., they stop changing between iterations), or
§ a maximum number of iterations is reached.

§ Marginal Computation

P(x_i) ∝ ψ_i(x_i) Π_{j ∈ Neighbors(i)} m_{j→i}(x_i)
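A minimal sketch of these loopy BP updates on a pairwise MRF. The `phi`/`psi` dictionaries are illustrative names, and the unary potentials ψ_i are folded into the message update here, which the slide's update formula leaves implicit.

```python
import numpy as np

def loopy_bp(phi, psi, edges, n_iters=50):
    """Loopy BP on a pairwise MRF.

    phi[i]      : unary potential of node i, shape (K,)
    psi[(i, j)] : pairwise potential, shape (K, K), indexed [x_i, x_j]
    edges       : list of undirected edges (i, j)
    """
    nodes = list(phi)
    nbrs = {i: [] for i in nodes}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    K = len(next(iter(phi.values())))
    # One message per directed edge, initialized uniform.
    msgs = {(i, j): np.full(K, 1.0 / K) for i, j in edges}
    msgs.update({(j, i): np.full(K, 1.0 / K) for i, j in edges})

    def pairwise(i, j):
        return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

    for _ in range(n_iters):
        new = {}
        for (i, j) in msgs:
            incoming = (np.prod([msgs[(k, i)] for k in nbrs[i] if k != j], axis=0)
                        if len(nbrs[i]) > 1 else np.ones(K))
            m = pairwise(i, j).T @ (phi[i] * incoming)   # sum over x_i
            new[(i, j)] = m / m.sum()                    # normalize for numerical stability
        msgs = new

    beliefs = {}
    for i in nodes:
        b = phi[i] * np.prod([msgs[(j, i)] for j in nbrs[i]], axis=0)
        beliefs[i] = b / b.sum()
    return beliefs
```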
§ Pros:
§ Generalizable:
§ LBP can provide good approximations in graphs with cycles, where exact inference is
computationally infeasible.
§ Scalable:
§ Works on large-scale probabilistic graphical models such as Markov Random Fields (MRFs) and
Bayesian networks.
§ Cons:
§ No Guarantee of Convergence:
§ LBP may not always converge, especially in densely connected graphs or when loops are small.
§ Non-Exact:
§ Even when it converges, LBP provides only approximate marginals in graphs with cycles.
§ Sensitive to Initialization:
§ The results of LBP can be sensitive to the initial message values, affecting final convergence.

[Figure: example with variables Tree, Zebra, Ocean, Meadow.]

Ocean   Pr
False   0.45
True    0.55

Tree    Zebra   Pr
False   False   0.90
False   True    0.02
True    False   0.05
True    True    0.03
§ Exact inference is intractable for general graphical models

§ Sampling can be used as approximate inference

§ Sampling can also be used to approximate expectation using Monte Carlo integration

E_{x∼p}[f(x)] = Σ_x f(x) p(x) ≈ (1/T) Σ_{t=1..T} f(x^t),   x^t ∼ p(·)
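A minimal sketch of this Monte Carlo estimate; the target distribution N(0, 1) and f(x) = x² are made-up examples with known expectation 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: x ~ N(0, 1); estimate E[f(x)] for f(x) = x**2 (true value 1.0).
T = 100_000
samples = rng.normal(loc=0.0, scale=1.0, size=T)
estimate = np.mean(samples ** 2)
print(estimate)   # close to 1.0
```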
Assume X = Z ∪ E (E is the evidence, Z all other variables):

p(E = e) = Σ_z p(Z = z, E = e)
         = Σ_x p(x) I(E = e)
         = E_{x∼p}[I(E = e)]
         ≈ #(E = e) / T
F  H  P   Pr(F, H, P)
0  0  0   θ1
0  0  1   θ2
0  1  0   θ3
0  1  1   θ4
1  0  0   θ5
1  0  1   θ6
1  1  0   θ7
1  1  1   θ8

Draw u ∼ U(0, 1) and return the assignment whose interval of cumulative probabilities contains u, e.g.:

(0, 0, 1) if θ1 ≤ u < θ1 + θ2
(0, 1, 0) if θ1 + θ2 ≤ u < θ1 + θ2 + θ3
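A minimal sketch of this inverse-CDF sampling of a joint assignment; the θ values below are hypothetical and just need to sum to 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint table Pr(F, H, P); the eight thetas must sum to 1.
assignments = [(f, h, p) for f in (0, 1) for h in (0, 1) for p in (0, 1)]
thetas = np.array([0.20, 0.05, 0.10, 0.15, 0.05, 0.10, 0.25, 0.10])

def sample_joint():
    u = rng.uniform()                              # u ~ U(0, 1)
    cdf = np.cumsum(thetas)
    idx = np.searchsorted(cdf, u, side="right")    # first interval containing u
    return assignments[idx]

print(sample_joint())
```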
§ Use the topological order of Bayesian networks or other directed graphical
models!

[Figure: the alarm Bayesian network: Burglary → Alarm ← Earthquake, Alarm → MaryCalls, Alarm → JohnCalls. A sampled assignment: (B=1, E=0, A=1, M=1, J=0).]

B   P(B)        E   P(E)
0   0.999       0   0.998
1   0.001       1   0.002

B  E  A   Pr(A|B,E)
0  0  0   0.999
0  0  1   0.001
0  1  0   0.71
0  1  1   0.29
1  0  0   0.06
1  0  1   0.94
1  1  0   0.05
1  1  1   0.95

A  M   Pr(M|A)      A  J   Pr(J|A)
0  0   0.99         0  0   0.95
0  1   0.01         0  1   0.05
1  0   0.30         1  0   0.10
1  1   0.70         1  1   0.90
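A minimal sketch of ancestral sampling in this network, using the CPT values from the tables above; the Monte Carlo estimate of P(J = 1) at the end is only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bern(p):
    """Return 1 with probability p."""
    return int(rng.uniform() < p)

def sample_alarm_network():
    b = bern(0.001)                               # P(B = 1)
    e = bern(0.002)                               # P(E = 1)
    p_a = {(0, 0): 0.001, (0, 1): 0.29,
           (1, 0): 0.94,  (1, 1): 0.95}[(b, e)]   # P(A = 1 | B, E)
    a = bern(p_a)
    m = bern(0.70 if a else 0.01)                 # P(M = 1 | A)
    j = bern(0.90 if a else 0.05)                 # P(J = 1 | A)
    return dict(B=b, E=e, A=a, M=m, J=j)

samples = [sample_alarm_network() for _ in range(100_000)]
# Monte Carlo estimate of P(J = 1):
print(np.mean([s["J"] for s in samples]))
```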
We use a tractable distribution q that is easy to sample from. (Assume we can compute the probability of a full assignment, as in Bayesian networks.)

E_{x∼p}[f(x)] = Σ_x f(x) p(x)
             = Σ_x f(x) (p(x) / q(x)) q(x)
             = E_{x∼q}[f(x) w(x)]              where w(x) = p(x)/q(x)
             ≈ (1/T) Σ_{t=1..T} f(x^t) w(x^t)
p(E = e) = E_{z∼p}[p(e | z)]
         = E_{z∼q}[ p(e | z) p(z) / q(z) ]
         = E_{z∼q}[ p(e, z) / q(z) ]
         = E_{z∼q}[ w_e(z) ]
         ≈ (1/T) Σ_{t=1..T} w_e(z^t),   z^t ∼ q(·)
P(X_i = x_i | E = e) = P(X_i = x_i, E = e) / P(E = e)

P(X_i = x_i, E = e) = Σ_z δ(z) p(e, z)            (the sum is over all variables except e;
                                                   δ(z) is one if z is consistent with x_i)
                    = Σ_z δ(z) w_e(z) q(z)         w_e(z) = p(e, z)/q(z) is the importance weight
                    = E_{z∼q}[ δ(z) w_e(z) ]
                    ≈ (1/T) Σ_{t=1..T} δ(z^t) w_e(z^t)

From the previous slide:  P(E = e) ≈ (1/T) Σ_{t=1..T} w_e(z^t)

P̂(X_i = x_i | E = e) = [ (1/T) Σ_{t=1..T} δ(z^t) w_e(z^t) ] / [ (1/T) Σ_{t=1..T} w_e(z^t) ]
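A minimal sketch of likelihood weighting on the alarm network above, where q is the network with the evidence clamped, so w_e(z) reduces to the product of the evidence CPT entries; the query P(B = 1 | J = 1, M = 1) is chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bern(p):
    return int(rng.uniform() < p)

def likelihood_weighting(n_samples=100_000):
    """Estimate P(B=1 | J=1, M=1) in the alarm network by likelihood weighting.

    Evidence variables are fixed to their observed values; each sample is
    weighted by the probability of the evidence given its sampled parents.
    """
    num, den = 0.0, 0.0
    for _ in range(n_samples):
        b = bern(0.001)
        e = bern(0.002)
        p_a = {(0, 0): 0.001, (0, 1): 0.29,
               (1, 0): 0.94,  (1, 1): 0.95}[(b, e)]
        a = bern(p_a)
        # Evidence J=1, M=1 is not sampled; it contributes to the weight instead.
        w = (0.70 if a else 0.01) * (0.90 if a else 0.05)   # P(M=1|A) * P(J=1|A)
        num += w * b                                        # delta(z) = [B == 1]
        den += w
    return num / den

print(likelihood_weighting())
```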
A Markov Chain is a system that transitions from one
state to another in a state space.

• Transition between states follows the Markov property


• The future state depends only on the current state, not the past

[Figure: a three-state weather Markov chain over clouds (π_c), rain (π_r), and sun (π_s).]

π_c^(t+1) = π_c^(t) · P(clouds→clouds) + π_r^(t) · P(rain→clouds) + π_s^(t) · P(sun→clouds)

State probabilities at t → state probabilities at t+1:

π_c^(t+1) = π_c^(t) · 0.4 + π_r^(t) · 0.5 + π_s^(t) · 0.5

With π_c^(t) = 0.3, π_r^(t) = 0.4, π_s^(t) = 0.3:

π_c^(t+1) = 0.3 · 0.4 + 0.4 · 0.5 + 0.3 · 0.5 = 0.12 + 0.20 + 0.15 = 0.47

and similarly π_r^(t+1) = 0.24, π_s^(t+1) = 0.29.
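A minimal sketch of this update; the slides only give the transitions into the clouds state, so the full transition matrix below is a reconstruction chosen to be consistent with the numbers used on these slides.

```python
import numpy as np

# Transition matrix over states [clouds, rain, sun]; rows sum to 1.
# Only the first column (transitions into "clouds") is given on the slide;
# the remaining entries are reconstructed to match the slide's numbers.
P = np.array([[0.4, 0.3, 0.3],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

pi_t = np.array([0.3, 0.4, 0.3])   # state probabilities at time t
print(pi_t @ P)                    # one step: [0.47, 0.24, 0.29]

# Iterating pi <- pi P converges to the stationary distribution (pi P = pi).
pi = pi_t
for _ in range(100):
    pi = pi @ P
print(pi)                          # ~[0.4545, 0.2727, 0.2727] = (5/11, 3/11, 3/11)
```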


§ An equilibrium or stationary state in a Markov chain is a probability distribution over the states of the Markov chain that remains unchanged after any number of transitions.

Distribution of states at equilibrium:  π = (π₁, π₂, …, π_N)  satisfies  π P = π,  where P is the transition matrix.

The stationary distribution represents the long-term behavior of the Markov chain:
• Over a long period of time, the fraction of time the chain spends in state i will converge to π_i.
• If you start the Markov chain with the stationary distribution, the distribution of the states will be invariant after each step.
§ Irreducibility: it is possible to reach any state from any other state
§ This condition guarantees that the stationary distribution assigns a positive probability to
every state.
§ Aperiodicity: each state does not return to itself at fixed intervals.
§ Ensures that the chain does not oscillate between states and that it is not trapped in
cycles.
§ Positive Recurrence: A state i is positive recurrent if the expected number of
steps it takes to return to state i is finite. If all states in a Markov chain are positive
recurrent, the chain itself is said to be positive recurrent.
§ Positive recurrence ensures that, on average, the Markov chain will revisit each state in a
finite amount of time, allowing the chain to reach a stable, long-term behavior.
§ Stochastic Matrix:
§ The transition matrix P of the Markov chain must be a stochastic matrix, meaning that
each row of the matrix sums to 1

§ The detailed balance condition is a sufficient condition for a distribution to be the stationary distribution of a Markov chain.
§ It ensures that, at equilibrium, the probability flow between any two states is balanced.

π_i P_ij = π_j P_ji

The rate at which the Markov chain leaves state i to go to state j is exactly balanced by the rate at which it enters state i from state j.
[Figure: the same weather Markov chain over clouds, rain, and sun.]

π_c · P_c→r = π_r · P_r→c   ⇒   π_r = (3/5) π_c
π_c · P_c→s = π_s · P_s→c   ⇒   π_s = (3/5) π_c
π_r · P_r→s = π_s · P_s→r   ⇒   π_r = π_s

Together with π_c + π_r + π_s = 1:

π = (π_c, π_r, π_s) = (5/11, 3/11, 3/11)
[Figure: a Markov chain over the four joint states of (Cloudy, Rain), with transition probabilities such as
P12 = P(R = true | C = false, S = true, W = true) and
P24 = P(C = true | R = true, S = true, W = true).]

P(Cloudy, Rain | Sprinkler = true, WetGrass = true)?

1- Initialize the state probabilities
2- Run the Markov chain from the initial state for a number of burn-in steps
3- Run the chain for another N steps and collect the visited states
How do we design a transition kernel such that we can have a stationary distribution?

Assuming a conditional (proposal) distribution Q, we define an acceptance probability for each move:

A(x′ | x) = min( 1,  P(x′) Q(x | x′) / ( P(x) Q(x′ | x) ) )

Q can be any simple probability distribution, such as a Gaussian centered at x.

At each step we choose x′ according to Q, then we accept or reject the move according to A.

The detailed balance of MH:

P(x′) · Q(x | x′) A(x | x′) = P(x) · Q(x′ | x) A(x′ | x)
            q(x′ → x)                    q(x → x′)
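A minimal sketch of Metropolis-Hastings with a Gaussian proposal centered at x; the unnormalized target density is a made-up example, and with a symmetric proposal the Q ratio in A cancels.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_unnorm(x):
    """Unnormalized target density: a mixture of two Gaussian bumps."""
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

def metropolis_hastings(n_steps=50_000, step=1.0):
    x = 0.0
    samples = []
    for _ in range(n_steps):
        x_prop = x + step * rng.normal()          # symmetric proposal Q(x'|x)
        # With a symmetric Q, the ratio Q(x|x')/Q(x'|x) cancels in A.
        accept_prob = min(1.0, p_unnorm(x_prop) / p_unnorm(x))
        if rng.uniform() < accept_prob:
            x = x_prop
        samples.append(x)
    return np.array(samples)

samples = metropolis_hastings()
print(samples.mean())   # skewed toward the heavier mode near +2
```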
§ Use optimization to approximate a density
§ Find tractable distribution q that is close to the target distribution p
§ We use q to answer the probabilistic queries instead of p

§ Variational inference finds a variational distribution that is close to the target


distribution but provides no guarantees to reach the exact target distribution.
§ MCMC sampling guarantees that it produces asymptotically exact samples

§ Faster compared to MCMC sampling


§ Easier to scale to large models
§ Large scale document analysis
§ Computational neuroscience
§ Computer vision

∀q : KL(q‖p) ≥ 0
KL(q‖p) = 0 if and only if q = p

Forward:  KL(p‖q) = Σ_x p(x) log( p(x) / q(x) )      (expectation w.r.t. the target distribution)
Reverse:  KL(q‖p) = Σ_x q(x) log( q(x) / p(x) )
image credit: https://blog.evjang.com/2016/08/variational-bayes.html
Target distribution:                  p(x) ∝ Π_c ψ(x_c)
Mean field approximation:             q(x) = Π_i q_i(x_i)
Structured mean field approximation:  q(x) = Π_{(i,j)∈Tree} q_{ij}(x_i, x_j)
KL(q‖p) = Σ_x q(x) log( q(x) / p(x) ),        with  log p(x) = log p̂(x) − log Z(θ)

        = Σ_x q(x) [ log q(x) − log p̂(x) + log Z(θ) ]

        = log Z(θ) + E_q[ log q(x) − log p̂(x) ]

Since KL(q‖p) ≥ 0:

log Z(θ) ≥ E_q[ log p̂(x) − log q(x) ]

The right-hand side is known as the Evidence Lower Bound (ELBO).
Initialize the variational factors q_i

while ELBO has not converged:
    for i ∈ {1, ···, m}:
        (use the updated variational factors to evaluate the next factor)
        for each state of x_i:
            q_i(x_i) = (1/Z_i) exp( E_{q_{−i}}[ log p̂(x_i, x_{−i}) ] ),
        Z_i = Σ_{x_i} exp( E_{q_{−i}}[ log p̂(x_i, x_{−i}) ] )
    Compute ELBO = E_q[ log p̂(x) − log q(x) ]
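A minimal sketch of these coordinate-ascent updates for a naive mean-field approximation of a small pairwise MRF; the `phi`/`psi` tables are made up, and log p̂ here is the sum of log unary and pairwise potentials.

```python
import numpy as np

def mean_field(phi, psi, edges, n_iters=100):
    """Naive mean-field q(x) = prod_i q_i(x_i) for a pairwise MRF.

    phi[i]      : unary potential of node i, shape (K,)
    psi[(i, j)] : pairwise potential, shape (K, K), indexed [x_i, x_j]
    """
    nodes = list(phi)
    nbrs = {i: [] for i in nodes}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    K = len(next(iter(phi.values())))
    q = {i: np.full(K, 1.0 / K) for i in nodes}

    def log_pair(i, j):
        m = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
        return np.log(m)

    for _ in range(n_iters):
        for i in nodes:
            # E_{q_{-i}}[log p_hat(x_i, x_{-i})], keeping only terms involving x_i.
            log_qi = np.log(phi[i])
            for j in nbrs[i]:
                log_qi = log_qi + log_pair(i, j) @ q[j]   # sum_{x_j} q_j(x_j) log psi(x_i, x_j)
            qi = np.exp(log_qi - log_qi.max())            # subtract max for stability
            q[i] = qi / qi.sum()                          # Z_i normalization
    return q

# Toy example: two binary nodes with an attractive coupling.
phi = {0: np.array([0.6, 0.4]), 1: np.array([0.5, 0.5])}
psi = {(0, 1): np.array([[2.0, 1.0], [1.0, 2.0]])}
print(mean_field(phi, psi, edges=[(0, 1)]))
```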
p(θ | D) = p(θ) p(D | θ) / p(D)

where p(θ) is the prior on the model parameters, p(D | θ) is the data likelihood for the given model parameters, p(θ | D) is the posterior on the model parameters, and p(D) is the likelihood of the data (the model evidence).

θ̂ = argmax_θ p(θ | D) = argmax_θ p(θ) p(D | θ)
Example: Bayesian mixture of Gaussians, with n data points and k Gaussians.
(z collects all latent variables, including the model parameters.)

μ_k ∼ N(0, σ²)
c_i ∼ Categorical(1/k, ···, 1/k)
x_i | c_i, μ ∼ N(c_i^T μ, 1)

p(μ, c, x) = p(μ) Π_{i=1..n} p(c_i) p(x_i | c_i, μ)

p(x) = ∫ p(μ) Π_{i=1..n} Σ_{c_i} p(c_i) p(x_i | c_i, μ) dμ

p(z | x) = p(z) p(x | z) / p(x)

q*(z) = argmin_q KL( q(z) ‖ p(z | x) )

KL( q(z) ‖ p(z | x) ) = E_q[log q(z)] − E_q[log p(z | x)]
                      = E_q[log q(z)] − E_q[log p(z, x)] + log p(x)

⇒  log p(x) ≥ E_q[log p(z, x)] − E_q[log q(z)]

ELBO = E_q[log p(x | z)] − KL( q(z) ‖ p(z) )
       (expected log-likelihood − penalty)


Image credit: Blei et al. (2018)
