Introduction To Markov Chains: Group 5


INTRODUCTION TO MARKOV

CHAINS
GROUP 5

• RAIZA DC. VILLEGAS
• CRIS ANNE SISON
• MARYJOY SAN ROQUE
• ETHEL JANE VALDERAMA
• MIKE BENEDICTO
• ALTHEO AUSTRIA
• CATHERINE SANPEDRO
WHAT IS A MARKOV CHAIN?

A stochastic process containing random variables that transition from one state to another according to certain assumptions and definite probabilistic rules.
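A minimal sketch of this idea in Python. The states and transition probabilities below are hypothetical, purely for illustration:

```python
import random

# Hypothetical two-state chain: from each state, the next state is drawn
# according to fixed probabilistic rules.
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state):
    """Draw the next state from the current state's transition distribution."""
    states, weights = zip(*transitions[state])
    return random.choices(states, weights=weights)[0]

state = "sunny"
chain = [state]
for _ in range(5):
    state = step(state)
    chain.append(state)
print(chain)  # e.g. ['sunny', 'sunny', 'sunny', 'rainy', 'rainy', 'sunny']
```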
WHAT IS THE MARKOV
PROPERTY?

The discrete-time Markov property states that the probability of a random process transitioning to the next possible state depends only on the current state and time, and is independent of the series of states that preceded it.
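The property can be shown directly in code: the next-state distribution is a function of the current state alone, never of the path taken to reach it. The states and probabilities here are hypothetical:

```python
# Hypothetical transition probabilities for two states A and B.
P = {
    "A": {"A": 0.5, "B": 0.5},
    "B": {"A": 0.9, "B": 0.1},
}

def next_distribution(history):
    # Only the last element (the current state) matters -- this is the
    # Markov property.
    return P[history[-1]]

# Two different histories ending in the same state give the same
# next-state distribution.
assert next_distribution(["A", "B", "A"]) == next_distribution(["B", "A"])
print(next_distribution(["B", "A"]))  # {'A': 0.5, 'B': 0.5}
```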
WHAT IS A MARKOV MODEL?

A Markov model is a stochastic model in which the random variables follow the Markov property.

Markov chains are used in text-generation and auto-completion applications. For example, we'll take a random sentence and see how it can be modeled using a Markov chain.

One edureka, two edureka, hail edureka, happy edureka


WHAT IS A MARKOV
MODEL?
The above sentence is our example. It doesn't make much sense (it doesn't have to); it's a sentence of random words, wherein:

1. Keys denote the unique words in the sentence, i.e. 5 keys (one, two, hail, happy, edureka).

2. Tokens denote the total number of words, i.e. 8 tokens.
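The key and token counts for the example sentence can be checked in a few lines of Python:

```python
sentence = "one edureka two edureka hail edureka happy edureka"

tokens = sentence.split()   # every word, including repeats
keys = set(tokens)          # only the unique words

print(len(tokens))  # 8 tokens
print(len(keys))    # 5 keys
```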

Moving ahead, we need to understand the frequency of occurrence of these words. The diagram below shows each word along with a number that denotes its frequency.
KEYS AND FREQUENCIES
WEIGHTED
DISTRIBUTIONS

In our case, the weighted distribution for 'edureka' is 50% (4/8), because its frequency is 4 out of the 8 total tokens. The rest of the keys (one, two, hail, happy) each have a 1/8 chance of occurring (≈13%).
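These weighted distributions can be computed from the token counts with Python's standard library:

```python
from collections import Counter

tokens = "one edureka two edureka hail edureka happy edureka".split()

freq = Counter(tokens)                                   # frequency of each key
weights = {word: count / len(tokens) for word, count in freq.items()}

print(weights["edureka"])  # 0.5  (4 out of 8 tokens)
print(weights["one"])      # 0.125
```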

Start, one, two, hail, happy, end.

In the figure above, I've added two additional tokens that denote the start and the end of the sentence.
UPDATED KEYS AND
FREQUENCIES

In order to predict the next state, we only consider the current state.

In the diagram below, you can see how each token in our sentence leads to the next one. This shows that the future state (next token) depends only on the current state (present token). This is the most basic rule of the Markov model.
MARKOV CHAIN PAIRS
AN ARRAY OF MARKOV CHAIN PAIRS
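The array of Markov chain pairs can be built by pairing each token with its successor. A sketch, assuming the START and END markers described above:

```python
from collections import defaultdict

tokens = "START one edureka two edureka hail edureka happy edureka END".split()

# Map each token to the list of tokens observed immediately after it.
pairs = defaultdict(list)
for current, nxt in zip(tokens, tokens[1:]):
    pairs[current].append(nxt)

print(dict(pairs))
# {'START': ['one'], 'one': ['edureka'],
#  'edureka': ['two', 'hail', 'happy', 'END'],
#  'two': ['edureka'], 'hail': ['edureka'], 'happy': ['edureka']}
```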

To summarize this example, consider a scenario where you have to form a sentence using the array of keys and tokens we saw in the above example. Before we do, we need to specify two initial measures:

1. An initial probability distribution, i.e. the start state at time = 0 (the 'Start' key).

2. A transition probability of jumping from one state to another.

With the initial state and the transition probabilities in place, let's get on with the example.
• So to begin with, the initial token is (START).
• Next, we have only one possible token, i.e. (one).
• Currently, the sentence has only one word, i.e. 'one'.
• From this token, the next possible token is (edureka).
• The sentence is updated to 'one edureka'.
• From (edureka) we can move to any one of the
following tokens: (two, hail, happy, end).
• There is a 25% chance that 'two' gets picked, which
could eventually result in re-forming the original sentence
('one edureka two edureka hail edureka happy edureka').
However, if 'end' is picked, the process stops and we end
up generating a new sentence, i.e. 'one edureka'.
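The walkthrough above can be sketched as a small generator. This is an illustrative implementation, assuming a uniform random choice among the successor tokens observed for each state:

```python
import random
from collections import defaultdict

tokens = "START one edureka two edureka hail edureka happy edureka END".split()
pairs = defaultdict(list)
for current, nxt in zip(tokens, tokens[1:]):
    pairs[current].append(nxt)

def generate():
    """Walk the chain from START until END, collecting the tokens visited."""
    state = "START"          # the initial state at time = 0
    words = []
    while True:
        # Picking uniformly from the successor list reproduces the weighted
        # distribution: 'edureka' has 4 successors, so each has a 25% chance.
        state = random.choice(pairs[state])
        if state == "END":
            break
        words.append(state)
    return " ".join(words)

print(generate())  # e.g. 'one edureka' or 'one edureka happy edureka ...'
```

Every generated sentence begins with 'one edureka', since those are the only possible first two transitions; after that, the walk from 'edureka' decides how long the sentence grows.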
WHAT IS A TRANSITION PROBABILITY MATRIX?

In a Markov process, we use a matrix to represent the transition probabilities from one state to another. This matrix is called the transition probability matrix.
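A sketch of building the transition probability matrix for the example sentence, where row i holds the probabilities of moving from state i to each other state. NumPy is used here as an assumed dependency for the matrix arithmetic:

```python
import numpy as np

states = ["START", "one", "edureka", "two", "hail", "happy", "END"]
index = {s: i for i, s in enumerate(states)}
tokens = "START one edureka two edureka hail edureka happy edureka END".split()

# Count observed transitions: counts[i, j] = times state i was followed by j.
counts = np.zeros((len(states), len(states)))
for current, nxt in zip(tokens, tokens[1:]):
    counts[index[current], index[nxt]] += 1

# Normalize each row so row i is P(next state | current state i).
# END has no outgoing transitions, so its row is left as zeros.
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

print(P[index["edureka"], index["two"]])  # 0.25
```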

Thank you...
