
Assignment 7

Name: Satyajit Shinde

Div: TY AI C Roll No.: 41

PRN: 12211701

Understanding Hidden Markov Model


A Hidden Markov Model (HMM) is a statistical model used to
represent systems that evolve over time with hidden
(unobservable) states. It is an extension of the Markov chain,
where the states are not directly visible; instead, we observe
outputs or emissions that depend probabilistically on the hidden
states.
Key Components of an HMM
1. Hidden States:
   - These represent the unobservable conditions of the system (e.g., weather conditions like "Sunny", "Cloudy", "Rainy").
   - The states form a Markov chain, meaning the probability of transitioning to the next state depends only on the current state.
2. Observations:
   - These are the visible outputs generated by the hidden states (e.g., observations like "Dry", "Wet", "Stormy").
   - The relationship between hidden states and observations is defined by emission probabilities.
3. Transition Probabilities:
   - Define the likelihood of moving from one hidden state to another.
   - Represented as a matrix in which each row sums to 1.
4. Emission Probabilities:
   - Also known as observation probabilities, these define the likelihood of an observation being generated from a specific hidden state.
5. Initial State Probabilities:
   - Indicate the probability distribution over hidden states at the start of the process. (A concrete sketch of all five components follows this list.)
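As a concrete illustration, the weather example above can be written down as NumPy arrays. This is only a sketch with placeholder numbers (chosen here for illustration, not taken from the assignment code) that satisfy the constraints: each transition row, each emission row, and the initial distribution sum to 1.

import numpy as np

states = ["Sunny", "Cloudy", "Rainy"]        # hidden states
symbols = ["Dry", "Wet", "Stormy"]           # visible observations

# Transition probabilities: row = current state, column = next state; rows sum to 1.
A = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.4, 0.4]])

# Emission probabilities: row = hidden state, column = observation symbol; rows sum to 1.
B = np.array([[0.8, 0.15, 0.05],
              [0.4, 0.4, 0.2],
              [0.1, 0.5, 0.4]])

# Initial state probabilities.
pi = np.array([0.5, 0.3, 0.2])

assert np.allclose(A.sum(axis=1), 1) and np.allclose(B.sum(axis=1), 1) and np.isclose(pi.sum(), 1)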
Assumptions of HMM
1. Markov Property:
   - The current state depends only on the previous state, not on earlier states.
2. Output Independence:
   - Each observation is conditionally independent of all other observations and hidden states, given the current hidden state.
3. Stationarity:
   - Transition probabilities do not change over time.
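Under these assumptions, the joint probability of a state sequence and an observation sequence factorises into the initial probability, one transition probability per step, and one emission probability per observation. A minimal sketch, assuming the placeholder A, B, and pi arrays from the previous example are in scope:

def joint_probability(state_seq, obs_seq):
    # P(states, observations) = pi[s0] * B[s0, o0] * product over t of A[s_{t-1}, s_t] * B[s_t, o_t]
    p = pi[state_seq[0]] * B[state_seq[0], obs_seq[0]]
    for prev, cur, obs in zip(state_seq[:-1], state_seq[1:], obs_seq[1:]):
        p *= A[prev, cur] * B[cur, obs]
    return p

# Example: Sunny -> Sunny -> Rainy observed as Dry, Dry, Wet.
print(joint_probability([0, 0, 2], [0, 0, 1]))   # 0.5*0.8 * 0.6*0.8 * 0.1*0.5 = 0.0096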
Key Steps in the Hidden Markov Model (HMM) Implementation
1. Information Source:
   - The process begins with observations (e.g., weather conditions like "Dry", "Wet", or "Stormy") generated by a source that needs to be analysed. These observations represent the visible outputs of hidden states.
2. Encoding:
   - Hidden states (e.g., "Sunny", "Cloudy", "Rainy") are modelled probabilistically using transition probabilities, emission probabilities, and initial probabilities. Redundancy is added through these probabilistic relationships to account for uncertainty in transitions and emissions.
3. Transmission:
   - Observations are influenced by noise, represented by probabilistic emissions from each hidden state. For example, a "Sunny" state might emit a "Dry" observation with high probability but could also emit "Wet" due to noise.
4. Decoding:
   - The Viterbi algorithm reconstructs the most likely sequence of hidden states given the noisy observations. It uses dynamic programming to maximize the likelihood of the observed sequence (a minimal sketch follows this list).
5. Error Handling:
   - Residual errors in state predictions are minimized by the probabilistic model and the redundancy inherent in the transition and emission probabilities.
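A minimal sketch of the decoding step for discrete observations is given below (the full program later in this assignment uses Gaussian emissions instead); it assumes the placeholder A, B, and pi arrays and the numpy import from the earlier sketch.

def viterbi_discrete(obs_seq, A, B, pi):
    # Most likely hidden-state sequence for a sequence of discrete observation indices.
    n_states, T = A.shape[0], len(obs_seq)
    table = np.zeros((n_states, T))        # best path probability ending in each state at time t
    back = np.zeros((n_states, T), int)    # backpointers for the traceback

    table[:, 0] = pi * B[:, obs_seq[0]]
    for t in range(1, T):
        for s in range(n_states):
            scores = table[:, t - 1] * A[:, s]
            back[s, t] = np.argmax(scores)
            table[s, t] = scores[back[s, t]] * B[s, obs_seq[t]]

    path = np.zeros(T, int)
    path[-1] = np.argmax(table[:, -1])
    for t in range(T - 2, -1, -1):
        path[t] = back[path[t + 1], t + 1]
    return path

print(viterbi_discrete([0, 1, 2], A, B, pi))   # observations: Dry, Wet, Stormy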
Benefits of the Hidden Markov Model
- Robust Analysis: Enables accurate prediction of hidden states even with noisy observations, useful in applications like weather forecasting and speech recognition.
- Error Detection/Correction: Reduces prediction errors using probabilistic methods (e.g., the Viterbi algorithm).
- Adaptability: Applicable to various domains such as natural language processing (NLP), bioinformatics, and signal processing.
Choosing Parameters for HMM
1. Noise Characteristics:
   - Low noise: States and observations have high correlation; simpler models suffice.
   - Moderate noise: Calls for intermediate complexity in the emission probabilities.
   - High noise: Requires robust modelling with detailed transition and emission probabilities.
2. Trade-offs:
   - Model Complexity vs. Efficiency: Higher complexity improves accuracy but increases computational cost.
   - Redundancy vs. Throughput: Incorporating detailed emission probabilities enhances reliability but may slow down computation.
Additional Features in the Code
1. Mean and Variance of Hidden States:
   - Each hidden state is associated with Gaussian emission parameters (mean and variance), enabling continuous observation modeling.
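For reference, the Gaussian emission density implemented by gaussian_pdf in the code below is p(x | μ, σ²) = exp(−(x − μ)² / (2σ²)) / √(2πσ²), evaluated with the mean and variance of the hidden state under consideration.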

Problem Statement: Write a program to implement a Hidden Markov Model.


Code:
import numpy as np
import random

class GaussianHiddenMarkovModel:
    def __init__(self, n_states, n_observations):
        self.n_states = n_states
        self.n_observations = n_observations

        # Initialize parameters
        self.transition_prob = np.random.rand(n_states, n_states)
        self.transition_prob /= self.transition_prob.sum(axis=1, keepdims=True)

        # For Gaussian emissions, define mean and variance for each state
        self.emission_means = np.random.rand(n_states, n_observations)
        self.emission_variances = np.random.rand(n_states, n_observations) + 0.1  # Ensure variance is positive

        self.initial_prob = np.random.rand(n_states)
        self.initial_prob /= self.initial_prob.sum()

    def viterbi(self, observations):
        T = len(observations)
        viterbi_table = np.zeros((self.n_states, T))
        backpointers = np.zeros((self.n_states, T), dtype=int)

        # Initialize with initial probabilities and the first observation
        for s in range(self.n_states):
            viterbi_table[s, 0] = self.initial_prob[s] * self.gaussian_pdf(
                observations[0], self.emission_means[s, 0], self.emission_variances[s, 0])

        # Recursion for subsequent time steps
        for t in range(1, T):
            for s in range(self.n_states):
                trans_prob = viterbi_table[:, t - 1] * self.transition_prob[:, s]
                max_trans = np.argmax(trans_prob)
                viterbi_table[s, t] = trans_prob[max_trans] * self.gaussian_pdf(
                    observations[t],
                    self.emission_means[s, t % self.n_observations],
                    self.emission_variances[s, t % self.n_observations])
                backpointers[s, t] = max_trans

        # Traceback to find the optimal path
        best_path = np.zeros(T, dtype=int)
        best_path[-1] = np.argmax(viterbi_table[:, -1])
        for t in range(T - 2, -1, -1):
            best_path[t] = backpointers[best_path[t + 1], t + 1]

        return best_path

    def gaussian_pdf(self, x, mean, variance):
        return np.exp(-((x - mean) ** 2) / (2 * variance)) / np.sqrt(2 * np.pi * variance)

# Define states (0: Sunny, 1: Cloudy, 2: Rainy) and observations (0: Dry, 1: Wet, 2: Stormy)
hmm = GaussianHiddenMarkovModel(n_states=3, n_observations=3)

# Manually set parameters
hmm.transition_prob = np.array([[0.3, 0.2, 0.5], [0.2, 0.5, 0.3], [0.1, 0.4, 0.5]])
hmm.emission_means = np.array([[0.4, 0.6, 0.8], [0.3, 0.5, 0.7], [0.1, 0.3, 0.9]])
hmm.emission_variances = np.array([[0.1, 0.2, 0.3], [0.2, 0.3, 0.4], [0.3, 0.4, 0.5]])
hmm.initial_prob = np.array([0.4, 0.1, 0.5])

# Observations: randomly generate a sequence of length between 1 and 8
observations = [random.uniform(0, 1) for _ in range(random.randint(1, 8))]

best_states = hmm.viterbi(observations)
print("Observations:", observations)
print("Predicted hidden states:", best_states)

# Interpret states
state_names = ["Sunny", "Cloudy", "Rainy"]
print("Interpreted states:", [state_names[state] for state in best_states])

# Print means and variances for each state
for s in range(hmm.n_states):
    print(f"State {state_names[s]}:")
    print(f" - Mean: {hmm.emission_means[s, :]}")
    print(f" - Variance: {hmm.emission_variances[s, :]}")

Output:
1. Observations: [0.3790004094219769, 0.43166097774778567, 0.2432889991045354]

Predicted hidden states: [0 2 1]

Interpreted states: ['Sunny', 'Rainy', 'Cloudy']

State Sunny:

- Mean: [0.4 0.6 0.8]

- Variance: [0.1 0.2 0.3]

State Cloudy:

- Mean: [0.3 0.5 0.7]

- Variance: [0.2 0.3 0.4]

State Rainy:

- Mean: [0.1 0.3 0.9]

- Variance: [0.3 0.4 0.5]


2. Observations: [0.497641659466745, 0.851659427383329, 0.7852794317765039]

Predicted hidden states: [0 0 2]

Interpreted states: ['Sunny', 'Sunny', 'Rainy']

State Sunny:

- Mean: [0.4 0.6 0.8]

- Variance: [0.1 0.2 0.3]

State Cloudy:

- Mean: [0.3 0.5 0.7]

- Variance: [0.2 0.3 0.4]

State Rainy:

- Mean: [0.1 0.3 0.9]

- Variance: [0.3 0.4 0.5]
