P Is A Markov Transition Matrix
Each row in the matrix represents the current state, and each column represents the next
state.
In a Markov matrix, each row must sum to 1, because it represents all possible next
states from the current state.
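As a concrete sketch: the first row below matches the 70/20/10 probabilities used later in this walkthrough, while the other two rows are invented placeholders for illustration.

```python
import numpy as np

# Machine states: index 0 = Operating, 1 = Idle, 2 = Broken
states = ["Operating", "Idle", "Broken"]

# Transition matrix: row = current state, column = next state.
# Row 0 comes from this walkthrough; rows 1 and 2 are assumed values.
P = np.array([
    [0.7, 0.2, 0.1],   # from Operating
    [0.4, 0.5, 0.1],   # from Idle (assumed)
    [0.5, 0.0, 0.5],   # from Broken (assumed)
])

# Every row must sum to 1 -- each row is a full probability
# distribution over the possible next states.
assert np.allclose(P.sum(axis=1), 1.0)
```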
# Initial state distribution (start in Operating)
state = 0
n_steps = 10000
state_counts = np.zeros(len(states))
1. state = 0
In a more advanced version, you could draw this initial state randomly from a probability
distribution, but here the machine always starts in the Operating state.
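That more advanced variant might look like the sketch below. The 60/30/10 initial distribution is invented purely for illustration; it is not part of the original example.

```python
import numpy as np

states = ["Operating", "Idle", "Broken"]

# Hypothetical initial distribution: 60% Operating, 30% Idle, 10% Broken.
init_dist = [0.6, 0.3, 0.1]

rng = np.random.default_rng(seed=42)  # seeded for reproducibility
state = rng.choice(len(states), p=init_dist)
print(f"Starting in state {state} ({states[state]})")
```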
2. n_steps = 10000
This defines how many time steps (or transitions) we will simulate.
A higher number gives more stable and reliable results, especially when
approximating steady-state probabilities.
3. state_counts = np.zeros(len(states))
We initialize a counter array to track how often the system is in each state.
len(states) is 3 in this case (Operating, Idle, Broken), so state_counts starts as the
array [0., 0., 0.].
During the simulation, we'll increment the count for the current state at each step. At
the end, we’ll divide each count by n_steps to get the proportion of time spent in
each state (aka empirical steady-state probabilities).
for _ in range(n_steps):
    state_counts[state] += 1
    state = np.random.choice(len(states), p=P[state])
Use np.random.choice to pick the next state, based on the current row of P
E.g., if you're in state 0 (Operating), P[0] = [0.7, 0.2, 0.1], so there's a 70% chance to stay
Operating, a 20% chance to go Idle, and a 10% chance to break down.
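You can check that sampling mechanic in isolation: over many draws from P[0], the observed frequencies of each next state should approach the row's probabilities.

```python
import numpy as np

# First row of P, as given above: from Operating,
# 70% stay Operating, 20% go Idle, 10% go Broken.
P0 = np.array([0.7, 0.2, 0.1])

rng = np.random.default_rng(seed=0)
draws = rng.choice(3, size=100_000, p=P0)

# Empirical frequency of each next state converges toward P0.
freqs = np.bincount(draws, minlength=3) / draws.size
print(freqs)  # roughly [0.7, 0.2, 0.1]
```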
This gives us the empirical steady-state probabilities—i.e., how much time the machine
spends in each state on average.
for i, s in enumerate(states):
Loops through each state and prints the % of time spent there.
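Putting those two steps together, dividing the counts by n_steps and printing each state's share, might look like this. The counts here are made-up numbers standing in for real simulation output.

```python
import numpy as np

states = ["Operating", "Idle", "Broken"]
n_steps = 10_000

# Example counts as they might look after the simulation loop
# (invented numbers, not actual simulation output).
state_counts = np.array([6200.0, 2300.0, 1500.0])

# Empirical steady-state probabilities: fraction of steps in each state.
state_probs = state_counts / n_steps

for i, s in enumerate(states):
    print(f"{s}: {state_probs[i]:.2%}")
```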
Calculating Efficiency
efficiency = state_probs[0]
Defines process efficiency as the proportion of time the machine is in the Operating
state (index 0).
Prints the final result.
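A minimal end-of-script sketch of that efficiency calculation, using the same invented example probabilities as above:

```python
import numpy as np

states = ["Operating", "Idle", "Broken"]

# Example empirical probabilities (invented for illustration).
state_probs = np.array([0.62, 0.23, 0.15])

# Efficiency = share of time the machine spends Operating (index 0).
efficiency = state_probs[0]
print(f"Process efficiency: {efficiency:.1%}")  # -> Process efficiency: 62.0%
```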