Lab 4: Markov Chains
in-class
For this lab, please work with your randomly assigned partner, with code on one computer. Make sure to email each other the code afterwards, since Homework 6 will also be about graphical models.
Now for the transition probabilities, create a 2D array to hold the transition matrix. Use P(L|L)=0.5, P(H|L)=0.5, P(L|H)=0.9, P(H|H)=0.1.
Finally, create a variable called num_days and set it equal to 1000. Our goal is to simulate a sequence of sleep values for 1000 days.
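A minimal sketch of this setup (the variable names `states` and `T` are my own choices, not prescribed by the lab):

```python
import numpy as np

states = ['L', 'H']  # L = low sleep, H = high sleep

# Row i of T is the distribution over next states given current state i.
# Row 0 (from L): P(L|L)=0.5, P(H|L)=0.5
# Row 1 (from H): P(L|H)=0.9, P(H|H)=0.1
T = np.array([[0.5, 0.5],
              [0.9, 0.1]])

num_days = 1000  # length of the sleep sequence we will simulate
```

Each row must sum to 1, since from any state we must transition somewhere.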
We need to start our Markov chain somewhere, so we will "flip a coin" with weights 0.2 on L and 0.8 on H. To do this, we can use a numpy built-in function that draws from a discrete probability distribution:
elements = ['red', 'green', 'blue']
probabilities = [0.4, 0.1, 0.5]
sequence = np.random.choice(elements, 20, p=probabilities)
In this example, we choose a sequence of 20 colors according to the given probabilities. Use this same concept to draw 1 choice for the initial state, which we can store as the current state. Note that np.random.choice returns an array even when drawing a single value, but we just want one state, so index into the result.
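Applied to our problem, the initial draw might look like this (a sketch; the name `current_state` is a suggestion, not a requirement):

```python
import numpy as np

states = ['L', 'H']
init_probs = [0.2, 0.8]  # the handout's weights: 0.2 on L, 0.8 on H

# np.random.choice with size 1 returns a length-1 array,
# so we index [0] to keep a single state as a plain string.
current_state = np.random.choice(states, 1, p=init_probs)[0]
```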
Now we can create a loop over num_days (technically num_days-1, since we have already chosen the value for the first day). First, based on the current state, find the appropriate row of the transition matrix, then use that row as the probabilities when drawing the next state. Then update the current state based on this outcome, appending each state to a list holding the full sequence (often called Z).
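The loop described above could be sketched as follows (one reasonable structure under the assumption that Z should also include the initial state; variable names are my own):

```python
import numpy as np

states = ['L', 'H']
T = np.array([[0.5, 0.5],
              [0.9, 0.1]])
num_days = 1000

# Initial state, weighted 0.2 on L and 0.8 on H.
current_state = np.random.choice(states, 1, p=[0.2, 0.8])[0]
Z = [current_state]

for _ in range(num_days - 1):
    # Pick the row of T corresponding to the current state,
    # and use it as the distribution over the next state.
    row = T[states.index(current_state)]
    current_state = np.random.choice(states, 1, p=row)[0]
    Z.append(current_state)
```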
Lastly, print Z, the sequence of states (while debugging, you may want to set num_days to something smaller than 1000). Then print the fraction of 'L' values and the fraction of 'H' values. Do these agree with our stationary distribution calculation from class?
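As a check on the final step, here is one way to compute the empirical fractions and compare them against the stationary distribution, found numerically as the left eigenvector of the transition matrix with eigenvalue 1 (this numerical approach is my own addition; it should match whatever method was used in class):

```python
import numpy as np

np.random.seed(0)  # fixed seed so the run is reproducible (optional)
states = ['L', 'H']
T = np.array([[0.5, 0.5],
              [0.9, 0.1]])
num_days = 1000

current_state = np.random.choice(states, 1, p=[0.2, 0.8])[0]
Z = [current_state]
for _ in range(num_days - 1):
    current_state = np.random.choice(states, 1, p=T[states.index(current_state)])[0]
    Z.append(current_state)

# Empirical fractions of each state in the simulated sequence.
frac_L = Z.count('L') / num_days
frac_H = Z.count('H') / num_days
print(frac_L, frac_H)

# Stationary distribution: solve pi = pi @ T, i.e. the left eigenvector
# of T (equivalently, the eigenvector of T transpose) for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(T.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(pi)  # approximately [0.643, 0.357], i.e. [9/14, 5/14]
```

With these transition probabilities, the stationary distribution puts more weight on L, even though H is the more likely starting state, so the long-run fractions should drift toward roughly 0.64 / 0.36.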