University of Colorado Boulder
Discrete-Time Markov Chains and Monte Carlo Methods


Instructor: Jem Corcoran


Gain insight into a topic and learn the fundamentals.

  • Intermediate level (recommended experience)
  • 3 weeks to complete at 10 hours a week
  • Flexible schedule: learn at your own pace

What you'll learn

  • Analyze the long-term behavior of Markov processes, both to make predictions and to understand equilibrium in dynamic stochastic systems

  • Apply Markov decision processes to solve problems involving uncertainty and sequential decision-making

  • Simulate data from complex probability distributions using Markov chain Monte Carlo algorithms

Details to know

  • Shareable certificate: add to your LinkedIn profile
  • Recently updated: August 2025
  • Assessments: 15 assignments
  • Taught in English


Build your subject-matter expertise

This course is part of the Foundations of Probability and Statistics Specialization. When you enroll in this course, you'll also be enrolled in the Specialization.

There are 6 modules in this course

Module 1

Welcome to the course! This module contains logistical information to get you started!

What's included

7 readings · 4 ungraded labs

Module 2

In this module, we will review definitions and basic computations of conditional probabilities. We will then define a Markov chain and its associated transition probability matrix and learn how to do many basic calculations. Finally, we will tackle more advanced calculations involving absorbing states, along with techniques for putting a longer history into a Markov framework!
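As a rough Python sketch (not course code) of the kind of calculation this module covers, consider a hypothetical three-state chain: the n-step transition probabilities are the entries of the transition matrix raised to the nth power.

    import numpy as np

    # Hypothetical 3-state chain; each row of the transition matrix sums to 1.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.0, 0.4, 0.6]])

    # The n-step transition probability P(X_n = j | X_0 = i) is entry (i, j) of P^n.
    n = 5
    Pn = np.linalg.matrix_power(P, n)
    print(Pn[0, 2])  # chance of being in state 2 after 5 steps, starting in state 0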

What's included

12 videos · 5 assignments · 2 programming assignments

Module 3

What happens if you run a Markov chain for a "very long time"? In many cases, it turns out that the chain settles into a sort of "equilibrium" or "limiting distribution", where you will find it in various states with various fixed probabilities. In this module, we will define communication classes, recurrence, and periodicity for Markov chains, with the ultimate goal of answering existence and uniqueness questions about limiting distributions!
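A minimal sketch of this settling-in behavior, using a made-up chain: because the chain below is irreducible and aperiodic, every row of P^n approaches the same limiting distribution as n grows.

    import numpy as np

    # Hypothetical irreducible, aperiodic two-state chain.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    # Every row of P^n converges to the same limiting distribution,
    # regardless of the starting state.
    print(np.linalg.matrix_power(P, 100))  # both rows roughly [0.833, 0.167]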

What's included

9 videos · 3 assignments · 2 programming assignments

Module 4

In this module, we will define what is meant by a "stationary" distribution for a Markov chain, and you will learn how it relates to the limiting distribution discussed in the previous module. You will also spend time with the very powerful "first-step analysis" technique for solving many otherwise intractable problems surrounding Markov chains. We will discuss rates of convergence for a Markov chain settling into its "stationary mode", and just maybe we'll give a monkey a keyboard and hope for the best!
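A minimal sketch (assuming a made-up two-state chain, not course code) of computing a stationary distribution numerically: pi solves pi P = pi, so it is a left eigenvector of P with eigenvalue 1.

    import numpy as np

    # Hypothetical two-state chain.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    # A stationary distribution pi satisfies pi P = pi, so pi is a left
    # eigenvector of P with eigenvalue 1, normalized to sum to 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.isclose(eigvals, 1.0)].ravel())
    pi /= pi.sum()
    print(pi)  # [5/6, 1/6] for this chain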

What's included

11 videos · 3 assignments · 2 programming assignments

Module 5

In this module, we explore several options for simulating values from discrete and continuous distributions. Several of the algorithms we consider involve creating a Markov chain whose stationary or limiting distribution matches the "target" distribution of interest. This module includes the inverse cdf method, the accept-reject algorithm, the Metropolis-Hastings algorithm, the Gibbs sampler, and a brief introduction to "perfect sampling".
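As one illustration, here is a minimal random-walk Metropolis-Hastings sketch (a made-up example, not course code) targeting a standard normal density: propose a local move and accept it with probability min(1, target(proposal)/target(current)).

    import numpy as np

    rng = np.random.default_rng(0)

    def target(x):
        # Unnormalized density of the (hypothetical) target: a standard normal.
        return np.exp(-0.5 * x * x)

    # The resulting Markov chain has the target as its stationary distribution.
    x = 0.0
    samples = []
    for _ in range(50_000):
        proposal = x + rng.normal(scale=1.0)
        if rng.uniform() < target(proposal) / target(x):
            x = proposal
        samples.append(x)

    print(np.mean(samples), np.var(samples))  # roughly 0 and 1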

What's included

13 videos · 2 assignments · 2 programming assignments · 4 ungraded labs

Module 6

In reinforcement learning, an "agent" learns to make decisions in an environment by receiving rewards or punishments for taking various actions. A Markov decision process (MDP) is a reinforcement-learning setting in which, given the current state of the environment and the agent's current action, the past states and actions that brought the agent to this point are irrelevant. In this module, we learn about the famous "Bellman equation", which recursively assigns values to states, and how to use it to find an optimal strategy for the agent!
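A minimal value-iteration sketch for a hypothetical two-state, two-action MDP (the states, actions, and numbers below are made up, not from the course), repeatedly applying the Bellman optimality update:

    import numpy as np

    # Hypothetical MDP: P[a][s, s'] is the chance of moving from s to s' under
    # action a, and R[a][s] is the reward for taking action a in state s.
    P = np.array([[[0.9, 0.1],   # action 0
                   [0.2, 0.8]],
                  [[0.5, 0.5],   # action 1
                   [0.4, 0.6]]])
    R = np.array([[1.0, 0.0],    # action 0
                  [0.0, 2.0]])   # action 1
    gamma = 0.9                  # discount factor

    # Value iteration: V(s) <- max_a [ R(a, s) + gamma * sum_{s'} P(s'|s, a) V(s') ].
    V = np.zeros(2)
    for _ in range(500):
        Q = R + gamma * (P @ V)  # Q[a, s]: value of taking action a in state s
        V = Q.max(axis=0)

    print(V, Q.argmax(axis=0))   # optimal values and optimal action per state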

What's included

5 videos · 2 assignments · 2 programming assignments · 4 ungraded labs

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.

Instructor

Jem Corcoran
University of Colorado Boulder
7 courses · 36,883 learners

Offered by

University of Colorado Boulder

