- Platform: edX
- Provider: Massachusetts Institute of Technology
- Effort: 6 hours/week
- Length: 16 weeks
- Language: English
- Credentials: Paid Certificate Available
- Course Link
Overview
The world is full of uncertainty: accidents, storms, unruly financial markets, and noisy communications. The world is also full of data. Probabilistic modeling and the related field of statistical inference are the keys to analyzing data and making scientifically sound predictions.
This is Part 1 of a 2-part sequence on the basic tools of probabilistic modeling. Part 1 introduces the general framework of probability models, multiple discrete or continuous random variables, expectations, conditional distributions, and various powerful tools of general applicability. Part 2 will then continue into further topics that include laws of large numbers, the main tools of Bayesian inference methods, and an introduction to random processes (Poisson processes and Markov chains).
The contents of the two parts of the course are essentially the same as those of the corresponding MIT class, which has been offered and continuously refined over more than 50 years. It is a challenging class, but it will enable you to apply the tools of probability theory to real-world applications or to your own research.
Probabilistic models use the language of mathematics. But instead of relying on the traditional "theorem-proof" format, we develop the material in an intuitive, yet still rigorous and mathematically precise, manner. Furthermore, while the applications are many and evident, we emphasize the basic concepts and methodologies that are universally applicable.
What you'll learn
- The basic structure and elements of probabilistic models
- Random variables, their distributions, means, and variances
- Probabilistic calculations
- Inference methods
- Laws of large numbers and their applications
- Random processes
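To make the second of these learning goals concrete, here is a small illustrative Python sketch (my own example, not course material) that computes the mean and variance of a discrete random variable directly from its probability mass function, using a fair six-sided die as the assumed distribution:

```python
# Illustrative sketch (not from the course): mean and variance of a
# discrete random variable given its probability mass function (PMF).
# Assumed example: a fair six-sided die, so P(X = k) = 1/6 for k = 1..6.
pmf = {k: 1 / 6 for k in range(1, 7)}

# E[X] = sum over x of x * P(X = x)
mean = sum(x * p for x, p in pmf.items())

# Var(X) = E[X^2] - (E[X])^2
second_moment = sum(x ** 2 * p for x, p in pmf.items())
variance = second_moment - mean ** 2

print(mean)      # 3.5
print(variance)  # 35/12, about 2.9167
```

The same two formulas apply to any finite PMF; only the `pmf` dictionary changes.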
Syllabus
- Probability models and axioms
- Conditioning, Bayes’ rule, independence
- Counting methods in discrete probability
- Discrete random variables (distributions, mean, variance, conditioning, etc.)
- Continuous random variables (including general forms of Bayes’ rule)
- Further topics (derived distributions, covariance and correlation, etc.)
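As a taste of the "Conditioning, Bayes' rule" topic in the syllabus, the following sketch (hypothetical numbers of my own, not course data) applies Bayes' rule to a standard diagnostic-test setup:

```python
# Illustrative sketch of Bayes' rule with assumed, hypothetical numbers:
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease = 0.01            # prior: 1% of the population has the condition
p_pos_given_disease = 0.95  # test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

# Total probability theorem:
# P(positive) = P(pos | disease) P(disease) + P(pos | healthy) P(healthy)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' rule
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # 0.161
```

Despite the accurate test, the posterior probability is only about 16%, because the low prior dominates; this kind of counterintuitive result is exactly what the conditioning tools in the course are built to handle.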
Taught by
John Tsitsiklis, Patrick Jaillet, Zied Ben Chaouch, Dimitri Bertsekas, Qing He, Jimmy Li, Jagdish Ramakrishnan, Katie Szeto, and Kuang Xu