Component
Faculty of Science
Description
The first part of this course covers supplementary topics in probability theory: conditional expectation and Gaussian vectors. The second part presents one of the main families of discrete-time stochastic processes: Markov chains. These are sequences of dependent random variables whose dependency structure is particularly simple, since each variable depends only on the preceding one, and they are a very powerful modelling tool. We will study the main properties of these processes, their long-time behavior, and the estimation of their parameters.
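In symbols, the dependency structure described above is the Markov property: for a chain (X_n) with values in a countable state space E (written here for a time-homogeneous chain),

    P(X_{n+1} = j | X_0 = i_0, ..., X_n = i_n) = P(X_{n+1} = j | X_n = i_n)

for all states i_0, ..., i_n, j in E for which the conditioning event has positive probability.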
Objectives
The objectives of the course are:
- to be able to compute conditional expectations and conditional distributions
- to be able to model a random experiment with a Markov chain
- to be able to compute the quantities of interest (probabilities and expected times of reaching certain events)
- to be able to determine the asymptotic behavior of the process (see the sketch after this list).
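As a rough illustration of the last two objectives, here is a minimal sketch (Python with NumPy; the 3-state chain and its transition probabilities are hypothetical, not taken from the course material) that computes expected hitting times, the invariant distribution, and the empirical state frequencies along one simulated trajectory:

    # Minimal illustrative sketch (not course material): a hypothetical 3-state chain.
    import numpy as np

    # Transition matrix P[i, j] = P(X_{n+1} = j | X_n = i); each row sums to 1.
    P = np.array([
        [0.6, 0.3, 0.1],
        [0.2, 0.5, 0.3],
        [0.1, 0.4, 0.5],
    ])
    n = P.shape[0]
    target = 2  # state whose hitting time we study

    # Expected hitting time of `target` from each state: solve (I - Q) t = 1,
    # where Q is P restricted to the non-target states.
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]
    t_others = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    hit_time = np.zeros(n)
    hit_time[others] = t_others  # hitting time from the target itself is 0
    print("Expected hitting times of state 2:", hit_time)

    # Invariant distribution: the probability vector pi with pi P = pi,
    # i.e. the left eigenvector of P associated with eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    pi = pi / pi.sum()
    print("Invariant distribution:", pi)

    # Ergodic check: empirical frequencies along one long simulated trajectory
    # should be close to pi.
    rng = np.random.default_rng(0)
    x, counts = 0, np.zeros(n)
    for _ in range(100_000):
        x = rng.choice(n, p=P[x])
        counts[x] += 1
    print("Empirical frequencies:", counts / counts.sum())

For an irreducible chain such as this one, every state is reached with probability 1, which is why the sketch reports hitting times rather than hitting probabilities; the empirical frequencies approach the invariant distribution, as guaranteed by the ergodic theorem studied in part 3.5 of the syllabus.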
Teaching hours
- Stochastic processes - CM (lectures): 21h
- Stochastic processes - TD (tutorials): 21h
Necessary prerequisites
L3-level probability courses: random variables and random vectors, modes of convergence of sequences of random variables, convergence of sequences of independent and identically distributed random variables (plus characteristic functions, for the part on Gaussian vectors).
Linear algebra: matrix computations, eigenvalues and eigenvectors, solving linear systems, linear recurrence sequences.
Recommended prerequisites: Measure theory
Syllabus
1 Measurability
1.1 Sigma-algebras
1.2 Random processes
1.3 Filtration
1.4 Stopping times
2 Conditional expectation
2.1 Conditional probability with respect to an event
2.2 Conditional expectation with respect to a sigma-algebra
2.3 Conditional expectation and independence
2.4 Conditional distributions
3 Markov chains
3.1 Stochastic matrices
3.1.1 Definition and graphical representation
3.1.2 Communicating classes
3.1.3 Periodicity
3.2 Markov processes
3.2.1 Definition of a Markov chain
3.2.2 Markov property
3.3 Passage problems
3.4 Classification of Markov chains
3.4.1 Recurrence and transience
3.4.2 Link to class structure
3.5 Asymptotic behavior
3.5.1 Invariant distribution
3.5.2 Convergence to the invariant distribution
3.5.3 Ergodic theorem
3.5.4 Statistical inference for Markov chains
Further information
Timetable:
CM: 21h
TD: 21h