Econ 890b. Stochastic Dynamic Programming and Stochastic Games

Course Type: Graduate
Course term: Not offered
Year: 2014

Stochastic dynamic programming is the theory of how to “control” a stochastic process so as to maximize the chance, or the expected value, of some objective. For example, a player might try to choose bets (investments) to maximize the chance of winning $1000 while playing roulette (the stock market). The subject is also known as Markov decision theory, stochastic control, or even gambling theory; the multitude of names reflects its many fields of application, which include statistics, economics, operations research, and mathematical finance.

The course will cover the basic theory of discrete-time dynamic programming, including backward induction, discounted dynamic programming, and positive and negative dynamic programming. Several examples will be treated in detail.

In a stochastic game, two or more players jointly control a stochastic process. Typically the players have different objectives, and each player seeks to control the process in a way favorable to him or her. The course will introduce some fundamental concepts of game theory, including the value of a two-person, zero-sum game and the Nash equilibrium for an n-person game. These concepts will then be used to study n-person stochastic games. The course will also introduce strategic market games: stochastic games with infinitely many players that are used to model certain aspects of a large economy.
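To make the roulette example above concrete, here is a minimal sketch of value iteration for the classic “red-and-black” gambling problem: a gambler bets at even money with win probability p = 18/38 and wants to maximize the probability of reaching a target fortune. The function name, the integer fortunes, and the target of 10 units are illustrative assumptions, not part of the course material.

```python
# Value iteration for the "red-and-black" gambling problem:
# maximize the probability of reaching a target fortune when each
# even-money bet wins with probability p = 18/38 (roulette red).

def optimal_win_probability(goal=10, p=18/38, tol=1e-12):
    """Compute V[i], the optimal probability of reaching `goal`
    starting from fortune i, by iterating the Bellman equation

        V[i] = max over bets b in 1..min(i, goal - i) of
               p * V[i + b] + (1 - p) * V[i - b],

    with boundary conditions V[0] = 0 and V[goal] = 1.
    """
    V = [0.0] * (goal + 1)
    V[goal] = 1.0
    while True:
        delta = 0.0
        for i in range(1, goal):
            best = max(p * V[i + b] + (1 - p) * V[i - b]
                       for b in range(1, min(i, goal - i) + 1))
            delta = max(delta, abs(best - V[i]))
            V[i] = best
        if delta < tol:
            return V

if __name__ == "__main__":
    V = optimal_win_probability()
    # With a subfair game (p < 1/2), the optimal chance V[i] falls
    # strictly below the fair-game benchmark i / goal.
    print([round(v, 4) for v in V])
```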
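Similarly, the value of a two-person, zero-sum game mentioned above can be computed by linear programming. The sketch below (assuming NumPy and SciPy are available) maximizes the row player's guaranteed payoff max_x min_j (xᵀA)_j; the function name game_value and the matching-pennies example are illustrative.

```python
# Solve a two-person, zero-sum matrix game by linear programming.
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Return (value, x): the value of the game with payoff matrix A
    (payoffs to the row player) and an optimal mixed strategy x.

    LP: maximize v subject to A^T x >= v, sum(x) = 1, x >= 0.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Decision variables z = (x_1, ..., x_m, v); linprog minimizes c @ z.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # maximize v
    # For each column j: v - (A^T x)_j <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to one.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]     # v is unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[-1], res.x[:-1]

if __name__ == "__main__":
    # Matching pennies: value 0, optimal strategy (1/2, 1/2).
    value, strategy = game_value([[1, -1], [-1, 1]])
    print(value, strategy)
```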