lectures/rob_markov_perf.md
+14 −14 lines changed (14 additions, 14 deletions)
@@ -56,7 +56,7 @@ from scipy.linalg import solve
import matplotlib.pyplot as plt
```

-### Basic Setup
+### Basic setup

Decisions of two agents affect the motion of a state vector
that appears as an argument of payoff functions of both agents.
@@ -82,7 +82,7 @@ A Markov perfect equilibrium with robust agents will be characterized by
Below, we'll construct a robust firms version of the classic duopoly model with
adjustment costs analyzed in [Markov perfect equilibrium](https://python-intro.quantecon.org/markov_perf.html).

-## Linear Markov Perfect Equilibria with Robust Agents
+## Linear Markov perfect equilibria with robust agents

```{index} single: Linear Markov Perfect Equilibria
```
@@ -92,7 +92,7 @@ leads us to an interrelated pair of Bellman equations.

In linear quadratic dynamic games, these "stacked Bellman equations" become "stacked Riccati equations" with a tractable mathematical structure.

-### Modified Coupled Linear Regulator Problems
+### Modified coupled linear regulator problems

We consider a general linear quadratic regulator game with two players, each of whom fears model misspecifications.
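For orientation, here is a minimal sketch of the kind of problem the renamed section refers to, written under the simplifying assumption that the cross-product terms in states and controls are dropped (the lecture's full specification includes them): each agent $i$ chooses controls $\{u_{it}\}$ while an imagined adversary chooses distortions $\{v_{it}\}$ penalized by a multiplier $\theta_i$.

$$
\min_{\{u_{it}\}} \; \max_{\{v_{it}\}} \;
\sum_{t=0}^{\infty} \beta^t
\left\{ x_t' R_i x_t + u_{it}' Q_i u_{it} - \theta_i v_{it}' v_{it} \right\},
\qquad
x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t} + C v_{it}
$$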
@@ -160,7 +160,7 @@ agent $i$'s mind charges for distorting the law of motion in a way that harms ag
* the imaginary loss-maximizing agent helps the loss-minimizing agent by helping him construct bounds on the behavior of his decision rule over a
  large **set** of alternative models of state transition dynamics.

-### Computing Equilibrium
+### Computing equilibrium

We formulate a linear robust Markov perfect equilibrium as follows.
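One standard ingredient in robust formulations of this kind is a worst-case adjustment of the continuation value matrix, often written $\mathcal{D}(P) = P + P C (\theta I - C' P C)^{-1} C' P$. Below is a hedged Python sketch; the function name and the small example matrices are illustrative only, and the lecture's own solution code may organize this computation differently.

```python
import numpy as np

def d_operator(P, C, theta):
    """
    Worst-case adjustment commonly used in robust LQ problems:
        D(P) = P + P C (theta*I - C'PC)^{-1} C' P.
    Requires theta large enough that (theta*I - C'PC) is positive definite.
    """
    CPC = C.T @ P @ C
    core = theta * np.eye(CPC.shape[0]) - CPC
    return P + P @ C @ np.linalg.solve(core, C.T @ P)

# Hypothetical small example, just to exercise the function
P = np.eye(2)
C = np.array([[0.1], [0.2]])
theta = 2.0
print(d_operator(P, C, theta))
```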
@@ -268,7 +268,7 @@ Moreover, since

we need to solve these $k_1 + k_2$ equations simultaneously.

-### Key Insight
+### Key insight

As in [Markov perfect equilibrium](https://python-intro.quantecon.org/markov_perf.html), a key insight here is that equations {eq}`rmp-orig-3` and {eq}`rmp-orig-5` are linear in $F_{1t}$ and $F_{2t}$.
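To make the "linear in $F_{1t}$ and $F_{2t}$" point concrete, here is a hedged sketch that drops the cross-product terms carried by the lecture's equations {eq}`rmp-orig-3` and {eq}`rmp-orig-5` and uses hypothetical placeholder matrices: holding the other firm's rule and the continuation matrix fixed, each firm's rule solves a linear system.

```python
import numpy as np
from scipy.linalg import solve

def best_response(A, B_own, B_other, Q_own, P_next, F_other, beta=1.0):
    """
    Simplified best-response map: with the other player's rule F_other and
    continuation matrix P_next held fixed, F_own solves the linear system
        (Q_own + beta * B_own' P_next B_own) F_own
            = beta * B_own' P_next (A - B_other F_other).
    Cross-product terms from the full lecture formulas are omitted here.
    """
    lhs = Q_own + beta * B_own.T @ P_next @ B_own
    rhs = beta * B_own.T @ P_next @ (A - B_other @ F_other)
    return solve(lhs, rhs)

# Hypothetical placeholder matrices, chosen only to show the shapes involved
A = np.eye(3)
B1 = np.array([[0.0], [1.0], [0.0]])
B2 = np.array([[0.0], [0.0], [1.0]])
Q1 = np.array([[1.0]])
P1_next = np.eye(3)
F2 = np.zeros((1, 3))
F1 = best_response(A, B1, B2, Q1, P1_next, F2, beta=0.96)
print(F1.shape)  # (1, 3): one control, three states
```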
@@ -282,7 +282,7 @@ However, in the Markov perfect equilibrium of this game, each agent is assumed t

After these equations have been solved, we can also deduce associated sequences of worst-case shocks.

-### Worst-case Shocks
+### Worst-case shocks

For agent $i$ the maximizing or worst-case shock $v_{it}$ is
@@ -296,7 +296,7 @@ $$
K_{it} = \theta_i^{-1} (I - \theta_i^{-1} C' P_{i,t+1} C)^{-1} C' P_{i,t+1} (A - B_1 F_{1t} - B_2 F_{2t})
$$

-### Infinite Horizon
+### Infinite horizon

We often want to compute the solutions of such games for infinite horizons, in the hope that the decision rules $F_{it}$ settle down to be time-invariant as $t_1 \rightarrow +\infty$.
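The displayed formula for $K_{it}$ translates directly into a few lines of linear algebra, with the worst-case shock then given by $v_{it} = K_{it} x_t$. A minimal sketch follows; the function name and placeholder matrices are hypothetical, and the lecture's own code may package this differently.

```python
import numpy as np

def worst_case_K(A, B1, B2, C, P_next, F1, F2, theta):
    """
    K_i = theta^{-1} (I - theta^{-1} C' P_{i,t+1} C)^{-1}
              C' P_{i,t+1} (A - B1 F1 - B2 F2),
    so that agent i's worst-case shock is v_{it} = K_i x_t.
    """
    n_shock = C.shape[1]
    inner = np.eye(n_shock) - (C.T @ P_next @ C) / theta
    closed_loop = A - B1 @ F1 - B2 @ F2
    return np.linalg.solve(inner, C.T @ P_next @ closed_loop) / theta

# Hypothetical placeholders, just to exercise the function
n = 3
A, P1_next = np.eye(n), np.eye(n)
B1 = np.array([[0.0], [1.0], [0.0]])
B2 = np.array([[0.0], [0.0], [1.0]])
C = 0.1 * np.ones((n, 1))
F1 = F2 = np.zeros((1, n))
K1 = worst_case_K(A, B1, B2, C, P1_next, F1, F2, theta=2.0)
x = np.ones(n)
v1 = K1 @ x          # worst-case shock given current state x
print(K1.shape, v1)
```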
@@ -315,7 +315,7 @@ game with robust planers in the manner described above.
Without concerns for robustness, the model is identical to the duopoly model from the [Markov perfect equilibrium](https://python-intro.quantecon.org/markov_perf.html) lecture.
@@ -438,7 +438,7 @@ A robust decision rule of firm $i$ will take the form $u_{it} = - F_i x_t$, ind
x_{t+1} = (A - B_1 F_1 - B_2 F_2) x_t
```

-### Parameters and Solution
+### Parameters and solution

Consider the duopoly model with parameter values of:
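Once equilibrium rules $F_1$ and $F_2$ are in hand (however they were computed), iterating the closed-loop law of motion shown above takes one matrix product per period. A small hedged sketch with hypothetical placeholder matrices:

```python
import numpy as np

def simulate_closed_loop(A, B1, B2, F1, F2, x0, T=20):
    """Iterate x_{t+1} = (A - B1 F1 - B2 F2) x_t from x0 for T periods."""
    A_closed = A - B1 @ F1 - B2 @ F2
    path = [np.asarray(x0, dtype=float)]
    for _ in range(T):
        path.append(A_closed @ path[-1])
    return np.array(path)

# Hypothetical placeholders: in the lecture, F1 and F2 come from the
# robust MPE computation rather than being set by hand.
A = np.eye(3)
B1 = np.array([[0.0], [1.0], [0.0]])
B2 = np.array([[0.0], [0.0], [1.0]])
F1 = np.array([[-0.5, 0.2, 0.1]])
F2 = np.array([[-0.5, 0.1, 0.2]])
x0 = np.array([1.0, 0.5, 0.5])
print(simulate_closed_loop(A, B1, B2, F1, F2, x0, T=5))
```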
@@ -453,7 +453,7 @@ From these, we computed the infinite horizon MPE without robustness using the co