
Commit f246cbb

Update rob_markov_perf.md

1 parent 6bcb40a · commit f246cbb

File tree

1 file changed: +14 −14 lines changed


lectures/rob_markov_perf.md

Lines changed: 14 additions & 14 deletions
@@ -56,7 +56,7 @@ from scipy.linalg import solve
 import matplotlib.pyplot as plt
 ```

-### Basic Setup
+### Basic setup

 Decisions of two agents affect the motion of a state vector
 that appears as an argument of payoff functions of both agents.

@@ -82,7 +82,7 @@ A Markov perfect equilibrium with robust agents will be characterized by
 Below, we'll construct a robust firms version of the classic duopoly model with
 adjustment costs analyzed in [Markov perfect equilibrium](https://python-intro.quantecon.org/markov_perf.html).

-## Linear Markov Perfect Equilibria with Robust Agents
+## Linear Markov perfect equilibria with robust agents

 ```{index} single: Linear Markov Perfect Equilibria
 ```

@@ -92,7 +92,7 @@ leads us to an interrelated pair of Bellman equations.

 In linear quadratic dynamic games, these "stacked Bellman equations" become "stacked Riccati equations" with a tractable mathematical structure.

-### Modified Coupled Linear Regulator Problems
+### Modified coupled linear regulator problems

 We consider a general linear quadratic regulator game with two players, each of whom fears model misspecifications.

@@ -160,7 +160,7 @@ agent $i$'s mind charges for distorting the law of motion in a way that harms ag
 * the imaginary loss-maximizing agent helps the loss-minimizing agent by helping him construct bounds on the behavior of his decision rule over a
 large **set** of alternative models of state transition dynamics.

-### Computing Equilibrium
+### Computing equilibrium

 We formulate a linear robust Markov perfect equilibrium as follows.

@@ -268,7 +268,7 @@ Moreover, since

 we need to solve these $k_1 + k_2$ equations simultaneously.

-### Key Insight
+### Key insight

 As in [Markov perfect equilibrium](https://python-intro.quantecon.org/markov_perf.html), a key insight here is that equations {eq}`rmp-orig-3` and {eq}`rmp-orig-5` are linear in $F_{1t}$ and $F_{2t}$.

@@ -282,7 +282,7 @@ However, in the Markov perfect equilibrium of this game, each agent is assumed t

 After these equations have been solved, we can also deduce associated sequences of worst-case shocks.

-### Worst-case Shocks
+### Worst-case shocks

 For agent $i$ the maximizing or worst-case shock $v_{it}$ is

@@ -296,7 +296,7 @@ $$
 K_{it} = \theta_i^{-1} (I - \theta_i^{-1} C' P_{i,t+1} C)^{-1} C' P_{i,t+1} (A - B_1 F_{it} - B_2 F_{2t})
 $$

-### Infinite Horizon
+### Infinite horizon

 We often want to compute the solutions of such games for infinite horizons, in the hope that the decision rules $F_{it}$ settle down to be time-invariant as $t_1 \rightarrow +\infty$.

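For reference, the worst-case shock gain $K_{it}$ shown in the context lines of this hunk maps almost line for line into NumPy, since $\theta_i^{-1}(I - \theta_i^{-1} C' P C)^{-1} = (\theta_i I - C' P C)^{-1}$. The sketch below is illustrative only; the helper name `worst_case_K` is hypothetical and is not the lecture's implementation.

```python
import numpy as np

def worst_case_K(P, C, A, B1, B2, F1, F2, θ):
    # K = θ^{-1} (I - θ^{-1} C'PC)^{-1} C'P (A - B1 F1 - B2 F2)
    #   = (θ I - C'PC)^{-1} C'P (A - B1 F1 - B2 F2)
    k = C.shape[1]  # number of shocks
    return np.linalg.solve(θ * np.eye(k) - C.T @ P @ C,
                           C.T @ P @ (A - B1 @ F1 - B2 @ F2))
```
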
@@ -315,7 +315,7 @@ game with robust planers in the manner described above.
 ```{index} single: Markov Perfect Equilibrium; Applications
 ```

-### A Duopoly Model
+### A duopoly model

 Without concerns for robustness, the model is identical to the duopoly model from the [Markov perfect equilibrium](https://python-intro.quantecon.org/markov_perf.html) lecture.

@@ -438,7 +438,7 @@ A robust decision rule of firm $i$ will take the form $u_{it} = - F_i x_t$, ind
 x_{t+1} = (A - B_1 F_1 -B_1 F_2 ) x_t
 ```

-### Parameters and Solution
+### Parameters and solution

 Consider the duopoly model with parameter values of:

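The closed-loop law of motion in this hunk is what the comparative-dynamics comparison later in the diff iterates on. A minimal simulation sketch, with a hypothetical helper name and placeholder inputs rather than the lecture's duopoly values:

```python
import numpy as np

def simulate_closed_loop(A, B1, B2, F1, F2, x0, T=50):
    # Iterate x_{t+1} = A^o x_t with A^o = A - B1 F1 - B2 F2.
    AO = A - B1 @ F1 - B2 @ F2
    x = np.empty((T + 1, len(x0)))
    x[0] = x0
    for t in range(T):
        x[t + 1] = AO @ x[t]
    return x
```
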
@@ -453,7 +453,7 @@ From these, we computed the infinite horizon MPE without robustness using the co
 :load: _static/lecture_specific/markov_perf/duopoly_mpe.py
 ```

-#### Markov Perfect Equilibrium with Robustness
+#### Markov perfect equilibrium with robustness

 We add robustness concerns to the Markov Perfect Equilibrium model by
 extending the function `qe.nnash`

@@ -630,7 +630,7 @@ def nnash_robust(A, C, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2,
     return F1, F2, P1, P2
 ```

-### Some Details
+### Some details

 Firm $i$ wants to minimize

@@ -723,7 +723,7 @@ Q1 = Q2 = γ
 S1 = S2 = W1 = W2 = M1 = M2 = 0.0
 ```

-#### Consistency Check
+#### Consistency check

 We first conduct a comparison test to check if `nnash_robust` agrees
 with `qe.nnash` in the non-robustness case in which each $\theta_i \approx +\infty$

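One way to see why this consistency check should pass: by the formula for $K_{it}$ shown earlier in this diff, the worst-case shock gain scales roughly like $\theta_i^{-1}$, so with $\theta_i \approx +\infty$ the distortion vanishes and the robust rules should agree with those from `qe.nnash`. A small numerical illustration with placeholder matrices (not the lecture's duopoly parameterization):

```python
import numpy as np

# Placeholder matrices, chosen only to show the θ → ∞ limit.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
C = np.array([[0.0],
              [0.5]])
P = np.array([[2.0, 0.3],
              [0.3, 1.5]])
BF = np.array([[0.2, 0.0],
               [0.0, 0.2]])   # stands in for B1 F1 + B2 F2

for θ in (1e1, 1e4, 1e10):
    K = np.linalg.solve(θ * np.eye(1) - C.T @ P @ C,
                        C.T @ P @ (A - BF))
    print(f"θ = {θ:.0e}, ‖K‖ = {np.linalg.norm(K):.2e}")
```
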
@@ -747,7 +747,7 @@ print('P2 and P2r should be the same: ', np.allclose(P1, P1r))

 We can see that the results are consistent across the two functions.

-#### Comparative Dynamics under Baseline Transition Dynamics
+#### Comparative dynamics under baseline transition dynamics

 We want to compare the dynamics of price and output under the baseline
 MPE model with those under the baseline model under the robust decision rules within the robust MPE.

@@ -912,7 +912,7 @@ To explore this, we study next how *ex-post* the two firms' beliefs about state

 (by *ex-post* we mean *after* extremization of each firm's intertemporal objective)

-#### Heterogeneous Beliefs
+#### Heterogeneous beliefs

 As before, let $A^o = A - B\_1 F\_1^r - B\_2 F\_2^r$, where in a robust MPE, $F_i^r$ is a robust decision rule for firm $i$.