
Commit 68a53ef ("update")

Parent: 9b6ab1f

2 files changed (+21, -21 lines)

lectures/calvo_abreu.md (10 additions, 10 deletions)
@@ -55,7 +55,7 @@ our timing protocol.



-## Model Components
+## Model components

 We'll start with a brief review of the setup.

@@ -133,7 +133,7 @@ makes timing protocols matter for modeling optimal government policies.
 Quantecon lecture {doc}`calvo` used this insight to simplify analysis of alternative government policy problems.


-## Another Timing Protocol
+## Another timing protocol

 The Quantecon lecture {doc}`calvo` considered three models of government policy making that differ in

@@ -182,7 +182,7 @@ In this version of our model
 - at each $t$, the government chooses $\mu_t$ to maximize
 a continuation discounted utility.

-### Government Decisions
+### Government decisions

 $\vec \mu$ is chosen by a sequence of government
 decision makers, one for each $t \geq 0$.
@@ -214,7 +214,7 @@ for each $t \geq 0$:
 expect an associated $\theta_0^A$ for $t+1$. Here $\vec \mu^A = \{\mu_j^A \}_{j=0}^\infty$ is
 an alternative government plan to be described below.

-### Temptation to Deviate from Plan
+### Temptation to deviate from plan

 The government's one-period return function $s(\theta,\mu)$
 described in equation {eq}`eq_old6` in quantecon lecture {cite}`Calvo1978` has the property that for all
@@ -240,7 +240,7 @@ If the government at $t$ is to resist the temptation to raise its
 current payoff, it is only because it forecasts adverse consequences that
 its setting of $\mu_t$ would bring for continuation government payoffs via alterations in the private sector's expectations.

-## Sustainable or Credible Plan
+## Sustainable or credible plan

 We call a plan $\vec \mu$ **sustainable** or **credible** if at
 each $t \geq 0$ the government chooses to confirm private
@@ -294,7 +294,7 @@ import matplotlib.pyplot as plt
 import pandas as pd
 ```

-### Abreu's Self-Enforcing Plan
+### Abreu's self-enforcing plan

 A plan $\vec \mu^A$ (here the superscipt $A$ is for Abreu) is said to be **self-enforcing** if

@@ -359,7 +359,7 @@ agents' expectation.
 We shall use a construction featured in Abreu ({cite}`Abreu`) to construct a
 self-enforcing plan with low time $0$ value.

-### Abreu's Carrot-Stick Plan
+### Abreu's carrot-stick plan

 {cite}`Abreu` invented a way to create a self-enforcing plan with a low
 initial value.
@@ -518,7 +518,7 @@ Let's create an instance of ChangLQ with the following parameters:
 clq = ChangLQ(β=0.85, c=2)
 ```

-### Example of Self-Enforcing Plan
+### Example of self-enforcing plan

 The following example implements an Abreu stick-and-carrot plan.

@@ -634,7 +634,7 @@ def check_ramsey(clq, T=1000):
 check_ramsey(clq)
 ```

-### Recursive Representation of a Sustainable Plan
+### Recursive representation of a sustainable plan

 We can represent a sustainable plan recursively by taking the
 continuation value $v_t$ as a state variable.
@@ -665,7 +665,7 @@ depends on whether the government at $t$ confirms the representative agent's
 expectations by setting $\mu_t$ equal to the recommended value
 $\hat \mu_t$, or whether it disappoints those expectations.

-## Whose Plan is It?
+## Whose plan is it?

 A credible government plan $\vec \mu$ plays multiple roles.

lectures/calvo_machine_learn.md (11 additions, 11 deletions)
@@ -72,7 +72,7 @@ We pose some of those questions at the end of this lecture and answer them b
 Human intelligence, not the ``artificial intelligence`` deployed in our machine learning approach, is a key input into choosing which regressions to run.


-## The Model
+## The model

 We study a linear-quadratic version of a model that Guillermo Calvo {cite}`Calvo1978` used to illustrate the **time inconsistency** of optimal government plans.

@@ -93,7 +93,7 @@ The model combines ideas from papers by Cagan {cite}`Cagan`, {cite}`sargent1973



-## Model Components
+## Model components

 There is no uncertainty.

@@ -239,7 +239,7 @@ $$



-## Parameters and Variables
+## Parameters and variables


 **Parameters:**
@@ -265,7 +265,7 @@ $$



-### Basic Objects
+### Basic objects

 To prepare the way for our calculations, we'll remind ourselves of the mathematical objects
 in play.
@@ -321,7 +321,7 @@ An optimal government plan under this timing protocol is an example of what is
 Notice that while the government is in effect choosing a bivariate **time series** $(\vec mu, \vec \theta)$, the government's problem is **static** in the sense that it chooses treats that time-series as a single object to be chosen at a single point in time.


-## Approximation and Truncation parameter $T$
+## Approximation and truncation parameter $T$

 We anticipate that under a Ramsey plan the sequences $\{\theta_t\}$ and $\{\mu_t\}$ both converge to stationary values.

@@ -392,7 +392,7 @@ $$

 where $\tilde \theta_t, \ t = 0, 1, \ldots , T-1$ satisfies formula (1).

-## A Gradient Descent Algorithm
+## A gradient descent algorithm

 We now describe code that maximizes the criterion function {eq}`eq:Ramseyvalue` subject to equations {eq}`eq:inflation101` by choice of the truncated vector $\tilde \mu$.

@@ -689,7 +689,7 @@ compute_V(clq.μ_series, β=0.85, c=2)



-### Restricting $\mu_t = \bar \mu$ for all $t$
+### Restricting $\mu_t = \bar \mu$ for all $t$

 We take a brief detour to solve a restricted version of the Ramsey problem defined above.

@@ -729,7 +729,7 @@ V_CR
 compute_V(jnp.array([clq.μ_CR]), β=0.85, c=2)
 ```

-## A More Structured ML Algorithm
+## A more structured ML algorithm

 By thinking about the mathematical structure of the Ramsey problem and using some linear algebra, we can simplify the problem that we hand over to a ``machine learning`` algorithm.

@@ -1063,7 +1063,7 @@ the limit $\bar \mu$ of $\mu_t$ as $t \rightarrow +\infty$.
 This pattern reflects how formula {eq}`eq_grad_old3` makes $\theta_t$ be a weighted average of future $\mu_t$'s.


-## Continuation Values
+## Continuation values

 For subsquent analysis, it will be useful to compute a sequence $\{v_t\}_{t=0}^T$ of what we'll call ``continuation values`` along a Ramsey plan.

@@ -1163,7 +1163,7 @@ time-less perspective." A more descriptive phrase is "the value of the worst con
 ```


-## Adding Some Human Intelligence
+## Adding some human intelligence

 We have used our machine learning algorithms to compute a Ramsey plan.

@@ -1351,7 +1351,7 @@ Evidently, continuation values $v_t > V^{CR}$ for $t=0, 1, 2$ while $v_t < V^{CR



-## What has Machine Learning Taught Us?
+## What has machine learning taught us?


 Our regressions tells us that along the Ramsey outcome $\vec \mu^R, \vec \theta^R$, the linear function
