
Commit 35d6ac4

minor updates on typos

1 parent 65f0287 commit 35d6ac4

File tree

1 file changed: +13 -30 lines changed

lectures/calvo.md

Lines changed: 13 additions & 30 deletions
@@ -45,7 +45,7 @@ The model focuses on intertemporal tradeoffs between
 - benefits that anticipations of future deflation generate by decreasing costs of holding real money balances and thereby increasing a representative agent's *liquidity*, as measured by his or her holdings of real money balances, and
 - costs associated with the distorting taxes that the government must levy in order to acquire the paper money that it will destroy in order to generate anticipated deflation
 
-Model features include 
+Model features include
 
 - rational expectations
 - alternative possible timing protocols for government choices of a sequence of money growth rates
@@ -138,7 +138,7 @@ or
 
 Because $\alpha > 0$, $0 < \frac{\alpha}{1+\alpha} < 1$.
 
-**Definition:** For scalar $b_t$, let $L^2$ be the space of sequences 
+**Definition:** For scalar $b_t$, let $L^2$ be the space of sequences
 $\{b_t\}_{t=0}^\infty$ satisfying
 
 $$
@@ -291,7 +291,7 @@ v_0 = - \sum_{t=0}^\infty \beta^t r(x_t,\mu_t) = \sum_{t=0}^\infty \beta^t s(\theta_t, \mu_t)
 where $\beta \in (0,1)$ is a discount factor.
 
 ```{note}
-We define $ r(x_t,\mu_t) := - s(\theta_t, \mu_t) $ in order to represent the government's **maximum** problem in terms of our Python code for solving linear quadratic discounted dynamic programs.
+We define $ r(x_t,\mu_t) := - s(\theta_t, \mu_t) $ in order to represent the government's **maximization** problem in terms of our Python code for solving linear quadratic discounted dynamic programs.
 In [first LQ control lecture](https://python-intro.quantecon.org/lqcontrol.html) and some other quantecon lectures, we formulated these as **loss minimization** problems.
 ```
 
@@ -301,7 +301,7 @@ $$
 v_t = \sum_{j=0}^\infty \beta^j s(\theta_{t+j}, \mu_{t+j}) .
 $$ (eq:contnvalue)
 
-We can represent dependence of $v_t$ on $(\vec \theta, \vec \mu)$ recursively via the difference equation
+We can represent dependence of $v_0$ on $(\vec \theta, \vec \mu)$ recursively via the difference equation
 
 ```{math}
 :label: eq_old8
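The continuation-value recursion referenced around this change can be verified numerically: since $v_t = \sum_{j=0}^\infty \beta^j s(\theta_{t+j}, \mu_{t+j})$, it must satisfy $v_t = s(\theta_t, \mu_t) + \beta v_{t+1}$. A minimal sketch, with a made-up bounded payoff sequence standing in for $s(\theta_t, \mu_t)$:

```python
# Sanity check (illustrative): the discounted sum v_t = Σ_j β^j s_{t+j}
# satisfies the recursion v_t = s_t + β v_{t+1}.
# The payoff sequence below is made up; any bounded sequence works.
β = 0.85
T = 500                                # long horizon approximates infinity
s = [0.9 ** t for t in range(T)]       # stand-in for s(θ_t, μ_t)

# backward recursion with terminal continuation value ≈ 0
v = [0.0] * (T + 1)
for t in reversed(range(T)):
    v[t] = s[t] + β * v[t + 1]

# direct discounted sum for comparison
v0_direct = sum(β ** t * s_t for t, s_t in enumerate(s))
print(abs(v[0] - v0_direct) < 1e-9)    # the two representations agree
```

For a long horizon the backward recursion and the direct sum coincide up to floating-point noise, which is the content of the difference equation labeled `eq_old8`.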
@@ -457,7 +457,7 @@ and pose an ordinary discounted dynamic programming problem that in our setting
 In the second stage, we choose an optimal initial inflation rate $\theta_0$.
 
 Define a feasible set of
-$\{x_{t+1}, \mu_t \}_{t=0}^\infty$ sequences, with each sequence belonging to $L^2$: 
+$\{x_{t+1}, \mu_t \}_{t=0}^\infty$ sequences, with each sequence belonging to $L^2$:
 
 $$
 \Omega(x_0) = \{x_{t+1}, \mu_t \}_{t=0}^\infty : x_{t+1}
@@ -472,7 +472,7 @@ The value function
 
 $$
 J(x_0) = \max_{\{x_{t+1}, \mu_t \}_{t=0}^\infty \in \Omega(x_0)}
-- \sum_{t=0}^\infty \beta^t r(x_t,\mu_t)
+\sum_{t=0}^\infty \beta^t s(x_t,\mu_t)
 $$ (eq:subprob1LQ)
 
 satisfies the Bellman equation
@@ -514,7 +514,8 @@ $Q, R, A, B$, and $\beta$.
 
 The value function for a (continuation) Ramsey planner is
 
-$$ v_t = - \begin{bmatrix} 1 & \theta_t \end{bmatrix} \begin{bmatrix} P_{11} & P_{12} \cr P_{21} & P_{22} \end{bmatrix} \begin{bmatrix} 1 \cr \theta_t \end{bmatrix}
+$$
+v_t = - \begin{bmatrix} 1 & \theta_t \end{bmatrix} \begin{bmatrix} P_{11} & P_{12} \cr P_{21} & P_{22} \end{bmatrix} \begin{bmatrix} 1 \cr \theta_t \end{bmatrix}
 $$
 
 or
@@ -556,7 +557,7 @@ $$
 \theta_{t+1} = d_0 + d_1 \theta_t
 $$ (eq:thetaRamseyrule)
 
-where $\begin{bmatrix} d_0 & d_1 \end{bmatrix}$ is the second row of
+where $\big[\ d_0 \ \ d_1 \ \big]$ is the second row of
 the closed-loop matrix $A - BF$ computed in subproblem 1 above.
 
 The linear quadratic control problem {eq}`eq:subprob1LQ` satisfies regularity conditions that
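The $F$ and $P$ referenced here come from a standard discounted LQ problem, so they can be computed by iterating the discounted Riccati map. A hedged sketch in plain NumPy, with illustrative placeholder matrices rather than the $Q, R, A, B$ that the lecture's ChangLQ class assembles from model primitives:

```python
import numpy as np

# Discounted LQ sketch: choose u_t to minimize Σ β^t (x_t' R x_t + u_t' Q u_t)
# subject to x_{t+1} = A x_t + B u_t.  The matrices below are illustrative
# placeholders, not the ones the lecture builds from the Calvo model.
β = 0.85
A = np.array([[1.0, 0.0],
              [0.3, 0.9]])
B = np.array([[0.0],
              [1.0]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
Q = np.array([[0.5]])

# iterate the discounted Riccati map to a fixed point
P = np.zeros_like(R)
for _ in range(10_000):
    F = β * np.linalg.solve(Q + β * B.T @ P @ B, B.T @ P @ A)
    P_next = R + F.T @ Q @ F + β * (A - B @ F).T @ P @ (A - B @ F)
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

# The optimal rule is u_t = -F x_t, so the closed-loop law of motion is
# x_{t+1} = (A - BF) x_t.  With state x_t = (1, θ_t)', the second row of
# A - BF gives (d_0, d_1) in θ_{t+1} = d_0 + d_1 θ_t.
d0, d1 = (A - B @ F)[1]
```

The quantecon `LQ` class performs this computation via `stationary_values()`; the hand-rolled iteration above just makes the fixed-point logic explicit.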
@@ -711,15 +712,10 @@ In the present context, a symptom of time inconsistency is that the Ramsey planner
 chooses to make $\mu_t$ a non-constant function of time $t$ despite the fact that, other than
 time itself, there is no other state variable.
 
-
-
 Thus, in our context, time-variation of $\vec \mu$ chosen by a Ramsey planner
 is the telltale sign of the Ramsey plan's **time inconsistency**.
 
 
-
-
-
 ## Constrained-to-Constant-Growth-Rate Ramsey Plan
 
@@ -731,22 +727,14 @@ $$
 \mu_t = \bar \mu, \quad \forall t \geq 0.
 $$
 
-
 We assume that the government knows the perfect foresight outcome implied by equation {eq}`eq_old2` that $\theta_t = \bar \mu$ when $\mu_t = \bar \mu$ for all $t \geq 0$.
 
 It follows that the value of such a plan is given by $V(\bar \mu)$ defined in equation {eq}`eq:barvdef`.
 
-
-
 Then our restricted Ramsey planner chooses $\bar \mu$ to maximize $V(\bar \mu)$.
 
-
-
-
-
 We can express $V(\bar \mu)$ as
 
-
 $$
 V (\bar \mu) = (1-\beta)^{-1} \left[ U (-\alpha \bar \mu) - \frac{c}{2} (\bar \mu)^2 \right]
 $$ (eq:vcrformula20)
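The restricted planner's problem is then a one-dimensional maximization of $V(\bar\mu)$. A minimal sketch, assuming a quadratic utility $U(m) = u_0 + u_1 m - \frac{u_2}{2} m^2$ (the functional form and parameter values here are illustrative assumptions, not the lecture's calibration):

```python
from scipy.optimize import minimize_scalar

# Illustrative parameters, not the lecture's calibration
u0, u1, u2 = 1.0, 0.5, 3.0    # assumed quadratic utility U(m) = u0 + u1 m - (u2/2) m²
α, β, c = 1.0, 0.85, 2.0

def U(m):
    return u0 + u1 * m - 0.5 * u2 * m ** 2

def V(μ_bar):
    # value of a constant-money-growth plan, as in equation (eq:vcrformula20)
    return (U(-α * μ_bar) - 0.5 * c * μ_bar ** 2) / (1 - β)

# maximize V by minimizing -V
res = minimize_scalar(lambda m: -V(m))
μ_CR = res.x

# With quadratic U the first-order condition gives the same answer in
# closed form: -α U'(-α μ) - c μ = 0 with U'(m) = u1 - u2 m
μ_closed = -α * u1 / (α ** 2 * u2 + c)
```

Note how a larger tax-distortion parameter $c$ pulls the optimal constant $\bar\mu$ (and hence the deflation rate) toward zero, the pattern Proposition 1 highlights later in the lecture.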
@@ -874,7 +862,7 @@ Under the Markov perfect timing protocol
 (compute_lq)=
 ## Outcomes under Three Timing Protocols
 
-We want to compare outcome sequences $\{ \theta_t,\mu_t \}$ under three timing protocols associated with 
+We want to compare outcome sequences $\{ \theta_t,\mu_t \}$ under three timing protocols associated with
 
 * a standard Ramsey plan with its time-varying $\{ \theta_t,\mu_t \}$ sequences
 * a Markov perfect equilibrium, with its time-invariant $\{ \theta_t,\mu_t \}$ sequences
@@ -908,7 +896,7 @@ The first two equalities follow from the preceding three equations.
 
 We'll illustrate the third equality that equates $\theta_0^R$ to $ \theta_\infty^R$ with some quantitative examples below.
 
-Proposition 1 draws attention to how a positive tax distortion parameter $c$ alters the optimal rate of deflation that Milton Friedman financed by imposing a lump sum tax. 
+Proposition 1 draws attention to how a positive tax distortion parameter $c$ alters the optimal rate of deflation that Milton Friedman financed by imposing a lump sum tax.
 
 We'll compute
 
@@ -1039,7 +1027,7 @@ Let's create an instance of ChangLQ with the following parameters:
 clq = ChangLQ(β=0.85, c=2)
 ```
 
-The following code plots value functions for a continuation Ramsey
+The following code plots policy functions for a continuation Ramsey
 planner.
 
 ```{code-cell} ipython3
@@ -1129,7 +1117,7 @@ It follows that under the Ramsey plan $\{\theta_t\}$ and $\{\mu_t\}$ both converge
 
 The next code plots the Ramsey planner's value function $J(\theta)$.
 
-We know that $J (\theta)$ is maximized at $\theta^R_0$, the best time $0$ promised inflation rate. 
+We know that $J (\theta)$ is maximized at $\theta^R_0$, the best time $0$ promised inflation rate.
 
 The figure also plots $\theta_\infty^R$, the limiting value of the promised inflation rate $\theta_t$ under the Ramsey plan as $t \rightarrow +\infty$.
 
@@ -1568,11 +1556,6 @@ economists.
 
 * A Markov perfect equilibrium plan is constructed to insure that a sequence of government policymakers who choose sequentially do not want to deviate from it.
 
-
-
-
-
-
 ### Ramsey Plan Strikes Back
 
 Research by Abreu {cite}`Abreu`, Chari and Kehoe {cite}`chari1990sustainable`
