lectures/calvo.md: 13 additions & 30 deletions
@@ -45,7 +45,7 @@ The model focuses on intertemporal tradeoffs between
- benefits that anticipations of future deflation generate by decreasing costs of holding real money balances and thereby increasing a representative agent's *liquidity*, as measured by his or her holdings of real money balances, and
- costs associated with the distorting taxes that the government must levy in order to acquire the paper money that it will destroy in order to generate anticipated deflation
-Model features include
+Model features include
- rational expectations
- alternative possible timing protocols for government choices of a sequence of money growth rates
@@ -138,7 +138,7 @@ or
Because $\alpha > 0$, $0 < \frac{\alpha}{1+\alpha} < 1$.
-**Definition:** For scalar $b_t$, let $L^2$ be the space of sequences
+**Definition:** For scalar $b_t$, let $L^2$ be the space of sequences
-We define $ r(x_t,\mu_t) := - s(\theta_t, \mu_t) $ in order to represent the government's **maximum** problem in terms of our Python code for solving linear quadratic discounted dynamic programs.
+We define $ r(x_t,\mu_t) := - s(\theta_t, \mu_t) $ in order to represent the government's **maximization** problem in terms of our Python code for solving linear quadratic discounted dynamic programs.
In the [first LQ control lecture](https://python-intro.quantecon.org/lqcontrol.html) and some other quantecon lectures, we formulated these as **loss minimization** problems.
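To make that sign convention concrete, here is a minimal sketch (with placeholder matrices, not the model's actual ones) of solving such a problem with quantecon's `LQ` class, which minimizes a quadratic loss:

```python
import numpy as np
import quantecon as qe

# quantecon's LQ class minimizes E sum_t beta^t (x'Rx + u'Qu).  To
# *maximize* a payoff s(x, u), hand the solver the loss r(x, u) = -s(x, u),
# as in the definition above.  All matrices below are illustrative
# placeholders, not the Calvo model's actual matrices.
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])          # loss weights on the state
Q = np.array([[2.0]])               # loss weight on the control
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])          # state transition
B = np.array([[0.0],
              [1.0]])               # control loading

lq = qe.LQ(Q, R, A, B, beta=0.85)
P, F, d = lq.stationary_values()    # optimal policy: u_t = -F x_t
```

Maximizing $\sum_t \beta^t r(x_t, \mu_t)$ is then the same computation as the loss minimization that `stationary_values` performs.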
-where $\begin{bmatrix} d_0 & d_1 \end{bmatrix}$ is the second row of
+where $\big[\ d_0 \ \ d_1 \ \big]$ is the second row of
the closed-loop matrix $A - BF$ computed in subproblem 1 above.
The linear quadratic control problem {eq}`eq:subprob1LQ` satisfies regularity conditions that
@@ -711,15 +712,10 @@ In the present context, a symptom of time inconsistency is that the Ramsey planner
chooses to make $\mu_t$ a non-constant function of time $t$ despite the fact that, other than
time itself, there is no other state variable.
-
-
Thus, in our context, time-variation of $\vec \mu$ chosen by a Ramsey planner
is the telltale sign of the Ramsey plan's **time inconsistency**.
-
-
-
## Constrained-to-Constant-Growth-Rate Ramsey Plan
@@ -731,22 +727,14 @@ $$
\mu_t = \bar \mu, \quad \forall t \geq 0.
$$
-
We assume that the government knows the perfect foresight outcome implied by equation {eq}`eq_old2` that $\theta_t = \bar \mu$ when $\mu_t = \bar \mu$ for all $t \geq 0$.
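As a quick check of that claim (a sketch using the geometric-sum form of {eq}`eq_old2`, writing $\lambda := \frac{\alpha}{1+\alpha}$ so that $0 < \lambda < 1$), a constant money growth rate $\mu_t = \bar \mu$ implies

$$
\theta_t = (1-\lambda) \sum_{j=0}^{\infty} \lambda^j \mu_{t+j}
         = (1-\lambda)\, \bar \mu \sum_{j=0}^{\infty} \lambda^j
         = \bar \mu .
$$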
It follows that the value of such a plan is given by $V(\bar \mu)$ defined in equation {eq}`eq:barvdef`.
-
-
Then our restricted Ramsey planner chooses $\bar \mu$ to maximize $V(\bar \mu)$.
-
-
-
-
We can express $V(\bar \mu)$ as
-
$$
V (\bar \mu) = (1-\beta)^{-1} \left[ U (-\alpha \bar \mu) - \frac{c}{2} (\bar \mu)^2 \right]
$$ (eq:vcrformula20)
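Given a functional form for $U$, the restricted planner's problem is a one-dimensional maximization of {eq}`eq:vcrformula20`. Here is a minimal sketch, assuming a quadratic utility $U(m) = u_0 + u_1 m - \frac{u_2}{2} m^2$ with placeholder coefficients (none of the numbers below come from the lecture):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative placeholder parameters
u0, u1, u2 = 1.0, 0.5, 3.0       # coefficients of the assumed quadratic U
alpha, beta, c = 1.0, 0.85, 2.0

def U(m):
    """Assumed quadratic utility of real balances."""
    return u0 + u1 * m - 0.5 * u2 * m**2

def V_bar(mu):
    """Value of the constant plan mu_t = mu for all t, as in (eq:vcrformula20)."""
    return (U(-alpha * mu) - 0.5 * c * mu**2) / (1 - beta)

# Maximize V by minimizing its negative over the scalar mu
res = minimize_scalar(lambda mu: -V_bar(mu))
print(f"mu_bar* = {res.x:.4f}, V(mu_bar*) = {V_bar(res.x):.4f}")
```

With these placeholder numbers the first-order condition gives $\bar \mu^* = -\alpha u_1 / (c + u_2 \alpha^2) = -0.1$, which the numerical search reproduces.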
@@ -874,7 +862,7 @@ Under the Markov perfect timing protocol
(compute_lq)=
## Outcomes under Three Timing Protocols
-We want to compare outcome sequences $\{ \theta_t,\mu_t \}$ under three timing protocols associated with
+We want to compare outcome sequences $\{ \theta_t,\mu_t \}$ under three timing protocols associated with
* a standard Ramsey plan with its time-varying $\{ \theta_t,\mu_t \}$ sequences
* a Markov perfect equilibrium, with its time-invariant $\{ \theta_t,\mu_t \}$ sequences
@@ -908,7 +896,7 @@ The first two equalities follow from the preceding three equations.
We'll illustrate the third equality that equates $\theta_0^R$ to $ \theta_\infty^R$ with some quantitative examples below.
-Proposition 1 draws attention to how a positive tax distortion parameter $c$ alters the optimal rate of deflation that Milton Friedman financed by imposing a lump sum tax.
+Proposition 1 draws attention to how a positive tax distortion parameter $c$ alters the optimal rate of deflation that Milton Friedman financed by imposing a lump sum tax.
We'll compute
@@ -1039,7 +1027,7 @@ Let's create an instance of ChangLQ with the following parameters:
clq = ChangLQ(β=0.85, c=2)
```
-The following code plots value functions for a continuation Ramsey
+The following code plots policy functions for a continuation Ramsey
planner.
```{code-cell} ipython3
@@ -1129,7 +1117,7 @@ It follows that under the Ramsey plan $\{\theta_t\}$ and $\{\mu_t\}$ both converge
The next code plots the Ramsey planner's value function $J(\theta)$.
-We know that $J (\theta)$ is maximized at $\theta^R_0$, the best time $0$ promised inflation rate.
+We know that $J (\theta)$ is maximized at $\theta^R_0$, the best time $0$ promised inflation rate.
The figure also plots $\theta_\infty^R$, the limiting value of the promised inflation rate $\theta_t$ under the Ramsey plan as $t \rightarrow +\infty$.
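As a hypothetical illustration of reading $\theta^R_0$ off such a plot, one can evaluate $J$ on a grid and take the argmax; the quadratic $J$ below is a stand-in, not the lecture's computed value function:

```python
import numpy as np

# Stand-in value function; in the lecture, J is computed from the LQ solution
J = lambda theta: -(theta + 0.05)**2

theta_grid = np.linspace(-0.5, 0.5, 1001)
theta_R0 = theta_grid[np.argmax(J(theta_grid))]   # best time-0 promised inflation
```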
@@ -1568,11 +1556,6 @@ economists.
* A Markov perfect equilibrium plan is constructed to ensure that a sequence of government policymakers who choose sequentially do not want to deviate from it.
-
-
-
-
-
### Ramsey Plan Strikes Back
Research by Abreu {cite}`Abreu`, Chari and Kehoe {cite}`chari1990sustainable`