lectures/calvo_machine_learn.md (11 additions & 11 deletions)
@@ -72,7 +72,7 @@ We pose some of those questions at the end of this lecture and answer them b
Human intelligence, not the ``artificial intelligence`` deployed in our machine learning approach, is a key input into choosing which regressions to run.
-## The Model
+## The model
We study a linear-quadratic version of a model that Guillermo Calvo {cite}`Calvo1978` used to illustrate the **time inconsistency** of optimal government plans.
@@ -93,7 +93,7 @@ The model combines ideas from papers by Cagan {cite}`Cagan`, {cite}`sargent1973
-## Model Components
+## Model components
There is no uncertainty.
@@ -239,7 +239,7 @@ $$
-## Parameters and Variables
+## Parameters and variables
**Parameters:**
@@ -265,7 +265,7 @@ $$
-### Basic Objects
+### Basic objects
To prepare the way for our calculations, we'll remind ourselves of the mathematical objects in play.
@@ -321,7 +321,7 @@ An optimal government plan under this timing protocol is an example of what is
Notice that while the government is in effect choosing a bivariate **time series** $(\vec \mu, \vec \theta)$, the government's problem is **static** in the sense that it treats that time series as a single object to be chosen at a single point in time.
-## Approximation and Truncation parameter $T$
+## Approximation and truncation parameter $T$
We anticipate that under a Ramsey plan the sequences $\{\theta_t\}$ and $\{\mu_t\}$ both converge to stationary values.
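
To make the truncation concrete, here is a minimal sketch of the map from a truncated vector $\tilde \mu = (\mu_0, \ldots, \mu_T)$ into inflation rates, assuming equation {eq}`eq:inflation101` takes the geometric-average form $\theta_t = (1-\lambda)\sum_{j=0}^{T-1-t} \lambda^j \mu_{t+j} + \lambda^{T-t}\mu_T$ with $\lambda = \alpha/(1+\alpha)$; the function name `compute_theta` and the default $\alpha$ are illustrative, not the lecture's exact code.

```python
import jax.numpy as jnp

def compute_theta(mu_tilde, alpha=1.0):
    # Map (mu_0, ..., mu_T) into (theta_0, ..., theta_T), holding
    # mu_t = mu_T for all t > T, which is the truncation assumption.
    lam = alpha / (1 + alpha)
    T = len(mu_tilde) - 1
    theta = []
    for t in range(T + 1):
        j = jnp.arange(T - t)
        # finite weighted sum of mu_t, ..., mu_{T-1} ...
        head = (1 - lam) * jnp.sum(lam**j * mu_tilde[t:T])
        # ... plus the geometric tail, which collapses onto mu_T
        theta.append(head + lam**(T - t) * mu_tilde[T])
    return jnp.array(theta)
```

Note that each row of weights sums to one, so $\theta_t$ is a weighted average of current and future money growth rates, and $\theta_T = \mu_T$.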
@@ -392,7 +392,7 @@ $$
where $\tilde \theta_t, \ t = 0, 1, \ldots , T-1$ satisfies formula (1).
-## A Gradient Descent Algorithm
+## A gradient descent algorithm
We now describe code that maximizes the criterion function {eq}`eq:Ramseyvalue` subject to equations {eq}`eq:inflation101` by choice of the truncated vector $\tilde \mu$.
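
As a rough sketch of what such code can look like (not the lecture's actual implementation, which readers should consult for the true criterion and optimizer): define the criterion as a function of $\tilde \mu$, let JAX differentiate it, and climb the gradient. The payoff $s(\theta_t, \mu_t) = -\tfrac{1}{2}(\theta_t^2 + c\,\mu_t^2)$ below is a placeholder for the lecture's one-period payoff, the post-$T$ tail of the discounted sum is dropped for brevity, and `compute_theta` is the sketch given earlier.

```python
import jax
import jax.numpy as jnp

def V(mu_tilde, beta=0.85, c=2.0):
    # Placeholder criterion: a discounted sum of a quadratic payoff in
    # (theta_t, mu_t); eq:Ramseyvalue in the lecture has different
    # constants and a terminal tail term omitted here.
    theta = compute_theta(mu_tilde)
    t = jnp.arange(len(mu_tilde))
    s = -0.5 * (theta**2 + c * mu_tilde**2)
    return jnp.sum(beta**t * s)

grad_V = jax.grad(V)

def solve_ramsey(T=40, lr=0.1, n_iter=5_000):
    mu = jnp.zeros(T + 1)            # initial guess for (mu_0, ..., mu_T)
    for _ in range(n_iter):
        mu = mu + lr * grad_V(mu)    # gradient *ascent*, since we maximize V
    return mu
```

In practice one would `jax.jit` the update step and use an adaptive optimizer, but plain fixed-step ascent already conveys the idea: the Ramsey problem is handed to the machine as an unconstrained maximization over the vector $\tilde \mu$.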
We take a brief detour to solve a restricted version of the Ramsey problem defined above.
@@ -729,7 +729,7 @@ V_CR
compute_V(jnp.array([clq.μ_CR]), β=0.85, c=2)
```
-## A More Structured ML Algorithm
+## A more structured ML algorithm
By thinking about the mathematical structure of the Ramsey problem and using some linear algebra, we can simplify the problem that we hand over to a ``machine learning`` algorithm.
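
One way to see the simplification, sketched under the same assumed form of {eq}`eq:inflation101` as above: stack the weights into a matrix $B$ so that $\vec \theta = B \tilde \mu$. A quadratic criterion then becomes a quadratic form in $\tilde \mu$ alone, and its first-order condition is a linear system, so one call to a linear solver replaces many iterations of gradient descent. The helper `construct_B` below is illustrative.

```python
import jax.numpy as jnp

def construct_B(T, alpha=1.0):
    # B[t, j] holds the weight on mu_j in theta_t, so theta = B @ mu_tilde.
    lam = alpha / (1 + alpha)
    B = jnp.zeros((T + 1, T + 1))
    for t in range(T + 1):
        for j in range(t, T):
            B = B.at[t, j].set((1 - lam) * lam**(j - t))
        B = B.at[t, T].set(lam**(T - t))   # tail weight absorbed by mu_T
    return B
```

Each row of $B$ sums to one, consistent with $\theta_t$ being a weighted average of future $\mu_t$'s; with $\vec\theta = B\tilde\mu$, the placeholder payoff above becomes a quadratic form in $\tilde\mu$ with the discount weights $\mathrm{diag}(1, \beta, \ldots, \beta^T)$ sandwiched inside.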
@@ -1063,7 +1063,7 @@ the limit $\bar \mu$ of $\mu_t$ as $t \rightarrow +\infty$.
This pattern reflects how formula {eq}`eq_grad_old3` makes $\theta_t$ a weighted average of future $\mu_t$'s.
-## Continuation Values
+## Continuation values
For subsequent analysis, it will be useful to compute a sequence $\{v_t\}_{t=0}^T$ of what we'll call ``continuation values`` along a Ramsey plan.
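
A minimal sketch of one natural recursion, assuming the placeholder one-period payoff $s(\theta_t, \mu_t)$ used above and that $(\theta_t, \mu_t)$ sit at their limiting values from date $T$ onward; the lecture's own constants and terminal condition may differ.

```python
import jax.numpy as jnp

def continuation_values(theta, mu, beta=0.85, c=2.0):
    # v_t = s(theta_t, mu_t) + beta * v_{t+1}, computed backwards from
    # v_T = s(theta_T, mu_T) / (1 - beta), the value of staying at the
    # limiting pair forever.  The payoff s is a placeholder.
    s = -0.5 * (theta**2 + c * mu**2)
    T = len(mu) - 1
    v = [s[T] / (1 - beta)]
    for t in range(T - 1, -1, -1):
        v.append(s[t] + beta * v[-1])
    return jnp.array(v[::-1])
```

By construction $v_0$ equals the Ramsey value itself, while later $v_t$'s record the value of the plan's continuation from each date $t$ onward.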
@@ -1163,7 +1163,7 @@ time-less perspective." A more descriptive phrase is "the value of the worst con
```
-## Adding Some Human Intelligence
+## Adding some human intelligence
We have used our machine learning algorithms to compute a Ramsey plan.
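
The human intelligence added in this section takes the form of regressions run on the computed Ramsey outcome. As a minimal sketch of that step (the arrays `theta_R` and `mu_R` standing in for $\vec\theta^R, \vec\mu^R$ are illustrative names, not the lecture's), ordinary least squares of $\mu_t$ on a constant and $\theta_t$ recovers the linear function discussed below.

```python
import jax.numpy as jnp

def ols(y, x):
    # Regress y on a constant and x; return (intercept, slope).
    X = jnp.column_stack([jnp.ones_like(x), x])
    coef, *_ = jnp.linalg.lstsq(X, y)
    return coef

# Illustrative usage, assuming theta_R and mu_R hold the Ramsey outcome:
# b0, b1 = ols(mu_R, theta_R)    # fits mu_t = b0 + b1 * theta_t
```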
@@ -1351,7 +1351,7 @@ Evidently, continuation values $v_t > V^{CR}$ for $t=0, 1, 2$ while $v_t < V^{CR
-## What has Machine Learning Taught Us?
+## What has machine learning taught us?
Our regressions tell us that along the Ramsey outcome $\vec \mu^R, \vec \theta^R$, the linear function