@@ -95,7 +95,7 @@ The lecture [First Look at Kalman Filter](https://python-intro.quantecon.org/kal

We'll use limiting versions of the Kalman filter corresponding to what are called **stationary values** in that lecture.

- ## A Process for Which Adaptive Expectations are Optimal
+ ## A process for which adaptive expectations are optimal

Suppose that an observable $y_t$ is the sum of an unobserved
random walk $x_t$ and an IID shock $\epsilon_{2,t}$:
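
To make the setup concrete, here is a minimal simulation sketch of this random-walk-plus-noise process; the horizon, seed, and shock standard deviations are illustrative assumptions, not the lecture's parameter values.

```python
import numpy as np

np.random.seed(0)                # illustrative seed (assumption)
T = 80                           # horizon (assumption)
sig1, sig2 = 0.5, 1.0            # std devs of eps_1 and eps_2 (assumptions)

x = np.cumsum(sig1 * np.random.randn(T))   # unobserved random walk x_t
y = x + sig2 * np.random.randn(T)          # observable y_t = x_t + eps_{2,t}
```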
@@ -184,7 +184,7 @@ Ak, Ck, Gk, Hk = A, K1, G, 1
ssk = LinearStateSpace(Ak, Ck, Gk, Hk, mu_0=x_hat_0)
```
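
As a quick check, one can draw a sample path from this innovations representation; a minimal sketch, where `ts_length=100` is an arbitrary horizon (`simulate` is quantecon's `LinearStateSpace` method):

```python
# Simulate the innovations representation ssk built above; returns the
# state path (here, the filtered estimates) and the observation path
xk_path, yk_path = ssk.simulate(ts_length=100)
```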

- ## Some Useful State-Space Math
+ ## Some useful state-space math

Now we want to map the time-invariant innovations representation {eq}`innovations` and
the original state-space system {eq}`state-space` into a convenient form for deducing
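
One convenient form, sketched here as a hypothetical construction rather than the lecture's own, assumes the original system $x_{t+1} = A x_t + C \epsilon_{1,t+1}$, $y_t = G x_t + H \epsilon_{2,t}$ and the filter update $\hat x_{t+1} = A \hat x_t + K(y_t - G \hat x_t)$, and stacks $x_t$ and $\hat x_t$ into one state vector:

$$
\begin{bmatrix} x_{t+1} \\ \hat x_{t+1} \end{bmatrix}
=
\begin{bmatrix} A & 0 \\ KG & A - KG \end{bmatrix}
\begin{bmatrix} x_t \\ \hat x_t \end{bmatrix}
+
\begin{bmatrix} C \epsilon_{1,t+1} \\ K H \epsilon_{2,t} \end{bmatrix}
$$

Impulse responses of $x_t$, $\hat x_t$, and $y_t$ to the two shocks can then be read off powers of the single transition matrix.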
@@ -266,7 +266,7 @@ $\hat{x_t}$, and $y_t$.
We can now investigate how these
variables are related by plotting some key objects.

- ## Estimates of Unobservables
+ ## Estimates of unobservables

First, let’s plot the hidden state $x_t$ and the filtered version
$\hat x_t$ that is the linear least-squares projection of $x_t$
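
A minimal sketch of such a plot, reusing the illustrative `x` and `y` from the earlier simulation and an assumed steady-state gain `K = 0.5` (in the lecture the gain comes from the Kalman filter itself):

```python
import matplotlib.pyplot as plt

# Scalar steady-state filter: x_hat_{t+1} = x_hat_t + K (y_t - x_hat_t);
# K = 0.5 is an illustrative gain, x and y come from the sketch above
K = 0.5
x_hat = np.zeros_like(x)
for t in range(len(y) - 1):
    x_hat[t + 1] = x_hat[t] + K * (y[t] - x_hat[t])

fig, ax = plt.subplots()
ax.plot(x, label=r"$x_t$")
ax.plot(x_hat, label=r"$\hat{x}_t$")
ax.legend()
ax.set_xlabel("Time")
plt.show()
```

The recursion $\hat x_{t+1} = \hat x_t + K(y_t - \hat x_t)$ is exactly the adaptive-expectations scheme referred to in the section title above.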
@@ -287,7 +287,7 @@ Note how $x_t$ and $\hat{x_t}$ differ.
For Friedman, $\hat x_t$ and not $x_t$ is the consumer’s
idea about her/his *permanent income*.

- ## Relationship of Unobservables to Observables
+ ## Relationship of unobservables to observables

Now let’s plot $x_t$ and $y_t$.

@@ -320,7 +320,7 @@ ax.set_xlabel("Time")
plt.show()
```

- ## MA and AR Representations
+ ## MA and AR representations

Now we shall extract from the `Kalman` instance `kmuth` coefficients of

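A plausible sketch of that extraction step, assuming `kmuth` is the lecture's steady-state `Kalman` instance and taking five coefficients purely for illustration; `stationary_coefficients` is quantecon's method for the steady-state Wold moving-average and VAR coefficients:

```python
# MA and AR (VAR) coefficients implied by the steady-state Kalman filter
coefs_ma = kmuth.stationary_coefficients(5, 'ma')    # moving-average coefficients
coefs_var = kmuth.stationary_coefficients(5, 'var')  # autoregressive coefficients
```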