where $\beta \in (0,1)$ is a time discount factor.

### Stackelberg leader and follower

Each firm $i=1,2$ chooses a sequence
$\vec q_i \equiv \{q_{it+1}\}_{t=0}^\infty$ once and for all at

In choosing $\vec q_2$, firm 2 takes into account that firm 1 will
base its choice of $\vec q_1$ on firm 2's choice of
$\vec q_2$.

### Statement of leader's and follower's problems

We can express firm 1's problem as


follower's best response to it.

To gain insights about these things, we study them in more detail.

### Firms' problems

Firm 1 acts as if firm 2's sequence $\{q_{2t+1}\}_{t=0}^\infty$ is
given and beyond its control.

recursively.

We'll proceed by putting our duopoly model into a broader class of models with
the same general structure.

## Stackelberg problem

We formulate a class of linear-quadratic Stackelberg leader-follower
problems of which our duopoly model is an instance.
y_{t+1} = A y_t + B u_t
```
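
To fix ideas, here is a minimal sketch (not part of the lecture's own code) that iterates this law of motion under an arbitrary fixed decision rule $u_t = -F y_t$; the matrices `A`, `B`, `F` below are made-up placeholders, not the duopoly model's.

```python
import numpy as np

# Placeholder matrices, for illustration only -- the lecture constructs
# its own A, B from the duopoly model's parameters
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
F = np.array([[0.2, 0.5]])    # an arbitrary fixed feedback rule u_t = -F y_t

y = np.array([[1.0], [1.0]])  # initial state y_0
path = [y]
for t in range(5):
    u = -F @ y                # decision at time t
    y = A @ y + B @ u         # law of motion y_{t+1} = A y_t + B u_t
    path.append(y)

print(np.hstack(path))        # columns are y_0, ..., y_5
```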

### Interpretation of second block of equations

The Stackelberg follower's best response mapping is summarized by the
second block of equations of {eq}`new3`.

The Stackelberg leader uses its understanding of the responses
restricted by {eq}`constrainteq` to manipulate the follower's
decisions.

### More mechanical details

For any vector $a_t$, define $\vec a_t = [a_t, a_{t+1}, \ldots]$.

Although it is taken as given in $\Omega(y_0)$,
eventually, the $x_0$ component of $y_0$ is to be chosen by the
Stackelberg leader.

### Two subproblems

Once again we use backward induction.


Subproblem 2 optimizes over $x_0$.

The value function $w(z_0)$ tells the value of the Stackelberg plan
as a function of the vector of natural state variables $z_0$ at time $0$.

## Two Bellman equations

We now describe Bellman equations for $v(y)$ and $w(z_0)$.

x_0 = - P_{22}^{-1} P_{21} z_0
$$ (eq:subprob2x0)
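
As a quick numerical illustration of this formula (with made-up numbers, not the duopoly model's), we can partition a symmetric matrix $P$ conformably with $y = (z, x)$ and solve for $x_0$:

```python
import numpy as np

# Hypothetical value-function matrix P from subproblem 1, partitioned
# conformably with y = (z, x); here nz = 2 natural states, nx = 1
nz = 2
P = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.5, 0.2],
              [0.3, 0.2, 1.0]])   # symmetric, for illustration only
P21 = P[nz:, :nz]
P22 = P[nz:, nz:]

z0 = np.array([[1.0], [0.5]])
x0 = -np.linalg.solve(P22, P21 @ z0)   # x_0 = -P_{22}^{-1} P_{21} z_0
print(x0)
```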

## Stackelberg plan for duopoly

Now let's map our duopoly model into the above setup.


But firm 2 manipulates firm 1's choice through firm 2's choice of the sequence

### Calculations to prepare duopoly model

Now we'll proceed to cast our duopoly model within the framework of the
more general linear-quadratic structure described above.

As emphasized above, firm 1 acts as if firm 2's decisions
$\{q_{2t+1}, v_{2t}\}_{t=0}^\infty$ are given and beyond its
control.

### Firm 1's problem

We again formulate firm 1's optimum problem in terms of the Lagrangian


It is important to do this for several reasons:

First, let's get a recursive representation of the Stackelberg leader's
choice of $\vec q_2$ for our duopoly model.

## Recursive representation of Stackelberg plan

In order to attain an appropriate representation of the Stackelberg
leader's history-dependent plan, we will employ what amounts to a

$\sigma_t$ of a Stackelberg plan is **history-dependent**, meaning
that the Stackelberg leader's choice $u_t$ depends not just on
$\check z_t$ but on components of $\check z^{t-1}$.

### Comments and interpretations

Because we set $\check z_0 = z_0$, it will turn out that $z_t = \check z_t$
for all $t \geq 0$.

sequence $\vec q_2$, we must use representation
$\check z^t$ and **not** a corresponding representation cast in
terms of $z^t$.

## Dynamic programming and time consistency of follower's problem

Given the sequence $\vec q_2$ chosen by the Stackelberg leader in
our duopoly model, it turns out that the Stackelberg **follower's**

To verify these claims, we'll formulate a recursive version of a
follower's problem that builds on our recursive representation of the
Stackelberg leader's plan and our use of the **Big K, little k** idea.

### Recursive formulation of a follower's problem

We now use what amounts to another “Big $K$, little $k$” trick (see
[rational expectations equilibrium](https://python-intro.quantecon.org/rational_expectations.html))

which will verify that we have properly set up a recursive
representation of the follower's problem facing the Stackelberg leader's
$\vec q_2$.

### Time consistency of follower's plan

The follower can solve its problem using dynamic programming because its
problem is recursive in what for it are the **natural state variables**,

It follows that the follower's plan is time consistent.

## Computing Stackelberg plan

Here is our code to compute a Stackelberg plan via the linear-quadratic
dynamic program described above.
print("Computed policy for Continuation Stackelberg leader\n")
print(f"F = {F}")
```

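The code above relies on a linear-quadratic solver. As a schematic reminder of what is under the hood (only a sketch, with placeholder matrices rather than the lecture's duopoly matrices), here is a hand-rolled iteration on the discounted matrix Riccati equation:

```python
import numpy as np

# Placeholder primitives for a generic discounted LQ problem:
# minimize sum_t β^t (y_t' R y_t + u_t' Q u_t)  s.t.  y_{t+1} = A y_t + B u_t
β = 0.96
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
R = np.eye(2)           # state cost
Q = np.array([[1.0]])   # control cost

# Iterate P -> R + β A'PA - β² A'PB (Q + β B'PB)^{-1} B'PA to a fixed point
P = np.zeros_like(R)
for _ in range(2000):
    M = np.linalg.solve(Q + β * B.T @ P @ B, B.T @ P @ A)
    P_new = R + β * A.T @ P @ A - β**2 * A.T @ P @ B @ M
    if np.max(np.abs(P_new - P)) < 1e-12:
        P = P_new
        break
    P = P_new

F = β * np.linalg.solve(Q + β * B.T @ P @ B, B.T @ P @ A)
print("F =", F)         # optimal feedback rule u_t = -F y_t
```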
## Time series for price and quantities

Now let's use the code to compute and display outcomes as a Stackelberg plan unfolds.

ax.set_xlabel('t')
plt.show()
```

### Value of Stackelberg leader

We'll compute the value $w(x_0)$ attained by the Stackelberg leader, where $x_0$ is given by the maximizer {eq}`eq:subprob2x0` of subproblem 2.

v_expanded = -((y0.T @ R @ y0 + ut[:, 0].T @ Q @ ut[:, 0] +
(v_leader_direct - v_expanded < tol0)[0, 0]
```

## Time inconsistency of Stackelberg plan

In the code below we compare two values


The figure above shows

Taken together, these outcomes express the time inconsistency of the original time $0$ Stackelberg leader's plan.

## Recursive formulation of follower's problem

We now formulate and compute the recursive version of the follower's
problem.
np.max(np.abs(yt_tilde[4] - yt_tilde[2]))
yt[:, 0][-1] - (yt_tilde[:, 1] - yt_tilde[:, 0])[-1] < tol0
```

### Explanation of alignment

If we inspect coefficients in the decision rule $- \tilde F$,
we should be able to spot why the follower chooses to set $x_t =
plt.show()
np.max(np.abs(yt_tilde_star[:, 4] - yt_tilde[2, :-1]))
```

## Markov perfect equilibrium

The **state** vector is

v2_direct_alt = - z[:, 0].T @ lq1.P @ z[:, 0] + lq1.d
(np.abs(v2_direct - v2_direct_alt) < tol2).all()
```

## Comparing Markov perfect equilibrium and Stackelberg outcome

It is enlightening to compare equilibrium values for firms 1 and 2 under two alternative
settings: