-This equation can in turn be rearranged to become the second-order
-difference equation
+This equation can in turn be rearranged to become

 ```{math}
 :label: sstack1
@@ -272,7 +271,7 @@ $$
 Operating on both sides of equation {eq}`sstack2` with
 $\beta^{-1}$ times this inverse operator gives the follower's
 decision rule for setting $q_{1t+1}$ in the
-**feedback-feedforward** form.
+**feedback-feedforward** form

 ```{math}
 :label: sstack3
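The feedback-feedforward split here is an instance of the standard device of solving the stable root of a second-order difference equation backward and the unstable root forward. As a generic illustration only (hypothetical roots $\delta_1, \delta_2$ and forcing sequence $c_t$, not the lecture's exact coefficients), factor the characteristic polynomial as

```{math}
(1 - \delta_1 L)(1 - \delta_2 L)\, q_{t+1} = c_t , \qquad |\delta_1| < 1 < |\delta_2| ,
```

then invert the unstable factor forward to obtain

```{math}
q_{t+1} = \underbrace{\delta_1 q_t}_{\text{feedback}} \; - \; \underbrace{\sum_{j=0}^\infty \delta_2^{-(j+1)} c_{t+j+1}}_{\text{feedforward}} .
```

The first term feeds back on the lagged state; the second feeds forward on the entire future of the forcing sequence.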
@@ -304,7 +303,7 @@ and formulate the following Lagrangian for the Stackelberg leader firm

 subject to initial conditions for $q_{1t}, q_{2t}$ at $t=0$.

-**Comments:** We have formulated the Stackelberg problem in a space of
+**Remarks:** We have formulated the Stackelberg problem in a space of
 sequences.

 The max-min problem associated with Lagrangian
@@ -314,11 +313,11 @@ future of its choices of $\{q_{1t+j}\}_{j=0}^\infty$.

 This renders a direct attack on the problem cumbersome.

-Therefore, below, we will formulate the Stackelberg leader's problem
+Therefore, below we will formulate the Stackelberg leader's problem
 recursively.

 We'll put our little duopoly model into a broader class of models with
-the same conceptual structure.
+the same structure.

 ## Stackelberg Problem

@@ -342,7 +341,7 @@ of the Stackelberg **follower**.
 Let $u_t$ be a vector of decisions chosen by the Stackelberg leader
 at $t$.

-The $z_t$ vector is inherited physically from the past.
+The $z_t$ vector is inherited from the past.

 But $x_t$ is a decision made by the Stackelberg follower at time
 $t$ that is the follower's best response to the choice of an
@@ -464,13 +463,13 @@ Subproblem 2 is solved by the **Stackelberg leader** at $t=0$.

 The two subproblems are designed

-- to respect the protocol in which the follower chooses
+- to respect the timing protocol in which the follower chooses
   $\vec q_1$ after seeing $\vec q_2$ chosen by the leader
 - to make the leader choose $\vec q_2$ while respecting that
   $\vec q_1$ will be the follower's best response to
   $\vec q_2$
 - to represent the leader's problem recursively by artfully choosing
-  the state variables confronting and the control variables available
+  the leader's state variables and the control variables available
   to the leader

 **Subproblem 1**
@@ -1012,8 +1011,9 @@ plt.show()

 We'll compute the present value earned by the Stackelberg leader.

-We'll compute it two ways (they give identical answers -- just a check
-on coding and thinking)
+We'll compute it two ways and get the same answer.
+
+In addition to being a useful check on the accuracy of our coding, computing things in these two ways helps us think about the structure of the problem.
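To make the two-way check concrete, here is a minimal, self-contained sketch under a hypothetical linear-quadratic setup (the matrices `A_c`, `R`, the discount factor `beta`, and the initial state `x0` below are illustrative assumptions, not objects from this lecture): compute the present value once by simulating and summing discounted payoffs, and once as a quadratic form in the initial state using a discounted Lyapunov equation.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical closed-loop law of motion x_{t+1} = A_c x_t
# with per-period payoff r_t = x_t' R x_t, discounted by beta.
beta = 0.95
A_c = np.array([[0.9, 0.1],
                [0.0, 0.8]])
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])
x0 = np.array([1.0, 1.0])

# Way 1: simulate the state and sum discounted payoffs over a long horizon
T = 2000
x, pv_sim = x0.copy(), 0.0
for t in range(T):
    pv_sim += beta**t * x @ R @ x
    x = A_c @ x

# Way 2: solve P = R + beta * A_c' P A_c, then the present value is x0' P x0
P = solve_discrete_lyapunov(np.sqrt(beta) * A_c.T, R)
pv_lyap = x0 @ P @ x0

print(pv_sim, pv_lyap)  # the two values should agree up to truncation error
```

Agreement of the two numbers checks both the simulation code and the Lyapunov-equation route; the quadratic-form representation is also what makes the recursive formulation of the leader's problem tractable.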