
Commit a9c061f

DEBUG: add time in each cell
1 parent f394021 commit a9c061f

1 file changed: +17 -0 lines changed

lectures/black_litterman.md

Lines changed: 17 additions & 0 deletions
@@ -171,6 +171,8 @@ A common reaction to these outcomes is that they are so implausible that a portf
manager cannot recommend them to a customer.

```{code-cell} ipython3
+%%time
+
np.random.seed(12)

N = 10 # Number of assets
@@ -302,6 +304,8 @@ a pair $(\delta_m, \mu_m)$ that tells the customer to hold the
market portfolio.

```{code-cell} ipython3
+%%time
+
# Observed mean excess market return
r_m = w_m @ μ_est
@@ -387,6 +391,8 @@ and $\tau$ is chosen heavily to weight this view, then the
customer's portfolio will involve big short-long positions.

```{code-cell} ipython3
+%%time
+
def black_litterman(λ, μ1, μ2, Σ1, Σ2):
    """
    This function calculates the Black-Litterman mixture
@@ -610,6 +616,8 @@ $\bar d_2$ (or $\lambda$ ), we can trace out the whole curve
as the figure below illustrates.

```{code-cell} ipython3
+%%time
+
np.random.seed(1987102)

N = 2 # Number of assets
@@ -691,6 +699,8 @@ following figure, on which the curve connecting $\hat \mu$
and $\mu_{BL}$ are bending

```{code-cell} ipython3
+%%time
+
λ_grid = np.linspace(.001, 20000, 1000)
curve = np.asarray([black_litterman(λ, μ_m, μ_est, Σ_est,
                                    τ * np.eye(N)).flatten() for λ in λ_grid])
@@ -1239,9 +1249,12 @@ observations is related to the sampling frequency

- For any given $h$, the autocorrelation converges to zero as we increase the distance -- $n$-- between the observations. This represents the "weak dependence" of the $X$ process.

+
- Moreover, for a fixed lag length, $n$, the dependence vanishes as the sampling frequency goes to infinity. In fact, letting $h$ go to $\infty$ gives back the case of IID data.

```{code-cell} ipython3
+%%time
+
μ = .0
κ = .1
σ = .5
@@ -1341,6 +1354,8 @@ the sampling frequency $h$ relative to the IID case that we
compute in closed form.

```{code-cell} ipython3
+%%time
+
@jit
def sample_generator(h, N, M):
    ϕ = (1 - np.exp(-κ * h)) * μ
@@ -1362,6 +1377,8 @@ def sample_generator(h, N, M):
```

```{code-cell} ipython3
+%%time
+
# Generate large sample for different frequencies
N_app, M_app = 1000, 30000 # Sample size, number of simulations
h_grid = np.linspace(.1, 80, 30)
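
For context, `%%time` is IPython's built-in cell magic: it must be the first line of a cell, and once the cell body has run it prints the cell's CPU and wall-clock times. Below is a minimal sketch of the pattern this commit applies, using the body of the first modified cell as the example; the `import numpy as np` line is added here only so the sketch runs on its own (in the lecture NumPy is imported earlier).

```python
%%time
# %%time has to be the first line of the cell; IPython then reports the
# CPU and wall-clock time taken by everything below it.
import numpy as np  # included only to keep this sketch self-contained

np.random.seed(12)  # body taken from the first modified cell of the diff
N = 10              # Number of assets
```

When the lecture notebook is executed, each instrumented cell then emits a line of the form `CPU times: ... Wall time: ...` in its output, which, given the `DEBUG` commit message, is presumably meant to help locate the slow cells.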
