```{code-cell} ipython3
print('Sim. mean =', mean_sim)
print('Sim. variance =', var_sim)
```

We can also simulate many ($S$) data sets of test scores, each with $N=161$ test scores. The estimate of the model moments will be the average of the simulated data moments across the simulations.

```{code-cell} ipython3
:tags: []

N = 161
S = 100
mu_2 = 300.0
sig_2 = 30.0
cut_lb = 0.0
cut_ub = 450.0
np.random.seed(25)  # Set the random number seed to get same answers every time
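# Sketch: generate an N x S matrix of simulated test scores.  This uses
# scipy.stats.truncnorm directly, which may differ from the chapter's own
# drawing function; variable names below are illustrative.
import scipy.stats as sts  # likely already imported earlier in the chapter

sim_vals = sts.truncnorm.rvs(
    (cut_lb - mu_2) / sig_2, (cut_ub - mu_2) / sig_2,
    loc=mu_2, scale=sig_2, size=(N, S)
)
mean_sim = sim_vals.mean(axis=0)  # mean of each of the S simulated data sets
var_sim = sim_vals.var(axis=0)    # variance of each of the S simulated data sets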
print("Variance of test scores in each simulation:")
752
+
print(var_sim)
753
+
mean_mod = mean_sim.mean()
754
+
var_mod = var_sim.mean()
755
+
print("")
756
+
print('Estimated model mean (avg. of means) =', mean_mod)
757
+
print('Estimated model variance (avg. of variances) =', var_mod)
758
+
```
759
+
760
+
Our SMM model moments $\hat{m}(\tilde{scores}_i|\mu,\sigma)$ are an estimate of the true model moments, which in the GMM case we computed by integrating against the PDF of the truncated normal distribution. We obtain the SMM model moments by simulating the data $S$ times and taking the average of the simulated data moments across the simulations as our estimator of the model moments.

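In symbols, writing $\tilde{scores}_s$ for the $s$th simulated data set, this averaging can be expressed as

$$
  \hat{m}\left(\tilde{scores}|\mu,\sigma\right) = \frac{1}{S}\sum_{s=1}^{S} m\left(\tilde{scores}_s|\mu,\sigma\right).
$$
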
Define the error vector as the vector of percent deviations of the model moments from the data moments.
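
With data moments $m(scores)$ and simulated model moments $\hat{m}(\tilde{scores}|\mu,\sigma)$, one way to write this percent-deviation error vector is

$$
  e(\tilde{scores}, scores|\mu,\sigma) = \frac{\hat{m}(\tilde{scores}|\mu,\sigma) - m(scores)}{m(scores)},
$$

where the division is element-by-element across the moments.
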
print("Average of mean test scores across simulations is:", mean_mod)
900
+
print("")
901
+
print("Average variance of test scores across simulations is:", var_mod)
902
+
print("")
903
+
print("Criterion function value is:", crit_test[0][0])
904
+
```
905
+
906
+
Now we can perform the SMM estimation using SciPy's `minimize` function to choose the values of $\mu$ and $\sigma$ of the truncated normal distribution that best fit the data by minimizing the criterion function. Let's start with the identity matrix as our estimate of the optimal weighting matrix, $W = I$.

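To make the mechanics concrete, here is a minimal, self-contained sketch of what that minimization can look like with the identity weighting matrix. Every name in it (`smm_criterion`, `unif_vals`, `data_moms`, `scores`, `results_sketch`, the starting guess) is an illustrative assumption, not necessarily the code used in the chapter's own cell below.

```{code-cell} ipython3
:tags: []

import numpy as np
import scipy.stats as sts
import scipy.optimize as opt

def smm_criterion(params, unif_vals, data_moms, W_hat, cut_lb, cut_ub):
    mu, sigma = params
    # Common random numbers: transform a fixed matrix of uniforms into
    # truncated normal draws so the criterion is smooth in (mu, sigma).
    a, b = (cut_lb - mu) / sigma, (cut_ub - mu) / sigma
    sim_vals = sts.truncnorm.ppf(unif_vals, a, b, loc=mu, scale=sigma)
    model_moms = np.array([sim_vals.mean(axis=0).mean(),
                           sim_vals.var(axis=0).mean()])
    err = ((model_moms - data_moms) / data_moms).reshape(-1, 1)
    return (err.T @ W_hat @ err).item()

data_moms = np.array([scores.mean(), scores.var()])  # 'scores' = Econ 381 data (assumed name)
unif_vals = np.random.uniform(size=(N, S))           # fixed uniforms reused for every (mu, sigma)
params_init = np.array([400.0, 70.0])                # illustrative starting guess
W_I = np.eye(2)                                      # identity weighting matrix

results_sketch = opt.minimize(
    smm_criterion, params_init, args=(unif_vals, data_moms, W_I, cut_lb, cut_ub),
    method='L-BFGS-B', bounds=((1e-10, None), (1e-10, None))
)
print(results_sketch.x)
```
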
```{code-cell} ipython3
print('Model mean 1 =', mean_model_1, ', Model variance 1 =', var_model_1)
print("")
print('Error vector 1 =', err_1)
print("")
print("Results from scipy.optimize.minimize:")
print(results1_1)
```

Let's plot the PDF implied by these SMM estimates $(\hat{\mu}_{SMM},\hat{\sigma}_{SMM})=(612.337, 197.264)$ against the histogram of the data in {numref}`Figure %s <FigSMM_EconScoreSMM1>` below.

Figure `FigSMM_EconScoreSMM1`: SMM-estimated PDF function and data histogram, 2 moments, identity weighting matrix, Econ 381 scores (2011-2012)

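A rough sketch of how a figure like this can be produced follows; `scores` is an assumed name for the test-score data array, and the point estimates are the values reported above.

```{code-cell} ipython3
:tags: []

import numpy as np
import scipy.stats as sts
import matplotlib.pyplot as plt

mu_SMM1, sig_SMM1 = 612.337, 197.264  # SMM point estimates reported above
a = (cut_lb - mu_SMM1) / sig_SMM1     # standardized lower bound
b = (cut_ub - mu_SMM1) / sig_SMM1     # standardized upper bound

plt.hist(scores, bins=30, density=True, edgecolor='black', label='Data')
x_vals = np.linspace(cut_lb, cut_ub, 500)
plt.plot(x_vals, sts.truncnorm.pdf(x_vals, a, b, loc=mu_SMM1, scale=sig_SMM1),
         linewidth=2, label='Truncated normal PDF, SMM estimates')
plt.xlabel('Test score')
plt.ylabel('Density')
plt.legend()
plt.show()
```
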
That looks just like the maximum likelihood estimate from the {ref}`Chap_MaxLikeli` chapter. Let's see what the criterion function looks like for different values of $\mu$ and $\sigma$.