_A scatter plot of MOS transistor count per microprocessor every two years with a red line for the ordinary least squares prediction and an orange line for Moore's law._
@@ -346,19 +349,20 @@ y = np.linspace(2016.5, 2017.5)
ax.set_title("COVID-19 cumulative cases from Jan 21 to Feb 3 2020")
```
The graph has a strange shape from January 24th to February 1st. It would be interesting to know where this data comes from. If we look at the `locations` array we extracted from the `.csv` file, we can see that we have two columns, where the first would contain regions and the second would contain the name of the country. However, only the first few rows contain data for the first column (province names in China). Following that, we only have country names. So it would make sense to group all the data from China into a single row. For this, we'll select from the `nbcases` array only the rows for which the second entry of the `locations` array corresponds to China. Next, we'll use the [numpy.sum](https://numpy.org/devdocs/reference/generated/numpy.sum.html#numpy.sum) function to sum all the selected rows (`axis=0`). Note also that row 35 corresponds to the total counts for the whole country for each date. Since we want to calculate the sum ourselves from the provinces data, we have to remove that row first from both `locations` and `nbcases`:
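The selection-and-sum step described above can be sketched as follows. This is a minimal illustration on hypothetical stand-in data, not the tutorial's actual arrays; in the tutorial, the country-wide total row (index 35) would be removed with `np.delete` before summing:

```python
import numpy as np

# Toy stand-ins for the arrays parsed from the .csv file (hypothetical data).
locations = np.array([
    ["Hubei", "China"],
    ["Guangdong", "China"],
    ["", "Japan"],
])
nbcases = np.array([
    [10, 20, 30],
    [1, 2, 3],
    [5, 6, 7],
])

# Select the rows whose second column is "China", then sum them date-wise.
china_mask = locations[:, 1] == "China"
china_total = nbcases[china_mask].sum(axis=0)
print(china_total)  # [11 22 33]
```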
@@ -183,9 +185,10 @@ Let's try and see what the data looks like excluding the first row (data from th
closely:

```{code-cell}
-plt.plot(dates, nbcases_ma[1:].T, "--")
-plt.xticks(selected_dates, dates[selected_dates])
-plt.title("COVID-19 cumulative cases from Jan 21 to Feb 3 2020")
+ax.set_title("COVID-19 cumulative cases from Jan 21 to Feb 3 2020 - Mainland China")
```
It's clear that masked arrays are the right solution here. We cannot represent the missing data without mischaracterizing the evolution of the curve.
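The idea can be illustrated with a minimal sketch (hypothetical data, with `-1` standing in for missing reports): masking the sentinel values keeps them out of every computation instead of letting them distort the curve.

```python
import numpy as np

# Hypothetical cumulative counts where -1 marks a missing report.
raw = np.array([10, 15, -1, -1, 40, 55])

# Mask the sentinel values rather than treating them as real counts.
counts = np.ma.masked_values(raw, -1)
print(counts.max())       # 55, unaffected by the -1 placeholders
print(counts.mask.sum())  # 2 entries are masked
```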
@@ -271,21 +275,25 @@ package to create a cubic polynomial model that fits the data as best as possibl
```{code-cell}
t = np.arange(len(china_total))
model = np.polynomial.Polynomial.fit(t[~china_total.mask], valid, deg=3)
-plt.plot(t, china_total)
-plt.plot(t, model(t), "--")
+
+fig, ax = plt.subplots()
+ax.plot(t, china_total)
+ax.plot(t, model(t), "--")
```
This plot is not very readable, since the two lines overlap, so let's summarize it in a more elaborate plot. We'll plot the real data when available, and show the cubic fit for unavailable data, using this fit to compute an estimate of the observed number of cases on January 28th 2020, 7 days after the beginning of the records:
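As a self-contained check of that estimation step, here is a hedged sketch on a synthetic masked series standing in for `china_total` (the tutorial fits the actual case counts); the fitted polynomial is simply evaluated at the missing day:

```python
import numpy as np

# Toy masked series: days 3 and 4 are "missing" (values are hypothetical,
# chosen to lie exactly on a cubic so the fit is easy to verify).
china_total = np.ma.masked_invalid(
    [0.0, 1.0, 8.0, np.nan, np.nan, 125.0, 216.0, 343.0]
)
t = np.arange(len(china_total))

# Fit the cubic on valid entries only, then evaluate anywhere.
valid = china_total[~china_total.mask]
model = np.polynomial.Polynomial.fit(t[~china_total.mask], valid, deg=3)
print(float(model(7)))  # ≈ 343.0, since the toy series is exactly cubic
```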
content/tutorial-static_equilibrium.md: 2 additions & 6 deletions
@@ -97,8 +97,7 @@ d3.quiver(x, y, z, u, v, w, color="r", label="forceA")
u, v, w = forceB
d3.quiver(x, y, z, u, v, w, color="b", label="forceB")

-plt.legend()
-plt.show()
+d3.legend()
```
There are two forces emanating from a single point. In order to simplify this problem, you can add them together to find the sum of forces. Note that both `forceA` and `forceB` are three-dimensional vectors, represented by NumPy as arrays with three components. Because NumPy is meant to simplify and optimize operations between vectors, you can easily compute the sum of these two vectors as follows:
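That componentwise addition can be sketched in two lines (the force values here are hypothetical placeholders, not the tutorial's):

```python
import numpy as np

# Hypothetical 3D force vectors.
forceA = np.array([1.0, 0.0, 0.0])
forceB = np.array([0.0, 1.0, 0.0])

# NumPy adds vectors componentwise, so the resultant is one expression.
forceSum = forceA + forceB
print(forceSum)  # [1. 1. 0.]
```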
@@ -129,8 +128,7 @@ d3.quiver(x, y, z, u, v, w, color="b", label="forceB")
u, v, w = forceC
d3.quiver(x, y, z, u, v, w, color="g", label="forceC")

-plt.legend()
-plt.show()
+d3.legend()
```
However, the goal is equilibrium.
@@ -172,8 +170,6 @@ x, y, z = np.array([0, 0, 0])
u, v, w = forceA + forceB + R # add them all together for sum of forces
d3.quiver(x, y, z, u, v, w)
-
-plt.show()
```
The empty graph signifies that there are no outlying forces. This denotes a system in equilibrium.
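The equilibrium condition can be verified numerically: a reaction `R` chosen as the negative of the applied forces makes the net force the zero vector. A minimal sketch with hypothetical force values:

```python
import numpy as np

# Hypothetical applied forces; the reaction R balances them exactly.
forceA = np.array([1.0, 2.0, 3.0])
forceB = np.array([-4.0, 0.5, 1.0])
R = -(forceA + forceB)

# In equilibrium, the net force is the zero vector (hence an empty quiver).
net = forceA + forceB + R
print(net)  # [0. 0. 0.]
```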
content/tutorial-svd.md: 13 additions & 12 deletions
@@ -74,8 +74,8 @@ import matplotlib.pyplot as plt
```

```{code-cell}
-plt.imshow(img)
-plt.show()
+fig, ax = plt.subplots()
+ax.imshow(img)
```
### Shape, axis and array properties
@@ -196,8 +196,8 @@ To see if this makes sense in our image, we should use a colormap from `matplotl
In our case, we are approximating the grayscale portion of the image, so we will use the colormap `gray`:

```{code-cell}
-plt.imshow(img_gray, cmap="gray")
-plt.show()
+fig, ax = plt.subplots()
+ax.imshow(img_gray, cmap="gray")
```
Now, applying the [linalg.svd](https://numpy.org/devdocs/reference/generated/numpy.linalg.svd.html#numpy.linalg.svd) function to this matrix, we obtain the following decomposition:
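The decomposition itself is elided in this diff view; as a minimal sketch of what `np.linalg.svd` returns, here is the compact form (`full_matrices=False`) applied to a small stand-in matrix rather than the actual image:

```python
import numpy as np

# A small stand-in for the grayscale image matrix.
img_gray = np.arange(12.0).reshape(3, 4)

# Compact SVD: U and Vt shrink to match the number of singular values.
U, s, Vt = np.linalg.svd(img_gray, full_matrices=False)
print(U.shape, s.shape, Vt.shape)  # (3, 3) (3,) (3, 4)

# The three factors reconstruct the original matrix.
print(np.allclose(img_gray, U @ np.diag(s) @ Vt))  # True
```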
@@ -259,8 +259,8 @@ np.allclose(img_gray, U @ Sigma @ Vt)
To see if an approximation is reasonable, we can check the values in `s`:

```{code-cell}
-plt.plot(s)
-plt.show()
+fig, ax = plt.subplots()
+ax.plot(s)
```
In the graph, we can see that although we have 768 singular values in `s`, most of those (after the 150th entry or so) are pretty small. So it might make sense to use only the information related to the first (say, 50) *singular values* to build a more economical approximation to our image.
Note that we had to use only the first `k` rows of `Vt`, since all other rows would be multiplied by the zeros corresponding to the singular values we eliminated from this approximation.
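The truncation step can be sketched as follows, on a small random matrix standing in for `img_gray`: keep the first `k` columns of `U`, the first `k` singular values, and the first `k` rows of `Vt`.

```python
import numpy as np

rng = np.random.default_rng(0)
img_gray = rng.random((8, 10))  # stand-in for the grayscale image

U, s, Vt = np.linalg.svd(img_gray, full_matrices=False)

# Rank-k approximation: first k columns of U, values of s, rows of Vt.
k = 3
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(approx.shape)  # (8, 10), same shape as the original
```

By the Eckart–Young theorem, this truncated product is the best rank-`k` approximation in the spectral norm, with error equal to the first discarded singular value `s[k]`.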
```{code-cell}
-plt.imshow(approx, cmap="gray")
-plt.show()
+fig, ax = plt.subplots()
+ax.imshow(approx, cmap="gray")
```
Now, you can go ahead and repeat this experiment with other values of `k`, and each of your experiments should give you a slightly better (or worse) image depending on the value you choose.
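A quick way to see the trade-off without eyeballing images is to measure the reconstruction error for several values of `k`; a hedged sketch on a random stand-in matrix (the error shrinks monotonically as `k` grows):

```python
import numpy as np

rng = np.random.default_rng(1)
img_gray = rng.random((16, 20))  # stand-in for the grayscale image
U, s, Vt = np.linalg.svd(img_gray, full_matrices=False)

# Frobenius-norm reconstruction error for increasing k.
errors = [
    np.linalg.norm(img_gray - U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :])
    for k in (1, 5, 10, 16)
]
print(errors)  # strictly decreasing; ~0 once k reaches full rank
```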
@@ -362,8 +362,9 @@ Since `imshow` expects values in the range, we can use `clip` to excise the floa
which is not the right shape for showing the image. Finally, reordering the axes back to our original shape of `(768, 1024, 3)`, we can see our approximation:
Even though the image is not as sharp, using a small number of `k` singular values (compared to the original set of 768 values), we can recover many of the distinguishing features from this image.
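Those two clean-up steps (clipping and reordering axes) can be sketched on a small hypothetical channels-first array; the real tutorial works with shape `(3, 768, 1024)`:

```python
import numpy as np

# Hypothetical per-channel approximation, channels first; values may
# stray slightly outside [0, 1] after the low-rank reconstruction.
approx_img = np.random.default_rng(2).normal(0.5, 0.3, (3, 8, 10))

# Clip stray values into [0, 1], then move the channel axis back last.
approx_img = np.clip(approx_img, 0, 1)
approx_img = np.transpose(approx_img, (1, 2, 0))
print(approx_img.shape)  # (8, 10, 3), the layout imshow expects
```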