diff --git a/C3/W3/ungraded_labs/C3_W3_Lab_1_single_layer_LSTM.ipynb b/C3/W3/ungraded_labs/C3_W3_Lab_1_single_layer_LSTM.ipynb index a96552dc..9a72f372 100755 --- a/C3/W3/ungraded_labs/C3_W3_Lab_1_single_layer_LSTM.ipynb +++ b/C3/W3/ungraded_labs/C3_W3_Lab_1_single_layer_LSTM.ipynb @@ -96,7 +96,7 @@ "source": [ "## Build and compile the model\n", "\n", - "Now you will build the model. You will simply swap the `Flatten` or `GlobalAveragePooling1D` from before with an `LSTM` layer. Moreover, you will nest it inside a [Biderectional](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Bidirectional) layer so the passing of the sequence information goes both forwards and backwards. These additional computations will naturally make the training go slower than the models you built last week. You should take this into account when using RNNs in your own applications." + "Now you will build the model. You will simply swap the `Flatten` or `GlobalAveragePooling1D` from before with an `LSTM` layer. Moreover, you will nest it inside a [Bidirectional](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Bidirectional) layer so the passing of the sequence information goes both forwards and backwards. These additional computations will naturally make the training go slower than the models you built last week. You should take this into account when using RNNs in your own applications." ] }, {
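The corrected cell describes swapping the earlier `Flatten`/`GlobalAveragePooling1D` layer for an `LSTM` nested inside a `Bidirectional` wrapper. A minimal sketch of such a model, assuming hypothetical hyperparameters (`vocab_size`, `embedding_dim`, `lstm_units` — pick values to match your tokenizer and dataset), might look like:

```python
import tensorflow as tf

# Hypothetical hyperparameters -- adjust to match your dataset and tokenizer.
vocab_size = 10000
embedding_dim = 64
lstm_units = 64

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    # The Bidirectional wrapper runs the LSTM over the sequence both
    # forwards and backwards, concatenating the two final states, so its
    # output dimension is 2 * lstm_units. This is also why training is
    # slower than the Flatten/GlobalAveragePooling1D models.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm_units)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
```

Note that only the layer between the `Embedding` and the `Dense` head changes relative to last week's models; the rest of the pipeline (tokenization, padding, compilation) stays the same.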