README.md: 3 additions & 3 deletions
@@ -228,9 +228,9 @@ Navigate (```cd```) to the root of the toolbox ```[YOUR_CIRTORCH_ROOT]```.
<details>
<summary><b>Testing our pretrained networks with whitening learned end-to-end</b></summary><br/>
- Pretrained networks with whitening learned end-to-end are provided, trained both on `retrieval-SfM-120k (rSfM120k)` and [`Google Landmarks 2018 (GL18)`](https://www.kaggle.com/google/google-landmarks-dataset) train datasets.
+ Pretrained networks with whitening learned end-to-end are provided, trained both on `retrieval-SfM-120k (rSfM120k)` and [`google-landmarks-2018 (gl18)`](https://www.kaggle.com/google/google-landmarks-dataset) train datasets.
Whitening is learned end-to-end during the network training, so there is no need to compute it as a post-processing step, although one can do that, as well.
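For reference, the post-processing alternative mentioned here (learning a whitening transform from a matrix of descriptors and applying it afterwards) can be sketched with plain-NumPy PCA whitening. This is a minimal illustration under that assumption, not the toolbox's own implementation:

```python
import numpy as np

def learn_whitening(descriptors, eps=1e-9):
    """Fit PCA whitening on an (N, D) matrix of descriptors.

    Illustrative post-processing sketch; the networks discussed in the
    README learn whitening end-to-end as an FC layer instead.
    """
    mean = descriptors.mean(axis=0)
    cov = np.cov(descriptors - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    # Scale each eigenvector by 1/sqrt(eigenvalue) -> whitening projection (D, D)
    P = eigvec / np.sqrt(eigval + eps)
    return mean, P

def apply_whitening(desc, mean, P):
    """Project a descriptor with the learned whitening and L2-normalize it."""
    d = (desc - mean) @ P
    return d / np.linalg.norm(d)
```

After projection, the whitened descriptors have (up to numerical precision) identity covariance, which is the defining property of the transform.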
- For example, multi-scale evaluation of ResNet101 with GeM and end-to-end whitening trained on the `Google Landmarks 2018 (GL18)` dataset using high-resolution images and a triplet loss is performed with the following script:
+ For example, multi-scale evaluation of ResNet101 with GeM and end-to-end whitening trained on the `google-landmarks-2018 (gl18)` dataset using high-resolution images and a triplet loss is performed with the following script:
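The evaluation script itself is not shown in this diff, but the two ideas it relies on, GeM (generalized-mean) pooling and averaging of L2-normalized descriptors over several image scales, can be sketched in plain NumPy. The `image_to_map` callable below stands in for a hypothetical backbone and is not part of the toolbox:

```python
import numpy as np

def gem(feature_map, p=3.0, eps=1e-6):
    """Generalized-mean (GeM) pooling over the spatial dims of a (C, H, W) map.

    p=1 reduces to average pooling; large p approaches max pooling.
    """
    x = np.clip(feature_map, eps, None)  # GeM is defined on positive activations
    return (x ** p).mean(axis=(1, 2)) ** (1.0 / p)

def multiscale_descriptor(image_to_map, image,
                          scales=(1.0, 2 ** -0.5, 0.5), p=3.0):
    """Average L2-normalized GeM descriptors extracted at several image scales.

    `image_to_map(image, scale)` is a hypothetical stand-in for running a
    CNN backbone on a rescaled image and returning a (C, H, W) feature map.
    """
    descs = []
    for s in scales:
        fmap = image_to_map(image, s)
        d = gem(fmap, p=p)
        descs.append(d / np.linalg.norm(d))
    d = np.mean(descs, axis=0)
    return d / np.linalg.norm(d)      # final descriptor is L2-normalized
```

The three scales `(1, 1/sqrt(2), 1/2)` mirror a common multi-scale evaluation choice; the resulting unit-norm descriptor can be compared with a plain dot product.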
- Added the [MIT license](https://github.com/filipradenovic/cnnimageretrieval-pytorch/blob/master/LICENSE)
- Added multi-scale performance on `roxford5k` and `rparis6k` for new pre-trained networks with end-to-end whitening, trained on both `retrieval-SfM-120k` and `Google Landmarks 2018` train datasets
+ Added multi-scale performance on `roxford5k` and `rparis6k` for new pre-trained networks with end-to-end whitening, trained on both `retrieval-SfM-120k` and `google-landmarks-2018` train datasets
- Added a new example test script without post-processing, for networks that are trained in a fully end-to-end manner, with whitening as FC layer learned during training
- Added a few things to the train example: GeMmp pooling, triplet loss, a small trick to handle really large batches
- Added more pre-computed whitening options in `imageretrievalnet`