Hello, thank you for the excellent work!
I am trying to train Patch-NetVLAD on my custom dataset, following the tips mentioned in #8.
I reused these functions from msls.py:
- `__getitem__()`
- `update_subcache()`
- `new_epoch()`
I also rewrote the `__init__()` function to fill in `self.qIdx`, `self.pIdx`, `self.nonNegIdx`, `self.qImages` and `self.dbImages`, roughly as in the sketch below.
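
For context, this is roughly the kind of `__init__()` I mean (a simplified sketch, not my exact code): it assumes UTM/GPS coordinates are available for both queries and database images, and the names `CustomTriplets`, `query_dir`, `db_dir`, `query_utm`, `db_utm` as well as the 10 m / 25 m thresholds are just placeholders. The exact field conventions should of course follow msls.py.

```python
import os
import numpy as np
from sklearn.neighbors import NearestNeighbors
from torch.utils.data import Dataset


class CustomTriplets(Dataset):
    """Minimal stand-in that only fills the fields used by the reused
    __getitem__ / update_subcache / new_epoch from msls.py."""

    def __init__(self, query_dir, db_dir, query_utm, db_utm,
                 posDistThr=10, nonNegDistThr=25):
        super().__init__()

        # absolute paths of all query and database images
        self.qImages = np.array(sorted(
            os.path.join(query_dir, f) for f in os.listdir(query_dir)))
        self.dbImages = np.array(sorted(
            os.path.join(db_dir, f) for f in os.listdir(db_dir)))

        # ground truth via metric (UTM) coordinates: database images within
        # posDistThr metres are positives, anything within nonNegDistThr
        # metres must not be sampled as a negative
        knn = NearestNeighbors(n_jobs=-1).fit(db_utm)
        pos = knn.radius_neighbors(query_utm, radius=posDistThr,
                                   return_distance=False)
        non_neg = knn.radius_neighbors(query_utm, radius=nonNegDistThr,
                                       return_distance=False)

        # keep only queries that have at least one positive;
        # qIdx[i], pIdx[i] and nonNegIdx[i] stay aligned
        self.qIdx, self.pIdx, self.nonNegIdx = [], [], []
        for q in range(len(self.qImages)):
            if len(pos[q]) > 0:
                self.qIdx.append(q)
                self.pIdx.append(np.sort(pos[q]))
                self.nonNegIdx.append(np.sort(non_neg[q]))
        self.qIdx = np.array(self.qIdx)
```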
Around 3200 triplets were used to train the network. However, after only one epoch the loss had already dropped below 0.1 and the recall@1 computed during validation was already 1.0. These values stayed the same for the next several epochs.
I am not sure whether this training behaviour is normal and wonder whether anyone has encountered this before.
I would appreciate any help or suggestions for training Patch-NetVLAD on a custom dataset. Thanks!