Description
Would it be possible to parallelise the indexing process, or at least parts of it, to improve the overall speed?
Running this over a 6.4 GB repository containing 275,000 files, on Windows, the process is bottlenecked on neither CPU nor disk IO, yet it takes over an hour. Running two index commands on two repos on the same NVMe SSD in parallel results in disk IO of around 20% and barely taxes one core; memory usage is only around 400 MB per process.
I suspect that sequentially opening each file, reading and processing its contents, storing the results, and then moving on to the next file is causing heavy throughput limitations when there are many thousands of small files. A rough sketch of the kind of concurrency I mean is below.
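For illustration only, here is a minimal worker-pool sketch adapted from generic Go examples. I'm assuming a hypothetical `indexFile` step as a stand-in for the tool's actual per-file work (I don't know the project's internals); the point is just that a fixed pool of goroutines pulling paths from a channel would keep many small-file reads in flight at once:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"runtime"
	"sync"
)

// indexFile is a placeholder for whatever per-file work the indexer
// actually does; the real implementation would tokenize the contents
// and write results into the index.
func indexFile(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	_ = data // process and store the contents here
	return nil
}

func main() {
	root := os.Args[1]
	paths := make(chan string, 256)

	// Start one worker per CPU; each pulls file paths off the channel
	// so reads and processing for many small files overlap.
	var wg sync.WaitGroup
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range paths {
				if err := indexFile(p); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}()
	}

	// Walk the tree on this goroutine and fan file paths out to the pool.
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err == nil && !d.IsDir() {
			paths <- p
		}
		return nil
	})
	close(paths)
	wg.Wait()
}
```

Even if writing to the index itself has to stay single-threaded, fanning out just the open/read/process stage like this and serializing only the final store might recover most of the throughput.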
I don't know enough Go to implement this myself, unfortunately. Is this something you could potentially investigate?