ExponentialBackoffPolicy.shouldRetry() uses a completely deterministic wait time. Because there is no jitter, multiple parallel processes launched at more-or-less the same time all retry at the same instant, so they keep colliding and failing together. While this case may seem contrived, it is exactly what happens when we launch hundreds of tasks in Hadoop that perform parallel reads on a set of "part" files stored in ADL.
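To make the lockstep behavior concrete, here is a minimal standalone sketch (the base delay and doubling factor are illustrative assumptions, not the policy's actual parameters): every task that has failed the same number of times computes an identical wait, so all of them wake up and retry simultaneously.

import java.util.concurrent.TimeUnit;

public class LockstepDemo {
    public static void main(String[] args) throws InterruptedException {
        int baseMs = 100; // illustrative base delay, not the SDK's value
        for (int attempt = 0; attempt < 4; attempt++) {
            // Deterministic: every parallel task computes this same value,
            // so they all retry at the same instant and collide again.
            int waitMs = baseMs * (1 << attempt);
            System.out.println("attempt " + attempt + ": waiting " + waitMs + " ms");
            TimeUnit.MILLISECONDS.sleep(waitMs);
        }
    }
}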
Suggested solution:
private void wait(int milliseconds) {
    if (milliseconds <= 0) {
        return;
    }
    try {
        // Jittered sleep in [milliseconds/2, milliseconds] so that tasks
        // launched together do not all retry at the same instant.
        Thread.sleep(milliseconds / 2 + randomInt(milliseconds / 2 + 1));
    } catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
    }
}
where randomInt(bound) could be Random.nextInt(bound), provided each process seeds its Random instance with something sufficiently unique that parallel tasks do not draw the same jitter sequence.
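For illustration, one way to implement the randomInt helper named above (a sketch, not the SDK's existing code) is java.util.concurrent.ThreadLocalRandom, which is seeded per thread automatically and so sidesteps the seeding concern entirely:

import java.util.concurrent.ThreadLocalRandom;

// Hypothetical helper: uniformly distributed int in [0, bound).
// ThreadLocalRandom needs no explicit seed and is not shared across
// threads, so simultaneously launched tasks draw independent jitter.
private static int randomInt(int bound) {
    return ThreadLocalRandom.current().nextInt(bound);
}

Note that bound is always at least 1 here, since wait() returns early when milliseconds <= 0, so nextInt never sees a non-positive argument.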