I observed a problematic behaviour of MultiThreadedAugmenter while using the nnUNet library on a cluster.
Upon completing the training function, the code would not exit and the job would run indefinitely: two threads remained perpetually in sleep() at line 125 of MultiThreadedAugmenter, waiting for the output queue to be emptied.
Do you have an idea of what may be causing this behaviour?
The libraries involved are as follows:
batchgenerators 0.25
nndet-gia 0.1.6
nnunet 1.7.1
pytorch-lightning 1.4.2
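For context, the symptom matches a general Python pattern: a non-daemon worker thread blocked on a queue keeps the process alive after the main function returns. Below is a minimal sketch of that pattern and the sentinel-based shutdown that avoids it — this is plain illustrative code, not nnUNet or batchgenerators internals.

```python
import queue
import threading

def worker(in_q, out_q):
    # Consume items until a None sentinel arrives. Without the sentinel
    # (or marking the thread as a daemon), this loop would block forever
    # on in_q.get() and keep the process from exiting -- the same kind of
    # hang described above.
    while True:
        item = in_q.get()
        if item is None:
            break
        out_q.put(item * 2)

in_q, out_q = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(in_q, out_q))
t.start()

for i in range(3):
    in_q.put(i)
in_q.put(None)          # sentinel: tells the worker to exit cleanly
t.join(timeout=5)       # returns promptly because the worker saw the sentinel
assert not t.is_alive()

results = sorted(out_q.get() for _ in range(3))
print(results)  # [0, 2, 4]
```

If MultiThreadedAugmenter exposes an explicit shutdown method (batchgenerators 0.25 appears to have a `_finish()` method, but please verify against the source), calling it on the training and validation loaders after training may be a workaround until the root cause is found.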