Retrying tasks with countdowns can cause those tasks to pile up in the _pending queue. For argument's sake, suppose there are 10k such tasks, all with the same ETA, in the _pending queue while flush_every is only 10. When the ETA is reached and 10 messages arrive from the broker, the list of SimpleRequest objects passed to the task will have length 10010. I understand that flush_every=10 is not a promise that the batch won't be larger, but without enforcing some kind of maximum on the batch size, it becomes difficult to reason about how long a batch is expected to take, which is important when setting SQS visibility timeouts, for example.
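To illustrate the kind of cap I have in mind, here is a minimal sketch of splitting an oversized pending list into bounded batches. Note that `MAX_BATCH_SIZE` and `bounded_batches` are hypothetical names for illustration only; celery-batches does not currently expose such an option.

```python
from itertools import islice

FLUSH_EVERY = 10       # the configured flush_every value
MAX_BATCH_SIZE = 100   # hypothetical hard cap, not a real celery-batches setting

def bounded_batches(pending, max_batch_size=MAX_BATCH_SIZE):
    """Yield chunks of the pending queue, each no larger than max_batch_size."""
    it = iter(pending)
    while True:
        chunk = list(islice(it, max_batch_size))
        if not chunk:
            return
        yield chunk

# Simulate 10k ETA-delayed requests plus 10 fresh messages landing at once.
pending = [f"req-{i}" for i in range(10_010)]
batches = list(bounded_batches(pending))

# Every batch is bounded, so worst-case processing time per batch is predictable.
assert all(len(b) <= MAX_BATCH_SIZE for b in batches)
assert sum(len(b) for b in batches) == 10_010
```

With a bound like this, the SQS visibility timeout only has to cover `MAX_BATCH_SIZE` requests rather than an unbounded backlog.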