
Oversized batches of retried tasks  #84

@javaccar

Description


Retrying tasks with countdowns can cause those tasks to pile up in the _pending queue. For argument's sake, say there are 10k such tasks (all with the same ETA) in the _pending queue but flush_every is only 10. When the ETA is reached and 10 messages are received from the broker, the list of SimpleRequest objects passed to the task will be of length 10010. I understand that flush_every=10 is not a promise that the batch size won't be larger, but without enforcing some kind of maximum on the batch size it becomes difficult to reason about how long a batch is expected to take, which matters when setting SQS visibility timeouts, for example.
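To make the scenario concrete, here is a minimal, self-contained sketch of the flush behavior described above. It does not use celery-batches itself; `flush` and `max_batch` are hypothetical stand-ins showing how queued retries plus fresh broker messages produce one oversized batch, and how a cap could bound each batch to a predictable size:

```python
from collections import deque

FLUSH_EVERY = 10

def flush(pending, incoming, max_batch=None):
    """Simulated flush: drain queued retries plus newly received broker
    messages into batches. `max_batch` is a hypothetical option (not part
    of celery-batches) illustrating how a maximum batch size could work."""
    combined = list(pending) + list(incoming)
    pending.clear()
    if max_batch is None:
        # Current behavior: everything goes out in a single batch,
        # regardless of flush_every.
        return [combined]
    # Proposed behavior: split into chunks of at most max_batch so the
    # runtime of each batch stays predictable (e.g. for SQS visibility
    # timeouts).
    return [combined[i:i + max_batch]
            for i in range(0, len(combined), max_batch)]

# 10k retried tasks sharing the same ETA, plus 10 fresh broker messages.
pending = deque(f"retry-{i}" for i in range(10_000))
incoming = [f"new-{i}" for i in range(FLUSH_EVERY)]

(uncapped,) = flush(deque(pending), incoming)
print(len(uncapped))  # 10010, far larger than flush_every=10

capped = flush(pending, incoming, max_batch=FLUSH_EVERY)
print(len(capped), max(len(b) for b in capped))  # 1001 batches, each <= 10
```

With a cap, a worker that sizes its visibility timeout for `flush_every` items would never receive a batch it cannot finish in time; the surplus would simply be flushed across subsequent batches.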
