When a db write fails because MaxSize is reached, the db client should still accept future reads and writes #1108

@mattsains

Description

I found a bug in my work introducing a maximum database size feature (#929).

If a write fails because the maximum size of the database is exceeded, it correctly returns an error explaining the constraint.

This is tested here: https://github.com/etcd-io/bbolt/blob/main/db_test.go#L1513

However, a write failing for this reason should not invalidate the entire db client; future reads and writes should still work as long as the size limit is not exceeded again.

Instead, the current behaviour is that the database client gets into a bad state because of the incomplete mmap operation. Before returning the MaxSizeReached error, we should re-mmap the db to its original size so that operations can continue.
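The restore-on-failure idea can be sketched as follows. This is a self-contained toy model, not bbolt's actual code: `grow`, the field names, and the error variable are all illustrative stand-ins for the real mmap/munmap path.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrMaxSizeReached mirrors the error described in the issue;
// the real name in bbolt may differ.
var ErrMaxSizeReached = errors.New("maximum database size reached")

// db is a toy stand-in for the bbolt DB handle: mappedSize models the
// current mmap length, maxSize the configured limit.
type db struct {
	mappedSize int
	maxSize    int
}

// grow tries to extend the mapping to newSize. On failure it restores
// the original mapping size so the handle stays usable — the behaviour
// the issue asks for (the real fix would re-mmap the file).
func (d *db) grow(newSize int) error {
	orig := d.mappedSize
	d.mappedSize = newSize // simulate the (partial) mmap growth
	if d.maxSize > 0 && newSize > d.maxSize {
		d.mappedSize = orig // roll back instead of leaving a bad state
		return ErrMaxSizeReached
	}
	return nil
}

func main() {
	d := &db{mappedSize: 4096, maxSize: 8192}
	if err := d.grow(16384); err != nil {
		fmt.Println("grow failed:", err)
	}
	// The handle is still consistent: the mapping is back at 4096,
	// and a growth within the limit succeeds.
	fmt.Println("mapped size after failure:", d.mappedSize)
	fmt.Println("small grow error:", d.grow(8192)) // prints "<nil>"
}
```

The key point is that the rollback happens before the error is returned, so the caller sees the constraint violation but the handle never observes the oversized mapping.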

To confirm that this bug is fixed, the above test should perform a final, small write which should be verified to succeed.
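The shape of that verification, again against a self-contained toy store rather than the real bbolt API (`put`, `maxSize`, and the error text are illustrative): an oversized write fails, yet reads and a final small write on the same handle still succeed.

```go
package main

import (
	"errors"
	"fmt"
)

// A toy key/value store with a size cap, standing in for a bbolt DB
// opened with the maximum-size option from #929 (names are illustrative).
var errMaxSize = errors.New("maximum database size reached")

type store struct {
	data    map[string]string
	used    int
	maxSize int
}

func (s *store) put(k, v string) error {
	if s.used+len(k)+len(v) > s.maxSize {
		// Reject the write but leave the store fully usable —
		// the invariant the issue's test should assert.
		return errMaxSize
	}
	s.data[k] = v
	s.used += len(k) + len(v)
	return nil
}

func main() {
	s := &store{data: map[string]string{}, maxSize: 16}
	fmt.Println(s.put("k1", "small"))                 // <nil>
	fmt.Println(s.put("k2", "far-too-large-a-value")) // max-size error
	// The failed write must not poison the handle:
	fmt.Println(s.data["k1"])      // reads still work: "small"
	fmt.Println(s.put("k3", "ok")) // final small write succeeds: <nil>
}
```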
