Looking at the code in fdbclient (Transaction::commitMutations()), it appears that starting from API version 300+, the total transaction size is the sum of the mutated keys/values (writes) plus the size of all the read and write conflict ranges.
A write with key 'FOO' and value 'Hello World' should add the following overhead:
- Mutations:
(Set, 'FOO', 'Hello World').expectedSize() = 3 + 11 = 14 bytes
- Read conflicts: None
- Write conflicts:
('FOO', 'FOO\0') = 3 + 4 = 7 bytes
The total footprint of this operation would be 21 bytes, instead of the 14 bytes currently estimated by the .NET binding. This means that algorithms that batch data based on the reported transaction size are underestimating the actual size, and could fail with transaction_too_large even though they believed the transaction was still within bounds.
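The arithmetic above can be sketched as follows (a minimal illustration, not the actual fdbclient or .NET binding code; the function name and the assumption that a Set adds the write conflict range [key, key + '\x00') are taken from the example above):

```python
def set_footprint(key: bytes, value: bytes) -> int:
    """Estimate the bytes a single blind write adds to a transaction
    under API version 300+, counting the mutation AND its conflict range."""
    # (Set, key, value).expectedSize() = len(key) + len(value)
    mutation = len(key) + len(value)
    # a Set on `key` adds the write conflict range [key, key + b'\x00'),
    # i.e. len(key) + (len(key) + 1) bytes
    write_conflict = len(key) + (len(key) + 1)
    return mutation + write_conflict

# 'FOO'/'Hello World': mutation = 3 + 11 = 14, conflict range = 3 + 4 = 7
print(set_footprint(b"FOO", b"Hello World"))  # 21
```

Counting only the mutation (14 bytes here) reproduces the underestimate described above.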