Problem

In fluxer_metrics/src/api/ingest.rs, ingest_batch inserts each metric
one by one. For ClickHouse, that means one INSERT per metric, even though
the client already sent them together in a single batch.
So a payload with 100 metrics causes 100 inserts instead of 1.
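To make the cost concrete, here is a minimal model of the per-item path. The names (Metric, FakeStorage, insert_metric) are hypothetical stand-ins, and the ClickHouse round trip is replaced by a counter; the point is only that the current loop issues one INSERT per metric:

```rust
// Simplified model of the current ingest path: one storage call
// (and thus one ClickHouse INSERT) per metric in the batch.
struct Metric {
    name: String,
    value: f64,
}

struct FakeStorage {
    insert_statements: usize,
}

impl FakeStorage {
    fn insert_metric(&mut self, _m: &Metric) {
        // Stands in for one client.insert(...) round trip.
        self.insert_statements += 1;
    }
}

// Loops over the batch, paying the per-item cost each time.
fn ingest_batch(storage: &mut FakeStorage, metrics: &[Metric]) -> usize {
    for m in metrics {
        storage.insert_metric(m);
    }
    storage.insert_statements
}
```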
Proposed solution
Add batch methods to the Storage trait:
insert_counters_batch
insert_gauges_batch
insert_histograms_batch
For ClickHouseStorage, open one client.insert(...) per metric type,
write all rows, then call end() once. Other backends can keep the
current behavior via a default trait impl.
Update ingest_batch to call the new methods.
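The proposal above can be sketched as follows. This is a compilable mock, not the real implementation: BatchingStorage stands in for ClickHouseStorage and counts INSERT statements instead of calling client.insert(...) / end(); the row types and the two-argument ingest_batch signature are simplified assumptions (the histogram variant is omitted for brevity):

```rust
// Row types, simplified for the sketch.
struct Counter { name: String, value: u64 }
struct Gauge { name: String, value: f64 }

trait Storage {
    fn insert_counter(&mut self, c: &Counter);
    fn insert_gauge(&mut self, g: &Gauge);

    // Default impls keep the current one-insert-per-metric behavior,
    // so other backends need no changes.
    fn insert_counters_batch(&mut self, counters: &[Counter]) {
        for c in counters { self.insert_counter(c); }
    }
    fn insert_gauges_batch(&mut self, gauges: &[Gauge]) {
        for g in gauges { self.insert_gauge(g); }
    }
}

// Stand-in for ClickHouseStorage: the real impl would open one
// client.insert(...) per metric type, write all rows, then call
// end() once. Here we just count INSERT statements.
struct BatchingStorage { insert_statements: usize }

impl Storage for BatchingStorage {
    fn insert_counter(&mut self, _c: &Counter) { self.insert_statements += 1; }
    fn insert_gauge(&mut self, _g: &Gauge) { self.insert_statements += 1; }

    // Overrides: one INSERT per non-empty batch, regardless of size.
    fn insert_counters_batch(&mut self, counters: &[Counter]) {
        if !counters.is_empty() { self.insert_statements += 1; }
    }
    fn insert_gauges_batch(&mut self, gauges: &[Gauge]) {
        if !gauges.is_empty() { self.insert_statements += 1; }
    }
}

// Updated ingest_batch: one batch call per metric type.
fn ingest_batch<S: Storage>(storage: &mut S, counters: &[Counter], gauges: &[Gauge]) {
    storage.insert_counters_batch(counters);
    storage.insert_gauges_batch(gauges);
}
```

With this shape a mixed payload of 100 metrics produces at most one INSERT per metric type, which is where the "N down to at most 3" number below comes from.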
Expected impact
Inserts per request: from N down to at most 3.
Latency: lower p50/p95 on /ingest/batch, especially for bigger batches.
Throughput: higher, since we stop paying per item round-trip cost.
ClickHouse load: fewer, larger inserts mean fewer parts and less merge
pressure.
Small batches (1–2 items) won't see a meaningful change.
Notes (optional)
No API or schema changes.
On failure, a whole metric-type batch is rejected instead of per-item.
Seems fine since the client already groups them, but worth flagging.
Happy to open a draft PR if this sounds good.
Checks
I searched for existing discussions and didn't find a duplicate.