Feature request: --output json flag for machine-readable results
Currently ollama-benchmark outputs results in a human-readable text format (or table with -t). It would be really useful to have a --output json option that writes structured results to stdout or a file.
Use case: I'm benchmarking models across several machines and want to collect results programmatically — aggregate them in a spreadsheet, compare across hardware, or feed them into a monitoring dashboard. Right now I'd need to parse the text output, which is fragile.
Possible approach: The OllamaResponse objects already use Pydantic models with all the metrics, so serializing to JSON should be straightforward — something like:
```python
import json

if args.output == "json":
    results = []
    for model_name, responses in benchmarks.items():
        results.append({
            "model": model_name,
            # model_dump() is the Pydantic v2 serialization method
            "runs": [r.model_dump() for r in responses],
        })
    # default=str handles non-JSON-serializable fields like datetimes
    print(json.dumps(results, indent=2, default=str))
```
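For the aggregation use case above, here's a rough sketch of what consuming that JSON could look like. The field names (`eval_count`, `eval_duration` in nanoseconds) are assumptions based on Ollama's API response format, and the sample data is made up:

```python
import json

# Hypothetical sample matching the proposed output shape; run metrics
# assumed to mirror Ollama's API fields (eval_duration in nanoseconds).
sample = """
[
  {
    "model": "llama3:8b",
    "runs": [
      {"eval_count": 100, "eval_duration": 2000000000},
      {"eval_count": 120, "eval_duration": 3000000000}
    ]
  }
]
"""

for entry in json.loads(sample):
    # tokens/sec per run = eval_count / (eval_duration converted to seconds)
    rates = [r["eval_count"] / (r["eval_duration"] / 1e9) for r in entry["runs"]]
    avg = sum(rates) / len(rates)
    print(f"{entry['model']}: {avg:.1f} tokens/s average")
```

That kind of downstream script (or a `jq` one-liner) is basically impossible to write reliably against the current text output.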
Happy to take a crack at this if it sounds useful.