
Feature request: --output json flag for machine-readable results #8

@b-gullekson

Description


Currently, ollama-benchmark outputs results in a human-readable text format (or as a table with -t). It would be really useful to have a --output json option that writes structured results to stdout or a file.

Use case: I'm benchmarking models across several machines and want to collect results programmatically — aggregate them in a spreadsheet, compare across hardware, or feed them into a monitoring dashboard. Right now I'd have to parse the text output, which is fragile.

Possible approach: the OllamaResponse objects already use Pydantic models carrying all the metrics, so serializing to JSON should be straightforward — something like:

```python
import json

if args.output == "json":
    # Each OllamaResponse is a Pydantic model, so model_dump() yields a plain dict
    results = []
    for model_name, responses in benchmarks.items():
        results.append({
            "model": model_name,
            "runs": [r.model_dump() for r in responses],
        })
    # default=str handles any non-JSON-native fields (e.g. datetimes)
    print(json.dumps(results, indent=2, default=str))
```
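
To illustrate the downstream side, here's a rough sketch of aggregating that JSON shape into per-model throughput. The field names (eval_count, eval_duration) are assumptions borrowed from Ollama's API response — I haven't checked what the actual OllamaResponse schema exposes:

```python
import json

# Hypothetical sample matching the proposed --output json shape; the metric
# field names are assumed to mirror Ollama's API (eval_duration is in ns).
sample = json.dumps([
    {"model": "llama3:8b", "runs": [
        {"eval_count": 100, "eval_duration": 2_000_000_000},
        {"eval_count": 150, "eval_duration": 2_500_000_000},
    ]},
])

def tokens_per_second(results_json: str) -> dict:
    """Average decode throughput (tokens/s) per model."""
    out = {}
    for entry in json.loads(results_json):
        rates = [r["eval_count"] / (r["eval_duration"] / 1e9)
                 for r in entry["runs"]]
        out[entry["model"]] = sum(rates) / len(rates)
    return out

print(tokens_per_second(sample))  # → {'llama3:8b': 55.0}
```

That's the kind of one-liner aggregation the JSON output would unlock across machines.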

Happy to take a crack at this if it sounds useful.
