YoAuditor 🔍

LLM-powered code auditor for GitHub repos. Audit any repo for bugs, security issues, and performance problems using local AI. Built in Rust.

✨ Two Analysis Modes

Instead of a one-size-fits-all approach, YoAuditor gives you two ways to audit code:

Single-call mode sends all files in one LLM request: fast and efficient.

Tool-calling (agentic) mode lets the LLM explore the codebase autonomously:

  1. πŸ” Explores β€” Uses list_files to discover the project structure
  2. πŸ“– Reads β€” Calls read_file to examine specific files
  3. πŸ”Ž Searches β€” Uses search_code to find patterns
  4. πŸ› Reports β€” Calls report_issue for each problem found
  5. βœ… Finishes β€” Calls finish_analysis when done

This is similar to how Claude or GPT-4 use tools to interact with codebases!

→ See docs/MODES.md for a full comparison of single-call vs agentic mode, when to use each, and how to switch.
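
The agentic mode above boils down to a dispatch cycle: ask the model for its next tool call, execute it, feed the result back, and stop at finish_analysis. A minimal Rust sketch of that shape (all names here are illustrative, not YoAuditor's actual agent code, and the model is stubbed with a scripted sequence instead of a real Ollama call):

```rust
// Sketch of an agentic tool-dispatch loop. The LLM is mocked by
// `next_tool_call`, which returns a scripted sequence of tool calls;
// a real loop would send the conversation to the LLM backend and
// parse the tool call out of its reply.

#[derive(Debug)]
enum ToolCall {
    ListFiles,
    ReadFile(String),
    ReportIssue { file: String, line: u32, desc: String },
    FinishAnalysis,
}

fn next_tool_call(step: usize) -> ToolCall {
    // Stand-in for the model: explore, read, report, finish.
    match step {
        0 => ToolCall::ListFiles,
        1 => ToolCall::ReadFile("src/main.rs".into()),
        2 => ToolCall::ReportIssue {
            file: "src/main.rs".into(),
            line: 42,
            desc: "unwrap() on user-controlled input".into(),
        },
        _ => ToolCall::FinishAnalysis,
    }
}

fn main() {
    let mut issues = Vec::new();
    for step in 0.. {
        match next_tool_call(step) {
            ToolCall::ListFiles => println!("tool: list_files"),
            ToolCall::ReadFile(path) => println!("tool: read_file {path}"),
            ToolCall::ReportIssue { file, line, desc } => {
                println!("tool: report_issue {file}:{line}");
                issues.push((file, line, desc));
            }
            ToolCall::FinishAnalysis => break, // model says it's done
        }
    }
    println!("collected {} issue(s)", issues.len());
}
```

The key property is that the loop, not the model, owns execution: every tool call is mediated by the host, and the collected issues feed the report.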

Features

  • 🤖 Agentic Analysis: LLM autonomously explores and analyzes the codebase
  • 🛠️ Tool Calling: uses Ollama's tool-calling API for structured interactions
  • 🔒 Security Scanning: identifies potential security vulnerabilities
  • 🐛 Bug Detection: finds logic errors, null pointer risks, race conditions
  • ⚡ Performance Issues: detects inefficient algorithms and blocking I/O
  • 📊 Comprehensive Reports: generates detailed Markdown/JSON reports with line numbers
  • 🚦 CI-Friendly: --fail-on flag with exit codes for gating builds
  • 🔧 Fully Configurable: .yoauditor.toml for model, extensions, excludes, timeouts

Prerequisites

  1. Rust (latest stable): Install Rust

  2. An LLM backend (choose one): Ollama or a llama.cpp server

  3. A model that supports tool calling:

    # For Ollama:
    ollama pull llama3.2:latest     # Fast, good tool support
    ollama pull qwen3-coder:480b-cloud  # Cloud model, excellent
    
    # For llama.cpp:
    # Download a GGUF model and run:
    ./llama-server -m model.gguf --port 8080

⚠️ Important: For agentic mode, the model MUST support tool/function calling. Use --single-call if your model doesn't.

Installation

# Clone and build
git clone https://github.com/sharafdin/yoauditor.git
cd yoauditor
cargo build --release

# Install system-wide (optional)
cargo install --path .

Docker

# Build the image
docker build -t yoauditor .

# Run (Ollama must be reachable; use host network or pass Ollama URL)
docker run --rm yoauditor --repo https://github.com/owner/repo.git --ollama-url http://host.docker.internal:11434

# With output mounted so you can read the report on the host
docker run --rm -v "$(pwd)/out:/app/out" yoauditor \
  --repo https://github.com/owner/repo.git \
  --ollama-url http://host.docker.internal:11434 \
  --output /app/out/yoaudit_report.md

On Linux, use --network host and --ollama-url http://localhost:11434 if Ollama runs on the host.

See docs/DOCKER.md for more options and Docker Compose.

Usage

Basic Usage

# Analyze a GitHub repository
yoauditor --repo https://github.com/owner/repo.git

# Use a specific model (must support tool calling!)
yoauditor --repo https://github.com/owner/repo.git --model llama3.2:latest

# Analyze a local directory
yoauditor --repo local --local ./my-project

# Preview which files would be analyzed (no LLM call)
yoauditor --repo https://github.com/owner/repo.git --dry-run

# Generate a default config file
yoauditor --init-config

All Options

yoauditor [OPTIONS]

Options:
  -r, --repo <URL>             GitHub repository URL to analyze
  -m, --model <MODEL>          Ollama model name [default: deepseek-coder:33b]
  -o, --output <FILE>          Output file path [default: yoaudit_report.md]
      --max-files <COUNT>      Maximum files to analyze [default: 100]
      --ollama-url <URL>       Ollama API endpoint [default: http://localhost:11434]
  -c, --config <FILE>          Path to configuration file
  -v, --verbose                Enable verbose logging
  -q, --quiet                  Quiet mode (minimal output)
  -b, --branch <BRANCH>        Specific branch to analyze
      --extensions <EXTS>      File extensions to include (comma-separated)
      --exclude <PATTERNS>     Patterns to exclude (comma-separated)
      --local <DIR>            Analyze a local directory instead of cloning
      --format <FORMAT>        Output format: markdown, json [default: markdown]
      --timeout <SECS>         LLM request timeout [default: from config or 900s]
      --single-call            Force single-call mode
      --no-single-call         Force tool-calling (agentic) mode
      --fail-on <LEVEL>        Fail (exit 2) if issues at or above level
      --min-severity <LEVEL>   Only include issues at or above level in report
      --dry-run                Scan files without calling the LLM
      --init-config            Generate default .yoauditor.toml
  -h, --help                   Print help
  -V, --version                Print version

How It Works (Architecture)

┌───────────────────────────────────────────────────────────┐
│                       YoAuditor                           │
├───────────────────────────────────────────────────────────┤
│  1. Clone Repository                                      │
│     └──> /tmp/repo_clone                                  │
│                                                           │
│  2. Scan Source Files (respects config)                   │
│     ├── extensions, excludes, max_file_size               │
│     └── unified scanner used by both modes                │
│                                                           │
│  3. Analyze (choose one mode):                            │
│                                                           │
│     Single-call             Tool-calling (agentic)        │
│     ┌───────────────┐       ┌──────────────────────┐      │
│     │ Read all files│       │ LLM calls tools:     │      │
│     │ Send in ONE   │       │  list_files()        │      │
│     │ API request   │       │  read_file()         │      │
│     │ Parse issues  │       │  search_code()       │      │
│     └───────────────┘       │  get_file_info()     │      │
│                             │  report_issue()      │      │
│                             │  finish_analysis()   │      │
│                             └──────────────────────┘      │
│                                                           │
│  4. Generate Report from collected issues                 │
└───────────────────────────────────────────────────────────┘

Configuration

Create .yoauditor.toml in your project (or run yoauditor --init-config):

[general]
output = "yoaudit_report.md"
verbose = false

[model]
name = "llama3.2:latest"
ollama_url = "http://localhost:11434"
temperature = 0.1
timeout_seconds = 900
# true = single-call (efficient), false = tool-calling (agentic)
single_call_mode = true

[scanner]
max_files = 100
extensions = ["rs", "py", "js", "ts", "go", "java", "c", "cpp"]
excludes = [".git", "target", "node_modules", "vendor"]
max_file_size = 1048576
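
To make the [scanner] settings concrete, here is a small Rust sketch of the kind of filter they imply: skip oversized files, skip anything under an excluded directory, and keep only whitelisted extensions. The `should_scan` function and its exact rules are illustrative, not YoAuditor's actual scanner code.

```rust
/// Hypothetical file filter mirroring the [scanner] config above.
/// Not YoAuditor's real implementation; shown for intuition only.
fn should_scan(
    path: &str,
    size: u64,
    extensions: &[&str],
    excludes: &[&str],
    max_file_size: u64,
) -> bool {
    if size > max_file_size {
        return false; // larger than max_file_size
    }
    if path.split('/').any(|part| excludes.contains(&part)) {
        return false; // inside an excluded directory like target/ or node_modules/
    }
    match path.rsplit_once('.') {
        Some((_, ext)) => extensions.contains(&ext), // extension whitelist
        None => false,                               // no extension, e.g. Makefile
    }
}

fn main() {
    let exts = ["rs", "py", "js"];
    let excl = ["target", "node_modules", ".git"];
    println!("{}", should_scan("src/main.rs", 512, &exts, &excl, 1_048_576));
    println!("{}", should_scan("target/debug/build.rs", 512, &exts, &excl, 1_048_576));
}
```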

CI Usage

# Fail the build if any high or critical issues are found
yoauditor --repo https://github.com/owner/repo.git --fail-on high

# Only report critical issues, output as JSON
yoauditor --repo https://github.com/owner/repo.git --min-severity critical --format json

Exit codes:

| Code | Meaning |
|------|---------|
| 0    | Success (no issues above threshold) |
| 1    | Runtime error (connection, config, etc.) |
| 2    | Issues found above --fail-on threshold |
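
The gating behind --fail-on is essentially a severity comparison. A hedged Rust sketch (the `Severity` ordering and `exit_code` helper are illustrative, not the actual source; exit 1 is reserved for runtime errors and not modeled here):

```rust
// Illustrative --fail-on gating: exit 2 if any reported issue is
// at or above the threshold, 0 otherwise. Variant order gives the
// derived ordering Low < Medium < High < Critical.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Severity {
    Low,
    Medium,
    High,
    Critical,
}

fn exit_code(issues: &[Severity], fail_on: Severity) -> i32 {
    if issues.iter().any(|s| *s >= fail_on) {
        2 // at least one issue at or above the threshold
    } else {
        0 // success
    }
}

fn main() {
    let issues = [Severity::Medium, Severity::High];
    println!("{}", exit_code(&issues, Severity::High));     // gate trips
    println!("{}", exit_code(&issues, Severity::Critical)); // nothing critical
}
```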

Fixtures

The fixtures/ folder contains sample code with intentional issues (Python, JavaScript, Go, Rust) to test YoAuditor and compare models.

# Run auditor on fixtures
yoauditor --repo local --local ./fixtures --output yoaudit_report.md

# Compare local vs cloud models (named outputs)
yoauditor --repo local --local ./fixtures --model qwen3-coder:480b-cloud --output qwen3-coder.yoaudit_report.md
yoauditor --repo local --local ./fixtures --model llama3.2:latest --output llama3.2.yoaudit_report.md

| File in fixtures/ | Purpose |
|---|---|
| EXPECTED_ISSUES.md | Checklist of issues the auditor should find |
| AUDIT_RESULTS.md | Local (Llama) vs cloud (Qwen) model comparison |
| Source files | Intentionally flawed code + clean_example.py (false-positive check) |

See fixtures/README.md for details.

Project Structure

yoauditor/
├── src/
│   ├── main.rs              # CLI entry and workflow
│   ├── cli.rs               # Argument parsing
│   ├── config.rs            # Configuration handling
│   ├── models.rs            # Data structures
│   ├── scanner/
│   │   └── mod.rs           # Unified file scanner
│   ├── repo/
│   │   ├── mod.rs           # Module exports
│   │   └── cloner.rs        # Git repository cloning
│   ├── agent/
│   │   ├── mod.rs           # Module exports
│   │   ├── tools.rs         # Tool definitions
│   │   └── agent_loop.rs    # Agentic loop
│   ├── analysis/
│   │   ├── mod.rs           # Module exports
│   │   └── aggregator.rs    # Issue aggregation
│   └── report/
│       ├── mod.rs           # Module exports
│       └── generator.rs     # Report generation
├── fixtures/                # Test code with known issues (see fixtures/README.md)
├── Cargo.toml
└── .yoauditor.toml

Troubleshooting

"Cannot connect to Ollama"

  • Ensure Ollama is running: ollama serve
  • Check the URL: default is http://localhost:11434

"Request timed out"

  • Increase the timeout: --timeout 1800, or edit timeout_seconds in .yoauditor.toml
  • Single-call mode with large repos can take 10+ minutes

"Model not found"

  • Pull it first: ollama pull llama3.2:latest

"Tool calling not working"

  • Make sure you're using a model that supports tool calling
  • Try llama3.2:latest, llama3.1:latest, or mistral:latest
  • Or use --single-call mode, which works with any model
  • Models like deepseek-coder may NOT support tools

Analysis takes too long

  • Use --single-call mode for faster analysis
  • Use a faster/smaller model: llama3.2:latest
  • Reduce scope: --max-files 20
  • Check the Ollama logs for issues

Docs

| Doc | Description |
|---|---|
| docs/MODES.md | Single-call vs agentic mode: when to use each, how they work |
| docs/DESIGN.md | Design patterns, architecture, data flow |
| docs/CONFIGURATION.md | Full .yoauditor.toml reference |
| docs/CLI.md | Exhaustive CLI options and examples |
| docs/EXIT_CODES.md | Exit codes 0 / 1 / 2 and CI usage |
| docs/DEVELOPMENT.md | Local dev, tests, where to change what |
| docs/DOCKER.md | Docker build, run, and compose |
| fixtures/README.md | Fixtures: test code and model comparison |

License

MIT License

Acknowledgments

  • Ollama: local LLM runtime with tool-calling support
  • git2: Git bindings for Rust
