LLM-powered code auditor for GitHub repos. Audit any repo for bugs, security issues, and performance problems using local AI. Built in Rust.
Instead of a one-size-fits-all approach, YoAuditor gives you two ways to audit code:
Single-call mode sends all files in one LLM request: fast and efficient.
Tool-calling (agentic) mode lets the LLM explore the codebase autonomously:
- **Explores**: uses `list_files` to discover the project structure
- **Reads**: calls `read_file` to examine specific files
- **Searches**: uses `search_code` to find patterns
- **Reports**: calls `report_issue` for each problem found
- **Finishes**: calls `finish_analysis` when done
This is similar to how Claude or GPT-4 use tools to interact with codebases!
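Each of these tools is advertised to the model as a JSON function schema through Ollama's tool-calling API. As an illustrative sketch only (the exact names and fields YoAuditor registers may differ), a `report_issue` definition could look like:

```json
{
  "type": "function",
  "function": {
    "name": "report_issue",
    "description": "Record a single issue found in the codebase",
    "parameters": {
      "type": "object",
      "properties": {
        "file": { "type": "string", "description": "Path of the affected file" },
        "line": { "type": "integer", "description": "Line number of the issue" },
        "severity": { "type": "string", "enum": ["low", "medium", "high", "critical"] },
        "category": { "type": "string", "enum": ["bug", "security", "performance"] },
        "description": { "type": "string" }
      },
      "required": ["file", "severity", "description"]
    }
  }
}
```

The model responds with tool calls naming one of these functions plus JSON arguments, which the agent loop executes before feeding the result back.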
See docs/MODES.md for a full comparison of single-call vs agentic mode, when to use each, and how to switch.
- **Agentic Analysis**: LLM autonomously explores and analyzes the codebase
- **Tool Calling**: Uses Ollama's tool-calling API for structured interactions
- **Security Scanning**: Identifies potential security vulnerabilities
- **Bug Detection**: Finds logic errors, null pointer risks, race conditions
- **Performance Issues**: Detects inefficient algorithms and blocking I/O
- **Comprehensive Reports**: Generates detailed Markdown/JSON reports with line numbers
- **CI-Friendly**: `--fail-on` flag with exit codes for gating builds
- **Fully Configurable**: `.yoauditor.toml` for model, extensions, excludes, timeouts
- **Rust** (latest stable): Install Rust
- **LLM Backend** (choose one):
  - Ollama (recommended): Install Ollama
  - llama.cpp server: Build from source
- **A model that supports tool calling**:

```bash
# For Ollama:
ollama pull llama3.2:latest         # Fast, good tool support
ollama pull qwen3-coder:480b-cloud  # Cloud model, excellent

# For llama.cpp:
# Download a GGUF model and run:
./llama-server -m model.gguf --port 8080
```
> **Important**: For agentic mode, the model MUST support tool/function calling. Use `--single-call` if your model doesn't.
```bash
# Clone and build
git clone https://github.com/sharafdin/yoauditor.git
cd yoauditor
cargo build --release

# Install system-wide (optional)
cargo install --path .
```

```bash
# Build the image
docker build -t yoauditor .

# Run (Ollama must be reachable; use host network or pass Ollama URL)
docker run --rm yoauditor --repo https://github.com/owner/repo.git --ollama-url http://host.docker.internal:11434

# With output mounted so you can read the report on the host
docker run --rm -v "$(pwd)/out:/app/out" yoauditor \
  --repo https://github.com/owner/repo.git \
  --ollama-url http://host.docker.internal:11434 \
  --output /app/out/yoaudit_report.md
```

On Linux, use `--network host` and `--ollama-url http://localhost:11434` if Ollama runs on the host.
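A Compose setup can also pair YoAuditor with an Ollama service. The following is a rough, unofficial sketch only (the service names, volume names, and repo URL are illustrative assumptions, not the project's shipped file):

```yaml
services:
  ollama:
    image: ollama/ollama            # official Ollama image
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama   # persist pulled models

  yoauditor:
    build: .                        # build the Dockerfile in this repo
    depends_on:
      - ollama
    volumes:
      - ./out:/app/out              # report lands on the host in ./out
    command: >
      --repo https://github.com/owner/repo.git
      --ollama-url http://ollama:11434
      --output /app/out/yoaudit_report.md

volumes:
  ollama-data:
```

Inside the Compose network, the auditor reaches Ollama by its service name (`http://ollama:11434`) rather than `localhost`.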
See docs/DOCKER.md for more options and Docker Compose.
```bash
# Analyze a GitHub repository
yoauditor --repo https://github.com/owner/repo.git

# Use a specific model (must support tool calling!)
yoauditor --repo https://github.com/owner/repo.git --model llama3.2:latest

# Analyze a local directory
yoauditor --repo local --local ./my-project

# Preview which files would be analyzed (no LLM call)
yoauditor --repo https://github.com/owner/repo.git --dry-run

# Generate a default config file
yoauditor --init-config
```

```text
yoauditor [OPTIONS]

Options:
  -r, --repo <URL>           GitHub repository URL to analyze
  -m, --model <MODEL>        Ollama model name [default: deepseek-coder:33b]
  -o, --output <FILE>        Output file path [default: yoaudit_report.md]
      --max-files <COUNT>    Maximum files to analyze [default: 100]
      --ollama-url <URL>     Ollama API endpoint [default: http://localhost:11434]
  -c, --config <FILE>        Path to configuration file
  -v, --verbose              Enable verbose logging
  -q, --quiet                Quiet mode (minimal output)
  -b, --branch <BRANCH>      Specific branch to analyze
      --extensions <EXTS>    File extensions to include (comma-separated)
      --exclude <PATTERNS>   Patterns to exclude (comma-separated)
      --local <DIR>          Analyze a local directory instead of cloning
      --format <FORMAT>      Output format: markdown, json [default: markdown]
      --timeout <SECS>       LLM request timeout [default: from config or 900s]
      --single-call          Force single-call mode
      --no-single-call       Force tool-calling (agentic) mode
      --fail-on <LEVEL>      Fail (exit 2) if issues at or above level
      --min-severity <LEVEL> Only include issues at or above level in report
      --dry-run              Scan files without calling the LLM
      --init-config          Generate default .yoauditor.toml
  -h, --help                 Print help
  -V, --version              Print version
```
```text
┌─────────────────────────────────────────────────────────┐
│                        YoAuditor                        │
├─────────────────────────────────────────────────────────┤
│  1. Clone Repository                                    │
│     └──> /tmp/repo_clone                                │
│                                                         │
│  2. Scan Source Files (respects config)                 │
│     ├── extensions, excludes, max_file_size             │
│     └── unified scanner used by both modes              │
│                                                         │
│  3. Analyze (choose one mode):                          │
│                                                         │
│   Single-call            Tool-calling (agentic)         │
│   ┌───────────────┐      ┌──────────────────────┐       │
│   │ Read all files│      │ LLM calls tools:     │       │
│   │ Send in ONE   │      │  list_files()        │       │
│   │ API request   │      │  read_file()         │       │
│   │ Parse issues  │      │  search_code()       │       │
│   └───────────────┘      │  get_file_info()     │       │
│                          │  report_issue()      │       │
│                          │  finish_analysis()   │       │
│                          └──────────────────────┘       │
│                                                         │
│  4. Generate Report from collected issues               │
└─────────────────────────────────────────────────────────┘
```
Create `.yoauditor.toml` in your project (or run `yoauditor --init-config`):
```toml
[general]
output = "yoaudit_report.md"
verbose = false

[model]
name = "llama3.2:latest"
ollama_url = "http://localhost:11434"
temperature = 0.1
timeout_seconds = 900
# true = single-call (efficient), false = tool-calling (agentic)
single_call_mode = true

[scanner]
max_files = 100
extensions = ["rs", "py", "js", "ts", "go", "java", "c", "cpp"]
excludes = [".git", "target", "node_modules", "vendor"]
max_file_size = 1048576
```

```bash
# Fail the build if any high or critical issues are found
yoauditor --repo https://github.com/owner/repo.git --fail-on high

# Only report critical issues, output as JSON
yoauditor --repo https://github.com/owner/repo.git --min-severity critical --format json
```

Exit codes:
| Code | Meaning |
|---|---|
| 0 | Success (no issues above threshold) |
| 1 | Runtime error (connection, config, etc.) |
| 2 | Issues found above --fail-on threshold |
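These exit codes make the tool easy to script around in CI. A minimal sketch of a gate (here `audit` is a stand-in for the real `yoauditor --repo <url> --fail-on high` call, hard-wired to return 2 so the snippet runs anywhere):

```shell
# Stand-in for a real yoauditor invocation; replace with the actual command.
# Returning 2 simulates "issues found above the --fail-on threshold".
audit() { return 2; }

audit
case $? in
  0) echo "clean: no issues above threshold" ;;
  1) echo "runtime error: check Ollama connection or config" >&2 ;;
  2) echo "issues found at or above the --fail-on level" ;;
esac
```

Most CI systems fail a step on any non-zero exit automatically, so an explicit `case` is only needed when you want to distinguish "issues found" from infrastructure errors.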
The `fixtures/` folder contains sample code with intentional issues (Python, JavaScript, Go, Rust) to test YoAuditor and compare models.
```bash
# Run auditor on fixtures
yoauditor --repo local --local ./fixtures --output yoaudit_report.md

# Compare local vs cloud models (named outputs)
yoauditor --repo local --local ./fixtures --output qwen3-coder.yoaudit_report.md
yoauditor --repo local --local ./fixtures --model llama3.2:latest --output llama3.2.yoaudit_report.md
```

| In fixtures | Purpose |
|---|---|
| EXPECTED_ISSUES.md | Checklist of issues the auditor should find |
| AUDIT_RESULTS.md | Local (Llama) vs cloud (Qwen) model comparison |
| Source files | Intentionally flawed code + clean_example.py (false-positive check) |
See fixtures/README.md for details.
```text
yoauditor/
├── src/
│   ├── main.rs           # CLI entry and workflow
│   ├── cli.rs            # Argument parsing
│   ├── config.rs         # Configuration handling
│   ├── models.rs         # Data structures
│   ├── scanner/
│   │   └── mod.rs        # Unified file scanner
│   ├── repo/
│   │   ├── mod.rs        # Module exports
│   │   └── cloner.rs     # Git repository cloning
│   ├── agent/
│   │   ├── mod.rs        # Module exports
│   │   ├── tools.rs      # Tool definitions
│   │   └── agent_loop.rs # Agentic loop
│   ├── analysis/
│   │   ├── mod.rs        # Module exports
│   │   └── aggregator.rs # Issue aggregation
│   └── report/
│       ├── mod.rs        # Module exports
│       └── generator.rs  # Report generation
├── fixtures/             # Test code with known issues (see fixtures/README.md)
├── Cargo.toml
└── .yoauditor.toml
```
**Connection errors**
- Ensure Ollama is running: `ollama serve`
- Check the URL: the default is `http://localhost:11434`

**Timeouts**
- Increase the timeout: `--timeout 1800`, or edit `timeout_seconds` in `.yoauditor.toml`
- Single-call mode with large repos can take 10+ minutes

**Model not found**
- Pull it first: `ollama pull llama3.2:latest`

**Tool-calling errors**
- Make sure you're using a model that supports tool calling
- Try `llama3.2:latest`, `llama3.1:latest`, or `mistral:latest`
- Or use `--single-call` mode, which works with any model
- Models like `deepseek-coder` may NOT support tools

**Slow analysis**
- Use `--single-call` mode for faster analysis
- Use a faster/smaller model: `llama3.2:latest`
- Reduce scope: `--max-files 20`
- Check Ollama logs for issues
| Doc | Description |
|---|---|
| docs/MODES.md | Single-call vs agentic mode β when to use each, how they work |
| docs/DESIGN.md | Design patterns, architecture, data flow |
| docs/CONFIGURATION.md | Full .yoauditor.toml reference |
| docs/CLI.md | Exhaustive CLI options and examples |
| docs/EXIT_CODES.md | Exit codes 0 / 1 / 2 and CI usage |
| docs/DEVELOPMENT.md | Local dev, tests, where to change what |
| docs/DOCKER.md | Docker build, run, and compose |
| fixtures/README.md | Fixtures: test code and model comparison |
MIT License