Enhancement: Add qlty and ast-grep installation to setup wizard #140

@udhaya10

Description


The setup wizard already installs free CLI tools using a consistent pattern (explain → confirm → install → verify):

  • Math packages (Step 9) — uv sync --extra math
  • TLDR CLI (Step 10) — uv tool install llm-tldr
  • Loogle (Step 12) — custom build
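The shared shape of those steps can be sketched as a small helper; this is a minimal illustration of the explain → confirm → install → verify flow, not code from the wizard (the function name and signature are hypothetical):

```python
import subprocess

def install_step(install_cmd, verify_cmd, timeout=300):
    """Sketch of the wizard's install -> verify flow.

    Returns True only if the install command and the verify command
    both exit 0; missing binaries and timeouts count as failure.
    """
    try:
        result = subprocess.run(
            install_cmd, capture_output=True, text=True, timeout=timeout
        )
        if result.returncode != 0:
            return False
        verify = subprocess.run(
            verify_cmd, capture_output=True, text=True, timeout=10
        )
        return verify.returncode == 0
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False
```

Both missing tools fit this exact shape, which is why adding them is low-risk.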

But two free CLI tools are completely missing from the wizard, despite being dependencies for 5+ skills:

| Tool | Install Command | Skills Unlocked |
|------|-----------------|-----------------|
| qlty | `curl -fsSL https://qlty.sh \| sh` | qlty-check, qlty-during-development, fix (deps scope) |
| ast-grep | `brew install ast-grep` / `cargo install ast-grep` | ast-grep-find, search-router, search-tools |

Both are free, require no API keys, and follow the exact same install pattern already used for TLDR.

Current Behavior

  • Wizard has no mention of qlty or ast-grep
  • Skills that depend on them silently fail or degrade
  • Users have no indication these tools exist or are needed
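A cheap preflight check could surface the gap instead of letting skills fail silently. A minimal sketch using `shutil.which` (the function name is hypothetical; the tool list mirrors the table above):

```python
import shutil

def missing_tools(required=("qlty", "ast-grep")):
    """Return the subset of required CLI tools not found on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

# Example: warn once at wizard start instead of failing inside a skill
for tool in missing_tools():
    print(f"WARN: {tool} not found; dependent skills will degrade")
```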

Wizard code (TLDR install pattern to replicate):

# Step 10: TLDR Code Analysis Tool
console.print("\n[bold]Step 10/13: TLDR Code Analysis Tool[/bold]")
console.print("  TLDR provides token-efficient code analysis for LLMs:")
console.print("    - 95% token savings vs reading raw files")
console.print("    - 155x faster queries with daemon mode")
console.print("    - Semantic search, call graphs, program slicing")
console.print("    - Works with Python, TypeScript, Go, Rust")
console.print("")
console.print("  [dim]Note: First semantic search downloads ~1.3GB embedding model.[/dim]")
if Confirm.ask("\nInstall TLDR code analysis tool?", default=True):
    console.print("  Installing TLDR...")
    import subprocess

    try:
        # Install from PyPI using uv tool (puts tldr CLI in PATH)
        # Use 300s timeout - first install resolves many deps
        result = subprocess.run(
            ["uv", "tool", "install", "llm-tldr"],
            capture_output=True,
            text=True,
            timeout=300,
        )
        if result.returncode == 0:
            console.print("  [green]OK[/green] TLDR installed")

            # Verify it works AND is the right tldr (not tldr-pages)
            console.print("  Verifying installation...")
            verify_result = subprocess.run(
                ["tldr", "--help"],
                capture_output=True,
                text=True,
                timeout=10,
            )
            # Check if this is llm-tldr (has 'tree', 'structure', 'daemon') not tldr-pages
            is_llm_tldr = any(cmd in verify_result.stdout for cmd in ["tree", "structure", "daemon"])
            if verify_result.returncode == 0 and is_llm_tldr:
                console.print("  [green]OK[/green] TLDR CLI available")
            elif verify_result.returncode == 0 and not is_llm_tldr:
                console.print("  [yellow]WARN[/yellow] Wrong tldr detected (tldr-pages, not llm-tldr)")
                console.print("  [yellow]    [/yellow] The 'tldr' command is shadowed by tldr-pages.")
                console.print("  [yellow]    [/yellow] Uninstall tldr-pages: pip uninstall tldr")
                console.print("  [yellow]    [/yellow] Or use full path: ~/.local/bin/tldr")

            if is_llm_tldr:
                console.print("")
                console.print("  [dim]Quick start:[/dim]")
                console.print("    tldr tree .                      # See project structure")
                console.print("    tldr structure . --lang python   # Code overview")
                console.print("    tldr daemon start                # Start daemon (155x faster)")

                # Configure semantic search
                console.print("")
                console.print("  [bold]Semantic Search Configuration[/bold]")
                console.print("  Natural language code search using AI embeddings.")
                console.print("  [dim]First run downloads ~1.3GB model and indexes your codebase.[/dim]")
                console.print("  [dim]Auto-reindexes in background when files change.[/dim]")
                if Confirm.ask("\n  Enable semantic search?", default=True):
                    # Get threshold
                    threshold_str = Prompt.ask(
                        "  Auto-reindex after how many file changes?",
                        default="20",
                    )
                    try:
                        threshold = int(threshold_str)
                    except ValueError:
                        threshold = 20

                    # Save config to global ~/.claude/settings.json
                    settings_path = get_global_claude_dir() / "settings.json"
                    settings = {}
                    if settings_path.exists():
                        try:
                            settings = json.loads(settings_path.read_text())
                        except Exception:
                            pass

                    # Detect GPU for model selection
                    # BGE-large (1.3GB) needs GPU, MiniLM (80MB) works on CPU
                    has_gpu = False
                    try:
                        import torch
                        has_gpu = torch.cuda.is_available() or torch.backends.mps.is_available()
                    except ImportError:
                        pass  # No torch = assume no GPU

                    if has_gpu:
                        model = "bge-large-en-v1.5"
                        timeout = 600  # 10 min with GPU
                    else:
                        model = "all-MiniLM-L6-v2"
                        timeout = 300  # 5 min for small model
                        console.print("  [dim]No GPU detected, using lightweight model[/dim]")

                    settings["semantic_search"] = {
                        "enabled": True,
                        "auto_reindex_threshold": threshold,
                        "model": model,
                    }
                    settings_path.parent.mkdir(parents=True, exist_ok=True)
                    settings_path.write_text(json.dumps(settings, indent=2))
                    console.print(f"  [green]OK[/green] Semantic search enabled (threshold: {threshold})")

                    # Offer to pre-download embedding model
                    # Note: We only download the model here, not index any directory.
                    # Indexing happens per-project when user runs `tldr semantic index .`
                    if Confirm.ask("\n  Pre-download embedding model now?", default=False):
                        console.print(f"  Downloading {model} embedding model...")
                        try:
                            # Just load the model to trigger download (no indexing)
                            download_result = subprocess.run(
                                [sys.executable, "-c", f"from tldr.semantic import get_model; get_model('{model}')"],
                                capture_output=True,
                                text=True,
                                timeout=timeout,
                                env={**os.environ, "TLDR_AUTO_DOWNLOAD": "1"},
                            )
                            if download_result.returncode == 0:
                                console.print("  [green]OK[/green] Embedding model downloaded")
                            else:
                                console.print("  [yellow]WARN[/yellow] Download had issues")
                                if download_result.stderr:
                                    console.print(f"  {download_result.stderr[:200]}")
                        except subprocess.TimeoutExpired:
                            console.print("  [yellow]WARN[/yellow] Model download timed out")
    except subprocess.TimeoutExpired:
        console.print("  [yellow]WARN[/yellow] TLDR install timed out")

Proposed Solution

Add two new wizard steps following the existing TLDR pattern:

# Step 11/15: Code Quality Tools (Optional)
console.print("\n[bold]Step 11/15: Code Quality Tools (Optional)[/bold]")
console.print("  qlty provides universal code quality checking:")
console.print("    - 70+ linters across 40+ languages")
console.print("    - Auto-fix for common issues")
console.print("    - Runs via /qlty-check skill")

if Confirm.ask("\nInstall qlty?", default=True):
    try:
        result = subprocess.run(
            ["sh", "-c", "curl -fsSL https://qlty.sh | sh"],
            capture_output=True, text=True, timeout=120,
        )
    except subprocess.TimeoutExpired:
        result = None
    if result is not None and result.returncode == 0:
        console.print("  [green]OK[/green] qlty installed")
        # Initialize in project (the binary may not be on PATH yet)
        try:
            subprocess.run(["qlty", "init"], capture_output=True, text=True, timeout=30)
        except FileNotFoundError:
            console.print("  [dim]Restart your shell so qlty is on PATH, then run: qlty init[/dim]")
    else:
        console.print("  [yellow]WARN[/yellow] Install manually: curl -fsSL https://qlty.sh | sh")

# Step 12/15: AST-Based Code Search (Optional)
console.print("\n[bold]Step 12/15: AST-Based Code Search (Optional)[/bold]")
console.print("  ast-grep enables structural code search and refactoring:")
console.print("    - Search by AST patterns, not text")
console.print("    - Language-aware refactoring")
console.print("    - Used by /ast-grep-find and search-router skills")

if Confirm.ask("\nInstall ast-grep?", default=True):
    # Try brew first (macOS), then cargo; either binary may be absent,
    # and an unguarded subprocess.run would raise FileNotFoundError
    result = None
    for cmd, limit in (
        (["brew", "install", "ast-grep"], 120),
        (["cargo", "install", "ast-grep", "--locked"], 300),
    ):
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=limit)
        except (FileNotFoundError, subprocess.TimeoutExpired):
            continue
        if result.returncode == 0:
            break
    if result is not None and result.returncode == 0:
        console.print("  [green]OK[/green] ast-grep installed")
    else:
        console.print("  [yellow]WARN[/yellow] Install manually: brew install ast-grep")
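Both steps could end with the same verify phase the TLDR step uses. A minimal sketch (the helper name is hypothetical; it assumes each tool supports `--version`, which both qlty and ast-grep do):

```python
import subprocess

def tool_ok(tool):
    """Return True if `tool --version` runs and exits 0."""
    try:
        result = subprocess.run(
            [tool, "--version"], capture_output=True, text=True, timeout=10
        )
        return result.returncode == 0
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False

# Final report for the two new wizard steps
for tool in ("qlty", "ast-grep"):
    status = "OK" if tool_ok(tool) else "WARN missing"
    print(f"{status}: {tool}")
```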
