N0L0g1c/AI-CLI-Wrapper

An AI CLI wrapper that you can confine to a jail directory or let run unrestricted on your system, with multiple agent modes and support for most major models.
AI Agent CLI Wrapper

A universal CLI wrapper that makes any AI agent CLI-compatible with configurable LLM selection and API keys. Switch between different LLM providers (OpenAI, Anthropic, Google, Mistral, Cohere, xAI Grok) with a simple configuration file.

Features

Core Features

  • 🔄 Multi-Provider Support: Switch between OpenAI, Anthropic, Google Gemini, Mistral, Cohere, and xAI Grok
  • ⚙️ Easy Configuration: Simple YAML config file for managing API keys and settings
  • 💬 Interactive Chat: Built-in interactive chat interface
  • 🎯 CLI Compatible: Works seamlessly from the command line
  • 🔐 Secure: Supports environment variables for API keys and safety confirmations
  • 🎨 Beautiful UI: Rich terminal UI with colors and formatting
  • 🖥️ Cross-Platform: Works on Windows, Linux, and macOS with automatic OS detection

Advanced Agentic Features 🚀

  • 🤖 Agentic Mode: AI can create files, edit files, and execute commands directly in the terminal!
  • 📝 Multi-Action Support: Execute multiple actions in a single response
  • 🧠 Project Awareness: Automatically detects project type, dependencies, and git status
  • 💾 Session Management: Persistent conversation history and action tracking
  • 📊 Cost Tracking: Monitor API usage and costs in real-time
  • 🔍 File Search: Search and list files in your project
  • 💻 Code Execution: Execute Python, Node.js, and shell code directly
  • 🔗 Git Integration: Auto-commit changes, check git status
  • 📚 Memory System: Remembers previous actions and context
  • 🎯 Smart Parsing: Understands JSON action formats and code blocks
  • 🔁 Error Recovery: Improved error handling and retry logic
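
The "Smart Parsing" feature above means the wrapper extracts structured actions from the model's reply. The exact schema is internal to the project, but a minimal sketch of parsing a hypothetical JSON multi-action payload could look like this (the `actions`/`type` field names are illustrative assumptions, not the wrapper's confirmed format):

```python
import json

# Hypothetical multi-action payload; the real schema used by the wrapper may differ.
response = '''
{
  "actions": [
    {"type": "CREATE_FILE", "path": "hello.py", "content": "print('Hello')"},
    {"type": "EXECUTE", "command": "python hello.py"}
  ]
}
'''

def parse_actions(raw: str) -> list[dict]:
    """Parse a JSON action list, returning [] on malformed input."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []
    return [a for a in data.get("actions", []) if "type" in a]

print([a["type"] for a in parse_actions(response)])  # ['CREATE_FILE', 'EXECUTE']
```

Returning an empty list on malformed JSON is what makes the "Error Recovery" behavior possible: the caller can detect the failure and retry instead of crashing.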

Installation

Cross-Platform Support

Windows (Windows 10/11, PowerShell/CMD)
Linux (Ubuntu, Debian, Fedora, Arch, etc.)
macOS (10.14+)

The tool automatically detects your operating system and uses the appropriate shell and path handling.
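
A simplified sketch of that detection, using only the standard library (the actual shell-selection logic in the tool may differ):

```python
import platform
import shutil

def detect_shell() -> str:
    """Pick a default shell per OS; a simplified sketch of what the tool likely does."""
    system = platform.system()  # 'Windows', 'Linux', or 'Darwin' (macOS)
    if system == "Windows":
        # Prefer PowerShell when available, fall back to cmd
        return "powershell" if shutil.which("powershell") else "cmd"
    return "/bin/sh"

print(platform.system(), "->", detect_shell())
```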

Installation Steps

  1. Clone or navigate to this directory:
# Windows PowerShell
cd AI-CLI-Wrapper

# Linux/macOS
cd AI-CLI-Wrapper
  2. Install dependencies:
# Windows PowerShell
pip install -r requirements.txt

# Linux/macOS
pip3 install -r requirements.txt
# or
python3 -m pip install -r requirements.txt
  3. Install only the LLM providers you need:
# For OpenAI
pip install openai

# For Anthropic
pip install anthropic

# For Google
pip install google-generativeai

# For Mistral
pip install mistralai

# For Cohere
pip install cohere

# For xAI Grok (uses OpenAI package)
pip install openai

Quick Start

  1. Create your config file:
# Copy the example config
cp agent_config.yaml.example agent_config.yaml

# Or let the tool create it automatically on first run
  2. Edit agent_config.yaml and add your API keys:
llm:
  provider: openai
  api_key: "your-api-key-here"
  model: gpt-3.5-turbo
  3. Configure via CLI (Quick Setup):
# Interactive configuration wizard (recommended for first-time setup)
python agent_wrapper.py --configure

# Or set configuration directly via CLI
python agent_wrapper.py --set-default-provider openai --set-api-key "sk-your-key-here"
python agent_wrapper.py --set-provider-key grok "xai-your-key-here"
python agent_wrapper.py --set-default-model gpt-4
  4. Start using it:
# Show cool startup screen with all available commands
python agent_wrapper.py --show-startup

# Or just run without arguments to see startup screen
python agent_wrapper.py

# Interactive chat mode
python agent_wrapper.py --interactive

# Single prompt
python agent_wrapper.py --prompt "Hello, how are you?"

# List available providers
python agent_wrapper.py --list-providers

Configuration

Quick Setup via CLI

The easiest way to configure the tool is using the interactive wizard or CLI commands:

Interactive Wizard (Recommended):

python agent_wrapper.py --configure

This will guide you through:

  • Selecting a default provider
  • Setting API keys
  • Choosing a default model
  • Configuring additional providers (optional)
  • Setting up agentic mode options (optional)

Direct CLI Configuration:

# Set default provider and API key
python agent_wrapper.py --set-default-provider openai --set-api-key "sk-xxx"

# Set API key for a specific provider
python agent_wrapper.py --set-provider-key grok "xai-xxx"

# Set default model
python agent_wrapper.py --set-default-model gpt-4

# Combine multiple settings
python agent_wrapper.py --set-default-provider anthropic --set-api-key "sk-ant-xxx" --set-default-model claude-3-opus-20240229

All CLI configuration commands automatically save to your config file!

Config File Location

The tool looks for configuration in this order:

  1. agent_config.yaml in the current directory
  2. ~/.ai-agent-cli/config.yaml in your home directory
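
That search order can be expressed with `pathlib` in a few lines; this is a sketch of the documented lookup, not the tool's actual code:

```python
from pathlib import Path

def config_candidates() -> list[Path]:
    """Return config paths in the documented search order."""
    return [
        Path("agent_config.yaml"),                      # 1. current directory
        Path.home() / ".ai-agent-cli" / "config.yaml",  # 2. home directory
    ]

def find_config():
    """Return the first candidate that exists, or None."""
    return next((p for p in config_candidates() if p.is_file()), None)
```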

Config File Structure

llm:
  provider: openai  # Main provider to use
  api_key: ""  # Leave empty to use environment variable
  model: gpt-3.5-turbo
  temperature: 0.7
  max_tokens: 1000

providers:
  openai:
    api_key: ""
    model: gpt-3.5-turbo
    temperature: 0.7
    max_tokens: 1000
  
  anthropic:
    api_key: ""
    model: claude-3-sonnet-20240229
    temperature: 0.7
    max_tokens: 1000
  
  # ... other providers

Environment Variables

You can also set API keys as environment variables (recommended for security):

# Windows PowerShell
$env:OPENAI_API_KEY="your-key-here"
$env:ANTHROPIC_API_KEY="your-key-here"
$env:GOOGLE_API_KEY="your-key-here"
$env:MISTRAL_API_KEY="your-key-here"
$env:COHERE_API_KEY="your-key-here"
$env:XAI_API_KEY="your-key-here"  # For Grok (or use GROK_API_KEY)

# Linux/Mac
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
export GOOGLE_API_KEY="your-key-here"
export MISTRAL_API_KEY="your-key-here"
export COHERE_API_KEY="your-key-here"
export XAI_API_KEY="your-key-here"  # For Grok (or use GROK_API_KEY)
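
The resolution logic implied above (config file first, then environment variables, with two accepted names for Grok) can be sketched like this; the `ENV_VARS` mapping below is illustrative and only lists three providers:

```python
import os
from typing import Optional

# Illustrative mapping; variable names match the ones documented above.
ENV_VARS = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "grok": ["XAI_API_KEY", "GROK_API_KEY"],  # either name works for Grok
}

def resolve_key(provider: str, config_key: str = "") -> Optional[str]:
    """Prefer the config-file key, then fall back to environment variables."""
    if config_key:
        return config_key
    for var in ENV_VARS.get(provider, []):
        value = os.environ.get(var)
        if value:
            return value
    return None
```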

Usage Examples

Interactive Chat

python agent_wrapper.py --interactive

Agentic Mode Examples

Create a file:

python agent_wrapper.py --agentic --prompt "Create a Python file called hello.py that prints 'Hello, World!'"

Edit a file:

python agent_wrapper.py --agentic --prompt "Add a function to hello.py that takes a name parameter"

Execute commands:

python agent_wrapper.py --agentic --prompt "Run 'python hello.py' to test the script"

Interactive agentic session:

python agent_wrapper.py --agentic --interactive
# Then you can have a conversation where the AI creates and modifies files!

Single Prompt

python agent_wrapper.py --prompt "Explain quantum computing in simple terms"

Switch Providers

# Use Anthropic Claude
python agent_wrapper.py --provider anthropic --interactive

# Use Google Gemini
python agent_wrapper.py --provider google --prompt "Hello"

Override Settings

# Use a different model
python agent_wrapper.py --model gpt-4 --prompt "Hello"

# Adjust temperature
python agent_wrapper.py --temperature 0.9 --prompt "Be creative"

# Set max tokens
python agent_wrapper.py --max-tokens 2000 --prompt "Write a long response"

List Providers

python agent_wrapper.py --list-providers

View Usage Statistics

# Show cost tracking and usage stats
python agent_wrapper.py --stats

Session Management

# Load a specific session
python agent_wrapper.py --session my_session --agentic --interactive

# Disable session management
python agent_wrapper.py --no-session --agentic

Agentic Mode (Create Files & Execute Commands!)

# Enable agentic mode - AI can now create files and run commands
python agent_wrapper.py --agentic --interactive

# Or with a single prompt
python agent_wrapper.py --agentic --prompt "Create a Python script that prints hello world"

# Disable confirmation prompts (use with caution!)
python agent_wrapper.py --agentic --no-confirm --prompt "Create a file called test.txt"

Available Actions in Agentic Mode

  • CREATE_FILE: Create new files with content
  • EDIT_FILE: Edit existing files (supports replace/append/prepend modes)
  • DELETE_FILE: Delete files (with confirmation)
  • READ_FILE: Read file contents
  • EXECUTE: Run shell commands (with safety checks)
  • EXECUTE_CODE: Execute Python/Node.js code directly
  • SEARCH_FILES: Search for files matching a pattern
  • GIT_COMMIT: Commit changes to git repository
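
A dispatch over action names like the ones above is the natural shape for this feature. The sketch below implements hypothetical handlers for just two of the eight actions; the real wrapper's handler signatures are not shown in this README and may differ:

```python
from pathlib import Path

# Hypothetical handlers for two of the action types listed above.
def create_file(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"created {path}"

def read_file(path: str) -> str:
    return Path(path).read_text()

HANDLERS = {"CREATE_FILE": create_file, "READ_FILE": read_file}

def dispatch(action: dict) -> str:
    """Route a parsed action dict to its handler; unknown types raise."""
    handler = HANDLERS.get(action["type"])
    if handler is None:
        raise ValueError(f"unknown action: {action['type']}")
    args = {k: v for k, v in action.items() if k != "type"}
    return handler(**args)
```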

Enhanced Capabilities

  • Multi-Action Execution: The AI can perform multiple actions in one response using JSON format
  • Project Context: Automatically understands your project structure, dependencies, and git status
  • Session Persistence: All conversations and actions are saved for later review
  • Cost Tracking: Monitor your API usage and costs
  • Smart Memory: Remembers previous actions to provide better context

The AI will automatically detect when you ask it to create or modify files and will perform the actions directly!

Supported LLM Providers

All of the following providers are ✅ Supported:

  • OpenAI: GPT-5, GPT-4.1/Mini/Nano, GPT-4o, o4-mini, GPT-OSS (120B/20B), GPT-3.5 Turbo
  • Anthropic: Claude 3.7 Sonnet, Claude 3.5 Sonnet/Haiku, Claude 3 Opus/Sonnet/Haiku
  • Google: Gemini 2.5 Pro, Gemini 2.0, Gemini 1.5 Pro/Flash, Gemma 3, Gemini Pro
  • Mistral: Mistral Large 3, Mistral Tiny/Small/Medium/Large, Pixtral, Codestral, Mixtral
  • Cohere: Command R+, Command R, Command, Command Light, Aya, Rerank
  • xAI Grok: Grok-4 (Fast/Slow), Grok-3/Mini, Grok Code, Grok-2, Grok Beta

Integrating with Your AI Agent

Python Integration

from agent_wrapper import AgentWrapper

# Initialize wrapper
wrapper = AgentWrapper(config_path='agent_config.yaml')

# Generate response
response = wrapper.generate("What is the capital of France?")
print(response)

# Chat with history
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"},
    {"role": "user", "content": "What's 2+2?"}
]
response = wrapper.chat(messages)
print(response)

Switching Providers Programmatically

from agent_wrapper import AgentWrapper, LLMProviderFactory

# Load config
wrapper = AgentWrapper()

# Switch to Anthropic
wrapper.config['llm']['provider'] = 'anthropic'
wrapper.provider = wrapper._initialize_provider()

# Now use Anthropic
response = wrapper.generate("Hello")

Command Line Options

usage: agent_wrapper.py [-h] [--config CONFIG] [--provider PROVIDER]
                        [--model MODEL] [--prompt PROMPT] [--interactive]
                        [--list-providers] [--temperature TEMPERATURE]
                        [--max-tokens MAX_TOKENS] [--agentic] [--no-confirm]
                        [--stats] [--session SESSION] [--no-session]
                        [--jail-dir JAIL_DIR] [--show-startup] [--configure]
                        [--set-api-key KEY] [--set-provider-key PROVIDER KEY]
                        [--set-default-provider PROVIDER]
                        [--set-default-model MODEL]

options:
  -h, --help            show this help message and exit
  --config, -c          Path to configuration file
  --provider, -p        Override LLM provider from config
  --model, -m           Override LLM model from config
  --prompt              Single prompt to process (non-interactive mode)
  --interactive, -i     Start interactive chat session
  --list-providers      List all available LLM providers
  --temperature, -t     Override temperature setting
  --max-tokens          Override max_tokens setting
  --agentic, -a         Enable agentic mode (AI can create files and execute commands)
  --no-confirm          Disable confirmation prompts for dangerous commands
  --stats               Show usage statistics and cost tracking
  --session             Load a specific session by name or path
  --no-session          Disable session management
  --jail-dir            Override jail directory from config (restricts file operations)
  --show-startup        Show startup screen with available commands and features
  
  # Configuration Commands (saves to config file)
  --configure           Interactive configuration wizard
  --set-api-key KEY     Set API key for current provider
  --set-provider-key PROVIDER KEY  Set API key for specific provider
  --set-default-provider PROVIDER  Set default provider
  --set-default-model MODEL        Set default model

Cross-Platform Compatibility

Windows Support

  • ✅ Works with PowerShell and CMD
  • ✅ Handles Windows paths (backslashes) automatically
  • ✅ Executes Windows commands (PowerShell/cmd)
  • ✅ Config file location: %USERPROFILE%\.ai-agent-cli\config.yaml

Linux Support

  • ✅ Works with bash, zsh, and other shells
  • ✅ Handles Linux paths (forward slashes) automatically
  • ✅ Executes Linux/Unix commands
  • ✅ Config file location: ~/.ai-agent-cli/config.yaml

macOS Support

  • ✅ Works with zsh and bash
  • ✅ Same Unix-style shell and path handling as Linux
  • ✅ Config file location: ~/.ai-agent-cli/config.yaml

Path Handling

The tool uses Python's pathlib which automatically handles:

  • Windows paths: C:\Users\Name\Documents
  • Linux/macOS paths: /home/name/documents
  • Relative paths work the same on all platforms
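
A short illustration of the `pathlib` behavior described above:

```python
from pathlib import Path, PureWindowsPath, PurePosixPath

# The same relative path expression works on every platform;
# pathlib inserts the correct separator when the path is rendered.
target = Path("workspace") / "notes" / "todo.txt"
print(target)  # workspace/notes/todo.txt on Linux/macOS, workspace\notes\todo.txt on Windows

# Pure paths let you reason about foreign-platform paths without running there.
win = PureWindowsPath(r"C:\Users\Name\Documents")
posix = PurePosixPath("/home/name/documents")
print(win.name, posix.name)
```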

Command Execution

  • Windows: Commands run in PowerShell/CMD context
  • Linux/macOS: Commands run in bash/sh context
  • The tool automatically detects your OS and uses the appropriate shell

Example: Cross-Platform Usage

# Works on Windows, Linux, and macOS
python agent_wrapper.py --agentic --prompt "Create a file called test.txt"

# File paths work the same way
python agent_wrapper.py --agentic --prompt "Read the file ./config.yaml"

Security: Jail Directory Feature

Restricting File Operations to a Single Folder

For security, you can restrict all file operations to a single directory (jail directory). This prevents the AI from accessing or modifying files outside the designated workspace.

Configuration

Add jail_directory to your agent_config.yaml:

agentic:
  require_confirmation: true
  jail_directory: "./workspace"  # All file operations restricted to this folder

Examples

Linux/macOS:

jail_directory: "/home/user/ai-workspace"
# or relative path
jail_directory: "./workspace"

Windows:

jail_directory: "C:\\Users\\YourName\\ai-workspace"
# or relative path
jail_directory: ".\\workspace"

How It Works

When jail_directory is set:

  • ✅ All file operations (create, read, edit, delete) are restricted to this directory
  • ✅ Commands execute with this directory as the working directory
  • ✅ Code execution happens within this directory
  • ✅ Path validation prevents accessing files outside the jail
  • ✅ The AI is informed about the restriction in its system prompt

Security Benefits:

  • Prevents accidental modification of system files
  • Isolates AI operations to a safe workspace
  • Protects your important files and directories
  • Makes it safe to experiment with agentic mode

Example Usage

# With jail directory set in config
python agent_wrapper.py --agentic --prompt "Create a Python script"

# Or override jail directory via CLI
python agent_wrapper.py --agentic --jail-dir "./workspace" --prompt "Create a Python script"

# The AI can only create files in ./workspace/
# Attempts to access files outside will be blocked with error message

Security Validation

The tool validates all file paths before operations:

  • ✅ Relative paths are resolved within the jail directory
  • ✅ Absolute paths outside the jail are rejected
  • ✅ Path traversal attempts (../, ..\\) are blocked
  • ✅ Clear error messages when access is denied
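
The core of this kind of check fits in one function. This is a minimal sketch of jail-style path containment using `Path.resolve()` and `relative_to()`, assuming the same rules listed above; it is not the project's actual validator:

```python
from pathlib import Path

def validate_in_jail(requested: str, jail: str) -> Path:
    """Resolve a requested path and reject anything that escapes the jail directory."""
    jail_root = Path(jail).resolve()
    req = Path(requested)
    # Relative paths are anchored inside the jail; absolute paths are taken as-is.
    candidate = req.resolve() if req.is_absolute() else (jail_root / req).resolve()
    try:
        candidate.relative_to(jail_root)  # raises ValueError when outside the jail
    except ValueError:
        raise PermissionError(f"access denied: {candidate} is outside {jail_root}")
    return candidate

# validate_in_jail("notes.txt", "./workspace")        -> allowed
# validate_in_jail("../../etc/passwd", "./workspace") -> PermissionError
```

Because `resolve()` normalizes `..` segments before the containment check, path-traversal strings cannot sneak past the comparison.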

Troubleshooting

"Provider not installed" Error

Install the required package for your provider:

pip install openai  # For OpenAI
pip install anthropic  # For Anthropic
# etc.

"API key not found" Error

Make sure you've set your API key either:

  • In the config file: agent_config.yaml
  • As an environment variable: OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.

Config File Not Found

The tool will automatically create a default config file on first run. You can also manually copy agent_config.yaml.example to agent_config.yaml.

Security Best Practices

  1. Use Environment Variables: Store API keys in environment variables instead of config files
  2. Don't Commit Keys: Add agent_config.yaml to .gitignore if it contains API keys
  3. Use Separate Keys: Use different API keys for development and production
  4. Rotate Keys: Regularly rotate your API keys

Contributing

Feel free to add support for additional LLM providers by:

  1. Creating a new provider class inheriting from LLMProvider
  2. Implementing the generate() and chat() methods
  3. Adding it to LLMProviderFactory.PROVIDERS
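
A self-contained sketch of those three steps. The stand-in base class below only mirrors the interface described above; in the real project you would subclass the existing LLMProvider and register with the real LLMProviderFactory instead:

```python
# Stand-in base class mirroring the documented interface (assumption, not the real one).
class LLMProvider:
    def __init__(self, api_key: str, model: str):
        self.api_key, self.model = api_key, model

    def generate(self, prompt: str) -> str:
        raise NotImplementedError

    def chat(self, messages: list[dict]) -> str:
        raise NotImplementedError

class EchoProvider(LLMProvider):
    """Toy provider that echoes input; replace the bodies with real API calls."""
    def generate(self, prompt: str) -> str:
        return f"[{self.model}] {prompt}"

    def chat(self, messages: list[dict]) -> str:
        # Answer based on the most recent user message.
        return self.generate(messages[-1]["content"])

# Step 3 would then look roughly like:
#   LLMProviderFactory.PROVIDERS["echo"] = EchoProvider
```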

License

This tool is provided as-is for use in your projects.

Examples

Example 1: Code Explanation

python agent_wrapper.py --prompt "Explain this Python code: def fib(n): return n if n < 2 else fib(n-1) + fib(n-2)"

Example 2: Creative Writing

python agent_wrapper.py --provider anthropic --temperature 0.9 --prompt "Write a short story about a robot learning to paint"

Example 3: Technical Analysis

python agent_wrapper.py --provider google --model gemini-pro --prompt "Analyze the pros and cons of microservices architecture"

Example 4: Create a Complete Project

python agent_wrapper.py --agentic --prompt "Create a Python web scraper that fetches news headlines and saves them to a JSON file"

Example 5: Multi-Step Task

python agent_wrapper.py --agentic --interactive
# Then ask: "Create a REST API with Flask, add authentication, write tests, and commit to git"
# The AI will break it down into multiple actions automatically!

Example 6: Code Review and Fix

python agent_wrapper.py --agentic --prompt "Read my main.py file, review it for bugs, and fix any issues you find"

Example 7: Project Setup

python agent_wrapper.py --agentic --prompt "Set up a new React project with TypeScript, ESLint, and Tailwind CSS"

Support

For issues or questions, first check that:

  • The config file is properly formatted (YAML)
  • All required packages are installed
  • API keys are correctly set
  • The provider you're using is supported
