A universal CLI wrapper that makes any AI agent CLI-compatible with configurable LLM selection and API keys. Switch between different LLM providers (OpenAI, Anthropic, Google, Mistral, Cohere, xAI Grok) with a simple configuration file.
- 🔄 Multi-Provider Support: Switch between OpenAI, Anthropic, Google Gemini, Mistral, Cohere, and xAI Grok
- ⚙️ Easy Configuration: Simple YAML config file for managing API keys and settings
- 💬 Interactive Chat: Built-in interactive chat interface
- 🎯 CLI Compatible: Works seamlessly from the command line
- 🔐 Secure: Supports environment variables for API keys and safety confirmations
- 🎨 Beautiful UI: Rich terminal UI with colors and formatting
- 🖥️ Cross-Platform: Works on Windows, Linux, and macOS with automatic OS detection
- 🤖 Agentic Mode: AI can create files, edit files, and execute commands directly in the terminal!
- 📝 Multi-Action Support: Execute multiple actions in a single response
- 🧠 Project Awareness: Automatically detects project type, dependencies, and git status
- 💾 Session Management: Persistent conversation history and action tracking
- 📊 Cost Tracking: Monitor API usage and costs in real-time
- 🔍 File Search: Search and list files in your project
- 💻 Code Execution: Execute Python, Node.js, and shell code directly
- 🔗 Git Integration: Auto-commit changes, check git status
- 📚 Memory System: Remembers previous actions and context
- 🎯 Smart Parsing: Understands JSON action formats and code blocks
- ⚡ Error Recovery: Better error handling and retry logic
✅ Windows (Windows 10/11, PowerShell/CMD)
✅ Linux (Ubuntu, Debian, Fedora, Arch, etc.)
✅ macOS (10.14+)
The tool automatically detects your operating system and uses the appropriate shell and path handling.
- Clone or navigate to this directory:

```bash
# Windows PowerShell
cd AI-CLI-Wrapper

# Linux/macOS
cd AI-CLI-Wrapper
```

- Install dependencies:
```bash
# Windows PowerShell
pip install -r requirements.txt

# Linux/macOS
pip3 install -r requirements.txt
# or
python3 -m pip install -r requirements.txt
```

- Install only the LLM providers you need:
```bash
# For OpenAI
pip install openai

# For Anthropic
pip install anthropic

# For Google
pip install google-generativeai

# For Mistral
pip install mistralai

# For Cohere
pip install cohere

# For xAI Grok (uses the OpenAI package)
pip install openai
```

- Create your config file:
```bash
# Copy the example config
cp agent_config.yaml.example agent_config.yaml

# Or let the tool create it automatically on first run
```

- Edit `agent_config.yaml` and add your API keys:
```yaml
llm:
  provider: openai
  api_key: "your-api-key-here"
  model: gpt-3.5-turbo
```

- Configure via CLI (Quick Setup):
```bash
# Interactive configuration wizard (recommended for first-time setup)
python agent_wrapper.py --configure

# Or set configuration directly via CLI
python agent_wrapper.py --set-default-provider openai --set-api-key "sk-your-key-here"
python agent_wrapper.py --set-provider-key grok "xai-your-key-here"
python agent_wrapper.py --set-default-model gpt-4
```

- Start using it:
```bash
# Show the startup screen with all available commands
python agent_wrapper.py --show-startup

# Or just run without arguments to see the startup screen
python agent_wrapper.py

# Interactive chat mode
python agent_wrapper.py --interactive

# Single prompt
python agent_wrapper.py --prompt "Hello, how are you?"

# List available providers
python agent_wrapper.py --list-providers
```

The easiest way to configure the tool is with the interactive wizard or CLI commands:
Interactive Wizard (Recommended):

```bash
python agent_wrapper.py --configure
```

This will guide you through:
- Selecting a default provider
- Setting API keys
- Choosing a default model
- Configuring additional providers (optional)
- Setting up agentic mode options (optional)
Direct CLI Configuration:

```bash
# Set default provider and API key
python agent_wrapper.py --set-default-provider openai --set-api-key "sk-xxx"

# Set API key for a specific provider
python agent_wrapper.py --set-provider-key grok "xai-xxx"

# Set default model
python agent_wrapper.py --set-default-model gpt-4

# Combine multiple settings
python agent_wrapper.py --set-default-provider anthropic --set-api-key "sk-ant-xxx" --set-default-model claude-3-opus-20240229
```

All CLI configuration commands automatically save to your config file.
The tool looks for configuration in this order:
1. `agent_config.yaml` in the current directory
2. `~/.ai-agent-cli/config.yaml` in your home directory
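This lookup order, plus the environment-variable fallback described below, can be sketched in a few lines. The function names here are illustrative, not the wrapper's actual API:

```python
import os
from pathlib import Path

def resolve_config_path():
    """Return the first config file found, mirroring the documented lookup order."""
    candidates = [
        Path("agent_config.yaml"),                      # 1. current directory
        Path.home() / ".ai-agent-cli" / "config.yaml",  # 2. home directory
    ]
    for candidate in candidates:
        if candidate.is_file():
            return candidate
    return None  # the tool creates a default config in this case

def resolve_api_key(config_value, env_var="OPENAI_API_KEY"):
    """Prefer an explicit config value; fall back to the environment."""
    return config_value or os.environ.get(env_var)
```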
```yaml
llm:
  provider: openai        # Main provider to use
  api_key: ""             # Leave empty to use an environment variable
  model: gpt-3.5-turbo
  temperature: 0.7
  max_tokens: 1000

providers:
  openai:
    api_key: ""
    model: gpt-3.5-turbo
    temperature: 0.7
    max_tokens: 1000
  anthropic:
    api_key: ""
    model: claude-3-sonnet-20240229
    temperature: 0.7
    max_tokens: 1000
  # ... other providers
```

You can also set API keys as environment variables (recommended for security):
```powershell
# Windows PowerShell
$env:OPENAI_API_KEY="your-key-here"
$env:ANTHROPIC_API_KEY="your-key-here"
$env:GOOGLE_API_KEY="your-key-here"
$env:MISTRAL_API_KEY="your-key-here"
$env:COHERE_API_KEY="your-key-here"
$env:XAI_API_KEY="your-key-here"  # For Grok (or use GROK_API_KEY)
```

```bash
# Linux/macOS
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
export GOOGLE_API_KEY="your-key-here"
export MISTRAL_API_KEY="your-key-here"
export COHERE_API_KEY="your-key-here"
export XAI_API_KEY="your-key-here"  # For Grok (or use GROK_API_KEY)
```

```bash
python agent_wrapper.py --interactive
```

Create a file:
```bash
python agent_wrapper.py --agentic --prompt "Create a Python file called hello.py that prints 'Hello, World!'"
```

Edit a file:

```bash
python agent_wrapper.py --agentic --prompt "Add a function to hello.py that takes a name parameter"
```

Execute commands:

```bash
python agent_wrapper.py --agentic --prompt "Run 'python hello.py' to test the script"
```

Interactive agentic session:

```bash
python agent_wrapper.py --agentic --interactive
# Then you can have a conversation where the AI creates and modifies files!
```

```bash
python agent_wrapper.py --prompt "Explain quantum computing in simple terms"
```

```bash
# Use Anthropic Claude
python agent_wrapper.py --provider anthropic --interactive

# Use Google Gemini
python agent_wrapper.py --provider google --prompt "Hello"
```

```bash
# Use a different model
python agent_wrapper.py --model gpt-4 --prompt "Hello"

# Adjust temperature
python agent_wrapper.py --temperature 0.9 --prompt "Be creative"

# Set max tokens
python agent_wrapper.py --max-tokens 2000 --prompt "Write a long response"
```

```bash
python agent_wrapper.py --list-providers
```

```bash
# Show cost tracking and usage stats
python agent_wrapper.py --stats
```

```bash
# Load a specific session
python agent_wrapper.py --session my_session --agentic --interactive

# Disable session management
python agent_wrapper.py --no-session --agentic
```

```bash
# Enable agentic mode - the AI can now create files and run commands
python agent_wrapper.py --agentic --interactive

# Or with a single prompt
python agent_wrapper.py --agentic --prompt "Create a Python script that prints hello world"

# Disable confirmation prompts (use with caution!)
python agent_wrapper.py --agentic --no-confirm --prompt "Create a file called test.txt"
```

- CREATE_FILE: Create new files with content
- EDIT_FILE: Edit existing files (supports replace/append/prepend modes)
- DELETE_FILE: Delete files (with confirmation)
- READ_FILE: Read file contents
- EXECUTE: Run shell commands (with safety checks)
- EXECUTE_CODE: Execute Python/Node.js code directly
- SEARCH_FILES: Search for files matching a pattern
- GIT_COMMIT: Commit changes to git repository
- Multi-Action Execution: The AI can perform multiple actions in one response using JSON format
- Project Context: Automatically understands your project structure, dependencies, and git status
- Session Persistence: All conversations and actions are saved for later review
- Cost Tracking: Monitor your API usage and costs
- Smart Memory: Remembers previous actions to provide better context
The AI will automatically detect when you ask it to create or modify files and will perform the actions directly!
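The wrapper's real action schema is internal to `agent_wrapper.py`, but to make the multi-action idea concrete, here is a hypothetical multi-action JSON response and a minimal parser. The field names (`action`, `path`, `content`, `command`) are assumptions for illustration only:

```python
import json

# Hypothetical shape of a multi-action response from the model.
response_text = """
[
  {"action": "CREATE_FILE", "path": "hello.py", "content": "print('hi')"},
  {"action": "EXECUTE", "command": "python hello.py"}
]
"""

def parse_actions(text):
    """Parse a JSON list of actions, keeping only well-formed entries."""
    return [a for a in json.loads(text) if isinstance(a, dict) and "action" in a]

for act in parse_actions(response_text):
    print(act["action"])  # each action would then be dispatched to a handler
```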
| Provider | Models | Status |
|---|---|---|
| OpenAI | GPT-5, GPT-4.1/Mini/Nano, GPT-4o, o4-mini, GPT-OSS (120B/20B), GPT-3.5 Turbo | ✅ Supported |
| Anthropic | Claude 3.7 Sonnet, Claude 3.5 Sonnet/Haiku, Claude 3 Opus/Sonnet/Haiku | ✅ Supported |
| Google | Gemini 2.5 Pro, Gemini 2.0, Gemini 1.5 Pro/Flash, Gemma 3, Gemini Pro | ✅ Supported |
| Mistral | Mistral Large 3, Mistral Tiny/Small/Medium/Large, Pixtral, Codestral, Mixtral | ✅ Supported |
| Cohere | Command R+, Command R, Command, Command Light, Aya, Rerank | ✅ Supported |
| xAI Grok | Grok-4 (Fast/Slow), Grok-3/Mini, Grok Code, Grok-2, Grok Beta | ✅ Supported |
```python
from agent_wrapper import AgentWrapper

# Initialize wrapper
wrapper = AgentWrapper(config_path='agent_config.yaml')

# Generate a response
response = wrapper.generate("What is the capital of France?")
print(response)

# Chat with history
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"},
    {"role": "user", "content": "What's 2+2?"}
]
response = wrapper.chat(messages)
print(response)
```

```python
from agent_wrapper import AgentWrapper, LLMProviderFactory

# Load config
wrapper = AgentWrapper()

# Switch to Anthropic
wrapper.config['llm']['provider'] = 'anthropic'
wrapper.provider = wrapper._initialize_provider()

# Now use Anthropic
response = wrapper.generate("Hello")
```

```text
usage: agent_wrapper.py [-h] [--config CONFIG] [--provider PROVIDER]
                        [--model MODEL] [--prompt PROMPT] [--interactive]
                        [--list-providers] [--temperature TEMPERATURE]
                        [--max-tokens MAX_TOKENS] [--agentic] [--no-confirm]
                        [--stats] [--session SESSION] [--no-session]

options:
  -h, --help                       show this help message and exit
  --config, -c                     Path to configuration file
  --provider, -p                   Override LLM provider from config
  --model, -m                      Override LLM model from config
  --prompt                         Single prompt to process (non-interactive mode)
  --interactive, -i                Start interactive chat session
  --list-providers                 List all available LLM providers
  --temperature, -t                Override temperature setting
  --max-tokens                     Override max_tokens setting
  --agentic, -a                    Enable agentic mode (AI can create files and execute commands)
  --no-confirm                     Disable confirmation prompts for dangerous commands
  --stats                          Show usage statistics and cost tracking
  --session                        Load a specific session by name or path
  --no-session                     Disable session management
  --jail-dir                       Override jail directory from config (restricts file operations)
  --show-startup                   Show startup screen with available commands and features

Configuration commands (saved to the config file):
  --configure                      Interactive configuration wizard
  --set-api-key KEY                Set API key for current provider
  --set-provider-key PROVIDER KEY  Set API key for specific provider
  --set-default-provider PROVIDER  Set default provider
  --set-default-model MODEL        Set default model
```
- ✅ Works with PowerShell and CMD
- ✅ Handles Windows paths (backslashes) automatically
- ✅ Executes Windows commands (PowerShell/cmd)
- ✅ Config file location: `%USERPROFILE%\.ai-agent-cli\config.yaml`
- ✅ Works with bash, zsh, and other shells
- ✅ Handles Linux paths (forward slashes) automatically
- ✅ Executes Linux/Unix commands
- ✅ Config file location: `~/.ai-agent-cli/config.yaml`
- ✅ Works with zsh and bash
- ✅ Same as Linux support
- ✅ Config file location: `~/.ai-agent-cli/config.yaml`
The tool uses Python's pathlib, which automatically handles:
- Windows paths: `C:\Users\Name\Documents`
- Linux/macOS paths: `/home/name/documents`
- Relative paths, which work the same on all platforms
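A quick illustration of what pathlib normalizes. This is standard-library behavior, not wrapper-specific code:

```python
from pathlib import Path, PurePosixPath, PureWindowsPath

# pathlib parses each flavor with its native separator...
win = PureWindowsPath(r"C:\Users\Name\Documents") / "notes.txt"
posix = PurePosixPath("/home/name/documents") / "notes.txt"
print(win.name, posix.name)  # → notes.txt notes.txt

# ...and relative paths join identically on every OS.
relative = Path("workspace") / "output" / "result.json"
print(relative.parts)  # → ('workspace', 'output', 'result.json')
```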
- Windows: Commands run in PowerShell/CMD context
- Linux/macOS: Commands run in bash/sh context
- The tool automatically detects your OS and uses the appropriate shell
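The OS-aware shell selection described above can be sketched with the standard library. This is illustrative only, assuming `subprocess` with the native shell, and is not the wrapper's actual implementation:

```python
import platform
import subprocess

def run_in_native_shell(command, cwd=None):
    """Run a command via the OS-native shell, roughly as an agentic tool might."""
    if platform.system() == "Windows":
        args = ["powershell", "-Command", command]
    else:
        args = ["/bin/sh", "-c", command]  # bash/sh context on Linux and macOS
    result = subprocess.run(args, cwd=cwd, capture_output=True, text=True)
    return result.stdout.strip()

print(run_in_native_shell("echo hello"))  # → hello
```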
```bash
# Works on Windows, Linux, and macOS
python agent_wrapper.py --agentic --prompt "Create a file called test.txt"

# File paths work the same way
python agent_wrapper.py --agentic --prompt "Read the file ./config.yaml"
```

For security, you can restrict all file operations to a single directory (a jail directory). This prevents the AI from accessing or modifying files outside the designated workspace.
Add `jail_directory` to your `agent_config.yaml`:

```yaml
agentic:
  require_confirmation: true
  jail_directory: "./workspace"  # All file operations restricted to this folder
```

Linux/macOS:
```yaml
jail_directory: "/home/user/ai-workspace"
# or a relative path
jail_directory: "./workspace"
```

Windows:
```yaml
jail_directory: "C:\\Users\\YourName\\ai-workspace"
# or a relative path
jail_directory: ".\\workspace"
```

When `jail_directory` is set:
- ✅ All file operations (create, read, edit, delete) are restricted to this directory
- ✅ Commands execute with this directory as the working directory
- ✅ Code execution happens within this directory
- ✅ Path validation prevents accessing files outside the jail
- ✅ The AI is informed about the restriction in its system prompt
Security Benefits:
- Prevents accidental modification of system files
- Isolates AI operations to a safe workspace
- Protects your important files and directories
- Makes it safe to experiment with agentic mode
```bash
# With a jail directory set in config
python agent_wrapper.py --agentic --prompt "Create a Python script"

# Or override the jail directory via CLI
python agent_wrapper.py --agentic --jail-dir "./workspace" --prompt "Create a Python script"

# The AI can only create files in ./workspace/
# Attempts to access files outside it are blocked with an error message
```

The tool validates all file paths before operations:
- ✅ Relative paths are resolved within the jail directory
- ✅ Absolute paths outside the jail are rejected
- ✅ Path traversal attempts (`../`, `..\\`) are blocked
- ✅ Clear error messages when access is denied
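A minimal sketch of this kind of jail validation using `pathlib` (Python 3.9+ for `is_relative_to`). The function name is illustrative; the wrapper's real checks may differ:

```python
from pathlib import Path

def resolve_in_jail(jail_dir, user_path):
    """Resolve a requested path inside the jail, rejecting escapes."""
    jail_root = Path(jail_dir).resolve()
    candidate = (jail_root / user_path).resolve()
    # Rejects ../ traversal and absolute paths that land outside the jail.
    if not candidate.is_relative_to(jail_root):
        raise PermissionError(f"access denied outside jail: {user_path}")
    return candidate
```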
Install the required package for your provider:
```bash
pip install openai     # For OpenAI
pip install anthropic  # For Anthropic
# etc.
```

Make sure you've set your API key either:

- In the config file: `agent_config.yaml`
- As an environment variable: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.
The tool will automatically create a default config file on first run. You can also manually copy `agent_config.yaml.example` to `agent_config.yaml`.
- Use Environment Variables: Store API keys in environment variables instead of config files
- Don't Commit Keys: Add `agent_config.yaml` to `.gitignore` if it contains API keys
- Use Separate Keys: Use different API keys for development and production
- Rotate Keys: Regularly rotate your API keys
Feel free to add support for additional LLM providers by:
- Creating a new provider class inheriting from `LLMProvider`
- Implementing the `generate()` and `chat()` methods
- Adding it to `LLMProviderFactory.PROVIDERS`
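A toy example of the shape such a provider might take. `LLMProvider` and `LLMProviderFactory.PROVIDERS` are real names from this README, but the base-class signatures below are assumptions, and the registry here is a stand-in:

```python
# Stand-in base class: the real LLMProvider lives in agent_wrapper.py and its
# exact signatures may differ; generate()/chat() match the README's description.
class LLMProvider:
    def generate(self, prompt):
        raise NotImplementedError

    def chat(self, messages):
        raise NotImplementedError

class EchoProvider(LLMProvider):
    """Toy provider that echoes its input; a real one would call a vendor API."""
    def generate(self, prompt):
        return f"echo: {prompt}"

    def chat(self, messages):
        return self.generate(messages[-1]["content"])

# Registration step from the README, shown against a stand-in registry:
PROVIDERS = {"echo": EchoProvider}  # i.e. LLMProviderFactory.PROVIDERS
```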
This tool is provided as-is for use in your projects.
```bash
python agent_wrapper.py --prompt "Explain this Python code: def fib(n): return n if n < 2 else fib(n-1) + fib(n-2)"
```

```bash
python agent_wrapper.py --provider anthropic --temperature 0.9 --prompt "Write a short story about a robot learning to paint"
```

```bash
python agent_wrapper.py --provider google --model gemini-pro --prompt "Analyze the pros and cons of microservices architecture"
```

```bash
python agent_wrapper.py --agentic --prompt "Create a Python web scraper that fetches news headlines and saves them to a JSON file"
```

```bash
python agent_wrapper.py --agentic --interactive
# Then ask: "Create a REST API with Flask, add authentication, write tests, and commit to git"
# The AI will break it down into multiple actions automatically!
```

```bash
python agent_wrapper.py --agentic --prompt "Read my main.py file, review it for bugs, and fix any issues you find"
```

```bash
python agent_wrapper.py --agentic --prompt "Set up a new React project with TypeScript, ESLint, and Tailwind CSS"
```

For issues or questions, please check:
- The config file is properly formatted (YAML)
- All required packages are installed
- API keys are correctly set
- The provider you're using is supported