Skip to content

securekamal/ai-security-scanner

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

2 Commits
 
 
 
 

Repository files navigation

🤖 AI Security Scanner

OWASP LLM Top 10 vulnerability assessment framework for AI/LLM applications — prompt injection detection, static code analysis, and live endpoint testing.

Python OWASP License

Overview

As AI and LLM-powered applications proliferate, new attack surfaces emerge that traditional scanners miss. AI Security Scanner addresses the OWASP LLM Top 10 vulnerabilities with two assessment modes:

  1. Static Analysis — Scans Python/JS/TS source code for AI-specific security anti-patterns
  2. Live Endpoint Testing — Fires real payloads against LLM API endpoints

OWASP LLM Top 10 Coverage

# Vulnerability Static Live
LLM01 Prompt Injection
LLM02 Insecure Output Handling -
LLM03 Training Data Poisoning - -
LLM04 Model Denial of Service -
LLM05 Supply Chain Vulnerabilities -
LLM06 Sensitive Information Disclosure
LLM07 Insecure Plugin Design -
LLM08 Excessive Agency -
LLM09 Overreliance - -
LLM10 Model Theft -

Installation

git clone https://github.com/securekamal/ai-security-scanner.git
cd ai-security-scanner
pip install -r requirements.txt

Usage

Static Analysis (Source Code)

# Scan a single file
python ai_security_scanner.py static /path/to/app.py

# Scan entire project directory
python ai_security_scanner.py static /path/to/ai-app/

# JSON output only
python ai_security_scanner.py static ./src --format json --output results/scan

Detects:

  • Hardcoded API keys for OpenAI, Anthropic, Cohere, HuggingFace
  • User input concatenated directly into system prompts (Prompt Injection risk)
  • LLM output rendered without sanitization (XSS via LLM)
  • eval()/exec() called with LLM output (RCE risk)
  • Missing rate limiting on LLM endpoints
  • Excessive tool/function permissions granted to agents

Live Endpoint Testing

# Test OpenAI-compatible endpoint
python ai_security_scanner.py live https://api.openai.com/v1/chat/completions \
  --api-key $OPENAI_API_KEY \
  --model gpt-4 \
  --test-injection \
  --test-exfiltration \
  --max-payloads 20

# Test custom LLM endpoint
python ai_security_scanner.py live http://localhost:8000/v1/chat/completions \
  --model llama-3 \
  --test-injection \
  --test-dos

Payload Categories

Prompt Injection (LLM01)

  • Direct instruction override (Ignore all previous instructions...)
  • Role confusion attacks (DAN, maintenance mode, fictional framing)
  • Delimiter injection (###END_PROMPT, ```, [SYSTEM])
  • Token smuggling & encoding tricks (Unicode RTL, homoglyphs)
  • Indirect injection via data fields

Data Exfiltration (LLM06)

  • System prompt extraction
  • Context window dumping
  • Session data disclosure
  • Credential/API key leakage detection in responses (regex matching: AWS keys, JWTs, private keys, connection strings)

Model DoS (LLM04)

  • Token flooding attacks
  • Recursive task generation
  • Computationally expensive requests

Sample Output

[*] Static Analysis Mode: ./chatbot-app/

  [!] app.py: 2 issue(s)
  [!] routes/chat.py: 1 issue(s)

======================================
[+] Scan complete in 0.8s
[+] Total findings: 3 (Critical: 2, High: 1)
[+] JSON report: ai_security_report.json
[+] HTML report: ai_security_report.html

Report

The HTML report includes:

  • Executive summary with severity breakdown
  • Per-finding evidence snippets and payloads
  • OWASP LLM Top 10 category mapping
  • CWE references
  • Specific, actionable remediation steps

Remediation Guidance

Vulnerability Fix
Prompt Injection Isolate user input in user role only; never concatenate into system prompt
Insecure Output Sanitize with bleach/DOMPurify before rendering
Hardcoded Keys Use python-dotenv, AWS Secrets Manager, or HashiCorp Vault
No Rate Limiting Apply Flask-Limiter or API gateway throttling
Excessive Agency Grant LLM agents minimum required tool permissions (least privilege)

Legal Disclaimer

⚠️ Only test systems you own or have explicit written authorization to test.


Author

securekamal — Product Security Engineer | AI/LLM Security


References

About

AI/ML model security scanner — prompt injection, poisoning, model inversion

Topics

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors

Languages