HexaSLM builds on Unsloth to provide fast fine-tuning and inference for Large Language Models (LLMs). It integrates customized data pipelines for cybersecurity with streamlined CLI tools for interaction.
Get Started • Documentation • Features • Contributing
- ⚡ Blazing Fast: 2x faster training and 70% less memory usage with Unsloth.
- 🧠 Chain of Verification: Implements CoVe to reduce hallucinations and verify cybersecurity advice.
- 🛡️ Cybersecurity Focused: Pre-configured data pipelines for cybersecurity datasets.
- 🖥️ CLI Interface: Rich, interactive terminal chat for testing your models.
- 📦 Modular: Standard Python package structure for easy integration.
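The data-pipeline idea can be pictured as a single transform: a raw cybersecurity Q&A record becomes a chat-formatted training example. The sketch below is illustrative only, assuming hypothetical record fields (`question`, `answer`) and a hypothetical `format_example` helper; it is not HexaSLM's actual API:

```python
# Hypothetical sketch of one data-pipeline step: turn a raw Q&A record
# into the chat-message layout most SFT trainers expect.

def format_example(record: dict, system_prompt: str) -> dict:
    """Map a raw record to a system/user/assistant message list."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": record["question"]},
            {"role": "assistant", "content": record["answer"]},
        ]
    }

raw = {
    "question": "What is SQL injection?",
    "answer": "An attack that injects malicious SQL through untrusted input.",
}
example = format_example(raw, "You are a cybersecurity expert assistant.")
print(example["messages"][1]["content"])  # the user turn
```

Keeping the transform a pure function like this makes it easy to map over an entire dataset with `datasets.Dataset.map`.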
```mermaid
graph LR
    A[Raw Data] -->|Process| B(Processed Dataset)
    B -->|Fine-tune| C{Unsloth Model}
    C -->|Save| D[LoRA Adapters]
    D -->|Load| E[Inference CLI]
    E -->|CoVe| F[Verified Output]
    F -->|Chat| G((User))
    style C fill:#f96,stroke:#333
    style E fill:#9cf,stroke:#333
    style F fill:#6f9,stroke:#333
```
Clone the repository and install dependencies using uv (recommended) or pip.
```shell
# Clone
git clone https://github.com/AneKazek/HexaSLM.git
cd HexaSLM

# Install with uv (recommended)
uv sync

# Or with pip
pip install .
```

Interact with your model directly from the terminal!
```shell
# Run the chat interface (defaults to the included LoRA adapter)
python -m hexa_slm.inference.console

# Or specify a custom model/adapter
python -m hexa_slm.inference.console --model-path "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"
```

See CLI Options:
```text
$ python -m hexa_slm.inference.console --help
Usage: python -m hexa_slm.inference.console [OPTIONS]

  Start an interactive chat session with the HexaSLM model.

Options:
  --model-path TEXT           Path to the model or HuggingFace ID
                              [default: models/cove_cybersec_lora]
  --use-4bit / --no-use-4bit  Use 4-bit quantization  [default: True]
  --system-prompt TEXT        System prompt for the chat
                              [default: You are a cybersecurity expert assistant.]
  --help                      Show this message and exit.
```

HexaSLM implements a systematic Chain of Verification (CoVe) process to ensure high-quality, secure responses.
```shell
# Enable CoVe mode in the CLI
python -m hexa_slm.inference.console --cove
```

When enabled, the model follows a four-step reasoning process:
- Initial Analysis: Breaks down the request.
- Verification Planning: Identifies critical security checks.
- Systematic Verification: Validates against OWASP/NIST standards.
- Final Verified Response: Delivers the safe, verified answer.
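The four steps above can be sketched as one verification loop. This is an illustrative outline, not HexaSLM's actual implementation; `generate` stands in for any call to the fine-tuned model (prompt string in, text out):

```python
def cove_respond(question: str, generate) -> str:
    """Chain-of-Verification sketch: draft, plan checks, verify, revise.

    `generate` is any callable mapping a prompt string to model text.
    """
    # 1. Initial Analysis: produce a baseline draft answer.
    draft = generate(f"Answer the question: {question}")

    # 2. Verification Planning: ask for the critical security checks.
    plan = generate(f"List verification questions for this draft:\n{draft}")

    # 3. Systematic Verification: answer each check independently,
    #    so the checks are not biased by the draft's own reasoning.
    checks = [generate(q) for q in plan.splitlines() if q.strip()]

    # 4. Final Verified Response: revise the draft in light of the checks.
    return generate(
        f"Question: {question}\nDraft: {draft}\n"
        f"Verification findings: {'; '.join(checks)}\n"
        "Write the final, corrected answer."
    )

# Smoke test with a stub model that just echoes a summary of its prompt.
stub = lambda prompt: f"[model output for: {prompt[:30]}...]"
print(cove_respond("Is MD5 safe for password hashing?", stub))
```

Answering the verification questions in separate calls (step 3) is the key design choice in CoVe: it stops the model from simply rationalizing its first draft.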
To train a new model, use the provided notebooks in `notebooks/`:

- `01-hexaslm-train.ipynb`: Main fine-tuning workflow.
- `02-hexaslm-inference.ipynb`: Validation and testing.
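For orientation, the core of a training notebook like this typically reduces to the standard Unsloth + TRL recipe. The sketch below is a minimal outline under stated assumptions: the base model name, LoRA settings, dataset path, and output directory are illustrative placeholders rather than the project's exact configuration, and running it requires a GPU with the `unsloth` and `trl` packages installed:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a 4-bit quantized base model (placeholder name; any
# Unsloth-supported model works).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder path; assumes each record has a pre-formatted "text" field.
dataset = load_dataset("json", data_files="data/processed/train.jsonl")["train"]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        output_dir="models/cove_cybersec_lora",
    ),
)
trainer.train()

# Saves only the LoRA adapters, which is what the inference CLI loads.
model.save_pretrained("models/cove_cybersec_lora")
```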
```text
HexaSLM/
├── data/                  # Data storage
│   └── raw/               # Original, immutable data
├── models/                # Model checkpoints (adapters)
├── notebooks/             # Jupyter notebooks
├── src/                   # Source code
│   └── hexa_slm/
│       ├── data/          # Data loaders
│       ├── inference/     # Inference scripts (CLI)
│       └── utils/         # Helpers
├── pyproject.toml         # Configuration
└── README.md              # You are here!
```
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the Apache 2.0 License. See LICENSE for more information.