Standalone Python naming/branding pipeline with:
- candidate generation
- LLM ideation (OpenRouter, OpenAI-compatible local runtimes, hybrid)
- async validation
- exclusion memory (SQLite) to avoid re-validating eliminated names
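The exclusion memory above can be sketched as a small SQLite table keyed by normalized name, consulted before validation so eliminated names are never re-checked. A minimal sketch; the table and column names are illustrative, not the repository's actual schema:

```python
import sqlite3

def open_exclusions(path=":memory:"):
    """Open (or create) the exclusion store. Schema is hypothetical."""
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS excluded_names ("
        " name TEXT PRIMARY KEY,"
        " reason TEXT,"
        " excluded_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    return db

def exclude(db, name, reason):
    """Record a name as eliminated so future runs skip it."""
    db.execute(
        "INSERT OR IGNORE INTO excluded_names (name, reason) VALUES (?, ?)",
        (name.lower(), reason),
    )
    db.commit()

def filter_new(db, candidates):
    """Drop candidates that were already eliminated in a previous run."""
    seen = {row[0] for row in db.execute("SELECT name FROM excluded_names")}
    return [c for c in candidates if c.lower() not in seen]
```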
This repository expects secrets to be loaded via .envrc:
- OPENROUTER_API_KEY
- OPENROUTER_HTTP_REFERER
- OPENROUTER_X_TITLE

Load and use env like this:

direnv allow .
direnv exec . env | rg OPENROUTER

Important: run commands that need remote access via direnv exec . <command>.
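Scripts that call OpenRouter can fail fast when the environment was not loaded through direnv. A sketch of such a check; treating only OPENROUTER_API_KEY as required and the other two as optional attribution headers is an assumption:

```python
import os
import sys

REQUIRED = ["OPENROUTER_API_KEY"]
OPTIONAL = ["OPENROUTER_HTTP_REFERER", "OPENROUTER_X_TITLE"]

def check_openrouter_env():
    """Exit early with a direnv hint when secrets are missing."""
    missing = [k for k in REQUIRED if not os.environ.get(k)]
    if missing:
        sys.exit(f"Missing {', '.join(missing)}; run via: direnv exec . <command>")
    # Optional attribution headers: note their absence but continue.
    for key in OPTIONAL:
        if not os.environ.get(key):
            print(f"note: {key} not set", file=sys.stderr)
```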
The recurring branding automations currently run from dedicated Codex worktrees. Do not remove these paths during routine worktree cleanup:
- ~/.codex/worktrees/automation-branding-fusion/brandname-generator
- ~/.codex/worktrees/automation-branding-health/brandname-generator
Current automation mapping:
- branding-fusion-run (generation lane): automation-branding-fusion
- branding-fusion-run-2 (fusion lane): automation-branding-fusion
- creative-run-check (validation lane): automation-branding-health
If you need to reclaim them, pause or reconfigure the automations first.
Use Python 3.11+ and a local virtual environment:
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
python -m pip install -r requirements-dev.txt
python -m playwright install chromium

Notes:
- requirements.txt is intentionally small and only covers optional capabilities currently used by the branding pipeline:
  - playwright for EUIPO/Swissreg browser probes
  - wordfreq for source zipf filtering in name_input_ingest.py
- Core candidate generation/validation scripts are standard-library based.
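The wordfreq-based zipf filtering mentioned in the notes can be sketched as keeping words in a mid-frequency band: too-common words (high zipf) and unknown strings (zipf 0) are both dropped. The thresholds and the injected lookup are illustrative, not the actual name_input_ingest.py logic:

```python
def filter_by_zipf(words, zipf_of, min_zipf=1.5, max_zipf=4.5):
    """Keep words whose zipf frequency falls in [min_zipf, max_zipf].

    zipf_of(word) should return a zipf frequency (roughly 0-8), e.g.
    wordfreq.zipf_frequency(word, "en") when wordfreq is installed.
    """
    return [w for w in words if min_zipf <= zipf_of(w) <= max_zipf]
```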
direnv exec . scripts/branding/run_openrouter_lane.sh --lane 0 --out-dir /tmp/branding_openrouter_tuned
direnv exec . scripts/branding/run_openrouter_lane.sh --lane 1 --out-dir /tmp/branding_openrouter_tuned

Assumes the LM Studio local server is running at http://127.0.0.1:1234/v1.
python3 scripts/branding/test_local_llm_warm_cache.py \
--provider=openai_compat \
--base-url=http://127.0.0.1:1234/v1 \
--model=llama-3.3-8b-instruct-omniwriter \
--ttl-s=3600 \
--keep-alive=30m \
--runs=5 \
--gap-s=1

Assumes Ollama is running at http://127.0.0.1:11434.

zsh scripts/branding/test_ollama_local_smoke.sh \
--model gemma3:12b \
--keep-alive 30m

Runs one standardized probe per lane and prints one comparison table:

zsh scripts/branding/benchmark_local_llm_profiles.sh

Uses the local LM Studio server plus OpenRouter in the same ideation stage.
direnv exec . python3 scripts/branding/naming_campaign_runner.py \
--max-runs=1 \
--sleep-s=0 \
--no-mini-test \
--generator-no-external-checks \
--generator-only-llm-candidates \
--llm-ideation-enabled \
--llm-provider=hybrid \
--llm-hybrid-local-models=llama-3.3-8b-instruct-omniwriter \
--llm-hybrid-remote-models=mistralai/mistral-small-creative \
--llm-hybrid-local-share=0.75 \
--llm-openai-base-url=http://127.0.0.1:1234/v1 \
--llm-openai-ttl-s=3600 \
--llm-openai-keep-alive=30m \
--out-dir=/tmp/branding_hybrid

Quality mode (slower, stronger): switch the local model to qwen3-vl-30b-a3b-instruct-mlx.
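The --llm-hybrid-local-share flag splits ideation calls between the local and remote model pools. One way such a split can be implemented is a greedy allocator that keeps the running local fraction tracking the target share; this is a sketch of the idea, not the runner's actual scheduling logic:

```python
def route_calls(n_calls, local_share):
    """Greedily assign each call so the local fraction tracks local_share."""
    plan, local, total = [], 0, 0
    for _ in range(n_calls):
        # Send the call locally whenever the local fraction is below target.
        if total == 0 or local / total < local_share:
            plan.append("local")
            local += 1
        else:
            plan.append("remote")
        total += 1
    return plan
```

With local_share=0.75 and 8 calls, 6 go local and 2 remote, matching the configured share exactly.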
Canonical prompt locations:
- Base prompts: resources/branding/llm/llm_prompt.*.txt
- New prompt template: resources/branding/llm/llm_prompt.brand_market_template_v1.txt
- Recommended custom layout (per brand/market): resources/branding/llm/prompts/<brand_slug>/<market_slug>.txt
How to wire a prompt into generation:
- Direct runner: pass --llm-prompt-template-file <absolute_or_repo_path>
- Two-lane creation config: set the same flag inside generation_command in:
  - resources/branding/configs/creation_lane.default.toml
  - resources/branding/configs/creation_lane.creative_hybrid.toml
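Inside the creation-lane TOML, the wiring would look roughly like this. The generation_command key is taken from above; the surrounding structure, line continuation style, and example path are assumptions, not the config files' verified contents:

```toml
# Hypothetical fragment of a creation_lane.*.toml
generation_command = """
python3 scripts/branding/naming_campaign_runner.py \
  --llm-prompt-template-file resources/branding/llm/prompts/acme/dach.txt
"""
```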
Recommendation:
- Keep one prompt file per brand+market variant.
- Do not edit shared base prompts for experiments; fork them to a brand/market file.
- Keep run outputs isolated per variant (--out-dir) so comparisons stay clean.
Shortcut wrappers:
- zsh scripts/branding/run_hybrid_lmstudio_mistral.sh
- zsh scripts/branding/run_hybrid_ollama_mistral.sh
- Profile shortcuts (LM Studio hybrid):
  - zsh scripts/branding/run_hybrid_lmstudio_mistral.sh --fast
  - zsh scripts/branding/run_hybrid_lmstudio_mistral.sh --quality
  - zsh scripts/branding/run_hybrid_lmstudio_mistral.sh --creative
- Optional remote-model mix (Mistral + Claude via OpenRouter):
  zsh scripts/branding/run_hybrid_lmstudio_mistral.sh --creative --remote-models mistralai/mistral-small-creative,anthropic/claude-sonnet-4.5
Supervisor loop (foreground):
zsh scripts/branding/run_continuous_branding_supervisor.sh \
--out-dir test_outputs/branding/continuous_hybrid \
--backend auto \
--fallback-backend ollama \
--profile-plan fast,quality,creative \
--max-usd-per-run 0.75 \
--target-good 120 \
--target-strong 40

--target-good / --target-strong count strict survivors
(checked recommendations with full expensive-check pass/warn coverage and no fail/error).
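The strict-survivor definition above can be expressed as a small filter: a recommendation survives only if every expensive check resolved to pass or warn. The record field names and status strings here are assumptions about the result schema, not the pipeline's actual format:

```python
def strict_survivors(records):
    """Keep recommendations whose expensive checks all passed or warned."""
    survivors = []
    for rec in records:
        checks = rec.get("expensive_checks", {})
        statuses = set(checks.values())
        # Require full coverage (non-empty) and no fail/error statuses.
        if checks and statuses <= {"pass", "warn"}:
            survivors.append(rec["name"])
    return survivors
```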
LaunchAgent installer (background, survives terminal close/relogin):
zsh scripts/branding/install_launchd_continuous_branding.sh --install
zsh scripts/branding/install_launchd_continuous_branding.sh --status

Progress report:
zsh scripts/branding/report_campaign_progress.sh \
--out-dir test_outputs/branding/continuous_hybrid \
--top-n 25

For a new market or brand line, duplicate the config pair and adjust:
- scope / store countries: scope: global,eu,dach
- screening countries: de,ch,it (or your target list)
- prompt file: --llm-prompt-template-file resources/branding/llm/prompts/<brand>/<market>.txt
- naming constraints: --generator-min-len, --generator-max-len, --llm-rounds, --llm-candidates-per-round
- market lexicon/seeds: update seeds and source inputs (resources/branding/inputs/source_inputs_v2.csv or a market-specific copy)
- legal gate settings: keep validation_lane.legal_heavy.toml defaults for the final shortlist; adjust only if target registry coverage differs.
Suggested pattern:
- resources/branding/configs/creation_lane.<brand>_<market>.toml
- resources/branding/configs/validation_lane.<brand>_<market>.toml
- test_outputs/branding/<brand>_<market>/...
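The suggested layout can be generated mechanically from brand and market slugs; a trivial helper sketch (the function name is illustrative, not part of the repo):

```python
def variant_paths(brand, market):
    """Return the config pair and output dir for one brand/market variant."""
    stem = f"{brand}_{market}"
    return {
        "creation": f"resources/branding/configs/creation_lane.{stem}.toml",
        "validation": f"resources/branding/configs/validation_lane.{stem}.toml",
        "out_dir": f"test_outputs/branding/{stem}",
    }
```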
- Acceptance-tail automation (decision pack -> final survivors + legal precheck):
  direnv exec . python3 scripts/branding/run_acceptance_tail.py --pack-dir <decision_pack_dir>
- Two-lane workflow (compact create lane -> manual review -> validation lane):
  1. direnv exec . python3 scripts/branding/run_creation_lane.py --config resources/branding/configs/creation_lane.default.toml
  2. Review the generated review_unique_top120.csv in the new decision pack (keep/maybe/drop).
  3. direnv exec . python3 scripts/branding/run_validation_lane.py --config resources/branding/configs/validation_lane.default.toml --pack-dir <decision_pack_dir>
- Tuned profiles:
  1. direnv exec . python3 scripts/branding/run_creation_lane.py --config resources/branding/configs/creation_lane.creative_hybrid.toml
  2. Review the generated review_unique_top160.csv in the tuned decision pack (keep/maybe/drop).
  3. direnv exec . python3 scripts/branding/run_validation_lane.py --config resources/branding/configs/validation_lane.legal_heavy.toml --pack-dir <decision_pack_dir>
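The manual-review step can be closed out with a small filter that keeps only rows marked keep before the validation lane runs. The column names (name, decision) are assumptions about the review CSV layout:

```python
import csv
import io

def keep_rows(csv_text, column="decision", value="keep"):
    """Return names whose review decision matches value."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["name"] for row in reader if row.get(column) == value]
```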
- Full runner flags:
python3 scripts/branding/naming_campaign_runner.py --help
- Branding docs index:
docs/branding/README.md
- Detailed operational guide:
docs/branding/name_generator_guide.md
- Continuous test plan (mostly automated):
docs/branding/continuous_pipeline_test_plan.md
- Deferred improvement backlog:
docs/branding/continuous_pipeline_deferred_backlog.md
- Static inputs/examples:
resources/branding/
- Historical output artifacts:
artifacts/branding/legacy/2026-02/