# LLM Memory


Persistent memory for Claude Code. Every session starts where the last one left off.

## The Problem

Claude Code forgets everything between sessions. Long conversations get "compacted" — lossy-summarized into oblivion. If you work on a project across many sessions, you lose decisions, corrections, and context constantly. You spend the first 10 minutes re-explaining where you left off. Mistakes get repeated. Ideas get lost.

LLM Memory fixes this. It gives Claude Code a local, persistent memory that survives across sessions and syncs across machines — no cloud services, no external APIs, everything on your machine.

## Quick Install

```sh
curl -sL https://raw.githubusercontent.com/scottf007/llm_memory/main/install.sh | bash
```

Requires Python 3.10+, jq, sqlite3, and curl. No Node.js needed. Restart Claude Code after installing.

Or install from source:

```sh
git clone https://github.com/scottf007/llm_memory.git
cd llm_memory
./install.sh
```

## What It Does

**MCP Server** — A local Python server that Claude Code spawns on startup. Provides 8 memory tools for storing, searching, connecting, and auditing memories. Backed by JSON files with a SQLite/FTS5 index for fast full-text search.

**Lifecycle Hooks** — Shell scripts that fire automatically at session start, before compaction, and at session end. They load project context, archive raw transcripts, and ensure nothing falls through the cracks.

**Web Dashboard** — A read-only browser UI for browsing your memories as a searchable timeline or interactive knowledge graph.
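The storage model the MCP server description implies can be sketched in a few lines: JSON is the record format, and SQLite/FTS5 is a derived full-text index over it. The table and field names here are illustrative assumptions, not the project's actual schema.

```python
import json
import sqlite3
import uuid

# Sketch of the storage model: a JSON record plus an FTS5 index row.
# Table and field names are assumptions, not the real schema.
record = {
    "uuid": uuid.uuid4().hex,  # 32-character hex id, as described below
    "type": "note",
    "project": "llm_memory",
    "body": "Chose SQLite FTS5 for full-text search over JSON records.",
}
record_json = json.dumps(record)  # roughly what would land in records/

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE VIRTUAL TABLE memories USING fts5(uuid UNINDEXED, type, project, body)"
)
db.execute(
    "INSERT INTO memories VALUES (:uuid, :type, :project, :body)", record
)

# A full-text query, roughly what memory_search would run under the hood.
hits = db.execute(
    "SELECT uuid FROM memories WHERE memories MATCH ?", ("sqlite",)
).fetchall()
```

The `UNINDEXED` column keeps the UUID retrievable without polluting the full-text index.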

## How It Works

```
Session starts
  --> Hooks sweep uncollected transcripts, load project narrative + recent notes
  --> Claude knows exactly where you left off

You work normally
  --> Claude stores decisions, corrections, and insights as memories
  --> Hooks monitor the session and track progress

Session ends
  --> Raw JSONL transcript archived
  --> Session log created automatically
  --> Next session picks up right where you stopped
```

Project narratives are the core feature. Each project gets a living document generated from your raw JSONL transcripts, covering session history, decisions, gotchas, current state, and next steps. When you start a session in a project with no narrative, Claude reads the transcripts and writes one automatically.

## Memory Tools

| Tool | Description |
|------|-------------|
| `memory_store` | Save a narrative, note, or session log |
| `memory_search` | Full-text search across all memories |
| `memory_recent` | List recent memories, filtered by project or type |
| `memory_get` | Retrieve a specific memory by UUID |
| `memory_connect` | Create a relationship between two memories |
| `memory_explore` | Traverse the memory graph from a starting point |
| `memory_delete` | Remove a memory |
| `narrative_coverage` | Check which transcripts are processed into a narrative |

Three memory types: `narrative` (per-project living document), `note` (atomic fact, decision, or correction), and `session_log` (automatic session record).

Two relationship types: `supersedes` (narrative versioning) and `related_to` (linked notes).
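As a sketch, `memory_connect` and `memory_explore` amount to edges and traversal over memory ids. The edge tuples, ids, and `depth` parameter here are assumptions for illustration, not the tools' actual interface.

```python
# Sketch of the two relationship types as a tiny in-memory graph.
# Edge tuples and memory ids are illustrative assumptions.
edges = [
    ("narr-v2", "supersedes", "narr-v1"),   # narrative versioning
    ("note-a", "related_to", "note-b"),     # linked notes
    ("note-b", "related_to", "note-c"),
]

def explore(start: str, depth: int = 2) -> set[str]:
    """Breadth-first traversal of connected memories, like memory_explore."""
    seen, frontier = {start}, {start}
    for _ in range(depth):
        nxt = set()
        for src, _rel, dst in edges:
            if src in frontier and dst not in seen:
                nxt.add(dst)
            if dst in frontier and src not in seen:
                nxt.add(src)
        seen |= nxt
        frontier = nxt
    return seen
```

Traversal is bidirectional, so exploring from an old narrative also surfaces the version that supersedes it.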

## Dashboard

```sh
llm-memory-dashboard          # http://localhost:8765
llm-memory-dashboard 9000     # custom port
```

Timeline view with search and type/project filters. Force-directed knowledge graph showing how memories connect. Read-only — never modifies your data.

## Multi-Device Sync

LLM Memory syncs between machines via Syncthing. JSON record files and transcripts sync automatically; each machine builds its own SQLite index on startup. No merge conflicts — every record uses a globally unique UUID.

```sh
python3 ~/.claude/memory/lib/setup_syncthing.py
```
## Architecture

```
┌──────────────┐     MCP (stdio)     ┌──────────────┐      ┌───────────┐
│ Claude Code  │◄───────────────────►│ LLM Memory   │─────►│ records/  │
│              │                     │ server.py    │      │ (JSON)    │
└──────────────┘                     └──────┬───────┘      └───────────┘
       │                                    │
       │ lifecycle hooks + CLAUDE.md        ▼
       │                             ┌────────────┐
       ▼                             │ SQLite     │ ← derived index
 SessionStart → auto-load narrative  │ + FTS5     │   (rebuildable)
 PostToolUse  → monitor transcript   └────────────┘
 PreCompact   → save before compact
 SessionEnd   → archive transcript
```

**Records are JSON files.** Each memory is a JSON file in `~/.claude/memory/records/`. The SQLite database is a derived index rebuilt from these files on server startup. The database is disposable — delete it anytime and rebuild.

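The rebuild step can be sketched as a single pass over the records directory. The directory layout, schema, and function name are assumptions for illustration — this is not the actual server code behind `--rebuild`.

```python
import json
import sqlite3
from pathlib import Path

# Sketch of rebuilding the disposable index from JSON record files.
# Layout and schema are illustrative assumptions, not the real server code.
def rebuild_index(records_dir: Path, db_path: str = ":memory:") -> sqlite3.Connection:
    db = sqlite3.connect(db_path)
    db.execute("DROP TABLE IF EXISTS memories")
    db.execute("CREATE VIRTUAL TABLE memories USING fts5(uuid UNINDEXED, body)")
    for path in sorted(records_dir.glob("*.json")):
        rec = json.loads(path.read_text())
        db.execute("INSERT INTO memories VALUES (?, ?)", (rec["uuid"], rec["body"]))
    db.commit()
    return db
```

Because every row is derived from a file on disk, deleting the database loses nothing — the next startup reproduces it.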
**UUIDs, not integers.** All records use 32-character hex UUIDs. No collisions across machines.

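The collision-free sync claim can be sketched like this (the filename scheme and record fields are assumptions): each writer derives the filename from a fresh UUID, so two machines never produce the same path and Syncthing has nothing to merge.

```python
import json
import uuid
from pathlib import Path

# Sketch: record filename derived from a fresh 32-character hex UUID.
# The path layout and fields are illustrative assumptions.
def store_record(records_dir: Path, body: str) -> Path:
    rec_id = uuid.uuid4().hex  # 32 hex chars, globally unique in practice
    path = records_dir / f"{rec_id}.json"
    path.write_text(json.dumps({"uuid": rec_id, "body": body}))
    return path
```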
**Narratives are written from raw transcripts, not from summaries.** Transcripts capture the user's exact words, the debugging loops, the moments where direction changed. Summaries are lossy.

**What the installer does:** checks dependencies, downloads code to `~/.claude/memory/lib/`, creates a Python venv, registers the MCP server with Claude Code, installs all 4 lifecycle hooks, and processes any existing transcripts into session logs.

**What happens on session start:**

  1. Auto-update check against GitHub
  2. Transcript sweep — collects any JSONL files not yet archived
  3. Project detection from working directory
  4. Narrative + recent notes loaded into context
  5. If no narrative exists but transcripts do, Claude generates one automatically
  6. Staleness check — flags narratives that need updating
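Step 6 can be sketched as a timestamp comparison: a narrative is flagged when any transcript is newer than it. The mtime-based rule and file layout here are assumptions — the real check may weigh other signals.

```python
from pathlib import Path

# Sketch of a staleness check: flag the narrative if any transcript
# is newer. The mtime rule is an illustrative assumption.
def narrative_is_stale(narrative: Path, transcripts: list[Path]) -> bool:
    newest_transcript = max((t.stat().st_mtime for t in transcripts), default=0.0)
    return narrative.stat().st_mtime < newest_transcript
```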

**Data layout:**

```
~/.claude/memory/
  records/        ← one JSON file per memory (source of truth, synced)
  transcripts/    ← raw session JSONL files (synced)
  config/         ← shared CLAUDE.md rules (synced)
  memory.db       ← local search index (rebuilt on startup, never synced)
  lib/            ← installed code + Python venv (not synced)
```

## Configuration

LLM Memory injects rules into `~/.claude/CLAUDE.md` that tell Claude when and how to use memory tools — when to search for past context, when to store decisions, how to write narratives. The source of these rules lives at `~/.claude/memory/config/CLAUDE.md` and syncs between machines via Syncthing.

## Development

```sh
git clone https://github.com/scottf007/llm_memory.git
cd llm_memory
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
pip install pytest
python3 -m pytest tests/
```

## Troubleshooting

**MCP server not loading**

- Run `claude mcp list` and verify `llm_memory` appears
- Check the Python version: `python3 --version` (needs 3.10+)
- Re-register: `claude mcp remove llm_memory --scope user`, then re-run `install.sh`

**Hooks not firing**

- Check `~/.claude/settings.json` for hook entries
- Verify the scripts are executable: `ls -la ~/.claude/memory/lib/hooks/`
- Re-run `install.sh` to reinstall the hooks

**Permission errors**

- `chmod +x ~/.claude/memory/lib/hooks/*.sh`

**Database issues**

- Rebuild anytime: `python3 ~/.claude/memory/lib/server.py --rebuild`
- The database is disposable — the JSON record files are the real data

## Uninstall

```sh
claude mcp remove llm_memory --scope user
# Remove hook entries from ~/.claude/settings.json
rm -rf ~/.claude/memory/lib/
# Optionally delete all stored memories:
# rm -rf ~/.claude/memory/records/ ~/.claude/memory/transcripts/
```

## License

MIT
