# Development workflow

```bash
make install      # Install dependencies
make format       # Format code (black + isort)
make check-types  # Run mypy type checking
make lint         # Run all linting (format + types)
make test         # Run tests (no external APIs)
make test-all     # Run all tests (includes API tests)
make check        # Full check (lint + test)

# Redis setup
make redis-start  # Start Redis Stack container
make redis-stop   # Stop Redis Stack container

# Documentation
make docs-build   # Build documentation
make docs-serve   # Serve docs locally
```

Pre-commit hooks are also configured, which you should run before you commit:
```bash
pre-commit run --all-files
```

- Most core classes have both sync and async versions (e.g., `SearchIndex`/`AsyncSearchIndex`)
- Follow existing patterns when adding new functionality
```python
# Index schemas define structure
schema = IndexSchema.from_yaml("schema.yaml")
index = SearchIndex(schema, redis_url="redis://localhost:6379")
```

- CRITICAL: Do not change this line unless explicitly asked:

```python
token.strip().strip(",").replace("“", "").replace("”", "").lower()
```
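As a sanity check (not part of the codebase), here is what that expression does to a few sample tokens, assuming the characters being removed are curly quotes. The `normalize` wrapper is purely illustrative:

```python
# Hypothetical wrapper around the normalization expression above,
# shown only to illustrate its behavior.
def normalize(token: str) -> str:
    # Strip surrounding whitespace, then commas, remove curly
    # quotes anywhere in the token, and lowercase the result.
    return token.strip().strip(",").replace("“", "").replace("”", "").lower()

print(normalize("  Hello, "))   # -> hello
print(normalize("“Redis”"))     # -> redis
```

Note that `strip(",")` only removes commas at the ends of the token; interior commas are preserved.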
- CRITICAL: NEVER use `git push` or attempt to push to remote repositories. The user will handle all git push operations.
- IMPORTANT: Always run `make format` before committing code to ensure proper formatting and linting compliance.
- IMPORTANT: DO NOT modify README.md unless explicitly requested.
If you need to document something, use these alternatives:
- Development info → CONTRIBUTING.md
- API details → docs/ directory
- Examples → docs/examples/
- Project memory (explicit preferences, directives, etc.) → CLAUDE.md
RedisVL uses pytest with testcontainers for testing.

- `make test` - unit tests only (no external APIs)
- `make test-all` - run the full suite, including tests that call external APIs
- `pytest --run-api-tests` - explicitly run API-dependent tests (e.g., LangCache, external vectorizer/reranker providers). These require the appropriate API keys and environment variables to be set.
```
redisvl/
├── cli/                  # Command-line interface (rvl command)
├── extensions/           # AI extensions (cache, memory, routing)
│   ├── cache/            # Semantic caching for LLMs
│   ├── llmcache/         # LLM-specific caching
│   ├── message_history/  # Chat history management
│   ├── router/           # Semantic routing
│   └── session_manager/  # Session management
├── index/                # SearchIndex classes (sync/async)
├── query/                # Query builders (Vector, Range, Filter, Count)
├── redis/                # Redis client utilities
├── schema/               # Index schema definitions
└── utils/                # Utilities (vectorizers, rerankers, optimization)
    ├── rerank/           # Result reranking
    └── vectorize/        # Embedding providers integration
```
- `SearchIndex`/`AsyncSearchIndex` - Main interface for Redis vector indices
- `IndexSchema` - Define index structure with fields (text, tags, vectors, etc.)
- Support for JSON and Hash storage types
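For reference, a minimal `schema.yaml` consumed by `IndexSchema.from_yaml` might look like the sketch below. The index name, prefix, and field names are illustrative, not taken from this repo; check the schema docs for the authoritative format:

```yaml
version: "0.1.0"

index:
  name: my_index          # illustrative index name
  prefix: doc             # key prefix for stored records
  storage_type: hash      # or "json"

fields:
  - name: text
    type: text
  - name: category
    type: tag
  - name: embedding
    type: vector
    attrs:
      algorithm: flat
      dims: 384           # must match your embedding model
      distance_metric: cosine
      datatype: float32
```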
- `VectorQuery` - Semantic similarity search
- `RangeQuery` - Vector search within distance range
- `FilterQuery` - Metadata filtering and full-text search
- `CountQuery` - Count matching records
- Etc.
- `SemanticCache` - LLM response caching with semantic similarity
- `EmbeddingsCache` - Cache for vector embeddings
- `MessageHistory` - Chat history with recency/relevancy retrieval
- `SemanticRouter` - Route queries to topics/intents
- OpenAI, Azure OpenAI, Cohere, HuggingFace, Mistral, VoyageAI
- Custom vectorizer support
- Batch processing capabilities
- Main docs: https://docs.redisvl.com
- Built with Sphinx from `docs/` directory
- Includes API reference and user guides
- Example notebooks in documentation: `docs/user_guide/...`