A command-line interface (CLI) tool for interacting with AWS Bedrock foundation models and inference profiles. LoamAI provides a streamlined way to invoke AI models, generate embeddings, and manage conversations through AWS Bedrock.
- List available foundation models and inference profiles
- Generate text and image embeddings
- Stream model responses for real-time output
- Support for conversation-based model interactions
- Manage multiple AWS profiles and regions
- Rich terminal output formatting
Requires Python 3.10 or higher.
```shell
pip install loam-ai
```

If you're using AWS SSO, first configure your AWS profile and log in:

```shell
export AWS_PROFILE=your-sso-profile
aws sso login
```

After a successful login, you can run loam-ai commands using your SSO credentials.
```shell
loam list-models
```

Filter by provider or output type:

```shell
loam list-models --provider anthropic
loam list-models --output TEXT
```

Start an interactive session:

```shell
loam session
```

Generate text embeddings:
```shell
loam generate-embeddings \
  --model-id amazon.titan-embed-text-v2:0 \
  --input-file texts.txt
```

Generate image embeddings:
```shell
loam generate-embeddings \
  --model-id amazon.titan-embed-image-v1 \
  --image image.jpg \
  --texts "Image description"
```

Simple text generation:
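The input file for text embeddings can be prepared with a short script. A minimal sketch, assuming `--input-file` expects one input text per line (an assumption — check `loam generate-embeddings --help` for the exact format):

```python
# Write one text per line for `loam generate-embeddings --input-file`.
# Assumption: the expected file format is one input text per line.
texts = [
    "Renewable energy reduces carbon emissions.",
    "Solar panels convert sunlight into electricity.",
]

with open("texts.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(texts) + "\n")
```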
```shell
loam invoke \
  -m amazon.nova-lite-v1:0 \
  -p "What are the benefits of renewable energy?"
```

Use conversation mode for chat-based interactions:
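Under the hood, an invocation like this presumably maps onto the Bedrock runtime Converse API. A sketch of the equivalent boto3 request — an illustration, not loam's actual implementation; `build_converse_request` is a hypothetical helper:

```python
def build_converse_request(model_id: str, prompt: str,
                           temperature: float = 0.7,
                           max_tokens: int = 512) -> dict:
    """Build kwargs for the bedrock-runtime converse() call."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {
            "temperature": temperature,
            "maxTokens": max_tokens,
        },
    }

# With AWS credentials configured, the request could be sent like this:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**build_converse_request(
#     "amazon.nova-lite-v1:0",
#     "What are the benefits of renewable energy?"))
# print(response["output"]["message"]["content"][0]["text"])
```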
```shell
loam converse \
  --model-id "anthropic.claude-3-sonnet-20240229-v1:0" \
  --messages-file conversation.json
```

Example messages file format:
```json
[
  {
    "role": "user",
    "content": [{ "text": "What is the capital of France?" }]
  }
]
```

List available inference profiles:

```shell
loam list-inference-profiles
```

Global options:

- `--profile`: AWS profile name
- `--region`: AWS region
- `--debug`: Enable debug output
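A multi-turn messages file in the format shown earlier can be generated with a short script. A minimal sketch; the only assumption is the Converse-style message shape (`role` plus a list of text content blocks) used by `--messages-file`:

```python
import json

# Build a multi-turn conversation in the Converse message format:
# alternating "user"/"assistant" roles, each with a list of text blocks.
messages = [
    {"role": "user",
     "content": [{"text": "What is the capital of France?"}]},
    {"role": "assistant",
     "content": [{"text": "The capital of France is Paris."}]},
    {"role": "user",
     "content": [{"text": "What is its population?"}]},
]

with open("conversation.json", "w", encoding="utf-8") as f:
    json.dump(messages, f, indent=2)
```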
Generation parameters:

- `--temperature`: Control response randomness (0-1)
- `--max-tokens`: Maximum response length
- `--top-p`: Control response diversity (0-1)
LoamAI provides clear error messages with rich terminal formatting. Common issues include:
- Invalid AWS credentials
- Unsupported model configurations
- Rate limiting
- Input validation errors
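Rate limiting in particular is usually worth retrying with exponential backoff. A minimal sketch of that pattern — an illustration, not loam's actual error handling; the flaky function below is a stand-in for a throttled Bedrock call:

```python
import time

def with_backoff(fn, retries: int = 3, base_delay: float = 0.01):
    """Retry fn() with exponential backoff on transient errors."""
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for a throttling error
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Stand-in for a throttled call that succeeds on the second attempt.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("ThrottlingException")
    return "ok"
```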
Built with:
MIT License - See LICENSE file for details.