
loam-ai

A command-line interface (CLI) for interacting with Amazon Bedrock foundation models and inference profiles. LoamAI provides a streamlined way to invoke models, generate embeddings, and manage conversations through Amazon Bedrock.

Features

  • List available foundation models and inference profiles
  • Generate text and image embeddings
  • Stream model responses for real-time output
  • Support for conversation-based model interactions
  • Manage multiple AWS profiles and regions
  • Rich terminal output formatting

Installation

Requires Python 3.10 or higher.

pip install loam-ai

Configuration

AWS SSO Login

If you're using AWS SSO, first configure your AWS profile and log in:

export AWS_PROFILE=your-sso-profile
aws sso login

After successful login, you can run loam-ai commands using your SSO credentials.
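
Since loam-ai is built on boto3, you can sanity-check that the SDK actually sees your SSO credentials before running any commands. A minimal sketch; the profile name is a placeholder for whatever you exported above:

import boto3

# "your-sso-profile" is a placeholder; use the profile you logged in with.
session = boto3.Session(profile_name="your-sso-profile")

# If SSO login succeeded, STS can report the account and identity in use.
identity = session.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])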

Usage

List Available Models

loam list-models

Filter by provider or output type:

loam list-models --provider anthropic
loam list-models --output TEXT
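
These filters correspond to the Bedrock control-plane ListFoundationModels API. A rough boto3 equivalent of the filtered listing, assuming a configured default profile and region (how loam builds the call internally is an assumption):

import boto3

# "bedrock" is the control-plane client; model invocation uses "bedrock-runtime".
bedrock = boto3.client("bedrock")

# byProvider and byOutputModality mirror --provider and --output above.
resp = bedrock.list_foundation_models(byProvider="Anthropic", byOutputModality="TEXT")
for model in resp["modelSummaries"]:
    print(model["modelId"], model["providerName"])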

Check Session Information

loam session
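
Exactly what loam session prints is not documented here, but the session state boto3 exposes looks like this; a sketch, assuming default credential resolution:

import boto3

session = boto3.Session()
creds = session.get_credentials()

# Profile, region, and credential source currently in effect.
print("profile:", session.profile_name)
print("region: ", session.region_name)
print("source: ", creds.method if creds else "no credentials resolved")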

Generate Embeddings

Generate text embeddings:

loam generate-embeddings \
    --model-id amazon.titan-embed-text-v2:0 \
    --input-file texts.txt
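
Each line of texts.txt presumably becomes one embedding request. For a single string, the raw Bedrock call behind a command like this is roughly as follows (a sketch, assuming default credentials):

import json
import boto3

runtime = boto3.client("bedrock-runtime")

# Titan Text Embeddings V2 takes a JSON body with an "inputText" field.
body = json.dumps({"inputText": "What are embeddings?"})
resp = runtime.invoke_model(modelId="amazon.titan-embed-text-v2:0", body=body)

embedding = json.loads(resp["body"].read())["embedding"]
print(len(embedding))  # 1024 dimensions by default for this model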

Generate image embeddings:

loam generate-embeddings \
    --model-id amazon.titan-embed-image-v1 \
    --image image.jpg \
    --texts "Image description"
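
The Titan multimodal model accepts text, an image, or both, with the image passed base64-encoded in the request body. A sketch of the equivalent raw call:

import base64
import json
import boto3

runtime = boto3.client("bedrock-runtime")

# The image must be base64-encoded inside the JSON body.
with open("image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = json.dumps({"inputText": "Image description", "inputImage": image_b64})
resp = runtime.invoke_model(modelId="amazon.titan-embed-image-v1", body=body)
print(len(json.loads(resp["body"].read())["embedding"]))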

Invoke Models

Simple text generation:

loam invoke \
    -m amazon.nova-lite-v1:0 \
    -p "What are the benefits of renewable energy?"

Conversation Mode

Use conversation mode for chat-based interactions:

loam converse \
    --model-id "anthropic.claude-3-sonnet-20240229-v1:0" \
    --messages-file conversation.json

Example messages file format:

[
  {
    "role": "user",
    "content": [{ "text": "What is the capital of France?" }]
  }
]
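
This is the Bedrock Converse message schema, so the file contents can be handed to the API unchanged. A sketch of the equivalent boto3 round trip:

import json
import boto3

runtime = boto3.client("bedrock-runtime")

# The messages file already uses the Converse schema, so no translation is needed.
with open("conversation.json") as f:
    messages = json.load(f)

resp = runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=messages,
)
print(resp["output"]["message"]["content"][0]["text"])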

List Inference Profiles

loam list-inference-profiles
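
Inference profiles (for example, cross-region profiles) come from the same control-plane client as the model listing. A boto3 sketch:

import boto3

bedrock = boto3.client("bedrock")

resp = bedrock.list_inference_profiles()
for profile in resp["inferenceProfileSummaries"]:
    print(profile["inferenceProfileId"], profile["inferenceProfileName"])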

Command Options

Global Options

  • --profile: AWS profile name
  • --region: AWS region
  • --debug: Enable debug output

Model Invocation Options

  • --temperature: Control response randomness (0-1)
  • --max-tokens: Maximum response length
  • --top-p: Control response diversity (0-1)
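
These options correspond to Bedrock's standard inference parameters. In Converse API terms they land in inferenceConfig; a sketch of the mapping (the exact plumbing inside loam is an assumption):

import boto3

runtime = boto3.client("bedrock-runtime")

resp = runtime.converse(
    modelId="amazon.nova-lite-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize photosynthesis in one sentence."}]}],
    # --temperature, --max-tokens, and --top-p map onto these fields.
    inferenceConfig={"temperature": 0.2, "maxTokens": 256, "topP": 0.9},
)
print(resp["output"]["message"]["content"][0]["text"])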

Error Handling

LoamAI provides clear error messages with rich terminal formatting. Common issues include:

  • Invalid AWS credentials
  • Unsupported model configurations
  • Rate limiting
  • Input validation errors
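
At the SDK level, these failures appear as botocore exceptions. A sketch of handling them directly with boto3; the error codes shown are the standard Bedrock ones, and loam's own formatted messages may differ:

import boto3
from botocore.exceptions import ClientError, NoCredentialsError

runtime = boto3.client("bedrock-runtime")

try:
    runtime.converse(
        modelId="amazon.nova-lite-v1:0",
        messages=[{"role": "user", "content": [{"text": "Hello"}]}],
    )
except NoCredentialsError:
    print("No AWS credentials found; run `aws sso login` first.")
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "ThrottlingException":
        print("Rate limited; retry with backoff.")
    elif code == "ValidationException":
        print("Input validation error:", err.response["Error"]["Message"])
    else:
        raise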

Development

Built with:

  • Click for the command-line interface
  • Rich for terminal output formatting
  • boto3 for AWS API access

License

MIT License - See LICENSE file for details.
