Labels: enhancement (New feature or request), infrastructure (Infrastructure and deployment)
## Context
https://x.com/supabase/status/1973766005189947717
Supabase Edge Functions now support Deno 2.1 with persistent file storage via S3-compatible mounts, offering:
- Up to 97% faster boot times with sync APIs
- Read/write access to S3 buckets (including Supabase Storage)
- Persistent storage between function invocations
## Potential Use Cases
### 1. Archive Processing
Extract zip files for batch contributor data imports without memory limits:
- Store uploaded archives to `/tmp`
- Process in background with `EdgeRuntime.waitUntil()`
- Upload extracted files to Supabase Storage
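A minimal sketch of the archive pipeline above, with storage abstracted behind an interface so the flow can be exercised without a real `/tmp` mount or Supabase bucket. The `FileStore` interface, `MemoryStore`, and all path names here are assumptions for illustration, not the actual implementation.

```ts
// Hypothetical storage abstraction standing in for /tmp and Supabase Storage.
interface FileStore {
  write(path: string, data: Uint8Array): void;
  read(path: string): Uint8Array;
  list(prefix: string): string[];
}

// In-memory stand-in, useful for testing the flow locally.
class MemoryStore implements FileStore {
  private files = new Map<string, Uint8Array>();
  write(path: string, data: Uint8Array) { this.files.set(path, data); }
  read(path: string) {
    const f = this.files.get(path);
    if (!f) throw new Error(`not found: ${path}`);
    return f;
  }
  list(prefix: string) {
    return [...this.files.keys()].filter((k) => k.startsWith(prefix));
  }
}

// Stage 1: persist the uploaded archive to /tmp (path layout is an assumption).
function storeUpload(tmp: FileStore, uploadId: string, bytes: Uint8Array): string {
  const path = `/tmp/uploads/${uploadId}.zip`;
  tmp.write(path, bytes);
  return path;
}

// Stage 2: "extract" entries (real code would unzip; here the entries are
// passed in) and upload each extracted file to the storage bucket.
function extractAndUpload(
  tmp: FileStore,
  bucket: FileStore,
  archivePath: string,
  entries: Record<string, Uint8Array>,
): string[] {
  tmp.read(archivePath); // ensure the archive exists before processing
  const uploaded: string[] = [];
  for (const [name, data] of Object.entries(entries)) {
    const dest = `imports/${name}`;
    bucket.write(dest, data);
    uploaded.push(dest);
  }
  return uploaded;
}
```

In the real function, stage 2 would run inside `EdgeRuntime.waitUntil()` so the response returns before extraction finishes.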
### 2. Large PR Data Processing
Store intermediate results during long-running sync operations
- Persist partial results to S3 during multi-step processing
- Enable recovery from failures without re-fetching all data
- Audit trail of sync operations
```ts
// Example: Store PR processing results
await Deno.writeTextFile(
  `/s3/inngest-results/${repositoryId}/prs.json`,
  JSON.stringify(results)
);
```

### 3. Bulk Embeddings Generation
Process embeddings in chunks with persistent checkpoints
- Store embedding generation progress
- Resume from last checkpoint on failure
- Reduce duplicate OpenAI API calls
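One way the checkpointing above could look, as a sketch: chunk the inputs, persist the index of the next unprocessed chunk, and on retry skip everything already embedded. The `Checkpoint` shape and the idea of serializing it to a path like `/s3/embeddings/<repo>/checkpoint.json` are assumptions, not the actual design.

```ts
// Hypothetical checkpoint persisted between invocations (e.g. as JSON on
// the S3 mount); only the resume logic is shown here.
interface Checkpoint {
  nextChunk: number; // index of the first unprocessed chunk
}

// Split items into fixed-size chunks for batched embedding calls.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Chunks that still need embedding, given the last saved checkpoint —
// already-processed chunks are skipped, avoiding duplicate OpenAI calls.
function remainingChunks<T>(items: T[], size: number, cp: Checkpoint): T[][] {
  return chunk(items, size).slice(cp.nextChunk);
}
```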
### 4. Data Export Pipeline
Generate and cache CSV/JSON exports of workspace data
- Pre-generate common reports
- Serve cached exports directly from S3
- Invalidate on data changes
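A sketch of one invalidation scheme for the cached exports above: derive the object key from the workspace's last data change, so a stale key naturally misses and the export is regenerated. The function name and path layout are assumptions for illustration.

```ts
// Hypothetical cache key for a workspace export. Encoding the change
// timestamp into the key means writing a new export under a new key
// doubles as invalidation of the old one.
function exportCacheKey(
  workspaceId: string,
  format: "csv" | "json",
  lastChangedAt: string,
): string {
  const stamp = new Date(lastChangedAt).getTime();
  return `/s3/exports/${workspaceId}/${stamp}.${format}`;
}
```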
### 5. Configuration Management
Store sync configurations or processing state
- Repository-specific processing rules
- Dynamic rate limit adjustments
- Feature flag configurations
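A sketch of how repository-specific rules might be resolved after reading them from the mount: merge per-repo overrides over defaults. The `SyncRules` shape and field names are assumptions, not the actual configuration schema.

```ts
// Hypothetical per-repository processing rules, stored as JSON on the
// S3 mount and merged over project-wide defaults.
interface SyncRules {
  maxPrsPerRun: number;
  embeddingsEnabled: boolean;
}

const DEFAULT_RULES: SyncRules = { maxPrsPerRun: 100, embeddingsEnabled: true };

// Overrides come from something like /s3/config/<repo>.json; a missing
// file (null) falls back to the defaults.
function resolveRules(overrides: Partial<SyncRules> | null): SyncRules {
  return { ...DEFAULT_RULES, ...(overrides ?? {}) };
}
```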
## Performance Benefits
### Sync APIs During Initialization
Use synchronous file APIs during function startup for faster boot:
```ts
// ✅ 97% faster - during initialization
const config = Deno.readFileSync('/s3/config/sync-rules.json');

// ❌ Still use async in handlers
Deno.serve(async () => {
  const data = await Deno.readFile('/s3/data/file.txt');
});
```

### Integration with Inngest (150s timeout)
Since all Inngest functions run on Supabase Edge Functions:
- Store intermediate results for long-running jobs
- Checkpoint progress for multi-step workflows
- Enable graceful retry with state recovery
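The checkpoint-and-resume pattern above could be sketched like this: record the last completed step, so a retry within the 150s budget resumes instead of restarting. The `JobState` shape is an assumption; reading and writing it to an S3 path between invocations is omitted.

```ts
// Hypothetical multi-step sync. Each step transforms the accumulated data.
type Step = (state: Record<string, unknown>) => Record<string, unknown>;

interface JobState {
  completed: number; // steps finished in earlier invocations
  data: Record<string, unknown>;
}

// Runs the remaining steps from the last checkpoint. In production,
// `state` would be persisted to the S3 mount after each step so a
// failed invocation can be retried without redoing earlier work.
function runFrom(steps: Step[], state: JobState): JobState {
  let { completed, data } = state;
  for (let i = completed; i < steps.length; i++) {
    data = steps[i](data);
    completed = i + 1; // checkpoint after each step
  }
  return { completed, data };
}
```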
## Environment Setup
Required secrets for S3 access:
- `S3FS_ENDPOINT_URL`
- `S3FS_REGION`
- `S3FS_ACCESS_KEY_ID`
- `S3FS_SECRET_ACCESS_KEY`
Follow the Supabase S3 authentication guide.
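Since a missing secret only surfaces at the first file access, a boot-time check is worth sketching: fail fast if any `S3FS_*` variable is unset. The variable names come from the issue; the validator itself is an assumption.

```ts
// Secrets required for the S3-compatible mount (from the issue above).
const REQUIRED = [
  "S3FS_ENDPOINT_URL",
  "S3FS_REGION",
  "S3FS_ACCESS_KEY_ID",
  "S3FS_SECRET_ACCESS_KEY",
] as const;

// Returns the names of any missing secrets; call at startup with the
// runtime's environment (e.g. Deno.env.toObject()) and throw if non-empty.
function missingS3Secrets(env: Record<string, string | undefined>): string[] {
  return REQUIRED.filter((k) => !env[k]);
}
```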
## Questions to Explore
- What's the right balance between database storage vs S3 file storage?
- Should we persist Inngest job results for debugging/audit trails?
- Can we use this for large workspace data exports?
- Would cached embeddings in S3 improve performance?
- What's the cost/performance tradeoff vs direct database storage?
## Related
- Re-implement Supabase Edge Functions for Inngest with priority queue system (#882) - could benefit from persistent state
- Inngest migration to Edge Functions (fix: Migrate Inngest to Supabase Edge Functions #899)
- Embeddings system (Re-enable embeddings in production via Supabase Edge Functions #903)