Skip to content

feat: concurrent pagination, resilience improvements, and execution summary#171

Open
huohua-dev wants to merge 9 commits intolc:masterfrom
huohua-dev:feat/concurrent-and-resilience
Open

feat: concurrent pagination, resilience improvements, and execution summary#171
huohua-dev wants to merge 9 commits intolc:masterfrom
huohua-dev:feat/concurrent-and-resilience

Conversation

@huohua-dev
Copy link

Features & Improvements

Split from #169 as requested — this PR contains only the feature enhancements.

Changes

  1. Real-time flush output — prevents data loss on SIGKILL by flushing output immediately
  2. Retry with exponential backoff — adds StatusCodeError type and automatic retry logic for transient failures
  3. Concurrent pagination — parallel page fetching for Wayback, OTX, and CommonCrawl providers
  4. Structured error handling — improved error logging for URLScan, OTX, and CommonCrawl providers
  5. --provider-threads flag — controls per-provider concurrent pagination goroutines
  6. Per-provider timeout control — configurable timeout for individual provider requests
  7. Execution summary — displays total URL count and duration after completion

Motivation

These changes significantly improve gau's performance and reliability:

  • Concurrent pagination provides ~3-5x speedup for large domain scans
  • Exponential backoff handles rate limiting gracefully
  • Real-time flush ensures no data loss on unexpected termination
  • Structured errors make debugging provider issues much easier

Huohua Dev added 9 commits March 14, 2026 19:09
- Call os.Stdout.Sync() after each URL write in WriteURLs and WriteURLsJSON
- Ensure data is immediately flushed to disk in pipe/redirect scenarios
- Add atomic URL counter parameter for exit summary tracking
- Add StatusCodeError type to carry HTTP status codes through error chain
- Implement exponential backoff retry for network errors (capped at 30s)
- Skip retry for 429 rate-limit and 400 bad-request responses
- Add shouldRetry() to detect retryable network errors
- Replace manual case-insensitive search with strings.ToLower
- Implement dispatcher+worker pattern for parallel page fetching
- Use sync.Once to safely stop dispatcher on empty results
- Add structured logging with provider/domain/page fields
- Use StatusCodeError for proper 400 status handling
- Support configurable provider-threads parameter
- Implement dispatcher+worker pattern for parallel page fetching
- Use errors.As with StatusCodeError for proper 429 detection
- Stop pagination when has_next is false
- Add structured logging with provider/domain/page/status fields
…rawl

- Implement dispatcher+worker pattern using known page count
- Cap worker threads to actual page count
- Use errors.As with StatusCodeError for proper error classification
- Add structured logging for connection errors and API errors
- Add provider/domain/page/error fields to warning logs
- Add response body to rate-limit log for debugging
- Add ProviderThreads field to providers.Config
- Register --provider-threads CLI flag with default value 3
- Support provider-threads in .gau.toml config file
- Create timeout context for each provider work item
- Cap provider timeout at 5 minutes to prevent single provider blocking
- Add structured logging with provider/domain/timeout fields
- Track total URL count using atomic counter
- Log summary with total URLs and duration on exit
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant