Indexer: add GetHistory(scripts) RPC #906

louisinger wants to merge 2 commits into arkade-os:master
Conversation
Walkthrough

This PR adds a new GetHistory endpoint to retrieve transaction IDs associated with provided public keys. Changes span API specifications (OpenAPI and protobuf), the application service layer, domain repository interfaces, database implementations across Badger/Postgres/SQLite with corresponding SQL queries, gRPC handler integration, and associated tests.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as gRPC Client
    participant Handler as Handler<br/>(indexer.go)
    participant Service as Service<br/>(indexer.go)
    participant Repo as Repository<br/>(vtxo_repo.go)
    participant DB as Database
    Client->>Handler: GetHistory(scripts, page)
    Handler->>Handler: Parse scripts to pubkeys
    Handler->>Handler: Parse pagination params
    Handler->>Service: GetHistory(pubkeys, page)
    Service->>Service: Validate pubkeys not empty
    Service->>Repo: GetTxidsWithPubKeys(pubkeys)
    Repo->>DB: SelectDistinctTxidsWithPubkeys(pubkeys)
    DB-->>Repo: []string (txids)
    Repo-->>Service: []string (txids)
    Service->>Service: Apply pagination
    Service-->>Handler: GetHistoryResp{Txids, Page}
    Handler-->>Client: GetHistoryResponse{Txids, Page}
```
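To make the flow above concrete, here is a minimal Go sketch of the service layer it describes. The type and helper names (`Page`, `PageResp`, `paginate`, the repository interface) are assumptions based on the walkthrough, not the exact code in this PR.

```go
package indexer

import (
	"context"
	"fmt"
)

// Hypothetical types mirroring the walkthrough; the real PR defines its own.
type Page struct{ Size, Index int32 }

type PageResp struct{ Current, Next int32 }

type GetHistoryResp struct {
	Txids []string
	Page  PageResp
}

// vtxoRepo stands in for the repository interface shown in the diagram.
type vtxoRepo interface {
	GetTxidsWithPubKeys(ctx context.Context, pubkeys []string) ([]string, error)
}

type service struct{ repo vtxoRepo }

// GetHistory validates the input, fetches all matching txids, then paginates
// them in memory, mirroring the sequence diagram above.
func (s *service) GetHistory(
	ctx context.Context, pubkeys []string, page *Page,
) (*GetHistoryResp, error) {
	if len(pubkeys) == 0 {
		return nil, fmt.Errorf("missing pubkeys")
	}
	txids, err := s.repo.GetTxidsWithPubKeys(ctx, pubkeys)
	if err != nil {
		return nil, err
	}
	paged, resp := paginate(txids, page)
	return &GetHistoryResp{Txids: paged, Page: resp}, nil
}

// paginate slices the full result set according to the requested page.
func paginate(items []string, page *Page) ([]string, PageResp) {
	if page == nil || page.Size <= 0 {
		return items, PageResp{Current: 0, Next: -1}
	}
	start := int(page.Index) * int(page.Size)
	if start >= len(items) {
		return nil, PageResp{Current: page.Index, Next: -1}
	}
	end := start + int(page.Size)
	next := page.Index + 1
	if end >= len(items) {
		end = len(items)
		next = -1
	}
	return items[start:end], PageResp{Current: page.Index, Next: next}
}
```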
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@api-spec/openapi/swagger/ark/v1/indexer.openapi.json`:
- Around line 345-381: The "scripts" query parameter in the GetHistory operation
(operationId IndexerService_GetHistory, path "/v1/indexer/history") must be
marked required and bounded: add "required": true for that parameter and add a
"maxItems" (e.g. 50) to its schema while keeping "minItems": 1 and
"type":"array" with "items":{"type":"string"} so the contract enforces a
non-empty, capped list consistent with server limits.
In `@api-spec/protobuf/ark/v1/indexer.proto`:
- Around line 347-356: The deleted protobuf message IndexerTxHistoryRecord is a
breaking change; restore compatibility by either (A) reintroducing a placeholder
message named IndexerTxHistoryRecord that declares reserved field numbers/names
previously used (use the original numeric tags as reserved) so existing
wire/JSON clients remain compatible, or (B) if deletion was intentional, update
the protobuf config to explicitly allow this breaking change by adding an
exception/annotation for IndexerTxHistoryRecord in your buf/breaking config;
locate references around GetHistoryRequest/GetHistoryResponse to ensure callers
still compile and update any imports or comments accordingly.
🧹 Nitpick comments (6)
internal/infrastructure/db/postgres/sqlc/query.sql (1)
258-270: Consider scanning `vtxo_vw` once instead of three times.

The query hits `vtxo_vw` three times with the same `pubkey` filter. This can be consolidated into a single pass, which is more efficient, especially as the vtxo set grows.

♻️ Suggested optimization

```diff
 -- name: SelectDistinctTxidsWithPubkeys :many
-SELECT DISTINCT t.txid FROM (
-    SELECT vtxo_vw.txid AS txid FROM vtxo_vw
-    WHERE vtxo_vw.pubkey = ANY($1::varchar[])
-    UNION
-    SELECT vtxo_vw.ark_txid AS txid FROM vtxo_vw
-    WHERE vtxo_vw.pubkey = ANY($1::varchar[])
-    AND COALESCE(vtxo_vw.ark_txid, '') != ''
-    UNION
-    SELECT vtxo_vw.settled_by AS txid FROM vtxo_vw
-    WHERE vtxo_vw.pubkey = ANY($1::varchar[])
-    AND COALESCE(vtxo_vw.settled_by, '') != ''
-) t;
+SELECT DISTINCT t.txid FROM vtxo_vw v,
+LATERAL (VALUES (v.txid), (v.ark_txid), (v.settled_by)) AS t(txid)
+WHERE v.pubkey = ANY($1::varchar[])
+  AND COALESCE(t.txid, '') != '';
```

Also note: the outer `DISTINCT` is redundant since `UNION` (without `ALL`) already eliminates duplicates.

internal/core/application/indexer.go (1)
406-421: In-memory pagination loads all txids regardless of requested page.
`GetTxidsWithPubKeys` fetches every matching txid, then `paginate` slices in Go. For power users with very large histories this could become expensive. This is consistent with the rest of the file, so not blocking — but worth noting as a future scalability concern if history grows significantly. Pushing `LIMIT`/`OFFSET` into the SQL query would be more efficient; see the sketch below.
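As an illustration, here is a sketch of what pushing pagination down into Postgres could look like, building on the consolidated query suggested above. The query name and parameter positions are hypothetical, and the SQLite variant would need its own placeholder syntax.

```sql
-- Hypothetical paged variant (sketch only): the name, parameter order and the
-- LATERAL form (borrowed from the optimization above) are illustrative.
-- name: SelectDistinctTxidsWithPubkeysPaged :many
SELECT DISTINCT t.txid
FROM vtxo_vw v,
LATERAL (VALUES (v.txid), (v.ark_txid), (v.settled_by)) AS t(txid)
WHERE v.pubkey = ANY($1::varchar[])
  AND COALESCE(t.txid, '') != ''
ORDER BY t.txid   -- deterministic order keeps pages stable
LIMIT $2 OFFSET $3;
```

Ordering before `LIMIT`/`OFFSET` would also address the determinism nitpicks below.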
api-spec/openapi/swagger/ark/v1/indexer.openapi.json (1)

915-929: Mirror the query constraints in `GetHistoryRequest`.

The request schema doesn't currently enforce `scripts` presence or bounds. Keeping it aligned with the query parameter improves generated client behavior and validation.

Suggested update

```diff
 "GetHistoryRequest": {
   "title": "GetHistoryRequest",
   "type": "object",
+  "required": ["scripts"],
   "properties": {
     "page": {
       "$ref": "#/components/schemas/IndexerPageRequest"
     },
     "scripts": {
       "type": "array",
+      "minItems": 1,
+      "maxItems": 100,
       "items": {
         "type": "string"
       }
     }
   }
 },
```

internal/infrastructure/db/badger/vtxo_repo.go (1)
257-281: Sort txids for deterministic history results.

Map iteration order is random; sorting keeps paging stable across runs/backends.

Suggested update

```diff
 out := make([]string, 0, len(seen))
 for txid := range seen {
 	out = append(out, txid)
 }
+sort.Strings(out)
 return out, nil
```
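For context on where `seen` comes from, here is a hedged sketch of how the Badger repository could collect txids from vtxo records. The `vtxo` struct and its field names are assumptions borrowed from the SQL view columns, not the actual domain type.

```go
package badgerdb

import "sort"

// vtxo is a stand-in for the stored record; field names mirror the SQL view
// columns (txid, ark_txid, settled_by) and may differ from the real type.
type vtxo struct {
	PubKey, Txid, ArkTxid, SettledBy string
}

// collectTxids gathers every txid, ark_txid and settled_by value for vtxos
// owned by one of the requested pubkeys, deduplicating via the `seen` set.
func collectTxids(vtxos []vtxo, pubkeys map[string]bool) []string {
	seen := make(map[string]struct{})
	for _, v := range vtxos {
		if !pubkeys[v.PubKey] {
			continue
		}
		for _, txid := range []string{v.Txid, v.ArkTxid, v.SettledBy} {
			if txid != "" {
				seen[txid] = struct{}{}
			}
		}
	}
	out := make([]string, 0, len(seen))
	for txid := range seen {
		out = append(out, txid)
	}
	// Sorting keeps results deterministic, per the suggestion above.
	sort.Strings(out)
	return out
}
```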
internal/infrastructure/db/postgres/vtxo_repo.go (1)

385-396: Sort txids for deterministic history results.

Keeps paging stable across repeated calls.

Suggested update

```diff
 txids, err := v.querier.SelectDistinctTxidsWithPubkeys(ctx, pubkeys)
 if err != nil {
 	if errors.Is(err, sql.ErrNoRows) {
 		return nil, nil
 	}
 	return nil, err
 }
+sort.Strings(txids)
 return txids, nil
```
internal/infrastructure/db/sqlite/vtxo_repo.go (1)

391-402: Sort txids for deterministic history results.

Keeps paging stable across runs/backends.

Suggested update

```diff
 txids, err := v.querier.SelectDistinctTxidsWithPubkeys(ctx, pubkeys)
 if err != nil {
 	if errors.Is(err, sql.ErrNoRows) {
 		return nil, nil
 	}
 	return nil, err
 }
+sort.Strings(txids)
 return txids, nil
```
| "/v1/indexer/history": { | ||
| "get": { | ||
| "tags": [ | ||
| "IndexerService" | ||
| ], | ||
| "description": "GetHistory returns the history of txids involving the provided scripts.", | ||
| "operationId": "IndexerService_GetHistory", | ||
| "parameters": [ | ||
| { | ||
| "name": "scripts", | ||
| "in": "query", | ||
| "style": "simple", | ||
| "schema": { | ||
| "minItems": 1, | ||
| "type": "array", | ||
| "items": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| }, | ||
| { | ||
| "name": "page.size", | ||
| "in": "query", | ||
| "schema": { | ||
| "type": "integer", | ||
| "format": "int32" | ||
| } | ||
| }, | ||
| { | ||
| "name": "page.index", | ||
| "in": "query", | ||
| "schema": { | ||
| "type": "integer", | ||
| "format": "int32" | ||
| } | ||
| } | ||
| ], |
Bound and require the scripts query array.
minItems is set but required and maxItems aren’t; this leaves the contract open to empty/unbounded lists (and matches the CKV_OPENAPI_21 hint). Add a max cap aligned with server limits and mark the parameter required.
Suggested update
```diff
 {
   "name": "scripts",
   "in": "query",
+  "required": true,
   "style": "simple",
   "schema": {
     "minItems": 1,
+    "maxItems": 100,
     "type": "array",
     "items": {
       "type": "string"
     }
   }
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "/v1/indexer/history": { | |
| "get": { | |
| "tags": [ | |
| "IndexerService" | |
| ], | |
| "description": "GetHistory returns the history of txids involving the provided scripts.", | |
| "operationId": "IndexerService_GetHistory", | |
| "parameters": [ | |
| { | |
| "name": "scripts", | |
| "in": "query", | |
| "style": "simple", | |
| "schema": { | |
| "minItems": 1, | |
| "type": "array", | |
| "items": { | |
| "type": "string" | |
| } | |
| } | |
| }, | |
| { | |
| "name": "page.size", | |
| "in": "query", | |
| "schema": { | |
| "type": "integer", | |
| "format": "int32" | |
| } | |
| }, | |
| { | |
| "name": "page.index", | |
| "in": "query", | |
| "schema": { | |
| "type": "integer", | |
| "format": "int32" | |
| } | |
| } | |
| ], | |
| "/v1/indexer/history": { | |
| "get": { | |
| "tags": [ | |
| "IndexerService" | |
| ], | |
| "description": "GetHistory returns the history of txids involving the provided scripts.", | |
| "operationId": "IndexerService_GetHistory", | |
| "parameters": [ | |
| { | |
| "name": "scripts", | |
| "in": "query", | |
| "required": true, | |
| "style": "simple", | |
| "schema": { | |
| "minItems": 1, | |
| "maxItems": 100, | |
| "type": "array", | |
| "items": { | |
| "type": "string" | |
| } | |
| } | |
| }, | |
| { | |
| "name": "page.size", | |
| "in": "query", | |
| "schema": { | |
| "type": "integer", | |
| "format": "int32" | |
| } | |
| }, | |
| { | |
| "name": "page.index", | |
| "in": "query", | |
| "schema": { | |
| "type": "integer", | |
| "format": "int32" | |
| } | |
| } | |
| ], |
🧰 Tools
🪛 Checkov (3.2.334)
[medium] 357-363: Ensure that arrays have a maximum number of items (CKV_OPENAPI_21)
🤖 Prompt for AI Agents
In `@api-spec/openapi/swagger/ark/v1/indexer.openapi.json` around lines 345 - 381,
The "scripts" query parameter in the GetHistory operation (operationId
IndexerService_GetHistory, path "/v1/indexer/history") must be marked required
and bounded: add "required": true for that parameter and add a "maxItems" (e.g.
50) to its schema while keeping "minItems": 1 and "type":"array" with
"items":{"type":"string"} so the contract enforces a non-empty, capped list
consistent with server limits.
api-spec/protobuf/ark/v1/indexer.proto

```proto
}

message GetHistoryRequest {
  repeated string scripts = 1;
  IndexerPageRequest page = 2;
}

message GetHistoryResponse {
  repeated string txids = 1;
  IndexerPageResponse page = 2;
}
```
Breaking protobuf change: deletion of IndexerTxHistoryRecord fails buf breaking check.
The pipeline reports that the previously present message IndexerTxHistoryRecord was deleted. This is a backward-incompatible change per buf's wire/JSON compatibility rules. If existing clients depend on this message, they will break.
Options to resolve:
- Mark `IndexerTxHistoryRecord` as `reserved` (keeping field numbers reserved) instead of deleting it outright, so the breaking check passes; see the sketch after this list.
- If intentional, add an exception/annotation for `buf breaking` or update the `buf.yaml` config to allow this specific deletion.
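A minimal sketch of option (A). The reserved range below is a placeholder; it should be replaced with the field numbers (and optionally names) the original message actually declared.

```proto
syntax = "proto3";

package ark.v1;

// Placeholder that keeps the message name alive so `buf breaking` passes and
// old wire/JSON payloads stay decodable. The reserved range is illustrative:
// swap in the tags the original IndexerTxHistoryRecord really used.
message IndexerTxHistoryRecord {
  reserved 1 to 5;
}
```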
🤖 Prompt for AI Agents
In `@api-spec/protobuf/ark/v1/indexer.proto` around lines 347 - 356, The deleted
protobuf message IndexerTxHistoryRecord is a breaking change; restore
compatibility by either (A) reintroducing a placeholder message named
IndexerTxHistoryRecord that declares reserved field numbers/names previously
used (use the original numeric tags as reserved) so existing wire/JSON clients
remain compatible, or (B) if deletion was intentional, update the protobuf
config to explicitly allow this breaking change by adding an
exception/annotation for IndexerTxHistoryRecord in your buf/breaking config;
locate references around GetHistoryRequest/GetHistoryResponse to ensure callers
still compile and update any imports or comments accordingly.
This PR adds an RPC returning the list of txids in which the given scripts are involved: transactions that either spend a coin locked by one of the scripts or create a new coin locked by one of them.

Combined with GetVirtualTxs, it will allow clients to restore their transaction history; see the sketch below.
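For illustration, a minimal Go sketch of that restore flow. The `indexerClient` interface below is a stand-in for the generated gRPC stubs; the real method signatures and request/response types differ.

```go
package restore

import "context"

// indexerClient is a simplified stand-in for the generated IndexerService
// client; the actual API takes request/response messages and pagination.
type indexerClient interface {
	GetHistory(ctx context.Context, scripts []string) ([]string, error)
	GetVirtualTxs(ctx context.Context, txids []string) ([]string, error)
}

// RestoreHistory resolves the txids touching the wallet's scripts, then
// fetches the corresponding virtual transactions to rebuild local history.
func RestoreHistory(ctx context.Context, c indexerClient, scripts []string) ([]string, error) {
	txids, err := c.GetHistory(ctx, scripts)
	if err != nil {
		return nil, err
	}
	if len(txids) == 0 {
		return nil, nil
	}
	return c.GetVirtualTxs(ctx, txids)
}
```

In practice the calls would go through the generated client and use the page fields shown in the proto above to iterate over large histories.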
@Kukks @altafan please review
Summary by CodeRabbit
Release Notes