
[OPIK-4891] [BE] Add data retention policy enforcement#5647

Open
ldaugusto wants to merge 25 commits into main from
daniela/opik-4981-data-retention-endpoints

Conversation


ldaugusto (Contributor) commented Mar 12, 2026

Details

Adds the data retention policy enforcement system for ClickHouse data (traces, spans, feedback scores, comments).

How it works

  1. Rules are stored in MySQL (retention_rules table). Each rule targets a scope: workspace-level or org-level. Rules define a pre-defined retention period (short_14d, base_60d, extended_400d, or unlimited) after which data is eligible for deletion. Only one active rule per scope is allowed — creating a new rule auto-deactivates the previous one (soft delete for audit trail).

  2. The job (RetentionPolicyJob) runs on a Flux.interval() schedule, controlled by executionsPerDay (default 48 = every 30 min). Each tick processes a different 1/N fraction of the workspace ID hex-space, so all workspaces are covered exactly once per day. A distributed Redis lock prevents concurrent execution across instances.

  3. The service (RetentionPolicyService) resolves the most specific active rule per workspace (workspace rule > org rule), computes a cutoff UUID from start_of_today - retention_period, and deletes in referential order: feedback_scores (+ authored_feedback_scores) → comments → spans → traces. Each DAO's deleteForRetention issues a DELETE FROM <table> WHERE workspace_id IN (...) AND <id_column> < cutoff_id. When applyToPast=false, an additional lower bound (id >= min_id derived from the rule's createdAt) ensures only data created after the rule was established gets deleted.

  4. Workspace partitioning (RetentionUtils) splits the UUID hex-space into N equal fractions using the first hex digit of workspace_id. This avoids scanning all workspaces on every tick and distributes load evenly.

  5. Batching by cutoff — since retention periods are pre-defined enum values (not arbitrary durations), workspaces sharing the same cutoff are grouped to minimize queries. The cutoff is normalized to start-of-day (UTC) so all ticks within the same day produce identical cutoffs for the same period. Two query patterns are used:

    • applyToPast=true: simple WHERE workspace_id IN (...) AND id < :cutoff — all workspaces packed into one IN clause
    • applyToPast=false: per-workspace OR conditions WHERE id < :cutoff AND ((workspace_id = :w1 AND id >= :min1) OR (workspace_id = :w2 AND id >= :min2) OR ...) — different minIds packed into a single statement
  6. Sequential execution — all deletion batches and table-level deletes run sequentially (concatMap / Flux.concat), not in parallel. Retention deletes can be very large, and parallel mutations would risk saturating ClickHouse connections and causing excessive merge pressure.
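The partitioning math in (4) and the cutoff computation in (3) can be sketched as follows. This is an illustrative sketch only: the class and method names, the two-hex-digit bucket width, and the assumption that ids are time-ordered UUIDv7 values are mine, not necessarily the PR's exact code.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.UUID;

// Hypothetical sketch of the RetentionUtils ideas; names and constants assumed.
public final class RetentionMathSketch {

    // --- Partitioning: map leading hex digits of workspace_id to a tick ---
    private static final int HEX_DIGITS = 2;                 // assumed width: 256 slots
    private static final int SPACE = 1 << (4 * HEX_DIGITS);

    /** Tick index (0..ticksPerDay-1) responsible for this workspace today. */
    static int tickFor(String workspaceId, int ticksPerDay) {
        int prefix = Integer.parseInt(workspaceId.substring(0, HEX_DIGITS), 16);
        return (int) ((long) prefix * ticksPerDay / SPACE);  // proportional bucket
    }

    // --- Cutoff: smallest time-ordered UUID for a given instant ---
    /**
     * Lower-bound UUIDv7 for start_of_today - retention, assuming ids carry
     * 48-bit unix millis in the high bits (UUIDv7 layout).
     */
    static UUID cutoffIdFor(Instant startOfToday, Duration retention) {
        long ms = startOfToday.minus(retention).toEpochMilli();
        long msb = (ms << 16) | 0x7000L;         // timestamp, then version-7 nibble
        long lsb = 0x8000000000000000L;          // RFC 4122 variant bits, rest zero
        return new UUID(msb, lsb);
    }
}
```

With 48 ticks per day and proportional bucketing, each hex prefix maps to exactly one tick, so every workspace is visited exactly once per day.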

Key design decisions

  • Feature-flagged off by default — RETENTION_ENABLED=false in config.yml. No Helm changes needed; toggle via env var at deployment time.
  • Soft-delete rules — rules are never hard-deleted (enabled=false), preserving audit trail.
  • Idempotent DELETE endpoint — returns 204 whether the rule existed, was already inactive, or was never created.
  • applyToPast defaults to true — unless explicitly set to false, retention rules apply to all existing data. This keeps most workspaces in the efficient batch path.
  • Children-before-parents deletion order — feedback_scores and comments are deleted before spans, spans before traces, to avoid orphan references.
  • ClickHouse deletion strategy — we don't need to enforce immediate physical deletion. ClickHouse DELETE marks rows as deleted (so they're no longer accessible in queries), and the actual mutation is applied asynchronously by ClickHouse's merge process.
  • Sequential deletes to protect ClickHouse — all batches and per-table deletes run one at a time to avoid connection saturation and excessive merge pressure from parallel mutations.
  • DAO changes in existing files (CommentDAO, FeedbackScoreDAO, SpanDAO, TraceDAO) are limited to adding deleteForRetention / deleteForRetentionWithBounds methods — no changes to existing methods.
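The two deletion shapes from (5) look roughly like the builder below. This is a hypothetical illustration only — the real DAOs bind parameters rather than interpolating strings, and the class and method names are mine:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: renders the two DELETE shapes described above as strings.
public final class RetentionQuerySketch {

    /** applyToPast = true: all workspaces sharing a cutoff packed into one IN clause. */
    static String batchDelete(String table, List<String> workspaceIds, String cutoffId) {
        String in = workspaceIds.stream()
                .map(w -> "'" + w + "'")
                .collect(Collectors.joining(", "));
        return "DELETE FROM " + table
                + " WHERE workspace_id IN (" + in + ") AND id < '" + cutoffId + "'";
    }

    /** applyToPast = false: per-workspace lower bounds OR-ed into a single statement. */
    static String boundedDelete(String table, Map<String, String> minIdByWorkspace, String cutoffId) {
        String ors = minIdByWorkspace.entrySet().stream()
                .map(e -> "(workspace_id = '" + e.getKey() + "' AND id >= '" + e.getValue() + "')")
                .collect(Collectors.joining(" OR "));
        return "DELETE FROM " + table
                + " WHERE id < '" + cutoffId + "' AND (" + ors + ")";
    }
}
```

The first shape keeps the common case (applyToPast=true) in a single cheap statement per table; the second still issues one statement per table while honoring a different min_id per workspace.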

Not in scope for this version

  • Project-level rule enforcement — the project_id column exists in the rules table and uniqueness is enforced, but filtering in the deletion path is not yet implemented. Will be addressed in an upcoming PR.
  • Progressive deletion for large workspaces — chunked/throttled deletion to avoid ClickHouse pressure on big workspaces.
  • Daily usage workspace table — a future optimization that tracks which workspaces had activity each day, allowing the retention job to skip inactive workspaces entirely and greatly reduce the number of queries.

All of the above will be in the codebase before the feature is activated in prod.

File guide

File — Purpose
RetentionPolicyJob — Dropwizard managed lifecycle, Flux.interval scheduling, Redis lock acquire/release
RetentionPolicyService — Orchestration: partition → resolve rules → sequential ordered deletion (two query patterns)
RetentionRuleService — CRUD business logic for rules (create, find, deactivate)
RetentionRuleDAO — JDBI DAO for the MySQL retention_rules table
RetentionUtils — Hex-space partitioning math
RetentionConfig — Dropwizard config POJO (enabled, executionsPerDay, lockTimeout, batchSize)
RetentionRule / RetentionLevel / RetentionPeriod — API model + enums
RetentionRulesResource — JAX-RS REST endpoints (POST/GET/DELETE)
000056_create_retention_rules_table.sql — Liquibase migration for MySQL
CommentDAO / FeedbackScoreDAO / SpanDAO / TraceDAO — added deleteForRetention + deleteForRetentionWithBounds methods
config.yml — Retention config block with env var defaults
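For orientation, the retention block in config.yml presumably looks something like the sketch below. Key names follow the RetentionConfig fields listed above, but every env-var name other than RETENTION_ENABLED, and all default values except executionsPerDay=48, are assumptions:

```yaml
retention:
  enabled: ${RETENTION_ENABLED:-false}                    # feature-flagged off by default
  executionsPerDay: ${RETENTION_EXECUTIONS_PER_DAY:-48}   # 48 = one tick every 30 min
  lockTimeout: ${RETENTION_LOCK_TIMEOUT:-5m}              # Redis distributed-lock TTL (assumed)
  batchSize: ${RETENTION_BATCH_SIZE:-1000}                # per-batch size (assumed)
```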

Change checklist

  • User facing
  • Documentation update

Issues

  • OPIK-4891
  • OPIK-4797

Testing

  • Unit tests: RetentionUtilsTest — workspace range partitioning, sentinel coverage for non-hex workspace IDs
  • Integration tests: RetentionPolicyServiceTest — actual ClickHouse deletion verification, rule priority resolution (workspace > org), multi-workspace isolation, disabled/unlimited rule handling, children-before-parents ordering, applyToPast=false preserves pre-existing data across all 4 entity types (same retention period, both patterns exercised in one cycle)
  • API tests: RetentionRulesResourceTest — CRUD operations, validation, workspace scoping, idempotent deactivation, auto-deactivation of previous rule
  • Not tested: Flux scheduling loop, Redis lock lifecycle (integration/staging validation)

Documentation

github-actions bot added the java, Backend, and tests labels Mar 12, 2026

github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 15

 15 files  + 4   15 suites  +4   2m 2s ⏱️ +50s
209 tests +49  207 ✅ +47  2 💤 +2  0 ❌ ±0 
209 runs  +70  207 ✅ +68  2 💤 +2  0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 76 and adds 125 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccessForDefaultWorkspace__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[1]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccessForDefaultWorkspace__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[2]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccessForDefaultWorkspace__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[3]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccess__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[1]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccess__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[2]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccess__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[3]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ getWorkspaceName(String, int, String)[1]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ getWorkspaceName(String, int, String)[2]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ getWorkspaceName(String, int, String)[3]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ useInvalidWorkspace__thenReturnForbiddenResponse(String, String)[1]
…
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenBlueprintHasMultiplePrompts__thenUpdateOnlyChangedOne
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenBlueprintHasNoPrompts__thenNoUpdate
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenBlueprintReferencesPrompt__thenAutoUpdateBlueprint(Set, String)[1]
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenBlueprintReferencesPrompt__thenAutoUpdateBlueprint(Set, String)[2]
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenMaskReferencesPrompt__thenNoUpdate
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenMultipleProjects__thenOnlySameProjectUpdated
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenProjectExcluded__thenBlueprintNotUpdated
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$CreateAgentConfig ‑ createAgentConfig
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$CreateAgentConfig ‑ createAgentConfig__nameAutoIncrements
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$CreateAgentConfig ‑ createAgentConfig__perValueType(ValueType, String)[10]
…

♻️ This comment has been updated with latest results.


github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 9

 28 files  + 2   28 suites  +2   8m 9s ⏱️ + 3m 25s
333 tests +15  332 ✅ +20  1 💤  - 5  0 ❌ ±0 
333 runs  +41  332 ✅ +46  1 💤  - 5  0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 21 and adds 36 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$ConfigurationTests ‑ shouldVerifyStreamConfiguration
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$ConfigurationTests ‑ shouldVerifySubscriberIsEnabled
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$EdgeCaseTests ‑ shouldCompleteExport_whenDatasetDoesNotExist
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessExportJobSuccessfully_forEmptyDataset
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessExportJobSuccessfully_whenDatasetHasItems
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessExportJobWithLargeDataset
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessMultipleExportJobsInParallel
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest ‑ createAnthropicValidateMandatoryFields(ChatCompletionRequest, String)[1]
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest ‑ createAnthropicValidateMandatoryFields(ChatCompletionRequest, String)[2]
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest$Create ‑ create(String, LlmProvider, String, BiConsumer)
…
com.comet.opik.api.resources.v1.priv.DatasetExperimentE2ETest$FilterDatasetsByExperimentWith ‑ when__filteringByDatasetsWithExperimentsAfterAnExperimentIsDeleted__thenShouldReturnTheDatasetWithExperiments
com.comet.opik.api.resources.v1.priv.DatasetExperimentE2ETest$FilterDatasetsByExperimentWith ‑ when__filteringByDatasetsWithExperimentsAfterDeletingExperimentsButDatasetHasMore__thenShouldReturnTheDatasetWithExperiments
com.comet.opik.api.resources.v1.priv.DatasetExperimentE2ETest$FilterDatasetsByExperimentWith ‑ when__filteringByDatasetsWithExperiments__thenShouldReturnTheDatasetWithExperiments
com.comet.opik.api.resources.v1.priv.RetentionRulesResourceTest$CreateRetentionRule ‑ createOrgLevelWithProjectIdFails
com.comet.opik.api.resources.v1.priv.RetentionRulesResourceTest$CreateRetentionRule ‑ createOrganizationRule
com.comet.opik.api.resources.v1.priv.RetentionRulesResourceTest$CreateRetentionRule ‑ createProjectRule
com.comet.opik.api.resources.v1.priv.RetentionRulesResourceTest$CreateRetentionRule ‑ createProjectRuleDoesNotDeactivateWorkspaceRule
com.comet.opik.api.resources.v1.priv.RetentionRulesResourceTest$CreateRetentionRule ‑ createRuleAutoDeactivatesPrevious
com.comet.opik.api.resources.v1.priv.RetentionRulesResourceTest$CreateRetentionRule ‑ createRuleWithApplyToPast
com.comet.opik.api.resources.v1.priv.RetentionRulesResourceTest$CreateRetentionRule ‑ createRuleWithoutRetentionFails
…



github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 12

214 tests  +15   212 ✅ +13   2m 47s ⏱️ -46s
 34 suites + 1     2 💤 + 2 
 34 files   + 1     0 ❌ ± 0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 141 and adds 156 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.priv.ExperimentsResourceCustomConfigurationTest ‑ findExperimentsWithForceSortingBypassesLimit
com.comet.opik.api.resources.v1.priv.ExperimentsResourceCustomConfigurationTest ‑ findExperimentsWithSortingDisabled
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ createProject__whenApiKeyIsPresent__thenReturnProperResponse(String, boolean, ErrorMessage)[1]
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ createProject__whenApiKeyIsPresent__thenReturnProperResponse(String, boolean, ErrorMessage)[2]
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ createProject__whenApiKeyIsPresent__thenReturnProperResponse(String, boolean, ErrorMessage)[3]
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ deleteProject__whenApiKeyIsPresent__thenReturnProperResponse(String, boolean, ErrorMessage)[1]
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ deleteProject__whenApiKeyIsPresent__thenReturnProperResponse(String, boolean, ErrorMessage)[2]
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ deleteProject__whenApiKeyIsPresent__thenReturnProperResponse(String, boolean, ErrorMessage)[3]
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ getProjectById__whenApiKeyIsPresent__thenReturnProperResponse(String, Visibility, int)[1]
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ getProjectById__whenApiKeyIsPresent__thenReturnProperResponse(String, Visibility, int)[2]
…
com.comet.opik.api.resources.v1.priv.AttachmentResourceMinIOTest ‑ deleteTraceDeletesTraceAndSpanAttachments(Consumer)[1]
com.comet.opik.api.resources.v1.priv.AttachmentResourceMinIOTest ‑ deleteTraceDeletesTraceAndSpanAttachments(Consumer)[2]
com.comet.opik.api.resources.v1.priv.AttachmentResourceMinIOTest ‑ invalidBaseUrlFormatReturnsError
com.comet.opik.api.resources.v1.priv.AttachmentResourceMinIOTest ‑ uploadAttachmentWithMultiPartPresignUrl
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ createWhenNotValidApiKey(String, ErrorMessage)[1]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ createWhenNotValidApiKey(String, ErrorMessage)[2]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ deleteWhenNotValidApiKey(String, ErrorMessage)[1]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ deleteWhenNotValidApiKey(String, ErrorMessage)[2]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ getLogsWhenNotValidApiKey(String, ErrorMessage)[1]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ getLogsWhenNotValidApiKey(String, ErrorMessage)[2]
…



github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 16

 29 files   - 10   29 suites   - 10   3m 50s ⏱️ -37s
194 tests  - 33  194 ✅  - 33  0 💤 ±0  0 ❌ ±0 
172 runs   - 55  172 ✅  - 55  0 💤 ±0  0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 78 and adds 45 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldAckAndRemoveNonRetryableFailures(String, RuntimeException)[1]
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldAckAndRemoveNonRetryableFailures(String, RuntimeException)[2]
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldAckAndRemoveNonRetryableFailures(String, RuntimeException)[3]
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldAckAndRemoveNonRetryableFailures(String, RuntimeException)[4]
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldContinueProcessingAfterFailedMessages
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldRecoverFromNoGroupOnReadAndContinueProcessing
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$LifecycleTests ‑ shouldHandleExistingConsumerGroup
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$LifecycleTests ‑ shouldRemoveConsumerOnStop
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$RetryTests ‑ shouldAckAndRemoveAfterMaxRetries
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$RetryTests ‑ shouldHandleMixedSuccessRetryableAndNonRetryableMessagesInSameBatch
…
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ datasetBiInfoTest
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ experimentBiInfoTest
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ mixedWorkspaceExcludesDemoData
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ spanBiInfoTest
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ spansCountExcludingDemoData
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ spansCountForWorkspace
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ traceBiInfoTest
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ tracesCountExcludingDemoData
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ tracesCountForWorkspace
com.comet.opik.api.resources.v1.priv.AnnotationQueuesResourceTest$RequiredPermissionsTest ‑ deleteAnnotationQueueBatchPassesRequiredPermissionsToAuthEndpoint
…



github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 5

124 tests  +17   124 ✅ +17   3m 2s ⏱️ -29s
 26 suites + 2     0 💤 ± 0 
 26 files   + 2     0 ❌ ± 0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 10 and adds 27 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.priv.OllamaResourceTest ‑ listModels__emptyList(ClientSupport)
com.comet.opik.api.resources.v1.priv.OllamaResourceTest ‑ listModels__success(ClientSupport)
com.comet.opik.api.resources.v1.priv.OllamaResourceTest ‑ listModels__unauthorized(ClientSupport)
com.comet.opik.api.resources.v1.priv.OllamaResourceTest ‑ testConnection__failure(ClientSupport)
com.comet.opik.api.resources.v1.priv.OllamaResourceTest ‑ testConnection__missingBaseUrl(ClientSupport)
com.comet.opik.api.resources.v1.priv.OllamaResourceTest ‑ testConnection__success(String, String, ClientSupport)[1]
com.comet.opik.api.resources.v1.priv.OllamaResourceTest ‑ testConnection__success(String, String, ClientSupport)[2]
com.comet.opik.api.resources.v1.priv.OllamaResourceTest ‑ testConnection__unauthorized(ClientSupport)
com.comet.opik.infrastructure.bi.OpikGuiceyLifecycleEventListenerTest$FirstStartupTest ‑ shouldNotifyEvent(UsageReportService)
com.comet.opik.infrastructure.bi.OpikGuiceyLifecycleEventListenerTest$SecondStartupTest ‑ shouldNotNotifyEvent(UsageReportService)
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccessForDefaultWorkspace__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[1]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccessForDefaultWorkspace__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[2]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccessForDefaultWorkspace__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[3]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccess__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[1]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccess__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[2]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ checkAccess__whenApiKeyIsPresent__thenReturnProperResponse(String, int, String)[3]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ getWorkspaceName(String, int, String)[1]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ getWorkspaceName(String, int, String)[2]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ getWorkspaceName(String, int, String)[3]
com.comet.opik.api.resources.v1.priv.AuthenticationResourceTest$ApiKey ‑ useInvalidWorkspace__thenReturnForbiddenResponse(String, String)[1]
…



github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 2

 20 files  ± 0   20 suites  ±0   10m 31s ⏱️ - 24m 37s
259 tests ± 0  259 ✅ ± 0  0 💤 ±0  0 ❌ ±0 
180 runs   - 79  180 ✅  - 79  0 💤 ±0  0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.



github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 1

 24 files  + 2   24 suites  +2   2m 37s ⏱️ - 8m 50s
408 tests + 9  408 ✅ + 9  0 💤 ±0  0 ❌ ±0 
335 runs   - 64  335 ✅  - 64  0 💤 ±0  0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.



github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 4

    5 files  ±0      5 suites  ±0   3m 12s ⏱️ ±0s
1 362 tests +1  1 362 ✅ +1  0 💤 ±0  0 ❌ ±0 
1 274 runs  +2  1 274 ✅ +2  0 💤 ±0  0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.



github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 6

1 130 tests  +3   1 129 ✅ +2   5m 37s ⏱️ -3s
    8 suites +1       1 💤 +1 
    8 files   +1       0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 8 and adds 11 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.priv.DatasetExperimentE2ETest$FilterDatasetsByExperimentWith ‑ when__filteringByDatasetsWithExperimentsAfterAnExperimentIsDeleted__thenShouldReturnTheDatasetWithExperiments
com.comet.opik.api.resources.v1.priv.DatasetExperimentE2ETest$FilterDatasetsByExperimentWith ‑ when__filteringByDatasetsWithExperimentsAfterDeletingExperimentsButDatasetHasMore__thenShouldReturnTheDatasetWithExperiments
com.comet.opik.api.resources.v1.priv.DatasetExperimentE2ETest$FilterDatasetsByExperimentWith ‑ when__filteringByDatasetsWithExperiments__thenShouldReturnTheDatasetWithExperiments
com.comet.opik.api.resources.v1.priv.GuardrailsResourceTest ‑ getTraceStats_containsGuardrails
com.comet.opik.api.resources.v1.priv.GuardrailsResourceTest ‑ testCreateGuardrails_findTraces
com.comet.opik.api.resources.v1.priv.GuardrailsResourceTest ‑ testCreateGuardrails_getTraceById
com.comet.opik.infrastructure.health.IsAliveE2ETest ‑ testGetVersion
com.comet.opik.infrastructure.health.IsAliveE2ETest ‑ testIsAlive
com.comet.opik.api.resources.v1.priv.AttachmentResourceTest ‑ directS3DownloadShouldFailTest
com.comet.opik.api.resources.v1.priv.AttachmentResourceTest ‑ directS3UploadShouldFailTest
com.comet.opik.api.resources.v1.priv.AttachmentResourceTest ‑ uploadAttachmentWithMultiPartPresignUrl
com.comet.opik.api.resources.v1.priv.FindSpansResourceTest$FindSpans ‑ whenSortingByUsageTotalTokens__afterUpdate__thenReturnLatestVersion
com.comet.opik.api.resources.v1.priv.SpansBatchUpdateResourceTest$BatchUpdateAllFields ‑ batchUpdate__updateAllFields__success
com.comet.opik.api.resources.v1.priv.SpansBatchUpdateResourceTest$BatchUpdateTags ‑ batchUpdate__success(boolean, String)[1]
com.comet.opik.api.resources.v1.priv.SpansBatchUpdateResourceTest$BatchUpdateTags ‑ batchUpdate__success(boolean, String)[2]
com.comet.opik.api.resources.v1.priv.SpansBatchUpdateResourceTest$BatchUpdateTags ‑ batchUpdate__whenEmptyIds__thenReturn400
com.comet.opik.api.resources.v1.priv.SpansBatchUpdateResourceTest$BatchUpdateTags ‑ batchUpdate__whenNullUpdate__thenReturn400
com.comet.opik.api.resources.v1.priv.SpansBatchUpdateResourceTest$BatchUpdateTags ‑ batchUpdate__whenTooManyIds__thenReturn400
…



github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 13

440 tests  +19   438 ✅ +17   4m 21s ⏱️ -2s
 19 suites + 4     2 💤 + 2 
 19 files   + 4     0 ❌ ± 0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 67 and adds 86 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenBlueprintHasMultiplePrompts__thenUpdateOnlyChangedOne
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenBlueprintHasNoPrompts__thenNoUpdate
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenBlueprintReferencesPrompt__thenAutoUpdateBlueprint(Set, String)[1]
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenBlueprintReferencesPrompt__thenAutoUpdateBlueprint(Set, String)[2]
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenMaskReferencesPrompt__thenNoUpdate
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenMultipleProjects__thenOnlySameProjectUpdated
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$AutomaticBlueprintUpdates ‑ createPromptVersion__whenProjectExcluded__thenBlueprintNotUpdated
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$CreateAgentConfig ‑ createAgentConfig
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$CreateAgentConfig ‑ createAgentConfig__perValueType(ValueType, String)[10]
com.comet.opik.api.resources.v1.priv.AgentConfigsResourceTest$CreateAgentConfig ‑ createAgentConfig__perValueType(ValueType, String)[1]
…
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest ‑ createAnthropicValidateMandatoryFields(ChatCompletionRequest, String)[1]
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest ‑ createAnthropicValidateMandatoryFields(ChatCompletionRequest, String)[2]
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest$Create ‑ create(String, LlmProvider, String, BiConsumer)
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest$Create ‑ createAndStreamResponse(String, LlmProvider, String, BiConsumer)
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest$Create ‑ createAndStreamResponseGeminiInvalidApiKey
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest$Create ‑ createAndStreamResponseReturnsBadRequestWhenNoModel(String)[1]
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest$Create ‑ createAndStreamResponseReturnsBadRequestWhenNoModel(String)[2]
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest$Create ‑ createReturnsBadRequestWhenModelIsInvalid(String)[1]
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest$Create ‑ createReturnsBadRequestWhenModelIsInvalid(String)[2]
com.comet.opik.api.resources.v1.priv.ChatCompletionsResourceTest$Create ‑ createReturnsBadRequestWhenNoLlmProviderApiKey(String, LlmProvider)[1]
…



github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 10

206 tests  ±0   206 ✅ +2   8m 47s ⏱️ -43s
 25 suites +4     0 💤  - 2 
 25 files   +4     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 63 and adds 63 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.jobs.TraceThreadsClosingJobTest$TraceThreadsClosingJob ‑ shouldCloseTraceThreadsForProject
com.comet.opik.api.resources.v1.jobs.TraceThreadsClosingJobTest$TraceThreadsClosingJob ‑ shouldCloseTraceThreadsForProjectWithCustomTimeout
com.comet.opik.api.resources.v1.jobs.TraceThreadsClosingJobTest$TraceThreadsClosingJob ‑ shouldReopenTraceThreadsIfNewTracesAreAdded
com.comet.opik.api.resources.v1.priv.LlmProviderApiKeyResourceBuiltinProviderTest ‑ testBatchDelete_ignoresBuiltinProvider
com.comet.opik.api.resources.v1.priv.LlmProviderApiKeyResourceBuiltinProviderTest ‑ testFindProviders_builtinProviderHasCorrectConfiguration
com.comet.opik.api.resources.v1.priv.LlmProviderApiKeyResourceBuiltinProviderTest ‑ testFindProviders_builtinProviderHasReadOnlyTrue
com.comet.opik.api.resources.v1.priv.LlmProviderApiKeyResourceBuiltinProviderTest ‑ testFindProviders_builtinProviderIsAddedAtEnd
com.comet.opik.api.resources.v1.priv.LlmProviderApiKeyResourceBuiltinProviderTest ‑ testFindProviders_includesVirtualBuiltinProvider_whenEnabled
com.comet.opik.api.resources.v1.priv.LlmProviderApiKeyResourceBuiltinProviderTest ‑ testFindProviders_userProvidersHaveReadOnlyFalse
com.comet.opik.api.resources.v1.priv.WorkspacesResourceTest$CostsMetricsTest ‑ costsDaily_emptyData(boolean)[1]
…
com.comet.opik.api.resources.v1.priv.DashboardsResourceTest$BatchDeleteDashboards ‑ batchDeleteFromDifferentWorkspaceReturns204
com.comet.opik.api.resources.v1.priv.DashboardsResourceTest$BatchDeleteDashboards ‑ batchDeleteMultipleExistingDashboards
com.comet.opik.api.resources.v1.priv.DashboardsResourceTest$BatchDeleteDashboards ‑ batchDeleteSingleDashboard
com.comet.opik.api.resources.v1.priv.DashboardsResourceTest$BatchDeleteDashboards ‑ batchDeleteWithMixedIds
com.comet.opik.api.resources.v1.priv.DashboardsResourceTest$BatchDeleteDashboards ‑ batchDeleteWithNonExistentIdsReturns204
com.comet.opik.api.resources.v1.priv.DashboardsResourceTest$CreateDashboard ‑ createDashboardWithAllFields(DashboardType, DashboardScope)[1]
com.comet.opik.api.resources.v1.priv.DashboardsResourceTest$CreateDashboard ‑ createDashboardWithAllFields(DashboardType, DashboardScope)[2]
com.comet.opik.api.resources.v1.priv.DashboardsResourceTest$CreateDashboard ‑ createDashboardWithDuplicateNameSucceeds
com.comet.opik.api.resources.v1.priv.DashboardsResourceTest$CreateDashboard ‑ createDashboardWithSpecialCharactersInName
com.comet.opik.api.resources.v1.priv.DashboardsResourceTest$CreateDashboard ‑ createDashboardWithoutDescription
…

♻️ This comment has been updated with latest results.

github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 3

313 tests  +6   313 ✅ +6   9m 56s ⏱️ +9s
 29 suites +1     0 💤 ±0 
 29 files   +1     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 14

222 tests   - 14   222 ✅  - 14   12m 5s ⏱️ + 2m 2s
 27 suites + 4     0 💤 ± 0 
 27 files   + 4     0 ❌ ± 0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 77 and adds 63 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.priv.AttachmentResourceMinIOTest ‑ deleteTraceDeletesTraceAndSpanAttachments(Consumer)[1]
com.comet.opik.api.resources.v1.priv.AttachmentResourceMinIOTest ‑ deleteTraceDeletesTraceAndSpanAttachments(Consumer)[2]
com.comet.opik.api.resources.v1.priv.AttachmentResourceMinIOTest ‑ invalidBaseUrlFormatReturnsError
com.comet.opik.api.resources.v1.priv.AttachmentResourceMinIOTest ‑ uploadAttachmentWithMultiPartPresignUrl
com.comet.opik.api.resources.v1.priv.DatasetsCsvUploadResourceTest ‑ uploadCsvFile__invalidHeaders(String, String)[1]
com.comet.opik.api.resources.v1.priv.DatasetsCsvUploadResourceTest ‑ uploadCsvFile__invalidHeaders(String, String)[2]
com.comet.opik.api.resources.v1.priv.DatasetsCsvUploadResourceTest ‑ uploadCsvFile__invalidHeaders(String, String)[3]
com.comet.opik.api.resources.v1.priv.DatasetsCsvUploadResourceTest ‑ uploadCsvFile__largeBatch
com.comet.opik.api.resources.v1.priv.DatasetsCsvUploadResourceTest ‑ uploadCsvFile__specialCharacters
com.comet.opik.api.resources.v1.priv.DatasetsCsvUploadResourceTest ‑ uploadCsvFile__success
…
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldAckAndRemoveNonRetryableFailures(String, RuntimeException)[1]
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldAckAndRemoveNonRetryableFailures(String, RuntimeException)[2]
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldAckAndRemoveNonRetryableFailures(String, RuntimeException)[3]
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldAckAndRemoveNonRetryableFailures(String, RuntimeException)[4]
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldContinueProcessingAfterFailedMessages
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$FailureTests ‑ shouldRecoverFromNoGroupOnReadAndContinueProcessing
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$LifecycleTests ‑ shouldHandleExistingConsumerGroup
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$LifecycleTests ‑ shouldRemoveConsumerOnStop
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$RetryTests ‑ shouldAckAndRemoveAfterMaxRetries
com.comet.opik.api.resources.v1.events.BaseRedisSubscriberTest$RetryTests ‑ shouldHandleMixedSuccessRetryableAndNonRetryableMessagesInSameBatch
…

github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 8

268 tests  +5   268 ✅ +6   4m 31s ⏱️ +22s
 24 suites +3     0 💤  - 1 
 24 files   +3     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 13 and adds 18 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ datasetBiInfoTest
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ experimentBiInfoTest
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ mixedWorkspaceExcludesDemoData
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ spanBiInfoTest
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ spansCountExcludingDemoData
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ spansCountForWorkspace
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ traceBiInfoTest
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ tracesCountExcludingDemoData
com.comet.opik.api.resources.v1.internal.UsageResourceTest$Usage ‑ tracesCountForWorkspace
com.comet.opik.api.resources.v1.priv.AttachmentResourceTest ‑ directS3DownloadShouldFailTest
…
com.comet.opik.api.resources.v1.events.WebhookSubscriberLoggingTest ‑ processEvent_whenSuccessfulWebhook_shouldSendRequestAndCreateLogs
com.comet.opik.api.resources.v1.events.WebhookSubscriberLoggingTest ‑ processEvent_whenWebhookFails_shouldRetryAndCreateErrorLogs
com.comet.opik.api.resources.v1.priv.SpansResourceTest$RequiredPermissionsTest ‑ batchUpdateSpansPassesRequiredPermissionsToAuthEndpoint
com.comet.opik.api.resources.v1.priv.SpansResourceTest$RequiredPermissionsTest ‑ createSpanPassesRequiredPermissionsToAuthEndpoint
com.comet.opik.api.resources.v1.priv.SpansResourceTest$RequiredPermissionsTest ‑ createSpansBatchPassesRequiredPermissionsToAuthEndpoint
com.comet.opik.api.resources.v1.priv.SpansResourceTest$RequiredPermissionsTest ‑ updateSpanPassesRequiredPermissionsToAuthEndpoint
com.comet.opik.domain.RetentionPolicyServiceTest$DeletionVerification ‑ deletesOnlyOldRowsAcrossAllTables
com.comet.opik.domain.RetentionPolicyServiceTest$DeletionVerification ‑ deletionIsScopedToTargetWorkspaces
com.comet.opik.domain.RetentionPolicyServiceTest$RetentionCycleExecution ‑ applyToPastFalsePreservesPreExistingData
com.comet.opik.domain.RetentionPolicyServiceTest$RetentionCycleExecution ‑ deletesExpiredDataAndKeepsRecentData
…

github-actions bot commented Mar 12, 2026

Backend Tests - Unit Tests

1 471 tests  +7   1 469 ✅ +7   54s ⏱️ -1s
  176 suites +1       2 💤 ±0 
  176 files   +1       0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 7

257 tests  +19   257 ✅ +19   2m 21s ⏱️ +7s
 27 suites + 4     0 💤 ± 0 
 27 files   + 4     0 ❌ ± 0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 12 and adds 31 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.priv.DatasetsResourceCreateFromTracesTest ‑ createDatasetItemsFromTraces__success
com.comet.opik.api.resources.v1.priv.DatasetsResourceCreateFromTracesTest ‑ createDatasetItemsFromTraces__whenDatasetNotFound__thenReturn404
com.comet.opik.api.resources.v1.priv.DatasetsResourceCreateFromTracesTest ‑ createDatasetItemsFromTraces__whenTraceIdsAreEmpty__thenReturn422
com.comet.opik.api.resources.v1.priv.DatasetsResourceCreateFromTracesTest ‑ createDatasetItemsFromTraces__withAllEnrichmentOptionsButNoData__success
com.comet.opik.api.resources.v1.priv.DatasetsResourceCreateFromTracesTest ‑ createDatasetItemsFromTraces__withNoEnrichmentOptions__success
com.comet.opik.infrastructure.redis.RedisStreamCodecTest ‑ shouldFailWithOldCodecButSucceedWithConfiguredCodec
com.comet.opik.infrastructure.redis.RedisStreamCodecTest ‑ shouldFailWithOldMapperButSucceedWithConfigured
com.comet.opik.infrastructure.redis.RedisStreamCodecTest ‑ shouldMemoizeCodecInstance
com.comet.opik.infrastructure.redis.RedisStreamCodecTest ‑ shouldUseConfiguredMapper
com.comet.opik.infrastructure.redis.RedisStreamCodecTest ‑ shouldWriteAndReadLargePayloadToRedis
…
com.comet.opik.api.resources.v1.jobs.TraceThreadsClosingJobTest$TraceThreadsClosingJob ‑ shouldCloseTraceThreadsForProject
com.comet.opik.api.resources.v1.jobs.TraceThreadsClosingJobTest$TraceThreadsClosingJob ‑ shouldCloseTraceThreadsForProjectWithCustomTimeout
com.comet.opik.api.resources.v1.jobs.TraceThreadsClosingJobTest$TraceThreadsClosingJob ‑ shouldReopenTraceThreadsIfNewTracesAreAdded
com.comet.opik.api.resources.v1.priv.PromptResourceTest$ProjectScopedPrompts ‑ createPromptWithExistingProjectName
com.comet.opik.api.resources.v1.priv.PromptResourceTest$ProjectScopedPrompts ‑ createPromptWithNonExistingProjectId
com.comet.opik.api.resources.v1.priv.PromptResourceTest$ProjectScopedPrompts ‑ createPromptWithNonExistingProjectName
com.comet.opik.api.resources.v1.priv.PromptResourceTest$ProjectScopedPrompts ‑ createPromptWithProjectId
com.comet.opik.api.resources.v1.priv.PromptResourceTest$ProjectScopedPrompts ‑ findPromptsByProjectId
com.comet.opik.api.resources.v1.priv.PromptResourceTest$RequiredPermissionsTest ‑ deletePromptByIdPassesRequiredPermissionsToAuthEndpoint
com.comet.opik.api.resources.v1.priv.PromptResourceTest$RequiredPermissionsTest ‑ deletePromptsBatchPassesRequiredPermissionsToAuthEndpoint
…

github-actions bot commented Mar 12, 2026

Python SDK E2E Tests Results (Python 3.13)

244 tests  ±0   242 ✅ ±0   8m 35s ⏱️ -4s
  1 suites ±0     2 💤 ±0 
  1 files   ±0     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 1 and adds 1 tests. Note that renamed tests count towards both.
tests.e2e.test_tracing ‑ test_opik_client__update_trace__happy_flow[None-None-None-None-019cf763-94da-72fe-81c3-933e805d8156]
tests.e2e.test_tracing ‑ test_opik_client__update_trace__happy_flow[None-None-None-None-019cfd18-99f5-74a6-93dd-e2c26df1503e]

github-actions bot commented Mar 12, 2026

Python SDK E2E Tests Results (Python 3.11)

244 tests  ±0   242 ✅ ±0   8m 33s ⏱️ ±0s
  1 suites ±0     2 💤 ±0 
  1 files   ±0     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 1 and adds 1 tests. Note that renamed tests count towards both.
tests.e2e.test_tracing ‑ test_opik_client__update_trace__happy_flow[None-None-None-None-019cf73f-c11b-75d8-9300-6af82099c94a]
tests.e2e.test_tracing ‑ test_opik_client__update_trace__happy_flow[None-None-None-None-019cfd19-ed07-73d0-b20a-323b1dbf3498]

github-actions bot commented Mar 12, 2026

TS SDK E2E Tests - Node 18

238 tests  +2   236 ✅ +2   16m 2s ⏱️ - 1m 0s
 25 suites ±0     2 💤 ±0 
  1 files   ±0     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

github-actions bot commented Mar 12, 2026

Python SDK E2E Tests Results (Python 3.14)

244 tests  ±0   242 ✅ ±0   8m 3s ⏱️ - 1m 28s
  1 suites ±0     2 💤 ±0 
  1 files   ±0     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 1 and adds 1 tests. Note that renamed tests count towards both.
tests.e2e.test_tracing ‑ test_opik_client__update_trace__happy_flow[None-None-None-None-019cf747-0abe-7b5b-a229-74a9a5cc0dd9]
tests.e2e.test_tracing ‑ test_opik_client__update_trace__happy_flow[None-None-None-None-019cfd19-6750-71c7-ac0d-a82ae0bd7eb7]

github-actions bot commented Mar 12, 2026

TS SDK E2E Tests - Node 20

238 tests  +2   236 ✅ +2   19m 34s ⏱️ + 3m 2s
 25 suites ±0     2 💤 ±0 
  1 files   ±0     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

github-actions bot commented Mar 12, 2026

Python SDK E2E Tests Results (Python 3.10)

244 tests  ±0   242 ✅ ±0   12m 57s ⏱️ + 3m 54s
  1 suites ±0     2 💤 ±0 
  1 files   ±0     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 1 and adds 1 tests. Note that renamed tests count towards both.
tests.e2e.test_tracing ‑ test_opik_client__update_trace__happy_flow[None-None-None-None-019cf738-50ab-770e-9609-26c33abb296f]
tests.e2e.test_tracing ‑ test_opik_client__update_trace__happy_flow[None-None-None-None-019cfd1a-72dd-78c2-ab10-fc16efb61b21]

github-actions bot commented Mar 12, 2026

Backend Tests - Integration Group 11

143 tests   - 31   140 ✅  - 32   3m 11s ⏱️ +13s
 21 suites  -  2     3 💤 + 1 
 21 files    -  2     0 ❌ ± 0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 162 and adds 131 tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.events.WebhookSubscriberLoggingTest ‑ processEvent_whenSuccessfulWebhook_shouldSendRequestAndCreateLogs
com.comet.opik.api.resources.v1.events.WebhookSubscriberLoggingTest ‑ processEvent_whenWebhookFails_shouldRetryAndCreateErrorLogs
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ createWhenNotValidApiKey(String, ErrorMessage)[1]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ createWhenNotValidApiKey(String, ErrorMessage)[2]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ deleteWhenNotValidApiKey(String, ErrorMessage)[1]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ deleteWhenNotValidApiKey(String, ErrorMessage)[2]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ getLogsWhenNotValidApiKey(String, ErrorMessage)[1]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ getLogsWhenNotValidApiKey(String, ErrorMessage)[2]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ getWhenNotValidApiKey(String, ErrorMessage)[1]
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ApiKey ‑ getWhenNotValidApiKey(String, ErrorMessage)[2]
…
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$ConfigurationTests ‑ shouldVerifyStreamConfiguration
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$ConfigurationTests ‑ shouldVerifySubscriberIsEnabled
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$EdgeCaseTests ‑ shouldCompleteExport_whenDatasetDoesNotExist
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessExportJobSuccessfully_forEmptyDataset
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessExportJobSuccessfully_whenDatasetHasItems
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessExportJobWithLargeDataset
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessMultipleExportJobsInParallel
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ createProject__whenApiKeyIsPresent__thenReturnProperResponse(String, boolean, ErrorMessage)[1]
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ createProject__whenApiKeyIsPresent__thenReturnProperResponse(String, boolean, ErrorMessage)[2]
com.comet.opik.api.resources.v1.priv.ProjectsResourceTest$ApiKey ‑ createProject__whenApiKeyIsPresent__thenReturnProperResponse(String, boolean, ErrorMessage)[3]
…
This pull request removes 2 skipped tests and adds 3 skipped tests. Note that renamed tests count towards both.
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$CombinedSortingAndFilteringFunctionality ‑ filterAndSort
com.comet.opik.api.resources.v1.priv.AutomationRuleEvaluatorsResourceTest$ListFilteringFunctionality ‑ filterByProjectName(String, String, String, Predicate)
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessExportJobSuccessfully_whenDatasetHasItems
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessExportJobWithLargeDataset
com.comet.opik.api.resources.v1.events.DatasetExportJobSubscriberResourceTest$SuccessTests ‑ shouldProcessMultipleExportJobsInParallel

github-actions bot commented Mar 12, 2026

Python SDK E2E Tests Results (Python 3.12)

244 tests  ±0   242 ✅ ±0   7m 55s ⏱️ - 5m 23s
  1 suites ±0     2 💤 ±0 
  1 files   ±0     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.

This pull request removes 1 and adds 1 tests. Note that renamed tests count towards both.
tests.e2e.test_tracing ‑ test_opik_client__update_trace__happy_flow[None-None-None-None-019cf761-99ec-72c6-8560-82e74ea94f09]
tests.e2e.test_tracing ‑ test_opik_client__update_trace__happy_flow[None-None-None-None-019cfd19-ba1e-7e26-a66e-10575e7cb177]

github-actions bot commented Mar 12, 2026

TS SDK E2E Tests - Node 22

238 tests  +2   236 ✅ +2   18m 9s ⏱️ + 1m 21s
 25 suites ±0     2 💤 ±0 
  1 files   ±0     0 ❌ ±0 

Results for commit 27a1623. ± Comparison against base commit c8c6dfc.


ldaugusto and others added 21 commits March 16, 2026 15:04
Add deleteForRetention methods to CommentDAO, FeedbackScoreDAO, SpanDAO,
and TraceDAO with proper log_comment SETTINGS for ClickHouse audit trail.
Register RetentionConfig in OpikConfiguration.
- Fix computeCurrentFraction to use tick index instead of minuteOfDay
  (avoids skipped/duplicate fractions when 1440 % executionsPerDay != 0)
- Fix Location header: remove leading slash that replaced the full path
- Remove duplicate doOnError logging (job layer is single source of truth)
- Quote interpolated values in structured log messages
- Filter out rules with applyToPast=false in groupByRetention
Tests were failing because groupByRetention now filters out rules
where applyToPast != true, but test builders didn't set the flag.
Use unique workspace IDs per test method instead of static constants
to prevent data accumulation when surefire retries tests (rerunFailingTestsCount=3).
Each retry was inserting into the same workspace, causing count assertions to fail.
…y for ClickHouse visibility

Feedback scores with an authenticated user go to authored_feedback_scores,
not feedback_scores. Updated test assertions to query the correct table.
Added Awaitility-based awaitData helper for ClickHouse async insert
visibility and unique workspace IDs per test for surefire retry isolation.
…blocks, null-safe equality

- Chain unlock into reactive pipeline instead of fire-and-forget subscribe()
- Make DELETE idempotent: return 204 for non-existent/already-deactivated rules
- Replace COALESCE(project_id, '') with null-safe <=> operator
- Convert SQL string concatenation to text blocks in RetentionRuleDAO
…visibility

- Move RetentionPolicyService, RetentionRuleService, RetentionRuleDAO to com.comet.opik.domain
- Move RetentionUtils to com.comet.opik.utils
- Revert SpanDAO from public back to package-private (no longer needed since service is in same package)
- Restore TraceDAO interface to package-private (was accidentally made public)
- Restore deleted uuid_from_time/uuid_to_time query blocks in both DAOs
- Re-apply only our retention additions on top of clean main state
…o post-rule data

When applyToPast is false, the retention job only deletes data created
after the rule was created. This uses a minimum UUID v7 (IdGenerator.generateMinId)
as a lower bound (minId) so pre-existing data is preserved.

- Add IdGenerator.generateMinId() for deterministic minimum UUID v7 at a timestamp
- Refactor RetentionPolicyService to resolve per-rule cutoffId and optional minId
- Update all 4 DAOs (Trace, Span, FeedbackScore, Comment) with conditional minId clause
- Add test contrasting applyToPast=true vs false side-by-side

Co-Authored-By: Claude Opus 4.6 <[email protected]>
…conditions

Instead of one query per applyToPast=false workspace, pack them into a single
statement with per-workspace (workspace_id, min_id) OR clauses. applyToPast=true
workspaces still use the simple IN (:workspace_ids) pattern. Normalize cutoff
to start-of-day UTC for deterministic batching across ticks.
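The packed statement described in this commit might look roughly like the following sketch (a Java text block with placeholder parameter names such as `:ws1`/`:min_id1`; the PR's actual DAO query may differ):

```java
// Sketch only: packs applyToPast=false workspaces into one DELETE with
// per-workspace (workspace_id, min_id) OR clauses, while applyToPast=true
// workspaces share the simple IN list. Bind names are placeholders.
public class PackedDeleteSketch {

    static final String PACKED_DELETE = """
            DELETE FROM traces
            WHERE id < :cutoff_id
              AND (
                    workspace_id IN (:apply_to_past_workspace_ids)
                 OR (workspace_id = :ws1 AND id >= :min_id1)
                 OR (workspace_id = :ws2 AND id >= :min_id2)
              )
            """;

    public static void main(String[] args) {
        System.out.println(PACKED_DELETE);
    }
}
```

Each applyToPast=false workspace carries its own `min_id` lower bound derived from its rule's `createdAt`, so one statement covers both rule flavors.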
ldaugusto force-pushed the daniela/opik-4981-data-retention-endpoints branch from 426103f to d7147ce on March 16, 2026 15:05
ldaugusto marked this pull request as ready for review March 16, 2026 15:07
Comment on lines +68 to +70
log.info("Retention policy job started: interval={}, executionsPerDay={}, fractions={}",
interval, config.getExecutionsPerDay(), config.getTotalFractions());
}
Startup log emits unquoted placeholders: log.info("Retention policy job started: interval={}, executionsPerDay={}, fractions={}", ...), but backend logging guidelines (apps/opik-backend/AGENTS.md) require interpolated values to be quoted. This makes the startup entry non-compliant with our structured logging; can we quote the placeholders (e.g., interval='{}') to match the existing tick '{}' pattern?

Finding type: AI Coding Guidelines | Severity: 🟢 Low



Prompt for AI Agents:

In
apps/opik-backend/src/main/java/com/comet/opik/api/resources/v1/jobs/RetentionPolicyJob.java
around lines 68 to 70, the log message in the start() method uses unquoted placeholders.
Change the message to quote each interpolated value (for example, use "interval='{}',
executionsPerDay='{}', fractions='{}'"), keeping the same argument order (interval,
config.getExecutionsPerDay(), config.getTotalFractions()). This will make the startup
log conform to the backend structured-logging guideline requiring quoted values.

return lockService.lockUsingToken(RUN_LOCK, Duration.ofSeconds(config.getLockTimeoutSeconds()))
.flatMap(acquired -> {
if (!acquired) {
log.info("Retention policy: could not acquire lock, another instance is running");
warning level?

Comment on lines +103 to +105
int computeCurrentFraction(long tick) {
return (int) (tick % config.getTotalFractions());
}
The tick counter from Flux.interval resets to 0 on every app restart, which means that if the service restarts mid-day, early fractions are reprocessed while later fractions are skipped until the next full cycle completes. This is harmless since the DELETEs are idempotent, but consider persisting the last processed fraction (e.g., in Redis) to resume where the previous run left off and ensure more even coverage across restarts.
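A minimal sketch of the tick-to-fraction mapping (names are illustrative; the real method lives in RetentionPolicyJob) makes the restart behavior concrete:

```java
// Sketch of the tick -> fraction mapping. With executionsPerDay = 48,
// ticks 0..47 cover all 48 fractions exactly once per day.
public class FractionSketch {

    static int computeCurrentFraction(long tick, int totalFractions) {
        return (int) (tick % totalFractions);
    }

    public static void main(String[] args) {
        int totalFractions = 48;
        // A full day of ticks visits every fraction exactly once.
        for (long tick = 0; tick < totalFractions; tick++) {
            System.out.printf("tick=%d -> fraction=%d%n", tick,
                    computeCurrentFraction(tick, totalFractions));
        }
        // Flux.interval restarts at tick 0 after a redeploy, so fraction 0
        // is revisited even if it already ran earlier the same day.
        System.out.println("after restart: " + computeCurrentFraction(0, totalFractions));
    }
}
```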

private final @NonNull FeedbackScoreDAO feedbackScoreDAO;
private final @NonNull CommentDAO commentDAO;
private final @NonNull InstantToUUIDMapper uuidMapper;
private final @NonNull @Config("retention") RetentionConfig config;
@Config doesn't work well with Lombok for injection; either inject OpikConfiguration or you will need a manual constructor.

Commit 27a1623 addressed this comment by replacing the Lombok @RequiredArgsConstructor with an explicit @Inject constructor that takes @Config("retention") RetentionConfig, ensuring configuration injection works correctly.

Comment on lines +156 to +164
.concatMap(batch -> Flux.concat(
feedbackScoreDAO.deleteForRetention(batch, cutoffId)
.onErrorResume(e -> logAndSkip("feedback_scores", batch.size(), e)),
commentDAO.deleteForRetention(batch, cutoffId)
.onErrorResume(e -> logAndSkip("comments", batch.size(), e)),
spanDAO.deleteForRetention(batch, cutoffId)
.onErrorResume(e -> logAndSkip("spans", batch.size(), e)),
traceDAO.deleteForRetention(batch, cutoffId)
.onErrorResume(e -> logAndSkip("traces", batch.size(), e))));
What happens if we delete all traces of an experiment?

thiagohora left a comment
The most critical items are the span orphan issue (deleting by id instead of trace_id) and unbounded mutations on large workspaces.

Comment on lines +167 to +170
private static final String DELETE_FOR_RETENTION = """
DELETE FROM <table_name>
WHERE workspace_id IN :workspace_ids
AND entity_id \\< :cutoff_id
[Correctness — Medium] entity_id < :cutoff_id is correct for trace-type entities (entity_id = trace UUID). However, for span-type feedback scores (entity_type = 'span'), entity_id is the span UUID. If the span's id is newer than the cutoff (which can happen for late-arriving spans), those scores will survive even though the parent trace is gone.

This is a cascading effect from the span orphan issue — once spans are deleted by trace_id instead of id, this becomes less of a concern. But it's worth noting that this query doesn't filter by entity_type, so it catches both.

Also, entity_id is the 4th component of the ORDER BY key (workspace_id, project_id, entity_type, entity_id, name) — ClickHouse can't use the sort key efficiently since project_id and entity_type are skipped.

ldaugusto (author) commented Mar 17, 2026
This is not a big problem.

First, we actually want the deletions to cover all entity types and all projects, so this is not a problem: ClickHouse can navigate to the selected workspaces, then check all branches in the next two key levels, and finally narrow on the cutoff_id.

Second, for the span type we might leave a few feedback scores to be deleted the next day, which is fine; they will be orphaned for at most a day.

Much of the effort here is a best-effort deletion of user content without triggering secondary queries to check extra state.

Comment on lines +156 to +164
.concatMap(batch -> Flux.concat(
feedbackScoreDAO.deleteForRetention(batch, cutoffId)
.onErrorResume(e -> logAndSkip("feedback_scores", batch.size(), e)),
commentDAO.deleteForRetention(batch, cutoffId)
.onErrorResume(e -> logAndSkip("comments", batch.size(), e)),
spanDAO.deleteForRetention(batch, cutoffId)
.onErrorResume(e -> logAndSkip("spans", batch.size(), e)),
traceDAO.deleteForRetention(batch, cutoffId)
.onErrorResume(e -> logAndSkip("traces", batch.size(), e))));
[Correctness — Medium] onErrorResume per table means a transient ClickHouse error on e.g. feedback_scores is swallowed — we log and move on to comments, spans, traces. The failed table won't be retried until the same fraction is processed again, which happens once per day (since each fraction runs only once in a 24h cycle).

This means orphaned feedback scores could exist for up to 24 hours. Consider either:

  1. Keeping a "failed table + workspace" set in Redis and retrying on the next tick (any fraction), or
  2. At minimum, emitting a metric/alert when a table delete fails so ops can monitor it

ldaugusto (author) replied:
I believe best-effort execution without retries is actually better performance-wise. If something happens to a deletion today, tomorrow's execution for the same range will tackle it (and since the query is navigating the sort keys anyway, it doesn't make an extra pass for them), so in the worst case the oldest data in the workspace will have some inconsistency for a day.


return Mono.fromCallable(() -> template.inTransaction(READ_ONLY, handle -> {
var dao = handle.attach(RetentionRuleDAO.class);
return dao.findActiveWorkspaceRulesInRange(range[0], range[1]);
[Performance — Low] findActiveWorkspaceRulesInRange loads all matching rules into memory without pagination. If thousands of workspaces have active retention rules, this could be a large result set. Not critical for initial rollout, but worth keeping in mind as adoption grows.

ldaugusto (author) replied:
I'm not particularly worried about this tbh. For 1M workspaces, with the standard 48 ranges, that's only 20k strings. We can either keep 20k small strings in memory or we can convert this into going to Clickhouse 20 times (assuming 1k batches). I choose the first.

Comment on lines +15 to +27
public static String[] computeWorkspaceRange(int fraction, int totalFractions) {
long maxVal = 1L << 32;
long rangeSize = maxVal / totalFractions;

long start = fraction * rangeSize;
long end = (fraction == totalFractions - 1) ? maxVal : (fraction + 1) * rangeSize;

String rangeStart = String.format("%08x", start);
String rangeEnd = (end >= maxVal)
? "~" // ASCII 126, sorts after all alphanumeric chars (some workspace_ids are not hex UUIDs)
: String.format("%08x", end);

return new String[]{rangeStart, rangeEnd};
[Medium] Workspace IDs starting with non-hex characters (g-z) will always sort above ffffffff and land in the last fraction only (via the ~ sentinel). If non-UUID workspace IDs exist in production, this concentrates load unevenly: the last fraction does all the work for those workspaces while other fractions skip them.

If this is a real scenario, consider hash-based partitioning (e.g. MD5(workspace_id).substring(0,8)) for more even distribution.
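Hash-based partitioning as suggested could be sketched like this (assumptions: MD5 as the hash, and the fraction computed application-side rather than via a SQL range predicate; class and method names are hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: hash any workspace_id (hex UUID or not) into a fraction so that
// non-hex IDs spread evenly instead of all landing in the last range.
public class HashPartitionSketch {

    static int fractionFor(String workspaceId, int totalFractions) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(workspaceId.getBytes(StandardCharsets.UTF_8));
            // First 4 digest bytes as an unsigned 32-bit value.
            long h = ((digest[0] & 0xFFL) << 24)
                    | ((digest[1] & 0xFFL) << 16)
                    | ((digest[2] & 0xFFL) << 8)
                    | (digest[3] & 0xFFL);
            return (int) (h % totalFractions);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 unavailable", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(fractionFor("zz-legacy-workspace", 48));
    }
}
```

The trade-off is that the rules query could no longer use a plain `workspace_id >= ? AND < ?` range predicate; the hash (or a prefix of it) would need to be stored and indexed.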

ldaugusto (author) replied:
The non-UUID workspaces do exist, but they are ~0.01% of the total, so I considered the hash solution overengineering; it would also make the fetch query slower.

thiagohora commented Mar 17, 2026
[Performance — Low] findActiveWorkspaceRulesInRange filters on enabled = true AND retention != 'unlimited' AND project_id IS NULL AND workspace_id >= ? AND workspace_id < ?. The existing index idx_active_workspace (enabled, workspace_id) only partially covers this.

However, note that retention != 'unlimited' is a not-equal condition — MySQL cannot efficiently use a B-tree index for != and will stop traversing the index after that column. So retention should be excluded from the index.

A better covering index would be:

INDEX idx_retention_job (enabled, project_id, workspace_id)

This gives:

  1. enabled = true — equality ✓
  2. project_id IS NULL — equality (IS NULL works with B-tree) ✓
  3. workspace_id >= ... AND < ... — range (must be last) ✓

The retention != 'unlimited' filter is applied as a post-filter after the index narrows the rows.

Not critical at low scale, but helpful as the rules table grows.

…eight_deletes_sync=0, config docs

- log.error → log.warn for tick failures (recoverable, retried next interval)
- Manual constructor for RetentionPolicyService (@Config doesn't work with Lombok)
- Remove @Valid from primitive fields in RetentionConfig
- Add lightweight_deletes_sync=0 to all retention DELETE queries (async mutation)
- Document recommended executionsPerDay divisor values in config.yml
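The async-mutation form of the retention DELETE might look like this sketch (table and column names follow the PR description; the actual DAO queries may differ):

```java
// Sketch: lightweight_deletes_sync = 0 makes ClickHouse return as soon as
// the lightweight DELETE mutation is enqueued, instead of waiting for it
// to complete, which keeps each retention tick short.
public class AsyncDeleteSketch {

    static final String RETENTION_DELETE = """
            DELETE FROM traces
            WHERE workspace_id IN (:workspace_ids)
              AND id < :cutoff_id
            SETTINGS lightweight_deletes_sync = 0
            """;

    public static void main(String[] args) {
        System.out.println(RETENTION_DELETE);
    }
}
```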

Labels

Backend, java (Pull requests that update Java code), tests (Including test files, or tests related like configuration)


2 participants