feat(nimbus): add validation to nimbus review serializer for targeting context fields within version range#15406

Open
moibra05 wants to merge 1 commit into main from 14925

Conversation

Contributor

@moibra05 commented Apr 21, 2026

Because

  • Targeting configs were not validated against versioned targeting context fields, allowing JEXL expressions to reference fields that may not exist across the selected version range

This commit

  • Adds version-range validation for targeting config JEXL expressions during experiment review
  • Extracts referenced identifiers from the JEXL expression and validates them against targeting context fields for all versions in the selected range
  • Reports unsupported fields with specific version breakdowns when they are only partially available

Fixes #14925
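The per-version check described above can be sketched roughly as follows (a minimal illustration, not the serializer code from this PR; the function name and data shapes are hypothetical):

```python
def unsupported_field_breakdown(referenced_fields, fields_by_version):
    """For each referenced root field, list the versions in the selected
    range whose targeting context does not define it."""
    breakdown = {}
    for field in sorted(referenced_fields):
        missing = sorted(
            version
            for version, fields in fields_by_version.items()
            if field not in fields
        )
        if missing:
            breakdown[field] = missing
    return breakdown


# A field present in every version yields no error; a partially
# available field is reported with the versions that lack it.
fields_by_version = {
    "121.0": {"os", "locale"},
    "122.0": {"os", "locale", "newField"},
}
print(unsupported_field_breakdown({"os", "newField"}, fields_by_version))
# {'newField': ['121.0']}
```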

Collaborator

@jaredlockhart jaredlockhart left a comment


This is so so so sick, I'm so pleased with this. Very well written, very clear, very idiomatic with the project patterns, and clearly and thoroughly tested.

Try to do the test fixture file approach. If it's a nightmare, then we can fall back to mocking, but it's worth at least trying to set it up now.

Great job @moibra05 🙏 🎉

Comment on lines +1845 to +1853
extracted_root_fields = set()
for subexpr in collect_exprs(targeting_config.targeting):
    try:
        json.loads(subexpr)
        continue
    except json.JSONDecodeError:
        if subexpr.startswith("."):
            continue
        extracted_root_fields.add(subexpr.partition(".")[0])

Oh then we should pull this out into a utility somewhere since we're also doing it here:

extracted_root_fields = set()
for subexpr in collect_exprs(targeting_config.targeting):
    try:
        json.loads(subexpr)
        continue
    except json.JSONDecodeError:
        if subexpr.startswith("."):
            continue
        extracted_root_fields.add(subexpr.partition(".")[0])
unknown_fields = extracted_root_fields - valid_fields
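A hedged sketch of what that shared helper might look like (the name `extract_root_jexl_fields` is hypothetical), mirroring the duplicated loop above:

```python
import json


def extract_root_jexl_fields(subexprs):
    """Return the root identifiers referenced by JEXL sub-expressions,
    skipping JSON literals and leading-dot transform references."""
    roots = set()
    for subexpr in subexprs:
        try:
            json.loads(subexpr)  # a parseable literal ("1", "true") is not a field
            continue
        except json.JSONDecodeError:
            if subexpr.startswith("."):
                continue
            roots.add(subexpr.partition(".")[0])
    return roots
```

Both call sites could then compute `unknown_fields = extract_root_jexl_fields(...) - valid_fields`.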

)
)

unknown_fields = extracted_root_fields - fields

Set logic is the greatest thing ever 💯 💯 💯

Comment on lines +4433 to +4439
patch(
    "experimenter.experiments.api.v5.serializers.collect_exprs",
    return_value=["knownField", "unknownField", ".id"],
),
patch(
    "experimenter.experiments.api.v5.serializers.TargetingContextFields.for_application",
    return_value={"knownField"},

I'm not sure mocking out the function calls here is the right way to do this. Usually we follow a pattern of having test fixture data, pointing the loader at it, forcing it to reload, and then letting all the logic flow naturally. I'd say see if you can do that rather than mocking it out at the function level.
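One hedged sketch of the fixture-file pattern described here (the paths and the loader function are hypothetical stand-ins, not Experimenter code):

```python
import json
import tempfile
from pathlib import Path


def load_targeting_context_fields(path):
    """Stand-in loader: read per-version targeting context fields from JSON."""
    data = json.loads(Path(path).read_text())
    return {version: set(fields) for version, fields in data.items()}


# In a test, write fixture data, point the loader at it, and let the
# validation logic flow naturally instead of mocking function calls.
with tempfile.TemporaryDirectory() as tmp:
    fixture = Path(tmp) / "targeting_context_fields.json"
    fixture.write_text(
        json.dumps(
            {
                "120.0": ["knownField"],
                "121.0": ["knownField", "anotherKnownField"],
            }
        )
    )
    fields_by_version = load_targeting_context_fields(fixture)
    assert fields_by_version["121.0"] == {"knownField", "anotherKnownField"}
```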

Comment on lines +4486 to +4491
fields_by_version = {
    "v120.0.0": {"knownField"},
    "v121.0.0": {"knownField", "anotherKnownField"},
    "v122.0.0": {"knownField", "anotherKnownField", "thirdKnownField"},
    "v123.0.0": {"knownField"},
}

Yeah, exactly: we could put these in fixture files and have the loader pick them up by mocking out the file path.

Comment on lines +4496 to +4497
"experimenter.experiments.api.v5.serializers.collect_exprs",
return_value=["knownField"],

And then we should be able to set up some dummy test targeting configs.
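A hedged sketch of what such dummy configs might look like (the dataclass and names are hypothetical, not the project's real targeting config class):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DummyTargetingConfig:
    """Minimal stand-in for a targeting config, used only in tests."""

    name: str
    targeting: str  # JEXL expression


# One config per scenario the validator should exercise.
TEST_TARGETING_CONFIGS = [
    DummyTargetingConfig("known_only", "knownField == true"),
    DummyTargetingConfig("partially_available", "anotherKnownField == true"),
    DummyTargetingConfig("unknown_field", "unknownField == true"),
]
```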



Development

Successfully merging this pull request may close these issues.

Validate targeting configs against targeting context fields within experiment version range
