feat: add lorentz_correction to powder workflow #215
Conversation
Is there a reason why we can't compute a dspacing coord earlier and assign it to
Yeah, that's also a possibility. It has the same drawbacks as the other version, and it makes the name
Another option is to also remove
src/ess/powder/conversion.py (outdated)

```diff
     # See scipp/essreduce#249
     out.bins.coords.pop("event_time_offset", None)
-    return DspacingDetector[RunType](out)
+    return CorrectedDetector[RunType](out)
```
We have to keep the tof coord because it is used by masking in a later step.
I don't really see any reason to drop event_time_offset here except for performance. Maybe that's reason enough. On the other hand, that makes users unable to investigate correlations between event_time_offset and other coordinates if they want, which might be annoying.
src/ess/powder/conversion.py (outdated)

```diff
     graph: ElasticCoordTransformGraph[RunType],
     calibration: CalibrationData,
-) -> DspacingDetector[RunType]:
+) -> CorrectedDetector[RunType]:
```
The types don't make sense. If the function were used as a provider like this, it would receive data that already contains d-spacing, and the return type would clash with or bypass corrections. So I think you should just use sc.DataArray here.
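The reviewer's point can be sketched without sciline: in a type-driven workflow, a function annotated with domain types is picked up as a provider and wired into the graph by its annotations, whereas plain types keep a helper out of the graph. All names below are hypothetical stand-ins for illustration, not the actual EssDiffraction types:

```python
from typing import NewType

# Hypothetical stand-ins for the workflow's domain types; the real
# workflow uses sciline domain types wrapping sc.DataArray.
WavelengthDetector = NewType("WavelengthDetector", dict)
DspacingDetector = NewType("DspacingDetector", dict)


def to_dspacing(da: WavelengthDetector) -> DspacingDetector:
    # A type-driven scheduler would register this function as THE
    # producer of DspacingDetector because of its annotated return
    # type, so the annotations are not just documentation: they
    # determine where the function sits in the dependency graph.
    out = dict(da)
    out["coord"] = "dspacing"
    return DspacingDetector(out)


def compute_dspacing_coord(da: dict) -> dict:
    # Annotated with plain types, a helper stays out of the dependency
    # graph and can be called explicitly from a provider instead.
    return dict(da, coord="dspacing")
```

Annotating the helper with plain `dict` (or `sc.DataArray` in the real code) avoids the clash: the scheduler never treats it as a competing producer of a domain type.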
3b629ca to f12a9e3
src/ess/powder/conversion.py (outdated)

```diff
     graph: ElasticCoordTransformGraph[RunType],
     calibration: CalibrationData,
-) -> DspacingDetector[RunType]:
+) -> sc.DataArray | sc.Dataset:
```
How can this be a dataset?
I don't know. The type checker told me it could be.
It's definitely wrong. The input is a data array and all definitions of out are also data arrays.
Co-authored-by: Jan-Lukas Wynen <[email protected]>
Adding the correction to the workflow graph was more messy than I expected.
Workflow/graph related issues

The guidelines say that Lorentz correction should be applied to CorrectedDetector. But at that step in the workflow the events don't have dspacing (and Lorentz correction is a function of dspacing). I don't know if this is a bug in EssDiffraction or if it is a bug in the guidelines.

I worked around the issue by removing CorrectedDetector, going directly from WavelengthDetector to DspacingDetector. However, this makes the provider add_coords_masks_and_apply_corrections do a lot of things at once. And it takes a lot of arguments that are not really related, so it's not very useful for understanding what the provider does.

The other option would be to add more domain types where each step is done separately. But that would add a lot of unnecessary domain types to the graph.

There's a tension between making the graphs show relevant intermediate results and making the graphs explain what is being done to the data.
Issues in apply_lorentz_correction

The existing apply_lorentz_correction function could not be applied to data with variances. For now I just replaced it with something that works with variances.
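For context, a minimal variance-aware version can be sketched in plain numpy (the function name and signature here are assumptions for illustration; the actual implementation operates on scipp data structures). The time-of-flight powder Lorentz factor is L = d⁴ · sin θ, and scaling data by a deterministic factor f multiplies Gaussian variances by f²:

```python
import numpy as np


def lorentz_correction(values, variances, dspacing, two_theta):
    """Apply the TOF powder Lorentz factor L = d**4 * sin(theta).

    Multiplying data by a deterministic factor f scales the values
    by f and the variances by f**2, keeping both arrays consistent.
    """
    factor = dspacing**4 * np.sin(0.5 * two_theta)
    return values * factor, variances * factor**2


# Example: d = 1 Å at two_theta = 180 deg gives sin(90 deg) = 1,
# so the factor is 1 and values/variances are unchanged.
vals, variances = lorentz_correction(
    np.array([2.0]),
    np.array([0.5]),
    dspacing=np.array([1.0]),
    two_theta=np.array([np.pi]),
)
```

The key point is the `factor**2` on the variances: a correction written only for values silently leaves the error bars unscaled, which is presumably why the existing function could not handle data with variances.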