
Replace the ray-to-dem intersection algorithm with the secant method #5408

Closed
oleg-alexandrov wants to merge 0 commits into DOI-USGS:dev from oleg-alexandrov:dev

Conversation

@oleg-alexandrov
Contributor

This is a proposed fix for the issue of intersecting a ray from a rover camera with a DEM. There were several problems:

  • The rays emanating from the rover camera had the wrong direction (my bug; see Bugfixes for MSL sensor intrinsics, ale#589)
  • As @acpaquette noticed, the zero-elevation datum is not a great initial guess for a tiny DEM that is 4,000 m below the datum, viewed with oblique rays
  • The intersection uses a fixed-point method, which is fragile. A comment in the code says the current method fails at the limb, which I think is because of that.

This fix first finds a representative elevation based on sampling the DEM, then uses the secant method.

One issue that needs looking into is the stopping criterion. I left it the same, at 1/100th of the pixel resolution. For MSL the computed resolution ranged between 2 m and 10 m, which I think is a bit too coarse. I am not sure if the provided resolution() function is accurate enough.

To test this, one should first merge the ALE code, regenerate the ISD, and reattach it to the cub file with the same shapefile as before. One must fetch the latest ASP if doing mapprojection, as there is a bug fix there too.

I tested both the old fixed-point method and the new secant method after initializing with a representative elevation rather than the zero datum. The old approach does well with campt when line and sample are 200 or more, when the ray points more vertically, but it fails for line and sample between 120 and 200 (approximately), when the rays are more oblique. The new secant method still does well there, but it breaks down when line and sample are 120 and below (approximately), when one looks closer to the horizon. This was done with dataset NLB_450492465EDR_F0310634NCAM00357M1.cub with the fixed ISD and the same shape file as what @acpaquette prepared.

This is a large change. It is suggested to make one build with and one without this change and run some sanity checks for orbital sensors. This is my last week on CSM work, and I will not have much time for a large amount of work if this change has a big impact.

Licensing

This project is mostly composed of free and unencumbered software released into the public domain, and we are unlikely to accept contributions that are not also released into the public domain. Each file should have these words somewhere near the top:

This work is free and unencumbered software released into the public domain. In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain.

  • I dedicate any and all copyright interest in this software to the public domain. I make this dedication for the benefit of the public at large and to the detriment of my heirs and successors. I intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law.

@acpaquette
Collaborator

Just to keep you in the loop with tests on this @oleg-alexandrov, here is partial output from one of the camstats tests:

The difference between (double) group.findKeyword("LatitudeMinimum") and 9.928647861 is 2.0655599897168031e-08, which exceeds 1e-8, where
(double) group.findKeyword("LatitudeMinimum") evaluates to 9.9286478816555999,
9.928647861 evaluates to 9.928647861, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:29: Failure
The difference between (double) group.findKeyword("LatitudeMaximum") and 10.434709879 is 2.5338000142482997e-08, which exceeds 1e-8, where
(double) group.findKeyword("LatitudeMaximum") evaluates to 10.434709904338,
10.434709879 evaluates to 10.434709879, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:35: Failure
The difference between (double) group.findKeyword("LongitudeMaximum") and 256.146069653 is 5.8120008361584041e-08, which exceeds 1e-8, where
(double) group.findKeyword("LongitudeMaximum") evaluates to 256.14606959487998,
256.146069653 evaluates to 256.14606965299998, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:37: Failure
The difference between (double) group.findKeyword("LongitudeStandardDeviation") and 0.106583406 is 1.8490559999806422e-08, which exceeds 1e-8, where
(double) group.findKeyword("LongitudeStandardDeviation") evaluates to 0.10658338750944001,
0.106583406 evaluates to 0.10658340600000001, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:40: Failure
The difference between (double) group.findKeyword("SampleResolutionMinimum") and 18.840683397 is 1.4566600015086806e-07, which exceeds 1e-8, where
(double) group.findKeyword("SampleResolutionMinimum") evaluates to 18.840683542666,
18.840683397 evaluates to 18.840683396999999, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:41: Failure
The difference between (double) group.findKeyword("SampleResolutionMaximum") and 18.985953926 is 3.6378700229988681e-07, which exceeds 1e-8, where
(double) group.findKeyword("SampleResolutionMaximum") evaluates to 18.985953562212998,
18.985953926 evaluates to 18.985953926000001, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:42: Failure
The difference between (double) group.findKeyword("SampleResolutionAverage") and 18.908165593 is 5.2355002111426074e-08, which exceeds 1e-8, where
(double) group.findKeyword("SampleResolutionAverage") evaluates to 18.908165645355002,
18.908165593 evaluates to 18.908165593, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:43: Failure
The difference between (double) group.findKeyword("SampleResolutionStandardDeviation") and 0.038060025 is 1.4495013002380208e-08, which exceeds 1e-8, where
(double) group.findKeyword("SampleResolutionStandardDeviation") evaluates to 0.038060039495013,
0.038060025 evaluates to 0.038060024999999997, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:46: Failure
The difference between (double) group.findKeyword("LineResolutionMinimum") and 18.840683397 is 1.4566600015086806e-07, which exceeds 1e-8, where
(double) group.findKeyword("LineResolutionMinimum") evaluates to 18.840683542666,
18.840683397 evaluates to 18.840683396999999, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:47: Failure
The difference between (double) group.findKeyword("LineResolutionMaximum") and 18.985953926 is 3.6378700229988681e-07, which exceeds 1e-8, where
(double) group.findKeyword("LineResolutionMaximum") evaluates to 18.985953562212998,
18.985953926 evaluates to 18.985953926000001, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:48: Failure
The difference between (double) group.findKeyword("LineResolutionAverage") and 18.908165593 is 5.2355002111426074e-08, which exceeds 1e-8, where
(double) group.findKeyword("LineResolutionAverage") evaluates to 18.908165645355002,
18.908165593 evaluates to 18.908165593, and
1e-8 evaluates to 1e-08.

/tmp/jenkins-7a57a89e/workspace/ISIS_PR-5408/isis/tests/FunctionalTestsCamstats.cpp:49: Failure
The difference between (double) group.findKeyword("LineResolutionStandardDeviation") and 0.038060025 is 1.4495013002380208e-08, which exceeds 1e-8, where
(double) group.findKeyword("LineResolutionStandardDeviation") evaluates to 0.038060039495013,
0.038060025 evaluates to 0.038060024999999997, and
1e-8 evaluates to 1e-08.

We are exceptionally close to within our test tolerance, but I don't know if there is much we can do about it.

@oleg-alexandrov
Contributor Author

oleg-alexandrov commented Jan 16, 2024

One may try to see if decreasing the tolerance in the stopping criterion of the secant method changes anything, or one could just call these the new reference results. Given that one looks all the way from orbit, 1e-8 is small, whether these are pixels or lon-lat.

@oleg-alexandrov
Contributor Author

I realized that jigsaw also uses this function. I am not sure what it is used for there, but there is a possibility that the current stopping tolerance of 1/100th of a pixel may be too coarse, especially if one has to do numerical differentiation based on the result.

In CSM ground-to-image, the tolerance used is about 1e-8. Here 1/1000th of a pixel is likely a better tolerance, and will likely cost at most one more iteration, given that the method converges very fast.

@jlaura
Collaborator

jlaura commented Feb 8, 2024

@acpaquette What is the status on this? More work needed or waiting on bandwidth for additional review?

@acpaquette
Collaborator

@jlaura @oleg-alexandrov I still need to look into all of the failing tests here and do some manual verification. I should be able to get to this under continuous support, but it will take a bit longer to fully explore the changes.

@acpaquette
Collaborator

Just to loop back around here, this seems like a good change for ISIS, as it allows ISIS to use more local DEMs for both landed and orbital data. I still need to test on a network through jigsaw to see how the results differ.

@oleg-alexandrov
Contributor Author

Sounds good. I would suggest making the tolerance for convergence smaller, maybe by a factor of 10, and seeing the effect. For ground-level DEMs, whose grid spacing can grow with the distance from the rover, the current tolerance looked to be on the high side.

@acpaquette
Collaborator

@oleg-alexandrov I tested it against orbital data/networks and everything seems nominal. I was also able to replicate the images produced through ASP for the landed data. I think this PR, from a functional standpoint, is good to go.

When you have time Oleg, would you be able to update the tests? No rush on our end as the grounded support hasn't been widely publicized.

Thanks for an amazing addition to the code base!

@oleg-alexandrov
Contributor Author

I will not be coming back to CSM / ISIS work until the middle of June. I have never worked with the ISIS test suite and know very little about its structure.

Can the ISIS maintainers pick up the slack? :) Likely not the first time the regression suite broke.

@Kelvinrr
Collaborator

Kelvinrr commented Apr 5, 2024

@oleg-alexandrov we've always asked PR authors to write and update their own tests. People would be pretty upset if we suddenly made an exception. The process for updating tests is pretty well documented, and new outside contributors seem to be able to pick it up pretty fast: https://astrogeology.usgs.gov/docs/how-to-guides/isis-developer-guides/writing-isis-tests-with-ctest-and-gtest/

@oleg-alexandrov
Contributor Author

oleg-alexandrov commented Apr 8, 2024

Thanks for the user guide.

I believe I updated all the tests. I also pushed making the tolerance for convergence 1/1000 pixel rather than 1/100 pixel. This had the added bonus that more tests passed even before I started looking at them.

I brought up an issue in #5459. Currently some unit tests take a very long time while not doing much more than computing a number or two. Multiply that by the very large number of tests and potentially several iterations, and it becomes a pain to deal with.

@oleg-alexandrov
Contributor Author

Any thoughts on this?

@acpaquette
Collaborator

@oleg-alexandrov Apologies for the little activity on this; Jenkins has been dead for some time now. I will work in the background this week to double-check the tests and provide a review here.

@acpaquette
Collaborator

Looking into this now that I have a handle on the test failures on Mac, with a generic Mac test config from #5517.

@oleg-alexandrov
Contributor Author

This is a reminder to perhaps attempt to pull this in. This pull request has been open for 6 months.

A few months ago I fixed all the tests I could see failing. That meant a large amount of editing files and plugging in new numbers. Not fun. I can do that a second time, if provided with a list of failures (after merging into latest), but then I hope it will be merged promptly afterwards.

@amystamile-usgs
Contributor

@oleg-alexandrov I am unable to run tests until merge conflicts have been resolved.

@oleg-alexandrov oleg-alexandrov force-pushed the dev branch 2 times, most recently from f056bce to e912de9 on July 16, 2024 at 20:43
@oleg-alexandrov
Contributor Author

I fixed the conflict. Note that the test results changed in different ways in my branch and in the main one, so surely some tests will fail now. When you have the test results I will fix what is failing.

@amystamile-usgs
Contributor

I fixed the conflict. Note that the test results changed in different ways in my branch and in the main one, so surely some tests will fail now. When you have the test results I will fix what is failing.

I will run the tests now. It takes about an hour. I will post the results in the PR.

@amystamile-usgs
Contributor

The following tests FAILED:
--
30829 | 65 - isis_unit_test_Camera (Failed)
30830 | 71 - isis_unit_test_CameraPointInfo (Failed)
30833 | 199 - isis_unit_test_PixelFOV (Failed)
30834 | 275 - isis_unit_test_Sensor (Failed)
30835 | 405 - cosi_app_test_default (Failed)
30837 | 491 - fx_app_test_bandDependent (Failed)
30838 | 492 - fx_app_test_camera (Failed)
30839 | 658 - photomet_app_test_clemMaxemission (Failed)
30840 | 660 - photomet_app_test_clementine (Failed)
30841 | 667 - photomet_app_test_useDem (Failed)
30842 | 672 - photrim_app_test_usedem (Failed)
30843 | 674 - pixel2map_app_test_crismpolyoverlap (Failed)
30846 | 779 - cnetref_app_test_interestNoOverlaps (Failed)
30847 | 780 - cnetref_app_test_interestOverlaps (Failed)
30848 | 790 - cnettable_app_test_allowErrors (Failed)
30849 | 791 - cnettable_app_test_append (Failed)
30850 | 792 - cnettable_app_test_default (Failed)
30851 | 855 - isis_module_test_CropCam2map (Failed)
30852 | 856 - isis_module_test_CropEnlargeCam2map (Failed)
30853 | 857 - isis_module_test_CropReduceCam2map (Failed)
30854 | 858 - isis_module_test_EnlargeCropCam2map (Failed)
30855 | 860 - isis_module_test_ReduceCropCam2map (Failed)
30856 | 862 - apollo_unit_test_ApolloMetricCamera (Failed)
30857 | 892 - chandrayaan1_unit_test_Chandrayaan1M3Camera (Failed)
30858 | 896 - clementine_unit_test_UvvisCamera (Failed)
30860 | 908 - dawn_unit_test_DawnFcCamera (Failed)
30864 | 954 - kaguya_unit_test_KaguyaMiCamera (Failed)
30865 | 955 - kaguya_unit_test_KaguyaTcCamera (Failed)
30866 | 960 - lo_unit_test_LoHighCamera (Failed)
30867 | 961 - lo_unit_test_LoMediumCamera (Failed)
30868 | 978 - lrowacpho_app_test_default (Failed)
30873 | 1029 - mgs_unit_test_MocWideAngleCamera (Failed)
30874 | 1036 - mocproc_app_test_case01 (Failed)
30875 | 1037 - mocproc_app_test_case02 (Failed)
30876 | 1038 - mocproc_app_test_case03 (Failed)
30878 | 1041 - mro_unit_test_CTXCamera (Failed)
30879 | 1042 - mro_unit_test_CrismCamera (Failed)
30880 | 1069 - hijitter_app_test_default (Failed)
30881 | 1070 - hijitter_app_test_withCrop (Failed)
30883 | 1099 - odyssey_unit_test_ThemisVisCamera (Failed)
30886 | 1106 - thmproc_app_test_ir (Failed)
30887 | 1107 - thmproc_app_test_vis (Failed)
30888 | 1111 - odyssey_module_test_themis (Failed)
30890 | 1125 - tgo_unit_test_TgoCassisCamera (Failed)
30891 | 1246 - ThreeImageNetwork.FunctionalTestFindfeaturesFastGeomGridDefault (Failed)
30892 | 1248 - ThreeImageNetwork.FunctionalTestFindimageoverlapsNoOverlap (Failed)
30893 | 1426 - DefaultCube.FunctionalTestCamrangeMeta (Failed)
30894 | 1431 - DefaultCube.FunctionalTestCamstatsDefaultParameters (Failed)
30895 | 1491 - DefaultCube.FunctionalTestNoprojFromUser (Failed)
30896 | 1494 - DefaultCube.FunctionalTestPhocubeAllBands (Failed)
30897 | 1524 - LineScannerCube.FunctionalTestNoprojLineScanner (Failed)
30898 | 1773 - TempTestingFiles.UnitTestImagePolygonCross (Failed)
30899 | 1976 - ApolloNetwork.FunctionalTestJigsawApollo (Failed)
30900 | 2052 - Lrowacphomap.FunctionalTestLrowacphomapNoBack (Failed)
30901 | 2054 - Lrowacphomap.FunctionalTestLrowacphomapDefaultAlgoAndParCubeNoBack (Failed)
30902 | 2356 - TgoCassisModuleKernels.TgoCassisSingleFrameletProjection (Failed)
30903 | 2357 - TgoCassisModuleKernels.TgoCassisTestColorMosaic (Failed)
30904 | 2359 - TgoCassisModuleKernels.TgoCassisSingleColorMosaicReingest (Failed)
30905 | 2360 - TgoCassisModuleKernels.TgoCassisTestProjSingleStitchedFrame (Failed)
30906 | 2362 - TgoCassisModuleTests.TgoCassisUncontrolledSingleColorMosaic (Failed)

@oleg-alexandrov
Contributor Author

Were these tests passing before? Most of these have no relation to the function I changed, which projects points from the image to the ground.

@amystamile-usgs
Contributor

@oleg-alexandrov I have been running tests for all the PRs today and have not run across these test failures. I can run the tests again and see if anything changes.

Currently, there are 19 test failures in the dev branch that I did not include in this list.

@oleg-alexandrov
Contributor Author

oleg-alexandrov commented Jul 16, 2024 via email

@oleg-alexandrov
Contributor Author

oleg-alexandrov commented Jul 16, 2024 via email

@amystamile-usgs
Contributor

amystamile-usgs commented Jul 17, 2024

Here are the test logs:

tests-part1.log
tests-part2.log

Please ignore the following tests that are failing on dev:

[Screenshot listing the tests already failing on dev]

@oleg-alexandrov
Contributor Author

oleg-alexandrov commented Jul 17, 2024

Thank you. The useful information I was able to extract from those is:

EVENTS 1721165603679 The difference between hist->Average() and 127.46759871644132 is 3.0619243602814095e-06, which exceeds .000001, where
EVENTS 1721165603679 The difference between hist->Sum() and 41629898 is 1, which exceeds .0001, where
EVENTS 1721165607710 The difference between hist->Sum() and bandSum[i-1] is 1.9193115008420136e-06, which exceeds 0.000001, where
EVENTS 1721165625868 The difference between hist->Average() and 26.259947527749951 is 4.372687521225771e-05, which exceeds .000001, where
EVENTS 1721165625868 The difference between hist->Sum() and 78070.824000000604 is 0.13000000000465661, which exceeds .0001, where
EVENTS 1721165887540 The difference between csvLine[i].toDouble() and compareMe[i].toDouble() is 0.00056271000005381211, which exceeds 0.000001, where
EVENTS 1721165887540 The difference between csvLine[i].toDouble() and compareMe[i].toDouble() is 0.00033369999982824083, which exceeds 0.000001, where
EVENTS 1721165887540 The difference between csvLine[i].toDouble() and compareMe[i].toDouble() is 0.0013663400000041293, which exceeds 0.000001, where
EVENTS 1721165887540 The difference between csvLine[i].toDouble() and compareMe[i].toDouble() is 0.0003895800000464078, which exceeds 0.000001, where
EVENTS 1721165887540 The difference between csvLine[i].toDouble() and compareMe[i].toDouble() is 8.171000013135199e-05, which exceeds 0.000001, where
EVENTS 1721165887540 The difference between csvLine[i].toDouble() and compareMe[i].toDouble() is 0.00096860000007836788, which exceeds 0.000001, where
EVENTS 1721165887540 The difference between csvLine[i].toDouble() and compareMe[i].toDouble() is 0.00036263000004055357, which exceeds 0.000001, where
EVENTS 1721165887540 The difference between csvLine[i].toDouble() and compareMe[i].toDouble() is 0.00032482999995409045, which exceeds 0.000001, where
EVENTS 1721165887540 The difference between csvLine[i].toDouble() and compareMe[i].toDouble() is 0.00044334000000390006, which exceeds 0.000001, where
EVENTS 1721165947957 The difference between hist->Sum() and expectedSum is 0.04754066004534252, which exceeds 0.001, where
EVENTS 1721165949970 The difference between hist->Sum() and expectedSum is 0.046142582286847755, which exceeds 0.001, where
EVENTS 1721166103015 The difference between hist->Sum() and 70857.19977273792 is 0.083809107542037964, which exceeds 0.0001, where
EVENTS 1721166103015 The difference between hist->Sum() and 78150.645788893104 is 0.097585096955299377, which exceeds 0.0001, where
EVENTS 1721166103015 The difference between hist->Sum() and 78810.883871480823 is 0.29421606659889221, which exceeds 0.0001, where
EVENTS 1721166103015 The difference between hist->Sum() and 42226.834142448381 is 0.050979908555746078, which exceeds 0.0001, where
EVENTS 1721166121057 The difference between hist->Sum() and 441491.9287794265 is 0.2429589219391346, which exceeds 0.001, where
EVENTS 1721166263188 The difference between (double) group.findKeyword("EmissionMinimum") and 6.5873900702382002 is 4.2110721071348003, which exceeds 1e-8, where
EVENTS 1721166263188 The difference between (double) group.findKeyword("EmissionMaximum") and 26.933664313876001 is 13.431033218835001, which exceeds 1e-8, where
EVENTS 1721166263188 The difference between (double) group.findKeyword("EmissionAverage") and 14.577805073101 is 2.4263181069420003, which exceeds 1e-8, where
EVENTS 1721166263188 The difference between (double) group.findKeyword("EmissionStandardDeviation") and 1.9856895689242999 is 1.42025136788749, which exceeds 1e-8, where
EVENTS 1721166263188 The difference between (double) group.findKeyword("IncidenceMinimum") and 53.332131708106999 is 16.608964322613005, which exceeds 1e-8, where
EVENTS 1721166263188 The difference between (double) group.findKeyword("IncidenceMaximum") and 73.850688669405002 is 3.5387436018540086, which exceeds 1e-8, where
EVENTS 1721166263188 The difference between (double) group.findKeyword("IncidenceAverage") and 66.178552518139 is 3.9489066163740034, which exceeds 1e-8, where
EVENTS 1721166263188 The difference between (double) group.findKeyword("IncidenceStandardDeviation") and 1.7434732723265001 is 1.6409828310974302, which exceeds 1e-8, where
EVENTS 1721166281230 The difference between 214.399354 and centroid->getX() is 0.037849262457228861, which exceeds 1e-6, where
EVENTS 1721166281230 The difference between 67.471751 and centroid->getY() is 0.016809208945247178, which exceeds 1e-6, where

These numbers are much larger than would be expected. I don't understand what they mean, or even whether they have anything to do with the code I changed.

Last time around I spent at least a day on this.

I regretfully no longer have time to go through this process again. The tests take 2-3 hours to run on my machine. Then one has to plug the right numbers into the right files. Mistakes are unavoidable, so one has to rerun the tests. One also has to configure rclone and fetch gigabytes of test data, set up the environment, fetch ISIS, and wait a couple of hours for it to build.

I hope the effort I spent modifying the image-to-ground code, which required a new algorithm, is appreciated, and that under the umbrella of the relevant project the ISIS developers can take it from there.

@jlaura
Collaborator

jlaura commented Jul 17, 2024

This is an important contribution to have merged in support of work that I am doing. Given the challenges getting access to the test data #5550, the inability for external contributors to merge updates to the test data, and the fact that @oleg-alexandrov had GTests passing on or about April 8, I would propose that the remaining updates be addressed under an upcoming continuous support sprint.

Additionally, I would love to see process improvements that (1) make the test data more accessible, e.g., the proposed solution from #5550, and (2) significantly shorten the turnaround between contribution and review.

@github-actions

Thank you for your contribution!

Unfortunately, this pull request hasn't received much attention lately, so it is labeled as 'stale.'

If no additional action is taken, this pull request will be automatically closed in 180 days.

If you want to participate in our support prioritization meetings or be notified when support sprints are happening, you can sign up for the support sprint notification emails here.

Read more about our support process here.

@github-actions github-actions bot added the inactive Issue that has been inactive for at least 6 months label Jan 14, 2025
@oleg-alexandrov
Contributor Author

This should stay open. I will fix it when I am back on the project.

@github-actions github-actions bot removed the inactive Issue that has been inactive for at least 6 months label Jan 15, 2025
@github-actions

The build and test suite have started for your pull request.

To view your build log, please reference the build with source version: "PR_5408".

Additionally, check the latest "dev" source version to identify existing test failures. Please note that you are not responsible for the test failures that exist on both your PR and the dev branch.

3 similar comments

@oleg-alexandrov
Contributor Author

oleg-alexandrov commented Mar 26, 2025

@acpaquette @Kelvinrr

I fixed all tests except for the "app" and "module" tests. Those require the latest "truth" data to be stored in your data repo, to which I don't have write access.

Kelvin says that what is there now may do, and this can be merged in, if Adam agrees.

Here's the build that ran after squashing all commits: https://us-west-2.codebuild.aws.amazon.com/project/eyJlbmNyeXB0ZWREYXRhIjoiNDJNZ2MxbHFKTkwxV1RyQUxJekdJY3FIanNqU29rMHB4Nk1YUzk4REIrZUZDeEtEaW9HQlZ1dTZOSHpML2VUTGVDekYydmVFcU9sUHJKN20wQzd1Q0UzSzJscnB0MElDb1M3Ti9GTlJYR1RuMWJTV3V1SkJTa3NoYmc9PSIsIml2UGFyYW1ldGVyU3BlYyI6IjF3U2NTSGlDcEtCc29YVnEiLCJtYXRlcmlhbFNldFNlcmlhbCI6MX0%3D/build/048c6416-0022-410e-bd8f-b9e01532b358

Note that I made some minor changes to some tests. Some were comparing PVL tables and would bail out at the first failure, requiring around 8 cycles of edit/build/test/fix per test, multiplied by 3 such tests, which I think is counterproductive. I replaced those with a handful of EXPECT_NEAR statements.

I am quite confident the changes due to this are very reasonable. All sensible tests give results comparable to before, including the test where the logic is meant to intentionally fail if the ray points away from the planet.

Thanks.

To add, I wrote a little Python script to auto-edit tests and plug the right numbers into EXPECT_NEAR and EXPECT_EQ statements. Plugging in numbers by hand is a giant chore, and this may help going forward if more tests are standardized.

Collaborator

@acpaquette acpaquette left a comment


A couple of discussion points. I haven't checked the changes to tests/data, as my suggestions may impact the test results.

Comment on lines +149 to +151
// Compute the position along the ray
for (size_t i = 0; i < 3; i++)
intersectionPoint[i] = observerPos[i] + t * lookDirection[i];
Collaborator


I know it's not ideal, but can this be moved out of the demError function? Reading over the new algorithm, I could not find how the intersection point was being updated. Maybe move it into a small function that just updates the intersection point. That would also simplify the function signature here to just take the intersection point and a success flag.

Contributor Author


It is essential that the demError() function takes a parameter named t, which parameterizes the line. Later the secant method calls this with values of t that help it find the root.

The intersection point is being updated in the same function, demError(), a little further down. If I recall, I had to work within the constraints of the previous logic, in which the updated intersection point is a member of the class.

Collaborator


Right, I think I understand what you mean. If updating the point cannot be separated from the error generation, I think it would be helpful to update the name of the function. Maybe updateIntersection, this way it's more obvious that the intersection is being updated.

Contributor Author


Sure. The function could be called calcDemErrUpdateIntersection().

* Returned value is in km. A more robust method would return a couple
of values, and later try one if the other one fails.
*/
double DemShape::findDemValue() {
Collaborator

Is there a particular reason this deviates from using the ellipsoid to do an initial DEM value guess?

I don't know for sure, but this method seems to interact with global DEMs oddly. In that it would start at the upper left corner, and traverse the cube until a valid point is found. It would be nice to avoid this for global DEMs.

I think we can avoid this by using the ellipsoid intersection as an initial guess and getting a 5 x 5 pixel square around the initial intersection point. Then iterating over that to get an initial DEM value.
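The suggested seeding strategy could be sketched roughly as follows. All names here (initialDemValue, demValueAt, kNullPixel) are hypothetical stand-ins, not the ISIS API:

```cpp
#include <cmath>
#include <functional>

// Sketch of the suggested initial guess: intersect the ellipsoid first,
// then scan a 5 x 5 pixel window around that sample/line for a valid
// DEM value to seed the solver. Hypothetical names throughout.
const double kNullPixel = -1e32;  // stand-in for a Null DEM pixel

double initialDemValue(double centerSample, double centerLine,
                       const std::function<double(int, int)> &demValueAt) {
  for (int dl = -2; dl <= 2; dl++) {
    for (int ds = -2; ds <= 2; ds++) {
      double v = demValueAt((int)std::lround(centerSample) + ds,
                            (int)std::lround(centerLine) + dl);
      if (v != kNullPixel)
        return v;  // first valid height near the ellipsoid intersection
    }
  }
  return kNullPixel;  // no valid value in the window
}
```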

Contributor Author

The reason the ellipsoid is not used to find the initial DEM value guess is that this would fail for a rover looking at an oblique angle toward a small DEM right in front of it, 8,000 m below the datum.

So, this is an intentional fix for problems with the previous method.

Much effort went into ensuring the algorithm searches along the ray, querying the DEM, until an intersection point is found.

Collaborator
@acpaquette acpaquette Mar 28, 2025

Maybe I am missing something, but this function seems to step through the DEM until it finds a valid value/radius. For global DEMs, the first valid values would likely be found at high latitudes around the poles.

If my data is closer to the equator, it seems like a significant jump for the algorithm to have to converge toward the radius at the equator from a radius obtained closer to the poles. I just wonder if that is contributing to a larger difference in intersection for closer-to-nadir data.

Contributor Author

When the ray-to-DEM algorithm starts, it needs some estimate of the DEM height that is better than the zero datum elevation, which can be really off, especially for a rover.

Yes, the initial estimate is found by just sampling the DEM and stopping at the first good height value. The intersection with the ellipsoid at that value usually succeeds, even if the true value we need is off by a few km.

If no luck, the algorithm initiates a search along the ray. It could have started with just that, but this is somewhat slower than seeding with a valid DEM height. So, the strategy above is an optimization; the algorithm works without it.

Eventually, whichever way the initial estimate is found, a rigorous root search starts, so it should not produce a large difference in intersection no matter what. It will stop when the tolerance is reached, or it will fail.
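The fallback search along the ray, as described, could be sketched like this: march in t until the DEM error changes sign, which brackets a root that a solver such as the secant method can then refine. Names, bounds, and step sizes are illustrative, not the actual ISIS code:

```cpp
#include <functional>

// Walk along the ray P(t) = observer + t * dir, sampling the DEM error
// at increasing t until its sign changes; that interval [tLo, tHi]
// then seeds a root-finder. Illustrative sketch only.
bool bracketAlongRay(const std::function<double(double)> &demError,
                     double tMin, double tMax, double step,
                     double &tLo, double &tHi) {
  double prevT = tMin;
  double prevErr = demError(prevT);
  for (double t = tMin + step; t <= tMax; t += step) {
    double err = demError(t);
    if (prevErr * err <= 0.0) {  // sign change: a crossing is bracketed
      tLo = prevT;
      tHi = t;
      return true;
    }
    prevT = t;
    prevErr = err;
  }
  return false;  // ray never crossed the DEM within [tMin, tMax]
}
```

Seeding with a valid DEM height usually makes this march unnecessary, which is why it is only the fallback.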

@github-actions

The build and test suite have started for your pull request.

To view your build log, please reference the build with source version: "PR_5408".

Additionally, check the latest "dev" source version to identify existing test failures. Please note that you are not responsible for the test failures that exist on both your PR and the dev branch.

1 similar comment

@oleg-alexandrov
Contributor Author

oleg-alexandrov commented Mar 28, 2025

Suggested change implemented. Tweaked the tolerance for the stopping criterion.

Note: This resulted in a few more failed tests, but by small amounts, and this is fully expected, given the change in tolerance.

That very few extra tests failed means the change was not major overall. The new change forces the solver to work a little harder in some circumstances.

@github-actions

github-actions bot commented Apr 1, 2025

The build and test suite have started for your pull request.

To view your build log, please reference the build with source version: "PR_5408".

Additionally, check the latest "dev" source version to identify existing test failures. Please note that you are not responsible for the test failures that exist on both your PR and the dev branch.
