Hi, thank you for the excellent work and for releasing the code!
I have a question regarding the egocentric 4D reconstruction results shown in the paper / project page (e.g., the comparison with MegaSAM, where your method reconstructs the scene in <1s).
I am trying to understand how these egocentric 4D reconstructions are generated in practice:
- Are these results obtained by running the released code directly, or do they rely on additional internal preprocessing / postprocessing steps?
- Specifically, is there an example pipeline or script for:
  - accumulating the online reconstruction over time, and
  - visualizing the resulting 4D (time-varying) pointmap or scene?
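For context, here is a minimal sketch of the kind of accumulation step I have in mind (the per-frame output format and all names here are my own assumptions for illustration, not taken from your released code):

```python
import numpy as np

def accumulate_pointmaps(frames):
    """Stack per-frame pointmaps into a time-varying 4D array.

    `frames` is an iterable of (timestamp, pointmap) pairs, where each
    pointmap is an (H, W, 3) array of 3D points in a shared world frame
    (this per-frame format is an assumption about what the online model
    emits, not something confirmed by the repository).
    """
    timestamps, pointmaps = [], []
    for t, pm in frames:
        timestamps.append(t)
        pointmaps.append(np.asarray(pm, dtype=np.float32))
    # Result shapes: (T,) timestamps and (T, H, W, 3) pointmaps,
    # i.e. one full pointmap per timestep.
    return np.asarray(timestamps), np.stack(pointmaps, axis=0)

# Toy usage: three hypothetical 4x4 frames at 10 Hz.
frames = [(i * 0.1, np.random.rand(4, 4, 3)) for i in range(3)]
ts, pts4d = accumulate_pointmaps(frames)
print(ts.shape, pts4d.shape)  # (3,) (3, 4, 4, 3)
```

Is this roughly the shape of the intended pipeline, or do the paper's results involve additional steps (e.g., filtering, alignment, or confidence thresholding) before visualization?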
If there is any documentation, example command, or dataset configuration that would help reproduce these results, I would greatly appreciate it.
Thank you again for the impressive work!