Conversation
Hey Niels! Thanks for putting together this PR. It's great to see you folks taking an interest in our work.

I think our main concern is that DoubleTake requires a bunch of machinery around the model to actually shine, which this repo provides and HuggingFace does not. Depth hinting via mesh fusion and rendering is what gives the model its edge. While the model will work without them, it'll be suboptimal. Having the model live here in this repo means users - researchers or practitioners - will be exposed to these mechanisms. The intention isn't to lock users into this repo, but rather to make them aware of this machinery on their first exposure to DoubleTake. From there they can break free into other repos after being inspired by the work.

I'm happy to hear solutions to this though. And thanks again for putting this together.

Mohamed
Hi Mohamed,

Thanks for your feedback. I don't think the goal is to have people only stay on Hugging Face for running models; it's mainly to discover research artifacts (and then be appropriately linked to the code to run inference with them - which could be a GitHub repository or a code snippet). Currently, a lot of researchers share their weights on Google Drive or other URLs such as https://storage.googleapis.com, which makes it hard for people to find these (unless you already know the paper).

The goal is mainly to improve discoverability, e.g. https://huggingface.co/models?pipeline_tag=depth-estimation&sort=trending currently showcases models such as DPT, MiDaS, Marigold, and Depth Anything, but we'd like to make people also discover works like DoubleTake. This is done by adding metadata tags (such as the pipeline tag) to the model weight repos.

One can include a code snippet or link to a GitHub repository in the model card, which showcases how to appropriately run the model. One thing we also support is a "Use this model" button, where you can customize the code snippet displayed to run a certain model. We often include "First install from the Github repository (...)" in that code snippet, see here for an example PR. For SAM-2 for instance, we point to the GitHub repository to run inference, and we have a nice "Use this model" button which then links to it - see the top right of https://huggingface.co/facebook/sam2-hiera-large.

Happy to hear your thoughts!

Niels
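As a rough sketch of what adding such metadata could look like with the `huggingface_hub` library (the repo id and tag list below are placeholders, not the actual DoubleTake repo):

```python
from huggingface_hub import metadata_update

# Add/update model card metadata on an existing weights repo so the model
# shows up under the corresponding pipeline tag on the Hub.
metadata_update(
    "your-org/doubletake",  # placeholder repo id
    {
        "pipeline_tag": "depth-estimation",
        "tags": ["depth-estimation", "multi-view-stereo"],  # illustrative tags
    },
)
```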
Hi @mohammed-amr and team,
Thanks for this nice work! I wrote a quick PoC to showcase that you can easily have integration with the 🤗 hub so that you can:

- load the model using `from_pretrained` (and push it using `push_to_hub`)
- use `safetensors` for the weights instead of pickle.

Also, this greatly improves the discoverability of your model (as it's currently hosted on Google Storage, which is hard to find).
It leverages the `PyTorchModelHubMixin` class, which lets the model class inherit these methods.
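For illustration, a minimal sketch of the push side, assuming a placeholder model class and repo id (the real DoubleTake network and names live in this repo):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Placeholder network standing in for the real DoubleTake model.
class DepthModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.backbone = nn.Linear(3, hidden_dim)

    def forward(self, x):
        return self.backbone(x)

# push_to_hub uploads the weights as safetensors plus a config.json
# built from the JSON-serializable __init__ kwargs.
model = DepthModel(hidden_dim=64)
model.push_to_hub("your-org/doubletake")  # placeholder repo id
```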
Usage is as follows:
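(A sketch of the loading side, using the same placeholder names as above.)

```python
# Downloads config.json and the safetensors weights from the Hub and
# instantiates the model with the stored init kwargs.
model = DepthModel.from_pretrained("your-org/doubletake")  # placeholder repo id
```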
This means people don't need to manually download a checkpoint into their local environment first; it just loads automatically from the hub.
Would you be interested in this integration?
Kind regards,
Niels
Note: Please don't merge this PR before pushing the model to the hub :)