[CI] [GHA] Use snapshot_download for HF models #3348
Open
akashchi wants to merge 5 commits into openvinotoolkit:master
Conversation
Contributor
Pull request overview
This PR updates the test suite to use snapshot_download from huggingface_hub instead of directly calling model loading functions with model IDs. This change helps reduce the number of API requests to HuggingFace servers by downloading all model files at once and caching them locally before loading models and tokenizers.
Changes:
- Replaced direct model ID usage with `snapshot_download()` calls to pre-cache models and reduce HF API rate-limit issues
- Added `huggingface_hub.snapshot_download` imports across multiple test files
- Applied the pattern consistently across test files for Whisper, VLM, tokenizer, and parser tests
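As a rough sketch of the pattern these changes apply (the helper name `load_tokenizer_cached` is illustrative, not a function from this PR): `snapshot_download` fetches the whole model repository in one pass and returns the local cache path, and the loader then reads from that path instead of hitting the HF API again.

```python
# Illustrative sketch, not code from this PR: snapshot_download pulls (or
# reuses) every file in the repo once and returns the local cache directory;
# loading from that path avoids further HF API requests.
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer


def load_tokenizer_cached(model_id: str):
    # Single download (or cache hit) of the full model repository.
    local_path = snapshot_download(model_id)
    # from_pretrained now resolves entirely against the local snapshot.
    return AutoTokenizer.from_pretrained(local_path)
```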
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| tests/python_tests/utils/hugging_face.py | Updated get_huggingface_models() and GGUF loading functions to use snapshot_download for caching models before loading |
| tests/python_tests/test_whisper_pipeline_static.py | Added snapshot_download call in load_and_save_whisper_model() to cache model before loading processor and tokenizer |
| tests/python_tests/test_whisper_pipeline.py | Added snapshot_download call in save_to_temp() to cache model before loading tokenizer, model, and processor |
| tests/python_tests/test_vlm_pipeline.py | Added snapshot_download calls in multiple functions to cache VLM models before loading processors and tokenizers |
| tests/python_tests/test_vllm_parsers_wrapper.py | Added inline snapshot_download calls when creating parsers with tokenizers |
| tests/python_tests/test_tokenizer.py | Added snapshot_download calls in multiple test functions to cache models before loading tokenizers |
| tests/python_tests/test_text_streamer.py | Added inline snapshot_download calls when loading tokenizers for text streaming tests |
| tests/python_tests/test_parsers.py | Added snapshot_download call in fixture to cache model before loading tokenizer |
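The per-file changes above all follow the same shape. Sketched for the Whisper case (the function name and return shape are illustrative assumptions, not the PR's actual helpers), a single snapshot serves every subsequent component loader:

```python
# Hypothetical sketch of the Whisper-style change: one snapshot_download,
# then each component (processor, tokenizer) loads from the same local
# directory, so only the first call touches the HF API.
from huggingface_hub import snapshot_download
from transformers import AutoProcessor, AutoTokenizer


def load_whisper_components(model_id: str):
    local_path = snapshot_download(model_id)  # one fetch for all repo files
    processor = AutoProcessor.from_pretrained(local_path)
    tokenizer = AutoTokenizer.from_pretrained(local_path)
    return processor, tokenizer
```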
This should lower the number of API requests to the HF servers.
The same approach was introduced and tested in openvinotoolkit/openvino/pull/32282 and openvinotoolkit/openvino/pull/32458.
Ticket: