[tests] EAGLE3 uses same optimum-intel as others #3340
peterchen-intel wants to merge 2 commits into openvinotoolkit:master
Conversation
1. Update optimum-intel in the tests requirements.txt to a commit id that includes EAGLE3 draft model support (see the sketch below)
2. Remove the separately specified optimum-intel installation for the EAGLE3 tests
3. Move the VLM (MiniCPM-o-2_6) tests to the end of the list since they pin a specific transformers version
4. Fix EAGLE3 draft model conversion issues

Signed-off-by: Chen, Peter <peter.chen@intel.com>
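For reference, the pin in tests/python_tests/requirements.txt would take roughly the form below. This is a minimal sketch: <commit-with-eagle3-support> is a placeholder, not the actual commit hash referenced by this PR.

```
# Illustrative only: <commit-with-eagle3-support> stands in for the real
# optimum-intel commit hash that includes the merged EAGLE3 support.
optimum-intel @ git+https://github.com/huggingface/optimum-intel.git@<commit-with-eagle3-support>
```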
Pull request overview
This PR updates the test infrastructure to use the official optimum-intel repository for EAGLE3 model support, eliminating the need for a custom fork. The changes primarily affect test configuration and CI workflows to properly handle EAGLE3 draft model conversion using the newly merged upstream support.
Changes:
- Updated optimum-intel dependency to a specific commit that includes EAGLE3 support from merged PR #1588
- Removed custom optimum-intel installation steps from CI workflows for EAGLE3 tests
- Added an explicit trust_remote_code=True parameter when converting EAGLE3 draft models
- Reordered test execution to run EAGLE3 tests before the VLM tests that require specific transformers versions
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| tests/python_tests/requirements.txt | Updated optimum-intel to git commit hash with EAGLE3 support |
| tests/python_tests/utils/hugging_face.py | Changed EAGLE3 parameter from custom "eagle3" flag to standard "trust_remote_code" |
| tests/python_tests/test_continuous_batching.py | Added conditional logic to pass trust_remote_code for EAGLE3 models |
| tests/python_tests/samples/conftest.py | Removed deprecated --eagle3 flag from conversion arguments |
| .github/workflows/windows.yml | Removed custom optimum-intel installation and reordered tests |
| .github/workflows/manylinux_2_28.yml | Removed custom optimum-intel installation and reordered tests |
| .github/workflows/linux.yml | Removed custom optimum-intel installation and reordered tests |
| if "eagle3" in str(draft_model_id).lower(): | ||
| draft_model_path = download_and_convert_model(draft_model_id, trust_remote_code=True).models_path | ||
| else: | ||
| draft_model_path = download_and_convert_model(draft_model_id).models_path |
The pattern of checking for "eagle3" in the model ID and conditionally adding trust_remote_code=True is duplicated in three locations (lines 590-593, 700-703, and in test_eagle3_sd_string_inputs). Consider extracting this logic into a helper function that wraps download_and_convert_model to handle EAGLE3 models. This would reduce code duplication and make the logic easier to maintain if the EAGLE3 handling needs to change in the future.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
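A minimal sketch of such a helper, assuming download_and_convert_model keeps its current signature and that matching "eagle3" in the model id remains the detection rule; the name convert_draft_model is hypothetical:

```python
def convert_draft_model(draft_model_id):
    # Hypothetical helper: centralizes the EAGLE3 check so the three call sites
    # stop duplicating the conditional. EAGLE3 draft models need
    # trust_remote_code=True for conversion; other draft models do not.
    if "eagle3" in str(draft_model_id).lower():
        return download_and_convert_model(draft_model_id, trust_remote_code=True).models_path
    return download_and_convert_model(draft_model_id).models_path
```

Each call site would then reduce to draft_model_path = convert_draft_model(draft_model_id).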
| if "eagle3" in str(draft_model_id).lower(): | ||
| draft_model_path = download_and_convert_model(draft_model_id, trust_remote_code=True).models_path | ||
| else: | ||
| draft_model_path = download_and_convert_model(draft_model_id).models_path |
This is a duplicate of the conditional logic added at lines 590-593. Since hugging_face.py already automatically detects EAGLE3 models and sets trust_remote_code=True, this explicit parameter passing is redundant. Consider removing this conditional and relying on the automatic detection for better maintainability.
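If so, the detection inside hugging_face.py presumably looks something like the sketch below. This is an assumption about the existing helper's behavior based on the comment above, not code from the repository:

```python
def download_and_convert_model(model_id, **conversion_kwargs):
    # Assumed behavior: EAGLE3 models are recognized from the model id and
    # trust_remote_code is enabled automatically, so callers would not need
    # the explicit conditional shown in the diff above.
    if "eagle3" in str(model_id).lower():
        conversion_kwargs.setdefault("trust_remote_code", True)
    # ... download from the Hugging Face Hub and convert with optimum-intel,
    # forwarding conversion_kwargs to the export call.
```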
Description
Update optimum-intel for test cases only; this does not change GenAI source code.
Use the official optimum-intel repo to test the EAGLE3 pipeline, since huggingface/optimum-intel#1588, which adds EAGLE3 draft model conversion support, has already been merged.
Tickets: CVS-179182
EAGLE3 enabling PR #3055
Checklist: