[model] Add bridge support for GLM-4.7-Flash #2972
Open
Labels
- area:model: Model implementations and HF bridge logic
- community-request
- feature: New capabilities, enhancements, or enablement work
- needs-author: Author action is required before review or merge can continue
Description
Hugging Face repository
https://huggingface.co/zai-org/GLM-4.7-Flash
Architecture family
MoE decoder-only (e.g. DeepSeek V2/V3, OLMoE, Qwen3-MoE, MiniMax-M2)
Required deliverables
- Model providers
- HF conversion bridge
- Unit tests (config and bridge)
- Model conversion functional tests
- Optimal pretraining recipe
- Optimal finetuning recipe
- Recipe unit tests
- Recipe functional tests
- End-to-end CI coverage
Owner if known
No response
Extra context
zai-org/GLM-4.7-Flash uses a new architecture class Glm4MoeLiteForCausalLM (model_type: glm4_moe_lite), which is not covered by the existing GLM45Bridge. Are there any plans to support this model? Thanks.
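Since `glm4_moe_lite` is a distinct `model_type` from the one GLM45Bridge handles, supporting it likely means registering a new bridge keyed on that string. A minimal sketch of such a model_type-to-bridge dispatch is below; all names here (`register_bridge`, `GLM45Bridge`, `Glm4MoeLiteBridge`) are hypothetical illustrations, not the repository's actual API.

```python
# Hypothetical sketch of a model_type -> bridge registry. None of these
# class or function names are confirmed to exist in the repository.

BRIDGES = {}

def register_bridge(model_type):
    """Associate an HF config `model_type` string with a bridge class."""
    def decorator(cls):
        BRIDGES[model_type] = cls
        return cls
    return decorator

@register_bridge("glm4_moe")
class GLM45Bridge:
    """Placeholder for the existing bridge covering glm4_moe checkpoints."""

@register_bridge("glm4_moe_lite")
class Glm4MoeLiteBridge:
    """Placeholder for a new bridge mapping Glm4MoeLiteForCausalLM weights."""

def resolve_bridge(model_type):
    """Look up the bridge for an HF config's model_type string."""
    try:
        return BRIDGES[model_type]
    except KeyError:
        raise ValueError(f"no bridge registered for model_type={model_type!r}")

# GLM-4.7-Flash reports model_type "glm4_moe_lite", so it would resolve
# to the new bridge rather than falling through to GLM45Bridge.
print(resolve_bridge("glm4_moe_lite").__name__)
```

Under this pattern, adding the model is a matter of implementing the weight-mapping logic inside the new bridge class; the dispatch itself stays untouched.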