Feature Description
I would like to propose adding support for Ascend NPU (Huawei's AI processor) to this sparse attention repository. This would enable the efficient execution of sparse attention mechanisms on Ascend hardware, which is widely used in China and increasingly in global markets.
Motivation & Background
- Growing Ascend Ecosystem: Ascend NPUs (e.g., Ascend 910) are becoming popular for AI workloads, especially in Chinese cloud services and research institutions.
- Performance Potential: Sparse attention patterns are a good fit for Ascend's architecture, which excels at:
- Matrix computations
- Memory-bandwidth optimized operations
- Large-scale parallel processing
- Community Demand: Many developers in China are interested in running efficient sparse attention models on Ascend hardware for both research and production.
Proposed Approach
I'm interested in contributing to this effort and would like to discuss:
- Implementation Strategy:
- Port existing CUDA kernels to Ascend's CANN (Compute Architecture for Neural Networks)
- Leverage Ascend's custom operator development framework
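To make the code-structure idea concrete, one possible pattern is a small per-backend kernel registry, so the existing CUDA path and a new CANN/NPU path sit behind one interface, with a CPU reference as the fallback while the port is in progress. This is only a sketch; all names here are hypothetical and not from this repo:

```python
# Hypothetical sketch: a device-keyed registry of sparse-attention kernels.
_KERNELS = {}

def register_kernel(device):
    """Register a sparse-attention implementation for one device backend."""
    def decorator(fn):
        _KERNELS[device] = fn
        return fn
    return decorator

def sparse_attention(inputs, device="cpu"):
    # Fall back to the CPU reference when no kernel exists for the device,
    # e.g. while the CANN port is still incomplete.
    impl = _KERNELS.get(device, _KERNELS["cpu"])
    return impl(inputs)

@register_kernel("cpu")
def _cpu_reference(inputs):
    return f"cpu-reference({inputs})"

@register_kernel("npu")
def _npu_kernel(inputs):
    # On real hardware this would dispatch into a CANN custom operator.
    return f"npu-kernel({inputs})"

print(sparse_attention("x", device="npu"))   # uses the registered NPU kernel
print(sparse_attention("x", device="cuda"))  # no kernel registered: CPU fallback
```

The point of the registry is that adding Ascend support becomes one new registration plus its kernel, without touching call sites.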
- Test Plan:
- Functional correctness tests comparing with CPU/CUDA implementations
- Performance benchmarks on Ascend 910
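As an illustration of the functional-correctness checks I have in mind, the sparse path can be compared against a dense reference on identical inputs; a full (all-True) mask must reproduce the dense result exactly. This NumPy sketch uses a masked stand-in for a real sparse kernel, and all function names are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerically stable softmax
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dense_attention(q, k, v):
    # Reference implementation: full O(n^2) attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def masked_sparse_attention(q, k, v, mask):
    # Stand-in for a sparse kernel: masked-out positions are excluded
    # from the softmax, just as a sparse kernel would skip them.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
n, d = 16, 8
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))

# With an all-True mask the sparse path must match the dense reference,
# mirroring the proposed CPU/CUDA-vs-Ascend comparison.
full_mask = np.ones((n, n), dtype=bool)
assert np.allclose(dense_attention(q, k, v),
                   masked_sparse_attention(q, k, v, full_mask), atol=1e-6)

# A causal mask exercises the actual sparse code path.
causal = np.tril(full_mask)
out = masked_sparse_attention(q, k, v, causal)
assert out.shape == (n, d)
```

The same harness carries over to hardware tests: run the Ascend kernel and the CPU/CUDA reference on identical inputs and compare with a tolerance appropriate to the device's precision.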
Related Resources
- Similar successful ports in other repos (e.g., Liger-Kernel)
Help Needed
I'm looking for:
- Feedback on whether this feature aligns with the project's roadmap
- Guidance on implementation approach and code structure
- Collaboration with maintainers who understand the existing codebase
Would the maintainers and community be interested in supporting Ascend NPUs? I'm excited to contribute and to work with you on this!