numaaware: add GPU NUMA topology awareness to scheduler #5095
pmady wants to merge 9 commits into volcano-sh:master
Conversation
Add GPUInfo, GPUDetails types and helper methods (NUMANodes, GPUsInNUMANodes) to support GPU-to-NUMA node mapping in the scheduler API. Extend NumatopoInfo struct with GPUDetail field and update DeepCopy to include GPU topology data. This enables the numaaware plugin to consider GPU PCIe affinity when making scheduling decisions for GPU workloads. Signed-off-by: pmady <pavan4devops@gmail.com>
Add gpumanager package that implements the policy.HintProvider interface for nvidia.com/gpu resources. The provider:
- Generates topology hints based on GPU-to-NUMA node affinity
- Prefers allocations from the minimum number of NUMA nodes
- Allocates GPUs from preferred NUMA nodes first, then spills over
- Follows the same pattern as the existing cpumanager provider

Signed-off-by: pmady <pavan4devops@gmail.com>
Register the gpumanager HintProvider alongside the existing cpumanager provider so that GPU NUMA topology is considered during predicate and allocation phases. Update getNodeNumaNumForTask to include GPU NUMA node count in the scoring function. Nodes where both CPU and GPU assignments span fewer NUMA nodes receive higher scores, improving locality for mixed CPU+GPU workloads. Add getNumaNodeCntForGPUID helper to count NUMA nodes used by GPU assignments. Signed-off-by: pmady <pavan4devops@gmail.com>
Test coverage includes:
- 4 GPUs from a single NUMA node (both NUMA 0 and NUMA 1)
- 4 GPUs when only one NUMA node has enough available
- 5 GPUs requiring cross-NUMA allocation
- 8 GPUs using all available across both NUMA nodes
- 9 GPUs requested (impossible, no hints returned)
- Container with no GPU request (nil hints)
- Allocation from the preferred NUMA node
- Allocation with partial NUMA availability

All tests follow the existing table-driven test pattern from cpumanager.

Signed-off-by: pmady <pavan4devops@gmail.com>
Welcome @pmady! It looks like this is your first PR to volcano-sh/volcano 🎉
Activity
Code Review
This pull request introduces GPU NUMA topology awareness to the numaaware scheduler plugin, a valuable feature for performance-sensitive GPU workloads. While the changes are well-structured and follow existing patterns, two medium-severity security issues were identified: a potential Denial of Service (DoS) due to exponential complexity in bitmask iteration with many NUMA nodes, and a potential integer overflow when handling large GPU resource requests. These require addressing with appropriate input validation and consistent data types. Additionally, a significant issue in the scoring logic could lead to suboptimal scheduling decisions, and there are a few minor suggestions for improvement.
```diff
 	cpuNumaCnt := getNumaNodeCntForCPUID(assignCpus, node.NumaSchedulerInfo.CPUDetail)

+	// Include GPU NUMA node count in scoring if GPU assignments exist.
+	gpuNumaCnt := 0
+	assignGPUs := resAssignMap[node.Name][string(gpumanager.NvidiaGPUResource)]
+	if assignGPUs.Size() > 0 && node.NumaSchedulerInfo.GPUDetail != nil {
+		gpuNumaCnt = getNumaNodeCntForGPUID(assignGPUs, node.NumaSchedulerInfo.GPUDetail)
+	}
+
+	// Total NUMA node count: prefer nodes where both CPU and GPU
+	// assignments span fewer NUMA nodes.
+	totalNumaCnt := cpuNumaCnt + gpuNumaCnt
 	nodeNumaCnts[index] = api.ScoredNode{
 		NodeName: node.Name,
-		Score:    int64(getNumaNodeCntForCPUID(assignCpus, node.NumaSchedulerInfo.CPUDetail)),
+		Score:    int64(totalNumaCnt),
 	}
```
The current scoring logic totalNumaCnt := cpuNumaCnt + gpuNumaCnt incorrectly calculates the total number of NUMA nodes used. It sums the counts of NUMA nodes for CPUs and GPUs separately, which doesn't correctly represent the total span of NUMA nodes when resources share a NUMA node. For example, if both CPUs and GPUs are on NUMA node 0, the current logic would score it as 1+1=2, which is the same score as when CPUs are on NUMA 0 and GPUs are on NUMA 1. The score should be the count of the union of NUMA nodes used by both CPUs and GPUs.
To fix this, you should calculate the union of NUMA node bitmasks for CPUs and GPUs first, and then count the bits in the resulting mask for the score. This will require changing getNumaNodeCntForGPUID to return a bitmask.BitMask, and also modifying getNumaNodeCntForCPUID (or creating a new helper) to return a bitmask.
Suggested change:

```diff
-	cpuNumaCnt := getNumaNodeCntForCPUID(assignCpus, node.NumaSchedulerInfo.CPUDetail)
-	// Include GPU NUMA node count in scoring if GPU assignments exist.
-	gpuNumaCnt := 0
-	assignGPUs := resAssignMap[node.Name][string(gpumanager.NvidiaGPUResource)]
-	if assignGPUs.Size() > 0 && node.NumaSchedulerInfo.GPUDetail != nil {
-		gpuNumaCnt = getNumaNodeCntForGPUID(assignGPUs, node.NumaSchedulerInfo.GPUDetail)
-	}
-	// Total NUMA node count: prefer nodes where both CPU and GPU
-	// assignments span fewer NUMA nodes.
-	totalNumaCnt := cpuNumaCnt + gpuNumaCnt
-	nodeNumaCnts[index] = api.ScoredNode{
-		NodeName: node.Name,
-		Score:    int64(totalNumaCnt),
-	}
+	cpuNumaMask := getNumaMaskForCPUID(assignCpus, node.NumaSchedulerInfo.CPUDetail)
+	// Include GPU NUMA nodes in scoring if GPU assignments exist.
+	gpuNumaMask, _ := bitmask.NewBitMask()
+	assignGPUs := resAssignMap[node.Name][string(gpumanager.NvidiaGPUResource)]
+	if assignGPUs.Size() > 0 && node.NumaSchedulerInfo.GPUDetail != nil {
+		gpuNumaMask = getNumaMaskForGPUID(assignGPUs, node.NumaSchedulerInfo.GPUDetail)
+	}
+	// Total NUMA node count: prefer nodes where both CPU and GPU
+	// assignments span the minimum number of unique NUMA nodes.
+	totalNumaMask := cpuNumaMask.Or(gpuNumaMask)
+	nodeNumaCnts[index] = api.ScoredNode{
+		NodeName: node.Name,
+		Score:    int64(totalNumaMask.Count()),
+	}
```
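The union-based scoring the review recommends can be sketched standalone. This is a minimal illustration, not the plugin's actual code: plain `uint64` masks stand in for `bitmask.BitMask`, and `unionSpan` is a hypothetical helper name.

```go
package main

import "fmt"

// unionSpan counts the distinct NUMA nodes in the union of two assignment
// masks; bit i set means NUMA node i is used. A simplified stand-in for the
// bitmask union that the review recommends for scoring.
func unionSpan(cpuMask, gpuMask uint64) int {
	union := cpuMask | gpuMask
	count := 0
	for union != 0 {
		count += int(union & 1)
		union >>= 1
	}
	return count
}

func main() {
	// CPUs and GPUs both on NUMA 0: the union spans 1 node, not 1+1=2.
	fmt.Println(unionSpan(0b01, 0b01)) // 1
	// CPUs on NUMA 0, GPUs on NUMA 1: the union spans 2 nodes.
	fmt.Println(unionSpan(0b01, 0b10)) // 2
}
```

With the summed counts both placements scored 2; with the union, the co-located placement correctly scores lower (better).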
```go
func getNumaNodeCntForGPUID(gpus cpuset.CPUSet, gpuDetails api.GPUDetails) int {
	mask, _ := bitmask.NewBitMask()
	for _, gpuIdx := range gpus.List() {
		if info, ok := gpuDetails[gpuIdx]; ok {
			mask.Add(info.NUMANodeID)
		}
	}
	return mask.Count()
}
```
To implement the corrected scoring logic as suggested in the other comment, this function should be renamed to getNumaMaskForGPUID and return a bitmask.BitMask instead of an int.
Suggested change:

```diff
-func getNumaNodeCntForGPUID(gpus cpuset.CPUSet, gpuDetails api.GPUDetails) int {
+func getNumaMaskForGPUID(gpus cpuset.CPUSet, gpuDetails api.GPUDetails) bitmask.BitMask {
 	mask, _ := bitmask.NewBitMask()
 	for _, gpuIdx := range gpus.List() {
 		if info, ok := gpuDetails[gpuIdx]; ok {
 			mask.Add(info.NUMANodeID)
 		}
 	}
-	return mask.Count()
+	return mask
 }
```
```go
	minAffinitySize := gpuDetail.NUMANodes().Size()
	hints := []policy.TopologyHint{}

	bitmask.IterateBitMasks(gpuDetail.NUMANodes().List(), func(mask bitmask.BitMask) {
```
The generateGPUTopologyHints function calls bitmask.IterateBitMasks with a list of NUMA node IDs. This function iterates over all possible non-empty combinations of the provided NUMA nodes, which has a complexity of O(2^N) in the NUMA node count N. On nodes that report many NUMA domains this becomes a denial-of-service vector, so the node count should be validated (capped) before iterating.
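One possible guard, sketched with a plain `uint64` subset loop instead of the k8s bitmask helpers. The cap value and all names here are illustrative (the PR later adopts a cap of 16), not the plugin's actual code:

```go
package main

import "fmt"

// maxNUMANodes is an illustrative cap; without one, iterating every
// non-empty subset of N NUMA nodes costs 2^N - 1 callbacks, which is a
// denial-of-service vector for large N.
const maxNUMANodes = 8

// iterateSubsets invokes fn for every non-empty subset of numaNodes,
// refusing to run when the node count exceeds the cap.
func iterateSubsets(numaNodes []int, fn func(mask uint64)) error {
	if len(numaNodes) > maxNUMANodes {
		return fmt.Errorf("too many NUMA nodes: %d > %d", len(numaNodes), maxNUMANodes)
	}
	for mask := uint64(1); mask < uint64(1)<<len(numaNodes); mask++ {
		fn(mask)
	}
	return nil
}

func main() {
	count := 0
	if err := iterateSubsets([]int{0, 1, 2}, func(uint64) { count++ }); err != nil {
		panic(err)
	}
	fmt.Println(count) // 2^3 - 1 = 7 non-empty subsets
}
```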
```go
	if !ok {
		return 0
	}
	return int(gpuQuantity.Value())
```
The requestedGPUs function converts a GPU resource request value from int64 to int. On 32-bit systems, or with extremely large request values, this can result in an integer overflow. A negative value resulting from such an overflow would cause incorrect behavior in the hint generation and allocation logic, potentially leading to suboptimal or incorrect scheduling decisions.
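A hedged sketch of one way to validate the conversion. The bound (256) and the function name are illustrative assumptions, not Volcano's actual limits or API:

```go
package main

import (
	"fmt"
	"math"
)

// maxReasonableGPUs is an illustrative sanity bound, not Volcano's limit.
const maxReasonableGPUs = 256

// safeGPUCount validates a resource quantity (int64) before converting to
// int, rejecting negative values, absurdly large requests, and anything
// that would overflow int on 32-bit platforms.
func safeGPUCount(v int64) (int, error) {
	if v < 0 || v > maxReasonableGPUs || v > int64(math.MaxInt32) {
		return 0, fmt.Errorf("invalid GPU request: %d", v)
	}
	return int(v), nil
}

func main() {
	n, err := safeGPUCount(8)
	fmt.Println(n, err) // 8 <nil>
	_, err = safeGPUCount(int64(1) << 40)
	fmt.Println(err != nil) // true: rejected instead of silently overflowing
}
```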
```go
	// Count how many available GPUs fall within this NUMA node combination.
	numMatching := 0
	for _, gpuIdx := range availableGPUs.List() {
		if gpuInfo, ok := gpuDetail[gpuIdx]; ok {
			if mask.IsSet(gpuInfo.NUMANodeID) {
				numMatching++
			}
		}
	}
```
The calculation of numMatching is performed inside the IterateBitMasks loop, which iterates over all available GPUs for each NUMA node combination. This can be inefficient. You can optimize this by pre-calculating the number of available GPUs per NUMA node before the loop.
For example:

```go
// Before IterateBitMasks
availablePerNuma := make(map[int]int)
for _, gpuIdx := range availableGPUs.List() {
	if gpuInfo, ok := gpuDetail[gpuIdx]; ok {
		availablePerNuma[gpuInfo.NUMANodeID]++
	}
}
bitmask.IterateBitMasks(..., func(mask bitmask.BitMask) {
	// ...
	// Replace the inner loop with this:
	numMatching := 0
	for _, numaID := range mask.GetBits() {
		numMatching += availablePerNuma[numaID]
	}
	// ...
})
```
Pull request overview
Adds GPU NUMA-topology awareness to the numaaware scheduler plugin so GPU workloads are preferentially placed on nodes where GPU allocations span fewer NUMA nodes (reducing cross-NUMA PCIe/NVLink traffic).
Changes:
- Extends the scheduler NUMA topology API (`NumatopoInfo`) with GPU topology (`GPUDetail`) and helper methods.
- Introduces a new `gpumanager` hint provider to generate/allocate GPU topology hints for `nvidia.com/gpu`.
- Updates `numaaware` scoring to account for GPU NUMA node span in addition to CPU.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| pkg/scheduler/api/numa_info.go | Adds GPU topology types/helpers and deep-copy support via GPUDetail. |
| pkg/scheduler/plugins/numaaware/provider/gpumanager/gpu_mng.go | Implements GPU topology hint generation and allocation for nvidia.com/gpu. |
| pkg/scheduler/plugins/numaaware/provider/gpumanager/gpu_mng_test.go | Adds unit tests for GPU hint generation and allocation behavior. |
| pkg/scheduler/plugins/numaaware/numaaware.go | Registers GPU provider and includes GPU NUMA span in node scoring. |
```go
func generateGPUTopologyHints(availableGPUs cpuset.CPUSet, gpuDetail api.GPUDetails, request int) []policy.TopologyHint {
	minAffinitySize := gpuDetail.NUMANodes().Size()
	hints := []policy.TopologyHint{}

	bitmask.IterateBitMasks(gpuDetail.NUMANodes().List(), func(mask bitmask.BitMask) {
		// Count the total GPUs in the NUMA nodes covered by this mask.
		gpusInMask := gpuDetail.GPUsInNUMANodes(mask.GetBits()...).Size()
		if gpusInMask >= request && mask.Count() < minAffinitySize {
			minAffinitySize = mask.Count()
		}
```
In generateGPUTopologyHints, minAffinitySize is updated based on the total GPUs in a NUMA-mask (gpuDetail.GPUsInNUMANodes), not on the available GPUs. If each single NUMA node has enough total GPUs but not enough available GPUs to satisfy the request, all feasible hints will have mask.Count() > minAffinitySize and will end up with Preferred=false (i.e., no preferred hint), which can degrade/incorrectly influence policy merging. Compute minAffinitySize from the masks that actually satisfy numMatching >= request (or compute min over the constructed hints) and add a unit test for the ‘must span NUMA nodes due to availability’ case.
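A standalone sketch of the suggested fix, using a deliberately simplified model: a map of available-GPU counts per NUMA node and plain integer masks. All names are illustrative, not the plugin's actual API:

```go
package main

import "fmt"

// minAffinityFromAvailable returns the smallest number of NUMA nodes whose
// *available* GPUs can satisfy the request, iterating all non-empty subsets
// of numaIDs. Only masks that actually satisfy the request may shrink the
// minimum, which is the review's point.
func minAffinityFromAvailable(availablePerNuma map[int]int, numaIDs []int, request int) int {
	best := len(numaIDs) + 1 // sentinel: larger than any real span
	for mask := 1; mask < 1<<len(numaIDs); mask++ {
		matching, bits := 0, 0
		for i, id := range numaIDs {
			if mask&(1<<i) != 0 {
				matching += availablePerNuma[id]
				bits++
			}
		}
		if matching >= request && bits < best {
			best = bits
		}
	}
	return best
}

func main() {
	// Fragmented case from the review: each node has only 1 GPU available.
	avail := map[int]int{0: 1, 1: 1}
	fmt.Println(minAffinityFromAvailable(avail, []int{0, 1}, 2)) // 2: must span both
}
```

Because the single-node masks cannot satisfy the request from available GPUs, they never drag the minimum down, so the cross-NUMA hint stays preferred.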
The previous scoring logic summed CPU and GPU NUMA node counts separately (cpuNumaCnt + gpuNumaCnt), which incorrectly double-counted shared NUMA nodes. For example, if CPUs and GPUs were both on NUMA 0, the score was 1+1=2, the same as if they were on different NUMA nodes. Fix by computing the union of CPU and GPU NUMA bitmasks before counting; co-located CPU+GPU on the same NUMA node now correctly scores 1.

Rename helper functions to return bitmask.BitMask instead of int:
- getNumaNodeCntForCPUID -> getNumaMaskForCPUID
- getNumaNodeCntForGPUID -> getNumaMaskForGPUID

Signed-off-by: pmady <pavan4devops@gmail.com>
Thanks for your contribution! Will take a look 👍, also /cc @zjj2wry
Remove redundant nil check before len() on GPUDetail map. len() for nil maps is defined as zero in Go. Signed-off-by: pmady <pavan4devops@gmail.com>
Hello 👋,
Thank you for the contribution!
This is quite a wasp nest you put your hand in. 😄 I don't think that an implementation without any runtime plumbing makes sense here currently (without proper NumaTopology CRD extension, population from exporter, scheduler cache translation...). Without this, the new provider cannot be effective in real clusters.
Also, maybe can we avoid hardcoding "nvidia.com/gpu" in the provider design and make it resource-name driven, with general GPUs so other accelerators (e.g. Ascend) can be supported with the same framework? NVIDIA-first implementation is fine, but the abstraction should remain generic.
But first things first. Probably we should update, refactor and implement this feature in resource-exporter at least for nvidia GPUs, and maybe in the scheduler. After that we can revisit this PR.
Hi @hajnalmt, thank you for the thorough review. You are absolutely right, and this is exactly the feedback I needed.

**Acknowledging the gap**

I agree that the scheduler-side logic alone does not make sense without the full runtime plumbing. I should have started from the data-source side. I have now implemented the missing pieces.

**PRs created**

**On the vendor-agnostic question**

Good point about not hard-coding `nvidia.com/gpu`. Which direction do you prefer? I am leaning toward option 2 (concrete first, abstract later) but happy to go either way. I will keep this PR open and rebase it once the apis and resource-exporter PRs are merged. Please let me know if the approach looks good!
@pmady: GitHub didn't allow me to request PR reviews from the following users: zjj2wry. Note that only volcano-sh members and repo collaborators can review this PR, and authors cannot review their own PRs.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks for your PR, I have some questions. Also, since the scheduler does not have direct control over the underlying CPU and GPU allocation, how is this aspect expected to be managed?
I think for CPU NUMA it's reasonable to add these judgements, but what about those pods only requesting GPUs? (See `pkg/scheduler/plugins/numaaware/numaaware.go`, lines 119 to 129 at 2946de8.)
- Add GPUInfo type and GPUDetail field to the NumatopoSpec API
- Populate GPUDetail in the scheduler cache (getNumaInfo)
- Add a NUMA node count cap (maxNUMANodes=16) to prevent DoS in IterateBitMasks
- Fix minAffinitySize to use available GPUs, not total
- Optimize numMatching with pre-computed per-NUMA counts
- Update deepcopy for GPUInfo and GPUDetail

Signed-off-by: pmady <pavan4devops@gmail.com>
pmady force-pushed the branch from 2f71592 to a40d1cc (compare)
[APPROVALNOTIFIER] This PR is NOT APPROVED. Needs approval from an approver in each of the affected files; approvers can indicate their approval by writing `/approve` in a comment. The full list of commands accepted by this bot can be found here.
```go
	// Build a unified NUMA mask across CPU and GPU assignments.
	// Using the union ensures that shared NUMA nodes are counted once,
	// so co-located CPU+GPU on the same NUMA node scores better.
	assignGPUs := resAssignMap[node.Name][string(gpumanager.NvidiaGPUResource)]
```
So currently we only support GPU NUMA alignment; could you also clarify this with a note in https://github.com/volcano-sh/volcano/blob/master/docs/user-guide/how_to_use_numa_aware.md?
Good point. The Guaranteed QoS check at line 121 is pre-existing code from the CPU NUMA implementation, this PR does not modify it. It is there because kubelet's Topology Manager only applies topology hints for Guaranteed pods, so filtering non-Guaranteed pods at the scheduler level keeps them consistent.
For GPU workloads in practice, most training/inference pods set explicit CPU and memory requests alongside GPU requests (our user guide examples all do this), so they end up as Guaranteed QoS and go through the NUMA predicate path.
That said, if there is a use case for GPU-only pods without CPU/memory requests getting topology-aware scheduling, we could add an alternative check (e.g., skip the QoS gate when the pod requests GPU resources). Happy to add that in a follow-up if you think it is needed, or handle it in this PR. Let me know your preference.
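The alternative gate described above could look like the following sketch. The function name and signature are hypothetical, not Volcano's actual code; it only illustrates the decision logic:

```go
package main

import "fmt"

// shouldApplyNUMAPredicate sketches the suggested alternative gate: keep
// the Guaranteed-QoS check (mirroring kubelet's Topology Manager), but also
// admit pods that request GPUs even when they are not Guaranteed.
func shouldApplyNUMAPredicate(qosGuaranteed bool, gpuRequest int64) bool {
	if qosGuaranteed {
		return true
	}
	// Hypothetical extension: GPU-only pods still get topology-aware placement.
	return gpuRequest > 0
}

func main() {
	fmt.Println(shouldApplyNUMAPredicate(true, 0))  // true: Guaranteed pod
	fmt.Println(shouldApplyNUMAPredicate(false, 4)) // true: GPU-only pod
	fmt.Println(shouldApplyNUMAPredicate(false, 0)) // false: skip the predicate
}
```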
```go
	// Mark hints with the minimal number of NUMA nodes as preferred.
	for i := range hints {
		if hints[i].NUMANodeAffinity.Count() == minAffinitySize {
			hints[i].Preferred = true
```
Consider a fragmented scenario on a 2-NUMA-node machine:

- NUMA 0 has 2 physical GPUs, 1 available.
- NUMA 1 has 1 physical GPU, 1 available.

Then a Pod requests 2 GPUs:

- Mask {0} sets minAffinitySize to 1 (because physical >= 2), but yields no hint.
- Mask {1} yields no hint.
- Mask {0, 1} yields a valid hint with Count() == 2 (since 1+1 available >= 2).

The only valid hint, {0, 1}, will have Preferred = false because minAffinitySize was incorrectly dragged down to 1 by mask {0}. If the user's topology policy is restricted (which requires a Preferred == true hint to admit the pod), this Pod will be rejected by this node, even though the node has 2 available GPUs and crossing 2 NUMA nodes is actually the best possible allocation at this moment.
Good catch on the fragmented scenario. The current code already handles this correctly: minAffinitySize is only updated after the if numMatching < request { return } check, so masks without enough available GPUs are skipped before they can influence the minimum.
Tracing your example (NUMA 0: 2 physical/1 available, NUMA 1: 1 physical/1 available, request=2):
- Mask {0}: numMatching=1 (available) < 2 → skipped, does not update minAffinitySize
- Mask {1}: numMatching=1 < 2 → skipped
- Mask {0,1}: numMatching=2 >= 2 → passes, sets minAffinitySize=2
So the hint {0,1} correctly gets Preferred=true.
I added a clarifying comment in the latest push to make this ordering explicit, so future readers do not accidentally move the minAffinitySize update above the numMatching check.
@pmady I think the overall solution is great. But we're currently working on improving Volcano's usability and need to ensure each feature has a user guide. Could you also please update the NUMA-aware user guide for GPU usage? Thanks 😄 https://github.com/volcano-sh/volcano/blob/master/docs/user-guide/how_to_use_numa_aware.md
@kingeasternsun Great question on latency. GPU topology data follows the exact same CRD watch path as CPU topology: the resource-exporter writes to the Numatopology CR, and the scheduler cache picks it up through the same watch, so there is no extra round trip. During hint generation, the bitmask iteration is bounded by the NUMA node count (now capped at 16), so the per-pod cost stays small. In summary: no measurable scheduling latency impact for typical GPU node configurations (2–8 NUMA nodes, 4–16 GPUs).
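The hint generator enumerates 2^n - 1 non-empty NUMA-node combinations for n NUMA nodes, which is why the 2–8 node range quoted above stays cheap. A toy check of the arithmetic (the helper name is illustrative):

```go
package main

import "fmt"

// subsets returns the number of non-empty NUMA-node combinations the hint
// generator enumerates for n NUMA nodes: 2^n - 1.
func subsets(n int) int { return 1<<n - 1 }

func main() {
	// 2 NUMA nodes -> 3 masks; 8 NUMA nodes -> 255 masks.
	fmt.Println(subsets(2), subsets(8)) // 3 255
}
```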
Add comprehensive GPU topology-aware scheduling documentation, including:
- Architecture overview with data flow diagram
- Prerequisites for GPU NUMA scheduling
- Verification steps for GPU topology data in the Numatopology CRD
- Practical YAML examples for Volcano Jobs, the single-numa-node policy, and a Kubeflow PyTorchJob with GPU NUMA affinity
- Scoring explanation for mixed CPU+GPU workloads
- Troubleshooting table for common GPU scheduling issues
- Design doc update: move GPU from Non-Goals to Goals (added in v1.11)

Signed-off-by: pmady <pavan4devops@gmail.com>
All review feedback has been addressed in the latest push. Summary of changes:

**API & plumbing:**

**GPU manager fixes:**

**Scoring:**

**Documentation:**

**Resource-exporter (companion PR volcano-sh/resource-exporter#12):**

All tests pass locally. Ready for re-review.
docs/design/numa-aware.md (outdated)

```diff
 ## Non-Goals
-- Support other resources topology schedule, such as GPU.
+- Support GPU resource topology scheduling (added in v1.11)
```
🤯 Why was "added in v1.11" written here? The version number is also wrong, and the user guide shouldn't have this kind of thing.
Fixed; removed the version reference in the latest push.
```markdown
## GPU NUMA-Aware Scheduling

Starting from Volcano v1.11, the numa-aware plugin supports **GPU topology-aware scheduling**. This enables Volcano to consider GPU-to-NUMA node affinity when placing GPU workloads, reducing cross-NUMA memory traffic and improving performance for AI/ML training and inference.
```
We don't need to add "Starting from Volcano v1.11 ...", and the version is wrong; in the user guide we just need to teach users how to use the feature.
Fixed; removed the version reference and kept just the usage instructions.
```markdown
2. When both CPUs and GPUs are requested, the scheduler favors nodes where they share the **same NUMA node**, minimizing cross-NUMA data transfers.
3. Nodes that cannot satisfy the GPU request within the topology policy are **filtered out** during the predicate phase.
```
Using a real drawing would be better; this one is slightly misaligned and looks AI-generated, which will likely raise questions from other communities :)
Removed the ASCII diagram; replaced it with a text description of the data flow.
@pmady Please check the code-verify CI; you should also update the CRD YAMLs.
We truly appreciate your contribution. Would you be willing to present and walk us through your approach at the community weekly meeting?
Thanks @pmady for your reply.
Thanks @kingeasternsun! I appreciate the invitation. I'd be happy to walk through the design and implementation approach. Could you share the meeting schedule (day/time and timezone)? I'm in US Central Time (UTC-5), so depending on the slot I may need to join asynchronously. If the timing doesn't work out, I could also prepare a written technical walkthrough covering the architecture, data flow, and design decisions, and post it as a discussion or document for the community to review. Happy to go with whatever format works best for the team.
@kingeasternsun Great point, you're right that the current device plugin interface doesn't guarantee the kubelet will honor the scheduler's NUMA preference. This is actually the same fundamental gap that exists for CPU NUMA-aware scheduling today: the scheduler picks NUMA-aligned CPUs, but kubelet's CPU manager does the actual core assignment independently. For GPUs specifically, there are a few paths forward:

This PR focuses on path (1): ensuring the scheduler picks nodes where NUMA alignment is possible, and relying on the Topology Manager to enforce it at the kubelet level. The value is in node selection: without this, the scheduler might place a GPU pod on a node where cross-NUMA allocation is unavoidable, even when a better node exists. I'll add a note about this limitation and the Topology Manager dependency in the user guide. Happy to discuss further at the community meeting as well!
- Remove fake v1.11 version references from user guide and design doc
- Remove ASCII diagram flagged as AI-generated
- Fix allocatable format to use CPUSet (e.g. '0-7') per review
- Remove fabricated resource-exporter version, link to actual PR
- Add Limitations section about the device plugin alignment gap
- Clarify minAffinitySize comment ordering in gpu_mng.go
- Regenerate CRD YAMLs with gpuDetail field (controller-gen v0.20.0)
- Regenerate installer development YAMLs (TAG=latest)
- Update staging codegen (applyconfiguration for GPUInfo)

Signed-off-by: pmady <pavan4devops@gmail.com>
@JesseStutler Done, all review comments addressed in the latest push (627c0c8):

Code verify CI should pass now.
Automated verification for Volcano PRs:
- volcano-sh/volcano#5095 (scheduler GPU NUMA awareness)
- volcano-sh/resource-exporter#12 (GPU NUMA topology discovery)
- volcano-sh/apis#229 (GPUInfo API types)

Two modes:
- test-existing-cluster.sh: for existing K8s GPU clusters
- gpu-numa-test.sh: creates a GCP GPU VM from scratch

Signed-off-by: Pavan Madduri <pavan4devops@gmail.com>
**E2E Test Scripts for GPU NUMA-Aware Scheduling**

I've created automated test scripts to make it easy for anyone with a GPU cluster to verify this feature end-to-end.

Repo: https://github.com/pmady/gpu-numa-test

**Quick start (existing GPU cluster)**

```shell
git clone https://github.com/pmady/gpu-numa-test.git
cd gpu-numa-test
./test-existing-cluster.sh
```

**What the script does**

**Prerequisites**

**Options**

```shell
./test-existing-cluster.sh                # Full: build + deploy + test
./test-existing-cluster.sh --skip-build   # Skip image build
./test-existing-cluster.sh --skip-deploy  # Only run tests
./test-existing-cluster.sh --cleanup      # Remove all test resources
```

Also includes
Thanks, I got it!
Hi @pmady, this is the Volcano Community Meeting Notes / Agenda.
**What type of PR is this?**

/kind feature

**What this PR does / why we need it**

This PR extends the `numaaware` scheduler plugin to support GPU NUMA topology awareness. Currently, the plugin only considers CPU NUMA topology when scheduling workloads. For GPU workloads (especially multi-GPU training and LLM inference), cross-NUMA GPU placement causes measurable performance degradation due to NVLink and PCIe traffic crossing NUMA boundaries.

**Changes**

1. **API: add GPU topology types** (`pkg/scheduler/api/numa_info.go`)
   - `GPUInfo` and `GPUDetails` types for GPU-to-NUMA node mapping
   - `NUMANodes()` and `GPUsInNUMANodes()` helper methods on `GPUDetails`
   - `GPUDetail` field added to the `NumatopoInfo` struct
   - `DeepCopy` updated to include GPU topology data

2. **GPU HintProvider** (`pkg/scheduler/plugins/numaaware/provider/gpumanager/`)
   - Implements the `policy.HintProvider` interface for `nvidia.com/gpu` resources
   - Follows the same pattern as the existing `cpumanager` provider

3. **Plugin registration and scoring** (`pkg/scheduler/plugins/numaaware/numaaware.go`)
   - Registers `gpumanager.NewProvider()` alongside the existing `cpumanager.NewProvider()`
   - Updates `getNodeNumaNumForTask` scoring to include the GPU NUMA node count
   - Adds the `getNumaNodeCntForGPUID` helper function

**Example scenario (from #4998)**
Node A: 8 GPUs, GPUs 0-3 on NUMA 0, GPUs 4-7 on NUMA 1, available: GPUs 0-2 on NUMA 0 + GPU 7 on NUMA 1
Node B: 8 GPUs, GPUs 0-3 on NUMA 0, GPUs 4-7 on NUMA 1, available: GPUs 0-3 all on NUMA 0
For a task requesting 4 GPUs, Node B is preferred: all four available GPUs sit on a single NUMA node, whereas any 4-GPU allocation on Node A must span both NUMA nodes.
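Under the scenario's GPU-to-NUMA mapping (GPUs 0–3 on NUMA 0, GPUs 4–7 on NUMA 1), a toy calculation shows why Node B should win. The helper name and hard-coded mapping are illustrative, not the plugin's code:

```go
package main

import "fmt"

// allocSpan counts distinct NUMA nodes among the chosen GPU indices, under
// the scenario's mapping: GPUs 0-3 on NUMA 0, GPUs 4-7 on NUMA 1.
func allocSpan(gpus []int) int {
	nodes := map[int]bool{}
	for _, g := range gpus {
		numa := 0
		if g >= 4 {
			numa = 1
		}
		nodes[numa] = true
	}
	return len(nodes)
}

func main() {
	nodeA := []int{0, 1, 2, 7} // Node A's available set spans both NUMA nodes
	nodeB := []int{0, 1, 2, 3} // Node B satisfies the request on NUMA 0 alone
	fmt.Println(allocSpan(nodeA), allocSpan(nodeB)) // 2 1
}
```

A lower span means better locality, so the scorer prefers Node B for this task.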
**Which issue(s) this PR fixes**

Fixes #4998
**Does this PR introduce a user-facing change?**

Add GPU NUMA topology awareness to the numaaware scheduler plugin. The scheduler now considers GPU-to-NUMA node affinity when scheduling GPU workloads, preferring placements that minimize cross-NUMA GPU allocation for better performance.
**How Has This Been Tested?**