fix(ROCm): prevent false TMA support detection on AMD GPUs#4126

Merged
danielhanchen merged 1 commit into unslothai:main from GoldenGrapeGentleman:fix/rocm-tma-false-positive on Mar 1, 2026

Conversation

@GoldenGrapeGentleman
Contributor

Problem

_check_tma_support() in unsloth/kernels/moe/grouped_gemm/interface.py incorrectly returns True on AMD ROCm GPUs.

TMA (Tensor Memory Accelerator) is an NVIDIA Hopper+ exclusive hardware feature. However, both checks in the function pass on AMD:

| Check | AMD gfx1100 (W7900) | Expected |
| --- | --- | --- |
| `torch.cuda.get_device_capability()[0] >= 9` | `11 >= 9` → True | False |
| `hasattr(tl, "make_tensor_descriptor")` | True (symbol exists in ROCm Triton) | N/A |
| `_SUPPORTS_TMA` (final) | True | False |

The capability value 11 comes from gfx1100, not from NVIDIA's sm_90 (Hopper). This false positive would cause MoE grouped GEMM to attempt TMA operations on AMD, leading to runtime failures.
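As a standalone illustration, the two conditions can be mirrored in plain Python. The function and values below are hypothetical stand-ins for the real torch/Triton probes, fed with what a gfx1100 reports under ROCm:

```python
# Hypothetical stand-ins for the two probes in _check_tma_support();
# the inputs mirror what an AMD gfx1100 reports under ROCm.

def tma_checks_pass(capability_major, has_descriptor_api):
    # Check 1: meant to detect NVIDIA Hopper (sm_90), i.e. major >= 9.
    capability_ok = capability_major >= 9
    # Check 2: ROCm Triton also exports tl.make_tensor_descriptor,
    # so an attribute test passes on AMD too.
    return capability_ok and has_descriptor_api

# gfx1100: torch.cuda.get_device_capability() reports (11, 0).
print(tma_checks_pass(11, True))  # -> True, the false positive
```

Both checks pass even though the hardware has no TMA engine, which is exactly the failure mode described above.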

Fix

Early-return False for HIP devices, matching the existing XPU guard:

def _check_tma_support():
    if DEVICE_TYPE in ("xpu", "hip"):  # TMA is NVIDIA Hopper+ only
        return False
    ...

Verification

Tested on AMD Radeon PRO W7900 (gfx1100, ROCm 6.3.4):

Original _SUPPORTS_TMA: True   ← BUG
Fixed    _SUPPORTS_TMA: False  ← Correct

Changes

  • unsloth/kernels/moe/grouped_gemm/interface.py: 1 file, +2 −2

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug where AMD ROCm GPUs were erroneously identified as supporting NVIDIA's Tensor Memory Accelerator (TMA) feature. This false detection could lead to runtime failures when attempting to use TMA operations on unsupported hardware. The implemented fix introduces a device-type check to correctly disable TMA support for AMD (HIP) devices, ensuring accurate behavior and preventing crashes across different GPU architectures.

Highlights

  • Incorrect TMA support detection on AMD GPUs: The _check_tma_support() function was incorrectly returning True for AMD ROCm GPUs, leading to potential runtime failures. This was due to AMD's gfx1100 capability being 11 (which is >= 9) and Triton's make_tensor_descriptor symbol existing in ROCm Triton, despite TMA being an NVIDIA Hopper+ exclusive feature.
  • Fix for TMA detection logic: An early-return False condition was added for hip devices (AMD ROCm) in _check_tma_support(), similar to the existing xpu guard. This ensures that TMA support is correctly disabled for AMD GPUs, preventing false positives and ensuring stability.


Changelog
  • unsloth/kernels/moe/grouped_gemm/interface.py
    • Updated the _check_tma_support function to explicitly return False for hip devices, preventing incorrect TMA support detection on AMD GPUs.
    • Clarified the comment regarding TMA requirements to specify 'NVIDIA GPU'.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly fixes an issue where TMA (Tensor Memory Accelerator) support was being incorrectly detected on AMD ROCm GPUs. This was due to the device capability check returning a value that passed the threshold intended for NVIDIA Hopper+ GPUs. The fix introduces an early return for hip devices, preventing this false positive and potential runtime failures. The change is logical and well-implemented. I've added one suggestion to make the device type check even more robust for the future.

  # 2. Triton version with TMA API (make_tensor_descriptor or _experimental_make_tensor_descriptor)
  def _check_tma_support():
-     if DEVICE_TYPE == "xpu":
+     if DEVICE_TYPE in ("xpu", "hip"):


Severity: medium

While adding "hip" to the check correctly fixes the bug for AMD GPUs, a more robust and future-proof approach would be to explicitly check for "cuda" devices, since TMA is an NVIDIA-specific feature. This prevents similar issues if support for other hardware is added in the future.

Suggested change:

- if DEVICE_TYPE in ("xpu", "hip"):
+ if DEVICE_TYPE != "cuda":
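To see why the reviewer prefers the allowlist form, the two policies can be contrasted in isolation. The device-type strings below are illustrative, and "mps" stands in for any hypothetical future backend:

```python
# Contrast of the merged denylist fix with the reviewer's allowlist
# suggestion; "mps" is an illustrative stand-in for a future backend.

def tma_possible_denylist(device_type):
    # The merged fix: exclude the known non-NVIDIA backends.
    return device_type not in ("xpu", "hip")

def tma_possible_allowlist(device_type):
    # The reviewer's suggestion: only CUDA devices can ever have TMA.
    return device_type == "cuda"

print(tma_possible_denylist("mps"))   # -> True: a new backend slips through
print(tma_possible_allowlist("mps"))  # -> False: safe by default
```

Both agree on "cuda", "hip", and "xpu" today; they only diverge on backends not yet listed, which is the future-proofing the reviewer is pointing at.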

@GoldenGrapeGentleman GoldenGrapeGentleman force-pushed the fix/rocm-tma-false-positive branch from 4a2a483 to 5b05509 Compare February 28, 2026 07:19
@GoldenGrapeGentleman
Contributor Author

Note: Rebased to remove merge commit — now a clean single commit on top of main.

Dependency: This PR is independent and can be merged standalone. It does not depend on any other open PR.

Related PRs:

TMA (Tensor Memory Accelerator) is an NVIDIA Hopper+ feature that does
not exist on AMD GPUs.  However, _check_tma_support() incorrectly
returns True on ROCm because:

1. torch.cuda.get_device_capability() returns (11, 0) for gfx1100,
   satisfying the >= 9 check intended for Hopper (sm_90).
2. ROCm Triton exports tl.make_tensor_descriptor (the symbol exists
   even though the hardware does not support TMA).

This would cause MoE grouped_gemm to attempt TMA operations on AMD
GPUs, leading to runtime failures.

Fix: early-return False for HIP devices, matching the existing XPU
guard.

@danielhanchen danielhanchen left a comment


Thank you! This works great!

@danielhanchen danielhanchen merged commit 0cc6941 into unslothai:main Mar 1, 2026
1 check passed