
Fix GPU device options missing from module/device selectors#76

Merged
thomaswantstobeaskeleton merged 1 commit into main from codex/fix-gpu-not-appearing-in-device-options
Apr 30, 2026
Conversation

@thomaswantstobeaskeleton
Owner

Motivation

  • Module/device pickers showed only Default and cpu because runtime device discovery was fragile and did not expose GPU device options in AVAILABLE_DEVICES.
  • The goal is to harden device detection so GPU runtimes (including indexed CUDA devices) appear in module selectors and the default device picker.

Description

  • Centralized runtime device option population in modules/base.py by introducing _append_unique_device and building AVAILABLE_DEVICES from a single source of truth.
  • Improved CUDA/ROCm detection by checking both torch.cuda.is_available() and torch.cuda.device_count() and adding cuda plus indexed entries (cuda:0, cuda:1, ...) when present.
  • Preserved and deduplicated detection for XPU, MPS, and DirectML (privateuseone) while keeping the CPU fallback intact.
  • DEVICE_SELECTOR() continues to read from AVAILABLE_DEVICES, so UI selectors and module defaults automatically pick up discovered devices.
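The centralized discovery described above could be sketched roughly as follows. The names _append_unique_device and AVAILABLE_DEVICES come from the PR description; the boolean/count arguments are hypothetical stand-ins for the torch.cuda / torch.xpu / MPS probes so the sketch runs without PyTorch installed, and the exact condition in the merged code may differ:

```python
# Sketch of centralized runtime device discovery (not the project's actual code).
AVAILABLE_DEVICES = ['Default']

def _append_unique_device(name: str) -> None:
    """Append a device option only if it is not already listed (dedup)."""
    if name not in AVAILABLE_DEVICES:
        AVAILABLE_DEVICES.append(name)

def discover_devices(cuda_available: bool = False, cuda_count: int = 0,
                     xpu_available: bool = False, mps_available: bool = False,
                     directml_available: bool = False) -> list:
    """Populate AVAILABLE_DEVICES from backend probes.

    Probe results are passed in as plain values here; in the real module
    they would come from torch.cuda.is_available(), torch.cuda.device_count(),
    and the XPU/MPS/DirectML equivalents.
    """
    # Conservative variant: require both probes before exposing CUDA entries.
    if cuda_available and cuda_count > 0:
        _append_unique_device('cuda')
        for i in range(cuda_count):          # indexed entries: cuda:0, cuda:1, ...
            _append_unique_device(f'cuda:{i}')
    if xpu_available:
        _append_unique_device('xpu')
    if mps_available:
        _append_unique_device('mps')
    if directml_available:
        _append_unique_device('privateuseone')
    _append_unique_device('cpu')             # CPU fallback is always present
    return AVAILABLE_DEVICES
```

Because a UI selector like DEVICE_SELECTOR() reads from this single list, any backend that passes its probe shows up in module pickers automatically, and calling discovery twice cannot produce duplicates.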

Testing

  • Ran python -m compileall -f -q modules/base.py and it completed successfully.
  • Performed a runtime smoke check with python - <<'PY' ... import modules.base as b ... PY, printing DEFAULT_DEVICE, AVAILABLE_DEVICES, and DEVICE_SELECTOR(); the script ran successfully and showed CPU-only results, as expected since no CUDA devices are exposed in this environment.
  • No automated test suites were modified or run beyond the compile and runtime smoke check described above.

Codex Task

@thomaswantstobeaskeleton thomaswantstobeaskeleton merged commit 89baefd into main Apr 30, 2026
4 checks passed
@thomaswantstobeaskeleton thomaswantstobeaskeleton deleted the codex/fix-gpu-not-appearing-in-device-options branch April 30, 2026 22:31

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 435456aed3


Comment thread: modules/base.py, lines +315 to +317

    if torch.cuda.is_available() or cuda_count > 0:
        DEFAULT_DEVICE = 'cuda'
        _append_unique_device('cuda')

P1: Require CUDA availability before choosing it as default

The new condition torch.cuda.is_available() or cuda_count > 0 can set DEFAULT_DEVICE to cuda even when PyTorch reports CUDA is not usable in the current process. This happens in partially broken CUDA installs (e.g., driver/runtime init failures) where device enumeration can still return a positive count; the app will then default to CUDA and later fail on model load or cache ops instead of safely staying on CPU. The previous behavior only selected CUDA when is_available() was true, so this introduces a regression in startup/device-selection reliability.
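One way to address this concern (a hedged sketch, not the project's actual fix) is to gate the default on a probe that trusts CUDA only when torch.cuda.is_available() succeeds, and to swallow initialization errors so a partially broken install falls back to CPU instead of crashing later on model load:

```python
def cuda_usable() -> bool:
    """Return True only when CUDA can actually be used in this process.

    Enumeration alone (device_count() > 0) is not trusted, which avoids
    defaulting to CUDA on a half-broken driver/runtime install.
    """
    try:
        import torch  # optional dependency; the sketch degrades to False without it
    except ImportError:
        return False
    try:
        return torch.cuda.is_available() and torch.cuda.device_count() > 0
    except Exception:
        # Driver/runtime init failures must not take down device selection.
        return False

# Select CUDA as the default only when the probe passes; otherwise stay on CPU.
DEFAULT_DEVICE = 'cuda' if cuda_usable() else 'cpu'
```

This mirrors the pre-PR behavior the reviewer describes (CUDA chosen only when is_available() is true) while keeping device_count() as an additional sanity check rather than an alternative trigger.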

