[Bug]: AssertionError: Torch not compiled with CUDA enabled #17236

@WhO2022

Description


Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

When starting webui.bat, it shows the error:
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
After adding that argument to the variable, image generation fails with:
AssertionError: Torch not compiled with CUDA enabled
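Not part of the original report, but useful for triage: both errors are consistent with a CPU-only torch wheel in the venv. A quick sketch to check which build is installed:

```python
import torch

# A CPU-only wheel reports a version like "2.1.2+cpu", has no CUDA
# runtime version, and torch.cuda.is_available() returns False --
# which is exactly what trips the webui's startup check.
print(torch.__version__)
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())
```

If `torch.version.cuda` prints None, the usual fix is reinstalling torch from a CUDA-tagged wheel index (or deleting the venv so the webui recreates it) rather than skipping the startup check.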

Steps to reproduce the problem

  1. Hit the RuntimeError above on startup.
  2. Add --skip-torch-cuda-test to the COMMANDLINE_ARGS variable.
  3. Run an img2img generation.
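For reference, step 2 corresponds to editing the launcher's user config. A sketch of the relevant line in webui-user.bat (the rest of the file is assumed stock):

```shell
rem webui-user.bat -- the flag only silences the startup check; a
rem CPU-only torch build will still fail at generation time.
set COMMANDLINE_ARGS=--skip-torch-cuda-test
```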

What should have happened?

Startup and img2img generation should work without errors.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2026-01-04-08-13.json

Console logs

venv "C:\Users\elmar_86\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --skip-torch-cuda-test
C:\Users\elmar_86\stable-diffusion-webui\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [cc6cb27103] from C:\Users\elmar_86\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.ckpt
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 5.3s (prepare environment: 0.4s, import torch: 2.6s, import gradio: 0.6s, setup paths: 0.4s, other imports: 0.3s, load scripts: 0.5s, create ui: 0.2s, gradio launch: 0.3s).
Creating model from config: C:\Users\elmar_86\stable-diffusion-webui\configs\v1-inference.yaml
C:\Users\elmar_86\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:942: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: AssertionError
Traceback (most recent call last):
  File "C:\Users\elmar_86\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\elmar_86\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\elmar_86\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\elmar_86\stable-diffusion-webui\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\Users\elmar_86\stable-diffusion-webui\modules\shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\Users\elmar_86\stable-diffusion-webui\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\Users\elmar_86\stable-diffusion-webui\modules\sd_models.py", line 868, in load_model
    with devices.autocast(), torch.no_grad():
  File "C:\Users\elmar_86\stable-diffusion-webui\modules\devices.py", line 228, in autocast
    if has_xpu() or has_mps() or cuda_no_autocast():
  File "C:\Users\elmar_86\stable-diffusion-webui\modules\devices.py", line 28, in cuda_no_autocast
    device_id = get_cuda_device_id()
  File "C:\Users\elmar_86\stable-diffusion-webui\modules\devices.py", line 40, in get_cuda_device_id
    ) or torch.cuda.current_device()
  File "C:\Users\elmar_86\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 769, in current_device
    _lazy_init()
  File "C:\Users\elmar_86\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled


Stable diffusion model failed to load
Exception in thread Thread-16 (load_model):
Traceback (most recent call last):
  File "C:\Users\elmar_86\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\elmar_86\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\elmar_86\stable-diffusion-webui\modules\initialize.py", line 154, in load_model
    devices.first_time_calculation()
  File "C:\Users\elmar_86\stable-diffusion-webui\modules\devices.py", line 277, in first_time_calculation
    linear(x)
  File "C:\Users\elmar_86\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\elmar_86\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\elmar_86\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 584, in network_Linear_forward
    return originals.Linear_forward(self, input)
  File "C:\Users\elmar_86\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
Using already loaded model v1-5-pruned-emaonly.ckpt [cc6cb27103]: done in 0.0s
*** Error completing request
*** Arguments: ('task(iba2sr0ecdrv7hs)', <gradio.routes.Request object at 0x0000020525B6FC40>, 0, 'improve quality', 'sketch art, worse quality', [], <PIL.Image.Image image mode=RGBA size=423x525 at 0x20525B6FBE0>, None, None, None, None, None, None, 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0.0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', 'upload', None, 0, False, 1, 0.5, 4, 0, 0.5, 2, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\elmar_86\stable-diffusion-webui\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\elmar_86\stable-diffusion-webui\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "C:\Users\elmar_86\stable-diffusion-webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\Users\elmar_86\stable-diffusion-webui\modules\img2img.py", line 242, in img2img
        processed = process_images(p)
      File "C:\Users\elmar_86\stable-diffusion-webui\modules\processing.py", line 847, in process_images
        res = process_images_inner(p)
      File "C:\Users\elmar_86\stable-diffusion-webui\modules\processing.py", line 920, in process_images_inner
        with devices.autocast():
      File "C:\Users\elmar_86\stable-diffusion-webui\modules\devices.py", line 228, in autocast
        if has_xpu() or has_mps() or cuda_no_autocast():
      File "C:\Users\elmar_86\stable-diffusion-webui\modules\devices.py", line 28, in cuda_no_autocast
        device_id = get_cuda_device_id()
      File "C:\Users\elmar_86\stable-diffusion-webui\modules\devices.py", line 40, in get_cuda_device_id
        ) or torch.cuda.current_device()
      File "C:\Users\elmar_86\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 769, in current_device
        _lazy_init()
      File "C:\Users\elmar_86\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled

---
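Not in the original report: the second traceback (`"addmm_impl_cpu_" not implemented for 'Half'`) comes from running a float16 linear layer on the CPU. A minimal sketch that exercises the same code path (the exact behavior depends on the torch version; newer CPU builds may support half-precision matmul):

```python
import torch
import torch.nn.functional as F

# float16 linear on CPU -- the same operation that fails in the log
# at modules/devices.py first_time_calculation.
x = torch.ones(1, 4, dtype=torch.float16)
w = torch.ones(2, 4, dtype=torch.float16)
try:
    out = F.linear(x, w)
    print("CPU half-precision linear succeeded:", tuple(out.shape))
except RuntimeError as err:
    # Older torch builds raise: "addmm_impl_cpu_" not implemented for 'Half'
    print("CPU half-precision linear failed:", err)
```

This is why running the webui on CPU is commonly reported to need --no-half in addition to --skip-torch-cuda-test, so that model weights stay in float32.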

Additional information

No response

Metadata

Assignees: no one assigned
Labels: bug-report (Report of a bug, yet to be confirmed)
Projects: none
Milestone: none
