-
I'm trying to use the newest version of SeedVR2. While I can do a 4x upscale and get 1 or 1.5 fps in Topaz, with SeedVR2 I get around 0.5 fps or lower. I don't know how to speed it up (I have a 4090, but only 16 GB of RAM). Can someone help me? Thank you for this amazing project.
Replies: 5 comments 2 replies
-
Please turn on enable_debug on the upscaler node and share the full log output. We can't help without data. |
-
@adrientoupet I'm sorry again for not bringing all the information, but here is the situation: when I try to go from 1080p to 4K (a 1:20 min clip, the example I use), I only manage 1080p to 1080p, and very slowly, and I didn't save the log. So I tried Topaz to 4K and it works; when I did it with ComfyUI it breaks. I'm going to try to replicate this on Linux and on another Windows machine so I at least have a log to bring here, but it's the same crash every time at high resolutions, even after using all the tips, swaps, and the 3B models. When I get the new log I'll bring it here, with much more data.
-
Which Topaz model do you use? Starlight or Starlight Sharp? |
-
@adrientoupet I'm using Gemini to translate this, to help make it understandable, because I found a new error while trying different models. I am reporting a very specific "Guard check failed" error when attempting to perform video upscaling to 1080p using the quantized DiT GGUF model (seedvr2_ema_3b-Q8_0.gguf). My system has Triton active and PyTorch 2.9, but the process fails right at the first batch of Phase 2. This suggests a conflict between the GGUF model (which uses its own precision optimizations) and the dynamic compilation of torch.compile (PyTorch Dynamo).

🛠️ Configuration and Environment

- GPU: NVIDIA GeForce RTX 4090 (24 GB)
- PyTorch: 2.9.1+cu128
- Acceleration: FlashAttn: v2 ✓ | Triton: ✓
- SeedVR2 Version: v2.5.21
- DiT Model: /home/devil/IA/ComfyUI/models/SEEDVR2/seedvr2_ema_3b-Q8_0.gguf
- Target Resolution: 1920x1080 (1080px shortest edge)
- Total Frames: 946

The process fails at the start of Phase 2 (DiT upscaling) with the following error:

[12:43:46.218] ❌ [ERROR] Forward pass error: Guard check failed: 40/0: tensor 'self._buffers['quantized_weight']' size mismatch at index 1. expected 2560, actual 2720. Guard failed on a parameter, consider using torch._dynamo.config.force_parameter_static_shapes = False to allow dynamism on parameters.
-
Which workflow of @numz are you referring to? |

@adrientoupet I'm running more tests, but the workflow from @numz works so fast I'm speechless. The same video, at better quality, runs in 4 minutes. I'm still trying to understand why the official workflow is this heavy, but I'm going to share the new log. I'm doing 101 frames, without any tweaks, and it just works.
[16:25:25.172] ━━━━━━━━ Phase 1: VAE encoding ━━━━━━━━
[16:25:25.172] ♻️ Reusing pre-initialized video transformation pipeline
[16:25:25.173]
[16:25:25.173] 💡 Tip: For 946 frames, batch_size=945 matches video length optimally
[16:25:25.173] 💡 Matching batch_size to shot length improves temporal coherence
[16:25:25.173]
[16:25:25.173] 🎨 Materializing VAE weights to CPU (offload device): /h…