ComfyUI Error Report

Error Details
- Node ID: 276
- Node Type: KSampler
- Exception Type: ValueError
- Exception Message: too many values to unpack (expected 4)
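For context, Python raises this ValueError whenever a fixed-arity unpacking target (here, four names in a generator expression) meets an iterable entry with a different number of fields. A minimal repro of the failing pattern, using hypothetical stand-in data rather than DisTorch's real block format, is:

```python
# Minimal repro of the failure mode: the generator unpacks each entry into
# exactly four names, so a five-field entry raises ValueError.
# raw_block_list here is made-up illustrative data, not DisTorch's actual format.
raw_block_list = [
    (1024, "block.0", "cuda:0", "f16"),           # 4 fields: unpacks fine
    (2048, "block.1", "cuda:0", "f16", "extra"),  # 5 fields: raises ValueError
]

try:
    total_memory = sum(size for size, _, _, _ in raw_block_list)
except ValueError as exc:
    print(exc)  # too many values to unpack (expected 4)
```

This matches the traceback below: some entry of `raw_block_list` in `analyze_safetensor_loading` evidently carries more than four fields.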
Stack Trace
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 515, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 329, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 303, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 291, in process_inputs
    result = f(**inputs)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\nodes.py", line 1538, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\nodes.py", line 1505, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sample.py", line 60, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1163, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1053, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1035, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 984, in outer_sample
    self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 130, in prepare_sampling
    return executor.execute(model, noise_shape, conds, model_options=model_options)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 138, in _prepare_sampling
    comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory)
  File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 197, in patched_load_models_gpu
    loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load
    self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
  File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram
    return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
  File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 239, in new_partially_load
    device_assignments = analyze_safetensor_loading(self, allocations, is_clip=is_clip_model)
  File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 426, in analyze_safetensor_loading
    total_memory = sum(module_size for module_size, _, _, _ in raw_block_list)
  File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 426, in <genexpr>
    total_memory = sum(module_size for module_size, _, _, _ in raw_block_list)
ValueError: too many values to unpack (expected 4)
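Assuming the first element of each `raw_block_list` entry is the module size (as the unpacking order at distorch_2.py line 426 suggests), a defensive rewrite that indexes the size field instead of unpacking a fixed arity would tolerate entries with extra trailing fields. This is a hypothetical sketch of such a workaround, not the actual comfyui-multigpu code; `total_block_memory` is an illustrative helper name:

```python
# Hypothetical tolerant version of the failing summation: index field 0
# (the module size, by assumption) rather than unpacking exactly four names,
# so entries with additional fields no longer raise ValueError.
def total_block_memory(raw_block_list):
    """Sum module sizes, assuming size is the first field of every entry."""
    return sum(block[0] for block in raw_block_list)

blocks = [
    (1024, "block.0", "cuda:0", "f16"),
    (2048, "block.1", "cuda:0", "f16", "extra_field"),  # 5 fields: crashed before
]
print(total_block_memory(blocks))  # 3072
```

The real fix belongs upstream (whatever builds `raw_block_list` is emitting entries of inconsistent width, possibly due to the GGUFModelPatcher path seen in the logs), but indexing rather than unpacking at least localizes the assumption.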
System Information
- ComfyUI Version: 0.4.0
- Arguments: C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\main.py --user-directory H:\COMFY UI - TEST\user --input-directory H:\COMFY UI - TEST\input --output-directory H:\COMFY UI - TEST\output --front-end-root C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\web_custom_versions\desktop_app --base-directory H:\COMFY UI - TEST --extra-model-paths-config C:\Users\user\AppData\Roaming\ComfyUI\extra_models_config.yaml --log-stdout --listen 127.0.0.1 --port 8000
- OS: win32
- Python Version: 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)]
- Embedded Python: false
- PyTorch Version: 2.8.0+cu129
Devices
- Name: cuda:0 NVIDIA GeForce RTX 4060 Ti : cudaMallocAsync
- Type: cuda
- VRAM Total: 17175150592 (~16.0 GB)
- VRAM Free: 15964569600 (~14.9 GB)
- Torch VRAM Total: 0
- Torch VRAM Free: 0
Logs
2025-12-11T15:33:01.148364 - ['0']
2025-12-11T15:33:01.499424 - </s><s>car<loc_0><loc_0><loc_998><loc_998></s>
2025-12-11T15:33:01.509399 - match index: 0 in mask_indexes: ['0']
[... the detection/match pair above repeats roughly a dozen more times between 15:33:01 and 15:33:06 ...]
2025-12-11T15:33:06.247725 - Offloading model...
2025-12-11T15:33:06.572853 - [{}]
2025-12-11T15:33:06.587814 - model_path: H:\COMFY UI - TEST\models\sam2\sam2.1_hiera_large-fp16.safetensors
2025-12-11T15:33:06.588810 - Using model config: H:\COMFY UI - TEST\custom_nodes\ComfyUI-segment-anything-2\sam2_configs\sam2.1_hiera_l.yaml
2025-12-11T15:33:09.990478 - Resizing to model input image size: 1024
2025-12-11T15:33:10.439279 - combined labels: [1. 1. 1. 1. 1. 1.], combined labels shape: (6,)
2025-12-11T15:33:24.095577 - propagate in video: 100%|██████████| 75/75 [00:12<00:00, 5.80it/s]
2025-12-11T15:33:25.749152 - Expanding/Contracting Mask: 100%|██████████| 75/75 [00:01<00:00, 48.52it/s]
2025-12-11T15:33:28.803980 - [MultiGPU Core Patching] text_encoder_device_patched returning device: cuda:0 (current_text_encoder_device=cuda:0)
2025-12-11T15:33:54.856605 - Requested to load WanTEModel
2025-12-11T15:33:54.880541 - loaded completely; 95367431640625005117571072.00 MB usable, 10835.48 MB loaded, full load: True
2025-12-11T15:33:54.883533 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
2025-12-11T15:33:55.330338 - [MultiGPU Core Patching] Successfully patched ModelPatcher.partially_load
2025-12-11T15:33:55.672581 - gguf qtypes: F32 (836), Q5_0 (441), Q5_1 (48), F16 (6)
2025-12-11T15:33:55.725439 - model weight dtype torch.float16, manual cast: None
2025-12-11T15:33:55.725439 - model_type FLOW
2025-12-11T15:33:58.669798 - Using sage attention mode: auto
2025-12-11T15:33:58.670797 - [rgthree-comfy][Power Lora Loader] Lora "Wan21_CausVid_14B_T2V_lora_rank32_v2.safetensors" not found, skipping.
2025-12-11T15:33:58.675782 - [MultiGPU DisTorch V2] ModelPatcher missing 'model_patches_models' attribute, using 'model_patches_to' fallback.
2025-12-11T15:33:58.702711 - loaded completely; 0.00 MB usable, 10835.48 MB loaded, full load: True
2025-12-11T15:33:59.386380 - [MultiGPU DisTorch V2] ModelPatcher missing 'model_patches_models' attribute, using 'model_patches_to' fallback.
2025-12-11T15:33:59.413308 - loaded completely; 0.00 MB usable, 10835.48 MB loaded, full load: True
2025-12-11T15:34:02.726761 - [MultiGPU DisTorch V2] ModelPatcher missing 'model_patches_models' attribute, using 'model_patches_to' fallback.
2025-12-11T15:34:02.726761 - Requested to load WanVAE
2025-12-11T15:34:03.586462 - Unloaded partially: 2336.48 MB freed, 8499.00 MB remains loaded, 160.00 MB buffer reserved, lowvram patches: 0
2025-12-11T15:34:03.696977 - loaded completely; 0.00 MB usable, 242.03 MB loaded, full load: True
[... the fallback warning plus "loaded completely; 0.00 MB usable, 242.03 MB loaded" pair repeats five more times between 15:34:48 and 15:37:53 ...]
2025-12-11T15:38:38.075470 - [MultiGPU DisTorch V2] GGUFModelPatcher missing 'model_patches_models' attribute, using 'model_patches_to' fallback.
2025-12-11T15:38:38.075470 - Requested to load WAN21_Vace
2025-12-11T15:38:40.424586 - 0 models unloaded.
2025-12-11T15:39:20.871456 - loaded completely; 0.00 MB usable, 12013.25 MB loaded, full load: True
===============================================
DisTorch2 Model Virtual VRAM Analysis
===============================================
Object   Role    Original(GB)   Total(GB)   Virt(GB)
-----------------------------------------------
cuda:0   recip   16.00GB        32.00GB     +16.00GB
cpu      donor   31.92GB        15.92GB     -16.00GB
-----------------------------------------------
model    model   11.60GB        0.00GB      -16.00GB
==================================================
[MultiGPU DisTorch V2] Final Allocation String: cuda:0,0.0000;cpu,0.5012
==================================================
DisTorch2 Model Device Allocations
==================================================
Device   VRAM GB   Dev %   Model GB   Dist %
--------------------------------------------------
cuda:0   16.00     0.0%    0.00       0.0%
cpu      31.92     50.1%   16.00      100.0%
--------------------------------------------------
2025-12-11T15:39:22.327129 - !!! Exception during processing !!!
too many values to unpack (expected 4)
2025-12-11T15:39:22.416874 - Traceback (most recent call last): [identical to the Stack Trace section above, ending in ValueError: too many values to unpack (expected 4)]
2025-12-11T15:39:22.728777 - Prompt executed in 522.12 seconds
2025-12-11T15:40:05.628722 - got prompt
2025-12-11T15:40:05.924981 - WARNING: [Errno 2] No such file or directory: 'H:\\COMFY UI - TEST\\input\\ComfyUI_00780_.png'
2025-12-11T15:40:06.057339 - [MultiGPU DisTorch V2] GGUFModelPatcher missing 'model_patches_models' attribute, using 'model_patches_to' fallback.
2025-12-11T15:40:06.057339 - Requested to load WAN21_Vace
2025-12-11T15:40:06.063324 - 0 models unloaded.
2025-12-11T15:40:06.166049 - loaded completely; 0.00 MB usable, 12013.25 MB loaded, full load: True
2025-12-11T15:40:06.205942 - [DisTorch2 Model Virtual VRAM Analysis and Device Allocations tables repeated, identical to the first run above]
2025-12-11T15:40:06.482771 - !!! Exception during processing !!! too many values to unpack (expected 4)
2025-12-11T15:40:06.484739 - Traceback (most recent call last): [identical to the Stack Trace section above]
2025-12-11T15:40:06.488728 - Prompt executed in 0.65 seconds
2025-12-11T15:43:04.809158 - got prompt
2025-12-11T15:43:04.841072 - WARNING: [Errno 2] No 
such file or directory: 'H:\\COMFY UI - TEST\\input\\ComfyUI_00780_.png'2025-12-11T15:43:04.983564 - [MultiGPU DisTorch V2] GGUFModelPatcher missing 'model_patches_models' attribute, using 'model_patches_to' fallback.2025-12-11T15:43:04.983564 - Requested to load WAN21_Vace2025-12-11T15:43:04.989548 - 0 models unloaded.2025-12-11T15:43:05.095265 - loaded completely; 0.00 MB usable, 12013.25 MB loaded, full load: True2025-12-11T15:43:05.135158 - ===============================================2025-12-11T15:43:05.135158 - DisTorch2 Model Virtual VRAM Analysis2025-12-11T15:43:05.135158 - ===============================================2025-12-11T15:43:05.135158 - Object Role Original(GB) Total(GB) Virt(GB)2025-12-11T15:43:05.135158 - -----------------------------------------------2025-12-11T15:43:05.135158 - cuda:0 recip 16.00GB 32.00GB +16.00GB2025-12-11T15:43:05.138150 - cpu donor 31.92GB 15.92GB -16.00GB2025-12-11T15:43:05.138150 - -----------------------------------------------2025-12-11T15:43:05.147127 - model model 11.60GB 0.00GB -16.00GB2025-12-11T15:43:05.147127 - ==================================================2025-12-11T15:43:05.147127 - [MultiGPU DisTorch V2] Final Allocation String:cuda:0,0.0000;cpu,0.50122025-12-11T15:43:05.151116 - ==================================================2025-12-11T15:43:05.151116 - DisTorch2 Model Device Allocations2025-12-11T15:43:05.151116 - ==================================================2025-12-11T15:43:05.151116 - Device VRAM GB Dev % Model GB Dist %2025-12-11T15:43:05.151116 - --------------------------------------------------2025-12-11T15:43:05.151116 - cuda:0 16.00 0.0% 0.00 0.0%2025-12-11T15:43:05.151116 - cpu 31.92 50.1% 16.00 100.0%2025-12-11T15:43:05.151116 - --------------------------------------------------2025-12-11T15:43:05.609498 - !!! Exception during processing !!! 
too many values to unpack (expected 4)2025-12-11T15:43:05.612490 - Traceback (most recent call last): File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 515, in execute output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 329, in get_output_data return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 303, in _async_map_node_over_list await process_inputs(input_dict, i) File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 291, in process_inputs result = f(**inputs) ^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\nodes.py", line 1538, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\nodes.py", line 1505, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sample.py", line 60, in sample samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1163, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1053, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1035, in sample output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute return self.original(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 984, in outer_sample self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 130, in prepare_sampling return executor.execute(model, noise_shape, conds, model_options=model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute return self.original(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 138, in _prepare_sampling comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory) File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 197, in patched_load_models_gpu loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights) File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights) File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 239, in new_partially_load device_assignments = analyze_safetensor_loading(self, allocations, is_clip=is_clip_model) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 426, in analyze_safetensor_loading total_memory = sum(module_size for module_size, _, _, _ in raw_block_list) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 426, in <genexpr> total_memory = sum(module_size for module_size, _, _, _ in raw_block_list) ^^^^^^^^^^^^^^^^^^^^ValueError: too many values to unpack (expected 4)2025-12-11T15:43:05.659725 - Prompt executed in 0.84 seconds2025-12-11T15:43:57.161685 - got prompt2025-12-11T15:43:57.204572 - WARNING: [Errno 2] No such file or directory: 'H:\\COMFY UI - TEST\\input\\ComfyUI_00780_.png'2025-12-11T15:43:57.284357 - [MultiGPU DisTorch V2] GGUFModelPatcher missing 'model_patches_models' attribute, using 'model_patches_to' fallback.2025-12-11T15:43:57.284357 - Requested to load WAN21_Vace2025-12-11T15:43:57.292336 - 0 models unloaded.2025-12-11T15:43:57.401045 - loaded completely; 0.00 MB usable, 12013.25 MB loaded, full load: True2025-12-11T15:43:57.443930 - ===============================================2025-12-11T15:43:57.443930 - DisTorch2 Model Virtual VRAM Analysis2025-12-11T15:43:57.443930 - ===============================================2025-12-11T15:43:57.443930 - Object Role Original(GB) Total(GB) Virt(GB)2025-12-11T15:43:57.443930 - -----------------------------------------------2025-12-11T15:43:57.443930 - cuda:0 recip 16.00GB 32.00GB +16.00GB2025-12-11T15:43:57.447920 - cpu donor 31.92GB 15.92GB -16.00GB2025-12-11T15:43:57.447920 - 
-----------------------------------------------2025-12-11T15:43:57.456920 - model model 11.60GB 0.00GB -16.00GB2025-12-11T15:43:57.456920 - ==================================================2025-12-11T15:43:57.456920 - [MultiGPU DisTorch V2] Final Allocation String:cuda:0,0.0000;cpu,0.50122025-12-11T15:43:57.460916 - ==================================================2025-12-11T15:43:57.460916 - DisTorch2 Model Device Allocations2025-12-11T15:43:57.460916 - ==================================================2025-12-11T15:43:57.460916 - Device VRAM GB Dev % Model GB Dist %2025-12-11T15:43:57.460916 - --------------------------------------------------2025-12-11T15:43:57.460916 - cuda:0 16.00 0.0% 0.00 0.0%2025-12-11T15:43:57.460916 - cpu 31.92 50.1% 16.00 100.0%2025-12-11T15:43:57.460916 - --------------------------------------------------2025-12-11T15:43:57.695258 - !!! Exception during processing !!! too many values to unpack (expected 4)2025-12-11T15:43:57.698249 - Traceback (most recent call last): File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 515, in execute output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 329, in get_output_data return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 303, in _async_map_node_over_list await process_inputs(input_dict, i) File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 291, in process_inputs result = f(**inputs) ^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\nodes.py", line 1538, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\nodes.py", line 1505, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sample.py", line 60, in sample samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1163, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, 
sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1053, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1035, in sample output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute return self.original(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 984, in outer_sample self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 130, in prepare_sampling return executor.execute(model, noise_shape, conds, model_options=model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in 
execute return self.original(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 138, in _prepare_sampling comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory) File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 197, in patched_load_models_gpu loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights) File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights) File "C:\Users\user\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 239, in new_partially_load device_assignments = analyze_safetensor_loading(self, allocations, is_clip=is_clip_model) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 426, in analyze_safetensor_loading total_memory = sum(module_size for module_size, _, _, _ in raw_block_list) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "H:\COMFY UI - TEST\custom_nodes\comfyui-multigpu\distorch_2.py", line 426, in <genexpr> total_memory = sum(module_size for module_size, _, _, _ in raw_block_list) ^^^^^^^^^^^^^^^^^^^^ValueError: too many values to unpack (expected 4)2025-12-11T15:43:57.702239 - Prompt executed in 0.53 seconds
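The crash comes from the comfyui-multigpu custom node, not from ComfyUI core: `analyze_safetensor_loading` unpacks each entry of `raw_block_list` into exactly four variables, so it raises `ValueError` the moment any entry carries more than four fields. The sketch below reproduces that failure mode; the entry layout shown is hypothetical, since the real `raw_block_list` structure lives inside the custom node and may differ.

```python
# Minimal reproduction of the failure in analyze_safetensor_loading
# (distorch_2.py, line 426). The field layout here is an assumption
# for illustration; the real entries come from comfyui-multigpu.

# Suppose each block entry carries (size, name, dtype, device): 4 fields.
blocks_v1 = [(1024, "blk.0", "f16", "cpu"), (2048, "blk.1", "f16", "cpu")]
total = sum(size for size, _, _, _ in blocks_v1)  # fine: exactly 4 fields

# If a newer ComfyUI (or a GGUF patcher) adds a fifth field per entry,
# the fixed 4-tuple unpacking raises the reported error:
blocks_v2 = [(1024, "blk.0", "f16", "cpu", True)]
try:
    total = sum(size for size, _, _, _ in blocks_v2)
except ValueError as e:
    print(e)  # too many values to unpack (expected 4)

# A tolerant rewrite reads only the first field and ignores any extras:
total = sum(entry[0] for entry in blocks_v2)
```

Because the brittle unpacking sits inside the custom node, the practical fix is usually to update (or temporarily disable) comfyui-multigpu so that its tuple layout matches the installed ComfyUI version, rather than hand-patching line 426.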
Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large. Please manually upload the workflow from local file system.
Additional Context
(Please add any additional context or steps to reproduce the error here)
I need help. I have been stuck on this problem for a week and I am completely new to ComfyUI.