Qwen Support #2081
Replies: 15 comments 52 replies
-
Great release.
-
Can you please provide instructions on how to use the edit feature with references to multiple inputs? For example, assume the task of placing a person (image 1) and an object (image 2) into a scene/background (image 3). How should I organize my layers so that this works?
-
Thank you so much for sharing this post and for all the work you've done to make this a reality. You're doing fantastic work. I'm sure many, many people will appreciate it, even if they don't comment later.
-
Is there a reason that qwen_image_edit_2509_fp8_e4m3fn.safetensors isn't recognized the same way the other two fp8 models are?
-
Why does it load the Flux text encoders (clip_l and t5)? Shouldn't it use the single CLIP loader with qwen_2.5_vl_7b_fp8_scaled in qwen_image mode?
-
Hi Acly, I think these are the new LoRAs (no guarantee). I got the link from here. Sorry for the question, but I'm currently trying to find out which combination is fastest and gives the best quality for me (RTX 5600TI 16GB). Bye
-
Is there a way to limit the editing to a mask? I'd like to profit from:
-
"Qwen-Image and Qwen-Image-Edit models are currently not fully compatible with Sage-Attention (particularly version 2.2+ when using the"

Not sure what the best option is for using Qwen models in Krita. Sage Attention speeds up most models and only seems incompatible with the Qwen ones. It's very inconvenient to have to remove the sage-attention flag from the startup command and restart ComfyUI just to use Qwen models; switching between models quickly becomes impossible. Could the "Patch Sage Attention KJ" node be added as an option in Krita's style settings for Qwen models? KJ Nodes is a pretty popular extension that many users probably already have. If not, users of these GPUs who want to use sage-attention may have to work with custom graphs in Krita, where they can apply the patch until these bugs are worked out of Sage-Attention/Triton.
-
Thank you for all the work you've put into this extension. Great model. Even with Nunchaku, it takes a long time to output each image, but it's more likely to get it right, so I can keep it running in the background while I do something else.
-
I don't have much success with it. Changing the resolution to 1024x1024 as recommended here on Reddit doesn't help at all. There are no LoRAs or anything. Euler beta, sampler steps 20, CFG scale 2.5. Prompt:
-
InstantX/Qwen-Image-ControlNet-Inpainting is a great inpainting ControlNet for Qwen Image (non-Edit version). It works really well with Lightning LoRAs and other community-made LoRAs, and it complements the Edit model perfectly. The Edit model is great, but sometimes the main model with the inpainting ControlNet gives noticeably better results and more precise control. It uses the same inpainting node as Flux - ControlNetInpaintingAliMamaApply, which is already in Krita AI. Any chance we might see it added to Krita AI at some point?
-
1.44.0: added Qwen-Image-Layered support, see the special section in the top post on how to get it working. In theory it could be really useful, but in my experience it doesn't work that well (and is super slow). But it's very new and might be improved in the future.
-
I can't get Qwen Layered to work. Latest plugin and updated Comfy. Using a Q6 quant, it always returns a single layer with the original image, but deep-fried.
-
A new edit model has been released, apparently addressing some of the issues we've seen: https://huggingface.co/Qwen/Qwen-Image-Edit-2511 Hopefully they Nunchaku it soon.
-
The new Qwen Edit 2511 requires the FluxKontextMultiReferenceLatentMethod node (ComfyUI core) set to "index_timestep_zero" to avoid outputting shifted and oversaturated/garbled images.
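For anyone wiring this up in a custom graph: below is a minimal sketch of what the node entry might look like in ComfyUI's API-format workflow JSON. The node ids and the input name `reference_latents_method` are assumptions, not taken from this thread; export your own workflow via "Save (API Format)" to see the exact schema your ComfyUI version produces.

```python
import json

# Hypothetical fragment of a ComfyUI API-format workflow. Node ids ("41", "42")
# and the input name "reference_latents_method" are assumptions; only the
# class_type and the "index_timestep_zero" value come from the comment above.
workflow_fragment = {
    "42": {
        "class_type": "FluxKontextMultiReferenceLatentMethod",
        "inputs": {
            # The setting recommended above for Qwen Edit 2511:
            "reference_latents_method": "index_timestep_zero",
            # Conditioning from an upstream text-encode node (id assumed):
            "conditioning": ["41", 0],
        },
    }
}

print(json.dumps(workflow_fragment, indent=2))
```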


-
Qwen Image
Last updated: 1.44.0
Features
Requirements
The managed install can run Qwen models since plugin version 1.40.0, but it does not offer automatic model downloads.
For custom ComfyUI installs remember to update:
Model Download
There are 4 main models:
Diffusion Models
You can download one or more of these. They go into the /models/diffusion_models folder.

VAE
This is required for all diffusion models. It goes into /models/vae.

Text Encoder
This is required for all diffusion models. It goes into /models/text_encoders.

Lightning LoRA
For fp8 or GGUF versions you can add the Lightning LoRA and reduce steps to 8 or 4.
For SVDQ/Nunchaku models LoRAs don't work yet, but you can use a merged low-step Lightning version from here:
Qwen Image | Qwen Image Edit | Qwen Image Edit 2509.
You have to create a custom Style for the Lightning versions.
Qwen Image Layered
Qwen Image Layered is a model which takes an existing image and splits it into several layers, separating background, foreground, individual objects and text. These parts can then be edited individually with traditional Krita tools.
Diffusion Models
To use it you need the diffusion model (several options with different size below), and the VAE.
They go into /models/diffusion_models.

VAE
The VAE for Qwen-Image-Layered is different from the one used for the other Qwen models:
/models/vae

How to use
Additional information
You can also follow ComfyUI documentation: