Commit Graph

422 Commits

Author SHA1 Message Date
Jedrzej Kosinski
d9bb4530b0 Merge branch 'master' into attention-select 2025-08-29 23:35:38 -07:00
Jedrzej Kosinski
cb959f9669 Add optimized to get_attention_function 2025-08-29 21:48:36 -07:00
Jedrzej Kosinski
d553073a1e Fixed WAN 2.1 VACE transformer_options passthrough 2025-08-29 13:20:43 -07:00
Jedrzej Kosinski
af288b9946 Fixed Wan2.1 Fun Camera transformer_options passthrough 2025-08-29 13:06:37 -07:00
Jedrzej Kosinski
1ae6fe14a7 Fix WanI2VCrossAttention so that it expects to receive transformer_options 2025-08-29 02:31:16 -07:00
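The three passthrough fixes above (VACE, Fun Camera, WanI2VCrossAttention) share one pattern: a module's forward has to accept transformer_options and hand it on to the attention layer it wraps instead of silently dropping it. A minimal sketch of that pattern, with illustrative class and argument names rather than the actual ComfyUI WAN code:

    import torch.nn as nn

    class CrossAttentionBlockSketch(nn.Module):
        # Illustrative only: shows the transformer_options passthrough pattern.
        def __init__(self, attn: nn.Module):
            super().__init__()
            self.attn = attn

        def forward(self, x, context, transformer_options=None, **kwargs):
            # The fixes above amount to this: accept transformer_options here and
            # forward it, so the attention layer can see entries such as
            # optimized_attention_override.
            return self.attn(x, context=context,
                             transformer_options=transformer_options or {})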
Jedrzej Kosinski
2d13bf1c7a Made SVD work with optimized_attention_override 2025-08-28 22:45:45 -07:00
Jedrzej Kosinski
8be3edb606 Made Chroma work with optimized_attention_override 2025-08-28 22:45:31 -07:00
Jedrzej Kosinski
d644aba6bc Made Lumina work with optimized_attention_override 2025-08-28 22:00:44 -07:00
Jedrzej Kosinski
17090c56be Made AuraFlow work with optimized_attention_override 2025-08-28 21:46:56 -07:00
Jedrzej Kosinski
034d6c12e6 Made StableCascade work with optimized_attention_override 2025-08-28 21:42:08 -07:00
Jedrzej Kosinski
09c84b31a2 Made Omnigen 2 work with optimized_attention_override 2025-08-28 21:30:18 -07:00
Jedrzej Kosinski
8fe2dea297 Made CosmosVideo work with optimized_attention_override 2025-08-28 21:23:03 -07:00
Jedrzej Kosinski
4a44ed4a76 Make CosmosPredict2 work with optimized_attention_override 2025-08-28 21:18:34 -07:00
Jedrzej Kosinski
8b9b4bbb62 Made Hunyuan3D work with optimized_attention_override 2025-08-28 21:06:44 -07:00
Jedrzej Kosinski
27ebd312ae Made optimized_attention_override work with ACE Step 2025-08-28 21:03:28 -07:00
Jedrzej Kosinski
9461f30387 Made StableAudio work with optimized_attention_override 2025-08-28 20:56:56 -07:00
Jedrzej Kosinski
2cda45d1b4 Made LTX work with optimized_attention_override 2025-08-28 20:42:22 -07:00
Jedrzej Kosinski
61b5c5fc75 Made Mochi work with optimized_attention_override 2025-08-28 20:34:06 -07:00
Jedrzej Kosinski
ef894cdf08 Made HunyuanVideo work with optimized_attention_override 2025-08-28 20:26:53 -07:00
Jedrzej Kosinski
0ac5c6344f Made SD3 work with optimized_attention_override 2025-08-28 20:21:14 -07:00
Jedrzej Kosinski
1ddfb5bb14 Made wan patches_replace work with optimized_attention_override 2025-08-28 20:13:51 -07:00
Jedrzej Kosinski
4cafd58f71 Made hidream work with optimized_attention_override 2025-08-28 20:10:50 -07:00
Jedrzej Kosinski
f752715aac Make Qwen work with optimized_attention_override 2025-08-28 19:52:52 -07:00
Jedrzej Kosinski
48ed71caf8 Add logs to verify optimized_attention_override is passed all the way into the attention function 2025-08-28 19:43:39 -07:00
Jedrzej Kosinski
a7d70e42a0 Make flux work with optimized_attention_override 2025-08-28 19:33:02 -07:00
Jedrzej Kosinski
51a30c2ad7 Make sure wrap_attn doesn't make itself recurse infinitely; attempt to load SageAttention and FlashAttention if not enabled so they can be marked as available or not; create a registry of available attention functions 2025-08-28 18:53:20 -07:00
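A hedged sketch of what the availability probing and registry in this commit could look like; apart from get_attention_function, which appears in commit cb959f9669 above, the names below are assumptions rather than the actual ComfyUI API:

    # Record which optional attention backends import cleanly so callers can
    # query availability, and keep a name -> function registry.
    REGISTERED_ATTENTION = {}

    def register_attention_function(name, func):
        REGISTERED_ATTENTION[name] = func

    def get_attention_function(name, default=None):
        return REGISTERED_ATTENTION.get(name, default)

    def _backend_available(module_name):
        # Import only to find out whether the backend is usable at all.
        try:
            __import__(module_name)
            return True
        except ImportError:
            return False

    SAGE_ATTENTION_AVAILABLE = _backend_available("sageattention")
    FLASH_ATTENTION_AVAILABLE = _backend_available("flash_attn")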
Jedrzej Kosinski
669b9ef8e6 Added **kwargs to all attention functions so transformer_options could potentially be passed through 2025-08-28 13:14:41 -07:00
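The point of adding **kwargs is that extra context, here transformer_options, can flow through existing attention call sites without breaking callers that do not pass it. A rough sketch in that shape; the signature is simplified relative to the real attention functions:

    import torch.nn.functional as F

    def attention_sketch(q, k, v, heads, mask=None, **kwargs):
        # **kwargs absorbs extra context such as transformer_options, so callers
        # that pass it are accepted and callers that do not are unaffected.
        transformer_options = kwargs.get("transformer_options", {})
        b, _, inner = q.shape
        dim_head = inner // heads
        q, k, v = (t.view(b, -1, heads, dim_head).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
        return out.transpose(1, 2).reshape(b, -1, heads * dim_head)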
Jedrzej Kosinski
dd21b4aa51 Made WAN attention receive transformer_options; added a test node to wan to try out attention override later 2025-08-27 17:56:21 -07:00
Jedrzej Kosinski
29b7990dc2 Fix memory usage issue with inspect 2025-08-27 17:55:35 -07:00
Jedrzej Kosinski
68b00e9c60 Created logging code for this branch so that it can be used to track down all the code paths where transformer_options would need to be added 2025-08-27 17:13:33 -07:00
comfyanonymous
491755325c Better s2v memory estimation. (#9584) 2025-08-27 19:02:42 -04:00
Jedrzej Kosinski
b58db6934c Looking into a @wrap_attn decorator that checks transformer_options for an 'optimized_attention_override' entry 2025-08-27 14:18:18 -07:00
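Based only on the commit messages in this branch, the decorator presumably intercepts attention calls and defers to the override when one is present in transformer_options; the calling convention of the override below is an assumption. A sketch:

    import functools

    def wrap_attn(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            options = kwargs.get("transformer_options") or {}
            override = options.get("optimized_attention_override")
            if override is not None:
                # Hand the undecorated function to the override so it can call
                # back into the stock attention without re-entering this wrapper
                # (the infinite-recursion concern of commit 51a30c2ad7 above).
                return override(func, *args, **kwargs)
            return func(*args, **kwargs)
        return wrapper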
comfyanonymous
496888fd68 Improve s2v performance when generating videos longer than 120 frames. (#9582) 2025-08-27 16:06:40 -04:00
comfyanonymous
b5ac6ed7ce Fixes to make controlnet type models work on qwen edit and kontext. (#9581) 2025-08-27 15:26:28 -04:00
comfyanonymous
88aee596a3 WIP Wan 2.2 S2V model. (#9568) 2025-08-27 01:10:34 -04:00
Jedrzej Kosinski
fc247150fe Implement EasyCache and Invent LazyCache (#9496)
* Attempting a universal implementation of EasyCache, starting with flux as a test; I screwed up the math a bit, but when I set it just right it works.

* Fixed math to make threshold work as expected, refactored code to use EasyCacheHolder instead of a dict wrapped by object

* Use sigmas from transformer_options instead of timesteps to be compatible with a greater number of models; make end_percent work

* Make the log statement emitted when not skipping useful, in preparation for per-cond caching

* Added DIFFUSION_MODEL wrapper around forward function for wan model

* Add subsampling for heuristic inputs

* Add subsampling to output_prev (output_prev_subsampled now)

* Properly consider conds in EasyCache logic

* Created SuperEasyCache to test what happens if caching and reuse are moved outside the scope of conds, added PREDICT_NOISE wrapper to facilitate this test

* Change max reuse_threshold to 3.0

* Mark EasyCache/SuperEasyCache as experimental (beta)

* Make Lumina2 compatible with EasyCache

* Add EasyCache support for Qwen Image

* Fix missing comma, curse you Cursor

* Add EasyCache support to AceStep

* Add EasyCache support to Chroma

* Added EasyCache support to Cosmos Predict t2i

* Make EasyCache not crash with Cosmos Predict ImageToVideo latents, but it does not work well at all

* Add EasyCache support to hidream

* Added EasyCache support to hunyuan video

* Added EasyCache support to hunyuan3d

* Added EasyCache support to LTXV (not very good, but does not crash)

* Implemented EasyCache for aura_flow

* Renamed SuperEasyCache to LazyCache, hardcoded subsample_factor to 8 on nodes

* Extra logging when verbose is true for EasyCache
2025-08-22 22:41:08 -04:00
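The PR above spells out the ingredients of the heuristic: subsample inputs and previous outputs to keep the bookkeeping cheap, accumulate the relative change between steps, and reuse the previous model output while that accumulation stays under reuse_threshold. The sketch below is a loose reconstruction under those assumptions; the class name, the accumulation rule, and the defaults are illustrative, not the ComfyUI implementation:

    class EasyCacheLikeState:
        def __init__(self, reuse_threshold=0.2, subsample_factor=8):
            self.reuse_threshold = reuse_threshold
            self.subsample_factor = subsample_factor
            self.prev_input = None
            self.prev_output = None
            self.accumulated_change = 0.0

        def _subsample(self, t):
            # Subsampling keeps the heuristic cheap on large (video) latents.
            f = self.subsample_factor
            return t[..., ::f, ::f]

        def step(self, x, run_model):
            # x: torch tensor latent for this step; run_model: the expensive call.
            xs = self._subsample(x)
            if self.prev_output is not None:
                rel = (xs - self.prev_input).abs().mean() / (self.prev_input.abs().mean() + 1e-8)
                self.accumulated_change += rel.item()
                if self.accumulated_change < self.reuse_threshold:
                    self.prev_input = xs
                    return self.prev_output  # cheap step: reuse cached output
            out = run_model(x)               # expensive step: run the model
            self.prev_input = xs
            self.prev_output = out
            self.accumulated_change = 0.0
            return out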
contentis
fe31ad0276 Add elementwise fusions (#9495)
* Add elementwise fusions

* Add addcmul pattern to Qwen
2025-08-22 19:39:15 -04:00
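The PR body does not show the fused pattern itself, so the snippet below only illustrates the generic idea behind an addcmul fusion: a multiply-then-add chain collapses into a single fused torch.addcmul call.

    import torch

    x, a, b = torch.randn(3, 4), torch.randn(3, 4), torch.randn(3, 4)

    unfused = x + 0.5 * (a * b)                # two elementwise kernels
    fused = torch.addcmul(x, a, b, value=0.5)  # one fused kernel
    assert torch.allclose(unfused, fused, atol=1e-6)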
comfyanonymous
ff57793659 Support InstantX Qwen controlnet. (#9488) 2025-08-22 00:53:11 -04:00
comfyanonymous
f7bd5e58dd Make it easier to implement future qwen controlnets. (#9485) 2025-08-21 23:18:04 -04:00
comfyanonymous
0963493a9c Support for Qwen Diffsynth Controlnets canny and depth. (#9465)
These are not real controlnets but actually a patch on the model so they
will be treated as such.

Put them in the models/model_patches/ folder.

Use the new ModelPatchLoader and QwenImageDiffsynthControlnet nodes.
2025-08-20 22:26:37 -04:00
comfyanonymous
8d38ea3bbf Fix bf16 precision issue with qwen image embeddings. (#9441) 2025-08-20 02:58:54 -04:00
comfyanonymous
7cd2c4bd6a Qwen rotary embeddings should now match reference code. (#9437) 2025-08-20 00:45:27 -04:00
comfyanonymous
ed43784b0d WIP Qwen edit model: The diffusion model part. (#9383) 2025-08-17 16:45:39 -04:00
comfyanonymous
0f2b8525bc Qwen image model refactor. (#9375) 2025-08-16 17:51:28 -04:00
comfyanonymous
1702e6df16 Implement wan2.2 camera model. (#9357)
Use the old WanCameraImageToVideo node.
2025-08-15 17:29:58 -04:00
comfyanonymous
c308a8840a Add FluxKontextMultiReferenceLatentMethod node. (#9356)
This node is only useful if someone trains the kontext model to properly
use multiple reference images via the index method.

The default is the offset method, which feeds the multiple images as if they were stitched together into one. This method works with the current flux kontext model.
2025-08-15 15:50:39 -04:00
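Going only by the PR text above, the two methods can be pictured roughly as follows: "offset" continues the spatial coordinates of each extra reference as if the images were stitched into one canvas, while "index" keeps each reference's own coordinates but tags it with a distinct reference index. The sketch below is a conceptual illustration with an assumed 3-component (index, y, x) position-id layout; it is not the node's actual code:

    import torch

    def reference_position_ids_sketch(ref_latents, method="offset"):
        # ref_latents: list of (C, H, W) latents; returns flattened (N, 3) ids.
        ids = []
        h_offset = 0
        for n, ref in enumerate(ref_latents, start=1):
            h, w = ref.shape[-2], ref.shape[-1]
            yy, xx = torch.meshgrid(
                torch.arange(h, dtype=torch.float32),
                torch.arange(w, dtype=torch.float32),
                indexing="ij",
            )
            if method == "offset":
                # Stitched canvas: shared index, spatial coords continue on.
                ids.append(torch.stack([torch.ones(h, w), yy + h_offset, xx], dim=-1))
                h_offset += h
            else:  # "index"
                # Distinct index per reference, untouched spatial coords.
                ids.append(torch.stack([torch.full((h, w), float(n)), yy, xx], dim=-1))
        return torch.cat([i.reshape(-1, 3) for i in ids], dim=0)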
comfyanonymous
ad19a069f6 Make SLG nodes work on Qwen Image model. (#9345) 2025-08-14 23:16:01 -04:00
comfyanonymous
9df8792d4b Make last PR not crash comfy on old pytorch. (#9324) 2025-08-13 15:12:41 -04:00
contentis
3da5a07510 SDPA backend priority (#9299) 2025-08-13 14:53:27 -04:00
comfyanonymous
560d38f34c Wan2.2 fun control support. (#9292) 2025-08-12 23:26:33 -04:00