Commit Graph

108 Commits

Jedrzej Kosinski
2d13bf1c7a Made SVD work with optimized_attention_override 2025-08-28 22:45:45 -07:00
Jedrzej Kosinski
48ed71caf8 Add logs to verify optimized_attention_override is passed all the way into attention function 2025-08-28 19:43:39 -07:00
Jedrzej Kosinski
51a30c2ad7 Make sure wrap_attn doesn't make itself recurse infinitely, attempt to load SageAttention and FlashAttention if not enabled so that they can be marked as available or not, create registry for available attention 2025-08-28 18:53:20 -07:00
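The three commits above describe an override hook plus a registry of available attention backends. Below is a minimal sketch of that idea; the names `wrap_attn`, `optimized_attention_override`, `AVAILABLE_ATTENTION`, and the `transformer_options` layout are assumptions taken from the commit messages, not the actual ComfyUI source.

```python
# Hypothetical sketch based only on the commit messages above, not the real code.
import functools
import importlib

AVAILABLE_ATTENTION = {}  # backend name -> importable or not

def _probe(name, module):
    # Try to import an optional backend so it can be marked available or not.
    try:
        importlib.import_module(module)
        AVAILABLE_ATTENTION[name] = True
    except ImportError:
        AVAILABLE_ATTENTION[name] = False

_probe("sage", "sageattention")
_probe("flash", "flash_attn")

def wrap_attn(func):
    """If transformer_options carries an 'optimized_attention_override', call it
    instead of the wrapped attention function. The override receives the
    *unwrapped* function, so delegating back to it cannot recurse into wrap_attn."""
    @functools.wraps(func)
    def wrapper(*args, transformer_options=None, **kwargs):
        if transformer_options:
            override = transformer_options.get("optimized_attention_override")
            if override is not None:
                return override(func, *args, transformer_options=transformer_options, **kwargs)
        return func(*args, transformer_options=transformer_options, **kwargs)
    return wrapper

@wrap_attn
def attention_basic(q, k, v, heads, mask=None, transformer_options=None, **kwargs):
    ...  # plain attention math would go here
```

A caller would place the override under `transformer_options["optimized_attention_override"]`; when the entry is absent, the wrapped function runs unchanged.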
Jedrzej Kosinski
669b9ef8e6 Added **kwargs to all attention functions so transformer_options could potentially be passed through 2025-08-28 13:14:41 -07:00
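Sketch of why the `**kwargs` change matters, with illustrative names and a placeholder body: once every backend accepts `**kwargs`, new keyword plumbing such as `transformer_options` can be forwarded from every call site without raising `TypeError` in backends that ignore it.

```python
def attention_backend(q, k, v, heads, mask=None, **kwargs):
    # **kwargs absorbs arguments the backend does not (yet) care about,
    # e.g. transformer_options, so call sites can forward them unconditionally.
    transformer_options = kwargs.get("transformer_options", {})
    return q  # placeholder so the sketch runs; real attention math goes here

attention_backend("q", "k", "v", heads=8, transformer_options={"patches": {}})
```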
Jedrzej Kosinski
29b7990dc2 Fix memory usage issue with inspect 2025-08-27 17:55:35 -07:00
Jedrzej Kosinski
68b00e9c60 Created logging code for this branch so that it can be used to track down all the code paths where transformer_options would need to be added 2025-08-27 17:13:33 -07:00
Jedrzej Kosinski
b58db6934c Looking into a @wrap_attn decorator to look for 'optimized_attention_override' entry in transformer_options 2025-08-27 14:18:18 -07:00
comfyanonymous
9df8792d4b Make last PR not crash comfy on old pytorch. (#9324) 2025-08-13 15:12:41 -04:00
contentis
3da5a07510 SDPA backend priority (#9299) 2025-08-13 14:53:27 -04:00
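The PR title suggests influencing which scaled-dot-product-attention backend PyTorch selects. A sketch using the public PyTorch API (torch >= 2.3); whether this is how PR #9299 implements it is an assumption.

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = k = v = torch.randn(1, 8, 128, 64)

# Only the listed backends may be chosen inside the context manager; recent
# PyTorch releases can also treat the list order as a priority.
with sdpa_kernel([SDPBackend.FLASH_ATTENTION,
                  SDPBackend.EFFICIENT_ATTENTION,
                  SDPBackend.MATH]):
    out = F.scaled_dot_product_attention(q, k, v)
```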
Kohaku-Blueleaf
520eb77b72 LoRA Trainer: LoRA training node in weight adapter scheme (#8446) 2025-06-13 19:25:59 -04:00
comfyanonymous
5a87757ef9 Better error if sageattention is installed but a dependency is missing. (#8264) 2025-05-24 06:43:12 -04:00
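One way to produce that better error, sketched as an assumed pattern rather than the exact message from the commit: distinguish "sageattention is not installed" from "sageattention is installed but one of its own imports failed".

```python
# Sketch: ImportError.name identifies which module actually failed to import.
try:
    import sageattention
except ImportError as e:
    if e.name == "sageattention":
        sageattention = None  # simply not installed; callers can fall back
    else:
        raise RuntimeError(
            f"sageattention is installed but its dependency '{e.name}' failed to "
            "import; reinstalling sageattention (and e.g. triton) may fix this."
        ) from e
```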
Raphael Walker
89e4ea0175 Add activations_shape info in UNet models (#7482)
* Add activations_shape info in UNet models

* activations_shape should be a list
2025-04-04 21:27:54 -04:00
comfyanonymous
e471c726e5 Fallback to pytorch attention if sage attention fails. 2025-03-22 15:45:56 -04:00
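A sketch of the fallback itself, under the assumption that the Sage entry point is `sageattention.sageattn` and that falling back means calling PyTorch SDPA; layout and dtype handling are omitted.

```python
import logging
import torch
import torch.nn.functional as F

def attention_with_fallback(q, k, v, mask=None):
    if mask is None:  # keep the sketch simple: only the unmasked path tries Sage
        try:
            from sageattention import sageattn
            return sageattn(q, k, v)  # layout/dtype handling omitted in this sketch
        except Exception as e:
            logging.warning("SageAttention failed (%s), falling back to PyTorch attention.", e)
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```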
FeepingCreature
9c98c6358b Tolerate missing @torch.library.custom_op (#7234)
This can happen on Pytorch versions older than 2.4.
2025-03-14 09:51:26 -04:00
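`torch.library.custom_op` only exists from PyTorch 2.4, so tolerating its absence usually means substituting a no-op decorator. A sketch of that pattern; whether the PR does exactly this is an assumption, and the op name below is illustrative.

```python
import torch

if hasattr(torch.library, "custom_op"):
    custom_op = torch.library.custom_op
else:
    # PyTorch < 2.4: a decorator factory that leaves the function untouched.
    def custom_op(name, *args, **kwargs):
        def decorator(func):
            return func
        return decorator

@custom_op("mylib::wrapped_attention", mutates_args=())
def wrapped_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    return torch.nn.functional.scaled_dot_product_attention(q, k, v)
```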
FeepingCreature
7aceb9f91c Add --use-flash-attention flag. (#7223)
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
2025-03-14 03:22:41 -04:00
comfyanonymous
96d891cb94 Speedup on some models by not upcasting bfloat16 to float32 on mac. 2025-02-24 05:41:32 -05:00
comfyanonymous
aff16532d4 Remove some useless code. 2025-02-22 04:45:14 -05:00
Dr.Lt.Data
0a0df5f136 better guide message for sageattention (#6634) 2025-02-02 09:26:47 -05:00
comfyanonymous
129d8908f7 Add argument to skip the output reshaping in the attention functions. 2025-01-10 06:27:37 -05:00
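A sketch of what such an argument buys callers, with an assumed parameter name (`skip_output_reshape`) and a simplified SDPA body: callers that want the (batch, heads, seq, head_dim) layout avoid an extra transpose and reshape.

```python
import torch
import torch.nn.functional as F

def attention_sdpa(q, k, v, heads, mask=None, skip_output_reshape=False):
    b, seq, _ = q.shape
    dim_head = q.shape[-1] // heads
    q, k, v = (t.view(b, seq, heads, dim_head).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
    if skip_output_reshape:
        return out  # (b, heads, seq, dim_head): saves a transpose+reshape for callers
    return out.transpose(1, 2).reshape(b, seq, heads * dim_head)

q = k = v = torch.randn(2, 16, 64)
print(attention_sdpa(q, k, v, heads=8).shape)                            # (2, 16, 64)
print(attention_sdpa(q, k, v, heads=8, skip_output_reshape=True).shape)  # (2, 8, 16, 8)
```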
comfyanonymous
37e5390f5f Add: --use-sage-attention to enable SageAttention.
You need to have the library installed first.
2024-12-18 01:56:10 -05:00
comfyanonymous
19ee5d9d8b Don't expand mask when not necessary.
Expanding seems to slow down inference.
2024-12-16 18:22:50 -05:00
Raphael Walker
61b50720d0 Add support for attention masking in Flux (#5942)
* fix attention OOM in xformers

* allow passing attention mask in flux attention

* allow an attn_mask in flux

* attn masks can be done using replace patches instead of a separate dict

* fix return types

* fix return order

* enumerate

* patch the right keys

* arg names

* fix a silly bug

* fix xformers masks

* replace match with if, elif, else

* mask with image_ref_size

* remove unused import

* remove unused import 2

* fix pytorch/xformers attention

This corrects a weird inconsistency with skip_reshape.
It also allows masks of various shapes to be passed, which will be
automatically expanded (in a memory-efficient way) to a size that is
compatible with xformers or pytorch sdpa respectively.

* fix mask shapes
2024-12-16 18:21:17 -05:00
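The last point above describes expanding variously shaped masks "in a memory-efficient way". A sketch of what that likely means, assuming it refers to broadcasting for PyTorch SDPA and to `torch.Tensor.expand` returning a view rather than a copy; the actual PR code is not reproduced here.

```python
import torch
import torch.nn.functional as F

B, H, Lq, Lk = 4, 8, 16, 16
q = k = v = torch.randn(B, H, Lq, 64)

# A small mask can stay small for backends that broadcast (PyTorch SDPA)...
mask = torch.zeros(1, 1, Lq, Lk, dtype=torch.bool)
mask[..., : Lk // 2] = True
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

# ...and when a backend wants an explicit (B, H, Lq, Lk) mask, expand() gives a
# broadcasted view over the same storage instead of materializing B*H copies.
expanded = mask.expand(B, H, Lq, Lk)
print(expanded.shape, expanded.data_ptr() == mask.data_ptr())  # same storage -> no copy
```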
Chenlei Hu
d9d7f3c619 Lint all unused variables (#5989)
* Enable F841

* Autofix

* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
comfyanonymous
2fd9c1308a Fix mask issue in some attention functions. 2024-11-22 02:10:09 -05:00
comfyanonymous
07f6eeaa13 Fix mask issue with attention_xformers. 2024-11-20 17:07:46 -05:00
comfyanonymous
fabf449feb Mochi VAE encoder. 2024-11-01 17:33:09 -04:00
comfyanonymous
33fb282d5c Fix issue. 2024-08-14 02:51:47 -04:00
comfyanonymous
bb1969cab7 Initial support for the stable audio open model. 2024-06-15 12:14:56 -04:00
comfyanonymous
0920e0e5fe Remove some unused imports. 2024-05-27 19:08:27 -04:00
comfyanonymous
8508df2569 Work around black image bug on Mac 14.5 by forcing attention upcasting. 2024-05-21 16:56:33 -04:00
comfyanonymous
83d969e397 Disable xformers when tracing model. 2024-05-21 13:55:49 -04:00
comfyanonymous
1900e5119f Fix potential issue. 2024-05-20 08:19:54 -04:00
comfyanonymous
0bdc2b15c7 Cleanup. 2024-05-18 10:11:44 -04:00
comfyanonymous
98f828fad9 Remove unnecessary code. 2024-05-18 09:36:44 -04:00
comfyanonymous
46daf0a9a7 Add debug options to force on and off attention upcasting. 2024-05-16 04:09:41 -04:00
comfyanonymous
ec6f16adb6 Fix SAG. 2024-05-14 18:02:27 -04:00
comfyanonymous
bb4940d837 Only enable attention upcasting on models that actually need it. 2024-05-14 17:00:50 -04:00
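Attention upcasting here means doing the attention math in float32 for models whose fp16 attention overflows; gating it per model avoids paying the cost everywhere. A minimal sketch of the conditional upcast, with an illustrative flag name:

```python
import torch
import torch.nn.functional as F

def attention_maybe_upcast(q, k, v, mask=None, upcast_attention=False):
    # Only models known to overflow in fp16 pay for the float32 round trip.
    if upcast_attention and q.dtype in (torch.float16, torch.bfloat16):
        out = F.scaled_dot_product_attention(q.float(), k.float(), v.float(), attn_mask=mask)
        return out.to(q.dtype)
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```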
comfyanonymous
b0ab31d06c Refactor attention upcasting code part 1. 2024-05-14 12:47:31 -04:00
comfyanonymous
2aed53c4ac Workaround xformers bug. 2024-04-30 21:23:40 -04:00
comfyanonymous
2a813c3b09 Switch some more prints to logging. 2024-03-11 16:34:58 -04:00
comfyanonymous
6bcf57ff10 Fix attention masks properly for multiple batches. 2024-02-17 16:15:18 -05:00
comfyanonymous
f8706546f3 Fix attention mask batch size in some attention functions. 2024-02-17 15:22:21 -05:00
comfyanonymous
3b9969c1c5 Properly fix attention masks in CLIP with batches. 2024-02-17 12:13:13 -05:00
comfyanonymous
89507f8adf Remove some unused imports. 2024-01-25 23:42:37 -05:00
comfyanonymous
6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 2024-01-09 13:46:52 -05:00
comfyanonymous
c6951548cf Update optimized_attention_for_device function for new functions that
support masked attention.
2024-01-07 13:52:08 -05:00
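`optimized_attention_for_device` (named in the commit message) evidently picks an implementation per device and per feature need. A hypothetical sketch of such a selector; the stand-in implementations and the selection logic are simplified assumptions.

```python
import torch
import torch.nn.functional as F

def attention_pytorch(q, k, v, mask=None):
    # Simplified stand-in for the SDPA-based implementation.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

def attention_basic(q, k, v, mask=None):
    # Simplified stand-in for the plain matmul+softmax implementation.
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    if mask is not None:
        scores = scores.masked_fill(~mask, float("-inf"))
    return scores.softmax(dim=-1) @ v

def optimized_attention_for_device(device, mask=False):
    # Both candidates above now accept a mask, so the selector no longer needs
    # to hand out a separate masked-only implementation.
    if hasattr(F, "scaled_dot_product_attention"):
        return attention_pytorch
    return attention_basic
```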
comfyanonymous
aaa9017302 Add attention mask support to sub quad attention. 2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa Support attention mask in split attention. 2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb Implement attention mask on xformers. 2024-01-06 04:33:03 -05:00
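For xformers the mask travels as `attn_bias`; a sketch using the public `xformers.ops.memory_efficient_attention` API, with the reshaping that the real function performs omitted.

```python
# Requires the optional xformers package and a CUDA device;
# q, k, v are shaped (batch, seq, heads, dim_head).
import torch
import xformers.ops

B, L, H, D = 1, 16, 8, 64
q = k = v = torch.randn(B, L, H, D, device="cuda", dtype=torch.float16)

# An additive bias: 0 where attention is allowed, -inf where it is masked out.
attn_bias = torch.zeros(B, H, L, L, device=q.device, dtype=q.dtype)
attn_bias[..., L // 2:] = float("-inf")  # e.g. hide the second half of the keys

out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=attn_bias)
print(out.shape)  # (B, L, H, D)
```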
comfyanonymous
a5056cfb1f Remove useless code. 2023-12-15 01:28:16 -05:00