500 Commits

Author SHA1 Message Date
comfyanonymous
1a4bd9e9a6 Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
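For context, a minimal sketch of the pattern this refactor describes, not ComfyUI's actual code: the projections live in a single CrossAttention class and only the core attention operation is pluggable, instead of repeating the whole class per backend. All names here (attention_basic, attn_op) are hypothetical.

```python
# Minimal sketch of the refactor idea; names are illustrative, not ComfyUI's internals.
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_basic(q, k, v, heads):
    # Plain scaled-dot-product attention; other backends could be dropped in here.
    b, n, dim = q.shape
    d = dim // heads
    q, k, v = (t.reshape(b, -1, heads, d).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).reshape(b, n, dim)

class CrossAttention(nn.Module):
    def __init__(self, query_dim, context_dim, heads=8, dim_head=64, attn_op=attention_basic):
        super().__init__()
        inner_dim = heads * dim_head
        self.heads = heads
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_out = nn.Linear(inner_dim, query_dim)
        self.attn_op = attn_op  # only this operation differs between implementations

    def forward(self, x, context=None):
        context = x if context is None else context
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        return self.to_out(self.attn_op(q, k, v, self.heads))
```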
comfyanonymous
8cc75c64ff Let unet wrapper functions have .to attributes. 2023-10-11 01:34:38 -04:00
comfyanonymous
5e885bd9c8 Cleanup. 2023-10-10 21:46:53 -04:00
comfyanonymous
851bb87ca9 Merge branch 'taesd_safetensors' of https://github.com/mochiya98/ComfyUI 2023-10-10 21:42:35 -04:00
Yukimasa Funaoka
9eb621c95a Supports TAESD models in safetensors format 2023-10-10 13:21:44 +09:00
comfyanonymous
d1a0abd40b Merge branch 'input-directory' of https://github.com/jn-jairo/ComfyUI 2023-10-09 01:53:29 -04:00
comfyanonymous
72188dffc3 load_checkpoint_guess_config can now optionally output the model. 2023-10-06 13:48:18 -04:00
Jairo Correa
63e5fd1790 Add option to set the input directory 2023-10-04 19:45:15 -03:00
City
9bfec2bdbf Fix quality loss due to low precision 2023-10-04 15:40:59 +02:00
badayvedat
0f17993d05 fix: typo in extra sampler 2023-09-29 06:09:59 +03:00
comfyanonymous
66756de100 Add SamplerDPMPP_2M_SDE node. 2023-09-28 21:56:23 -04:00
comfyanonymous
71713888c4 Print missing VAE keys. 2023-09-28 00:54:57 -04:00
comfyanonymous
d234ca558a Add missing samplers to KSamplerSelect. 2023-09-28 00:17:03 -04:00
comfyanonymous
1adcc4c3a2 Add a SamplerCustom Node.
This node takes a list of sigmas and a sampler object as input.

This lets people easily implement custom schedulers and samplers as nodes.

More nodes will be added to it in the future.
2023-09-27 22:21:18 -04:00
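A hedged sketch of the kind of custom scheduler node this enables, written in the standard ComfyUI custom-node style (INPUT_TYPES, RETURN_TYPES, FUNCTION); the node name, the "SIGMAS" output convention, and the log-linear schedule are assumptions for illustration, not taken from the commit.

```python
# Hypothetical custom scheduler node; the SIGMAS output type and the schedule
# formula below are assumptions for illustration only.
import math
import torch

class ExampleLogLinearScheduler:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "steps": ("INT", {"default": 20, "min": 1, "max": 1000}),
            "sigma_max": ("FLOAT", {"default": 14.6, "min": 0.01}),
            "sigma_min": ("FLOAT", {"default": 0.03, "min": 0.01}),
        }}

    RETURN_TYPES = ("SIGMAS",)
    FUNCTION = "get_sigmas"
    CATEGORY = "sampling/custom_sampling"

    def get_sigmas(self, steps, sigma_max, sigma_min):
        # Evenly spaced in log-sigma, with a trailing 0.0 as the final step.
        sigmas = torch.exp(torch.linspace(math.log(sigma_max), math.log(sigma_min), steps))
        return (torch.cat([sigmas, torch.zeros(1)]),)

NODE_CLASS_MAPPINGS = {"ExampleLogLinearScheduler": ExampleLogLinearScheduler}
```

The sigmas output would then feed the SamplerCustom node alongside a sampler object, for example one produced by KSamplerSelect.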
comfyanonymous
bf3fc2f1b7 Refactor sampling related code. 2023-09-27 16:45:22 -04:00
comfyanonymous
fff491b032 Model patches can now know which batch is positive and negative. 2023-09-27 12:04:07 -04:00
comfyanonymous
1d6dd83184 Scheduler code refactor. 2023-09-26 17:07:07 -04:00
comfyanonymous
446caf711c Sampling code refactor. 2023-09-26 13:45:15 -04:00
comfyanonymous
76cdc809bf Support more controlnet models. 2023-09-23 18:47:46 -04:00
comfyanonymous
ae87543653 Merge branch 'cast_intel' of https://github.com/simonlui/ComfyUI 2023-09-23 00:57:17 -04:00
Simon Lui
eec449ca8e Allow Intel GPUs to LoRA cast on GPU since they support BF16 natively. 2023-09-22 21:11:27 -07:00
comfyanonymous
afa2399f79 Add a way to set output block patches to modify the h and hsp. 2023-09-22 20:26:47 -04:00
comfyanonymous
492db2de8d Allow having a different pooled output for each image in a batch. 2023-09-21 01:14:42 -04:00
comfyanonymous
1cdfb3dba4 Only do the cast on the device if the device supports it. 2023-09-20 17:52:41 -04:00
comfyanonymous
7c9a92f552 Don't depend on torchvision. 2023-09-19 13:12:47 -04:00
MoonRide303
2b6b178173 Added support for Lanczos scaling 2023-09-19 10:40:38 +02:00
comfyanonymous
b92bf8196e Do lora cast on GPU instead of CPU for higher performance. 2023-09-18 23:04:49 -04:00
comfyanonymous
321c5fa295 Enable pytorch attention by default on xpu. 2023-09-17 04:09:19 -04:00
comfyanonymous
61b1f67734 Support models without previews. 2023-09-16 12:59:54 -04:00
comfyanonymous
43d4935a1d Add cond_or_uncond array to transformer_options so hooks can check what is
cond and what is uncond.
2023-09-15 22:21:14 -04:00
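A rough sketch of how a hook might consult this array; the patch signature and the meaning of the values (assumed here: 0 marks cond, 1 marks uncond) are assumptions for illustration only.

```python
# Illustrative only: a hypothetical patch reading transformer_options["cond_or_uncond"]
# to tell which chunks of the batch are conditional vs. unconditional.
def example_patch(h, transformer_options):
    cond_or_uncond = transformer_options.get("cond_or_uncond", [])
    if not cond_or_uncond:
        return h
    chunk = h.shape[0] // len(cond_or_uncond)
    for i, kind in enumerate(cond_or_uncond):
        if kind == 1:  # assumed convention: 1 marks the uncond part of the batch
            # apply uncond-only behaviour to h[i * chunk:(i + 1) * chunk] here
            pass
    return h
```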
comfyanonymous
415abb275f Add DDPM sampler. 2023-09-15 19:22:47 -04:00
comfyanonymous
94e4fe39d8 This isn't used anywhere. 2023-09-15 12:03:03 -04:00
comfyanonymous
44361f6344 Support for text encoder models that need attention_mask. 2023-09-15 02:02:05 -04:00
comfyanonymous
0d8f376446 Setting the last layer on SD2.x models now uses the proper indexes.
Before, I had made the last layer the penultimate layer because some
checkpoints don't have it, but that's not consistent with the other models.

TLDR: for SD2.x models only: CLIPSetLastLayer -1 is now -2.
2023-09-14 20:28:22 -04:00
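Purely as an illustration of the index convention after this change (names hypothetical, not ComfyUI's internals):

```python
# Illustration of the "set last layer" index convention after this change.
def select_clip_hidden_state(hidden_states, stop_at_clip_layer):
    # hidden_states: per-layer outputs of the text encoder.
    # -1 selects the final layer, -2 the penultimate layer, on all model types.
    return hidden_states[stop_at_clip_layer]

# Workflows that relied on -1 giving the penultimate layer on SD2.x
# should now use -2 to get the same result.
```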
comfyanonymous
0966d3ce82 Don't run text encoders on xpu because there are issues. 2023-09-14 12:16:07 -04:00
comfyanonymous
3039b08eb1 Only parse command line args when main.py is called. 2023-09-13 11:38:20 -04:00
comfyanonymous
ed58730658 Don't leave very large hidden states in the clip vision output. 2023-09-12 15:09:10 -04:00
comfyanonymous
fb3b728203 Fix issue where autocast fp32 CLIP gave different results from regular. 2023-09-11 21:49:56 -04:00
comfyanonymous
7d401ed1d0 Add ldm format support to UNETLoader. 2023-09-11 16:36:50 -04:00
comfyanonymous
e85be36bd2 Add a penultimate_hidden_states to the clip vision output. 2023-09-08 14:06:58 -04:00
comfyanonymous
1e6b67101c Support diffusers format t2i adapters. 2023-09-08 11:36:51 -04:00
comfyanonymous
326577d04c Allow cancelling of everything with a progress bar. 2023-09-07 23:37:03 -04:00
comfyanonymous
f88f7f413a Add a ConditioningSetAreaPercentage node. 2023-09-06 03:28:27 -04:00
comfyanonymous
1938f5c5fe Add a force argument to soft_empty_cache to force a cache empty. 2023-09-04 00:58:18 -04:00
comfyanonymous
7746bdf7b0 Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI 2023-09-04 00:43:11 -04:00
Simon Lui
2da73b7073 Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. 2023-09-02 20:07:52 -07:00
comfyanonymous
a74c5dbf37 Move some functions to utils.py 2023-09-02 22:33:37 -04:00
Simon Lui
4a0c4ce4ef Some fixes to generalize CUDA specific functionality to Intel or other GPUs. 2023-09-02 18:22:10 -07:00
comfyanonymous
77a176f9e0 Use a common function to reshape the batch. 2023-09-02 03:42:49 -04:00
comfyanonymous
7931ff0fd9 Support SDXL inpaint models. 2023-09-01 15:22:52 -04:00