## Adding support for new models
Here are a few tips to make reviewing your PR easier for us:
- Have a minimal implementation of the model code that only depends on PyTorch, under a license compatible with the GPL license that ComfyUI uses.
- Provide a reference image with sampling settings/seed/etc. so that I can make sure the ComfyUI implementation matches the reference one.
- Replace all attention functions with ComfyUI's `optimized_attention` function (see the example and usage sketch below).
- If possible, please release your primary models in the `.safetensors` file format (a minimal export sketch follows this list).
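
For the `.safetensors` point, here is a minimal sketch of exporting a model's weights with the `safetensors` library; the model and filename below are placeholders, not anything defined by ComfyUI:

```python
import torch
from safetensors.torch import save_file

# Placeholder model standing in for the model you want to release
model = torch.nn.Linear(16, 16)

# save_file expects a flat {name: tensor} dict of contiguous tensors
state_dict = {k: v.contiguous() for k, v in model.state_dict().items()}
save_file(state_dict, "my_model.safetensors")
```
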
Example of the SDPA implementation of `optimized_attention`:
```python
import torch


def optimized_attention(q, k, v, heads, mask=None, attn_precision=None, skip_reshape=False):
    if skip_reshape:
        # q, k, v already have shape (batch, heads, tokens, dim_head)
        b, _, _, dim_head = q.shape
    else:
        # q, k, v have shape (batch, tokens, heads * dim_head): split out the heads
        b, _, dim_head = q.shape
        dim_head //= heads
        q, k, v = map(
            lambda t: t.view(b, -1, heads, dim_head).transpose(1, 2),
            (q, k, v),
        )

    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
    # merge the heads back into (batch, tokens, heads * dim_head)
    out = (
        out.transpose(1, 2).reshape(b, -1, heads * dim_head)
    )
    return out
```
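
For illustration, here is a minimal sketch of how a new model's attention layer could route through `optimized_attention`. The `SelfAttention` module is hypothetical; the import path assumes `comfy.ldm.modules.attention`, which is where ComfyUI's model code imports `optimized_attention` from:

```python
import torch.nn as nn

from comfy.ldm.modules.attention import optimized_attention


class SelfAttention(nn.Module):
    """Hypothetical attention block routed through optimized_attention."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x, mask=None):
        # x: (batch, tokens, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # optimized_attention splits and merges the heads internally
        out = optimized_attention(q, k, v, heads=self.heads, mask=mask)
        return self.to_out(out)
```
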
Some examples in existing ComfyUI model implementations:

- [Audio](https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/ldm/audio/dit.py#L369)
- [Cascade](https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/ldm/cascade/common.py#L47)
- [mmdit](https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/ldm/modules/diffusionmodules/mmdit.py#L293)