Only enable attention upcasting on models that actually need it.

This commit is contained in:
comfyanonymous
2024-05-14 15:18:00 -04:00
parent b0ab31d06c
commit bb4940d837
5 changed files with 27 additions and 24 deletions

@@ -207,12 +207,6 @@ To use a textual inversion concepts/embeddings in a text prompt put them in the
```embedding:embedding_filename.pt```
## How to increase generation speed?
On non-Nvidia hardware you can set this command line option to disable the upcasting to fp32 in some cross-attention operations, which will increase your speed. Note that this will very likely give you black images on SD2.x models. If you use xformers or PyTorch attention this option has no effect.
```--dont-upcast-attention```
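The upcasting this flag disables can be sketched roughly as follows. This is a minimal illustration of the general technique, not ComfyUI's actual attention code: the score matmul and softmax are computed in fp32 even when the model weights are fp16, to avoid the overflow that can otherwise produce black images.

```python
import numpy as np

def attention_scores(q, k, upcast=True):
    # Illustrative only: upcast to fp32 for the matmul/softmax to avoid
    # fp16 overflow, then cast back to the input dtype. Disabling the
    # upcast (upcast=False) is faster but can overflow on some models.
    dtype = q.dtype
    if upcast:
        q, k = q.astype(np.float32), k.astype(np.float32)
    s = q @ k.T / np.sqrt(q.shape[-1])
    s = np.exp(s - s.max(axis=-1, keepdims=True))  # numerically stable softmax
    return (s / s.sum(axis=-1, keepdims=True)).astype(dtype)
```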
## How to show high-quality previews?
Use ```--preview-method auto``` to enable previews.