* Added initial support for basic context windows - in progress
* Add prepare_sampling wrapper for context windows to more accurately estimate latent memory requirements; fix merging of wrappers/callbacks dicts in prepare_model_patcher
* Made context windows compatible with different dimensions; works for WAN, but results are bad
* Fix comfy.patcher_extension.merge_nested_dicts calls in prepare_model_patcher in sampler_helpers.py
* Considering adding callbacks to the context window code so behavior can be extended without rewriting core code
* Made dim slicing cleaner
* Add Wan Context Windows node for testing
* Made context schedule and fuse method functions live on the handler instead of requiring registration in core code to be found (see the sketch below)
* Moved some code around between node_context_windows.py and context_windows.py
* Change manual context window node names/IDs
* Added callbacks to IndexListContextHandler
* Adjusted default values for context_length and context_overlap, and made the schema.inputs definition for WAN Context Windows less annoying
* Make get_resized_cond more robust for various dim sizes
* Fix typo
* Another small fix
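
Several of the commits above converge on one design: each context handler owns its schedule and fuse functions plus its callback hooks, so core code needs no registry to find them. A minimal sketch of that shape; every name and signature here is illustrative, not the actual context_windows.py API:

```python
# Illustrative sketch only: the handler owns its schedule/fuse functions
# and callback hooks, so no central registration is needed. Names and
# signatures are hypothetical, not the actual context_windows.py API.
from typing import Callable, Dict, List

class IndexListContextHandler:
    def __init__(self, schedule_func: Callable, fuse_func: Callable,
                 context_length: int = 16, context_overlap: int = 4):
        self.schedule_func = schedule_func      # produces index windows
        self.fuse_func = fuse_func              # merges overlapping outputs
        self.context_length = context_length
        self.context_overlap = context_overlap
        self.callbacks: Dict[str, List[Callable]] = {}  # extension hooks

    def add_callback(self, event: str, cb: Callable):
        self.callbacks.setdefault(event, []).append(cb)

    def get_windows(self, num_frames: int) -> List[List[int]]:
        return self.schedule_func(
            num_frames, self.context_length, self.context_overlap)

def uniform_schedule(num_frames: int, length: int, overlap: int):
    # Sliding windows over the frame indices, stepping by
    # (length - overlap); tail handling omitted for brevity.
    step = max(length - overlap, 1)
    return [list(range(s, min(s + length, num_frames)))
            for s in range(0, max(num_frames - length, 0) + 1, step)]
```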
* Change the bf16 check, switch non-blocking off by default (with an option to force it on to regain speed on certain classes of iGPUs; see the sketch below), and refactor the xpu check.
* Turn non_blocking off by default for xpu.
* Update README.md for Intel GPUs.
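
For context on the non-blocking change: `non_blocking=True` lets host-to-device copies overlap with compute, which is faster where it works but misbehaves on some iGPU/XPU setups. A hedged sketch of what the toggle controls; `force_non_blocking` is an illustrative name, not the repo's actual flag:

```python
import torch

# Sketch of the trade-off: non_blocking=True overlaps the copy with
# compute (faster when it works), while the new default of False issues
# a plain synchronous copy that is safer on the affected iGPUs/XPU.
# `force_non_blocking` is a hypothetical name for the opt-in.
def cast_to_device(tensor: torch.Tensor, device, force_non_blocking: bool = False):
    return tensor.to(device, non_blocking=force_non_blocking)
```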
* Add factorization utils for lokr (see the sketch after this group)
* Add lokr train impl
* Add loha train impl
* Add adapter map for algo selection
* Add optional grad ckpt and algo selection
* Update __init__.py
* Correct key name for loha
* Use custom fwd/bwd func and better init for loha
* Support gradient accumulation
* Fix bugs in loha
* Use more stable init
* Add OFT training
* linting
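
For the LoKr commits above: the factorization utility's job is to split each weight dimension into two factors so the weight delta can be learned as a Kronecker product of two small matrices. A rough sketch of the idea, with hypothetical names in the spirit of the LyCORIS-style helper; the repo's actual implementation may differ:

```python
# Hedged sketch of a LoKr-style factorization utility: split a dimension
# n into two factors (a, b) with a * b == n. Names and exact behavior
# are illustrative, not the repo's actual code.
import torch

def factorization(n: int, factor: int = -1) -> tuple[int, int]:
    if factor > 0 and n % factor == 0:
        return factor, n // factor
    # Otherwise pick the most "square" factor pair.
    a = 1
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            a = i
    return a, n // a

# LoKr then learns the weight delta as a Kronecker product of two small
# factor matrices (the larger one optionally low-rank):
out_a, out_b = factorization(1024)
in_a, in_b = factorization(768)
w1 = torch.randn(out_a, in_a)
w2 = torch.randn(out_b, in_b)
delta_w = torch.kron(w1, w2)  # shape (1024, 768)
```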
* Support for async execution functions
This commit adds support for node execution functions defined as async. While
an async node is processing, other nodes can continue executing.
Standard uses of `await` should "just work", but people will still have
to be careful if they spawn actual threads. Because torch doesn't really
have async/await versions of functions, this won't particularly help
with most locally-executing nodes, but it does work for e.g. web
requests to other machines.
In addition to the execute function, the `VALIDATE_INPUTS` and
`check_lazy_status` functions can also be defined as async, though we'll
only resolve one node at a time right now for those.
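
As a concrete illustration, here is a minimal sketch of an async custom node. The class layout follows the standard custom-node convention, but the node itself, its input, and the aiohttp call are assumptions for this example, not code from the commit:

```python
# Minimal sketch of an async custom node. The node name, URL input, and
# use of aiohttp are hypothetical; only the `async def` execute function
# reflects the feature this commit adds.
import aiohttp

class RemoteTextFetch:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"url": ("STRING", {"default": "http://example.com/prompt.txt"})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "execute"
    CATEGORY = "examples"

    # Because execute is declared with `async def`, the executor can keep
    # running other nodes while this request is in flight. Standard
    # `await` usage works as-is; spawning raw threads still needs care.
    async def execute(self, url):
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as resp:
                text = await resp.text()
        return (text,)
```

`VALIDATE_INPUTS` and `check_lazy_status` can be declared `async def` the same way, subject to the one-node-at-a-time caveat above.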
* Add the execution model tests to CI
* Add a missing file
It looks like this got caught by .gitignore? There's probably a better
place to put it, but I'm not sure what that is.
* Add the websocket library for automated tests
* Add additional tests for async error cases
Also fixes a bug found when an async function raises an error after
being scheduled as a task.
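
The class of bug in question is a standard asyncio pitfall: an exception raised inside a scheduled task only surfaces when the task is awaited (or its result retrieved). A generic illustration, not the executor's actual code:

```python
import asyncio

async def failing_node():
    raise RuntimeError("node blew up")

async def main():
    task = asyncio.create_task(failing_node())
    await asyncio.sleep(0)     # the task runs and fails here...
    try:
        await task             # ...but the error only surfaces once awaited;
    except RuntimeError as e:  # an executor that never awaits the task
        print(f"caught: {e}")  # would silently swallow the failure

asyncio.run(main())
```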
* Add a feature flags message to reduce bandwidth
We now send only one preview message, in the latest format the client can
support.
At some point in the future, we'll add a console warning when the client
fails to send a feature flags message.
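
A hedged sketch of what the client side of this handshake might look like, using the websocket-client library mentioned above; the message shape and flag name here are assumptions, not the exact protocol:

```python
# Hedged sketch of the client side of a feature-flags handshake. The
# "type"/"data" message shape and the flag name are assumptions for
# illustration, not the exact protocol.
import json
import websocket  # websocket-client library

ws = websocket.WebSocket()
ws.connect("ws://127.0.0.1:8188/ws?clientId=example")
# Announce which preview formats this client understands so the server
# can send a single preview message in the newest supported format.
ws.send(json.dumps({
    "type": "feature_flags",
    "data": {"supports_preview_metadata": True},
}))
```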
* Add async tests to CI
* Don't actually add new tests in this PR
Will do it in a separate PR
* Resolve unit test in GPU-less runner
* Just remove the tests that GHA can't handle
* Change line endings to UNIX-style
* Avoid loading model_management.py so early
Because model_management.py has a top-level `logging.info`, we have to
be careful not to import that file before we call `setup_logging`. If we
do, the default logging handler ends up registered in addition to our
custom one.
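
The underlying Python behavior: the module-level `logging.info()` convenience function implicitly calls `logging.basicConfig()` when the root logger has no handlers, attaching the default `StreamHandler`. A minimal reproduction of the duplicate-handler effect (names illustrative):

```python
import logging

# Stands in for importing model_management.py too early: a module-level
# logging.info() implicitly calls logging.basicConfig() when the root
# logger has no handlers, registering the default StreamHandler (the
# message itself may not print at the default WARNING level, but the
# handler sticks around).
logging.info("pytorch version: ...")

def setup_logging():
    # Our custom handler now sits *alongside* the default one.
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(logging.INFO)

setup_logging()
logging.info("hello")  # emitted by both handlers -> duplicate lines
```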