* Added the parameter required_frontend_version to the /system_stats API response
* Update server.py
* Created a function get_required_frontend_version and wrote tests for it
* Refactored the function to return the currently installed frontend package version
* Moved required_frontend to a new function and imported that in server.py
* Corrected test cases using mocking techniques
* Corrected files to comply with ruff formatting
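A minimal sketch of what such a version helper can look like, assuming `importlib.metadata` and a hypothetical package name; the actual function and package names in server.py may differ:

```python
# Hypothetical sketch: look up the installed frontend package version so the
# /system_stats response can include required_frontend_version.
from importlib.metadata import PackageNotFoundError, version


def get_installed_frontend_version(package_name: str = "comfyui-frontend-package"):
    """Return the installed frontend package's version string, or None if absent."""
    try:
        return version(package_name)
    except PackageNotFoundError:
        return None


# In the /system_stats handler, the value would be added to the response dict:
# stats["system"]["required_frontend_version"] = get_installed_frontend_version()
```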
This makes it easier to write asynchronous clients that submit requests, because they can store the task immediately.
Duplicate prompt IDs are rejected by the job queue.
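A rough sketch of the client-side pattern this enables, assuming the client may supply its own prompt ID in the /prompt request body (the field names and the rejection status code here are assumptions):

```python
import uuid

import aiohttp


async def submit_prompt(session: aiohttp.ClientSession, workflow: dict, pending: dict):
    # Generate the ID client-side so the task can be stored before the server replies.
    prompt_id = str(uuid.uuid4())
    pending[prompt_id] = {"workflow": workflow, "status": "submitted"}
    async with session.post(
        "http://127.0.0.1:8188/prompt",
        json={"prompt": workflow, "prompt_id": prompt_id},
    ) as resp:
        if resp.status >= 400:  # e.g. the queue rejected a duplicate prompt ID
            pending.pop(prompt_id, None)
        return await resp.json()
```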
* Support for async execution functions
This commit adds support for node execution functions defined as async. When
a node's execution function is defined as async, we can continue
executing other nodes while it is processing.
Standard uses of `await` should "just work", but people will still have
to be careful if they spawn actual threads. Because torch doesn't really
have async/await versions of functions, this won't particularly help
with most locally-executing nodes, but it does work for e.g. web
requests to other machines.
In addition to the execute function, the `VALIDATE_INPUTS` and
`check_lazy_status` functions can also be defined as async, though we'll
only resolve one node at a time right now for those.
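As an illustration (this node is made up, not part of the commit), an async execution function that awaits a web request while the rest of the graph keeps running:

```python
import aiohttp


class RemoteTextNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"url": ("STRING", {"default": "http://example.com"})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "fetch"
    CATEGORY = "examples"

    async def fetch(self, url):
        # While this await is pending, the executor can run other nodes.
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as resp:
                text = await resp.text()
        return (text,)
```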
* Add the execution model tests to CI
* Add a missing file
It looks like this got caught by .gitignore? There's probably a better
place to put it, but I'm not sure what that is.
* Add the websocket library for automated tests
* Add additional tests for async error cases
Also fixes one bug that was found when an async function throws an error
after being scheduled on a task.
* Add a feature flags message to reduce bandwidth
We now send only one preview message, using the latest preview type the
client can support.
We'll add a console warning when the client fails to send a feature
flags message at some point in the future.
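Roughly, the handshake looks like the sketch below; the message type and flag names here are assumptions, not the final protocol:

```python
import json

# Client -> server, sent once right after the websocket connects.
client_feature_flags = json.dumps({
    "type": "feature_flags",
    "data": {"supports_preview_metadata": True},
})


def choose_preview_type(flags: dict) -> str:
    # Server side: emit a single preview message in the newest format the
    # client declared support for, instead of one message per format.
    if flags.get("supports_preview_metadata"):
        return "preview_image_with_metadata"
    return "preview_image"
```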
* Add async tests to CI
* Don't actually add new tests in this PR
Will do it in a separate PR
* Resolve unit test in GPU-less runner
* Just remove the tests that GHA can't handle
* Change line endings to UNIX-style
* Avoid loading model_management.py so early
Because model_management.py has a top-level `logging.info`, we have to
be careful not to import that file before we call `setup_logging`. If we
do, we end up having the default logging handler registered in addition
to our custom one.
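A small standalone demonstration of the failure mode (no ComfyUI code involved): a top-level `logging.info` call before any handler is configured makes the `logging` module install its default handler via `basicConfig`, so a custom handler added later produces duplicate output.

```python
import logging

# Simulates a module that logs at import time, before setup_logging() has run.
logging.info("loading module...")  # registers the default StreamHandler via basicConfig

# Our own setup, as setup_logging() would do it.
custom = logging.StreamHandler()
custom.setFormatter(logging.Formatter("[custom] %(message)s"))
root = logging.getLogger()
root.addHandler(custom)
root.setLevel(logging.INFO)

root.info("this record is now emitted twice: once per handler")
```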
* Update fix for potential XSS on /view
This commit uses mimetypes to extend the set of restricted file types that are prevented from being served, since MIME types are what browsers use to determine how to render files.
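A hedged sketch of the check (the blocklist and helper name below are illustrative, not the shipped code):

```python
import mimetypes

BLOCKED_CONTENT_TYPES = {"text/html", "text/javascript", "application/xhtml+xml"}


def is_safe_to_serve(filename: str) -> bool:
    # Browsers decide how to render a response from its MIME type, so refuse
    # to serve anything whose guessed type a browser would render as a
    # document or execute as script.
    content_type, _ = mimetypes.guess_type(filename)
    return content_type not in BLOCKED_CONTENT_TYPES
```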
* Fix typo
Fixed a typo that prevented the program from running
* Add Ideogram generate node.
* Add staging api.
* COMFY_API_NODE_NAME node property
* switch to boolean flag and use original node name for id
* add optional to type
* Add API_NODE and common error for missing auth token (#5)
* Add Minimax Video Generation + Async Task queue polling example (#6)
* [Minimax] Show video preview and embed workflow in output (#7)
* [API Nodes] Send empty request body instead of empty dictionary. (#8)
* Fixed: removed function from rebase.
* Add pydantic.
* Remove uv.lock
* Remove polling operations.
* Update stubs workflow.
* Remove polling comments.
* Update stubs.
* Use pydantic v2.
* Use pydantic v2.
* Add basic OpenAITextToImage node
* Add.
* convert image to tensor.
* Improve types.
* Ruff.
* Push tests.
* Handle multipart form data.
- Don't set Content-Type for multipart/form-data requests
- Use the data field instead of JSON
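For illustration, a requests-based sketch of that pattern (the endpoint, field names, and auth header are assumptions):

```python
import requests


def upload_image(url: str, image_bytes: bytes, token: str) -> requests.Response:
    # No explicit Content-Type header: requests generates the multipart
    # boundary itself when `files`/`data` are used.
    headers = {"Authorization": f"Bearer {token}"}
    files = {"image": ("input.png", image_bytes, "image/png")}
    data = {"prompt": "a watercolor fox"}  # ordinary form fields, not a JSON body
    return requests.post(url, headers=headers, files=files, data=data, timeout=60)
```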
* Change to api.comfy.org
* Handle error code 409.
* Remove nodes.
---------
Co-authored-by: bymyself <cbyrne@comfy.org>
Co-authored-by: Yoland Y <4950057+yoland68@users.noreply.github.com>
* install templates as pip package
* Update requirements.txt
* bump templates version to include hidream
---------
Co-authored-by: Chenlei Hu <hcl@comfy.org>
* Add /logs/raw and /logs/subscribe for getting logs on frontend
Hijacks stderr/stdout to send all output data to the client on flush
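Conceptually, the interception works like the simplified wrapper below (the real implementation differs in details such as buffering and client fan-out):

```python
import io
import sys


class InterceptingStream(io.TextIOBase):
    def __init__(self, wrapped, on_flush):
        self.wrapped = wrapped
        self.on_flush = on_flush
        self._pending = []

    def write(self, s):
        self._pending.append(s)
        return self.wrapped.write(s)

    def flush(self):
        self.wrapped.flush()
        if self._pending:
            # e.g. append to the buffer behind /logs/raw and notify /logs/subscribe clients
            self.on_flush("".join(self._pending))
            self._pending.clear()


sys.stdout = InterceptingStream(sys.stdout, on_flush=lambda text: None)
```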
* Use existing send sync method
* Fix: get_logs should return a string
* Fix bug
* pass no server
* fix tests
* Fix output flush on linux
* add internal /folder_paths route
returns a JSON map of folder paths
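A minimal sketch of such a handler using aiohttp and ComfyUI's `folder_paths` module (route registration omitted; the actual internal route may differ):

```python
from aiohttp import web

import folder_paths


async def get_folder_paths(request: web.Request) -> web.Response:
    # e.g. {"checkpoints": ["/path/to/models/checkpoints"], "loras": [...], ...}
    mapping = {
        name: folder_paths.get_folder_paths(name)
        for name in folder_paths.folder_names_and_paths
    }
    return web.json_response(mapping)
```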
* (minor) format download_models.py
* initial folder path input on download api
* actually, require folder_path and clean up some code
* partial tests update
* fix & logging
* also download to a tmp file not the live file
to avoid compounding errors from network failure
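The pattern being described, as a standalone sketch (the helper name and chunk size are illustrative):

```python
import os

import requests


def download_to_path(url: str, target_path: str, chunk_size: int = 1 << 20) -> None:
    # Stream into "<target>.tmp" and only move it into place after the whole
    # transfer succeeds, so a dropped connection never leaves a truncated file
    # under the live model name.
    tmp_path = target_path + ".tmp"
    try:
        with requests.get(url, stream=True, timeout=60) as resp:
            resp.raise_for_status()
            with open(tmp_path, "wb") as f:
                for chunk in resp.iter_content(chunk_size=chunk_size):
                    f.write(chunk)
        os.replace(tmp_path, target_path)  # atomic on the same filesystem
    except Exception:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)
        raise
```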
* update tests again
* test tweaks
* workaround the first tests blocker
* fix file handling in tests
* rewrite test for create_model_path
* minor doc fix
* avoid 'mock_directory'
use temp dir to avoid accidental fs pollution from tests
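An example of that test pattern (the fixture and test names are illustrative, not the actual tests):

```python
import os
import tempfile

import pytest


@pytest.fixture
def models_dir():
    # Every test gets its own throwaway directory; nothing touches the real filesystem.
    with tempfile.TemporaryDirectory() as tmp:
        yield tmp


def test_model_path_created_under_tempdir(models_dir):
    target = os.path.join(models_dir, "checkpoints", "model.safetensors")
    os.makedirs(os.path.dirname(target), exist_ok=True)
    assert os.path.isdir(os.path.dirname(target))
```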