Welcome to a hands-on tutorial on leveraging ComfyUI’s API capabilities and deploying your workflows on Inferless. This resource walks you through creating custom workflows, interacting with ComfyUI programmatically, and deploying the result on Inferless.
- build.sh: This shell script automates the setup of the ComfyUI environment on the NFS volume. It leverages the ComfyUI command-line interface (CLI) to install and configure ComfyUI, ensuring the necessary model weights are downloaded into the specified workspace directory.
- app.py: This Python script contains the InferlessPythonModel class, which Inferless uses to manage the application lifecycle. The initialize function within this class triggers the build.sh script to set up the environment. The infer function processes incoming user requests by interacting with the ComfyUI server to generate images, which it then returns to the user; it also handles loading user workflows, updating them with user prompts, and managing the lifecycle of the ComfyUI server (see the sketch after this list).
- comfy_utils.py: This utility script has helper functions that streamline our interaction with ComfyUI.
- inferless-runtime-config.yaml: This YAML file is crucial for configuring the runtime environment for our ComfyUI application on Inferless.
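To make the structure described above concrete, here is a minimal sketch of what app.py's InferlessPythonModel class could look like. The initialize/infer/finalize method names follow Inferless's model interface, but the comfy_utils helper names and the exact return format are assumptions for illustration, not the tutorial's actual code.

```python
import os
import subprocess

class InferlessPythonModel:
    def initialize(self):
        # Run build.sh to install ComfyUI and download model weights onto the
        # NFS volume; the mount path is provided via the NFS_VOLUME env var.
        self.workspace = os.environ["NFS_VOLUME"]
        subprocess.run(["bash", "build.sh"], check=True)
        # Hypothetical helper from comfy_utils.py: start the ComfyUI server.
        # self.server = comfy_utils.start_comfyui_server(workspace=self.workspace)

    def infer(self, inputs):
        prompt = inputs["prompt"]
        # Hypothetical helpers from comfy_utils.py: load the API-format
        # workflow, inject the user's prompt, queue it on the ComfyUI server,
        # and collect the generated image as a base64 string.
        # workflow = comfy_utils.load_workflow("workflow.json")
        # workflow = comfy_utils.update_prompt(workflow, prompt)
        # image_b64 = comfy_utils.generate_image(workflow)
        image_b64 = ""  # placeholder for the base64-encoded image
        return {"generated_image_base64": image_b64}

    def finalize(self):
        # Stop the ComfyUI server and release resources when the replica scales down.
        pass
```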
Note the volume mount path (e.g., /var/nfs-mount/YOUR_VOLUME_MOUNT_PATH), as you’ll need to pass it as the NFS_VOLUME environment variable.
Make sure build.sh, app.py, comfy_utils.py, and any custom workflow JSON files are ready. These files should be uploaded to a GitHub repository for easy access during deployment.
Add a custom model. Then follow these steps:
Add NFS_VOLUME as the key and YOUR_VOLUME_MOUNT_PATH as the value, and then HF_ACCESS_TOKEN as the key and YOUR_HF_ACCESS_TOKEN as the value.
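As a quick illustration, once NFS_VOLUME and HF_ACCESS_TOKEN are configured, the application can read them from the environment. The snippet below is a minimal sketch, not code from the tutorial's files.

```python
import os

# Environment variables configured in the Inferless UI.
nfs_volume = os.environ["NFS_VOLUME"]            # e.g. /var/nfs-mount/YOUR_VOLUME_MOUNT_PATH
hf_access_token = os.environ["HF_ACCESS_TOKEN"]  # Hugging Face token (e.g. for downloading model weights)

print(f"ComfyUI workspace on NFS: {nfs_volume}")
```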
- --gpu A100: Specifies the GPU type for deployment. Available options include A10, A100, and T4.
- --runtime inferless-runtime-config.yaml: Defines the runtime configuration file. If not specified, the default Inferless runtime is used.

To customize the deployment for your own use case, obtain the workflow.json for the Flux workflow and convert it into a format compatible with the ComfyUI API. The required model weights are downloaded by the build.sh script; you can add any other models in a similar way. If your workflow expects different inputs, update the input_schema.py file accordingly. For instance, if your workflow requires both a prompt and a negative prompt, update the file to handle these inputs and define them in input_schema.py, as sketched below.
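For the prompt-plus-negative-prompt case, input_schema.py could be extended along these lines. This is a sketch assuming Inferless's INPUT_SCHEMA dictionary convention (datatype, shape, required, example); the field names and example values are illustrative, not taken from the tutorial's repository.

```python
# Sketch of input_schema.py extended with a negative prompt.
INPUT_SCHEMA = {
    "prompt": {
        "datatype": "STRING",
        "required": True,
        "shape": [1],
        "example": ["a futuristic city skyline at sunset, ultra detailed"],
    },
    "negative_prompt": {
        "datatype": "STRING",
        "required": False,
        "shape": [1],
        "example": ["blurry, low quality, watermark"],
    },
}
```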
The model takes approximately 10.59 seconds to process each query, significantly faster than many traditional platforms. The cost estimates below assume 10.59 seconds of processing time per request plus a cold start overhead of 6.88 seconds, which translates to roughly 1.85 hours of compute per day at 100 requests/day and 4.5 hours per day at 1000 requests/day.
| Scenarios | On-Demand Cost | Inferless Cost |
| --- | --- | --- |
| 100 requests/day | $28.8 (24 hours billed at $1.22/hour) | $2.26 (1.85 hours billed at $1.22/hour) |
| 1000 requests/day | $28.8 (24 hours billed at $1.22/hour) | $5.49 (4.5 hours billed at $1.22/hour) |