ComfyUI user manual example
This repo contains examples of what is achievable with ComfyUI. ComfyUI is a popular tool that allows you to create stunning images and animations with Stable Diffusion. By connecting various blocks, referred to as nodes, you can construct an image generation workflow. Unlike other Stable Diffusion tools, which give you basic text fields where you enter values and information for generating an image, a node-based interface requires you to build a workflow out of nodes in order to generate images. You can load these example images in ComfyUI to get the full workflow that was used to create them.

You can also serve a ComfyUI workflow as an API; combining the UI and the API in a single app makes it easy to iterate on your workflow even after deployment.

In this video, I will introduce how to reuse parts of the workflow using the template feature provided by ComfyUI. When deploying on RunPod, make sure you see the "RunPod SD Comfy UI" template listed on the right.

As of this writing there are two image-to-video checkpoints. Stable Zero123 is a diffusion model that, given an image of an object on a simple background, can generate images of that object from different angles.

The UNET loader takes one parameter: unet_name (Comfy dtype: COMBO[STRING]), which specifies the name of the U-Net model to be loaded.
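Serving a workflow as an API, as mentioned above, relies on ComfyUI's built-in HTTP endpoint. The sketch below is a minimal illustration, assuming a local ComfyUI instance on the default port 8188; the workflow dict is a graph exported from the UI.

```python
import json
import urllib.request

def build_prompt_payload(workflow: dict, client_id: str = "example-client") -> bytes:
    # ComfyUI's /prompt endpoint expects the workflow graph under the "prompt" key.
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_workflow(workflow: dict, server: str = "http://127.0.0.1:8188") -> dict:
    # Requires a running ComfyUI instance; returns the queued prompt's id and status.
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

To get a graph dict this endpoint accepts, enable dev mode in the ComfyUI settings and use "Save (API Format)" in the UI.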
Download the model's .safetensors checkpoint and put it in your ComfyUI/models/checkpoints directory.

Aug 26, 2024 · Hello, fellow AI enthusiasts! 👋 Welcome to our introductory guide on using FLUX within ComfyUI.

We will learn how to do things in ComfyUI through the simplest text-to-image workflow. After studying some essential workflows, you will start to understand how to make your own. Seed: normally the starting point from which the random value for a particular generated image is derived.

LoraLoaderModelOnly (class name: LoraLoaderModelOnly; category: loaders; output node: False) specializes in loading a LoRA model without requiring a CLIP model, focusing on enhancing or modifying a given model based on LoRA parameters.

In the example below we use a different VAE to encode an image to latent space, and to decode the result of the KSampler. AuraFlow 0.1: here is a basic example of how to use it. As a reminder, you can save these image files and drag or load them into ComfyUI to get the workflow.

In this post we'll show you some example workflows you can import and get started with straight away. Then I created two more sets of nodes, from Load Images to the IPAdapters, and adjusted the masks so that they would apply to a specific section of the whole image.

ComfyUI is the most powerful and modular diffusion model GUI, API, and backend, with a graph/nodes interface. This UI lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface; you can use it to connect models, prompts, and other nodes to create your own unique workflow. This guide is designed to help you quickly get started with ComfyUI, run your first image generation, and explore advanced features.
Dec 19, 2023 · This is my complete guide for ComfyUI, the node-based interface for Stable Diffusion. The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore. This section is a guide to the ComfyUI user interface, including basic operations, menu settings, node operations, and other common user interface options. See also the ComfyUI wiki, an online manual that helps you use ComfyUI and Stable Diffusion.

Here is an example of how to use upscale models like ESRGAN, and of how the ESRGAN upscaler can be used for the upscaling step. Here is an example of how to use the Canny ControlNet, and an example of how to use the Inpaint ControlNet; the example input image can be found here. T2I-Adapters are used the same way as ControlNets in ComfyUI: using the ControlNetLoader node.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. You can use more steps to increase the quality. The proper way to use SDXL Turbo is with the new SDTurboScheduler node.

Hypernetworks are patches applied on the main MODEL; to use them, put them in the models/hypernetworks directory and use the Hypernetwork Loader node. Area Composition Examples: here is an example; you can load the image in ComfyUI to get the workflow. LCM models are special models that are meant to be sampled in very few steps.
Up and down weighting: the importance of parts of the prompt can be up- or down-weighted by enclosing the specified part of the prompt in brackets, using the syntax (prompt:weight).

For these examples I have renamed the files by adding stable_cascade_ in front of the filename, for example: stable_cascade_canny.safetensors.

Jul 6, 2024 · The best way to learn ComfyUI is by going through examples. Edit models, also called InstructPix2Pix models, can be used to edit images using a text prompt. Hunyuan DiT is a diffusion model that understands both English and Chinese.

ComfyUI stands as an advanced, modular GUI engineered for Stable Diffusion, characterized by its intuitive graph/nodes interface. A ComfyUI guide: ComfyUI is a simple yet powerful Stable Diffusion UI with a graph-and-nodes interface. You can also subtract model weights and add them, as in this example used to create an inpaint model from a non-inpaint model with the formula: (inpaint_model - base_model) * 1.0 + other_model. There is a portable standalone build for Windows on the releases page that should work for running on Nvidia GPUs, or for running on your CPU only. Written by comfyanonymous and other contributors.

🌟 In this tutorial, we'll dive into the essentials of ComfyUI FLUX, showcasing how this powerful model can enhance your creative process and help you push the boundaries of AI-generated art.
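To make the (prompt:weight) syntax concrete, here is an illustrative parser. It is my own helper, not ComfyUI's actual implementation, which lives in the CLIP text-encode path and also handles nesting and escapes; unbracketed text implicitly has weight 1.0.

```python
import re

def parse_weighted(prompt: str):
    # Split "(text:weight)" spans out of a prompt string; everything else gets weight 1.0.
    pattern = re.compile(r"\(([^():]+):([0-9.]+)\)")
    parts = []
    pos = 0
    for m in pattern.finditer(prompt):
        if m.start() > pos:
            parts.append((prompt[pos:m.start()], 1.0))   # unweighted text before the span
        parts.append((m.group(1), float(m.group(2))))    # the weighted span
        pos = m.end()
    if pos < len(prompt):
        parts.append((prompt[pos:], 1.0))                # trailing unweighted text
    return parts
```

For example, "a (red:1.3) car" decomposes into the unweighted fragments around a "red" span emphasized at 1.3.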
The following images can be loaded in ComfyUI to get the full workflow. In the above example the first frame will be cfg 1.0 (the min_cfg in the node), the middle frame 1.75, and the last frame 2.5 (the cfg set in the sampler).

On the Windows portable build, install a custom node's requirements with the embedded Python, for example: python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\requirements.txt

The video demonstrates how to integrate a large language model (LLM) for creative image results without adapters or ControlNets. For more details, you can follow the ComfyUI repo. ComfyUI provides a variety of ways to fine-tune your prompts to better reflect your intention.

Jan 8, 2024 · ComfyUI is a node-based graphical user interface (GUI) for Stable Diffusion, designed to facilitate image generation workflows. These are examples demonstrating the ConditioningSetArea node. LCM LoRAs are LoRAs that can be used to convert a regular model to an LCM model.

Windows: simply download the portable build, extract it with 7-Zip, and run it. Launch ComfyUI by running python main.py.

In this example we will be using this image. The text box GLIGEN model lets you specify the location and size of multiple objects in the image. Search for the Efficient Loader and KSampler (Efficient) nodes in the node list and add them to the empty workflow.

The ComfyUI-FLATTEN implementation can support most ComfyUI nodes, including ControlNets, IP-Adapter, LCM, InstanceDiffusion/GLIGEN, and many more. Jan 31, 2024 · After funding your account, you'll be able to deploy your selected GPU with the ComfyUI template.
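The cfg schedule described above, rising from min_cfg at the first frame to the sampler's cfg at the last, is a linear ramp. A small sketch (the function name is mine):

```python
def frame_cfgs(min_cfg: float, max_cfg: float, num_frames: int):
    # Linearly interpolate cfg from min_cfg at the init frame to max_cfg at the last
    # frame, so frames further from the init image get gradually stronger guidance.
    if num_frames == 1:
        return [min_cfg]
    step = (max_cfg - min_cfg) / (num_frames - 1)
    return [min_cfg + i * step for i in range(num_frames)]
```

With min_cfg 1.0 and a sampler cfg of 2.5 over three frames, this reproduces the 1.0 / 1.75 / 2.5 schedule from the example.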
AuraFlow is one of the only true open-source models, with both the code and the weights under a FOSS license.

A growing collection of example code fragments: ComfyUI preference settings. Share, discover, and run thousands of ComfyUI workflows; join the largest ComfyUI community.

I made this using the following workflow, with two images from the ComfyUI IPAdapter node repository as a starting point. Restarting your ComfyUI instance on ThinkDiffusion.

Why ComfyUI? TODO.

Inpaint Examples. These are examples demonstrating how to do img2img. Img2Img works by loading an image like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0.

Reroute node: the Reroute node can be used to reroute links, which can be useful for organizing your workflow.

Feb 7, 2024 · With in-depth examples, we explore the intricacies of encoding in latent space, providing insights and suggestions to enhance this process for your projects.

Flux is a family of diffusion models by Black Forest Labs. Download the example input image and place it in your input folder. SD3 performs very well with the negative conditioning zeroed out, as in the following example. ControlNet and T2I-Adapter examples: note that in these examples the raw image is passed directly to the ControlNet/T2I-Adapter.
In it I'll cover: what ComfyUI is; how ComfyUI compares to AUTOMATIC1111 (the reigning most popular Stable Diffusion user interface); how to install it; and how it works (with a brief overview of how Stable Diffusion works).

ComfyUI Examples. We will go through some basic workflow examples. This guide is about how to set up ComfyUI on your Windows computer to run Flux. For example: 896x1152 or 1536x640 are good resolutions.

Put the GLIGEN model files in the ComfyUI/models/gligen directory. However, ComfyUI is not for the faint-hearted and can be somewhat intimidating if you are new to it. Here are the official checkpoints for the one tuned to generate 14-frame videos and the one for 25-frame videos. ComfyUI breaks down the workflow into rearrangeable elements, allowing you to effortlessly create your custom workflow. You can run ComfyUI interactively to develop workflows.

Here is how you use it in ComfyUI (you can drag this into ComfyUI to get the workflow). Example: noise_augmentation controls how closely the model will try to follow the image concept. The initial set includes three templates, starting with the Simple Template and the Intermediate Template. This is useful, e.g., to split batches up when the batch size is too big for all of them to fit inside VRAM, as ComfyUI will execute nodes for every batch in the list rather than all at once. Support for SD 1.x, 2.x, SDXL, LoRA, and upscaling makes ComfyUI flexible.

Mar 23, 2024 · I've had a few opportunities before, but I kept putting this off because it seemed hard to explain in a note article; this time I'm going to cover the basics of ComfyUI. I'm basically an A1111 WebUI & Forge user, but the drawback was that I couldn't immediately adopt new techniques when they appeared.

Apr 26, 2024 · Workflow. For the easy-to-use single-file versions that you can use in ComfyUI, see below: FP8 Checkpoint Version.
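Resolutions like 896x1152 and 1536x640 work well because they keep roughly the same pixel count as 1024x1024 while changing the aspect ratio. A quick check (the helper name and tolerance are mine):

```python
def same_pixel_budget(width: int, height: int,
                      target: int = 1024 * 1024, tolerance: float = 0.10) -> bool:
    # True when width*height stays within `tolerance` of the target pixel count,
    # regardless of aspect ratio.
    return abs(width * height - target) / target <= tolerance
```

Both recommended resolutions pass this check, while SD1.x-era sizes like 512x512 do not.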
These versatile workflow templates have been designed to cater to a diverse range of projects, making them compatible with any SD1.5 checkpoint model. If the template is not shown, just select it from the dropdown. These are examples demonstrating how to use LoRAs. In this guide, we are aiming to collect a list of 10 cool ComfyUI workflows that you can simply download and try out for yourself. Here is a link to download pruned versions of the supported GLIGEN model files.

Feb 24, 2024 · ComfyUI is a node-based interface for Stable Diffusion, created by comfyanonymous in 2023. ComfyUI also has a mask editor, which can be accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor". This image has had part of it erased to alpha with GIMP; the alpha channel is what we will be using as a mask for the inpainting.

Put upscale models in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. Where to begin? See the differences between samplers in this 14-image simple prompt comparison. At times you might wish to use a different VAE than the one that came loaded with the Load Checkpoint node.

This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend. The unet_name value is used to locate the model within a predefined directory structure, enabling the dynamic loading of different U-Net models. Here is the workflow for the Stability SDXL edit model; the checkpoint can be downloaded here. Example categories include 2 Pass Txt2Img (Hires fix), 3D, Area Composition, and ControlNet and T2I-Adapter examples.
Currently, even if this can run without xformers, the memory usage is huge; it is recommended to use xformers if possible. Note that this example uses the DiffControlNetLoader node because the controlnet used is a diff controlnet. These are examples demonstrating how you can achieve the "Hires Fix" feature.

For some workflow examples, and to see what ComfyUI can do, you can check out the ComfyUI Examples page. ComfyUI comes with a set of nodes to help manage the graph. Follow the ComfyUI manual installation instructions for Windows and Linux, and run ComfyUI normally as described above after everything is installed. Download aura_flow_0.1.safetensors.

SDXL Turbo is an SDXL model that can generate consistent images in a single step. By facilitating the design and execution of sophisticated Stable Diffusion pipelines, ComfyUI presents users with a flowchart-centric approach. After the first generation, if you set the seed's randomness to fixed, the model will generate the same style of image. This image contains 4 different areas: night, evening, day, and morning.

Then you can load this image in ComfyUI to get the workflow that shows how to use the LCM SDXL LoRA with the SDXL base model. The important parts are to use a low cfg, the "lcm" sampler, and the "sgm_uniform" or "simple" scheduler. Aug 7, 2024 · Learn ComfyUI basics, from beginner to advanced nodes.
Install the ComfyUI dependencies. Txt2Img is achieved by passing an empty image to the sampler node with maximum denoise. Note that in ComfyUI, txt2img and img2img are the same node.

(inpaint_model - base_model) * 1.0 + other_model: if you are familiar with the "Add Difference" option in other UIs, this is how to do it in ComfyUI.

The ComfyUI encyclopedia: your online AI image generator knowledge base. Key features include lightweight and flexible configuration, transparency in data flow, and ease of sharing reproducible workflows.

LCM Examples. Inpainting a cat with the v2 inpainting model; inpainting a woman with the v2 inpainting model. It also works with non-inpainting models. The image below is the empty workflow with the Efficient Loader and KSampler (Efficient) nodes added and connected to each other. See extra_model_paths.yaml.example at master in comfyanonymous/ComfyUI for configuring extra model search paths. On a machine equipped with a 3070 Ti, the generation should be completed in about 3 minutes.

The only important thing is that for optimal performance the resolution should be set to 1024x1024, or to other resolutions with the same number of pixels but a different aspect ratio. You can then load up the following image in ComfyUI to get the workflow (AuraFlow 0.1). It can also be used to merge lists of batches back together into a single batch.

Dec 10, 2023 · ComfyUI should be capable of autonomously downloading other ControlNet-related models. Each ControlNet/T2I-Adapter needs the image that is passed to it to be in a specific format, like depthmaps, canny maps, and so on, depending on the specific model, if you want good results.
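The add-difference formula can be sketched per weight. The snippet below uses plain floats for clarity; in practice each value is a tensor from a checkpoint's state dict, and ComfyUI's model-merge nodes perform this on real checkpoints.

```python
def add_difference(inpaint_model: dict, base_model: dict,
                   other_model: dict, multiplier: float = 1.0) -> dict:
    # (inpaint_model - base_model) * multiplier + other_model, applied key by key:
    # the inpainting "delta" over the base is grafted onto another model.
    return {
        key: (inpaint_model[key] - base_model[key]) * multiplier + other_model[key]
        for key in other_model
    }
```

A multiplier of 1.0 transfers the full inpainting delta; lower values blend it in more weakly.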
Users can drag and drop nodes to design advanced AI art pipelines, and also take advantage of libraries of existing workflows. ComfyUI provides a powerful yet intuitive way to harness Stable Diffusion through a flowchart interface. Welcome to the comprehensive, community-maintained documentation for ComfyUI, the cutting-edge, modular Stable Diffusion GUI and backend.

Other example topics include the ComfyUI StableZero123 custom node, using the playground-v2 model with ComfyUI, Generative AI for Krita (using LCM on ComfyUI), a basic auto face detection and refine example, and enabling face fusion and style migration.

TLDR: In this tutorial, Seth introduces ComfyUI's Flux workflow, a powerful tool for AI image generation that simplifies the process of upscaling images up to 5.4x using consumer-grade hardware. FLUX is a cutting-edge model developed by Black Forest Labs. Learn how to download models and generate an image. Flux.1 ComfyUI install guidance, workflow, and example.

All LoRA flavours (Lycoris, LoHa, LoKr, LoCon, etc.) are used this way. This is the input image that will be used in this example. Here is how you use the depth T2I-Adapter, and here is how you use the depth ControlNet. This way, frames further away from the init frame get a gradually higher cfg.

Hires fix is just creating an image at a lower resolution, upscaling it, and then sending it through img2img. I'd recommend leaving the rest of the settings alone. The lower the value, the more closely the model will follow the concept. Add and read a setting.
This tutorial gives you a step-by-step guide on how to create a workflow using Style Alliance in ComfyUI, from setting up the workflow to encoding the latent for direction. This is what the workflow looks like in ComfyUI.

Download hunyuan_dit_1.2.safetensors. Here's a simple workflow in ComfyUI to do this with basic latent upscaling; a non-latent upscaling variant follows. The ComfyUI interface includes the main operation interface and workflow node operations. The image below is a screenshot of the ComfyUI interface.

SDXL Examples: the SDXL base checkpoint can be used like any regular checkpoint in ComfyUI.

To add and read a setting from an extension (the setting id below is a placeholder; app.ui.settings is ComfyUI's frontend settings registry):

import { app } from "../../scripts/app.js";
/* In setup(), add the setting */
app.ui.settings.addSetting({ id: "example.setting", name: "Example setting", type: "boolean", defaultValue: false });