

ComfyUI user manual example

Img2Img works by loading an image (like the example image), converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.

ComfyUI lets you connect up models, prompts, and other nodes to create your own unique workflow, and one of its best parts is how easy it is to download and swap between workflows. The installation process is straightforward and does not require extensive technical knowledge. The example pages cover GLIGEN, Hypernetworks, Img2Img, Inpainting, LCM, LoRAs, model merging, Hunyuan DiT, SD3 ControlNet, and an overview of the different versions of Flux.1, including Flux hardware requirements and how to install and use Flux.1.

If you want to do merges in 32-bit float, launch ComfyUI with: --force-fp32. The only important thing for optimal performance is that the resolution should be set to 1024x1024, or to another resolution with the same number of pixels but a different aspect ratio. Rename the example file to extra_model_paths.yaml and edit it with your favorite text editor.

The first step in using the ComfyUI Consistent Character workflow is to select the perfect input image. I then recommend enabling Extra Options -> Auto Queue in the interface, pressing "Queue Prompt" once, and starting to write your prompt.

Follow the ComfyUI manual installation instructions for Windows and Linux, and run ComfyUI normally as described above once everything is installed.
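The "same number of pixels, different aspect ratio" rule above can be sketched as a small helper. This is my own illustration, not part of ComfyUI: it picks a width/height pair near a 1024x1024 pixel budget, snapped to multiples of 64.

```python
def sdxl_resolution(aspect_ratio: float, budget: int = 1024 * 1024, step: int = 64) -> tuple[int, int]:
    """Pick (width, height) close to `budget` total pixels for the given
    width/height ratio, rounded to multiples of `step`."""
    width = round((budget * aspect_ratio) ** 0.5 / step) * step
    height = round((budget / aspect_ratio) ** 0.5 / step) * step
    return width, height
```

For example, sdxl_resolution(896 / 1152) recovers the 896x1152 portrait resolution mentioned elsewhere in this guide.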
[Last update: 01/August/2024] Note: you need to put the Example Inputs files and folders under the ComfyUI root directory's ComfyUI\input folder before you can run the example workflow.

ComfyUI is recommended for an easy local installation of AI models, as it simplifies the process. All LoRA flavours (LyCORIS, LoHa, LoKr, LoCon, etc.) are used this way. For these examples I have renamed the files by adding stable_cascade_ in front of the filename, for example: stable_cascade_canny.safetensors and stable_cascade_inpainting.safetensors. You can load these images in ComfyUI to get the full workflow.

This repo contains examples of what is achievable with ComfyUI. This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. The workflow can use LoRAs and ControlNets, and enables negative prompting with the KSampler, dynamic thresholding, inpainting, and more.

The best way to learn ComfyUI is by going through examples; after studying some essential ones, you will start to understand how to make your own. Then press "Queue Prompt" once and start writing your prompt.

To create a CosXL model, the requirements are the CosXL base model, the SDXL base model, and the SDXL model you want to convert. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI.

These are examples demonstrating how to use LoRAs. For the inpainting example, part of the image has been erased to alpha with GIMP; the alpha channel is what we will be using as a mask. At times you might wish to use a different VAE than the one that came loaded with the Load Checkpoint node. You set up a template, and the AI fills in the blanks.
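The renaming convention described above (adding stable_cascade_ in front of each filename) is simple enough to sketch as a helper. The function name is my own, not part of ComfyUI:

```python
def with_prefix(filename: str, prefix: str = "stable_cascade_") -> str:
    """Prepend the model-family prefix unless the name already carries it."""
    return filename if filename.startswith(prefix) else prefix + filename
```

For example, with_prefix("canny.safetensors") yields "stable_cascade_canny.safetensors", and names that already carry the prefix are left unchanged.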
For some workflow examples, and to see what ComfyUI can do, check out the examples repository. The UI now supports adding models and pip-installing any missing nodes.

Tested on a 2080 Ti (11 GB) with torch 2.x; on a machine equipped with a 3070 Ti, generation should complete in about 3 minutes. The workflows presented in this article are available to download from the Prompting Pixels site or in the sidebar.

Expanding the borders of an image within ComfyUI is straightforward, and you have a couple of options available: basic outpainting through native nodes, or the experimental ComfyUI-LaMA-Preprocessor custom node. This guide demystifies the process of setting up and using ComfyUI, making it an essential read for anyone looking to harness the power of AI for image generation. For example, 896x1152 or 1536x640 are good resolutions.

Hunyuan DiT is a diffusion model that understands both English and Chinese. This section is a guide to the ComfyUI user interface, including basic operations, menu settings, node operations, and other common user interface options; learn about node connections, basic operations, and handy shortcuts. Here is a link to download pruned versions of the supported GLIGEN model files, and here is an example of how to create a CosXL model from a regular SDXL model with merging. ComfyUI-MimicMotion is a ComfyUI custom node for MimicMotion.

These are examples demonstrating how to do img2img, and examples demonstrating the ConditioningSetArea node.

In the animation example, the first frame will be cfg 1.0 (the min_cfg in the node), the middle frame 1.75, and the last frame 2.0 (the cfg set in the sampler); this way, frames further away from the init frame get a gradually higher cfg. The same approach could be used to create slight noise variations by varying weight2.

In this post we'll show you some example workflows you can import and get started with straight away.
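The animation example above ramps cfg from min_cfg on the first frame up to the sampler's cfg on the last frame. A simple linear version of such a ramp can be sketched as follows; this is illustrative only, since the actual node's schedule may differ (the quoted middle value of 1.75 suggests a non-linear curve):

```python
def cfg_ramp(min_cfg: float, cfg: float, num_frames: int) -> list[float]:
    """Linearly interpolate cfg from min_cfg (first frame) to cfg (last frame),
    so frames further from the init frame get a gradually higher cfg."""
    if num_frames == 1:
        return [cfg]
    return [min_cfg + (cfg - min_cfg) * i / (num_frames - 1) for i in range(num_frames)]
```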
This guide is designed to help you quickly get started with ComfyUI, run your first image generation, and explore advanced features. However, ComfyUI is not for the faint-hearted and can be somewhat intimidating if you are new to it.

The denoise controls the amount of noise added to the image. The KSampler is the core of any workflow and can be used to perform text-to-image and image-to-image generation tasks. You can load up the following image in ComfyUI to get the workflow.

ComfyUI is a simple yet powerful Stable Diffusion UI that lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface. In this example I used albedobase-xl.

Hypernetworks are patches applied on the main MODEL, so to use them put them in the models/hypernetworks directory and use the Hypernetwork Loader node. There is also an all-in-one FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img.

Here's the cool part: you don't have to ask each question separately. SD3 performs very well with the negative conditioning zeroed out, like in the following example. SD3 ControlNets and up- and down-weighting of prompts are covered as well.

We will go through some basic workflow examples. The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore. This guide also collects a list of 10 cool ComfyUI workflows that you can simply download and try out for yourself; additional discussion and help can be found in the community. Here's a simple workflow in ComfyUI to do latent upscaling, along with a non-latent upscaling variant.

Here's an example of creating a noise object which mixes the noise from two sources.
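The denoise setting on the KSampler can be pictured as skipping the first part of the sampling schedule, so only part of the noise is added and removed. This is my own rough illustration of that relationship, not ComfyUI's actual scheduler code:

```python
def steps_actually_run(total_steps: int, denoise: float) -> int:
    """With denoise < 1.0 the sampler effectively starts partway through the
    schedule, so roughly total_steps * denoise denoising steps are applied."""
    return round(total_steps * denoise)
```

With 20 steps and denoise 0.5, roughly 10 denoising steps act on the input latent, which is why img2img with a low denoise stays close to the source image.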
Search for the Efficient Loader and KSampler (Efficient) nodes in the list and add them to the empty workflow. Here is an example of how the ESRGAN upscaler can be used for the upscaling step.

You can also subtract model weights and add them, like in this example used to create an inpaint model from a non-inpaint model with the formula: (inpaint_model - base_model) * 1.0 + other_model.

Note that in ComfyUI txt2img and img2img are the same node: txt2img is achieved by passing an empty image to the sampler node with maximum denoise.

To use upscale models, put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them.

The ComfyUI interface includes the main operation interface and the workflow nodes. In this tutorial, we will guide you through the steps of using the ComfyUI Consistent Character workflow effectively, starting by uploading an input image. The image below is the empty workflow with Efficient Loader and KSampler (Efficient) added and connected to each other.

For use cases, please check out the example workflows; see also the ComfyUI manual, Core Nodes, Interface, and Examples pages. Area composition with Anything-V3 plus a second pass with AbyssOrangeMix2_hard. You can contribute to kijai/ComfyUI-LivePortraitKJ development on GitHub.

Here is an example for how to use the Inpaint ControlNet; the example input image can be found here. In this example we will be using this image. So, we will learn how to do things in ComfyUI in the simplest text-to-image workflow.
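The (inpaint_model - base_model) * 1.0 + other_model recipe above is per-weight arithmetic. A minimal sketch of the idea over plain lists of floats follows; real checkpoints are tensors and the operation is done by ComfyUI's model-merging nodes, so this is only an illustration:

```python
def add_difference(inpaint_w, base_w, other_w, strength=1.0):
    """Per-weight 'Add Difference' merge: other + (inpaint - base) * strength."""
    return [o + (i - b) * strength for i, b, o in zip(inpaint_w, base_w, other_w)]
```

Conceptually, subtracting the base isolates "what makes the model an inpaint model", and adding that difference onto another checkpoint transfers the inpainting capability.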
Hypernetworks are patches applied on the main MODEL; to use them, put them in the models/hypernetworks directory and use the Hypernetwork Loader node like this example.

fal.ai, in collaboration with Simo, released an open-source MMDiT text-to-image model called AuraFlow.

ComfyUI provides a variety of ways to finetune your prompts to better reflect your intention. You can load these images in ComfyUI to get the full workflow. Note that this example uses the DiffControlNetLoader node because the controlnet used is a diff controlnet.

ComfyUI is a node-based interface to use Stable Diffusion, created by comfyanonymous in 2023. Example detection using the blazeface_back_camera model: AnimateDiff_00004.mp4.

These versatile workflow templates have been designed to cater to a diverse range of projects, making them compatible with any SD1.5 checkpoint model. Here is an example for how to use the Canny ControlNet, and here is an example for how to use the Inpaint ControlNet; the example input image can be found here.

In the standalone Windows build you can find this file in the ComfyUI directory. Dive into the basics of ComfyUI, a powerful tool for AI-based image generation.

SD3 ControlNets by InstantX are also supported: the InstantX team released a few ControlNets for SD3, and here is a basic example of how to use them. The example below shows how to use the KSampler in an image-to-image task, by connecting a model, a positive and negative embedding, and a latent image. As a reminder, you can save these image files and drag or load them into ComfyUI to get the workflow.
Community template packs include an advanced ComfyUI template for commercial use, ComfyUI-Template-Pack (10 ComfyUI templates for beginners), and ComfyUI-101Days (a daily ComfyUI workflow creation series).

You can also subtract model weights and add them, like in this example used to create an inpaint model from a non-inpaint model with the formula: (inpaint_model - base_model) * 1.0 + other_model. Here is an example of how to use upscale models like ESRGAN.

This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend. Pictured: the Efficient Loader node and the KSampler (Efficient) node in ComfyUI.

Inpainting a cat with the v2 inpainting model; inpainting a woman with the v2 inpainting model. Inpainting also works with non-inpainting models; note that we use a denoise value of less than 1.0. Rename the example file to extra_model_paths.yaml and edit it with your favorite text editor.

The custom-noise example begins like this:

    class Noise_MixedNoise:
        def __init__(self, noise1, noise2, weight2):
            self.noise1 = noise1
            self.noise2 = noise2
            self.weight2 = weight2

SDXL Turbo is an SDXL model that can generate consistent images in a single step; the proper way to use it is with the new SDTurbo scheduler. ComfyUI also has a mask editor, accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor".

ControlNet and T2I-Adapter workflow examples: note that in these examples the raw image is passed directly to the ControlNet/T2I-Adapter. The area-composition image contains 4 different areas: night, evening, day, and morning. The following images can be loaded in ComfyUI to get the full workflow.

To get started with ComfyUI, visit the GitHub page and download the latest release.
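The Noise_MixedNoise fragments scattered through this page suggest a custom noise object that blends two noise sources by a weight. Here is a self-contained, runnable sketch of that idea; the generate_noise body and the ConstantNoise stand-in are my reconstruction (real ComfyUI noise objects operate on latent tensors, not plain lists):

```python
class MixedNoise:
    """Mix two noise sources: noise1 * (1 - weight2) + noise2 * weight2."""

    def __init__(self, noise1, noise2, weight2):
        self.noise1 = noise1
        self.noise2 = noise2
        self.weight2 = weight2

    def generate_noise(self, latent):
        a = self.noise1.generate_noise(latent)
        b = self.noise2.generate_noise(latent)
        return [x * (1.0 - self.weight2) + y * self.weight2 for x, y in zip(a, b)]


class ConstantNoise:
    """Stand-in noise source for the demo; always emits the same value."""

    def __init__(self, value):
        self.value = value

    def generate_noise(self, latent):
        return [self.value] * len(latent)
```

Varying weight2 between 0 and 1 slides the result between the two sources, which is how this pattern creates slight noise variations.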
For the easy-to-use single-file versions that you can use in ComfyUI, see below: the FP8 checkpoint version. Follow the ComfyUI manual installation instructions for Windows and Linux, and run ComfyUI normally as described above once everything is installed.

This is the input image that will be used in this example. Here is how you use the depth T2I-Adapter, and here is how you use the depth ControlNet.

Unlike other Stable Diffusion tools that have basic text fields where you enter values and information for generating an image, a node-based interface is different in the sense that you have to create nodes and build a workflow to generate images. Each ControlNet/T2I-Adapter needs the image that is passed to it to be in a specific format, like depth maps, canny maps, and so on, depending on the specific model, if you want good results.

Here's a list of example workflows in the official ComfyUI repo: ComfyUI Examples; 2-Pass Txt2Img (Hires fix) Examples; 3D Examples; Area Composition Examples; ControlNet and T2I-Adapter Examples; Frequently Asked Questions; GLIGEN Examples. T2I-Adapters are used the same way as ControlNets in ComfyUI: using the ControlNetLoader node.

There is also a growing collection of fragments of example code covering ComfyUI preference settings. Flux is a family of diffusion models by Black Forest Labs. There is a portable standalone build for Windows on the releases page that should work for running on Nvidia GPUs or on your CPU only; simply download it, extract it with 7-Zip, and run it.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. In the example below we use a different VAE to encode an image to latent space, and decode the result of the KSampler.
If you are familiar with the "Add Difference" option in other UIs, the formula (inpaint_model - base_model) * 1.0 + other_model is how to do it in ComfyUI. Here is an example: you can load this image in ComfyUI to get the workflow.

A reminder that you can right-click images in the LoadImage node and edit them with the mask editor. Save this image, then load it or drag it onto ComfyUI to get the workflow.

The importance of parts of the prompt can be up- or down-weighted by enclosing the specified part of the prompt in brackets using the following syntax: (prompt:weight).

ComfyUI is a popular tool that allows you to create stunning images and animations with Stable Diffusion. Here is an example workflow; you can use more steps to increase the quality. Download the example input image and place it in your input folder. ComfyUI-MimicMotion is developed at AIFSH/ComfyUI-MimicMotion on GitHub.

By facilitating the design and execution of sophisticated Stable Diffusion pipelines, ComfyUI presents users with a flowchart-centric approach. The Flux examples guide covers topics including an introduction to Flux. The initial set includes three templates, among them Simple Template and Intermediate; for more details, you can follow the ComfyUI repo.

Download the hunyuan_dit checkpoint (safetensors) and put it in your ComfyUI/checkpoints directory; a direct download link is provided. Other examples include a ComfyUI StableZero123 custom node, using the playground-v2 model with ComfyUI, Generative AI for Krita using LCM on ComfyUI, a basic auto face detection and refine example, and enabling face fusion and style migration.

These are examples demonstrating how to do img2img; the SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. Put the GLIGEN model files in the ComfyUI/models/gligen directory. ComfyUI should be capable of autonomously downloading other controlnet-related models.
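The (prompt:weight) syntax described above can be illustrated with a tiny parser. This is a simplified sketch of my own, not ComfyUI's actual prompt parser, which also handles nesting and escapes:

```python
import re


def parse_weighted(prompt: str) -> list[tuple[str, float]]:
    """Split a prompt into (text, weight) chunks; unbracketed text gets weight 1.0."""
    parts = []
    pos = 0
    for m in re.finditer(r"\(([^():]+):([0-9.]+)\)", prompt):
        if m.start() > pos:
            parts.append((prompt[pos:m.start()], 1.0))
        parts.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):
        parts.append((prompt[pos:], 1.0))
    return parts
```

For instance, "(masterpiece:1.2) a cat" splits into an emphasized "masterpiece" chunk at weight 1.2 and the rest of the prompt at the default weight 1.0.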
You can try them out with this example workflow. The image below is a screenshot of the ComfyUI interface, and this is what the workflow looks like in ComfyUI.

Welcome to the comprehensive, community-maintained documentation for ComfyUI, the cutting-edge, modular Stable Diffusion GUI and backend. ComfyUI stands as an advanced, modular GUI engineered for stable diffusion, characterized by its intuitive graph/nodes interface. Share, discover, and run thousands of ComfyUI workflows, and join the largest ComfyUI community.

Advanced merging (CosXL): the only important thing is that for optimal performance the resolution should be set to 1024x1024 or another resolution with the same number of pixels but a different aspect ratio.

For example, you might ask: "{eye color} eyes, {hair style} {hair color} hair, {ethnicity} {gender}, {age number} years old". The AI looks at the picture and might say: "Brown eyes, curly black hair, Asian female, 25 years old". The input image should embody the essence of your character and serve as the foundation for the entire workflow.

All the images in this page contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.
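The fill-in-the-blanks prompting shown above is plain string templating. A minimal sketch, with hypothetical placeholder values of my own:

```python
def fill_template(template: str, values: dict[str, str]) -> str:
    """Substitute {placeholder} slots in a fill-in-the-blanks prompt template."""
    for key, value in values.items():
        template = template.replace("{" + key + "}", value)
    return template
```

For example, filling "{eye color} eyes, {age number} years old" with {"eye color": "brown", "age number": "25"} yields "brown eyes, 25 years old"; placeholders you leave out simply remain in the text.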