Drawing masks in ComfyUI (Reddit)

You can also select non-face bbox models and FaceDetailer will detail hands, etc.

ComfyUI is not supposed to reproduce A1111 behaviour. Reproducing the behavior of the most popular SD implementation (and then surpassing it) would be a very compelling goal, I would think.

I found the documentation for ComfyUI to be quite poor when I was learning it.

Aug 25, 2024 · Hello, ComfyUI community! I'm seeking advice on improving the background removal process in images.

Layer copy & paste this PNG on top of the original in your go-to image editing software.

It doesn't replace the image (although that might seem to be what it's doing visually); it saves a separate channel with that mask, so you get two outputs (image and mask) from that one node.

Is this more or less accurate? While ComfyUI obviously has a big learning curve, my goal is to actually make pretty decent stuff, so if I have to put the time investment into Comfy, that's fine by me. So from what I can tell, ComfyUI seems to be vastly more powerful than even Draw Things (which has a lot of configuration settings).

One thing about human faces is that they are all unique.

The workflow that was replaced: when Canvas_Tab came out, it was awesome.

This node pack was created as a dependency-free library before the ComfyUI Manager made installing dependencies easy for end-users.

If you do a search for "detailer", you will find both a SEGS detailer and a mask detailer. Overall, I've had great success using this node to do a simple inpainting workflow.

In addition to whole-image inpainting and mask-only inpainting, I also have workflows that … I was wondering if there is any way to create a mask in depth in ComfyUI.

Turns out drawing "^"-shaped masks seems to work a bit better than rectangles (especially for smaller masks), because it implies the leg positioning.

Wanted to share my approach to generating multiple hand-fix options and then choosing the best.

After completing all the integrations, I output via AnythingAnywhere. Finally, the story text image output from module 9 was pasted on the right side of the image.

Speed: roughly 2.86s/it on a 4070 with the 25-frame model, 2.75s/it with the 14-frame model.

Release: AP Workflow 8.0 for ComfyUI - Now with a next-gen upscaler (competitive against Magnific AI and Topaz Gigapixel!) and higher-quality mask inpainting with the Fooocus inpaint model.

Imagine you have a 1000px image with a circular mask that's about 300px.

Uh, your seed is set to random on the first sampler.

One way to build a soft-edged mask and its inverse: [Load Image] -> [resize to match the image being generated] -> [Image to Mask] -> [Gaussian Blur Mask] to soften the edges. Then use [Invert Mask] to make a mask that is the exact opposite, and [Solid Mask] to make a pure white mask.
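
For anyone who wants to see what that node chain does outside ComfyUI, here is a minimal Python sketch of the same idea using Pillow and NumPy. The file names, target size, and blur radius are placeholder assumptions, and it mirrors the [Image to Mask] -> [Gaussian Blur Mask] -> [Invert Mask] / [Solid Mask] steps conceptually rather than reproducing any specific node's implementation.

```python
# Sketch: build a soft-edged mask, its inverse, and a solid white mask.
# Mirrors: Load Image -> resize -> Image to Mask -> Gaussian Blur Mask -> Invert Mask / Solid Mask.
import numpy as np
from PIL import Image, ImageFilter

target_size = (1024, 1024)  # match the resolution being generated (placeholder)

mask_img = Image.open("mask_source.png").convert("L")    # "image-to-mask": use luminance as the mask
mask_img = mask_img.resize(target_size, Image.LANCZOS)   # resize to match the generated image
mask_img = mask_img.filter(ImageFilter.GaussianBlur(radius=8))  # soften the edges

mask = np.asarray(mask_img, dtype=np.float32) / 255.0    # white = area selected for change
inverted_mask = 1.0 - mask                               # the exact opposite of the mask
solid_white = np.ones_like(mask)                         # pure white mask (everything selected)

Image.fromarray((mask * 255).astype(np.uint8)).save("mask_soft.png")
Image.fromarray((inverted_mask * 255).astype(np.uint8)).save("mask_inverted.png")
Image.fromarray((solid_white * 255).astype(np.uint8)).save("mask_solid_white.png")
```
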
Invoke AI has a super comfortable and easy-to-use regional prompter that's based on simply drawing. I was wondering if there's one like that in ComfyUI, even as an external node?

With Set Latent Noise Mask it is trying to turn that blue/white sky into a spaceship, and that may not be enough for it; a higher denoise value is more likely to work in this instance. Also, if you want to creatively inpaint, inpainting models are not as good a choice, because they lean on what already exists in the image more than a normal model does.

In fact, from inpainting to face replacement, the usage of masks is prevalent in SD.

But when the Krita plugin happened I switched to that. At least that's what I think.

Txt-to-img, img-to-img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360 image generation, and even live painting!

Import the image at the Load Image node.

So far (Bitwise mask + mask) has only 2 masks, and I use auto-detect, so the masks can run from 5 to 10.

You can choose your preferred drawing software, like Procreate on an iPad, and then import the doodled image into ComfyUI.

Regional prompting makes that rather simple, all in one image, with multiple hand-drawn masks all in-app (my most complicated involved 8 hand-drawn masks). Sure, I can paint a mask with an outside app, but why would I bother when it's built into automatic1111? Any way to paint a mask inside Comfy, or is there no choice but to use an external image editor?

It's not released yet, but I just finished 80% of the features. So, has someone…

Combine both methods: gen, draw, gen, draw, gen! Always check the inputs, disable the KSamplers you don't intend to use, and make sure to use the same resolution in Photoshop as in ComfyUI.

To blend the image and the scroll naturally, I created a Border Mask on top.

This comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting for incredible results.

I learned about MeshGraphormer from that YouTube video by Scott Detweiler, but felt like simple inpainting does not do the trick for me, especially with SDXL.

If you have spent more than a few days in ComfyUI, you will recognize that there is nothing here that cannot be done with the already available nodes.

Outline Mask: unfortunately it doesn't work well, because apparently you can't just inpaint a mask; by default you also end up painting the area around it, so the subject still loses detail. IPAdapter: if you have to regenerate the subject or the background from scratch, it invariably loses too much likeness. Still experimenting with it, though.

Below is the effect image generated by the AI after I imported a simple bedroom line drawing:

TLDR, workflow: link.

Mask detailer allows you to simply draw where you want it to apply the detailing.

But basically, if you are doing manual inpainting, make sure that the sampler producing your inpainting image is set to fixed; that way it does the inpainting on the same image you used for masking.

I want to create a mask which follows the contours of the subject (a lady, in my case).

I don't know if there is a node for it (yet?) in ComfyUI, but I imagine that under the hood it would take each colored region and make a mask of each color, then use attention coupling on each mask with the associated regional prompt.
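
To make that per-color masking idea concrete, here is a small Python sketch (not an existing ComfyUI node) that takes a doodled layout image and produces one binary mask per region color. The palette, prompts, tolerance, and file name are placeholder assumptions; the attention-coupling step itself is left to whatever regional-prompting nodes you use.

```python
# Sketch: turn a color-doodled layout into one mask per region color.
# Each mask could then be paired with its own regional prompt downstream.
import numpy as np
from PIL import Image

# Placeholder palette: region color -> prompt it should be coupled with.
palette = {
    (255, 0, 0): "a knight in red armor",
    (0, 255, 0): "a lush forest background",
    (0, 0, 255): "a clear blue sky",
}

layout = np.asarray(Image.open("regions_doodle.png").convert("RGB"))

masks = {}
for color, prompt in palette.items():
    # Allow some tolerance so anti-aliased or hand-drawn edges still match.
    distance = np.abs(layout.astype(np.int16) - np.array(color, dtype=np.int16)).sum(axis=-1)
    mask = (distance < 30).astype(np.uint8) * 255
    masks[prompt] = mask
    Image.fromarray(mask, mode="L").save(f"mask_{'_'.join(map(str, color))}.png")

for prompt, mask in masks.items():
    coverage = mask.mean() / 255.0
    print(f"{prompt!r}: covers {coverage:.1%} of the image")
```
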
It needs a better quick start to get people rolling.

Would you please show how I can do this?

Use a "Mask from Color" node and set it to your first frame color; in this example it will be 255, 0, 0. This will set our red frame as the mask.

Use the mask tool to draw on specific areas, then use it as input to subsequent nodes for redrawing.

I use the "Load Image" node and "Open in MaskEditor" to draw my masks.

I think the latter, combined with Area Composition and ControlNet, will do what you want.

I suppose that does work for quick and dirty masks.

This workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM. It animates 16 frames and uses the looping context options to make a video that loops.

…one mask after the other. If something is off, I can redraw the masks as needed, one by one or only one of them.

Yet there is no mask node as a common-denominator node.

It's not that slow, but I was wondering if there was a more direct "latent with 'fog' background -> latent mask" node somewhere.

For example, the ADetailer extension automatically detects faces, masks them, creates new faces, and scales them to fit the masks.

But one thing I've noticed is that the image outside of the mask isn't identical to the input. Does anyone know why? I would have guessed that only the area inside of the mask would be modified.

I am working on a piece which requires a mask that reveals a texture.

The method is very simple; you still need to use the ControlNet model, but now you will import your hand-drawn draft.

Basically, though, you'd be using a mask: right-click on the Load Image node and draw the mask, then there is a node to snip it out and stitch it back in … pretty sure the node was called something like "stitch". In fact, there's a lot of inpainting stuff you can do with ComfyUI that you can't do with automatic1111.

As I can't draw the second mask on the result of the first character image (the goal is to do it in one workflow), I draw it on the original picture and send this mask only into the new VAE Encode (for Inpainting).

That way, if you take just the red channel from the mask, it'll give you just the red man, and not the background.

My mask images: I make them 512x512, but the size isn't important. They don't have to literally be single pixels, just small. Try drawing them over a black background, though, not a white background.

And I never know what ControlNet model to use.

Alternatively, you can create an alpha mask in any photo-editing software.

Currently, there are many extensions (custom nodes) available for background removal in ComfyUI, such as Easy-use, mixlab, WAS-node-suite, Inspyrenet-Rembg, and others.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make.

Does anyone else notice that you cannot mask the very bottom of the image with the right-click masking option? And I'm not talking about the mouse not being able to "mask" it there. And you can't use soft brushes.

Create a black and white image that will be the mask.

I think it's hard to tell what you think is wrong.

I need to combine 4-5 masks into one big mask for inpainting. You can do it with Masquerade nodes.
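
If you would rather merge the masks outside ComfyUI, or just want to see what such a combine node does conceptually, a union of several masks is a few lines of NumPy. This is a generic sketch with placeholder file names, not the Masquerade node's actual code, and it assumes all masks share the same resolution.

```python
# Sketch: merge several black-and-white mask images into one big inpainting mask.
# The union keeps a pixel selected if ANY of the input masks selects it.
import glob

import numpy as np
from PIL import Image

mask_paths = sorted(glob.glob("masks/mask_*.png"))  # placeholder: your 4-5 mask files
assert mask_paths, "no mask files found"

combined = None
for path in mask_paths:
    mask = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    combined = mask if combined is None else np.maximum(combined, mask)  # pixel-wise OR

Image.fromarray((combined * 255).astype(np.uint8), mode="L").save("combined_mask.png")
print(f"merged {len(mask_paths)} masks into combined_mask.png")
```
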
Release: AP Workflow 7.0 for ComfyUI - Now with support for Stable Diffusion Video, a better Upscaler, a new Caption Generator, a new Inpainter (with inpainting/outpainting masks), a new Watermarker, support for Kohya Deep Shrink, Self-Attention, StyleAligned, Perp-Neg, and IPAdapter attention masks.

Source image.

Thanks, everyone. People always ask about inpainting at full resolution: ComfyUI by default inpaints at the same resolution as the base image, since it does full-frame generation using masks.

Seems very hit and miss; most of what I'm getting looks like 2D camera pans.

The flow is in shambles right now, so I'll just share this screengrab.

"SEGS" is the format that Impact Pack uses to bundle masks with additional information.

Here I add one of my PNGs so you can see the whole workflow:

Here I come up against two problems:

Look into Area Composition (comes with ComfyUI by default), GLIGEN (an alternative area-composition method), and IPAdapter (a custom node on GitHub, available for manual or ComfyUI Manager installation).

The mask editor sucks. Even if you set the size of the masking circle to max and go over it closely enough that it appears to be fully masked, if you actually save it to the node and …

Feed this over to a "Bounded Image Crop with Mask" node, using our sketch image as the source with zero padding. This will take our sketch image and crop it down to just the drawing in the first box.

Hi, amazing ComfyUI community.

I kinda fake it by loading any image, then drawing a mask on it, then converting the mask to an image and sending that image to ControlNet.

Step One: Image Loading and Mask Drawing. Step Two: Building the ComfyUI Partial Redrawing Workflow.

Right-click on any image and select Open in MaskEditor. 💡 Tip: most of the image nodes integrate a mask editor.

The first issue is the biggest for me, though.

There are many detailer nodes, not just FaceDetailer.

You can see how easily and effectively the size and placement of the subject can be controlled simply by drawing a new mask.

Yeah, there are tools that do this; I can't check them right now, but I can later if you remind me.

Edit 2: Actually, now I understand what it's doing. I believe it does mostly the same things as OP's node.

The Krita plugin is great, but the nodal-soup part isn't there, so I can't change some things.

Is there a "drawing" node for ComfyUI that would be a bit more user friendly? Like the ability to zoom in on the parts you are drawing on, colors, etc.

A transparent PNG in the original size, containing only the newly inpainted part, will be generated.

It depends how you made the mask in the first place.

An alternative is Impact Pack's detailer node, which can do upscaled inpainting to give you more resolution, but this can easily end up giving you more detail than the rest of the image.

As for the rest: if memory serves, the mask/segm custom node has a couple of extra install steps which are easy to follow, and if you load the workflow and see red nodes, just go to the ComfyUI Manager in the side menu, click "Install Missing Custom Nodes", then restart and you should be good to go.

If you're using the built-in mask editor, just use a small brush and put dots outside the area you already masked.

Edit: And rembg fails on closed shapes, so it's not ideal.

Learn how to master inpainting on large images using ComfyUI and Stable Diffusion.

In ComfyUI, the easiest way to apply a mask for inpainting is: use the "Load Checkpoint" node to load a model, use the "Load Image" node to load the source image you want to modify, and use the "Load Image (as Mask)" node to load the grayscale mask image, specifying "red" as the channel.
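
For anyone driving ComfyUI from a script, here is a rough sketch of those same steps expressed as an API-format prompt and posted to the local HTTP endpoint. The checkpoint name, image file names, prompts, and sampler settings are placeholder assumptions, and input names can shift between ComfyUI versions, so export your own graph with "Save (API Format)" and compare rather than treating this as canonical.

```python
# Sketch: Load Checkpoint / Load Image / Load Image (as Mask) -> VAE Encode (for Inpainting),
# submitted as an API-format prompt to a locally running ComfyUI instance.
import json
import urllib.request

prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "dreamshaper_8Inpainting.safetensors"}},   # placeholder model
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "source.png"}},                                # image to modify
    "3": {"class_type": "LoadImageMask",                                     # "Load Image (as Mask)"
          "inputs": {"image": "mask.png", "channel": "red"}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a red brick wall", "clip": ["1", 1]}},         # positive prompt
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},      # negative prompt
    "6": {"class_type": "VAEEncodeForInpaint",
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2], "mask": ["3", 0],
                     "grow_mask_by": 6}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                     "latent_image": ["6", 0], "seed": 1234, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "inpaint_test"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",                  # default local ComfyUI address
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))               # returns the queued prompt id
```
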
You can paint all the way down or along the sides.

TLDR: THE LAB EVOLVED is an intuitive, ALL-IN-ONE workflow. It includes literally everything possible with AI image generation.

It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask-to-image, blur the image, then image-to-mask), use "only masked area" where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part), and …

Inpaint is pretty buggy when drawing masks in A1111.

A way to draw inside ComfyUI? Are there any nodes for sketching/drawing directly in ComfyUI? Of course you can always take things into an external program like Photoshop, but I want to try drawing simple shapes for ControlNet, or painting simple edits, before putting things into inpaint. What else is out there for drawing/painting a latent to be fed into ComfyUI, other than the Photoshop one(s)?

For these workflows we mostly use DreamShaper Inpainting. Thanks.

I want to be able to use Canny and Ultimate SD Upscale while inpainting, AND I want to be able to increase the batch size.

How can I draw regional prompting like InvokeAI's regional prompting (control layers), which lets you draw the regions rather than type in numbers? Title says it all.

I have this working; however, to mask the upper layers after the initial sampling, I VAE-decode them and use rembg, then convert that to a latent mask.

For the specific workflow, please download the workflow file attached to this article and run it.

I'm not sure exactly what it stores, but I always draw a mask, send it to MaskToSEGS (where I can set the crop factor to determine the region used for context), then to SEGS Detailer.

This workflow, combined with Photoshop, is very useful for drawing specific details (tattoos, a special haircut, clothes patterns, …). Load the upscaled image into the workflow, use ComfyShop to draw a mask, and inpaint.

comfyui_facetools (Mar 10, 2024): these custom nodes provide rotation-aware face extraction, paste-back, and various face-related masking options. Unless you specifically need a library without dependencies, I recommend using Impact Pack instead; it's a more feature-rich and well-maintained alternative for dealing with …

What is the rationale behind the drawing of the mask? I don't want to break my drawing/painting workflow by editing CSV files and calculating rectangle areas.

For some reason this isn't possible.

And if you wanted 4 masks in one image, draw over a transparent background in a .png file; then R, G, B, and Alpha can each mask a different area.
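
As an illustration of that trick, here is a generic Python sketch (not a specific ComfyUI node) that pulls the four channels of an RGBA .png apart into four separate masks using Pillow and NumPy; the file name is a placeholder.

```python
# Sketch: split one RGBA .png into four masks, one per channel (R, G, B, Alpha).
import numpy as np
from PIL import Image

rgba = np.asarray(Image.open("four_masks.png").convert("RGBA"))  # shape (H, W, 4)

for index, name in enumerate(["red", "green", "blue", "alpha"]):
    channel = rgba[..., index]                        # 0-255 values for this channel
    Image.fromarray(channel, mode="L").save(f"mask_{name}.png")
    print(f"{name}: {(channel > 127).mean():.1%} of pixels selected")
```
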