ComfyUI upscaling: tips from Reddit

Now I am trying different start-up parameters for ComfyUI, like disabling smart memory, etc.

For those also researching: Krea.ai has 50 free uploads and unlimited for $24, compared to $40 for 200 upscales with Magnific.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here. With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail. Instead, I use Tiled KSampler with 0.… denoise.

(Change the Pos and Neg Prompts in this method to match the Primary Pos and Neg Prompts.)

If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook.

The reason for the strange noise artifacts is actually poor latent upscaling between stages. Better upscaling of the latents fixes that.

That said, Upscayl is SIGNIFICANTLY faster for me. I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows.

Again, would really appreciate any of your Comfy 101 materials, resources, and creators, as well as your advice re …

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow. Then plug the output from this into a "latent upscale by" node set to whatever you want your end image to be at (lower values like 1.5 are usually a better idea than going 2+ here, because latent upscale introduces noise, which requires an offset denoise value in the following ksampler), then a second ksampler at 20+ steps set to probably over 0.5 denoise.

- Image upscale is less detailed, but more faithful to the image you upscale.
- Latent upscale looks much more detailed, but gets rid of the original image's detail.

Both of these are of similar speed. Also, both have a denoise value that drastically changes the result.

Tried the llite custom nodes with lllite models and was impressed. Good for depth and openpose; so far so good. I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels. This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description that applies in the area defined by the coordinates starting from x:0px y:320px, to x:768px y:…

I am using the primitive node to increment values like CFG, Noise Seed, etc., like incrementing the seed by 5000 each render, or the CFG by 0.01. I would like to use a step value.

What are the pros and cons of using the kohya deep shrink over using 2 ksamplers to upscale? I find the kohya method significantly slower, since the whole pass is now done at high res instead of only partially done at high res.

In fact, there's a lot of inpainting stuff you can do with ComfyUI that you can't do with Automatic1111.

It's fairly small (206x206), and I'm then upscaling in Photopea to 512x512 just to give me a base image that matches the 1.5 models (seems pointless to go larger).

I upscaled it to a resolution of 10240x6144 px for us to examine the results. I want you guys' opinion on the upscale; you can download both images from my Google Drive. I cannot upload them here since they are both 500 MB - 700 MB.

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.
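Outside of ComfyUI, the same pixel-space trick is easy to sanity-check. Here is a minimal sketch using Pillow, assuming you have already saved the output of a 4x model pass; the file names are made up for illustration:

```python
# Fractional bicubic resize after a 4x model pass: net 2x upscale overall.
from PIL import Image

img = Image.open("upscaled_4x.png")   # hypothetical path: output of a 4x upscale model
factor = 0.5                          # 4x model * 0.5 = 2x overall
target = (int(img.width * factor), int(img.height * factor))
img.resize(target, Image.Resampling.BICUBIC).save("final_2x.png")
```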
I generally do the Reactor swap at a lower resolution, then upscale the whole image in very small steps (1.15-1.…) with very, very small denoise amounts.

… 0.6 denoise and either: CNet strength 0.9, end_percent 0.5, euler, sgm_uniform, or CNet strength 0.9, euler …

You could add a latent upscale in the middle of the process, then an image downscale in pixel space at the end (use the upscale node with a fractional value, e.g. 0.5).

The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense! And latent upscaling takes longer for sure; no wonder my workflow was so fast. I gave up on latent upscale.

If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details).

Supports: basic txt2img; basic img2img; inpainting (with auto-generated transparency masks); multi-ControlNet with preprocessors; hires fix with an add-detail LoRA; annotator previews.

Custom nodes: ComfyUI-Image-Selector, ComfyUI-Custom-Scripts, rgthree-comfy, ComfyUI-Impact-Pack.

I would output my image and keep the resolution down while any non-tiled sampler is going to be working on it. Then I would do a model upscale > resize, or instead a tiled upscaling approach.

I made one (FaceDetailer > Ultimate SD Upscale > EyeDetailer > EyeDetailer).

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4x upscale model).

The workflow has different upscale flows that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image.

Latent upscale is different from pixel upscale, and for upscaling there are many options: basic latent upscale; basic upscaling via a model in pixel space; with tile ControlNet; with SD Ultimate Upscale; with LDSR; with SUPIR; and whatnot.

I want to upscale my image with a model and then select its final size. There's "latent upscale by", but I don't want to upscale the latent image. The only way I can think of is just Upscale Image Model (4xUltraSharp), get my image to 4096, and then downscale with nearest-exact back to 1500.

ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

Simple upscale and upscaling with a model (like UltraSharp). Repeating steps 2-4 upscales through additional multiples to get even higher resolution output, if that's desired.

For upscaling with img2img, you first upscale/crop the source image (optionally using a dedicated scaling model like UltraSharp or something), convert it to latent, and then run the ksampler on it. This is what A1111 also does under the hood; you just have to do it explicitly in ComfyUI.

Look at this workflow: it's high quality, and easy to control the amount of detail added using control scale and restore cfg, but it slows down at higher scales faster than Ultimate SD Upscale does.

So if you want 2.2x, upscale using a 4x model (e.g. UltraSharp), then downscale. To find the downscale factor in the second part, calculate: factor = desired total upscale / fixed upscale; factor = 2.2 / 4.0 = 0.55.
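Spelled out, that arithmetic is a single division. A quick Python sketch (the starting resolution is only an example):

```python
# Net-scale bookkeeping for "4x model, then downscale" pipelines.
def downscale_factor(desired_total: float, model_scale: float = 4.0) -> float:
    # factor = desired total upscale / fixed model upscale
    return desired_total / model_scale

factor = downscale_factor(2.2)       # 2.2 / 4.0 = 0.55
w, h = 832, 1216                     # example starting resolution
print(factor)                                     # 0.55
print(int(w * 4 * factor), int(h * 4 * factor))   # 1830 2675, i.e. a net 2.2x
```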
After a final high-resolution latent is obtained, they then do something they call "shifted crop sampling with dilated sampling". Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

The reason I haven't raised issues on any of the repos is that I am not sure where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely.

… (1.X values) if you want to benefit from the higher-res processing.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

I have a custom image resizer that ensures the input image matches the output dimensions.

The idea is simple: use the refiner as a model for upscaling instead of using a 1.5 model. This is done after the refined image is upscaled and encoded into a latent. It can be applied to Automatic easily.

That was many versions ago, and I've updated a lot since then.

I haven't been able to replicate this in Comfy yet; when I try to …

The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111. This is not the case. The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale.

Also make sure you install missing nodes with ComfyUI Manager.

Here is a workflow that I use currently with Ultimate SD Upscale.

Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more). Details and bad-hands LoRAs are loaded. I use it with DreamShaperXL mostly and it works like a charm.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area composition ones. The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

So I'm happy to announce today: my tutorial and workflow are available.

I liked the ability in MJ to choose an image from the batch and upscale just that image. Input your choice of checkpoint and LoRA in their respective nodes in Group A. Click New Fixed Random in the Seed node in Group A. Mute the two Save Image nodes in Group E. Click Queue Prompt to generate a batch of 4 image previews in Group B. This next queue will then create a new batch of four images, but also upscale the selected images cached in the previous prompt.
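The pick-from-batch idea (what ComfyUI-Image-Selector is used for here) boils down to keeping every preview and re-running with only the chosen indices. A toy sketch of that selection logic, with hypothetical file names and no ComfyUI API involved:

```python
# Keep every preview, then pass only the picked ones to the upscale stage.
from typing import Sequence

def select_images(batch: Sequence[str], picks: Sequence[int]) -> list[str]:
    """Return the batch members whose 1-based index was picked."""
    return [batch[i - 1] for i in picks]

previews = ["gen_001.png", "gen_002.png", "gen_003.png", "gen_004.png"]
to_upscale = select_images(previews, picks=[2, 4])
print(to_upscale)  # ['gen_002.png', 'gen_004.png']
```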
Under 4K: generate at base SDXL size with extras like character models or ControlNets -> face / hand / manual area inpainting with differential diffusion -> UltraSharp 4x -> unsampler -> second ksampler with a mixture of inpaint and tile ControlNet (I found only using tile ControlNet blurs the image).

Just created my first upscale layout last night and it's working (slooow on my 8GB card, but results are pretty), but I'm eager to see what your approaches look like to such things, and LoRAs and inpainting, etc.

Depending on the noise and strength, it ends up treating each square as an individual image. So instead of one girl in an image, you get 10 tiny girls stitched into one giant upscaled image.

You can use it on any picture; you will need ComfyUI_UltimateSDUpscale.

2 options here. Upscale x1.5 ~ x2: no need for a model, it can be a cheap latent upscale. Sample again, denoise=0.5, don't need that many steps. From there you can use a 4x upscale model and run sampling again at low denoise if you want higher resolution.

Hello, for more consistent faces I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent upscale the image and use the Reactor node to map the same face used in the IPAdapter onto the latent-upscaled image. This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc. If I feel I need to add detail, I'll do some image blend stuff and advanced samplers to inject the old face into the process.

Due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended.

Hello, I started using ComfyUI about a year ago. Back then I had a nice workflow where I could batch many images in 1.5 and toggle an upscale node if I wanted to, by dragging the image into Comfy and switching a link.

I try to use ComfyUI to upscale (using SDXL 1.0 + Refiner). This is the image I created, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0. Thank you for the help; I'm new to ComfyUI, so not an expert.

I've played around with different upscale models in both applications, as well as settings. The results are comparable.

I like how IPAdapter with masking allows me to not have to write detailed prompts, and yet it still maintains the fidelity of the subject and background, or any other masked elements for that matter.

And when purely upscaling, the best upscaler is called LDSR. The downside is that it takes a very long time.

Great video! I've gotten this far up-to-speed with ComfyUI, but I'm looking forward to your more advanced videos.

The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do 4x and that's often too big to process), then send it back to VAE encode and sample it again. If you want more resolution, you can simply add another Ultimate SD Upscale node. For example, if you start with a 512x512 empty latent image, then apply a 4x model and apply "upscale by" 0.5, you get a 1024x1024 final image (512 * 4 * 0.5 = 1024).

The problem here is the step after your image loading, where you scale up the image using the "Image Scale to Side" node. Nearest-exact is a crude image upscaling algorithm that, when combined with your low denoise strength and step count in the KSampler, means you are basically doing nothing to the image when you denoise it, leaving all the jagged pixels introduced by your initial upscale.
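To see why nearest-neighbor scaling leaves those jagged pixels while an interpolating filter does not, you can compare the two directly. A small Pillow sketch; the file names are placeholders:

```python
# Nearest duplicates pixels (stair-stepped edges); bicubic interpolates.
from PIL import Image

src = Image.open("base_512.png")   # hypothetical source image
src.resize((1024, 1024), Image.Resampling.NEAREST).save("up_nearest.png")
src.resize((1024, 1024), Image.Resampling.BICUBIC).save("up_bicubic.png")
```

A low-denoise sampling pass over the nearest-neighbor version will faithfully preserve the staircase artifacts, which is exactly the failure mode described above.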
It uses CN tile with Ult SD Upscale.

You either upscale in pixel space first and then do a low-denoise 2nd pass, or you upscale in latent space and do a high-denoise 2nd pass. This will allow detail to be built in during the upscale.

Click on Install Missing Custom Nodes and install any missing nodes. Search for "upscale" and click Install for the models you want.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

Here are details on the workflow I created: this is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption. I'm new to ComfyUI, and I'm aware that people create amazing stuff with just prompts and detailers. It's messy right now, but it does the job.

I'm trying to find a way of upscaling the SD video up from its 1024x576. I've so far achieved this with the Ultimate SD image upscale and the 4x-Ultramix_restore upscale model. The resolution is okay, but if possible I would like to get something better. Does anyone have any suggestions? Would it be better to do an iterative upscale?

I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.

Generates an SD1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with …

Any guide on creating comic books with SD? I'm interested in developing a workflow that maintains character, scene, and style consistency, and…

There are also "face detailer" workflows for faces specifically.

Since you have only 6GB VRAM, I would choose tile ControlNet + SD Ultimate Upscale.

Tried it; it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt), otherwise the picture gets baked instantly. You cannot go higher than 512-768 resolution either (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

I generate an image that I like, then mute the first ksampler, unmute the Ult. SD Upscaler, and upscale from that.

My problem is that my generation produces a 1-pixel line at the right/bottom of the image which is weird/white.

Try NNLatentUpscale instead of the regular latent upscale node. Latent quality is better, but the final image deviates significantly from the initial generation.

In this easy ComfyUI tutorial, you'll learn step-by-step how to upscale in ComfyUI. You get to know different ComfyUI upscalers and get exclusive access to my Co…

In addition to whole-image inpainting and mask-only inpainting, I also have workflows that upscale the masked region to do the inpaint, and then downscale it back to the original resolution when pasting it back in.

"The training requirements of our approach consists of 24,602 A100-GPU hours – compared to Stable Diffusion 2.1's 200,000 GPU hours." From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion. That's a cost of about …

The only approach I've seen so far is using the Hires Fix node, where its latent input comes from AI upscale > downscale image nodes. I like doing a basic first-pass latent upscale before that. If I want any enhancements/details that latent upscaling can provide, I limit the upscale to around 1.25 to keep the processing and VRAM usage lower.
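The 1.25x cap makes sense once you remember that pixel count grows with the square of the linear factor, and memory and sampling time grow with it. A quick illustration in Python:

```python
# Pixel-count growth per upscale factor, from a 1024x1024 base.
base_w, base_h = 1024, 1024
for factor in (1.25, 1.5, 2.0, 4.0):
    w, h = int(base_w * factor), int(base_h * factor)
    print(f"x{factor}: {w}x{h} = {w * h / 1e6:.1f} MP ({factor ** 2:.2f}x the pixels)")
```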
After borrowing many ideas and learning ComfyUI, I created a workflow with Comfy for upscaling images.

That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI.

Also, I did edit the ComfyUI-Custom-Scripts custom node's Python file, string_function.py, in order to allow the "preview image" node to …

Because I don't understand why Ultimate SD Upscale can manage the same resolution in the same configuration but SUPIR cannot. Maybe somewhere I will point out the issue. If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it.

You end up with images anyway after ksampling, so you can use those upscale nodes.

I have been generally pleased with the results I get from simply using additional samplers. My workflow runs about like this: [ksampler] [VAE decode] [Resize] [VAE encode] [ksampler #2 thru #n]. I typically use the same or a closely related prompt for the additional ksamplers, the same seed, and most other settings, with the only differences among my (for example) four ksamplers in the #2-#n positions.

For a dozen days, I've been working on a simple but efficient workflow for upscaling. It's a 2x upscale workflow.

I find if it's below 0.5 for latent upscale you can get issues. I tend to use a 4x UltraSharp image upscale and then re-encode back through a ksampler at the higher resolution with a 0.3 denoise; it takes a bit longer but gives more consistent results than latent upscale.

Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases. No matter what, Upscayl is a speed demon in comparison.

Very simple workflow to compare a few upscale models. You can find pixel upscale models on OpenModelDB; if you don't know where to start, try UltraSharp, RealSR, or Remacri.

After 6 days of hard work (2 days of building, 1 day of testing, 2 days of recording, 1 day of editing, and very little sleep), I finally managed to upload this! Full tutorial in the YouTube description (it's entirely free, of course), and the video goes into 1h of detailed instructions on how to build it yourself (because I prefer for someone to learn how to fish than to give them a fish 😂).

Images are too blurry and lack details; it's like upscaling any regular image with some traditional methods.

Newcomers should familiarize themselves with easier-to-understand workflows, as it can be somewhat complex to understand a workflow with so many nodes in detail, despite the attempt at a clear structure.

This is a plugin that allows users to run their favorite features from ComfyUI while, at the same time, being able to work on a canvas.

That's because latent upscale turns the base image into noise (blur). It's why you need at least 0.5 denoise. Try immediately VAEDecode after latent upscale to see what I mean.

The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts together.
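Split-upscale-stitch is also the core of tiled upscalers like Ultimate SD Upscale, minus the per-tile sampling and seam blending. A bare-bones sketch of just the tiling geometry, using Pillow with a plain resize as a stand-in for the per-tile work (paths are illustrative):

```python
# Split the image into tiles, "upscale" each tile, and stitch the results.
from PIL import Image

def tiled_upscale(src: Image.Image, tile: int = 512, scale: int = 2) -> Image.Image:
    out = Image.new("RGB", (src.width * scale, src.height * scale))
    for top in range(0, src.height, tile):
        for left in range(0, src.width, tile):
            box = (left, top, min(left + tile, src.width), min(top + tile, src.height))
            part = src.crop(box)
            # A real workflow would run a low-denoise sampler on each tile here.
            part = part.resize((part.width * scale, part.height * scale),
                               Image.Resampling.LANCZOS)
            out.paste(part, (left * scale, top * scale))
    return out

tiled_upscale(Image.open("gen.png").convert("RGB")).save("gen_2x.png")
```

Without the overlap and blending that the real nodes add, tile borders stay visible at high denoise, which is where the "10 tiny girls" seam complaints above come from.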
I share many results, and many ask me to share. Ugh.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

That's because of the model upscale.

ComfyUI Manager issue: I am now just setting up ComfyUI, and I have issues (already, LOL) with opening the ComfyUI Manager from CivitAI. Basically, it doesn't open after downloading (v.22, the latest one available).

The final node is where ComfyUI takes those images and turns them into a video.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, and with fast renders (10 minutes on a laptop RTX 3060). Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ult SD Upscale. 2x upscale using Ultimate SD Upscale and Tile ControlNet.

Put your folder in the top-left text input.

It depends on how large the face in your original composition is. If it's a distant face, then you probably don't have enough pixel area to do the fix justice; upscale and then fix will work better here.

Credit to Sytan's SDXL workflow, which I reverse-engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. I wanted to know what difference they make, and they do!
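A systematic way to run that kind of scheduler test is to sweep the combinations with a fixed seed and compare the saved outputs side by side. A hedged sketch; run_upscale_pass is a hypothetical stand-in for however you queue the ComfyUI job:

```python
# Enumerate sampler/scheduler combos for an upscale pass at a fixed seed.
import itertools

samplers = ["euler", "dpmpp_2m", "dpmpp_3m_sde"]
schedulers = ["normal", "karras", "sgm_uniform", "exponential"]

for sampler, scheduler in itertools.product(samplers, schedulers):
    print(f"queue: seed=12345 sampler={sampler} scheduler={scheduler}")
    # run_upscale_pass(seed=12345, sampler=sampler, scheduler=scheduler)  # hypothetical
```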