Best upscale models and workflows for ComfyUI: a Reddit roundup

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. I am curious both which nodes are the best for this, and which models. Curious if anyone knows the most modern, best ComfyUI solutions for two related problems. Detailing/refining: keeping the same resolution but re-rendering the image with a neural network to get a sharper, clearer result. Upscaling: increasing the resolution and the sharpness at the same time.

First, the basics. ComfyUI uses a flowchart diagram model: you create nodes and "wire" them together. I'd say it allows a very high level of access and customization, more than A1111, but with added complexity. If you want a better grounding in making your own ComfyUI systems, consider checking out my tutorials. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

On speed versus quality: fastest would be a simple pixel upscale with lanczos. That's practically instant but doesn't do much either. A pixel upscale using a model like UltraSharp is a bit better, and slower, but it'll still be fake detail when examined closely. If you want actual detail in a reasonable amount of time, you'll need a second pass with a second sampler. And when purely upscaling, the best upscaler is called LDSR; the downside is that it takes a very long time.

It also helps to keep two operations straight. "Latent upscale" is an operation in latent space, and I don't know any way to use a pixel-space upscale model there; the standard latent scaling modes are bicubic, bilinear, and bislerp. "Upscaling with model" is an operation on normal images, where we can use a corresponding model such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth. Latent upscale looks much more detailed, but it gets rid of the detail of the original image; that's because latent upscale turns the base image into noise (blur). So there's "Latent Upscale By", but often you don't actually want to upscale the latent image. Image upscale is less detailed, but more faithful to the image you upscale. Also, both have a denoise value that drastically changes the result. If you don't want the distortion, decode the latent, run Upscale Image By, then encode it again for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.
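For intuition, the latent scaling modes mentioned above behave like ordinary tensor interpolation. Here is a minimal sketch in plain PyTorch, under the assumption of a standard SD latent layout; bislerp is a ComfyUI-specific spherical interpolation with no direct torch equivalent:

```python
import torch
import torch.nn.functional as F

# Hypothetical SD latent: (batch, 4 channels, H/8, W/8) for a 512x768 (WxH) image.
latent = torch.randn(1, 4, 96, 64)

# Rough equivalents of the "Latent Upscale By" modes discussed above.
up_bilinear = F.interpolate(latent, scale_factor=2.0, mode="bilinear", align_corners=False)
up_bicubic = F.interpolate(latent, scale_factor=2.0, mode="bicubic", align_corners=False)

print(up_bicubic.shape)  # torch.Size([1, 4, 192, 128])
```

Whichever mode you pick, the resized latent still needs a second sampling pass at some denoise to repair the blur the interpolation introduces.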
From the ComfyUI_examples, there are two different 2-pass ("hires fix") methods: one is latent scaling, the other is non-latent scaling. If you want to use RealESRGAN_x4plus_anime_6B, you need to work in pixel space and forget any latent upscale. (Note that there is no tiling in the default A1111 hires fix, and messing around with upscale-by-model is pointless for hires fix.) One cheap latent recipe: base SD v1.5 model; upscale x1.5 ~ x2, no need for an upscale model, it can be a cheap latent upscale; sample again at denoise=0.5, you don't need that many steps; from there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution. See the workflow for more info.

For the non-latent version, your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2). So from VAE Decode you need an "Upscale Image (using Model)" node, plus another node under loaders, the "Load Upscale Model" node. Connect Load Upscale Model to Upscale Image (using Model), feed it the image from VAE Decode, and from that image you can also branch off to your preview/save image node. Attach the re-encoded result to the second KSampler as its latent_image; in this case it's the upscaled latent. (Some workflows insert a "latent chooser" node here; it works but is slightly unreliable.) Reusing the same seed is probably not necessary and can cause bad artifacting through the "burn-in" problem when you stack same-seed samplers. I believe this should work with 8GB of VRAM provided your SDXL model and upscale model are not super huge, e.g. use a 2x upscaler model. You can easily adapt the scheme below for your custom setups.
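To make the wiring concrete, here is how that chain could look in ComfyUI's API ("prompt") format, posted to a locally running instance. This is a sketch, not the exact workflow from the thread: the class_type names match stock ComfyUI nodes, but the checkpoint and upscaler filenames, prompts, seeds, and sampler settings are placeholders to swap for your own.

```python
import json
import urllib.request

# Two-pass sketch: sample -> decode -> 4x model upscale -> halve to net 2x
# -> re-encode -> low-denoise second pass -> save.
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "your_checkpoint.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "your prompt here", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "your negative prompt", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 768, "height": 1152, "batch_size": 1}},
    "5": {"class_type": "KSampler",  # first pass at base resolution
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 1, "steps": 20, "cfg": 7.0,
                     "sampler_name": "dpmpp_2m", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x_NMKD-Siax_200k.pth"}},
    "8": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["7", 0], "image": ["6", 0]}},
    "9": {"class_type": "ImageScaleBy",  # 4x result * 0.5 = net 2x
          "inputs": {"image": ["8", 0], "upscale_method": "lanczos", "scale_by": 0.5}},
    "10": {"class_type": "VAEEncode", "inputs": {"pixels": ["9", 0], "vae": ["1", 2]}},
    "11": {"class_type": "KSampler",  # second pass: new seed, low denoise
           "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                      "latent_image": ["10", 0], "seed": 2, "steps": 20, "cfg": 7.0,
                      "sampler_name": "dpmpp_2m", "scheduler": "normal", "denoise": 0.4}},
    "12": {"class_type": "VAEDecode", "inputs": {"samples": ["11", 0], "vae": ["1", 2]}},
    "13": {"class_type": "SaveImage",
           "inputs": {"images": ["12", 0], "filename_prefix": "two_pass_upscale"}},
}

req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": prompt}).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
```

Queueing it this way is equivalent to wiring the same nodes in the UI; by default the ComfyUI server listens on port 8188.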
hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. Now I have made a workflow that has an upscaler in it and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs. So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111? The resolution is okay, but if possible I would like to get something better. I am curious both which nodes are the best for this, and which models.

In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more). So in those other UIs I can use my favorite upscaler (like NMKD's 4x Superscalers) but I'm not forced to have them only multiply by 4x. Usually I use two of my workflows: for upscaling I mainly used the chaiNNer application with models from the Upscale Wiki Model Database, but I also used the fast-stable-diffusion Automatic1111 Google Colab and the Replicate super-resolution collection. Sometimes models appear twice, for example "4xESRGAN" used by chaiNNer and "4x_ESRGAN" used by Automatic1111. (And about one popular pick: that's a good model, but to be very clear it's not "objectively better" than anything else on that site; purpose-built upscale models are not "advancing" in the way some posts seem to believe.)

An alternative method: make sure you are using the KSampler (Efficient) version, or another sampler node that has the "sampler state" setting, for the first-pass (low resolution) sample. Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results; even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like a face in the bottom right instead of a teddy bear. If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook. For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results. In the saved workflow the factor is at 4, with 10 steps (Turbo model), which is like a 60% denoise; you could also try a standard checkpoint with, say, 13 and 30 steps. FWIW, I was using this with the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen.

On picking the multiplier: I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale, and I probably wouldn't upscale by 4x at all if fidelity is important. You can also run a regular AI upscale and then a downscale (4x * 0.5) with an ESRGAN model.
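A sketch of that upscale-then-downscale trick in pixel space, assuming you have already saved the output of a 4x model to disk (Pillow only; the filenames are placeholders):

```python
from PIL import Image

# Output of a 4x upscale model (e.g. 4x_NMKD-Siax_200k) saved to disk.
img = Image.open("upscaled_4x.png")
w, h = img.size

# Halve it with lanczos for a net 2x over the original, as suggested above.
img.resize((w // 2, h // 2), Image.Resampling.LANCZOS).save("upscaled_2x.png")
```

Inside ComfyUI, the equivalent is an Upscale Image By node set to 0.5 right after the model upscale, as in the chain shown earlier.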
Ultimate SD Upscale is the best for me. You can use it with a ControlNet tile model in SD 1.5; now I use it only with SDXL (bigger tiles, 1024x1024) and I run it multiple times with decreasing denoise and cfg. Yep, people do say that Ultimate SD Upscale works for SDXL as well now, but it didn't work for me, so I first create the image with SDXL and then do the Ultimate upscale with an SD 1.5 model such as DreamShaper or others which provide good details. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part with a 2-step upscale using the refiner model via Ultimate SD Upscale like you mentioned; it's a lot faster than tiling, but the outputs aren't as detailed. I can understand that with the Ultimate Upscale one could add more details through extra steps/noise or whatever you'd like to tweak on the node, but from what I've generated so far, the model upscale edges out the Ultimate Upscale slightly. Though, from what someone else stated, it comes down to use case. TLDR: both seem to do better and worse in different parts of the image, so potentially combining the best of both (Photoshop, seg/masking) can improve your upscales. In any case, this ComfyUI nodes setup lets you use the Ultimate SD Upscale custom nodes in your regular ComfyUI generation routine.

For heavier jobs, one approach generates an SD 1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching. Getting the absolute best upscales requires a variety of techniques and often regional upscaling at some points. I tried the llite custom nodes with lllite models and was impressed, good for depth and OpenPose so far, but I found a tile model and could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile. For video, I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos, then output everything to Video Combine.

Faces are their own problem. What's the best method to upscale faces after doing a faceswap with ReActor? It's a 128px model, so the output faces after faceswapping are blurry and low-res. ReActor has built-in CodeFormer and GFPGAN, but all the advice I've read said to avoid them; there are also "face detailer" workflows for faces specifically. I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer. For comparison, in A1111 I drop the ReActor output image into the img2img tab, keep the same latent size, use a tile ControlNet model, and choose the Ultimate SD Upscale script to scale it up; with a denoise setting of 0.15-0.25 I get a good blending of the face without changing the image too much. I haven't been able to replicate this in Comfy; does anyone have any suggestions, or would it be better to do an iterative…? Also, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from there. Thanks. A related gotcha: the workflow turns out lovely results, but when I get to the upscale stage the face changes to something very similar every time; that's because of the model upscale. One workaround is to go back to img2img, mask the important parts of your image, and upscale just that. (A few examples of my ComfyUI workflow make very detailed 2K images of real people, cosplayers in my case, using LoRAs and with fast renders: 10 minutes on a laptop RTX 3060. There is also IP-Adapter-FaceID for transferring faces; a full tutorial, GUI for Windows, RunPod & Kaggle, and web app are hopefully upcoming.)

Sometimes what I want is simpler: load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example); that is, upscale my image with a model and then select the final size of it. I've so far achieved this with the Ultimate SD image upscale and the 4x-Ultramix_restore upscale model.
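With stock nodes you can hit an exact final size the same way: follow the fixed-factor model upscale with Upscale Image By and compute the remaining factor yourself. The helper below is hypothetical, just the arithmetic from the example above (1024 to 1500 through a 4x model):

```python
def scale_by_for_target(src: int, target: int, model_factor: int = 4) -> float:
    """Factor to feed an "Upscale Image By" node after a fixed-factor model upscale.

    A 1024px image through a 4x model comes out at 4096px; to land on
    1500px you scale the result by 1500 / 4096.
    """
    return target / (src * model_factor)

print(scale_by_for_target(1024, 1500))  # ~0.366
```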
One hiccup I hit along the way: "Import times for custom nodes: 0.0 seconds (IMPORT FAILED): R:\diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale". The custom node was failing to load, but I think that is a separate issue. Edit: I changed models a couple of times, restarted Comfy a couple of times… and it started working again.

As for what's current: I was looking through the ComfyUI nodes today and noticed there is a new one, SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4x upscale model). I ran some tests this morning and decided to pit the two head to head; here are the results, workflow pasted below (I did not bind it to the image metadata because I am using a very custom, weird setup). Before that, I had tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD's, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node, a mouthful) and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work). Super late here, but is this still the case? I've got CCSR & TTPlanet, and I took a 2-4 month hiatus, basically when the OG upscale checkpoints like SUPIR came out, so I have no heckin' idea what the go-to is these days; for me it was basically txt2img, img2img, then a 4x upscale with a few different upscalers. Meanwhile, AUTOMATIC1111 has finally fixed the high VRAM issue in pre-release version 1.6.0-RC: it takes only 7.5GB of VRAM even when swapping in the refiner; use the --medvram-sdxl flag when starting. And Flux has been out for under a week with some great innovation in the open-source community already; I was putting together my guide on running Flux on RunPod ($0.34 per hour) and discovered a workflow by @plasm0 that runs locally and supports upscaling as well.

A side quest: what is the best aesthetic scorer custom node suite for ComfyUI? I'm working on the upcoming AP Workflow 8.0 and want to add an Aesthetic Score Predictor function, but the custom node suites I found so far either lack the actual score calculator, don't support anything but CUDA, or have very basic rankers (unable to process a batch, for example). Related, for captioning: Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning. It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt simply is "Caption the image" or "Describe the image", Florence2 wins. But for the other stuff, super small models and good results.

Finally, some housekeeping. To install upscale models, click Install Models on the ComfyUI Manager menu, search for "upscale", and click Install for the models you want. Remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation instructions. Forgot to mention: for the inpainting variant you will have to download the inpaint model from Hugging Face (diffusers/stable-diffusion-xl-1.0-inpainting-0.1) and put it in your ComfyUI "unet" folder, which can be found in the models folder. And for AMD cards on Windows there is DirectML: pip install torch-directml, then launch ComfyUI with python main.py --directml.
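If you would rather fetch an upscaler by hand than through the Manager, the files live in ComfyUI's models/upscale_models folder. A hypothetical helper, with a placeholder URL to replace with the real download link from the model's page:

```python
import pathlib
import urllib.request

# Placeholder URL: substitute the actual download link for the model you want.
url = "https://example.com/4x-UltraSharp.pth"
dest = pathlib.Path("ComfyUI/models/upscale_models") / url.rsplit("/", 1)[-1]

dest.parent.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(url, dest)  # ComfyUI lists it after the next restart/refresh
print(f"saved to {dest}")
```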