Stable Diffusion face refiners: a Reddit discussion roundup
 

Stable diffusion face refiner online reddit 9 looked great after the refiner, but with 1. I’ve already been experimenting with this method of cropping characters out and building composites. The Face Restore feature in Stable Diffusion has never really been my cup of tea. g. 75 before the refiner ksampler. 4 noise reduction. One guess is that the workflow is looking for the Control-LoRAs models in the cached directory (which is my directory on my computer). This isn't just a picky point -- its to underline that larding prompts with "photorealistic, ultrarealistic" etc -- tend to make a generative AI image look _less_ like a photograph. Experimental Functions. 5 to 1. Only dog, also perfect. I do it to create the sources for my MXAI embeddings, and I probably only have to delete about 10% of my source images for not having the same face. My idea is to go bit by bit with inpaint. An example: You impaint the face of the surprised person and after 20 generation it is just right - now that's it. 5, we're starting small and I'll take you along the entire journey. To encode the image you need to use the "VAE Encode (for inpainting)" node which is under latent->inpaint. If you look at the base image, I prefer the features she has there, with the rounded nose and the shape of the mouth. the refiner doesn't even need cross-attention since it only runs on timesteps 200->0 where cross attention isn't used If you're using ComfyUI you can right click on a Load Image node and select "Open in MaskEditor" to draw an inpanting mask. 5 despite it being 1024 model it looks like its upscaled 512 + blurry without refiner, i think something went wrong during /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. It just doesn't automatically refine the picture. I'm not really a fan of that checkpoint, but a tip to creating a consistent face is to describe it and name the "character" in the prompt. And after all that refiner I quess. Please keep posted images SFW. a photo of an ugly 35 year old Tongan woman 2. the hand color does not see very healthy, I think the seeding took pixels from outfit. Sure, it's not 2. With the new release of SDXL, it's become increasingly apparent that enabling this option might not be your best bet. 0 refiner. I used the refiner as a LoRa with 15 steps, CFG set to 8, euler, and 0. 2 Be respectful and follow Reddit's Content Policy. With 100 steps refiner the face of the man and the fur on the dog are smoother, but whether that is preferable for an oil painting is a matter of personal preference. Thanks. fix Next, we'll explore the Refiner. " Consequently, the refiner will take the previous latent image and, after rendering, transform it into this: Now, some incorrect comparisons I've encountered involve using a node configuration similar to this: just made this using epicphotogast and the negative embedding EpicPhotoGasm-colorfulPhoto-neg and lora more_details with these settings: Prompt: a man looks close into the camera, detailed, detailed skin, mall in background, photo, epic, artistic, complex background, detailed, realistic <lora:more_details:1. Can say, using ComfyUI with 6GB VRAM is not problem for my friend RTX 3060 Laptop the problem is the RAM usage, 24GB (16+8) RAM is not enough, Base + Refiner only can get 1024x1024, upscalling (edit: upscalling with KSampler again after it) will get RAM usage skyrocketed. 
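The base-plus-refiner handoff described in these comments (the refiner only handling the final low-noise steps of the schedule) can also be wired up outside ComfyUI. Below is a minimal sketch using Hugging Face diffusers; the 0.8 split point and the 40-step count are illustrative values, not settings taken from the posts above.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share components to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "portrait photo of a woman with short light brown hair, natural skin texture"

# Base model handles the high-noise part of the schedule and hands off latents.
latents = base(
    prompt=prompt, num_inference_steps=40, denoising_end=0.8, output_type="latent"
).images

# Refiner takes over for the last low-noise steps, where fine detail is added.
image = refiner(
    prompt=prompt, num_inference_steps=40, denoising_start=0.8, image=latents
).images[0]
image.save("base_plus_refiner.png")
```

Pushing the split closer to 1.0 gives the refiner fewer steps, which tends to preserve more of the base model's face at the cost of less added detail.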
After some testing I think the degradation is more noticeable with concepts than styles. Getting a single sample and using a lackluster prompt will almost always result in a terrible result, even with a lot of steps. What you may have thought that I wanted to do is use the SDXL refiner model as the main model. Have used multiple workflows/settings, but haven't figured it out yet. As a short term thing, you could take the base image, and paste the face over the refined image and blend it in. 0, all attempts at making faces looked a bit distorted/broken. Hello there, I’m relatively beginner into using Stable Diffusion especially with AI world. No, because it's not there yet. 9 Refiner pass for only a couple of steps to "refine / finalize" details of the base image. safetensors (all-in-one, non-diffusers) format and metadata are both an absolute must for me. While the models officially released to the open source community are in order: Stable Diffusion 1. safetensors and . To avoid this, don't mention the exact age (e. , 24 y. 4), (mega booty:1. I can't figure out how to properly use refiner in inpainting workflow. In today’s development update of Stable Diffusion WebUI, now includes merged support for SDXL refiner. Actually the normal XL BASE model is better than the refiner in some points (face for instance) but I think that the refiner can bring some interesting details Reply reply ScionoicS Another trick I haven't seen mentioned, that I personally use. It's the process the SDXL Refiner was intended to be used. ai are for the base sdxl, whereas on almost all the documentation from Hugging face I see image2image the refiner sdxl being used. 0 for ComfyUI (Hand Detailer, Face Detailer, Free Lunch, Image Chooser, XY Plot, ControlNet/Control-LoRAs, Fine-tuned SDXL models, SDXL Base+Refiner, ReVision, Upscalers, Prompt Builder, Debug, etc. But it is extremely light as we speak, so much so 38 votes, 10 comments. Just like Juggernaut started with Stable Diffusion 1. 5 of my wifes face works much better than the ones Ive made with sdxl so I enabled independent prompting(for highresfix and refiner) and use the 1. However, that's pretty much the only place I'm actually seeing a refiner mentioned. My workflow and visuals of this behaviour is in the attached image. The refiner is a separate model specialized for denoising of 0. 78. 7 in the Denoise for Best results. This article will guide you through the process of enabling Dec 23, 2024 · Most of the Lora weights on civits. 0 and upscalers It's "Upscaling > Hand Fix > Face Fix" If you upscale last, you partially destroy your fixes again. Use 1. In the end, I feel that, as many others, stable diffusion is a little bit like a slot machine . Jul 22, 2023 · After Detailer (adetailer) is a Stable Diffusion Automatic11111 web-UI extension that automates inpainting and more. May 10, 2025 · The base Stable Diffusion models released by Stability AI, are only the tip of the iceberg. Typically, folks flick on Face Restore when the face generated by SD starts resembling something you'd find in a sci-fi flick (no offense meant to /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Some of the available SDXL checkpoints already have a very reasonable understanding of the female anatomy and variety. 
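The "take the base image and paste the face over the refined image and blend it in" suggestion can be done in a few lines of Pillow. This is just a sketch: the face box coordinates are made up and would normally come from eyeballing the picture or from a face detector.

```python
from PIL import Image, ImageDraw, ImageFilter

base_img = Image.open("base_render.png")        # the version whose face you prefer
refined_img = Image.open("refined_render.png")  # the refiner output, same size

# Hypothetical face bounding box (left, top, right, bottom).
face_box = (300, 120, 520, 380)

mask = Image.new("L", base_img.size, 0)
ImageDraw.Draw(mask).ellipse(face_box, fill=255)
mask = mask.filter(ImageFilter.GaussianBlur(25))  # feather the edge so the seam blends

# Keep the base face where the mask is white, the refined image everywhere else.
Image.composite(base_img, refined_img, mask).save("composite.png")
```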
SDXL can indeed generate a nude body, and the model itself doesn't stop you from fine-tuning it towards whatever spicy stuff there is with a dataset, at least by the looks of it. The two keys to getting what you want out of Stable Diffusion are to find the right seed, and to find the right prompt. 5 can get close, SDXL can probably do it with the use of some good loras. I had assumed this is how the workflow would work , but evidently that's not right. 5 as a Refiner. Honestly! Currently trying to fix bad hands using face refiner, but it seems that it is doing something bad. I'll be trying it out once 3. Understandable, it was just my assumption from discussions that the main positive prompt was for common language such as "beautiful woman walking down the street in the rain, a large city in the background, photographed by PhotographerName" and the POS_L and POS_R would be for detailing such as "hyperdetailed, sharp focus, 8K, UHD" that sort of thing. 5 ) which gives me super interesting results. hey all, let's test together, just hope I am not doing something silly. 5 model in highresfix with denoise set in the . There's a diagram on stable diffusion 0. 2), well lit, illustration, beard, colored glasses Prompting in xl is different. We would like to show you a description here but the site won’t allow us. Describe the character and add to the end of the prompt: illustration by (Studio ghibli style, Art by Hayao Miyazaki:1. I just started learning about Stable Diffusion recently, I downloaded the safe-tensors directly from huggingface for Base and Refiner model, I found multiple VAEs there. I also used a latent upscale stage with 1. Used Automatic1111, SDXL 1. Some of the learned lessons from the previous tutorial, such as how height does and doesn't work, seed selection, etc. I am an AUTOMATIC1111 webui user, I tried Comfy and Forge but ultimately went back to AUTO because of the UI and the extensions, but with the SD3 release I had to choose between comfy and Swarm, which I had never tried before. IOW, their detection maps conform better to faces, especially mesh, so it often avoids making changes to hair and background (in that noticeable way you can sometimes see when not using an inpainting model). ai and search for NSFW ones depending on the style I want (anime, realism) and go from there. 5 models since they are trained on 512x512 images. Posted by u/DevilmanWunsen - 1 vote and no comments In my experiments, I've discovered that adding imperfections can be made manually in Photoshop using tools like liquify and painting texture and then in img2img Personally, it appears to me that stable diffusion 1. 1 is out though. Babyface: Sometimes, when generating young women, child-like faces appear. safetensors Very nice. So for example, if I have a 512x768 image, with a full body and smaller / zoomed out face, I inpaint the face, but change the res to 1024x1536, and it gives better detail and definition to the area I am Do not use the high res fix section (can select none, 0 steps in the high res section), go to the refiner section instead that will be new with all your other extensions (like control net or whatever other extensions you have installed) below, enable it there (sd_xl_refiner_1. AP Workflow v5. If you are using Stable Diffusion with A1111 you can ckeck the restore faces feature to get better results. The base model is perfectly capable of generating an image on its own. 
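For the automated face fix that keeps coming up in this thread (detect the face, crop it, re-render the crop at the model's native resolution, paste it back), here is a rough sketch. It uses mediapipe's face detector and a plain SDXL img2img pass; the padding, strength, and checkpoint are assumptions to tune, not values taken from the comments.

```python
import numpy as np
import torch
import mediapipe as mp
from PIL import Image, ImageDraw, ImageFilter
from diffusers import StableDiffusionXLImg2ImgPipeline

img = Image.open("full_body.png").convert("RGB")
w, h = img.size

# 1) Find the face (assumes exactly one face is detected).
detector = mp.solutions.face_detection.FaceDetection(model_selection=1,
                                                     min_detection_confidence=0.5)
rel = detector.process(np.array(img)).detections[0].location_data.relative_bounding_box
x0, y0 = int(rel.xmin * w), int(rel.ymin * h)
x1, y1 = int((rel.xmin + rel.width) * w), int((rel.ymin + rel.height) * h)

# 2) Crop with generous padding and upscale the crop to the model's native resolution.
pad = int(0.6 * (x1 - x0))
box = (max(x0 - pad, 0), max(y0 - pad, 0), min(x1 + pad, w), min(y1 + pad, h))
crop = img.crop(box)
work = crop.resize((1024, 1024))

# 3) Re-render only the crop with a light img2img pass (any SDXL checkpoint works here).
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
fixed = pipe(prompt="detailed face, natural skin texture, sharp focus",
             image=work, strength=0.35, num_inference_steps=30).images[0]

# 4) Scale back down and paste with a feathered mask so the seam disappears.
fixed = fixed.resize(crop.size)
mask = Image.new("L", crop.size, 0)
ImageDraw.Draw(mask).rectangle((16, 16, crop.size[0] - 16, crop.size[1] - 16), fill=255)
mask = mask.filter(ImageFilter.GaussianBlur(16))
img.paste(fixed, box[:2], mask)
img.save("face_detailed.png")
```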
5 model and its LORAs to swap the face on sdxl pics i have nodes setup for this , i can also do img2img with SD1. What most people do is generate an image until it looks great and then proclaim this was what they intended to do. Depends on the program you use, but with Automatic 1111 on the inpainting tab, use inpaint with -only masked selected. a dark digital painting for a fantasy RPG of a cyclops towering above the surrounding landscape holding a club above it's head Introductions. . 7 in the Refiner Upscale to give a little room in the image to add details. In my understanding, their implementation of the SDXL Refiner isn't exactly as recommended by Stability AI, but if you are happy using just the Base model (or you are happy with their approach to the Refiner), you can use it today to generate SDXL images. 0 Refine. ) [CROSS-POST] Resource | Update For anyone interested, I just added the preset styles from Fooocus into my Stable Diffusion Deluxe app at https://DiffusionDeluxe. For faces you can use Facedetailer. Anyway, I too have tossed a lot of excess prompt baggage in the bin , especially when I played with promptgen, and just for the heck of it let the minimalistic prompts that thing spit out go. Aug 11, 2023 · SDXL 1. The base doesn't - aesthetic score conditioning tends to break prompt following a bit (the laion aesthetic score values are not the most accurate, and alternative aesthetic scoring methods have limitations of their own), and so the base wasn't trained on it to enable it to follow prompts as accurately as possible. The base model should produce okay pictures in general but for generations like these, that's when you know to use the refiner on it. 1. E. 5), (large breasts:1. o. My favorite currently is Realities Edge XL (a merge but very good!) that I've been using for an erotic/boudoir photography project that I started on 1. 5 model as the "refiner"). 0 includes the following experimental functions: Free Lunch (v1 and v2) AI researchers have discovered an optimization for Stable Diffusion models that improves the quality of the generated images. Please share your tips, tricks, and workflows for using this software to create your AI art. 0 and 1. Hi everybody, I have generated this image with following parameters: horror-themed , eerie, unsettling, dark, spooky, suspenseful, grim, highly… I haven’t not played with the refiner much with 1. , will not be addressed in detail again, so I do recommend giving the previous tutorial a glance if you want further details on the process. Access that feature from the Prompt Helpers tab, then Styler and Add to Prompts List. Same with SDXL, you can use any two SDXL models as the base model and refiner pair. The control Net Softedge is used to preserve the elements and shape, you can also use Lineart) 3) Setup Animate Diff Refiner The intended way to use SDXL is that you use the Base model to make a "draft" image and then you use the Refiner to make it better. In the Refiner node, "Add noise" is disabled, as well as "return with leftover noise. Most full names mean something very specific, and even partial names will have an influence. But under the img2img tab, the option to load a refiner does not exist. With the new images, which use an oil painting style, it is harder to say if any of the images is actually better. That scenario of faces not at close range being bad is precisely the scenario that the refiner was created for from what I've read. 
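A light SD 1.5 img2img pass over an SDXL render, as suggested here, looks roughly like this in diffusers. The checkpoint id below is just the stock 1.5 weights; swap in whatever fine-tune (plus LoRAs) you actually use, and keep the strength low so the composition and identity survive.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# SDXL output, downscaled to a resolution SD 1.5 is comfortable with.
sdxl_render = Image.open("sdxl_output.png").convert("RGB").resize((768, 768))

refined = pipe(
    prompt="photo of a woman, detailed skin, sharp focus",
    image=sdxl_render,
    strength=0.3,            # low denoise: sharpen and retexture without repainting
    guidance_scale=6.0,
    num_inference_steps=30,
).images[0]
refined.save("sd15_second_pass.png")
```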
But generally, if you are generating low resolution images, you have very few pixels to work with when generating smaller faces, for example. I think the ideal workflow is a bit debateable. For artists, writers, gamemasters, musicians, programmers, philosophers and scientists alike! The creation of new worlds and new universes has long been a key element of speculative fiction, from the fantasy works of Tolkien and Le Guin, to the science-fiction universes of Delany and Asimov, to the tabletop realm of Gygax and Barker, and beyond. 5, it is possible by using 1. 5, currently there exist a lot of different fine-tunes of these models available online. 0. the goal for step1 is to get the character having the same face and outfit with side/front/back view ( I am using character sheet prompt plus using charturner lora and controlnet openpose, to do this) I was a big 1. Cascade into an XL refiner will probably do better. 9(just search in youtube sdxl 0. Look what other people prompt, in the model examples on civtai. Fooocus-MRE v2. Activate the Face Swapper via the auxiliary switch in the Functions section of the workflow. SDXL models on civitai typically don't mention refiners and a search for refiner models doesn't turn up much. On a 1. 5 user for anime images and honestly was pretty wholly satisfied with it except for some few flaws like anatomy, taking forever to semi-correctly inpaint hands afterwards etc. 2 or less on "high-quality high resolution" images. total steps: 40 sampler1: SDXL Base model 0-35 steps sampler2: SDXL Refiner model 35-40 steps Yep! I've tried and refiner degrades (or changes) the results. I tend to like the mediapipe detectors because they're a bit less blunt than the square box selectors on the yolov ones. I'm aware that this is possible. Craft your prompt. " Ugly faces: Another problem with faces is ugly results in long shots. having this problem as well Inpaint prompt: chubby male (action hero 1. Next fork of A1111 WebUI, by Vladmandic. Try the SD. 5 and am not really looking back. ), and instead use descriptive words like "Middle aged. You can just use someone elses workflow of 0. 51 votes, 39 comments. I recently discovered this trick and it works great to improve quality and stability of faces in video, especially with smaller objects. 0. I'm not sure what's wrong here because I don't use the portable version of ComfyUI. true. So people made GUI graphical interfaces for it that add features and make it a million times better. You do only face, perfect. Particularly with faces. True for Midjourney, also true for Stable Diffusion (although there it can be affected by the way different LORAs and Checkpoints were trained). Prompt: An old lady posing in a bra for a picture, making a fist, bodybuilder, (angry:1. My Automatic1111 installation still uses 1. The only drawback is that it will significantly increase the generation time. 4), (panties:1. I'm trying to figure out a workflow to use Stable Diffusion for style transfer, using a single reference image. First only background, second the lady alone, third the dog alone, fourth some details, like inpaint face to repair, or sand castle to make it more content rich. cinematic photo majestic and regal full body profile portrait, sexy photo of a beautiful (curvy) woman with short light brown hair in (lolita outfit:1. Make sure when your choosing a model for a general style that it's a checkpoint model. I do have some basics but there are still certain areas where I need to learn. 
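One of the face-restoration routes mentioned in this thread is GFP-GAN, with the caveat that it over-corrects and is best blended back with the original. That blend can be done numerically instead of with Photoshop or GIMP layers; the sketch below assumes the gfpgan package and a locally downloaded checkpoint.

```python
import cv2
from gfpgan import GFPGANer

# Assumes a GFPGAN checkpoint (e.g. GFPGANv1.4.pth) has been downloaded locally.
restorer = GFPGANer(model_path="GFPGANv1.4.pth", upscale=1,
                    arch="clean", channel_multiplier=2, bg_upsampler=None)

img = cv2.imread("render.png")  # BGR, as OpenCV loads it
_, _, restored = restorer.enhance(img, has_aligned=False,
                                  only_center_face=False, paste_back=True)

# GFP-GAN tends to over-smooth, so mix it back with the original
# (the "blend the layers" trick, done with a weighted sum).
blended = cv2.addWeighted(restored, 0.6, img, 0.4, 0)
cv2.imwrite("face_restored_blend.png", blended)
```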
safetensors) while using SDXL (Turn it off and use Hires. Using the base stable diffusion model isn’t always going to be good and I recommend you get more fine tuned models for what you want from hugging face or civitai. 2) Set Refiner Upscale Value and Denoise value. 2), low angle, looking at the camera what model you are using for the refiner (hint, you don't HAVE to use stabilities refiner model, you can use any model that is the same family as the base generation model - so for example a SD1. 6), (nsfw:1. Seems that refiner doesn't work outside the mask, it's clearly visible when "return with leftover noise" flag is enabled - everything outside mask filled with noise and artifacts from base sampler. ) [CROSS-POST] Resource | Update Some of the images I've posted here are also using a second SDXL 0. I just use SD1. SD understands lots of names. The default style you will get depends on the prompt and the score tags and it can vary wildly from pastel, anime style, manga style, digital art, 3D, realistic painting if you want to use artist tags, you would need to use the tag that is used on danbooru (in this case "akamatsu ken"). 0, but with a comfy setup with 0. 9 hugging face page that shows base pass is 128x128, and refiner pass is 1024x1024. 0 base model and HiresFix x2. 9 workflow, the one that olivio sarikas video works just fine) just replace the models with 1. AP Workflow 4. Use a value around 1. Shall I use the base model instead or I'm doing something wrong? That is colossal BS, don't get fooled. 9 that ran steps 1-13 on the base and 13-20 on the refiner, sure it increased detail and often realism in general, but the huge thing was what it did to faces/heads - that seemed a much larger jump than simply increasing detail. Does anyone have any advice on how to improve the following process to make Pony style images more photorealistic? Here is what I am currently doing… Use at least 512x512, make several generations, choose best, do face restoriation if needed (GFP-GAN - but it overdoes the correction most of the time, so it is best to use layers in GIMP/Photoshop and blend the result with the original), I think some samplers from k diff are also better than others at faces, but that might be placebo/nocebo effect. So prompting for "Kate Wilson" makes the model think it should be creating a specific person, and it is some culmination of all the Kate's and all the Wilson's that it knows. 5, 2 and SDXL 1. So far, LoRA's only work for me if you run them on the base and not the refiner, the networks seems to have unique architectures that would require a LoRA trained just for the the refiner, I may be mistaken though, so take this with a grain of salt. 5 model use resolution of 512x512 or 768 x 768. 2) and used the following negative - Negative prompt: blurry, low quality, worst quality, low resolution, artifacts, oversaturated, text, watermark, logo, signature, out of frame, cropped, deformed, malformed, disfigured Photon, I mainly make LoRA's and nothing comes even close to capturing the likeness as Photon. . Why do they show it like that? I tend to like the mediapipe detectors because they're a bit less blunt than the square box selectors on the yolov ones. Hence ugly and deformed faces are generated. However, this also means that the beginning might be a bit rough ;) NSFW (Nude for example) is possible, but it's not yet recommended and can be prone to errors. Some of the images I've posted here are also using a second SDXL 0. 2), (light gray background:1. 0_0. 
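The "upscale, then run a low-denoise second pass" idea behind hires fix and the refiner-upscale setting translates to diffusers as a simple resize followed by img2img. A rough sketch, with an illustrative 1.5x factor and 0.4 strength:

```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoPipelineForImage2Image

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
img2img = AutoPipelineForImage2Image.from_pipe(pipe)  # reuse the already-loaded weights

prompt = "portrait photo of an old fisherman, weathered skin, natural light"
draft = pipe(prompt=prompt, width=832, height=1216, num_inference_steps=30).images[0]

# Upscale, then denoise lightly so the model redraws fine detail at the new resolution.
big = draft.resize((int(draft.width * 1.5), int(draft.height * 1.5)))
final = img2img(prompt=prompt, image=big, strength=0.4, num_inference_steps=30).images[0]
final.save("hires_second_pass.png")
```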
5 model as your base model, and a second SD1. 5 for whole SDXL pic so its sharper , SDXL is really soft , IMO resolution is inferior to SD1. And after running the face refiner I think that ComfyUI should use SDXL refiner on face and hands, but how to encode a image to feed it in as latent? Must be related to Stable Diffusion in some way, comparisons with other AI generation platforms are accepted. Use 0. 5 excels in texture and lighting realism compared to later stable diffusion models, although it struggles with hands. Faces always have less resolution than the rest of the image. The problem is I'm using a face from ArtBreeder, and img2img ends up changing the face too much when implementing a dif Hello everyone I use an anime model to generate my images with the refiner function with a realistic model ( at 0. net or Krita or Gimp, load that tile back in SD and mask both eyes to inpaint them, do some attempts tweaking prompt and parameters until you get a result you are happy with, stitch the "fixed" tile back on top of your upscaled Step one - Prompt: 80s early 90s aesthetic anime, closeup of the face of a beautiful woman exploding into magical plants and colors, living plants, moebius, highly detailed, sharp attention to detail, extremely detailed, dynamic composition, akira, ghost in the shell if you take the refiner concept to its ultimate conclusion you can slice the sdxl base and refiner models down by about 2B parameters such that both are 1B, with fewer transformer blocks. Hands work too with it, but I prefer the MeshGraphormer Hand Refiner controlnet. A list of helpful things to know If I'm using the SDXL base model as the main model, I can choose the SDXL refiner model under the txt2img tab. Haven't been using Stable Diffusion in a long time and since SDXL has launched and a lot of really cool models/loras. 2 to 0. here is my idea and workflow: image L-side will be act like a referencing area for AI. an anime illustration of a cute girl with blue hair with hands on hips 3. However, with SD 1. We all know SD web UI and ComfyUI - those are great tools for people who want to make a deep dive into details, customize workflows, use advanced extensions, and so on. In this post, you will learn how it works, how to use it, and some common use cases. 5 model IMG 2 IMG, like realistic vision, can increase details, but destroy faces, remove details and become doll face/plastic face Share Add a Comment 3-The base model is style-oriented, while the refiner model tends towards photorealism, it's not that bad, but it's detrimental, for example, if you're working on an illustration and the refiner only worsens the result and doesn't add relevant details. Also the face mask seems to include part of the hair most of the time, which also gets lowres by the process. I observed that using Adetailer with SDXL models (both Turbo and Non-Turbo variants) leads to an overly smooth skin texture in upscaled faces, devoid of the natural imperfections and pores. Stable Diffusion-1 and Stable Diffusion-2 all-in-one . Hires fix is the main way to increase your image resolution in txt2img, at least for normal SD 1. An easy method is do your 768x512 landscape or whatever initial image enough to where you like the look of it, then blow it up 2x or 4x, etc. You don't actually need to use the refiner. ckpt models currently do not load due to a bug in the conversion code. 
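Pairing two SD 1.5 checkpoints as base and "refiner", as described here, is just a txt2img pass followed by a low-strength img2img pass with the second model. The file paths below are placeholders for whatever fine-tunes you actually have on disk.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

# Hypothetical local checkpoints; substitute your own fine-tunes.
base = StableDiffusionPipeline.from_single_file(
    "models/realistic_base.safetensors", torch_dtype=torch.float16
).to("cuda")
refine = StableDiffusionImg2ImgPipeline.from_single_file(
    "models/detail_heavy_finetune.safetensors", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a middle aged woman, soft window light, 35mm photo"
draft = base(prompt=prompt, width=512, height=768, num_inference_steps=30).images[0]

# The second checkpoint acts as the "refiner": low strength keeps the composition
# but lets its aesthetic take over the fine detail.
final = refine(prompt=prompt, image=draft, strength=0.3, num_inference_steps=30).images[0]
final.save("two_model_pass.png")
```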
I started using one like you suggest, using a workflow based on streamlit from Joe Penna that was 40 steps total, first 35 on the base, remaining noise to the refiner. I'd like to share Fooocus-MRE (MoonRide Edition), my variant of the original Fooocus (developed by lllyasviel), new UI for SDXL models. 3 - 1. The refiner gives her what I consider a completely different face. Stable Diffusion XL - Tipps & Tricks - 1st Week. What are your settings for inpainting? For something like eyes in a face, you'll want to make sure you're either inpainting 'whole picture', or if you're inpainting 'only masked', you'll want to make sure that the "only masked padding, pixels" is set high enough that it can see the entire head (if the padding doesn't include the entire head, it's not going to know that it's putting eyes in a Simply ran the prompt in txt2img with SDXL 1. 2) face by (Yoji Shinkawa 1. But I'm not sure what I'm doing wrong, in the controlnet area I find the hand depth model and can use it, I would also like to use it in the adetailer (as described in Git) but can't find or select the depth model (control_v11f1p_sd15_depth) there. 0 vs SDXL 1. Here’s my workflow to tweak details: Upscale your pic if it isn’t already, crop a 512x512 tile around her face using an image editing app like Photoshop or Paint. Even all the other realistic models like absolute reality, realistic vision or Epic realism always seem to morph the face just enough so it doesn't resemble the person enough. As a tip: I use this process (excluding refiner comparison) to get an overview of which sampler is best suited for my prompt, and also to refine the prompt, for example if you notice the 3 consecutive starred samplers, the position of the hand and the cigarette is more like holding a pipe which most certainly comes from the Sherlock Holmes part of the prompt, so /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users 22 votes, 25 comments. That said, Stable Diffusion usually struggles all full body images of people, but if you do the above the hips portraits, it performs just fine. Since the research release the community has started to boost XL's capabilities. 5 models so wondering is there an up-to-date guide on how to migrate to SDXL? I haven't had any of the issues you guys are talking about, but I always use Restore Faces on renders of people and they come out great, even without the refiner step. Generation metadata isn't being stored in images. This might be due to the VAE model used. Whats the best sampling method for anime style faces? I want some that look strait out of stuff like Fate/Stay Night but I also want to get some that resemble Sakimichan, Alexander Dinh, Axsen, and Personalami's art styles. it works ok with adetailer as it has option to use restore face after adetailer has done detailing and it can work on but many times it kinda do more damage to the face as it undo what adetailer did. 4 - 0. This is a refresh of my tutorial on how to make realistic people using the base Stable Diffusion XL model. Two things: 1: Are you using the standard one or jp/cute jp? 2: using the right model as a refiner amost always changes the faces more Caucasian. The diffusion is a random seeded process and wants to do its own thing. When inpainting, you can raise the resolution higher than the original image, and the results are more detailed. 
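The crop-a-tile, fix-the-eyes, stitch-it-back workflow can also be scripted if you prefer not to round-trip through an image editor. Here is a sketch with an inpainting checkpoint; the tile position and eye-mask rectangle are invented coordinates you would adjust per image.

```python
import torch
from PIL import Image, ImageDraw, ImageFilter
from diffusers import StableDiffusionInpaintPipeline

img = Image.open("upscaled.png").convert("RGB")

# Hypothetical tile position: a 512x512 window centred on the face.
left, top = 600, 250
tile = img.crop((left, top, left + 512, top + 512))

# Mask just the eye region inside the tile (white = repaint).
mask = Image.new("L", tile.size, 0)
ImageDraw.Draw(mask).rectangle((120, 190, 400, 270), fill=255)
mask = mask.filter(ImageFilter.GaussianBlur(8))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
fixed = pipe(prompt="detailed realistic eyes, sharp focus",
             image=tile, mask_image=mask, strength=0.8,
             num_inference_steps=30).images[0].resize(tile.size)

# Stitch the fixed tile back into the full-resolution picture.
img.paste(fixed, (left, top))
img.save("eyes_fixed.png")
```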
That is so interesting, the community made XL models are made from the base XL model, which requires the refiner to be good, so it does make sense that the refiner should be required for community models as well till the community models have either their own community made refiners or merge the base XL and refiner but if that was easy wouldn't it be done by the stability ai themselves? We would like to show you a description here but the site won’t allow us. That Works pretty well for me when I’m doing img2img and I like one thing or another from one iteration or the next. 7> The example workflow has a base checkpoint and a refiner checkpoint, I think I understand how that's supposed to work. 9vae. 5, 99% of all NSFW models are made for this specific stable diffusion version. An style can be slightly changed in the refining step, but a concept that doesn't exist in the standard dataset is usually lost or turned into another thing (I. 0 Base, moved it to img2img, removed the LORA and changed the checkpoint to SDXL 1. It saves you time and is great for quickly fixing common issues like garbled faces. A person face changes after EDIT: ISSUE SOLVED!! Thanks a lot for the help! So it seems the culprit was mostly clip skip, which on my old model was set to "2", while the new one was "1" by default. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. If I set the Denoise value on the refiner low enough to keep the face, I lose out on improvements in the background, clothing etc. hey got your workflow running last night and this is why I liked it so much as well! Wish moving the masked image to composite over the other image was easier, or like a live preview instead of queing it for generation, cancel, move it a bit more etc. And, sometimes less is more in stable diffusion. Welcome to the unofficial ComfyUI subreddit. I also recommend learning how to apply Lora models for certain styles or features and do some searching for potentially useful addons. com with all the advanced extras made easy. With my inputs, I rarely end up with asian looking output. 1. The actual Stable Diffusion program is text mode and really klunky to use. Restore face makes face caked almost and looks washed up in most cases it's more of a band-aid fix . I'm glad to hear the workflow is useful. From L to R, this is SDXL Base -- SDXL + Refiner -- Dreamshaper -- Dreamshaper + SDXL Refiner I was surprised by how nicely the SDXL Refiner can work even with Dreamshaper as long as you keep the steps really low. Defenitley use stable diffusion version 1. Yes only the refiner has aesthetic score cond. 30ish range and it fits her face lora to the image without It is an image-to-image model that has been trained to denoise small noise levels of high-quality data and is not expected to work as a pure text-to-image model; instead, it should only be used as an image-to-image model1. But try both at once and they miss a bit of quality. You should really start with a empty negative and a simple positive prompt. The issue has been that Automatic1111 didn't support this initially, so people ended up trying to set-up work arounds. You should try to click on each one of those model names in the ControlNet stacker node and choose the path of where your models skip the highres fix, go strait to img2img, click the script dropdown menu at the bottom, choose "SD upscale", the select the 4x-Ultra Sharp, use scale factor 2. 
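Since only the refiner was trained with aesthetic-score conditioning, the diffusers refiner pipeline exposes it as a pair of call arguments. A light refiner pass over a fine-tuned model's output (the "Dreamshaper plus refiner at low strength" idea from this thread) might look like the sketch below; the score values shown are the library defaults, not tuned settings.

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

draft = Image.open("finetune_render.png").convert("RGB")  # output from any SDXL fine-tune

image = refiner(
    prompt="oil painting of a knight, detailed brushwork",
    image=draft,
    strength=0.2,                 # keep the pass light so the checkpoint's style survives
    num_inference_steps=25,
    aesthetic_score=6.0,          # refiner-only conditioning, default value
    negative_aesthetic_score=2.5,
).images[0]
image.save("refined.png")
```

Raising the strength much above 0.3 is where people report the refiner starting to "change the face" rather than just sharpening it.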
There is one file called sd_xl_refiner_1.0. Now for finding models, I just go to civitai and search for NSFW ones depending on the style I want (anime, realism) and go from there. Hello, beautiful people! 🙂 I was hoping someone might try to help me, because I'm struggling with a difficult problem.