Reddit automatic1111.
Reddit automatic1111 — Feb 18, 2024 · Start AUTOMATIC1111 Web-UI normally. Major features: settings tab rework: add a search field, add categories, split the UI settings page into many.

I have obviously YouTubed how-tos on downloading and using Automatic1111, but there are too many tutorials telling you to download a different thing, or that are outdated for older versions, or that say "don't download this version of Python, do this" blah blah.

If that fails, then try manually installing torch before launching the webui. From the command line, go to your stable-diffusion-webui folder and type "cd venv/scripts".

Not sure, but you gave us 4 examples and no generation information (model, Clip skip, Sampler, steps, etc.) from Civitai, then one example with all the generation info from the webui.

Create flipped copies: don't check this if you are training on a person's likeness, since people are not 100% symmetrical.

Remove the git pull command from your .bat file, and then open a terminal in the stable-diffusion folder and run git reset --hard HEAD~1.

I use Automatic1111 on Colab and can't see it under Additional Networks, but I find it via the red button.

The following repo lets you run Automatic1111 or hlky easily in Docker.

/r/StableDiffusion is back open after the protest of Reddit killing open API access.

Hello friends, could someone guide me on efficiently upscaling a 1024x1024 DALL-E-generated image (or any resolution) on a Mac M1 Pro? I'm quite new to this and have been using the "Extras" tab in Automatic1111 to upload and upscale images without entering a prompt.

I've been enjoying using Automatic1111's batch img2img feature via ControlNet to morph my videos (short image sequences so far) into anime characters, but I noticed that anything with more than, say, 7,000 image frames takes forever, which limits the generated video to only a few minutes or less.

I want to run it locally and access it remotely (not on the same network).

For instance, that shit about the triple-paren encapsulation being 'racist'?
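The `git reset --hard HEAD~1` rollback advice above is worth understanding before running it on a real install. This throwaway demo (the repo, file name, and commit messages are all made up for illustration) shows exactly what stepping back one commit does:

```shell
# Demo of the rollback step in a disposable repo: two commits,
# then `git reset --hard HEAD~1` drops the newest one. The real-world
# equivalent is running the reset inside stable-diffusion-webui.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "good version"
echo "broken update" > marker.txt
git add marker.txt
git -c user.email=demo@example.com -c user.name=demo \
    commit -qm "bad update"
git reset --hard HEAD~1 >/dev/null   # back to "good version"
git log --oneline                     # only the first commit remains
ls                                    # marker.txt is gone
```

If the breakage is torch itself rather than the webui code, the `--reinstall-torch` launch argument mentioned later in the thread is the usual next step.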
That only has any relevance to Automatic1111 in regard to Reddit's automatic spam/hate algorithm, and that was literally a wild guess somebody made as to why a single person might have been banned two or three months ago.

I do have a friend who uses a GTX 1080 GPU for Stable Diffusion as well, and I set up his installation for him, so if the situation is different for a non-RTX card that would also be good to know.

I have been using this app for 2 months. 2 days ago I saw a post about "Draw Things" and tested it; OMG, the memory consumption is easily 3x lower. (Tip: don't use the Apply and Restart button.)

…0.6, as it makes the inpainted part fit better into the overall image.

Try adding the "--reinstall-torch" command line argument.

You can use Automatic1111 on AMD in Windows with ROCm, if you have a GPU that is supported.

If you installed your AUTOMATIC1111 GUI before the 23rd of January, the best way to fix it is to delete the /venv and /repositories folders, git pull the latest version of the GUI from GitHub, and start it.

ultimate-upscale-for-automatic1111: tiled upscale done right, if you can't afford hires fix / super-high-res img2img. Stable-Diffusion-Webui-Civitai-Helper: download thumbnails and models, and check for updates on CivitAI.

I've read some Reddit posts for and against, mainly involving LoRAs. In Automatic1111, I will do a 1.…

Note that this is Automatic1111. I believe this can be set in Automatic1111, but I don't know how offhand.

"(x)": emphasis.

Meaning it's the same code taken at a point in time and modified. Once the dev branch is production-ready, it'll be in the main branch and you'll receive the updates as well.

Automatic1111, 12GB VRAM but constantly running out of memory.
…0 Released and FP8 Arrived Officially.

Yeah, I'm not entirely sure, but I guess there is a good reason behind it.

Bottom line is, I want to use SD on Google Colab and have it connected to Google Drive, on which I'll have a couple of different SD models saved, to be able to use a different one every time or merge them.

Finally got my graphics card and am working with AUTOMATIC1111.

I need some guidance on how to remove Automatic1111 from my PC. I want to do a fresh install, as it has become convoluted and, to be honest, I can't remember all of the changes I have made.

It's a real person, 1 person; you can find AUTOMATIC1111 in the Stable Diffusion official Discord under the same name.

I've attached a couple of ex…

In the latest Automatic1111 update, the token merging optimisation has been implemented. And render. Things to notice and explore: …

Is Automatic1111 just the best distro? I never hear about others. Automatic1111 is the GUI with the most extensive list of features. So the user interface (UI) is the same.

…Doggettx instead of sdp, sdp-no-mem, or xformers), or are doing something dumb like using --no-half on a recent NVIDIA GPU.

I bought a second SSD and use it as a dedicated PrimoCache drive for all my internal and external HDDs.

Automatic1111 has not pressed legal action against any contributors; however, contributing to the repo does open you up to risk.

My only heads-up is that if something doesn't work, try an older version of something.
--listen lets it be accessible from the local network, but not remotely, even if I open up the port for port forwarding (unless there's something wrong with my NAT).

That was my first thought, but there's some weird Gradio stuff happening, so clicking Generate somehow doesn't make any network calls at all.

Automatic1111 is easier for doing what you want and getting a generation out and done to a high quality, due to its age and custom weights, but it is slow to load models, slow…

I am fairly new to using Stable Diffusion, first generating images on Civitai, then ComfyUI, and now I have just downloaded the newest version of the Automatic1111 webui.

He's just working on it on the dev branch instead of the main branch.

Hm, I mean, yeah, it "can" sometimes work with non-inpainting models, but it's generally a pretty miserable experience; inpainting models have additional UNet channels that traditional models don't, as well as an understanding of image masking. That being said, other software like Invoke might possibly be doing something completely different behind the scenes to better accommodate inpainting.

Are you talking about the rollback or the inpainting? I have not tried the new version yet, so I don't know about the new features.

Hey 👋 I noticed that setting up Automatic1111 with all the dependencies, models, extensions, etc. is a hassle (at least for me)…

For Automatic1111, you can set the tiles, overlap, etc. in Settings.

Invoke has a far superior UI, and I like how it displays a history of all my outputs with the seed and prompt data, ready to "rewind" any mistakes I make.

You're legally not allowed to edit it under the current lack of license, only view it.

Automatic1111 Web UI - PC - Free: Fantastic New ControlNet OpenPose Editor Extension & Image Mixing - Stable Diffusion Web UI Tutorial.

Automatic1111 Stable Diffusion Web UI 1.…
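On the --listen question above, a sketch of the usual options (the port is the webui default; the SSH user/host is a placeholder, and this is only one reasonable setup, not the only one):

```shell
# --listen binds the webui to 0.0.0.0 so LAN machines can reach it:
#   export COMMANDLINE_ARGS="--listen --port 7860"
# For access from outside the network, a safer option than opening the
# port is an SSH tunnel from the remote machine back to the box running
# the webui (user@home-ip is a placeholder):
#   ssh -N -L 7860:localhost:7860 user@home-ip
# then browse http://localhost:7860 on the remote machine.
# Gradio's --share flag is the zero-config alternative: it prints a
# temporary public URL when the webui starts.
export COMMANDLINE_ARGS="--listen --port 7860"
```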
> TRAINING: I don't think InvokeAI currently supports training embeddings, models, hypernetworks, etc. Automatic1111 has plugins that allow you to use DreamBooth to train models, and you can also train textual inversions or embeddings.

I tried Forge for SDXL (most of my use is 1.5, and my 3070 Ti is fine for that in A1111), and it's a lot faster, but I keep running into a problem where, after a handful of gens, I hit a memory leak or something, the speed tanks to something along the lines of 6-12 s/it, and I have to restart it.

Commandline Arguments.

I activate it (it adds <lora:Robert Gransds:1>), but when doing txt2img it doesn't resemble my model in anything (in fact my model is bald, and it draws people with hair).

There are a few things you can add to your launch script to make things a bit more efficient for budget/cheap computers.

"(Composition) will be different between ComfyUI and A1111 due to various reasons."

When I opened the optimization settings, I saw that there is a big list of optimizations.

I have always wanted to try SDXL, so when it was released I loaded it up and, surprise: 4-6 minutes per image, at about 11 s/it.

I am lost on the fascination with upscaling scripts.

Hello guys, I hope you are doing well. For the past weeks I've been trying to set up a working Automatic1111 on my system (32 GB)…

In case it's helpful, I'm running Windows 11, using an RTX 3070, and use Automatic1111 1.6.

On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

…5 support. Their unified canvas is awesome too.

It seems you can enter multiple prompts and they'll be applied on alternate steps of the image generation.
In the end, there is no "one best setting" for everything, since some settings work better for certain image sizes, some work better for realistic photos, some better for anime painting, some better for…

Click the Install from URL tab.

Result will never be perfect.

Now I start to feel like I could work on actual content rather than fiddling with ControlNet settings to get something that looks even remotely like what I wanted.

I have a GTX 1080 that ran Automatic1111 iterations at 1 it/s.

I recently installed SD 1.…

I know, it doesn't make sense to me either; add that to the pile of "I don't get Python" 😂

Hi! This might be a strange question, but I'm new to SD and I'm just wondering whether there are files/folders generated that I need to keep an eye on when using Automatic1111 and SD for many hours on a smaller-ish drive?

I have been Automatic1111-AWOL until tomorrow! So I can't give even a scotch-doused opinion until the great uninstall! Thanks for the heads-up though! If you have more tips or insight, please add them here.

It also uses ESRGAN baked in.

This is where I got stuck: the instructions in Automatic1111's README did not work, and I could not get it to detect my GPU if I used a venv, no matter what I did.

Any modifiers (the aesthetic stuff) you would keep; it's just the subject matter that you would change.
Working with Automatic1111 and wondering which is better: a massive negative prompt with a zillion variables, or one of the embeddings like…

Watch out: it looks like the newest version of Automatic1111 breaks a lot of stuff.

Automatic1111 is giving me 18-25 it/s vs. Invoke's 12-17ish it/s.

You can even overlap regions to ensure they blend together properly.

In case anyone has the same issue/solution: you have to install the SDXL 1.…

These are --precision full --no-half, which appear to enhance compatibility, and --medvram --opt-split-attention, which make it easier to run on weaker machines.

And nobody in their right mind uses the basic Comfy inpainting nodes.

Automatic1111 is the web GUI for Stable Diffusion.

…5 inpainting ckpt for inpainting, with inpainting conditioning mask strength at 1 or 0; it works really well. If you're using other models, then put inpainting conditioning mask strength at 0~0.…

Either you use InvokeAI (the truly superior inpainting/outpainting solution), or you use A1111 and any of their fo…

Where images of people are concerned, the results I'm getting from txt2img are somewhere between laughably bad and downright disturbing.

It renders the image in two steps instead of one.

Wait for the confirmation message that the installation is complete.

UPDATE, 19th Jan 2023 - START OF UPDATE - As some people have pointed out to me, the latest version of Automatic1111's and d8hazard's Dreambooth is bugged, so naturally I went and tested the results and compared them with the settings I have posted here.

Automatic1111 is significantly faster, though.
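A sketch of how the flags mentioned above land in the launch script, assuming the stock webui-user.sh (on Windows it is webui-user.bat, with `set COMMANDLINE_ARGS=...` instead of `export`); pick only the flags your hardware actually needs:

```shell
# webui-user.sh example COMMANDLINE_ARGS for a weaker machine.
#   --medvram --opt-split-attention : lower VRAM use at some speed cost
#   --precision full --no-half      : compatibility fallbacks for GPUs
#                                     that misbehave with half precision;
#                                     avoid on recent NVIDIA cards (slower,
#                                     uses more VRAM)
export COMMANDLINE_ARGS="--medvram --opt-split-attention"
```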
Eventually hand-paint the result very roughly with Automatic1111's "Inpaint Sketch" (or, better, Photoshop, etc.). Result will never be perfect.

There are a number of other popular user interfaces, such as WebUI (aka Automatic1111), ComfyUI, and Vlad (a modified version of Automatic1111, whose official name escapes me at the moment).

…5 models, since they are trained on 512x512 images.

It's not perfect, but you get quite a long way with it. I especially like the wildcards.

Euler Ancestral is pretty good, and so is DPM adaptive, for generating people.

I have a text file with one celebrity's name per line, called Celebs.…

Go to the OpenPose Editor, pose the skeleton, and use the button "Send to ControlNet". Configure txt2img; when we add our own rig, the Preprocessor must be empty.

Automatic1111 Stable Diffusion Web UI 1.…

This is a slightly better version of a Stable Diffusion/EbSynth deepfake experiment done for a recent article that I wrote.

Automatic1111 is trash: the typical app with a ton of features, but poorly optimized.

Hey everyone, given the recent ban of Automatic1111 on Google Colab, I'm on the hunt for alternative cloud platforms where we…

Using the Automatic1111 interface, you have two options for inpainting: "Whole Picture" or "Only Masked".

Inpainting the area is usually the next thing to do on the list.

As zoupishness7 already pointed out, renaming your existing folder and starting again may be the only way, depending on the fault.

Automatic1111 Web UI - PC - Free: How To Do Stable Diffusion LoRA Training By Using Web UI On Different Models - Tested SD 1.5, SD 2.…

At some point I also tested EasyDiffusion, but it was, well, easy, nothing fancy.
Automatic1111 Web UI - PC - Free: 8 GB LoRA Training - Fix CUDA & xformers For DreamBooth and Textual Inversion in Automatic1111 SD UI 📷; and you can do textual inversion as well.

…5 in about 11 seconds each. That was good until the 23rd of March: I came back from a trip, fired up Automatic1111 with a git pull, received an update, and my speed went down to a shocking 4 s/it!! (Yes, that's right: 4 seconds per iteration!)

Update your Automatic1111: we have a new extension, OpenPose Editor, so now we can create our own rigs in Automatic for ControlNet/OpenPose.

…5 - 2x on image generation, then 2 - 4x in Extras with R-ESRGAN 4x+ or R-ESRGAN 4x+ Anime6B.

Other repos do things differently, and scripts may add or remove features from this list.

It would be better for me if I could set up AUTOMATIC1111 to save info like the above (a separate txt file for each image, with more parameters).

Luckily AMD has good documentation for installing ROCm on their site.

Nah, this is more anti-AI shill bullshit.

Invoke just released 3.… Its image composition capabilities allow you to assign different prompts and weights, even using different models, to specific areas of an image.

I haven't tried vladmandic's fork, but the last commit there was 2 days ago, and it appears to already have 3.…

I learned yesterday (from a kindly Redditor) of the following line that can be run from the command line if you browse to your stable diffusion folder:

And this is saved as a txt file along with the image, whilst AUTOMATIC1111 saves all information for all images in one CSV file.

I'll need it! 😂

The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better.

I certainly think it would be more convenient than running Stable Diffusion with command lines, though I've never tried to do that.

After that you need PyTorch, which is even more straightforward to install.
Is Stable Video Diffusion available for Automatic1111, or only ComfyUI right now?

Automatic1111 is great, but the one that impressed me, by doing things that Automatic1111 can't, is ComfyUI.

Asked Reddit what on earth is going on; everyone blindly copy-pasted the same thing over and over.

The general prompt used to generate the raw images (a 50/50 blend of normal SD and a certain other model) was something along the lines of: …

Only Masked crops a small area around the selected region; that area is looked at, changed, and then placed back into the larger picture.

As always, Google is your friend.

I'm currently running Automatic1111 on a 2080 Super (8GB), AMD 5800X3D, 32GB RAM.

We have made the popular FaceFusion Gradio app integrate with the SD webui, so you don't have to leave the webui interface to generate face-swapping videos.

I've used Easy Diffusion in the past and it seemed adequate, but then I came across Stable Diffusion from Automatic1111.

SD Upscale doesn't just upscale the picture like Photoshop would (which you can also do in Automatic1111 in the "Extras" tab); it regenerates the image, so further new detail, which didn't exist at the lower resolution, can be added at the new higher resolution.

Quite a few A1111 performance problems are because people are using a bad cross-attention optimization (e.g.…).

Question, as I've been out of the loop with SD for a while, but I read there were a few recent improvements to Automatic1111 such that by doing an update, speeds increased almost 2x, similar to Vlad's release.

Hey, I just got an RTX 3060 12GB installed and was looking for the most current optimized command line arguments I should have in my webui-user.bat.
I'm curious whether this will solve the random black images I sometimes get in some large batch generations. (The filter was off, BTW; I'm still investigating the issue. The first time I encountered the black square of morality in a batch, the prompt was tame, so I immediately changed it to something raunchier, for science, and I got NSFW results, but the frequency of the black pictures got up to 15%.)

I see a lot of misinformation about how various prompt features work, so I dug up the parser and wrote up notes from the code itself, to help reduce some confusion.

The code takes an input image and performs a series of image processing steps, including denoising, resizing, and applying various filters.

The first step is a render (512x512 by default), and the second render is an upscale.

PyTorch 2.…

…the .bat file that runs Automatic1111.

A copy of whatever you use most gets automatically stored on the SSD, and whenever the computer tries to access something on an HDD, it will pull it from the SSD if it's there.

Under the hood, they're all Stable Diffusion. The app is "Stable Diffusion WebUI", made by Automatic1111, and the programming language it was made with is Python.

May 10, 2025 · To use your downloaded models with the Automatic1111 WebUI, you simply need to place them in the designated model folder: \sd.…

…0 (not a fork).

I did a search and no one had a list posted, so I thought I'd start one.

It will download everything again, but this time the correct versions of PyTorch, the CUDA drivers, and xformers.

The AUTOMATIC1111 SD WebUI project is run by the same person; the project has contributions from various other developers too.

Make sure you have the correct commandline args for your GPU.

Lots of users put that in to keep up to date.
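In the spirit of the parser notes mentioned above, here is a small illustrative re-implementation of the documented emphasis rules. This is not the webui's actual parser (that lives in its prompt-parsing module); it only reproduces the arithmetic the documentation describes: "(word)" multiplies attention by 1.1 per nesting level, "[word]" divides by 1.1, and "(word:1.5)" sets an explicit multiplier.

```python
# Toy calculator for A1111-style emphasis weights (illustration only).
import re

def emphasis_weight(fragment: str) -> float:
    """Return the net attention multiplier for one bracketed fragment."""
    # explicit "(word:1.5)" form: the number is the multiplier
    m = re.fullmatch(r"\((?:[^():]+):([0-9.]+)\)", fragment)
    if m:
        return float(m.group(1))
    weight = 1.0
    # each level of round brackets multiplies by 1.1
    while fragment.startswith("(") and fragment.endswith(")"):
        weight *= 1.1
        fragment = fragment[1:-1]
    # each level of square brackets divides by 1.1
    while fragment.startswith("[") and fragment.endswith("]"):
        weight /= 1.1
        fragment = fragment[1:-1]
    return weight

print(emphasis_weight("(cat)"))              # 1.1
print(round(emphasis_weight("((cat))"), 2))  # 1.21
print(emphasis_weight("(cat:1.5)"))          # 1.5
```

Note this toy version ignores escaping and mixed nesting; the real parser handles a prompt as a whole, not one fragment at a time.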
My preferred tool is InvokeAI, which makes upscaling pretty simple.

The platform can be either your local PC (if it can handle it) or a Google Colab.

Long story short, I noticed either my ComfyUI or LoRA settings are not compatible, or something; …0 gives me errors.

Black boxes being added are a result of improper resolutions, in terms of downsampling. On the A1111 repo, LDSR by default will only upscale to 4x, so if you leave it at the default setting of 2x upscale, it will always downsample by 1/2; there are also further options in the settings.

Automatic1111 has an unofficial Smart Process extension that allows you to use a v2 CLIP model, which produces slightly more coherent captions than the default BLIP model.

I had it separately, but I didn't like the way it worked, as it blurred the detail of the picture a lot.

I know it's not exactly what you're asking for, but if you're interested in working with any open-source models, without the hassle of maintaining checkpoints, GPU, dependencies, etc.…

Enter the extension's URL in the "URL for extension's git repository" field.

Just wondering; I've been away for a couple of months, and it's hard to keep up with what's going on. I was using Automatic1111 until some update lowered speed on old cards (GTX 1660), then I had a break from SD. Recently I tried ComfyUI; the speed is great, but I miss plenty of the ease-of-use and workflow features Automatic1111 has.

So it sort of 'cheats' a higher resolution using a 512x512 render as a base.

Multiplies the attention to x by 1.1.
…webui\webui\models\Stable-diffusion; restart the WebUI, or refresh the model list using the small refresh button next to the model list at the top left of the UI, and load the model by clicking on its name.

It's been totally worth it.

loopback_scaler is an Automatic1111 Python script that enhances image resolution and quality using an iterative process.

It will only automatically update if you have a "git pull" command in the .bat file.

Before I muck up my system trying to install Automatic1111, I just wanted to check that it is worth it.

There are so many sampling methods available in the AUTOMATIC1111 GUI, but I don't know which one is best for generating certain types of images.

Yes, you would.

After launching Automatic1111 with --nowebui and using the API interface at http…

The solution for me was to NOT create or activate a venv, and to install all the Python dependencies…

Automatic1111 is a web-based graphical user interface for running Stable Diffusion. It brings up a webpage in your browser that provides the user interface.

After installing SD, you should make a few settings that are quite important:

Yeah, I like dynamic prompts too.

One thing I noticed right away when using Automatic1111 is that the processing time is a lot longer.

However, Automatic1111 is still actively updated, with new features being implemented.

Hi guys, I hope to get some technical help from you, as I'm slowly starting to lose hope that I'll ever be able to use WebUI.
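On the --nowebui API mentioned above: the webui exposes a txt2img route at /sdapi/v1/txt2img. This is a minimal sketch of building a request for it; the field names shown are the common ones from that route's schema, but your instance's /docs page is the authoritative reference, and the actual POST is left commented out so the snippet runs without a server:

```python
# Sketch of calling the webui's REST API (enable it with --api;
# --nowebui serves the API without the UI).
import json

def txt2img_payload(prompt: str, steps: int = 20,
                    width: int = 512, height: int = 512) -> dict:
    """Build a minimal request body for /sdapi/v1/txt2img."""
    return {
        "prompt": prompt,
        "negative_prompt": "",
        "steps": steps,
        "width": width,
        "height": height,
        "sampler_name": "Euler a",
    }

payload = txt2img_payload("a watercolor fox", steps=25)
body = json.dumps(payload)
print(body)

# With the server running locally, you would POST it, e.g.:
#   import urllib.request
#   req = urllib.request.Request(
#       "http://127.0.0.1:7860/sdapi/v1/txt2img",
#       data=body.encode(),
#       headers={"Content-Type": "application/json"})
#   resp = json.load(urllib.request.urlopen(req))
#   # resp["images"] holds base64-encoded PNGs
```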
Decent Automatic1111 settings, 8GB VRAM (GTX 1080). Discussion.

Clone Automatic1111 and do not follow any of the steps in its README.

Comparing NMKD SD GUI with the Automatic1111 GUI.

I'm running an RTX 3090 24GB and 32GB of RAM on a Windows PC, so I don't need one of those low-VRAM versions.

Automatic1111 Web UI - PC - Free: Sketches into Epic Art with 1 Click: A Guide to Stable Diffusion ControlNet in Automatic1111 Web UI.

Whole Picture takes the entire picture into account.

UPDATE: Vlad is SD.Next.

The prices seem decent to me, so I wanted to understand if it is possible.

With Easy Diffusion I could crank out 4 to 8 images in just as many seconds, but things took 1 to 2 minutes using the same model in the Automatic1111 version.

Installing Automatic1111 is not hard, but it can be tedious.

…0, which adds ControlNet and a node-based backend that you can use for plugins, etc., so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support give them serious potential… I wonder if Comfy and Invoke will somehow work together, or if things will stay fragmented between all the various…

Automatic1111 + ChatGPT + Affinity Photo + Procreate: I come from a traditional arts background, so it's much easier for me to whip up a simple composition in Affinity Photo, Procreate, or even a photographed sketchpad page than to f--k around with ComfyUI.

…bat file to update Automatic1111, which is IMO the more prudent way to go.

Enable dark mode for AUTOMATIC1111 WebUI: locate and open the webui-user.…
Navigate to the Extension Page.

Forge is a fork created by the developer behind ControlNet.

Easy Diffusion is a user interface to Stable Diffusion.

Also, use the 1.…

I used to really enjoy using InvokeAI, but most resources from Civitai just didn't work at all on that program, so I began using Automatic1111 instead. It seems like everyone everywhere recommended that program over all others at the time; is that still the case?

Noted that the RC has been merged into the full release as 1.…

Restart AUTOMATIC1111.

The Cavill figure came out much worse, because I had to turn up CFG and denoising massively to transform a real-world woman into a muscular man, and therefore the EbSynth keyframes were much choppier (hence he is pretty small in the frame).

Before SDXL came out, I was generating 512x512 images on SD1.…

Discussion: There are significant changes in how upscaling works, plus Hires fix doesn't seem to work anymore.

But with Automatic1111, sadly, the best option remains Alt+Tab > Photoshop.

It's become the de facto default GUI for the time being, but I'm sure better ones will replace it in the future.

This is really worth highlighting and passing on the praise: A1111's repo uses k-diffusion under the hood, so what happened is that k-diffusion got the update, which means it automatically got added to A1111, which imports that package.

…5 and Automatic1111 to a Windows 10 machine with an RTX 3080.

Automatic1111, safetensor, and CKPT custom models are supported.

…bat in your install directory, open it with a text editor, and there you will find a COMMANDLINE_ARGS section.

…0 Released and FP8 Arrived Officially.

Hi, Champs! We've made a new sd-webui-facefusion extension.
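For the dark-mode tweak walked through above, the relevant addition to the COMMANDLINE_ARGS section looks like this (sh form shown; on Windows the same flag goes in webui-user.bat as `set COMMANDLINE_ARGS=--theme dark`):

```shell
# webui-user.sh: the webui accepts a --theme flag (light/dark).
export COMMANDLINE_ARGS="--theme dark"
```

Appending `?__theme=dark` to the UI's URL in the browser is another common route, since that is a Gradio-level feature.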
Thanks :) Video generation is quite interesting, and I do plan to continue.

…0 version of Automatic1111 to use the Pony Diffusion V6 XL checkpoint.

…txt, and I can write __Celebs__ anywhere in the prompt, and it will randomly replace that with one of the celebs from my file, choosing a different one for each image it generates.

Images in Automatic1111 are coming out softer.

All images were created with Stable Diffusion (Automatic1111 UI); the only other image editing software was MSPaint.

I was curious whether Automatic1111 had any special shortcuts that might help my workflow.

Oct 24, 2024 · The last commit to this repo was 3 months ago.

Automatic1111 installs dependencies in a venv like this. It's not the most transparent thing when it comes to blindly pulling commits without checking first, but the source is available, and in my opinion it's just in the spirit of practicality.

If you want to have fun with AnimateDiff on the AUTOMATIC1111 Stable Diffusion WebUI…

Hires fix is the main way to increase your image resolution in txt2img, at least for normal SD 1.…

I just refreshed the Automatic1111 branch and noticed a new commit, "alternate prompt".

I have a separate .…

Some versions, like AUTOMATIC1111, have also added more features that can affect the image output, and their documentation has info about that.
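The __Celebs__ wildcard behaviour described above can be sketched in a few lines of Python. This is a hypothetical re-implementation for illustration (the real feature comes from the wildcards / dynamic-prompts extensions, whose code this does not reproduce): each `__name__` token is replaced by a random line from `name.txt` in a wildcard folder.

```python
# Hypothetical sketch of __name__ wildcard expansion (not the
# actual extension code): __Celebs__ -> random line of Celebs.txt.
import random
import re
import tempfile
from pathlib import Path

def expand_wildcards(prompt: str, wildcard_dir: Path,
                     rng: random.Random) -> str:
    """Replace each __name__ with a random line from <wildcard_dir>/<name>.txt."""
    def pick(match: re.Match) -> str:
        lines = (wildcard_dir / f"{match.group(1)}.txt").read_text().splitlines()
        options = [line.strip() for line in lines if line.strip()]
        return rng.choice(options)
    return re.sub(r"__(\w+)__", pick, prompt)

# demo with a throwaway wildcard file
tmp = Path(tempfile.mkdtemp())
(tmp / "Celebs.txt").write_text("Alice\nBob\nCarol\n")
out = expand_wildcards("portrait of __Celebs__, studio light", tmp,
                       random.Random(0))
print(out)
```

Running it once per image, as the webui does, is what gives a different pick per generation.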