ComfyUI SDXL tutorial (Reddit)
The question "what is SDXL?" has been asked a few times in the last few days, since SDXL 1.0 came out. Here are some facts about SDXL from the StabilityAI paper, "SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis": a new architecture with 3.5B parameters (6.6B including the refiner).

SDXL 1.0 Base and Refiner models; an automatic calculation of the steps required for both the Base and the Refiner models; a quick selector for the right image width/height combinations based on the SDXL training set.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

Welcome to the unofficial ComfyUI subreddit.

It needs 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it. Example here: you can just use someone else's 0.9 workflow (the one from Olivio Sarikas's video works just fine) and replace the models with the 1.0 versions. My primary goal was to fully utilise the two-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space.

Help with FaceDetailer workflow (SDXL).

In the video, I go over how to set up three workflows: text-to-image, image-to-image, and high-res image upscaling.

You don't get it, do you? The issue isn't what he offers. I'll check out your link anyway, since you went to the effort to find it and edit your comment with more information.

I discovered a few glitches and missing bits in their documentation (which is sparse) and have captured all the fixes.
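A width/height selector like the one described is easy to sketch. The bucket list below is the commonly cited set of SDXL training resolutions (each close to one megapixel); `pick_resolution` is a hypothetical helper for illustration, not an actual ComfyUI node:

```python
# Commonly cited SDXL training resolutions (all close to 1024*1024 pixels).
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def pick_resolution(target_aspect: float) -> tuple:
    """Return the training bucket whose aspect ratio is closest to the target."""
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target_aspect))
```

For example, a 16:9 request lands on the 1344x768 bucket, which is why "widescreen" SDXL workflows usually default to that size.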
SDXL + COMFYUI + LUMA

Thank you! I've given up on ComfyUI for now. After watching all the tutorials available, I'm going to stick with A1111 + ControlNet because it's simpler, there are a lot of tutorials for it, and it serves my purposes.

The SDXL model that gives the best results in my testing.

I'm curious about the mixing of 1.5 and SDXL, but I still think that there is more that can be done in terms of detail.

Full fine-tuning of SDXL with only 10.3 GB VRAM via OneTrainer; both the U-NET and Text Encoder 1 are trained, compared against the 14 GB config.

This workflow/mini tutorial is for anyone to use. It contains the whole sampler setup for SDXL plus an additional digital distortion filter, which is what I'm focusing on here; it would be very useful for people making certain kinds of horror content.
Implementing SDXL Refiner - SDXL in ComfyUI from Scratch Series (Tutorial | Guide)

Here is a quick tutorial on how I use Fooocus for SDXL inpainting. The only references I've been able to find mention this inpainting model being used from raw Python or Auto1111.

SDXL most definitely doesn't work with the old ControlNet models.

SDXL 1.0 on the ComfyUI default workflow gives weird color artifacts on all images.

What's new in v4? Updates are being made based on the latest ComfyUI (2024.07).

To update the ControlNets, go to that new folder, click on the address bar, type cmd, and in the window that opens type git pull.

Adding Facial Details to Stable Video Diffusion Animation using consistent images with Canny SDXL, TUTORIAL LINK in the comments.

I can listen to music, or lightly browse the net, while running this.

AP Workflow now supports SD 1.5 and HiRes Fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and Control-LoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, etc.

ComfyUI and SDXL. Try generating basic stuff with a prompt; read about cfg, steps, and noise.
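The manual update steps (open the folder, type cmd, run git pull) can be scripted. This is a sketch that assumes each node pack under the directory is a git clone; the path is hypothetical, so adjust it to your install:

```shell
#!/bin/sh
# Pull the latest changes in every git clone under NODES_DIR.
NODES_DIR="$HOME/ComfyUI/custom_nodes"   # hypothetical path; adjust to your install
for repo in "$NODES_DIR"/*/; do
    # Skip plain folders (e.g. downloaded model weights) that aren't clones.
    [ -d "$repo/.git" ] || continue
    echo "Updating $repo"
    git -C "$repo" pull
done
```

The `.git` guard matters because model folders often sit next to cloned node packs, and `git pull` would error out on them.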
SDXL 1.0 Base and Refiner; automatic calculation of the steps required for both the Base and the Refiner models; quick selection of image width and height based on the SDXL training set; XY Plot; ControlNet with the XL OpenPose model (released by Thibaud Zamora).

Oh yes! I understand where you're coming from.

I just published a YouTube tutorial showing how to leverage the new SDXL Turbo model inside ComfyUI for creative workflows.

So I spent 30 minutes coming up with a workflow that fixes the faces by upscaling them (Roop in Auto1111 has it by default).

ComfyUI basics tutorial.

Use SD 1.5 and embeddings and/or LoRAs for better hands.

These days I make comprehensive ComfyUI tutorials.

Isn't the base SDXL model relatively safe? That, combined with a hidden, well-crafted negative prompt, should cover most things.

For those just getting started: install ComfyUI, install ComfyUI Manager, and follow the basic ComfyUI tutorials on the ComfyUI GitHub, like the basic SD1.5 workflow.

ComfyUI Master Tutorial - Stable Diffusion XL (SDXL) - Install On PC, Google Colab (Free) & RunPod, SDXL LoRA, SDXL InPainting.

While I already have ComfyUI, I'll get one or two gens, then something goes wrong and it slows way down; in trying to fix that, it has now stopped working entirely, and I keep getting weird errors.

TLDR, workflow: link.

That's the advantage of using SDXL Lightning: it can generate the animation in 3 hours with an RTX 3060 6 GB instead of 6 hours, and the quality is slightly improved compared to the SDXL versions.
Please share your tips, tricks, and workflows for using this software to create your AI art.

Next, install RGThree's custom node pack from the Manager.

I was just looking for an SDXL inpainting setup in ComfyUI.

In this guide, we'll set up SDXL v1.0 with the node-based Stable Diffusion user interface, ComfyUI.

Here is an alternative variant using the full SDXL and the established dual setup. We don't need this kind of complex workflow anymore: use a refined SDXL Turbo model like DreamShaperXL Turbo or others with the basic workflow, and it should work perfectly.

You can upscale in SDXL and run the image through img2img in Automatic using SD 1.5 at around 0.2 denoise to fix the blur and soft details. You can just use the latent, without decoding and encoding, to make it much faster, but that causes problems with anything less than 1.0 denoise.

ComfyUI Artist Inpainting Tutorial - YouTube.

ComfyUI was created by comfyanonymous, who made the tool to understand how Stable Diffusion works.

ComfyUI with SDXL (Base+Refiner) + ControlNet XL OpenPose + FaceDefiner (2x) (Tutorial | Guide). ComfyUI is hard.

On SDXL it works better above 1024x1024, and it will even have trouble going below that; there is a list floating around with the resolutions SDXL is best used at.

Yes, on an 8 GB card a ComfyUI workflow can load both SDXL base and refiner models, a separate XL VAE, 3 XL LoRAs, plus Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model, with input from the same base SDXL model, all working together.
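The low-denoise trick maps directly onto step counts: in a typical img2img implementation, the denoise (strength) value simply decides how many of the scheduler's steps actually run on the input latent. A minimal sketch, with a hypothetical helper name, mirroring how libraries such as diffusers handle `strength`:

```python
def img2img_steps(num_inference_steps: int, denoise: float) -> tuple:
    """Return (steps_actually_run, first_step_index) for an img2img pass.

    With denoise=1.0 the image is fully re-noised (all steps run); with
    denoise=0.2 only the last 20% of steps run, so most of the input survives.
    """
    steps_to_run = min(int(num_inference_steps * denoise), num_inference_steps)
    start_at = num_inference_steps - steps_to_run
    return steps_to_run, start_at
```

So a 30-step schedule at 0.2 denoise really only samples 6 steps, which is why the pass is cheap and why it can only fix blur and soft details rather than restructure the image.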
1024x1024 for SDXL (Tutorial | Guide). Files are hosted on Civit: https:

Link: Tutorial: Inpainting only on masked area in ComfyUI. The feedback was positive, so I decided to post it.

At present I'm using basic SDXL with its refiner.

FreeU parameters: b1 is responsible for the larger areas of the image, b2 for the smaller areas, s1 for the details in b2, and s2 for the details in b1. So s1 belongs to b2, and s2 to b1.

ComfyUI - SDXL basic to advanced workflow tutorial - 4 - upgrading your workflow. Heya, tutorial 4 from my series is up; it covers the creation of an input selector switch and the use of some math.

AnimateDiff for SDXL is a motion module used with SDXL to create animations. However, I kept getting a black image.

AP Workflow v3.

ComfyUI Fundamentals Tutorial - Face Restoration + Roop + Impact Pack. Is there a good solution for the resolution problem with ReActor/Roop and SDXL?

ComfyUI Tutorial: Dreamshaper Turbo model comparison (Tutorial | Guide).

Hi there. Here is a great tutorial if you don't have ComfyUI already set up.

ComfyUI - SDXL + Image Distortion custom workflow.

Text2Image with SDXL 1.0 and upscalers.
SDXL-Turbo Animation | Workflow and Tutorial in the comments.

This pack includes a node called "power prompt".

This tutorial is based on the information in Mr. Ryu Nae-won's NVIDIA AYS (Align Your Steps) posting.

So far I find it amazing, but I'm not achieving the same level of quality I had with Automatic 1111. And I've read that SDXL still doesn't work well with Auto1111, so I guess ComfyUI it is.

As of this writing it is in its beta phase, but I am sure some are eager to test it out.

ComfyUI SDXL Turbo Advanced Latent Upscaling Workflow Video.

[Tutorial] How to connect anything to Stable Diffusion.

You can use a model that gives better hands.

Starts at 1280x720 and generates 3840x2160 out the other end.

SDXL official ControlNet models have been released by Stability AI. I put up a quick tutorial on how to use them with ComfyUI for those interested.

AI Animation using SDXL and Hotshot-XL! Full Guide Included!

You can repeat the upscale-and-fix process multiple times if you wish.

This is a variant of the ComfyUI area composition example, but instead of defining a landscape, I've used it to compose a character.
0.9 img2img tutorial.

UW wallpaper generation with a ComfyUI SDXL workflow.

A long, long time ago, maybe 5 months ago (yeah, blink and you miss the latest AI development). I have been using ComfyUI for quite a while now, and I've got some pretty decent workflows for 1.5.

These were all done using SDXL and the SDXL Refiner, and upscaled from the source image.

It might be in the weird ComfyUI anime slideshow tutorial on the ComfyUI workflows site.

Appreciate you 🙏🏽🙏🏽🫶🏽🫶🏽

This probably isn't the fully recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format.

I've mostly tried the opposite, though: SDXL gen and 1.5 as refiner.

It's more applicable with SDXL, since it can allow for character integration without needing to train a LoRA.

Hey r/comfyui, last week I shared my SDXL Turbo repository for fast image generation using Stable Diffusion, which many of you found helpful.

ComfyUI SDXL Basics Tutorial.

Duchesses of Worcester - SDXL + COMFYUI + LUMA.

LoRA training with SDXL 1.0, and upscaling with a ComfyUI SDXL 1.0 workflow.

Tutorial Readme File updated for SDXL 1.0: download links and new workflow PNG files. The new, updated free-tier Google Colab now auto-downloads SDXL 1.0 and the refiner, and installs ComfyUI.
The Gory Details of Finetuning SDXL for 30M Samples.

SDXL initial review + Tutorial (Google Colab notebook for ComfyUI, VAE included).

I set up a workflow with a first pass and a high-res pass.

This is a plugin that allows users to run their favorite features from ComfyUI while working on a canvas.

I just recorded this video tutorial that explains, in just ten minutes, how to do very fast inpainting only on masked areas in ComfyUI. It works with any SDXL model.

But for a base to start from, it'll work.

Such a massive learning curve for me to get my bearings with ComfyUI. I followed the normal process, couldn't get it to run, got the portable version, got stuck in the same place, and finally tried to follow a YT tutorial, only to realize I didn't do anything wrong and got stuck a third time in the same place.

We offer free tutorials on the specific in-app uses.

Base generation, Upscaler, FaceDetailer, FaceID, LoRAs, etc.

SDXL is newer, was trained on larger images, and is better at following the prompt.

How To Use SDXL in Automatic1111 Web UI - SD Web UI vs ComfyUI - Easy Local Install Tutorial / Guide.

ComfyUI Tutorial - How2Lora - a 4-minute tutorial on setting up a LoRA. What is a LoRA? My current experience level is having installed Comfy with SDXL 1.0 and done some basic image generation.
SDXL Turbo with ComfyUI, Workflow Included. You only need the DreamshaperXL_Turbo checkpoint to run it, no extra nodes.

Don't install ALL the suggested nodes from ComfyUI Manager's "install missing nodes" feature! It will lead to conflicting nodes with the same name, and a crash.

What's new in v4.0? A complete re-write of the custom node extension and the SDXL workflow.

Try inpainting. Try outpainting.

If I were you, however, I would look into ComfyUI first, as that will likely be the easiest to work with in its current format.

Upscale your output and pass it through the hand detailer in your SDXL workflow.

This goes for 1.5 through SDXL, although the implementation may be a little different if you use SDXL with advanced KSamplers.

In the process, we also discuss the SDXL architecture: how it is supposed to work, and what things we know and are missing. SDXL (Stable Diffusion XL) represents a significant leap forward in text-to-image models, offering improved quality and capabilities compared to earlier versions.

He's using open-source knowledge and the work of hundreds of community minds for his own personal profit through this very same place, instead of giving back to the source from which he took everything he used to add his extra script.

Supports: basic txt2img.

You can encode, then decode back to a normal KSampler, with a 1.5 model as the generation base and the SDXL refiner pass afterwards.
Highly optimized processing pipeline, now up to 20% faster than in older workflow versions.

Hello, community! I'm happy to announce I have finally finished my ComfyUI SD Krita plugin.

ComfyUI SDXL FaceSwap question: hi guys, only the IPAdapter Plus models worked when I followed that Latent Vision tutorial on YouTube.

The trick is to skip a few steps on the initial image; it acts like choosing your denoiser settings, and the more steps skipped, the more of the original image is preserved.

These will be follow-along, step-by-step tutorials where we start from an empty ComfyUI canvas and slowly implement SDXL.

I tend to use Fooocus for SDXL and Auto1111 for 1.5. I definitely look forward to SDXL being as simple to use as 1.5 is.

I learned about MeshGraphormer from a YouTube video by Scott Detweiler, but felt like simple inpainting does not do the trick for me, especially with SDXL.

There are tutorials covering upscaling in ComfyUI. I compare all possible inpainting solutions in this tutorial: BrushNet, PowerPaint, Fooocus, UNet inpaint checkpoints, SDXL ControlNet inpaint, and SD1.5 inpaint checkpoints, plus normal checkpoints with and without Differential Diffusion.

When using Roop (the faceswapping extension) on SDXL, and even on some non-XL models, I discovered that the face in the resulting image was always blurry.
ComfyUI Tutorial: Exploring Stable Diffusion 3.

Please keep posted images SFW.

SDXL 1.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder) (Tutorial | Guide). I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use.

Now you can manage custom nodes within the app.

I definitely agree that someone should have some sort of detailed course/guide.

ComfyUI Tutorial - 3-pass workflows - multiprompt shenanigans.

At the moment I generate my images with a detail LoRA at 512 or so. But hey, if you're so concerned with me wasting time on Reddit, feel free to stop trying to waste more of it.

I meant using an image as input, not video. Why is this needed? In this case, prompting the robot with a human face would normally be difficult.

Hello! I'm new to ComfyUI and I've been experimenting with it the whole Saturday. What would people recommend as a good step-by-step starter tutorial?
In the previous tutorial we were able to get along with a very simple prompt, without any negative prompt in place: photo, woman, portrait, standing, young, age 30.

Not unexpected, but as they are not the default values in the node, I mention it here.

Hi, any way to fix hands in SDXL using ComfyUI? I am generating decent images, but they consistently get ruined because the hands are atrocious. I like to do photo portraits: nothing crazily complex, but as realistic as possible.

I want to get into ComfyUI, starting from a blank screen.

Inpainting only on the masked area in ComfyUI, + outpainting, + seamless blending (includes custom nodes, workflow, and video tutorial) (e.g. IPAdapter + Ultimate Upscale). The Inpaint Crop and Stitch nodes can be downloaded using the Manager.

Thanks for the tips on Comfy! I'm enjoying it a lot so far.

Inpainting (with auto-generated transparency masks).

A lot of people are just discovering this technology and want to show off what they created.

The style transfer and composition aspects are also improved.

Everyone who is new to ComfyUI starts from step one! What does it do? It contains everything you need for SDXL/Pony.
Just install these nodes: Fannovel16's ComfyUI ControlNet Auxiliary Preprocessors. Reboot ComfyUI. You now have all the SDXL ControlNets in a folder, listed in ComfyUI with the folder name so you can distinguish them.

I know it must be my workflows, because I've seen some stunning images created with ComfyUI.

Support for ControlNet and ReVision; up to 5 can be applied together.

Both of the workflows in the ComfyUI article use a single image as input/prompt for the video creation, and nothing else.

Help with hands in SDXL/ComfyUI.

When I see comments like this, I feel like an old-timer who knows where QR Code Monster came from, and what it is actually used for now.

Multi-LoRA support, with up to 5 LoRAs at once.

I tested with different SDXL models, and tested without the LoRA, but the result is always the same. Solution: follow this tutorial (https: ). Before, I couldn't even generate with SDXL on ComfyUI or anything else.

I tested all of them; they are now accompanied by a ComfyUI workflow that will get you started in no time.
Because I definitely struggled with what you're experiencing: I'm currently 3-4 months into ComfyUI and finally understanding what each node does, and there are still so many custom nodes that I don't have the patience to read about.

Real-time interactive experience with SDXL Turbo and ComfyUI, with interactions handled in TouchDesigner.

MistoLine: a new SDXL ControlNet that can control all the lines!

First, I generated a series of images in a 9:16 aspect ratio, some in ComfyUI with SDXL and others in Midjourney.

Hi, amazing ComfyUI community. ComfyUI - SDXL basic-to-advanced workflow tutorial - part 5. Heya, part 5 of my series of step-by-step tutorials is out; it covers improving your advanced KSampler setup and the usage of pre-diffusion.

ComfyUI SDXL Basics Tutorial Series 6 and 7 - upscaling and LoRA usage. Both are quick and dirty tutorials without too much rambling; no workflows included because of how basic they are.

Not only was I able to recover a 176x144-pixel, 20-year-old video with this; in addition it supports the brand-new SD15 model for the Modelscope nodes by ExponentialML, an SDXL Lightning upscaler (in addition to the AD LCM one), and a SUPIR second stage, for a gorgeous 4K native output from ComfyUI!
This is just a basic SDXL workflow: two KSamplers, for base and refiner.

Tidying up the ComfyUI workflow for SDXL to fit it on a 16:9 monitor, so you don't have to | Workflow file. Eh, Reddit's gonna Reddit.

OK, here is a trick I've been using for a while; it works with just about any workflow, from 1.5 on.

For me it produces jumbled images as soon as the refiner comes into play.

It hasn't caught on as much as SD1.5 has.

MoonRide workflow v1. Very impressive and easy-to-follow stuff. Searge: excellent work.

And above all, BE NICE. Belittling their efforts will get you banned.

Here are my findings: the neutral value for all FreeU options (b1, b2, s1, and s2) is 1.0.

I'm going to keep putting tutorials out there, and people who want to learn will find me 🙃 Maximum effort into creating not only high-quality art, but high-quality walkthroughs, incoming.
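A toy sketch of why 1.0 is the neutral value: FreeU rescales the U-Net's backbone features by the b factors and the skip-connection features by the s factors, so a factor of 1.0 leaves everything unchanged. This is deliberately simplified; the real FreeU applies the s factors only to low-frequency components via a Fourier mask, and the function name here is hypothetical:

```python
def freeu_scale(backbone, skip, b=1.0, s=1.0):
    """Scale backbone features by b and skip-connection features by s.

    Simplified sketch: actual FreeU applies s through an FFT low-pass mask
    rather than uniformly, but the neutral-value intuition is the same.
    """
    return [x * b for x in backbone], [x * s for x in skip]

# b = s = 1.0 is the neutral setting: features pass through unchanged.
```

This also explains the earlier note that b1/s2 and b2/s1 pair up: each b boosts one resolution stage of the backbone, while the matching s tames the skip features feeding that stage.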
The Ultimate SD Upscale is one of the nicest things in Auto1111. It first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other, and can be bigger.

I am trying out using SDXL in ComfyUI. Once you get to the step that asks you to import a JSON file, you would instead come back to this page and import mine.

Building on that, I just published a video walking through how to set up and use the Gradio web interface I built to leverage SDXL Turbo.

The power of SDXL in ComfyUI, with a better UI that hides the node graph.

With SDXL 0.9 I was using some ComfyUI workflow shared here where the refiner output was always an improvement versus the base.

ComfyUI Tutorial: SDXL Lightning test and comparison.

I teach you how to build workflows rather than just use them. I ramble a bit, and damn if my tutorials aren't a little long-winded; I go into a fair amount of detail, so maybe you like that kind of thing.

The goal of this tutorial is to give an overview of a method I'm working on to simplify the process of creating manga, or comics.

I used the workflow kindly provided by u/LumaBrik, mainly playing with parameters like CFG guidance, augmentation level, and motion bucket.

Stable Cascade is a slightly weird model, using a different architecture from SD1.5; it requires more VRAM to work and is slower to generate images.
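The tiling step described above can be sketched as a coordinate generator along one axis; `tile_origins` is a hypothetical helper for illustration, not Ultimate SD Upscale's actual code:

```python
def tile_origins(length: int, tile: int = 512, overlap: int = 64) -> list:
    """Start coordinates of tiles covering `length` pixels.

    Each tile is `tile` pixels wide; consecutive tiles overlap by `overlap`
    pixels so the seams between SD passes can be blended away.
    """
    if length <= tile:
        return [0]
    step = tile - overlap
    origins = list(range(0, length - tile, step))
    origins.append(length - tile)  # last tile sits flush with the edge
    return origins
```

Run on both axes, the cross product of the two origin lists gives every tile's top-left corner; each tile is then sent through an SD img2img pass and blended back.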
I tried this prompt out in SDXL against multiple seeds, and the result included some older-looking photos, or attire that seemed dated, which was not the desired outcome. Basically, high-res fix is not needed for SDXL (even at 1.0 denoise). I assembled it over 4 months.

How To Use SDXL in Automatic1111 Web UI - SD Web UI - Easy Local Install Tutorial / Guide - Working Flawlessly.

StabilityAI just released new ControlNet LoRAs for SDXL, so you can run these on your GPU without having to sell a kidney to buy a new one. I did some experiments with SDXL 1.0 and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself. In one of them you use a text prompt to create an initial image with SDXL, but the text prompt only guides the input image creation, not what should happen in the video.

ComfyUI Tutorial: Background and Light control using IPAdapter (youtu.be). Better image quality in many cases, some improvements to the SDXL workflow.

ComfyUI Tutorial for beginners: how to install and activate checkpoints, ControlNet, SEECODER, LoRA and VAE. Tutorial | Guide. Hi everyone, I'm excited to announce that I have finished recording the necessary videos for installing and configuring ComfyUI, as well as the necessary extensions and models. Enjoy!
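Part of why a separate high-res fix pass matters less for SDXL is that it was trained on a set of roughly one-megapixel width/height buckets, so you get the best results by generating directly at the nearest trained aspect ratio. A small selector along those lines (the bucket list below is the set commonly cited by the community for the SDXL training resolutions, and the helper name is mine; treat both as assumptions):

```python
# Width/height pairs commonly cited as SDXL training resolutions (~1 MP each).
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def nearest_bucket(width: int, height: int) -> tuple[int, int]:
    """Pick the bucket whose aspect ratio is closest to the requested one."""
    target = width / height
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(nearest_bucket(1920, 1080))  # a 16:9 request maps to (1344, 768)
```

Generate at the bucket resolution first, then upscale to the final size, rather than asking the model for an off-distribution resolution directly.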
ComfyUI already has the ability to load UNET and CLIP models separately from the diffusers format, so it should just be a case of adding it into the existing chain with some simple class definitions and modifying how that functions.

I learned more about ComfyUI from these tutorials than any other that I found on YouTube.

It is made by the same people who made the SD 1.5 workflow (don't download workflows from YouTube videos or advanced stuff on here!!). Basic img2img. SDXL-Turbo Animation.

For me, it has been tough, but I see the absolute power of node-based generation (and efficiency). I have a wide range of tutorials with both basic and advanced workflows. SD 1.5 as refiner. Most Awaited Full Fine Tuning (with DreamBooth effect).

Anyone? With the "ComfyUI Manager" extension you can install missing nodes almost automatically with the "Install Missing Custom Nodes" button.

Awesome tutorial. I have been trying since yesterday to start ComfyUI on my laptop, with no success. I've tried using an empty positive prompt (as suggested in demos) and describing the content to be replaced, without success -- the region I define with a mask

InstantID tutorial (A1111 and ComfyUI) - Tutorial | Guide.

The question has been asked a few times since SDXL 1.0 came out, and I've answered it this way.
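For the basic img2img and "SDXL refiner pass afterwards" patterns mentioned above, the controlling parameter is denoise: on an N-step schedule, a denoise of d noises the input image to the level of step N - d*N and then samples only the remaining d*N steps, which is why low values preserve the input. A small sketch of that arithmetic (the function name is mine, illustrating how KSampler's denoise input behaves):

```python
def img2img_steps(total_steps: int, denoise: float) -> tuple[int, int]:
    """Return (start_step, steps_run) for an img2img pass.

    denoise=1.0 regenerates from pure noise (all steps run);
    denoise=0.3 keeps most of the input and runs only the last ~30%.
    """
    run = round(total_steps * denoise)
    start = total_steps - run
    return start, run

print(img2img_steps(20, 0.5))  # starts at step 10, runs the last 10 steps
```

This is also why an SDXL refiner pass over an SD 1.5 base image is usually run at a low denoise value: it should only reshape the final low-noise steps, not repaint the image.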
ComfyUI SDXL Basics Tutorial Series 6 and 7 - upscaling and LoRA usage. Stable Diffusion SDXL LoRA Training Tutorial.

Can you let me know how to fix this issue? I have the following arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory