How to Control Multiple Image Regions at Once | Tiled Diffusion Region Control | Stable Diffusion Automatic1111
- Published 17 Jun 2023
- An interesting feature of Multidiffusion / Tiled Diffusion extension is the ability to control specific regions of your image at the same time. This provides another way to get greater control over image creation. In this video I'll cover the basic ways to use this so you can start experimenting.
Link to the GitHub MultiDiffusion extension for Automatic1111: (Note: some would consider the images in these instructions inappropriate for younger audiences)
github.com/pkuliyi2015/multid... - Science & Technology
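Conceptually, the region control described in the video amounts to mapping user-drawn boxes (given as fractions of the canvas) onto tiles of Stable Diffusion's latent grid, each denoised with its own prompt. Below is a minimal sketch of just that coordinate mapping; the names `Region` and `to_latent_box` are illustrative and not taken from the extension's actual code:

```python
from dataclasses import dataclass


@dataclass
class Region:
    """A prompt region given as fractions of the image canvas (0.0-1.0)."""
    x: float
    y: float
    w: float
    h: float
    prompt: str


def to_latent_box(region: Region, width: int, height: int, scale: int = 8):
    """Map a fractional region to integer latent-grid coordinates.

    Stable Diffusion's latent space is 1/8 the pixel resolution,
    so a 512x512 image has a 64x64 latent grid.
    """
    lw, lh = width // scale, height // scale
    x0 = int(region.x * lw)
    y0 = int(region.y * lh)
    x1 = min(lw, x0 + max(1, int(region.w * lw)))
    y1 = min(lh, y0 + max(1, int(region.h * lh)))
    return x0, y0, x1, y1


# Example: a cat region covering the middle of the right half of a 512x512 image.
cat = Region(x=0.5, y=0.25, w=0.5, h=0.5, prompt="a cat running")
print(to_latent_box(cat, 512, 512))  # (32, 16, 64, 48)
```

Because the boxes live on the coarse latent grid rather than on pixels, region edges get blended during denoising, which is one reason the borders between regions can look soft.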
Solid tutorial!
Very easy to follow. And a chill but timely pacing.
Keep on being awesome.
It's insane that most people don't use these brilliant extensions just because they "seem" so complicated to understand and use.
I’m only at 4 mins and can already see how phenomenal this is going to be
Great explanation, thanks.
Great video, I have a question: do the latent tile settings actually change anything? I noticed that when you change the method, you get a totally different image. Thanks for the video.
In my case, the whole image depends only on the seed in the main prompt; no matter what I change in the region prompts, there's basically no effect at all. I don't know if that's only my case or not.
Latent Couple (Two-Shot) / Composable LoRA seem to do a better job at blending regions into a coherent image.
These borders seem somewhat blurred.
Do these work with the latest versions? I'd like to try them out too ^^. Also, can they be used to split LoRAs? I assume that's what Composable LoRA does.
As for MultiDiffusion, I think it wasn't shown here, but it comes with a second extension called Tiled VAE. Enable that; it makes the images much more consistent for some reason.
Have you tried it with LoRAs? I want to add two LoRAs, but the prompt doesn't like more than one face.
I haven't tried that but good idea for another experiment!
very interesting!
Thanks for the feedback and for watching!
2:23 -- Something is wrong here. Why do you say "ultra-high quality" in the positive prompt (even though "ultra" is an exaggeration with no possible reference point) and then add "low quality" in the negative prompt? There is no need for such a negative prompt, because you have already told Stable Diffusion in the positive prompt that you want high quality. It doesn't make sense. It's like saying _"I want the best quality, BUT ... I don't want the worst quality, OK?"_
Also, you need to be precise if you say there is another cat running far away. If you tell Stable Diffusion there is a cat in your other "box" but don't mention "a cat running in the background", Stable Diffusion will create an image with two cats of the same size. So you need to spell it out in the prompt for that specific box, and then Stable Diffusion will combine that box (the running cat) with the background "box". The result would then have the correct perspective in your images.
Being precise is the key, even more so when you use multi-boxes like this in your prompts, because Stable Diffusion can't read people's minds. At least, not yet, hehe!
Thanks for this info. I agree that precision in the prompting is very important; experimentation is key!
@@renaissancelaboratories5645 You're welcome, my pleasure. And also, yes, I agree. Experimentation is key!
Because that's not how diffusion works internally. "Best quality" and "worst quality" might be affecting different features/parameters in the model; it's not a slider concept. You could try generating 100 images with only "ultra-high quality", and another 100 adding "worst quality" to the negative prompt, and see whether they change the percentage of good image outputs.
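The experiment suggested above can be automated against a locally running Automatic1111 instance launched with the `--api` flag, which exposes a `/sdapi/v1/txt2img` endpoint. This is a rough sketch, not a definitive harness; the payload fields shown are standard txt2img parameters, and using the same seed in both arms is what isolates the negative prompt's effect:

```python
import json
import urllib.request

# Default local webui address; requires launching webui with --api.
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"


def make_payload(prompt: str, negative_prompt: str, seed: int) -> dict:
    """Build a txt2img request; identical seeds across arms isolate the negative prompt."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "seed": seed,
        "steps": 20,
        "width": 512,
        "height": 512,
    }


def generate(payload: dict) -> list:
    """POST one request to the webui API and return the base64-encoded images."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["images"]


if __name__ == "__main__":
    # Arm A: positive quality tag only. Arm B: same seed, plus the negative prompt.
    for seed in range(100):
        arm_a = make_payload("a cat, ultra-high quality", "", seed)
        arm_b = make_payload("a cat, ultra-high quality", "low quality, worst quality", seed)
        # generate(arm_a); generate(arm_b)  # uncomment with webui running, then compare the two sets
```

Comparing the two sets of 100 (by eye or with an aesthetic scorer) gives an actual percentage rather than an argument about what negative prompts "should" do.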
Why are you showing a tutorial on a feature you haven't mastered?
This is very fair feedback! Thank you for watching!