NEW Outpaint for ControlNET - Inpaint_only + Lama is EPIC!!!! A1111 + Vlad Diffusion
- Added 13 Jun 2023
- The new outpainting for ControlNET is amazing! This uses the new inpaint_only + lama method in ControlNET for A1111 and Vlad Diffusion. The method is very easy to use. In this outpainting tutorial I show you all the settings you need, and also my img2img method that gives better results.
#### Links from my Video ####
Create ADs in A1111 czcams.com/video/LBTAT5WhFko/video.html
Map Bashing with ControlNET czcams.com/video/z6Xwh9G24uw/video.html
#### Join and Support me ####
Buy me a Coffee: www.buymeacoffee.com/oliviotutorials
Join my Facebook Group: facebook.com/groups/theairevolution
Join my Discord Group: discord.gg/XKAk7GUzAW
Interesting extensions: face swap
My controlnet version is stuck at v1.1.200, checking for updates say it's the latest one, please help.
I don't see any model next to the inpaint_only box, help please
@@boostergold4101 Did you do the update? What does the version number say? You can also remove ControlNet and reinstall it, but make sure to keep the models, or you'll need to download all of them again.
Is it possible to download the models manually? (There's one on Civitai by the ally; it works, but I'm not sure if it's the same one.)
FYI, if you change both the height and the width it's going to stretch your image. If you truly want to outpaint from 512x512 to 768x1280, you should do it in 2 steps. So first go to 512x1280, then repeat going from that to 768x1280. Hope this makes sense. Cheers!
True. In the example in the video his base image was already 768px, so he just adjusted that upward to match it, then only changed the width...but helpful tip overall!
Yeah, I noticed that on the first try. I realized it paints wherever the aspect ratios differ, rather than scaling from the initial size.
Mine just keeps filling the canvas with the existing image. I'm not sure why.
@@shabadooshabadoo4918 low denoising setting
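The two-step tip above can be sketched as a small helper (a sketch only; the function name is mine, not part of the webui): change one axis per outpaint pass so ControlNet pads rather than stretches the source.

```python
def outpaint_steps(src, dst):
    """Split a resize into passes that change one axis at a time,
    so each outpaint pass only pads (never stretches) the source."""
    w, h = src
    tw, th = dst
    steps = []
    if h != th:   # first pass: grow the height only
        steps.append((w, th))
        h = th
    if w != tw:   # second pass: grow the width only
        steps.append((tw, th))
    return steps

# 512x512 -> 768x1280 becomes two passes: 512x1280, then 768x1280
```

Run each returned size as its own img2img + ControlNet generation, feeding the previous result forward.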
Olivio --this is AMAZING. I'm so thankful that I found your channel. Thank you so much for EVERY video you put out and teaching us how to use these amazing tools!
thank you, my pleasure
And here I was creating transparencies and then inpainting the sides. Thanks Olivio!
Olivio is THE man! Always providing us with superb updates and tutorials. :D
Really digging the new videos! The practical workflows are just great.
thank you very much
Super useful, can't wait to try it! Thank you!
I love your videos my friend, thanks for sharing!
Thanks Olivio, this is great. Thanks for showing us this great technique. This is amazing outpainting.
my pleasure :)
This was exactly the video I needed, exactly when I needed it!
TYSM!
Dropping nuggets! Thanks Oli!
This looks great! I cannot wait to try it out. Thanks for the vid
Thank you! This is a fantastic learn 👍
This doesn't seem to work anymore in Automatic1111 1.6. I just get black bars...
It is exactly what I need. Thank you very much
Brother, your work is great!! You are the ONE!!! Thanks for your tutorial!
Note: You'll want to use the exact same checkpoint for outpainting as was used for creating the image in the first place.
I did some testing and even rather similar models perform very poorly, mostly resulting in largely blurry messes.
I had to download the inpaint model manually but later it did work! Thanks again Olivio
Where do you have to put the model?
This stuff is totally wicked!
Hmm, I updated the ControlNet extension, but control_v11p_sd15_inpaint sadly doesn't show up in the model dropdown :/
Same here. Do we just download it directly from Hugging Face?
@@Maltebyte2 yeah i just googled the name and found a download, probably what you said
Can you tell me what directory I have to put this file in? Thanks @@Maltebyte2
\stable-diffusion-portable-main\models\ControlNet @@Maltebyte2
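If the auto-download never triggers (a recurring issue in this thread), here is a minimal manual-download sketch. It assumes the standard lllyasviel/ControlNet-v1-1 release on Hugging Face and a default A1111 install layout; adjust `DEST` to your own install path (e.g. the portable path mentioned above).

```python
import os
import urllib.request

MODEL = "control_v11p_sd15_inpaint.pth"
URL = f"https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/{MODEL}"
# default A1111 layout; Vlad/portable installs use a different root folder
DEST = os.path.join("stable-diffusion-webui", "models", "ControlNet")

def download_inpaint_model(dest=DEST):
    """Fetch the inpaint model if it is not already in the ControlNet folder."""
    os.makedirs(dest, exist_ok=True)
    target = os.path.join(dest, MODEL)
    if not os.path.exists(target):
        urllib.request.urlretrieve(URL, target)
    return target
```

After downloading, hit the refresh icon next to the model dropdown (or restart the webui) so the model appears.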
BRILLIANT! Thank you!
Thanks for this latest update Olivio! With the download step, the image will usually already exist within the outputs/txt2img-images folder/directory so can always be uploaded from there, unless wanting to create a copy as part of the workflow. Recent updates of the Infinite Image Browsing extension (highly recommended) also has a context menu option to send an image directly to the t2i and i2i control net panels, so you can just click the tab for images, right click the desired image and send it straight to CN.
You've just sold me on that extension. It always seemed crazy to have to shuffle an image you're already using around so much just to put it in another tab in the same window.
If I'm not completely mistaken, if you just don't select an image in controlnet at all it will just use the img2img one anyway so that's even easier.
@Janek Nice. Just tested generating a cat, sending to i2i and enabling canny. Changed the cat style in the prompt and the cat changed but the image structure was nicely coherent.
This is awesome, thanks!
Incredibly useful. Tbh, as someone who is maybe a 5th grader at best when it comes to SD knowledge and skill, this is why I had yet to learn how to effectively outpaint - knowing that something like this would eventually come along.
yes, that is so much easier than the classic outpainting :)
you are Amazing man... love your Videos....
Great video, thanks!
WOOOOW so cool !! it works ! 😍
I was surprised by how well this works too :)
The outpainting algorithm is decent in guessing what the extra data on the sides should be. However, I prefer the extension openOutpaint, since you can control which side is extended and by what amount of pixels.
I hope they'll soon implement a method to outpaint only to a selected side, like only to the right and not always equally to both sides. Or let's hope that invokeAI catches up some day and implements ControlNet, since their Unified Canvas is the most intuitive and user-friendly way of out- and inpainting. However, as usual great video Olivio! :)
You can use the Poor man's outpainting script to expand the image.
Hello, help. I can't get the ControlNet v11p inpaint model... It's not in the selection of models. Since he said it'll get downloaded automatically the first time I use lama, I left the model on "none", but for some reason mine won't download. It's still not in the selection even after restarting my terminal and the webui. Am I doing something wrong? Thank you
Can't wait to start this method with all my previous pictures and see what I get.
Great video. Can't wait to try it!
Me too
Same 🎉
Thanks Olivio, super cool. I was happy to see you uploaded the video again, as it's great content 🤠👌🏻 Are there AI tools that can make a 3D render more realistic? Or perhaps populate a 3D building with people, furniture and vegetation? I'm an architect and would love to see examples of this, please 🙏🏻🙏🏻🙏🏻😇 Even something to help make a photomontage of my building into its context a bit easier 🙏🏻 Thanks in advance
Truly gold! I'll definitely add that method and then finish off with Invoke.
awesome :)
Great explanation, I like the vibe of your video and your voice. Subscribed ❤
Great stuff! Basically what Adobe are doing with Photoshop Generative Fill, but for free and without content filter. 😉
Well done man
You should use (:1.2) instead of (()); it's more convenient, especially if you go to a high strength or one lower than 1. Use Shift + arrow up or down to change the strength.
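For context on why those two forms are nearly equivalent: in A1111 prompts, each pair of parentheses multiplies a token's attention by 1.1, so nested parens quickly get hard to reason about. A quick sketch of the arithmetic (the helper name is mine, not part of the webui):

```python
def paren_weight(depth):
    """Effective attention weight of a token wrapped in `depth` pairs of
    parentheses in an A1111 prompt: each pair multiplies emphasis by 1.1."""
    return round(1.1 ** depth, 4)

# ((word)) gives paren_weight(2) = 1.21, so (word:1.2) is nearly the
# same emphasis and far easier to tune with the arrow-key shortcut
```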
the model doesn't download for me, i get a "You have not selected any ControlNet Model."
amazing, thanks
Incredible, thank you. I think outpainting is finally viable. If people are finding seams where the original image meets the new parts, you might want to place the outpainted image back into img2img and then generate a new image with a very low denoise strength. That should fix it.
Hello, help. I can't get the ControlNet v11p inpaint model... It's not in the selection of models. Since he said it'll get downloaded automatically the first time I use lama, I left the model on "none", but for some reason mine won't download. It's still not in the selection even after restarting my terminal and the webui. Am I doing something wrong? Thank you
@@vonpheusarts6948 Yes, I had this same problem as well. It did not automatically download; just find the ControlNet page on Hugging Face and download it from there, man
Yes, thats an issue Im having with this method, the seams are very obvious. Will try that trick. Thanks
You could even delete your main prompt to allow it to imagine even more potentials, can also use masks for doing only small or particular areas.
interesting. i have to try that
Thx. guys
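The low-denoise blending pass described above can also be scripted against the webui's HTTP API (requires launching with `--api`). A sketch only; the helper names and the 0.2 default are illustrative, not from the video.

```python
import json
import urllib.request

def seam_fix_payload(image_b64, prompt, denoise=0.2):
    """Payload for a low-denoise img2img pass over an already-outpainted
    image, to soften the seam between the original and the new areas."""
    return {
        "init_images": [image_b64],     # base64-encoded outpainted image
        "prompt": prompt,
        "denoising_strength": denoise,  # keep low so content barely changes
    }

def run_seam_fix(image_b64, prompt, url="http://127.0.0.1:7860"):
    """POST the payload to a locally running webui and return the result."""
    req = urllib.request.Request(
        url + "/sdapi/v1/img2img",
        data=json.dumps(seam_fix_payload(image_b64, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["images"][0]
```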
You can use this in txt2img mode too, and the extra high-resolution (hires fix) option can be selected there as well
Amazing content! By the way, how are the results with real photos or photorealistic images? Also, which of your tutorials do you recommend following to set up Stable Diffusion for this kind of outpainting?
Thanks, this is amazing! Btw, you don't need to download the image, you can just copy/paste it, it works!
Do you know if it's possible to pick a direction to outpaint? Can we outpaint only on the left of the image, and not on both sides?
You are the best
thank you uncle
ControlNet is the most powerful tool that we have in SD.
Thanks. I'm a latecomer to the Stable Diffusion world, so I had to go through extra steps to download the models for ControlNet from the website, since they won't auto-download. My recommendation would be to provide links in future tutorials, just in case it doesn't do things automatically. Been following you for a week now, after I watched the tutorial where you were rapping 😂 Great work dude 😊
Could you share the link you used in the end - having the same problem haha
Hi Olivio, thank you for sharing such amazing and useful material. I have a question: where do I get or download the preprocessor inpaint_only+lama? I have the updated models and ControlNet is also updated, but it is not there.
Hey, thanks for your great work. I followed this tutorial, but the issue I'm seeing is that when the outpainting is done, the original "internal" image has changed as well. In your examples, the internal image seems to stay the same between the original and the outpainted result.
Mind you, I am also getting a "RuntimeError: The size of tensor a (3) must match the size of tensor b (0) at non-singleton dimension 2" type of error, so it may be an issue with a size mismatch somewhere.
Thanks for this, I just tried it. Is there a way to blur the edges? I have pretty rough, hard edges in my results.
Thank you Olivio, do you have any tutorial about the mov2mov extension?
Thanks for the video, but you didn't mention which positive and negative prompts you use during ControlNet's inpainting mode. Or are they not important? Or should I use the same ones as in the Outpainting mk2 script?
Help. I can't get the ControlNet v11p inpaint model... It's not in the selection of models. Since you said it'll get downloaded automatically the first time I use lama, I left the model on "none", but for some reason mine won't download. It's still not in the selection even after restarting my terminal and the webui. Am I doing something wrong? Thank you
Having the same issue, shows as empty. Were you able to figure it out?
@@FingerThatO I forget the exact steps now, but as far as I remember I was able to fix it by downloading the model manually. I probably asked on Reddit and somebody sent me a link to the model. I'll post it here if I can still find it
@@vonpheusarts6948 I have downloaded it too, but which folder should I put it in?
Excellent
So with this method it seems the outpaint always expands outward from the center of the image. How should I approach an outpaint with the original image offset? For example, only outpainting the right side.
Thanks for the video. My machine can only handle one task at a time so I get to try it and see one go at a time. I did wonder about the prompt. Does the prompt stay the same as when I made the image, or do I change it for what I want the fill to look like? And can I use CN with maps to populate the wider area? I'll give it all a go and find out, but that's my question. I do wish I could specify where in the new image my original image was positioned. Centred every time is a bit boring. I guess I just render two 512 and stick them together. Trick would be matching up their sides. We'll look back on all this and none of it will be as fiddly in a year or so.
Great! Have you tried it out with other types of images? Like different artists' styles?
Thanks !
Thank you. I wanted to try it, but SD does not automatically download the model. How else can I get the model?
Where do you get the controlnet model if it doesn't auto-download?
same issue here
Did the update and got inpaint+lama, but the ControlNet model does not download no matter what I do. Is there a manual way?
So, 1. it does not automatically download the inpaint model (and I also can't find a download anywhere), and 2. my main image also changes; it doesn't fill only the empty space.
Great function and tutorial! Would it be feasible to do it on ComfyUI?
A life saver! Thanks for making this tutorial! I finally managed to get outpainting to work! Do you have any idea how to extend in one direction, e.g. to the left only, or upwards? Thanks again!
Thank you very much 😍
hey, have you found an answer? :)
Super helpful!! Does it work with videos??
as others said, it does not download the inpaint model
Great job. Thanks bro😉
What model and prompt were used for that old man standing by the lake? That image is really great.
How does the download of the model trigger? I haven't managed to make it happen
What about using outpainting to extend images that were not originally generated in SD? My results show it struggling to match the original art style.
It didn't automatically download the model "control_v11p_sd15_inpaint" when I tried this... how do you manually download it / where does it get saved to?
Can you change outpainting to only increase the height of the image toward the bottom, treating the original image as the top?
That's cool
Can you make a video about mov2mov extension??
How do you drag the image from the output into ControlNet? No matter what I try it just opens the fullscreen view of the image. I have to save->download->upload
what if I want to keep the original character in the original image, and just want to outpaint the image without affecting the original character?
Can we use ControlNet to restore color in retro photos??? If yes, that's an idea for one of your next vids :D
Great video! How do you get the automatic scale button (the ruler) next to the two arrows in your resize option?
Nothing special to do, it's here by default
Can you do this in bulk from an IMG2IMG folder? I'm trying it now, but just trying to see if there's any tips to make it work.
Excellent video, thank you from Spain
bro, you look like you're on holiday every day 😀
Does this work with any image or does it have to be an image generated by SD in the first place? For example a photo I have taken and want to imagine left and right from the picture? I have tried following every step of the tutorial but I just get back an entire image of garbled mess.
Yeah, this is frustrating: inpaint+lama shows up in the first dropdown, but control_v11p_sd15_inpaint doesn't show up in the model dropdown. I verified that the model is in my models folder. What am I doing wrong?
Yes. Have you fixed it? I'm running the Forge version.
Hello, you mentioned that after selecting the inpaint preprocessor the model will be downloaded automatically, but for me it doesn't work. How do I get the model downloaded? Not sure why, but my ControlNet only has the OpenPose and Canny models, while it has all the preprocessors. How do I fix this?
I can't get the control_v11p_sd15_inpaint model; it didn't install automatically.
Sounds really cool; unfortunately my memory is too low to use this :(
Finally, I'm starting to get some results with inpaint. Tried for hours; this method works on Mac. Tip: set the right dimensions.
On a "cowboy shot" character, have you ever tried outpainting the bottom of the character so it becomes a "full body" character? Tell me the result 🙏
I don't get it, he sets up a higher resolution, but the main image doesn't change even one pixel, as if it was masked, whenever I use img2img the base image will change slightly. Is there something I'm missing? is there a mask somewhere?
When I try this the original image and subject also changes, it doesn't just outpaint the original image. Am I doing something wrong?
I don't know why, but the area where I'm trying to add something has a slightly changed color. So I can see the initial rectangle in the center, and the outpainted area has a slightly different color ((
What if I want to extend the image only into a certain direction?
Does this still work? I'm following this guide and getting this error: ValueError: No mask detected for ControlNet inpaint
@2:35 my RESIZE MODE buttons are present, but they are greyed out, I cannot click any of them, it is set to CROP AND RESIZE and cannot be changed. Ideas? (I'm using Vladmandic)
Resize mode is greyed out when you don't provide a ControlNet image
How would you offset the image so that we can extend, for example, only to the right of the original image?
same question
You can click the edit button in the image field, same place you’d normally go to inpaint in automatic, and then extend the border to whatever size you want. Then you can click the protractor icon to update the resolution
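A scripted version of that workaround: place the source on a one-sided larger canvas yourself (e.g. with Pillow's `Image.new` + `paste`) so the blank strip, and therefore the outpaint, lands only on the side you want. A geometry-only sketch (the function name is mine):

```python
def pad_canvas(size, extra, side="right"):
    """Canvas size and paste position that leave `extra` blank pixels on
    exactly one side of a `size` = (width, height) source image."""
    w, h = size
    layouts = {
        "right":  ((w + extra, h), (0, 0)),
        "left":   ((w + extra, h), (extra, 0)),
        "bottom": ((w, h + extra), (0, 0)),
        "top":    ((w, h + extra), (0, extra)),
    }
    return layouts[side]

# e.g. extend a 512x512 image by 256px to the left only:
# canvas_size, paste_at = pad_canvas((512, 512), 256, "left")
```

Paste the original at `paste_at` on a canvas of `canvas_size`, send the result to img2img with inpaint_only+lama, and only the blank strip should get filled.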
Everytime I generate an image I get people in either side of my main image. Is there a way to stop people from appearing in the background?
Where do I get the link to download the inpainting model?
I tried this method to finish a character whose feet were not visible, but I couldn't get any results... does this method work for this kind of outpainting?
Has this method changed? I used to use it but can't get it to work now. I haven't created images for a few months.
does this work for real world non-ai generated photos too?