Stable Diffusion for Flawless Portraits
- Date added: 19 Mar 2023
- Are you tired of taking photos that never quite capture the true beauty of your subject? Do you want to turn your dull photos into stunning portraits that truly capture the essence of the person? Then you need to know about Stable Diffusion!
ControlNet models:
huggingface.co/lllyasviel/Con...
Join the Facebook group and share your work: /bestaiart
In this video, we'll show you how to create perfect portraits using Stable Diffusion, a powerful tool for image processing and enhancement. With Stable Diffusion, you can transform even the most lackluster reference photo into a jaw-dropping portrait that captures the subject's unique features and personality.
We'll start by explaining the basics of Stable Diffusion and why it's such a powerful tool for portrait enhancement. Then, we'll guide you step-by-step through the process of creating a perfect portrait from a reference photo. You'll learn how to use Stable Diffusion to smooth out imperfections, enhance details, and bring out the subject's natural beauty.
But that's not all! We'll also give you expert tips and tricks for using Stable Diffusion to achieve even better results. Whether you're a professional photographer or a hobbyist, you'll find something useful in this video to take your portrait game to the next level.
So, if you want to create stunning portraits that truly capture the essence of your subject, watch this video and learn how to use Stable Diffusion to achieve portrait perfection!
Very impressive AI-driven image and video upscaling topazlabs.com/ref/1514/ , try it for free.
THANK YOU for your support!
Please subscribe and leave your comments, and don't forget to click on the notification bell: “Look daddy, the teacher says, every time a bell rings an angel gets his wings,” ZuZu Bailey from “It's A Wonderful Life.”
What do I use:
Image upscaling topazlabs.com/ref/1514/
Micro SD cards 256GB V3, I use these in drones, the Insta 360, and GoPros amzn.to/3HsTU7j
Must have: Micro SD storage/reader/Cables: amzn.to/3GZ33D8
Simplified Photoshop, no annual fees, just one-time purchase amzn.to/3DaUpAi
Canon R5 is not cheap, but it is definitely the best amzn.to/3DclTFX
Must have lens adapter EF to RF amzn.to/3Dk1qhp
Insta 360 X3, I am really impressed by this camera: amzn.to/3HqdMb4
My go-to camera, GoPro 11: amzn.to/3XMcKMc
DJI Osmo 6 amzn.to/3zsKlAG
Portable lights, very versatile usage : amzn.to/3wsUqMa
Another great DJI product, the Drone Mini 3 Pro, my go-to: amzn.to/3XKN9mG
One of my favorite modifiers from Fotodiox - amzn.to/2Rfr1Px
Another modifier, that helps with fill light - amzn.to/2ReC2jX
Corsair keyboard: amzn.to/3DlclHN
Adobe Photoshop CC - amzn.to/2TNrLwL
My #1 dress in Baroque/Rococo photoshoots, very good quality: amzn.to/401rGHQ
Baroque outfit for males: amzn.to/3kBdEg2
I recommend getting this shirt for the outfit above: amzn.to/3Wzcv5Z
My Vue book - amzn.to/2TGUkvQ
3D Art essentials - amzn.to/2RfqPjh
My Patreon webpage - / geekatplay
Tutorials and packs - gumroad.com/geekatplay
Tutorials website - www.geekatplay.com
Photography - www.chopinephotography.com
Subscribe to my channel for fast notifications on new tutorials - / @geekatplay
Holy crap this is great. I'm 6 days down the rabbit hole of A1111/Stable Diff and I can't get enough. I've been looking for this exact video! Thank you!
This is the best tutorial I've seen on how to use it. Really great.
Great work! Thanks so much, very comprehensive!
This is an amazing workflow Vladimir, great job! So many people fighting to get exactly this for so long. Again, great job!
thank you
Such a great trick.❤ Watching these vids makes me realize that I'm still a noob when it comes to SD. 😉
THANK YOU!!, That face trick has been something I've been trying out for months, now I can better make portraits!!
Great to hear!
Amazing! I think the inpaint will solve my lipstick issues for singing videos! And I could learn more about the control nets! Thanks a lot
This could save the trouble of training models for different faces. Very helpful! Thanks
Absolutely!
Genius. The video and workflow technique are very much appreciated!
Glad it was helpful!
amazing thx, you explained it very well
Exactly what I was looking for. Thank you.
thank you
Thank you sir for sharing your knowledge with the world! I fully watch all ads for you😂😅
WOW! You have me very excited. I need to see where to get started with this. Looks exactly like what I want to start doing! Liked and subscribed!
You can do it!
Even though the video is a step-by-step guide to portraits, you managed to explain in passing how many of the parameters work. Thanks for the video.
Genius. The video and workflow technique are very much appreciated! thank you
thank you for your support!
Excellent! Thank you!
I'm lost for words. Subscribed. This is too accurate and detailed to be free
thank you
Thank you 😮 master , you're goat ❤
Excellent!! Thanks!
Thx so much! That's a super nice tutorial.
You're welcome!
I really enjoyed this video. All of your videos are great. Thanks.
thank you for your support!
Awesome!!? You are amazing. I spent ages researching this, and after watching your video I finally learned it. Thank you. 👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻
thank you!
Vladimir, thank you very much. Excellent tutorial. I will try something similar.
Thank you!
Thank you very much for the tutorial. Went to find those models you used.
thank you
Bravo! Thanks Vladimir
*Thank you for your support!*
Another great video! thanks!
Thanks again!
Wow, this is amazing! I updated the Civitai page to announce that I started training RPG V5.0. I will ship that version with a set of ControlNet images to help people have more control over the model.
thank you
Thanks. I was looking for a tutorial about this.
Glad I could help
Thank you my friend
great lesson, learned a lot , thanks
thank you
Amazing, amazing content, thank you
Glad you enjoy it!
From an art point of view/perspective, Vlad is the best A.I. mentor on CZcams, by far.
Thank you for your support!
He is to A.I. art what Da Vinci was to his age. Imagine if Vlad had lived in Da Vinci's time. 🤔
@@Geekatplay I just now looked up a CZcams video about stable diffusion, and it brought me back here, brother. The algorithm knows where to take me for education. It's so good.
Vladimir, thank you brother, you are great, and all the settings are complete. Best person ♥️ I hope you also look into the topic of video frames; I wanted to get a more realistic animation setting with the same settings as this video
I will check it out
@@Geekatplaythanks I really appreciate this 😘
brilliant video !! thanks
thank you for your support!
Very cool, and helpful! Have you figured out a way to make the in-painted face match the style of the rest of the picture?
Yes...using Affinity Photo you can do just that!
Could do another img2img pass at low denoising with the ControlNet.
@@cryptojedii Would you mind linking a tutorial? Thanks for the recommendation of Affinity Photo, never heard of it
@@tstone9151 I use the whole Affinity suite for a bunch of stuff. It's not really AI driven, just a photoshop/lightroom alternative (in the case of photo)
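The low-denoising img2img pass suggested above works because denoising strength only runs the tail end of the sampler's schedule: the init image is noised part-way, then denoised back. A minimal sketch of that bookkeeping, mirroring how diffusers-style img2img pipelines compute it (the function name is my own, not an API):

```python
def img2img_schedule(num_inference_steps: int, strength: float):
    """Return (start index, steps actually run) for an img2img pass.

    At strength 1.0 the init image is fully noised and all steps run,
    so img2img behaves like txt2img; at low strength only a few final
    denoising steps run, which is why the output stays close to the input.
    """
    # How far into the noise schedule the init image gets pushed.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    # Denoising resumes from here and runs to the end of the schedule.
    t_start = max(num_inference_steps - init_timestep, 0)
    return t_start, num_inference_steps - t_start

# e.g. 20 steps at strength 0.3: only the last 6 steps run
print(img2img_schedule(20, 0.3))   # (14, 6)
```

This is why a "low denoising" second pass preserves the composition: most of the schedule is skipped, so the model can only make small corrections.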
Very good tutorial! Thank you!
You are welcome!
@@Geekatplay Is it Automatic1111 or something else?!
How is your computer so fast? Great stuff, thanks for sharing!
Smoooth 👍
Thanks 💯
hello Vladimir. I appreciate your videos 👍
thank you!!!
Could you do this with a photo of a building or house, keeping an accurate representation of the subject and placing it in a different environment?
What video card do you have running to be able to get results that fast with all these controlnets and script running?
thank you !!
any tiime
Awesome tutorial, man. Some people have recommended Stable Diffusion to me as the most accurate img2img tool. Currently I'm using Midjourney and it always changes the facial features of my character. Can you tell me if Stable Diffusion is really accurate with character consistency most of the time? Or is it as tricky as Midjourney? Thanks for your time.
thank you
Thank you, Vovchik
Hello, great video. However, I'm not sure how you got the ControlNet section there and the models. Can you add an explanation for that? There are many results when searching for it, and the link you provided has no explanation. Thank you.
Does all of this still fit within 16GB VRAM or do I need more? Thanks nice video.
Thank you
You're welcome
Thank You
You're welcome
so dope 🔥
thank you
Very good tutorial!
thank you
It's amazing. Man, do you think it's possible to apply this technique to food photography or products?
I will try that.
Very Nice tutorial about AI workflow .
Glad you liked it!
Nice nice thank you
Thank you too!
excellent, really nice
Thank you! Cheers!
Top!
thank you!
Great video, it really helped me understand how to keep the face structure. Is it possible to do inpaint batches in order to create videos that retain the face structure? I'm working through your other video on creating flicker-free video and wanted to use this feature to keep the face structure consistent with my model's face.
Thank you. It is possible, but you will need to load masks for inpainting.
❤❤❤ great
thank you
Very nice video explainer.
Glad you liked it
09:10 Would it be possible to use openpose_full instead of inpaint, since it also captures the face?
Hey, thanks for the tutorial it helped a lot! But I have a quick question, how to make the face and the rest of the body have matching colors and tones? Which settings do I need to change? Thanks!
Yeah, I was thinking the same
He didn't mention it in the video, but there is another ControlNet model simply called 'color' (search for t2iadaptor color) that makes a mosaic-grid-like sampling of your source image's colors and applies it to the generated image.
Use a full-body shot in the imported image.
@@Geekatplay could you please explain in detail? how is this done?
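For intuition, the mosaic-grid color sampling described above can be sketched in plain Python. This is an illustrative approximation only; the cell size and simple averaging are my assumptions, not the adapter's exact preprocessing:

```python
def color_grid(pixels, cell=64):
    """Collapse each cell x cell block of an RGB image (list of rows of
    (r, g, b) tuples) to its average color, producing the flat mosaic
    of tiles that a color adapter can condition generation on."""
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for by in range(0, h, cell):
        for bx in range(0, w, cell):
            # Gather every pixel in this block and average per channel.
            block = [pixels[y][x]
                     for y in range(by, min(by + cell, h))
                     for x in range(bx, min(bx + cell, w))]
            avg = tuple(sum(p[i] for p in block) // len(block) for i in range(3))
            # Paint the whole block with the averaged color.
            for y in range(by, min(by + cell, h)):
                for x in range(bx, min(bx + cell, w)):
                    out[y][x] = avg
    return out
```

The result keeps only coarse color placement, so the generator is free to invent detail while the overall palette and layout follow the source photo.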
The video on generating portraits looks awesome. May I know what program this is?
Stable Diffusion, local installation. czcams.com/video/oTrmgXuc3e8/video.html
I noticed something about your Stable Diffusion setup when you were using the ControlNet features: there was a LoRA feature just above the ControlNet menu. How do I get that in my SD?
very cool👍
thank you for your support!
Quality walkthrough! Can you explain in more detail what the LoRA configuration means and what it is doing? Thanks in advance
thank you!
I gasped out loud when I saw my original face come through the preview. Thanks for the video. It's like a real quick photoshop face swap. Do you have any suggestions on how to stylize the face a little bit? I seem to only get original face colors and details but for some images I'd like to match the generated image more and turn into an illustration.
Thank you. You can use the "styles" option. I will cover that in upcoming videos.
Very interesting
thank you
👍👍
thank you
Thanks for this tutorial. But I can't find the model under the preprocessor. I think I ticked all the right stuff in ControlNet and restarted the UI. Any suggestions?
You need to be sure the models are located in the correct folder. I will make a video about it.
@@Geekatplay I don't have this model either. Can you please post a link for it and write which directory/folder to put the model in?
Wow! This is amazing! But how do I get this crazy tool? Is this not Leonardo or the Stable Diffusion website?
This is a local installation of Stable Diffusion; check my channel for the videos on how to install it.
I followed it entirely, and I'm getting my face pasted onto the generation (I just want it to keep the structure of my face); it's not blending the face with the image. How do I do that, and which settings do I adjust? Please help.
Hello Vladimir, beautiful tutorial, only I don't have the "preview annotator result" button in the ControlNet section. Do you know how I can get it?
In the new version it is an icon that looks like a spark, next to the preprocessor drop-down.
Please upload the same tutorial with the new version; a lot is different and it's confusing for me, it's not showing the preview option.
Thank you
Thank you for your support!
good skills 👋
thank you
Great video👏👏👏 Subscribed👍
thank you
Cool
thank you
Love the video, thanks, but when I use inpaint to paint the face and click generate with the same settings, it just puts the face in a random place on the image and does not replace the face.
Be sure you set the masking correctly; it may be inverted.
Did you follow the prior steps to match the pose first?
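On the inverted-mask point: an inpaint mask is just a grayscale image where white marks the region to regenerate and black the region to keep, so flipping it swaps which area gets repainted. A toy sketch of that flip (my own helper, not part of A1111):

```python
def invert_mask(mask):
    """Invert an 8-bit grayscale inpaint mask given as a list of rows.

    White (255) = regenerate this region, black (0) = keep it.
    If your face ends up somewhere random, the mask is likely flipped:
    the painted face region was being protected while everything
    else was regenerated.
    """
    return [[255 - v for v in row] for row in mask]

# A 2x2 mask where only the top-left pixel would be regenerated...
mask = [[255, 0],
        [0, 0]]
# ...after inversion, everything EXCEPT the top-left is regenerated.
print(invert_mask(mask))  # [[0, 255], [255, 255]]
```

In the A1111 UI the same switch is the "Inpaint masked" vs. "Inpaint not masked" radio option, so no manual image editing is needed.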
How do I get Composable LoRA at the bottom?
How do you find out what size the model is trained on to get the best results? I'm finding that adjusting the size proportions of the canvas really drastically affects my image output.
It is in the model description if you're downloading from Hugging Face or Civitai.
@@Geekatplay Thanks for the reply. Found out all the specific sizes for the model I was using; turns out I was using an outdated version of SDXL.
That's what I'm talkin' about
thank you
Great video. Is it possible to replicate the same face from the input?
Yes, absolutely
@Geekatplay I have been struggling with it for several weeks now. Do we need to mask and generate again for the face and features?
You have the option to invert the mask for inpainting. You can send me an email about the problem; I need more info on what you're trying to do.
Amazing! But Stable Diffusion is just too overwhelming for me. Too many options, terms, and models; I don't know what they do or when to use what. I envy everybody that gets into it.
Hi, it looks like the iMac version is different from Windows! How do I install it on Windows?
Can this be achieved using Leonardo or Midjourney?
not yet
The Preview annotator result button doesn't show. Any tip to show this option? (ControlNet 1.1.02)
Thanks for mentioning this. Same issue for me ControlNet v1.1.112
I have the same problem, did you solve it?
Looks like it's set up differently now; you have to check the box that says Allow Preview and then click Run Preprocessor (the little explosion icon next to the preprocessor field).
It was changed; now it is a small icon on the right side of the drop-down box. It looks like a spark.
Hey man, great video but I just can't manage to get full-body persons. It's always cropped to head or upper body. Any ideas what I can do?
This might help: I changed the first prompt to 'full body pose' and it gives a near full body.
It was originally a 2/3 photo. For full body you can use tricks like adding (hat), (shoes), (floor), (sky), etc.; something above and below the subject.
@@Geekatplay ah nice, that sounds smart. thanks!
How did you get those prompts? Is there any tool or site for good prompts?
Yes, I will release a video soon about creating prompts (prompt generators).
Are you using the Automatic1111 GUI? Yours looks very similar to mine, but I don't have ControlNet.
You need to install it in the Extensions tab.
How are you using stable diffusion like that?
It is the Automatic1111 installation (UI) with ControlNet.
Very good tutorial; it's a shame that I only have 6 GB of VRAM, I would love to try this!!
You can try with low-memory models.
@@Geekatplay I've tried; it worked once, then crashed, so I need to reload the web UI each time. I will search for a solution on Colab; maybe one exists!
Thank you, a very useful lesson. However, I ran into a problem: the preview window is not active. It exists, but it doesn't show anything. Do I need to install something?
On some versions (the latest), the preview is located beside the preprocessor selection; it is a small icon.
@@Geekatplay Thanks!
Why don't I have an upload image option in ControlNet img2img?
I just can't get anywhere. I have image A, and when I generate something in img2img I get, for example, a cow!
Name of the toolkit, please?
Congratulations on the job! Can I use this technique to create pets?
Thank you. I will make a video specifically about pets, and yes, it does work. I have made a lot of photos/videos with my Border Collie.
🎉🎉🎉
How was this software setup? What's the install process?
it is Stable Diffusion, Automatic1111 installation.
Hello, I've installed ControlNet, but I can't see the "Preview annotator result" button. Should I install another extension, or what?
In the newer version it looks like a spark icon, next to the preprocessor drop-down selector.
@@Geekatplay Ok I got it, thanks!
Can this method be used for architectural rendering?
Yes, if you use a ControlNet model with architectural preprocessing. I can't recall which off the top of my head, but I will check and post it.
@@Geekatplay If you make a post about architecture, that would be great
Thank you for the suggestion, I will.
what is the website he is using to do stable diffusion?
it is local installation: czcams.com/video/oTrmgXuc3e8/video.html
I have all the same settings as you, but when I'm in inpaint it just generates the face and doesn't keep the body or background. Why is this?