How to make AI Faces. ControlNet Faces Tutorial.
- Published 27. 04. 2023
- ControlNet 1.1 for Stable diffusion is out. Let's look at the various face preprocessors and models.
ControlNet 1.1 Tutorial and install guide: • How to use ControlNet....
My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
ControlNet 1.1 face models (openpose): huggingface.co/lllyasviel/Con...
Mediapipe face models: huggingface.co/CrucibleAI/Con...
Prompt styles here:
/ sebs-hilis-79649068
Support me on Patreon to get access to unique perks! / sebastiankamph
Chat with me in our community discord: / discord
Ultimate Stable diffusion guide • Stable diffusion tutor...
My Weekly AI Art Challenges • Let's AI Paint - Weekl...
ControlNet tutorial and install guide • NEW ControlNet for Sta...
Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
LIVE Pose in Stable Diffusion • LIVE Pose in Stable Di...
Control Lights in Stable Diffusion • Control Light in AI Im...
Inpainting Tutorial - Stable Diffusion • Inpainting Tutorial - ...
The Rise of AI Art: A Creative Revolution • The Rise of AI Art - A...
7 Secrets to writing with ChatGPT (Don't tell your boss!) • 7 Secrets in ChatGPT (...
Ultimate Animation guide in Stable diffusion • Stable diffusion anima...
Dreambooth tutorial for Stable diffusion • Dreambooth tutorial fo...
5 tricks you're not using in Stable diffusion • Top 5 Stable diffusion...
Avoid these 7 mistakes in Stable diffusion • Don't make these 7 mis...
How to ChatGPT. ChatGPT explained in 1 minute • How to ChatGPT? Chat G...
This is Adobe Firefly. AI For Professionals • This Is Adobe Firefly....
Adobe Firefly Tutorial • Adobe Firefly Tutorial...
ChatGPT Playlist • ChatGPT
Download prompt styles here: www.patreon.com/posts/sebs-hilis-79649068
I've been waiting for this one, epic. This space is moving so fast. Thanks for all the content!
You're very welcome 😊🌟
Because of the pace of AI advancements, lots of YouTubers skipped ControlNet v1.1. Thanks to you, I have just downloaded some of the ControlNet models that I skipped.
Happy my content is helping you. I'll continue to focus on the art aspect of generative AI for now 🌟
Wow, the results are stunning.
That's a very detailed tutorial. That's great. Thank you!
Thank you for choosing to speak normally and teaching so brilliantly.
Thank you for the kind and valuable feedback.
Watching this before falling asleep. Good night my dudes ❤
I love all these new tools. They really make generating what you want easier, instead of just depending on what the AI gives you.
Honestly, it feels like owning a car for 7 months then the manufacturer makes a tyre update. AI Art feels unusable without controlnet now.
Great, thank you.
I wish I could give you two thumbs up per video - one for the incredible, in-depth content, and one for the supreme "Dad joke"-tier puns!
Hah, thank you! 🌟
another technique added to my workflow!! wooohooo
Sweet! Happy to have you aboard the hypetrain 🌟
Your tutorials are good and so are those jokes love it.
Glad you like them! Thank you very much :) Got a favourite joke?
Great video! Thanks!
Thank you for your support my friend! 🌟
Great video. But after watching your latest video on ControlNet Reference Only, I wonder if we still need openpose_face. Have you compared the results of both? I could imagine that openpose is more trained/specialized.
Another great tutorial
More of these please! I subbed and hit the bell button.
Glad to hear it! I got lots of ControlNet videos. Check my ControlNet Playlist :)
Oh man! When was the last time I remembered to update ControlNet? So many preprocessors!
Scroll list is huge now!
The preprocessors do an amazing job at capturing the openpose framework for face or full body, but then I cannot edit them. I often want to just adjust a leg or arm segment a little. I have the openpose editor and posex extensions, but I can’t get editable frameworks into them from the controlnet preprocessor.
You have no choice but to use multiple ControlNets
Hello, what are "SD VAE", "Add Lora to prompt" and "Add hypernetwork to prompt"? I don't have such options. They are at the top of the window.
YES
Thanks for great info. Btw, people in Dubai don't like the Flintstones, but people in Abu Dhabi do.
I wonder if there is a way to use batch image processing on control net?
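There is: each ControlNet unit has a Batch tab that takes a folder of input images, and the web UI can also be scripted over its HTTP API. Below is a minimal sketch, assuming A1111 is launched with the `--api` flag on the default local address; the `alwayson_scripts` payload layout comes from the ControlNet extension's API and may differ between versions, and the folder path and prompt are placeholders.

```python
import base64
import json
from pathlib import Path
from urllib import request

# Default local A1111 address; the web UI must be started with --api.
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def build_payload(image_bytes: bytes, prompt: str) -> dict:
    """Build one txt2img payload with a single ControlNet unit (openpose_faceonly)."""
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    # ControlNet input image is sent base64-encoded.
                    "image": base64.b64encode(image_bytes).decode("utf-8"),
                    "module": "openpose_faceonly",          # preprocessor
                    "model": "control_v11p_sd15_openpose",  # matching model
                }]
            }
        },
    }

def run_batch(folder: str, prompt: str) -> None:
    """POST one generation request per PNG in `folder` to a running web UI."""
    for img in sorted(Path(folder).glob("*.png")):
        data = json.dumps(build_payload(img.read_bytes(), prompt)).encode("utf-8")
        req = request.Request(API_URL, data=data,
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            print(img.name, resp.status)
```

The Batch tab is the no-code route; the API route is useful when you also want to vary the prompt or seed per image.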
Awesome
great video !
Thanks!
How do you work with two sources for ControlNet that have different resolutions?
For example, the Unit 0 source is a portrait of a person sitting, but the Unit 1 source is a square photo of a face looking the other way that I want to imitate. How do I make it read only the direction of the face from the Unit 1 photo, and not the whole composition of it being a square with a face in the middle? Because I end up with a square portrait of a face instead of a sitting photo facing the direction I wanted.
you are great!!!
😘
Will you make a tutorial for the t2ia models?
Hello, can you send the link for the plug-in with the 1:1, 3:2, 4:3, 16:9 calc buttons? Thank you very much!
Could we get this updated to explain what's changed with this? Since there's no face models to download anymore, just the base OpenPose, so I assume it's handled differently.
Bro, this is just what I needed 🌟
Glad you enjoyed it! Keep consuming those cereals. But mix in a little fiber in your diet too 🌟
@@sebastiankamph hehe yeah .. I dig your dad jokes at the start of your videos
Great tutorial! I have only one issue: when I trained a model on a real face and I want to put that face on a shouting Son Goku face, for example, it doesn't work. The mouth of my trained model stays closed. If you have a trick, that would be great, please!
Great job, thank you! How about creating a weapon made up of different parts? For example, a shotgun made from parts of hockey equipment. Is that possible in SD ControlNet?
Goddamn this was greatly needed improvement!!!
🌟🌟🌟
I don't know how to add more models to the Stable Diffusion checkpoint list, or how to add more styles. Could you please help? Thank you.
I'm super new to all of this.
Are you using A1111 for your personal use or are you using the newish vlad you showed us some days ago?
Honestly, both of them. I have a1111 setup the way I want to for videos. Then I play around with Vlad on my own time.
Hi Sebastian! Could you please help me? Openpose doesnt work since updating. I have the properly named .yaml files and cldm_v15.yalm for the config. Anything else I should check for?
Is there a workflow where you can generate a character (and face) to reuse across many poses and settings? And avoid creating a slightly new person each time?
Dreambooth + Lora. Or a prompted character sheet.
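To illustrate the Dreambooth + LoRA route: train a LoRA on your character once, then reference it in every prompt while varying only the pose and setting. A hypothetical example in A1111 prompt syntax (the trigger word `ohwx_character`, the LoRA file name, and the 0.8 weight are all placeholders you would replace with your own):

```text
portrait of ohwx_character <lora:my_character_v1:0.8>, sitting in a cafe, soft lighting
```

Keeping the LoRA tag and trigger word fixed across generations is what holds the identity steady; everything after it can change freely.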
Thank you for useful information.
When I use openpose_face, lots of faces are made, one for each face dot…
How do I solve this problem?
Make sure you have both the preprocessor and the model set correctly, not just one of them.
When I click the explosion icon, the preprocessor preview is just blank (black), no stick figures. Any suggestions?
I can generate 512x512 images in 5 seconds, but when I use any pose or face I get a CUDA out of memory error. Do you know how much VRAM ControlNet consumes?
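Total VRAM use depends on the checkpoint, resolution, and the number of active ControlNet units, so there is no single figure. Two common mitigations (a sketch; these flags are the stock A1111 launch options, shown here in a Windows `webui-user.bat`): launch with a reduced-memory mode, and tick the "Low VRAM" checkbox on the ControlNet unit itself.

```text
rem webui-user.bat – trade speed for lower VRAM pressure
set COMMANDLINE_ARGS=--medvram --xformers
```

`--lowvram` goes further than `--medvram` if the error persists, at a larger speed cost.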
Please make video showing your hardware/software setup
thank you very much, 1 sub
Well, that openpose_face is good when the face is close to the camera, but when the face is small, mediapipe will do a better job at keeping face details. I really like that openpose_full option, though; just a little bit of inpainting and it's a done deal. Very interesting option to use.
Would you use mediapipe and open pose together? I guess what I’m after is recreating the subject with face clarity from a full body image.
@@noahsplayground2564 It could work; I'm not sure how you would use them in a different position than the one used to make the annotator, though.
I couldn't get ControlNet to work. I got it installed, and it gave me the controlnet m2m script, but the actual ControlNet UI didn't show up. Any ideas?
Friend, how do I control the direction of the eyes? . Ex: the character is facing the viewer, but the eyes are turned to the side.
Great video, but I have a question. I have the following problem: I have a base image of a girl whose face I want to save, and an emotion of another girl that I want to apply to my base image using image2image. Have you had any luck applying the emotion from one image to the base image, via image2image and using the tools in your video? I'm having trouble with this.
Check out my reference only video. It's two faces on the thumbnail that says "same face easy"
so mediapipe is better than openpose or is it the other way?
I am using ControlNet 1.0; I never used the face model or whatever extras you've got. Do you have videos talking about the rest?
A couple, yes! 😂
Is there a way to keep the face of the character the same in the end result, so it looks like the original person instead of changing to a completely new person?
With reference controlnet also, kinda.
4:58 yoo, what are these styles? How did you make them?
Your opening dad's joke is just..... 🤣👏
Glad you enjoyed it :D
It doesn't generate anything once I press "run preprocessor"; it just keeps loading with no result. The CMD says "AttributeError: 'NoneType' object has no attribute 'model'". Is this because my PC is too weak to process this?
How do you make it work so fluidly? My image render speed is the same, but each time I change the preprocessor it takes at least a couple of minutes.
The first time you change, it will download new files. After that, it should be faster.
Hey Sebastian, do you have any videos on upscaling?
I have a comparison in my ultimate stable diffusion guide. But nothing on the latest ones yet.
0:25 - Ah, but did you hear the one about the *three legged dog* that walks into a saloon and says "Ah'm lookin' fer the guy that shot ma *paw* !"
What models do I need to download please? I can't seem to find ones with names that match the ones in the video
See previous video, also linked in description
9:30 the workflow to make the icon for a modern mobile game app
Is there a way to make the rendered face the same as the one used in controlnet?
If you use it both in img2img and controlnet, you will get the same one.
I just can't get this to work. I generate an image, I load a face into openpose, and then I re-run the same prompt, seed, etc. The result is identical.
Hi all. I have a question. Why does ControlNet give an error when working with SD 2.1-based models?
I already wrote the 2.1 .yaml model name in the CN line in "settings". Just the other day everything was working; now it isn't. I have a local installation. If anyone knows the answer, please advise; maybe a specific format is needed, for example 768x768? Although I tried that.
Are you sure you're using a 2.1 ControlNet model? They're specific to 1.5 or 2.1
guys if you are using brave and all the output of the preprocessors are white or black images, you can fix it disabling brave shields
How do you get the same face every time?
Is it possible to have a reference-pose-picture and then apply this pose to a picture of my choice?
I started with all this AI image stuff a few days ago and can't wrap my head around all these bars, options, settings, prompts and so on. You really need a master's degree in programming to make something useful out of it. ^^
Are all the old models for ControlNet now obsolete and can be deleted?
Thanks for great tutorials :)
Pretty much, yep!
Any reason you went back to A1111 instead of vlad?
I had more stuff installed in a1111 that I wanted to use. Some days it's the flip of a coin. Icons are next to each other.
Many games will use generated avatars like this :))
For sure! :)
Now we just need a controlnet for eyeballs. I still haven't found a great way to control where the characters are looking.
This +1. Lol, I used an ahegao LoRA to kludge them into looking up sometimes.
The good thing about not getting your jokes instantly, is that my potato GPU doesn't feel that slow when upscaling an image. 😊
I only ever get multiple heads trying to form a single head, or a "The Thing" type of monstrosity… The preprocessor makes the right mask based on the input image; the output images don't seem to care, though.
@sebastiankamph please answer this, I also got the same problem and have no idea on how to fix it
@@m.ghazianhindami5834 I also get the multiple heads. Please help!
Can you please do a video on the hundreds of new preprocessors they added in 1.1. I'm very confused.
With one video at a time, we'll get there eventually! 😅
I came for the Stable diffusion and stayed for the dad jokes.
Real mvp right there
For me the openpose_face preview turns completely black and doesn't work.
same here
Bro, I like your vids a lot, but the audio recording has a hiss every time you speak. Maybe you can get rid of it from the settings?
I'll look it over
Now, that's very interesting, but what if I actually want to change the facial expression in a picture I've made? When I take photos with cinematic tools in various games, one of the biggest struggles is that you usually can't set up the emotions how you would like to. I've tried Photoshop's neural filters and the results are laughable.
I have the same question. Photoshop's Liquify Tool does a little better, but still isn't great. I am sure this will change soon, but in spite of all the cool AI options out there, the best way to do it is still manually by putting the eyeballs on a separate layer, and using Krita's Liquify Tool to adjust the eyelids and eyebrows by hand. (On a transform mask, so that it's non destructive.)
Tbh I only come here for the dad jokes haha, that was a good one.
Glad you're enjoying them! 😊
please consider using RTX voice i can hear a slight buzzing in your audio when you speak
Please check my latest video and see if you feel I fixed the audio 😘
When I do this, it doesn't match the face expression - ever.
Why did you select the styles but not apply them?
They are still applied even though you don't click the little button. Makes for a cleaner prompt.
Thanks for the tips. BTW, why do horses have low divorce rates? They have stable relationships.
That was one of those "I'm not laughing but I'm nodding and approving of this one."
I'm ashamed to admit I've laughed at most of these jokes lmao🤣
One of us, one of us!
If you notice, prompters are learning art 😂
This thing must be fun to use. However, from now on we can't trust any recording I guess.
Is there a controlnet model that can remove bad jokes from these videos?
You misspelled dad jokes.
You must make video tutorial for dad jokes to.
"This one trick dad jokers won't tell you"
First!!! How's your day going? :)
Boom!🏆 Sun is shining, so excellent! What about yours?
@@sebastiankamph I'm doing amazing!! About to go to japanese class and then to the park for a walk :)
I've been shouted at by women, I relate.
Great tut btw, many thanks legend.
😂
Add this rig to Blender immediately.
fix your Audio buzz sir..
stop using AI that makes you look directly into camera all the time! I am scared. (I am kidding btw)
Once you see a guy do a paywall on stable diffusion prompts its time to unsub
They're free. I just hosted them there.
PLEASE, for the love of all things holy, can you make a notepad file listing all your styles, the negatives, positives, and add-ons? It would save all of us a lot of time, please bro.
All my styles are available in the description and top pinned comment on almost every video :)
Plsss nooo crying here 🤣😜