Throwing data to your face (models)!
- added June 7, 2024
- I ran hours of benchmarks to establish which is the best IPAdapter face model, and this is not a subjective selection: this video is backed by Science :)
Workflows: f.latent.vision/download/face...
Discord server: / discord
Marco's ComfyUI channel (in German): @ALatentPlace
00:00 Intro
00:17 Models benchmarks
04:53 Ideal Workflow
06:24 Changing hair color
08:22 Upscaling
10:15 Multiple reference images
11:41 FaceID Portrait
15:46 Two people workflow
19:37 Outro
Matteo, I think I speak for many of us in the community: Your content f'n rocks. Thank you so much for taking the time and explaining all the nitty gritty details of IPAdapter and how to use it to create consistent results. Please keep doing what you're doing.
You're speaking for me.
And me!
You don't speak for me, these tutorials are painful to watch lol.
Yes yes yes!!
I wholeheartedly agree. It's been a long time since I've been this enthusiastic about YouTube videos. You ROCK Matteo!
That is some high quality spaghetti right there. Compliments to the chef!
😅
I'd also like to give voice to the many people saying your content is amazing. It truly is. The rocket ship from learner to intermediate and beyond. You explain the "why", not just show an example of "how" with a click-bait title and an enticing misleading thumbnail. Thank you. Please don't stop making this content, whatever the platform you choose to publish it on. It is fantastic.
You are the new GOAT of ComfyUI workflow creators
As always, excellent and inspiring content. Thank you
Brilliant stuff. Thanks so much for taking the time to produce these tutorials Matt3o 🙂
Awesome as always!!! Thank you Matteo!
Thanks for taking the time to make these tutorials. Very hard stuff to understand but I'll keep trying :)
Thank you for always providing great lectures! I am learning a lot from the detailed and friendly explanations. Thank you again.
Another amazing video Matteo, direct, to the point and with great didactics, God bless you!
Your videos always give me information I didn't even know I needed. Thank you for your hard work. 😊
Great video as always man !😀
wow, what a great scientific approach. Love your style and obsession with this. i respect you
I am at the start of your video and it feels like a sports event :'). ComfyUI can be quite daunting sometimes, but this... this is entertainment. Great content and many thanks.
I've been able to make some amazing progress by following along with your videos. Thank you very much!
These videos are golden. Please keep going.
Hands down the best ComfyUI content, in my view
Best tutorial and best explanation ever, truly great video, also very inspiring.
that workflow looks crazy!
Community MVP! 🙏🏻
These videos are great, thank you!
Thank you🙏, very useful analytics and a tremendous amount of work done👷.
Always top videos!
Another HQ video from Mateo about Stable Diffusion? Yes, please! 😊
Best videos on YouTube with respect to Comfy! A few videos that I think would help others (including myself) are inpainting with SDXL and photo bashing with SDXL. I find inpainting with SDXL inconsistent. It would be nice to see these topics covered on images geared towards landscapes and not just on people, such as adding objects to a scene where prompting alone can't handle it because you are adding several objects. Just my two cents, and your channel is just awesome!
Ferniclestix has some great tutorials about those! Him and matt3o have some of the best comfy tutorials
Great video, learned a lot!
Awesome!!! So helpful. Thanks!
Great work!
Wow, another great video Matteo! I would love to see your take on the new PhotoMaker and InstantID SDXL models. Peace!
I'm trying PhotoMaker right now... not impressed at the moment. InstantID has potential.
Matteo - one thing I'd love to know, is how the base checkpoints determine the faces (without ipadapter). What I mean, is almost every checkpoint I have tried has a "base" female or male face. Even when changing the nationality or race, the face characteristics stay similar to that base face. It's only when you start adding names or other positive prompts can you change the face into something more unique. I understand the models were trained on common sets of portraits, but surely there were many thousands of faces, if not millions. Ideally in my workflow I want to create a never before seen face, then use a workflow such as this to put the character into different scenes, poses. However I just can't seem to to randomise the face enough without putting in lots of ugly looking prompts.
thank you very much Mateo
amazing! thank you 🙏🙏🙏
Thank you for this video👍👍👍, again a very in-depth tutorial in simple language ❤❤. I have one request: please upload an in-depth prompting tutorial video for ComfyUI
+1
thanks!
prompting is such a complex topic and really depends a lot on the kind of image you are trying to do... but it would make a very interesting video for sure.
@@latentvision i will wait for your video on this topic.❤️
Matteo, thank you so damn much for your outstanding work. I have been on a deep dive on all your work, clip models, and diffusers
Crazy good inspiration
like magic
Something I've done for upscaling with 1.5 models is to go back to the original model and separately add the IPAdapter(s) and LoRA at full 100% weights for the upscaler: lower weights on the model going into the first KSampler so it's not too rigid, but high weights on the upscale to pull it closer to the sample image.
yes that's a good strategy especially when you have multiple people or when the image is not a portrait. Always better to make a rough composition first and refine in a second pass
Amazing video Thank you.
For me the best result is FaceID v2 SDXL + upscale, but sadly that turns my ComfyUI into low-VRAM mode. I guess there was a deleted FaceID SDXL; it ran fine and the results were really good.
Loved the video! Given your experience in this space, are you aware of any tech that will improve upon inswapper/insightface etc. to enable swapping of faces, or generation of custom faces, at side or other angles rather than only front-on or nearly front-on? This seems to be the largest constraint of the current tech. Thx!
thanks man this helped
Awesome demo Mateo !!!
Respect.
#NeuraLunk
BRAVO 👏 🙌 🎉
I love how the name of the models overlap the interface and become completely unintelligible. Good stuff. Remarkable UI design.
I've been putting two FaceIDs in the same image for at least a month now; I didn't know it was frequently asked on Discord, and you have clearly explained how to do it in other videos after all. Anyway, thanks A LOT for these benchmarks, they really are a time saver. I guess I'll stick to FaceIDv2 and ReActor face swap in the end for total consistency of my characters. FaceID is still useful for hair and face shape.
yeah we already talked about that... you know, on the internet something said a month ago doesn't matter anymore 😄
Thanks
Hi Mateo 👋
About changing the hair:
using Unsampler with ControlNet would change the colour while keeping exactly the same hair, when writing another colour in the prompt, right?
Instead of masking or inpainting.
I'm new to all this; is there any way to make the head tilt or show a different facial expression?
Hey, your tutorials are awesome. How can I use the FaceID Portrait workflow on an existing image? E.g. just a simple face swap of an existing image, maybe using inpainting or something? What do you recommend? Is there an existing workflow where I can set the face images and the image to be swapped?
you can do inpainting, sure, or you can bbox the face and do a simple image-to-image
Thank you so much! How would bbox face and i2i work? Am I outpainting the original face into a new image, or something else?
Also, is FaceID the highest fidelity for photorealism, or InstantID can also work with photorealism?
Thanks!!@@latentvision
Did you try the Pony model, or any Pony models, for face recognition in your tests? If not, I'd like to see you do that next time around to compare how that model fares in face recognition.
pony models are not compatible in general
@@latentvision Okay makes sense.
From what I saw, SD15 FaceIDv2 + 2x upscale works incredibly well (upscale as a simple resample with 0.4 denoise). Adding any other face model makes the face blurry. With FaceIDv2 it's incredibly sharp and keeps a level of "likeness" comparable to the best LoRA.
Can you go a bit further on what you mean by upscale as simple resample? Thank you!
I think it's the upscale mode you can choose
Like, by default it's on (near exact)
I'm trying to follow along with the new PuLID demo but am unable to see the dlib model for the Face Analysis node. Only the insightface model shows up. Where do we place the dlib models? I have downloaded them but they are not showing up in the model loader. Thanks
Awesome title
IKR?! It basically wrote itself!
So for commercial use, only Plus Face and Full Face are allowed?
Is it possible to apply makeup on an input image without using a checkpoint? I've been working on this for a long time but I'm not sure if my efforts are in vain.
Thanks!
Is PhotoMaker a different, separate model?
yeah completely different thing.
❤
How do I calculate the embedding differences of my workflows? I want to see if I recreate the same individual at the end. Thanks!
I'll add a node for that
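In the meantime, a rough way to compare two face embeddings outside ComfyUI is cosine distance. This is a minimal sketch, assuming you already have the embeddings as vectors (e.g. the 512-dimensional vectors insightface produces); the function name and the toy vectors are made up for illustration:

```python
import numpy as np

def embedding_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two face embeddings (0 = same direction)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(1.0 - np.dot(a, b))

# Toy example with random 512-dim vectors standing in for real embeddings:
rng = np.random.default_rng(0)
e1 = rng.standard_normal(512)
e2 = e1 + 0.1 * rng.standard_normal(512)  # slightly perturbed copy

print(embedding_distance(e1, e1))  # ~0 for an identical embedding
print(embedding_distance(e1, e2))  # small value for a similar "face"
```

The lower the distance between the embedding of the reference image and the embedding of the generated one, the closer the likeness; what counts as "the same person" is a threshold you'd have to calibrate yourself.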
Great benchmark, thanks a lot. You said that InstantID and FaceID Portrait rely on insightface, thus you need to buy a license for commercial use. Does this also apply to using the Apply IPAdapter FaceID node then, as it also uses insightface? Thanks again.
it's not the node itself, it's the insightface model. If you don't use the model for the image generation, no problem
@@latentvision thanks. But for my understanding and to double check: doesn't the node use the model under the hood?
@@DanielPartzsch if you use any FaceID model, then yes, those make use of insightface. You should check their license. If you don't use any of the FaceID models, then you are fine.
Sorry if it is a dumb question, but how can they tell that somebody used insightface by looking at the produced image?
treasure
Hi and thank you for your splendid video.
Does anyone know why I get this error, please?
---
Error occurred when executing InsightFaceLoader:
No module named 'insightface.app'
---
I've proceeded with the install inside ComfyUI via Git URL and everything was correct, but I cannot test insightface.
Thank you in advance.
Olivier
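That import error usually means the insightface Python package is missing or only partially installed in the Python environment ComfyUI runs in (it has native components that can fail to build silently). A quick, hedged way to check from that same environment, before reinstalling with `pip install insightface onnxruntime`:

```python
import importlib.util

# Check whether the insightface package is importable from this environment.
spec = importlib.util.find_spec("insightface")
if spec is None:
    print("insightface is NOT installed in this environment")
else:
    print("insightface found at:", spec.origin)
```

If the package shows as installed but `insightface.app` still fails to import, the install is likely broken and reinstalling it in the exact environment ComfyUI uses is the usual fix.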
Amazing video, one question: what is the difference between weight and weight_v2? @5:43 he mentioned these weights, but I am unable to understand; can someone help?
weight is the global weight. weight_v2 is used for the clip vision embeds. I suggest a value between 1.5 and 2
As above, will the models' hair also become uniformly similar?
Such good information here, thanks for the research! One thing that might be nice is establishing a baseline: using actual photos of the same person in different scenarios to see how low the difference tends to be with real-life variation.
Matteo, can you confirm that this only works when the positive conditioning is plugged directly to the sampler, i.e.: it breaks when using concat/combine with another positive prompt?
no, it should always work. Of course more conditionings will pollute the composition
@@latentvision You're right, I guess that was the case. Had to bump up the prompt for it to take effect again. Thanks 👍
I'm struggling when I want a specific eye color, especially the abnormal colors. Any tips?
well inpainting is the easiest solution
try adding heterochromia in the prompt
Did anyone figure out how to run insightface with cuda instead of cpu?
leave insightface in the CPU, you don't need to bother the GPU for feature extraction
Don't get me wrong but... can I get your auntie's number? I'll help her carry the groceries!
For unrealistic characters (anime) 🥲 it's bad
you need to do more work, but it works for that too