Image stability and repeatability (ComfyUI + IPAdapter)
- added 31 May 2024
- This time it's all about stability and repeatability! I'm generating a character and an outfit and trying to reuse the same elements in multiple settings, poses and facial expressions.
Workflow and all images: f.latent.vision/download/char...
Discord server: / discord
IPAdapter extension: github.com/cubiq/ComfyUI_IPAd...
ComfyUI essentials: github.com/cubiq/ComfyUI_esse...
@ALatentPlace
00:00 Intro
00:22 Building the face
05:35 IPAdapter face model
07:55 The body
10:53 Merging face and body
16:51 Women always get the best armors
17:15 Modern outfit
18:06 Outro
** Background music **
-- "Part A" by Alexander Nakarada (www.serpentsoundstudios.com) Licensed under Creative Commons BY Attribution 4.0 License
-- "Last Stop" Synthwave by Karl Casey @ White Bat Audio (whitebataudio.com/)
-- "CyberPunk City" by Peritune (peritune.com/)
-- "White Gold" by Karl Casey @ White Bat Audio
You should win some kind of award for being so helpful to the community
Yeah seriously
The irony is that it's him who's giving the awards. Matteo is an absolute legend 👑!
Yes indeed ❤
Nobody has opened my eyes to the possibilities of ComfyUI more than you have. I'm only two weeks into my Comfy journey, and I'm already weeks ahead of where I would have been had I never found your channel. If there was an 'Ultimate AI Bro' award, you would win it.
I was aware that moving on from automatic1111 was the correct decision, yet I hadn't realized the extent of what I was foregoing 😂😂
totally.
So many ComfyUI videos show very little and only hint at the possibilities. Your videos are a revelation! You show SO MUCH and have a deep knowledge of the interface and its possibilities. GREAT STUFF!! Thank you!
I feel like I am in some science class at the university watching your videos, you have so much knowledge, it's crazy. I watched all your videos and at the end of every one of them my jaw dropped to the floor… Amazing stuff! Keep up the good work, and thanks a lot!
Matteo almost summoned Exodia at 11:22
Thank you as always for the great tutorials
😂
It is so uncommon to listen to a guide where the author has such a clear understanding of the subject. I find your work extremely inspiring and helpful. Thank you, Matteo!
Thanks Latent Vision, really helpful. I was a bit scared by the V2 changes, but it seems it's a good upgrade
Upvote if you are against background music. Great videos my man, keep up the good job, this is one of the best sources on the topic
no worries, BGM has been removed long ago :)
This is brilliant and super helpful! Thank you!
A few months back I thought image-generation AI was just typing some text and getting the results, which was boring, but fortunately people found a way to make it skillful again
Your videos are insanely dense and totally on top of being comfy to watch and packed with quality info!! ❤
I have to say... probably the best AI tutorials out there. Well done and thanks for the knowledge!
Terrific stuff, Matteo! I really appreciate not only that you share your amazing workflows, but that you take the time to really break things down & show us the what, when, how and why of things. I've only been using ComfyUI for a couple of weeks & these kinds of tutorials are an absolute godsend. Thank you ever, ever so much!
Wow! This is exactly what I was looking for. Thank you so much for creating this walkthrough!
Supreme amazing stuff (like always) maestro latente!! No other person offers such great knowledge!!
You are the light in the latent-space darkness. Thank you for the content!
your videos are great with a lot of details explained in a simple and concise manner! keep up the good work!
Thank you again, Matteo 🎉 your freedom of creative thinking is beautiful 🎉 so happy there are such stars in the community
thank you so much for this! I've been waiting for this for so long! I can't begin to tell you how grateful I am for what you do. 🙏🙏🙏
Just thank you. Omg, I've been looking for tutorials on this where I can open a workflow, look it through and follow along. Thank you.
Wow. Using your tutorials I can pretty much manipulate any image to be what I imagine in my head. Thanks so much.
Really excellent overview of how you can put many node types together to achieve a fantastic result!
this was exactly what i have been looking for the whole time!!! thank you so much
Awesome! Your ideas know no bounds.
Ciao Matteo! Thank you for your work, this is a very nice video. Easy to follow
What can I say, another jewel in the IPAdapter toolbox.
Your tutorials are the perfect blend of technical and interesting. I sometimes watch them with no intention to try it myself, but just to understand. Also, I would love to see an example of this workflow on SDXL.
Your tutorials have helped more than anything else on the internet, keep it up man! P.S. Ty for IPAdapter :)
Great work matt3o 👏👏👏
Wow, thank you very much for this amazing tutorial! I'm definitely going to try comfy UI as I would like to have characters to create a small comic
Great tutorial. I'm subscribed!
That was probably the best video on comfyui that I have seen. I'm new to comfyui and am facing the steep learning curve that goes with it. I find many videos overly technical to start with.
You clearly explained your steps, not only how to do them, but why. I learned more from this video than probably the last 15 that I've watched combined!
Thank you very much for your time. Keep up the fantastic work!
very informative, well performed, thank you!
I'm not sure what it is, but these are the most clear and helpful guides I've found. I think it helps that you're both building it live and also explaining very clearly what each connection does. Thanks
Great tutorial as always!
Excellent Guide, Top Man
Jesus holy moly. It's like "So these 50 nodes are just the beginning".
Fantastic video. I have no idea how long I'd have been stumbling along on my own to try and do something similar.
thanks matteo, these videos are GOLD!! thanks for the time you're dedicating to the ComfyUI community!! much appreciated (if you ever feel like it, it would be nice to have a 101 course)
perfect work, thank you for inspiring us 💪
Listen, the only problem with your videos is that there are too few of them. Each video I watched contains a wealth of knowledge that is missing from any other comfyui or SD videos I watched. I learn so much from your work. Thank you very much.
I love your videos. Keep them coming
Incredible control!
Mind-blowing and amazing, all your videos are enriched with vitamins, calcium, protein. Great stuff. Absolute genius. Probably the best in AI. Thank you very much.
Awesome stuff 😊
Awesome 👍🏿👍🏿👍🏿.
Can't wait for your tutorial of batch embeds
Amazing! Thank you Matteo
You are a virtuoso comfyUI magician. I am in awe.
this is another world. So much to practice.
this is incredible!
This video is an international treasure. Protect it at all costs!
this is such a GREAT video, great knowledge, great presentation! what else can I say..
Thank you. Great tutorial.
Nice tutorials, thanks!
A tip that I figured out yesterday while watching some of the videos on this channel:
If your images are dark and muddy, it's because you are using samples that are too high resolution. There's a node called 'ImageScaleToTotalPixels' that you should run your samples through. 0.5 (512) for samples and 0.25 (256) for faces work for me.
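For reference, a minimal Python sketch of what a total-pixels rescale computes. This is an illustration of the math, not the node's actual code; the function name and the rounding-to-multiples-of-8 step are my assumptions (SD latents are 8× smaller than the image):

```python
import math

def scale_to_total_pixels(width: int, height: int, megapixels: float) -> tuple[int, int]:
    """Pick new dimensions whose area is roughly `megapixels` million
    pixels, preserving aspect ratio.

    Assumption: dimensions are snapped to multiples of 8, which is a
    common convention for SD inputs, not necessarily what the node does.
    """
    target = megapixels * 1_000_000
    # uniform scale factor that maps the current area onto the target area
    scale = math.sqrt(target / (width * height))
    new_w = max(8, round(width * scale / 8) * 8)
    new_h = max(8, round(height * scale / 8) * 8)
    return new_w, new_h
```

So a 2048×2048 sample at a 1.0 setting would come out 1000×1000, and lower settings like 0.25-0.5 shrink references into the range the vision encoder handles well.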
great stuff!! U are a comfyUI wizard
Food for thought, I couldn't eat another byte or bit. Thank you for sharing your skills and hard work.
One of the best mods available!
Absolutely amazing. So much better than training a character lora.
Dude you are a comfyui NINJA this is incredible
amazing dude!!
This skinny guy is a ninja!
The harvest is full, thanks again for sharing!
thanks you so much for tutorial
thank you Matteo!🙏
Thank you!
You're a legend, man ❤
legendary - thanks
bravo, Matteo!
freakin amazing stuff
I was wondering about ways to enhance animation stability with this, might try something later
great!
Truly fantastic! Like someone else said on your channel, you really opened my eyes to the technical approach, and I'm understanding the fundamental concepts a bit more! Thank you!!! I would love to get more content/courses from you, but at a bit slower pace - most of the time it's hard to get all the details from your screen recordings. I would pay for your high quality content
this is some crazy magic ..
genius
Awesome (A)EYE opener
I see what you did there 😄
Bro is a wizard
Oh God, I'm glad I came back to this video, I started using attention masks but misunderstood what they were for, I thought it was so the ipadapter would only take into account whatever was in the mask /_ \
how did you generate the control net image for the face? Would it be possible to generate one where the character was looking to the right so I can then use it as a reference and generate an image where the character is looking to the right?
thanks for the workflow. However, I seem to get images that match the backgrounds of the reference images supplied to the IPAdapter processors. Is there a way I can reduce this bias? For example, when I generate poses with a flat background (gray background), it tries to generate an image that matches these backgrounds.
It also gets influenced by the hair color of the character. If I have a red-haired woman, the output will contain more reddish things.
I'm still pretty new to SD, what's the difference between lowering the CFG in the K Sampler vs using the rescale CFG node?
roughly said, with CFG rescale you get the benefit of a higher CFG without burning the image
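For the curious, a toy NumPy sketch of the idea behind CFG rescaling: high guidance scales inflate the standard deviation of the guided prediction, which is what "burns" the image, so the guided output is rescaled back toward the conditional prediction's std. Function and parameter names here are illustrative; real implementations work per-channel on the denoiser's output, not on a flat array:

```python
import numpy as np

def rescale_cfg(cond, uncond, guidance_scale=7.5, rescale=0.7):
    """Classifier-free guidance with std rescaling (a sketch).

    cond / uncond: the model's conditional and unconditional predictions.
    rescale: 0.0 gives plain CFG, 1.0 gives fully rescaled output.
    """
    # plain classifier-free guidance
    cfg = uncond + guidance_scale * (cond - uncond)
    # shrink the guided prediction so its std matches the conditional one
    rescaled = cfg * (cond.std() / cfg.std())
    # blend between rescaled and plain CFG
    return rescale * rescaled + (1.0 - rescale) * cfg
```

With `rescale=1.0` the output's std exactly matches the conditional prediction's, which is why you can push `guidance_scale` higher without the contrast blowing out.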
Ciao Matteo, you're a legend!!
Great video! I'm getting this error though and can't seem to find the node when searching: When loading the graph, the following node types were not found:
IPAdapterApply. If I try running it regardless, I get SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
Impressive! Do you think adding 1-2 additional images to the face ip adapter will allow for more flexible generations - e.g. a side profile view as well? Or will this simply crowd out the concepts?
I would use multiple images based on what I need to do. For example if I need to generate only happy people, I would make a smiling reference. The closer the reference is to what you need to do the better. Sending two contrasting pieces of information to the IPAdapter could be detrimental. That being said nobody knows how this stuff really works and the best way is to experiment.
Thanks. Is there any way to copy the facial expression to a specific photo?
Hey, Matteo! Thanks so much for the lesson. Could you please explain how to vary the noise with the help of the Advanced IPAdapter? Its parameters are so different from those of the old node that at times I feel at a loss. Is there any way to use the old legacy IPAdapter nodes?
What custom node are you using to bring up the custom node quick search at 2:22? Is it built in and a keyboard shortcut... or?
double click on the work area
Great tut as usual! It will help me greatly in concepting my own characters) btw, could you tell me your opinion on UltimateSDUpscale and why you don't use it?)
any more advice on how the IPAdapter plus face model understands the input image? what's the best recommendation to make the model produce more likeness of the original input?
just have only the face in the picture as shown in the video. realistic photos work better than illustrations generally
appreciate your advice@@latentvision
Thanks for your videos, learning so much from you! I am trying to change the shirt and keep it as the original, but the model keeps changing the shirt (even when I use a weight of 1 and no noise). Is it possible to do this with the IPAdapter?
it's very hard to say without looking at the workflow sorry
Thanks. In general, is it possible to keep the image as is (with the same logo/text etc.)? The changes I made in your workflow were in the torso part: I added a new image of a shirt to the IPAdapter and changed the weight, noise etc. The final image shows the new shirt, but the details are wrong (logo/text). @@latentvision
The way you harness the power of ComfyUI is amazing, I wish you could make a tutorial for Comfyui, not just IP adapter. I'm sure there are a lot of things that we can learn from you. Please consider this.
I've made a couple of more generic tutorials, I'll do more in the future. thanks!
@@latentvision No sir, thank you for sharing your knowledge with us. Hope I could learn more from you.
A quick question though, I followed the link you provided in the captions for ComfyUI essentials extension, seems it leads to IP-Adapter GitHub page. Yet I couldn't find "Image Crop+", even after loading the workflow, Install missing custom nodes couldn't find it.
my bad, wrong link, should be fixed now@@KooroshGhotb
@@latentvision Thanks
Great video as always. If you don't mind me asking, what does CFG Rescaling do? I've never seen this node before.
simply said lets you use higher CFG values without burning the image
When you have a character Lora, would you apply it just to the initial image or would you also apply it every step of the way?
well depends on the lora and if you want to keep the same exact character at the end. Generally speaking yeah if you keep it till the end it would be very stable, but again... depends...
Amazing work, Matteo! I've never liked Comfy because it often gave me errors that were hard to debug but no one explains things like you do, and now it's a lot more clear! So thank you for your tutorial - it's very helpful! However, right now I'm stuck with one error - where can I find the model you have put in the clip vision node? When I load your workflow, it's empty. I tried googling for it, but to no avail. Any help is appreciated!
go to the extension repository, there are detailed instructions for installation
Great video! thank you! but I can't figure out why the Apply IPAdapter node is missing.
Awesome video! This is something I have been trying to do but have had little success with. I do have a question though: where do you find the control_v11p_sd15_openpose_fp16.safetensor file that you have in your workflow? I have seen several people who share their workflows start to use it, but I have no idea where to find it, as I keep getting sent back to the regular repositories which don't seem to have it.
Thank you for your precious sharing. I got one question: what if I want to use SDXL model to generate face? In this case, how should I set the ipadapter model? There is no SDXL face ipadapter model.
there's a new SDXL model that was released recently!
Could you tell me where to download it please?I couldn't find it via manager@@latentvision
I think I found them, thank you again!@@latentvision
i don't have this node, please help me to install this node. Thank you so much. "When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph."
check my latest video, IPAdapter went through a code rewrite. The node to use now is IPAdapter Advanced
Hello! First things first... thank you for your very useful tutorial. There are many good points in it.
But i was wondering : Where did you find your "ImageCrop+" node ? i can't find it anywhere.
Thank you !
ComfyUI essentials
Thank you ! @@latentvision
wow, it can even animate
crazy