Become a Style Transfer Master with ComfyUI and IPAdapter
- Published 17 May 2024
- This time we are going to:
- Play with coloring books
- Turn a tiger into ice
- Apply a different style to an existing image
Github sponsorship: github.com/sponsors/cubiq
Support with paypal: www.paypal.me/matt3o
Discord server: / discord
All the workflows can be downloaded here no strings attached: f.latent.vision/download/styl...
The SDXL lineart controlnet: huggingface.co/TencentARC/t2i...
00:00 Intro
00:16 Sketch to Image
08:52 Replace objects and materials
12:41 Apply style to image
17:02 Negative image conditioning
Love this quote: "This is not magic and it's definitely not going to change everything. It's just a very powerful tool at your disposal. If you understand how it works, you'll be able to get great images out of it, but don't think that you can send whatever reference and have perfect results with no effort."
however, ipadapter is so magical for me ❤
7:48 for timestamp
the best thing about matteo is he always starts with a fresh comfyui default setup, not some overwhelming pre-made spaghetti nodes. makes it easy to understand the process and follow along. thanks matteo!!
Watch TV? Nah. Play games? Nah. Tweak and generate with IP-Adapter deep into the night, under the guidance of master Matteo: YES!!
I know you’ve lamented people leaving your videos before the end, but this one leaving is just because I wanted to get amped for when I actually have time to watch the whole thing sometime tomorrow. Love the 15-20 min video format, you’re still the GOAT.
eheh don't worry I was just kidding. the videos are here, people can watch them for how long or how short they want :D
thank you for explaining how to use the negative image input. i added different images and was never sure what to put there.
I love your videos, the amount of useful information you give (although it makes me dizzy to see the nodes), the tranquility of your voice and the charisma you exude.
Thank you very much for the workflows
aaaw thanks
Wow! I've been messing with it all day and you upload a new masterpiece! 😍 I even spent all day yesterday learning to draw. This is a great use for it!
Thank you so much for the detailed breakdowns of how IPadapter works. We are looking forward to new videos!
This is such a nice tutorial. Thank you for walking through IPA+Controlnet possibilities.
God bless you, dear Matteo. You are such a precious mind. Thankful for the time you shared with us, best regards.
Maestro Latente delivers another masterclass and entertaining creation!! May you live forever!!!
You, sir, are an excellent teacher... So easy to understand, step by step... Please do this most of the time... The difficulty levels are so helpful for a noob like me.
Brother, thank you for your videos. They are particularly useful because with them I went from knowing nothing to having clear thinking, and it only took me a little time. Thank you very much for your efforts.
You make your work available for everyone! Thank you! You have a good heart ❤
Amazing. Just... amazing.
Thinking about myself now. I spend a lot of time watching videos and trying to mimic those techniques... I hope someday I can reach that kind of mastery.
Amazing. Just amazing.
Thank you for this amazing tutorial. I love to see my own drawings and styles come to life, and how quickly new things are created 🙂
oh is it yours? please tell me more so I can give you proper credit
@@latentvision No worries, these are not my drawings 😅 Sorry for the confusion. I meant my drawings at home, which I'm going to use 😉
I always enjoy watching your videos. You are the master!
your way of teaching is very simple and effective. easy to understand 😍😍
Incredible work as usual. Love it!!!
You're... simply the best
Better than all the rest?!
@@latentvision Let's just say that your explanations lift the veil on the magical side of generation, and even if we understand that we'll have to experiment a bit at random, we still get the feeling of having more control. Other YouTube channels don't go into as much detail, so you can apply their precepts to give it a try, but since it doesn't seem to be based on anything, you might be tempted to give up as soon as you've had a few failures.
Thx for the work, you're awesome
Thank you Matteo. I will watch this video over and over again to make sure I get it all!
PS: "you are now the master of style transfer..."! 😅😅😅
I have been using your embeds node to try and go the other way, from a photo to a hatched pen drawing... much harder but I got quite close. Being able to save and load embeds is a great touch.
you are a wizard and your generosity is inspiring!
being inspiring is the greatest recognition I can ask for... thanks
This is a fantastic video that seems to teach legendary magic. Thank you always.
What you talk about flows smoothly, and I gain a lot from it. Thanks.
Wow you make it look so effortless and I swear this is pretty much MagnificAI haha. Great work!
i really love your content, very informative thanks!!
thanks again for sharing.
great video as always!💪
Sir, you are my lord. Simple, usable, and even adaptable to the work I want to do. You are the true engineer, my lord
lol thanks but I'm no lord.
Thank you kind sir!
Thank you master !
Amazing. Just... amazing.
Great Matteo ❤
Phenomenal
Epic video
ip adapter is real magic
incredible as always. ring the subscribe bell, people!
Thanks!
You are a star....
Thank you for another great tutorial!
The models and many modules are mostly black boxes for the community, and any insight into their internal workings is very helpful. Clues like "SDXL prefers CN strength and end_percent lower than SD1.5" or "bleeding of undesired elements can be counterbalanced with a noisy negative image" are invaluable. Any insight into the behavior of the Unet, CLIP, VAE, or latents saves us hours of trial and error.
Is it possible to control the scale of model application better than with the regular img2img denoise? Namely, is it possible to force a model to preserve large-scale structures and change the textures only, or vice versa? IPAdapter appears to be working along these lines already, but a separate feature-scale control would be of additional help. Any insight into how various types of noise affect the diffusion would also be great. Looking forward to more of your videos.
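For readers wondering about that last point: in img2img the denoise value decides how many diffusion steps are re-run on top of the source image, which today is the main coarse-versus-fine control. A minimal sketch in diffusers terms, not Matteo's workflow (assumptions: diffusers is installed, an SDXL checkpoint is available, and "render.png" is a hypothetical input):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init = load_image("render.png")  # hypothetical source image

# strength (denoise) controls how much of the source survives:
# low strength -> only the last steps run, layout is preserved, textures change
# high strength -> the model repaints structure and texture alike
image = pipe(prompt="ink illustration", image=init, strength=0.35).images[0]
image.save("restyled.png")
```

There is no separate per-scale knob here; strength moves structure and texture together, which is exactly the limitation the comment describes.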
If I had money, I would be throwing it at you, but sadly I'm broke. Great Video!!!
WOW!!😍😍
Thank you, Matteo, your videos are always helpful. One question: what is the use of "prep image for clipvision"? Just to make the output image sharper?
it tries to use the best scaling algorithm possible to keep as much detail as possible. on top of that you can add sharpening
@@latentvision Thank you so much!
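As an illustration of what such a prep step does conceptually (a sketch, not the node's actual code): square-crop, downscale with a high-quality resampler to the 224x224 input CLIP vision encoders expect, then optionally sharpen to recover detail lost in the downscale:

```python
from PIL import Image, ImageFilter

def prep_for_clipvision(img: Image.Image, sharpen: float = 0.0) -> Image.Image:
    # center-crop to a square so the resize doesn't distort the subject
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    # LANCZOS retains more detail than bilinear when downscaling
    img = img.resize((224, 224), Image.LANCZOS)
    if sharpen > 0:
        # unsharp mask, strength scaled by the hypothetical `sharpen` parameter
        img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=int(sharpen * 100)))
    return img
```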
Will you release a ComfyUI course in the future? I love your workflows but I find the software daunting
Hello, Matteo
I was wondering what Lineart controlnet you used for SDXL with the sketch images.
Keep up the great work! It's super helpful for the whole community!
it's the controlnet lora by stability ai, but you can check other models if they are available
You are awesome 😎.
no, you are awesome!
I'd love to give this a shot but I can't seem to find a way to install the t2i-adapter-sdxl for comfyui, I'd greatly appreciate any help I could get. Thanks!
I'm learning from you, God.
goat sacrifices only on Friday
These videos are amazing!
What kind of hardware are you using? I'm considering building a machine for SD and small LLMs, but my budget is low.
Would a 3060 12GB be good enough to start?
I have a 4090. I had a 3060 before... to start, yeah, it should be enough.
@@latentvision thanks for the reply man!
you're a master
I really like the flow of the video. The example at the end with one IPAdapter and two ControlNets; would using InstantID be better for portraits?
face models don't generally like other conditioning on top, but yeah it is possible
First of all, you are the best, your tutorial videos are great. I tried to download "t2i-adapter-lineart-sdxl-1.0" but in the download area there are two pytorch models; where can I find that?
Edit: I found it in "Install Models"
Thank you for these amazing tools Matteo! I was wondering if maybe you have some tips on how to best transfer an art-style to a subject that the checkpoint has no knowledge of.
I have some 3D renders of creatures that I would like to turn into illustrations. So far, sending the 3D render as the latent image and a style reference through ipadapter, along with some style descriptions in the prompt, was "ok". However, unless I keep the denoise extremely low, the features of the creatures (especially the faces) change drastically. I already tried turning the 3D render into lineart/depth and testing several controlnets... similar to what you did with the castle. Unfortunately nothing really did the trick. Either the design of the creatures changes or I get hardly any of the style into the picture.
the checkpoint is actually very important, try many of them, it makes a huge difference. Regarding your specific question it's hard to say without checking the actual material
Where to download the "ipadapter-xl-lineart-fp16.safetensors" used in the setup? EDIT: Got it - used "Install Models" in the ComfyUI Manager.
Thanks!
Great tutorial! You're really doing a fantastic job! Thanks a lot! Just tell me, please, where can I find the xl-lineart-fp16 that you're using as a controlnet model?
I found it 😉
linked in the description!
@@latentvision Thanks 🙂
matteo i love u
sorry, I'm taken :)
this is so cool, but I have a question: what if I tried to do the reverse of the coloring book, from a normal image to line art / coloring book? do I just swap the images? thank you
yes, works very well the other way around too. be very aggressive with the text prompt in saying exactly what you want. Also you might NOT want to send the original image into the ksampler latent to avoid getting colors.
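In diffusers terms, Matteo's tip maps to running pure text-to-image conditioned on the control image, so the starting latent is noise and none of the source colors can leak through. A sketch under those assumptions (the TencentARC lineart adapter from the description; "photo_lineart.png" is a hypothetical preprocessed lineart of the photo):

```python
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-lineart-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

lineart = load_image("photo_lineart.png")  # hypothetical preprocessed input

# text-to-image: the latent starts from noise, so no colors from the photo,
# and an aggressive prompt states exactly what we want
image = pipe(
    prompt="black and white coloring book page, clean outlines, white background",
    image=lineart,
).images[0]
```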
is there any way to mix 2 objects into an image with style transfer? for example, I want to mix an image of a tiger and a dragon with the style transferred from someone's painting.
check attention masking
+1!
I can see this being very useful in some kinds of work... very impressive
Does style transfer also work with sd 1.5 models?
👌👌👌👌👌👌
Hey bro, could you help me with a question?
Is it possible to get a model to not see everywhere what it wants to see?
For example, I try to create a "landscape" out of a sketch of a face.
Yes, it's a face, sure...
And obviously the AI sees it.
There is nothing like "eye" or anything similar in the prompt.
So my upscale workflow won't work very well if I try to create a "landscape" out of it, for example, because there is always an eye there.
Is it possible to get a result without this happening?
it's not easy to give you an answer, I'd need to see the reference images. Check my discord maybe
@latentvision, why did you change the name of the weight type option from "Style transfer (SDXL)" to just "Style transfer"?! Can we now use style transfer for both SDXL and SD1.5?! 🤔
yes, you can transfer style (and composition) in SD1.5 too, even though it's not as effective. The style+composition node is only for SDXL, but I'm working on it.
@@latentvision thanks...👍
@matteo, I am following this, and while doing the inpainting part I get an "AttributeError: 'NoneType' object has no attribute 'shape'" error coming from the KSampler node. I can't figure out why it's happening. Can you please help?
you are probably using the wrong controlnet
Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found.
Please help me, I don't understand why after I updated my IP_adapter to the latest version, it doesn't show the "style transfer" section anymore. I've tried everything to no avail. Hope you can help me fix it. Thank you very much for your contributions. IP_adapter is great.
I'm sorry, it's hard to say. Try to give more details in a GitHub issue
thanks, but I have an error: clip vision model not found. Could you please help? What clip vision should I install?
I installed the CLIPVision model (IP-Adapter) CLIP-ViT-bigG-14-laion2B-39B-b160k but it doesn't work
@@eugenbuzuk2000 Most of the ip adapter models require the other one (CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors)
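If it helps, a sketch of fetching that encoder programmatically (the repo and file path follow the ComfyUI_IPAdapter_plus installation notes; verify them on the repo before relying on this):

```python
import shutil
from huggingface_hub import hf_hub_download

# ViT-H image encoder required by most IPAdapter models
path = hf_hub_download(
    repo_id="h94/IP-Adapter",
    filename="models/image_encoder/model.safetensors",
)
# rename to the name the unified loader looks for
shutil.copy(path, "ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors")
```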
Hello Matteo, thank you very much for your videos, they are really good. I only have a little problem with this tutorial: I get an error in the KSampler; when I delete the IPAdapter it creates the image fine. At the beginning it generated 5 images, but now this error appears. I wanted to know if you know any solution for this error.
Error occurred when executing KSampler:
Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.
File "C:\artificial intelligence\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
try to launch comfy with the --force-fp16 option
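That error means the attention op received a half-precision query with full-precision key and value. A tiny PyTorch sketch that reproduces the mismatch and shows why forcing one dtype (what --force-fp16 does globally at startup, i.e. python main.py --force-fp16) fixes it:

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 64, dtype=torch.half)   # query ended up in fp16
k = torch.randn(1, 8, 64, dtype=torch.float)  # key/value stayed fp32
v = torch.randn(1, 8, 64, dtype=torch.float)

# This call raises:
# "Expected query, key, and value to have the same dtype, but got ..."
# F.scaled_dot_product_attention(q, k, v)

# A single common dtype resolves it:
out = F.scaled_dot_product_attention(q.float(), k, v)
```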
Can you please help combine your workflow with CosXL! And then with AnimateDiff, and we could start creating movies ))
How to use IPAdapter with Cos Stable Diffusion XL 1.0 Edit?
I believe it's compatible with cos but not with cos-edit. I need to look into that
Sorry, could someone please post a link to that specific ControlNet model (T2I-Adapter/adapter-xl-lineart-fp16.safetensors)? I cannot seem to find it anywhere :(
t2i-adapter_diffusers_xl_lineart.safetensors
@@joebreaker11 Thanks. Could you help me a bit further, please? When I follow the Hugging Face link, it points to the "Model card" tab by default. So I switch to "Files and versions", where there are 2 safetensors models: diffusion_pytorch_model.fp16.safetensors and diffusion_pytorch_model.safetensors. Do I need to download and rename one of them to match the "t2i-adapter-lineart-sdxl-1.0.safetensors" name?
@@pmtrek They are TencentARC models, afaik. I renamed them accordingly (full and fp16) and put them in ComfyUI/models/controlnet/. Both models worked, but they seem to work a bit differently from Matteo's, maybe because his model has a different name. I didn't find exactly the same one, but found several repos of controlnet models on HF besides TencentARC: lllyasviel, SargeZT, Diffusers... Now to find a few months to experiment...
@@valerymoyseenko thanks, that's helpful
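For anyone landing in this thread with the same question, a sketch of downloading the fp16 file mentioned above and giving it a recognizable name (ComfyUI only cares that the file sits in models/controlnet/, not what it is called):

```python
import shutil
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TencentARC/t2i-adapter-lineart-sdxl-1.0",
    filename="diffusion_pytorch_model.fp16.safetensors",
)
shutil.copy(path, "ComfyUI/models/controlnet/t2i-adapter-lineart-sdxl-fp16.safetensors")
```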
Now, if you reverse the process, could we make a useful coloring book drawing, with thick(er) lines?
you can very easily make coloring books, but calibrating the thickness of the line would not be trivial
simply the best. I hope your channel will soar soon. We've had enough of the AI image-generation fake tutors. This is a discipline and needs a sound teaching method. And thank you for the freebies for the unemployed; not everyone can afford a subscription. God bless you Matteo.
you are most welcome! Have fun! and thanks
why is the ipadapter model not found?
I cannot find the T2I adapter lineart fp16. How do I get that component? Could you help me please?
huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0
@@latentvision but when I install it, there is no controlnet like you show, so I get an error
My brother went to an amusement park and sent a picture of his daughters sitting on a mushroom there. He did this because my mom had a picture of me and my brother sitting on that same mushroom. She found it and it was still a bit visible, but the colours have faded and turned severely red. Severely. I wonder, is there a way to retouch old photos to get close to what the original was? She has many old pictures of us and other family; it would be nice if I could restore the memories for my parents now that they are getting on in age and their memories are as faded as the pictures they have of them. Mind you, I do not want to turn them into manga figures with big boobs, but keep them as original as possible.
yes, there are multiple strategies for image restoration. Supir is very good for that; I believe someone posted a workflow for it on my discord
It's not about the sketch, it's about colour control... Many people want to make comics but can't draw... With ControlNet, IPAdapter and SD anyone can draw anything... But colour control (for example dress colour, house colour, overall colour across a multi-panel scene) is still the problem.
What's the lineart model you're using?
whatever works 😄 this is the t2i adapter lineart, I believe
@@latentvision could you pls post a link to it ? I cannot find it....
@@latentvision I thought there was a bigger model, like with depth full or something. 👍 Also, is there any way to use this ip-adapter style transfer with Forge? I can't really use SDXL with Comfy 😢
@@pmtrek huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0
@@latentvision thank you. When I follow that link, it points to the "Model card" tab by default. So I switch to "Files and versions", where there are 2 safetensors models: diffusion_pytorch_model.fp16.safetensors and diffusion_pytorch_model.safetensors. Do I need to download and rename one of them to match the "t2i-adapter-lineart-sdxl-1.0.safetensors" name?
can't see strong style transfer in the options :(
you just need to upgrade! 😄
@@latentvision yes, I tried via the manager but it didn't show up. But after another restart it appeared :) cheers!
Castles are easier than women, just fyi.
we had castles back in the Disco Diffusion days 🏰🏰
shush, don't tell anybody!
Thank you so much Legend 🫶🏼
Subscribed. I love these projects built from the start, instead of downloading a template and spending the weekend on debugging
Thanks!
thank you! cheerio