STOP wasting time with Style LORAs! Use THIS instead! How to copy ANY style with IP Adapter [A1111]
- Added 20 May 2024
- #aiart, #stablediffusiontutorial, #generativeart
This tutorial will show you how to use IP Adapter to copy the style of ANY image you want and apply that style to your own creation. It does the job of a LORA with just one image. We will also compare Control Net's Reference Only mode against IP Adapter and see how the two differ.
Chapters:
00:00 - Intro
00:26 - Topics overview
00:45 - How to Install Control Net extension
02:05 - Downloading IP Adapter models
03:30 - How to Use IP Adapter to copy the style of a Reference Image
10:54 - Understanding IP Adapter parameters
15:05 - Reference Only versus IP Adapter comparison
Useful links:
Where to get the IP-Adapter models
huggingface.co/lllyasviel/sd_...
Where to get the ip-adapter-plus-face_sd15.bin file.
huggingface.co/h94/IP-Adapter...
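For anyone who prefers scripting to the A1111 UI, the same idea exists in the Hugging Face diffusers library via `load_ip_adapter`. The sketch below is an assumption-laden rough equivalent, not the workflow shown in the video; the base model and adapter file names are guesses based on the links above.

```python
def generate_with_style(prompt, style_image_path, scale=0.8):
    """Text-to-image with an IP Adapter style reference (SD 1.5).

    Sketch only: requires diffusers, torch, and a GPU. The model and
    adapter names below are assumptions based on the download links in
    the description, not files verified here.
    """
    # Heavy imports kept inside the function so importing this module
    # doesn't pull in torch/diffusers.
    import torch
    from diffusers import AutoPipelineForText2Image
    from diffusers.utils import load_image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    # Same adapter family the video downloads from huggingface.co/h94/IP-Adapter.
    pipe.load_ip_adapter(
        "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
    )
    # Loosely analogous to ControlNet's "Control Weight" slider in A1111.
    pipe.set_ip_adapter_scale(scale)

    style = load_image(style_image_path)
    return pipe(prompt, ip_adapter_image=style, num_inference_steps=30).images[0]
```

The `scale` argument plays a role similar to the Control Weight parameter discussed at 10:54: higher values pull the output closer to the reference style.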
The reference image I used in this video was from a reddit post by navalguijo. Thank you for creating such a beautiful image!
**If you enjoy my videos, consider supporting me on Ko-fi**
ko-fi.com/keyboardalchemist
Oscar Wilde once said, "Imitation is the sincerest form of flattery," and that's what comes to mind when using IP Adapter: you can take the style of an image you like or admire and transfer it to a new image. But please keep in mind another great adage, from Spiderman's Uncle Ben: "With great power comes great responsibility." So use this power tool to create, and use it responsibly, my friends. Cheers!
Glad someone is finally making a video about this in A1111. Every video seems to be ComfyUI, and honestly that interface looks anything but comfy.
Yeah, I'm not too keen on ComfyUI either. Glad this video was helpful for you! Cheers!
No one would use it if it was realistically named ClaustrophobicSpaghettiUI :)
This is the kind of stuff other AI tubers don't explain in their rush to be first with content. Thank you.
Thank you for your support!
As someone who is known in the training community... LORAs serve purposes beyond that. I mean, it's more than OK to use IP Adapters, but LORAs can do things IP Adapter can't, and that's why people are still requesting and using them.
Thanks for your feedback!
Holy s- I just entered the world of AI art and found your videos browsing for tutorials and I'm loving them, they are easy to follow and you even show the amazing results. You earned a new sub, keep up the good work!
Yeah, generative AI is a lot of fun. Hope you enjoy all of my videos and thank you for the support!
5:33 Just a tip here: if you select the image with the seed you like, you can just click the Recycle button and it will fill in that seed automatically, so you don't have to copy-paste :)
Great tip! I don't use that button nearly enough. Old habits die hard I guess. =) Thanks for watching!
You cover a lot of great concepts in your videos. I'm an illustrator who mainly uses img2img to refine my sketches or CG renders. I would love to see more videos on img2img. This is where I think established artists would appreciate this tool immensely. Thanks again.
Thank you for tuning in and for making this suggestion! I will keep it in mind for future videos. Cheers!
Instant sub! I just discovered your inpaint anything videos, and then I watched this. Kudos, my friend. The praise you've received from other commenters is well warranted. Thank you so much for the time and care you've put into your tutorials. You've just rocketed to the top of my list of AI-focused youtubers. I wish you nothing but success and look forward to seeing your subscriber count grow!
Thank you for your support and your kind words! I’m glad you enjoyed my tutorials.
Love all your videos. I binge watched them since, put together, they gave me very good insight into inpainting that I couldn’t find anywhere else. (I was struggling with invokeai)
That's awesome! I'm glad my videos were helpful for you. Thanks for supporting my channel!
Great overview. Super useful tutorial that I will be incorporating into my image generation!
I'm glad you liked the video. Thank you for your continuing support of my channel!
the best tutorial on this subject! Thanks a lot
You're welcome! I'm glad you liked the video!
The plus face model does exactly what it says: you place a face in ControlNet (a real person or a character), then prompt for a person, and it will put the reference face in your rendered image.
The trick is that it really has to be a face image, only the face or the face and hair, nothing else, so you'd better crop the image or use the ControlNet selection tool.
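The crop-the-face tip can be sketched with Pillow; the file names and crop box below are placeholders for illustration, not values from the video.

```python
from PIL import Image

def crop_to_face(src_path, box, out_path="face_only.png"):
    """Crop a reference image down to just the face (or face and hair)
    before feeding it to the plus-face IP Adapter model.

    `box` is a (left, top, right, bottom) pixel tuple; in practice you
    would pick it by eye or with a face detector.
    """
    face = Image.open(src_path).crop(box)
    face.save(out_path)
    return face
```

Inside A1111 itself, the ControlNet crop/selection tool does the same job without leaving the UI.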
Thanks for the tip! I did not have great results with the plus face model, but I'll try to crop the face real close and see how it goes. Cheers!
The big issue, I feel, is that IP Adapter brings across far too much of the subject(s) in the source image for pure style transfer, whereas training a LORA fares a bit better.
Thanks for the tut! Well explained.
I'm glad you liked it! Thanks for watching!
THANK YOU
Excellent Tutorial. Thanks !
I'm glad you liked it!
Thank you, very helpful!
You're welcome! Thanks for watching!
Hey bro, one question ❓
If we take a few images of colored manhwa and want a certain style (like flat 2D animerge), but we don't want to change anything in the image, like the people etc., only the style, so it looks like painterly anime or something else, without changing any object or the position of anything, is that possible, and if so, how? ❤
We want the image to look the same but in a different style, like painting, watercolor, anime, etc.
Hello, thanks for watching! If you want your characters to look the same, you should use a LORA, either grab one from CivitAI or you train one yourself. For the style, you can either use prompts, specialized checkpoints, LORAs, IP Adapter, or a combination of all of these things. IP Adapter is very good at transferring style, but not so great at copying the looks of a character.
For some reason, when I hit Generate, the ip-adapter radio button unselects itself and generates without using the ControlNet. Has anyone else encountered this?
I'm only about a week into the AI scene and this video is top notch. You covered exactly everything that is important. I hope you continue to do more. There are so many videos out there and most are bloated and missing key considerations. Very well done!
I appreciate your kind words! Thanks for watching!
I've found (especially when copying faces) that a control weight that's too high will make the image look hazy and not crisp. I keep the weight down to about 0.2.
Thank you for sharing this tip! I will give it a try.
Hi! Great content! Please make one for hands; I'm unable to make a proper realistic hand. Please, I beg you!
I want to use a depth map reference to guide it. Can you show the way, please?
Starting from 15:31 in this video, I showed an example of IP Adapter in Control Net Unit0 and Open Pose in Control Net Unit1. If you want to use Depth Map instead, just replace OpenPose in Unit1 with Depth Map. Depending on the effect that you want to achieve, you can also add another CN Unit (i.e., CN Unit2) and add Depth Map there. I hope this helps. Thanks for watching!
can we make logos using this IP Adapter ?
there is a new style selector extension as well which you may enjoy using
This style selector extension looks very cool! I'll have to try it out. Thank you for your continuous support of this channel!
I have a problem where the preprocessors "ip-adapter_clip_sd15" and "ip-adapter_clip_sdxl" are not showing up. Can someone help me?
can you make a comfyui version?
Bro, IP Adapter is not working for me, can you help? It's not copying any style, it's just generating a blurry, pencil-sketch-like image with no coherency.
Hello Keyboard Alchemist, this teaching video is very good. Can I translate this video into Chinese? Thank you
Weird, when I generate an initial image, it is completely different from what I put in IP Adapter.
IP adapter will copy the style of a reference image. You might want to try tweaking some of your parameters. For example, increasing Control Weight.
Can I use it with img2img too?
Yes you can. Starting at 15:20 in the video I gave an example.
How can I do a similar thing using SDXL?
If you are using an SDXL model, just select the SDXL preprocessor and the SDXL IP Adapter model in ControlNet. I mentioned this at around 4:15 in the video.
@@KeyboardAlchemist I tried it and it keeps giving me a "mat1 and mat2 shapes cannot be multiplied (1x1024 and 1280x8192)" ControlNet error. I did use the IP Adapter XL CLIP for the preprocessor and chose the proper XL model in ControlNet. Any idea why it still isn't working?
@@HAJJ101 Not sure about this, but did you check the Pixel Perfect checkbox?
@@KeyboardAlchemist Yes and I retested this just now again. Do you have discord?
Roop and Reactor can fix your face
For sure!
Alright dude, you're like killing me: why in every video does your voice gradually change from start to finish into an Asian accent? Are you using AI to change your voice at the beginning? Be proud of who you are!
Hello there, please know that I'm not doing it to upset anyone's auditory senses, it's simply out of necessity. I mostly create these videos in my spare time, which is very late at night, when my family is asleep. It is easier and quieter to use text-to-speech synthesis with my cloned voice rather than speaking into the microphone. I will try to figure out a better workflow to avoid this issue, but until then I ask for your patience and understanding. Thanks for your continuing support!
@KeyboardAlchemist hi, what is text-to-speech service you use? Thanks
@@quocbaonguyen4421 I use Eleven Labs to clone my voice.
lol anyone using automatic1111 in 2024 is a noob.
Just got started with this AI stuff; is Automatic1111 not the thing to use anymore? What is?