WOW! NEW ControlNet feature DESTROYS competition!
- Added 12 May 2023
- With a major new update to ControlNet for Stable Diffusion, Reference Only has, once again, literally changed the game.
Prompt styles here:
/ sebs-hilis-79649068
Support me on Patreon to get access to unique perks! / sebastiankamph
Chat with me in our community discord: / discord
My Weekly AI Art Challenges • Let's AI Paint - Weekl...
My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
ControlNet tutorial and install guide • NEW ControlNet for Sta...
Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
LIVE Pose in Stable Diffusion • LIVE Pose in Stable Di...
Control Lights in Stable Diffusion • Control Light in AI Im...
Ultimate Stable diffusion guide • Stable diffusion tutor...
Inpainting Tutorial - Stable Diffusion • Inpainting Tutorial - ...
The Rise of AI Art: A Creative Revolution • The Rise of AI Art - A...
7 Secrets to writing with ChatGPT (Don't tell your boss!) • 7 Secrets in ChatGPT (...
Ultimate Animation guide in Stable diffusion • Stable diffusion anima...
Dreambooth tutorial for Stable diffusion • Dreambooth tutorial fo...
5 tricks you're not using in Stable diffusion • Top 5 Stable diffusion...
Avoid these 7 mistakes in Stable diffusion • Don't make these 7 mis...
How to ChatGPT. ChatGPT explained in 1 minute • How to ChatGPT? Chat G...
This is Adobe Firefly. AI For Professionals • This Is Adobe Firefly....
Adobe Firefly Tutorial • Adobe Firefly Tutorial...
ChatGPT Playlist • ChatGPT
Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
Please support me on Patreon for early access videos. It will also help me keep creating these guides: www.patreon.com/sebastiankamph
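For anyone wondering what a styles file actually contains: in the Automatic1111 web UI, styles live in a plain styles.csv with name, prompt and negative_prompt columns, and a {prompt} placeholder in a style is replaced by whatever you typed. A rough sketch of that substitution (the file contents below are made up for illustration, not Sebastian's actual styles):

```python
import csv, io

# Synthetic stand-in for a styles.csv file (illustrative content only).
styles_csv = io.StringIO(
    'name,prompt,negative_prompt\n'
    'Digital Oil Painting,"{prompt}, digital oil painting, rich colors","blurry"\n'
)

def apply_style(user_prompt: str, style: dict) -> str:
    # If the style contains "{prompt}", your text is spliced in there;
    # otherwise the style text is simply appended after your prompt.
    if "{prompt}" in style["prompt"]:
        return style["prompt"].replace("{prompt}", user_prompt)
    return f'{user_prompt}, {style["prompt"]}'

styles = {row["name"]: row for row in csv.DictReader(styles_csv)}
print(apply_style("portrait of a woman", styles["Digital Oil Painting"]))
# portrait of a woman, digital oil painting, rich colors
```

Dropping the downloaded styles into that CSV is what makes them show up in the styles dropdown.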
How do you get two ControlNet units in your GUI?
You have to add the styles to the prompt, btw. In the video you just selected them from the dropdown, but they're not added to the prompt until you click on "add style to prompt"
@@UnBknT No need to click the button. They are still applied.
Why pay for your monthly Patreon when I can watch your free YouTube videos with adblock on? We were beating the competition, I thought, no?
@@142vids You're free to do whatever you want. The people supporting me do it out of the kindness of their hearts, helping me keep making these videos.
Started playing with it a few hours ago. It is insane. It's nearly as good as training but without the training. It pulls faces, poses, lighting, art style, everything. I cannot believe this is only the first iteration, it is already so good. I thought Shuffle was dope but this is on a whole new level.
Exactly, "almost as good as training" is the scary part. I've been able to get better likeness out of this reference_only model than I've had with pretty much every early training attempt. There's been a bit of cherry picking but in some cases I've gotten 2 extremely good hits from a 4 batch render. It's crazy how good this is already!
@@Mocorn Strange, because I still cannot get it to create a decent copy of the original face. It always makes the new image look younger and very different from the original face.
To me it looks like this method only works for people "coming out of the model". For example, if you take the seed image from this video and try to generate other images from it without Sebastian's "Digital/Oil Painting" and "Easy Negative" styles, the results are very unimpressive. I'm not saying that this new ControlNet is not super cool for some use cases, but I thought he could have been clearer about the limitations.
I couldn't make it work with v1.1.174; txt2img is completely broken, even the hair colour doesn't match. img2img works somewhat better, at least matching hair and clothes, but the faces are out of a horror movie, twisted etc. I'm using exactly the same styles and settings.
how to access free trial?
this is actually the very definition of game changing
💯
You can create a character and make a whole TV show or anime out of that character.
This has been extremely helpful in redesigning the characters for a video game I made way back in high school. I've taken my art, run it through AI, and seen it give me different variations of my work. I then pick what I like from each and draw up the final design. It is such a time saver.
Thank you so much! You've been pretty much the only source I've needed to learn everything I need about control net. Great videos with clear and concise information. Keep it up!
Thank you very much, glad the videos have been helpful to you 😊
Dude... You're literally faster than me clicking the update button in SD... Have my sub!
Thank you kindly! 😊🌟
I only started using Stable Diffusion a bit over a week ago and your videos are such a big help.
The Open Pose 3D extension is great for posing - you can run it in the GUI tab, set the skeleton in three-dimensional space, together with hands and feet and generate 3 images: canny, depth and openpose.
Please raise your volume, I almost had a heart attack when the ad kicked in lol
Waiting for my new computer with beefy vram to arrive, watching your vids to prep, and I'm loving what I'm seeing! Thanks so much for these!
Sebastian big thanks for providing your styles. I mostly use them right at the beginning before even prompting and they provide beautiful results.
Happy to help!
Wow. CN guys are on a roll. They are innovating faster than OpenAI and Google. Hopefully they can keep up the momentum.
Ha, every day there is a dozen new breakthroughs!
@@AG-ur1lj That's why the battle for those brilliant minds is based not on ambition but on deprivation. The big ones will acquire what they can, and the rest will be deprived and obscured. As always.
@@AG-ur1lj Powerful how? Will it scale to millions of users? Will it be safe from lawsuits or flexible enough to attract business users? I doubt that. Microsoft or Google could wait and buy anything viable, and you, even with your brilliance, will have nothing to say. As always in history.
@@AG-ur1lj You haven't realized that this technology is already paywalled and regulated. You will not profit from it - above a certain level, of course - because you will not have the resources to train those tools or the licenses to use copyrighted source data. As of now, that is not a problem for big corporations, because they just take the best solutions and use them with their data. You will probably be happy, but once more, you will not profit from it. Even if you are able to train a state-of-the-art algorithm, it will be WORSE than theirs, because they have access to all that data and those resources.
@@AG-ur1lj Have you downloaded terabytes of images and text and all the copyrighted books and proprietary magazines from the internet? I doubt that. Yet Google and Microsoft work at that scale. Since you will NEVER have access to the data, you will just become a giver of ideas to big corporations with your improvements to "open" algorithms. Without data, those algorithms just don't work. Even "open" I put in quotation marks, because when the open-source community produces some breakthrough algorithm, big corporations WILL patent some small improvement and you will be barred from using it. That is the reality, based on history. I'm amazed at your idealistic view of business.
This is fantastic! Thanks so much for the heads up.
Thank you for this news update!
Love this!!! I need this. Character consistency is my biggest problem.
If you run Hires fix after the init image is generated, you can usually cut through the noise. Go with R-ESRGAN 4x, then denoise at 0.3 or 0.2. Keep that part weak. Or, alternatively, you can drag your CFG and try to use Hires fix to add additional noise and burn if you are going for a noisy style.
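A rough intuition for why a low denoise keeps the Hires-fix pass tame: img2img-style passes only run roughly the last steps × strength denoising steps on a noised-up copy of the init image. This is a simplified sketch of that relationship, not the web UI's exact scheduling:

```python
def img2img_steps(sampling_steps: int, denoising_strength: float) -> int:
    """Approximate number of denoising steps actually run in an
    img2img / hires-fix pass: the init image is noised up partway
    and only the last `steps * strength` steps are executed, so a
    low strength (0.2-0.3) can only nudge the image slightly."""
    return max(1, round(sampling_steps * denoising_strength))

print(img2img_steps(20, 0.3))  # 6  -> small, detail-preserving changes
print(img2img_steps(20, 0.8))  # 16 -> large, image-altering changes
```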
Thanks for the video, I love watching how you present it. Keep it up!
Thank you for the support! 😊
Amazing! Thank you for making these tutorials
Many thanks for sharing the tutorials, it's a massive time saver ;D
I LOVE THIS FEATURE. Already got some awesome results on first few minutes fooling with it.
Upon seeing this I upgraded to a 12gb gpu this week so I could finally run ControlNet.
It is indeed a literal game changer for projects that need character consistency. No more Lora and prompting gymnastics while crossing your fingers that the next batch will render what you want.
It cuts the workflow to a fraction of what it was before and opens all kinds of new creative doors.
I’m loving this feature!
Happy to hear it's working out for you! ControlNet is life.
Pog, didn't notice the Update. xd
ty, Seb. Had a good day.
You're welcome! And thank you, you too 🌟
Fantastic info dude, thanks again
You bet! 🌟
I did get the smile to work, but I had to add my whole prompt so my image didn't change drastically and added the (woman smiling:1.2) at the beginning of my prompt. The Posing part was changing my image too much but I have to play some more with that. In the time you made this video they updated controlnet to v1.1.164. Thanks love your videos!
Glad you're enjoying the videos! I had to test a bunch of stuff before I got it working, and some versions barely even worked for me. Hoping new versions will make it easier to use for all.
This is exactly my experience too.
Also, "ControlNet is more important" brightens up the image for me. I can get more consistent lighting with "My prompt is more important", but that changes the image more.
I'm getting nowhere fast! Might just give up altogether! I mean, the output looks nothing, nothing like the input image! And I did everything exactly the same as in the video! ;(
Sebastian, thank you so much for doing what you are doing. I found you today and I have been watching your tutorials all day. I immediately signed up for your Patreon! So glad to have found you. I have 2 questions regarding this amazing tutorial. 1. I saw you highlight part of the prompt, "woman smiling", and then press Ctrl+Up. Could you tell me what that does? Are there any resources about those prompt tricks? 2. Would it be possible to combine 2 photos that I like to create a new one? Thank you so much again, have a wonderful day!
Ctrl+Up with text selected gives that text more weight. The default weight is 1 and applies to everything inside the parentheses. The weight is taken into account relative to the whole prompt.
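For reference, a toy sketch of those weighting rules as the A1111 prompt syntax is commonly understood: each bare pair of parentheses multiplies attention by about 1.1, and an explicit (text:1.2) sets the multiplier directly. This is a simplified illustration, not the web UI's actual parser:

```python
import re

def effective_weight(token: str) -> float:
    """Effective attention weight of a parenthesised prompt fragment,
    per the (approximate) A1111 rules: '(text:1.3)' sets the weight
    explicitly, while each bare '(' layer multiplies it by 1.1."""
    # explicit form: (text:1.3) -- the number wins over nesting
    m = re.fullmatch(r"\((.+):([\d.]+)\)", token)
    if m:
        return float(m.group(2))
    # bare nesting: each layer of parentheses multiplies by 1.1
    weight = 1.0
    while token.startswith("(") and token.endswith(")"):
        weight *= 1.1
        token = token[1:-1]
    return weight

print(effective_weight("(woman smiling:1.2)"))          # 1.2
print(round(effective_weight("((masterpiece))"), 2))    # 1.21
```

Ctrl+Up/Down in the UI just rewrites the selection into this `(text:1.1)` form and nudges the number in 0.1 steps.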
How would you recommend getting the back of a character? I am trying to grab a depth map from both sides and combine them in blender. I guess I could do head on and then 120deg turns in either direction....
Thank You Sebastian, As ever your Tutorials are informative and straight to the point...And they Work!
Happy to help, thank you for being here! 🌟
I would love to see how they pulled this off. It seems like if they can do this, then a lot of other things we don't have yet ought to be possible, like maintaining outfits or architecture. This is perfect for making comics, though, with character coherence between frames. Maybe they could even fix the coherency issue of tiling a high res image, depending on what they did, exactly. This is pretty crazy.
You can maintain an outfit with it, just prompt that outfit, or maybe use just the outfit here and the face in a separate ControlNet... you know what, gonna check that today
@@wykydytron Did you figure out how to do it? I try to use one CN for reference and one for OpenPose but can't seem to figure out how to get good results
Cant wait to try it, thx!
This seems like something they could really use to do multi-frame rendering for txt2video
Just wow, GAME CHANGER is the right set of words for this... just tried it and am utterly impressed, thanks for reporting on this!!
Awesome video👍 Your computer is so fast in generating pictures, what are your hardware specs (cpu, gpu, RAM)?
Hey there. Good content. Learning a lot on this channel! Thank you Sebastian.
How do I bring such a face (as here) into a generated image of say an assassin? Do I just carry on with my prompt as I would have? And bring a face image to controlnet?
AMAZING!! Fantastic video! Thank you for sharing it!!
Glad you liked it, you superstar, you! 😊🌟
Being able to do my characters in different 3D positions... dang, this is godlike
This has more character consistency than many 'old-fashioned' comic books :-)
@@fernando749845 😅😅 this is actually sad to hear
@@MrErick1160 My results are completely different from the reference :D :D
🌟🌟
@@fernando749845 yes but actually no.. comic books stay very character consistent unless a panel gets drawn by a different artist
This is what I was waiting for! My goodness
"I only have my shelf to blame." What a super fine hack joke. I bow, and thanks for the quality info.
Great video, thank you. I have a question: I can make a pose in img2img. When you use a batch of 4 you get 4 pictures and one pose picture. Can I save this pose? Because when I click on the pose image and use the save button it doesn't work. I don't get a download button as with a normal picture.
This is quite amazing. Even better than using LORAs and the chance to combine LORAs, seeds and ControlNet with reference methods, NICE...
BTW... I was expecting my "Wonderwall" dad joke. I'm very disappointed, mister Kamph (read it in a beautiful British Sean Connery angry tone).
You posted it after I recorded this. But I did find it very good! 😂😘
How did you get your styles menu subdivided like that? Is there an extension that does that or what?
Wow, that is amazing, great video as always.
Glad you liked it! 😊
Thank you very much for another great video.
Wanted to ask about the styles, its the first time to see this, it there any videos that you explain what it is and how to use it or can you let me know here quickly about it?
Thank you
Check the pinned comment or the video description. Install instructions and usage are in that link.
What I would like to do is inpainting with ControlNet. What I mean is: I have an image with a pose, I remove one arm for inpainting and pass in another arm pose, and the inpainting is done with that new arm pose. Is this possible? What I found is not like that
Bro is the best, thank you so much for saving a ton of time
Well, this wasn't quite what I was looking for but holy hell I got something good.
I accidentally wiped my prompts and I didn't know how to get them back... loading the image into PNG Info brought up my prompts/settings.
So, thank you for that!!!
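That recovery works because the web UI saves the generation parameters inside the PNG itself, in a tEXt chunk keyed "parameters", which the PNG Info tab reads back. A minimal stdlib sketch of reading such chunks; the PNG built below is a synthetic fragment for illustration, not a real image file:

```python
import struct, zlib

def read_png_text(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG.
    PNG = 8-byte signature, then chunks of (length, type, data, CRC)."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = {}, 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = body.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

# Build a tiny synthetic "PNG" containing only one tEXt chunk,
# mimicking how A1111 embeds the prompt under the "parameters" key.
body = b"parameters\x00woman smiling, Steps: 20"
chunk = struct.pack(">I", len(body)) + b"tEXt" + body
chunk += struct.pack(">I", zlib.crc32(b"tEXt" + body))
png = b"\x89PNG\r\n\x1a\n" + chunk

print(read_png_text(png)["parameters"])  # woman smiling, Steps: 20
```

So as long as the saved PNG hasn't been re-encoded (screenshots and most messengers strip metadata), the prompt is recoverable.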
Awesome tutorial! Thank you so much! However, for some reason SD ignores the second ControlNet and doesn't give me the pose I want. Any idea what the issue might be? Please keep making more videos!
Really curious how this could also work with inpainting and img2img at the same time. exciting!
Wonder if people have started building graphic novels with this. Consistency in character design and style between frames is going to be really useful for something like that.
Or video. 😮
You can already get consistent characters with textual inversion or a LoRA; you can train one yourself, especially a textual inversion, which needs 8 images - any more is just useless for training a TI
@@HunterIndia But then you'd need to train a model for each character... Suppose it's not that tall an order, but still, this'll make things much easier. I should start looking for some webcomics with an AI tag. Would love to see AI being utilized in that space.
that's the dream --> Video indeed, scary how much GPU power would be required
@@Pahiro I'm trying but with Blender and img2img (more fine control).
This is actually a game changer 🎉🎉🎉
Hi Sebastian, your videos are amazing!! Thanks very much. I have a question for you: do you think it's possible to make an AI model wear a real dress? For example, if I have a ghost-mannequin photo of a dress, can I generate a photo of it worn by an AI model? Please let me know, I'm new to this field and I think this could be very useful
Thank you for this excellent content!
Happy to help! 😊
when I use controlnet, it only produces an inverted image as a result of the reference even when I select reference as the control. how would I fix this?
This is insanely useful. Like I've been trying for the last week to collect images for a Lora. It can be tricky as hell because keeping characters consistent is HARD. Change just a few words and suddenly the whole piece looks like a different style. It will be SOOO easy to make Lora now thanks to this. What will they come up with next because google and openai in my opinion are doing a pretty "meh" job.
Yeah, this was my first thought too. By itself, it's great, but it can be SO useful for training Loras, which I suppose, are more accurate
Hey can you please tell me how this makes it easier for training a Lora?
@@scottyfityoga Easier to source images of a certain person, for example.
Hey Sebastian, loving your videos. I notice that I don't have any ControlNet Units in my UI. Any advice on why/how that is set up?
Settings - controlnet - multicontrolnet: It is set to 3 by default. If you set it to 2, it should work.
Hi ,
very good tutorial .
I tried my own image as input for the ControlNet with reference_only, and a simple prompt like "man is smiling", and the faces are totally different. How can I preserve the face?
Thanks
Eran
I've tried it... I don't get it to render anything even close to the likeness of the input image 😥
Thank you!
Thank you, this is very helpful😉
You are on the top of your game Seb! Go king!
Thanks superstar! 🌟
Hello! First of all, thanks for your videos, because I'm learning so much! I wanted to ask if there is a way to add real objects to an image, for example a model holding a real bag. Thank you
Do I have to follow this procedure if I want to take one image from the img2img window and apply OpenPose to it to get different variations? Or is there a simpler way?
Thanks, great video and straight to the point. Liked, subbed and commented !!!
The holy trinity! You're the real mvp 🌟
Thanks again. The first try failed, but I will attempt it again soon. A question: after you get it to draw the character correctly, can you then load a ref pic of the costume only and use inpainting with it to give the character a chosen costume?
I wanted to ask if there is a way to have two LoRAs in the same image. Do you know how to do it? Could you make a tutorial about it? Thanks
If you keep injuring yourself, it's time to book an appointment to learn some shelf improvement.
This looks amazing, I keep meaning to look into ControlNet more but never seem to get around to it. Cheers.
What is the difference between Reference Only and Roop? Thanks
What a gamechanger...my goodness!
Do you have any tutorials on how to create professional self-portraits? I want to look pretty on LinkedIn lol
Hello there, I have a question regarding ControlNet. I have seen that, using a 3D model, you can make poses and use OpenPose to extract them. Now in this video I have learned that you can use any face as a reference and even combine it with OpenPose. My question is: I have a whole finished 3D model of my character, e.g. a 3D anime character in Blender; it has its own face and clothing. I would pose my 3D model and take a picture of it. How can I use ControlNet so it uses the reference picture and generates an image with the same face and clothing? Is there any way?
thank you for your work.
Thank you for your engagement! 🌟
Unfortunately, it doesn't work for me. The generated images all look like the same person, but they don't resemble the person in my original image. It's like my image is completely ignored.
Totally the same. I'm getting a whole different face...
Do you use Mac M-series processors? Because I do, and there is a bug when it tries to process the uploaded face.
Hi there! I am a new subscriber and quite new to SD. I was just wondering about the styles: are they something you set up, or do they come by default with the Deliberate v2 safetensor model? Thank you so much.
It's in the video description. Free styles link.
10x for the update! Reassuring not to have to learn how to train and fine-tune. I wonder if you can just keep using the same reference face in ANY different scenario; then we've got ourselves a character mapped by seed only
It's still very clunky but we can see the future here. I want to be able to adjust it like making an MMO character, then dress it however I see fit, then put the character in any scene I want, in any pose I want, talking/singing/dancing/whatever. We are so close to that now, it is so exciting!
There is a ControlNet that allows easy outfit swaps; my poor memory can't recall its name - it has 3 versions, the first ends in 20, if that helps. Anyway, it detects what's in the picture and paints it in corresponding colors; then you just say you want the person in X outfit and it will change the clothes, but the rest will remain unchanged
@@wykydytron Segmentation.
Hm, this one doesn't work so well for me. I tried different kinds of photos but they end up soooo far off, which is weird since I used the same settings as you and tried other checkpoints, sampling methods and longer sampling steps, but still nothing close to yours.
Any tips or ideas what it might be?
I'm actually doing even more crazy things with tile. But yeah, reference ones are great too.
That was one of the missing features: the ability to keep the same character. Still not perfect, but we are getting there! I now wonder if it will become possible to generate a few good-looking images and train a Dreambooth on them. That way you can reuse the face only as an inpaint
Wondering the same thing. Things like copying over styles, like a person's clothes and the patterns on the clothes etc., to the generated images. Does Midjourney Remix do that?
Hello sir, can you help with my query about hands and fingers deforming when creating artwork, like extra legs and extra fingers? Sometimes they go missing or are not correct per human anatomy. I tried negative prompts, but the issue still remains.
Best channel. period.
Does it work with multiple pictures of reference?
This video metaphorically "saved my life", because I was just looking for a way to use the same character over and over without the hassle of creating my own model in Dreambooth or whatever it is called.
Happy to hear it! Also check my latest video on Roop for this.
oh i meant to comment one of the last times. I downloaded the styles file from your link and put it in my SD folder as instructed on the link, and they aren't on my styles tab on Auto1111. am i missing something?
great job sebastian. is there a website where i can try this out?
Would you share your PC setup for running Stable Diffusion? Or at least your GPU?
Is it possible to use this to get a different angle of a specific environment in the same style? No people or characters, just an environment.
Yeah, but it won't be 100%. It's like a better img2img
I've followed the same steps but my pictures come out nothing like my original. I am enabling it and selecting 'reference only', but the new pictures look nothing like me
Very strange! I updated everything, turned everything on exactly the same way, uploaded a picture, but the result is completely random. It does NOT work!
The same. It works only with some demo pictures (perfect face, no subtle expressions, no background). And OpenPose misses the front/back pose 70% of the time
Yes, same thing.
Can I achieve something like Midjourney's mixing with ControlNet? For example, a human photo and a potato, and get something in-between?
Great video, amazing tool!!
Thank you! ControlNet is so powerful, it blows my mind. And I'm not exaggerating.
In order to have the openpose model, you have to download the models separately, right?
Has anybody figured out why there are no models coming with ControlNet v1.1.234? I tried to use this version and nothing worked - ControlNet was just ignored for everything (canny, pose etc.). I could not select any models for any preprocessor, as the Models dropdown list was empty. I downloaded one model for OpenPose and put it in the models folder in extensions, and now I can select this model for pose and it has all started working. I installed ControlNet from Automatic1111, but it only puts the yml files for the models in the required folder, not the actual models themselves.
How is this different from img2img? I played with it and don't see a difference
Which one of the videos above explains the style details you referred to, saying they are in the description?
The free prompt styles link. Also top pinned comment.
Hello, how do you increase the importance or weight of the prompt so quickly? Do you use keyboard shortcuts?
As seen in the video: 02:07
Ctrl + arrow up/down
Do you have a video on training models, or how to set up LoRAs and such? I'm a lil dumb, ty
6:11 can this be applied to the full body pose? I noticed the generated image only focuses on the half of the body.
My laptop has only 4 GB of VRAM, so not a good start already 😅 but I was able to generate at 1024×1024 resolution. After updating Automatic1111 I can't generate above 512×512, and I also can't use ControlNet; every time, the VRAM usage goes through the roof. Then I upgraded to Torch 2.0 but it still didn't help.
Torch 2 definitely decreased my generation time though, ngl.
What should I do?? I want to use ControlNet.
Great videos. But I always have to crank up the sound to max to listen. 😊
Can you then use a ref pic of clothing and in-painting to set chosen clothes?
There's actually one thing I was thinking about... After version 1.1, ControlNet started implementing something new almost every week. First of all, new preprocessors. So I'm pretty curious about when there is going to be an actual counterpart to Midjourney's Remix mode...?