I have absolutely no idea what is happening here.... but it was amazing and I watched it all in fascination.
This channel will be huge, I'm sure.
Currently 31 subscribers... 32 now 👍
Thanks for this concise explanation of the ControlNet models. Great examples and very well explained.
I really enjoyed the way you laid this out with consistent examples across each model so the effect of each was more clear.
The best ControlNet tutorial! 😂❤
This clears up all doubts about what ControlNet really is. I'd love to know more about the UI in A1111.
This might help:
(text) ---> (img) is txt2img; with ControlNet it'd be (text) ---> [control: "conform to this outline or shape or depth"] ---> (img)
img2img with ControlNet is: (text + img) ---> [control] ---> (img)
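If it helps to see that same flow as code, here's a minimal sketch of the (text) ---> [control] ---> (img) path using the diffusers library rather than the A1111 UI from the video; the model IDs and file names are just examples, not what the video uses.

# Rough sketch of txt2img with ControlNet via the diffusers library (not the A1111 UI).
# Checkpoint names and the control image file name are illustrative assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The control image is the "conform to this outline" part of the diagram above.
control_image = load_image("canny_edges.png")  # assumed local edge map
image = pipe("a cozy living room, photorealistic", image=control_image).images[0]
image.save("controlnet_txt2img.png")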
Good Examples, thank you.
Agree with the above comment - concise and valuable.
Great!! Very good explanation.
Thank you very much. You deserve a new subscriber!
Thanks for the explanation of the different models. How do you use the preprocessed image and the original image in Midjourney? Do you upload and use the images as a reference image?
Great job. Did you use both the preprocessor and model version of every technique for these examples? Can you use one without the other and/or can you mix and match? Thanks.
Great video fam, do you know if it's possible to export the calculation the models make? Like if I wanted to export just the depth map or normal map it makes.
If you go to the settings for the extension you can enable this. You need to save the "detected_maps".
They'll then be available in extensions/sd-webui-controlnet/detected_maps
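If you'd rather compute the map outside the webui instead of digging it out of the extension folder, the controlnet_aux package exposes the same preprocessors. A small sketch, assuming controlnet_aux is installed and using an example input file name:

# Sketch: computing a depth map with the MiDaS preprocessor from controlnet_aux,
# rather than saving it from the extension's detected_maps folder.
from controlnet_aux import MidasDetector
from PIL import Image

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
depth_map = midas(Image.open("input.jpg"))  # assumed input photo
depth_map.save("depth_map.png")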
Nice video man. Thanks. Is there a good model for adding furniture? E.g. I upload a photo of an empty living room, set the style in the prompt, and let the model keep the same room structure but add several pieces of furniture and decoration?
Thanks
What model and prompts did you use for the company logos, like Burger King etc.? I'm struggling to recreate something similar :/
Timestamps between the different models would have been helpful.
I have a "ControlNet" folder with "model" folder inside with pixar, monaliza, etc in pt. files how use it ?please
"30 something couple, so an older couple". Wasn't feeling old today... until
I have a Mac with an M1 chip. When I run ControlNet it takes hours to get a result. Any tips on how to speed it up?
Why don't you show the setting in the GUI? Does the video explain the preprocessor (canny) or the model (control_canny-fp16 [e3fe7712])?
Where can I get the LeReS depth model for ControlNet?
You can use the existing depth model; LeReS only gives you a different depth map to feed into it.
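In other words, the preprocessor and the ControlNet model are separate steps. A sketch of swapping in the LeReS annotator, again via the controlnet_aux package (assuming it exports LeresDetector; the file names are examples):

# Sketch: LeReS is just a different annotator; its output feeds the same depth ControlNet.
from controlnet_aux import LeresDetector
from PIL import Image

leres = LeresDetector.from_pretrained("lllyasviel/Annotators")
depth_map = leres(Image.open("input.jpg"))  # assumed input photo
depth_map.save("leres_depth_map.png")  # then use it with the regular depth model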
Hi, I have tried everything and I never get something based on what I upload to the AI. It always does what it wants. I have all the models, styles, everything, but it just doesn't do it.
Hi, I can't find ControlNet. Is it a plugin?
This video will help: czcams.com/video/uUizoFA7OYY/video.html
I can't change the model in ControlNet, it only shows None. Can someone tell me please?
You need to download the models and put them in the right folder, see czcams.com/video/uUizoFA7OYY/video.html
Man I really want to watch this but 1)the music is instant migraine-inducing. No way in hell I'm sitting through 20 minutes of that. 2) it took you three minutes to say "These are our two test images". No.
cry and stay dumb