NEW ControlNet for Stable diffusion RELEASED! THIS IS MIND BLOWING!

  • Published 8 Jun 2024
  • ControlNet can transfer any pose or composition. In this ControlNet tutorial for Stable diffusion I'll guide you through installing and using ControlNet. ControlNet is a neural network structure that controls Stable diffusion models by adding extra conditions.
    Open cmd, type in: pip install opencv-python
    Extension: github.com/Mikubill/sd-webui-...
    Updated 1.1 models: huggingface.co/lllyasviel/Con...
    1.0 Models from video (old): huggingface.co/lllyasviel/Con...
    FREE Prompt styles here:
    / sebs-hilis-79649068
    How to install Stable diffusion - ULTIMATE guide:
    • Stable diffusion tutor...
    Chat with me in our community discord: / discord
    Support me on Patreon to get access to unique perks!
    / sebastiankamph
    The Rise of AI Art: A Creative Revolution
    • The Rise of AI Art - A...
    7 Secrets to writing with ChatGPT (Don't tell your boss!)
    • 7 Secrets in ChatGPT (...
    Ultimate Animation guide in Stable diffusion
    • Stable diffusion anima...
    Dreambooth tutorial for Stable diffusion
    • Dreambooth tutorial fo...
    5 tricks you're not using
    • Top 5 Stable diffusion...
    Avoid these 7 mistakes
    • Don't make these 7 mis...
    How to ChatGPT. ChatGPT explained:
    • How to ChatGPT? Chat G...
    How to fix live render preview:
    • Stable diffusion gui m...
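
The description's setup steps can be sketched as a script. This is a rough sketch, not the video's exact commands: it assumes a standard AUTOMATIC1111 stable-diffusion-webui install, and the extension repo name (Mikubill/sd-webui-controlnet) is assumed from the truncated link above.

```shell
# Preprocessors such as Canny need OpenCV:
pip install opencv-python

# Install the extension (the same thing the webui's Extensions tab does);
# repo name assumed from the truncated link in the description:
cd stable-diffusion-webui/extensions
git clone https://github.com/Mikubill/sd-webui-controlnet

# Downloaded ControlNet model files (.pth / .safetensors) then go into
# stable-diffusion-webui/extensions/sd-webui-controlnet/models
```

After restarting the webui, the ControlNet panel appears in txt2img and img2img.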

Comments • 505

  • @sebastiankamph
    @sebastiankamph  1 year ago +4

    Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
    Please support me on Patreon for early access videos. It will also help me keep creating these guides: www.patreon.com/sebastiankamph

  • @IIStaffyII
    @IIStaffyII 1 year ago +510

    This is the reason that its so important that Stable Diffusion is open source.

    • @losttoothbrush
      @losttoothbrush 1 year ago +20

      I mean its cool yeah, but doesnt it steal art from Artist that way?

    • @JorgetePanete
      @JorgetePanete 1 year ago +1

      it's*

    • @IIStaffyII
      @IIStaffyII 1 year ago +54

      ​@@losttoothbrush
      Open source just means people can access the source code and therefore add to the tool.
      Being open source is not directly contributing to the "stealing" issue. Although indirectly it can make it more accessible.
      In the end it's a tool and I'd argue what you make with it may be transformative work or not.

    • @Mimeniia
      @Mimeniia 1 year ago +12

      People "artists" cling to their prompts like their lives depend on it.
      Asking them to share is like squeezing blood from a stone.

    • @verendale1789
      @verendale1789 1 year ago +30

      @@losttoothbrush Well, yknow, if we are gonna steal art, at least make it public and for everyone instead for one big corpo having the goods, hell yea brotha

  • @user-zv6su5cp6o
    @user-zv6su5cp6o 11 months ago +1

    Man, you are incredible! So good and simple. I installed Stable Diffusion with one of your videos, and now I'm ready to install ControlNet. I am officially your fan!! Thanks for everything!! Greetings from Corfu, Greece

  • @GrandHorseMusic
    @GrandHorseMusic 1 year ago +14

    Thank you, this is really helpful. My "pencil sketch of a ballerina" had three arms and no head, but eventually I generated something usable. It's all absolutely fascinating and it's been fun to learn over the past week or so.

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Glad it was helpful! And we've all struggled with the correct amount of body parts 😅

  • @woszkar
    @woszkar 1 year ago +8

    This is probably the most useful thing for SD. Thanks for showing us!

  • @marcelqueiroz8613
    @marcelqueiroz8613 1 year ago +7

    Really cool. Things are evolving pretty fast! Thanks

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Right? This is moving extremely fast. I'm hyped for what's more to come! 🌟

  • @sharadrbhoir
    @sharadrbhoir 1 year ago +195

    As a drawing teacher with 33 years of experience teaching school kids how to draw and paint, one thing is for sure: AI cannot replace human creativity. But I must say this will surely help many people with poor drawing skills unleash their creative thoughts and imagination, which for a teacher like me gives immense hope of a revolution in the arts!
    Thanks for such an easy and helpful tutorial on this topic!

    • @mike_lupriger
      @mike_lupriger 1 year ago +8

      ​@@ClanBez Same, I see the possibility of working on multiple projects as a designer. Tedious parts of the process are getting automated. Super excited to keep exploring!! Will get more time for vacation, well I hope! 🤞 PS: In my area, high school art teachers are referred to as drawing teachers and college art teachers as art teachers. Yeah, it's a little weird.

    • @rushalias8511
      @rushalias8511 1 year ago +16

      Honestly refreshing to see some people be so open-minded about this. AI art is often viewed as a job killer, but honestly speaking, look at so many incidents from the past. When digital art first started, I'm sure millions of artists who worked hard with paint, pencils, ink and every other form of traditional art felt threatened by it.
      Why pay a guy to paint a logo for you when you can use a paint tool? Among so much other stuff.
      But look what happened: digital art is so common now because it's quicker, cheaper and more flexible. If you made a mistake in a real-life painting, you didn't have an undo button or an eraser.
      Just like digital art gave so many new individuals a chance to make art, so too does AI. It's all in how you use it.

    • @pedrovitor5324
      @pedrovitor5324 1 year ago +7

      People feel threatened because a lot of artists still live off commissions (btw, they aren't wrong for doing that, it's "easy money"). When you're a teacher in an art school, it's easy not to feel threatened by AI art.
      Don't get me wrong, I'm not here to sound mad or anything, I'm just telling the truth. I agree AI art will revolutionize the way we think about creativity, and I also think it won't destroy art (at least not completely); people will still have their communities of non-AI art. But it's undeniable that AI art has tons of legal issues, and the AI is pretty bad right now. I was very rarely unable to spot whether an artwork was AI or not.

    • @viquietentakelliebe2561
      @viquietentakelliebe2561 1 year ago

      yeah, but it can sure enhance what skill you have yet to acquire or lack the talent for

    • @lilacbuni
      @lilacbuni 1 year ago +5

      @@viquietentakelliebe2561 How can u enhance a skill ur not practising? drawing a squiggle then letting ai complete the work based off actual artist's work isn't YOUR imagination or skill and u still learn nothing. ur not doing any of the work the ai is

  • @agusdor1044
    @agusdor1044 1 year ago +7

    This is gold, and Im talking about your video, dude. Really well explained, very detailed, thanks a lot!

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Why thank you for the kind words, that's really thoughtful of you 😊🌟

  • @1salacious
    @1salacious 1 year ago +1

    Another good easy to follow tutorial, thanks Seb 👍

  • @justinwhite2725
    @justinwhite2725 1 year ago +10

    This looks amazing. My drive is full but I definitely want to play more with this.

    • @sebastiankamph
      @sebastiankamph  1 year ago +3

      Throw away the other models and get this, it's fantastic! If you only have space for one, get the canny model.

    • @justinwhite2725
      @justinwhite2725 1 year ago +2

      @@sebastiankamph I'm going to get a new hd after work today. 2tb or so. My stable diffusion folder is 500gb.
      I'm also a little nervous since I have an AMD card I'm not sure if this will work on the CPU, but I'm working on building a new computer soon.

  • @VIpown3d
    @VIpown3d 1 year ago +22

    This is the second best thing right after Ikea Köttbullar

  • @Jaxs_Time
    @Jaxs_Time 1 year ago

    Brah, your camera is so nice..... Love to see the commitment to your craft. Keep it up fam

  • @MONGIE30
    @MONGIE30 1 year ago

    Set this up yesterday, it's pretty amazing

  • @artistx8512
    @artistx8512 1 year ago

    I messed with this already... seems like the first step to something amazing!

  • @dommyshan
    @dommyshan 1 year ago +2

    That is really awesome :D Gonna try the scribble! I've been having horrible varied results of deformed humans and I was getting sick of it. Haven't touched SD since. Now this changes! :D

  • @daconl
    @daconl 1 year ago +42

    If you want to use the source image as ControlNet image, you don't have to load the ControlNet image separately (it will automatically pick the source image when no image is selected). Saves some time. 🙂

    • @Naundob
      @Naundob 1 year ago

      I wonder why img2img is used at all since ControlNet is meant to do the job now instead of the old img2img algorithm, right?

    • @superresistant8041
      @superresistant8041 1 year ago +1

      @@Naundob ControlNet can create from something whereas img2img can create from nothing.

    • @Naundob
      @Naundob 1 year ago

      @@superresistant8041 Interesting, isn't img2img meant to create a new image from an image instead from nothing?

    • @daryladhityahenry
      @daryladhityahenry 1 year ago +2

      Please please please finish these arguments... I don't understand what you both talking about hahaahahah. And give conclusion please. Thanksss

    • @ikcikor3670
      @ikcikor3670 1 year ago +4

      ​​@@Naundob img2img gives you way less control, basically you pick "denoising strength" which at 0.5 basically tells AI "this is a 50% done txt2img image, half way between random noise and desired result, continue working on it until the end" so you have to look for golden middle between your image not changing at all and changing way too much. Controlnet can be used both in txt2img and img2img and it has many powerful features like drawing very accurate poses, keeping lineart intact and turning simple scribbles into actual art (where with normal img2img you'd end up with either an ugly result or one that doesn't resemble the doodle almost at all)
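
The "denoising strength" behaviour described in the comment above can be made concrete. This is a minimal sketch in plain Python that mirrors the step-skipping logic used by common img2img implementations (e.g. diffusers); the function name and exact rounding here are illustrative assumptions, not taken from the video:

```python
def img2img_steps(num_inference_steps: int, strength: float) -> int:
    """Return how many denoising steps actually run in img2img.

    strength=0.0 leaves the input image untouched (no steps run);
    strength=1.0 ignores it entirely (all steps run, like txt2img);
    strength=0.5 treats the input as a "50% done" image, as described above.
    """
    # The input image is noised up to this timestep, and the sampler
    # then denoises only over those remaining steps.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    return init_timestep

print(img2img_steps(50, 0.5))  # 25
print(img2img_steps(50, 1.0))  # 50
```

ControlNet sidesteps this trade-off because its conditioning (pose, edges, depth) is applied at every step, so you don't have to balance "not changing at all" against "changing too much".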

  • @Dante02d12
    @Dante02d12 1 year ago +2

    The pose algorithm is EXACTLY what I've been looking for. Thanks for this video!
    Hopefully I'll manage to install it. Last time I tried to use extensions, Stable Diffusion just refused it and I had to reinstall everything, lol.
    EDIT: Ok, I installed it, and it works! Sadly, the Open Pose model seems... capricious. It often doesn't give me any skull. The Depth Map works wonderfully though.

  • @bongkem2723
    @bongkem2723 10 months ago +1

    great video on controlnet man, thanks a lot !!

  • @jubb1984
    @jubb1984 1 year ago +7

    Thanks for this well put together tutorial on how to get it going!
    This is kinda what i was hoping for, turning my b&w line art into ai generated images =D, lotsa scribbles here i come!

  • @TonyRobertAllen
    @TonyRobertAllen 1 year ago

    Super helpful content man, thank you for making it.

  • @blackswann9555
    @blackswann9555 1 year ago

    Installing controlNet !!!! eeeeeek great tutorial so much fun!

  • @MaxWeir
    @MaxWeir 1 year ago +1

    I had Pingu vibes at the end, this is quite an amazing update.

  • @cassiosiquara
    @cassiosiquara 1 year ago

    This is absolutely amazing! Thank you so much!! s2

  • @BryGuy_TV
    @BryGuy_TV 1 year ago

    Controlnet is insane. Thanks for the examples

  • @dhavalpatel3455
    @dhavalpatel3455 1 year ago

    Thanks for explaining this.

  • @Argentuza
    @Argentuza 10 months ago

    You have taught me so much, thank you very much!

  • @jzwadlo
    @jzwadlo 1 year ago

    Great video thank you brother!

  • @dancode9738
    @dancode9738 1 year ago

    got it working, great video.

  • @Refused56
    @Refused56 1 year ago +27

    Since I've been playing with ControlNet I am in a constant state of awe and disbelief😮 Truly game changing. What I really like is the possibility of rendering higher resolution images with that much control. Does anyone have a tip on applying a certain color scheme when using ControlNet? Probably something we have to wait for until the next SD revolution hits. So roughly 5 days.. (me making sounds of pure excitement and slight fatigue at the same time).

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Hah, I totally feel you. I'm hyped for every new update, and then I look at the list of all the videos I want to do.

    • @deadlymarmoset2074
      @deadlymarmoset2074 1 year ago +5

      Try using the base picture in the img2img for the colors and tone you want, use a de-noising strength of like 70+,
      (it can be of a completely unrelated subject and different aspect ratio)
      Then set the text prompt to the subject you want. Additionally, you can set the base ControlNet image to the pose and subject you're looking for.
      This is creating a relatively new image however, not color grading an existing one, still, it is an interesting way to control the general vibe and keep consistent colors between renders.

    • @sergiogonzalez2611
      @sergiogonzalez2611 1 year ago

      @@sebastiankamph Sebastian, great channel and content! I have a doubt: does this extension work with Stable Diffusion 1.5 models?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      @@sergiogonzalez2611 Works with all models; the majority of my testing has been on 1.5.

    • @prettyawesomeperson2188
      @prettyawesomeperson2188 1 year ago

      I'm having trouble getting it to work. I'm lost. I tried, for example, scribbling a poorly drawn dog and prompting "A photorealistic dog" (with openpose, canny, depth), and the only time I got a photorealistic dog was when it output a black image; otherwise it just spits out a 3D image of my scribble. Hope that made sense.

  • @matthallett4126
    @matthallett4126 1 year ago

    Very helpful.. Thank you!

  • @namds3373
    @namds3373 1 year ago

    amazing video, thanks!

  • @CoconutPete
    @CoconutPete 3 months ago

    controlnet is king from what I can tell.. so far

  • @rayamc607
    @rayamc607 1 year ago

    It'll be so much better when somebody actually puts a proper UI on all of this.

  • @EmanueleDelFio
    @EmanueleDelFio 1 year ago +2

    Thanks Seb ! you are my Obiwan Kenobi of ai !

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Thank you as always my friend! Your supportive attitude is a national treasure 🌟

  • @Dessme
    @Dessme 1 year ago

    The audio is SUPER👌👍

  • @jameshughes3014
    @jameshughes3014 1 year ago +5

    I feel silly, but I hadn't tried this yet because I dont have 50 gigabytes of free drive space. It didn't occur to me that I could just install part of them. This is truly amazing stuff, I'm looking forward to seeing how animations look with this tool.

  • @noonelivesforever2302
    @noonelivesforever2302 1 year ago +1

    ooohhh, someone that explains things the way it should be done. ty

  • @ArtbyKurtisEdwards
    @ArtbyKurtisEdwards 1 year ago

    another awesome video. Thanks!

  • @coloryvr
    @coloryvr 1 year ago +4

    ...WOW! ...the next growth spurt of SD...people say AI makes us stupid but i haven't learned so much since AI crashed into my life...Big FANX for keeping us up to date!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      So much new information entering our heads 😅 Thanks for the support! 🌟

    • @conorstewart2214
      @conorstewart2214 1 year ago

      AI does and will make people stupid, in the sense they don’t need to learn anything themselves they just ask an AI to do it for them. You are learning because you are interested in it and it is new, once it becomes more prevalent it will most likely stop being open source and people will just be interested in the results, not how it works.

    • @coloryvr
      @coloryvr 1 year ago

      I agree with many things and I think that children should not have access to generative AIs until a certain age ((16?). However, I have no idea how to remove open source software from millions of private PCs (?).
      My biggest concern is that the AIs will greatly increase the general smartphone addiction.
      (I don't have one myself and don't want one either).
      But: I love "painting" and filming in VR... and thanks to the new AIs, I now have the potential of an entire animation studio at my own disposal.... BTW:
      The absolute nightmare are AIs that develop weapons, toxins, etc. as well as the AI-based mind-reading technology that is already pushing onto the markets...

  • @chariots8x230
    @chariots8x230 1 year ago +9

    Pretty awesome! 😍 Now I’d like to know if there’s a way to apply these poses to our own custom characters, instead of just random characters. 🤔
    Is it possible to pose two of our original characters together?
    Also, it’s nice that we can copy the pose, but can we also copy facial expressions into our characters?

    • @sebastiankamph
      @sebastiankamph  1 year ago +11

      Yes and yes! 🌟 It might be a little tricky to get exactly what you're looking for though, but it is possible. I would inpaint each character separately to get the original features.

  • @paulgomez3318
    @paulgomez3318 1 year ago

    Thank you for this mate

  • @JesseCotto
    @JesseCotto 2 months ago

    If you lower the weight to zero it will cost you an arm and a leg. Brilliant! Thanks for your video! Definitely highly valuable content.

  • @nackedgrils9302
    @nackedgrils9302 1 year ago +1

    Thanks for sharing your experience! I'd kind of given up on SD because my computer is way too slow (5-10min to generate a 512x512 Euler a image) but when I came back to the community last week, everyone was creaming their panties over Controlnet and I had no idea why. Thanks to your explanation, now I kind of understand but I guess I'll have to try it myself some day once I can afford a better computer.

  • @dustyday837
    @dustyday837 9 months ago

    another great video!

  • @devnull_
    @devnull_ 1 year ago

    Thanks, another well done video. One annoyance: are those two dropdowns really needed? It seems like preprocessor type and model go hand in hand? Or is it some UX decision made by the extension author?

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Thanks! Honestly, I couldn't say. It's still too early; let's see how it ends up as people explore the tool more.

  • @doze3705
    @doze3705 1 year ago +7

    I'm trying to find a way to have SD include character accessories accurately and consistently. Like having a character holding a Gameboy, or some other specific device. Would love to see a video breaking down how to train SD on specific objects, and then how to include those objects in a scene.

  • @Agent-Spear
    @Agent-Spear 1 year ago

    This is really a Game Changing feature!!!

  • @roger7641
    @roger7641 1 year ago

    How challenging would it be to add your own training data (not sure if that's the correct term) for this stack to use?
    Let's say I was getting too much of a certain style and wanted to do something totally different.

  • @eddybeghennou8682
    @eddybeghennou8682 1 year ago

    amazing thanks

  • @MrMikeIppo
    @MrMikeIppo 1 year ago

    What stable diffusion checkpoint do you recommend? Does it change anything picking a different one apart from the first image generation?
    Amazing video! Got everything up and running

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      I've been playing a lot with Dreamshaper and variants of Protogen lately, but there are a lot of good ones out there.

  • @robcorrina8897
    @robcorrina8897 1 year ago

    I had difficulty cutting through the jargon. thanks man.

  •  1 year ago +1

    Does the preprocessor always have to match the ControlNet model? I was using it with mostly no preprocessor selected and it seems to still work? I thought it was only an optional thing that lets you do an additional pass.

  • @emmettbrown6418
    @emmettbrown6418 1 year ago +1

    For the Openpose, is there a way to get the coordinates of the joints in the pose?
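
On whether joint coordinates are accessible: the original OpenPose estimator (not necessarily this webui extension) can write its detections as JSON, where each person's `pose_keypoints_2d` is a flat list of `(x, y, confidence)` triples. A sketch of grouping that list back into joints, using made-up example values:

```python
def parse_keypoints(flat):
    """Group OpenPose's flat [x0, y0, c0, x1, y1, c1, ...] list into
    (x, y, confidence) triples, one per joint."""
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

# Example values are made up; in OpenPose's BODY_25 ordering,
# joint 0 is the nose and joint 1 the neck.
joints = parse_keypoints([312.0, 104.5, 0.93, 310.2, 180.0, 0.88])
print(joints[0])  # (312.0, 104.5, 0.93)
```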

  • @emmasnow29
    @emmasnow29 1 year ago +21

    This is an AMAZINGLY useful tool. Another big step for A.I art.

    • @sebastiankamph
      @sebastiankamph  1 year ago +3

      Couldn't agree more! Real game changer 🌟🌟🌟

  • @jonathaningram8157
    @jonathaningram8157 1 year ago +1

    I'm convinced the future of AI-generated pictures will be a mix with 3D models. Like you make a precise pose in 3D and apply Stable Diffusion on it, so it has precise information about depth in the scene, and that will achieve truly photorealistic renders.

    • @martiddy
      @martiddy 1 year ago

      You can do that already with ControlNet

  • @fynnjackson2298
    @fynnjackson2298 1 year ago +2

    For storyboarding this is insane.

  • @MatthewEverhart
    @MatthewEverhart 1 year ago +1

    Thank you for the tutorial - I am not getting the two images when I generate from ControlNet - just the one.

  • @royceahr
    @royceahr 1 year ago +2

    Sebastian, I get this error when I try typing pip install opencv-python: 'pip' is not recognized as an internal or external command, operable program or batch file. Any idea what is wrong?
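
That error usually means Windows can't find pip on the PATH. A common workaround (a sketch, assuming Python itself is installed) is to invoke pip as a Python module instead:

```shell
# On Windows, if "pip" is not recognized, Python's Scripts folder is
# probably not on PATH. Calling pip through Python avoids that:
#
#   py -m pip install opencv-python
#   python -m pip install opencv-python
#
# The same module-invocation form works on any platform:
python3 -m pip --version
```

If that also fails, re-run the Python installer and tick "Add Python to PATH".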

  • @GrayFates
    @GrayFates 1 year ago

    Does Stable Diffusion rely on metadata created when it generates the sketch or the original image to generate the reposed image? I'm wondering because I think it would be interesting to upload hand-drawn sketches for the pose sketch and have Stable Diffusion redraw an image based on that.

  • @StefanPerriard
    @StefanPerriard 1 year ago

    This is truly mind-blowing. Thank you for sharing. What version of Stable Diffusion are you using, 1.5 or 2?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Both! Your Stable diffusion program is not version dependent. It's the actual model .ckpt or .safetensors file that has a version. 1.5 is great for illustrations, while 2.1 does a great job with photorealistic portraits.

  • @leilagi1345
    @leilagi1345 1 year ago +1

    Hi! Very useful video, I got intrigued, but how do I do it all in Google Colab, especially the first steps in the Command Prompt (cmd)? Is it possible?

  • @TrashMull
    @TrashMull 2 months ago

    Hello Sebastian Kamph,
    I really like your channel and the way you talk and make these very comprehensive videos. I learn a lot from you and thank you very much for that. Please never change the style of your videos (calm, stable, precise).
    Of course I have a question. I am concerned about the pickle files from lllyasviel. Does pickle mean that they can harm your PC? If so, what safetensor files would be the alternative?
    thank you very much and have a nice day.
    Best Regards

    • @sebastiankamph
      @sebastiankamph  2 months ago

      Hey! Thank you! Yes, safetensors are pickle-free and safe. But the official files from lllyasviel are safe too

  • @wowclassicplus
    @wowclassicplus 1 year ago

    Thanks a lot. It only works with 1.5 though, but I found that out, so all good :)

  • @CoconutPete
    @CoconutPete 3 months ago

    Having issues with the scribble brush color.. it seems as if I'm drawing with a white brush on a white canvas

  • @thorminator
    @thorminator 1 year ago

    This is nuts! 🤯

  • @Gerard_V
    @Gerard_V 1 year ago

    Fantastic! thanks for the tutorial! let's play!

  • @LeChaunpre
    @LeChaunpre 1 year ago +1

    Any clue why the ControlNet models take a while to load for me? I've had the same issue with safetensors models

  • @DanielS-zq2rr
    @DanielS-zq2rr 1 year ago

    What GPU do you have? I noticed you generate stuff way faster than I'm able to.
    Thanks for the tutorial btw

  • @AnimatingDreams
    @AnimatingDreams 1 year ago +2

    My question is: Can you give SD a character in the img to img tab and use ControlNet to pose them, thus having a near identical character from the img to img one, just in a different pose?

    • @Max_Powers
      @Max_Powers 1 year ago +1

      I would like to know the answer to this too

  • @OriBengal
    @OriBengal 1 year ago +1

    On Civitai there are "extracted" safetensors versions of the ControlNet models, 700MB instead of 5-7GB each.

  • @hazencruz
    @hazencruz 2 months ago

    what do i do if my canvas won't show any marks even after inverting the preprocessor?

  • @user-pv7fm9ep5e
    @user-pv7fm9ep5e 1 year ago

    Thank U

  • @messer_sorgivo
    @messer_sorgivo 1 year ago

    Super useful tutorial. I have one question: my Stable Diffusion does not show Scribble Mode next to Enable; I have Invert Input Color, RGB to BGR, Low VRAM and Guess Mode. Why is that?

  • @user-ri8to4rd5u
    @user-ri8to4rd5u 1 year ago

    When I open the pre-processor tab there is a long list of processors to choose from, also processors I have not installed (manually). For instance, there are 3 scribble processors: scribble_hed, _pidinet and _xdog - which one to choose? It is also hard to invert the sketch from black to white

  • @mlnj144
    @mlnj144 1 year ago

    ty!!

  • @e1123581321345589144
    @e1123581321345589144 1 year ago

    How does it handle larger images? I played a bit with version 1.6 and got a lot of out-of-VRAM exceptions for things like 1000x800 pixels, and I have 12GB of video RAM.

  • @CoconutPete
    @CoconutPete 3 months ago

    controlnet is amazing.. still trying to figure out the HED model

  • @ZakZky007
    @ZakZky007 1 year ago +1

    Thanks for the explanation! Just asking, the checkpoint you've got there, is it self-made? Or can I get it from somewhere? If I use v2-1_768-ema-pruned.ckpt, I get this error: "RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x1024 and 768x320)". Any idea?

    • @Mrig87
      @Mrig87 1 year ago +1

      I get the same... any ideas ?

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Check Civitai for models. I recommend finetuned 1.5 models.

    • @Mrig87
      @Mrig87 1 year ago

      @@sebastiankamph yup I figured this was because I used 2.1 models, 1.5 works !

  • @pdealmada
    @pdealmada 1 year ago

    Is there a way to clone an object or a person with the background with Inpaint? What would be the prompt? Ty

  • @3dzmax
    @3dzmax 9 months ago

    Hi, thank you for this. I'm very interested, but I can't download your prompt styles, any help?

  • @CHRIS_NORRIS
    @CHRIS_NORRIS 6 months ago

    🎯 Key Takeaways for quick navigation:
    00:00 🎨 *Introduction to ControlNet for Stable Diffusion*
    - ControlNet is a revolutionary tool for AI art, allowing you to transform images while preserving composition or pose.
    00:26 📥 *Downloading ControlNet Models*
    - Download ControlNet models, including Canny, Depth Map, Open Pose, and Scribble, to get started.
    01:05 ⚙️ *Installing ControlNet for Stable Diffusion*
    - Install ControlNet in Stable Diffusion by adding the GitHub link in the extensions tab and restarting the UI.
    02:44 🖼️ *Using ControlNet with Image to Image*
    - Demonstration of using ControlNet with Image to Image, starting with a pencil sketch and generating a transformed image.
    03:08 🧩 *ControlNet Model Variations*
    - Explains the different ControlNet models (Canny, Depth Map, Open Pose, Scribble) and their unique capabilities.
    06:25 🌟 *Impact of ControlNet on AI Art*
    - Shows the results of ControlNet transformations and highlights its potential to revolutionize AI art.
    08:54 🎨 *Using ControlNet Scribble Mode*
    - Demonstrates how to use ControlNet in scribble mode to transform a hand-drawn sketch into an image.
    10:47 🧪 *Experimentation and Conclusion*
    - Encourages experimentation with ControlNet in different modes and concludes by highlighting its game-changing potential in AI art.
    Made with HARPA AI

  • @Name-sl3bm
    @Name-sl3bm 1 year ago +1

    this is cool

  • @aviator4922
    @aviator4922 1 year ago

    awesome

  • @laioliver2299
    @laioliver2299 9 months ago

    thanks for the tutorial ! However I couldn't find the tab "Open drawing canvas"

  • @pkay3399
    @pkay3399 1 year ago +1

    Thank you. If we are running it on Colab Notebook with WebUI enabled, can we paste the models in Google Drive's Models folder instead of the WebUI folder and then just paste the path into the Notebook?

    • @SilasGrieves
      @SilasGrieves 1 year ago +1

      Not OP but yes, you can copy/paste the models into your folder on your Google Drive but make sure you paste them to the Models folder in the Extensions parent folder and Stable Diffusion’s base models folder.

    • @pkay3399
      @pkay3399 1 year ago +1

      @@SilasGrieves Thank you

  • @newone295
    @newone295 1 year ago

    Thanks 👍

  • @cinemantics231
    @cinemantics231 1 year ago

    If they can find a way to make this work with the ChatGPT integration, I think most of us will be set lol. Can the ControlNET ckpt file be checkpoint merged with other models though? I think that might help.

  • @parsons318
    @parsons318 1 year ago

    what kind of specs are you using for your computer? and how long does it take to generate a controlnet image?

  • @Amelia_PC
    @Amelia_PC 1 year ago

    After being so disappointed with Pose, I had much better results with Depth. Thanks!

  • @wombattos
    @wombattos 5 months ago

    Whenever I try to generate anything with for example img2img, it freezes and the generate button doesn't function anymore. Normal txt2img works just fine.

  • @grillodon
    @grillodon 1 year ago

    How can I use the alpha of an image to create a new, different image? Thx

  • @adriaanspronk8806
    @adriaanspronk8806 1 year ago

    Awesome !

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Thanks Adriaan! Good to hear from you again 😊🌟

  • @Romazeo
    @Romazeo 1 year ago

    Links to the images from the video preview? I really like the lighting in the first two.

  • @tiesojones9880
    @tiesojones9880 1 year ago

    Could you help me adding Control Net to the Deforum extension? Thank you

  • @roughlyEnforcing
    @roughlyEnforcing 1 year ago

    Are there any docs on these models so I have an idea what I'm downloading? Sorry if that's a dumb question, I'm SUPER new to all of this :)

  • @onlyyoucanstopevil9024
    @onlyyoucanstopevil9024 1 year ago +1

    GLORY TO A.I REVOLUTION!!!

  • @gszeman
    @gszeman 11 months ago

    Issue: When I try to generate after adding the "ballerina dancing in a colorful space nebula, swirling with saturated colors" it generates up to about 94% and then stops without actually generating an image. Any idea what I could be doing wrong?

  • @nikkitrucking
    @nikkitrucking 1 year ago

    I was hoping you'd create the image from the thumbnail; instead, I got a penguin. lol

  • @SurveillanceSystem
    @SurveillanceSystem 1 year ago

    Hej, I am interested in car body design and I need to produce orthogonal views of a vehicle (front, side, rear and top). Do you know if there is any Stable Diffusion extension that allows me to generate these views/images based on a car render I already have? My idea is to use these four views as a blueprint to make the 3D CAD model in Solidworks. Thank you!

  • @larryvw1
    @larryvw1 10 months ago

    How do you choose a specific model to use in your project? Where is the model tab?