Style Transfer Adapter for ControlNet (img2img)

  • Added Mar 6, 2023
  • Very cool feature for ControlNet that lets you transfer a style.
    HOW TO SUPPORT MY CHANNEL
    -Support me by joining my Patreon: / enigmatic_e
    _________________________________________________________________________
    SOCIAL MEDIA
    -Join my discord: / discord
    -Instagram: / enigmatic_e
    -Tik Tok: / enigmatic_e
    -Twitter: / 8bit_e
    - Business Contact: esolomedia@gmail.com
    _________________________________________________________________________
    Details about Adapters
    TencentARC/T2I-Adapter: T2I-Adapter (github.com)
    Models
    huggingface.co/TencentARC/T2I...
    EbSynth + SD
    • Stable Diffusion + EbS...
    Install SD
    • Installing Stable Diff...
    Install ControlNet
    • New Stable Diffusion E...
  • Entertainment

Comments • 83

  • @ixiTimmyixi
    @ixiTimmyixi A year ago +5

    I can't wait to apply this to my AI animations. This is a huge game changer. Using less in the text prompt area is a step forward for us. Having two images as the only driving factors should help a ton with cohesion/consistency in animation.

    • @enigmatic_e
      @enigmatic_e  A year ago +1

      I agree. Definitely makes it easier in some aspects.

  • @digital_magic
    @digital_magic A year ago

    Great video :-) Thanks for sharing

  • @clenzen9930
    @clenzen9930 A year ago +3

    Guidance Start is about *when* ControlNet starts to take effect.
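    The comment above matches how ControlNet UIs implement the setting: Guidance Start/End are fractions of the sampling schedule between which the control image is applied. Below is a minimal, hypothetical sketch of that gating logic; the function and parameter names are illustrative, loosely mirroring the `control_guidance_start`/`control_guidance_end` arguments in the diffusers ControlNet pipelines, not this extension's exact code.

    ```python
    def controlnet_active(step: int, total_steps: int,
                          guidance_start: float = 0.0,
                          guidance_end: float = 1.0) -> bool:
        """Return True if ControlNet conditioning applies at this sampling step.

        guidance_start/guidance_end are fractions of the schedule: with
        guidance_start=0.2, the control image is ignored for the first 20%
        of the steps, letting the style settle in before structure locks.
        """
        progress = step / total_steps
        return guidance_start <= progress < guidance_end

    # With Guidance Start = 0.2 over 20 steps, the first 4 steps run unguided:
    active = [controlnet_active(i, 20, guidance_start=0.2) for i in range(20)]
    # steps 0-3 -> False, steps 4-19 -> True
    ```

    So with Guidance Start = 0.2 the model composes freely for the first 20% of steps and only then starts following the control image, which often preserves more of the transferred style.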

  • @snckyy
    @snckyy A year ago

    incredible amount of useful information in this video. thank YOU!!!!!!

  • @ramilgr7467
    @ramilgr7467 A year ago

    Thank you! Very interesting!

  • @BeatoxYT
    @BeatoxYT A year ago

    Thanks for sharing this! Very cool that they’ve added this style option. Excited for your next video on connecting it with EbSynth. I’ll watch that next and see what I can do as well.
    But damn, these DaVinci Deflicker/Dirt Removal render times are killing me haha

    • @enigmatic_e
      @enigmatic_e  A year ago

      I feel you on the deflicker. Sometimes stacking too many is not a good idea 😂

    • @BeatoxYT
      @BeatoxYT A year ago

      @@enigmatic_e Have you found a good compromise? I tried just one and it wasn’t great, so I stuck with the 3 you stacked after the dirt remover. But 24 hours for a 30-second clip is unsustainable lol

    • @enigmatic_e
      @enigmatic_e  A year ago +1

      @@BeatoxYT No, not yet. I’m sure another faster alternative will come out soon.

  • @androidgamerxc
    @androidgamerxc A year ago +1

    @2:47 Thank you so much for that, I was thinking of reinstalling ControlNet just because of that

  • @CompositingAcademy
    @CompositingAcademy A year ago

    Really cool, thanks for sharing! I wonder what would happen if you put 3D wireframes into the ControlNet lines instead of the generated ones; it could be very temporally stable

  • @BoringType
    @BoringType 8 months ago

    Thank you very much

  • @koto9x
    @koto9x A year ago

    Ur a legend

  • @zachkrausnick5030
    @zachkrausnick5030 A year ago

    Great video! Trying to update my xformers, it seemed to install a later version of PyTorch that no longer supports CUDA, and the version of xformers you used is no longer available. How do I fix this?

  • @JeffFengcn
    @JeffFengcn A year ago

    Hi sir, thanks for making these good videos on style transfer. I have a question: is there a way to change a person's outfit based on an input pattern picture, using style transfer and inpainting? Thanks in advance

  • @J.l198
    @J.l198 A year ago +2

    I need help: when I generate, the result is way different from the actual image I'm using.

  • @sidewaysdesign
    @sidewaysdesign A year ago +1

    Thanks for another informative video. This style transfer feature already makes Photoshop’s Style Transfer neural filter look sad by comparison. It’s clear that Stable Diffusion’s open-source status, enabling all of these new features, is leaving MidJourney and DALL-E in the dust.

  • @HopsinThaGoat
    @HopsinThaGoat A year ago

    Ahhh yeah

  • @GS195
    @GS195 6 months ago

    It turned Barret into President Shinra 😂

  • @SHsaiko
    @SHsaiko A year ago

    Great video man! I've been learning so much from your vids. It might be a rookie question, but I got stuck on the third ControlNet model when you choose clip_vision under the preprocessor; I don't seem to have that option. Is it because I have to use a certain version of SD? Thanks!

    • @enigmatic_e
      @enigmatic_e  A year ago

      Thanks! Have you updated everything? Like SD and ControlNet?

    • @SHsaiko
      @SHsaiko A year ago

      @@enigmatic_e oops. that solved it! thanks for the help! looking forward to your next vid man, great work :D

    • @mikhaillavrov8275
      @mikhaillavrov8275 11 months ago

      @@SHsaiko Please describe exactly what you did? I have updated all the requirements but there is still no clip_vision in the left drop-down menu

  • @tonon_AI
    @tonon_AI 11 months ago

    any tips on how to build this with ComfyUI?

  • @christophervillatoro3253

    Hey, I was going to tell you how to get After Effects to pull in multiple PNG sequences and auto-crossfade them. You have to pull them in and make each EbSynth out-folder its own sequence. Then right-click all the sequences and create a new composition; in the menu there is an option to crossfade all the imported sequences. Specify with the EbSynth settings. Voila!

    • @enigmatic_e
      @enigmatic_e  A year ago

      Ahh ok! I will have to try this! Thank you for the info!!

  • @user-ec5hh2eq9v
    @user-ec5hh2eq9v 3 months ago

    Hi! I'm really asking for help, I'm desperate :( The clip_vision preprocessor is not displayed (automatic1111), and I can't find where to download it. What am I doing wrong?

  • @judgeworks3687
    @judgeworks3687 A year ago +2

    Love your clear instructions.
    I'm following along but my system seems to stall on the HED and clip_vision ControlNets. Any tips for when this happens? I keep restarting.
    I'm trying the same steps, but first doing only one ControlNet at a time to see if it works, then adding each ControlNet after it runs successfully. So far the HED is definitely slow to run. After this I'll try clip_vision/T2I style by itself (as one ControlNet tab).

    • @enigmatic_e
      @enigmatic_e  A year ago

      What do your height and width look like? I ran into a similar problem and had to reduce the size to make certain parameters work. Might be that it can’t handle it

    • @judgeworks3687
      @judgeworks3687 A year ago

      @@enigmatic_e 512x512. I found this video (the guy mentioned some issues and how he got them fixed); I will watch it later. I attached the link below in case it's of interest. The ControlNet HED tab seems to be an issue. Is there a reason you use 3 tabs of ControlNet? I'm testing one tab of clip_vision/T2I alone. It's still running. czcams.com/video/tXaQAkOgezQ/video.html

    • @judgeworks3687
      @judgeworks3687 A year ago

      @@enigmatic_e In which video of yours do you show how to add git pull into the code? I think it was your video? I need to access my webui-user.bat to add something to the code, but I can't recall how to do that. Thanks if you have a link to the video where you showed that.

    • @enigmatic_e
      @enigmatic_e  A year ago +1

      @@judgeworks3687 I think this one: czcams.com/video/qmnXBx3PcuM/video.html

    • @judgeworks3687
      @judgeworks3687 A year ago

      @@enigmatic_e yes this was it. Great video. I ended up uninstalling SD and re-installing from the video you sent me. Thank you!

  • @ErmilinaLight
    @ErmilinaLight 2 months ago

    Thank you!
    What should we choose as Control Type? All?
    Also, I noticed that generating an image with txt2img ControlNet and a given image takes a veeeery long time, though my machine is decent. Do you have the same?

    • @enigmatic_e
      @enigmatic_e  2 months ago +1

      i believe there should be a box you can check that says "Upload independent control image"

    • @ErmilinaLight
      @ErmilinaLight 2 months ago

      @@enigmatic_e THANK YOU!!!!

  • @Fravije
    @Fravije 7 months ago

    Hello. What about style transfer between images?
    I'm looking for information about this but haven't found anything.
    For example, I want to make a series of images of animals. I have a photo of a tiger, a pencil drawing of a horse, a pencil drawing of a bull (but by a different artist), an ink drawing of a wolf, a watercolor drawing of a cheetah... and I want to transform them so that all these images are done in the same style, as if they were painted by the same artist. Is there any product that can help achieve this goal?

    • @enigmatic_e
      @enigmatic_e  7 months ago

      Unfortunately this tutorial is outdated now. I haven't messed around with style transfer lately, so I don't know what's a good alternative at the moment.

  • @User-pq2yn
    @User-pq2yn A year ago +3

    The color adapter works for me, but the style adapter does not. The Guidance Start value doesn't change anything. The result is the same as when ControlNet is turned off. Please tell me how to fix this? Thank you!

    • @miguelarce6489
      @miguelarce6489 A year ago

      Same thing happens to me. Did you figure it out?

    • @User-pq2yn
      @User-pq2yn A year ago

      @@miguelarce6489 The total number of tokens in the prompt and negative prompt should not exceed 75
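      That rule of thumb lines up with CLIP's 77-token text context: 75 usable tokens plus the begin/end markers. Once a prompt spills past one chunk, A1111 splits it into multiple 75-token chunks, and the commenter's report suggests the style adapter misbehaves then. A small, hypothetical helper for checking the commenter's limit; `tokenize` is a stand-in for a real CLIP BPE tokenizer, and the whitespace split below is only a rough illustration (real BPE counts differ):

      ```python
      def within_clip_limit(tokenize, prompt: str, negative_prompt: str = "",
                            limit: int = 75) -> bool:
          """Check that prompt + negative prompt fit in one CLIP chunk.

          CLIP's text encoder has a 77-token context; 2 tokens are reserved
          for the start/end markers, leaving 75 for the prompt itself.
          `tokenize` is any callable mapping a string to a list of tokens.
          """
          used = len(tokenize(prompt)) + len(tokenize(negative_prompt))
          return used <= limit

      # Rough illustration with whitespace tokens:
      ok = within_clip_limit(str.split, "a castle, oil painting", "blurry")
      # -> True (4 + 1 = 5 tokens <= 75)
      ```

      With a real tokenizer you would pass its encode function instead of `str.split`; the point is simply to keep the combined count inside a single chunk so the style embedding isn't disturbed.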

  • @erdbeerbus
    @erdbeerbus 11 months ago

    This is really a cool way to get it... thank you! Did you explain how to bring a whole image sequence into Comfy to get your great 0:20 result? Thanks in advance!

    • @enigmatic_e
      @enigmatic_e  11 months ago

      No I haven’t. I still need to get into Comfy, still haven’t tried it yet.

  • @beatemero6718
    @beatemero6718 10 months ago

    Why did you provide a link for the style adapter, but not for the clip_vision preprocessor?

  • @iamYork_
    @iamYork_ A year ago

    Looks like Gen-1 will have competition…

  • @theairchitect
    @theairchitect A year ago +1

    I tried this new ControlNet extension and got no style in the generated result. I removed all prompts (using img2img with 3 ControlNets active: canny + HED + t2iadapter with the clip_vision preprocessor). During generation this error appears: "warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference", and the generated result comes out with the style not applied =( Frustrating... I tried many denoising strengths in img2img and many weights on the ControlNet instances without success; the style is not applied to the final result =( I tried enabling "Enable CFG-Based guidance" in the ControlNet settings too, and it's still not working =( Anyone else getting this same issue?

    • @J.l198
      @J.l198 A year ago

      I need help, having the same issue, when I generate it just generates a random image...

  • @PlayerGamesOtaku
    @PlayerGamesOtaku A year ago

    Hi, I have created more than 70 images with Stable Diffusion, and I would like to know how I can turn these photos into a moving animation. Could you help me?

    • @enigmatic_e
      @enigmatic_e  A year ago +1

      Other than using Premiere Pro or any other editing software, I know there are websites that can turn your image sequences into videos. I’m not sure which one is a good choice though, I’ll have to look into it.

    • @PlayerGamesOtaku
      @PlayerGamesOtaku A year ago

      @@enigmatic_e if you create a tutorial, or find the sites you mentioned before, let me know :)

    • @enigmatic_e
      @enigmatic_e  A year ago

      @@PlayerGamesOtaku Working on it right now, actually

  • @gloxmusic74
    @gloxmusic74 A year ago

    Nice find bro!! ...yeah, consistency is still a problem with video. I find lowering the denoising strength helps, but then you lose the style... it's a double-edged sword ⚔️

  • @RonnieMirands
    @RonnieMirands A year ago +2

    I am not getting great results like yours out of the box; I have to play a lot with the sliders before the style starts showing. Wondering what I am missing here lol

    • @RonnieMirands
      @RonnieMirands A year ago +1

      I followed the instructions from the Aitrepreneur channel and it worked for me.

    • @enigmatic_e
      @enigmatic_e  A year ago +1

      Happy you figured it out.

  • @OsakaHarker
    @OsakaHarker A year ago

    Have you looked at the new Ebsynth_Utility extension to A1111?

    • @enigmatic_e
      @enigmatic_e  A year ago +1

      Wait what??

    • @BeatoxYT
      @BeatoxYT A year ago

      @@enigmatic_e new enigmatic eb synth utility video incoming

    • @melchiorao9759
      @melchiorao9759 A year ago

      @@enigmatic_e Automates most of the process.

  • @mayasouthmoor3339
    @mayasouthmoor3339 A year ago

    Where do you even get clip_vision from?

    • @enigmatic_e
      @enigmatic_e  A year ago

      If it's not in the link I provided, it might appear when you update everything

  • @j_shelby_damnwird
    @j_shelby_damnwird A year ago

    If I run more than one ControlNet tab I get the CUDA out-of-memory error (8GB VRAM GPU). Any suggestions?

    • @enigmatic_e
      @enigmatic_e  A year ago

      Have you tried checking the low VRAM option in ControlNet?

    • @j_shelby_damnwird
      @j_shelby_damnwird A year ago

      @@enigmatic_e Thank you for responding. Yes, to no avail :-(

    • @enigmatic_e
      @enigmatic_e  A year ago +1

      @@j_shelby_damnwird try lowering dimensions, that might help

    • @j_shelby_damnwird
      @j_shelby_damnwird A year ago

      @@enigmatic_e Thank you. Currently trying 1024 x 768. Will give 768 x 512 a go.
      I definitely need to grab me one of those fancy new GPUs :-/
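      Lowering the output dimensions, as suggested above, is often the quickest out-of-memory fix, since memory use grows with pixel count. A small, hypothetical helper for scaling a target resolution down under a pixel budget while keeping the aspect ratio and the multiple-of-64 steps that SD UIs typically use for width/height; the function name is illustrative:

      ```python
      def fit_resolution(width: int, height: int, max_pixels: int,
                         multiple: int = 64) -> tuple[int, int]:
          """Scale (width, height) down to at most max_pixels, preserving the
          aspect ratio and snapping each side down to a multiple of `multiple`
          (SD UIs usually step dimensions in multiples of 64)."""
          scale = min(1.0, (max_pixels / (width * height)) ** 0.5)

          def snap(v: float) -> int:
              # Round down to the grid, but never below one grid step.
              return max(multiple, int(v) // multiple * multiple)

          return snap(width * scale), snap(height * scale)

      # 1024x768 is ~0.79 MP; capping at 512x512-worth of pixels (~0.26 MP):
      w, h = fit_resolution(1024, 768, 512 * 512)
      # -> (576, 384)
      ```

      Capping 1024x768 at a 512x512 pixel budget this way yields 576x384, which keeps the 4:3 framing while cutting the pixel count by more than half.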

  • @clenzen9930
    @clenzen9930 A year ago +1

    I made a post about making sure you deal with the yaml files, but I think it got deleted because it linked to Reddit. Anyway, there’s some work to be done if you haven’t.

    • @enigmatic_e
      @enigmatic_e  A year ago +1

      Is it from the Stable Diffusion subreddit?

  • @dexter0010
    @dexter0010 A year ago

    I don't have the clip_vision preprocessor, where do I download it?

    • @enigmatic_e
      @enigmatic_e  A year ago

      Did you update?

    • @Azasemey
      @Azasemey A year ago

      @@enigmatic_e I also can't find it. I did a git pull inside the ControlNet folder, reinstalled it twice, and still can't find it

  • @ohyeah9999
    @ohyeah9999 A year ago

    Can this make video, and is it free??? I tried Disco Diffusion, but that's like a trial.

  • @dreamayy8360
    @dreamayy8360 A year ago

    Shows "where to download it" with a list of .pth files..
    Then shows his folder where he's got safetensors and yaml files..
    Great tutorial.. just making stuff up and not actually showing where or how you installed anything.

  • @K-A_Z_A-K_S_URALA
    @K-A_Z_A-K_S_URALA A year ago

    It doesn't work!

  • @user-nc2hs4rp7l
    @user-nc2hs4rp7l A year ago

    EbSynth + SD link?

    • @enigmatic_e
      @enigmatic_e  A year ago +1

      My bad, just updated the link: czcams.com/video/47HpHOLkIDo/video.html