Ultimate Guide to IPAdapter on ComfyUI

  • Published 13 Apr 2024
  • Welcome to the "Ultimate IPAdapter Guide," where we dive into the all-new IPAdapter ComfyUI extension Version 2 and its simplified installation process. This video will guide you through everything you need to know to get started with IPAdapter, enhancing your workflow and achieving impressive results with Stable Diffusion.
    🔍 *What You'll Learn:*
    - Step-by-step instructions on installing IPAdapter Version 2.
    - Overview of the key features and enhancements in the latest update.
    - Demonstrations of IPAdapter workflows that can transform your projects.
    🔗 *Useful Links:*
    - Join our Discord Community: [Join Discord]( / discord )
    - Models & Workflows:
    - [IPAdapter Models](github.com/cubiq/ComfyUI_IPAd...)
    - [Workflows](endangeredai.com/ultimate-gui...)
    🌟 *Support Endangered AI:*
    - Patreon: [Support on Patreon]( / endangeredai )
    - Buymeacoffee: [Buy me a coffee](www.buymeacoffee.com/endangered)
    🔔 *Stay Connected:*
    - Visit our Website: [Endangered AI](endangeredai.com)
    - Subscribe to our Newsletter: [Sign Up Here](endangeredai.com/sign-up/)
    👍 *Engage with Our Content:*
    Don't forget to like, subscribe, and hit the notification bell to stay updated with our latest tutorials and guides. Your engagement greatly supports our channel and helps us keep producing high-quality content!
    Thank you for watching the "Ultimate IPAdapter Guide." We’re excited to help you streamline your workflows with IPAdapter. Stay tuned for more detailed tutorials and insights into the latest in AI technology.
    #IPAdapter #TechTutorial #AIWorkflow #IPAdapterGuide #TechGuide #InstallationGuide #stabilityai #stablediffusion #comfyui
  • Science & Technology

Comments • 56

  • @latentvision
    @latentvision 1 month ago +15

    If you use the unified loader you never connect the clipvision input (or the insightface one). The input is optional and only required if you use the legacy model loaders (e.g. the IPAdapter model loader); that might be helpful for advanced users who know what they are doing. Otherwise just connect the unified loader and that's all you need. Nice video btw

    • @EndangeredAI
      @EndangeredAI  1 month ago

      1. Thrilled to get a message from you 😍
      2. Noted on that, I was getting a few errors in my testing, which is why I opted for it
      3. Is my Unet explanation suitable? It’s very basic, but I know you covered it in one of your videos too right?
      Thanks again Mateo @latentvision!

    • @latentvision
      @latentvision 1 month ago +1

      @@EndangeredAI yeah for the purpose of these youtube videos it's totally fine... and you also pronounce my name correctly, which is rare 😄. Good luck with your videos, hope to see you on my discord!

    • @EndangeredAI
      @EndangeredAI  1 month ago +1

      @latentvision I should hope so. From your accent we're geographical neighbours 😂

  • @rasol136
    @rasol136 1 month ago

    You have such a gift explaining these complex concepts in layman's terms. Thank you!

  • @virtualalias
    @virtualalias 1 month ago +8

    "AI artists are just clicking a button."
    ComfyUI: "Hold my beer."

    • @EndangeredAI
      @EndangeredAI  1 month ago +1

      Well said! 😂

    • @skrollreaper
      @skrollreaper 25 days ago +1

      After a certain point, it is that simple though.
      I'm an artist who is learning Comfy in order to have superpowers as an artist.
      I'm a proponent for fellow artists getting into localized AI and incorporating their multiple years of experience to make super artists.
      But at the end of the day, 2 weeks of learning how to set up ComfyUI and the IPAdapters, ControlNets, etc.
      VS
      4+ years of learning how to draw, paint, learn anatomy to a mechanical degree, 3D, light and color theory, composition, etc. is a huge difference.
      Also consider how long it takes to paint an image vs generating one:
      2 weeks for ComfyUI to be in place,
      2 weeks to paint a good piece, give or take (after the 4+ years of experience),
      then another 2 weeks to paint another piece (or somewhere in between),
      compared to spamming the generate button and pushing art after art after art in a matter of seconds to minutes, depending on the GPU.
      Once I have this set up, I'll be able to do hundreds of iterations a day and be a superpower as an artist.
      I look at this art AI, and AI in general, as a superpower.
      Yes, it takes time to set up ComfyUI, but once it's ready to go and you have each workflow in place for whatever purpose, it's clockwork.

    • @virtualalias
      @virtualalias 25 days ago

      @@skrollreaper Yeah. It almost becomes an axe VS chainsaw issue. The chainsaw is actually the more complicated tool to build and maintain, but the axe (to be used effectively) requires more time, fitness, and practice. At the end of the day, those trees are coming down. Reminds me of John Henry VS the steam engine.

    • @skrollreaper
      @skrollreaper 25 days ago +1

      @@virtualalias It's not quite like John Henry nailing nails to a railroad track. Art is a science. Just because the program can generate art doesn't mean it's creating compositionally sound art or art with good anatomy. It's passable, sure, but even if it gets close to nailing good anatomy, the composition might be off, etc., among other hindrances for less skilled artists.
      I see it more as a swordsmith creating pristine blades and a robot making pseudo-pristine blades. An untrained eye can't see why the robot's blades aren't pristine, but a swordsmith can see the flaws, sometimes major flaws that would break the fake pristine sword in use. The same goes for the artist who learns the skills it takes to make a masterpiece with lighting, composition, and painting techniques: we can see the major flaws hidden underneath good rendering, flaws it would take years of anatomical, compositional, and other theoretical understanding to see.
      For artists with 20+ years of seriously developed knowledge, AI would actually slow them down. It would be quicker to just hand draw everything because their level of understanding is so vast; waiting for an AI generator to prompt it instead of just drawing 50 poses or 50 designs would take longer.
      I'm more of a 6+ year swordsmith, so for me it will fit right into my workflow and actually speed up a lot of skill acquisition. In my case it's more like the swordsmith who makes only pristine swords now using the robot that, by itself, would only make pseudo-pristine swords, weak and fragile in areas. With the swordsmith's craftsmanship, they can now produce 10 truly pristine swords in a month or more.
      So for an artist who can match any style, using the AI is more like the swordsmith equipping an exosuit: a handcrafted superpower combined with a technological superpower.
      If everyone has a superpower, then no one does.
      But if someone with a superpower combines it with another superpower,
      they have a superpower again.

    • @_gr1nchh
      @_gr1nchh 11 days ago +1

      @@skrollreaper Yeah, AI is still a ways off. And I always think there'll be the "human element" which AI can never replicate. At the end of the day, we're explaining what we want, and you may or may not get it; even if one thing is off and you want it to be PERFECT, getting an AI to do that is next to impossible. It's just randomness thrown together to give you kinda sorta what you're looking for. And I 100% agree that if a REAL artist used AI as a tool instead of seeing it as a threat, it could increase their own work by a lot, especially with poses, concepts, compositions, etc.

  • @hleet
    @hleet 12 days ago

    Nice video tutorial, well paced. Really enjoyed it

  • @WhySoBroke
    @WhySoBroke 1 month ago

    Excellent video and a gem of a channel!! Souls like you make the world so much better for many of us!!

    • @EndangeredAI
      @EndangeredAI  1 month ago

      Wow, thank you! I’m very happy the video has been helpful for you!

  • @adydeejay
    @adydeejay 26 days ago

    Oh man! You're my hero! Minute 6:27 where you say to add insightface and onnx in ComfyUI's requirements.txt gave me back the 3 hours I spent trying to fix IPAdapter nodes after the last update. THANK YOU! 👍

    • @EndangeredAI
      @EndangeredAI  26 days ago +1

      Glad to be helpful! I found other guides had this long complicated process, but… this is much simpler and handles 90% of situations haha
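
      For anyone following along, the edit amounts to something like this; a minimal sketch, assuming you run it from your ComfyUI folder inside whatever Python environment ComfyUI actually uses ("insightface" and "onnx" are the two package names mentioned in the video):

        # append the two packages to ComfyUI's requirements.txt
        echo "insightface" >> requirements.txt
        echo "onnx" >> requirements.txt

        # then install everything listed there into ComfyUI's environment
        pip install -r requirements.txt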

  • @opensourceradionics
    @opensourceradionics 19 days ago +2

    So I already gave you one thumbs up, but I would give you 10 thumbs ups, because you explain everything in detail without hiding very important information like all the other IPAdapter tutorials do.

  • @premium2681
    @premium2681 1 month ago

    You and Mateo are my favorite Stable Diffusion channels. So much useful information and so little hype and/or bullshit.
    Keep it up!

    • @EndangeredAI
      @EndangeredAI  1 month ago +1

      It means a lot to be compared to Mateo! He's one of my favourites too! I try to be circumspect with the content, even when it's sponsored, like the Scenario one, which is a tool I actually use. I try to bring value!

  • @mufeedco
    @mufeedco 1 month ago

    Great video. Thank you.

  • @user-ef4df8xp8p
    @user-ef4df8xp8p 1 month ago

    Thank you....

  • @KodexAnt
    @KodexAnt 29 days ago

    I've been playing with IPAdapter for a few days now, for fun, especially with FaceID, and I think it's great, awesome. All my workflows include it by now.
    I have a question tho... is it appropriate to use IPAdapter to embed a specific icon into an image? For example a medieval shield, or a logo? I can't get good results so far (after 20 minutes haha, I'm just getting started with it).
    The quest (for no other reason than the challenge) is to apply it without masks or using areas, to achieve a generalist workflow.

    • @EndangeredAI
      @EndangeredAI  29 days ago +1

      Hmmmmm, that's an interesting question. I'm guessing it would depend on what else is in the image. If you have a person holding the shield, that makes the scene/prompt more complex, resulting in greater room for error.
      You might be able to use a generalist workflow, but it would be complex, require multiple passes, and may need an image detection node which can dynamically detect the area to manipulate and inject noise accordingly.
      The downside is that you may struggle even with a 4090, as I imagine you'll need to load a bunch of things into memory.
      For something like this you might be better off with a lora trained on the icon, which you can then easily reference in the prompt: "shield with icon printed", where "icon" is your token.

    • @KodexAnt
      @KodexAnt 28 days ago

      You're right. I don't think I can avoid the effort

  • @surajnarayanan8882
    @surajnarayanan8882 29 days ago

    After uninstalling the IPAdapter Plus custom node, is just downloading the models from GitHub enough for it to work, or do I have to reinstall it?

    • @EndangeredAI
      @EndangeredAI  29 days ago

      You need to reinstall it using ComfyUI Manager.
      So best practice is: uninstall with ComfyUI Manager, then install it again :)

    • @surajnarayanan8882
      @surajnarayanan8882 29 days ago

      @@EndangeredAI thank you, and the discord link is invalid

  • @yiluwididreaming6732
    @yiluwididreaming6732 26 days ago

    You were going to give an example of using the image negative??? Thank you.

    • @EndangeredAI
      @EndangeredAI  26 days ago +1

      It's in the next video: IPAdapter -> facial expressions

  • @jonhodges6572
    @jonhodges6572 22 days ago +1

    At 6:46, what command terminal is that? It doesn't work from a cmd terminal in the comfy folder.

    • @EndangeredAI
      @EndangeredAI  22 days ago

      I'm a Linux user, so that's the Linux command terminal.
      If you're on Mac or Windows your regular terminal should work, just make sure you cd into your ComfyUI folder, the same one where the requirements.txt file is.
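
      Concretely, something like this; the path is illustrative, and it assumes a standard (non-portable) install:

        # change into the folder that contains requirements.txt
        cd ~/ComfyUI                  # wherever your ComfyUI checkout actually lives

        # if ComfyUI has its own virtual environment, activate it first
        source venv/bin/activate      # Linux/macOS; on Windows: venv\Scripts\activate

        # install the listed dependencies
        pip install -r requirements.txt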

    • @jonhodges6572
      @jonhodges6572 22 days ago

      @@EndangeredAI Thanks EndangeredAI for the quick response. Install -r won't work, does it need to be a ? I can't get it to work :(
      It seemed to install ok using this method czcams.com/video/vCCVxGtCyho/video.htmlsi=QIXBL8kUT1chjppr , but it comes up with "insightface required" if I try to use a FaceID IPAdapter model.
      I have uninstalled and reinstalled the IPAdapter nodes.
      On day 2 of trying to get this one module to work now :(

    • @jonhodges6572
      @jonhodges6572 22 days ago

      @@EndangeredAI Thanks, neither one works for me; this one module is giving me a lot of trouble :(

    • @EndangeredAI
      @EndangeredAI  22 days ago

      The second one should have worked: pip install -r requirements.txt. What error are you getting?

    • @jonhodges6572
      @jonhodges6572 22 days ago

      @@EndangeredAI No such command, probably user error. I think I need to run the virtual environment first? But I can't remember how. I'm a bit out of my wheelhouse tbh, I'm a 3D artist, not an IT guy lol :D
      I have had partial success with this method - czcams.com/video/vCCVxGtCyho/video.htmlsi=uYbRQwSqFdGbi07t. It seemed to install ok, but using any FaceID IPAdapter models throws up errors about missing insightface. The nodes module was uninstalled and reinstalled.
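
      A likely culprit for the "no such command" error: the Windows portable build of ComfyUI ships an embedded Python and puts no pip on the PATH, so the embedded interpreter has to be called directly. A sketch, assuming the standard ComfyUI_windows_portable layout (the same one visible in the traceback further down this thread):

        :: run from inside the ComfyUI_windows_portable folder
        python_embeded\python.exe -m pip install -r ComfyUI\requirements.txt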

  • @3dbraga
    @3dbraga 1 month ago

    How do you stop the server without closing your cmd window :D

  • @evak2802
    @evak2802 1 month ago

    Fantastic video!! I am testing your workflow (daisy_chain_ipadapter) and I am getting two subjects per image, even though I only have one on my input image and prompt. Any ideas?

    • @EndangeredAI
      @EndangeredAI  1 month ago

      What prompt are you using? Come by the discord and drop a screenshot

  • @ChaoLi-ou1pd
    @ChaoLi-ou1pd 10 days ago +1

    How to stop server?

  • @MarkScorelle
    @MarkScorelle 1 month ago

    Honestly I cannot find a solution for this error message:

    Error occurred when executing IPAdapterUnifiedLoader:
    ClipVision model not found.

    File "D:\ComfyUI-master\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\ComfyUI-master\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\ComfyUI-master\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\ComfyUI-master\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 432, in load_models
      raise Exception("ClipVision model not found.")

    • @EndangeredAI
      @EndangeredAI  1 month ago

      Ah! OK, so a couple of things to check!
      I suggest you come by the discord and ask, but off the top of my head:
      1. If you are using an advanced IPAdapter node, add a Load CLIPVision node and connect it.
      2. If you are using a unified loader, be sure to check that you've downloaded and properly renamed the clipvision models.
      Those are usually the two main causes. For anything else, come by the discord.
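
      For point 2, the renaming step looks roughly like this; the target filenames are the ones the unified loader expected per the ComfyUI_IPAdapter_plus README at the time (verify against the repo, as they may change), and the source names are stand-ins for whatever your downloads were saved as:

        # rename the downloaded CLIP vision encoders to the names the
        # unified loader looks for, inside ComfyUI/models/clip_vision/
        cd ComfyUI/models/clip_vision
        mv sd15_image_encoder.safetensors CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
        mv sdxl_image_encoder.safetensors CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors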