3D+ AI (Part 2) - Using ComfyUI and AnimateDiff

  • Added 2 Jun 2024
  • Topaz Labs: topazlabs.com/ref/2377/
    This is a multi-part tutorial where I talk about how to use Blender without having to learn rigging or modeling. We use Mixamo to get our models and animations, import them into Blender, and render a video we can later run through an AI program like ComfyUI.
    HOW TO SUPPORT MY CHANNEL
    -Support me by joining my Patreon: / enigmatic_e
    _________________________________________________________________________
    SOCIAL MEDIA
    -Join my discord: / discord
    -Twitch: / 8bit_e
    -Instagram: / enigmatic_e
    -Tik Tok: / enigmatic_e
    -Twitter: / 8bit_e
    - Business Contact: esolomedia@gmail.com
    ________________________________________________________________________
    My PC Specs
    GPU: RTX 4090
    CPU: 13th Gen Intel(R) Core(TM) i9-13900KF
    MEMORY: CORSAIR VENGEANCE 64 GB
    ComfyUI + AnimateDiff Tutorial: • ANIMATEDIFF COMFYUI TU...
    Jboogx : • Make INSANE AI videos:...
    Akumetsu971: www.tiktok.com/@akumetsu971?_...
    PDF file with all links to workflows and models:
    mega.nz/file/OUggQJqC#umv_P2U...
    0:00 Intro
    0:50 Workflows
    2:12 ComfyUI Parameters
    6:11 ControlNets
    7:24 Queue Prompt
    9:31 LCM Setup
    9:57 Lets Run an example
  • Entertainment

Comments • 47

  • @enigmatic_e
    @enigmatic_e  4 months ago +1

    Topaz Video AI: topazlabs.com/ref/2377

  • @chi_squared
    @chi_squared 3 months ago

    High-quality tutorial, thanks bro

  • @user-zc9eh1qn5s
    @user-zc9eh1qn5s 3 months ago

    So cool, looking forward to your next video!!

  • @user-hb6dd9iu9g
    @user-hb6dd9iu9g 4 months ago

    Please don't stop! Great tutorials!

    • @enigmatic_e
      @enigmatic_e  4 months ago

      Thanks for checking it out. Hope it helps.

  • @PulpoPaul28
    @PulpoPaul28 3 months ago

    It's really incredible how good you are at explaining and handling these tools. Thanks, you're great!

  • @P4TCH5S
    @P4TCH5S 4 months ago

    Ohhh snapppp part 2 here we gooooooo!

    • @P4TCH5S
      @P4TCH5S 4 months ago

      "Controlnets... I aint teachin you that" LOOOL

    • @enigmatic_e
      @enigmatic_e  4 months ago

      😂

  • @gabesaltman
    @gabesaltman 4 months ago

    THANK YOU for putting all the resources together in a clean document, and thank you for a great workflow! One thing I noticed: the iterative upscaler definitely adds details or extra elements to a render that may disrupt your original composition. The quality is fantastic, but I'm wondering if there's a way to maintain upscale quality without the extras?

    • @enigmatic_e
      @enigmatic_e  4 months ago

      No problem. Regarding your question, maybe try reducing the cfg or denoise in the upscale KSampler?
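As a rough sketch of that suggestion (the parameter names below are illustrative stand-ins, not exact ComfyUI node fields): the upscale KSampler's denoise controls how much the upscale pass is allowed to repaint, so capping it — and easing cfg a little — is the usual lever against invented details.

```python
# Hedged sketch: tame an iterative-upscale sampler so it refines detail
# instead of inventing new elements. Keys are illustrative, not the
# literal ComfyUI node inputs.
def tame_upscaler(params: dict, cfg_scale: float = 0.8, denoise_cap: float = 0.3) -> dict:
    # Lower CFG slightly and cap denoise hard; high denoise during
    # upscaling is the usual source of "extra" objects appearing.
    return {
        "cfg": round(params["cfg"] * cfg_scale, 2),
        "denoise": min(params["denoise"], denoise_cap),
    }

print(tame_upscaler({"cfg": 7.0, "denoise": 0.5}))  # {'cfg': 5.6, 'denoise': 0.3}
```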

  • @mhfx
    @mhfx 4 months ago +1

    Thank you for sharing this. I'm a 3D artist who's been waiting for AI to get to this point, so I'm super excited to try this out. I'm curious about OpenPose: is there no option to use an exported rig directly from your 3D software? You already have the camera and rig in Blender, so you should be able to export that info somehow, so OpenPose doesn't have to guess with depth or soft edges. That would ideally solve the issue with it mixing up which way the character is facing. I'll investigate on my own as well, but I figured I'd at least ask first.

    • @enigmatic_e
      @enigmatic_e  4 months ago

      I don't know of a way to do what you're saying. The closest thing I've seen is someone who created a rig and model designed like the OpenPose skeleton, but I haven't tested it. If you do find anything out, let me know. I'd love to learn about it. Thank you!

    • @calvinherbst304
      @calvinherbst304 3 months ago

      @enigmatic_e I bet if you rendered the wireframe as a separate mp4 that mirrors your 3D video, you could use it as the input for OpenPose, then send the output to intercept the latent of the 3D video. Not sure how the node tree would look, but I bet it's possible.

    • @enigmatic_e
      @enigmatic_e  3 months ago

      @calvinherbst304 Yeah, I'm sure there's a way to do that. That's the great thing about ComfyUI — there are so many possibilities.

  • @alexebcy
    @alexebcy 10 days ago

    HELP PLS :/
    All my Video Combine nodes are red :/
    Failed to validate prompt for output 281:
    * (prompt):
    - Return type mismatch between linked nodes: frame_rate, INT != FLOAT
    * VHS_VideoCombine 281
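A minimal sketch (illustrative only, not the actual ComfyUI source) of the kind of link validation behind that message: the Video Combine node declares its frame_rate input as INT, so an upstream node that outputs FLOAT fails the check, and the fix is making both sides agree — convert the value to an int, or update the Video Helper Suite nodes so the declared types match.

```python
# Sketch of ComfyUI-style link validation: a connection is only valid
# when the upstream output type matches the downstream declared input type.
def validate_link(upstream_type: str, declared_input_type: str) -> bool:
    return upstream_type == declared_input_type

# The reported failure: a FLOAT frame_rate wired into an INT input.
assert not validate_link("FLOAT", "INT")

# After converting the value to an int (or updating the node pack so
# both sides declare the same type), validation passes.
assert validate_link("INT", "INT")
```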

  • @amkkart
    @amkkart 4 months ago +1

    Hi, I used your installation guide and set the base path to my A1111. Where do I drop the LoRAs and embeddings, and how do I install the IPAdapter? Love your videos — thanks for your efforts to educate us.

    • @enigmatic_e
      @enigmatic_e  4 months ago

      You drop the LoRAs and embeddings in the A1111 folders. I can't remember exactly where those folders are, but check the models folder; they're not hard to find if you explore a little. The IPAdapter can be installed by going to Manager and then Install Missing Nodes. Let me know if you still run into issues.

  • @KuschArg
    @KuschArg 3 months ago

    Hi, this is great! Thanks for sharing :) By the way, how can I make the wires straight lines in the workflow? I really like that setting; it looks cleaner than the curved wires. Thanks!

    • @enigmatic_e
      @enigmatic_e  3 months ago

      Yeah, just go to settings in the Manager window and change Spline to Straight, I believe.

  • @Truthseeker_12638
    @Truthseeker_12638 3 months ago

    Where can I find and install the LineartStandardPreprocessor node?
    ERROR: ComfyUI: When loading the graph, the following node types were not found: LineartStandardPreprocessor. Nodes that have failed to load will show as red on the graph.
    FIX: If you stumble across this after already installing the preprocessor node, just uninstall the node and reinstall it, and you'll be fixed.

  • @moritzryser
    @moritzryser 3 months ago

    Tysm! Got everything working except the last node group: FaceRestoreModelLoader & Upscale Model Loader. Which two models do you recommend installing there so I can finalize my renders?

    • @enigmatic_e
      @enigmatic_e  3 months ago +1

      I would look at the model names it shows when you first load the workflow. You might be able to find them through the Manager, under Install Models.

    • @moritzryser
      @moritzryser 3 months ago

      @enigmatic_e Thanks, will do

    • @KuschArg
      @KuschArg 3 months ago

      Hi there! I have the same problem. Did you find the model for FaceRestoreModelLoader? Thanks in advance!

  • @drviolet396
    @drviolet396 4 months ago

    Have you tried KSampler RAVE? It seems to work pretty well. I'd be curious to hear whether it helps even more in this specific workflow or not.

    • @enigmatic_e
      @enigmatic_e  4 months ago

      Hmm, I don't think I've used it. What does it do differently?

  • @mVRx3i
    @mVRx3i 3 months ago

    Hi, I downloaded all the files from your PDF, and when I try to generate a video I get this error in the KSampler in the "Output" section:
    AttributeError: type object 'GroupNorm' has no attribute 'forward_comfy_cast_weights'
    Can somebody help me figure out what I'm doing wrong? :S

  • @armandadvar6462
    @armandadvar6462 20 days ago

    How did you create the ComfyUI workflow? Where is it?

  • @planet_cs
    @planet_cs 2 months ago

    Any clue how to fix the frame rate issue? All nodes connected to the initial Frame Rate node have a red circle around the frame_rate input.

  • @pro_rock1910
    @pro_rock1910 a month ago

    😍😍😍

  • @user-em1ss8dh8k
    @user-em1ss8dh8k 3 months ago

    Anyone else getting crazy xformers errors? I can't get anything to go through no matter how many times and ways I disable it.

  • @futurediffusion
    @futurediffusion 4 months ago

    Why do you upload the info on MEGA? T.T MEGA keeps loading forever and doesn't give me the file.

    • @enigmatic_e
      @enigmatic_e  4 months ago

      Never had any complaints about it, but what would you recommend?

  • @hefland
    @hefland 4 months ago

    Hmm, I'm getting a purple outline on my KSampler, so everything before it seems to load and work fine. Plus, I bypassed everything after it, such as the Iterative Upscale and Face Detailer sections. I get the errors below. If I figure it out, I'll update in a comment.
    ERROR:root:!!! Exception during processing !!!
    ERROR:root:Traceback (most recent call last):

    • @enigmatic_e
      @enigmatic_e  4 months ago +1

      I'm taking a wild guess and thinking it might have to do with the IPAdapter or AnimateDiff. Which models are you using there?

    • @hefland
      @hefland 4 months ago

      Ah, I fixed that by bypassing the SoftEdge ControlNet section, since I had control-lora-depth-rank running in that slot. Whoops!

    • @hefland
      @hefland 4 months ago

      @enigmatic_e Load IPAdapter Model = ip-adapter-plus_sd15.safetensors
      AnimateDiff Loader = v3_sd15_mm.ckpt
      It's actually running fine now. I had the wrong ControlNet model running in the SoftEdge section; I ran control-lora-depth-rank128 in there. I only have the OpenPose section running right now (all others are bypassed).

  • @luisgregori3817
    @luisgregori3817 3 months ago

    The render takes more than 30 minutes for me. I don't understand — I have an RTX 4060 Ti 16 GB.

    • @enigmatic_e
      @enigmatic_e  3 months ago

      Depends on how high your resolution is.
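A back-of-envelope sketch of why resolution dominates render time: per-frame diffusion work grows roughly with pixel count, so doubling each side roughly quadruples the work (actual time also depends on steps, model, and VRAM — this is an assumption for illustration, not a benchmark).

```python
# Relative per-frame cost versus a 512x512 baseline, assuming cost
# scales roughly linearly with pixel count (illustrative only).
def relative_cost(width: int, height: int, base: int = 512) -> float:
    return (width * height) / (base * base)

print(relative_cost(512, 512))    # 1.0 — baseline
print(relative_cost(1024, 1024))  # 4.0 — doubling each side ≈ 4x the work
```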

  • @tiberiuslawson1172
    @tiberiuslawson1172 3 months ago +1

    The volume on your videos is very low compared to any other YouTube video I watch. Just letting you know.

    • @enigmatic_e
      @enigmatic_e  3 months ago

      Do you feel that way about multiple videos or just this one? I'll try to keep a closer eye on it. I typically keep the voiceover levels at what's considered industry standard, but I'll double-check this video. Thanks for the feedback.

    • @tiberiuslawson1172
      @tiberiuslawson1172 3 months ago

      @enigmatic_e Yes, I've watched a bunch of your videos with low volume. Part 1 of this seems louder. You should try to hit close to 0 dB when editing. Industry standards might differ from YouTube, since everyone is watching on different devices with different volume output levels. I notice I have to turn my volume up by 30%+ when switching to your video from someone else's on my studio monitors. Nevertheless, you have some great tutorials on your channel. Keep up the good content.

  • @DimiArt
    @DimiArt 2 months ago

    I really, really, really hope you can get it to work with Automatic1111! I love using Automatic1111's UI.