enigmatic_e
  • 158
  • 2 492 475
ToonCrafter - This is only the beginning!
Checking out the new interpolation tool called ToonCrafter. It helps create in-betweens for keyframes, using images to guide it. (A rough code sketch of the idea follows below this entry.)
Topaz Labs: topazlabs.com/ref/2377/
HOW TO SUPPORT MY CHANNEL
-Support me by joining my Patreon: www.patreon.com/enigmatic_e
_________________________________________________________________________
SOCIAL MEDIA
-Join my discord: discord.gg/ZuGj5nJGut
-Twitch: www.twitch.tv/8bit_e
-Instagram: enigmatic_e
-Tik Tok: www.tiktok.com/@enigmatic_e
-Twitter: 8bit_e
- Business Contact: esolomedia@gmail.com
________________________________________________________________________
My PC Specs
GPU: RTX 4090
CPU: 13th Gen Intel(R) Core(TM) i9-13900KF
MEMORY: CORSAIR VENGEANCE 64 GB
ComfyUI workflow
github.com/kijai/ComfyUI-DynamiCrafterWrapper
doubiiu.github.io/projects/ToonCrafter/
github.com/ToonCrafter/ToonCrafter
Installing ComfyUI:
czcams.com/video/WHxIrY2wLQE/video.html
Views: 3,945
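
For anyone who wants to script the same keyframe-interpolation idea outside of ComfyUI, here is a rough sketch. The tooncrafter.interpolate call is a hypothetical placeholder, not the real API of the repo linked above (or of the ComfyUI-DynamiCrafterWrapper nodes); only the Pillow calls are real.

from PIL import Image

def make_inbetweens(keyframe_a, keyframe_b, out_path="inbetween.gif", num_frames=14):
    # Load the two keyframes that bracket the motion.
    start = Image.open(keyframe_a).convert("RGB")
    end = Image.open(keyframe_b).convert("RGB")

    # Hypothetical call: hand the model both keyframes (plus an optional text
    # prompt) and get the generated in-between frames back as PIL images.
    # See github.com/ToonCrafter/ToonCrafter for the real inference scripts.
    import tooncrafter  # placeholder module name, not a real package
    frames = tooncrafter.interpolate(start, end,
                                     prompt="a character turning around",
                                     num_frames=num_frames)

    # Stitch start + in-betweens + end into an animated GIF (~16 fps, 62 ms per frame).
    clip = [start, *frames, end]
    clip[0].save(out_path, save_all=True, append_images=clip[1:], duration=62, loop=0)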

Videos

AI music is getting good!! Udio vs. Suno
8K views • 1 month ago
Testing out AI generated music for the first time. In this video I try Udio and Suno. www.udio.com/ suno.com/ Topaz Labs: topazlabs.com/ref/2377/ HOW TO SUPPORT MY CHANNEL -Support me by joining my Patreon: www.patreon.com/enigmatic_e SOCIAL MEDIA -Join my discord: discord.gg/ZuGj5nJGut -Twitch: www.twitch.tv/8bit_e -Instagram: enigmatic_e -Tik Tok: www.tiktok.com/@enigmatic_e -Tw...
VIGGLE + COMFYUI - CRAZY COMBO
21K views • 1 month ago
Taking the power of Viggle and running it through ComfyUI!! Topaz Labs: topazlabs.com/ref/2377/ HOW TO SUPPORT MY CHANNEL -Support me by joining my Patreon: www.patreon.com/enigmatic_e SOCIAL MEDIA -Join my discord: discord.gg/ZuGj5nJGut -Twitch: www.twitch.tv/8bit_e -Instagram: enigmatic_e -Tik Tok: www.tiktok.com/@enigmatic_e -Twitter: 8bit_e - Business Contact: esol...
Animate Anyone! SUPER EASY with Viggle!
31K views • 2 months ago
Taking a look at Viggle, a program that lets you take a single image and animate it. It also removes the background, allowing for a greenscreen or white-screen background. Topaz Labs: topazlabs.com/ref/2377/ HOW TO SUPPORT MY CHANNEL -Support me by joining my Patreon: www.patreon.com/enigmatic_e SOCIAL MEDIA -Join my discord: discord.gg/ZuGj5nJGut -Twitch: www.twitch.tv/8bit_e -Instagram: instagra...
Text-to-3D to ComfyUI & AnimateDiff
12K views • 3 months ago
We will be looking into LumaLabs' text-to-3D feature to set up a 3D animation that we can run through ComfyUI to improve or change the look of the original model. Topaz Labs: topazlabs.com/ref/2377/ HOW TO SUPPORT MY CHANNEL -Support me by joining my Patreon: www.patreon.com/enigmatic_e SOCIAL MEDIA -Join my discord: discord.gg/ZuGj5nJGut -Twitch: www.twitch.tv/8bit_e -Instagram: ...
3D + AI (Part 2) - Using ComfyUI and AnimateDiff
13K views • 4 months ago
3D + AI (Part 1) - Basic Blender Tutorial For AI Generation
6K views • 4 months ago
KREA AI | Upscale & Enhance + Real Time Generation
36K views • 6 months ago
SDXL TURBO IN COMFYUI! WE MOVING BABY!
5K views • 6 months ago
STABLE VIDEO DIFFUSION | COMFYUI
39K views • 6 months ago
Real Time Camera to AI in ComfyUI
21K views • 6 months ago
Gen-2 Motion Brush | Is it good?
2.8K views • 6 months ago
CONSISTENT VID2VID WITH ANIMATEDIFF AND COMFYUI
33K views • 6 months ago
How MDMZ Escaped the 9 to 5: Becoming a Fulltime YouTuber
1.1K views • 6 months ago
ANIMATEDIFF COMFYUI TUTORIAL - USING CONTROLNETS AND MORE.
80K views • 7 months ago
Top 10 Free After Effects Plug-ins
1.6K views • 8 months ago
Civitai’s Founder and CEO Interview
2.5K views • 9 months ago
The Mind Behind The Viral Statue Videos: Interview
2.8K views • 9 months ago
Working with Corridor Crew and talking about Kytr.animate
2.5K views • 9 months ago
Deforum + Controlnet IMG2IMG (TemporalNet)
26K views • 9 months ago
NEW Rotobrush 3.0 in After Effects (Beta)
56K views • 10 months ago
AI Gatekeeping, Dance Videos, and Commercial Work
2K views • 10 months ago
Are AI Videos The New Artistic Expression?
1.9K views • 10 months ago
DEFORUM CAMERA CONTROL IN AE - AE2SD MOTION BRO
10K views • 10 months ago
Spiderverse Animation Tutorial - Stable Warpfusion + After Effects
8K views • 11 months ago
Avoiding Common Problems with Stable Warpfusion
10K views • 11 months ago
GEN-2, is it any good?
2.9K views • 1 year ago
Create 360 Backgrounds Using AI - Blockade Labs
6K views • 1 year ago
STABLE WARPFUSION TUTORIAL - Colab Pro & Local Install
69K views • 1 year ago
PARSEQ FOR DEFORUM ON STABLE DIFFUSION - Easy Camera Control!
31K views • 1 year ago

Comments

  • @atabac
    @atabac • 2 hours ago

    nice gif engine😂

  • @ATMAHATMAN
    @ATMAHATMAN • 4 hours ago

    Please give a tutorial on how to input a custom character.

  • @joonienyc
    @joonienyc • 1 day ago

    Would like to keep my eyes on this; like you mentioned, this could turn into a real cartoon soon!!!

  • @SapiensVirtus
    @SapiensVirtus • 1 day ago

    Hi! Beginner's question: if I run software like ComfyUI locally, does that mean that all the AI art, music, and other works I generate will be free to use for commercial purposes? Or am I violating copyright terms? I am searching for more info about this but I get confused. Thanks in advance.

  • @AncientEchoes-vz2tu

    nicely explained

  • @francyszz3
    @francyszz3 • 2 days ago

    Hey, how do I do the reference-based sketch colorization? It doesn't give me the option to send a video or GIF of the sketch animation.

  • @adrianfels2985
    @adrianfels2985 • 2 days ago

    Thank you man! I would love it if you chose one of those workflows and really dove deep into what every node does and how to tweak it to get different results. But this overview also helped me a lot in terms of basic understanding!

  • @ANUR_KI_gen_Music
    @ANUR_KI_gen_Music • 2 days ago

    Hi, thanks for the entertaining video! I have tried both and my favorite is still SUNO. At the moment you can generate songs of up to 4 minutes with one click. You can actually control the structure of the song with the prompts, but it's a bit of a matter of luck. @SanguineUmbra-cc5hj is right: many melodies sound really similar, but every now and then really nice results come out. But it's fun and a great way to waste my time. But don't forget, it's version 3.5! Maybe version 6 or 7 will do the trick; I'm very excited. Greetings from Germany

  • @rubenthorell6320
    @rubenthorell6320 • 3 days ago

    Just tried both of them before I found this video. Seems to me like Udio is generally better when it comes to the overall sound and is more balanced in the 'mix', BUT Suno excels at creativity and creative freedom as a user. Nice video man! Keep it up, lots to cover 😅

  • @somusamba6118
    @somusamba6118 • 3 days ago

    Even at this stage this tool can be used to convert 12 frames per second animations into 24 fps
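
The frame-rate arithmetic behind that is easy to sanity-check; the helper below is only for illustration:

def interpolated_fps(original_fps=12, inbetweens_per_gap=1):
    # Inserting N generated in-betweens into every gap between consecutive
    # frames roughly multiplies the frame count, and therefore the playback
    # rate over the same duration, by N + 1.
    return original_fps * (inbetweens_per_gap + 1)

assert interpolated_fps(12, 1) == 24  # one in-between per gap: 12 fps -> 24 fps
assert interpolated_fps(12, 3) == 48  # three per gap would reach 48 fps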

  • @sahilshah-ut2yn
    @sahilshah-ut2yn • 3 days ago

    Ebsynth?

  • @bender203
    @bender203 • 4 days ago

    E, this is interesting, and I agree it will only get better. Add consistent characters, ComfyUI style transfer, additional interpolation and upscaling, and suddenly anyone with a little technical skill and a story to tell has the means and opportunity to release their work to the world. "What a time to be alive"

  • @RhapsHayden
    @RhapsHayden • 4 days ago

    Has anyone tried a realistic vision model yet?

    • @mrhappycamper1881
      @mrhappycamper1881 • 3 days ago

      I just used this, but uploaded two real stills of me snowboarding and it managed to link the in between frames really well and make a video of me doing a turn.

  • @J.S.Wilson
    @J.S.Wilson • 4 days ago

    This looks extremely useful even in its current state to create fluid effect animation, like fire, water, cloth, hair, magic, etc. Very nice!

  • @DJBFilmz
    @DJBFilmz • 4 days ago

    This is what I've been waiting for! Thanks for covering all this, man, love the vids.

  • @estrangeiroemtodaparte

    I'm making a 2-minute-long anime where I use ToonCrafter on parts of it, and other techniques for other parts. Not sure if you're interested in that! Thanks for the vid as always!

    • @enigmatic_e
      @enigmatic_e • 4 days ago

      That sounds exciting. I’d love to see some clips if possible.

  • @TheJovialBrit
    @TheJovialBrit • 4 days ago

    Runway is crap! I manually paint parts out, and it's like the A.I. has learned which parts need to be removed but then it STILL keeps them in!

  • @eyevenear
    @eyevenear • 4 days ago

    INCREDIBLE TOOL!

  • @jzwadlo
    @jzwadlo • 4 days ago

    "Andddddddddddddddd everyone at MAPPA breathed a huge sign of relief*

  • @terriermonisgod
    @terriermonisgod • 4 days ago

    What I love about this is that it just makes the prior animation pipeline much more efficient by automating in-betweening. I'm sure eventually we will be able to lead it better with 3D or sketches for the in-betweening. I think eventually all digital art pipelines will be enhanced with AI.

    • @enigmatic_e
      @enigmatic_e • 4 days ago

      Yea I feel that’s where it’s going!

  • @cowlevelcrypto2346
    @cowlevelcrypto2346 • 4 days ago

    Have not seen your channel pop up in my feeds in quite a while. Nice to see you are still active!

    • @enigmatic_e
      @enigmatic_e • 4 days ago

      Thanks!! I wasn’t uploading a lot lately. Thanks for sticking around 🙏🏽

  • @madpencil_
    @madpencil_ • 4 days ago

    I tried it on both Hugging Face and Fofr's Replicate, it's amazing 🤩

  • @NotThatOlivia
    @NotThatOlivia • 4 days ago

    How to optimize VRAM usage by ToonCrafter??? That's what is bothering me...

  • @HerraHazar
    @HerraHazar • 6 days ago

    I am just getting some garbled output; it seems to change every frame, nothing at all like the cartoony stuff in your example. What I am missing is the 3D Samaritan checkpoint. Where can I download that? Should it be a .ckpt? I can only find a safetensors file.

  • @evgenika2013
    @evgenika2013 • 6 days ago

    Clean and wide interpretation. Thanks!

  • @user-ld3si3zs9o
    @user-ld3si3zs9o • 8 days ago

    Does anybody know how to take an original animated or comic character and make it human??

  • @bigdaveproduction168

    Teach us please 🥹

  • @the_algo
    @the_algo • 9 days ago

    this is awesome 👌🏼

  • @KonnorGann
    @KonnorGann • 9 days ago

    Woah how was this made??

  • @jostclaassen2198
    @jostclaassen2198 • 9 days ago

    Brilliant!

  • @imtoasty8748
    @imtoasty8748 • 9 days ago

    I LOVE THIS!!! ❤

  • @sutyesz96
    @sutyesz96 • 11 days ago

    bruh, 00:32 came out of pocket 🥶

  • @QuentinWinters521
    @QuentinWinters521 • 12 days ago

    I subscribed

  • @_RobertOnline_
    @_RobertOnline_ • 12 days ago

    Been using Suno, it's just crazy where we are at.

  • @FM-zp2hl
    @FM-zp2hl • 14 days ago

    Nice video there. Please, is it possible to use regional prompts in Warpfusion? I'm striving to process a video with multiple characters appearing in the same frames. Is it feasible to utilize latent coupling, regional prompting, or inpainting masks in Warpfusion to incorporate multiple LoRAs for different characters within these frames? If so, could you please guide me on how to achieve this in Warpfusion? Thanks

  • @wesallenmedia
    @wesallenmedia • 14 days ago

    What is the specific file to download from the huggingface site?

  • @EinMann123
    @EinMann123 • 15 days ago

    Why are you not showing how to do the autocrop thing? That's why we clicked on this video... what a waste of time

    • @enigmatic_e
      @enigmatic_e • 15 days ago

      It does it automatically when you add the alpha matte. That's the whole point of the matte; I explain it in the video.

  • @alexebcy
    @alexebcy • 16 days ago

    HELP PLS :/ All my Video Combine nodes are red :/
    Failed to validate prompt for output 281:
    * (prompt): Return type mismatch between linked nodes: frame_rate, INT != FLOAT
    * VHS_VideoCombine 281

  • @MisterCozyMelodies
    @MisterCozyMelodies • 17 days ago

    There is a problem these days: when you update the IPAdapter, this workflow doesn't work anymore. Do you know how to fix it, or where to get the new workflow with the updated IPAdapter?

    • @MisterCozyMelodies
      @MisterCozyMelodies • 17 days ago

      Never mind!! I followed some of the comments here and found the answer. Works great, nice tutorial, thanks a lot

    • @enigmatic_e
      @enigmatic_e • 17 days ago

      I’ll try to upload an updated version today

  • @maximoirurueta5180
    @maximoirurueta5180 • 17 days ago

    Excellent tutorial, thanks bro! But I have a question: can it be used on an AMD PC?

    • @enigmatic_e
      @enigmatic_e • 17 days ago

      It’s very slow with AMD unfortunately

  • @hheelloimyoursweetheeth

    What an awesome software, I would love to try it! I'm using stylar right now because it has so many different styles, hopefully next time there will be a tutorial about stylar! 🤗🤗

  • @TheJan
    @TheJan • 18 days ago

    yoo 7:35 is incredible.. we in the matrix fellas

  • @miukatou
    @miukatou • 19 days ago

    Thank you, bro! But I have a 3070 Ti. Am I able to run these programs normally?

    • @enigmatic_e
      @enigmatic_e • 17 days ago

      Thank you! Not sure, I would just test it and see.

  • @larziz573
    @larziz573 • 19 days ago

    This looks really interesting, but I don't understand how you get hand positioning inside the mask (when hands go in front of the body) without a ControlNet. I would have used a depth map, but I guess you have another solution that I'm not catching.

  • @fpvx3922
    @fpvx3922 • 21 days ago

    Useful, Thanks

  • @johnny5805
    @johnny5805 • 22 days ago

    It's great that Krea has ZERO censorship. But its AI must be in training, because it is dumb as eff, especially the AI generation from text.

  • @FurryNonsense
    @FurryNonsense • 22 days ago

    You forgot to show step 1.2, how to install PyTorch.

  • @metaverse3090
    @metaverse3090 • 24 days ago

    nice video bro

  • @ZIOJONES
    @ZIOJONES • 24 days ago

    I get this error:

    !!! Exception during processing !!! ImagingCore.getbbox() takes no arguments (1 given)
    Traceback (most recent call last):
      File "C:\code\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "C:\code\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "C:\code\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "C:\code\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 7700, in mask_crop_region
        region_mask, crop_data = self.WT.Masking.crop_region(mask_pil, region_type, padding)
      File "C:\code\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 1471, in crop_region
        bbox = mask.getbbox()
      File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 1325, in getbbox
        return self.im.getbbox(alpha_only)
    TypeError: ImagingCore.getbbox() takes no arguments (1 given)
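
A hedged guess at the cause: Image.py passing alpha_only into an ImagingCore.getbbox() that does not accept it usually means the Python half and the compiled half of Pillow inside the portable environment come from different versions (for example after a partial upgrade). A quick way to check, before reinstalling Pillow inside python_embeded:

import PIL
from PIL import Image

# If the version or the file path is not what you expect (e.g. an old Pillow
# left behind in python_embeded\Lib\site-packages), the install is likely
# mixed or stale, and reinstalling Pillow there is a reasonable first fix.
print(PIL.__version__)
print(Image.__file__)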