The Local Lab
How To Run Flux Dev & Schnell GGUF Image Models With LoRAs Using ComfyUI - Workflow Included
Discover the latest in open-source image generation with our deep dive into the new GGUF quantization method for image generation models! In this video, we explore how GGUF is revolutionizing the efficiency of Flux models, allowing for lightning-fast image processing even on less powerful hardware. Watch as we demonstrate the impressive speed improvements, cutting processing times dramatically while maintaining exceptional quality. Plus, learn how to set up these advanced models in ComfyUI and integrate your favorite LoRAs from Civitai. Don't miss out: links for models and workflows are in the description!
🔗 Links
ComfyUI Github Repo - github.com/comfyanonymous/ComfyUI
ComfyUI GGUF Github Repo - github.com/city96/ComfyUI-GGUF
Flux Models -
huggingface.co/city96/FLUX.1-dev-gguf/tree/main
huggingface.co/city96/FLUX.1-schnell-gguf/tree/main
Clip Models -
huggingface.co/openai/clip-vit-large-patch14/tree/main
huggingface.co/comfyanonymous/flux_text_encoders/tree/main
Flux Vae -
huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/vae
huggingface.co/black-forest-labs/FLUX.1-schnell/tree/main/vae
(TO USE LORA: Connect the GGUF model loader node to the LoRA node, then connect the LoRA node to the KSampler node. Be advised that a LoRA must always be loaded for this modified workflow to run. If you no longer want to use a LoRA, revert to the default workflow.)
xLabs Realism Lora - huggingface.co/XLabs-AI/flux-RealismLora/tree/main
Flux ComfyUI GGUF Workflow:
drive.google.com/file/d/1lWsaVtESydGudEakK0CW9C28qHSVOlcu/view?usp=sharing
Local Lab Twitter - TheLocalLab_
Support the Channel - buymeacoffee.com/thelocallab
#fluxai #comfyui #stablediffusion #fluxgguf #aiart
views: 7,175

Video

How to Run Flux Image Models In ComfyUI with Low VRAM
8K views · 14 hours ago
Discover the groundbreaking FLUX models and learn how to harness their potential, even with a limited GPU. In this video, we'll dive deep into the world of AI art, exploring the stunning capabilities of FLUX and how it's revolutionizing the creative landscape. Key Highlights: - Understand the FLUX model family: Schnell, Dev, and Pro - Learn about the brilliant minds behind FLUX at Black Forest ...
Run Local AI Agents With Any LLM Provider - Anything LLM Agents Tutorial
716 views · 1 day ago
Unlock the power of AI with AnythingLLM! In this video, we dive into the impressive features of AnythingLLM, an all-in-one AI application that supports multiple AI providers like OpenAI and Anthropic Claude. Explore its robust capabilities, including seamless document processing, speech-to-text features, and built-in agent tools for web scraping, web-browsing, chart generation, RAG memory, and ...
Llama 3.1 405B Artifacts: Code Entire Apps With One Prompt Locally - Llama Coder
1.7K views · 14 days ago
Discover LlamaCoder, an AI-driven coding project that generates entire applications with a single prompt. Learn how this open-source tool, powered by Meta AI's Llama 3.1 405B model and Together AI, revolutionizes coding. Perfect for developers and AI enthusiasts, LlamaCoder supports multiple models and is free to get started with. Watch now to see its incredible capabilities and how to run it l...
How To Run Your Llama 3 1 Models With Open WebUI Web Search Locally
5K views · 21 days ago
Meta's Llama 3.1 model collection is revolutionizing open-source AI. With 405 billion parameters, multilingual support, and extended context length, this model is a game-changer. Learn how to harness its power with Open WebUI, a user-friendly platform that lets you run AI models locally with integrated web search. Stay tuned for a tutorial on how to combine Llama 3.1 with Open WebUI's web search...
Remove Unwanted Objects and Backgrounds From Photos with AI App
198 views · 28 days ago
Discover IOPaint, an open-source AI-powered photo editing powerhouse that lets you effortlessly remove objects, replace backgrounds, and upscale images with cutting-edge models like LaMa and Stable Diffusion, all locally on your computer. Local Lab Twitter - TheLocalLab_ Support the Channel - buymeacoffee.com/thelocallab Github Repo - github.com/Sanster/IOPaint
Best Real-time AI NPC Games - Download and Play Now on Steam
1.1K views · 1 month ago
Explore the future of gaming with AI-powered NPCs! From sci-fi action RPGs to medieval strategy games, detective thrillers, and more, we're diving into 5 games that are changing the game with real-time AI interactions. Learn, adapt, and challenge yourself like never before with these immersive experiences. Download and play now on Steam. Cygnus Enterprises - store.steampowered.com/app/1963520/Cyg...
Easily Convert Text to Audio Using Free AI Voice Generator
635 views · 1 month ago
Get ready to be amazed by the power of AI voice cloning! In this video, we'll explore XTTS-WebUI, a revolutionary tool that lets you generate incredibly realistic speech using cutting-edge text-to-speech synthesis. With XTTS-WebUI, you can create professional-sounding voiceovers, audiobooks, and accessible applications for visually impaired users - all for free and on your own computer! Learn ho...
Create A Local AI Voice Assistant With A Customizable Persona
211 views · 1 month ago
We'll show you how to build your own interactive AI assistant with voice recognition, natural language processing, and a dash of personality. This isn't just a chatbot - it's a fully functional AI assistant that runs locally on your machine. Follow along as we take you through a step-by-step guide on how to install and set up the project, from installing Miniconda to configuring the AI model. W...
How to Animate Your Pictures and Photos with Live Portrait: Give your Image Life!
2.5K views · 1 month ago
Revolutionize your portraits with Live Portrait, the AI tool that brings still images to LIFE! Learn how to animate any portrait with realistic expressions, head movements, and lip-sync using this cutting-edge technology. From social media profiles to historical figures, custom avatars, and artistic masterpieces, the possibilities are endless! Follow along with our step-by-step guide to install L...
Microsoft Phi-3 Mini June 2024 Update - Beats Large Models in Long Context
478 views · 1 month ago
How to Translate Videos with AI-Powered Video Dubbing to English or ANY Other Language.
1.6K views · 1 month ago
Easy Open-WebUI + LM Studio Tutorial: Free & Local ChatGPT Alternative
4.1K views · 1 month ago
Civit AI Bans All Stable Diffusion 3 Models: UPDATE News
89 views · 1 month ago
Easy Step by Step Guide To Use Any Open Source AI LLM
168 views · 11 months ago
Unmasking WizardLM 1.0 - Llama 2 Uncensored: Exploring AI Dialogue Potential
317 views · 1 year ago
Game Changer Alert! Testing Replica's AI-Driven Smart NPCs in Epic Matrix Awakens Demo!
431 views · 1 year ago
LLM Showdown: Testing StableBeluga 13B GGML Model - CPU Power Unleashed!
121 views · 1 year ago

Comments

  • @kirubeladamu4760
    @kirubeladamu4760 · 3 hours ago

    Fixes for issues not mentioned in the video: remove the '|pysssss' string on line 143 of the workflow JSON file, and rename the 'diffusion_pytorch_model.safetensors' file you downloaded to 'flux_vae' before adding it to the vae folder.
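The JSON edit described above can be sketched in a few lines of Python. This is a minimal sketch assuming the standard ComfyUI workflow format, where each entry in "nodes" carries a "type" field; the inline `workflow` dict is a hypothetical stand-in for the downloaded file:

```python
import json

# Hypothetical stand-in for the downloaded workflow JSON; a real file
# would be read with json.load() instead.
workflow = {
    "nodes": [
        {"id": 1, "type": "UnetLoaderGGUF"},
        {"id": 2, "type": "LoraLoader|pysssss"},  # suffix added by the pysssss custom-scripts pack
    ]
}

# Strip the "|pysssss" suffix so ComfyUI uses the stock LoraLoader node.
for node in workflow["nodes"]:
    node["type"] = node["type"].removesuffix("|pysssss")

print(json.dumps(workflow["nodes"]))
```

Alternatively, installing the pythongosssss/ComfyUI-Custom-Scripts pack provides the `LoraLoader|pysssss` node directly, so no JSON edit is needed.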

  • @Hood_History_Club
    @Hood_History_Club · 6 hours ago

    "You have to manually link the 'Load Lora' node to the 'KSampler' via the model link." I don't have telepathy. How do I do this? The nodes don't match. And should I use the Lora Loader with the snake or without the snake? Remember, I can't read your minds.

    • @TheLocalLab
      @TheLocalLab · 5 hours ago

      It's all a learning process, my guy. I don't know what you meant by "the snake," but to link the nodes, simply click and drag a link line from the purple "model" dot on the Unet model loader to the purple "model" dot on the left of the Lora Loader node, then connect the purple "model" dot on the right of the Lora node to the purple "model" dot on the left of the KSampler. It's a piece of cake.

  • @KITFC
    @KITFC · 7 hours ago

    Thanks, but I got an error: "Warning: Missing Node Types. When loading the graph, the following node types were not found: LoraLoader|pysssss". Do you know how to fix this?

    • @TheLocalLab
      @TheLocalLab · 7 hours ago

      Another commenter had this same issue, and his solution was to modify the workflow JSON file and remove the "|pysssss" in the model loader section. You can open the file in Notepad or VS Code and see if it works for you as well.

    • @KITFC
      @KITFC · 3 hours ago

      @@TheLocalLab thanks, it worked!

  • @oszi7058
    @oszi7058 · 8 hours ago

    I only get blue pixels.

  • @rickytamta87
    @rickytamta87 · 9 hours ago

    It works! Thank you!!

  • @zdrive8692
    @zdrive8692 · 12 hours ago

    On Apple Silicon (M1 Pro), no matter what, it always outputs green/blue/black boxes or small triangular shapes, like a '90s TV with no signal. I tested all the Q models with both CPU and GPU and it doesn't work... it seems to be for Windows with an Nvidia GPU only.

  • @FlorinGN
    @FlorinGN · 12 hours ago

    Gorgeous tutorial! Thank you! :D

  • @Kapharnaum92
    @Kapharnaum92 · 13 hours ago

    Hi, thanks a lot for your video. Very clear. However, when I start ComfyUI, I have the following error: Missing Node Types > LoraLoader|pysssss. Any idea how to solve this?

    • @TheLocalLab
      @TheLocalLab · 12 hours ago

      Try updating your ComfyUI.

    • @Kapharnaum92
      @Kapharnaum92 · 11 hours ago

      @@TheLocalLab I updated and it still didn't work. I then modified your JSON file, removed the "|pysssss" part, and it worked.

    • @TheLocalLab
      @TheLocalLab · 10 hours ago

      Interesting, you're the only one who has told me this, but I'm glad it's working for you now. Enjoy.

  • @xD3NN15x
    @xD3NN15x · 13 hours ago

    Thx! But I get an error when trying to use it:
    Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'
    File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 130, in load_unet
    model = comfy.sd.load_diffusion_model_state_dict(

    • @TheLocalLab
      @TheLocalLab · 13 hours ago

      You have to update your ComfyUI, either through the ComfyUI Manager and a restart (recommended), a git pull via the command line, or by just installing the latest version.

  • @Huang-uj9rt
    @Huang-uj9rt · 17 hours ago

    As a beginner I also have to say that your videos are really friendly, thank you very much. Because of my professional needs and the high learning threshold of Flux, I'd been using mimicpc to run Flux before; it can load the workflow directly, I just have to download the Flux model, and it handles the details wonderfully. But after watching your video and running Flux again, I finally had a different experience. I feel like I'm starting to get the hang of it.

  • @panzerswineflu
    @panzerswineflu · 21 hours ago

    I'm going to have to clone the repo, go through the steps, and see if it works for me. I've had the portable version and can't get it to run, and I've seen at least one comment about that being an issue. The same checkpoint works fine in Forge but runs out of RAM in ComfyUI.

  • @rogersnelson7483
    @rogersnelson7483 · 1 day ago

    What type of LoRAs can you use with the GGUF workflow? The Flux LoRAs I tried from Civitai (Flux1 D) have no effect even when the trigger words are used.

    • @TheLocalLab
      @TheLocalLab · 15 hours ago

      Just an FYI: if you used my workflow, be aware that you have to connect the GGUF model loader node to the LoRA node, then connect the LoRA node to the KSampler node, for the LoRAs to actually take effect.

  • @user-vh2up5lx4v
    @user-vh2up5lx4v · 1 day ago

    Thanks a lot!! This video really saved me. I'd been puzzling over this question for a few days! Thank you very much!

  • @stephnocean1095
    @stephnocean1095 · 1 day ago

    Hello, thank you for this enlightening video. As the owner of an AMD graphics card, do you know how to configure it with Zluda under ComfyUI? I've seen a few tutorials but they're hardly explicit. Greetings from France.

    • @TheLocalLab
      @TheLocalLab · 1 day ago

      Unfortunately I do not, as I'm an Nvidia card holder myself.

  • @dDoOyYoOuUtTuUbBeE

    Again, an application spreading a lot of files across my system when Windows already has the frameworks/libraries needed to build such an application.

  • @alifrahman9447
    @alifrahman9447 · 1 day ago

    Hey man... I've installed everything accordingly, but UNET LOADER (GGUF) gives me this error every time: Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'. I'm using the flux1-dev-Q6_K.gguf file. Tried a different workflow, same error... everything is updated.

    • @superfeel1275
      @superfeel1275 · 1 day ago

      You don't have the latest Comfy version. In the Comfy folder, open cmd and run "git pull".

    • @TheLocalLab
      @TheLocalLab · 1 day ago

      Yeah, I think you need to update your ComfyUI. I would also look into installing the ComfyUI Manager to make updating and installing new nodes a breeze.

    • @alifrahman9447
      @alifrahman9447 · 1 day ago

      @@TheLocalLab already done it man, still the same error. Can't find a solution; there's a pink border on the UnetLoader. Update: Thanks, it worked!

    • @alifrahman9447
      @alifrahman9447 · 1 day ago

      @@superfeel1275 thanks man, it worked! I had updated through the manager, but it was updating via cmd that did it 😊😊

  • @cyanideshep7288
    @cyanideshep7288 · 1 day ago

    THANK YOU!!!! This is the first tutorial that worked after searching for so long. Very clear and well put together :)

  • @darajan6
    @darajan6 · 1 day ago

    Hi, I wonder if a 3070 8GB card + 64GB RAM could run this workflow?

    • @TheLocalLab
      @TheLocalLab · 1 day ago

      My friend, you can for sure run this and more with those specs. You should have no issues.

  • @DaniDani-zb4wd
    @DaniDani-zb4wd · 1 day ago

    I wanna see a comparison... what is the drop in quality between versions?

  • @AmerikaMeraklisi-yr2xe

    How much GPU RAM do I need for 1024x1024 px?

    • @TheLocalLab
      @TheLocalLab · 1 day ago

      Well, it depends on the quant you use and the system RAM you have, but anything over 3GB of VRAM can produce a 1024x1024 image with the right quant. You'd probably just wait longer if you're using less VRAM.
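For a rough sense of how the quant choice maps to memory, weight size scales with bits per weight. A back-of-envelope sketch, assuming FLUX's roughly 12B-parameter transformer and llama.cpp-style nominal bits-per-weight figures (both are assumptions, not measurements of these exact files):

```python
# Approximate weight memory for a ~12B-parameter model at common GGUF
# quant levels. PARAMS and the bits-per-weight table are rough
# assumptions, not measured values.
PARAMS = 12e9
BITS_PER_WEIGHT = {"Q4_0": 4.5, "Q5_K_S": 5.5, "Q6_K": 6.6, "Q8_0": 8.5, "F16": 16.0}

for quant, bpw in BITS_PER_WEIGHT.items():
    gib = PARAMS * bpw / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{quant}: ~{gib:.1f} GiB")
```

Actual VRAM use is higher once the text encoders, VAE, and activations are loaded, and ComfyUI can offload parts to system RAM, which is why low-VRAM cards still work, just more slowly.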

  • @erans
    @erans · 1 day ago

    1.70 it/s (around 30 seconds) per 512x512 generation on an RTX 3060 Ti.

  • @BBFanPakistan
    @BBFanPakistan · 2 days ago

    Brother, I'm getting an error: accept the license agreement for using pyannote 2.1.

    • @TheLocalLab
      @TheLocalLab · 1 day ago

      Yes, you have to accept the model licenses on Hugging Face to download and use the models.

    • @BBFanPakistan
      @BBFanPakistan · 1 day ago

      @@TheLocalLab please make a video on how to do that

    • @TheLocalLab
      @TheLocalLab · 1 day ago

      @@BBFanPakistan All you have to do is click the model links in the GitHub repo, sign in to your Hugging Face account, and accept the licenses. It's very easy.

  • @schuss303
    @schuss303 · 2 days ago

    Thank you for the video. Does anyone know the best way to use a 155H with 8GB integrated graphics, 16GB 7600MHz RAM, an NPU, and a very fast hard drive? It's the 14th-gen Zenbook. Thank you for any info.

  • @TrevorSullivan
    @TrevorSullivan · 2 days ago

    One thing that I think is missing from this video is that you need to open the ComfyUI Manager and install the custom nodes from the "pythongosssss/ComfyUI-Custom-Scripts" package. Otherwise, the "LoraLoader" node fails to load in the pre-configured workflow.

    • @TheLocalLab
      @TheLocalLab · 1 day ago

      I'm a bit curious: did you already have Comfy installed, or did you install a fresh download?

    • @TrevorSullivan
      @TrevorSullivan · 1 day ago

      @@TheLocalLab I ran it from the Docker container provided by YanWenKun on GitHub. So it's a fresh ComfyUI environment.

  • @TrevorSullivan
    @TrevorSullivan · 2 days ago

    Which text-to-speech model are you using to generate these videos? It sounds really similar to some others I've heard.

  • @TrevorSullivan
    @TrevorSullivan · 2 days ago

    The photo of President Trump with a rifle is awesome! Nice one! 😉

  • @gtatuto4552
    @gtatuto4552 · 2 days ago

    One job and it fails... The Flux_Vae you have is not in the description, or you renamed it; either way it doesn't work.

    • @droidJV
      @droidJV · 1 day ago

      It's in the description; he just renamed it on his computer. It's the file called "diffusion_pytorch_model.safetensors".

  • @gtatuto4552
    @gtatuto4552 · 2 days ago

    Where is the VAE???

  • @alifrahman9447
    @alifrahman9447 · 2 days ago

    It's just... um... I'm confused by all the model versions! I have a 2060 12GB and I'm using the NF4 model; it takes 90 sec to generate a 1024x1024 image. I prefer quality over speed, though a little faster generation would definitely help, so which model should I choose, bro? AND please make a video with your own voice, man 🙂🙂👌👌

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      You can try the 6_K and the 8_0 quants and see how the output quality compares with the NF4. It's best to experiment to really find the sweet spot, especially since you can improve results with LoRAs, which is why I like the lower quants (4_0).

    • @alifrahman9447
      @alifrahman9447 · 2 days ago

      @@TheLocalLab thanks man, gonna try both

  • @antoniojoaocastrocostajuni8558

    Can I use Python and the diffusers library to run this model with code, instead of ComfyUI?

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      Well, the only two Python dependencies for the ComfyUI-GGUF extension node are gguf>=0.9.1 and numpy<2.0.0. You can try looking into the GGUF library and seeing if it's possible. Flux does have diffusers support, so maybe there's a chance if there's a way to load the GGUF models with the GGUF library, but I don't see how you would run that with the diffusers library.

  • @antiplouc
    @antiplouc · 2 days ago

    Unfortunately this has no effect on a Mac. No speed increase at all, and I tried all the GGUF models. Any idea why? Or is it simply not designed to work on Macs?

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      No, GGUFs are also compatible with macOS, but there could be a variety of reasons why you're not seeing speed increases, especially with the lower quants. There's just not enough information to really tell.

    • @antiplouc
      @antiplouc · 2 days ago

      @@TheLocalLab what information do you need? I have a Mac Studio M2.

  • @holopyt
    @holopyt · 2 days ago

    Hello!! Please help. Running it on Colab, an error pops up: "An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on." What have I missed?

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      Your Colab session could possibly have disconnected. If that ever happens, you usually have to start from the beginning, since Colab doesn't save any work for free users.

    • @holopyt
      @holopyt · 1 day ago

      @@TheLocalLab Thank you! Now I'm trying to install locally and have a bunch of errors popping up. The last one I can't solve: [aost#0:0 @ 0000024EEE330480] Unknown encoder 'libmp3lame' / [aost#0:0 @ 0000024EEE330480] Error selecting an encoder. I've spent all day searching and trying; do you have any advice?

    • @TheLocalLab
      @TheLocalLab · 1 day ago

      Did you install ffmpeg?

    • @holopyt
      @holopyt · 1 day ago

      Yes, different ways: pip install ffmpeg, conda install -y ffmpeg, conda install conda-forge::ffmpeg

    • @holopyt
      @holopyt · 1 day ago

      and manually
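For what it's worth, the "Unknown encoder 'libmp3lame'" error usually means the ffmpeg binary actually being invoked was built without MP3 support, which can happen when pip, conda, and system installs shadow each other. A small sketch (nothing project-specific assumed) to see which binary is first on PATH:

```python
import shutil

# Locate the ffmpeg that subprocesses will actually invoke; if it is not
# the conda-forge build you installed, PATH order is the likely culprit.
# Running `ffmpeg -encoders` on that binary then confirms libmp3lame support.
ffmpeg_path = shutil.which("ffmpeg")
if ffmpeg_path is None:
    print("ffmpeg not found on PATH")
else:
    print(f"ffmpeg resolves to: {ffmpeg_path}")
```

Note also that `pip install ffmpeg` installs an unrelated Python package, not the ffmpeg binary itself; the conda-forge build does include libmp3lame.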

  • @AcamBash
    @AcamBash · 2 days ago

    There is a pretty important error in your workflow: you have to manually link the "Load Lora" node to the "KSampler" via the model link. Otherwise the LoRA won't be applied.

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      I understand what you mean, but honestly I'd rather keep the use of LoRAs optional. Maybe I should've mentioned this in the video. If I'd attached the LoRA node in the workflow, you would have to use it in order to generate, or detach the node manually if you don't.

    • @AcamBash
      @AcamBash · 2 days ago

      @@TheLocalLab Okay, it's all good. Watching the video, I thought you used a LoRA and wondered why it didn't work for me. I see you've included a hint now.

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      Yes yes, I'll be sure to mention that again in the future. Hope you're enjoying these GGUFs.

    • @Hood_History_Club
      @Hood_History_Club · 6 hours ago

      @@AcamBash we all did. Not sure how to "connect" the LoRA node to whatever, because the nodes don't match.

  • @gsudhanshu3342
    @gsudhanshu3342 · 2 days ago

    Can you do a similar type of video for Forge?

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      Could be a possibility in a future video.

    • @GenoG
      @GenoG · 1 day ago

      @@TheLocalLab Me too please!! 😘

  • @didichung4377
    @didichung4377 · 2 days ago

    The LoRA isn't working with this Flux GGUF version...

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      Connect the GGUF model loader node to the LoRA node, then connect the LoRA node to the KSampler node. Be advised that a LoRA must always be loaded for this modified workflow to run. If you no longer want to use a LoRA, revert to the default workflow.

  • @didichung4377
    @didichung4377 · 2 days ago

    The LoRA loader node is missing right here...

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      Look closer; the LoRA node is included in the workflow, towards the bottom left.

    • @Enigmo1
      @Enigmo1 · 2 days ago

      @@TheLocalLab It's not connected to the KSampler, so you're not getting any results out of it.

  • @NimmDir
    @NimmDir · 2 days ago

    Thank you for your work, it works great with your instructions <3

  • @zikwin
    @zikwin · 2 days ago

    Why is there a background original voice... how do I remove that?

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      Go into your advanced settings and adjust the "Volume original audio" and "Volume translated audio" dials to what you need.

    • @zikwin
      @zikwin · 2 days ago

      @@TheLocalLab nice, thanks

  • @ceeespee2204
    @ceeespee2204 · 2 days ago

    19 hours straight with no sleep, food, or breaks, and not a single image, or a glimpse at a UI for that matter. It's just internet dumpster diving for convoluted code snippets that don't work. Even the official code on AMD's website is broken, and forget troubleshooting, that's not possible. I'm so hungry, tired, and tilted that this text is like walking barefoot on Legos to my eyes. This 3090 is just sitting there looking at me like I would ever consider putting it into one of my systems; I don't care if it would work if I did or not. Actually, I may just go give it the Office Space treatment with an Estwing hammer. That would make me feel much better, because this sadomasochistic Linux entropy is easily the most irritating thing I've ever dealt with in my life. Sorry I'm exploding on your page, I'm delirious, but no food or sleep until it works or I die trying. Well, time to wipe the partition and try again.

  • @KlausMingo
    @KlausMingo · 2 days ago

    AI is moving so fast; every day there's something new. It's hard to keep up and try everything.

    • @1lllllllll1
      @1lllllllll1 · 2 days ago

      There’s an AI that’ll keep up with progress and distill it all for you to consume once a week.

  • @Xplo8E
    @Xplo8E · 2 days ago

    I have an Nvidia 3050 4GB; will it run?

    • @TheLocalLab
      @TheLocalLab · 2 days ago

      You should be able to run one of the GGUF quants for sure.

  • @alex.nolasco
    @alex.nolasco · 2 days ago

    I assume the xLabs ControlNet is incompatible?

  • @SebAnt
    @SebAnt · 2 days ago

    WOW - great intro to the latest!!

  • @JamesPound
    @JamesPound · 2 days ago

    The fp8 t5xxl model gives less coherence and detail. Try a fixed seed with fp16.

  • @expaintz
    @expaintz · 3 days ago

    Very cool intro to GGUF!

  • @TheLocalLab
    @TheLocalLab · 3 days ago

    🔴Animate Your Pictures and Photos with Live Portrait 👉czcams.com/video/jFO5TB9tBIU/video.html

  • @MedinaCliff
    @MedinaCliff · 3 days ago

    Will this run on a Surface Pro 8 (Intel GPU, i7, 16GB)?

  • @MedinaCliff
    @MedinaCliff · 3 days ago

    Got to this point but it said no such file or directory???
    C:\ComfyUI\ComfyUI_windows_portable>python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\requirements.txt
    ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'ComfyUI\\custom_nodes\\ComfyUI_bitsandbytes_NF4\\requirements.txt'

    • @TheLocalLab
      @TheLocalLab · 3 days ago

      Go into that folder using your file explorer. Check to make sure the python_embeded folder is in the directory where you're running the command. It should be the same directory that has the run .bat files.

    • @MedinaCliff
      @MedinaCliff · 2 days ago

      @@TheLocalLab D:\ComfyUI\ComfyUI\custom_nodes\ComfyUI\custom_nodes>D:\AI\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\run_nvidia_gpu.bat
      'D:\AI\ComfyUI_windows_portable_nvidia' is not recognized as an internal or external command, operable program or batch file.
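Both path errors in this thread come down to running commands from the wrong directory. A quick pre-check sketch, assuming the portable build's folder layout (the root path here is a hypothetical example to adjust):

```python
from pathlib import Path

# Hypothetical portable-install root; point this at wherever the zip was extracted.
root = Path(r"C:\ComfyUI\ComfyUI_windows_portable")

# The file the pip command complained about: if this reports "missing",
# either the custom node folder was never created under custom_nodes or
# the root path (i.e. the working directory) is wrong.
req = root / "ComfyUI" / "custom_nodes" / "ComfyUI_bitsandbytes_NF4" / "requirements.txt"
print("found" if req.exists() else f"missing: {req}")
```

The second error above has a further cause: in cmd, a path containing spaces, like "ComfyUI_windows_portable_nvidia (1)", must be wrapped in quotes or it is cut off at the first space.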

  • @philjones8815
    @philjones8815 · 3 days ago

    Can anyone help? I had most of ComfyUI installed, but the 'run' files are missing from the folder... could this be a Python issue?

    • @TheLocalLab
      @TheLocalLab · 3 days ago

      Everything should be included after extracting the files. Try deleting the ComfyUI portable folder and extracting the files again with 7-Zip from the zip file you downloaded from the repo.

    • @philjones8815
      @philjones8815 · 3 days ago

      @@TheLocalLab Thank you for the fast reply. I have it working now... seems to be an issue with my stupid Alienware PC and Windows. Great video, and I look forward to your next tutorial.

    • @IamGhe
      @IamGhe · 1 day ago

      @@TheLocalLab Sorry to bother you, but I don't understand the part about the workflow script: where and how? Can you explain in more detail? Thx.