SDXL LoRA Training Without a PC: Google Colab and DreamBooth

  • Published 23 Jan 2024
  • 💻 GitHub Link To Auto Train Advanced: github.com/huggingface/autotr...
    ✨ Patreon prompt guide: / how-to-generate-96224373
    💻 Link to celebrity lookalike site: starbyface.com/
    ⚔️ Join the Discord server: / discord
    🧠 AllYourTech 3D Printing: / @allyourtech3dp
    👾 Follow Me on X: / blovereviews
    💻My Stable Diffusion PC: kit.co/AllYourTech/stable-dif...
    This guide contains everything you need to train your own LoRA (Low-Rank Adaptation) model for Stable Diffusion XL (SDXL) using Google Colab. That's right: you can train it for free, without a high-end gaming PC. This guide will show you how to train SDXL to generate images of yourself, or anyone else for that matter.
  • Science & Technology
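
The comments below repeatedly mention the settings used in the video (SDXL base model, 500 training steps on the free tier vs 2000 locally, a rare trigger word, the ~15 GB free-tier VRAM limit). As a rough sketch only — parameter names, the trigger token, and the untouched values here are illustrative assumptions, not the notebook's exact code:

```python
# Illustrative DreamBooth-LoRA settings for SDXL on a free Colab GPU.
# Names and values are assumptions for discussion, not the AutoTrain
# notebook's exact configuration.
training_params = {
    "model": "stabilityai/stable-diffusion-xl-base-1.0",  # base model the LoRA targets
    "project_name": "my-sdxl-lora",        # hypothetical project name
    "image_path": "images/",               # folder of 8-15 training photos
    "prompt": "photo of zwx person",       # rare trigger token so the LoRA has a unique handle
    "resolution": 1024,                    # SDXL's native resolution (assumption)
    "train_batch_size": 1,                 # keeps VRAM under the ~15 GB free-tier limit
    "num_steps": 500,                      # the quick free-tier run; 2000 recommended locally
    "lr": 1e-4,                            # assumption
    "mixed_precision": "fp16",
}

def estimate_minutes(steps, seconds_per_step=5):
    """Very rough wall-clock estimate; per-step time varies widely in the
    thread (40 minutes to ~2 hours for 500 steps)."""
    return steps * seconds_per_step / 60

print(estimate_minutes(training_params["num_steps"]))  # ≈ 41.7 minutes at 5 s/step
```

Forgetting the trigger word in the prompt field, or training against a different base model than you later generate with, are the two failure modes that come up most often in the thread below.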

Comments • 227

  • @allyourtechai (4 months ago, +1)

    ✨ Support my work on Patreon: www.patreon.com/allyourtech
    💻My Stable Diffusion PC: kit.co/AllYourTech/stable-diffusion-build

  • @seanklooster8245 (4 months ago, +1)

    Awesome tutorial - dig your channel!

  • @taintofgreatness (4 months ago, +2)

    Kickass tutorial man.

  • @nadiaprivalikhina4101 (4 months ago, +8)

    Wanted to say thank you for this video! I've been looking for a tutorial like this one and most of what I found is total BS on how to make money with fake influencers. Appreciate your detailed explanation and scientific approach to the topic

    • @allyourtechai (4 months ago)

      I really appreciate that, thank you!

    • @kayinsho2558 (3 months ago)

      Yeah, this is an amazing vid. How can we make alterations to our LORA and save them? Let's say the face needs to be thinner, wider, etc.

    • @Fanaz10 (a month ago)

      how is it bs lol? why u mad?

  • @ChrisChan126 (4 months ago, +1)

    Man, you're the best! I have a question: if the training got interrupted/stopped by accident, do I need to start everything all over again?

    • @allyourtechai (4 months ago, +1)

      Thank you! Yes if it fails and there is no progress or movement, you may need to restart unfortunately.

  • @TitohPereyra (4 months ago, +1)

    Thnx! I love this kind of video! I love automatic1111

  • @shankoty1 (2 months ago, +2)

    Please reply. I've tried 5 different times to train a LoRA, but when I install it in Fooocus it ignores the LoRA file and doesn't do anything. It was working very well before, and I managed to create many LoRAs with this method, but now it doesn't do anything. What is the problem? Can you please help me?

    • @allyourtechai (2 months ago, +1)

      I’ve had pneumonia for the past two weeks. I’ll try to look once I’m better

    • @shankoty1 (2 months ago)

      @@allyourtechai Oh, I'm sorry to hear that! Take all the rest you need, thanks for the reply, and I wish you a speedy recovery. ❤

    • @Fanaz10 (a month ago)

      did u figure it out?

  • @monstamash77 (4 months ago, +3)

    Awesome tutorial, thank you so much for sharing this video. It's going to help a lot of people like me with crappy GPUs.

  • @leilagi1345 (2 months ago)

    Hi! Really helpful video, thank you so much for the info!
    I'm wondering: if I want to train my own drawing style (a little flat, in a modern Japanese style but not anime, mostly full-body shots of girls or boys in the streets), what trigger word should I use? Just "drawing", or my own unique gibberish word like "uhfvuhfuh"?

  • @cheick973 (3 months ago)

    Hi, thank you so much for your good explanation. Everything went well for me during training, but at the end I don't see any output folder with a safetensors file. I tried several times. Any idea?

  • @vadar007 (2 months ago)

    Is there a recommended resolution size for the training images, e.g. 1024x1024?

  • @R3dBubbleButNerfed (3 months ago, +4)

    Your issue with style is forgetting to uncheck Fooocus V2, Enhance, and Sharp. They drive the model toward realism.

  • @_AlienOutlaw (2 months ago)

    I was pulling my hair out trying to figure out how to locally train on a Mac and eventually found this video. Thank you! One question - I used a 16 image dataset to see just how real of a headshot I could generate and I'm currently on 10h 5m. I ended up getting a Colab Pro subscription after my first attempt was halted at 6hrs. Any insight on large jobs like this? I'd hate to lose progress when sleeping lol

  • @edmoartist (4 months ago, +1)

    Great tutorial! Clear and to the point. Anyone know if you can input .txt files with captions instead of the ? Cheers

  • @esalisbery (3 months ago)

    Hello, just wanna ask if this works in training a specific card.

  • @user-qz3cc8ko2j (2 months ago, +10)

    I have tried following this guide step by step, but my LoRA doesn't do anything. I can download and add other LoRAs and they work perfectly, but not when I add my own.
    I am running Fooocus 2.2 on a Google Colab machine. My model is juggernautXL_v8Rundiffusion.safetensors and the LoRA was trained on stable-diffusion-xl-base-1.0.
    I followed the guide 1:1 and used DreamBooth LoRA, with 8 pictures of a celeb, and the prompt was the name of that celebrity. The training takes around 2 hours and completes correctly, but the output in my Fooocus looks nothing like my LoRA :( Can you help us?

    • @ea03941d (2 months ago, +1)

      have you tried using stable-diffusion-xl-base-1.0 as model since this is the one you used to train your lora?

    • @ea03941d (2 months ago, +3)

      Just finished my own lora, I have the exact same issue as you have. My lora is ignored and there are images generated of the celebrity I used for training. Base model SDXL 1.0

    • @user-qz3cc8ko2j (2 months ago)

      Yes, I tried that. Unfortunately it gives the same result. @@ea03941d

    • @mestomasy (2 months ago, +1)

      My LoRA doesn't have any effect either. I had very good data and used keywords, but it doesn't work with any model, including sd_xl_base. Very upset :(

    • @talvez_priest_wow3720 (2 months ago, +2)

      I made about 10 LoRAs with this guide 2-3 weeks ago and they worked fine. I was away for a week, and now my old LoRAs work but my new LoRAs don't do anything. I tried different training images and tried using them in Fooocus, A1111, Fusion, and ComfyUI; they don't work. They never showed up in the A1111/Fusion interface on the LoRAs tab, but they worked anyway. Now only what I trained 2-3 weeks ago works; the new ones do not.

  • @alirezaghasrimanesh2431 (4 months ago, +1)

    thanks for your great content! very helpful.

    • @allyourtechai (4 months ago)

      You are very welcome, thanks for watching!

  • @user-fn9dn1co5o (2 months ago)

    Does the link still work? I got disallowed in the middle of my training

  • @nobody_dude (2 months ago, +1)

    What can you do if the trained lora model is not visible in Stable Diffusion automatic 1111? Other xl loras are visible.

    • @allyourtechai (2 months ago)

      You can use anything other than a1111. A1111 doesn’t support the format yet

  • @MarkErvin-yg1kd (3 months ago)

    Great tutorial! I made it all the way through training, but when I try to access the PyTorch file, my file structure looks completely different. Mine is a long list of folders, starting with bin, boot, content, datalab, etc. I can't get it to go up a file menu to where yours is on screen. Any ideas?

    • @allyourtechai (3 months ago)

      There is no output folder with safetensors file? That’s odd

  • @Tokaint (4 months ago)

    So now it looks like me mixed with my celebrity lookalike. Is it because their name is in the prompt? Any way to have it look just like me?

    • @allyourtechai (4 months ago)

      How many images did you train with?

    • @Tokaint (4 months ago)

      @@allyourtechai I think 13 images, it gave me better results the second test when I just wrote my own prompt instead of a celeb look alike. Probably because I have a unique look

  • @rendymusa3527 (4 months ago)

    Do the photos to be prepared have to be the same size? Or can they be random?

    • @allyourtechai (4 months ago)

      Different sizes and aspect ratios are fine. You no longer need to crop all of the photos

  • @fundazeynepayguler8177 (4 months ago)

    Hello, thank you for the tutorial. I'm curious about how to use captions in this context. I have around 100 images with captions that I've prepared using Kohya, along with a considerable amount of editing afterward. I'm wondering if it's possible to use them.

    • @allyourtechai (4 months ago)

      I’ll do a tutorial :)

    • @Deefail (3 months ago)

      @@allyourtechai I need this so that I can train an art style

    • @JieTie (3 months ago)

      Any luck with those caption .txt files? :)

  • @guttembergalves3996 (4 months ago, +1)

    Thanks for the video. I'm training my model right now, following his tips. Now the question I have is: do you know of any colab that I can run this .safetensor to generate the images based on the model I just trained? Thanks again and good luck.

  • @Productificados (28 days ago)

    What if I forgot to put custom words in the "enter your prompt here" section? :(

    • @allyourtechai (28 days ago)

      I don’t think you will be able to use the files generated in that case. There would be no trigger to prompt the system to use your LoRA

  • @ShashankBhardwaj (2 months ago)

    My Google Colab is stuck on this error after loading the 4/7th pipeline component:
    INFO: 2401:4900:1c31:6d1e:3d79:ce0c:9144:588d:0 - "GET /is_model_training HTTP/1.1" 200 OK
    INFO: 2401:4900:1c31:6d1e:3d79:ce0c:9144:588d:0 - "GET /accelerators HTTP/1.1" 200 OK
    It repeats this every few seconds. Help please?

  • @MindSweptAway (4 months ago, +6)

    Coming from your awesome Fooocus Colab tutorial! When it finishes doing the steps, it keeps repeating something along the lines of "Running jobs: []" followed by "GET /is_model_training HTTP/1.1" in the output for a few hours. Is it supposed to do that? My dataset contains around 50-100 images.

    • @allyourtechai (4 months ago, +1)

      That many images would likely take 10-20 hours to train. I haven’t ever tried that large of a data set on colab. Does it show a progress percentage at any point?

    • @MindSweptAway (4 months ago, +2)

      @@allyourtechai It does give me a percentage at the beginning, but when it finishes it keeps outputting "Running jobs" with no percentage at all. I think it's because I used Firefox, which is usually behind problems like this, so I might try running Colab in Chrome from now on. Thanks for listening! 😊

    • @DerKapitan_ (4 months ago, +4)

      @@allyourtechai I'm having the exact same issue here, except I'm using Chrome and I only used a dataset of six images, following the same steps and settings outlined in the video. It took about 1 hour and 45 minutes for all 500 training steps to complete, but after that, it gets stuck executing system() > _system_compat() > _run_command() > _monitor_process() > _poll_process()
      and it remains repeating “Running jobs: [],” followed by “Get /is_model_training HTTP/1.1” at the four-hour mark.

    • @utkucanay (3 months ago, +1)

      @@DerKapitan_ Did you find any solution?

    • @DerKapitan_ (3 months ago, +3)

      @@utkucanay It turned out that it did create a LoRA file that I could download and use before it got stuck "running jobs". It didn't work well when I tried to use it, but I don't know if that was because of the glitchy process or poor training settings.

  • @wordsofinterest (4 months ago, +1)

    Do you prefer training this way or does Kohya produce better results?

    • @allyourtechai (4 months ago, +1)

      Kohya provides more flexibility with regularization images and higher training steps assuming you have the VRAM. Generally you will get a higher quality result from local training unless you spend money on a larger memory colab instance.... But depending on your use case, the quick, free training could be good enough. (I hope that helped answer)

  • @mastertouchMT (4 months ago, +1)

    Great tut!! One quick question: the LoRAs work fine with Fooocus, but they don't work in A1111?

    • @allyourtechai (4 months ago, +1)

      They seem to work everywhere but A1111, and I haven’t figured out why that is yet

    • @mastertouchMT (4 months ago)

      @@allyourtechai Odd. Also noticed that they have SD1 listed as the Stable Diffusion version, if that has something to do with it.

    • @allyourtechai (4 months ago, +1)

      @@mastertouchMT That's interesting. Worth digging into more. I'll see if I can find anything

    • @mastertouchMT (3 months ago)

      @@allyourtechai I just got it working in the new Forge platform. You have to go into the JSON file and change it to SDXL, and it's good to go!

  • @JieTie (3 months ago)

    Great tut, but you could explain how to add captions to images, or maybe how to check what caption was used during training.
    Edit: or maybe just drop in image1.png, image1.txt, image2.png, image2.txt and it will be fine?
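
The pairing the commenter is guessing at is the common kohya-style caption convention: each image gets a .txt file with the same stem containing its caption plus the trigger word. Whether this particular Colab notebook reads such files is left open in the thread; this sketch only shows the convention itself (the trigger word and helper name are illustrative assumptions):

```python
# Sketch of the kohya-style caption convention: image1.png is paired with
# image1.txt, whose text is the caption prefixed with the trigger word.
# Hypothetical helper; not part of the Colab notebook.
from pathlib import Path

def write_caption(image_path, caption, trigger="zwx style"):
    """Write a same-stem .txt caption file next to the given image."""
    txt_path = Path(image_path).with_suffix(".txt")
    txt_path.write_text(f"{trigger}, {caption}")
    return txt_path.name

# e.g. write_caption("dataset/image1.png", "girl walking down a city street")
# creates dataset/image1.txt containing "zwx style, girl walking down a city street"
```

Trainers that support this convention (kohya_ss and similar) match files purely by stem, so dropping the .txt files into the same upload folder as the images is exactly the pattern the comment describes.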

  • @lukejames3534 (2 months ago, +1)

    Couldn't train the model. I'm getting this error: "ImportError: cannot import name 'text_encoder_lora_state_dict' from 'diffusers.loaders' (/usr/local/lib/python3.10/dist-packages/diffusers/loaders/__init__.py)". Please help me resolve this.

  • @venom90210 (a month ago)

    My model came out as a JSON file. What did I do wrong?

  • @rpc8169 (3 months ago)

    Hi, thanks for the video! I tried this method, but when testing the LoRA in SD I'm getting images nothing like the training images. It's supposed to be a shirt, but I'm getting images of a beaver lol! Not sure what to do...

    • @allyourtechai (3 months ago)

      What was your trigger word and your dataset for the Lora?

  • @user-cx6rg6mr7d (28 days ago)

    04:55 Why are my training parameters fewer than the ones you showed? Did you use "full" rather than "basic"?
    I also have one extra parameter: "vae_model": "",

  • @miss69smart (3 months ago)

    Hi! First I want to say thank you so much. Your videos are actually done in a way that can be understood and followed, something I feel is needed in this space.
    I am having trouble getting this to work. I was able to follow all the directions in the video, and when I click train it seems like it should work: it says "success, you can check the progress of your training here, monitor your job locally/in logs". But then nothing else seems to happen. I came back after several hours and can't find anything; I looked in the files area and it doesn't seem to be anywhere. So my questions are: how do I know if it worked, and where is it? And if it didn't work, how can I find out why? Thanks in advance, I appreciate your help or any help from the community that might come across my comment.

    • @allyourtechai (3 months ago)

      First, thank you!
      Training typically takes 40-60 minutes from what I have seen. There should be an indication of progress down in the console at the bottom of the text when processing starts. I believe the free version of colab stops after 90 minutes, so it might be that your LoRA finished, but the colab shut down before you came back to download it. I usually try to stay close by when training for that reason.

    • @miss69smart (3 months ago)

      @@allyourtechai Well, the thing is, I upgraded to the first tier of the paid version and I don't see any indication of it doing anything after I get that success message. When I click on the message that says "success, you can now track the progress of your training here, monitor your jobs locally", it takes me to a blank page with a tiny bit of code that says "detail not found"?

    • @JC-rz4ym (2 months ago)

      You have to open the code in Colab to watch the log/progress. It should be in the folders in the side menu on the left; that's in the vid for reference. Otherwise, if you're using your local machine, there should be a cmd window open showing the log.

  • @culoacido420 (4 months ago, +2)

    My LoRA isn't working. I trained it with 11 images, and I tried using both celebs and my own token in the prompt, but it still doesn't work as intended. I use A1111 and the base SDXL 1.0 model, but the results look nothing like me (each generation is a completely different man; it goes from old white guy, to Asian kid, to muscular black man). I don't know what I'm doing wrong, any suggestions?
    I also tried using other LoRAs (not trained by me) and they all work beautifully.

    • @culoacido420 (4 months ago)

      I used 5 photos of my face only, 2 photos from my waist to my head, 2 full body shots, and 2 mirror selfies (which might not be the best but it's all I had)

    • @bullseye-stablediffusion8763 (3 months ago, +3)

      @@culoacido420 That's because this type of LoRA doesn't currently work in A1111.

  • @user-ve4zt1jn7d (3 months ago)

    The generated LoRA works great with Fooocus, but it doesn't do anything in A1111. Are you aware of this issue?

    • @allyourtechai (3 months ago)

      A1111 needs to update to allow for the format that colab puts out. Not anything I have control over unfortunately

  • @user-fz2ms2fx1y (3 months ago, +1)

    thanks

  • @Akashwillisonline (a month ago, +1)

    My LoRA is ignored when I generate (2) ☹ Please help! I did the same process before and it worked, but it is not working now. There are a few changes in the AutoTrain interface (e.g. there is something new in the training parameters section: "vae_model": "",). Idk what that is!

    • @MISTERPASTA12 (a month ago)

      I ran into the same issue. I noticed the slight differences in the training parameters, and my LoRA does not seem to have any effect. I wonder if it was updated and there is a simple fix. Just for context, I am running SDXL through ComfyUI.

  • @pollockjj (4 months ago)

    What size is the final Lora using the parameters you suggest? How does the size/quality compare to the local method you published earlier with those parameters?

    • @allyourtechai (4 months ago, +2)

      It’s about 23MB in size versus 1.7GB for the version trained locally. Part of the reason for that is the 15GB vram limit on the free version of colab. My local guide requires about 20gb of vram to train. I also used 2000 training steps locally versus 500 in colab.
      So, the local version is higher quality, but is it 100X higher quality? No!
      I’ll do some side by side tests and we can see :)

    • @levis89 (4 months ago)

      @@allyourtechai If I already have the lowest tier of Colab, what number of steps would you recommend for the best results? Also, some people suggest varying clothes, expressions, and environments in the sample photos for better results; do you agree with this?

    • @allyourtechai (4 months ago)

      I would go with 2000 steps for training for a better result. I would definitely try to get variations in both the expressions and clothing. Mine for example tends to put me in a grey polo since the bulk of my images were taken in a hurry with one set of clothes. Normally I try for varied lighting, clothing, etc to create the most flexible model possible.

    • @pollockjj (4 months ago)

      @@allyourtechai Thanks for the additional info. I was able to get your exact settings working on my 12 GB 4070, but I get how Colab is essentially a free video card, so I shouldn't complain. :)

    • @levis89 (4 months ago)

      @@allyourtechai Gotcha! Appreciate the reply, appreciate the content.
      Got a question I could really use your opinion on. If my final aim is to make a comic-art-style avatar of myself, should I think about training the LoRA on a different base? Something that has already been trained on the particular style I'm aiming for? I've read that SDXL and Juggernaut are designed for realistic images, and the Google Colab method has a fixed set of bases I can use. Any in particular that you would suggest for this?
      Either way, you have earned my sub, looking forward to future videos!

  • @apoorvmishra2716 (a month ago, +1)

    Thanks for the video, concise and very clear. But I am facing an issue which, judging from the comments, many others are facing as well. I created a LoRA using the above instructions (not on Colab, but on a GCP VM), but when I tried to use it in Fooocus with sd_xl_base_1.0 as the base model, the LoRA does not get loaded. Other LoRAs downloaded from civitai load and work perfectly.
    On debugging, I found that Fooocus expects LoRA keys in the following format:
    'lora_unet_time_embed_0', 'lora_unet_time_embed_2', 'lora_unet_label_emb_0_0', 'lora_unet_label_emb_0_2', 'lora_unet_input_blocks_0_0', 'lora_unet_input_blocks_1_0_in_layers_0', 'lora_unet_input_blocks_1_0_in_layers_2', 'lora_unet_input_blocks_1_0_emb_layers_1', 'lora_unet_input_blocks_1_0_out_layers_0', 'lora_unet_input_blocks_1_0_out_layers_3', 'lora_unet_input_blocks_2_0_in_layers_0', 'lora_unet_input_blocks_2_0_in_layers_2', 'lora_unet_input_blocks_2_0_emb_layers_1', 'lora_unet_input_blocks_2_0_out_layers_0', 'lora_unet_input_blocks_2_0_out_layers_3', 'lora_unet_input_blocks_3_0_op', 'lora_unet_input_blocks_4_0_in_layers_0', 'lora_unet_input_blocks_4_0_in_layers_2', 'lora_unet_input_blocks_4_0_emb_layers_1', 'lora_unet_input_blocks_4_0_out_layers_0', 'lora_unet_input_blocks_4_0_out_layers_3', 'lora_unet_input_blocks_4_0_skip_connection', 'lora_unet_input_blocks_4_1_norm', 'lora_unet_input_blocks_4_1_proj_in', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_q', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_k', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_v'
    Whereas the actual keys in the LoRA are in a slightly different format:
    'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora.up.weight'
    @allyourtechai do you know how to resolve this issue? Or anyone else, can anyone help in resolving this? Thanks!

    • @allyourtechai (a month ago)

      I haven't come across this one, but chatgpt did provide a way to map the keys properly:
      chat.openai.com/share/34c5ada6-f3b5-4ab8-8bb7-f92638d8e922

    • @apoorvmishra2716 (a month ago)

      Thanks for the reply. I found an easier solution: adding the following hyperparameter: --output_kohya_format
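
The mismatch described above is between diffusers-style LoRA key names and the kohya-style names that Fooocus and most UIs expect. As a rough illustration only (the real conversion is more involved: it also renames UNet blocks, e.g. down_blocks to input_blocks as in the Fooocus keys listed above, and handles text-encoder and alpha entries, which is why the `--output_kohya_format` flag is the practical fix), here is a sketch of this kind of renaming:

```python
# Illustrative only: flatten a diffusers-style LoRA key into a kohya-style
# one. Real converters also remap block names and handle more suffixes.
def diffusers_to_kohya_key(key: str) -> str:
    for suffix in (".lora.down.weight", ".lora.up.weight"):
        if key.endswith(suffix):
            module = key[: -len(suffix)]                    # e.g. "unet.down_blocks.1....to_k"
            kohya_suffix = suffix.replace(".lora.", ".lora_")  # ".lora_down.weight"
            # "unet" prefix becomes "lora_unet_", remaining dots become underscores
            prefix, rest = module.split(".", 1)
            return f"lora_{prefix}_{rest.replace('.', '_')}{kohya_suffix}"
    return key  # keys without a recognized suffix pass through unchanged

key = diffusers_to_kohya_key(
    "unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora.down.weight"
)
print(key)
```

In practice, emitting the kohya format at training time (as the reply suggests) is far more reliable than post-hoc renaming.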

  • @jjsc3334 (3 months ago)

    Some of my LoRAs disappeared, and clicking Refresh did not work. I went to extensions, then apply and update: still doesn't work. The disappeared LoRA files are still in the Lora folder. How do I fix that?

    • @allyourtechai (3 months ago)

      If you refresh, I believe it launches a new Colab instance and you lose anything related to the old one.

  • @ryxifluction6424 (4 months ago, +2)

    My results didn't come out that well, any troubleshooting tips? I got images of other people (they didn't look like the person I put in, or the celebrity).

    • @allyourtechai (4 months ago)

      How many images did you train with and which software are you using to generate images after training? Are you using base stable diffusion xl to generate the images?

    • @mattattack6288 (4 months ago)

      @@allyourtechai I was getting the same thing. I switched from automatic1111 to Fooocus, and it works now. For some reason Stable Diffusion is not recognizing the LoRA.

  • @palashkumbalwar4798 (2 months ago, +1)

    Hi, thanks for the tutorial.
    I tried generating a LoRA with the same method using 24 images, but when I tested it in Fooocus it didn't work.
    It's not at all generating the subject it was trained on.

    • @JC-rz4ym (2 months ago)

      yeah me too. i guess we're the user testers for this. lol

  • @baheth3elmy16 (2 months ago, +2)

    Just a note: 99% of the time, a free GPU connection is not available on Google Colab. In that case, the user must change the setting from fp16 to bf16.

  • @ItsSunny-ym5jh (4 months ago, +1)

    Hey, umm, how can I put this model on Hugging Face without downloading it?

    • @allyourtechai (4 months ago)

      If you have a Hugging Face account, it will automatically upload the LoRA to your account when the generation is complete.

  • @animatedjess (4 months ago)

    does this work for training for styles? what would I need to enter in the prompt field?

    • @allyourtechai (4 months ago)

      It does! You would just provide a prompt trigger that describes the style and ensure that trigger is also in your text annotation files for the pictures you use to train the model. It might be something like “neon glow style” for example

    • @animatedjess (4 months ago)

      @@allyourtechai in the google colab/ngrok app I don't see an option for text annotation. In the tutorial I just saw that you uploaded images only.

    • @Deefail (3 months ago)

      @@allyourtechai Are you saying that we can just upload .txt files alongside the images with the same name (but different extensions, obviously) and it will work?

  • @AmirAliabadi (3 months ago)

    Does this not need regularization images? That seems like an important part of LoRA training.

    • @allyourtechai (3 months ago)

      They help but are not required. Simply not an option at all in most cases when you aren’t training locally.

    • @AmirAliabadi (3 months ago)

      @@allyourtechai How about captioning? Does this support caption text along with the training images?

  • @animatedjess (4 months ago)

    I tried to use this method to train the SSD-1B model but I got an error while training. Have you tried training an SSD-1B model?

    • @allyourtechai (3 months ago, +1)

      I haven’t tried that yet. Let me see if I can get it to work though

    • @animatedjess (3 months ago)

      @@allyourtechai thanks!

  • @TheHeartShow (3 months ago, +1)

    Anyone having an issue using the LoRA produced on A1111? Every single LoRA shows up except the ones trained with this method on A1111.

    • @allyourtechai (3 months ago)

      I use fooocus or InvokeAI (or even comfyui). No idea why a1111 would have issues though.

  • @faraday8280 (25 days ago)

    Can you make a video on how to install and run it on local hardware rather than Google Colab?

    • @allyourtechai (25 days ago)

      Yep, I have a couple videos on that already :)

  • @Fggg-hz9tl (a month ago)

    Why does my finished model weigh only 20 MB?

  • @elmapanoeselterritorio7343 (a month ago)

    Hi, I followed all the steps carefully, but when I trigger the prompt (in my case "jim carrey man") I keep getting images of Jim Carrey and not me... What's the problem? Amazing content btw, thanks.

    • @elmapanoeselterritorio7343 (a month ago)

      I even trained the LoRA again with a combination of words that only represents me, and it's still not working.

    • @Fanaz10 (a month ago)

      same here bro.. did u figure it out?

  • @ImNotQualifiedToSayThisBut

    do we not need to tag the images anymore?

    • @allyourtechai (25 days ago)

      Ideally, yes, but running this remotely just doesn't allow for blip captioning.

  • @nochan6248 (4 months ago, +1)

    I get this error after I hit the run button:
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    lida 0.0.10 requires kaleido, which is not installed.
    The error is longer.

    • @allyourtechai (3 months ago, +1)

      Are you training with SDXL? It sounds like a missing package, but I'm unsure from the error. Can you also post the full error if you can?

    • @nochan6248 (3 months ago)

      @@allyourtechai Yes, SDXL. I could not copy-paste the whole error into chat; the YouTube algorithm likely detected it as spam. I did not save the error.

    • @JC-rz4ym (2 months ago)

      @@allyourtechai This error is probably from AutoTrain in Colab. I'm getting one for protobuf ==3.20.3, because it loads with protobuf 4.23.4 and says that's incompatible with tensorflow-metadata 1.14.0. This is the only error I haven't resolved yet, and I believe it's why my LoRA isn't generating images like my training set in Fooocus.
      I know it's probably obvious to programmers, but how can I get the right protobuf loaded onto the Colab machine? Can I just do pip install from that code box window above the log? Or does that need to be included in the code you run before getting the public URL for the UI? Sorry if I'm not using the right names.

  • @nicolas.c (2 months ago, +1)

    Excellent tutorial! Sadly, Google Colab keeps shutting down in the middle of training... like at 64% (training only 10 images). I've tried for several days. Any solution? Anyone? Thanks in advance!

    • @ea03941d (2 months ago, +3)

      Pay for colab

    • @ea03941d (2 months ago, +2)

      Or monitor the files menu more closely, the model should be finished before it disconnects if you use 10 images.

  • @onfire60
    @onfire60 4 months ago +1

    It's amazing to me how people just upload their images anywhere. Do you know where those images go and how they may be used after you upload them? I mean, this is really cool and all, but I'm not sure I would suggest people upload personal images of themselves to random sites, especially in this AI world. Just my opinion; take it or leave it. Cool tutorial though!

    • @allyourtechai
      @allyourtechai  4 months ago +3

      This is a cloud instance of a virtual machine. The files go to Google cloud, run on colab, then disappear the moment you disconnect and the virtual machine is destroyed. Pretty safe all things considered.

    • @onfire60
      @onfire60 4 months ago +1

      @@allyourtechai What about the site where you see what celebrity you look like?

    • @allyourtechai
      @allyourtechai  4 months ago +2

      Yep, that one for sure. In general, if you are doing a LoRA of yourself, chances are someone in your life has already told you who you look like, so it might not be necessary. Use your own judgment, of course, but good point.

  • @freeviewmedia1080
    @freeviewmedia1080 3 months ago

    What about the caption text files?

    • @allyourtechai
      @allyourtechai  3 months ago

      Unfortunately, that's a limitation of the Colab. I've been looking for alternatives, but so far this is one of the best I have found.

  • @davidelks7766
    @davidelks7766 4 months ago +1

    Thanks a lot 🤍 Keep going!

  • @240clay98
    @240clay98 2 months ago

    Can this be done on a Mac?

  • @olvaddeepfake
    @olvaddeepfake a day ago

    Wouldn't it be better to save a checkpoint to Google Drive every so often? I know I will come back, it will be disconnected, and the LoRA file will be gone.

    • @allyourtechai
      @allyourtechai  a day ago

      You definitely can, as an option. I always stick around during the training personally, but not everyone does.
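Editor's note: a minimal sketch of the save-to-Drive option discussed above, not part of the tutorial. It mounts Google Drive from a Colab cell and copies any `.safetensors` files it finds; the `/content` search root and the `MyDrive` destination are assumptions, so point them at your run's actual output directory.

```python
import pathlib
import shutil

def backup_safetensors(search_root, out_dir):
    """Copy every .safetensors file found under search_root into out_dir;
    return the list of copied file names."""
    copied = []
    for f in pathlib.Path(search_root).rglob("*.safetensors"):
        shutil.copy(f, out_dir)
        copied.append(f.name)
    return copied

try:
    # Only available inside Colab; mounts your Drive at /content/drive.
    from google.colab import drive
    drive.mount("/content/drive")
    print(backup_safetensors("/content", "/content/drive/MyDrive"))
except ImportError:
    pass  # not running in Colab
```

Run it every so often during training (or at the end), and the LoRA survives a disconnect.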

  • @havelicricket
    @havelicricket 3 months ago +1

    Please help: Colab automatically disconnects after 90 minutes while training is running, and training usually takes at least 2 to 4 hours. How do I finish the training?

    • @allyourtechai
      @allyourtechai  3 months ago

      How many images are you using? Running this myself, it completed in under an hour.

    • @havelicricket
      @havelicricket 3 months ago

      I am using 11 images. @@allyourtechai

    • @havelicricket
      @havelicricket 3 months ago

      ❌ ERROR | 2024-02-11 06:40:05 | autotrain.trainers.common:wrapper:91 - train has failed due to an exception: Traceback (most recent call last):
      File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/common.py", line 88, in wrapper
      return func(*args, **kwargs)
      File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/dreambooth/__main__.py", line 312, in train
      trainer.train()
      File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/dreambooth/trainer.py", line 406, in train
      self.accelerator.backward(loss)
      File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1962, in backward
      self.scaler.scale(loss).backward(**kwargs)
      File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 492, in backward
      torch.autograd.backward(
      File "/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py", line 251, in backward
      Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
      RuntimeError: Expected is_sm80 || is_sm90 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
      ❌ ERROR | 2024-02-11 06:40:05 | autotrain.trainers.common:wrapper:92 - Expected is_sm80 || is_sm90 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
      @@allyourtechai

  • @faraday8280
    @faraday8280 24 days ago

    Tried a 1.5, and the LoRA does nothing in Automatic1111. D:

    • @allyourtechai
      @allyourtechai  24 days ago +1

      You might try Fooocus. I haven't had any problems, but it's possible that enough code has changed that the Colab doesn't work. I see that all the time; these systems have dozens of updates a week in some cases, and they break things.

  • @user-fn9dn1co5o
    @user-fn9dn1co5o 3 months ago

    Hello. Does this work with anime characters also?

    • @allyourtechai
      @allyourtechai  3 months ago

      Yes, although I would probably train on top of an anime specific SDXL base model. You might still get good results on SDXL though, but I haven't tried.

  • @popovdejan
    @popovdejan 3 months ago

    Can't we use that LoRA with SD 1.5 in Automatic1111?

    • @allyourtechai
      @allyourtechai  3 months ago +1

      It would seem that A1111 doesn't support the format. It seems to work everywhere else.

    • @glassmarble996
      @glassmarble996 3 months ago

      @@allyourtechai There is a setting in both Forge and Automatic1111 named something like "show all LoRAs". Enable it and the LoRAs will work. My question is: can we raise the network dim and alpha dim? 22 MB for SDXL is decreasing quality.

  • @vespucciph5975
    @vespucciph5975 a month ago +1

    It doesn't work for me. I have tried it multiple times, with and without a celebrity, and also with different images. The settings are correct; I'm running it on Fooocus. It seems to load and create images no problem, but they don't look like me, not even close. What could have gone wrong?

    • @vespucciph5975
      @vespucciph5975 a month ago +2

      Update: I got it working using the .safetensors file with "kohya" in the name. In my case there are two.

    • @rubenbaenaperez6183
      @rubenbaenaperez6183 a month ago

      @@vespucciph5975 😮

    • @Fanaz10
      @Fanaz10 a month ago +1

      @@vespucciph5975 Hey, did it work well? I saw there was a kohya file, but I used the regular one; now it's deleted and I have to run training again... :(

    • @vespucciph5975
      @vespucciph5975 a month ago

      @@Fanaz10 Yes. It works as well as in his video.

    • @Fanaz10
      @Fanaz10 a month ago

      @@vespucciph5975 Yeah, I ran it with kohya and it works. HOWEVER, whenever I try to add even one word to the prompt, the end result is unrecognizable. Do I have to do something with weights? I'm just trying to make a simple corporate portrait, like "tom cruise man, corporate portrait".
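Editor's note, not a definitive answer to the weights question, but one common knob: in A1111/Forge (mentioned elsewhere in these comments) you can lower a LoRA's influence directly in the prompt with a `<lora:name:weight>` tag, which often helps when extra prompt words overpower the likeness; Fooocus instead exposes the weight as a slider next to the LoRA. The helper below is hypothetical and only illustrates the tag syntax.

```python
# Hypothetical helper: build an A1111/Forge-style prompt with an explicit
# LoRA weight tag. 1.0 is full strength; try 0.6-0.9 if added prompt
# words make the subject unrecognizable.
def lora_prompt(base_prompt: str, lora_name: str, weight: float = 0.8) -> str:
    return f"{base_prompt} <lora:{lora_name}:{weight}>"

print(lora_prompt("tom cruise man, corporate portrait", "my_lora", 0.7))
# -> tom cruise man, corporate portrait <lora:my_lora:0.7>
```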

  • @researchandbuild1751
    @researchandbuild1751 3 months ago

    You have to buy compute credits now to use Colab.

    • @allyourtechai
      @allyourtechai  3 months ago

      No more free credits? Or do you mean after you use all of your free credits?

    • @mikrodizels
      @mikrodizels 3 months ago

      Don't use Colab for a couple of days and you will regain access to a GPU, but keep in mind that it's something like 4 hours total max before you lose priority, and not during peak hours. It sucks, but hey, it's free.

  • @diamondhands4562
    @diamondhands4562 11 days ago

    I created one today. I don't think this works anymore; I can't get it to work at all. Has anyone created one lately that works?

    • @allyourtechai
      @allyourtechai  10 days ago

      I'll have to see if there is another one we can use. These change so frequently.

  • @OurResistance
    @OurResistance a month ago

    I am very frustrated that it does not allow me to use text files to describe the images. That makes it useless for most LoRA training purposes!

    • @allyourtechai
      @allyourtechai  a month ago

      Yeah, it's hard to find anything that allows for that unless it is run locally. If I find anything, I'll let you know.

  • @Zorot99
    @Zorot99 4 months ago +1

    For some reason, it only works when training for SDXL; when I try SD 1.5 I get an error.
    Is anyone experiencing the same issue?

    • @PSYCHOPATHiO
      @PSYCHOPATHiO 4 months ago

      Same.

    • @allyourtechai
      @allyourtechai  4 months ago

      I haven't tried that specific Colab for 1.5 training, but all of the old Colabs I used for 1.5 no longer work, so it would seem that something major changed.

    • @Zorot99
      @Zorot99 4 months ago

      @@PSYCHOPATHiO Still haven't found a solution?

    • @PSYCHOPATHiO
      @PSYCHOPATHiO 4 months ago

      @@Zorot99 I did the SDXL one, but it kinda crashed at the end or timed out; I basically got nothing.

    • @Zorot99
      @Zorot99 3 months ago

      @user-qc7rz1ep9d I only tested training SDXL to see if it actually works or not, since SD 1.5 is not working for me.

  • @batuhankaraca2578
    @batuhankaraca2578 a month ago +1

    OMG, Johnny Sins making a tutorial on SD!

  • @RodrigoIglesias
    @RodrigoIglesias 4 months ago +2

    Very good tutorial! Finally I can have my own XL LoRA; I couldn't with an RTX 2070 8 GB 😊
    Edit: Do you think I could train a Juggernaut XL LoRA with this? It fails with the default settings 🤔

    • @allyourtechai
      @allyourtechai  4 months ago +3

      Let me take a look!

    • @Dean-vk8ff
      @Dean-vk8ff 4 months ago

      @@allyourtechai Great tutorial; I still can't get around the error it produces when trying to use Juggernaut.

  • @axelrigaud
    @axelrigaud 4 months ago

    Thank you! Am I the only one to get this error after it processed the files? "You don't have the rights to create a model under this namespace"

    • @allyourtechai
      @allyourtechai  4 months ago +1

      Do you have the correct API key for Hugging Face entered, and does it have the write access it needs?

    • @axelrigaud
      @axelrigaud 4 months ago +1

      @@allyourtechai Yep, I figured that out by reading again what was asked in the notebook :) My token was "read". Thank you for replying!

    • @allyourtechai
      @allyourtechai  4 months ago

      @@axelrigaud Awesome, it's always nice when it turns out to be something simple!

  • @krazyyworld1081
    @krazyyworld1081 2 months ago

    My LoRA is ignored when I generate.

    • @allyourtechai
      @allyourtechai  2 months ago

      What software, and are you using the trigger in the prompt?

    • @krazyyworld1081
      @krazyyworld1081 a month ago

      @@allyourtechai I followed your steps in the tutorial to the dot.

  • @gamersgabangest3179
    @gamersgabangest3179 2 months ago

    You look like "JerryRigEverything" lol

  • @Hariom_baghel08
    @Hariom_baghel08 4 months ago +1

    I am an Android user.

  • @Hariom_baghel08
    @Hariom_baghel08 4 months ago

    I am an Android user, please help me 😢

    • @allyourtechai
      @allyourtechai  4 months ago

      What questions do you have?

    • @Hariom_baghel08
      @Hariom_baghel08 4 months ago

      @@allyourtechai Will what you describe work on Android or not?

  • @BabylonWanderer
    @BabylonWanderer 4 months ago +3

    Great tutorial, but please, please use dark mode in your browser; that white screen is blinding 😎

    • @allyourtechai
      @allyourtechai  4 months ago +2

      Haha, I just changed over to dark mode, and my eyes thank you too 😂

    • @BabylonWanderer
      @BabylonWanderer 4 months ago

      @@allyourtechai 🤣👍

  • @rorutop3596
    @rorutop3596 3 months ago

    Training gets killed when starting? Probably because I have a whopping 270+ images in the dataset, as I'm training for a style, but I don't know how to figure it out.
    {'variance_type', 'dynamic_thresholding_ratio', 'thresholding', 'clip_sample_range'} was not found in config. Values will be initialized to default values.
    > INFO Running jobs: [2564]
    INFO: 180.242.128.113:0 - "GET /is_model_training HTTP/1.1" 200 OK
    INFO: 180.242.128.113:0 - "GET /accelerators HTTP/1.1" 200 OK
    > INFO Running jobs: [2564]
    INFO: 180.242.128.113:0 - "GET /is_model_training HTTP/1.1" 200 OK
    > INFO Running jobs: [2564]
    > INFO Killing PID: 2564

  • @animeful7096
    @animeful7096 3 months ago

    My code gets stuck on:
    > INFO Running jobs: []
    INFO: 103.133.229.36:0 - "GET /is_model_training HTTP/1.1" 200 OK
    INFO: 103.133.229.36:0 - "GET /accelerators HTTP/1.1" 200 OK
    It keeps going, returns the same lines again and again, and never stops executing.
    What is the problem here? Please help.

    • @AleixPerdigo
      @AleixPerdigo 3 months ago

      Me too.

    • @animeful7096
      @animeful7096 3 months ago +2

      @@AleixPerdigo Actually, that wasn't an error; it's just an indication the task is running. During this, a .safetensors file should appear in your folder: that is your LoRA file.

  • @Fggg-hz9tl
    @Fggg-hz9tl a month ago

    Why does my finished model weigh 20 MB?

    • @vespucciph5975
      @vespucciph5975 a month ago

      Same here. I believe it's just a weights preset or something like that.
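Editor's note: a back-of-envelope sketch of why a LoRA file can legitimately be that small. A LoRA stores only two low-rank matrices per adapted weight, not the full model. The layer list and rank below are illustrative assumptions, not SDXL's actual architecture; the point is just that a few million fp16 parameters lands in the tens of megabytes.

```python
def lora_param_count(layer_shapes, rank):
    """Parameters added by LoRA: for each (d_out, d_in) weight it adapts,
    it stores A (rank x d_in) and B (d_out x rank)."""
    return sum(rank * d_in + d_out * rank for d_out, d_in in layer_shapes)

# Hypothetical: 100 attention projections of size 1280x1280, rank 32.
params = lora_param_count([(1280, 1280)] * 100, rank=32)
size_mb = params * 2 / 1e6  # fp16 = 2 bytes per parameter
print(f"{params} params ~ {size_mb:.1f} MB")
```

Raising the network dim (rank), as asked about earlier in the thread, grows the file roughly linearly.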