SDXL LORA Training Without A PC: Google Colab and Dreambooth
- Added 23. 01. 2024
- 💻 GitHub Link To Auto Train Advanced: github.com/huggingface/autotr...
✨ Patreon prompt guide: / how-to-generate-96224373
💻 Link to celebrity lookalike site: starbyface.com/
⚔️ Join the Discord server: / discord
🧠 AllYourTech 3D Printing: / @allyourtech3dp
👾 Follow Me on X: / blovereviews
💻My Stable Diffusion PC: kit.co/AllYourTech/stable-dif...
This guide contains everything you need to train your own LoRA (Low-Rank Adaptation) model for Stable Diffusion XL (SDXL) using Google Colab. That's right, you can train this for free without the need for a high-end gaming PC. This guide will allow you to train SDXL to generate images of yourself, or anyone else for that matter. - Science & Technology
✨ Support my work on Patreon: www.patreon.com/allyourtech
💻My Stable Diffusion PC: kit.co/AllYourTech/stable-diffusion-build
Awesome tutorial - dig your channel!
Thank you!
Kickass tutorial man.
Thank you!!
Wanted to say thank you for this video! I've been looking for a tutorial like this one and most of what I found is total BS on how to make money with fake influencers. Appreciate your detailed explanation and scientific approach to the topic
I really appreciate that, thank you!
Yeah, this is an amazing vid. How can we make alterations to our LORA and save them? Let's say the face needs to be thinner, wider, etc.
how is it bs lol? why u mad?
Man you're the best! I have a question: if the training got interrupted/stopped by accident, do I need to start everything all over again?
Thank you! Yes if it fails and there is no progress or movement, you may need to restart unfortunately.
Thnx! I love this kind of video! I love automatic1111
Please reply. I've tried 5 different times to train a LoRA, but when I install it in Fooocus it ignores the LoRA file and doesn't do anything. It was working very well before, and I managed to create many LoRAs with this method, but now it doesn't do anything. What is the problem? Can you please help me?
I’ve had pneumonia for the past two weeks. I’ll try to look once I’m better
@@allyourtechai Oh, I'm sorry to hear that! Get all the rest you need. Thanks for the reply, and I hope you recover soon. ❤
did u figure it out?
Awesome tutorial, thank you so much for sharing this video. It's going to help a lot of people like me with crappy GPUs
Glad it helped
Hi! really helpful video. thank u so much for info!
I just wonder: if I want to train my style of drawing (a little flat, in a modern Japanese style but not anime, mostly full-body shots of girls or boys in the streets), what trigger word should I use? Just "drawing", or my own unique gibberish word like "uhfvuhfuh"?
Hi, thank you so much for your good explanation. Everything went well for me during training, but in the end I don't see any output folder with a safetensors file. I tried several times. Any idea?
Is there a recommended resolution size for the training images, e.g. 1024x1024?
Your issue with style is forgetting to uncheck Fooocus V2, Enhance, and Sharp. They drive the model toward realism.
I was pulling my hair out trying to figure out how to locally train on a Mac and eventually found this video. Thank you! One question - I used a 16 image dataset to see just how real of a headshot I could generate and I'm currently on 10h 5m. I ended up getting a Colab Pro subscription after my first attempt was halted at 6hrs. Any insight on large jobs like this? I'd hate to lose progress when sleeping lol
Great tutorial ! clear and to the point. Anyone know if you can input .txt files with captions instead of the ? Cheers
Hello, just wanna ask if this works in training a specific card.
I have tried following this guide step for step but my LORA doesnt do anything. I can download other loras and add, and they work perfectly, but not when i add my own.
I am running Fooocus 2.2 on a Google Colab machine. My model is juggernautXL_v8Rundiffusion.safetensors and the LoRA was trained on stable-diffusion-xl-base-1.0.
I followed the guide 1:1 and used DreamBooth LoRA, with 8 pictures of a celeb, and made the prompt the name of that celeb. The training takes around 2 hours and completes correctly, but when used in my Fooocus it looks nothing like my LoRA :( Can you help us?
Have you tried using stable-diffusion-xl-base-1.0 as the model, since this is the one you used to train your LoRA?
Just finished my own lora, I have the exact same issue as you have. My lora is ignored and there are images generated of the celebrity I used for training. Base model SDXL 1.0
Yes, I tried that. Unfortunately it gives the same result.@@ea03941d
My LoRA doesn't have any effect either. I had very good data and used keywords, but it doesn't work with any model, including sd_xl_base. Very upset :(
I did like 10 LoRAs with this guide 2-3 weeks ago, working fine. I was out for a week, tried again now: my old LoRAs work, my new LoRAs do nothing. I tried different training images, and tried using them in Fooocus, A1111, Fusion, and ComfyUI; they don't work. They never showed in the A1111/Fusion interface on the LoRAs tab, but they worked. Now only what I trained 2-3 weeks ago works; the new ones do not.
thanks for your great content! very helpful.
You are very welcome, thanks for watching!
Does the link still work? I got disallowed in the middle of my training
What can you do if the trained LoRA model is not visible in Stable Diffusion Automatic1111? Other XL LoRAs are visible.
You can use anything other than a1111. A1111 doesn’t support the format yet
Great tutorial! I made it all the way through training, but when I try to access the PyTorch file, my file structure looks completely different. Mine is a long list of folders, starting with bin, boot, content, datalab, etc. I can't get it to go up a file menu to where yours is onscreen. Any ideas?
There is no output folder with safetensors file? That’s odd
So now it looks like me mixed with my celebrity lookalike; is it because their name is in the prompt? Any way to have it look just like me?
How many images did you train with?
@@allyourtechai I think 13 images, it gave me better results the second test when I just wrote my own prompt instead of a celeb look alike. Probably because I have a unique look
Do the photos to be prepared have to be the same size? Or can they be random?
Different sizes and aspect ratios are fine. You no longer need to crop all of the photos
Hello, thank you for the tutorial. I'm curious about how to use captions in this context. I have around 100 images with captions that I've prepared using Kohya, along with a considerable amount of editing afterward. I'm wondering if it's possible to use them.
I’ll do a tutorial :)
@@allyourtechai I need this so that I can train an art style
Any luck with that caption .txt files ? :)
Thanks for the video. I'm training my model right now, following your tips. Now the question I have is: do you know of any Colab where I can run this .safetensors to generate images based on the model I just trained? Thanks again and good luck.
I'll see what I can find!
Here you go!!! : czcams.com/video/kD6kD6G_s7M/video.html
@@allyourtechai Thanks. I'll check it out now.
What if I forgot to put custom words in the "enter your prompt here" section? :(
I don’t think you will be able to use the files generated in that case. There would be no trigger to prompt the system to use your LoRA
My Google Colab is stuck on this error after getting to loading the 4/7th pipeline component:
INFO: 2401:4900:1c31:6d1e:3d79:ce0c:9144:588d:0 - "GET /is_model_training HTTP/1.1" 200 OK
INFO: 2401:4900:1c31:6d1e:3d79:ce0c:9144:588d:0 - "GET /accelerators HTTP/1.1" 200 OK
It repeats this every few seconds. Help pls?
Coming from your awesome Fooocus Colab tutorial! When it finishes the training steps, it keeps repeating something along the lines of "Running jobs: []," followed by "GET /is_model_training HTTP/1.1" in the output for a few hours. Is it supposed to do that? My dataset contains around 50-100 images.
That many images would likely take 10-20 hours to train. I haven’t ever tried that large of a data set on colab. Does it show a progress percentage at any point?
@@allyourtechai It does give me a percentage at the beginning, but when it finishes it keeps outputting "Running jobs" with no percentage at all. I think it's because I used Firefox, which is often the culprit for problems like this, so I might try running Colab on Chrome from now on. Thanks for listening! 😊
@@allyourtechai I'm having the exact same issue here, except I'm using Chrome and I only used a dataset of six images, following the same steps and settings outlined in the video. It took about 1 hour and 45 minutes for all 500 training steps to complete, but after that, it gets stuck executing system() > _system_compat() > _run_command() > _monitor_process() > _poll_process()
and it remains repeating “Running jobs: [],” followed by “Get /is_model_training HTTP/1.1” at the four-hour mark.
@@DerKapitan_ Did you find any solution?
@@utkucanayIt turned out that it did create a LoRA file that I could download and use before it got stuck 'running jobs'. It didn't work well when I tried to use it, but I don't know if that was because of the glitchy process or poor training settings.
Do you prefer training this way or does Kohya produce better results?
Kohya provides more flexibility with regularization images and higher training steps assuming you have the VRAM. Generally you will get a higher quality result from local training unless you spend money on a larger memory colab instance.... But depending on your use case, the quick, free training could be good enough. (I hope that helped answer)
Great tut!! One quick question: the LoRAs work fine with Fooocus, but they don't work in A1111?
They seem to work everywhere but A1111, and I haven’t figured out why that is yet
@@allyourtechai odd. Also noticed that they have a SD1 listed in the stable Diffusion version if that has something to do with it
@@mastertouchMT That's interesting. Worth digging into more. I'll see if I can find anything
@@allyourtechai I just worked with it in the new Forge platform. You have to go into the JSON file and change it to SDXL, and it's good to go!
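For anyone trying that Forge fix, here is a minimal sketch of the JSON edit, assuming the metadata lives in a sidecar .json next to the .safetensors and uses an "sd version" key (both are assumptions; key names vary between UIs, so open your file and check first):

```python
import json

def set_sd_version(meta_path: str, version: str = "SDXL") -> None:
    """Rewrite the SD-version field in a LoRA's JSON sidecar file.

    The key name "sd version" is an assumption based on A1111-style
    metadata; inspect your file to confirm what it actually uses.
    """
    with open(meta_path) as f:
        meta = json.load(f)
    meta["sd version"] = version
    with open(meta_path, "w") as f:
        json.dump(meta, f, indent=2)
```

Back up the file before editing, since a malformed JSON can make the UI skip the LoRA entirely.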
Great tut, but you could explain how to add captions to images, or maybe how to check what caption was used during training.
Edit: or maybe just drop in image1.png, image1.txt, image2.png, image2.txt and it will be fine?
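That image1.png / image1.txt pairing is the kohya-style caption convention; whether this particular Colab actually reads the .txt files is not confirmed in this thread. A small hypothetical helper to spot images in a dataset folder that are missing a caption file:

```python
from pathlib import Path

def images_missing_captions(folder: str) -> list[str]:
    """Return stems of images (png/jpg) that have no same-named .txt caption."""
    root = Path(folder)
    images = {p.stem for ext in ("*.png", "*.jpg", "*.jpeg") for p in root.glob(ext)}
    captions = {p.stem for p in root.glob("*.txt")}
    return sorted(images - captions)
```

Run it on your dataset folder before uploading; an empty list means every image has a matching caption.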
I couldn't train the model. I'm getting this error: "ImportError: cannot import name 'text_encoder_lora_state_dict' from 'diffusers.loaders' (/usr/local/lib/python3.10/dist-packages/diffusers/loaders/__init__.py)". Please help me resolve this.
Getting the same error!
my model came out as a JSON file what did i do wrong?
Hi thanks for the video! I tried this method, but testing the lora in SD, I'm getting images nothing like the training images, it's supposed to be a shirt, but I'm getting images of a beaver lol! Not sure what to do...
What was your trigger word and your dataset for the Lora?
04:55 Why are my training parameters fewer than the ones you showed?
did you use full other than basic?
but i have one extra parameter : "vae_model": "",
it is possible that they updated the scripts
@@allyourtechai thank you!
Hi! First I want to say thank you so much your videos are actually done in a way that can be understood and followed something I feel is needed in this space.
I am having trouble getting this to work. I was able to follow all the directions in the video, and when I click on Train it seems like it should work: it says success, you can check the progress of your training here, monitor your job locally/in logs. But then nothing else seems to happen. I came back after several hours and can't find anything; I looked in the files area and it doesn't seem to be anywhere. So my questions are: how do I know if it worked, where is it, and if it didn't work, how can I find out why? Thanks in advance, I appreciate your help or any help from the community that might come across my comment.
First, thank you!
Training typically takes 40-60 minutes from what I have seen. There should be an indication of progress down in the console at the bottom of the text when processing starts. I believe the free version of colab stops after 90 minutes, so it might be that your LoRA finished, but the colab shut down before you came back to download it. I usually try to stay close by when training for that reason.
@@allyourtechai Well, the thing is I upgraded to the first tier of the paid version and I don't see any indication of it doing anything after I get that success message. When I click on the message that says success, you can now track the progress of your training here, monitor your jobs locally, it takes me to a blank page with a tiny bit of code that says "detail not found"?
you have to open the code in collab to watch the log/progress. it should be in the folders from the side menu on the left. that's in the vid for reference. otherwise, if you're using your local machine, there should be a cmd window open showing the log.
My LoRA isn't working. I trained it with 11 images and tried using both celebs and my own token in the prompt, but it still doesn't work as intended. I use A1111 and the base SDXL 1.0 model, but the results look nothing like me (each generation is a completely different man; it goes from old white guy, to Asian kid, to muscular black man). I don't know what I'm doing wrong, any suggestions?
I also tried using other LoRAs (not trained by me) and they all work beautifully
I used 5 photos of my face only, 2 photos from my waist to my head, 2 full body shots, and 2 mirror selfies (which might not be the best but it's all I had)
@@culoacido420 That's because this type of LoRA doesn't currently work in A1111.
The generated LoRA works great with Fooocus, but it doesn't do anything in A1111. Are you aware of this issue?
A1111 needs to update to allow for the format that colab puts out. Not anything I have control over unfortunately
thanks
You're welcome!
My LoRA is ignored when I generate ☹ Please help! I did the same process before and it worked, but it is not working now. There are a few changes in the AutoTrain interface (e.g. there is something new in the training parameters section: "vae_model": ""). I don't know what that is!
I ran into the same issue, I noticed the slight differences in the parameters for the training, and my lora does not seem to be creating any effect. I wonder if it was updated and there is a simple fix. Just for context, I am running SDXL thru comfyui.
What size is the final Lora using the parameters you suggest? How does the size/quality compare to the local method you published earlier with those parameters?
It’s about 23MB in size versus 1.7GB for the version trained locally. Part of the reason for that is the 15GB vram limit on the free version of colab. My local guide requires about 20gb of vram to train. I also used 2000 training steps locally versus 500 in colab.
So, the local version is higher quality, but is it 100X higher quality? No!
I’ll do some side by side tests and we can see :)
@@allyourtechai if I have the lowest tier of colab purchased already, what number of steps would you recommend for the best results. Also, some people suggest changing clothes, expressions and environments in the sample photos for better results, do you agree with this?
I would go with 2000 steps for training for a better result. I would definitely try to get variations in both the expressions and clothing. Mine for example tends to put me in a grey polo since the bulk of my images were taken in a hurry with one set of clothes. Normally I try for varied lighting, clothing, etc to create the most flexible model possible.
@@allyourtechai Thanks for the additional info. I was able to get your exact settings working on my 12Gig 4070, but I get how for Collab it is essentially a free video card so I shouldn't complain. :)
@@allyourtechai gotcha! Appreciate the reply. Appreciate the content.
Got a question I could really use your opinion on. If my final aim is to make a comic art style avatar of myself, should I think about training the LoRA on a different base? Something that has already been trained before on the particular style that Im aiming for? I’ve read that SDXL and juggernaut are designed for realistic images. And The google colab method has a fixed amount of bases that I can use, any in particular that you would suggest for this?
Either way, you have earned my sub, looking forward to future videos!
Thanks for the video. Concise and very clear. But I am facing an issue, which from comments, many others are facing as well. I have created a LoRA using the above instructions (not on colab, but on GCP VM), but when I tried to use it on Fooocus with sd_xl_base_1.0 as the base model, the LoRA does not get loaded. Other LoRAs downloaded from civitai get loaded and work perfectly.
On debugging, I found that fooocus is expecting LoRA keys in the following format:
'lora_unet_time_embed_0', 'lora_unet_time_embed_2', 'lora_unet_label_emb_0_0', 'lora_unet_label_emb_0_2', 'lora_unet_input_blocks_0_0', 'lora_unet_input_blocks_1_0_in_layers_0', 'lora_unet_input_blocks_1_0_in_layers_2', 'lora_unet_input_blocks_1_0_emb_layers_1', 'lora_unet_input_blocks_1_0_out_layers_0', 'lora_unet_input_blocks_1_0_out_layers_3', 'lora_unet_input_blocks_2_0_in_layers_0', 'lora_unet_input_blocks_2_0_in_layers_2', 'lora_unet_input_blocks_2_0_emb_layers_1', 'lora_unet_input_blocks_2_0_out_layers_0', 'lora_unet_input_blocks_2_0_out_layers_3', 'lora_unet_input_blocks_3_0_op', 'lora_unet_input_blocks_4_0_in_layers_0', 'lora_unet_input_blocks_4_0_in_layers_2', 'lora_unet_input_blocks_4_0_emb_layers_1', 'lora_unet_input_blocks_4_0_out_layers_0', 'lora_unet_input_blocks_4_0_out_layers_3', 'lora_unet_input_blocks_4_0_skip_connection', 'lora_unet_input_blocks_4_1_norm', 'lora_unet_input_blocks_4_1_proj_in', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_q', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_k', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_v'
Whereas the actual keys in the LoRA are in a slightly different format:
'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora.up.weight'
@allyourtechai do you know how to resolve this issue? Or anyone else, can anyone help in resolving this? Thanks!
I haven't come across this one, but chatgpt did provide a way to map the keys properly:
chat.openai.com/share/34c5ada6-f3b5-4ab8-8bb7-f92638d8e922
Thanks for the reply. Found an easier solution: adding the following as a hyperparameter: --output_kohya_format
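For anyone hitting the same mismatch, a quick string-based heuristic to tell which naming convention a LoRA's state-dict keys use; the prefixes are taken from the key samples quoted above, not from an exhaustive spec. Load the keys with `safetensors.torch.load_file(path).keys()` and pass them in:

```python
def detect_lora_key_format(keys) -> str:
    """Heuristic: classify LoRA state-dict keys as kohya- or diffusers-style."""
    keys = list(keys)
    # kohya/LDM-style: e.g. lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_q
    if any(k.startswith("lora_unet_") or k.startswith("lora_te") for k in keys):
        return "kohya"
    # diffusers-style: e.g. unet.down_blocks.1...attn1.to_k.lora.down.weight
    if any(k.startswith("unet.") and ".lora." in k for k in keys):
        return "diffusers"
    return "unknown"
```

If the result is "diffusers" and your UI expects kohya keys, retraining with --output_kohya_format (as above) is the simplest route.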
Some of my LoRAs disappeared, and clicking Refresh did not work. I went to Extensions → Apply and Update; still not working. The disappeared LoRA files are still in the Lora folder. How do I fix that?
if you refresh I believe it launches a new colab instance and you lose anything related to the old one.
My results didn't come out that well, any troubleshooting tips? I got images of other people (they didn't look like the person I put in, or the celebrity).
How many images did you train with and which software are you using to generate images after training? Are you using base stable diffusion xl to generate the images?
@@allyourtechai I was getting the same thing. I switched from automatic1111 to fooocus, and it works now. For some reason stable diffusion is not recognizing the lora
Hi, thanks for the tutorial
I tried generating lora with same method with 24 images. But when I tested it on fooocus it didn't work.
It's not at all generating the image it is trained on
yeah me too. i guess we're the user testers for this. lol
Just a note: 99% of the time, a free GPU connection is not available on Google Colab. For that, the user must change the setting from fp16 to bf16.
hey umm how can i put this model on hugging face without downloading it
If you have a hugging face account it will automatically upload the Lora to your account when the generation is complete
does this work for training for styles? what would I need to enter in the prompt field?
It does! You would just provide a prompt trigger that describes the style and ensure that trigger is also in your text annotation files for the pictures you use to train the model. It might be something like “neon glow style” for example
@@allyourtechai in the google colab/ngrok app I don't see an option for text annotation. In the tutorial I just saw that you uploaded images only.
@@allyourtechai Are you saying that we can just upload .txt files alongside the images with the same name (but different extensions, obviously) and it will work?
Does this not need regularization images? Seems like that is an important part of LoRA training.
They help but are not required. Simply not an option at all in most cases when you aren’t training locally.
@@allyourtechai how about captioning ? does this support captioned text along with the training images ?
I tried to use this method to train the SSD-1B model but I got an error while training. Have you tried training an SSD-1B model?
I haven’t tried that yet. Let me see if I can get it to work though
@@allyourtechai thanks!
Anyone having an issue using the LoRA produced on A1111? Every single LoRA shows up except the ones trained with this method on A1111.
I use fooocus or InvokeAI (or even comfyui). No idea why a1111 would have issues though.
Can you make a video on how to install and run it on local hardware instead of Google Colab?
Yep, I have a couple videos on that already :)
Why is my finished model only 20 MB?
Hi, I followed all the steps carefully, but when I trigger the prompt (in my case "jim carrey man") I keep getting images of Jim Carrey and not mine... What's the problem? Amazing content btw, thanks
I even trained the LoRA again with a combination of words that only represents me, and it's still not working.
same here bro.. did u figure it out?
do we not need to tag the images anymore?
Ideally, yes, but running this remotely just doesn't allow for BLIP captioning.
I get this error after I hit the run button.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lida 0.0.10 requires kaleido, which is not installed.
Error is longer
Are you training with SDXL? It sounds like a missing package, but i'm unsure from the error. Can you also post the full error if you can?
Yes, SDXL. I could not copy-paste the whole error into the chat; the YouTube algorithm likely detected it as spam.
I did not save the error.
@@allyourtechai
@@allyourtechai This error is probably from AutoTrain in Colab. I'm getting one for protobuf 3.20.3 because it loads with protobuf 4.23.4 and says that's incompatible with tensorflow-metadata 1.14.0. This is like the only error I haven't resolved yet, and I believe that's why my LoRA isn't generating images like my training in Fooocus.
I know it's probably obvious to programmers, but how can I get the right protobuf loaded into the Colab machine? Can I just do a pip install from that code box window above the log? Or does that need to be included in the code you run before getting the public URL for the UI? Sorry if I'm not using the right names.
Excellent tutorial! Sadly, Google Colab keeps shutting down in the middle of the training... like at 64% (training only 10 images). I've tried this for several days. Any solution? Anyone? Thanks in advance!
Pay for colab
Or monitor the files menu more closely, the model should be finished before it disconnects if you use 10 images.
It's amazing to me how people just upload their images anywhere. Do you know where those images go and how they may be used after you upload them? I mean, this is really cool and all, but I'm not sure I would suggest people upload personal images of themselves to random sites, especially in this AI world. Just my opinion, take it or leave it. Cool tutorial though!
This is a cloud instance of a virtual machine. The files go to Google cloud, run on colab, then disappear the moment you disconnect and the virtual machine is destroyed. Pretty safe all things considered.
@@allyourtechai What about the site where you see what celebrity you look like?
yep, that one for sure. In general if you are doing a LoRA for yourself, chances are someone in your life has already told you who you look like, so might not be necessary. Use your own judgement of course, but good point.
What about the captions text ?
Unfortunately that’s a limitation of the colab. I’ve been looking for alternatives but so far this is one of the best I have found
Thanks alott 🤍 keep going
Thank you too
Can this be done on a mac?
did u try?
wouldn't it be better to save a checkpoint every so often to google drive? i know i will come back and it will be disconnected and the lora file will be gone
You definitely can as an option. I always stick around during the training personally but not everyone does
Pls help: Colab automatically disconnects after 90 minutes while training is running, and training usually takes at least 2 to 4 hours. How do I finish the training?
How many images are you using? Running this myself completed in under an hour
There are 11 images I am using@@allyourtechai
❌ ERROR | 2024-02-11 06:40:05 | autotrain.trainers.common:wrapper:91 - train has failed due to an exception: Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/common.py", line 88, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/dreambooth/__main__.py", line 312, in train
trainer.train()
File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/dreambooth/trainer.py", line 406, in train
self.accelerator.backward(loss)
File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1962, in backward
self.scaler.scale(loss).backward(**kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Expected is_sm80 || is_sm90 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
❌ ERROR | 2024-02-11 06:40:05 | autotrain.trainers.common:wrapper:92 - Expected is_sm80 || is_sm90 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
Tried a 1.5 and the lora does nothing in Automatic1111 D:
you might try fooocus. I haven't had any problems, but it's possible that enough code has changed that the colab doesn't work. I see that all the time. These systems have dozens of updates a week in some cases and they break things.
Hello. Does this work with anime character also?
Yes, although I would probably train on top of an anime specific SDXL base model. You might still get good results on SDXL though, but I haven't tried.
We can't use that LoRA with SD 1.5 in Automatic1111?
It would seem that A1111 doesn’t support the format. Seems to work everywhere else
@@allyourtechai There is a setting in both Forge and Automatic1111 named something like "show all LoRAs". Enable it and the LoRAs will work. My question is: can we raise the network dim and alpha dim? 22 MB for SDXL is decreasing quality.
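On the size question: each adapted layer in a LoRA stores a down matrix (rank × d_in) and an up matrix (d_out × rank), so file size grows roughly linearly with the network dim (rank). A back-of-envelope sketch; the layer count and shapes below are made-up illustrations, not SDXL's actual ones:

```python
def lora_size_bytes(layer_shapes, rank, bytes_per_param=2):
    """Approximate LoRA file size: two low-rank matrices per adapted layer.

    layer_shapes: iterable of (d_in, d_out) for each adapted weight matrix.
    bytes_per_param=2 assumes fp16 storage.
    """
    params = sum(rank * d_in + d_out * rank for d_in, d_out in layer_shapes)
    return params * bytes_per_param

# Illustrative only: 100 adapted 1280x1280 layers at rank 8 vs rank 32
small = lora_size_bytes([(1280, 1280)] * 100, rank=8)   # ~4 MB
large = lora_size_bytes([(1280, 1280)] * 100, rank=32)  # 4x larger
```

Quadrupling the rank quadruples the file, which is roughly why higher-dim LoRAs from other trainers land in the hundreds of MB while this Colab's low-rank output stays around 22 MB.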
It doesn't work for me. I have tried it multiple times - with and without celebrity and also with different images. The settings are correct, I'm running it on fooocus. It seems to load and create images no problem but they don't look like me. not even close. what could have gone wrong?
Update: I got it working using the .safetensors file with "kohya" in the name. In my case there are two.
@@vespucciph5975😮
@@vespucciph5975 Hey bro, did it work well? I saw there was a kohya file, but I used the regular one, and now it's deleted; I have to run the training again... :(
@@Fanaz10 yes. It works as well as in his video.
@@vespucciph5975 Yeah bro, I ran with the kohya file and it works. HOWEVER, whenever I try to add even one word to the prompt, the end result is unrecognizable. Do I have to do something with weights? I'm just trying to make a simple corporate portrait, like "tom cruise man, corporate portrait"
You have to buy compute credits now to use Colab
No more free credits? Or do you mean after you use all of your free credits?
Don't use Colab for a couple of days and you will regain access to a GPU, but keep in mind it's around 4 hours total max before you lose priority, and not during peak hours. It sucks, but hey, it's free
I created one today. I don't think this works anymore. I can't get it to work at all. Has anyone created one lately that works?
I’ll have to see if there is another one we can use. These change so frequently
I am very frustrated that it does not allow me to use text files to describe the images. Therefore it is useless for most Lora training purposes!
Yeah, hard to find anything that allows for that unless it is run locally. If I find anything i'll let you know.
For some reason, it only works when training for sdxl, when I try sd 1.5 I get an error.
Anyone experiencing the same issue?
same
I haven’t tried that specific colab for 1.5 training, but all of my old colab I used for 1.5 no longer work, so it would seem that something major changed
@@PSYCHOPATHiO still didn’t find a solution?
@@Zorot99 I did the SDXL one, but it kinda crashed at the end or timed out; basically I got nothing
@user-qc7rz1ep9d I only tested to train sdxl to see if it actually works or not since sd 1.5 is not working for me.
Omg, johnny sins making a tut on sd!
🤣
I thought he looked faintly like the guitarist from Rise Against
Very good tutorial! Finally I can have my own XL LoRA; I couldn't with an RTX 2070 8GB 😊
Edit: Do you think I could train a Juggernaut XL LoRA with this? It fails with the default settings 🤔
Let me take a look!
@@allyourtechai Great tutorial, but I still can't get around the error it produces when trying to use Juggernaut
Thank you! Am I the only one to get this error after it processed the files? "You don't have the rights to create a model under this namespace"
Do you have the correct api key for hugging face entered, and does it have write access like it needs?
@@allyourtechai Yep, I figured that out by reading again what was asked in the notebook :) My token was "read". Thank you for replying!
@@axelrigaud Awesome, it's always nice when it turns out to be something simple!
my lora is ignored when i generate
What software, and are you using the trigger in the prompt?
@@allyourtechai I followed your steps in the tutorial to the dot
You look like "Jerry rigs everything " lol
wow very funny
Haha, not the first time I have heard that
I am Android user
I am Android user please help me 😢
What questions do you have?
@@allyourtechai Can what you show be done on Android or not?
great tutorial.. but plz plz use dark mode in your browser.. that white screen is blinding 😎
Haha, I just changed over to dark mode, and my eyes thank you too 😂
@@allyourtechai 🤣👍
Training is killed when starting? Probably because I have a whopping 270+ images in the dataset, as I'm training for a style, but I don't know how to figure it out...
{'variance_type', 'dynamic_thresholding_ratio', 'thresholding', 'clip_sample_range'} was not found in config. Values will be initialized to default values.
> INFO Running jobs: [2564]
INFO: 180.242.128.113:0 - "GET /is_model_training HTTP/1.1" 200 OK
INFO: 180.242.128.113:0 - "GET /accelerators HTTP/1.1" 200 OK
> INFO Running jobs: [2564]
INFO: 180.242.128.113:0 - "GET /is_model_training HTTP/1.1" 200 OK
> INFO Running jobs: [2564]
> INFO Killing PID: 2564
My code gets stuck on
> INFO Running jobs: []
INFO: 103.133.229.36:0 - "GET /is_model_training HTTP/1.1" 200 OK
INFO: 103.133.229.36:0 - "GET /accelerators HTTP/1.1" 200 OK
and it keeps going, returning the same output again and again, and never stops executing.
What is the problem here? Please help!
me too
@@AleixPerdigo Actually, that wasn't an error; it was just an indication that the task completed. During this, a .safetensors file should appear in your folder: that is your LoRA file.
Why is my finished model only 20 MB?
same here. I believe it's just a weights preset or something like that