LORA training EXPLAINED for beginners

  • Published 31 May 2024
  • LORA training guide/tutorial so you can understand how to use the important parameters on KohyaSS.
    Train in minutes with Dreamlook.AI: dreamlook.ai/?via=N4T
    code: "NOT4TALENT"
    Join our Discord server: / discord (Amazing people like LeFourbe on there)
    ------------ Links used in the VIDEO ---------
    Folder to JSON Script: drive.google.com/drive/folder...
    KohyaSS: github.com/bmaltais/kohya_ss
    Fastest Model training: dreamlook.ai
    Alpha Rank and Dim post by @AsheJunius ashejunius.com/alpha-and-dime...
    Google Colabs for "free" training: github.com/camenduru/stable-d...
    colab.research.google.com/git...
    Super detailed LORA training guide by "The Other Lora Rentry Guy": rentry.co/59xed3#preamble
    BooruDatasetTagManager: github.com/starik222/BooruDat...
    ------------ Social Media ---------
    -Instagram: / not4talent_ai
    -Twitter: / not4talent
    Make sure to subscribe if you want to learn about AI and grow with the community as we surf the AI wave :3
    #aiairt #digitalart #automatic1111 #stablediffusion #ai #free #tutorial #betterart #goodimages #sd #digitalart #artificialintelligence #kohyaSS #kohya #LORA #Training #LoraTraining #outpainting #img2img #dreamlook #dreamlookAI
    #consistentCharacters #characters #characterdesign #personaje
    0:00 intro
    0:10 What we need
    0:23 Install KohyaSS
    1:38 Thanks to LeFourbe
    1:54 What are LORA
    2:35 Best Datasets
    4:26 How to get the images
    5:06 Best Captioning
    6:32 Captioning but AI POV
    9:38 Captioning Example
    11:00 Using BooruDatasetTagmanager
    13:10 Training decisions
    13:25 Choosing a model
    14:02 Folder Structure making
    14:32 What Regularization does
    15:06 Steps and epochs Explained
    16:50 Approx. recommendation
    17:00 I'll use 14 steps and 6 epochs
    17:32 Creating the folders
    18:00 Training Parameters
    19:25 Learning Rate Explained
    20:40 LR scheduler
    21:02 Use AdamW or AdamW8bit
    21:30 Network Rank and Alpha
    22:10 Resolution and Bucketing
    23:10 Advanced Options
    24:05 Train AI in minutes (sponsored)
    26:10 Test Results
    27:23 Thanks for watching :3

Comments • 338

  • @SatouLofthouse
    @SatouLofthouse 6 months ago +27

    I wanna talk about some of the traits of the best loras I have found.
    - They tagged everything. This makes it so you can use the description they gave their character without the lora, and as long as you have it installed, it will give more control because you can generate it without the lora then inpaint over the generation with the lora at full strength, and you will have the perfect image.
    - Them tagging everything helps if you find that the character keeps refusing to wear a different outfit or their clothes tend to be the same color as their hair because the workflow I explained above corrects this issue.
    - Always tag the clothes and zoom in on parts of it and describe those details so you can easily inpaint them later.
    - Please edit data sets so that tiny details that fluctuate a lot in anime look consistent, and zoom in on some so people can easily inpaint and have them turn out great. Hair clips/pins and buttons can be a problem sometimes, so you may need to correct how they look. For instance, my Mamori Tokonome lora has her wearing a small white cat hair pin, and it was so inconsistent in every scene that it made the pin turn out very unfortunate, and now I need to go edit the data set to correct this.
    - Tag the backgrounds, or your characters will be stuck with the same backgrounds they were trained with. If you don't tag them, stable diffusion will assume the background is part of the character that isn't meant to be changed.
    - Get pictures with multiple sizes, or trying to upscale them will result in an all-NaNs error and/or an out-of-memory error.
    - Tag chest size on women so people can easily adjust the size.
    - Describe their outfits well and their hair styles well so that if you want to keep the same face and change some things about the hair and clothes, you can do so with ease.
    - Please do not intentionally create a lora that tries to keep a rigid structure. By this, I mean, do not try to limit the creative freedom of the person using it on purpose.
    - When training a lora, keep to the same style all throughout. Do not use multiple different styles of the same character so that people can always know they will achieve consistent results. They can combine your lora with other loras and checkpoints to mess with the style. Please let them have that freedom.

    • @Not4Talent_AI
      @Not4Talent_AI 6 months ago +4

      Super useful and well explained comment. Thank you so much. I'm thinking of pinning it since Lefourbe's comment is already at the top by likes. I'll come back to you on this, but thanks again!

    • @lefourbe5596
      @lefourbe5596 6 months ago +3

      i'm backing that up !

    • @SatouLofthouse
      @SatouLofthouse 6 months ago +6

      @@Not4Talent_AI Always happy to help! Also, I wanna add more to this. You can use multiple loras blended together to achieve a new style. In the art community, it's called stealing like an artist. What you do is you take bits and pieces of art styles like different eye shapes, different shading, different line art, different noses, etc until you have a brand new character and style! If you can get a successful front, side, 3/4 front, back, and 3/4 view, you can train that character as your own original character with a brand new style!
      You can also use Vroid mixed with Blender to start building a character. If you find some really cool looking eyes or make some yourself, you can stick them on your Vroid model and get pictures of that face from all sides and train the exact face of your desired character! This is another reason to tag the hair: you can change it to anything you need if you tag it, so it isn't stuck on your character. This way, you don't have to learn how to model the hair (which may arguably be the hardest shit in the world) and can just do it in Stable Diffusion.
      You can also do so many edits to the Vroid model before exporting it. I also saw a tutorial on how to add simple 3d nips to the character, and another that teaches how to use proportional editing to change the shape of the character's body in a more satisfying way (because you can only go so big and so small in Vroid before the model breaks). You don't even have to know how to make clothes, because you can train the character only naked (or in underwear if you're uncomfy) and you should easily be able to add clothes in Stable Diffusion. However, they have some cool clothing shape templates that you can change to anything you like, so if you generate something you like in Stable Diffusion, you can add that to your character in Blender using project from view, and then actually be able to train them wearing the kinds of clothes you want them in. However, if you wanna make an original style and characters you can actually sell for original projects, I recommend kit-bashing to make original clothes so you don't get in trouble for accidental copyright infringement.

    • @Not4Talent_AI
      @Not4Talent_AI 6 months ago +1

      Really nice! I tried training a 3D model I had made from scratch but didn't get it to work. Now with the new pc I'll probably try again hahahhaa
      And yeah, btw, modeling hair is annoying af. Mainly with stuff like XGen
      Thanks!!! @@SatouLofthouse

    • @lefourbe5596
      @lefourbe5596 6 months ago +1

      @@SatouLofthouse now you mentioned Vroid / Blender. I really appreciate them.
      I've started remaking this character (profile pic) in Vroid, with custom textures from SD. It will take long, that's for sure; I'm new to Vroid.
      As you said, our best shot is to merge a style, coz pure 3D feels off.
      I would love to share what we have on the Discord 🤝

  • @nolan6733
    @nolan6733 8 months ago +8

    You explain things so well, great analogies for understanding, explained with humor and fun, and when you don't understand something you honestly admit it. Fantastic job! I'm having a lot of fun learning how to make art/comics with AI so I appreciate your videos! You've got a great teaching style brother :))

    • @Not4Talent_AI
      @Not4Talent_AI 8 months ago +1

      comment so encouraging I had to read it twice hahaha. Thank you so much!!! Glad you find it useful

    • @tsentenari4353
      @tsentenari4353 8 months ago +2

      Perfectly summed up, I couldn't agree more. I loved the two versions of "Popimpokin", it helped me to immediately understand what's crucial here, and why

    • @Not4Talent_AI
      @Not4Talent_AI 8 months ago

      super happy to hear that! tyty! @@tsentenari4353

  • @justinwhite2725
    @justinwhite2725 10 months ago +5

    Based on the provided dataset without captions, a popimpokin is a bar stool, a counter, and one or more bottles of alcohol in some combination.
    (I mention this because that's all the things an AI would associate with a popimpokin without proper captions)

  • @TakatsukiRyoku
    @TakatsukiRyoku 10 months ago +6

    Was literally looking for more lora guide videos, as I've never fully understood from other guides, like hours before you uploaded. And literally after this video, I can say I am capable of actually manually making proper changes to my training parameters, whereas before, I was only using pre-made configs. Thank you again for another very very valuable educational video! I knew subscribing to you was gonna be hella worth it! Keep it up! And perhaps indulge in some popimpokin? XD

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago

      Hahahahahha super great, love to read that. Happy that it was useful, tyty!!

    • @TakatsukiRyoku
      @TakatsukiRyoku 10 months ago +1

      ​@@Not4Talent_AI You're welcome! But I have a question, so I realised that my lora,
      when at strengths as low as 0.4 to 0.7, feels like it's taking over the prompt (the dataset has 60 images, 5-6 of which have the character in a blue shirt), and that color is being transferred through the lora, even though I have "(white shirt:1.5)" written in the prompt. Is this perhaps an issue with my captioning, or do I have to balance it out with different images in the dataset?
      My training parameters are:
      60 images x 15 steps x 3 epochs, which comes to 2700 steps in total. Training batch size is 4, and for everything else I followed your guide in the video.
      Your insight into this is very much appreciated!
      (Edited to fix format and added extra info)
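(For reference, the step arithmetic in this comment can be sketched in a few lines of Python. Whether a given kohya version reports image presentations or optimizer steps as "total steps" is an assumption here, so treat this as a sanity check, not the tool's exact accounting:)

```python
import math

def training_steps(images, repeats, epochs, batch_size=1):
    """Return (image presentations, optimizer steps) for a kohya-style run."""
    presentations = images * repeats * epochs                 # the "2700 steps" figure above
    optimizer_steps = math.ceil(presentations / batch_size)   # gradient updates actually run
    return presentations, optimizer_steps

# The setup described above: 60 images x 15 repeats x 3 epochs, batch size 4
print(training_steps(60, 15, 3, 4))  # → (2700, 675)
```

With batch size 4 the run performs 675 gradient updates even though 2700 image presentations are counted.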

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      @@TakatsukiRyoku Hi! It sounds like a caption issue but unsure. How did you caption the images with the blue shirt?
      (try captioning "blue shirt" in those, or, if you have that already, then caption "-color- shirt" as well in all the other images)
      Btw, if you have 60 images, another possible solution would be to train without those images to avoid issues

    • @TakatsukiRyoku
      @TakatsukiRyoku 10 months ago +1

      Ah I kinda figured it out! After re-training another 2-3 times, I realised it was caused by prompt blending: having (blue eyes:1.6) before the white shirt prompt.
      After adjusting, it fixed the issue! Regardless, I would very much appreciate it if you know why Loras can affect a prompt even at low strength!

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      @@TakatsukiRyoku Same thing as prompt bleeding. Having the lora there will affect things in undesired ways. Like if you have a photo of an apple, change it to a melon, and then the whole background changes for apparently no reason
      Glad you could fix it btw!

  • @AerysBat
    @AerysBat 10 months ago +20

    I have trained a few LoRAs and found one principle to follow if your LoRA isn't coming out right: The problem is almost always your dataset, not your training settings. If the LoRA is not giving correct results, resist the temptation to fiddle with learning rate or anything like that. Instead, go back to your dataset and review your images and make sure you've tagged them properly. A single bad tag can really screw up a LoRA!
    Once it's working well with the default settings, then you can try things like changing the scheduler or network dim to improve your results!

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +2

      Yeee dataset and tagging are pretty much 90% of the work😂😂

    • @TakatsukiRyoku
      @TakatsukiRyoku 10 months ago +4

      I came back to this video to look for more tips, and THIS IS SO TRUE! I tried 1 dataset, without sorting through tags, and one where I did. Cropped my images, and fixed tags! The output is worth the time!

  • @nietzchan
    @nietzchan 10 months ago +2

    Thank you for making this, such a lifesaver~

  • @pokerface5685
    @pokerface5685 5 months ago +1

    the best tutorial out there, google just recommended shit until I found this one! thanks a lot, keep up the work

  • @pastuh
    @pastuh 10 months ago +7

    For proper preparation of the Lora model, time is necessary.
    And, if the model fails and you wish to improve it, it will take twice as much time.
    In conclusion, it's a challenging task that requires dedication.
    The fun may not last for an extended period of time.

  • @piyushsonawane8827
    @piyushsonawane8827 9 months ago +3

    You are the G.O.A.T for making such an informative video! Really appreciate it! Thank you!

    • @Not4Talent_AI
      @Not4Talent_AI 9 months ago +1

      Thank you so much! Happy to hear you liked it!

  • @phoenixrogers
    @phoenixrogers 7 months ago +1

    Looking forward to your advanced guide, thanks for the help.

    • @Not4Talent_AI
      @Not4Talent_AI 7 months ago +1

      ty!! it will take a while cuz I need to upgrade my pc and that sht's expensive XD but it will come eventually!

    • @phoenixrogers
      @phoenixrogers 7 months ago +1

      @@Not4Talent_AI I feel that. I want to do the same, however I am lucky enough to have a 2080ti

  • @sherpya
    @sherpya 10 months ago +13

    we definitely need a bottle of pompiko-thing

  • @nims5537
    @nims5537 9 months ago +1

    Wow, incredible tutorial ! Thx ❤

  • @mkor1234
    @mkor1234 10 months ago +1

    wow, I searched for lora videos and this is the best I've ever seen, keep going

  • @guilhermegamer
    @guilhermegamer 10 months ago +6

    OMG! Just the theme I was hoping you could choose! =D

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      Hahhahaha needed to follow up the last vid 😂 This one has two parts, but the 2nd one will take a while. (Original script had 25 pages xD)

    • @guilhermegamer
      @guilhermegamer 10 months ago +2

      @Not4Talent_AI Can't wait to learn more with you. 😎

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      @@guilhermegamer thank you so much! Hope to provide good value :3

    • @lefourbe5596
      @lefourbe5596 10 months ago +2

      Daunting task X)

  • @casualgamer3689
    @casualgamer3689 10 months ago +1

    Great video! I was amazed by the sound of Poppin-pokin!

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      hahahaahha tyty! Weirdly enough I struggled less with that word than with most casual English XD

  • @CBikeLondon
    @CBikeLondon 10 months ago +2

    Keep up the great work

  • @EvilNando
    @EvilNando 10 months ago +6

    I am an absolute noob when it comes to this but I just wanted to share my experience:
    so far I've been using kohya with all default settings, just adding images cropped to 512 with BIRME, autocaption, and hitting run, 1 epoch, 100 steps, and I have been able to get the character I wanted pretty much all the time. Now with this knowledge in mind I will try again properly and see how it changes

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +2

      Nice! Hope it helps. Parameters and tagging can help when you have a bad or mediocre dataset. But if you have a good one, it's pretty hard to mess it up

    • @chuckenergy
      @chuckenergy 10 months ago +5

      please come back after you've tried and report findings :))

    • @letsdwit
      @letsdwit 2 months ago

      how many images did you put in?

  • @ryry9780
    @ryry9780 7 months ago +2

    Been a long time since I did ai. I've only ever known hypernetworks but I don't have the patience to oversee 20,000 steps worth of HN training.
    Kinda clueless about LoRA so I hope this vid will help :D

  • @lefourbe5596
    @lefourbe5596 10 months ago +35

    WE MADE IT ! FINALLY !
    I can take any question down below and I'll try to answer the best I can.
    EDIT: regularization should not be used by beginners with LoRA; instead use network dropout or caption dropout. These preserve the model in a different way, and they speed up model training by about 2x.

    • @kenjin5758
      @kenjin5758 10 months ago +1

      GG ! Lot of work done here !

    • @ocvala7339
      @ocvala7339 10 months ago +1

      I made nearly 20 LoRAs, but only 2 of them run well. And both were made on my slow GTX 1070 with the AdamW optimizer, dataset < 40 images. I trained the others with exactly the same settings on Colab (for speed) with Adam8bit, datasets of 120-137 images, but they gave me very bad results when applied. They're all Dim = 8, 100-150 steps, default parameters.
      So, the question is: what is the real trick to training a good Lora with good effect? I just want to train a Lora which can apply 90% of the detail of the original images (real person) but I fail without knowing why. My GTX 1070 trained 27 hours and 32 hours for the first two, so I want a faster solution with the same result.

    • @MrPangahas
      @MrPangahas 10 months ago

      what happens to the dataset you use and the outputs you generate? Do they become part of the SD dataset for everyone to use?

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago

      hi! I would use a higher dim, for starters. By steps, do you mean repeats or total steps? If it's total steps, that's really low; if it's repeats, it's super high. With 120 images you don't need regularization, and could use about 5 repeats. Train just 1 epoch, test it, and if it is not trained enough, import it and train UPON it later. (Sorry for the late response, Lefourbe can't answer, for some reason his YT is messed up. And it isn't notifying me either)

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago

      If it's local, only you have them. If you use external websites or whatever, it will depend on the settings of each website

  • @ProfessorDJ-px3mc
    @ProfessorDJ-px3mc 10 months ago +2

    Thanks for all your hard work! (¡Gracias por todo su trabajo duro!)
    I was wondering if you could make a Google Colab kohya training video? There aren't many resources for learning how to train with this method. A lot of the options in Google Colab are the same, and my loras come out okay... but I feel like I could be optimizing better. It would also be very useful for low-performance computer users and Linux/Mac users, since the local install process is a bit different. Great job on this video! I look forward to more!

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago

      thank you!!
      I originally wanted to include it in the video when I thought about it. But all the times I've tried colabs like that, I could never get them to work.
      I'll look at it if at some point I'm able to train properly there!

  • @valiantregent
    @valiantregent 5 months ago +1

    Great video, thank you. I was able to make my first one with it. Do you have a video where you recommend the best computer hardware to buy to train LORAs and models very quickly?

    • @Not4Talent_AI
      @Not4Talent_AI 5 months ago

      Thanks!!
      I don't have a video specifically for that. But I did upgrade my pc very recently and made a video comparing it to the last one I had. Went from a 1080ti to a 4090. I left some stuff in the description there that may help too. czcams.com/video/wQmVXnrBbrU/video.htmlsi=dixUqSPp7bTtOTtw

  • @Gabirell
    @Gabirell 10 months ago

    Damn, man… you haven't left anything for tomorrow! 😅 Good video! Really good! Thanks!

  • @benzpinto
    @benzpinto 10 months ago +1

    thanks for the tutorial, handsome!

  • @hamid2688
    @hamid2688 6 months ago +1

    lovely videos and great content, love your tutorials bro, also your jokes are funny and really make me laugh :D

    • @Not4Talent_AI
      @Not4Talent_AI 6 months ago

      thank you so much!! Makes me really happy to read this :3

  • @ESFAndy011
    @ESFAndy011 4 months ago +1

    11:40 Holy shit, I've been looking for something that could tell me in a practical way whether a caption is valid or not. I'll definitely be using this.
    But I do have a question (sorry if it's a bit convoluted): If I modify my captions in an image that isn't 1:1 and use bucket, will that screw up the AI's understanding of the captions I modified and confuse it? For example, my character is wielding two weapons with their arms up, but the bucket will hack the character in half. The lower half of their body is obviously not wielding any weapons, nor does it have arms to begin with. Will bucket sort this out on its own? I'd imagine so, but I figure I should probably ask anyway.

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      Yup, that could happen. If you use bucket though, the image shouldn't get cropped.
      If you do want to crop it, I'd do so before tagging

    • @ESFAndy011
      @ESFAndy011 4 months ago +1

      @@Not4Talent_AI Got it. Thanks!

  • @ash0787
    @ash0787 9 months ago +2

    In another video about this I found there's this thing called Microsoft PowerToys you can get, and it can resize images automatically; I found that saved a lot of time. Mine looks ok, but I had to lower it to 4 epochs because I now have 44 images at 512x512. I set it to 30 steps per image and the total steps comes to way more than 3000, so I don't know if that's bad. It does not actually take very long to train though, under 3 hours. I would have expected that to make a good model it needs to train much longer? Is the 'loss rate' an important number to pay attention to?

    • @Not4Talent_AI
      @Not4Talent_AI 9 months ago

      Depends on your GPU; good trainings can take 15 min. Loss rate is important when you are more advanced in lora training. Usually if you have a very low loss (like 0.05) it means your training is overfitting. Normal values are around 0.1, depending on the training dataset and other stuff. But if the loss was 0.2 during the full training and all of a sudden it goes down to 0.1, then it's probably overfitting too
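(The heuristic in this reply — a loss that sits well below ~0.1, or suddenly collapses to half its usual level, may signal overfitting — can be sketched as a small monitor. The thresholds and window size below are illustrative assumptions, not values from the video:)

```python
def smoothed(losses, window=50):
    """Moving average so a single noisy batch doesn't trigger a warning."""
    out = []
    for i in range(len(losses)):
        chunk = losses[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def overfit_warnings(losses, floor=0.05, drop_ratio=0.6, window=50):
    """Flag steps where the smoothed loss dips below `floor`, or falls to less
    than `drop_ratio` of its running mean (e.g. a sudden 0.2 -> 0.1 collapse)."""
    avg = smoothed(losses, window)
    flags = []
    for i, v in enumerate(avg):
        running_mean = sum(avg[: i + 1]) / (i + 1)
        if v < floor or (i > window and v < drop_ratio * running_mean):
            flags.append(i)
    return flags

# Steady ~0.2 loss that suddenly collapses toward 0.04: warnings appear after the drop
history = [0.2] * 200 + [0.04] * 100
print(overfit_warnings(history)[0])
```

A flat, healthy loss curve produces no warnings; the flags only start once the smoothed loss actually collapses.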

  • @user-bb4sy1xu7k
    @user-bb4sy1xu7k 10 months ago +2

    Good video, it helped me a lot to understand captioning!
    One question: I have 6gb of VRAM and have already trained two loras (I am currently experimenting), and I want to train my next lora with regularization images added. Does that increase VRAM usage?

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +2

      I don't think it increases the VRAM usage, but it doubles the training steps (in terms of time)

  • @moebiusSurfing
    @moebiusSurfing 3 months ago +1

    Hey, thanks for the video! What should I use to train locally? Any good guide to recommend? Regards

    • @Not4Talent_AI
      @Not4Talent_AI 3 months ago

      hi!! The video is aimed at training locally; kohyaSS is a web UI, but it's not a web service, meaning it trains on your pc but uses a browser as the UI for the tool.
      (unless this isn't what you meant, in that case, my bad hahahaha). As for other possible guides, this guy has some good tutorials on it, even if long:
      czcams.com/video/7m522D01mh0/video.html
      for shorter tutorials, I understand RoyalSkies is planning on making a lora training guide on 3D characters with Lefourbe helping. So that's something to look forward to as well.
      Hope it helps!

  • @kimweeng5358
    @kimweeng5358 10 months ago +2

    hey hey @Not4Talent, thanks for the great and in-depth video. I've been trying out LORA training for myself, but even following your settings and instructions the LORA comes out very poor, even with 360 images. Especially with human proportions, facial expressions and eyes. Any advice on improving the LORA training?

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      Hi! You should not use my settings if you have 360 images. It's wildly different to train with 16 than with more than 100.
      1- make sure that all 360 images are good in quality and resolution
      2- make sure the captions are fine
      3- if you want better facial anatomy, maybe you should consider using a higher resolution when training, or even lowering the learning rate.
      4- you don't need regularization images. They can hurt your dataset if yours has that many images
      5- in terms of expressions, does your dataset have a nice expression variety?
      6- I'd probably use way lower repeats per image, and maybe fewer epochs. It will vary with what model you are training on too
      (Ty for the kind comment btw :3)

  • @extro2657
    @extro2657 4 months ago +1

    I'm confused: am I supposed to remove tags referring to the character, or am I supposed to accurately describe them with tags to train? And for kohya's undesired tags, is that supposed to be things I don't want, like errors or something?

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      More recently people have started to just tag everything, and clean up the things that are not in the image (in case autotagging messes up).
      This will probably give you the most consistency for your character, but it takes up a lot of tokens when using the lora, since you'll have to type all of your character-related tags.
      Before, and what I say in the video, you'd ONLY tag stuff that wasn't your character, and just use a keyword as your character's name.
      You can still do that ofc. The rest is describing everything that ISN'T your character (background and such)

  • @greengrendel
    @greengrendel 4 months ago +1

    holy CRAP that tag manager is such a life changer!

  • 10 months ago +1

    Thanks for the video. I think you made the bottle part a bit more complicated than necessary, but thanks anyway for doing all this, it takes a lot of work. The dreamlook tool is also very good, I just paid for the credits; very fast for iterating, understanding the dataset, and thinking about how to train it better. 🌹💎

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +2

      Hahahahaa could be, the name amused me and I got carried away 😂😂😂😂
      Thank you so much btw!!! (And yeah, that tool is really good. Especially if your pc takes a long time to train things, like in my case)

  • @Shingo_AI_Art
    @Shingo_AI_Art 10 months ago +1

    And for those digital-painting, half-anime half-realistic images, what would be best for captioning, BLIP or booru tags?

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      I think it's booru too. But it depends on the model you will train on

  • @guterversuch6337
    @guterversuch6337 2 months ago +2

    The new location for the folder preparation is now in Dreambooth > Training > Dataset preparation

    • @Not4Talent_AI
      @Not4Talent_AI 2 months ago +1

      Thanks!!¡¡

    • @guterversuch6337
      @guterversuch6337 2 months ago +1

      @@Not4Talent_AI No, I have to thank you, sir. But also, there is a bug where during the kohya setup installation it doesn't pull the sd-scripts, so you have to download the folder separately online and add it into the sd-scripts folder. Took me like 3 hours to figure out since I don't have any programming experience

    • @Not4Talent_AI
      @Not4Talent_AI 2 months ago +1

      Oh wtf. That's new. Thanks for sharing, maybe it can help others too!!

  • @relaxation_ambience
    @relaxation_ambience 10 months ago +1

    Hi, are regularization images supposed to be generated in SD, or can they be from the internet? How many of them do I need? If they must be SD-generated, should they be fixed with "restore faces" or left raw? If I train at 768x768 resolution, are the regularization images supposed to be the same resolution?

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      Hi!
      Regularization images can be any resolution, but it's better if they match your dataset. As for how many: if you have fewer than 60-100 images, multiply the number of images in your dataset by 6-10, and that should give you a fair number of regularization images.
      You can do both, internet and SD. But if you make them with SD you have more control over style and stuff
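(That rule of thumb is easy to put in code. The cutoff below for "small" datasets is a hypothetical choice for illustration; the reply only says to multiply by roughly 6-10:)

```python
def reg_image_count(dataset_images, factor_small=10, factor_large=6, small_cutoff=30):
    """Size a regularization set by the reply's 6-10x rule of thumb.
    `small_cutoff` is an illustrative assumption, not from the video."""
    factor = factor_small if dataset_images <= small_cutoff else factor_large
    return dataset_images * factor

print(reg_image_count(16))  # small dataset → 160 regularization images
print(reg_image_count(60))  # larger dataset → 360 regularization images
```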

    • @relaxation_ambience
      @relaxation_ambience 10 months ago +1

      @@Not4Talent_AI Thank you for the answer. One more question: when I make regularization images in SD, should the prompt for a woman character be just "woman", or can I add some more words to the description?

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +3

      @@relaxation_ambience You can use more words if you want. I usually add stuff like the camera angle or shot, even lighting and pose. Also, I add nsfw and nudity to the negative prompt, cuz a lot of models have a bias towards nsfw

  • @justinwhite2725
    @justinwhite2725 10 months ago +3

    0:04 correction on this - you can run Automatic1111 from the CPU even if your GPU doesn't have CUDA cores or support ROCm.
    I don't think you can train on the CPU though (haven't tried, because I expect it to be unbearably slow even if it works)

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      True, my bad on that one hahahaha thanks!!

  • @user-zt7hy1ty9t
    @user-zt7hy1ty9t 10 months ago +1

    Nice tut buddy! But if I'm training for a semirealistic style, should I tag with BLIP or deepbooru?

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      Usually just deepbooru is fine. But you can also see how the creator of the semirealistic model you're going to train on prompts stuff

    • @user-zt7hy1ty9t
      @user-zt7hy1ty9t 10 months ago +1

      @@Not4Talent_AI okay ❣️

  • @lalayblog
    @lalayblog 7 months ago +1

    Batch size 4 doesn't make your training 4 times faster. It makes training faster, but the speedup is much smaller: batch 2 might speed training up by maybe 20%, and batch 4 by +30% or so.
    Batching only puts the 4 images into VRAM in one go; the GPU will most likely take about the same time processing them as with batch 1. You mainly get more efficient uploading to GPU memory from system RAM.
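(A toy model of this point: batch size 4 quarters the number of optimizer steps, but each image still costs GPU time, so the wall-clock win comes only from amortizing per-step overhead. All the timing constants below are made-up illustrative numbers, not measurements:)

```python
import math

def epoch_seconds(images, repeats, batch_size, sec_per_image=1.0, overhead_per_step=0.3):
    """Toy wall-clock estimate: per-image compute is fixed; batching only
    amortizes per-step overhead (RAM->VRAM transfer, optimizer update)."""
    presentations = images * repeats
    steps = math.ceil(presentations / batch_size)
    return presentations * sec_per_image + steps * overhead_per_step

t1 = epoch_seconds(60, 15, batch_size=1)
t4 = epoch_seconds(60, 15, batch_size=4)
print(round(t1 / t4, 2))  # ≈1.21x faster, nowhere near 4x — matching the comment's point
```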

    • @Not4Talent_AI
      @Not4Talent_AI 7 months ago

      Yeah, I don't really remember if I said "4 times faster" explicitly, but you are right. Ty!

  • @trickdeck
    @trickdeck 10 months ago +2

    When I click "Train Model", I keep getting the error: TypeError: memory_efficient_attention() got an unexpected keyword argument 'scale'. Is there any fix to this?

    • @Not4Talent_AI
      @Not4Talent_AI 10 months ago +1

      Do you have "memory efficient attention" active in the training parameters? Try either activating or deactivating it.
      If it's not that, I don't know and I can't check atm. I'll see when I can!

  • @Alice-lf5yr
    @Alice-lf5yr 10 months ago +2

    I wanted to thank you for this video; I was able to make my first Lora and it came out so good I was actually surprised! But I have a few questions for the future:
    1) The character I trained has a very complex outfit, and leaving it untagged so the AI would learn it worked perfectly (impressive to say the least). Since it's a not-so-popular character from a game, I wasn't able to get it with any other outfits, as there were no pictures and I just screenshotted them myself. I'm able to change it during generation if I increase the clothes' weight, but some of them will end up resembling the original unless they have similar shapes, like a dress, which then actually looks super cute. How would I go about letting the AI have more freedom with clothes? Should I tag the outfit in all the images (even if it's the same) so it can easily become detachable later? Or does it have to do with the regularization images I use?
    2) Hands. Do I need to have good hands in the regularization images for the AI not to produce extremely bad ones in the future, or does that depend on the checkpoints/embeddings/etc and not the Loras?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci +1

      thank you!!!
      1) Yes! Tag the outfit in every image so the AI has more flexibility. If you find images with different outfits but the same character, adding them to the dataset could help too.
      2) Hands are not really trainable. Of course, having better hands in the regularization images and using models that are better with hands can help. But at the end of the day the AI just CAN'T make good hands without help, the same way you can't really train it to count (like 7 watermelons: it will create a random number, not 7).
      In short, I wouldn't bother. It can help, but you'll never get perfect hands without help, unless it is a very, very simple hand pose, like a fully open hand.

    • @Alice-lf5yr
      @Alice-lf5yr Před 10 měsíci +1

      @@Not4Talent_AI This is extremely helpful, I appreciate a lot the time you took to explain all of this and help others. Thanks a lot for the reply!

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      @@Alice-lf5yr no problem!!! thank you for watching and the kind comment :3

  • @lordkhunlord9210
    @lordkhunlord9210 Před 10 měsíci +1

    When training, do all the pictures need to be the same size? Also, what is the minimum quality (for realistic photos)? I know pixelated pictures aren't suggested. So does that make a photo taken with a webcam or the front camera of a cellphone a bad choice?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci +1

      Not sure I fully follow the question, but images can vary in size, as they will be downscaled to the correct size when you train. And with buckets you can import different aspect ratios.
      For quality, you should use a higher training resolution, like 768. How you take the pictures doesn't matter as long as you can see the subject correctly and there are no blurry parts. Looking at an image, it should be easy to tell whether it's good (can you see the subject, with no weird indiscernible parts? Then it's fine).

    • @lordkhunlord9210
      @lordkhunlord9210 Před 10 měsíci +1

      @@Not4Talent_AI so if I use an image at 1080x1920 it will not be cropped unless I ask it to do it?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci +1

      @@lordkhunlord9210 exactly. It will be downscaled tho
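To make "downscaled, not cropped" concrete, here is a minimal sketch of the aspect-preserving downscale (the 768 target is just an example resolution):

```python
def downscale_to_max_side(width: int, height: int, max_side: int = 768):
    """Shrink dimensions so the longer side fits max_side, keeping aspect ratio."""
    scale = max_side / max(width, height)
    if scale >= 1.0:  # already small enough, leave untouched
        return width, height
    return round(width * scale), round(height * scale)

print(downscale_to_max_side(1080, 1920))  # a phone photo -> (432, 768), nothing cropped
```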

    • @lordkhunlord9210
      @lordkhunlord9210 Před 10 měsíci

      @@Not4Talent_AI There was an update and this whole process is in the deprecated tab. I can't even train with the new layout

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci +1

      @@lordkhunlord9210 Yep, I added that in the video because they updated just before I posted.
      And I can't train either. Hoping for a re-update

  • @ghostsquadme
    @ghostsquadme Před 3 měsíci +1

    How do you create a lora of a complex scene. Say for instance, 2 people sword fighting?
    Or 2 wrestlers in a choke hold or something like that?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 3 měsíci

      that's called a concept LoRA, and I think the method is pretty much the same, even though I don't really have experience training those

    • @ghostsquadme
      @ghostsquadme Před 3 měsíci +1

      @@Not4Talent_AI I'd love to see a video on that!

    • @Not4Talent_AI
      @Not4Talent_AI  Před 3 měsíci +1

      I'll see if the opportunity comes and maybe I'll do it! @@ghostsquadme

  • @shu1729
    @shu1729 Před 7 měsíci +1

    whats the app/site used for captioning? can i please get a link of it?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 7 měsíci

      I think this was the one:
      github.com/starik222/BooruDatasetTagManager?search=1

  • @Platzhalterxy
    @Platzhalterxy Před 8 měsíci +1

    I dont have any good graphic cards can I use colab pro?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 8 měsíci +1

      Yep, I think so. I dont know if kohya colabs are still working but if they are, then it's fully ok

  • @simonetruglia
    @simonetruglia Před 7 měsíci +1

    This is a very good video mate, thanks

    • @Not4Talent_AI
      @Not4Talent_AI  Před 7 měsíci

      thank you for watching and the nice comment!!

  • @minggnim
    @minggnim Před 9 měsíci +2

    At 13:02, it says, "16x10 = 160 reg. img". What does this mean?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 9 měsíci +2

      We have a dataset of 16 images. And we need 10 regularization images per dataset image.
      So 16 dataset images x 10 reg images each. = 160 total regularization images
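A quick sanity check of that arithmetic (a sketch; the 10-per-image ratio is just the rule of thumb from the video):

```python
def reg_images_needed(dataset_size: int, reg_per_image: int = 10) -> int:
    # the regularization set scales linearly with the dataset size
    return dataset_size * reg_per_image

print(reg_images_needed(16))  # -> 160
```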

  • @hairy7653
    @hairy7653 Před 10 měsíci +2

    Yo dude, got a link to the advanced video you mentioned at the end?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci +5

      Hi! Not yet, it's in the making. We originally had a 25-page script that we ended up dividing.
      Taking advantage of the two-part split, we decided to go more in depth on a lot of stuff. But since a new model just released, maybe we'll wait to learn a bit more about that too.
      If there is high demand for the video to be released soon, we could work from what we left out of this one

    • @hairy7653
      @hairy7653 Před 10 měsíci +1

      @@Not4Talent_AI cool, thanks for letting me know.

  • @Griffith74
    @Griffith74 Před 9 měsíci +1

    where did you get those Regularisation images?

  • @user-bq5yt6vm4f
    @user-bq5yt6vm4f Před 10 měsíci +1

    I'm having trouble getting the DataToJson file to work
    line 10, in
    sorted_file_list = sorted(file_list, key=lambda x: int(re.sub('\D', '', x)))
    ValueError: invalid literal for int() with base 10: ''
    not sure how to fix

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      that's because the script assumes the file names are numeric values. Try this one maybe:
      import os
      import json
      import re

      def create_json_from_folder(folder_path, save_path):
          data = {}
          # Get the list of files in the folder and sort them numerically
          file_list = os.listdir(folder_path)
          sorted_file_list = sorted(
              [f for f in file_list if re.match(r'\d+', f)],
              key=lambda x: int(re.sub(r'\D', '', x)))
          # Loop through each file in the sorted list
          for file_name in sorted_file_list:
              # Check if the file is an image
              if file_name.endswith('.png'):
                  # Get the corresponding text file
                  text_file_name = file_name.replace('.png', '.txt')
                  text_file_path = os.path.join(folder_path, text_file_name)
                  # Read the contents of the text file
                  with open(text_file_path, 'r') as text_file:
                      text = text_file.read().strip()
                  # Add the image-text pair to the data dictionary
                  data[file_name] = text
          # Write the data dictionary to a JSON file at the specified save path
          json_file_path = os.path.join(save_path, 'data.json')
          with open(json_file_path, 'w') as json_file:
              json.dump(data, json_file, indent=2)
          print("JSON file created successfully at:", json_file_path)

      # Ask the user for the folder path
      folder_path = input("Enter the folder path: ")
      # Ask the user for the save path
      save_path = input("Enter the path to save the JSON file: ")
      # Call the function to create the JSON file
      create_json_from_folder(folder_path, save_path)

  • @DeFirm-
    @DeFirm- Před 9 měsíci +1

    I'm using a 3050 8 GB, can I make SDXL LoRAs or should I rob a bank first?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 9 měsíci

      ahahahaaa You can run it, but it'll be rough. In any case you can just run SD 1.5. There is not that much of a difference (yet)

  • @memelord4704
    @memelord4704 Před 10 měsíci +1

    I have an issue while trying to generate tags with KohyaSS: all the text files are empty. In the console I get a message ending with "returned non-zero exit status 1".
    Any clue why this doesn't work? Thanks a lot!

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci +1

      Is your folder structure correct?

    • @memelord4704
      @memelord4704 Před 10 měsíci +1

      ​@@Not4Talent_AI in fact i had issues with it thanks a lot!
      I was able to continue following your video and configure everything as you did, but encountered some errors with the AdamW8bit and AdamW optimizers. I tried the next one in the list: Adafactor. I have absolutely no idea how it will impact the training, but at least this time it ran!
      I'll leave it running and see what I finally get...
      Thanks again for your help and videos!

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      @@memelord4704 Thank you so much for watching! Glad it helped, and good luck with the training haha

  • @ash0787
    @ash0787 Před 9 měsíci +1

    Why can't you use PyTorch 2? Will it be OK on a GTX 1080? I want to make a character that's a follower from Skyrim. I've done embeddings before but never done a LoRA.

    • @Not4Talent_AI
      @Not4Talent_AI  Před 9 měsíci

      with my 1080 Ti it's kind of hard to use PyTorch 2 and some other stuff tbh

    • @ash0787
      @ash0787 Před 9 měsíci +1

      @@Not4Talent_AI I'm still at the stage of installing the thing so I could change it to Torch 1. I only just opened the GUI and its a bit overwhelming, trying to look at videos that explain how to make a LORA and at first I didn't realize you'd made one explaining it. Lots of research to do before I even start trying to make it.

    • @Not4Talent_AI
      @Not4Talent_AI  Před 9 měsíci +1

      @@ash0787 it is overwhelming, yeah. A lot of stuff to learn and do xD

    • @ash0787
      @ash0787 Před 9 měsíci +1

      @@Not4Talent_AI also I still don't see a good reason to change to a new graphics card, RTX 4000 series was really disappointing from the reviews I saw, 3060 12GB being better than 4060 / 4060Ti.

    • @Not4Talent_AI
      @Not4Talent_AI  Před 9 měsíci

      @@ash0787 For gaming there isn't. For AI, the newer gens have some technology advancements that make generation much, much faster. But I don't know for sure because I haven't tried them. I know the 3060 is way better for generating than the 1080, even though it only has 1 GB more VRAM. Not sure which is better between the 3060 and the 40 series, though.

  • @SAVONASOTTERRANEASEGRETA
    @SAVONASOTTERRANEASEGRETA Před 10 měsíci +2

    Could have been an interesting video. I don't understand this mania for speaking fast when you explain. What are you, a DJ?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      Idk man, I talk fast xD I try to slow down on the important parts, but maybe it's still too fast? Not sure; I'm so used to hearing this speed that I can't really tell

  • @dpqkkcit199428
    @dpqkkcit199428 Před 7 měsíci +1

    You say training takes you a long time compared to the service (Dreamlook). What kind of computer do you have? (So I know whether, even with something similar, it would still be hopeless...)

    • @Not4Talent_AI
      @Not4Talent_AI  Před 7 měsíci

      The main thing to look at is the GPU; I have a 1080 Ti. In terms of VRAM it ain't bad, but it's slow af for AI. If you have a newer Nvidia GPU with similar or higher VRAM, you might be better served

  • @rewixx69420
    @rewixx69420 Před 10 měsíci +1

    Me, who understands diffusion models and LoRAs but doesn't know good hyperparameters

  • @user-do5eo2gf9r
    @user-do5eo2gf9r Před 2 měsíci +1

    Generated files save only in JSON format; how do I also get the safetensors output?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 2 měsíci

      Pretty weird that it's saved in JSON format. I'd guess that's not the training itself but just the parameters being saved in case you need to re-use them.
      Has the training finished? Maybe it hasn't saved any epochs yet

    • @user-do5eo2gf9r
      @user-do5eo2gf9r Před 2 měsíci +1

      @@Not4Talent_AI this is my error:
      ModuleNotFoundError: No module named 'bitsandbytes.cuda_setup.paths'
      File "C:\hjuuk\kohya_ss\library\train_util.py", line 3499, in get_optimizer
      raise ImportError("No bitsandbytes / bitsandbytesがインストールされていないようです")
      ImportError: No bitsandbytes / bitsandbytesがインストールされていないようです
      Traceback (most recent call last):
      I totally don't know what to do with it. I would be very grateful if you helped me somehow ;)

    • @Not4Talent_AI
      @Not4Talent_AI  Před 2 měsíci

      Oh, YT didn't notify me. I'd try pasting the error into Google, because tbh I have no idea how to fix it.
      Sorry for the late response!

  • @ctrlartdel
    @ctrlartdel Před 10 měsíci +1

    Where do you put the caption of an image?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      Same folder as the image, with the same file name as the image it refers to
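As a sketch of that convention (the path here is hypothetical): the caption for img/20_Skere/0001.png would live at img/20_Skere/0001.txt.

```python
from pathlib import Path

def caption_path(image_path: str) -> Path:
    # Same folder, same file name, .txt extension
    return Path(image_path).with_suffix(".txt")

print(caption_path("img/20_Skere/0001.png"))  # -> img/20_Skere/0001.txt
```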

  • @Carmidian
    @Carmidian Před 5 měsíci +1

    What captions get associated with your character? So if I have a character standing in a rocky desert, are the rocks associated with it?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 5 měsíci

      If you don't tag them properly, yes. If the model overtrains, also yes. You should always tag the rocky desert if you want to avoid it becoming part of the character

  • @DoozyyTV
    @DoozyyTV Před 8 měsíci +1

    Booru tags don't have spaces but underscores. I've been wondering, should I use underscores with anime models?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 8 měsíci

      Probably, yeah. If you use an extension called "tag manager" (I think), you can see which tags to use

    • @DoozyyTV
      @DoozyyTV Před 8 měsíci

      @@Not4Talent_AI why doesn't anyone do this then? Do they train them without underscores?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 8 měsíci

      @@DoozyyTV Depends on the model and how it was trained, but a lot of people do it. Also, it's not that you don't use spaces; it's that certain tags consist of two words, and those are connected with an underscore.
      So "a woman wearing a dress, looking at viewer, long sleeves" would probably just need an underscore in "long_sleeves"
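A sketch of that idea, assuming a tiny hand-made set of multi-word tags (the real vocabulary would come from the booru site or the tag manager extension):

```python
# Hypothetical subset of multi-word booru tags; real lists come from the site.
MULTI_WORD_TAGS = {"long sleeves"}

def to_booru(caption: str) -> str:
    """Underscore only the tags known to be multi-word booru tags."""
    tags = [t.strip() for t in caption.split(",")]
    return ", ".join(t.replace(" ", "_") if t in MULTI_WORD_TAGS else t for t in tags)

print(to_booru("a woman wearing a dress, looking at viewer, long sleeves"))
# -> a woman wearing a dress, looking at viewer, long_sleeves
```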

    • @DoozyyTV
      @DoozyyTV Před 8 měsíci

      @@Not4Talent_AI yeah that's what I would think but when I look at images on civitai, the prompt tags never have underscores

    • @Not4Talent_AI
      @Not4Talent_AI  Před 8 měsíci +1

      @@DoozyyTV Again, depends on the model and the tags. Also, keep in mind that most people don't look that deep into tagging; they just type whatever. If you really care about tagging in booru format, you can install the tag manager extension; it's pretty nice to have for things like that

  • @eddiemauro.design
    @eddiemauro.design Před 10 měsíci +1

    Congratulations :)

  • @punitkk7696
    @punitkk7696 Před 6 měsíci +1

    How do I know if I need a Lora or a Model? How do you decide?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 6 měsíci

      I'd always go for a LoRA first, since it's faster and easier. But a model can work great for styles, or for more umbrella concepts like "weather" (a model that specializes in multiple weather conditions and can make a wide variety of them). Other than that, I haven't touched models, so take my word with a grain of salt

  • @Trumf888
    @Trumf888 Před 10 měsíci +1

    Great job! Please tell me how to do this: I only have photos of the shoes by themselves (not on a foot), just shelf-style product photos. How do I get them onto a person's feet at generation time? Otherwise, when I prompt for them, I just get close-ups like the photos I trained on :(

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      thanks!!
      Hmm, sounds tough to do tbh. I would include at least 1 image of someone wearing them, even if you need to make it with Photoshop. You may also want to caption things like "product photography" or "no humans", even "clothing photography" and "empty XsubjectX". Then for the one with someone wearing them, specify "wearing XsubjectX".
      You will probably need 2 trigger words: subject + shoes. And regularization images of shoes and of people wearing shoes.
      Hope that helps a little at least!

    • @Trumf888
      @Trumf888 Před 10 měsíci +1

      @@Not4Talent_AI Thanks for the reply. I took reg photos, and I caption each photo in the dataset folder with "skere, no humans, shadow, traditional media, wooden floor, still life, wooden table", based on the video example. I train, but I can't put the shoes on a person. The task is difficult and no one has a video on it. If you succeed, it would make good content

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      @@Trumf888 Remember to add "shoes"! If your subject (the shoes) is named "skere" (you can name it however you like), you should add the class tag if you intend to use it in more ways.
      I'd caption it like this (don't add "shadow" unless it is a very important part of the image; if it is in this case, then OK): skere, shoes, traditional media, no humans, still life, wooden table

    • @Trumf888
      @Trumf888 Před 10 měsíci +1

      @@Not4Talent_AI OK thank you very much. I will try :)

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      @@Trumf888 np, good luck!

  • @Beauty.and.FashionPhotographer

    Adding actual real skin pores to faces that have plastic-looking skin from bad renders, would that be possible?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 28 dny +1

      I don't think I fully understand the question, but I guess it's about getting better skin, with pores that look more realistic?
      I think that would be possible to a certain extent, since AI works on noise and that can be visible.
      But with SDXL I think it should be doable

    • @Beauty.and.FashionPhotographer
      @Beauty.and.FashionPhotographer Před 28 dny +1

      @@Not4Talent_AI I could show you examples on Discord... I dropped the same question there for you, sorry

  • @carloosmartz
    @carloosmartz Před 3 měsíci +1

    Hi, would you know how to make a LoRA that responds to its weight? That is, I want weight -5 with the LoRA to give me a very thin person, weight 5 a very fat person, and weight 0 a normal person. I have images to train the model with the body types; the only thing I need to know is how to introduce these numeric values in the dataset .txt, so that when training, the AI recognizes that 5 is fat and -5 is thin. Thank you very much 😮

    • @Not4Talent_AI
      @Not4Talent_AI  Před 3 měsíci

      Hi!! Yes! That's exactly my latest video hahaha
      czcams.com/video/GaVuQEWqEoM/video.html

    • @carloosmartz
      @carloosmartz Před 3 měsíci +1

      @@Not4Talent_AI Lucky me hahaha, thanks

    • @Not4Talent_AI
      @Not4Talent_AI  Před 3 měsíci

      No problem, I hope it helps! @@carloosmartz

  • @donmaikurosawa1500
    @donmaikurosawa1500 Před 3 měsíci +5

    Is this guy narrating a horse race or something? Selected 0.25 speed and it was still too fast.🙂

    • @Not4Talent_AI
      @Not4Talent_AI  Před 3 měsíci +1

      Hahahahha yeah, pretty much. I'm trying to find a good balance: fast enough for younger generations and slow enough for people with an actual ability to focus.
      Seems I'm still on the faster side

    • @amritbanerjee
      @amritbanerjee Před 4 dny

      I am on 2X 🙂‍↔️

  • @keitaro3660
    @keitaro3660 Před 9 měsíci +1

    Oh, so is this the definitive method to create a "same-looking character"?
    Like, I want to create a LoRA based on a person, so it will be realistic. So I just need to put in many reference images, and then the AI will replicate her with the same face and hair, but customized expressions, poses, and outfits? Whoaaa, that's really cool stuff

    • @Not4Talent_AI
      @Not4Talent_AI  Před 9 měsíci +1

      If done right yes! thats exactly it :3

    • @keitaro3660
      @keitaro3660 Před 9 měsíci +1

      @@Not4Talent_AI great, i'm gonna test it later. Thanks for this very clear explanation!

    • @Not4Talent_AI
      @Not4Talent_AI  Před 9 měsíci

      @@keitaro3660 Ty for watching! Wish you good luck :3

  • @DARKNESSMANZ
    @DARKNESSMANZ Před 10 měsíci +1

    I'm clicking on "caption images" but nothing happens in the cmd... please help

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      Huh, weird. Is the path correct? Maybe it needs to install the captioner, which it does after a little bit. Also, make sure to click on the cmd window and hit "enter"; sometimes it freezes for no reason

  • @NineSeptims
    @NineSeptims Před 10 měsíci

    Does this video work for making a LoRA for the new SDXL 1.0?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci +1

      I think there are some options that you need to activate. Maybe captions and parameters affect the training differently, aside from needing to train at 1024, I think. Haven't tried though

  • @lightsout7443
    @lightsout7443 Před 10 měsíci +1

    is the discord link broken? I can't seem to join with the link

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      Huh, it shouldn't be.
      discord.gg/FWPkVbgYyK
      This is the link, in theory

  • @Riinseru
    @Riinseru Před 8 měsíci +1

    For some reason the folder-to-JSON script just closes itself and does nothing once I put in the path where I want it to save :(
    EDIT: nevermind, I just saw Dreamlook has a colab for converting the txt files to a JSON and it worked like a charm

    • @Not4Talent_AI
      @Not4Talent_AI  Před 8 měsíci

      Sorry, the script is pretty bad. I think the file names need to match the format I use in the video; hopefully that helps

  • @switterbeet
    @switterbeet Před 8 měsíci +1

    There's no way he didn't say popimpokin this often without the intention to make us laugh 😂

  • @stas_lu
    @stas_lu Před 10 měsíci +1

    We need a video about style training :)

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      For style, try a low learning rate and caption absolutely everything in the image (or go the other way around and use no captions at all). As always, the more images the better.
      But we'll go more in depth on that in the advanced video too!

    • @lefourbe5596
      @lefourbe5596 Před 10 měsíci

      i'm trying to make a style LoCon on NijiV5.
      hope it works

  • @artist.zahmed
    @artist.zahmed Před 3 měsíci +1

    Hey man, I really need your help to make an SDXL model, please

  • @Beauty.and.FashionPhotographer

    A tutorial on adding skin pores to faces? Real skin with actual pores? ...of course on Mac, just to make it more difficult... now that would be more than amazing

  • @luizfernandotesck144
    @luizfernandotesck144 Před 10 měsíci +1

    Is it possible to train a LoRA using Google Colab?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      Absolutely. Even though in my experience it always crashes xD But you can; a lot of people do

  • @Kwipper
    @Kwipper Před 10 měsíci +1

    Not gonna lie... I am really loving hearing him say "Popimpokin"

  • @uncreativename775
    @uncreativename775 Před 4 měsíci +1

    I'm so confused about the folders. What does the dataset folder do? What does img do? You barely explained these; I've looked at like 3 videos now and I'm having trouble understanding because nobody goes in depth on how to set up the folders correctly

    • @Not4Talent_AI
      @Not4Talent_AI  Před 4 měsíci

      So, the folders get created automatically by Kohya. But if not, you can create them yourself.
      The structure is:
      Img
      Model
      Log
      Reg
      Inside img, you have a new folder with your dataset. The name of that folder is a number (the number of steps you want to train each image for; in my case I think it was 20), plus a name: the name of what you are training. In my case, "skere".
      So your image dataset will be in:
      Img > 20_Skere >
      In model and log you don't need to put anything.
      Model is where your LoRAs get saved, and log is ignorable.
      Finally, in reg you put your regularization images, inside a folder structure like the one for img.
      This time the name is what your regularization images portray; in my case, "woman". So, training the reg for 1 step, the final structure would be:
      Reg > 1_woman >
      Leaving you with:
      Img > 20_Skere > (/your dataset/)
      Model > (/the trained LoRA files will be here/)
      Log >
      Reg > 1_woman > (/your regularization images/)
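The layout above could be scripted like this (a sketch; "Skere", "woman", and the step counts are just the example values from the reply):

```python
import os
import tempfile

def make_kohya_folders(root: str, steps: int, name: str,
                       reg_steps: int, reg_class: str) -> None:
    """Create the img/model/log/reg layout described above."""
    os.makedirs(os.path.join(root, "img", f"{steps}_{name}"), exist_ok=True)
    os.makedirs(os.path.join(root, "model"), exist_ok=True)  # trained LoRA files land here
    os.makedirs(os.path.join(root, "log"), exist_ok=True)
    os.makedirs(os.path.join(root, "reg", f"{reg_steps}_{reg_class}"), exist_ok=True)

root = tempfile.mkdtemp()
make_kohya_folders(root, steps=20, name="Skere", reg_steps=1, reg_class="woman")
print(sorted(os.listdir(root)))  # -> ['img', 'log', 'model', 'reg']
```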

  • @HorseyWorsey
    @HorseyWorsey Před 2 měsíci +1

    "POPIMPOKIN!!!"
    loll

  • @davidklein171
    @davidklein171 Před 9 měsíci +1

    Great video. Anyone else have to run this at 75% speed to keep up?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 9 měsíci

      Hahahaha tyty! Maybe I should go a little slower

  • @phaylali
    @phaylali Před 10 měsíci +1

    Is there a way to train a LoRA using AMD?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      Hmmm, not sure tbh. Can't really look into it rn either. Maybe someone on the Discord knows.
      If not, you always have colabs or platforms like Dreamlook AI

  • @tsentenari4353
    @tsentenari4353 Před 7 měsíci +1

    Should I always try to include painted images of a character, if my final goal is to show them painted, or is this something the Lora will be able to take care of?
    Let's say I want to create a LoRA of a person, with the final goal of using this character for drawings and illustrations, and all I have is real-life photos.
    From your video it is already clear that anything I want to be able to vary when using the Lora, should be named in the prompt.
    So I should probably include "photo" with all of these images.
    Now let's say I wanted to have the possibility to show the person with an open mouth. In this case, just including the prompt "closed mouth" with every photo probably wouldn't be ideal, instead it would be better to have an actual photo of the person that shows them with an open mouth, correct?
    So my question is:
    Is the same true if I want to draw the person in some artistic style? Should I try to include painted images of this person, by using roop or depth map or whatever?
    Is it correct to say: "Whatever you want your LoRA to be able to do eventually, try to approximate it in your training images, as far as possible (as long as you only include high-quality results)"?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 7 měsíci

      As far as LoRA training goes, the expert is LeFourbe, so he has the more experienced answers.
      My non-definitive answer, though, is that you should try to include images of the character in different styles if you want to change the style eventually. You could also use drawn regularization images if you don't get good results.
      Also, the captioning as you describe it is good

    • @tsentenari4353
      @tsentenari4353 Před 7 měsíci +1

      @@Not4Talent_AI I just found this quote in THE guide on Civitai on how to create Loras:
      "When training characters or concepts I recommend including as many different styles of them as possible, so for characters include cosplay photos, fanart, screencaps, etc… Without these it'll be hard to impossible to portray the character in different styles! The same goes for outfits."
      --->
      So for myself, I would sum it up with the following Zen like mantra: "Try to include examples for as many things as possible you want to make with the Lora", or "to train your Lora in an optimal way, try to do the things without the Lora you are creating the Lora for" :) (if you can't find them on the net which you typically can't). (Just sounds paradoxical without actually being paradoxical, sometimes it's possible to approximate something with a lot of work, so that the next times it can be done much easier)

  • @CloudGraywordsVII
    @CloudGraywordsVII Před 4 měsíci +1

    Hold on, are you the guy that posted a guide for FF14 character training on Reddit? I have tried to train my own but man, it's not working as intended lol

    • @Not4Talent_AI
      @Not4Talent_AI  Před 4 měsíci +1

      Nope, not me 😂.
      What is the issue your training has?

    • @CloudGraywordsVII
      @CloudGraywordsVII Před 4 měsíci +1

      @@Not4Talent_AI Oooh OK, I saw the same character and thought it was you lol. Well, the issue I had was in the captions; I actually figured it out watching your video, so thanks! Very cool explanation. I was adding things that made the LoRA not work as accurately as I wanted. Seems to be fixed now :)

    • @Not4Talent_AI
      @Not4Talent_AI  Před 4 měsíci +1

      great to hear!! tyty! @@CloudGraywordsVII

  • @galaxyjourney7875
    @galaxyjourney7875 Před 10 měsíci +1

    I tried to test the lora model with X/Y/Z plot with different denoising strength and model strength, but all of the image outputs are exactly the same. what mistake could I have made? thank you!

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci +1

      Denoising strength? Not sure why that's there; it belongs to text-to-image, not the LoRA.
      It sounds like a script error? I'd check that everything is input how it should be!
      If it keeps not working, try the Discord :3

  • @USBEN.
    @USBEN. Před 10 měsíci +1

    Bro, great video, but POPIMPOKIN is a tongue twister 😅😅

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci +1

      Hahahahhaaha I had less trouble with that word than with normal English words 😂😂

  • @Kikuri_Dood
    @Kikuri_Dood Před 10 měsíci +1

    AI creators make an entire new dictionary for the most specific things ever 😂 that's awesome

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      Example? I don't really know what you mean, even though it could be xD

    • @Kikuri_Dood
      @Kikuri_Dood Před 10 měsíci +1

      I mean making new words for specific things so the AI knows what it needs to generate

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      @@Kikuri_Dood ah yea xD true hahahaha

  • @davedm6345
    @davedm6345 Před 10 měsíci +1

    LoRA training: I set save as safetensors but it saves as a JSON file and SD can't load it, wtf?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      Literally wtf, have you seen anyone with the same problem?
      Btw, you could be saving the configuration and not a trained LoRA.
      (Open the JSON file and see if that's the case.)
      The trained LoRA should be in the "model" folder!

    • @davedm6345
      @davedm6345 Před 10 měsíci

      @@Not4Talent_AI Thanks lol, I saved the config. Now literally my checkpoints are saved in .json format in the model folder instead of safetensors. Literally two days and idk what the problem is... Python, Windows, LoRA, or my AMD CPU?

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      @@davedm6345 wtf, I have no idea tbh; it's the first time I hear about this problem. Maybe it could be the AMD, but sorry, literally no idea. Did you research on Reddit or Google?

    • @davedm6345
      @davedm6345 Před 10 měsíci

      @@Not4Talent_AI Yes, researching... maybe it's a LoRA checkpoint-saving issue.

    • @Not4Talent_AI
      @Not4Talent_AI  Před 10 měsíci

      @@davedm6345 could be, I honestly have no idea

  • @thewebheadgt
    @thewebheadgt Před 24 dny +1

    10:18 Bro why do we have to remove tags that describe our character????

    • @Not4Talent_AI
      @Not4Talent_AI  24 days ago

      You don't really have to. A lot of LoRAs don't do that now, and they still work well, if not better.
      The reason you do it is to save tokens. If you don't, the character will only be generated properly when you use all the descriptive tags (the prompt would look like: skere, short hair, pink hair, blue kimono, blue eyes, etc.), which is a lot of tokens and might mess up your prompt comprehension. But at the same time, the character will probably be generated better.
      If you merge everything into one trigger word, the character might struggle to generate some parts, like the flower, unless you train it with very good parameters (which change every training). But the prompt will be just "skere, etc.", meaning you use less token space.
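The tag-pruning step described above can be sketched as a small script: assuming captions live in per-image `.txt` files next to the training images (the usual kohya layout), it removes a list of character-trait tags and makes sure the trigger word comes first. The trigger word and trait tags here are illustrative examples, not anything the video prescribes.

```python
from pathlib import Path

# Hypothetical trigger word and the trait tags folded into it (illustrative only)
TRIGGER = "skere"
TRAIT_TAGS = {"short hair", "pink hair", "blue kimono", "blue eyes"}

def prune_caption(text: str) -> str:
    """Drop trait tags from a comma-separated caption and prepend the trigger word."""
    tags = [t.strip() for t in text.split(",") if t.strip()]
    kept = [t for t in tags if t not in TRAIT_TAGS and t != TRIGGER]
    return ", ".join([TRIGGER] + kept)

def prune_folder(folder: str) -> None:
    # kohya reads one caption .txt per training image
    for path in Path(folder).glob("*.txt"):
        path.write_text(prune_caption(path.read_text(encoding="utf-8")),
                        encoding="utf-8")

# prune_caption("skere, short hair, pink hair, smiling") -> "skere, smiling"
```

Whether you prune or not is the trade-off from the reply above: pruned captions cost fewer tokens at generation time, unpruned ones often reproduce the character more faithfully.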

  • @TestTubeGirl
    @TestTubeGirl 7 months ago +2

    How come installing this stuff is so strange?

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago +1

      Hahaahhahhaha idk tbh, open source stuff I guess😂

  • @LALA-dw2ez
    @LALA-dw2ez 10 months ago +2

    Thanks!

  • @kdeeuk
    @kdeeuk a month ago +1

    Can't install Kohya; the commands don't work, it keeps saying git is not a recognized command. There is an installer, but it has a cost: you have to subscribe to the developer's page for the GUI.

    • @Not4Talent_AI
      @Not4Talent_AI  a month ago +1

      Make sure you have Python installed with the "add to PATH" option checked, and that you have git installed.
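The "git is not a recognized command" error above almost always means the tool isn't on PATH. You can sanity-check this from Python before running kohya's setup scripts; a minimal sketch (the tool names are just examples of what the setup expects to find):

```python
import shutil

def check_tools(names):
    """Return {tool: full path or None} for each command name on PATH."""
    return {name: shutil.which(name) for name in names}

def report_missing(names):
    """List the tools that do NOT resolve on PATH; empty means you're good."""
    return [n for n, path in check_tools(names).items() if path is None]

# Example: kohya's install instructions expect both of these to resolve
# missing = report_missing(["git", "python"])
```

If `git` shows up missing, install Git for Windows (or your platform's package) and reopen the terminal; if `python` is missing, re-run the Python installer with "Add python.exe to PATH" checked.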

    • @amorgan5844
      @amorgan5844 a month ago +1

      @Not4Talent_AI Yeah, he definitely forgot to set PATH when installing Python. Let's hope he installs Kohya on the same drive as his training data😂

  • @dr-bf4sc
    @dr-bf4sc 8 months ago +1

    idk what affects LoRA size, can someone explain?

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago

      You mean the final LoRA's file size in KB? Or something else?

    • @dr-bf4sc
      @dr-bf4sc 8 months ago

      @@Not4Talent_AI yes

    • @lefourbe5596
      @lefourbe5596 8 months ago +1

      @@dr-bf4sc It's entirely tied to the network dim / alpha. The bigger the number, the larger the file will be (and supposedly the more data it fits).
      The file holds the U-Net network AND the text encoder.
      Usually you train both, and they are stacked together.
      However, if you set the U-Net or TEnc learning rate to 0 and leave the other at default, you will see that your LoRA's size is reduced (the text encoder part is smaller than the U-Net part). It means the U-Net and TEnc are what make up the size; setting one's learning rate to zero leaves it out of the LoRA file.
      Of course, the "save precision" fp16/bf16 should be used every time; it roughly halves the LoRA size compared to full fp32 precision.
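A rough way to see why network dim drives file size, as the reply above says: each adapted layer stores two small low-rank matrices whose parameter count scales linearly with the rank, and fp16/bf16 halves the bytes per parameter versus fp32. A back-of-envelope sketch (the layer shape and rank are made-up examples, not kohya's exact layer list):

```python
# Back-of-envelope LoRA size estimate (illustrative, not kohya's exact layout)
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """A LoRA adapter approximates a d_out x d_in weight update as
    B (d_out x rank) @ A (rank x d_in), so it stores rank * (d_in + d_out) params."""
    return rank * (d_in + d_out)

def estimate_file_kb(layers, rank, bytes_per_param=2):
    """layers: list of (d_in, d_out) shapes; bytes_per_param=2 for fp16/bf16, 4 for fp32."""
    total = sum(lora_params(d_in, d_out, rank) for d_in, d_out in layers)
    return total * bytes_per_param / 1024

# One hypothetical 768x768 attention projection at rank 32:
# lora_params(768, 768, 32) -> 49152 parameters
```

Doubling the rank doubles the parameter count per layer, which is why a dim-128 LoRA is so much heavier than a dim-32 one, and why dropping the text encoder's layers from the file shrinks it.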

    • @dr-bf4sc
      @dr-bf4sc 8 months ago

      @@lefourbe5596 thanks!

  • @holotape
    @holotape 9 months ago +2

    POPIMPOKIN

  • @LazyHead
    @LazyHead 5 months ago +1

    "RuntimeError: main thread is not in main loop" is the error I am getting. Does anyone in the comments know why this is happening and how I can solve it, please?

    • @Not4Talent_AI
      @Not4Talent_AI  5 months ago +1

      Trying to look it up, but I can't figure out what it is. What are you trying to do when that error happens?

    • @LazyHead
      @LazyHead 5 months ago +1

      @@Not4Talent_AI It got fixed; I just restarted my Kohya and it worked, but now it doesn't detect my images folder. Thanks for the reply.

    • @Not4Talent_AI
      @Not4Talent_AI  5 months ago +1

      Nice!
      And weird again, wtf is going on in your setup hahahaa
      @@LazyHead

    • @LazyHead
      @LazyHead 5 months ago +1

      @@Not4Talent_AI I found the bug as well: it was happening because you have to click "Prepare training data" before clicking the "Prepare folder" button. I might have missed it in the video, but hopefully this will help someone like me xD

    • @Not4Talent_AI
      @Not4Talent_AI  5 months ago

      Thanks for sharing! Glad you found the solution @@LazyHead

  • @Pfromm007
    @Pfromm007 10 months ago +1

    Take a shot every time he says popimpokin

  • @TexasGreed
    @TexasGreed 2 months ago +1

    No offence, but if 30 seconds into the video you say "Maybe you want to install Visual Studio though," then the guide isn't for beginners.
    Maybe I do want to install it. Maybe I don't. I don't know what it is, and you didn't tell me what it is or why I might want to install it.

    • @Not4Talent_AI
      @Not4Talent_AI  2 months ago

      True. Idk why it says to install it either, so if it gives you any errors, just install it. I had it beforehand, but it was never needed for anything,
      so I thought it was just optional in case you want to touch some of the code.

  • @cryptobullish
    @cryptobullish 10 months ago +1

    What's up with that confusing keyword "popimpokin"? Is it in the fine print on the bottle? Lol

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      hahhahhahhaha idk, I just made some random word up xD

  • @pastuh
    @pastuh 10 months ago +1

    03:57 I don't think you need to flip images

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      It isn't needed, but if your dataset is super small, it might help create new images and dampen the training a little bit.
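The augmentation mentioned above is just a horizontal mirror of each training image, which doubles a tiny dataset. A concept sketch using plain pixel lists (in practice you would use an image library, or kohya's own flip-augmentation option, rather than raw lists):

```python
# Concept sketch: a horizontal flip mirrors each pixel row.
# Only useful for characters/subjects that are roughly left-right symmetric,
# since the mirrored copies will teach the model the flipped layout too.
def flip_horizontal(image):
    """image: list of rows, each row a list of pixels; returns the mirrored copy."""
    return [row[::-1] for row in image]

tiny = [
    ["R", "G", "B"],
    ["B", "G", "R"],
]
# flip_horizontal(tiny) -> [["B", "G", "R"], ["R", "G", "B"]]
```

Flipping twice returns the original, so keep either the original or the pair, never a third re-flipped copy; asymmetric details (a flower on one side, text on clothing) are a reason to skip this entirely.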

  • @mustaphaaitakka2568
    @mustaphaaitakka2568 10 months ago +2

    I used Colab, which lasts like 5 hours and gives you 15 GB of VRAM. I trained my first model with it and it took 5 min. My dataset contains 60 PNGs, and I used 10 epochs (I don't know how to spell it) and 10 repeats with a batch size of 2; it's like 3000 steps. Anyway, if you have a computer it's better, and Google Colab gives you a month for 10 dollars, or you can create unlimited accounts😆😈😈😈😈
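The step count in the comment above checks out against the usual kohya-style estimate: total steps = images × repeats × epochs ÷ batch size. A quick sketch with the comment's numbers (treating its "2" as the batch size, which is an assumption):

```python
def total_steps(images: int, repeats: int, epochs: int, batch_size: int = 1) -> int:
    """Rough step estimate for kohya-style training runs:
    each epoch passes over images * repeats samples, split into batches."""
    return images * repeats * epochs // batch_size

# Numbers from the comment: 60 images, 10 repeats, 10 epochs, batch size 2
steps = total_steps(60, 10, 10, 2)
# steps -> 3000
```

This is the same arithmetic as the video's steps-and-epochs section: repeats multiply how often each image is seen per epoch, and a larger batch size cuts the step count without cutting the amount of data seen.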

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      hahahha nice, which Colab? I couldn't find any that worked when I tried

  • @Starius2
    @Starius2 10 months ago +1

    Screw that. EVERYTHING IS NOW A MATSURI

  • @lowserver2
    @lowserver2 10 months ago +1

    ok time to get dizzy again

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      Hahahhahahhaa tried to slow it down for this one. You tell me!