Unlimited CONTROL with SLIDERS!! (game changing)

  • Published 15 Jun 2024
  • A new technology that unlocks a huge part of Stable Diffusion's dataset. Learn how to train and use SLIDER LORAs for unlimited control and more!
    You can make progress sliders, train simple concepts that should be understood by AI but aren't (like right and left or full and empty), and even enhance or erase concepts from your models.
    I encourage everyone to train their crazy ideas and fully develop the potential of this incredible tool.
    Join our Discord server: / discord to learn about this and more!
    ------------ Links used in the VIDEO ---------
    -Installation and project: github.com/rohitgandikota/sli...
    -Sliders website: sliders.baulab.info
    -Google Colab: colab.research.google.com/git...
    -HuggingFace page: huggingface.co/spaces/baulab/...
    -WEB-UI: ko-fi.com/s/2fe3a2d863
    (in beta, let me know if it gives issues)
    -Rohit's x/twitter: / rohitgandikota
    Extensions used in the video:
    Composable LORA: github.com/ashen-sensored/sta...
    Dynamic LORA weights: github.com/cheald/sd-webui-lo...
    Controlnet: github.com/Mikubill/sd-webui-...
    ------------ Project by ---------
    Rohit Gandikota, Joanna Materzyńska, Tingrui Zhou, Antonio Torralba, David Bau. "Concept Sliders: LoRA Adaptors for Precise Control in Diffusion Models" arXiv preprint arXiv:2311.12092 (2023).
    ------------ Social Media ---------
    -Instagram: / not4talent_ai
    -Twitter: / not4talent
    Make sure to subscribe if you want to learn about AI and grow with the community as we surf the AI wave :3
    #aiart #slider #sliderLORA #lora #aitraining #trainAI #sliders #progress #aicontrol #digitalart #aianimation #automatic1111 #stablediffusion #ai #free #tutorial #betterart #goodimages #sd #digitalart #artificialintelligence #latentcouple #couple #composableLora #posing #controlnet #SD15 #inpainting #openpose #depthlibrary #AI #midjourney #interaction #relation #comic #storytelling #manga #anime
    0:00 intro
    0:20 Index + demo
    0:56 Installing
    1:44 UI
    2:31 Training Textsliders
    8:07 Online Training options
    8:17 Google colab
    8:58 HuggingFace
    9:08 Testing the LORA
    11:00 Finding good prompts
    12:33 Training Imagesliders
    15:06 More than two pairs
    15:56 Using SLIDER LORA
    16:06 Transition point
    16:20 Avoiding bleeding
    16:40 Prompt Editing
    18:17 Img2img and inpainting
    19:05 Thanks to Rohit and the team
    19:34 Enhance and Erase concepts

Comments • 125

  • @Semi-Cyclops
    @Semi-Cyclops 4 months ago +4

    anyone else getting
    File "\sliders\trainscripts\textsliders\train_lora_xl.py", line 11, in
    import torch
    ModuleNotFoundError: No module named 'torch'
    error, even though torch is installed in the environment?

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      I'm guessing this happens whenever you hit "train"?
      If that is the case, I don't really know how to solve it, but we can try some stuff and narrow down what could be happening.
      You could try opening a PowerShell, navigating to "sliders", and running:
      1- conda create -n env_pytorch python=3.6
      2- conda activate env_pytorch
      3- pip install torchvision
      4- python -c "import torch; import torchvision"   (just to check that the imports work)
      5- python trainscripts/textsliders/train_lora.py --attributes 'male, female' --name 'ageslider' --rank 4 --alpha 1 --config_file 'trainscripts/textsliders/data/config.yaml'
      (just to test if it works. I don't really know much about this stuff tbh; I saw this suggestion on Stack Overflow and I'm pasting it here as-is.)
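      (Note: Python 3.6 is older than what current torch builds support, so here is a minimal sketch of the same idea with newer versions. The env name, Python version, and CUDA wheel index are assumptions, and the requirements step only applies if the repo ships a requirements.txt:
      conda create -n sliders python=3.10
      conda activate sliders
      pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
      pip install -r requirements.txt
      python -c "import torch; print(torch.cuda.is_available())"
      )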

    • @Semi-Cyclops
      @Semi-Cyclops 4 months ago +1

      @@Not4Talent_AI I can't get anything to work and it's not a UI issue, since I can't get it to run on the base scripts either. Your solution fixed the "no torch module" error, but then it gave another error, then I fixed that one and it gave yet another; it's just a loop. It's fine, I'll stick to kohya_ss. The last error I got was FileNotFoundError: [Errno 2] No such file or directory: "'trainscripts/imagesliders/data/config-xl.yaml'" even though it's there

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      damn, Unlucky. I'll send it to Rohit so at least they know. Thanks for commenting it and I'm sorry it didnt work! @@Semi-Cyclops

    • @Semi-Cyclops
      @Semi-Cyclops 4 months ago +1

      thanks. I must have something wrong in my setup, but yeah, if the usage process could be streamlined that would be awesome. I feel like the requirements are not downloading properly, but I don't know; I'm also not from a coding background.

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      have you tried deleting the folder and installing cleanly again on a new one? (probably yeah, or not worth doing. just asking hahahaha) @@Semi-Cyclops

  • @LissGautier
    @LissGautier 4 months ago +8

    happy that I could help you ❤

  • @slashkeyAI
    @slashkeyAI 4 months ago +1

    Great work by all involved. Thank you!

  • @alyila-6079
    @alyila-6079 4 months ago +1

    Thanks you for sharing and taking the time to make a ui for this, much appreciated!

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      Thank you for watching!!! I hope the UI works as it should 😂😂

  • @cristiansolano1052
    @cristiansolano1052 4 months ago +1

    Amazing video, so much information!

  • @shadowdemonaer
    @shadowdemonaer 4 months ago +3

    I have a lot of really cool ideas to use with this that I think will improve my life a lot! I have always wanted someone to share how to do this, and I'm really glad you did. Your channel is super underrated and you deserve more views and subscribers than you have.

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      Thank you so much!! Appreciate the kind words.
      Hope this helps you get what you wanted!

    • @shadowdemonaer
      @shadowdemonaer 4 months ago +2

      @@Not4Talent_AI By the way, I think you could fill another need by posting how to make negative embeddings, stuff like EasyNegative and such, because a lot of people don't understand that process since it has a different tagging situation.
      It would be good to mention in a video like that that negative embeds for hands and feet are said to force them into the shot (since you're trying to fix them), so they're only good to use if you already know you'll have hands in the shot, for instance.

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      that's a neat idea. Thanks!!
      It's not on my todo list atm, but I'll keep it noted in case I get some extra time for it. tyty! @@shadowdemonaer

    • @shadowdemonaer
      @shadowdemonaer 4 months ago

      Hate to say it, but I think the huggingface page might be down permanently... I keep checking back with it and it doesn't go anywhere... @@Not4Talent_AI

    • @Not4Talent_AI
      @Not4Talent_AI 3 months ago

      looks like it, yeah. idk why that could be tbh

  • @Nassifeh
    @Nassifeh 4 months ago +12

    I appreciate the "my girlfriend writes my code" part. 😆

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +2

      hahahahhaa ChatGPT wouldn't do the HTML part of it, she was a life saver xD

    • @LissGautier
      @LissGautier 4 months ago +4

      haha that's me

  • @Luxcium
    @Luxcium 3 months ago +1

    I love these videos ❤. Obviously being a Spaniard gives you an advantage over others, but you have something more that no one else has (I don't know what it is). I think it's nice that you're giving people interested in this broad subject and specific topic the opportunity and privilege of these videos; it's really appreciated and not only helpful but useful too 🇪🇸🇨🇦🇪🇸🇨🇦

    • @Not4Talent_AI
      @Not4Talent_AI 3 months ago

      Agree, very big advantage ofc. gotta make the best I can of it.
      And thank you so much again!! really appreciate this type of comments.
      hope I can keep giving useful info to the community

  • @placebo_yue
    @placebo_yue 4 months ago +1

    good video bro, I'll try training a slider soon, as soon as I can think of a use for a slider that hasn't been done by someone else yet!

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      thanks!! Hope the training works when the time comes hahahah

  • @fadetoblack3161
    @fadetoblack3161 4 months ago +1

    Cant wait to try this!

  • @TuanKhai298
    @TuanKhai298 1 day ago +1

    great work ! Thank you so much

  • @AaronALAI
    @AaronALAI 4 months ago +2

    Fantastic video!

  • @pastuh
    @pastuh 4 months ago +1

    Nice tutorial, super simple :)

  • @gnsdgabriel
    @gnsdgabriel 4 months ago +1

    Thank you! 🙏🙌

  • @amorgan5844
    @amorgan5844 2 months ago +1

    I know this sounds crazy, but take your prompt and tell ChatGPT to "write it so Python can understand it better for image generating"; it will write it like lines of code. Then explain to it how the weight system works and what you want balanced and emphasized, and it will weight it. Once you copy and paste that into Stable Diffusion, for me it gives the ability to really fine-tune what I want out of the image. It works best with Euler/normal at a CFG around 6. Just remove the [ ] and the word "prompt" that ChatGPT puts in the response, since they affect the weights. Also, the weights towards the end need to be a lot higher to have any effect, sometimes going into the :1.6 range. I know it doesn't compete or compare with what you are doing in this vid, but it's a fun little experiment
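    (For illustration, a hypothetical example of the kind of weighted prompt this describes, using the (token:weight) emphasis syntax that AUTOMATIC1111 understands; the subject and the numbers are made up:
    (portrait of an old fisherman:1.2), (weathered skin:1.1), stormy harbor, (volumetric light:1.4), (oil painting style:1.6)
    )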

    • @Not4Talent_AI
      @Not4Talent_AI 2 months ago

      That's a pretty smart idea tbh ahhahahha
      Sounds fun, might try it if I find myself not being able to fine-tune a prompt.
      I don't really use text-to-image for stuff like this that much, but sometimes I do and this could be a fun test. There are also other people who might be more interested too. So thanks for sharing!!!

    • @amorgan5844
      @amorgan5844 2 months ago +1

      @Not4Talent_AI you are insanely talented, and I love your videos. Definitely give it a try. If you have trouble, I'll try to put out a workflow on civitai so you can test with the same models and loras. It can be tricky, but it does work once you see how the weights are being factored in using that method

    • @Not4Talent_AI
      @Not4Talent_AI 2 months ago

      @@amorgan5844 cool!! let me know! If yt doesnt notify, you can also share on discord.
      And thanks so much for the kind words too!

  • @sickvr7680
    @sickvr7680 4 months ago +2

    Nice one, Hugooo!!!
    I have to say, the part where you simply edit the eyes while preparing the dataset looks easier than everything else 😂

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      Hahahahaha if it's just one position, maybe yes xD. The moment I have to change it more than once, though... But to be fair, it really is simple: prompt, train, done hahhahahah

    • @sickvr7680
      @sickvr7680 4 months ago +2

      @@Not4Talent_AI The complicated part is training with a 1080 Ti... a couple of days ago I trained my first LoRA... I had to leave the PC on all night :cry: ... and I think it came out overcooked, because the things it produced were horrifying. I'm never training again haha

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      yeahhh, training on a 1080 is... @@sickvr7680

  • @Kenb3d1
    @Kenb3d1 4 months ago +2

    Very cool, appreciate your efforts. Is it not faster to just inpaint though?

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      Depends on what you are looking for. I highly doubt you could make the closed-eyes gif or the side-to-side gif with regular inpainting

  • @USBEN.
    @USBEN. 4 months ago +1

    This is soo cool.

  • @jonmichaelgalindo
    @jonmichaelgalindo 4 months ago +1

    Awesome!!!

  • @RobertJene
    @RobertJene 4 months ago +1

    20:48 Instead of a text box for choosing the floating-point method, I recommend using a drop-down (a select element in HTML), like this:
    fp32
    fp16
    bf16
    float32
    Then when you go to start training, the form-validation part of the script can check either the value of the "precision" item or the selected text, make sure it's not blank, and if it isn't blank, proceed to the next stage of validation.
    But the important thing is to keep the value it grabbed from that part of the form.
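    (A minimal sketch of that idea in plain HTML; the "precision" name comes from the comment above, everything else is assumed:
    <select id="precision" name="precision" required>
      <option value="">-- choose precision --</option>
      <option value="fp32">fp32</option>
      <option value="fp16">fp16</option>
      <option value="bf16">bf16</option>
      <option value="float32">float32</option>
    </select>
    A required select with an empty first option gives you the "not blank" check before the script reads the chosen value.)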

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      thanks!! I thought of that, as it felt kind of weird having other dropdowns but not on that one. The thing is that at the time I didn't know the exact options, and changing it made a part of the script not work (probably just me copying something wrong, but I didn't want to risk it, so I left it like that and just added a bit of text below it hahaahah)
      I do think your idea is way better than how it currently is though; I just didn't really want to spend more time debugging something that was working before. (Again, probably something that someone who knows how to code fixes in 0.2 seconds ngl.)
      If at some point I have time I'd like to revisit that script and some other stuff.
      Ty for the suggestion and explanation!

    • @RobertJene
      @RobertJene 4 months ago +1

      @@Not4Talent_AI yeah, the kind of select field I described is the easiest kind if you know all of the potential values in advance.
      I have also programmed ones that are dynamic, populated by other scripts that find out what your options are, and that's a whole other ball game

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      oh wow hahahaha yeah those second ones are probably not on my poorly copypaste range of expertise xD @@RobertJene

    • @RobertJene
      @RobertJene 4 months ago +1

      @@Not4Talent_AI LOL the first time I did this it was at an I.T. job where I made a toolbox to make my job easier.
      It read the contents of a web page, saved what it found in a certain spot to a list, then populated the drop-down by iterating through the list.
      REASON:
      The ticket system we had at this job didn't let me TAB through it and use the keyboard to choose options. They MADE YOU CLICK EVERYTHING.
      I wasn't having that.
      So I built a shell to run it in 😂⚙🛠🧑‍💻😎

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      hahahahahah That's why you gotta love programming. I take projects like this to kindda learn. Examples like yours just make me want to do it more 😂@@RobertJene

  • @ctrlartdel
    @ctrlartdel 4 months ago +1

    Yoooooo!!!! Missed your videos! Where you been!? I was worried!

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      Hahahhahaha I've been right here, just that it takes a while to experiment with stuff 😂

    • @ctrlartdel
      @ctrlartdel 4 months ago +1

      I was just about to comment…. Youve been working on this! Amazing brother! Hope you’re doing well!

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      thank you!! I am, hahhaa
      Hope you are well as well! @@ctrlartdel

  • @sevlusify
    @sevlusify 27 days ago +1

    Hi, thanks for the great video and info. Would you mind sharing your typical settings? I also have a 4090, but my process times are around 30 minutes for 250 iterations. That is with an SDXL model, so maybe you were referring only to 1.5 in the video. Any tips would be greatly appreciated, thanks!

    • @Not4Talent_AI
      @Not4Talent_AI 27 days ago

      hi!! My usual settings are pretty much the default settings tbh. Probably was talking about sd1.5

    • @sevlusify
      @sevlusify 27 days ago +1

      @@Not4Talent_AI Okay, thanks for the quick reply! Already made some great sliders, excellent tool!

    • @Not4Talent_AI
      @Not4Talent_AI 27 days ago

      @@sevlusify np! happy to hear that!

  • @ash0787
    @ash0787 3 months ago +1

    Are you using SDXL primarily now? It would be interesting to know what's different about it.

  • @nikgrid
    @nikgrid 3 months ago +1

    Does this work with SD-Forge? Great Video btw

  • @twilightfilms9436
    @twilightfilms9436 3 months ago +1

    Wouldn’t be easier if you just make a Lora to adjust the direction of the eyes and share it with us?

    • @Not4Talent_AI
      @Not4Talent_AI 3 months ago

      I have done that: I shared a LoRA for the direction of the eyes, and a slider LoRA for the direction of the eyes too

  • @ErvNoelProduction
    @ErvNoelProduction 4 months ago +1

    Any ideas on how to accomplish something like this on RunPod or on a cloud server/GPU?
    I have an intel mac and have been able to run Fooocus on the cloud, but my machine isn’t strong enough to do this locally

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      I don't know tbh. It was something I wanted to create, like a template so that y'all could use this on RunPod, but I have no idea how to use that site for this; too confusing. I think other options would be to use Colab, but the tier with a better GPU, or even Hugging Face

    • @ErvNoelProduction
      @ErvNoelProduction 4 months ago +1

      @@Not4Talent_AI thanks for responding and thanks for the hard work you did with this. I’m gonna mess around and see what I can figure out

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      ty for watching!!
      Cool, let me know if you do. I might add it as a link (if possible, and you dont mind it ofc)@@ErvNoelProduction

  • @Bordinio
    @Bordinio 4 months ago +1

    Nice video indeed. I'm not sure, but you must have an Nvidia GPU (CUDA enabled) to run it, right?

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      thanks!!
      I'm pretty sure you do, but havent tried without it

    • @Bordinio
      @Bordinio 3 months ago +1

      @@Not4Talent_AI well, just a tiny update: Rohit told me you can use it without xformers on AMD GPU, but unfortunately it gives the error xD :
      raise RuntimeError('Error(s) in loading state_dict for {}:
      \t{}'.format(
      RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
      Missing key(s) in state_dict: "text_model.embeddings.position_ids".

    • @Not4Talent_AI
      @Not4Talent_AI 3 months ago

      damn, that's unlucky... No idea what it could be tbh, hopefully rohit does@@Bordinio

    • @Bordinio
      @Bordinio 3 months ago +1

      @@Not4Talent_AI Yup, I'll be looking for the solution anyway :)

    • @Not4Talent_AI
      @Not4Talent_AI 3 months ago +1

      sorry to not be able to help. good luck tho! @@Bordinio

  • @BZAKether
    @BZAKether 4 months ago +1

    Incredible tutorial as always, thank you and your girlfriend for her code.

  • @hindihits9260
    @hindihits9260 4 months ago +1

    do we need to crop the images for image sliders? thanks for the video!

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      I think you can test with "dynamic resolution" activated, but I only tested with 1024x1024 images.
      Now that you mention it, I probably should have tried that too. I'll see if I can do so later, since these trainings don't take too long

    • @hindihits9260
      @hindihits9260 4 months ago +2

      @@Not4Talent_AI I trained 2 image sliders but they are not working as expected. Is it an image-size issue, or am I just training a very weird concept? btw I'm not using any prompts in the config..

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      what are you training? @@hindihits9260

    • @hindihits9260
      @hindihits9260 4 months ago +1

      @@Not4Talent_AI nsfw stuff haha

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      hahaahaha then it will probably depend on the model's understanding of what you are trying to train. Remember that this doesn't learn new concepts per se; it just finds the steps to go from one concept to another.
      I'd probably look to train on a checkpoint that already understands the nsfw thing you are trying to train@@hindihits9260

  • @pinartiq
    @pinartiq 4 months ago +1

    Didn't work for me out of the box. :(
    One final error I've spent some time on was:
    RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
    Missing key(s) in state_dict: "text_model.embeddings.position_ids".
    Fixed it by "pip install accelerate".
    Maybe it will be useful to someone else too.

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      Sorry to hear that. What was the error that made it not work? Could it be a GPU thing? (Tbh I have no idea about the errors, but it could be helpful to gather some data on them.)
      TY!

  • @generalawareness101
    @generalawareness101 4 months ago +1

    Your "save yaml" seems to be broken: it says it saves, but where? I could not find it. And why .yaml for the pip install when the load and save are in .json? They are not the same

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      hi!! The .yaml files are the ones that need to be modified for the training to work (in sliders > trainscripts > either imagesliders or textsliders > data; there you'll find 2 yaml files for SD1.5 and 2 for SDXL).
      The JSON file stores the information you input in the UI, saved under "logs". Those are the files you can load back into the UI to fill the parameters with a previous training's inputs. But until you hit "save parameters" they won't get saved inside the yaml files.
      Hope I explained it well, let me know if that's not the case!
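      (For orientation, a partial sketch of the kind of prompt entry those yaml files hold; the four field names are the ones the training code quoted further down in the comments reads (target, positive, neutral, unconditional), the values are made-up placeholders, and the real files may have more fields:
      - target: "person"
        positive: "person, very old"
        unconditional: "person, very young"
        neutral: "person"
      )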

    • @generalawareness101
      @generalawareness101 4 months ago +1

      thank you.@@Not4Talent_AI

  • @daikaz4376
    @daikaz4376 4 months ago +1

    11:19 is this infamous Asmongold's Lair?

  • @Xamy-
    @Xamy- 4 months ago +1

    Do not activate v_pred unless you are training on an SD 2.x (v-prediction) model

  • @LouisGedo
    @LouisGedo 4 months ago +1

    👋

  • @mysticalfoxie
    @mysticalfoxie 4 months ago +1

    Hey buddy, if you need some help with code I might be able to spare some time for you or at least be available for you for questions. ;)
    Im a professional developer c:

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      oh damn, thanks!! I'll keep it in mind :3 (I'll see if it works well for people and if I see someone finds a bug and can't solve it myself. I might ping you somewhere)

  • @Rasukix
    @Rasukix 4 months ago +1

    firstttttttttttttttt

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +1

      Hahahaha niceee ty for watching!!

    • @Rasukix
      @Rasukix 4 months ago +1

      Just finished the vid. I think this tool has a lot of potential; it will definitely help with character expressions/positioning relative to the scene people are trying to create, as well as scenery etc.
      I do think it is limited to those with the hardware/time to train efficiently though, so make good use of that 4090 xD@@Not4Talent_AI

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago +2

      yeah, that's the only thing I see holding it back, but with decent hardware it trains super fast. I hope they get a good generation service for it that people can use for cheap. (I tried setting one up on RunPod but idk how that site works xd)

    • @Rasukix
      @Rasukix 4 months ago +1

      think it's gonna be a case of letting the tech boys do the training and the rest of us do the testing haha@@Not4Talent_AI

  • @Luxcium
    @Luxcium 3 months ago +1

    There should be more girls in tech, and I love that you have the privilege of a girlfriend who knows how to code, and that you are learning to code with your chat assistant. It's so cool and fun to see that you're lucky enough to have perfect tooling and privileged access to a human code assistant ❤

    • @Not4Talent_AI
      @Not4Talent_AI 3 months ago

      yeap, totally agree once more haahaha. Here there are more and more everyday, which is super nice.
      And yes, being able to learn how to code from a fking chat bot is pretty cool too hahahhaa.
      wish everyone had the opportunity

  • @yoniwoker
    @yoniwoker 20 days ago +1

    I didn't understand anything at all

    • @Not4Talent_AI
      @Not4Talent_AI 20 days ago

      😂😂 that would be my bad. What could I change to make it more understandable in your opinion? (Genuine question, not trying to be sarcastic)

  • @Filolia
    @Filolia 3 months ago +1

    Tried running Sliders-UI.py, but it just gives me an error:
    from ruamel.yaml import YAML, scalarstring
    ModuleNotFoundError: No module named 'ruamel'
    So I tried running it as per the GitHub, without the UI, and I also get an error:
    FileNotFoundError: [Errno 2] No such file or directory: "'trainscripts/imagesliders/data/config-xl.yaml'"
    The config file is exactly where that path points, so idk why it is not working
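    (One thing the traceback itself hints at, for anyone hitting this: the path is shown wrapped in both double and single quotes, "'trainscripts/...'", which suggests the single quotes were passed in as part of the filename. It may be worth retrying with the plain, unquoted path, e.g.
    --config_file trainscripts/imagesliders/data/config-xl.yaml
    )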

    • @Not4Talent_AI
      @Not4Talent_AI 3 months ago

      hmmmm, some people have been able to fix this by opening the file with Notepad and erasing the line at the top that says "from ruamel.yaml import YAML", but I'm not sure if that will work.
      If it doesn't, let me know and I'll try to look up what could be happening. (Sometimes yt doesn't notify me of responses, so if I don't come back to you on this, don't be afraid to msg me on discord or email)
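      (Side note: since the missing module in that traceback is ruamel, another thing worth trying, assuming pip targets the same Python environment the UI runs in, is: pip install ruamel.yaml )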

  • @OZM333
    @OZM333 4 months ago +1

    on windows, can't seem to get it to work.
    "AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'CLIPTextModel' object has no attribute 'embeddings'
    on powershell with your python code.
    Running it like the OP with cmd I would get a YAML error that was fixed by upgrading pydantic. Now I get FileNotFoundError for the config.yaml despite it being there
    --config_file 'trainscripts/imagesliders/data/config.yaml'
    going to try switching to venv like @semi-cyclops did once I have free time

      @Not4Talent_AI 4 months ago
      @Not4Talent_AI  Před 4 měsíci

      Is this error happening with the UI itself or when training? Im guessing the ui? I'll look at it!

    • @OZM333
      @OZM333 4 months ago

      @@Not4Talent_AI
      it happens during training with no UI. Since it doesn't work raw, it's not gonna work with the UI either. Hopefully I figure something out so I can use the UI, since it does help by not having to edit the files manually and in one place.

    • @Not4Talent_AI
      @Not4Talent_AI 4 months ago

      that sucks.... were you able to try this?
      Rohit: this error has to do with where the project directory is when you run the app.py.
      Make sure that app.py is within the sliders directory@@OZM333
      Idk if it will be the same for your error ngl

    • @OZM333
      @OZM333 4 months ago +1

      @@Not4Talent_AI
      I got it to work now. In case someone else on Windows is getting this:
      AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'"): I changed the model I was using (it worked with Anythingv5Ink_ink.safetensors) and upgraded pydantic.
      After fixing that, I got this error:
      class PromptSettings(BaseModel): # yaml のやつ
      File "pydantic\main.py", line 198, in pydantic.main.ModelMetaclass.__new__
      File "pydantic\fields.py", line 506, in pydantic.fields.ModelField.infer
      File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
      File "pydantic\fields.py", line 552, in pydantic.fields.ModelField.prepare
      File "pydantic\fields.py", line 668, in pydantic.fields.ModelField._type_analysis
      File "\miniconda3\lib\typing.py", line 835, in __subclasscheck__
      return issubclass(cls, self.__origin__)
      TypeError: issubclass() arg 1 must be a class
      I had to change the code in prompt_util.py, line 148, to:
      def load_prompts_from_yaml(path, attributes=[]):
          with open(path, "r") as f:
              prompts = yaml.safe_load(f)
          print(prompts)
          if len(prompts) == 0:
              raise ValueError("prompts file is empty")
          if len(attributes) != 0:
              newprompts = []
              for att in attributes:
                  copy_ = copy.deepcopy(prompts)
                  copy_['target'] = att + ' ' + copy_['target']
                  copy_['positive'] = att + ' ' + copy_['positive']
                  copy_['neutral'] = att + ' ' + copy_['neutral']
                  copy_['unconditional'] = att + ' ' + copy_['unconditional']
                  newprompts.append(copy_)
          else:
              newprompts = [copy.deepcopy(prompts)]
          print(newprompts)
          print(len(prompts), len(newprompts))
          prompt_settings = [PromptSettings(**prompt) for prompt in newprompts]
          return prompt_settings
      Note: There is probably an easier fix but I don't know the code so I just worked with what was giving me errors
      now it is training and your UI is working. Going to experiment now. Thanks for the UI!

    • @OZM333
      @OZM333 4 months ago +1

      @@Not4Talent_AI
      weird, I sent how I fixed it about 5 hours ago but I guess youtube thought it was spam. I upgraded pydantic and changed to a different model to train on. Then I changed the code in prompt_util.py, line 148, to:
      def load_prompts_from_yaml(path, attributes=[]):
          with open(path, "r") as f:
              prompts = yaml.safe_load(f)
          print(prompts)
          if len(prompts) == 0:
              raise ValueError("prompts file is empty")
          if len(attributes) != 0:
              newprompts = []
              for att in attributes:
                  copy_ = copy.deepcopy(prompts)
                  copy_['target'] = att + ' ' + copy_['target']
                  copy_['positive'] = att + ' ' + copy_['positive']
                  copy_['neutral'] = att + ' ' + copy_['neutral']
                  copy_['unconditional'] = att + ' ' + copy_['unconditional']
                  newprompts.append(copy_)
          else:
              newprompts = [copy.deepcopy(prompts)]
          print(newprompts)
          print(len(prompts), len(newprompts))
          prompt_settings = [PromptSettings(**prompt) for prompt in newprompts]
          return prompt_settings