Create AI Images of Yourself with Stable Diffusion & DreamBooth
- Added 1 Jun 2024
- Train Stable Diffusion with custom objects using DreamBooth on a Google Colab Jupyter Notebook for free.
WANT TO SUPPORT?
💰 Patreon: / agiledevart
---
00:00 Introduction
01:40 Fast-DreamBooth Google Colab
02:37 Set up Google Colab environment
03:43 Train Stable Diffusion model with custom images
05:39 Stable Diffusion UI on Google Colab
06:11 Generate AI images of a trained object with Stable Diffusion
---
▶️ Stable Diffusion One Click Install (GPU or CPU):
• Easy One Click Install...
💻 Fast DreamBooth:
colab.research.google.com/git...
💻 Fast Stable Diffusion:
github.com/TheLastBen/fast-st...
💻 Huggingface Tokens:
huggingface.co/settings/tokens
💻 Stable Diffusion Demo:
huggingface.co/spaces/stabili...
💻 Welcome to Google Colab!
colab.research.google.com/
●▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬●
👨👩👧👦 Social:
◆ Twitter: / agiledevart
●▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬●
🎮🕹️🐭 Snappy Mouse Run:
◆ Facebook: / snappymouserun
◆ App Store: itunes.apple.com/us/app/snapp...
◆ Google Play: play.google.com/store/apps/de...
◆ Amazon Store: www.amazon.com/gp/mas/dl/andro...
●▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬●
#stablediffusion #aigeneratedart #machinelearning - Science & Technology
Great work on this video, I enjoyed training the model with my photos. I got some hilarious ones
Thaaanks! The process is a bit different now: you need to name the pictures after your subject. I named mine kendarr, so kendarr(1), kendarr(2), and so on. It works great, thank you
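If you have a folder full of photos to rename, a minimal Python sketch of the renaming scheme the comment above describes; the folder path and subject name are assumptions, and only .jpg/.png files are handled:

```python
from pathlib import Path

def rename_training_photos(folder, subject):
    """Rename every image in `folder` to subject(1).ext, subject(2).ext, ..."""
    photos = sorted(Path(folder).glob("*.jpg")) + sorted(Path(folder).glob("*.png"))
    for i, photo in enumerate(photos, start=1):
        photo.rename(photo.with_name(f"{subject}({i}){photo.suffix}"))
```

For example, `rename_training_photos("my_photos", "kendarr")` turns `a.jpg` and `b.jpg` into `kendarr(1).jpg` and `kendarr(2).jpg`.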
If you used a phone to take your photos, make sure to open them in IrfanView, turn off the EXIF rotation, and re-save them. I didn't check the EXIF rotation values when I made my first model, and all the images were actually rotated 90 degrees. Prompts do not work right with that orientation.
Most programs will present the photos correctly rotated, but the Colab code seems to ignore the EXIF rotation data. You can batch-rotate and fix them in IrfanView.
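As an alternative to IrfanView, here is a minimal Pillow sketch that bakes each photo's EXIF orientation into the pixels, so tools that ignore EXIF still see the photos upright. The folder path is an assumption, and only .jpg files are handled:

```python
from pathlib import Path
from PIL import Image, ImageOps

def bake_exif_rotation(folder):
    """Apply each photo's EXIF orientation to its pixels and save in place."""
    for path in Path(folder).glob("*.jpg"):
        with Image.open(path) as img:
            # exif_transpose rotates/flips per the EXIF tag and drops the tag
            upright = ImageOps.exif_transpose(img)
            upright.save(path)
```

After this runs, the saved files no longer carry a misleading orientation tag, so training code that reads raw pixels gets them the right way up.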
Great. It works fine for me. Thank you
My friend, it works great. But if I need to work again on the same trained faces, where should I start?
If you use a custom model of a person, don't use Restore Faces.
It will remove the detail that makes you you.
2000 steps is not sufficient; use about 3500 to 4500 steps.
Above that, the model will be oversaturated with your data and it's harder to add style.
Great work! I tried training with the Waifu Diffusion 1.2 EMA version and got an error for using EMA. Anyone know how to solve that? Thanks
Thanks for a great video!
Congratulations on your timing, right when there's inevitably a wave of traffic coming, but also...
Any idea if there are changes that need to be made to any of this process now with SD 1.5, or should it just work?
I see they already updated the code to 1.5, so it should work out of the box
Thanks, great to know. I wonder what kind of results it might yield with the inpainting checkpoint too? Worth some experimenting! Can you teach it a new face without it forgetting how to do this magical new inpainting?
@@agiledevart Also, I wonder: would it make a difference merging a 1.5 checkpoint trained on a new face into a 1.5 inpainting checkpoint, vs. training the inpainting checkpoint separately on the face? 🤔
Do you need to somehow disconnect or log out when finished?
Hi there, extremely helpful video. Is it possible to run a trained model online in runpod just like how you did here with colab? Thanks! 😊
I haven't tried it on runpod, but it should definitely be possible
Do you have an updated set of instructions? The Colab has been updated and now looks totally different from your video.
I like it
Wow, can we use another model for this one, like Protogen or Anything v3?
Thank you so much for the video! It really works. Subscribed to your channel!
But I have a question: can you also upload photos of your body along with photos of your face for training the model?
Yes you can
@@agiledevart Why does mine say NameError when starting DreamBooth?
@@agiledevart
It doesn't give me a URL; it gives me this:
Embeddings:
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 171, in
    webui()
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 132, in webui
    demo = modules.ui.create_ui(wrap_gradio_gpu_call=wrap_gradio_gpu_call)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/ui.py", line 1241, in create_ui
    new_hypernetwork_activation_func = gr.Dropdown(value="relu", label="Select activation function of hypernetwork", choices=modules.hypernetworks.ui.keys)
AttributeError: module 'modules.hypernetworks.ui' has no attribute 'keys'
Do you get a URL? I just get webui in my Drive.
xformers is missing
So all was going well, and then it stalled around 2100 steps. When I went to reload it all, it kept saying the session was undefined, even if I typed in the session name. What do I delete on Google Drive to start from scratch?
Are you using Compute Units? Can we run this locally?
Hello, everyone.
What model makes it easy to edit a photo (for example, upload a specific photo of a house and use AI to change the color of the house, or add an additional construction like a patio, a veranda, or a pool)?
Ha, very nice video! The very clear explanation made me try it. I see they added a new option called 'NEW FAST METHOD', right before 'Setting up'. I'm doing it, but I don't know what I'm doing, you know what I mean? :D
Can I add new images after some time, or do I have to run everything again?
Do I have to run all the cells again next time if my ckpt model file is already generated in Google Drive? Or do I have to delete that file for a fresh execution?
If you exit completely, you should still be able to load your copy: not the ckpt, but the information in the folder that Colab created. You can name the session and add a version number. When it's finished, it will save the ckpt to the drive with the new name.
@@MichaelWoodrum Any further help? I'm unable to load it.
@@kaushiksunder7207 I haven't tried recently and I'm not sure that the same method works with the newest stable diffusion. It might, I haven't tried yet.
Sorry if this might be a dumb question, but are those pictures actually uploaded anywhere publicly accessible, or can they only be accessed by me?
Only you can access them. They're saved to your Google Drive.
@@agiledevart nothing is safe with google lmfao
What do you mean by "accept the Stable Diffusion terms and you will get the token"? I can't find a way to accept any terms on the website.
Did you manage to fix this ? I don't know what he means by token either
This is awesome! Please make more 'train your model' content.
Hi, I want to train on full-body photos of an alien race that looks like humans. Which option should I choose, object or character?
And which SUBJECT_TYPE?
I would probably set SUBJECT_NAME to human. I'm not an AI expert; all I did was trial-and-error experimentation.
This video is awesome. However, at the very end, I'm getting an annoying error/warning:
Warning: Taming Transformers not found at path /content/gdrive/MyDrive/sd/stable-diffusion/src/taming-transformers/taming
It's not running and consequently not giving me the link to the web GUI. Is anybody encountering the same problem?
Yep, I have the same problem, we need a more updated tutorial with the new method. If anyone has it please post a link.
The training does not work; a message appears saying "Something went wrong".
Do the photos have to be sized to 512 x 512?
No, my photos were 3432 x 4576
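If you do want to pre-size photos yourself to the 512x512 that Stable Diffusion 1.x models train at, a minimal Pillow sketch; the folder paths are assumptions, only .jpg files are handled, and as the reply above suggests, the Colab typically handles resizing for you:

```python
from pathlib import Path
from PIL import Image, ImageOps

def center_crop_512(folder, out_folder):
    """Center-crop each photo to a square and resize it to 512x512."""
    out = Path(out_folder)
    out.mkdir(exist_ok=True)
    for path in Path(folder).glob("*.jpg"):
        with Image.open(path) as img:
            # ImageOps.fit crops to the requested aspect ratio (centered
            # by default), then resizes to exactly 512x512
            square = ImageOps.fit(img, (512, 512))
            square.save(out / path.name)
```

Center-cropping keeps the subject if it's roughly centered in the frame; photos with the subject near an edge may need manual cropping instead.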
How long does DreamBooth take to train on your images?
It was 45 minutes or so
How do you actually get a token?
There is a new version at that link now.
I think the colab is broken now
@ty young why
how to fix it ???
I have a question: is this whole thing completely legal? It seems too cool to be true.
None of these tutorials look like the notebook; it's so annoying.
Bad tutorial
OUTDATED, try it yourself
Mine says in the final step: module 'modules.hypernetworks.ui' has no attribute 'keys'
I am getting
"Running on local URL: :443"
which brings me to an empty page. Can someone help me, please?
me too
Everything is different for me and I can't follow the instructions 🥲