Real-time Stable Diffusion in TouchDesigner
- uploaded 29 Jun 2024
- Video Tutorial on TouchDiffusion: Interactive Real-Time Generation using StreamDiffusion and TensorRT in TouchDesigner
TouchDiffusion: github.com/olegchomp/TouchDif...
If you'd like, you can treat me to a coffee: boosty.to/vjschool
Socials:
/ olegchomp
/ oleg__chomp
00:00 - Intro
00:09 - TouchDiffusion features
00:42 - Beginning of installation
00:50 - Python install
01:08 - Git install
01:18 - Cuda Toolkit install
01:38 - TouchDiffusion install
01:55 - where python
02:13 - webui install
03:10 - Fix for error in pop-up window
03:27 - Model downloading & engine preparation
04:45 - SD Turbo engine
05:07 - TouchDiffusion settings
05:46 - Usage example
07:13 - Parameters explanation
amazing job explaining the entire process...
this is great! lean and clean, thank you!
Thanks for this great solution! Amazing performance on a 4070 12GB with SD Turbo and 1 sample step. I'll test building other engines now; 3 sample steps were slow on the 4070.
Insanely fast, and so nice to be at 512. Congrats on this one, can't wait for more features!
Custom resolutions are already available, but keep in mind that they will affect FPS.
@@VJSCHOOL I tried building a model with different res and more steps, but it wouldn't finish...
Super impressed with the speed. It's still super jittery and "stabley", but that's just AnimateDiff in general I guess. Excited to try out ControlNet!!
Amazing
Thanks bro
Hi, I followed every step and there are no errors, but in TouchDesigner when I hit pulse on "Load Engine" nothing happens. Can anybody help me?
Create an issue on GitHub or Discord and share the full log.
Hi, I followed your tutorial and every step went well, but in TouchDesigner the log shows: "cannot access local variable 'pipe' where it is not associated with a value". I don't know how it happened.
Create an issue on GitHub or Discord with the full log.
I don't understand how it's possible to get multiple frames per second with 20 steps. Why don't we use TensorRT in normal UIs like Automatic or Comfy? Sorry in advance for such a vague question, but I find this quite amazing haha.
Real-time can be achieved with Turbo models and acceleration LoRAs at low step counts (1-4) with TensorRT. TouchDesigner allows copying data from and to the GPU, which decreases latency.
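The reply above can be sanity-checked with some back-of-the-envelope arithmetic. A minimal sketch of why step count dominates frame rate (all timings are illustrative assumptions, not measurements from TouchDiffusion):

```python
# Rough FPS estimate: total frame time = fixed overhead + steps * per-step time.
# The numbers are illustrative; real timings depend on GPU, resolution, and engine.

def estimated_fps(steps: int, step_ms: float, overhead_ms: float) -> float:
    """Frames per second for a pipeline running `steps` denoising steps per frame."""
    frame_ms = overhead_ms + steps * step_ms
    return 1000.0 / frame_ms

# A Turbo model at 1 step vs. a standard model at 20 steps, same per-step cost:
turbo = estimated_fps(steps=1, step_ms=15.0, overhead_ms=10.0)      # 40.0 FPS
standard = estimated_fps(steps=20, step_ms=15.0, overhead_ms=10.0)  # ~3.2 FPS
```

This is why dropping from 20 steps to 1-4 Turbo steps moves generation from slideshow territory into real-time frame rates.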
Does it have to be Python 3.11? I have 3.10 and I don't want to break all my other installations to try this...
1) Everything is installed in a venv, so it doesn't affect your main Python environment.
2) If you want to try Python 3.10, provide its path in the webui and use a TouchDesigner version from before 2023.
How can I increase CFG to 7-10 instead of 1?
The quality of generations is very low at 1.
You should try increasing the sampling steps (batch size) instead.
Very interesting, but when I clicked Load Engine in TouchDesigner, I got an error like "cannot access local variable 'pipe' where it is not associated with a value". What is the problem?
That means either the model or the engine can't be found.
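For context, that exact message is Python's standard complaint when a local variable is only assigned inside a branch that never ran (e.g. because the model file wasn't found). A generic illustration of the failure mode and the usual fix — hypothetical code, not TouchDiffusion's actual loader:

```python
# Illustrates the "cannot access local variable 'pipe'" error pattern.
# Hypothetical example; not TouchDiffusion's real code.
import os

def load_engine_buggy(path: str):
    if os.path.exists(path):
        pipe = object()  # pretend we built a pipeline here
    # If the path was missing, 'pipe' was never bound:
    return pipe  # raises UnboundLocalError on the missing-file branch

def load_engine_fixed(path: str):
    pipe = None  # bind the name up front
    if os.path.exists(path):
        pipe = object()
    if pipe is None:
        # Surface the real cause instead of a confusing variable error
        raise FileNotFoundError(f"model/engine not found at {path!r}")
    return pipe
```

So when this error appears, the underlying problem is almost always the load that was supposed to assign `pipe` failing silently, which is why checking the model/engine paths is the first step.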
Git is installed.
Creating .venv directory...
The filename, directory name, or volume label syntax is incorrect.
Failed to create virtual environment.
What is the problem?
Check where Python is with the "where python" command and copy-paste that path into webui.bat, like he said.
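The failed-venv log above usually means the launcher picked up a bad Python path. A minimal sketch of the fix (the commands and paths are examples; use whatever your own `where python` prints):

```shell
# Windows: list installed Pythons, then paste the 3.11 path into webui.bat
#   where python
# Cross-platform equivalent shown here: create the venv with an explicit interpreter.
python3 -m venv .venv
.venv/bin/python --version   # Windows: .venv\Scripts\python.exe --version
```

If the venv's Python reports the expected version, the "Failed to create virtual environment" error should be gone.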
Amazing... this is really fast.
Can we put LoRA and VAE files in the folder?
Which LoRA and VAE give the best results?
And why can't I change the seed?
A Tiny VAE is already baked in for best performance. LoRAs are not supported yet.
The seed can be changed in TouchDesigner with the seed parameter.
What's the difference between this and streamdiffusion?
It's explained in the first 3 minutes of the video.
It's nearly 2x faster.
As I understand it, this miracle can't be run on a Mac?
Only PC & Windows
I really need a computer to do this. Is there a way I could offload the GPU work onto Google Colab?
The TouchDiffusion component requires TouchDesigner, so it can't run in Google Colab. You can try the original StreamDiffusion repo.
@@VJSCHOOL - Sigh, guess I just need better hardware.
@@VJSCHOOL sweet, I'll try that