Unlock The AI Video Potential in Stable Diffusion With Deforum Hybrid Video
- Published 16 Jun 2024
- Create WarpFusion-quality videos for free with ControlNet and Deforum in Stable Diffusion. In this tutorial I'll walk you through the steps to transform your video into an AI hybrid video.
If you follow this animation guide you'll learn how to use prompts to guide the creation process and transform your videos into mesmerizing dance animations, LEGO-style videos, or any other cool idea you have for it.
0:00 Intro
1:44 Videos I will transform
2:18 installing deforum
3:48 change noise multiplier file
4:14 create prompts and settings in the img2img tab
4:40 export still image in DaVinci Resolve
4:52 installing stable diffusion models and embeddings
5:18 settings in img2img follow up
6:18 controlnet settings in the img2img tab
7:28 using loras for better and more intense style
10:16 transfer settings from img2img tab to the deforum tab
15:05 load settings file into deforum
LINKS:
Andrey - Unreal Unit:
/ @unreal_unit
Andrey - Unreal Unit Instagram
unreal unit
Stable swirls youtube channel:
/ @stableswirls
reallybigname:
/ @reallybigname
cat
www.pexels.com/video/close-up...
Insta: manoletyet:
- manoletyet...
models civit:
civitai.com/models/7371/rev-a...
civitai.com/models/3627/proto...
embeddings
civitai.com/models/16993/badh...
civitai.com/models/7808/easyn...
civitai.com/models/4629/deep-...
lora robot:
civitai.com/models/6888/luisa...
lora mechanical cat:
civitai.com/models/99437/mech...
lora lego person:
civitai.com/models/92444/lelo...
Lowra Lora:
civitai.com/models/48139/lowra
Sebastian Kamph's Installation of Stable diffusion automatic 1111 webui - • How to Install Stable ...
flowframes
nmkd.itch.io/flowframes
Topaz Labs
www.topazlabs.com/
Davinci resolve
www.blackmagicdesign.com/prod...
DISCLAIMER: No copyright is claimed in this video and to the extent that material may appear to be infringed, I assert that such alleged infringement is permissible under fair use principles. If you believe material has been used in an unauthorized manner, please contact the poster.
Music: YouTube Audio Library - Eternal Garden - Dan Henig - Howto & Style
For the LEGO video, apply the same Deforum settings as for the cat video. Then transfer the img2img tab LEGO info into Deforum and you are done. The only thing I did differently for the LEGO video is that in the coherence tab, I set it to Video input instead of None, and I set the anti-blur to 0.1.
These are things that will differ per video and you have to test with it.
You asked for a video showing my results
czcams.com/users/shorts-Zm_9IJcXLI?feature=share
Love your tutorials.
I actually did this a number of days ago and have others I'm working on
But you've now compiled all the different tutorials I watched and found myself into one place for me.
Thank you
I have some other settings you may be interested in
I'll share my workflow with you and you may share it in a later video if you wish
I don't have time enough to edit like you do for tutorials but I'll be happy for someone else to share
❤❤
@@Artifical_Deforum Wow that looks amazing, if you're that good with deforum I don't think you need my tutorials 🙂
That is very kind that you want to share your settings and your workflow with me. I would love to make a tutorial about it and mention you, just like I mentioned Unreal Unit. Are you on Instagram? I've tried to search for you there but I couldn't find you. My Instagram account is: instagram.com/digital_magic_1/
The coming week I won't be working on the computer. Because of the autoimmune illness I have at the moment, I should give my elbows and shoulders a bit of a rest; the last few weeks I've done a few hours too much on the computer.
wish you a nice weekend 🙂
@digital_magic do you have a discord server I can join or an email contact I may use to send you results and technique?
Thank you 😊
Hi, I have a problem with the noise multiplier, I don't see that option. Any solution?
@DarkSide_AI The noise multiplier is in the noise tab in Deforum,
as well as having its own slider in img2img and txt2img.
Instead of a slider you define it via a float,
e.g. 0.8 or 1 or 0.02; it's the first option in the tabs list.
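For anyone who can't find the option at all, here is a hedged sketch: newer Automatic1111 builds persist options like this in config.json in the webui root, so it can be set there directly instead of editing shared.py. The file location and the "initial_noise_multiplier" key name are assumptions and may differ between versions:

```python
import json
import pathlib

# Assumed location: config.json in the stable-diffusion-webui root folder.
cfg_path = pathlib.Path("config.json")
cfg = json.loads(cfg_path.read_text()) if cfg_path.exists() else {}

# "initial_noise_multiplier" is the img2img noise multiplier option;
# 0.0 disables the extra noise A1111 adds on img2img passes.
cfg["initial_noise_multiplier"] = 0.0
cfg_path.write_text(json.dumps(cfg, indent=4))
```

Restart the webui (or reload the UI) afterwards so the changed setting is picked up.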
I’ve been following the other 3 creators for a long time now but you sir, are a legend. The way you’ve combined the knowledge of all 3 into such a comprehensive video is mindblowing. Thank you so much!
Hey there and thanks for your very nice comment. it really makes me smile, this is exactly the reason why I put so much effort in creating this tutorial hoping that there will be people that appreciate it. that's why your comment really makes my day. thank you
Man, of all the M2M tutorials out there I've never been able to get them to work for me until your video. Thank you for sharing your wisdom with us! You earned a sub
Thanx for your kind message 🙂 I am glad it helped you
I watched so many tutorials about this stuff, but yours was the only one that worked. Thanks, subbed!
i am glad it helped 🙂
Liked! Subscribed! Lifechanger, keep posting!!! THANK YOU!!!
Hey man I'm very glad you liked the video. I will be posting more videos but I can't post too much because of my autoimmune illness that I have at the moment. I can only work 2 to 3 hours per day on the computer. but hopefully that will change in a few months and I could produce more tutorials
Hats off! Finally the best tutorial I've seen: easy to follow, very well explained in every single detail, didactic... in short, thank you! I wish all YouTubers could understand how important it is to communicate something to the audience, something that you did remarkably well!
Hey there and thanks for your very nice comment. This is exactly the reason why I put so much effort in creating this tutorial hoping that there will be people that appreciate it. that's why your comment really makes my day. thank you
thanks bro.. you explain step by step, and not convoluted, great tutorial
Hey there and thanks for your really nice comment it is always great to get comments like this, this is exactly the reason why I try to make good tutorials. I wish you a nice day
Thank you!!
PS: Loved the background music! :D
hahaha, i know it isn't the best :-) Thanx for your nice comment 🙂
Hello! Great tutorial, i like it! And it's very cool that Andrey helped you with the creation of this tutorial, he's good at it. Thank you so much!
I am glad you liked the video :-) And yes it is amazing that Andrey helped me so much, he is very talented 🙂
Another great video! Keep em coming :)
Thanx a lot, i am glad you liked the video 🙂
I almost gave up on hybrid video and stayed with temporalkit but I'll give this method a try. Thanks, great video.
Hey there, I'm glad you liked the new technique.and video. I was very impressed by it as well and I'm very grateful that Andrey reached out to me
This is an outstanding guide, thanks so much ❤
i am glad you liked it 🙂
One thing I found (not sure if others get it): the Deforum settings won't save to the root folder of Stable Diffusion (on Windows at least; permissions maybe?). I had to put them in stable-diffusion-webui\outputs\img2img-images for it to work. No errors either, so it was driving me nuts. Posting in case anyone else is tearing their hair out :)
Thanks for another great walk through, these videos are very much appreciated!
Thank you for your very helpful comment I really love it when the community tries to help each other. and I'm also glad that you like my videos. wish you a very nice weekend
Great tutorial. I'm glad that my knowledge helped you 🔥
Hey Andrey,
Thank you very much mate, without you it wouldn't have been possible for me to create this tutorial. I really appreciate all you did for me. and hope to stay in contact as we both develop ourselves in AI video.
I wonder if I could get it to work with SDXL 1
@@ben2660 i am very interested as well. and if not , then it won't take long before they will update the deforum extension i guess
Great Video Brother!! Thanks for the shoutout 🙌
Hey Stable Swirls, I was hoping you would see the video and notice my shout-out to you. I want to thank you a lot because your tutorial helped me a lot, and I'm looking forward to more tutorials from you. I would love to exchange thoughts about this whole technique; maybe we can chat on Instagram? Here is my Instagram address:
instagram.com/digital_magic_1/
I wish you a very nice day
Thank you for another great video!
I am glad you liked it 🙂
I'm gonna be real with you, I usually hate watching long videos. But for some reason I can sit through yours.
Wow thanx, that's a great compliment. Very kind of you to let me know 🙂
Thanks, I finished my first video after this tutorial😀, it's very cool🤖
That's great! Please send me a link of what you've created I am very curious what you have made. wish you a very nice day
Cool stuff. Does this work with only environments as well ? Like using it on a video of buildings to generate a cooler environment?
yeah i guess so
Thank you so much... Just great. 😎💪🏻
i am glad you liked it 🙂
Always use a green or blue background. You save on the data (more available for the character), don't have fidgeting backgrounds. Then just key it later in Resolve.
thanx for the tip, i will try it
I think the title of your video is great!
I am glad you like the title. I know it is a bit clickbait-style, but the YouTube environment pushes me to this. I have had many YouTube channels before where I always used normal titles, because I don't like clickbait myself. But after many years I've learned that the only way to grow as a small YouTube channel is to use clickbait titles and thumbnails. I wish you a very nice day.
Great stuff!
Glad you enjoyed it
nice tutorial bro :) I recently started doing experiments with Ebsynth Utility extension, trying to achieve consistency with faces. :)
Hey there and thanks for your comment. Is the Ebsynth utility extension something new, or do you mean the normal version? I would love to know more about what you create and would like to see some examples. Wish you a nice day.
@@digital_magic Ebsynth utility is different from temporalkit, it doesn't create grid images, it is creating key frames and ''.ebs'' files that can be opened with ebsynth and rendered, I think it is not new, I have some examples on my profile, I am just finding random tiktok videos and testing them everyday, tomorrow gonna make another one :)
Hey thanks for this great video !! You are awesome just wanted to ask can we use deforum tab using sdxl model?
I haven't tested it out myself, but I asked reallybigname on one of his new videos, and there he told me that he used SDXL. I will start testing myself soon as well.
Oh and I'm very happy that you like the video. it's always nice to hear from people that I liked it so thank you very much for that that does me very well. I wish you a very nice day
It was great, thank you very much
Glad you liked it!
Great video, thanks for the information. I have a question tho, I'm using google colab for my stable diffusion operations and with deforum, it takes too long. For example for a 10 second video, it says 9.30 hours to finish, but meanwhile it uses like 3 gb vram out of 15. Is there a way I can increase the speed of the deforum process?
Not that I know of
u are a great teacher
after seeing 100,000 videos, I finally found you, god bless you )))
thanx for your very nice comment, i am glad you liked it 🙂
Hello, thanks for the video :) but why do you use ControlNet parameters on hed/softedge for Deforum instead of openpose like you previously show for img2img ? Is there any reason for that ?
I am not sure what you mean. Maybe that was a mistake I made. Normally I always use the same ControlNet as what I prepared in img2img. I guess it was confusing because I used 3 examples?
Great tutorial ! Thank you !
I'm playing with deforum and trying to understand and not importing people's settings...
Can you tell me which parameters give consistency for the background and character ?
Thank you for your reply :)
I can't tell you that in a nutshell; you just have to watch all the tutorials, because that's where all the knowledge I have is.
Where do you get the control net models?
bro ! you are the best ! i watched some of your videos, i even know you were ill, i wish you get better soon !
i wish you have a discord server that i can join !
bro, thanks so much !
bro, i followed your tutorial and getting a result. can i share it with you and correct what i did wrong ?
Hey there and thank you for your very nice comment. it always does me good when I noticed that people like my tutorials. I think you are the person I have also contact on in Instagram, is that correct? I don't have an own Discord group but there is a Discord deforum group
discord.com/invite/deforum
And this is my username
digital_magic_
Hello, thanks a lot. Is it possible to make video-to-video animation in Deforum run faster with the same results, using a LoRA? Or any other tips? Thanks. I'm working on a 1920 x 1080 video and the process says 90 days of work lol
1920x1080 sounds very big; I would use sizes that are Stable Diffusion friendly, like 1024 x 576.
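As a quick sketch of how a "Stable Diffusion friendly" size can be derived from a source video: scale the long edge down to around 1024 and snap both edges to a multiple of 64, which is a common safe granularity for SD resolutions. The helper name and the defaults here are just illustrative choices, not anything from the video:

```python
def sd_friendly_size(width, height, target_long=1024, multiple=64):
    """Scale a frame so its long edge is `target_long`, snapping
    both edges to a multiple of `multiple` (SD-friendly sizes)."""
    scale = target_long / max(width, height)
    snap = lambda n: max(multiple, round(n * scale / multiple) * multiple)
    return snap(width), snap(height)

print(sd_friendly_size(1920, 1080))  # (1024, 576)
print(sd_friendly_size(1080, 1920))  # (576, 1024), portrait source
```

The 1920x1080 example from the comment above lands exactly on the 1024x576 size recommended in the reply.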
By the way, that square Trump dancing video was also done in TensorRT, which doesn't even have controlnet. So, I was just using pure hybrid video.
I am not familiar with tensor RT, but I would love to learn about it I just figured out that I could mail with you on Discord in the deforum Discord group 🙂
Well, last time I tried the dev branch of automatic1111 that you have to run to use TensorRT, Deforum wasn't working in it. Something changed and someone will have to fix it. TensorRT is an NVIDIA thing, a way of speeding up performance. It allowed me to go from like 27 it/s to 60 it/s at 512x512, but you have to convert models to another format and at a specific size, and it's limited to 512 and 768 on edge sizes. You have to run the dev automatic1111 and use the TensorRT extension for it. I'm sure someone will fix it at some point. I think auto changed something about the gradio interface, so it's causing an error now... but, it's a dev version, so.... @@digital_magic
hope your health is going better now! thank you for the latest update of a new technique 💯💯💯🤖
Hey there, I'm glad you liked the new technique. I was very impressed by it as well, and I'm very grateful that Andrey reached out to me. I really appreciate that you ask about my health. It's slowly going better, and at the moment I can work 3 to 4 hours per day on the computer, which makes me really happy. Hopefully by the end of November I can go skiing, biking, climbing, walking, and do all the stuff that I like so much again. I wish you a very nice day.
@@digital_magic Thank you for your kind words. I have a question: I can't find the control_v11f1e_sd15_tile model among the ControlNet models. I just found another model called controlnet11Models_tileE... any idea why I don't have this installed and where to find it? Thank you, Sir
@@digital_magic Can you please help with this problem, Sir? Where to find control_v11f1e_sd15_tile...
@@Ich.kack.mir.in.dieHos you have to update control net and Download the latest models. Here is the link:huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
@@low.bow_1381 you have to update control net and Download the latest models. Here is the link:huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
Thanks for putting this together!
3 questions:
1. I get decent results but the face and details still flickers some.. what parameters would help to reduce the flicker if I were to test.
2. my hybrid video tab it goes and loads for ever, like its downloading hybrid video model or something.. what is it and how can I stop it from doing that?
3. what's the fly all about?
I am glad you liked the video. You are asking me the golden question, because that is the most difficult bit: reducing the flicker and getting consistent video. It's hard to say, but the things that I've noticed and that I play with are:
-The control weight in the control net tab
-The amount of the Lora you use, so like 0.5 or 0.3 In the prompt
-Noise schedule in the Deforum tab
- anti blur in the Deforum tab
But in all honesty I have to do much more testing to really understand what makes a lot of difference. so if you go and test around with it please let me know what you learned about it cuz we all learn from each other I am in the deforum Discord group and we could chat there as well. discord.com/invite/deforum
My username there is:
digital_magic_
and about the hybrid video tab, it could be that it is downloading a special model. I noticed that more often in stable diffusion that sometimes the loading takes for ages and then I look in the console and I realized that it's downloading something so in general I always let it download it and then wait for it to be done.
And the fly? Yeah, the fly was just for fun. I saw it in other YouTubers' videos where they do something to keep people from losing concentration, and I thought it would be funny to do it, that's all. I hope my messages help, and feel free to ask any more questions.
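For readers new to Deforum: the noise schedule and anti-blur values mentioned in the list above are keyframe strings in the Deforum tab, where each entry is `frame:(value)` and values interpolate between keyframes. A hedged illustration (the numbers here are hypothetical starting points to test with, not the author's settings):

```text
strength schedule:  0:(0.65), 60:(0.55)
noise schedule:     0:(0.02)
anti blur amount:   0:(0.1)
```

Keeping a single `0:(value)` keyframe holds the value constant for the whole clip; adding more keyframes lets you ease a value up or down mid-video.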
Just a heads up it was in shared_options and my version of sd was already set as 0.0, also currently no tile for SDXL as of 12/19/2023!
Yeah, SD 1.6 has it set automatically.
In terms of the results, they are somewhat satisfactory. However, it seems that DeForum did not effectively address the issues of character consistency and background flickering. Could you also explain the application of masks in DeForum? I wonder if it can help with the background flickering problem.
i haven't dived into that yet, but will do as soon as i find out more about it. But i think you are right with what you said
Perfectly explained, thanks!
I've this problem :(
Error: 'operands could not be broadcast together with shapes (1024,756) (752,) (1024,756) '. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli.
have you tried disabling all big extensions and re-install controlnet and deforum?
What it can also be is that you need to uncheck the init image in the init tab and also delete the path from the init image, because for this we don't need an image init, only a video init. I have had other people ask about this, and for them this helped. I hope this helps for you; wish you a very nice weekend.
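Another thing worth checking with the "operands could not be broadcast" error above: the shapes (1024,756) vs (752,) suggest a frame dimension that is not divisible by 8, which gets rounded internally and then no longer matches. A hedged sketch of a pre-check for your Deforum width/height (the divisor of 8 is an assumption based on SD's 8x latent downscaling):

```python
def snap_dimension(n, base=8):
    # Round a width/height down to the nearest multiple of `base`;
    # SD works in latent space downscaled by 8, so odd sizes get rounded.
    return (n // base) * base

# 756 is not divisible by 8, which matches the (756,) vs (752,) mismatch:
print(snap_dimension(756))   # 752
print(snap_dimension(1024))  # 1024
```

If the snapped value differs from the value you entered, set the snapped size in the Deforum run settings before generating.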
Thank you very much for your time, dedication and attention.
Yesterday I was quite stressed and overlooked some of the indications, following the video step by step I have achieved a magnificent result. Soon I will update my new channel to upload content.
Like, subscriber and bell of all notifications activated.
More people like you please.
Surreal question: Do you think there will soon be available an extension to create videos using prompts?
Regards@@digital_magic
Hi man, thank you for your cool tutorial! This is very useful 🔥. BUT, the only thing I don't understand is how to write the prompt correctly so that it really changes my video. Can I ask where you got your prompts from in this video?
I watched several such videos where they show how to style a video in Deforum, and all the authors also take an already prepared prompt and quickly go through the part where you need to write the prompt; that is, they don't fully explain it.
1. Could you help with this (how to write prompts correctly specifically for video styling in Deforum)?
2. How do I find out which LoRA I need for my video? There are a lot of them \ Thanks!
In general this is just trial and error. What I mostly do is look on Civitai to find a LoRA that I need. Then on that page I look for example images, and mostly I copy 80% of the prompt they used there. I hope this helps.
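For reference, in the Automatic1111 prompt a LoRA is invoked with an angle-bracket tag, where the last number is the weight discussed earlier in the thread (0.3 to 0.5 for a subtler style). The LoRA filename below is a placeholder, not one of the models linked in the description:

```text
a mechanical cat walking, intricate details, cinematic lighting <lora:someMechanicalCatLora:0.5>
```

The tag must match the LoRA's filename in the models/Lora folder, and lowering the weight is usually the first knob to turn when the style overpowers the source video.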
@@digital_magic thanks man! I learned text embeddings and how to use loras, I have already created different videos in Deforum, but there is one point that is unclear to me (this is about certain stylizations)
How can I find you in discord? Can I write to you? You would make me happy if you can help 😊
I have a problem, at minute 3:22 of your video you say that you have to select initial_noise_multiplier but in my case there is no drop-down menu option that shows me all those options, what could be the problem?
same
Have you tried to update stable diffusion? because normally everyone should have this drop-down menu
Great tutorial, thanks! For some reason Deforum seems to completely ignore my init video. Double checked settings/paths and it all should work. Only way it worked was if I changed the strength schedule in deforum tab to (0.75), but then the results were very inconsistent and bad. What could be the reason as to why this is happening?
have you tried disabling all big extensions and re-install controlnet and deforum?
i had to do this so everything would work properly
what are the parentheses for on the negative prompt
You don't need those parentheses in Deforum; if you leave them in, it doesn't work.
Hi, I'm new to this, i didn't find any models in Deforum>Controlnet section, where i can download them and where to put them please? Thank you!
you have to update control net and Download the latest models. Here is the link:huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
@@digital_magic Thank you
Does it work well if you don't have a lora for your specific look?
Yeah for sure if you do proper prompting then you can also get away without a lora, but in many cases it is just easier to work with a lora. I hope this helps
I am getting this error when 'Generating' in deforum. Any help is very much appreciated:
Error: 'Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.'. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli.
Been trying with so many models, failing in each.
I am on Mac m2 pro.
Please, can you try hybrid video with sdxl models, sdxl loras and sdxl controlnets??
I tried several configurations, but it was a disaster
Hey there and thanks for your comment. I want to dive into all that stuff, but at the moment I only have an 8 GB VRAM graphics card and I'm waiting for the 24 GB VRAM card that I've ordered. When that is set up, I will dive into it 🙂
Thank you very much for your work and your lesson! But I can't complete the generation. Has anyone encountered the problem -
User friendly error message:
Error: "compute_indices_weights_cubic" not implemented for 'Half'. Please, check your schedules/ init values.
What to do? macbook, m1
Maybe you can try to reinstall Deforum? Or else it could be that you have to deselect the init image in the init tab.
I'm quite sure that Pixel Perfect does nothing for the tile ControlNet. Also I wonder if the tile control works as intended without the tile preprocessor. Instead of softedge, you could also go for Lineart (realistic) and/or Normal Map (midas mode), while OpenPose might be counterproductive due to incorrectly identifying the pose. Anyway, nice guide for Deforum, thanks!
Hey, thanks for your nice comment and great tips. I have tried with Lineart realistic but the result was not very good, though I think it also depends on which video you use. I haven't tried midas mode and I will definitely try that. I would love to exchange thoughts about this technique with you. Are you in the Deforum Discord group? Otherwise maybe we could chat on Instagram. Here is my address: instagram.com/digital_magic_1/
Wish you a very nice day and hope to get in contact
@@digital_magic I'm not using any of these social networks because I just hate the mega corps with all data collection and censoring, with YT being the only exception so far. ;-) Though I recently made throw-away instragram account to help a guy who got stuck getting AnimateDiff running on windows but YT kept deleting my answers for some reason which once again confirms my views on mega corps' social networks. I don't think I can tell you anything useful more, I did tons of i2i experimentation but have Deforum only used once, hence I liked to see a recent guide on this. :)
pixel perfect has no effect on tile during rendering (as far as I'm aware) but automatically sets the resolution at least.
@@Saik_1992 Hey there and thanks for your info. I've had another comment from somebody who mentioned the same as you. So you would recommend not using Pixel Perfect at all?
@@digital_magic I do use it, but it is mainly me being too lazy to set the proper Resolution every time.
The performance Hit isn't as bad once you Cache CN's.
Nice informative video. I followed along but in the Shared.py i could not find the initial_noise. Can you throw some light.
In SD 1.6 it is set automatically, so you should be able to set it to 0.0 directly if you have an updated version of SD Automatic1111.
How do you get stable diffusion to render using your gpu?
pls help, I am getting error after installing the temporalkit, ModuleNotFoundError: No module named 'tqdm.auto'
Press any key to continue . . .
Sorry but I'm not using the temporal kit in this tutorial. so I don't know how I could help you?
Hi there, just a quick question. My shared.py file has less data than yours and there is no line including "initial_noise". How can it be? Well, i'm kind of newby to a1111, so is there any opinion? Thanks
I think the new SD1.6 has it ( noise option to 0 ) automatically. So no need to change it anymore, as far as i know,....things go fast in AI world :-)
Likewise, I'm going crazy trying to keep up with the updates. Just woke up & voilà; SDXL-Turbo... Good work by the way, thanks.@@digital_magic
Thank you for the video!
Is it possible to provide us with the settings to download?
I will do this in my upcoming video so it is easier to create a video
you are a very friendly guy!!! i like also how you explain things !!! @@digital_magic
Looks not bad, but also kind of complicated. Can you share your Deforum settings? I've always just worked with img2img batch until now. Is Deforum somehow faster at rendering?
I am not sure if Deforum is faster at rendering, but it is definitely much better in consistency than just img2img. I shared my Deforum settings in the video; just follow it step by step and then you have your own settings file. Loading your own settings file works well, but I once tried to load a settings file from Andrey from Unreal Unit and it really didn't work; I had to do everything manually. But the cool thing is, once you've done that you've got your own settings file, and then it is much easier. For me, after creating three to four videos with this technique, it's not so complicated anymore. But you're totally right, in the beginning it's very overwhelming; that's why I tried to make this tutorial as good as possible so that everybody can understand it.
@@digital_magic I might do that. But there will probably soon be VideoControlNet; that will be wild. "VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet". You can search for it; links get deleted from YouTube.
@@ratside9485 Wow that sounds very exciting I'm definitely going to look it up. how did you find out about it?
@@digital_magic AI News Site the-decoder had reported about it
@@ratside9485 thanx mate 🙂
Your tutorial is very cool, but when I apply all of your advice and generate with Deforum, it says Error: 'list index out of range'. Before reporting, please check your schedules/init values. And it keeps telling me this even when I redo everything you did in your video, so if you have a solution that would be cool.
You can try to disable the init image and delete the path in there, because you only need the video init path.
Thanks for the video. I was following everything until you do the cat image in image to image. Then you mention copying the path of the cat video. When was the cat video created? I just have the image produced in image to image, so I am confused! Any help appreciated!
Hey there and thanks for your message,
the cat video is the original video that I downloaded from pexels. I Exported a 576 x 1024 version, because that is the resolution I was working in, in img2img. This is also the resolution you use for deforum
@@digital_magic Thanks for clarifying!
Is it possible to use a frame sequence instead of a video?
no not in deforum,...then you should use img2img if you want that
I dont have the Tile in control net selection..
you have to update control net and Download the latest models. Here is the link:huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
Please tell me how to uninstall an extension in Stable Diffusion.
Disable the checkbox in the Extensions tab for the big extensions that you are using in SD.
Thanks man! Do you know how I change the shared.py on my Mac? I open it with TextEdit, but there is no initial noise.
i am sorry i have no mac. i am glad you liked the video. Maybe ask in the deforum discord group??
@@digital_magicthanx! I found a solution after a long conversation with chat gpt. 😅There was no initial noise in the document. But i was able to set the noise to zero., without changing anything. Maybe they fixed it in an update. ;)
@@BennosProject awesome great to hear, yes i guess it was the update
Hey just a quick question, is warp fusion and deforum hybrid video essentially the same?
not exactly, they are almost the same.
@@digital_magic Thanks for the reply, would you say one is better than the other or both have their perks?
@@danielbarboza4103 Well, to be honest I don't have any experience with WarpFusion. The reason I am doing it with Deforum and Stable Diffusion is because it is free, and I think many people are interested in it. WarpFusion costs 10 dollars per month. I do think that WarpFusion is a bit more consistent at the moment, but I think Stable Diffusion will catch up soon. I am very happy with the consistency in Stable Diffusion already. Hope this helps.
Error: 'Video file E:\AI\Stable Diffusion\stable-diffusion-webui has format 'e:\ai\stable diffusion\stable-diffusion-webui', which is not supported. Supported formats are: ['mov', 'mpeg', 'mp4', 'm4v', 'avi', 'mpg', 'webm']'. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli.
Any idea what could be the issue? I have given the video url in the controlNets and Video Init sections. And the video is mp4 only
this has to do with the update with deforum. i haven't updated it yet and that's why it still works. i have these comments/questions now since the last 2 days on this video. So i dont know the solution or this, bu i would recommend to go to the deforum discord group to ask for help
Thanks for letting me know that. Actually I found a temporary fix by editing directly in to a file.@@digital_magic
@@viralworld6051 sounds great. how did you do that?
@@digital_magic By editing the audio_video_utilities py file.. I gave the video_path manually after 79th line
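The error in the thread above shows a folder path being passed where Deforum expects a video file. A hedged sketch of a quick pre-check, using the supported formats taken from the Deforum error message itself (the helper name is mine, not part of Deforum):

```python
import pathlib

# Supported formats, as listed in the Deforum error message above.
SUPPORTED = {"mov", "mpeg", "mp4", "m4v", "avi", "mpg", "webm"}

def check_init_video(path):
    # Deforum wants the full path to the video file, not its folder,
    # so a path with no recognized extension is a red flag.
    ext = pathlib.Path(path).suffix.lower().lstrip(".")
    return ext in SUPPORTED

print(check_init_video(r"E:\AI\Stable Diffusion\stable-diffusion-webui"))  # False: folder, not a video
print(check_init_video("cat.mp4"))                                         # True
```

Running the folder path from the error through this check fails, which matches the reported problem: the webui root folder was given instead of the .mp4 itself.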
Hey, I tried everything and I followed all your steps, even double-checked them. But I keep getting the same error:
Error: 'operands could not be broadcast together with shapes (540,309) (304,) (540,309) '. Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \. Full error message is in your terminal/ cli.
Time taken: 29.1 sec.
I tried different sizes, schedule settings, and so on, but nothing works. ((Any ideas?))
I fixed it! It was the 3D option; putting it on Video Input made it work! The rest of the settings are the same as yours.
i am glad it worked :-)
bro i have a problem
Error: 'Error: No input frames found in C:\Users\DEENEDANYALH\Documents\stable difusion\stable-diffusion-webui\outputs\img2img-images\Jafaar\inputframes! Please check your input video path and whether you've opted to extract input frames.'. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli.
I think you have to deselect the image input in deforum
I am using rundiffusion and I can't seem to get the right video path! I keep trying variations of /mnt/private/... No luck :(
I am sorry to hear that. Did you solve the problem already?
Where can I find the control_v11f1e_sd15_tile [a371b31b] file?
You have to update ControlNet and download the latest models. Here is the link: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
For a fast way to change the noise multiplier: just point at the form, right-click and go to Inspect, and change it in the browser. Web-based only, easy.
Thanks for your comment, but I didn't really understand how to do it. What do you mean by "just point to form"?
@@digital_magic form ..click just point the mouse pointer that area ..then right click mouse you will see the inspect caption..click it it will straight go the noise level ...change to 0 ..then box dialog box also ..2 thing need to change
@@dronearon3085 you mean in the shared.py file?
@@dronearon3085 Or do you mean in stable diffusion setting tab?
Wow, this is an amazing tutorial! I can't work on my own computer right now, my CPU can't handle it, so I use Google Colab for Stable Diffusion. I want to try your tutorial, but I don't know how to input my video into Automatic1111 on Colab. Can you help me with this question?
I am very sorry, but I can't help you with Google Colab because I don't have experience with it. What you could try is something like RunDiffusion or ThinkDiffusion; these are also paid cloud-based systems, but they use the Automatic1111 web UI. And thanks for your compliment, I really like it when people share that they liked the tutorial. It's a lot of work to create them, so it does me good when people share their thoughts. I wish you a very nice day.
In my shared module there's no initial noise to change. Can you help me? Ty!
As far as I know, since SD 1.6 it is standard that you can set the initial noise to 0.
you are right ty! @@digital_magic
Error: 'unsupported expression type: '. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli.
help!!!
Does anyone have a link to run this from Google Colab? PS: what a tremendous tutorial.
Hey there, I am really glad that you think the tutorial is good :-) I don't have a link to run this from Google Colab, but you could try RunDiffusion or ThinkDiffusion.
Great tutorial thanks!
Everything was working with deforum/controlnet then suddenly I get this error:
User friendly error message:
Error: Video file C:\Users\k\stable-diffusion-webui has format 'c:\users\k\stable-diffusion-webui', which is not supported. Supported formats are: ['mov', 'mpeg', 'mp4', 'm4v', 'avi', 'mpg', 'webm']. Please, check your schedules/ init values.
Does anyone know how to fix this? I checked all settings and even reinstalled A1111 etc., but it still occurs. :(
Hey there, what video format are you using?
And did you disable the big extensions in Stable Diffusion?
@@digital_magic Found the answer, it's a bug in the new version:
Guys, for people who get the error with video ControlNet: to downgrade, go into the deforum folder under your Automatic1111 extensions and run the command git checkout 0949bf428d5ef9ce554e9cdcf5fc4190e2c1ba12. It will downgrade to the Aug 13 version.
I guess that soon, when the bug is fixed, you may need to reinstall Deforum or run git checkout master.
Initial noise doesn't appear here for me 😢
Wow, that is very strange. Have you done exactly what I showed in the tutorial?
Followed the tutorial, but for some reason when the strength schedule is 0 my input video is not used as the frames change. Only when I increase the strength schedule to 0.65 does the tile ControlNet seem to kick in, but the results aren't like yours. Not sure why it's not working. Would be really thankful if you could upload your settings file to Pastebin, just to check whether it's my setup or incorrect settings. Thank you for the video.
All my settings files are here in my shop. You can get them for free, just type in 0.
ko-fi.com/digitalmagic/shop
Maybe also try updating ControlNet and Deforum, and be sure to disable all the big extensions.
@@digital_magic Ah great, thank you! The settings definitely helped. Thanks again :)
Hahahahaha, good guess that I am Dutch :-) I am glad the settings file could help you @@eyoo369
Where is the settings file stored?
in the output folder in the img2img-images folder
Hello, I did the tutorial step by step, but my animations don't move; it's like a slideshow of photos. Where could I have gone wrong?
I am not sure where you went wrong, but you can try my free settings file for Deforum. You can find the link in the latest video on my YouTube channel.
@@digital_magic I Will try. Thanks
Does this work with Stable Diffusion XL?
I am very interested as well. And if not, it won't take long before they update the Deforum extension. I will try it out soon 🙂
I followed the exact same steps, but when I generate the video it creates a completely black image sequence along with a rectangle (and the shape is morphing). Can anyone help me with this?
Have you tried disabling all big extensions and reinstalling ControlNet and Deforum?
@@digital_magic Yeah, I reinstalled Stable Diffusion along with the extensions... it's working now... somehow... but I couldn't fix the problem earlier.
You're missing a step. After changing the initial_noise_multiplier minimum to 0.0, you must kill the instance in the terminal and restart it.
Thanks for the info, but for me it worked like this.
How do you set the denoising strength in Deforum? It always uses the default of 1 and gives bad results, not the same result as in img2img...
I used the noise multiplier as I showed in the tutorial, and then in the Noise tab set the noise schedule to zero or 0.2.
@@digital_magic I did some research, but apparently there is no way to match the img2img denoising strength in Deforum since it always uses a value of 1, so I just changed my img2img settings accordingly.
@@Adem.940 Great, and thanks for letting me know 🙂
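For context on the strength discussion above: Deforum controls how much each frame is redrawn through its strength schedule keyframe strings rather than a single denoising slider. The snippet below parses a schedule string of the "frame: (value)" shape that Deforum settings files use; `parse_schedule` is a hypothetical name, and this is a simplified sketch (the real parser also evaluates expressions, which this ignores):

```python
import re

# Illustrative Deforum-style schedule string: frame number -> value
strength_schedule = "0: (0.65), 60: (0.4)"

def parse_schedule(schedule):
    """Parse a simple 'frame: (value)' keyframe string into a dict.

    Only plain numeric values are handled here; Deforum's full syntax
    also allows math expressions between the parentheses.
    """
    pairs = re.findall(r"(\d+)\s*:\s*\(([\d.]+)\)", schedule)
    return {int(frame): float(value) for frame, value in pairs}

print(parse_schedule(strength_schedule))  # {0: 0.65, 60: 0.4}
```

Roughly speaking, a higher strength keeps more of the previous frame, so it behaves like a lower denoising strength in img2img.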
Any idea why my Hybrid Video tab keeps loading forever?
I do everything exactly the same.
The only thing is that I use an mp4 file.
I changed the Hybrid Video tab first now and it seems to work, but the output is definitely not the same as what I had in the img2img tab.
Some extensions were causing an issue; not sure which, but I disabled all of those that weren't needed.
@@nickaww Yeah, it is very important to disable the extensions. I even had to reinstall ControlNet and Deforum.
What if I don't have a "shared.py" file?
It's not necessary to do this anymore, I think.
If you have updated Automatic1111 or use SD 1.6, it works automatically, I think.
How about the same tutorial but with ComfyUI? Comfy has more than 15x faster(!!) performance than A1111 and no memory issues / "NaN bugs" / extension problems, so I wish I never had to go back to A1111.
I haven't dived into ComfyUI yet, but I definitely will in the future. It sounds very good that it is so much better. For me it also matters how many of my viewers use the web UI versus ComfyUI, because it's important to me that people who watch my videos can follow along. Thanks for your tip, and I hope to stay in touch.
@@digital_magic I just switched to Comfy recently when SDXL was released. It turned out to be impossible to use SDXL with A1111 on my 2060 SUPER 8GB, so I had no choice but to switch over to Comfy, and I was surprised how fast it runs. I think there will be MANY users with the same problem now, so in a couple of weeks many of them will either switch completely or use Comfy alongside A1111.
@@MikevomMars Thanks for your info, sounds like I need to switch as well.
Very nice. But if you're working on a Mac, and you're a user, not a programmer, is there "another" way?
I am sorry, I don't know that :-(
But what I do know is that you could do it in DaVinci Resolve Studio. You can also do it in the free version, but the result is not 100% as good as the Studio version.
I looked up a tutorial for you about how to do this in Resolve. At 1:07 the part starts where he explains retime and scaling; this is exactly how I do it in Resolve, though I set motion estimation to Speed Warp. What I do is: drag your clip into the Resolve timeline, right-click and choose Change Speed, and set the speed to 50%, so you get a sort of slow-motion effect. Then you do the step he shows at 1:07, where you create the interpolation, because Resolve's AI now creates an extra frame for every frame. Then I export the clip, import it again, and set the speed to 200%, and you're back to the normal speed of the clip. I hope this all helps.
czcams.com/video/UTuMZPxLJsg/video.html
Sir, can this work on a GTX 1050 Ti with 4 GB VRAM? Love your videos ❤
I am not sure; I have an 8 GB VRAM card, so maybe it'll work.
Otherwise maybe try it on RunDiffusion or ThinkDiffusion, the paid versions.
I don't have the tile option in ControlNet :
You have to update ControlNet and download the latest models. Here is the link: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
@@digital_magic I got it thx
@@AyakaVR perfect
Great video! Thanks so much for the mention. I thought I might trip you out a little more and show you a video I did without any ControlNet, no embeddings, no special noise multiplier edits (other than the one in Deforum), and basically nothing special. In fact, I even made it using TensorRT, which doesn't even support ControlNet. I made hybrid video before ControlNet existed. I did very little post-processing too; I think I just enlarged it and used a simple unsharp mask. czcams.com/users/shortsXROWPy3-ez4?feature=share
Hey there, I am honored to get a message from you. I really love your videos and your work on hybrid video. I would love to communicate with you by message; maybe we can connect on Instagram or Twitter, or maybe there's another way we could email each other?
Hi, until yesterday everything worked perfectly. I have edited about 30 videos thanks to this technique, and following these steps I have achieved great results. For some reason unknown to me, if I activate ControlNet in Deforum I get this error: Error: ''NoneType' object is not iterable', and I can't fix it; there is no information on the net either. Does anyone have news about this error?
have you tried disabling all big extensions and re-install controlnet and deforum?
Hi @@digital_magic, I tried a thousand options, among them the ones you mentioned, with no result. I managed to find a thread in the ControlNet GitHub forum in which several users reported the same error. The solution was to leave SD 1.5 aside and use SD 1.7.
For some reason, other users were also getting ControlNet + Deforum errors. Thank you for your attention.
@@vfgamex226 have you tried to ask in the deforum discord group?
Here is the link:
discord.com/invite/deforum
I always manage to solve all my problems there; there are many very experienced users there :-)
Amazing, thanks again @@digital_magic!!
Thanks for the video, it's great. However, I can't make it work; I get the following error all the time. I tried to Google it but can't find anything relevant. Perhaps someone else encountered it and knows how to fix it: Error: 'operands could not be broadcast together with shapes (540,960) (536,1) (540,960) '. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli.
I've had many comments like this, and most of the time it helps if you disable the init image and delete the path there, like I showed in the video. For some reason Stable Diffusion always creates a path, and this is something I couldn't change in the base Deforum settings file. Please let me know if this helped.
@@digital_magic Hello. At which minute of the video is the part you are talking about located?
I used Deforum 8 months ago, but I prefer doing it picture by picture.
Okay, I understand 🙂 I think there are many ways to create cool img2img videos.
5/5
I can't find shared.py 😭😭 It doesn't exist in my files.
Not necessary anymore; SD 1.6 sets it to 0.0 automatically.
How is your health??
I really appreciate that you ask about my health. It's slowly getting better, and at the moment I can work 3 to 4 hours per day on the computer, which makes me really happy. Hopefully by the end of November I can go skiing, biking, climbing, walking and do all the stuff that I like so much again. I wish you a very nice day.
Any google colab for this ?
I am not sure, but I guess so. I'm not familiar with Google Colab, sorry. But you could also use RunDiffusion or ThinkDiffusion; the cool thing about those websites is that they run the Automatic1111 web UI.
You need Colab Pro, otherwise Google will reset your notebook.
can't see the ControlNet list
you have to update control net and Download the latest models. Here is the link:huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
@@digital_magic thank you
Oddly, it did not come even close... I mean, in img2img it looked fine, but in Deforum it wasn't even close from the get-go.
Hey there, did you solve the problem already?
I had that in the beginning too: everything was fine in img2img, but then in Deforum it wasn't okay. What helped me was disabling all the big extensions and reinstalling ControlNet and Deforum. Please give this a try, and feel free to ask more questions if necessary.
@eranmahalu @@digital_magic That worked for me, disabling the other extra addons.
Holy crap, if it takes this many steps to make a video using SD, then I'm better off hand-painting my movie frame by frame.
Hahahahahahaha, the cool thing is that once you've done it once, you've got your own settings file and then it is much easier. For me, after creating three or four videos with this technique, it's not so complicated anymore.
Not as consistent as WarpFusion... but a good Deforum tutorial 👌🏻
Yes, you are totally right; it is not as consistent as WarpFusion, but we are getting close, and I think the potential of Deforum is immense.
...and Flowframes is Windows-only 🤭😵‍💫
Sorry, I didn't know that.