How To Use Prompt Travel With AnimateDiff | Automatic1111 Tutorial
- Added 16 June 2024
- I'm going to show you how to CONQUER AnimateDiff in Automatic1111 by using the new Prompt Travel feature! This will give you SO MUCH MORE control over what your animation consists of!
👑 BECOME A TYRANT & CONQUER YOUR LIFE:
👑 Join the Tyrant Empire: tyrantempire.com/register/tyr...
👑 Conquer A.I. Art: tyrantempire.com/prompt-gener...
🖥️*VIDEO LINKS*
TOPAZ AI: www.topazlabs.com/topaz-video...
PROMPT GENERATOR: tyrantempire.com/prompt-gener...
⛓️*SOCIAL LINKS:*
Instagram: / tyrinthetyrant
Twitter: / tythetyrant
Soundcloud: / rampayj
⏳*TIMESTAMPS*
00:00 | Introduction
01:04 | How To Install Prompt Travel
01:49 | How To Use Prompt Travel Extension
03:20 | How To Use A Master Prompt
03:33 | How To Quickly Adjust Weights
04:14 | Results So Far
04:55 | Prompt Generator
05:22 | Image To Image Prompt Travel
07:14 | Image To Image w/ Master Prompt
08:12 | Upscaling & Interpolating
08:52 | Final Result
09:32 | Outro
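As a quick reference for the Prompt Travel sections above: the feature keys prompts to frame numbers. A rough sketch of the syntax, where the frame indices and prompt text are purely illustrative and the first line is meant as a prompt shared by every frame (the exact format may differ slightly between the standalone extension and the newer built-in AnimateDiff support, so check the extension's README):

```
masterpiece, best quality, 1girl, upper body
0: smiling, eyes open
8: eyes closed
16: laughing, head tilted back
```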
🏷️*TAGS*
- Tyrin The Tyrant
- How To Make AI Art
- Automatic1111 Tutorial
#️⃣*HASHTAGS*
#ai
#aianimation
#aianimationtutorial
Join The Tyrant Empire Discord Community: discord.gg/duAudH7cAB
Love your channel - great info, direct, and honest. Thanks for the great content!
I learned a lot during this video and really get inspired to step up my creative work, this has a lot of potential, thank you for this upload Ty!
Saw some references for prompt travels in animatediff, but couldn't get it to work. I didn't know it was an extension in itself. You provided exactly what I was looking for. You're the boss!
You just got a new subscriber from this awesome video. Thanks for the great tutorial
Thank you so damn much, this was not exactly what I was looking for, but it was another way to accomplish what I wanted. You rock!
short and concise ... that is the way to do it ... ty for the info ...
Thank you so much for this video, awesome work. This will help me so much!
You're very welcome!
That sample in the beginning on the right. That's the holy grail my friend. If you can do that you can do anything. Where did you get it and have you figured out how to do that?
This method really works, and it's very interesting. Thank you.
🔥🔥🔥. Great video. Def new sub.
I agree topaz is a must have
💗 This is exactly what I need, thanks
Great tutorial! How would I be able to do prompt traveling but using multiple images for different frames? Say I've already generated 3 distinct images using txt2img. How can I use AnimateDiff to prompt travel from the first image through the third?
Hello! Excellent video! Thank you so much! I have some doubts: with the new version of AnimateDiff I cannot create img2vid from txt2img with ControlNet as you explained in another video; I only get many frames of the same image. And if I go to img2img as you explain in this video, I can create an animation from the image, and although I have all the parameters set as you say, the animation starts with an image similar to the one I put in, but ultimately it's not the same. Do I have to configure something in Settings? Or how can I resolve this? Thank you very much in advance!
P.S. If I just create a new animation with animatediff, without reference images or controlnet, everything turns out great, even if I use prompt travel
I have the same issues. ControlNet does not work either; if enabled, all images end up the same.
Very informative tutorial. There is one thing in AnimateDiff for adding video; please make a tutorial for that also. Thanks.
legendary
any suggestions on how to remove the sepia or brown tint that seems to always be in any animation i create, regardless of what settings or models i use?
I think that on the latest update of the AnimateDiff extension, there is built-in support for prompt travel.
I believe you are right 😅
Love your tutorials, they're very insightful and informative! I am having a problem though, after generating my image in txt2img, I transfer it over to img2img, but upon entering my prompt (which includes a prompt travel sequence), my results are coming out extremely blurry and without any movement at all. Not sure what I am doing wrong but any help would be much appreciated, thanks! :)
If I want to use this for a video with a voiceover and I need the images and transitions to have a specific length, can I control that in some way? Like, if I need clip 1 to be 4 seconds long and then morph into clip 2, which has to be 3 seconds long, can I do that?
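On timing clips like this: a segment's length is just its frame count divided by the output FPS, so desired durations can be converted into prompt travel keyframe positions. A minimal sketch, assuming an 8 FPS setting in the AnimateDiff panel (use whatever FPS you actually set there):

```python
# Convert desired clip durations (seconds) into prompt travel keyframes.
# fps is an assumption: it must match the FPS set in the AnimateDiff panel.
fps = 8
durations_s = [4, 3]  # clip 1 lasts 4 s, clip 2 lasts 3 s

keyframes = []
frame = 0
for d in durations_s:
    keyframes.append(frame)   # frame index where this clip's prompt starts
    frame += d * fps
total_frames = frame

print(keyframes)     # [0, 32] -> prompt 1 at frame 0, prompt 2 at frame 32
print(total_frames)  # 56 -> set Number of frames to 56
```

So with these numbers you would set Number of frames to 56 and place the second prompt at keyframe 32.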
I followed the steps from 5:35, but the img2img images just fade and don't animate at all. It works fine with txt2img, so not sure what's going wrong.
I have done each of the steps, but all I get is a single image with no animation. Any help on why this is happening?
Does Topaz Video AI have an API to work with it programmatically? I integrated Automatic1111 with a chatbot, but wanted to see if I could refine a video before sending it to the end user.
I love your channel! I was wondering, did you have to disable xformers to get AnimateDiff to work? I was getting a runtime error until I turned off xformers, but AnimateDiff is extremely slow: for 8 frames at 512 x 512 it takes 25 minutes. Is there a way to speed this up? I have an RTX 3080 GPU. How long does it take you to render 8 frames with AnimateDiff? Thank you!
I haven't disabled xformers; it hasn't caused me any issues. It takes around 5 minutes to generate a 24-frame animation with my 3060. I also have --medvram & --no-half-vae enabled.
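For anyone hunting for those flags: they go in the COMMANDLINE_ARGS line of your webui-user file (webui-user.bat on Windows, webui-user.sh on Linux/macOS). A sketch of the sh version:

```shell
# webui-user.sh excerpt: the launch flags mentioned in the reply above.
# --medvram trades some speed for lower VRAM use; --no-half-vae keeps the
# VAE in full precision, avoiding black/NaN outputs on some cards.
export COMMANDLINE_ARGS="--medvram --no-half-vae"
```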
So every time I use BadDream and FastNegativeV2, it makes my animation skip and jump between prompt travel prompts. How are you able to use those without it doing that?
Question: 24 frames = frame 0 to frame 23, right? Am I getting it wrong?
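For what it's worth, frame indices in these animations are zero-based (prompt travel keyframes start at 0), so 24 frames do run from frame 0 through frame 23. A quick check:

```python
# 24 frames indexed from 0: the last valid index is total - 1.
total_frames = 24
frames = list(range(total_frames))
print(frames[0])    # 0
print(frames[-1])   # 23
print(len(frames))  # 24
```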
Awesome! Is there a way to morph from one image to another?
Yes, I explain that technique in this video: czcams.com/video/HAAC36X-HEM/video.html
Thanks, I'll check it out. I can't get anything to work with image to image though; it always comes out faded. Do I need to use Canny or Depth? I don't know how you get these clean results. So no need to reinstall AnimateDiff like in your previous video, right? @@TyrinTheTyrant
The script always gets called when generating img2img or txt2img. I already get:
*** Error running before_process: /animatediff.py
Traceback (most recent call last):
File "/content/SDVN/modules/scripts.py", line 611, in before_process
.....
....and more
What source are you using?
Thanks
Hi, even following the tutorial, I can't create a GIF; it only creates single photos. Can you help me?
Why not just generate the entire thing in txt2img? No need to do that in img2img, right?
As far as I know, you don't even have to install the prompt travel extension, since it's already built into the A1111 AnimateDiff extension.
That would make sense.
This is great, but I don't have the patience; I'm already getting impatient with regular 2-second generations.
⭐ Can you do a tutorial where you do video2video with AnimateDiff in A1111?
Why do I get a grid of the images instead of a GIF?
Why does it keep generating 24 images but not a GIF?
This didn't work for me. I get this long einops error: "Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c"". I did everything right, but it doesn't work no matter what I change.
Wish it didn't break the PNG info for me; when I pull it off a prompt travel generation, the positive prompt area is empty 😂
Dear friend, how is this different from the loopback wave extension that was available to A1111 a while back?
I never used that extension, but looking at it, it seems like Loopback Wave is a way to produce animations in img2img without AnimateDiff, similar to Deforum.
What is the url for the tyrant prompt generator? Also is it your website?
Yes, it's my website.
tyrantempire.com/prompt-generator-landing-page/
how long does it typically take to generate an animation?
Depending on your GPU & the amount of extensions you are using, anywhere from 2 - 20 minutes.
Using this method, it usually takes less than 5 minutes for my 3060
I am not getting ANY motion at all in my animations!
Why does prompt travel not work for me? I installed the extension and I have prompt travel in my scripts.
Another extension might be interfering.
Why does the generated sequence suddenly change into something different after 12 frames? Number of frames: 24, FPS: 8.
The change turns the entire generated sequence into two different clips.
Make sure this setting is enabled: "Pad prompt/negative prompt to be same length".
Go to:
Settings > Optimizations
How do you make that video at the beginning of this video? The dancing animation that follows the music.
Tutorial for video to video is coming soon.
Where can I find a tutorial to use AnimateDiff with ControlNet like the videos at 0:02???
I'm making one
Also, what are the Bad Dream and Fast Negative styles? How do I get them?
They are textual inversions / styles that work perfectly for the Dreamshaper models.
Links:
FastNegativeV2: civitai.com/models/71961/fast-negative-embedding-fastnegativev2
Bad Dream & Unrealistic Dream: civitai.com/models/72437/baddream-unrealisticdream-negative-embeddings
But bro, they are under the Styles tab and not the Embeddings tab @@TyrinTheTyrant
Also, I see that you are using mm_sd_v15_v2.ckpt for AnimateDiff. Where do you store this model?
They are in my embeddings folder. I created a style with them by putting them in the negative prompt & then saving it as a style.
For the motion model, put it in the extensions > animatediff > models folder. @@piyushpatelsrm
Thanks Mate :) @@TyrinTheTyrant
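To make the folder layout from that reply concrete, here is a sketch in shell; the install path is an assumption (and on some installs the extension folder is named sd-webui-animatediff), so adjust it to your setup:

```shell
# Sketch: creating the AnimateDiff motion-model folder described above.
# WEBUI is an assumed install path; override it for your own setup.
WEBUI="${WEBUI:-$HOME/stable-diffusion-webui}"
MODELS="$WEBUI/extensions/animatediff/models"
mkdir -p "$MODELS"
# Then copy the downloaded motion module in, e.g.:
# cp ~/Downloads/mm_sd_v15_v2.ckpt "$MODELS/"
ls -d "$MODELS"
```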
Where’s the link for Topaz?
www.topazlabs.com/topaz-video-ai/ref/2339/
Where is the refer link for Topaz bro?
Wow, I forgot to put the link 🤦🏽
www.topazlabs.com/topaz-video-ai/ref/2339/