Shortrocity EP3: Overlaying Transcript on the Video
- Added Jul 27, 2024
- In this video I add a transcript overlay to my Shortrocity AI YouTube Shorts generator.
GitHub: github.com/unconv/shortrocity
Support: buymeacoffee.com/unconv
Consultations: www.buymeacoffee.com/unconv/e...
Memberships: www.buymeacoffee.com/unconv/m...
00:00 Intro & system message update
08:29 Plan for the video
09:30 Figuring out how to draw text with CV2
12:49 Styling and positioning the text
18:48 Drawing text based on timings
23:09 Drawing the narration on the video
29:32 Adding narration audio to video
31:30 Syncing transcript with narration
39:08 Putting it all together into a single script
- Science & Technology
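The chapters above cover drawing text based on timings and syncing the transcript with the narration. A minimal sketch of that timing logic (hypothetical names, not the actual Shortrocity code) would pick the caption to show at a given playback time:

```python
# Pick the caption to display at playback time t, given
# (start_seconds, end_seconds, text) tuples from the narration transcript.
def caption_at(timings, t):
    for start, end, text in timings:
        if start <= t < end:
            return text
    return None  # no caption visible at this moment

timings = [
    (0.0, 2.5, "Hello world"),
    (2.5, 5.0, "This is a short"),
]
```

Each video frame's timestamp would be passed through a function like this, and the returned text drawn onto the frame with CV2.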
This video establishes a strong foundation, offering ideas or techniques that others can further develop into something truly unique.
👍
Liked and Subbed. This is a very interesting project.
Destroy??!
Heck NO!
a DIAMOND! 💎 to be polished more is what it is.
I saw you got bored with it about 3/4 of the way through.
We're learning a Lot!
Please complete the polish!
Make this Diamond 💎 glisten!!
Very Insightful and interesting as usual. Love it! Just wondering if Blender's 3D models and rigs can be incorporated into this workflow. It would be intriguing to see how they can be manipulated within the context of this video creation process.
Use a text-to-video model, or use an image-to-video model with those images; use the YouTube API so it auto-uploads the shorts, and use web scraping to find articles. It could post about 1,000 shorts per day, and with that much content it's more probable to get a popular one by luck. You're lacking vision; keep improving this project.
What about using some stock footage API instead of generating images?
I haven't tried it myself, but it should be more cost-effective since the cost should be per month and not per token.
This way, you can ask GPT API to generate the search terms for the correct stock footage category and select a random stock video using their API.
(I guess this is more or less what InVideo AI does, and I wanted to build a tool like that myself)
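The stock-footage idea above could be sketched roughly like this (a sketch under assumptions: the Pexels video-search API is used as the example provider, the API key is omitted, and the search query would come from a GPT call that is not shown):

```python
from urllib.parse import urlencode

# Pexels video-search endpoint; an API key in the Authorization header
# is required for real requests (omitted here).
PEXELS_SEARCH = "https://api.pexels.com/videos/search"

def build_search_url(query, per_page=5):
    # `query` would be a search term generated by the GPT API,
    # e.g. "sunset beach" for a travel-themed short.
    return PEXELS_SEARCH + "?" + urlencode({"query": query, "per_page": per_page})
```

A random video from the returned results would then replace the generated image for that segment.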
Very good idea!
@@unconv please, please, please make a Part 4 with that!
I've already subscribed to your channel because of this series ;)
👍
As a video editor, this makes me calm; that was a very boring output :D As a programming enthusiast, that was interesting.
I'm feelin'..Ya!
Hope he keeps it up though.
Cause it's just a matter of time... Until someone else gets it right!
So keep saving your Bread and Cookies!
Can we see stable diffusion image generate in this project ? It could be amazing
Hello, my images are not resized. I generated 512x512 DALL·E 2 images because it's free :(
you opened a pandora box 😆
Oops 😬
on average, how much does it cost to generate a video like those that you tried?
A couple of dollars; that many images gets quite expensive.
Around $0.61 USD for a 6-image 57 second short + ElevenLabs cost ($5/ month)
@@unconv Yes, you are right. I made longer videos of 3 minutes, and TTS-1 is not that expensive, nor is GPT-3.5; it's DALL·E that is quite dear.
I have a system with an Nvidia GPU, and I want to try running Stable Diffusion locally. I don't know if I can use something like LiteLLM or Ollama to use the OpenAI protocol locally.
With solar power and SD running locally, you would only have the TTS to pay for, unless there is an open-source TTS model.
But with the current latest MoE models, you could potentially run it for free.
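On the LiteLLM/Ollama question: Ollama does expose an OpenAI-compatible API (at http://localhost:11434/v1 by default), so OpenAI-style clients mostly just need the base URL swapped. A minimal sketch (the model name is an example and must be pulled locally; the commented usage requires a running Ollama server):

```python
# Ollama's OpenAI-compatible API lives at this base URL by default.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def chat_request(model, prompt):
    # Same JSON body shape the OpenAI chat completions endpoint expects.
    return {
        "model": model,  # e.g. "llama3" -- example name, not from the repo
        "messages": [{"role": "user", "content": prompt}],
    }

# With the openai package installed, usage would look like (not run here):
# client = openai.OpenAI(base_url=OLLAMA_BASE_URL, api_key="ollama")
# client.chat.completions.create(**chat_request("llama3", "Write a short script"))
```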
I really wish you used any language other than Python. It's just low-class.
Alright. You choose the next language
I will wait for it; subbed now @@unconv
C# OR JS/TS @@unconv