Robotics Foundry - VJ Pack Compilation

  • Added 28. 08. 2024
  • This pack contains 83 VJ loops (71 GB)
    Download it here - www.patreon.co...
    The AI software is here and maturing quickly. And I think having it control all sorts of robot hardware is just around the corner. Things are gonna get weirder.
    I finally made the jump to using the A1111 Stable Diffusion web UI and it renders images so much faster thanks to the xFormers library being compatible with my GPU. Also there are tons of unique extensions that people have shared and I have much spelunking to do. I figured out how to run two instances of A1111 so that both of my GPUs can render different jobs, which is hugely beneficial.
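For anyone curious about the dual-instance setup, here's a minimal sketch, assuming a standard A1111 install and its --device-id / --port / --xformers command-line flags (the install path is just a placeholder, so check the flags against your own webui):

```python
# Minimal sketch: launch two A1111 instances, one pinned to each GPU, on separate ports.
# The flags and install path are assumptions -- verify against your webui's cmd args.
import subprocess

WEBUI_DIR = "stable-diffusion-webui"  # hypothetical install directory

procs = []
for gpu_id, port in [(0, 7860), (1, 7861)]:
    procs.append(subprocess.Popen(
        ["python", "launch.py",
         "--device-id", str(gpu_id),  # pin this instance to a single GPU
         "--port", str(port),         # give each instance its own UI port
         "--xformers"],               # memory-efficient attention
        cwd=WEBUI_DIR,
    ))

for p in procs:
    p.wait()  # keep both servers running until they exit
```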
    For the last few months I've been running backburner experiments inputting various videos into SD to see how it extrapolates upon a frame sequence. The "Stop Motion" scenes are the fruit of these experiments. But the main trouble I've had is that the exported frames are very jittery, which I'm typically not a fan of. This is because the input video frames are used as the noise source for the diffusion process, so I have to set the Denoising Strength between 0.6 and 0.8 to give SD enough room to extrapolate on top of the input frames. Of course SD has no temporal awareness; it assumes you're exporting a single frame, not an animated frame sequence, so all of this is a hack. But I found that if I chose a subject matter such as robots then I could embrace the stop motion animation vibe and match the feeling of incessant tech upgrades that we are currently living in.
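The frame-by-frame idea translates roughly to the sketch below using the Hugging Face diffusers library. I work inside the A1111 UI, so the model id, prompt, and paths here are purely illustrative:

```python
# Rough img2img stop-motion sketch: run each input video frame through SD as the
# init image, with denoising strength around 0.6-0.8 so SD can extrapolate on top.
import glob
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "industrial robot, studio photo, black background"  # placeholder prompt

for i, path in enumerate(sorted(glob.glob("input_frames/*.png"))):
    frame = Image.open(path).convert("RGB").resize((512, 512))
    generator = torch.Generator("cuda").manual_seed(1234)  # same seed for every frame
    result = pipe(
        prompt=prompt,
        image=frame,       # the video frame seeds the diffusion process
        strength=0.7,      # denoising strength in the 0.6-0.8 range
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    result.save(f"output_frames/{i:05d}.png")
```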
    I tried all sorts of different videos but ultimately my StyleGAN2 videos with a black background were by far the most successful for inputting into SD. I believe this is because my SG2 videos typically feature slowly morphing content. Plus the black background allows SD to focus the given text prompt onto a single object, narrowing its focus and shortcutting SD into strange new territories. But the real key is inputting a video whose content contextually parallels the SD text prompt, at least in overall form, and definitely for the necessary color palette. SD's dreaming is limited in that regard. Also, finding the ideal SD seed to lock down is important, since many seeds didn't match the style I was aiming for.
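Seed hunting can be as simple as a quick sweep before committing to a whole sequence. Continuing the img2img sketch above (pipe and prompt reused; the test frame path is a placeholder):

```python
# Render one test frame under a handful of seeds and eyeball which style to lock in.
test_frame = Image.open("input_frames/00000.png").convert("RGB").resize((512, 512))

for seed in [7, 42, 128, 999, 31337]:
    g = torch.Generator("cuda").manual_seed(seed)
    img = pipe(prompt=prompt, image=test_frame, strength=0.7, generator=g).images[0]
    img.save(f"seed_tests/seed_{seed}.png")
```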
    For the "Circuit Map" scenes I grabbed the CPU videos from the Machine Hallucinations pack and used it for the stop motion technique described above. From there I jammed with it in After Effects and couldn't resist applying all sorts of slitscan experiments to make it feel as though the circuits are alive in various ways. And of course applying some liberal use of color saturation and Deep Glow was useful in making it feel electric and pulsing with energy.
    For the "Factory Arm" scenes I wanted to have an industrial robot arm swinging around and insanely distorting. So I started by creating a text prompt in SD and then rendering out 13,833 images. For the first time I didn't curate the images of this dataset by hand and just left any images which showcased any strange croppings, which saved tons of time. In the past I've worried that StyleGAN2 would learn the undesired croppings but have since learned that with datasets this large these details tend to get averaged out by the gamma or I can just stay away from the seeds where it becomes visible. From there I did some transfer learning from the FFHQ-512 model and trained it using my dataset until 1296kimg.
    For the "Mecha Mirage" scenes I grabbed a bunch of videos from the Machine Hallucinations pack and applied the SD stop motion technique. These were quite satisfying since they were more in line with how I imagined SD could extrapolate and dream up strange new mutating machines. I think these videos look extra spicy when sped up 400% but I kept the original speed for all VJ-ing purposes. It is so bizarre what these AI tools can visualize, mashing together things that I would have never fathomed. Again I applied an X-axis mirror effect since the strange tech equipment takes on a new life, although this time I didn't use a traditional mirror effect since I flipped the X axis and then purposefully overlapped the two pieces of footage with a lighten blend mode. So you don't see a strict mirror line and better blends everything. And then the pixel stretch effect was a last minute addition that was some real tasty icing on the cake.. I think it's because machines are often symmetrical and so this really drives home that feeling. In the future I want to experiment with Stable WarpFusion but getting it to run locally is such a pain. Hello my AI friend, what did you have for lunch today?
    #vjloops #vjing #concertvisuals #ai #stylegan2 #machinelearning #stablediffusion #robot #aivideo #aianimation #morph #industrialrobots #robotics
