3D to AI - THIS is the REAL Power of AI

  • Published Jun 4, 2024
  • Using 3D to control AI scenes is very powerful. Watch me create a scene in Blender and then render it into an AI scene in Krita with Stable Diffusion XL Lightning.
    #### Links from the Video ####
    My 32 min. Patreon Video: / 101132172
    3D Model used: sketchfab.com/3d-models/aband...
    Krita AI install Video: • LCM for Krita - OVERPO...
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoffee.com/oliviotu...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
    AI Newsletter: oliviotutorials.podia.com/new...
    Support me on Patreon: / sarikas
  • Howto & Style

Comments • 173

  • @NorisSpecter · 2 months ago · +57

    After so many years of working in Blender, my heart flutters seeing new possibilities.

    • @johntnguyen1976 · 2 months ago · +1

      My thoughts exactly!

    • @talontoth4402 · 2 months ago · +3

      Just projection-map it onto the mesh, then bake to your UV set and clean up the texture afterwards. If a few different projections were made in the same style, you could probably get a large chunk of the texturing done very quickly; see the sketch after this thread. Sometimes I wish I still worked in 3D, with all the new stuff coming out.

    • @f4ust85 · 2 months ago · +3

      The "possibility" of hacks mimicking great in-depth texturing and rendering in 5 seconds via generators to compete with your job? Yay, so exciting.

    • @joefawcett2191 · 2 months ago · +5

      @@f4ust85 luddites can scream at traffic

    • @orangehatmusic225 · 2 months ago · +3

      Pretty soon you won't even need to know how to use Blender.
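
A minimal Blender Python (bpy) sketch of the camera-projection idea from the reply above. This is illustrative, not from the video; the layer and modifier names are placeholders, and it assumes the mesh already has a "UVMap" layer to bake into:

```python
import bpy

# Project an AI-generated render back onto the mesh from the camera's
# point of view, as a starting point for baking it to the UV set.
obj = bpy.context.active_object   # the mesh that was rendered
cam = bpy.context.scene.camera    # the camera the AI image matches

# Second UV layer to hold the camera projection
# (the original "UVMap" stays as the bake target).
proj_uv = obj.data.uv_layers.new(name="CamProjection")

# UV Project modifier maps the mesh through the camera.
mod = obj.modifiers.new(name="AIProjection", type='UV_PROJECT')
mod.uv_layer = proj_uv.name
mod.projectors[0].object = cam

# From here: sample the AI image through "CamProjection" in the material,
# select an Image Texture node pointing at your bake image on "UVMap",
# then bake in Cycles, e.g. bpy.ops.object.bake(type='DIFFUSE').
```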

  • @novantha1 · 2 months ago · +16

    Using something like ReShade, I could see an interesting situation where a person could get depth maps from games and re-skin them for really cool screenshots, or re-imaginings of old games.

  • @OmerAbdalla · 2 months ago · +25

    That last move you made, rotating the scene in Krita, sold the workflow to me. Now I need to go back and refresh my old Blender brain muscles. Thank you Olivio!

    • @Av-uv6xu · 2 months ago · +15

      The last move was rotating in Blender.

    • @TheSnakecarver · 2 months ago · +4

      He switched back to Blender to rotate the scene.

    • @OmerAbdalla · 2 months ago · +4

      @Av-uv6xu Thank you for pointing that out. You are right, we are not importing a 3D model into Krita. That means a few additional steps to get the scene you like, and I am OK with that.

    • @DeltaZ10000 · 2 months ago

      Then you should check out an add-on for Krita called "Blender Layer", which syncs your Blender scene into the AI Diffusion plugin in Krita in real time.

  • @I-Dophler · 2 months ago · +7

    Having spent years mastering Blender, the sense of excitement I feel now, as I look towards the horizon filled with new opportunities, is truly unparalleled. The journey with Blender has been a transformative one, marked by continuous learning and growth. Today, as I stand on this precipice, eager to dive into what lies ahead, the prospect of exploring these new avenues fills me with an exhilarating sense of anticipation.

  • @Marcus_Ramour · 2 months ago · +4

    Great workflow & thanks for sharing 👍🏻

  • @renzocheesman6844 · 2 months ago · +7

    This is freaking amazing, thanks for sharing!

  • @dimamitlenko1830 · 2 months ago · +3

    Thank you, Olivio! That is amazing!

  • @jeffg4686 · 2 months ago · +7

    Not sure if it would give better results, but you can get a depth map by enabling the Z pass and then using the compositor.
    In the compositor, you can feed the Depth output socket into a Normalize node, then into a Math node (subtract from 1, to invert), then into the output (a sketch follows after this thread).
    It might yield better results than Mist, but I can't really say for sure, since SD just uses the "overall" feel of it.

    • @fernandolener1106 · 2 months ago

      I think he tried to simplify and avoid Blender specifics.
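
A minimal bpy sketch of the compositor setup described in this thread, for those who do want the Blender specifics; illustrative only, assuming the default scene context:

```python
import bpy

scene = bpy.context.scene
bpy.context.view_layer.use_pass_z = True  # enable the Z (depth) pass

# Render Layers -> Normalize -> (1 - depth) -> Composite
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")
norm = tree.nodes.new("CompositorNodeNormalize")
invert = tree.nodes.new("CompositorNodeMath")
invert.operation = 'SUBTRACT'
invert.inputs[0].default_value = 1.0  # 1 - depth, so near ends up bright
out = tree.nodes.new("CompositorNodeComposite")

tree.links.new(rl.outputs["Depth"], norm.inputs[0])
tree.links.new(norm.outputs[0], invert.inputs[1])
tree.links.new(invert.outputs[0], out.inputs["Image"])
```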

  • @aysenkocakabak7703 · 2 months ago · +4

    This is a really nice technique, I wanna use it in my next set design project! Thank you ❤

  • @user-kq2es6ip2g · 2 months ago · +2

    Very cool, thanks Olivio! I always love your videos for when I need to find out what awesome new stuff is going on in AI!

  • @AironExTv · 2 months ago · +3

    Wow. Thanks for sharing this.

  • @Patheticbutharmless · 2 months ago · +2

    By the way, in Blender, in the World settings for the Mist pass, just set the "Falloff" to "Inverse Quadratic" and you get the right b/w gradient in the render (a sketch follows below).
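
The same Mist setup expressed as a bpy sketch; the start and depth values are placeholders to adjust per scene:

```python
import bpy

bpy.context.view_layer.use_pass_mist = True  # enable the Mist pass

mist = bpy.context.scene.world.mist_settings
mist.falloff = 'INVERSE_QUADRATIC'  # the falloff mentioned above
mist.start = 0.0    # distance where the mist begins (placeholder)
mist.depth = 25.0   # distance over which it fades out (placeholder)
```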

  • @MrSheduur · 2 months ago · +4

    Nice to see we are already at that point. This can be nice for future AI-based lookdev, once we achieve a decent amount of consistency in the style of objects. Would love to see some examples of 3D meshes turned into depth maps and how consistent they look.

  • @moviecartoonworld4459 · 2 months ago · +4

    I really get a lot out of these ever-new and magical lectures. Thank you!

  • @homayounkarimpourr3923 · a month ago

    Beautiful idea, thanks a lot. How could we do the same process with an animated scene?

  • @user-kt2kz5qg4z · a month ago · +1

    Wonderful. Thank you. Very instructive and it seems a much better approach. More flexible.

  • @waurbenyeger · 2 months ago · +2

    Pro tip: hold down the Windows key and tap the plus key to open the Magnifier tool (or click the Start button and type "magnifier"); then press Ctrl + Alt + I to invert the colors of your screen while making your adjustments in Blender. Just make sure to set the Magnifier to 100% so that it's not zoomed in and shows your full screen.

  • @RonaldH · a month ago

    Pretty cool. But, as it's based on ComfyUI in the background, is it possible with this AI plugin inside of Krita to save the workflow? ComfyUI embeds the workflow that created an image inside the image; can that be done here?

  • @shahidmahmood7252 · 2 months ago · +2

    Always something new to learn in Blender, but this feels like a quantum leap using AI. Thank you.

  • @nekoanomaly · a month ago

    Love your tutorials

  • @erdbeerbus · 2 months ago · +1

    This is indeed a great workflow, thank you! Is there a way to animate things like this? Maybe there is an option to save a sequence of depth images and import it as a sequence into Krita... would be great to see a way. Have an egg-citing Easter!

  • @sb6934 · 2 months ago · +2

    Thanks!

  • @juanjesusligero391 · 2 months ago · +2

    I'm happy to see Blender + Krita + ComfyUI! Open Source software rules! ^_^

  • @Max-Blaze · 2 months ago · +2

    It's funny that you stopped at the difficult part: when you change the perspective and use img2img (depth) again, it still tends not to keep the same building, unless it is already pre-colored to an extent.

    • @liialuuna · 2 months ago

      Of course it's not keeping the same building. Just use Blender for your project instead of AI.

    • @Max-Blaze · 2 months ago · +2

      @liialuuna This is a combination of both, and soon it will probably be possible.

    • @lefourbe5596 · 2 months ago · +1

      ​@@liialuuna just use blender...
      Just do this... Still missing the point.

  • @ilanlee3025 · a month ago

    Great vid, subscribed

  • @pandelik3450 · 2 months ago · +1

    I don't think you mentioned resolution when exporting from Blender. What should be taken into consideration? Should I always export at a standard resolution for SD 1.5 or SDXL, respectively?

  • @Sylfa · 2 months ago · +2

    There's also a plugin for rendering directly with AI in Blender. Not sure how it compares, though, as I've not tried it.
    It would enable a swifter workflow.

    • @OlivioSarikas · 2 months ago · +2

      Yes, but Krita has all the power of a full image-editing program, giving so many more options to quickly adapt and edit. This video only shows the 3D input part. But if you check out my recent art jam live streams, you can see me using Krita for painting as well.

  • @Jackripster69 · 2 months ago · +1

    This is very cool; let's hope for a simpler workflow soon, though.
    Very handy to just model and allow AI to take care of imagining the texturing, lighting, etc.

  • @Mranshumansinghr · a month ago

    I have been learning Blender for a year now. Now I have jumped onto the ComfyUI and Stable Diffusion bandwagon. This is exactly what I was looking for.

  • @ceegeevibes1335 · a month ago

    As a Blender-head, I'm very happy you are directing attention to Blender. This is truly one of the best workflows; the depth map is so powerful.

  • @GS-ef5ht · 2 months ago · +1

    Amazing!

  • @I-Dophler · 2 months ago · +2

    In a world where the boundaries of reality stretch and bend, we journey from the tangible, three-dimensional spaces we know so well into the vast, uncharted territories of Artificial Intelligence. This narrative unfolds, revealing the genuine, unparalleled potency that AI holds, not just as a tool, but as a transformative force reshaping our world, our perceptions, and perhaps, our very essence.

  • @Thomas_Leo · 2 months ago · +3

    The future is now! Amazing stuff. 👍

  • @chickenmadness1732 · a month ago

    This is huge for game developers and artists. It's going to save so much time.

  • @alejmc · a month ago · +1

    This is something. You could even get some texture ideas to project back onto the model later on.
    It is at least a beyond-polished starting point.
    I wonder how it would look with simpler boxes and shapes, and whether it could give you a "detailed" home out of boxes that we could then use back in Blender to actually model it out.

    • @chickenmadness1732 · a month ago · +1

      Yeah, rotate the image until you're perpendicular to the wall, then spam a few AI gens, and you have a bunch of material you can splice together in Photoshop. Really huge time saver.

  • @alxleiva · 2 months ago · +1

    I find that in order to preserve extra details it's better to use depth plus Canny maps. You also need to adjust a bit what's more important depending on your image; sometimes the prompt is more important, sometimes it's ControlNet.

  • @seduttoridaincubo1722 · 21 days ago

    Sorry, I installed Stable Diffusion and all the related stuff, but there's no way to find a folder or file named A1111 on my PC!! I also tried to write the path to the Stable Diffusion folder, but no use...

  • @shongchen · 2 months ago

    Great work, but you don't need ControlNet?

  • @Suthriel · 2 months ago

    Krita and AI are really super, especially since it just requires a few clicks to install :) I just wonder, is there any AI, LoRA, or checkpoint that can cut out characters to separate them from the background? Like the depth map, just also throwing out the background.

  • @R0TH81LLY · 2 months ago · +1

    I miss the Affinity Photo tutorials. I hope you get well soon.

  • @marcihuppi · 2 months ago

    Quick tip for Cinema 4D users: you can create a material with a b/w gradient in the luminance channel and use it in the render settings with material override to convert your whole scene to a depth map ♥ Extra tip: you can hook the start and end coordinates of the gradient to the positions of two separate null objects via XPresso to control the gradient.

  • @oneroom2660 · a month ago

    This technique may answer a lot about our own reality. The black-and-white misty place is referred to as the Ethereal plane; it's somewhat of a 2.5D dimension. And you have a 4D dimension, which is a collective consciousness that projects a 3D reality we all agreed on, like a shared dream.

  • @gabrielhabdulea9904 · 13 days ago

    Do I need an NVIDIA GPU for this to work?

  • @BabylonBaller · 2 months ago · +1

    Interesting 👍👍

  • @seduttoridaincubo1722 · a month ago

    That's EXACTLY what I was looking for...

  • @nathanbanks2354 · 2 months ago · +2

    Finally installed the Krita plugin. It's *so* powerful. The AI selection mechanism is great too; it's easy to touch up only the part I want to change. I can also mask out various layers if one AI generation is better in one part than another. Guess I should experiment with depth maps and poses.
    My favorite part is the upscale. It takes forever, but it seems to reuse the prompt to fill in detail while keeping the image mostly the same. A few minutes later on an old P5000 graphics card, and I've got a great 4K image. It's much more powerful than RealESRGAN. It's also more powerful than DALL-E 3, simply because you can modify images, though it's tempting to start with a DALL-E 3 image and tweak it in Krita AI Diffusion.

    • @Av-uv6xu · 2 months ago

      Is the Krita plugin going to work fine on a GTX 1070?

  • @I-Dophler · 2 months ago · +1

    The video effectively demonstrates how to blend 3D modeling with AI for creative compositions, offering clear, step-by-step instructions suitable for a range of skill levels. It leverages free tools like Blender, making it accessible, and showcases innovative applications of AI in art, encouraging experimentation and creativity.

  • @cho7official55 · a month ago

    I'm looking forward to seeing the other way around: from 2D images to 3D.

  • @supersupersocco · 2 months ago

    Nice! Can you render a sequence of images?

  • @MaybeLoveHate · 2 months ago · +2

    Really cool. Is there a way to generate textures from the output, or did I miss that part? That's what I get for working and playing at the same time, haha.

  • @DigitalForest0 · 2 months ago · +1

    Olivio, how did you know I was looking for videos like this! Superb! Please make more videos about marrying AI-generated photos and 3D scenes! Thank you!

  • @tanveerahmad2865 · 2 months ago · +1

    woww.

  • @spidermarcusPOP · 2 months ago

    Thanks, Olivio! Nice to see you.

    • @OlivioSarikas · 2 months ago · +1

      Hey Markus, you're still around! Write me sometime :)

    • @spidermarcusPOP · 2 months ago

      @OlivioSarikas Yes, I'm still around ^^ But I can't get the hang of this new version of Blender at all... o.O

  • @pn4960 · 4 days ago

    This is why I learned to use blender ^^

  • @johnanchor2415 · a month ago · +1

    Why don't you just slap the depth map into ComfyUI and use it as a ControlNet depth map? Wouldn't that be much easier?

  • @EvokAi · 2 months ago · +2

    Can you make a new A1111 installation tutorial for newbies? I really can't follow those earlier tutorials.

    • @lefourbe5596 · 2 months ago

      Search on YouTube for:
      "Royal Skies 1 min install stable diffusion".
      But instead of AUTOMATIC1111, look at ForgeUI. Same interface, but faster.

  • @chrsl3 · 2 months ago · +1

    wow

  • @LockedPuppy · a month ago

    Sooner than we think, this is how games will be rendered...
    and not long after, not even 3D models will be needed anymore, just some persistence for the AI.
    Now all we need is a holodeck.

  • @briboy2009 · 16 days ago

    I have Blender and have never used it; watching this, I can see why.

  • @siddharthpanchal718 · a month ago

    Gone are the days when they made expensive movie sets.

  • @spearcy · 2 months ago

    I don’t get too excited about all the cool single images people can generate. What’s fun to me is the video creation potential of things like this.

    • @CNCAddict · 2 months ago

      This is the future of video game rendering, which will be incredible.

  • @cmeooo · 2 months ago

    What is the music at the 14-second mark?

    • @OlivioSarikas · 2 months ago

      It's simply called "The Heavy Metal" by fatbunny on Envato Elements.

  • @legacylee · 2 months ago

    also, I couldn't resist.... "yaml be there..."

  • @pasindumadulupura8462 · 2 months ago

    Time to relearn Blender, I guess. Let's make some lemon juice. Cheers, Olivio.

  • @HiThere.ItsTom · 2 months ago

    Can it generate an image sequence?

    • @OlivioSarikas · 2 months ago · +1

      Should be possible. You could try using IPAdapter or img2img to keep the style more consistent, or a LoRA trained on the style you want.

  • @xevenau · 2 months ago · +1

    Man, I wish I hadn't sold my 3D printer.

  • @jasonjohnson4133 · a month ago

    Someone should do Magic Eye with this depth technique 😅

  • @serhiikalenik1155 · 2 months ago · +1

    I do something similar, but I use compositing and rendering with Z depth (Blender 3D). Then, when I have the image and the depth, I send the result to AUTOMATIC1111 img2img with ControlNet depth (a sketch of that hand-off follows after this thread).
    Btw, I tried to render dragons, but even when I had a 3D model with textures, plus depth and Canny, I couldn't get a fine result from the AI.

    • @OlivioSarikas · 2 months ago

      Does Z depth also have ranges?

    • @serhiikalenik1155 · 2 months ago

      @OlivioSarikas Sorry, but I'm not sure what exactly you mean by "ranges". If you mean "darker is deeper", yes, it has that, but not such a high-contrast image as in your video.

    • @garrettbates2639 · 2 months ago

      @OlivioSarikas What exactly do you mean by ranges? Are you talking about the mist start/stop that you were changing?
      The Z depth doesn't have that, because it is a rendering of the actual z-buffer from the graphics pipeline. But I suspect you should be able to accomplish the same sort of result with some additional nodes in the compositing editor.

    • @tstone9151 · 2 months ago

      I've been using this method ever since ControlNet started supporting depth maps.
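
A rough Python sketch of that hand-off, assuming a local AUTOMATIC1111 instance started with the --api flag and the ControlNet extension installed; the prompt, file names, and ControlNet model name are placeholders:

```python
import base64
import requests

A1111 = "http://127.0.0.1:7860"  # default local address

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "prompt": "detailed stone dragon, dramatic lighting",  # placeholder
    "init_images": [b64("render.png")],  # the Blender beauty render
    "denoising_strength": 0.6,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": b64("zdepth.png"),  # the Z-depth render
                "module": "none",  # depth is pre-rendered, skip preprocessing
                "model": "control_v11f1p_sd15_depth",  # placeholder model name
                "weight": 1.0,
            }]
        }
    },
}

r = requests.post(f"{A1111}/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("out.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```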

  • @user-kt7uz9xc5m · 2 months ago

    Awesome. So you can download any picture, AI recognizes it and makes a 3D model, in a different style, and you can tell it what you like or don't like and moderate this 3D world 😮 And even download scenes with actors, like from any movie, choose avatars and even create movies, moderating scenes like a movie director 😮

    • @OlivioSarikas · 2 months ago

      No, this goes in the other direction: turning 3D models into 2D images.

    • @user-kt7uz9xc5m · 2 months ago

      @OlivioSarikas A "prnt scrn" button? 😳👍

  • @restwiththetablet · a month ago

    You can reproject this onto a texture and become an AI texture artist :)

  • @Hudson1615 · a month ago

    Definitely useful for creating consistent concept art. But that's it; not useful for other 3D tasks.

    • @OlivioSarikas · a month ago · +1

      it's "3D to Ai" not "Ai to 3D". So this isn't intended for 3D tasks, it's intended to use 3D as a versatile input to create better 2D images

    • @Hudson1615 · a month ago

      ​@@OlivioSarikas Yup. As I said, useful for creating things like concept art.

  • @SosyalMedyaArge-so5bs · 2 months ago

    That's great, now AI can actually work.

  • @I-Dophler · 2 months ago

    Greetings, my most excellent companions! I'm eager to know, how are each of you navigating the intricate pathways of our vast and unpredictable temporal journey today?

  • @KDawg5000 · 2 months ago · +2

    If Krita is going to fully embrace AI, it would be nice if they incorporated a tool that works like (and as well as) Magnific for upscaling & detail.

    • @DeltaZ10000 · 2 months ago

      With the plugin he uses in Krita you can switch from Live to Generate and to Upscale.

    • @KDawg5000 · 2 months ago

      @DeltaZ10000 What is your opinion of the upscale results?

    • @DeltaZ10000 · 2 months ago

      @KDawg5000 I'm using 4x NMKD Superscale, which comes with the AI Diffusion for Krita install. I like it, and that I can weight it to add details or not.

  • @VisibleMRJ · a month ago

    Can't you just flip the depth map in Blender?

  • @CCoburn3 · 2 months ago

    Great video.

  • @nolanzor · 2 months ago · +1

    The intro do be banging

  • @MysteryFinery · 2 months ago

    Oh, I thought it was an actual 3D model.

  • @Amelia_PC · 2 months ago · +1

    I've actually been using 3D with Stable Diffusion + ControlNet for a few months now. While 3D works alright with rendered line art, using the image as a guide in ControlNet for anime line art, it still has limitations. AI-generated art can struggle with clean lines (eew, enlarged images) and complex perspectives, especially in building compositions. But it works fine for... that boring perspective, mainly with the whole building on screen.
    (AI is still dumb af, and I expected it to be way better than what we have today. Apparently, they will never fix the same issues.)

    • @liialuuna · 2 months ago

      Yes, it's dumb af, but you can make some complex buildings too; a lot depends on your ControlNet settings and the details in your depth map.

    • @Amelia_PC · 2 months ago · +2

      @liialuuna Yup, I tried depth with MLSD, using line art rendered by 3ds Max as a guide, along with all the other possible combinations. But here's the kicker: I was trying to use it for comics, which demand the most complex perspectives possible. AI just can't handle that yet. So I gave up and now only use AI to add textures to my 3D renders and drawings. It's definitely unusable for serious gigs at this point in time.

  • @riufq · 2 months ago

    So it's 3D to 2D AI, right?

  • @23dsin · 2 months ago

    Great. I was doing this for the last 12 months. I thought it was a common trick. ;D

  • @mostafamostafa-fi7kr · 2 months ago

    As a 3D artist, I already did this, just after ControlNet was invented.

  • @jonrich9675 · a month ago

    Just ask Midjourney for this feature.

  • @rakly3473 · 2 months ago · +5

    Hmmm, I wonder where you got this idea?! Hehehe. Spoiler: it was me in the live-stream chat, mentioning untextured low-poly for depth maps ;-) You can check the replay of the chat on the channel.
    Seeing all these positive reactions: you're all welcome! :P

    • @audiogus2651 · 2 months ago

      This has been the most basic use of ControlNet for well over a year now.

    • @rakly3473 · 2 months ago

      @audiogus2651 Not from Blender.

  • @StephenShafferengineer · 2 months ago · +1

    Why do you rock soo hard!!??? 😂

  • @HansEgonMattek · 2 months ago

    Don't just turn the view... that's not accurate.
    You should rotate the model instead: press R, X, 180, then R, Z, 180, or press R, hold Ctrl, and drag the mouse to rotate aligned to the grid.
    And don't forget to apply rotation and scale (Ctrl+A); a sketch follows below.
    If you get into more advanced rendering, e.g. with shading, normal maps and so on, it will cause problems, so better to do it the right way from the beginning.
    Up and down are very important in the 3D universe.
    And you don't have to hide the overlays; they will not be shown in the render.
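
The same fix as a minimal bpy sketch of those keystrokes:

```python
import bpy
import math

obj = bpy.context.active_object

# R, X, 180 then R, Z, 180: rotate the model itself, not the view.
obj.rotation_euler.rotate_axis('X', math.radians(180))
obj.rotation_euler.rotate_axis('Z', math.radians(180))

# Ctrl+A: apply rotation and scale so normals and maps behave later.
bpy.ops.object.transform_apply(location=False, rotation=True, scale=True)
```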

  • @NotThatOlivia · 2 months ago

    This can be done much more easily in other 3D packages (I mean the whole export/render-to-depth-map part),
    but not all of them are free, of course ;)

  • @InTheCity3D · 2 months ago

    Just use the C4D plugin for SD (and use Forge); it's vastly superior and will render at 4K.

  • @analia390 · a month ago

    The problem is that when you rotate, it doesn't matter if you use the same seed; it will give you a completely different image.

    • @OlivioSarikas · a month ago · +1

      You could try using IPAdapter to get a style similar to the image you created and liked. However, the details are still going to be different; that's just how AI works at this point in time.

  • @olivere5497 · 2 months ago

    Blender is at 4.1??? Does anyone else think they went from 2.x to 4.x too quickly?

  • @jacosteenkamp8748 · 2 months ago

    As cool as this might be and look, be warned: if you want to use this in any agency setting, make sure you know how to edit your files exactly to client spec, and that you know how to make this an open file that the client's creative team can amend and alter. I have fired more than enough junior artists who have incredible portfolios and then can't do the work, because they relied on AI.

  • @FulguroGeek · a month ago

    To be really fair, I think 3D models to AI images is a weird, not very useful process, since AI images are already amazing... What I'm waiting for is a really powerful image-to-3D-model with textures, lol...

    • @OlivioSarikas · a month ago

      The point of this is control, and in that regard AI is anything but amazing. Try to create an AI image from a specific viewing angle, using a specific focal length, without guidance from any image input. You can't.

  • @ozzyosbourne6 · 2 months ago

    To be honest, just using 3D software to create a single image or small animation is not a logical thing to learn. I see lots of people put their time into learning Blender and trying to create environments, etc. However, AI can easily create short videos or a single image. My advice to people is to learn the technical parts: learn a game engine, shader creation, etc. There are so many free videos about Blender, but most people cannot make money from it at all. Don't waste your time. As for this video, it can be helpful for architects or level designers who create a blockout scene for a game.

  • @nicholaspostlethwaite9554 · 2 months ago · +18

    Interesting, but why not just render it in Blender? Then you have actual control, not some random AI generation. Better still, make the model yourself. Is it that modern generations all want no work and no input, just easy, lazy results? End results are not really the point! Making is the pleasure. It is like pretending that flicking paint at a canvas is art.

    • @ibrahimsojiv88 · 2 months ago

      No control in a project: it creates random stuff that isn't art-directable and then needs many changes; 99.9% of the time it needs to be specific.

    • @EvokAi · 2 months ago · +3

      The only way to stop it is not to support it

    • @nicholaspostlethwaite9554 · 2 months ago · +7

      @EvokAi Not really; it has been invented and will be used. Individuals will soon not even know they are consuming AI content.

    • @liialuuna · 2 months ago · +8

      You are right! Personally, I have wasted a lot of time experimenting with AI while I could have learned to get better in Blender. You can make some quick images with the technique described here, but it's all just random stuff and you have no control. It's great for exploring some ideas, though.

    • @travissmith5994 · 2 months ago · +8

      There's an infinite number of things a person can create, but a finite amount of time and energy they can devote to creating. There is joy in the process of creating things, but I personally struggle with finishing projects when I start getting a lot of ideas for other projects piling up. Using AI to speed up the creation process - even if it means giving up some control - is worth it for those cases. Rendering in Blender requires you to nail down every single detail of the scene (detailed enough model and textures, lighting, grass blowing in the wind, sunsets, etc), which takes a lot of time. If you don't care about the minutiae of the scene but focus more on the big picture, then the time to create the minutiae doesn't bring you joy either. If you can control the big picture using 3D, then use AI to fill in the minutiae, you can complete the work to your liking and feel satisfied with it.
      On some basic level the art community understands this already. It's why there are marketplaces for 3D models and textures for other artists to use. It's not "lazy" to use those provided resources, it's just picking what your project needs.

  • @strong8705 · 2 months ago

    No, no, no! Ten minutes of precise clicking in different apps, and we call it the power of AI.
    Just wait a few months and it should really help you.
    A bit selfish, but the last time I checked, the goal was to be served, not to serve apps.

  • @matbeedotcom · 2 months ago

    It's cool, but it really needs to texture the entire model.