
Crowds in Blender - Ask Blender Bob EP4

  • Uploaded 25 Jul 2024
  • Let's see a few techniques that are used in the VFX world to create CG crowds.
    Anima Crowd software:
    secure.axyz-design.com
    Meshroom photogrammetry:
    alicevision.org/#meshroom

Comments • 180

  • @DECODEDVFX
    @DECODEDVFX 2 years ago +7

    I really liked this one.

  • @pipeliner8969
    @pipeliner8969 2 years ago +14

    I never worked with crowds so this was very insightful

  • @leongachuiri3626
    @leongachuiri3626 2 years ago +1

    This is the best Tutorial I have seen in 2022... Thanks Blender Bob

  • @mrkumaran
    @mrkumaran 9 months ago +1

    wow, i am amazed at your knowledge and skill

  • @brokensigil
    @brokensigil 2 years ago +64

    I think the 1.57 value comes from radian Pi/2 which means 90 degrees

    • @d0ppelgaenger115
      @d0ppelgaenger115 2 years ago +6

      Yes, that is the reason. You could also use a rotate euler node and set it to 90°. That's more intuitive for most people, instead of working with radians

    • @traderz13
      @traderz13 2 years ago +1

      True dat

    • @ThadeousM
      @ThadeousM 2 years ago +3

      Those Erindale tuts paying off

    • @baldpolnareff7224
      @baldpolnareff7224 2 years ago

      Yep, that's it, vector operations are done in radians
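
The radians explanation above can be checked with plain Python outside Blender: 90 degrees is pi/2 radians, roughly 1.5707963, which rounds to the 1.57 typed into the node.

```python
import math

# 90 degrees converted to radians, the unit Blender's rotation sockets use
angle = math.radians(90)

# pi/2 is the exact value; 1.57 is just that value rounded to two decimals
assert math.isclose(angle, math.pi / 2)
print(round(angle, 2))  # 1.57
```

Typing "pi/2" directly into a Blender value field gives the same result without memorizing the decimals.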

  • @GaryParris
    @GaryParris 2 years ago +2

    Excellent as ever BlenderBob, it's always great to see people troubleshooting and explaining it and potentially being given information from blender community that helps you achieve your goals to pass on to all. love your work :O) hope everything is running smoothly for you, love seeing what you can do with blender rather than just saying nope it can't be done. not going to try mentality! So thanks, really appreciate a true professional putting time and effort into making it work for them! :)

  • @johancc22
    @johancc22 2 years ago +4

    You just bring amazing content. Thanks a lot blender Bob!

  • @volktanya
    @volktanya 1 year ago +1

    Awesome and as always with a great sense of humor!

  • @BeratTurkbkmaz
    @BeratTurkbkmaz 2 years ago +4

    That is the tutorial I was looking for. Your tutorials are full of great knowledge as always. Thanks for sharing your know-how with us.

  • @Maarten-Nauta
    @Maarten-Nauta 2 years ago +1

    Wow the video geometry was crazy

  • @LPFan4
    @LPFan4 2 years ago +2

    I didn't know Blender could be used like that. Thanks!

  • @im_Dafox
    @im_Dafox 2 years ago +2

    It might seem like I'm overreacting on this, but thank you so, so much for taking the time to explain all the vector, Euler XYZ nodes etc. I've been avoiding geonodes since the beginning because they always looked so complicated, even with dedicated tutorials (I know I'm missing out on a big chunk of Blender, but heh). Your explanation was so clear and easy to understand, I really can't thank you enough for that :]

  • @ciso
    @ciso 2 years ago +5

    Very interesting to hear about video-grammetry 🍪

  • @marcinderewonko9124
    @marcinderewonko9124 2 years ago +1

    Thanks, as always high quality educational content.

  • @billmurray7676
    @billmurray7676 2 years ago +2

    Wow, I didn't know videogrammetry was already used by studios, it's amazing! now imagine in a few years we'll be able to combine that with AI retopo and then even the sky won't be the limit lol

  • @SteveWarner
    @SteveWarner 2 years ago +9

    Awesome video as always. Just a heads up regarding rigged people with geometry nodes though. There's no need to export to Alembic. You can use your rigged character meshes. Just put the armatures in one collection and the skinned meshes in another. Then drag the skinned mesh collection into the Geometry Nodes and use that as your mesh instance. You can even use multi-mesh characters (e.g. head, shirt, pants, etc. all as separate meshes) so long as they're in their own sub-collection within the Character collection. For example, if you have a collection called "Characters" you can put sub-collections in that called "Character 1" "Character 2" "Character 3" etc. Each sub-collection will be treated as a unique object. Anima tends to export out single meshes, so this shouldn't be an issue. But if you're using Character Creator to generate mid to close-up quality people, they'll typically come in with all the parts. Also, Geometry Nodes work great for crowds, but particles work just as well, especially for large stadiums. We created a CG version of Bristol Motor Speedway last year and populated it with 150,000 high poly characters using particles as we weren't familiar with Geo Nodes at the time.

    • @BlenderBob
      @BlenderBob 2 years ago +3

      Interesting and good to know. But with particles you won't be able to have the characters orient themselves to an empty. Unless you know how to do it.

    • @BlenderBob
      @BlenderBob 2 years ago +2

      I’m not sure how Blender would manage to handle 120 rigged characters. The other ones are all instances but you still need to update the original ones.

    • @SteveWarner
      @SteveWarner 2 years ago +9

      ​@@BlenderBob It works, but Blender doesn't like it. The company I work for makes theme park rides. One of them is a 120-seat theater called the Motion Theater. We just did a visualization for this ride with 120 high poly (~80,000 poly) rigged people in the seats on the ride. Blender slows to a crawl and is all but unusable, but it still renders. We can improve performance with either geometry nodes or particles, but from what I can tell, Blender still seems to be calculating the armature for each instance. So Geometry Nodes and Particles save RAM on the geometry instances. But the armatures aren't instanced, so things are still really slow. The way we've mitigated this is to save the animation out as either a LightWave MDD or a Max PC2 file and then apply that back to the characters using the Mesh Cache modifier. It's basically the same as exporting to Alembic, but you don't have to export out the characters. Just the motions.
      I'm not sure if you've played with Anima's vertex cache export, but you can export your scene using the PC2 vertex cache option. It's slower to set up, as you have to manually load each PC2 file onto its corresponding character in Blender. Once you do that, however, performance jumps way up as Blender isn't having to calculate the armature. It's just reading the point cache off the disk, which is pretty fast. Anima does a nice job of naming the character mesh and the PC2 file, so adding things (while mind numbing) does go by fairly quickly.
      For particles, you're correct that you can't orient them to an empty like you can with geometry nodes. For the work we did with Bristol Motor Speedway, we broke the stadium into "zones" (left, right, front, back) and then used separate particle systems to "point" the people in the right direction. The Advanced settings in the Particle controls let you set a phase (rotation) for each particle instance and then a randomized phase, so everyone is pointing basically in the same direction with a little variation. It's definitely not as elegant as geometry nodes and I'd be hard pressed to say that particles are better than geometry nodes. Just noting that you can do it. As with most things, it's just about how you hack it together to get it to work. ;)

  • @madjidbouharis729
    @madjidbouharis729 2 years ago +1

    Thank you very much Bob, you really are the best.

  • @EdNorty
    @EdNorty 2 years ago +1

    Videogrammetry is mind-blowing.

  • @MMMM-sv1lk
    @MMMM-sv1lk 2 years ago +1

    so cool great video bob

  • @JeffersonDonald
    @JeffersonDonald 2 years ago +1

    Awesome tutorial. Very informative.

  • @retroeshop1681
    @retroeshop1681 2 years ago +1

    Now I can do some nice crowd scenes hehe ♥ Thanks Blender Bob

  • @bkentffichter
    @bkentffichter 2 years ago +1

    Thank you so much! Amazing information

  • @Kram1032
    @Kram1032 2 years ago +12

    5:35 "Position" vs. "Location" is the difference between the center point of an object (the object's location) and the precise position of each part of the mesh (or whatever)
    I guess both of those could be called "Position" but that difference is used quite consistently throughout Blender and honestly it makes sense
    8:32 why are you using a vector scale here when you take a single (float rather than vector) random value and also end up plugging it into a single float value? It's not actually gonna matter much but effectively you are allocating three memory locations for a single value that you also only intend to use as a single value. It'd be more efficient to use a regular math node's multiply for an otherwise identical effect.
    13:05 all rotations internally are in radians, not degrees. You can put a degree-to-radians in front or you can type in "pi/2" which is in radians what 90° is. (That's why it's roughly 1.5 - it's closer to 1.5707963.... - I'm guessing 1.57 would basically look spot on. But typing "pi/2" into the field is gonna be exact up to floating point limitations)

    • @BlenderBob
      @BlenderBob 2 years ago +4

      Thanks for the tips!

    • @matiasguisado3714
      @matiasguisado3714 2 years ago +1

      I would guess, not looking at the documentation at all, that position also implies "order", whereas location is just "coordinates".

    • @Kram1032
      @Kram1032 2 years ago +1

      @@matiasguisado3714 "Position" actually refers to a standardized vector in a path tracing rendering pipeline. (Standardized as in this isn't only Blender. It's how it's called in OSL, the Open Shader Language)
      You got like three fundamental vectors in a path tracer giving you most of the effects of light bouncing around in a scene:
      Position - the ray hit location
      Incoming - the direction from which the ray came
      and Normal - the surface normal
      those three together are really powerful. - So that's what Position really does: If you send a camera ray, it tells you where in the scene, in global coordinates, the ray hit.
      Meanwhile, Location is a constant vector: No matter where in space you are, its value is the precise global coordinates of the centroid, pivot point, or origin of an object.
      It's for instance very useful to do "Position - Location" which will recenter the global coordinates on the object's center. That's pretty close to what Object Coordinates will give you, except those additionally give you the rotation and scaling of the object.
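
The "Position - Location" recentering described above can be sketched numerically with plain Python (the vectors are made-up illustration values, not taken from the video):

```python
# Hypothetical world-space vectors: Position is the shading point a camera
# ray hit; Location is the object's origin (centroid/pivot) in world space.
position = (3.0, 5.0, 2.0)
location = (1.0, 4.0, 2.0)

# Position - Location: world coordinates recentered on the object's origin,
# like Object coordinates but without the object's rotation and scale.
local_offset = tuple(p - o for p, o in zip(position, location))
print(local_offset)  # (2.0, 1.0, 0.0)
```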

  • @Vignolamaxime
    @Vignolamaxime 2 years ago +1

    This channel is flawless !!!

    • @BlenderBob
      @BlenderBob 2 years ago

      Ah ah! Not if you look at my hair!

  • @AyahuascaMagic
    @AyahuascaMagic 2 years ago +1

    really valuable content

  • @charlesweaver3000
    @charlesweaver3000 2 years ago +10

    Great job! I could never have explained it better than you did at 6:57. 😆
    And it's pronounced "Euler", not "Euler"!

    • @BlenderBob
      @BlenderBob 2 years ago +10

      Euler not Euler? I thought it was Euler. ;-)

    • @vintezis
      @vintezis 2 years ago +1

      @@BlenderBob It's German, so it is "Ajler", or with French characters in phonetics "Aàilère".

    • @nirmansarkar
      @nirmansarkar 2 years ago

      @@BlenderBob oil-er

  • @P3eeez
    @P3eeez 2 years ago +1

    Finally back!

  • @timvanhalewyck3370
    @timvanhalewyck3370 2 years ago +1

    Good stuff man!!! thx

  • @andres4SD
    @andres4SD 2 years ago +1

    Awesome, thank you very much, master!

  • @fernandosuebre
    @fernandosuebre 2 years ago +1

    Very good video

  • @LightArchitect
    @LightArchitect 2 years ago +3

    I love this. Great overview Bob. Have you seen the John Adams Mini Series VFX Breakdown? That was the one I saw years ago that opened my eyes to VFX in general... and crowds.

  • @mallickpriyanshu
    @mallickpriyanshu 2 years ago +4

    1.57 is roughly half of pi.
    Blender uses radians for rotation in geo nodes instead of degrees.
    You can use a math node set to Radians, connect it to the X socket of a Combine XYZ node, and then connect that to the Vector Math node.

  • @johntnguyen1976
    @johntnguyen1976 2 years ago +3

    Um...sooooooo...CAN we have a Nuke Bob channel as well? I checked...the name is open. 🤣 I would support it in a heartbeat!

    • @BlenderBob
      @BlenderBob 2 years ago +2

      Did you check my compositing series?

    • @johntnguyen1976
      @johntnguyen1976 2 years ago

      @@BlenderBob I sure did, sir! You're the one who finally made color space click in my head. Soon after watching your series I accepted my first freelance project that had an ACES + Linear workflow...and I didn't do too bad. But that series played a big part in my first project under a Nuke Indie license! 🙌 So thanks for that!
      But anyways...I'm just greedy and want more nuke LOL. 🤣

  • @yussufabukar32
    @yussufabukar32 1 year ago +2

    please please please make an in depth video about Videogrammetry and how to use it. It looks so awesome and I would like to know how it works!

    • @BlenderBob
      @BlenderBob 1 year ago +1

      End of December or early January

  • @sebbosebbo9794
    @sebbosebbo9794 1 year ago +1

    top Bob.....

  • @kokoze
    @kokoze 2 years ago +2

    When working with those characters and importing them into Blender, I think it would be far better to use another Blender file:
    - You do your thing in the Anima crowd software and export FBX files.
    - You import them into a Blender file called "crowd assets", where you put them in a collection and clean up naming, materials, ...
    - And then you link the objects or collections from the "crowd assets" file into the Blender file in which you actually have your scene.

    • @BlenderBob
      @BlenderBob 2 years ago +1

      Well, the actual pipeline is more complicated than that. This is a 15 min overview. But thanks for the advice. :-)

  • @jfkpires
    @jfkpires 2 years ago +1

    wow. thx

  •  2 years ago +2

    "How it could be done in Blender..." yeah, what I'm most interested in is how to get a job with it :D

  • @jkartz92
    @jkartz92 2 years ago +1

    Would like to know more about the vector pass, the cryptomatte (a 1-pixel boundary artifact shows up), the depth pass for defocus (not as good natively in comp), the shadow pass, and the AO pass (empty space is black, unlike in other apps).

  • @hasanhuseyindincer5334
    @hasanhuseyindincer5334 2 years ago +1

    👍👏

  • @BlaBla-sf8pj
    @BlaBla-sf8pj 7 months ago +1

    great video
    if you know where to find models equivalent to anima 4d compatible with blender, I'm interested

  • @gadass
    @gadass 2 years ago +6

    Hey! :)
    Did you try creating a mask using color values (like a keying) from the diffuse texture to make different roughness/specular? :)
    (Talking about the videogrammetry)
    Cheers!

    • @BlenderBob
      @BlenderBob 2 years ago +7

      Yeah, we did. We actually had the characters dressed with flat pure colors so we could make some variations.

  • @zelfzack9432
    @zelfzack9432 2 years ago +2

    3:50 could merge by distance ✨

  • @Barnaclebeard
    @Barnaclebeard 2 years ago +1

    I don't know where you've been hiding but we're friends now.

    • @BlenderBob
      @BlenderBob 2 years ago

      I’ve been here for a while now. Question is, where were you?

    • @Barnaclebeard
      @Barnaclebeard 2 years ago

      @@BlenderBob Dude, I was here. What were you wearing? I'm in jeans and a jean jacket.

  • @justlotfy
    @justlotfy 2 years ago +1

    Amazing...just on time..I've just used anima and it was a struggle...Thank you.
    Also, what do you think about Human generators for mesh, and a real-life captured animation?

  • @FruitZeus
    @FruitZeus 2 years ago +1

    Hey Bob! Super cool video! Any reason why you can't just move the meshes only into a new collection after the FBX import? Rather than moving everything (including Armatures) into the collection, if you just had a collection of the meshes, the geometry nodes should still be able to select a random character mesh, or is this wrong?

    • @BlenderBob
      @BlenderBob 2 years ago +1

      Actually, you can, but only if your character is a single mesh. If not, you need to put each character's collection inside a parent collection. The advantage of using Alembic is that you don't need to calculate bone deformation for 120 characters; no way Blender can handle that. It's also better if you need to send it to another software, like Houdini.

    • @FruitZeus
      @FruitZeus 2 years ago

      @@BlenderBob great info! Thanks Bob!!

  • @kspmn
    @kspmn 1 year ago +1

    I discovered this amazing channel today, why not before? Because it's EPIC!!! The Pixel F#ck video is my all time favorite!!!

  • @PeteDraperVFX
    @PeteDraperVFX 2 years ago +3

    Any thoughts on randomly offsetting the animation of the character per distributed copy? The only way I've discovered is to do it manually by cloning the source and offsetting the animation, which would naturally bloat the scene file. If I've got a 200 frame sequence and a 1000 frame animation of the same character, creating such a large number of offsets by hand would be a huge job and seriously increase the file size. Even Superspray in Max 2.5's particle system could do this, so I'm still unsure as to why we don't have this in B3D yet... unless I've missed something... thoughts?

    • @BlenderBob
      @BlenderBob 2 years ago +2

      You can't. And I spoke to the devs and nope, they confirmed that it's not possible. Probably when they make an Alembic geo node reader. So our solution was to use Clarisse as a renderer (the demo here is all Blender), and even there, the Clarisse guys had to make a special custom version for us: a random offset was already built in, but you also need to offset the texture sequences, and that was the tricky part.

  • @antonioya
    @antonioya 2 years ago +1

    Hi @BlenderBob At 15:14 you talk about an animation that changes the mesh for each frame. How do you do that? Do you have a different block of data and switch between them? do you use a plugin? I'm very interested in this technique to apply it to some clay animation tools we're working on. Thanks in advance!

    • @BlenderBob
      @BlenderBob 2 years ago

      It’s part of the videogrammetry process. Each frame is different and they are exported as alembic.

  • @Zhiznestatistiks
    @Zhiznestatistiks 2 years ago +1

    7:47 you don't need a vector math node to scale down random values if you're going to convert them back to float anyway. Just a math Multiply node is enough.

    • @BlenderBob
      @BlenderBob 2 years ago

      I used to do that but then you need to adjust 3 values, unless you create a value node.

    • @Zhiznestatistiks
      @Zhiznestatistiks 2 years ago

      @@BlenderBob Where did you get 3 values to adjust? You are creating 2 random floats from -1 to 1, scaling them down (multiplying) and combining them into XYZ (XY) vector data. Maybe you intended to use random vector data, scale it and compress it to a single float number for both X and Y?

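The node equivalence debated in this thread can be illustrated outside Blender with a plain Python sketch (made-up values): multiplying a single float behaves exactly like scaling a vector whose components all hold that float, so the Math node's Multiply suffices when the result is read back as a float.

```python
import random

random.seed(7)                     # reproducible "Random Value" stand-in
value = random.uniform(-1.0, 1.0)  # a single random float in [-1, 1]
factor = 0.3

# Math node, Multiply: one float operation
float_result = value * factor

# Vector Math node, Scale: the float promoted to (v, v, v), three operations
vector_result = tuple(c * factor for c in (value, value, value))

# Reading any component back as a float matches the Math-node result
print(vector_result[0] == float_result)  # True
```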
  • @patriciocalvelo1839
    @patriciocalvelo1839 2 years ago +1

    Awesome @blenderbob, great great one!

  • @deataification
    @deataification 2 years ago +2

    With the videogrammetry part, having the topology change every frame and needing new textures and UVs for every frame seems a bit rough. Couldn't you for instance use Wrap4D on the face to project the changing geometry onto a base mesh (the animation will be transferred to blendshapes) and then use for instance Houdini's Point Deform to deform a static geometry to the movement of the videogrammetry model, as it works based on distance rather than point numbers?
    This is the workflow I have used when having to deal with changing topology, and this way you can have different parts with different materials and not need a thousand textures for 1k frames of animation.

    • @BlenderBob
      @BlenderBob 2 years ago

      We tried but it doesn't work. For example, if you have two people fighting each other, they touch, so it's not possible to separate them. Or if someone has his arms crossed, then they are merged with the body. Or there are too many changes in the cloth or hair topology. Look at the guy dancing in gray in my clip.

    • @BlenderBob
      @BlenderBob 2 years ago

      I spent weeks in R&D on this. I'm summarizing it in 15 minutes. We had so many issues to resolve, some of them directly with the developers.

    • @deataification
      @deataification 2 years ago

      @@BlenderBob Some of those are issues with the tech, yes, but others are solvable. For the crossed-arms example, it really wouldn't be an issue as long as you separate the arms on the mesh that you apply the point transformations to from the videogrammetry.
      When it comes to hair and cloth, I simply don't see the benefit of having them captured in the videogrammetry. That just makes them unadjustable for really no benefit whatsoever when you can just run a basic cloth sim & hair sim after the fact.
      At least for photogrammetry in general, hair will never look good, as individual hair strands are way too thin to be captured by any camera in a way that could be reconstructed with photogrammetry, and as with cloth there really is no need when you can wear a hairnet and just add the hair later.
      I'm also curious how you are dealing with the fact that with videogrammetry with changing topology, you are fully locked into that exact framerate, as you really can't interpolate the animation at all without complex setups so that it doesn't break the textures.
      It just seems like including those things adds an unnecessary burden of limitations compared to the alternative of just capturing performances as videogrammetry and simulating the rest.
      I'd love to chat more about this, as I think videogrammetry can truly make for some spectacular looking hero assets.
      Edit: To add on the people fighting thing, I also really don't see that as an issue. Just separate the actors in the initial static frame and then set the point deformation distance to a value where the base mesh doesn't pick up the animation of the other actor when they get close; you can even adjust it frame by frame.

    • @BlenderBob
      @BlenderBob 2 years ago

      Well, it sounds simple but it's far from being that easy. You are capturing a performance. You need the facial expression. You need the cloth deformation. We tried remapping onto another geometry and we failed. We spent months on this. And videogrammetry is not designed to be seen in closeup as a hero model; too many artefacts. Try to find an example online. Some companies have demo files. Try it and you will see.

    • @deataification
      @deataification 2 years ago

      @@BlenderBob Did you try Wrap4D? It's specifically designed to remap facial volumetric sequence captures to a single mesh.
      And videogrammetry works for hero assets if you treat it as kind of a glorified motion capture, where you extract the motions from the captured data itself rather than use the actual changing topology in the final render.
      This has the benefit that you obviously get a hero-level asset from the photogrammetry that matches 1:1 with the performance.
      I work a lot with Houdini, and especially fluid sims, so dealing with changing topology is really common for me. It just needs a pretty tight pipeline, but getting something usable out of a volumetric capture definitely is not impossible.
      May I ask what software you used to try to remap them onto another mesh?
      I actually talked out of my ass for some reason when it comes to Houdini; I'm so used to the process that I totally forgot to mention that you obviously need to create a reference frame from the mesh with changing topology, which acts as a rest state for the base mesh you are transferring the motion to.
      But transferring changing topology to a single mesh is, all in all, really not that complicated.
      If you have a small sample you could share, I would love to take a look and see if there is a way to make it work for you.
      But in all honesty, it would be a lot easier to just use the camera setup to first do a scan of the actor and then use it for a markerless motion capture, as you already have so many angles that the motion capture would basically be super accurate.
      Edit: Also to add, adding motion blur to changing topology is super easy; just store the point velocities inside three vertex maps.
      That's at least the suggested way, though I don't know if Cycles supports getting motion blur values from vertex maps, as I mainly use Octane and Redshift.
      Edit 2: If you transfer the animation from the videogrammetry to a single mesh, you can also then rig the single mesh, transfer that animation to the bones of the rig, bake it to keyframes, and now you can even adjust the animation.

  • @MaxChe
    @MaxChe 1 year ago +1

    Great lesson, thanks!
    Tell me how to set the animation time interval for clones? I can't seem to find a way....

    • @BlenderBob
      @BlenderBob 1 year ago

      Actually not possible. They are not going to fix it, because the entire particle system will be rebuilt from scratch to work with geo nodes; then hopefully it will be. So you make many versions of your animation.

    • @MaxChe
      @MaxChe 1 year ago

      Well... I'll wait for updates, thanks for the reply!

  • @ukmonk
    @ukmonk 1 year ago +1

    Love your tutorials, so thank you!! Cards in VFX are the same as Blender's images-as-planes, right?

    • @BlenderBob
      @BlenderBob 1 year ago +1

      Yeah, just a single polygon with a texture on it.

    • @ukmonk
      @ukmonk 1 year ago

      @@BlenderBob Thanks for replying Bob :)

  • @UCaPxueORqDShy1qZ5vb
    @UCaPxueORqDShy1qZ5vb 2 years ago +2

    pls make nuke tutorials sir u r the best love from india 💌💌💌

    • @BlenderBob
      @BlenderBob 2 years ago +1

      I did some already but I’m just mid level in Nuke

  • @qwertasd7
    @qwertasd7 2 years ago +2

    How do you make good-looking fire / explosions in Blender (not third-party software)? I know how to make fire and smoke but never get it convincing. Are there some rules one could use?

    • @BlenderBob
      @BlenderBob 2 years ago

      I’m not a specialist for that kind of stuff. At the office we just use Houdini because nothing beats it for simulations

    • @qwertasd7
      @qwertasd7 2 years ago

      @@BlenderBob OK, it's what I see on other YouTube channels too, despite there being quite an advanced solver now. I think a few explosion sizes / rockets / fires would be fun. But it's okay, I'll keep trying.

  • @beddyboy
    @beddyboy 2 years ago +1

    14:56 I like you Blender Bob you are nice.

  • @MrCosmopolite
    @MrCosmopolite 1 year ago +1

    Hello Bob! Love your tutorial. I need something simpler and quicker for now: spawning planes along a geo via particles, using a library of 2D crowds I already have. Do you know of a way to randomize the start frames on the crowd sequences? Complete Blender newbie, and impressed how quickly I got there. Now it's just that they all start on the same frame, sadly...

    • @BlenderBob
      @BlenderBob 1 year ago +1

      No idea. And now it’s the first beautiful day of the year where we can wear shorts and I don’t feel like being in front of my computer. ;-)

    • @MrCosmopolite
      @MrCosmopolite 1 year ago

      Hope you enjoyed the weather! Me and my son‘s hike to a new waterfall today was marvelous. In case you stumble across a solution for my issue, let me know! I have gotten zero help/feedback in any of the groups i found and fear i am just not experienced enough to try your 3d approach. 😅

    • @BlenderBob
      @BlenderBob 1 year ago

      @@MrCosmopolite Yeah, you can't control the image sequence per object, that I know of. We found a way to do it for the alembic when we use videogrammetry but it's quite complex and it uses geo nodes. It wouldn't work in this case.

  • @twistedwizardry5153
    @twistedwizardry5153 2 years ago +2

    What is the videogrammetry setup like? Is it crazy expensive?

    • @BlenderBob
      @BlenderBob 2 years ago +3

      It’s insane and quite expensive. You can find some setups if you google it.

  • @RomboutVersluijs
    @RomboutVersluijs 2 years ago +1

    Guess it would save a lot of time if Anima could export to Alembic. Wondering why they haven't implemented that already; the software has been out there for years.

    • @SteveWarner
      @SteveWarner 2 years ago +1

      Anima will export to PC2, which loads into Blender just fine. When you export your scene in Anima, you choose Blender as the file format for the FBX. Then import that FBX into Blender. Add a Mesh Cache modifier and then copy that modifier to all the characters. From there, you load in the PC2 onto each corresponding character. The mesh character's name will match the PC2 file's name, so it's pretty straightforward. I just added 100+ characters to a scene in Blender and it took about an hour to get the PC2 files added. It's still slow, though, because Blender isn't optimized for this sort of work. You can bring the same number of characters into Unreal with the actual skeleton and it works flawlessly.

    • @RomboutVersluijs
      @RomboutVersluijs 2 years ago

      @@SteveWarner Not sure why Anima doesn't do Alembic export then. That would be so much faster.

  • @lobodonka
    @lobodonka 2 years ago +1

    Another great tutorial. Btw, is there any chance we'll get to know the "secret partner" in the future, or will they stay "secret"? 👽

  • @Nevil_Ton
    @Nevil_Ton 2 years ago +1

    You don't need to bake them to Alembic. Put each character in a collection in another file, link those collections into the current scene, and just use each linked object as a single object to distribute.

    • @BlenderBob
      @BlenderBob 2 years ago +1

      Yeah but Blender can’t handle 120 rigged characters at the same time.

    • @Nevil_Ton
      @Nevil_Ton 2 years ago

      @@BlenderBob For complex rigs, yeah. But for simple rigs like the ones you used in the tutorial, it will be faster than Alembic. And the benefit of using a linked collection is that you can hide the rigged mesh in the viewport and use a proxy mesh that is hidden in the render.

  • @RomboutVersluijs
    @RomboutVersluijs 2 years ago +1

    I guess you used some Python to set all those models to bounding box, or was that Alt-click?
    Alt-click won't work for me, as I use a Wacom tablet and need Alt to pan, scroll, and do other functions. Guess I need to make a custom toggle for that option so I can use it on the fly.

    • @BlenderBob
      @BlenderBob 2 years ago

      No. You just do it in the original characters and all the instances will change too.

  • @cgdigitaltreats
    @cgdigitaltreats 2 years ago +1

    How do you make blender show proxies instead of actual geometry? Thanks for the video

  • @thalabathiss7806
    @thalabathiss7806 6 months ago

    Is it possible to apply a displacement effect to a crowd?

  • @kidkreation
    @kidkreation 1 year ago

    How much would you charge for a scene like the stadium for a music video, if I do the green screen at home?

    • @kidkreation
      @kidkreation 1 year ago

      Can't you just use that one you made and put me in it?

    • @BlenderBob
      @BlenderBob 1 year ago

      @@kidkreation I can't, because it's only ten different people; I can't show them from the front. It cost us thousands of dollars to make that clip. In order to do this for a client, we would need to redo the videogrammetry for all the characters, 75 at least, and it's $500 each. Plus 3D setup, lighting, rendering, layout and all: about 60k.

  • @ianmcglasham
    @ianmcglasham Před 2 lety +1

    I once had to fill a gladiatorial arena with a crowd from only 6 filmed groups of around 20 extras, using cards (we just called them patches). It took about a month, and I still felt I could identify duplicate actors and motions. We had to cheat the camera focus to hide the problems, and the grade was insane. So difficult to do the old way. Great tutorial right here!
    (The vector add value of 1.57 is certainly unusual. Does anyone know why this happens? I have come across similar issues, and while the solutions do work, I always like to know why something works, and I can't work this one out!)

    • @BlenderBob
      @BlenderBob  Před 2 lety +1

      1.57 comes from using radians instead of degrees

    • @ianmcglasham
      @ianmcglasham Před 2 lety

      @@BlenderBob Sounds reasonable!

    • @gordonbrinkmann
      @gordonbrinkmann Před 2 lety +1

      @@ianmcglasham Yes, as Bob already said, Blender works with radians internally, so 90° equals Pi/2 which is roughly 1.57
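A quick check in plain Python (outside Blender) shows where the number comes from:

```python
import math

# Blender's node math works in radians, so a 90-degree rotation
# is entered as pi/2, which shows up as the "magic" value 1.57.
quarter_turn = math.pi / 2
print(round(quarter_turn, 2))          # → 1.57

# Converting the other way confirms the value:
print(round(math.degrees(1.5708), 1))  # → 90.0
```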

  • @rohitkhanvilkr1636
    @rohitkhanvilkr1636 Před rokem +2

    Please, please, please make a separate video on videogrammetry...

  • @Amandeepsingh1313
    @Amandeepsingh1313 Před 2 lety

    Please tell me about the scope of Blender and motion graphics work in Canada

    • @BlenderBob
      @BlenderBob  Před 2 lety

      Motion graphics, no idea. Blender was strong before Tangent's bankruptcy. Now, besides Real by Fake, I don't know any other companies that are using it, unfortunately

  • @thornnorton5953
    @thornnorton5953 Před 2 lety +2

    Feel free to be Nuke Bob. I want to see it. I don't have Nuke, but I'm so fascinated by compositing.

  • @nirmansarkar
    @nirmansarkar Před 2 lety +1

    Wait. What's your system configuration here to be able to do that at home?

    • @BlenderBob
      @BlenderBob  Před 2 lety

      My system at home is irrelevant as I’m only using it to display the office’s screen on my machine. At the office it’s an AMD ryzen 32 cores 64 gig ram and 2080ti. At home iMac Pro

    • @nirmansarkar
      @nirmansarkar Před 2 lety

      @@BlenderBob When someone says 64 Gigs of RAM .... my head hurts.

    • @BlenderBob
      @BlenderBob  Před 2 lety

      @@nirmansarkar When I started in 95 I had 64 megs of RAM and people thought I was talking about my hard drive

  • @frozthound
    @frozthound Před 2 lety +1

    I'm sad when the Anima guy is sad

  • @Leukick
    @Leukick Před 2 lety +2

    15:50 What is CPU RAM? Is that just your computer's regular RAM?

  • @rzmotionpicture7701
    @rzmotionpicture7701 Před 2 lety +1

    Hey Bob, please try to recreate the head explosion VFX from the show "The Boys"

  • @brisayman
    @brisayman Před 2 lety +2

    2:54 Wait, is Blender Bob a French speaker?! C'est intéressant ça (That's interesting)

    • @BlenderBob
      @BlenderBob  Před 2 lety +3

      C'est ma langue maternelle (It's my mother tongue)

    • @brisayman
      @brisayman Před 2 lety

      @@BlenderBob Moi aussi, mais t'inquiète pas, c'est pas la seule chose que j'ai trouvée intéressante 😂 (Me too, but don't worry, it's not the only thing I found interesting 😂)

  • @ektorthebigbro
    @ektorthebigbro Před rokem

    I like the idea of videogrammetry, but the problem is that there is no free way to do this, and websites like Renderpeople are way too expensive. I could not find a way to get the people to recreate the concert shot you did.

  • @unboring7057
    @unboring7057 Před 2 lety +1

    Excellent video, thanks Blender Bob! My girlfriend and I really look forward to your videos, you're our fave! We always learn a ton, and your teaching style is accessible and entertaining! We would love to see a Blender Bob take on some comet-hits-Earth planetary destruction, and to learn how you might approach it! I loved how they did it on "Don't Look Up" - czcams.com/video/4-zv5Cvg6pM/video.html. Missing from this clip are the floating chunks of Earth with foliage on top and rock/dirt below, which to me sold it the hardest, but I couldn't find a good clip. Thanks again Blender B, looking forward to the next one, hope life is treating you great!

    • @BlenderBob
      @BlenderBob  Před 2 lety +1

      That’s exactly what we are working on right now but it’s a mix of Houdini and Blender.

    • @unboring7057
      @unboring7057 Před 2 lety

      @@BlenderBob Rad! I for one dabble in Houdini and would love to learn the workflow from you, but I know you are not Houdini Bob. 🤖🤖

  • @Elyakim_binenstock
    @Elyakim_binenstock Před 2 lety

    Can you make Foundation's opening title with all those particles and the sand-color thing, or the Halo opening title where the sand turns into the suit? Please, please!

    • @BlenderBob
      @BlenderBob  Před 2 lety

      Link please

    • @Elyakim_binenstock
      @Elyakim_binenstock Před 2 lety

      @@BlenderBob czcams.com/video/iWgZBFdRgCo/video.html

    • @Elyakim_binenstock
      @Elyakim_binenstock Před 2 lety

      czcams.com/video/o6qm88h9XQA/video.html

    • @Elyakim_binenstock
      @Elyakim_binenstock Před 2 lety +1

      @@BlenderBob And thanks for the fast reply. I think you are the only one on YouTube who really considers our suggestions

    • @BlenderBob
      @BlenderBob  Před 2 lety

      Do you have a link about the FX you want me to show you?

  • @gppanicker
    @gppanicker Před 2 lety +1

    If possible make a video on render passes.

  • @adilmalik4648
    @adilmalik4648 Před rokem +1

    Ian Hubert had a one-minute video doing the same thing with a different technique

  • @ProjectAtlasmodling
    @ProjectAtlasmodling Před 2 lety +1

    Blender geometry nodes must make the Houdini guy happy: he's not getting as much simple stuff thrown at him, so he can focus on the more complicated things.
    Also, I think there might be a setting to tell Blender to work Y-up, but I may be mistaken

    • @BlenderBob
      @BlenderBob  Před 2 lety

      Ah ah. No, there's no way to make Blender Y-up

  • @vertigoderviche
    @vertigoderviche Před rokem +1

    Make Nuke Bob!

    • @BlenderBob
      @BlenderBob  Před rokem

      Nope! Sorry! I did make a few clips on compositing, though

    • @vertigoderviche
      @vertigoderviche Před rokem

      @@BlenderBob
      Thanks for the response. I wasn't expecting you to read it. I've watched a couple of your videos; I'm a complete amateur, just tinkering with Resolve Fusion. I came specifically to see the magic trick of the compositing at the beginning, but the node tree is intimidating. Your videos are very interesting, and I like your style. Greetings from Argentina.