What 3D Gaussian Splatting is not?

  • Published 14. 09. 2023
  • Here we continue to delve into the fascinating world of Gaussian Splatting. It's a very hot topic right now in the field of computer graphics. But there are a lot of mathematical concepts and scientific jargon around it, so it can be difficult to understand what this new 3D form is and what it isn't. I try to explain what Gaussian splatting is in practice and where its development is currently going.
    Specs:
    These samples were rendered on an Nvidia RTX 3070 graphics card with 8 GB of VRAM
    PC: Asus ROG, Ryzen 7, 64 GB RAM
    Follow these interesting companies:
    Volinga
    volinga.ai/
    Infinite-Realities
    www.ir-ltd.net/
    Gaussian Splatting for Unity game engine:
    github.com/aras-p/UnityGaussi...
    !!! Gaussian Splatting for Unreal Engine is NOW available in the UE Marketplace !!!
    www.unrealengine.com/marketpl...
    #gaussiansplatting #NeRF #3dscanning
  • Short and animated films

Comments • 200

  • @eliasboegel
    @eliasboegel 8 months ago +203

    Nice video. However, I think both you and most of your commenters are conflating splat rendering (like this particular kind) with something more than a rendering process. Splatting (incl. Gaussian) is decades old, with a lot of associated problems, which is why today we mostly use polygons. The reason splats are getting a little attention again now is that the geometry output of methods like photogrammetry is a point cloud and not a mesh. It's important to realize that we use polygons for good reasons, and at the time the industry made that decision, splatting was already available. It is not a revolutionary, widely applicable technique, but instead a technique with properties useful for niche uses: it will see little use outside of photogrammetry and related techniques.

    • @jackquick8362
      @jackquick8362 8 months ago +9

      Photogrammetry is emerging as THE key asset creation tool for major games. Still kind of niche, but that niche is growing very fast! I think techniques similar to this will be heavily implemented in the next 5 years. Perhaps not exactly as we see here, though.

    • @eliasboegel
      @eliasboegel 8 months ago +23

      @@jackquick8362 Photogrammetry, yes. However, this still goes through a meshing pipeline and is rendered with traditional methods, not splatting. This is a big difference; photogrammetry and the rendering technique used for photogrammetry data should not be confused. I absolutely think we will see more photogrammetry assets (just look at what Quixel has been doing for years already). However, Gaussian splatting is not, and will probably never be, the rendering technique used for this in games. The problem really is that, at the moment, Gaussian splatting is not suitable for dynamic scenes, which games always use.

    • @jj72rg
      @jj72rg 8 months ago +16

      @@jackquick8362 Photogrammetry is emerging as THE key asset "data acquisition" tool, rather: we finally can afford to scan complex, unoptimized models and textures and decide how to use them later. But for the production pipeline, even at cinema-VFX level, we still do it the old way: retopology, meshing, and turning this raw data into actually useful formats, with normal maps, height maps, roughness maps and all that. It's just a higher level of recording data from the real world, but you will never see mass direct adoption of this raw data in scenarios such as video games, which require a lot of dynamic movement, a limited data size and a high level of optimization.

    • @Freshbott2
      @Freshbott2 8 months ago +1

      @@jj72rg why not?

    • @tommasoannoni4836
      @tommasoannoni4836 8 months ago +15

      @@Freshbott2 My best guess, in a few possibilities
      [pardon my imperfect language, since I am not a dev nor a native English speaker lol]:
      1) Because "3D" images made with Gaussians are "only" what they are.
      - For example, you can't enter a 3D environment like a dark room at night, with lots of shadows (made in/with Gaussians), turn the light on, and see a difference in lighting, colors, shadows, reflections, etc. The base image does not have light and light-source information (and therefore no color grading info, shadows, etc.): it only has the colors it has.
      - By contrast, when you make a room in a 3D program for a game, you make the "generic" room and can apply any sort of lighting you want, and you will see the changes in real time, because it's all made with polygons/textures/light sources that you can manipulate.
      2) Same for opening a door in that environment: you simply can't.
      Unless, I guess, you film everything and every combination you may need and "gauss" every single frame. Every combination means, in the example above: the full action of the door opening and closing (with a different handle position for every frame in between), plus the full door action in every light condition (room light on / room light off, etc.). And this is just for opening a door and turning on a light. Imagine walking around that room with a flashlight, or making a spark (either an explosion like shooting a bullet or lighting a lighter in your hand): every one of those actions would affect the whole scene differently depending on where it happens. So, if made through Gaussians, you would have to execute all those actions in every possible combination and position/location/direction, etc. Otherwise, this Gaussian 3D room would not change color when you make a spark, if that makes sense. And if you include an NPC in the room, moving as they please, suddenly the combinations of possible shadows the NPC would cast in the room when you make a spark with your lighter or move your flashlight grow exponentially; we are talking probably billions of combinations. All of those would have to be "lived through", created for real first, and then "gaussed" into 3D. But you can't take one "gaussed" situation and make changes to it.
      This, at least, is my understanding at the moment.
      Of course, the big question is: can we just combine techniques? Make vector models of the room + take the "base" colors/textures with a Gaussian capture?
      I guess it may be possible, but again, the "base" has a specific lighting situation that may need to be much more dynamic in a game.
      Basically, what you end up with is just a 3D scene captured like it always was, with textures applied to it from pictures. Which is most likely what many high-end realistic games are already doing (with some polish, I am sure); I am thinking of games such as Uncharted 4 or The Last of Us 2. I don't really see what the difference would be if they made those "scenes" with Gaussians rather than pictures, if the scenes are converted into polygons and textures later anyway. And if you don't make the conversion, you are left with Gaussians which, as said, have no info about materials, lighting, etc. Walking through a bush or into a chair should make them react differently: with polygons, you can code that; with Gaussians, the scene doesn't even move, there is no info about a "chair" or a "bush", it's just colors.
      So yeah, I don't know how it will be used in the future.
      But it would be pretty insane to have, say, a movie shot from 2/3 different angles (which is already normal) and then "gaussed" frame by frame. Would it become full 3D? Could you do that with an F1 race or a football match too? I don't know, maybe I wouldn't even need that.
      But if I wanted to, say, see a house before buying it, and check it out before traveling to it, I would 100% prefer to see the "gaussed" house (maybe in the morning and the afternoon, because of the lighting) rather than just a few pictures, which never give you the real idea of how it is to "move" inside that house. If that makes sense.

  • @lordofthe6string
    @lordofthe6string 8 months ago +38

    So good, thanks for getting all this info together. That dog made my jaw drop. Finally something new and exciting to look forward to!

    • @randfur
      @randfur 8 months ago

      The different lighting on the dog was captured from reality rather than computed.

  • @swamihuman9395
    @swamihuman9395 8 months ago +10

    - Thx.
    - Great presentation.
    - I started in 3D over 30 years ago (in '3D Studio DOS'!) - so, developments like 'Gaussian Splatting' are exciting, and fascinating.
    - And, adding to my interest, I once made a living writing custom code for 3D; plus, I'm a math teacher.
    - "What a time to be alive!" ~Dr. Károly Zsolnai-Fehér (of 'Two Minute Papers' CZcams channel) [I have to surmise you are certainly aware of this researcher/channel.]

  • @littlesnowflakepunk855
    @littlesnowflakepunk855 8 months ago +51

    One of my first thoughts when this tech premiered was that you could algorithmically generate a load of blur-free images of an object from more angles than would be feasible for a human, and feed them *back* into photogrammetry to get a more accurate result.

    • @ericsternwood9812
      @ericsternwood9812 8 months ago

      Holy shit, this is genius!

    • @darishi
      @darishi 8 months ago +1

      My thought as well!

    • @Integr8d
      @Integr8d 7 months ago +3

      You can do more than that. You can ‘take pictures’ of the splatted (lol) scene at the millimeter level and create point clouds so dense (where appropriate) that the typical shortfalls of photogrammetry are eliminated. I mean, when you get right down to it, we’re nothing but molecular point clouds anyway🙃

    • @Anton_Sh.
      @Anton_Sh. 6 months ago

      Have any attempts to implement this been made?

    • @littlesnowflakepunk855
      @littlesnowflakepunk855 6 months ago +1

      @@Anton_Sh. I'm not deep into computer graphics research myself. I figure the problem with this idea is that you already need to have a pretty robust point cloud in order for gaussian splatting to look good, so it'd be redundant to use it like that.

  • @NarcoSarco
    @NarcoSarco 8 months ago +30

    Nice overview! I was wondering how far along this tech is, and I'm glad to see it's already making its way into gaming :)

  • @sanderdevries7718
    @sanderdevries7718 8 months ago +9

    Hey - I love your presentation method! This is a really great state-of-the-art video of this incredible new technology coming out. Keep up the great work man :)

  • @AdamWestish
    @AdamWestish 7 months ago +3

    I've been recording sequences for this for many years in anticipation of neural processing (literally since the early 90s). Wish I could find my old print photos! Mahalos for the updates on this.

    • @NihongoWakannai
      @NihongoWakannai 7 months ago

      Wait, so you have a bunch of footage that you prepared for photogrammetry stuff from the 90s?
      That's great, you've basically got a time capsule there that few people will be able to replicate

  • @hanskarlsson3778
    @hanskarlsson3778 8 months ago +6

    Excellent video, and I am always glad to see a fellow Nordic citizen making such great contributions to help normal people understand important technology. Keep up the good work!

  • @stadtjer689
    @stadtjer689 8 months ago

    Great video. I know very little about 3D technology, yet you managed to teach me the broad principles behind this technique.

  • @veikkaliukkonen951
    @veikkaliukkonen951 8 months ago

    Absolutely great explanation of this concept and very engaging!

  • @robertYoutub
    @robertYoutub 8 months ago +1

    The good old voxel method, 25 years old and now improved. I remember talks back then about what games would use in the future; a lot of people thought about voxel technology. I forget the name of the first voxel 3D game, but it was famous.

    • @olof103eriksson
      @olof103eriksson 8 months ago

      I wonder if it's Outcast you are thinking of, and it was glorious

    • @robertYoutub
      @robertYoutub 8 months ago

      @@olof103eriksson Yes, exactly. Good old memories. Awesome, incredible GFX for that time.

    • @tiefensucht
      @tiefensucht 7 months ago

      It wasn't even real 3D voxels, it only used a height map. The question for the future is: does Gaussian splatting scale better, performance-wise, than rendering polygons?

  • @xCONDOGZz
    @xCONDOGZz 7 months ago +2

    This technology has huge potential in the ArchViz industry. Imagine using a drone with a 360-degree camera and doing a flyover of a job site. This gives you the 3D scene of your environment, into which you can import your ArchiCAD model and proceed to render. Very cool, and I can't wait to put it into practice myself.

  • @bharathball
    @bharathball 8 months ago

    Thanks! I was longing for a basic overview.

  • @LTE18
    @LTE18 8 months ago

    Thank you, I am new to this and this video answers many of my questions about 3D Gaussian Splatting

  • @thedangdablah6850
    @thedangdablah6850 8 months ago +5

    Dude, this video was put together so well. Insane work, hope your content ends up reaching more people

  • @madhijz-spacewhale240
    @madhijz-spacewhale240 7 months ago

    A very coherent introduction for someone who just stumbled onto this subject 10 minutes ago 👍

  • @santitabnavascues8673
    @santitabnavascues8673 8 months ago +3

    This looks like Unlimited Detail all over again... 🫥

  • @alediazofficial2562
    @alediazofficial2562 8 months ago

    A really interesting video, thanks!

  • @Gluosnis9
    @Gluosnis9 8 months ago

    Very interesting, please keep exploring and reporting this :)

  • @sinanarts
    @sinanarts 7 months ago

    +1 Subscriber. Great input 👏🏼

  • @zeamon4932
    @zeamon4932 6 months ago +1

    I don't know how you did this, but you hit all the questions I wanted to ask about 3DGS 🎉

  • @robertYoutub
    @robertYoutub 8 months ago +1

    Future usage will be a lot in scanning: real estate in 3D, captured with a phone and placed online; family movies; or film production where people interact with scanned backgrounds, etc. Quality will be lower, of course, except for professionals. No effect on 3D design, though, as there you need to manipulate things in 3D.

  • @TaskerTech
    @TaskerTech 7 months ago

    Awesome, loved the video, just missed 4K haha

  • @quintesse
    @quintesse 8 months ago +3

    Could you perhaps provide any information on the VR app you showed, links to the project for example? Looks very interesting and I'd like to try it out myself if possible. Thanks!

  • @--waffle-
    @--waffle- 7 months ago

    Great vid. What's the opening track?

  • @Aiduss
    @Aiduss 8 months ago

    Thanks for the explanation

  • @Beikenost
    @Beikenost 8 months ago +1

    I could easily see a TV show in the near future using this as a gimmick for an intro.

    • @maarten176
      @maarten176 8 months ago

      McDonald's made a pretty cool TV commercial with it

    • @maarten176
      @maarten176 8 months ago

      czcams.com/video/34KeBnSwvmc/video.htmlsi=KCJj1sYOieNHO28G
      It’s was cooler in my memory haha

  • @uripont4744
    @uripont4744 8 months ago +14

    Great video. Could you make a simple step by step tutorial on capturing data from a mobile device, to having the gaussian splatting inside a Unity scene up and running? And also, how’s the quality when converting the captured data to a 3D model (point cloud to mesh?). Keep up the good work!

    • @user-sf1gl1hg4y
      @user-sf1gl1hg4y 8 months ago

      The point cloud itself isn't any different from the one made with nerfstudio or similar; it is done with COLMAP. So the quality should be something like the tie-point point cloud from aligning images in Metashape, probably not as optimized and dense.

    • @OlliHuttunen78
      @OlliHuttunen78  8 months ago +6

      I'm not much of a Unity guy myself. I recommend watching my other videos about Luma AI. It is a simple way to create a NeRF, which can be converted to a 3D surface model. Another great service is 3Dpresso.
      Here is a link: czcams.com/video/kV0OAvlXShk/video.htmlsi=TMl9nxck27eV55Pd

    • @uripont4744
      @uripont4744 8 months ago +1

      @@OlliHuttunen78 Thank you, I have already used Luma AI before. I was thinking about a way to capture static scenes with mobile video to then use in Unity with 3DGS (and wondering about all the current intermediate steps until an easier solution is developed), and also about converting this data into a regular mesh/3D model. Going to check 3Dpresso out!

  • @Scankukai
    @Scankukai 2 months ago

    Hi,
    Thanks a lot for this clear explanation. Do you think that GS can also be applied to traditional 3D laser scans, or is the point cloud density too high in that case?

  • @midlowreborn
    @midlowreborn 7 months ago +3

    Seeing as how transparent and reflective surfaces get rendered fine with this technique, I wonder if this type of technology, or something similar, could be used to generate more accurate meshes of them, since meshing very reflective things from these point clouds has always been a pain.

  • @Moshugaani
    @Moshugaani 8 months ago +19

    All the Gaussian splatting examples I have seen were generated from photographic data. How about using high-fidelity digital graphics and then rendering those with Gaussian splatting, to make them run more efficiently and/or look more realistic than would be possible with polygons?

    • @kz8785
      @kz8785 8 months ago +6

      I've been thinking about this. Using non-real-time ray tracing to produce high-fidelity images, and then using those images to build a 3DGS for real-time graphics, could be plausible.

    • @drdca8263
      @drdca8263 8 months ago

      I’m not sure that doing so would result in something that looks more realistic than the initial rendering.
      More-efficient, however, does sound likely to me.

    • @kz8785
      @kz8785 8 months ago +7

      @@drdca8263 Yeah the point would be to recreate computer graphics that take days to render, but in real time with high fidelity

    • @Jokker88
      @Jokker88 8 months ago +1

      Gaussian splatting currently requires very high VRAM, which makes it unattainable for most.

    • @Kenionatus
      @Kenionatus 8 months ago +2

      Baked lighting is already a thing with polygon-based rendering, by baking lighting into the textures.

  • @ABaumstumpf
    @ABaumstumpf 7 months ago

    Rendering dynamic point clouds (movement or just light changes) is incredibly bandwidth-hungry. Static point clouds can be converted into some really impressive visuals with great performance, but that requires a lot of precomputation (a lot as in: no way of doing it anywhere close to real time).

  • @jamesclark7448
    @jamesclark7448 8 months ago

    Nice tune in the background.

  • @drdca8263
    @drdca8263 8 months ago +1

    Very cool! I hadn’t seen that people were already working on making it work with moving subjects or changing light-sources!
    I see in the video that someone has combined another 3D model moving within a scene rendered with Gaussian splatting, with occlusion working between the traditional 3D model and the point-cloud stuff.
    Have you seen anything where two different point clouds (from two separate photogrammetry sessions) are composited together into one scene, where one moves relative to the other?
    Are there any particular issues that would arise when trying to do that?
    Do you know any good video that goes more into the math of how the rendering is done?
    My understanding is that each splat has:
    1: a symmetric 3x3 matrix that specifies the size and shape of the Gaussian (I think it should be symmetric anyway?) (because symmetric, this should only cost 6 numbers rather than 9)
    2: a combination of rgba as coefficients for some “third order spherical harmonics”?
    Uh, my memory about spherical harmonics is a little rusty.
    Were spherical harmonics labeled by 3 integers, or just 2? The third integer I'm thinking of is probably me mixing in the energy levels of atomic orbitals, which affect which spherical harmonics have a corresponding orbital at each energy level.
    Or, I mean, for a non-relativistic model of a hydrogen atom.
    It’s probably just 2, right?
    So it is probably like integer spin values?
    So, 0th total-spin has one (the spherically symmetric one)
    1 total-spin adds 3 more,
    2nd order(?) adds 5 more,
    3rd order adds 7 more,
    For a total of 16?
    Uhh, and then 16 * 4 = 64 (where 4 is from rgba)
    And then add on the 6 from the shape of the Gaussian, for a total of 70?
    Wow, that’s surprisingly many (to me) floats for one blob.
    But it seems very likely that I’ve misunderstood , and that this isn’t actually the accurate count of the number of numbers associated with each Gaussian (other than the 3 for its position).
    Hmm, when you project a 3D Gaussian (without any of the spherical harmonics or colors or anything, just the plain Gaussian) onto a plane do you get a Gaussian?
    Let M be the symmetric matrix. Let it be based at the origin, for simplicity. The Gaussian assigns to point v a density proportional to exp(- v^T M v / c) for some constant c.
    If we do an orthogonal projection (not the one that would be used in this process, just something easier for me to get my hands on), then what we do is we integrate this over one axis.
    And, hmm...
    v^T M v, if we split it into the parts that do and don't depend on, e.g., the z variable...
    there's z^2 • (its coefficient in M) + 2z • ((x,y) • (some 2D vector in M))
    as the part depending on z,
    And then there’s the other part (x,y) S (x,y)^T
    where S is the relevant 2x2 portion of the matrix M (S will also be symmetric)
    Yeah, ok,
    the result of this orthogonal projection is a 2D Gaussian multiplied by a factor based on the other direction I guess.
    Oh, I guess we could decompose the symmetric matrix by expressing it as a diagonal matrix conjugated by an orthogonal matrix,
    And then, if we sandwich this between the orthogonal projection matrix of our choice,
    Or, uh, hm.
    Ok, maybe that’s not informative after all.
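
    A minimal numpy sketch of the two questions in this comment (my own check, not from the video; the parameter layout is only what I recall of the INRIA reference implementation): (a) spherical harmonics up to third order give 1 + 3 + 5 + 7 = 16 basis functions, though the reference code, if I recall it correctly, stores 16 coefficients per RGB channel (48 numbers) plus one separate opacity, rather than rgba per coefficient; (b) following the orthogonal-projection simplification above, integrating a density exp(-v^T M v) over z does give a 2D Gaussian, whose precision matrix is the Schur complement of M (equivalently, whose covariance is the xy block of Sigma = M^-1).

        import numpy as np

        # (a) number of spherical-harmonic basis functions up to degree 3
        print(sum(2 * l + 1 for l in range(4)))  # 16

        # (b) marginalizing a 3D Gaussian over z
        rng = np.random.default_rng(0)
        A = rng.normal(size=(3, 3))
        M = A @ A.T + 3 * np.eye(3)   # a random symmetric positive-definite precision matrix

        # Covariance view: the marginal's covariance is the top-left 2x2 block of Sigma.
        Sigma_xy = np.linalg.inv(M)[:2, :2]

        # Precision view: the Schur complement S - b b^T / m_zz, with M = [[S, b], [b^T, m_zz]].
        S, b, mzz = M[:2, :2], M[:2, 2], M[2, 2]
        M_marg = S - np.outer(b, b) / mzz
        print(np.allclose(np.linalg.inv(M_marg), Sigma_xy))  # True

        # Brute-force check that the marginal really is exp(-v^T M_marg v), up to scale:
        zs = np.linspace(-8, 8, 801)
        def marginal(x, y):
            V = np.stack([np.full_like(zs, x), np.full_like(zs, y), zs], axis=-1)
            return np.exp(-np.einsum("ni,ij,nj->n", V, M, V)).sum()
        v = np.array([1.0, -0.5])
        print(np.isclose(marginal(*v) / marginal(0, 0),
                         np.exp(-v @ M_marg @ v), rtol=1e-3))  # True

    So the answer to the projection question appears to be yes: the z-dependent part completes the square and integrates to a constant, leaving a 2D Gaussian in (x, y).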

    • @tommasoannoni4836
      @tommasoannoni4836 8 months ago

      I have absolutely no idea about those functions, but I found this video (from 8 months ago) that goes into the math used to make Gaussians; it explains how the technique was born and recent advancements (although I don't know if it covers the math needed for 3D).
      czcams.com/video/5EuYKEvugLU/video.html&ab_channel=Acerola

  • @SuperZaidin
    @SuperZaidin 6 months ago

    I thought of this idea before. The data is similar to a 2D image, but this time it's 3D data, for 3D visuals.

  • @o0oo888oo0o
    @o0oo888oo0o 8 months ago

    Nice as always. I really like the transition animation from point cloud, through the Gaussian splatting, to the end result. Are these easily done?

    • @OlliHuttunen78
      @OlliHuttunen78  8 months ago +1

      It is a slider value that can be adjusted in real time in the SIBR viewer.

    • @o0oo888oo0o
      @o0oo888oo0o 8 months ago

      @@OlliHuttunen78 Thank you, I hope to try it out ASAP.

  • @allonifrah3465
    @allonifrah3465 8 months ago

    With software like this and UE5 and more similar software being released, developing games is becoming more and more possible for the average person. Just like hardware and software upgrades and releases enabled musicians to start their own home recording studios and liberated them from being dependent on greedy recording companies, this GFX software, along with UE5 and other software useful for creating games, is going to enable the average person to start making games.
    And we really need that in this time where game development companies have become insipid and uninspired and are rarely ever producing actually good games anymore.
    All the inspiration and talent for proper game development can be found among gamers themselves: people who have great ideas about new games, story writing, character development, game mechanics, etc., but not the means to develop games to put those ideas to work.
    Now, if this development and release of GFX software and other software useful in creating games keeps going, we will start seeing more and more small game development studios pop up and great games appearing on the market again. And by small game dev studios I mean you, me and 8 other guys could make a game that would equal any triple-A game in terms of GFX and blow 95% of all triple-A games out of the water when it comes to storylines, atmosphere, game mechanics and pretty much anything else that constitutes the content of a game.
    Just look at what a small bunch of guys made with a simple, suboptimal game dev tool called Dreams on PS4 and PS5: Jurassic Park Operations. Amazing. (And Universal Pictures shut them down, because of their misplaced sense of pride. Booo!)
    Imagine what these guys could make if they used UE5, Gaussian splatting and other modern tools for modeling and skinning game maps, characters, objects, etc.
    At this point, making highly realistic 3D models for games and texturing them has become child's play: just about everyone could do it. It wasn't all that long ago that such advanced 3D modeling and skinning, with such realistic lighting, reflections, shadows, etc., was a job only a learned expert could do. 3D modeling and texturing has become much more user-friendly, to the point where it doesn't take a learned expert to do a great job at it anymore.
    Now, what is still difficult about creating games is the programming: making all the physics of the game world work, giving proper AI to NPCs, binding actions of players and NPCs to certain reactions of the game world, etc. That still requires a level of programming that not many people can deliver. This is why good programmers are very expensive to hire: there aren't many of them, and it's a lot of complex, hard work that the average Joe cannot do.
    Now I expect more and more of the programming required to make games work to be done by AI, the same way software tools have made creating realistic GFX easy and user-friendly to non-nerds.
    More and more pieces of code from old games will eventually be released as freeware and could then be used in new games under development. More and more of the programming will be made easier and more user-friendly, until it becomes as easy for the average person as creating ridiculously good-looking, realistic GFX in UE5.
    At that point, many of the large game development companies, like EA, Ubisoft, etc., will start seeing their profits plummet, and they will either start hiring the right people and making good games again or fade away as small game dev companies become big and take over the whole market.
    The only reason these big game dev companies are still making profits is that they don't face any competition. So they just made a cartel, agreed to keep their prices high and their games low-effort, cheap-to-produce garbage, and effectively agreed not to compete among each other anymore. No small and upcoming game dev company stood a chance against them... until now.
    Jurassic Park Operations was clear proof of how enthusiastic gamers now make far better games than huge, multimillion-dollar game dev companies. And they used Dreams! As more of these software tools make game development easier and more accessible to the average gamer, we will see more and more great games being made by gamers who run small studios, using software like this and UE5. The hegemony of the game dev titans is coming to an end, and great games are coming back again.

  • @hotmultimedia
    @hotmultimedia 7 months ago +1

    I would say these image-optimized Gaussian splats very much utilize neural-network techniques, even though there are no virtual neurons as such. Automatic differentiation frameworks and gradient descent are very much behind the current neural network "revolution" (i.e. methods to optimize billions of parameters).
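
    To make that concrete, here is a toy sketch in Python/PyTorch (my own illustration, not the actual 3DGS code): fitting the three parameters of a single 1D "splat" to a target signal by gradient descent on a per-pixel loss. 3DGS does essentially this with millions of anisotropic 3D Gaussians, a differentiable rasterizer, and an image loss.

        import torch

        xs = torch.linspace(-1.0, 1.0, 200)
        target = 0.8 * torch.exp(-((xs - 0.3) ** 2) / (2 * 0.05 ** 2))  # "photo" to match

        # Learnable splat parameters, deliberately poor initial guesses.
        mu = torch.tensor(-0.5, requires_grad=True)        # center
        log_sigma = torch.tensor(0.0, requires_grad=True)  # log width, keeps sigma > 0
        amp = torch.tensor(0.1, requires_grad=True)        # amplitude

        opt = torch.optim.Adam([mu, log_sigma, amp], lr=0.05)
        for _ in range(2000):
            sigma = log_sigma.exp()
            rendered = amp * torch.exp(-((xs - mu) ** 2) / (2 * sigma ** 2))
            loss = ((rendered - target) ** 2).mean()       # per-"pixel" loss
            opt.zero_grad()
            loss.backward()                                # autodiff does the calculus
            opt.step()

        print(mu.item(), sigma.item(), amp.item())  # should approach 0.3, 0.05, 0.8

    No neurons anywhere, but the optimization machinery is exactly the one the deep-learning ecosystem built.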

  • @Embassy_of_Jupiter
    @Embassy_of_Jupiter 6 months ago

    I think an obvious next step is using neural networks to do LODs.
    Basically trained to generate higher-resolution "models" from lower-resolution ones, similar to how ray-tracing denoising filters or DLSS work.
    Basically one huge model that can do any scene, like what GPT-4 is to text generation.
    There's tons of high quality videos out there, I don't think data will be an issue.

  • @jkotka
    @jkotka 8 months ago

    Olli, one trial I would be very interested in with Gaussian splatting, which should be very simple to do, is to have a mirror with an object in front of it, and then view the point cloud with Gaussian splatting while moving behind the mirror. It seems like the reflections in the Gaussian splatting are just the mirrored objects behind the mirror plane, picked up by the photogrammetry as features that simply exist behind the plane. If this is true, then the Gaussian splatting model should visualize this data as an object behind the mirror.

  • @jamesrinley
    @jamesrinley 8 months ago +1

    Cool. I didn't know it could capture movement. I would guess it's very bandwidth intensive, though.

    • @Zelakus
      @Zelakus 8 months ago +1

      No idea how it works under the hood, but it may not be as bad as one might think. If the points in the point cloud are what's moving, then it's similar to vertex animation, which can be baked into a texture as well, reducing the data that passes around after the initial texture load. So in that area there is not much of a difference from what we are doing already. The points also look somewhat sparse in the video, so that's a plus as well. But certainly with high-density animated chunks it will become costly, both for bandwidth and VRAM.

  • @luminoucid
    @luminoucid 8 months ago

    thank you olli! :)

  • @djayjp
    @djayjp 8 months ago

    Could this help with denoising RT? Or doing RT with fewer samples per pixel?

  • @JanbroMunoz
    @JanbroMunoz 8 months ago +5

    Do you know of any Gaussian splatting visualization add-ons for Blender?

    • @OlliHuttunen78
      @OlliHuttunen78  8 months ago +5

      Well, not exactly, but I have come across this Gaussian painter addon for Blender which Alex Carlier is developing. It is not exactly the same as 3DGS, but it uses a splatting technique. Check his post on X: x.com/alexcarliera/status/1698769174950985787?s=46&t=jD-l-KJrgjY4YOFhRmepeg

  • @mattiebrandolini1796
    @mattiebrandolini1796 8 months ago +1

    I understand all of the actual practical applications of this technique; however, I feel like one of the interesting use cases would be street-view photography such as Google's or its equivalents. I understand this is a somewhat novelty use for the technology and might not add a huge amount of value, but I think it would be neat.

  • @melody3741
    @melody3741 8 months ago

    WHAT A TIME TO BE ALIVE!!!!!

  • @tomashutchinson2025
    @tomashutchinson2025 8 months ago +1

    what was the music at the start? i love the video too

  • @cglifeforreal9271
    @cglifeforreal9271 8 months ago +2

    Thanks for the video. Is it less resource-consuming than a mesh?

    • @realthing2158
      @realthing2158 8 months ago +1

      From what I've seen, splatting scenes use several GB of data, so no, meshes with textures are likely more efficient for now. They will probably come up with more optimized data formats for Gaussian splatting, though, and streaming from disk might alleviate memory requirements.
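
      A rough back-of-the-envelope in Python supports this (my numbers, assuming the roughly 59 float32 parameters per splat I believe the reference implementation uses: 3 position, 3 scale, 4 rotation, 1 opacity, 48 SH color coefficients):

          position, scale, rotation, opacity = 3, 3, 4, 1
          sh_color = 16 * 3                    # degree-3 spherical harmonics, RGB
          floats_per_splat = position + scale + rotation + opacity + sh_color  # 59

          for n in (1_000_000, 3_000_000, 6_000_000):
              print(f"{n:>9,} splats: {n * floats_per_splat * 4 / 2**30:.2f} GiB")
          # 1,000,000 splats: 0.22 GiB
          # 3,000,000 splats: 0.66 GiB
          # 6,000,000 splats: 1.32 GiB

      So multi-million-splat scenes land in the hundreds of MB to over a GB uncompressed, which is in the same ballpark as the file sizes described above.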

  • @lexer_
    @lexer_ 8 months ago

    I honestly feel that techniques like these are probably the future for everything related to photorealism, be it games or CGI in general (or maybe even all rendering). But I cannot imagine that the transition to this technique will be very fast. I imagine years, as opposed to months, before anything of substance based on techniques like these becomes available to consumers as an actual product that is not just a tech demo of some kind. I wouldn't be surprised if we see multiple major iterations of the technique before it even becomes relevant in the mainstream. It's mostly a matter of developing and transitioning the tools and suites to fully support this, to a point that is on par with current tooling for traditional triangle-based rendering. This is something most people seem to miss, especially in the world of game development. This basically goes for almost any question you can ask about game dev. Is it possible to do? Sure. How long would it take to develop tools to a point where an art team can use them to actually produce good content for the system in a reasonable amount of time? Decades.

    • @Cyberdemon1542
      @Cyberdemon1542 8 months ago

      Why would it take decades?

    • @lexer_
      @lexer_ 8 months ago

      @@Cyberdemon1542 I don't really have a specific reason for this number. It's just the time proper tooling takes for a completely new technology like this, historically speaking. Maybe I am too pessimistic and the necessary tools will reach maturity in a few years.

    • @lexer_
      @lexer_ 8 months ago

      @@JorgTheElder I wasn't super clear about the mental leap I took from the technique as it is currently being used to what you could do with it in theory in the future. I am not talking about how the technology is currently used or the purpose it was initially developed for, which is just a better way of visualizing a point cloud as a photorealistic 3D environment. The motivation behind such a fundamental shift is that this technique just scales a LOT better than traditional rendering, especially if you are going for a photorealistic level of detail. What I am talking about is this: instead of traditional 3D rendering, you encode all the assets of a game as dynamic point clouds which include animation and lighting and everything else.
      You wouldn't be rendering triangles and vertices at all anymore. That is what I mean by the immense effort and time necessary to redevelop all the rendering and creation tools: a whole new pipeline of tools, all built on top of Gaussian splatting as the core rendering technique.

    • @mike-0451
      @mike-0451 8 months ago

      I don't think this is the future of gaming graphics. This technique has no room for style except the disturbing style of being extremely close to reality with grossly disfiguring artifacts. Path tracing lets us simulate light; it can create realistic images, but it can also create fantastic ones. Light is the giver of sight, so aesthetic pleasure delivered as its own rendition into all the forms and futures it invites us to partake in. The world as close to what we "see" on a day-to-day basis with our own eyes may be appropriated for delivering one such story (perspective, opinion, imposition, pronouncement, narrative, etc.), but it is just one frankly banal style among many which are far more allusive and wonderful. "Real life" is a canvas, but canvas by itself is only as appealing as one of the infinitely plural stories you can tell upon its surface, provided you believe that surface is simply *a* canvas upon which everything is written, as opposed to a canvas which is already the infinite act of articulation (ordonnance, composition, arrangement, in short, creation) or writing itself: a canvas which is already the painting of itself, whose "true" or "bare" form is already the charity of itself as bliss and beauty, freely.
      I just don’t see why this would be appealing outside of very niche circumstances, or as some kind of novelty. It does not contribute to games as art in any monumental way in my opinion.

  • @ayetej5315
    @ayetej5315 5 months ago

    what was the music playing throughout the video

  • @whud99
    @whud99 5 months ago

    Wonder how it would look to have a game's skybox rendered using this technique

  • @kyberite
    @kyberite 8 months ago +5

    Wonder how you'd tie physics to the splat field so you can interact with it in real time. Interesting stuff!

    • @techpriest4787
      @techpriest4787 8 months ago +1

      Well, this splatting is a point cloud, so essentially individual vertices, I guess, which is essentially what particles are in a particle-physics framework. If you can calculate depth (and we can, otherwise the splatting would not occlude the points behind it from all angles), then we essentially know the volume, and thus the form, so we could do physics too.

    • @Jokker88
      @Jokker88 8 months ago

      If used in gaming, a separate environment collision model would be used.

    • @confuseatronica
      @confuseatronica 8 months ago

      It would SUCK, right? Basically a whole alternate invisible polygon geometry for collision, even if it's generated at runtime. I don't see how you could do collision without turning it into tris or quads at some point. There may eventually be a way, but it's going to be a new algorithm.

    • @Jokker88
      @Jokker88 8 months ago +3

      @@confuseatronica Environmental collision models are used all the time in gaming, because high-poly models perform worse for physics. This really wouldn't be much different. Quite often staircases are replaced with sloped planes, fences with just crude box geometry, and so on.

    • @user-og6hl6lv7p
      @user-og6hl6lv7p 8 months ago

      @@Jokker88 But that's not a solution, that's just a workaround, and a really bad one: we want collision to be as accurate as possible, otherwise character controllers, NPCs and physics will be buggy and broken. Simplifying the geometry also means you won't be able to accurately project decals onto models. You could somehow figure out precisely where in the point cloud the decal should stretch, but then you're essentially doing double the work to achieve something that is already viable with traditional rendering techniques. At this point Gaussian splats are just here to look pretty, nothing more.

  • @mrmosaic7996
    @mrmosaic7996 8 months ago +1

    2:40 to 3:30 requires an explanation. Perhaps in a separate video.

  • @valcaron
    @valcaron 8 months ago

    How hard would it be to take gaussian splat scenes of moving things such as animals? Birds? Insects/arachnids/etc?

  • @Byt3me21
    @Byt3me21 8 months ago +1

    Top quality.

  • @guisampaio2008
    @guisampaio2008 8 months ago

    Could it replace polygons in artist generated content?

  • @summerlaverdure
    @summerlaverdure 7 months ago

    Can you turn a scene already rendered in polygons into Gaussian splatting, to get higher detail in real time? Or does the origin of a scene always have to be photographs?

    • @ABaumstumpf
      @ABaumstumpf 7 months ago +1

      If you already have the polygons, that means you already have a more dynamic and versatile representation. This is a method for visualizing point clouds effectively, not for rendering scenes faster or more accurately.
      The thing is, polygons are way, way faster to render, can be used dynamically (you can, for example, just turn on a light), and can reproduce details far more accurately. But creating the polygons in the first place is a lot of work.
      On the other side we have photogrammetry: using cameras to capture the outside world and (nearly always) turning it into 3D point clouds. But points are not polygons, so they cannot be rendered directly (or you get those familiar images where 90% of the screen is black with a few colored pixels in between), and converting them to polygons is computationally intensive and often requires a lot of human interaction.
      And that is the niche where this technique can be used: it gives you a good way of visualizing the original scene directly from the point cloud, without having to go through the step of re-meshing everything. And this is more than good enough if you really just want to see what was captured. But if you want anything else, like vegetation being affected by air movement or the light changing with time, then you are out of luck, as those are impossible with the point clouds.

    • @summerlaverdure
      @summerlaverdure 7 months ago

      @@ABaumstumpf cool, thank you for explaining!

  • @Moctop
    @Moctop 8 months ago

    Should be able to take a ton of screenshots of the resulting Gaussian render and feed those to a photogrammetry app.

  • @MonsterJuiced
    @MonsterJuiced 8 months ago

    There is a plugin for this for Unreal Engine now :)

  • @eliaskouakou7051
    @eliaskouakou7051 8 months ago

    Time for big tech to change "AI" to "3D Gaussian". Get the popcorn ready 🍿

  • @tribaltheadventurer
    @tribaltheadventurer 8 months ago

    I don't understand the difference between Volinga and Luma AI. Keep up the good work!

    • @OlliHuttunen78
      @OlliHuttunen78  8 months ago

      Yeah, they are basically the same service, except Luma AI is free and has more features than Volinga.

    • @tribaltheadventurer
      @tribaltheadventurer 8 months ago

      @@OlliHuttunen78 thanks

  • @dinisdesigncorner332
    @dinisdesigncorner332 8 months ago

    good stuff

  • @evv4198
    @evv4198 8 months ago

    Is it like a voxel model made of textures instead of "pixels"?

  • @tribaltheadventurer
    @tribaltheadventurer 8 months ago

    I'm having a problem using my Insta360 1-inch. I've shot in 6K and I'm getting nowhere near the quality I'm seeing you put out using the same method; I even use 200 bitrate on export and still don't like the end result when I export the model and pull it into Unreal. I went back to normal photogrammetry, just taking photos, and it seems to give the best quality. I'd love to use my Insta360 1-inch, because it allows you to capture faster and just choose the direction of the video afterwards.

  • @nocultist7050
    @nocultist7050 8 months ago +9

    Real-time ray-traced rendering with a limited number of points, followed by Gaussian splatting, might be a nice approach to rendering real-time environments in games.

    • @flobbie87
      @flobbie87 8 months ago +2

      No, that's gonna be a flickering mess.

    • @Freshbott2
      @Freshbott2 8 months ago

      In principle it’s not that different to how real time ray tracing works now. Use a limited number of points and then a model or pipeline to extrapolate that data to entire surfaces + denoise it etc.

    • @HamguyBacon
      @HamguyBacon 8 months ago

      Not ray tracing, Wave tracing.

  • @blengi
    @blengi 8 months ago

    Need a Midjourney-style 3D image diffusion splat model

  • @boredwithadhd
    @boredwithadhd 8 months ago

    Google Street View needs to take notes

  • @neeeeeck9005
    @neeeeeck9005 8 months ago

    All you need is 60 point clouds per second and you have moving objects with ultra-realistic graphics.
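
    The catch is raw bandwidth. A quick estimate in Python (my assumption: a fresh, uncompressed cloud of 2 million splats at roughly 59 float32 parameters each, streamed every frame with no delta encoding):

        splats, floats_per_splat, fps = 2_000_000, 59, 60
        print(splats * floats_per_splat * 4 * fps / 1e9, "GB/s")  # ~28.3 GB/s

    That is why the replies elsewhere in this thread about baking motion into textures or sending deltas matter; naive per-frame point clouds would strain even PCIe.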

  • @kaidenalenko7598
    @kaidenalenko7598 7 months ago

    Will it not change gaming?

  • @kobilica999
    @kobilica999 8 months ago

    Could Gaussian splatting be sped up by substituting a "ReLU" for the Gaussian function, i.e. just using a rhombus?

    • @kobilica999
      @kobilica999 8 months ago

      Though the Gaussian has very nice properties for merging and ray tracing, idk

  • @Mobay18
    @Mobay18 7 months ago

    What is most fascinating to me is that this could have been done years ago, so why were the mathematical concepts only invented now?

  • @NOLNV1
    @NOLNV1 8 months ago +3

    Crazy to think they came up so quickly with a fast and elegant programmatic way to get results almost identical to deep learning 😮

  • @Vpg001
    @Vpg001 8 months ago

    This looks like space

  • @domenicperito4635
    @domenicperito4635 8 months ago

    I think it's great for VR, but creating a game while keeping everything a point cloud seems difficult.

  • @jamesriley5057
    @jamesriley5057 8 months ago

    All I wanna know is: can I take a video and import the 3D model into Blender?

    • @jamesriley5057
      @jamesriley5057 8 months ago

      @@JorgTheElder That sounds just as good as a 3D model. As long as the points scale larger as you move the camera closer, nobody would know.

  • @PeeosLock
    @PeeosLock 8 months ago

    Is there a tutorial on how to do it?

  • @Erlisch1337
    @Erlisch1337 8 months ago +1

    I have a feeling that Unity plugin won't see much use :P

  • @davidearhart3832
    @davidearhart3832 8 months ago

    I think this is great for capturing the world around us in 3D but what about creating environments that do not exist? How could this process be used to make fantasy worlds?

  • @gridvid
    @gridvid 8 months ago +3

    Thanks. Instead of using photos from the real world, it should also work with photorealistic rendered images from Blender using Cycles... or not?

    • @OlliHuttunen78
      @OlliHuttunen78  8 months ago +4

      Yes, it would. It's an interesting question why you would turn something that is already a digital 3D model into another 3D model form. I can only see the benefit if you want to scan something out of locked environments like console games or the Dreams game engine, which is designed for PlayStation and has no way to export models to any 3D format. But it will work: you can render the model out as pictures and then turn it into a Gaussian splatting model.

    • @gridvid
      @gridvid 8 months ago +5

      @@OlliHuttunen78 I thought of real-time gaming with photorealistic visuals on nearly any device, because of how well Gaussian splatting performs.
      Also, if the input images are based on a 3D scene, maybe AI can use that additional data to create the Gaussian representations even faster and with fewer rendered images.

    • @OlliHuttunen78
      @OlliHuttunen78  8 months ago +2

      @@gridvid
      Hmm. Interesting thought! I hadn't even thought about it from that point of view. Yes! It could work and speed up the presentation of realistic renderings, because the heavy calculation would only have to be done once and then it could be run in real time. A very good idea!

    • @gridvid
      @gridvid 8 months ago +3

      @@OlliHuttunen78 maybe worth a test for your next video? 😊
      I hope this tech will be investigated even further, until we don't need to pre-render 3D scenes anymore... automatically baking them into Gaussians.

    • @MozTS
      @MozTS 8 months ago

      @@OlliHuttunen78 What about wanting to do volumetric scenes like dust/clouds?

  • @NeoShameMan
    @NeoShameMan 8 months ago

    Wait, how do we train a Gaussian splatting model? I thought the code wasn't out?

    • @OlliHuttunen78
      @OlliHuttunen78  8 months ago +1

      It is out. Inria has released it on GitHub: github.com/graphdeco-inria/gaussian-splatting

  • @EveBatStudios
    @EveBatStudios 8 months ago +2

    I really hope this gets picked up and adopted quickly by companies that are training 3D generation on NeRFs. The biggest issue I'm seeing is resolution. I imagine this is what they were talking about coming in the next update of Imagine 3D. Fingers crossed; that would be insane.

  • @stevenunderwood9935
    @stevenunderwood9935 8 months ago

    This is good news.

  • @JR-dm1oq
    @JR-dm1oq 8 months ago

    Could you apply it to the night sky full of stars? Imagine the universe being one giant Gaussian Splatting. Solution: Cmd + Q

  • @djayjp
    @djayjp 8 months ago

    Should be used for the next Mortal Kombat 😂

  • @Vpg001
    @Vpg001 8 months ago

    If you want to 3D print with this technology, you might need sound

  • @chekote
    @chekote 8 months ago

    I wonder if this would be helpful for holography 🤔

  • @Inception1338
    @Inception1338 8 months ago

    The speed of development in general is ridiculous; you have time neither to breathe nor to count to 3.

  • @ElMiniPunky
    @ElMiniPunky 8 months ago +2

    I think it could be used to run a high-quality or time-consuming photogrammetry method on the 3DGS output.

  • @tplummer217
    @tplummer217 8 months ago

    They need to use a drone to scan the environment for you.

  • @cem_kaya
    @cem_kaya 8 months ago

    You can mesh a 3D Gaussian splat to 3D print it.

  • @meateaw
    @meateaw 8 months ago

    Infinite-Realities doesn't seem to use Gaussian splatting... it's using Unreal Engine's RealityCapture??

  • @HetPorototo
    @HetPorototo 5 months ago

    If we can't really use the models from Gaussian splatting... then is Gaussian splatting useless?

  • @ZorMon
    @ZorMon 8 months ago +1

    To my knowledge, there is a problem making this technique feasible in video games: interactivity. This is like a "3D photo", so forget about destruction or dynamics like cloth. We don't need more "realistic but static" environments in AAA video games.

    • @HamguyBacon
      @HamguyBacon 8 months ago

      Obviously it would be mixed: polygons, point clouds and Gaussian splatting together.

    • @boraycobanoglu1372
      @boraycobanoglu1372 8 months ago

      @@HamguyBacon There is no need for this tech in the gaming industry. The main thing is NOT that the meshes and textures are off; it is simply the lighting that is the problem, and that will be solved this decade by real-time path tracing becoming a real thing.

  • @THEF4LLOFM4N
    @THEF4LLOFM4N 8 months ago

    It's like being on mushrooms

  • @orangehatmusic225
    @orangehatmusic225 8 months ago +1

    It's not animated..... yet.

  • @the-guy-beyond-the-socket
    @the-guy-beyond-the-socket 8 months ago +2

    At this point it's just easier to use the poly method. Gaussian is fun, but I don't see the use other than for laughs. It's not 3D-printable, you can't use it in games, and generally it won't be editable in any way.

  • @TavishMcEwen
    @TavishMcEwen 8 months ago +1

    FFmpeg user spotted

  • @Kimeters
    @Kimeters 7 months ago

    3D Gaussian Splatting is not a well-defined term.
    (The answer to the homework question.)

  • @user-og6hl6lv7p
    @user-og6hl6lv7p 8 months ago +1

    1. I don't like how you have to take thousands and thousands of photos. Not only is it creatively bankrupt, but it also requires a lot of storage space.
    2. Point-cloud systems are really, really bad for collision-detection routines, as the points have gaps between one another.
    3. This is an awful solution for video games, especially if you want a high degree of interactivity.

    • @tiefensucht
      @tiefensucht 7 months ago

      Yes and no. If this were implemented in games, it would be by converting polygon data to Gaussian splatting and rendering the result, kind of like real-time ray tracing. If the graphics card were designed exactly for this, it might work well.

  • @acem7749
    @acem7749 8 months ago

    I believe Dreams (the creation game) on PS4/PS5 uses this technique when users create content/graphics.