Gaussian Splatting Is Awesome!

  • Published 5 Oct 2023
  • Gaussian Splatting, introduced in a graphics paper at SIGGRAPH 2023, is taking the graphics world by storm. Like NeRF, it is a way of recreating scanned real-world objects from video or photos with stunning results; the major difference, and a big deal for game developers, is that Gaussian Splatting can work in real time!
    There are already Gaussian Splatting implementations in Unreal Engine (paid) and Unity (free! ;) ). So, we check out the free version in this video.
    Links
    ------------
    gamefromscratch.com/gaussian-...
    -----------------------------------------------------------------------------------------------------------
    GFS Patreon : / gamefromscratch
    GameDev News : gamefromscratch.com
    GameDev Tutorials : devga.me
    Discord : / discord
    Twitter : / gamefromscratch
    -----------------------------------------------------------------------------------------------------------
  • Science & Technology

Comments • 307

  • @Jeal0usJelly • 8 months ago +103

    What a time to be alive!

    • @Clawthorne • 8 months ago +10

      What a time to be alive!

    • @kcfresh53 • 8 months ago +26

      Get your papers fellow scholars

    • @brodriguez11000 • 8 months ago +1

      The industrial revolution was indeed a good time.

    • @FuZZbaLLbee • 8 months ago +11

      Holding my papers tightly

    • @Eichro • 8 months ago +10

      Imagine where we'll be two more papers down the line

  • @nicolasdiolez • 8 months ago +129

    Very proud that you used my Arc de Triomphe model as the example for photogrammetry 😅 Cool video. I need to learn Gaussian splatting, it seems crazy good!

    • @nicolasdiolez • 7 months ago

      @@JohnDavid888 Thank you for the insight! It seems that, for now, it's not suitable for professional use, but I suppose it's going to evolve.

  • @johnny2552 • 8 months ago +30

    Bro your channel is unlike anything else, I appreciate you moving like a madman on getting these videos put out for game devs and artists. Thanks!

  • @theaninova • 8 months ago +19

    I think what probably stands out most to me is that no matter what angle you pick, it just doesn't look like a bad render. It looks like a blurry or smeared photo, or maybe a painting with a particular style. I guess the big questions are going to be whether we can apply lighting to it in real time in some form, and whether we can compose multiple of them together. It seems to be really, really good at rendering trees; they just look so fuzzy and detailed from a distance, even when they're just a few blobs. I'd be interested to see a racing game with a full scan of the track, using Gaussian splatting for the more distant environment and traditional rendering for the road and car.

    • @s4shrish • 8 months ago +1

      I feel like this renders stuff closer to the way we as humans perceive things.
      Like how a point of light, when defocused, is stretched into a circular shape. Basically a bunch of dots that bleed into each other more or less based on focus. Which is kinda what this is as well.

  • @marsimplodation • 8 months ago +18

    No way you are talking about this paper right now, I was about to read it tomorrow for college-related stuff xD
    I have a computer graphics course dealing with the latest research at a bachelor's level, and this is one of the possible papers to work with.

    • @gamefromscratch • 8 months ago +9

      Start with the Aras rundown before jumping into the paper, it's a really elegant TL;DR summary.

    • @marsimplodation • 8 months ago +3

      @@gamefromscratch Thanks, I will. Will need to read that paper either way tho.

  • @augustday9483 • 8 months ago +28

    So far it seems like these scenes are basically one big thing that you import into your project. To make it usable for a proper game, I'm imagining a future where you have individual models composed of splats (for example, a bike or house) which can then be imported into a larger scene. However, the problem with that is that these splats seem to have their lighting baked in. If you moved the bike into a different scene with different lighting, it would look really out of place.
    I find it hard to imagine that this would ever take over and replace polygonal rendering.

    • @olwiz • 8 months ago +6

      Oh, but it's possible. Just not yet. You're forgetting this has just been released without optimization, which the authors themselves acknowledge, and there's no hardware tuned to it yet. Polygon rendering was invented around the 60s; it took a few decades until we had hardware tuned for polygons, and the tuning never stopped. The first GPU tuned for this will likely be 3x current performance (the first step always has the biggest gains). There may be some algorithm that makes the blobs shift with lighting, but even if not, remember that software development, and games in particular, have a history of using tricks. For example, they may use 'invisible polygons' (simpler, with no texture) for collision, and someone could come up with a way to use those polygons to inform the lighting too. The first iteration would be horrible (splats AND polygons, heavy), but then the methods get updated, shortcuts between splat data and polygons get found, simpler polygons are needed, GPUs catch up... In the past few decades we had what, four different named anti-aliasing methods, lighting methods and so on; not only did each approach improve, so did the GPUs tuned for the tricks used, and then the support software around it too (like DirectX, Vulkan, etc.).
      ...and something like the above would be possible without adding AI to the mix. Now with AI? The same way AI is being trained for upscaling and frame generation, AI on GPUs, maybe even with a dedicated chipset, could be trained to deduce and recreate light and shadows from splats on the fly. And we're talking about today's tech only; who knows what new breakthroughs we'll have in hardware or neural networks. All the fast pace we've been seeing has involved zero new milestone improvements (hardware-wise). Just the other day I read that Intel is tinkering with glass for chipmaking, which could break a physical barrier for current computing.
      I 'predicted' current-gen AI like a decade ago when I first read about neural networks at uni, which at the time were far, far away from anything usable. I'm no seer, and I'm far from the only one. You just need a bit more imagination to extrapolate the likely path of current-gen tech; the entire industry does it. The only unknowns are how long it will take and how exactly, but very close approximations are very easy.
      And I don't think this will take a decade to come up. It may, we can never know, but besides the current pace of AI and GPU tuning via AI, we have to remember that NeRFs and similar splat tech are already about a decade old...
      Heck, I just realized the kind of trick Nanite pulled for polygons could come along for splats too: something between algorithms, LODs, and AI around splat density, so it could have higher resolution (more points) than the examples in the video for things seen up close, while dynamically lowering the density in the background based on distance to the camera. The more I think about it, the more possible approaches come up. Just wait: academia will be all over this with students trying different stuff, and as soon as the first GPU or drivers are tuned for it, you can bet game devs will give it a spin too; the folks at Unreal and Unity definitely will. Heck, the way Nvidia is, the moment they saw this, some calls were made for a new team to toy with it.

    • @Teodosin • 8 months ago +3

      Surely that lighting problem can be solved. It just needs time for people to figure it out.

    • @jensenraylight8011 • 8 months ago +3

      Hacky solutions always lead to a ton of soul-crushing cleanup afterwards.
      Professionals found this out the hard and painful way.

    • @Teodosin • 8 months ago +1

      Wow, such pessimism

    • @jensenraylight8011 • 8 months ago +3

      @@Teodosin Not pessimistic, just realistic.
      It's easy to be overly optimistic if you're an amateur with zero knowledge of how things work.
      Also, people who use this kind of hacky technique don't give a damn about art direction; to them it's just something that gets in their way and should be eliminated.
      Which will result in a generic game.
      At this rate, you should just write a prompt to make a full game for you.
      Why bother creating a model or writing a single line of code?
      You already generated the model, so why stop halfway? Go generate the whole dang game.

  • @capsey_ • 8 months ago +5

    I feel like the perfect use for this tech is Google Street View (especially in VR). You don't need dynamic lighting and objects, the target object's details are important, and having a big open world is not a requirement. I wonder if it's possible to have multiple scenes using splats and smoothly transition between them as the camera moves, to make moving along a road in Street View less weird than it currently is.

  • @joshwent • 8 months ago +61

    Absolutely jaw-dropping technology. So many practical applications for simpler photogrammetry-type tech: virtual museum walkthroughs, interior building walkthroughs like Google Maps but indoors, maybe even self-scans to send to a telemedicine doctor. Just endlessly cool possibilities!
    For games, however, I'm honestly not excited about this. Graphics with even just a pinch of intentionally designed style are much more immersive to me than just playing in a perfectly representative world. I already spend every day IRL, show me something NEW! 😁

    • @gamefromscratch • 8 months ago +28

      Ok.... what about a Wallace and Gromit or Fraggle Rock style world, but physically modeled and then captured with Gaussian Splatting? Or old-style Harryhausen stop-motion worlds, but scanned and playable! ;)
      Although honestly, a traditional pure CG workflow would probably still be cheaper and more effective.

    • @joshwent • 8 months ago +8

      @@gamefromscratch Clayfighter 2023?! I love it! 😆

    • @carpenterblue • 8 months ago +10

      @@gamefromscratch Actually, YouTuber Olli Huttunen did a really cool test where he used a 3D model made in Blender and converted it to a 3D Gaussian splat. You absolutely can mix and match. The splat is built from a sequence of pictures; technically speaking, you could animate a flythrough of a room on paper, scan it, send it to a computer, and have a 3D splat of that... if you are insane enough, that is.
      Also... I think there is high potential for someone to just straight up build a sculpting/painting tool eventually, in the vein of Quill.
      This is an absolutely GIANT thing for games.

    • @JB-fh1bb • 8 months ago +8

      The Gaussian splat doesn’t have to use point clouds or photos and could be the actual rendering engine for 3D games. The biggest improvement here over traditional pipelines is the *massive* reduction in computing while maintaining (and arguably improving) visual quality. Imagine this being used for a next-gen version of Dreams that can be played on the Quest 2.

    • @Vaeldarg • 8 months ago +4

      The interesting thing to me is that this tech looks familiar: when I was looking at companies in the VR/AR space, back when "light fields" were causing buzz through ones like Magic Leap, I found an obscure company named "Euclideon". Their idea was using point clouds (the "light fields") for VR, and they liked showing off the detailing. This seems to be a much-improved evolution of that.

  • @Kumodot • 8 months ago +19

    Amazing breakdown of this tech. One thing that I want to see, and that will probably happen real soon, is a combination of many Gaussian splatting scenes to cover bigger areas, plus single assets ready for composing scenes.

    • @vitordelima • 8 months ago +2

      NeRF seems to have something like this already and maybe it can be adapted to Gaussian splatting.

  • @kyoai • 8 months ago +12

    I could see this being used in parallel with traditional methods: use this method to render specific static models that are high in detail, while other parts of the game world, especially dynamic parts that are animated, stay as-is with polygons + textures.

    • @vitordelima • 8 months ago

      It can be animated by transforming the particles the same way it's done to vertices in regular models, for example.
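
      A minimal sketch of that idea, assuming splats stored as NumPy arrays of centers and orientation matrices (all names are hypothetical, not from any actual implementation):

      import numpy as np

      def transform_splats(centers, rotations, bone_matrix):
          # Animate splats like mesh vertices: apply one rigid 'bone'
          # transform (a 4x4 homogeneous matrix) to every splat.
          # centers: (N, 3) positions; rotations: (N, 3, 3) orientations.
          R, t = bone_matrix[:3, :3], bone_matrix[:3, 3]
          new_centers = centers @ R.T + t            # move the centers
          new_rotations = R[None, :, :] @ rotations  # rotate each splat's frame
          return new_centers, new_rotations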

    • @jlewwis1995 • 8 months ago

      @@vitordelima Yeah, I don't see why you couldn't at least do simple animations on the models. Though considering the high point density that's probably required, maybe it would be best to only use simple N64-style animation (with the different parts of the model being separate) and not full-on skeletal animation for now.

    • @UltimatePerfection • 8 months ago +2

      @@uusfiyeyh I'm sure it's a hurdle that we'll eventually overcome. In early 3D games all shadows and depth were baked into the diffuse texture too; only later did we get stuff like bump and normal maps. But yeah, for now it is more of an archvis/survey tool than something actually useful for gamedev.

    • @morgan0 • 8 months ago

      And it could be useful for turning a complex, highly detailed ray-traced scene into something that could be played on much more normal hardware, maybe with some level of reactivity added in to allow the stuff that can move to interact with it.

  • @georgezubat7225 • 8 months ago +42

    This really reminds me of dreams. Very detailed in specific areas, but when you try to remember non-focal elements it just isn't there. This would be a great artistic rendition of dreams!

    • @nebuchadnezzar916 • 8 months ago +1

      Interestingly, when I used to have OBEs, I experimented with focusing on distant details, and things were grainy, not unlike this.

    • @bgrz • 8 months ago

      This technology has a lot in common with the game/engine called Dreams on PlayStation.

    • @DrkFX • 8 months ago +4

      Agreed, it looks similar to how Flecks are rendered in the Bubblebath engine (Media Molecule's Dreams, PlayStation).

    • @georgezubat7225 • 8 months ago

      @@bgrz I always wondered how the rendering in that game worked!

    • @TeckGeck • 7 months ago +1

      I was thinking of Dreams on the PS4 too

  • @Kumodot • 8 months ago +72

    I really want to see applications using Gaussian splatting in VR on something like the Quest 3. That needs to happen!

    • @kuromiLayfe • 8 months ago +8

      It is pretty much Unreal's Nanite tech, but at the point-cloud level instead of polygonal. The biggest issue is that at larger depths you get a noisy dithering effect, which can cause nausea in VR even when rendered at 90+ fps in real time... amazing for still scenery, but not so much for motion.

    • @fledgeking • 8 months ago +3

      I think the Quest 3 would probably have trouble with the polygon count; the render distance might have to be pretty low.

    • @steven11101010 • 8 months ago +5

      I assume you are referring to being able to navigate a real-world environment. But that's not really the use case. The key issue is the way the point clouds are generated: they are generated *around* an object. That's what enables the recreation of the object in 3D. For recreating environments, you need the inverse, which this isn't. You can see the issues in the video when Mike ventures just a few yards from the bike.

    • @pixelfairy • 8 months ago +1

      The XR2 is more about texturing than geometry; this is the opposite of what it's made for. You could post-process into a low-poly model, but then you might as well use NeRF or traditional photogrammetry.

    • @fledgeking • 8 months ago

      @@pixelfairy Yeah, they're kind of a package deal

  • @maymayman0 • 7 months ago

    Mike your channel is awesome and thank you for covering all the different stuff you do!

  • @webgpu • 7 months ago

    I rarely comment on a video's quality and content, but I had to come here to congratulate this channel's creator on the good rhythm, speed, and clarity of his delivery of a technical topic.

  • @kreur • 8 months ago +4

    I think we will see this sooner in more static-ish applications like real-estate virtual tours, and maybe some experimental games that happen in a static-ish environment, like a single house. But who knows, maybe it will be game-production ready in a year.

  • @_remblanc • 8 months ago +3

    I can imagine someone pulling PS1-era tricks, with models running over these at fixed camera angles to produce a scene. It would be a highly unorthodox workflow, though, and quite pricey at that.

    • @ekstrapolatoraproksymujacy412 • 8 months ago +2

      It works as blobs in 3D space, just like volumetric clouds or fire; no need for any "PS1-era tricks", it can coexist with standard mesh rendering no problem.

  • @carlosrivadulla8903 • 8 months ago

    what a time to be empathic!

  • @MrEnkelmagnus • 8 months ago +7

    Finally someone explained all this cool new tech in a way I understand.

  • @0rdyin • 8 months ago +1

    This tech could be a great alternative to traditional rasterization for backgrounds in interactive story games like 'Her Story'.

  • @OutrunCitizen • 8 months ago +5

    What we need is for someone to make a Gaussian Splatting modeling program.

    • @MaikoYT • 6 months ago

      Why model when you can simply create that object in real life and scan it in?

  • @Braindrain85 • 8 months ago +2

    Point cloud approaches are definitely very cool, and this one is even prettier. Though they always come with a couple of downsides when it comes to lighting, animation, etc.

  • @gabe2o2 • 8 months ago +1

    Pretty cool tech, but I do wonder how it would play with different shaders, fog effects, and lights we control in the scene. At least those are my immediate curiosities. Shaders because I'm a stylized boi, and shaders are how I accomplish that even when models are originally made in a more photorealistic manner. Otherwise, seeing how drastic changes in lights and fog density mix with the tech would truly be an awesome little demo to see.

  • @3govideo • 8 months ago

    I've been following since Luma AI got NeRF going, and it's amazing what we can get just by recording with our phones. Hope they can produce a lighter player soon. 🔥 Thanks for the teaching on high-end terms 🚀

  • @ScibbieGames • 8 months ago +7

    I was thinking about working on a Godot implementation, but because instanced rendering (with MultiMesh) can't be culled on a per-mesh basis, I was uncertain whether it was achievable with decent performance. I'd like to hear from more experienced Godot developers, though, cause I'm a noob.
    The reference rendering implementation is also fairly complicated; it uses CUDA to efficiently order the splats for rendering, which doesn't translate well into the Godot renderer to begin with.

    • @vitordelima • 8 months ago

      Some renderers use transparent quads aligned with the viewer (similar to Doom's enemies and items) to render this instead.
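
      A minimal sketch of that billboard approach, assuming NumPy and a hypothetical per-splat half-size (not from any actual renderer):

      import numpy as np

      def billboard_corners(center, half_size, cam_right, cam_up):
          # Build a camera-facing quad for one splat, Doom-sprite style:
          # offset the center along the camera's world-space right/up axes.
          r = cam_right * half_size
          u = cam_up * half_size
          return np.array([center - r - u,   # bottom-left
                           center + r - u,   # bottom-right
                           center + r + u,   # top-right
                           center - r + u])  # top-left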

    • @SMorales851 • 8 months ago +1

      One could use compute shaders to order the splats, but I don't think Godot allows you to perform compute operations on the main RenderingDevice, and there's no way to share buffers between devices without transferring the data to the CPU and then to the other RD, which would be slow.

    • @Polygarden • 8 months ago

      It's like a smart interpolation between different cameras. But this exact feature is also its disadvantage, as it's quite hard to remove the captured lighting from the source to create usable game assets. You have stretched splats which also contain the lighting (and in this case, only the lighting as you captured it). Depending on what angle you look from, they are stretched differently and as such are able to shade your scene correctly, but the "lighting splats" belong to your scene in the very same way as solid objects. It is probably possible to make it work, but you will have a hard time removing the lighting from them (and this is needed if you want to combine multiple assets and/or different scans). It's an amazing tech, but my guess is that it's rather useful for capturing true 3D photos for personal use cases.

    • @NeoShameMan • 8 months ago +2

      It's a point cloud, unstructured. Just bucket the splats into voxels and traverse the voxels from the camera, then pass the ordered data to the renderer.
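
      A minimal sketch of that bucketing in NumPy-flavored Python (hypothetical names; real implementations do this ordering on the GPU):

      import numpy as np
      from collections import defaultdict

      def order_splats(centers, cam_pos, voxel_size=0.5):
          # Bucket splat indices into voxels, then sort the far fewer
          # voxels back-to-front from the camera so alpha blending
          # composites correctly, instead of sorting every single splat.
          buckets = defaultdict(list)
          for i, c in enumerate(centers):
              buckets[tuple((c // voxel_size).astype(int))].append(i)
          far_to_near = sorted(
              buckets,
              key=lambda v: -np.linalg.norm((np.array(v) + 0.5) * voxel_size - cam_pos))
          return [i for v in far_to_near for i in buckets[v]]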

    • @NeoShameMan • 8 months ago

      @@Polygarden We aren't talking about meshes; it's not representing surfaces but a light field. That's why a traversal using voxels makes sense, as a light field query. We aren't trying to reconstruct volume.

  • @Reavenk • 8 months ago

    I could definitely see this gaining momentum for capture; it's way more practical than light fields.
    But it seems like an uphill battle for real-time uses. The lighting may be dynamic, but those dynamics are baked. And I'm guessing there is a lack of frustum culling, and all the particles need to be sorted to alpha-blend properly?

  • @zahir3d • 7 months ago

    Thanks for the video. Is it possible to export the result to a 3D format (FBX, OBJ...)?

  • @Patapom3 • 8 months ago

    SH stands for Spherical Harmonics: it's the precomputed lighting environment.
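
    More precisely, each splat stores spherical-harmonic coefficients that give its color as a function of the viewing direction; the reference implementation evaluates up to degree 3 (16 coefficients per color channel). In LaTeX form, the standard expansion is:

    c(\mathbf{d}) = \sum_{l=0}^{3} \sum_{m=-l}^{l} c_{lm} \, Y_l^m(\mathbf{d}), \qquad \mathbf{d} = \frac{\mathbf{x}_{\mathrm{splat}} - \mathbf{x}_{\mathrm{cam}}}{\lVert \mathbf{x}_{\mathrm{splat}} - \mathbf{x}_{\mathrm{cam}} \rVert}

    where d is the unit direction from the camera toward the splat.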

  • @bcmpinc • 8 months ago

    I'm amazed at how it captures specular lighting. It's quite visible on the roof of the church.

  • @constantinosschinas4503 • 8 months ago

    Gaussian Splatting seems to be just image mapping on feathered particles. The texture of each blob changes according to the viewing angle, picking the original photo that best matches the angle, or a close angle that does not block the view to each splat. File size must be quite big compared to traditional static texturing.

  • @dvelasco • 6 months ago

    Basically Maya's Paint Effects applied to a photogrammetry-derived point cloud. Ingenious!

  • @MylezNevison • 8 months ago +1

    Could you say splats are kind of like multi-shaped three-dimensional pixels doing reverse virtual 3D pixel mappings (or 3D pixel projections) based on photographic data?

  • @DanielNistrean • 8 months ago

    The M1 Max has a 24-core or a 32-core GPU. Just ordered one for a mix of mobile/hobby game development. Continuing to watch the video...

  • @DessertMonkey • 8 months ago +2

    Last time I saw graphics like this, they said "these are grains of dirt".

  • @Arisilde • 8 months ago

    This reminds me of how Media Molecule's "Dreams" works.

  • @studioopinions5870 • 6 months ago

    I think the best way to make use of this 3D Gaussian Splatting is to integrate it with AR glasses and combine animated characters into the scene. That way it will seem like a virtual Holodeck from Star Trek. Maybe the camera-tracking features of Blender or Unreal could make it possible to put moving characters into a story-like setting. Just my thoughts! Terry

  • @dukemagus • 8 months ago +4

    This will be crazy if you mix this tech with Google Maps/Earth data.

  • @y1QAlurOh3lo756z • 8 months ago +5

    Could this be used to "bake" extremely high-fidelity 3D scenes into point clouds and then splat them at runtime?

    • @altongames1787 • 8 months ago +1

      I understand what you mean, but why would you want something less performant?

    • @drdca8263 • 8 months ago +3

      @@altongames1787 Hm? The idea is that you could take some other rendering method that *can't be done in real time*, and use it to create a Gaussian splatting scene which *can* be rendered in real time.
      (People have tried this. It seems to work pretty well!)

    • @NeoShameMan • 8 months ago

      @@altongames1787 800 fps is quite performant in my opinion

  • @BrianDamageYT • 8 months ago

    Kind of reminds me of the landscape rendering technique from the old Ecstatica games.

  • @MrAuxiom • 7 months ago

    Wow, that reminds me so much of the Virtues in Cyberpunk.

  • @ardonnie • 8 months ago

    Seems like it's just a matter of generating enough point clouds and pairing them up with descriptions before we can make generative models that create new scenes based on just a description.

  • @deluxe_1337 • 8 months ago

    This is great for filmmaking.

  • @ScibbieGames • 8 months ago +2

    12:25 To be fair, you can do with much less if you don't require as high a quality. Also, it's pretty likely a lot of speed can be traded for lower VRAM usage.
    To train to a reasonable 7000 "iterations" you could probably get away with way less VRAM, and according to their own calculations it should be possible to train to reference-paper quality with just 8 GB of VRAM, but that hasn't been implemented.

    • @vitordelima • 8 months ago +1

      And the spherical harmonics seem to be overkill for something that is mostly just doing specular reflections.

  • @ozanyasindogan • 8 months ago

    Looks amazing and practical. If they can run some neural network on it to remove unnecessary particles and actually convert and split objects, that would be the endgame, I believe.

  • @WifeWantsAWizard • 6 months ago

    It occurs to me that pairing splatting with traditional modeling in video games could be the wave of the future, if the splats are restricted to distant background objects and detailed foreground objects are replaced by LOD-correct alternatives as the player's avatar approaches.

  • @ludologian • 8 months ago

    Thanks for sharing. I saw this weeks ago in another repo, sorry for not mentioning it.
    I want to implement a Unity volume point editor similar to the Nvidia workflow. It would also be a great thing to implement a delighting tool and neural decompression algorithms (long-term project goal), inshallah.

  • @paulwhiterabbit • 8 months ago

    This could be a thing in the future, but it will only prosper for viewing static 3D spaces, since 3D polygons are much more practical in the dynamic real-time environments that games need. I just hope the tech to stream humongous file sizes faster than what we have comes sooner.

  • @devilofether6185 • 8 months ago

    Gaussian splatting reminds me of the rendering engine in Dreams.

  • @seraaron • 8 months ago +4

    God, it looks like a dream when you get to the periphery of the scan.

  • @whtiequillBj • 8 months ago

    This reminds me of the system that is used in Dreams by Media Molecule.
    I know that it's PlayStation-specific, but maybe if there is enough push we can eventually get Dreams onto the PC.

  • @dzft3w • 8 months ago

    It works well on mobile too! Interesting.

  • @synapse349 • 8 months ago

    I wonder if one could use NeRF to build the point cloud used for splatting...

  • @juanme555 • 8 months ago

    It looks very interesting. It will be very interesting to see whether game developers choose to invest the time needed to properly fake photorealism through Gaussian Splatting, achieving high fidelity at a very low processing cost, or whether they will rather just use path tracing methods, which will be a lot more convenient but also a lot heavier on the user's hardware.

  • @HasanRx7 • 8 months ago +3

    I wonder if this tech could be used as a reference template in 3D modeling programs to model real-world environments, instead of using 2D reference images. It would be extremely useful to model on top of it, since it gives real-world scale and helps with prototyping the basic shapes and scale of the environment. I haven't been keeping up with 3D modeling tech lately, so I'm not sure if there is already a similar solution out there.

    • @steven11101010 • 8 months ago +1

      I think this is the more realistic use case - as a tool to improve current workflows.

    • @doomgb4994 • 8 months ago +1

      I'm rooting for this kind of usage as well; I don't want the tech to deprive me of the fun of modeling.

    • @mitch9254 • 8 months ago +3

      Sim racing and golf games, for example, have been doing this for at least a decade: using laser-scanned point clouds as 3D reference to make a polygonal version of a real-world location.
      If this splat method ends up producing results at least as accurate as lidar, but substantially cheaper and faster, then surely studios and modders will be all in, but again as a reference.

  • @joloppo • 8 months ago

    The spiky bits of light make it look exactly like when scenes load in Assassin's Creed... which was basically loading into a sim in the context of the game. Crazy.

  • @oleglinkov • 8 months ago +5

    Nah, without animation and adjustable lighting (day/night, dynamic lights) nobody is going to change their entire pipeline, not in gamedev anyway. Cool thing for virtual museums, home tours, etc., though.

    • @gamefromscratch • 8 months ago +2

      I think there is enough data that lighting could be implemented. That said, to mix it in with a traditional rendering pipeline, you'd end up with two lighting paths and that wouldn't be ideal.

    • @euden_yt • 8 months ago +1

      This came out like 3 months ago, and there's already an experiment showing that you CAN change the lighting. I've also seen someone implement animations in augmented reality, displayed on an iPhone. That's just 3 months of progress. Think of GPT-3 in 2020 vs GPT-3.5 in 2022 vs GPT-4 today.

    • @shlokbhakta2893 • 7 months ago

      @@euden_yt And just like that, 4D Gaussian splatting dropped lol

  • @shydun • 7 months ago

    I feel like this would be good if you could separate each prop, somehow tie the splats to a dummy mesh, and then decorate the level.

  • @leeoiou7295 • 8 months ago

    How does this handle collisions?

  • @nocultist7050 • 8 months ago

    I just want to use it on non-denoised ray-tracing output frames, with a depth pass for spatial data. Just let me see if it works...

  • @R1po • 6 months ago

    Can't really imagine a use in games. But for VR chatrooms, real estate bureaus, or VR sightseeing, sure.

  • @tombruckner2556 • 8 months ago

    Now I just need a Unity iOS plugin to create these models in real-time :)

  • @Gigacat2137 • 8 months ago

    That reminds me of how the models work in Dreams.

  • @eddiewalpole • 8 months ago +1

    To answer the question posed in the thumbnail: unlikely

  • @RedstoneNinja99 • 8 months ago +2

    I wonder if you could map a scene even faster by just taping a high-framerate 3D camera to a pole on your back.

  • @jonvdveen • 7 months ago

    In a way, Gaussian splats are like very large atoms: they come together to make everything in the scene.

  • @prozacgodgamedev • 8 months ago

    I think a really interesting real-world use case would be Google Maps. Google Maps is already kinda terrible up close... so they can't make it worse! Haha. But seriously, they already have a number of photos, and it would probably work 10 times better for lots of scenes. It's "just a data processing issue"... ish ;)

  • @goodideas5659 • 1 month ago

    A possibly similar idea for use in gaming, but a lot simpler, is to swap high-polygon models for a basic box outline shape and just update the quads (2 triangles) on each face with angle-adjusted photos depending on the player's view angle. This has been done recently in the new Ultra Engine, but I would use a more perfected version of the method so you don't see any clipping between photos and it's totally smooth. Maybe it's as simple as making a detailed sprite sheet and a shader to smoothly blend and move between images...?
    If a view direction is between two image angles, then get the shader to create the correct image based on the two closest ones... I think some smart people could achieve this.
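
    A minimal sketch of that two-closest-photos blend, assuming NumPy and unit direction vectors (a hypothetical helper, not from Ultra Engine):

    import numpy as np

    def blend_two_closest(view_dir, photo_dirs):
        # Find the two reference photos whose capture directions best
        # match the current view, and weight them by angular distance
        # so a shader can cross-fade between them.
        cos_sim = photo_dirs @ view_dir              # cosine of angle to each photo
        i, j = np.argsort(-cos_sim)[:2]              # indices of the two closest
        a, b = np.arccos(np.clip(cos_sim[[i, j]], -1.0, 1.0))
        w_i = b / (a + b + 1e-8)                     # nearer photo gets more weight
        return (i, j), (w_i, 1.0 - w_i)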

  • @diligencehumility6971 • 8 months ago +2

    We are 100% gonna see something along these lines in future game rendering.

    • @BadBanana • 8 months ago +2

      No we're not.
      For movies, yes.
      For presentations, yes.
      Not for games.
      Rendering like this is the opposite of why we have render pipelines.
      It's unfeasible to ask a user to download hundreds of gigabytes of data per scene.
      If you want to create game objects from these captures, then I'm sure that can be done.
      But no,
      you won't see games made like this, ever.

    • @eddiewalpole • 8 months ago

      I’d give it 1% tops for games specifically

  • @4.0.4 • 8 months ago +2

    One thing I'm not sure I understand is whether you could combine scenes, or how it handles reflections other than creating a mirror-universe room. Like, what if you have two mirrors back to back?

    • @ScibbieGames • 8 months ago +2

      It would show what was visible in the pictures it was 'trained' on.

    • @drdca8263 • 8 months ago

      @@ScibbieGames Makes sense, but it does make me wonder: what does it do if you record a scene where there's a mirror that isn't against a wall, and (in the training footage) have the camera go around the mirror? How will the quality of reflections in that case compare to when the mirror is up against a wall and it can use the "treat the mirror like a portal" trick?

    • @NeoShameMan • 8 months ago

      @@drdca8263 The Gaussians have directional color, so a backfaced mirror will probably duplicate the color of the background where they are not seen. But that's a nice observation. Remember, these don't represent surfaces and volumes but light rays.

    • @drdca8263 • 8 months ago

      @@NeoShameMan Sorry, I don't think I understand quite what you mean by "duplicate the color of the background where they are not seen".
      ... Also, in order to represent occlusion, don't the splats kinda sorta also have to represent volumes? That's what the opacity value handles, isn't it?
      Edit: to be clear, I do anticipate it still working somewhat when it can't let you "go into the mirror", on account of there being views in the training footage at the locations that "going into the mirror" would take you to, which don't look like what the mirror world would look like there.
      I'm just expecting that the quality would probably be somewhat lower, and wondering by how much.

    • @NeoShameMan • 8 months ago

      @@drdca8263 They use spherical harmonics, i.e. directional colors. These are used in games with light probes to illuminate a scene. They don't represent volume; they represent light rays from the source images. Basically, it's like each splat is a fuzzy, blurry cubemap, and the overlap of all of them reconstructs the view of a source image. It's like you took the 2D pixels of the source image, moved each one to where it's most probable, and merged the many pixels at the same place into a cubemap.

  • @daysetx • 8 months ago

    Is this something like an Unlimited Detail engine?

  • @jerkofalltrades • 8 months ago +1

    I was wondering: since these are just pictures used to create the point cloud data, why couldn't you set up a scene or model with many cameras (or just one that flies around the scene like a drone) and use that data to create the point cloud? Would it be economical? I don't know, I'm just curious if it's possible. Like, would there be a noticeable difference in data size between a Gaussian splat model and traditional polygons?
    Also, didn't the Sony game Dreams do something similar?

    • @kayobro1234 • 8 months ago

      It's being done: czcams.com/video/KriGDLvGDZI/video.html

    • @jmalmsten • 8 months ago

      Not sure about the Sony Dreams. But I do have hazy memories of reading that the Blade Runner point-and-click adventure did something similar.

  • @VideaVice25 • 8 months ago

    It's looking great, but it also means things I'd rather see die will survive and look even better in the future.
    Just imagine Metaverse + Nanite + Gaussian splats + VR... Hell awaits.

  • @namelessalias0007 • 8 months ago

    Imagine combining this tech with what was used in that GTA 5 photorealism-enhancing demo that came out a couple of years ago...

  • @claudiusraphael9423 • 8 months ago

    "It has a price tag of $137.43." -- "Yeah, we'll not be using that today ..."

  • @megasupernewbie • 8 months ago

    Just waiting for this to become standard monitor tech.

  • @cygnos4612 • 7 months ago

    I use the Luma AI plugin for Unreal Engine. Super easy to use and free. 😊

  • @jimj2683 • 7 months ago

    This is the future of Google Street View in 3D!

  • @linuxrant • 7 months ago

    If that were available in Godot, I would immediately implement it in my project. I have at least two really cool ideas for how to use this tech. I wonder if the splatting could be switched from Gaussian to other methods of splatting, for example... paintbrush splatting... ifkywim...

  • @WolfCatalyst • 8 months ago +2

    Bro, your computer chugs on everything. Most people can probably get 60 fps 😂
    UNF did a tutorial a while back called "Make a Full Game in 2 Hours" (or something similar) and used one of the free monthly cities. He was chugging too, but then he went into the properties of the UE editor, changed a couple of settings, and it was perfect. I don't remember what he changed, but there's definitely a fix.
    Looks like I found a new use for my drone though. Thanks!

    • @ScibbieGames • 8 months ago +4

      It's an unoptimized, experimental renderer for a generally unsupported rendering pipeline, implemented on top of Unity.

    • @atsignsarestupid • 8 months ago

      He probably changed "virtual shadow maps" to regular shadow maps; that's a one-click "triple the speed of Unreal Engine" button right there.

  • @stonekase • 8 months ago

    Finally

  • @BIFLI • 8 months ago +1

    Has anyone done this with the Zapruder film yet?

  • @kryob1 • 8 months ago

    Is the bike static?

  • @Homiloko2 • 8 months ago +4

    Is it even possible to apply lighting to this? I don't understand much about photogrammetry, but it seems pretty much immutable, e.g. you can't move objects, can't alter lighting and so on, which greatly reduces the applications for this.

    • @gamefromscratch • 8 months ago +3

      Yes and no, I believe.
      Yes, in that you have positional and color data and it's being rendered in real time by the GPU. You could certainly implement virtual lighting (assuming you are much, much, much better at math than I am).
      No, in a traditional pipeline like Unity's. This isn't rendered alongside the rest of the scene as I understand it, more in parallel. So if you added a light in the Unity scene, nothing would happen to the Gaussian splat you've imported. I do think you could make it work, but you'd essentially have two parallel lighting paths (I think).

    • @ScibbieGames • 8 months ago +4

      The colors and reflections are stored inside "spherical harmonics"; they hold the color value when looked at from different angles.
      You could technically, probably, somehow bake the lighting from your scene into these harmonics, but that would still be immutable.
      Doing that in real time, for some million points in space, might be a bit much; perhaps you'd need some sort of fragment shader, but for splats. lol
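
      A rough sketch of that per-splat evaluation on the CPU, degree 1 only for brevity (the reference implementation goes up to degree 3 and adds a 0.5 offset after evaluation; the constants are the standard real SH values):

      import numpy as np

      SH_C0 = 0.2820948  # degree-0 real SH constant
      SH_C1 = 0.4886025  # degree-1 real SH constant

      def sh_color(coeffs, view_dir):
          # coeffs: (4, 3) SH coefficients per RGB channel [DC, y, z, x terms]
          # view_dir: (3,) unit vector from the camera toward the splat.
          x, y, z = view_dir
          basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
          return np.clip(basis @ coeffs + 0.5, 0.0, 1.0)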

    • @vitordelima • 8 months ago

      @@ScibbieGames Surfels, which are similar to this, were used for realtime global illumination over triangles in the past. Still, it would require the use of lighting probes around clusters of splats, or some other simplification.

  • @MonsterJuiced • 8 months ago +2

    It's slow because it uses the particle system to render. It's the same with the Unreal Engine one: each splat is rendered using Niagara, and Niagara, even on the GPU, gets really slow when rendering point clouds or particle systems above 1 million points.

    • @vitordelima • 8 months ago +1

      The original demo uses screen-space 2D rendering.

    • @MonsterJuiced • 8 months ago

      @@vitordelima The original demo of what? Made by whom? If you watched the video, the guy even confirms it's using the particle system to render the points, and this scene is over 3 million points. Why did you upvote yourself?

    • @JunglismVFX • 8 months ago

      I wonder if you can cache them though. I've had quite a few intense Niagara systems running in a level, but I cached them and they worked well; this was for video production. Not sure how it would go in game dev.

    • @vitordelima • 8 months ago

      @@MonsterJuiced Of the technology being explained in the video. And I didn't, you are just insane.

  • @TiagoTiagoT • 8 months ago

    Wait, what do you mean NeRFs are not real time? I thought I heard of even some VR stuff using NeRFs, and I've definitely seen NeRFs rendered on web pages interactively...
    I know the original was slow, but there have been many improvements to the techniques since the original paper was published...

  • @minty4018 • 8 months ago

    Sorry, this isn't very related to the video, but have you ever released any games anywhere?

  • @hanniffydinn6019 • 8 months ago +1

    Folks, the real use here is cinematographers using real-life backgrounds in an Unreal volume. As the film "The Creator" proves, real-life backgrounds are what really matter. This allows real-life backgrounds to be used in volume filmmaking! Real-time NeRF backgrounds are the future of volume filmmaking! 🤯🤯🤯🤯😎😎😎😎👍👍👍👍

  • @insertoyouroemail • 8 months ago

    It might be a useful asset bake target.

  • @JonoSSD • 8 months ago +15

    I've been reading about this for a while and it looks like real innovation. Unlike real-time ray tracing, something extremely taxing on the GPU that was pretty much forced down our throats by a company out of ideas for how to charge thousands of dollars for products that aren't worth half their asking price in real performance.

    • @poetryflynn3712 • 8 months ago +10

      Ray tracing actually came from independent scientists in the 80s; we just needed the firepower to catch up for the consumer.

    • @JonoSSD • 8 months ago

      @@poetryflynn3712 I know, I even read a few of the scientific papers about it. It's really interesting stuff.
      The problem was Nvidia forcing the technology onto the consumer well before it was ready, as this "crazy new thing that totally makes new GPUs worth double last gen", even though we're now 3 generations in and most of their lineup can't even handle it without upscaling (which was essentially invented because games could barely reach 60 fps on flagships when using ray tracing at the time. Remember the RTX 20 series?).
      I'd say it'll be some 4-5 more generations before real-time ray tracing becomes viable without upscaling.
      Nvidia (and AMD, which does very little besides copy its competitor) should be working more closely with developers to better optimize current games, so they don't suck up 32 gigs of RAM all the time and need 200 GB of storage to run.
      But no, that's not flashy enough to sell thousand-dollar giant pieces of inefficient heatsink.

    • @jmvr • 8 months ago

      @@poetryflynn3712 And Gaussian splatting is in a similar boat, except the firepower has already been there for a while; it just wasn't used for consumer applications until now. Usually it was used for mapping out stuff like CT scans, and the original paper from 1993 ( web.cse.ohio-state.edu/~crawfis.3/Publications/Textured_Splats93.pdf ) used it to map out wind and clouds.

    • @philbob9638 • 8 months ago +5

      Real-time ray tracing is not something that was forced down your throat by a company out of ideas; it's a technology that has been promised and pursued for decades and will continue to be pursued for a while yet. What we have now barely scratches the surface.

    • @vitordelima • 8 months ago +2

      @@philbob9638 Then hardware-accelerated real-time ray tracing was forced down everyone's throats by a company out of ideas.

  • @HighlightRiel • 8 months ago

    So again, how is it better than photogrammetry? I saw no difference other than you saying it's higher quality. But I can get higher quality with photogrammetry in RealityCapture too.

    • @0LoneTech • 8 months ago

      It is a different target format for photogrammetry, reconstructing a light field with arbitrarily placed detail. It does notably better with small details like the spokes of a bicycle wheel, but still struggles a bit with reflections and partial transparency.

  • @developerdeveloper67 • 8 months ago

    I can't see this being used in games for anything but hyper-casual games on the PC (of which there are not many out there). Maybe in fighting games, due to having very few meshes being rendered? But even there, I think the performance isn't there.

  • @brodriguez11000 • 8 months ago

    Hu-po has a two-and-a-half-hour YouTube video on all the details.

  • @MR3DDev • 8 months ago +1

    The reason this doesn't work for games (yet) is that you can't clean it up. Unless I am missing something, this isn't geometry, and you will have a very small space.

  • @billfrank7906 • 8 months ago

    I heard "authoring". Does that mean it works with DOTS?

    • @zdspider6778 • 8 months ago

      It uses compute shaders to render a point cloud as particles. DOTS is irrelevant.

  • @sunbleachedangel • 8 months ago

    You can make a cool psychedelic game with this

  • @zodchiy3d • 8 months ago

    Has anyone tried running Unity in VR mode with this plugin? In theory it should work. I'll have to give it a try.

  • @NunSuperior • 8 months ago +1

    Novalogic omg. That's a name I haven't heard in a long time.

  • @ritzenhauf • 8 months ago

    How do you capture splats?

    • @gamefromscratch • 8 months ago +1

      There are a few apps (LunaAI or LumaAI, something like that, for iOS; there is a Poly-something one for cross-platform). There are also websites that can create them from uploaded photos or videos.

  • @astrahcat1212 • 8 months ago +4

    All we gotta do is kick down those polys and make it able to produce stylized, non-photorealistic 3D models and we'll be good.

  • @VitorGuerreiroVideos • 8 months ago

    Apart from backplates/matte paintings/skyboxes, I don't really see much use for it in games. It's static: no collisions, no interaction, etc.? I'm not sure, but it at least seems like a one-off, static, single-use type of thing?

    • @vitordelima • 8 months ago +2

      It doesn't seem too hard to implement collisions, because it's almost a collection of ellipsoidal particles.
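
      A minimal sketch of that, treating each splat as an ellipsoid and testing a point against it (hypothetical names, NumPy assumed):

      import numpy as np

      def point_hits_splat(p, center, rotation, scale, threshold=1.0):
          # A 3D Gaussian is an ellipsoid (orientation + per-axis scale),
          # so test the point's Mahalanobis distance in the splat's frame.
          local = rotation.T @ (p - center)    # world -> splat-local frame
          d2 = np.sum((local / scale) ** 2)    # squared Mahalanobis distance
          return d2 <= threshold ** 2          # within ~1 sigma counts as a hit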

    • @steven11101010 • 8 months ago

      @@vitordelima Even easier just to rough out the outline via an invisible mesh and use that for collision detection. The main issue with interactions would be lighting and destruction of the model; those interactions would have a much harder time looking "right".

  • @mmmuck • 8 months ago

    I just want a tool to convert these to a polygonal mesh and textures.