Unlocking the Potential: Mastering 3D Gaussian Splatting using Pre-rendered Images

  • Published 8 Jun 2024
  • This time we take pre-rendered images of 3D models and train a 3D Gaussian Splatting model on them. What are the benefits of that? Since Gaussian Splatting can display scenes in real time at very good quality, I think this could be a revolutionary new way to present 3D renderings. (A sketch of this kind of camera-orbit capture pass follows below.)
    Specs:
    These samples were rendered on an Nvidia RTX 3070 (8 GB VRAM).
    PC: Asus ROG, Ryzen 7, 64 GB RAM
    #gaussiansplatting #3dscanning #blender3d
  • Short and animated films
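
The capture pass the description refers to can be scripted. Below is a minimal sketch of one way to orbit a camera and render training views in Blender; the view count, orbit radius, camera height, and output path are illustrative assumptions, not the settings used in the video:

    import math
    import bpy
    from mathutils import Vector

    NUM_VIEWS = 120  # pick enough views for full coverage of the scene
    RADIUS = 6.0     # orbit radius, depends on scene scale (assumption)
    HEIGHT = 2.0     # camera height above the floor (assumption)

    scene = bpy.context.scene
    cam = scene.camera  # assumes the scene already has an active camera
    scene.render.image_settings.file_format = 'PNG'

    for i in range(NUM_VIEWS):
        angle = 2.0 * math.pi * i / NUM_VIEWS
        cam.location = (RADIUS * math.cos(angle), RADIUS * math.sin(angle), HEIGHT)
        # Aim the camera at the scene origin (-Z is the view axis, Y is up).
        look = Vector((0.0, 0.0, 0.0)) - cam.location
        cam.rotation_euler = look.to_track_quat('-Z', 'Y').to_euler()
        scene.render.filepath = f"//renders/view_{i:03d}"
        bpy.ops.render.render(write_still=True)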

Comments • 233

  • @gridvid · 8 months ago +25

    I'm so glad you did this Proof of Concept, it looks absolutely amazing... and thanks for the shout out 😊
    I hope this will be integrated in every 3D program at some point. There is so much room for optimization in this process; for example, the point cloud generation could be done automatically inside the engine. Also, there are already early tests of dynamically lighting and animating Gaussian splats. We've got an interesting tech right here that could revolutionize the way we render 3D scenes altogether 😊
    Keep up the fantastic work... 😊👍
    Btw... I wonder how fictional or cartoony content would turn out 🤯

  • @EmanuelSer · 8 months ago +9

    This is going to be a game changer! Clients always want to change camera movements as if it didn't take hours or even days to do so.

  • @cyber_robot889 · 8 months ago +1

    The answer was on the surface all this time! This guy's idea is genius. Thank you for pointing it out and showing how to do it!!

  • @khkgkgkjgkjh6647 · 8 months ago +63

    This is pretty insane. I think in theory it should be possible to render directly into 3D Gaussian splats, without ever needing a point cloud or a training process.

    • @m.sierra5258 · 8 months ago +25

      Not sure about training, but the point cloud could definitely be generated directly from the geometry (see the sampling sketch after this thread). Remember Gaussian splats are an extension of a point cloud; there is no way to avoid a point cloud.

    • @AlexTuduran · 8 months ago +2

      Sierra is right. The points are the basis of the splats; splats even keep their position and color while being fitted.

    • @AMessful · 8 months ago +5

      But if you use the geometry as a point cloud, does it carry the render quality, e.g. the reflections and refractions of the water bubbles?

    • @productjoe4069 · 8 months ago +3

      @AMessful I think you'd need to convert the mesh into a textured form first and derive the point cloud from that (using edge detection or similar). This cloud could probably be pruned a bit (more pruning means lower quality but faster). Then you could path trace from each point to get lighting/reflections etc. and set the spherical harmonics coefficients of the colour components (a rough sketch of that SH fitting follows after this thread). Just guessing here (I'm not a 3D graphics researcher), and I don't know if that's any more practical than just using carefully chosen probe cameras and running the entire original pipeline. One thing that would probably be better, though, is fewer floaters and more accurate edges, because we know exactly where each point is in 3D by transforming the texture coordinates by the geometry's transform.

    • @loookas · 8 months ago +2

      I thought this video was about that.
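
On the direct-from-geometry idea in this thread, here is a rough sketch of seeding the initial point cloud straight from a mesh, assuming the trimesh library; the file names and sample count are placeholders, not anything from the video:

    import trimesh

    # Placeholder file name; any mesh format trimesh can load will do.
    mesh = trimesh.load("model.glb", force='mesh')

    # Uniformly sample the surface; more samples = a denser initial cloud.
    points, face_idx = trimesh.sample.sample_surface(mesh, count=200_000)

    # Borrow a color per sample from the face it landed on. Textured meshes
    # are converted to per-face colors first; uncolored ones default to gray.
    try:
        colors = mesh.visual.to_color().face_colors[face_idx]
    except AttributeError:
        colors = mesh.visual.face_colors[face_idx]

    # Write a colored .ply that a 3DGS trainer could take as its initial cloud.
    trimesh.PointCloud(vertices=points, colors=colors).export("init_points.ply")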

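And on the path-trace-to-SH idea: a small numpy sketch of projecting radiance samples seen from one point onto the low-order spherical harmonics basis that 3DGS uses for color. The radiance() callback is a hypothetical stand-in for a path tracer:

    import numpy as np

    def sh_basis(d):
        """Real spherical harmonics up to degree 1 for a unit direction d."""
        x, y, z = d
        return np.array([
            0.282095,        # l=0
            0.488603 * y,    # l=1, m=-1
            0.488603 * z,    # l=1, m=0
            0.488603 * x,    # l=1, m=+1
        ])

    def fit_sh(radiance, n_samples=4096, seed=0):
        """Monte Carlo projection: c_lm ~ (4*pi/N) * sum L(d_i) * Y_lm(d_i)."""
        rng = np.random.default_rng(seed)
        dirs = rng.normal(size=(n_samples, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform on sphere
        coeffs = np.zeros((4, 3))                            # 4 SH bands x RGB
        for d in dirs:
            coeffs += np.outer(sh_basis(d), radiance(d))
        return coeffs * (4.0 * np.pi / n_samples)

    # Sanity check: a constant emitter should put nearly all energy in band 0.
    flat = fit_sh(lambda d: np.array([1.0, 0.5, 0.25]))
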
  • @ArminDressler · 7 months ago +1

    Olli, this is a fascinating technique and the results in the real-time animation look amazingly good. Especially when you consider that it was the first attempt. Congratulations!

  • @DaveBrinda · 2 months ago

    This is amazing… this unlocks so many creative possibilities. Thanks for sharing!

  • @fhmconsulting4982 · 8 months ago +5

    This 3DGS could actually be the "front end" to so many tools that have limited interfaces, because the finished visual is a great approximation of reality. Every geographic information system, facility management system, BIM\Fabrication package & surveying application could use this technology. The limiting factor for so long has been differing file formats and interfaces; this could be the 3D printing press that makes digital data almost agnostic. 3D has come a long way since the 1980s, but it is still the same methods and tools, just faster. Using your room as an example, you could have all the services, construction materials, fabrication plans, building approvals, energy calculations etc. use the same 3DGS format to display information on any pixel-generating viewer. Exciting times.

  • @JHSaxa · 8 months ago +1

    Wow! This is amazing! One of the most impressive things I've seen all year. Thanks for sharing this experiment.

  • @technomaker777 · 8 months ago +1

    I was thinking about this as soon as you uploaded your video about GS. Very interesting!! Great videos!

  • @davebrinda8575 · 2 months ago

    This is amazing! 🤯 Thanks for sharing....will follow with interest!

  • @resetmatrix · 8 months ago +1

    Great job! I think this is the future of 3D real-time graphics, and surely a future evolution of 3D Gaussian Splatting will incorporate animation, with animated elements inside the scene.

  • @tokyowarfare6729 · 8 months ago +1

    I'm amazed how casually you record video and still get these cool results. This test was also super, super cool.

  • @ilyanemihin6029 · 2 months ago

    Amazing idea and implementation, thanks!

  • @havocthehobbit · 8 months ago +1

    That's a bloody brilliant use case for GS that I never thought possible.

  • @semproser19 · 8 months ago +5

    I'm loving these.
    The most obvious gaming use case would be real-time cinematics.
    Cinematics are often one of two things:
    - an in-game cinematic with obviously game-level graphics/lighting/models, or
    - a prerendered cinematic with much higher quality assets and methods that can't be done realtime.
    If you're using a mix of gaussian splatting for the environment and then spending all your traditional rendering power on your moving parts/characters, then you could have cinematics nearly as pretty as pre-rendered cinematics. And that's just until we get animated splats.
    This way you can have faultless splats too, because the camera would only ever use the path the splat input references took.

  • @sunn2000 · 8 months ago +1

    I was thinking about this ... thanks for posting!!! I know what I'm doing this weekend! I'm in 3ds Max and Corona.

  • @lescreateurs3d · 8 months ago +1

    Amazing, thx for your experiments!

  • @konraddobson · 8 months ago +1

    Games were my first thought too. Very interesting!

  • @diaopuesto7082 · 8 months ago +1

    I now have infinite ideas thanks to this video.

  • @matslarsson5988 · 8 months ago +1

    Very interesting stuff. Keep up the good work!

  • @harriehausenman8623 · 8 months ago +3

    Fascinating idea and great video! Wonderful accent 🤗 The effort put into clear speech is much appreciated! A real problem for most "native speaking" channels 😉

  • @railville · 8 months ago +7

    Fantastic. Makes me wonder if you could go to a film like the Matrix and use a bullet time scene as your image base and render that to splats

  • @pabloderosacruz · 8 months ago +2

    this is the future of 3d

  • @Because_Reasons · 8 months ago +1

    Looking forward to seeing how this progresses!

  • @AlexTuduran · 8 months ago +2

    Natural next step. Well done! I'm already attempting to fit the splats using genetic algorithms. Fingers crossed.

  • @Geenimetsuri · 8 months ago +1

    Brilliant stuff! Having well above real time rendering of a complex "photorealistic" 3D landscape is nothing less than sorcery!
    I also wonder what the uncapped FPS would have been.

  • @badxstudio · 8 months ago +14

    Olli, fantastic video and a great showcase of the use case! Even with NeRFs, a friend of ours managed to get a 3D scene created from a Spider-Man game, and it was awesome! We were thinking of testing that with Gaussians to see how it would turn out. Clearly it is going to look awesome!!

    • @BlackAladdin_ · 8 months ago

      Yeah y’all definitely should glaze our life out with that. Yall the reason why I know about nerfs.

    • @tyler.walker · 8 months ago

      Ay, Bad Decisions! I just got finished watching you guys' video on this tech! It was really great, too!
      Scanning a 3D scene from a game is something I've wanted to try since I first heard about NeRFs, but I haven't known the best way to go about doing it. Has your friend made a video or documented how he did it?

    • @badxstudio · 8 months ago +1

      Hey mate

    • @OlliHuttunen78 · 8 months ago +1

      You can also check my video about that topic here: czcams.com/video/GnJOFbEwXrw/video.htmlsi=VBTzPpHA3FB8VSox

    • @GooseMcdonald · 8 months ago +1

      Do it with the first Matrix movie gun scene :)

  • @RandomNoise · 8 months ago +2

    Well, this is something interesting and very useful

  • @Erindale · 8 months ago +2

    Fantastic experiment! It'll be fantastic once we can go directly from a scan or DCC into real-time Gaussian splats. I wonder how we could do dynamic lighting within Gaussian splats, though. Right now it would probably be faster to bake lighting information into your 3D model's textures, so you can get Cycles-style lighting in real time in Eevee. Looking forward to seeing how this tech progresses!

  • @mujialiao6088 · 5 months ago

    This is a game changer for the future of the VFX industry.

  • @ThomasAuldWildlife · 8 months ago +1

    This is getting Nuts!

  • @ALERTua · 8 months ago +1

    OK, so this might be a game-changer for interior design. My wife renders her Revit interiors using Corona. She positions the camera, sets the lighting, and makes a rendered shot. Corona renders only on the CPU, so this takes up to an hour and a half per shot. There are at LEAST 20 shots per project, which means 30-40 render iterations. If she could instead capture the whole project, just fly the camera around, and take screenshots of such perfect quality, it would drastically lower the time and electricity it takes to finish a project visualization. I would love to see how your project turns out. I can see big commercial (or open-source) potential in it! I would be glad to help if this goes open-source, and would gladly consider buying it for my wife if it's commercial!

  • @estrangeiroemtodaparte · 8 months ago +1

    Awesome content!!

  • @OllieNguyen · 8 months ago +1

    Amazing!

  • @darviniusb · 8 months ago +5

    I wonder if the old irradiance map could be converted to Gaussian splats, or the same technique could be used to generate a perfect GS scene.

  • @hamidmohamadzade1920 · 8 months ago +1

    Wow, what a great idea!

  • @KyleCypher · 6 months ago

    I would love to see someone use the 3D model to inform the machine learning model about the generated point cloud, to help remove noise/ghosts and make the models more accurate. Or perhaps create the point cloud directly from an engine.

  • @nolanzor · 8 months ago +1

    very cool!

  • @danielsmithson6627 · 8 months ago +1

    THIS VIDEO WAS MADE FOR ME!!!!

  • @pocongVsMe · 8 months ago +1

    awesome content

  • @farhadaa · 8 months ago

    I was thinking this same thing, where some intense renders could be viewed in real time.

  • @merion297 · 8 months ago

    Yay, finally, someone tested it! 🙏 Now replace the Blender Render(!) step with a generative AI process, like Runway ML or ControlNet, and use just a 3D OpenGL output (as a "3D Skeleton") from Blender where colors correspond with different object prompts for the generative AI. Or any similar process you can make. Consistency is key though.

  • @Bluetangkid · 6 months ago

    This is really cool. I'm sure someone will move to generating these splats directly from Blender, avoiding the loss of detail from generating frames and then training the splat. The renderer knows the geometry, camera position, etc., and could provide much more useful detail when creating each surface instead of inferring it. Interested to dive deeper into this.

  • @realthing2158 · 8 months ago +2

    Great test, this is the kind of content I'm excited about right now. I'm holding off trying it myself though until I can get a 4090 graphics card.

  • @dialectricStudios · 8 months ago +1

    Siiiiiiick. I love the future

  • @NeoAnguiano · 8 months ago +1

    I kinda imagine there must be a more direct way to convert from a 3D model to the "point cloud", skipping the render, but it is indeed a very promising technique.

  • @MrGTAmodsgerman · 8 months ago +1

    Basically, future games could render a high-quality interior similar to traditional texture/light baking (as in Unreal Engine), but then use it as Gaussian Splatting, with a defined playable area that can't be exited, because beyond it the Gaussian Splatting would blur. That would offer an overall higher-quality photorealistic experience than Unreal Engine's light bakes, for example. But the question is whether it could be merged with interactions inside that world so that everything looks right, as the light on interactable props has to be rendered somehow. Also, over the years, actual photorealistic light baking hasn't seen much use in games, mostly only by archviz companies. Interesting what this could offer then. I guess film production or product visualization benefits more from it.

  • @TheShoes43 · 8 months ago +1

    Thanks for doing this video. I always assumed it would work in theory but was lazy and never created a scene to test it :). Great stuff.
    What about the refractions in the water? Do those still hold up and change with the view?

  • @sugaith · 8 months ago +1

    impressive

  • @Damian-rp2iv · 7 months ago +3

    I instantly thought about this on the first video I saw, especially as here you've taken the "classic" route. But what if points could be trained separately and then put together with some kind of 3D tool (a bit like the simple color-to-image AI we had a while ago)?
    I really think the still-images-to-point-cloud step is not the big thing (it's the same as classic photogrammetry, obviously), and that 3DGS could lead to even crazier results with other kinds of source data.
    But first, the obvious next step would be to generate all the needed points of view automatically and go straight from a 3D rendering with maxed ray tracing to point clouds, since the 3D engine can identify points by itself. I wonder how much better points would help this tech.

  • @LianParma · 4 months ago +1

    Very cool!!! I would love to try processing the room scene on my 3090 to see if it gets to 30k steps.

  • @antoinelifestyle · 6 months ago

    You are a genius

  • @r.m8146 · 8 months ago +1

    awesome

  • @SuperCartoonist · 8 months ago +2

    Maybe in the future 3D over-the-air broadcast will exist or 3D live streaming surveillance.

  • @MrEmiXaM · 6 months ago

    Amazing

  • @lowellcamp3267 · 8 months ago +1

    With computer-generated geometry like in this example, I wonder if it would be better to ‘manually’ place gaussians according to real scene geometry, rather than using photogrammetry to place them.

  • @cjadams7434 · 8 months ago

    This is where an M2 Mac Studio with 192 GB of shared VRAM and a "neural engine" has an advantage.

  • @3dvfxprofessor · 8 months ago +1

    This idea for using #GaussianSplatting is so obvious. Brilliant! #b3d

  • @ATomCzech · 8 months ago

    Very interesting workflow, and a great idea to try it this way. Btw, Unreal Engine can compute lightmaps for every surface, calculating how much light falls on each one using path tracing; then you can basically change the camera location and get an instant ray-traced result. Of course it doesn't work when there are moving objects or lights in the scene, but for a static scene like this it is awesome. It would be great if Blender could do the same.

  • @cobracoder6123 · 8 months ago

    It would be absolutely incredible if this could be incorporated with VR

  • @impactguide · 8 months ago +2

    Hey Olli! Thanks for the super cool videos you make, they are always a treat! Do you have an idea of the lowest amount of VRAM and CPU power necessary to view a Gaussian Splatting scene, or whether there is still room for optimization on that front? I have a slight fascination with "beautiful graphics through optimization and new technology on old hardware", and I think it would be super cool if something like this could run on (very) low-end hardware, like an older-generation gaming console.

    • @OlliHuttunen78 · 8 months ago +2

      Yeah! I haven't tried it yet, but it seems that Gaussian Splatting can be viewed even on older and less powerful devices. Creating a Gaussian Splatting model requires an RTX-level card, but just viewing a pre-trained model could work on a less powerful PC as well. The SIBR Viewer, for example, works on other cards too; I tried it on my nephew's gaming PC with a basic GTX card, and it worked very well! It would be interesting to find out how low-end a machine the viewer app would still run on.

    • @NeoShameMan · 8 months ago

      Unless you use compute, it's equivalent to a massive particles-only scene, which means tons of overdraw. If we can fit the code of a single splat inside the VU2 of the PS2, it's probably possible on PS2 with a cap at 4 million particles, lol. Overdraw, and therefore fill rate, is the main limiter for non-compute (compute would rasterize directly).
      Also, current scenes are probably the equivalent of using raw 4K texture density everywhere. We can probably find ways to optimize and compress a scene: first reduce the "resolution" by dropping some particles, then find a way to compress and sort the rest into a nice format. If we could get rid of transparency too... I wonder how bad splats look without transparent shapes; it might be good enough for some use cases.
      If I were to try, I would first sort the particles into a 3D grid, so that we can march the grid per ray and fetch only the relevant splats (see the grid sketch after this thread). Then I would try to work out how to mipmap every set in the grid, which could then be loaded based on distance. Then I would trim the grid of empty space and find a compression method for chunks of the grid.

    • @impactguide · 8 months ago

      @NeoShameMan By coincidence, I saw an article on Hacker News this morning explaining that, using clustering, it is possible to quite easily reduce the file size of a Gaussian Splatting scene by about a factor of 10 without much loss of quality (see the sketch after this thread). You can reduce even further, but then you start noticing artifacts, although I still think the images look pretty good. The author notes that the bike demo scene from the original paper used 4 million gaussians... I haven't read the original paper yet, nor do I know a lot about real-time rendering, but if a gaussian splat equals a single particle, then 4 million particles + reduced file size might not be outright impossible on the PS2, although you would probably also have to optimize the renderer like you described.
      I don't think you can post links on YouTube, but the article was "Making Gaussian Splats smaller". The author's idea seemed to be to reduce the file sizes so that Gaussian Splatting scenes could be used in Unity. Pretty cool!

    • @NeoShameMan · 8 months ago

      @impactguide My assumption about gaussians on PS2 is based on GameHut's video "Crash & LEGO Star Wars 'Impossible' Effects - CODING SECRETS" (ytcode: JK1aV_mzH3A) and on Aras's Unity implementation, which uses billboards. Gaussian rendering on a billboard is analogous to a stretched image, so we would simply stretch the billboard (using the matrix) rather than do complex gaussian rendering, and let the alpha saturate by itself. But I think the bottleneck would be the bandwidth to pass the unique positions; in the video everything is procedural, so it can reach peak rates more easily. And I have no idea how feasible the spherical harmonics are, but a close approximation should be possible anyway.
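
The grid idea sketched above (bucket splats into cells, then fetch only what a ray visits) is easy to prototype on the CPU; a minimal numpy sketch, with the cell size and data purely illustrative:

    import numpy as np
    from collections import defaultdict

    def build_grid(centers, cell_size=0.5):
        """Map integer cell coords -> indices of the splats inside each cell."""
        cells = np.floor(centers / cell_size).astype(np.int64)
        grid = defaultdict(list)
        for idx, cell in enumerate(map(tuple, cells)):
            grid[cell].append(idx)
        return grid

    centers = np.random.rand(100_000, 3) * 10.0  # stand-in for real splat centers
    grid = build_grid(centers)
    # A renderer would march each ray through the cells it crosses and only
    # sort/blend the splats listed there, instead of touching all 100k.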

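And the clustering compression mentioned in the thread can be approximated as vector quantization of the per-splat color/SH data; a sketch assuming scikit-learn, with all sizes as placeholders:

    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    sh = np.random.rand(100_000, 48).astype(np.float32)  # stand-in SH data

    # Cluster the SH vectors into a small shared palette; each splat then
    # stores a 2-byte palette index instead of 48 floats.
    km = MiniBatchKMeans(n_clusters=4096, n_init=3).fit(sh)
    palette = km.cluster_centers_.astype(np.float16)
    indices = km.labels_.astype(np.uint16)
    # Per-splat color storage drops from 48 x 4 bytes to 2 bytes plus the
    # shared palette, roughly the order of savings the article reports.
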
  • @marceau3001 · 8 months ago +4

    Very interesting.
    I indeed believe the point cloud could be produced directly from the 3D geometry, tessellated to a high poly count with the texture baked to vertices. It should be faster than rendering all the images, and there wouldn't be any forgotten or occluded portions of the model.
    Thank you for your good videos.

    • @mortenjorck · 8 months ago +3

      The part that’s missing is still the path tracing. Though maybe there’s a way to bake that in as well?
      Failing that, I wonder if there’s a way to write an algorithm to calculate the optimal camera path through a scene to maximize coverage while minimizing redundancy.

    • @TimmmmCam · 8 months ago

      @mortenjorck Yeah, I think you're right, but isn't something like 99% of Cycles render time effectively just calculating lighting? I can't see why you wouldn't get equally good results just by using Eevee with baked lighting.

  • @mat_name_whatever · 8 months ago +1

    It's so strange to see a rendered image being turned into a point cloud heuristically rather than with the already available, 100% accurate depth buffer information.
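
For reference, a sketch of what using the depth buffer would look like: unprojecting every pixel of a rendered depth map into a world-space point, assuming pinhole intrinsics and a known camera-to-world matrix (all names here are illustrative):

    import numpy as np

    def unproject(depth, fx, fy, cx, cy, cam_to_world):
        """Turn an HxW depth map into exactly placed world-space points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx               # pinhole back-projection
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
        pts = pts @ cam_to_world.T          # camera space -> world space
        mask = depth.reshape(-1) > 0        # drop background / empty pixels
        return pts[mask, :3]

    # Each valid pixel becomes a perfectly placed point whose color can be
    # read straight from the corresponding beauty render.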

  • @DREAMSJPEG · 8 months ago +4

    Hi Olli, love your work; I really appreciate your experiments.
    I have a question somewhat similar to what you address in this video, but with pre-existing point cloud data.
    Do you think it would be possible to create a Gaussian Splatting model from point cloud data like .las, instead of from images?

    • @OlliHuttunen78 · 8 months ago +2

      I'm not sure. Training also needs the source images to generate the Gaussian Splatting model; the point cloud itself is not enough. The source code uses .ply point cloud files, but I think any point cloud format can be converted to .ply (see the sketch after this thread).

    • @DREAMSJPEG · 8 months ago

      @OlliHuttunen78 Thank you for the reply - appreciate it :)
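
On that conversion: the coordinates (and colors, when the point format carries them) translate to .ply in a few lines. A sketch assuming the laspy and open3d packages, with file names as placeholders; as noted above, the trainer still needs source images alongside the cloud:

    import numpy as np
    import laspy
    import open3d as o3d

    las = laspy.read("scan.las")
    xyz = np.vstack([las.x, las.y, las.z]).T

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    try:
        # .las stores 16-bit color channels when the point format includes RGB.
        rgb = np.vstack([las.red, las.green, las.blue]).T / 65535.0
        pcd.colors = o3d.utility.Vector3dVector(rgb)
    except AttributeError:
        pass  # this file carries no color channel

    o3d.io.write_point_cloud("points3d.ply", pcd)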

  • @user-bs3jd2hj3z · 8 months ago +1

    Very good, I must try this Python stuff on my 4090.

  • @jaakkotahtela123 · 8 months ago +1

    This technique could also be used to build 3D games. You could create absolutely gorgeous games, even though it has many limitations compared to a polygon-based approach. For example, a flight simulator could work quite well, or a driving game. You could make really realistic-looking game worlds: first render a highly detailed, beautifully lit world in, say, Unreal Engine, and then convert it into a point model like this.

  • @Nikola_Botev · 8 months ago

    Very informative video! I wonder what GPU you used to achieve this result in such a short time?

  • @harriehausenman8623 · 8 months ago +4

    Shouldn't it be easier* to generate the point cloud directly from the 3d model? 🤔
    *(veeery relative word here 😄)

    • @matbeedotcom · 8 months ago +1

      I agree. I like NeRFs, but couldn't you just run this in real time using Unreal? I'm unsure of the benefit of NeRF here.

    • @AnotherCyborgApe · 8 months ago

      @matbeedotcom I see it as a "two papers down the line" situation. This is the best nerf-like thing available today, significantly better than what was available 2 months ago.
      On the practical side of things, we have path tracing combined with AI-based denoising/upscaling/ray reconstruction that's made its way into mainstream games, and that's likely to remain the "sane" path to high quality real time rendering for a while.
      But it's easy to start daydreaming about where successor technologies of gaussian splatting might take us, and this video gives us a little taste of what could be, even with its awkward "ok let's just compute 260 renders and pretend we don't know the geometry and compute it again" approach.

    • @NeoShameMan · 8 months ago +1

      It's very relative. Gaussian splats encode light rays, not mesh surfaces. The issue is the placement of the gaussians: they are triangulated using 2D images. We could probably triangulate by casting sampling rays from sampling points and look for a minimizing function to figure out the best placement. 3DGS is very similar to a light-probe volume, with the caveat that it's the superposition of gaussian splats that creates the final colors. I see the superposition as a big problem, but placement is probably close to an ambient-occlusion problem.

    • @harriehausenman8623 · 8 months ago

      👍 @NeoShameMan

  • @RoyMendezCastro · 8 months ago

    Excellent!
    Is it possible to export the point cloud and convert it into a mesh?
    Our interest is in having a 3D model of the terrain, from which we can make the modifications and take the readings needed for construction designs.
    We currently do this with photogrammetry from drone photos; with this technology we could do it from a video.

  • @narathipthisso4969 · 7 months ago

    Wow 😮

  • @JuXuS1 · 3 months ago

    great

  • @carpenterblue · 8 months ago

    Gosh, I don't care about realism; what I want is to hand-paint the world in 3D!

  • @technomaker777 · 8 months ago

    Please make a tutorial on how to make a GS model, including how to install the software and what hardware you need!

  • @Mr_i_o · 8 months ago +1

    over 9000!

  • @unadalabs · 5 months ago

    Hey there, you have a very effective way of doing it. I would love to have the ready-made scripts for this.

  • @spyro440 · 8 months ago +1

    This might become big...

  • @longwelsh · 8 months ago +6

    Great video. I'm surprised by the lack of ports of these tools to the Mac, since the unified memory architecture means one could give the graphics 50-60 GB of free memory. I still have hopes for projects built on Taichi, as their backend could hopefully be ported from CUDA to Vulkan.

  • @philipyeldhos · 7 months ago

    I think that right now, within the scope of this project, 360 videos could be considered for training data: much more information, and it should fill in the gaps nicely. In the future it should be possible to extract texture information from the 3D file and use it along with the point cloud information to skip the pre-rendering entirely.

  • @jad05 · 8 months ago

    This, this is what I first thought of when I found out about 3D Gaussian splatting. Now I wonder: how long until we get this, but animated?

  • @ericljungberg7046 · 8 months ago +1

    Do you think this would work if rendered in 360? I'm currently working on a project where I'm rendering out a bunch of very realistic 360 images for a restaurant, so cooks and personnel can train on how to use the space before it's built. The idea struck me while watching this video that it would be cool to show the client the entire space in real time.

  • @joseph.cotter · 8 months ago

    Interesting for its future potential in generating real-time 3D under specific parameters, but currently you would get better results converting the file for a real-time 3D engine like Unreal.

  • @italomaria · 8 months ago +1

    I am absolutely fascinated by the potential of this stuff. I had a couple questions - if anyone has ideas or answers that'd be awesome. 1) Does Gaussian splatting work only on freeze frame moments, or would it be possible to record a real-world event (say a ballet dancer) from multiple fixed angles, then playing it back in 3D? 2) Would it be possible to integrate this with AR or VR and able to walk around pre-recorded events?

    • @NeoShameMan · 8 months ago

      1) Yes, but it's costly: you have to capture every frame from enough views, and each frame costs the same as a single 3DGS. There are probably ways to compress, but you would have to invent them or hire a programmer.
      2) Yes, it's been done. The problem is the raw file size; see whether using fewer splats can retain good enough quality to fit in memory.

    • @OlliHuttunen78 · 8 months ago +2

      I recommend following Infinite-Realities on Twitter (X). They have done experiments with animated sequences in Gaussian Splatting, and they have a special custom version of the SIBR viewer which I would be very interested to get my hands on. Check for example this: x.com/8infinite8/status/1699463085604397522?s=61

    • @italomaria · 7 months ago

      @OlliHuttunen78 Oh man, thanks for that recommendation. Love your work; super excited for all the insane stuff this new tech is opening up.

  • @FredBarbarossa · 8 months ago +3

    Really interesting, I need to test this at some point, do you know if this works with AMD cards as well?

    • @OlliHuttunen78 · 8 months ago +1

      Well, this Gaussian splatting generation relies very heavily on CUDA, so I don't really think it would work on AMD cards. At least not yet. But the pre-calculated Gaussian model can at least be viewed on AMD cards as well. At least I would think so.

    • @FredBarbarossa · 8 months ago

      @OlliHuttunen78 Thanks. I know that on Linux, using ROCm, you can run PyTorch, but not on Windows yet as far as I know.

    • @Kaalkian · 8 months ago

      @OlliHuttunen78 This was an awesome video!!! Is there a repo of pre-calculated models that can be viewed? What format are these files? If it's *.ply, can we use any viewer?

  • @andereastjoe · 8 months ago +1

    Wow, FANTASTIC!!! This is definitely what I've been waiting for. I do architectural visualization and some interactive walkthroughs using UE5.
    My question is: is there any way to view this in VR?

    • @Thats_Cool_Jack · 8 months ago +1

      At the moment you can import it into Unity.

    • @andereastjoe · 8 months ago

      @Thats_Cool_Jack Cool, thanks for the info.

    • @OlliHuttunen78 · 8 months ago +1

      Yes. There is a recently developed plugin for Unreal in the UE Marketplace. It is not free like the Unity plugin on GitHub, but with it you can probably get this working in Unreal using Unreal's VR templates.

    • @andereastjoe · 8 months ago

      @OlliHuttunen78 OK cool, thanks for the info.

  • @henriidstrom · 8 months ago +1

    Interesting topic! But it would have been even better if you had included some computer specifications, for example which GPU and how much VRAM it has.

    • @OlliHuttunen78 · 8 months ago +2

      GPU: Nvidia RTX 3070, 8 GB VRAM
      PC: Asus ROG, Ryzen 7, 64 GB RAM

  • @zenbauhaus1345 · 7 months ago +1

    genius

  • @JfD_xUp · 4 months ago

    This experiment is great; I will follow your future tests.
    I quit the computer graphics world, but I still keep an eye on new techniques.
    Just one point: NeRF and 3D Gaussian Splatting are not exactly the same technique (noticed when I saw D:\NeRF\gaussian-splatting).

  • @viniciusvmrx2845 · 8 months ago

    The experimentation phase is always amazing. But at the moment it's faster to bring the model into Unreal Engine if we need high quality in real time.

  • @niiranen · 8 months ago +1

    Fascinating, or as we say: todella mielenkiintoista. I haven't looked into Gaussian Splatting before, and I was thinking it would be great if you could export, say, that room as a runnable file that a client could open on their own machine and fly the camera around. How big are the files that the Python process computes?

    • @OlliHuttunen78 · 8 months ago +1

      This is very new technology; the source code was released at the beginning of September 2023, and there are still very few viewer applications. 3DGS models can get quite big, 800 MB to 1.3 GB, but compression methods are already being developed that shrink the dataset significantly. It will be interesting to see where this goes.

  • @rotors_taker_0h · 8 months ago +1

    It should be possible to produce the point cloud without the intermediary COLMAP step; Blender already has all that information, and it is redundant to recover it through the noisy process of rendering images and reconstructing back to 3D.
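
A sketch of that shortcut: since Blender knows every camera pose exactly, one could write COLMAP-style cameras.txt/images.txt directly and skip feature matching entirely. The paths and naming below are assumptions, and the axis flip between Blender's camera (-Z forward, +Y up) and COLMAP's (+Z forward, -Y up) is the usual pitfall:

    import bpy
    from mathutils import Matrix

    scene = bpy.context.scene
    cam = scene.camera
    w, h = scene.render.resolution_x, scene.render.resolution_y
    # Focal length in pixels (assumes a horizontal sensor fit).
    fx = cam.data.lens / cam.data.sensor_width * w

    with open("/tmp/cameras.txt", "w") as f:
        f.write(f"1 PINHOLE {w} {h} {fx} {fx} {w / 2} {h / 2}\n")

    flip = Matrix.Diagonal((1, -1, -1, 1))  # Blender camera axes -> COLMAP
    with open("/tmp/images.txt", "w") as f:
        for i in range(scene.frame_start, scene.frame_end + 1):
            scene.frame_set(i)
            w2c = flip @ cam.matrix_world.inverted()  # world -> COLMAP camera
            q, t = w2c.to_quaternion(), w2c.to_translation()
            # IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME, followed by the
            # (here empty) line where COLMAP lists 2D feature observations.
            f.write(f"{i} {q.w} {q.x} {q.y} {q.z} {t.x} {t.y} {t.z} 1 view_{i:04d}.png\n\n")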

  • @UcheOgbiti · 8 months ago

    MetalFX, DLSS & FSR have proved that image upscaling is the future of real-time graphics. Could this same method be used to build a real-time engine comparable to Cycles, Arnold, Octane, etc.?

    • @NeoShameMan · 8 months ago +1

      Not really. It doesn't solve real-time lighting; it's more like baking. So you still have to render your scene normally into a few shots and bake it into 3DGS, like he did.

  • @shadowproductions969 · 8 months ago +1

    60 fps looked like it was locked to that; vsync maybe? Pretty great tech. I have seen many people getting 500-600 fps on photoreal-looking 3DGS models. Truly the beginning of the future of 3D world capturing.

  • @MDNQ-ud1ty · 8 months ago

    A better way would be not to go through 2D image space but to compute the gaussians directly from the scene. It would be faster and more accurate: convert the geometry directly to gaussians by sampling, then do the training to reduce their number.

  • @SuperIslandFlyer · 8 months ago

    How long did a single frame of the room take to render?

  • @djayjp · 8 months ago

    Is this more efficient than just baking the GI lighting...?

  • @dsamh · 8 months ago

    Like... it's cool... but what is it for?
    Are we going to see movies shot with some future-tech 3DGS cameras, maybe?
    Or stronger AIs re-interpolating into an actual hybrid 3DGS-and-vector format, or "baking"?

  • @BardCanning · 8 months ago +8

    Is there any reason why this wouldn't be used to make games run with prerendered raytraced scenes at a high frame rate?

    • @OlliHuttunen78 · 8 months ago +4

      Absolutely! I can't think of any reason why it couldn't.

    • @andreasmuller5630 · 8 months ago +6

      @OlliHuttunen78 It's blurry, it's not dynamic, it has a very big memory footprint, and it's not even clear to me that it's faster than something similar done with real-time RT in Unreal.

    • @3DProgramming · 8 months ago +1

      I guess some problems could also arise with collision detection; unless the scene is cleaned up somehow, I suppose a lot of random points can be left floating around.

    • @BardCanning · 8 months ago +5

      @3DProgramming Isn't collision usually a separate invisible polygon layer?

    • @3DProgramming · 8 months ago +1

      @BardCanning Yes, you are right; maybe it can be specified manually. I was thinking more of some automatic way to derive it from the data, but I suppose it is possible with some automatic cleaning.

  • @alkeryn1700 · 7 months ago +1

    I wonder if AI could use the point cloud so you can skip the training time.

  • @SerjLimitless · 7 months ago

    I'm using the Luma AI app; it also generates Gaussian Splatting now, but when importing it as a point cloud into Blender I still cannot retain the color information. Does anyone know a solution to this?

  • @eka_plays7447 · 8 months ago

    How do you use the point cloud generated by Gaussian Splatting? Blender doesn't load the textures when importing the .ply file.
    I couldn't find any help online about this issue.