Photogrammetry / NeRF / Gaussian Splatting comparison

  • Added 30 Sep 2023
  • Workflow and resources:
    Photogrammetry model on sketchfab: skfb.ly/oLOQw
    Church Rock dataset ZIP: drive.google.com/file/d/1ttkl...
    Agisoft Metashape: www.agisoft.com/
    NerfStudio: docs.nerf.studio/
    3D Gaussian Splatting for Real-Time Radiance Field Rendering: github.com/graphdeco-inria/ga...
    Aras-p’s Unity Project: github.com/aras-p/UnityGaussi...

Comments • 236

  • @ggavilan
    @ggavilan 6 months ago +295

    I love how you can hear the fan going with the Gaussian splatter

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +57

      😂 my poor comp was breathing heavy

    • @pbjandahighfive
      @pbjandahighfive 6 months ago +18

      You can hear it from the jump when he switches to NeRF too.

    • @Panzer_the_Merganser
      @Panzer_the_Merganser 4 months ago +3

      @@MatthewBrennan Thought it was raining there for a moment, then realized I've heard that same sound in my office. Def taxing the GPU, but great video.

  • @error-4518
    @error-4518 5 months ago +147

    NeRF and Gaussian splatting are so realistic that they even captured the wind.

  • @BrightAfternoonProductionsPlus
    @BrightAfternoonProductionsPlus 6 months ago +106

    Not exaggerating: when Gaussian splatting rose to sudden prominence, I was waiting for a video exactly like this.

  • @DangitDigital
    @DangitDigital 4 months ago +13

    For anyone interested, here's my TLDR of the video. Biggest advantages of the photogrammetry method (polygonal): the model is metrically valid and can be used for measurements; it's fairly lightweight because it's a polygonal model, and less computationally taxing to post-process for the same reason; and it can be used in many software packages and shared easily. Advantage of NeRFs (radiance field): they include more scene information (distance, sky), but this info is estimated, generated (via color info, aka a radiance field) in realtime, so perhaps it's not the most scientifically accurate. But because a full scene is being computed, creating new "footage" from novel viewpoints is possible. Advantage of Gaussian splats (static point cloud): they include scene information like NeRFs, but are not computed in realtime, i.e. it's a static point cloud (or splat cloud). Because the visualization is static, Gaussian splats can be used in many visualization software packages (game engines like Unity and Unreal, as well as 3D software like Blender and Cinema 4D). It's "the best of both worlds". Also, of course, it's the most fun to say :)

  • @buroachenbach703
    @buroachenbach703 6 months ago +72

    Hi, great video - I think it clarifies to a lot of people the difference between the three technologies.
    Just one thing that would have been important to mention: the biggest advantage of NeRF and GS is the ability to capture reflections, and even transparency, which is just about impossible with Photogrammetrie. Granted, your example is of course not the right one to demonstrate these features, but maybe you have a different set of images where you can demonstrate that difference in more detail.

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +26

      You're absolutely right - in fact I just went out and took some video of my car and some reflective surfaces in the rain - which would be very hard for photogrammetry to reconstruct well - I'll do a side-by-side!

    • @JorgetePanete
      @JorgetePanete 6 months ago +1

      photogrammetry*

    • @zyang056
      @zyang056 4 months ago

      GS can also rasterize thin structures much better than mesh reconstruction or NeRF. Give it a few months; my bet is GS can surpass meshing in rendering quality and file size.

    • @DangitDigital
      @DangitDigital 4 months ago

      Good point

  • @linecraftman3907
    @linecraftman3907 5 months ago +4

    I have never used either of these techniques, but ever since I saw them become popular I had a really poor understanding and no idea how they compared, yet still remained curious. This video filled the gap in my understanding perfectly.

  • @LorandNagy_89
    @LorandNagy_89 6 months ago +7

    This was exactly what I was looking for! I tried Gaussian splatting in Unity in VR. I have to try it with your conclusion. Thanks!!! Subscribed! :)

  • @360Pros
    @360Pros 5 months ago +2

    Thanks for making this video! I've been playing with photogrammetry for about 6-7 years, and have only been a curious bystander with respect to NeRF and GS. It's interesting, but I envisioned something like NeRF and GS in conjunction with 3D meshes several years ago, before learning that they exist. Your video does a wonderful job of explaining the distinction between the three, especially between NeRF and GS. I watched til the end! Thank you for creating it. I'll connect with you on your social accounts and hopefully we'll run into each other in the unfolding "metaverse".

  • @BunkerSquirrel
    @BunkerSquirrel 2 months ago +2

    The splatter is really cool looking. Looks like a hallucination or a dream.

  • @nosyb
    @nosyb 6 months ago

    Very cool work, thanks!

  • @plyczkowski
    @plyczkowski 6 months ago +31

    Could be cool to use an example with more variance in material properties, to showcase how the different techniques deal with things like reflectivity and transparency.

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +6

      See my video here :) Reflective Object: Gaussian Splatting radiance field vs. Photogrammetry mesh
      czcams.com/video/gheD8vrOJNI/video.html

  • @mikailmaqsood818
    @mikailmaqsood818 6 months ago

    Thank you!! I've been looking for a video like this since I learnt about Gaussian splatting. Also, the music you played in the showcases was chilling :))

  • @CRivlaldo
    @CRivlaldo 3 months ago

    Amazing comparison! And very cool flying scenes for the NeRF and Gaussian splats.

  • @MrDmonahan96
    @MrDmonahan96 6 months ago +1

    Thanks, this was super helpful

  • @jackjansen7265
    @jackjansen7265 6 months ago

    Thanks a lot! That was exactly the quick introduction to the subject that I needed!

  • @Melvin420x12
    @Melvin420x12 6 months ago +4

    I have absolutely no affiliation with anything 3D-related, yet somehow I find this Gaussian splatting thing so intriguing, even though I had no clear understanding of what it was haha. Just that you could make high-quality-looking 3D renders of things with just a video. Cool to actually see a more technical comparison video about it. Thank you for making this video!

  • @KevinMerinoCreations
    @KevinMerinoCreations 4 months ago +1

    Thanks for the good comparison video! You did a great job highlighting many of the topics of interest! 👏👏👏

  • @MikkoRantalainen
    @MikkoRantalainen 5 months ago +1

    Great comparison of the different methods! Looking at the drone video vs. the output, it seems clear that all these technologies will get better as we get more processing power. The current output is nowhere near the detail level of the input video, but there's no reason to think it couldn't get there given enough computing resources.

  • @ramonteleco
    @ramonteleco 5 months ago

    Thanks for the video! The best way to learn the difference between NeRF and Gaussian splatting 🙏🏻

  • @ishibaro
    @ishibaro 1 month ago

    Thank you very much for this video :D Superb for future developments in archaeology. I am already checking out NeRF with Kiri Engine, but I loved seeing how to do it with the tools you mentioned. Cool!

  • @qbert4325
    @qbert4325 6 months ago

    The last shot is really good!

  • @rubenbernardino6658
    @rubenbernardino6658 5 months ago

    Very valuable information for us 3D creators. Thank you very much!

  • @98SE
    @98SE 6 months ago +43

    This is absolutely amazing! I think Gaussian splatting might replace rasterised/polygonal rendering in the near future!

    • @dmitriytuchashvili8594
      @dmitriytuchashvili8594 6 months ago +11

      GS is great, but it will be a real challenge to invent an optimized way of applying real-time lighting

    • @ruymascarua
      @ruymascarua 6 months ago +4

      @@dmitriytuchashvili8594 Agree, interactive lighting is the main issue

    • @constantinosschinas4503
      @constantinosschinas4503 6 months ago +1

      Curious to see how GS handles reflections, normals, displacements, translucency and so on.

    • @miroaja1951
      @miroaja1951 6 months ago +4

      There's also a problem with physics, interactivity, and the rendering of anything besides real-world data (procedural generation would be hell), which altogether makes it likely to have only niche use cases, though I do admit it's cool

    • @MatthewBrennan
      @MatthewBrennan  6 months ago

      @@constantinosschinas4503 see this video: czcams.com/video/gheD8vrOJNI/video.html

  • @Redranddd
    @Redranddd 5 months ago +7

    I think photogrammetry and some kind of Gaussian splatting will be fused one day

  • @tommy_s
    @tommy_s 6 months ago

    Wonderful work, really appreciate it

  • @FireballVFX
    @FireballVFX 6 months ago +2

    Thank you, it was a very educational and enjoyable video!

  • @NOLNV1
    @NOLNV1 1 month ago +1

    That's an amazingly beautiful rock formation. Not that it's the topic of the video, but I just felt like mentioning it

    • @MatthewBrennan
      @MatthewBrennan  1 month ago

      It has an interesting backstory too! The legend goes that a utopian community wanted to hollow out the rock to use as a church - and even went so far as to begin chiseling a doorway (you can see the opening in the video/model).

  • @AerialWaviator
    @AerialWaviator 6 months ago +6

    Very intriguing comparison. Had not heard of Gaussian splatting previously. With photogrammetry it would be possible to model different sun angles and lighting effects. Could be interesting to explore how the various techniques could take advantage of video captures taken at different times of day.
    For example, it could allow for animating time and motion. Just a thought that might be interesting to explore.

    • @DangitDigital
      @DangitDigital 4 months ago +4

      While it's true that with photogrammetry you could relight the scene, the shadows are still "baked" into the texture. So in this case, Church Rock would still be casting that shadow even if you put virtual lights into the scene to reimagine it.

  • @sennabullet
    @sennabullet 6 months ago

    Superb explanation... thank you for sharing your knowledge.

  • @DangitDigital
    @DangitDigital 4 months ago

    Ooh, thanks for this overview. Very helpful.

  • @fraizie6815
    @fraizie6815 6 months ago +3

    Nobody gonna talk about how the rock looks like a spaceship that turned into stone?

  • @SimiVideoCreator
    @SimiVideoCreator 6 months ago +7

    Honestly I love the Gaussian splatting look. Especially when you move "too" close :D

  • @robertbogu4794
    @robertbogu4794 5 months ago

    Thank you for explaining, brother! 💪

  • @antimatters6283
    @antimatters6283 5 months ago

    Good comparison and review. Good notes and links in the video info grey area.

  • @coalbanksYQL
    @coalbanksYQL 6 months ago +11

    Great comparison! Could you have cropped the splats that were interrupting the sky in your Unity example for a cleaner look - or is that limited?

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +7

      In theory it should be possible, because the splats are directly related to the sparse cloud from COLMAP - I'm planning to investigate this.

  • @chasechampagne867
    @chasechampagne867 6 months ago +11

    I love the examples and the discussion of the technical differences, though it could definitely use a better quality screen capture.

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +8

      You're right - I captured at 1080 (my screen's max res) and upscaled it, but didn't realize it because I had my Premiere clip previews set to 1/8 res, so I didn't notice the blurriness! Sorry!

    • @chasechampagne867
      @chasechampagne867 6 months ago +3

      @@MatthewBrennan Some great points in there. Thanks for the vid.

    • @lemovision
      @lemovision 6 months ago +2

      Looks more like 360p mate, maybe you set Premiere to render using the preview cache @@MatthewBrennan

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +1

      @@lemovision Could be - I think I fixed it for subsequent exports, at least 🙃

    • @AerialWaviator
      @AerialWaviator 6 months ago

      YouTube compression likely not helping either.

  • @Daniel-xz6cm
    @Daniel-xz6cm 6 months ago +4

    You could also compare NVIDIA's Neuralangelo. That would be great

  • @notso_usualyoutbuer
    @notso_usualyoutbuer 5 months ago

    Great work! Keep doing the stuff! Liked and subscribed!

  • @keterbinah3091
    @keterbinah3091 6 months ago

    Interesting, good informative vid, thank you. As a side note, let's consider that Bob Ross once painted 50 sheds and 1 tree.

  • @leanderren4548
    @leanderren4548 6 months ago +4

    Great video. I think if you reuploaded this in better quality it could gather even more attention.

    • @MatthewBrennan
      @MatthewBrennan  6 months ago

      Unfortunately I don't think you can replace previous uploads on YouTube

  • @AlexanderBukh
    @AlexanderBukh 6 months ago

    Great job, subbed. 🎉

  • @16pxdesign
    @16pxdesign 2 months ago

    Well described ❤ Appreciate it ❤

  • @andrasliptak
    @andrasliptak 6 months ago +1

    An interesting fact is that NeRF technically uses ML to search for the camera positions: the process stops when the render of the estimated volume matches the photo itself (within a threshold).
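
    The stopping criterion this comment describes - optimize until the render matches the photo within a threshold - can be sketched with a toy example. Everything here is a hypothetical stand-in (a 1-D "camera offset" and a fake ramp renderer), not NeRF's actual optimizer:

    ```python
    import numpy as np

    def render(offset, width=32):
        """Stand-in 'renderer': a brightness ramp shifted by a 1-D camera offset."""
        x = np.arange(width, dtype=float)
        return np.clip((x - offset) / width, 0.0, 1.0)

    def photometric_loss(a, b):
        """Mean squared difference between a render and the reference photo."""
        return float(np.mean((a - b) ** 2))

    def fit_offset(photo, offset=0.0, lr=50.0, threshold=1e-6, max_iters=5000):
        """Descend on the offset; stop once the render matches the photo."""
        for i in range(max_iters):
            loss = photometric_loss(render(offset), photo)
            if loss < threshold:          # "within a threshold" -> converged
                return offset, loss, i
            eps = 1e-4                    # finite-difference gradient
            grad = (photometric_loss(render(offset + eps), photo) - loss) / eps
            offset -= lr * grad
        return offset, loss, max_iters

    photo = render(5.0)                   # "photo" taken at ground-truth offset 5
    offset, loss, iters = fit_offset(photo)
    ```

    The recovered offset lands near 5 once the photometric error drops below the threshold; real NeRF pipelines do the analogous thing with volumetric rendering and backpropagation.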

  • @josiahjack455
    @josiahjack455 6 months ago

    Oh hey, I've been there. Recognized it before you even said "Church Rock". Nice stretch of road just outside Canyonlands National jawdropping park.

    • @gardenofadam79
      @gardenofadam79 4 months ago

      That's why I'm here. I have driven past that rock hundreds if not thousands of times in my life. I have no knowledge of the actual subject matter of this video, but I'll watch the whole thing just out of gratitude for the nostalgia.

  • @ruperterskin2117
    @ruperterskin2117 5 months ago

    Cool. Thanks for sharing.

  • @simonhartley9158
    @simonhartley9158 6 months ago +3

    It seems that the next step is a neural/AI-enhanced version of Gaussian splatting to improve quality, render performance, and data size.

  • @donaldnewlands1737
    @donaldnewlands1737 4 months ago

    Thanks - I'd like to see a comparison of how you would actually use these in production - especially NeRF and splatting.

    • @MatthewBrennan
      @MatthewBrennan  3 months ago

      Right now I get the feeling it's very much a "solution" in search of a problem. But here's another video with some thoughts on the potential practicality: czcams.com/video/Ksi_RfY77SI/video.html

  • @zershuan
    @zershuan 6 months ago +4

    Hey Matthew, thank you very much for making this video comparison. I have been wondering about Gaussian splatting all week long, but I still fail to see how this is a 3D rendering revolution like many in the community are saying - I don't understand the hype. What could the applications be? Is it possible to make meshes from these point clouds? Can we make it collide with other objects in a scene? Can it be relit? To me it feels like baked lighting textures without a body. Please take me out of my miserable ignorance: what are the possible/theoretical applications?

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +5

      Right now, I think it's a 'solution in search of a problem'. As I mentioned in the video - virtual production could benefit greatly from this: you can plan camera shots well in advance using limited input data to get a realistic representation of the scene. Likewise, if you capture a scene well, you can do all of your camera moves virtually, and composite with actors, etc... and not have to worry about "getting the perfect shot" on location.
      It is possible to mesh NeRFs. But the volumetric clouds (NeRF and GS radiance fields) cannot be re-lit (by their nature). In a Unity or Unreal scene they don't have collisions, but you could add some invisible geometry to act as colliders. I'll be making another video demonstrating some of that this week!

    • @zershuan
      @zershuan 6 months ago +2

      @@MatthewBrennan What an excellent answer. Thank you so much!

    • @camerbot
      @camerbot 6 months ago +2

      It's essentially faster and cheaper photogrammetry, which is in itself already quite useful. It's better to ignore self-compounding Twitter hype, but the "revolutionary" part is that it's an entirely new real-time rendering technique based on "splatted" point clouds, not on polygons or anything else, so it's a whole new area of research. It's possible to come up with an entirely different real-time rendering pipeline based on "splatted" points, with physics and everything, and it might even be good or better! But will that actually happen? Will it actually be good? Will it fall to the same problems as all the other point/voxel-based rendering systems did? I dunno

    • @zershuan
      @zershuan 6 months ago

      @@camerbot That adds a lot of insight. Thank you very much

  • @ryukisai99
    @ryukisai99 18 days ago

    Thanks for the good video and for providing the nice dataset. What is the focal length of your drone's camera (full-frame equivalent)?

    • @MatthewBrennan
      @MatthewBrennan  18 days ago +1

      The 35mm equivalent is ~28mm. It's a 1" CMOS 20MP sensor (for still images). However, this dataset uses 4K video.

    • @ryukisai99
      @ryukisai99 18 days ago +1

      @@MatthewBrennan Thanks for your answer. I'm trying to run your dataset through MicMac photogrammetry. I'll let you know if I get good results!

  • @lukassarralde5439
    @lukassarralde5439 1 month ago

    Hi Matthew. Great video explanation. Which drone did you use for this test? Do you by any chance have any more drone footage? Have you used the DJI Mavic 3 Pro Cine for photogrammetry? Thanks.

    • @MatthewBrennan
      @MatthewBrennan  1 month ago

      I used a Mavic 2 for this model. I have used a number of different drones for photogrammetry in the past, but haven't tried the Mavic 3 yet, although I don't think the Cine model adds anything particularly useful for traditional photogrammetry.

  • @thenozon
    @thenozon 6 months ago

    Thx for the vids - deep respect for your knowledge and for sharing it. (Needed to watch it at 1.5x though - otherwise it would have been kind of as if told in slow motion xD)

  • @alblez
    @alblez 6 months ago +2

    Seeing Matthew's analysis of these three technologies was quite insightful. It brought to mind a question an architect friend once posed: could one feasibly craft an architectural blueprint of a home or apartment using video footage? Considering your experience with these three tech contenders, would you say we're on the brink of making this potential a reality?

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +3

      You could definitely build a rough model from video footage (provided that the video entered every room). No digitization technology (yet) will output a plan useful to an architect without substantial work by hand - however, the power of these technologies is that you can achieve results based on very little information (i.e. a series of photos or video) that can then be interpreted by an architect or draftsman and turned into a polished representation, like a plan or section.

    • @alblez
      @alblez 6 months ago +1

      @@MatthewBrennan Thank you very much for your response; I have more clues now, so my friend can make things more efficient.
      It's a matter of time before new papers are published. 🔜

  • @NithinJune
    @NithinJune 5 months ago +1

    Very, very interesting

  • @bytesandbikes
    @bytesandbikes 3 months ago

    Interesting how the NeRF has captured the changing cloud shadow over time as a positional aspect

    • @MatthewBrennan
      @MatthewBrennan  3 months ago +1

      3DGS does something similar, as everything is based on the interpolated "viewing angle"/position of the scene, which of course is tied to the conditions/time (shadow or sun) that each photo was captured under.

  • @simonelorenzoni
    @simonelorenzoni 2 months ago

    Big up, my friend!

  • @LaunchedPix
    @LaunchedPix 6 months ago

    Very well done. You mentioned using the data resulting from aligning photos in Metashape in place of COLMAP for Nerfstudio. Can this substitution also be done for Gaussian splatting? I'd love to use the aligned photo data & camera model data from my many Metashape (Standard edition) projects for GS without using COLMAP, as I'm already confident in the alignment. Any suggestions on how to accomplish that?

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +1

      Yes - I use this script: github.com/agisoft-llc/metashape-scripts/blob/master/src/export_for_gaussian_splatting.py

    • @LaunchedPix
      @LaunchedPix 6 months ago

      @@MatthewBrennan Thanks! Would I be correct in assuming that this must be run in (or at least with) the Professional edition (not the Standard edition) of Metashape?

    • @MatthewBrennan
      @MatthewBrennan  5 months ago

      Hmm... I'm actually not sure if the Standard version supports Python scripts. I am using the Professional version to process this data.

  • @aubydauby
    @aubydauby 6 months ago +2

    Is this purpose-driven for a particular field? I've always been fond of the intersection between geospatial tech and the broader CS/gaming world.

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +1

      Right now, I think the primary application is in virtual production. This is a relatively new method, so I'm sure new applications will develop as it evolves. At the moment it is not a straight replacement for any existing digitization or visualization technology.

  • @AlisonBLowndes
    @AlisonBLowndes 1 month ago

    Hi Matt, are you testing with NVIDIA Omniverse? Great video!

    • @MatthewBrennan
      @MatthewBrennan  1 month ago

      No. I tried it about a year ago but didn't find it very compelling.

  • @mattiasfagerlund
    @mattiasfagerlund 6 months ago

    Cool stuff! I was thinking about the fact that extracting images from a video gives poor photos: is there a way to extract higher quality images that you're aware of? I'm thinking it would be fairly straightforward to create an AI that takes five images in a row and creates a de-blurred version of the middle image using data from all five. Or are there additional issues?

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +1

      The best type of image is a high-resolution digital still photo, with low ISO (low sensor noise) and high sharpness (i.e. a high f-number… usually f/8 or higher)

    • @mattiasfagerlund
      @mattiasfagerlund 6 months ago +1

      @@MatthewBrennan I see - but much could be gained if we were able to produce similar images from moving footage - it would just be way faster to capture that way, and not only for drones. Probably someone will have a go at it. I feel better images than a random frame from a video could be generated, but never as good as the type of image you're describing.
      BTW, someone mentioned that within a few years, a team shooting footage on location will map the site using photogrammetry as a matter of course. We're used to voice dubbing in post-production, but it's hard to shoot a new scene at a distant site - or in different weather/seasons. I think there are interesting things ahead!

  • @JustinDeRosa
    @JustinDeRosa 5 months ago

    This is nuts for set design and remodels... It'd be interesting to see what could be done with scopes for plumbers - both the doctors and the ones with the butt crackin'.

  • @khairummaksudahoqueadeeba9911

    Hi Matthew! Thanks for this video. I'm new and a total noob to this field. I'm a marketer, and in my line of work I'm having to learn a lot of these things, including reality capture, photogrammetry, NeRFs, 3D GS, and digital twins. Do you have educational videos about these topics that would help a beginner like me understand the basics?

  • @chumleyk
    @chumleyk 9 days ago

    OK. Why did the psychedelic sky of the Gaussian splat give me a panic attack? I'm going to call it a Splat Attack.

  • @Tonatar
    @Tonatar 6 months ago +2

    Imagine Google Earth with Gaussian splatting.

  • @pauldorman
    @pauldorman 4 months ago +1

    Pity about the low resolution. I assume it's low resolution, as that's how it appears on my computer, even at 4K. Very interesting though!

    • @MatthewBrennan
      @MatthewBrennan  4 months ago

      Yeah, I accidentally screen-captured at 1080, but rendered everything at 4K!

  • @raspas99
    @raspas99 6 months ago

    I don't know any more about the third method than I did before starting your video

    • @MatthewBrennan
      @MatthewBrennan  6 months ago

      This wasn't meant to be a technical video, but if you want to know more about the technicals behind 3DGS, this is a good one: czcams.com/video/HVv_IQKlafQ/video.html

  • @Shrek_Has_Covid19
    @Shrek_Has_Covid19 5 months ago +1

    That's a rock

  • @darviniusb
    @darviniusb 4 months ago

    I've been working with photogrammetry for 15 years or more too, and I tested NeRFs as soon as they came out. All nice and cool until you give them uniform reflective surfaces - photogrammetry and NeRFs do not like uniform glossy surfaces. GS has no problem; it even gets transparency. It's an insane technology, far from perfect, with very limited use cases, but I can already see GS being the perfect solution for the next generation of realistic 3D Google Maps. They are very small, a lot easier to make than NeRFs, and can be used with night shots too.

  • @m.sierra5258
    @m.sierra5258 6 months ago +3

    I wish the source videos were higher quality... Especially the NeRF part is painful to watch

    • @MatthewBrennan
      @MatthewBrennan  6 months ago

      Ah yeah, I see that now. Unfortunately I screen-captured at 1080 (my monitor's max) and then upscaled to match the NeRF/Gaussian videos (4K), and had my Premiere preview set to 1/8 res. Whoops. I'll fix it for future ones - thanks for pointing it out!

  • @vexnity460
    @vexnity460 2 months ago

    I'd say, if you're working with reflective or translucent surfaces, I 100% recommend NeRFs instead of photogrammetry, because they handle them so much better

    • @MatthewBrennan
      @MatthewBrennan  2 months ago

      I made another video explicitly comparing the two - the photogrammetry model actually turned out pretty well.

  • @RolandHa23
    @RolandHa23 2 months ago

    Where is the panorama sphere texture from that you used at the end?

    • @MatthewBrennan
      @MatthewBrennan  2 months ago

      It's a panorama I took using a UAV directly above Church Rock.

  • @serk_la_patata_espacial
    @serk_la_patata_espacial 6 months ago +1

    Is it just me, or from about 8:30 does the video quality look like 480p?
    I can't judge the quality properly because it looks like an upscaled 480p video.
    Aside from that, the video is very interesting.

  • @DataJuggler
    @DataJuggler 4 months ago

    It would be nice if in a few years Gaussian splatting had a way to erase things you don't want, or correct blurry parts of a scene. Gaussian-to-mesh would be the holy grail.

    • @MatthewBrennan
      @MatthewBrennan  3 months ago

      It's possible (albeit somewhat crudely) to edit the Gaussian cloud now, in Unity.

  • @Apollotwente
    @Apollotwente 2 months ago

    Great video, thank you. Could you advise me on how to scan an object very sharply for Gaussian splatting? As it happens, I can't get it sharp - the letters are not clear. I own an Android phone (Samsung S22), a Nikon Z50, and an Insta360 X3. Which one would be the most accurate? I am using the texture for training purposes. Thanks in advance. Greetings, Sebas

    • @MatthewBrennan
      @MatthewBrennan  2 months ago

      The Nikon Z50 would likely be the best (physical shutter + megapixels), although of course it depends on what lens you are using. In my experience, I get the best results using a high-resolution mirrorless camera (compared to an iPhone or action camera). Of course, more data = longer processing times, so there is always a trade-off or compromise.

  • @Apollotwente
    @Apollotwente 2 months ago

    Hello Matthew, thank you very much for your response. I assume I need to create an MP4 file if I want to scan a Gaussian splat. What are the settings? I am really a novice at this. Previously I was shooting at 30 fps. Is it better to set 60 fps? Thanks in advance.

    • @MatthewBrennan
      @MatthewBrennan  2 months ago

      In my experience, still images (photographs) work much better than video! Follow good photogrammetric practice for capture, and then process as a 3DGS.

  • @sevenblah
    @sevenblah 6 months ago

    Which would work best for a house with trees all around? And is there a way to see past the trees, or would I need to fly my drone down into and past the tree line?

    • @MatthewBrennan
      @MatthewBrennan  6 months ago

      I would use photogrammetry, then edit out/delete the trees

    • @sevenblah
      @sevenblah 6 months ago

      So go in front of the trees to get the house? Drone the whole area and then use a camera to move into more detailed spots? @@MatthewBrennan

    • @MatthewBrennan
      @MatthewBrennan  6 months ago

      Yes, and you can also combine the drone data with photos taken by hand using a high-quality mobile phone or still camera

    • @sevenblah
      @sevenblah 6 months ago

      @@MatthewBrennan Awesome, thank you for the help.

  • @klangr4usch
    @klangr4usch 5 months ago

    Hey :) A question regarding the 3D model in the sphere: did you shoot the panorama exactly on top of and centered over the POI (the rock)? And is there a tutorial on how to do the whole 3D process in Blender? Thx, kind regards, Sebastian

    • @MatthewBrennan
      @MatthewBrennan  5 months ago

      Yes, directly over it. I don't know if there's a tutorial… I had the idea years ago as a way of "faking" a more extensive scene (see some of the 'Digital Hadrian's Villa' project videos). Basically you just create a sphere/geo-sphere, invert the normals, and apply a spherically mapped panorama. Hope that helps!

    • @klangr4usch
      @klangr4usch 5 months ago

      Thx for your response and the info! I have tried it just now on one of my models and it was a success :) @@MatthewBrennan

    • @MatthewBrennan
      @MatthewBrennan  5 months ago

      @@klangr4usch Awesome! Send me a link if you post it on Sketchfab!
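
    The "spherically mapped panorama" in the thread above is an equirectangular projection: each view direction from the sphere's center looks up one (u, v) texture coordinate. A minimal sketch of that lookup, assuming a y-up axis convention (the function name and conventions are illustrative, not Blender's API):

    ```python
    import math

    def dir_to_equirect_uv(dx, dy, dz):
        """Map a view direction (y-up) to (u, v) coordinates on an
        equirectangular ("spherically mapped") panorama texture."""
        n = math.sqrt(dx * dx + dy * dy + dz * dz)
        dx, dy, dz = dx / n, dy / n, dz / n
        u = 0.5 + math.atan2(dx, dz) / (2.0 * math.pi)  # longitude -> [0, 1]
        v = 0.5 - math.asin(dy) / math.pi               # latitude  -> 0 at zenith
        return u, v
    ```

    For example, looking straight "forward" (+z) samples the center of the panorama, and looking straight up samples the top row - which is why a drone panorama shot directly above the subject works well as the sphere texture. Inverting the sphere's normals just makes the texture face inward toward the camera.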

  • @JorgLippmann
    @JorgLippmann 4 months ago +1

    It should be no problem to import the photogrammetry model into Blender, import a world texture (photosphere), and render a similar camera path.

  • @maxmeier532
    @maxmeier532 5 months ago

    What do you think is currently the best technology for scanning faces to create highly accurate 3D files - and one that's also future-proof in terms of working with the results? The priority is accuracy and feasibility for a non-professional, limiting the cost to maybe 5 to 10 grand. Photogrammetry, a 3D scanner like those from EinScan, NeRF, or anything else? Do you know of any software that could benefit from having more than one camera at a time for photogrammetry? When I look at professional studios, they have like a hundred cameras surrounding the person being scanned.

    • @MatthewBrennan
      @MatthewBrennan  5 months ago

      Photogrammetry would fit the bill, but you'd need a multi-camera rig. I've scanned a live subject (just the bust - shoulders + head) with a single camera, but it required quite a bit of cleanup in 3D sculpting software. A calibrated high-resolution, multi-camera solution would be the way to go.

  • @nurbdailym  4 months ago

    Great video, except that the definition is not good even at 2160p; it appears blurry?

  • @kozyboiiii1341  5 months ago +1

    Matthew, may I know what your computer specification was when creating this?

    • @MatthewBrennan  5 months ago

      The photogrammetry and NeRF were processed on a desktop computer with a Ryzen 9 3900X CPU + 4070ti GPU. The Gaussian Splatting was processed with an nVidia A100 GPU.

  • @VerdonTrigance  4 months ago

    Hi, you said we can export camera positions from the first method and import them into NeRF to reduce the amount of work for the neural network and speed up the process. Do you know how to do this with Meshroom and COLMAP (or instant-ngp)? Recently I was trying to make a NeRF with those tools but found that even after two days it was still processing the data (around 1200 frames from video).

    • @MatthewBrennan  4 months ago

      I use Metashape instead of COLMAP, because COLMAP is very slow and gives subpar results (it's open source though, which is nice).

    • @VerdonTrigance  4 months ago

      @MatthewBrennan I'll check this as well, thanks.
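
For anyone trying to reuse aligned cameras as discussed above: most NeRF pipelines in the instant-ngp lineage read poses from a transforms.json file, so skipping COLMAP mostly means writing that file yourself from already-computed camera-to-world matrices. A hedged sketch of the common layout (the function and the extra `w`/`h` fields are my own assumptions; verify the schema against your NeRF tool's documentation):

```python
import json
import math

def write_transforms(poses, width, height, focal_px, out_path="transforms.json"):
    """Write a minimal instant-ngp-style transforms.json from known camera poses.

    poses: list of (image_path, 4x4 camera-to-world matrix as nested lists).
    Field names follow the common instant-ngp convention; check your NeRF
    tool's docs for the exact schema it expects.
    """
    data = {
        # Horizontal field of view derived from a pinhole focal length in pixels
        "camera_angle_x": 2.0 * math.atan(width / (2.0 * focal_px)),
        "w": width,
        "h": height,
        "frames": [
            {"file_path": path, "transform_matrix": matrix}
            for path, matrix in poses
        ],
    }
    with open(out_path, "w") as f:
        json.dump(data, f, indent=2)
    return data
```

Metashape, for instance, can export camera positions (e.g. as XML); converting those matrices into this layout is the bridge that avoids re-running structure-from-motion.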

  • @hangli1622  3 months ago

    Hello, Brennan, may I ask how long the Gaussian Splatting reconstruction of that Church Rock took you? I used an RTX 3080 machine and it will take me about 24 hours to reconstruct (I started the reconstruction last night, and it is still running now at 45%). I don't know if that is normal…

    • @MatthewBrennan  3 months ago

      It takes about 15 minutes using a 40GB A100… sounds like you could be running out of VRAM, and instead of crashing it's just hanging. It shouldn't take that long…

    • @hangli1622  3 months ago

      @MatthewBrennan Yes. I switched to a 3090 machine and now it takes less than one hour.

  • @matheussalabert392  5 months ago

    Man is that a rocket fan?

  • @NSXtacy-  4 months ago

    I hereby dub this rock...The 890 JUMP 😉

  • @sierraecho884  5 months ago

    How exactly do I make the sphere itself? Can you make a step-by-step, please? I make lots of photogrammetry models but I always fail to add a background.

    • @MatthewBrennan  5 months ago

      I'll make a quick video outlining it, but the gist is: place your 3D model at the center of a sphere geometry created in blender, for example (you may have to invert the normals so that the polygons are "facing" inwards). Then use a spherical UV-map to apply your 360-image.

    • @sierraecho884  5 months ago

      @MatthewBrennan The "create sphere" part is not a problem; I don't know how to make a 360 photo or how to project it. I use CAD software and Agisoft Metashape.

    • @MatthewBrennan  5 months ago +1

      @sierraecho884 You need to take a panoramic photograph at/on location… or you can find a spherical photograph online (Google Street View). Equirectangular images are typically 2:1 aspect ratio. These will easily UV-map onto sphere geometry using a spherical projection.
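
For the curious, the spherical projection described in this thread boils down to converting each direction from the sphere's center into longitude/latitude and reading the equirectangular image at that point. A minimal sketch (the axis and seam conventions here are assumptions; Blender's UV conventions may differ):

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit direction vector from the sphere's center to
    equirectangular UV coordinates in [0, 1] x [0, 1].

    Assumes z is 'up' and the image seam (u = 0) lies along the -x axis.
    """
    u = 0.5 + math.atan2(y, x) / (2.0 * math.pi)           # longitude -> u
    v = 0.5 + math.asin(max(-1.0, min(1.0, z))) / math.pi  # latitude  -> v
    return u, v
```

Looking straight out along +x lands at the image center (u = v = 0.5), and straight up (+z) maps to the top edge (v = 1), which matches the 2:1 equirectangular layout mentioned above.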

  • @rowanw5912  6 months ago

    Good video, but I have some tips. 1: Get a capture card so your system usage doesn't affect quality; it's hard to tell which method is better when the resource-intensive ones are in 240p. And 2: either write a script or, if you want to maintain your natural manner of speaking, do a dry run first. Go through all your talking points once, then immediately start recording and do it again. That should help you move along a little faster and keep your pauses and "ums" to a minimum.

  • @AyushBakshi  5 months ago

    Why is the video blurred at 1440p?

  • @darius3.14  3 months ago

    Does anybody know if NeRF or Gaussian Splatting works with 360 images?

    • @MatthewBrennan  3 months ago

      It will, but you need to split them into overlapping frames (8-14 per spherical image).
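
To illustrate what "splitting into overlapping frames" can look like in practice: each spherical image becomes several pinhole-camera views sampled at different yaws, and 12 views at a 60° horizontal FOV leaves roughly 30° of overlap between neighbors. A rough nearest-neighbor sampler (the function name and conventions are my own, not from any particular tool):

```python
import numpy as np

def perspective_from_equirect(pano, yaw_deg, fov_deg=60.0, out_size=512):
    """Sample one perspective view (looking at the horizon, rotated by yaw)
    out of an equirectangular panorama, with nearest-neighbor lookup.

    pano: H x W x C image array (H:W assumed 1:2, i.e. full 360x180 coverage).
    """
    h, w = pano.shape[:2]
    # Pinhole focal length in pixels for the requested field of view
    f = (out_size / 2) / np.tan(np.radians(fov_deg) / 2)
    # Pixel grid in camera space: x right, y down, z forward
    xs, ys = np.meshgrid(np.arange(out_size) - out_size / 2,
                         np.arange(out_size) - out_size / 2)
    dirs = np.stack([xs, ys, np.full_like(xs, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the view rays around the vertical (y) axis by the yaw angle
    yaw = np.radians(yaw_deg)
    x, y, z = dirs[..., 0], dirs[..., 1], dirs[..., 2]
    xr = x * np.cos(yaw) + z * np.sin(yaw)
    zr = -x * np.sin(yaw) + z * np.cos(yaw)
    # Convert each ray to longitude/latitude, then to panorama pixel coords
    lon = np.arctan2(xr, zr)             # [-pi, pi]
    lat = np.arcsin(np.clip(y, -1, 1))   # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
    return pano[v, u]
```

Looping `yaw_deg` over `range(0, 360, 30)` produces twelve overlapping frames from one panorama, which can then be fed to photogrammetry, NeRF, or 3DGS like ordinary photos (adding pitched-up/down rings helps cover the poles).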

  • @_aethy_  4 months ago

    What is the name of the music used at 14:30?

  • @orkunsevengil336  2 months ago

    What drone and camera did you use? :)

  • @Haldi4803  6 months ago +1

    4K 60fps video, but you recorded your desktop in 720p or what?

    • @MatthewBrennan  6 months ago

      yep. Oops. I've fixed it in subsequent videos.

  • @jamesm464  5 months ago

    You can make a camera path around photogrammetry just as easily though, no?

    • @MatthewBrennan  5 months ago

      Yes- but you won’t have the sky/context. The photogrammetry data is essentially limited to just the object of interest. I think it’s illustrated slightly better in some of my other videos- 3DGS is great for capturing “unbounded scenes”, whereas photogrammetry is better suited to discrete objects (where you don’t need context).

    • @lunabeige  4 months ago

      @MatthewBrennan The sky is captured very badly, though.

  • @Buraak_87  3 months ago

    The capture of the SW is really blurry 😞

  • @natelawrence  6 months ago

    15 years ago, you say?
    Were you a user of the original Photosynth desktop app?

    • @MatthewBrennan  6 months ago

      Yes, and 123D Catch, the original PhotoScan back in 2010, etc…

  • @crestz1  6 months ago

    Could you release the models generated in this video? I'm interested in running my own model for comparisons.

    • @MatthewBrennan  6 months ago +1

      There's a google drive link to the Church Rock dataset in the description

    • @crestz1  6 months ago

      @MatthewBrennan Found it! Cheers

    • @crestz1  6 months ago

      I write papers related to NeRF and have published in CVPR. This field is always hungry for new datasets, and if you're keen to release some scenes (say around 5-8 real scenes), do let me know! I'll be happy to collaborate with you to publish these datasets for the benefit of the research community!

  • @thesteammachine1282  2 months ago

    So basically, photogrammetry is still king when it comes to realtime applications (games), as it holds detail even up close and you can retopo it later, and things like Gaussian Splatting are currently good for video production/VFX, correct?

    • @MatthewBrennan  2 months ago +1

      Yep, essentially :)

    • @thesteammachine1282  2 months ago

      @MatthewBrennan In that case I'm subscribing, as I don't want to be left behind on the info XD (it's difficult to keep track of everything). I suspect that when a good, quality method for converting Gaussian Splatting to a 3D mesh comes out, I'll see the news here and stay up to date.
      Cheers mate!

  • @thepoppunx  5 months ago +1

    The main problem with GS is that I can't work with the model… I have no geometry to work with. At the end of the day I want a geometric model with a texture that I can modify and use in a 3D environment I create…

    • @MatthewBrennan  5 months ago

      Yep - see this video for a discussion of Neural Surface Reconstruction: czcams.com/video/qFkCGvscsMQ/video.html
      I don't think NeRFs or GS will replace meshes anytime soon, but I do like Gaussian Splat point clouds for video rendering: czcams.com/video/Mi27jpUC5nU/video.html

  • @androwaydie4081  6 months ago

    19:32 Music name please ?

  • @foundationofficer8250  3 months ago

    This video is rendered in 4K, yet the original video quality is more like potato 480p

    • @MatthewBrennan  3 months ago

      Yep. I rendered the NeRF/3DGS at 4k, without realizing my screen recordings were set to 1080. In premiere I had the video previews at 1/8 quality, so I didn't notice the blurriness. Lesson learned! I fixed the problem in subsequent uploads :(

  • @petterlarsson7257  5 months ago

    Am I the only one who thought the image at the beginning was a rock that was only like a meter long?