3D Gaussian Splatting Complex Spaces
- Date added: 17 Oct 2023
- A compilation of 3D Gaussian splatted architectural interiors and urban spaces. Created for CAAV 2023.
The scenes (and their architects/artists, when known):
Pantheon, Rome
Sant'Ivo alla Sapienza (Francesco Borromini), Rome
Saint Teresa in Ecstasy (Gian Lorenzo Bernini), Santa Maria della Vittoria, Rome
Sant'Andrea al Quirinale (Bernini), Rome
San Carlo alle Quattro Fontane (Borromini), Rome
Spanish Chapel (frescoes by Bonaiuto), Santa Maria Novella, Florence
Fontana del Moro (Bernini), Rome
Cappella Strozzi (frescoes by Filippino Lippi), Santa Maria Novella, Florence
Sound Credits:
Fountain noise from: freesound.org/people/dibko/
Ambient background noise from: Meydan, "Away" (CC attribution)
General ambient noise: recorded on location in Rome and Florence
The music blending with the church is spot on.
It must be pretty awesome to see your "old" datasets reinvigorated by improved techniques!
It's a very impressive collection
Thank you! I periodically go back into the "archives" and re-process old photos in the hope that the results improve - this 3D Gaussian Splat rendering is one of the rare times I was amazed by the outcome!
Great
It would be awesome if there were some way the splatted result could be used in the photogrammetry processing to repair areas where the mesh falls apart due to reflections, or maybe even to extract roughness information (I would love that).
This would be pretty interesting - the "inverse" of that is using the Gaussian 'scene' as the background, and then replacing elements of it with photogrammetry scans (which can cast shadows/be re-lit). I experimented a bit with it here: czcams.com/video/qFkCGvscsMQ/video.html
What software did you use to obtain this extraordinary result? With classic photogrammetry I don't even get anywhere near these details!
This is called 3D Gaussian splatting - see some of my other videos for a more in-depth explanation and walkthrough! I agree - in some use cases, the Gaussian splat radiance field appears more detailed than the photogrammetry mesh, but there are many things you can't do with this type of result (re-light, 3D print, etc.)
@@MatthewBrennan There seems to be more support in Unity for blending GFX effects, such as a fire burning the Gaussian splat... the effect looks amazing. There's some work being done on import into Blender as well. Exciting times.
Could you use a combination of lidar and pics to splat this? What program takes LAS files and 3d splats it?
All of these are photogrammetry camera positions processed through the GS train.py. I suppose it's possible you could 3DGS a laser-scan cloud, but I'm not sure it would be any better than a photogrammetry dataset.
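For context, the reference graphdeco-inria gaussian-splatting repo splits that workflow into two scripts: convert.py (a COLMAP wrapper that registers the still photos) and train.py (the splat optimizer). A minimal sketch of that sequence - the paths here are hypothetical, and flags beyond -s/-m are left at their defaults:

```python
import subprocess

def gs_commands(data_dir: str, model_dir: str) -> list[list[str]]:
    """Command sequence for the reference graphdeco-inria 3DGS pipeline:
    COLMAP registration via convert.py, then splat training via train.py."""
    return [
        ["python", "convert.py", "-s", data_dir],                # SfM camera poses
        ["python", "train.py", "-s", data_dir, "-m", model_dir]  # Gaussian optimization
    ]

# Actually running this requires the cloned repo and a CUDA GPU:
# for cmd in gs_commands("./data/pantheon", "./output/pantheon"):
#     subprocess.run(cmd, check=True)
```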
hi, very nice work. How did you render your 3D gaussian scenes?
Unreal engine with a plug? Unity? Or any other software?
Thanks
Thank you! These are rendered in Unity using the Unity 'Recorder' (a package available in Unity). I'm also using this Unity project by Aras-P as a starting point: github.com/aras-p/UnityGaussianSplatting
@@MatthewBrennan Thanks a lot for your answer. I can see clearer :)
Great job. Gaussian looks great but I think it needs a little more polishing before being ready for prime time action. Not yours specifically but the overall process.
wow! what camera did you use?
These are all with either a Sony a6000 or a Sony A7R2
Did you process the data for the Gaussian splats locally using the Python library from GitHub? If so, what resolution did you use during train.py? Or did you create these nice splats using Luma AI or Polycam? Great job, thank you so much.
I used the GitHub method. I allowed it to automatically downscale the imagery to 1.6k for train.py. I have not used Luma or Polycam for GS, so I can't comment on those, but from what I've seen, the quality is not as good because the input imagery isn't as high-res (in these cases, the inputs were all 24 or 42 MP still photos).
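The 1.6k figure matches the default behavior of the reference train.py, which warns and rescales when input images are wider than 1600 px (unless overridden with the resolution flag). A rough paraphrase of that sizing logic - my own sketch, not the repo's code:

```python
def auto_downscale(width: int, height: int, max_width: int = 1600):
    """Return the (w, h) a 3DGS loader might train at: images wider than
    max_width are scaled down so the long side becomes max_width."""
    if width <= max_width:
        return width, height
    scale = width / max_width
    return max_width, round(height / scale)

# e.g. a 24 MP still (6000x4000) would train at roughly 1600x1067
```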
@@MatthewBrennan Thank you, I thought you had used an even higher resolution since the quality is so good. But on my machine, if I don't downscale, the processing time explodes :)
are you using video or stills? I have found that I get much better results when using good/technical still imagery taken as part of a traditional photogrammetry campaign.
@@MatthewBrennan I tried both, and I agree. Even with 4K video downscaled to 1.6k during training, the results are not as good as processed full-res RAWs (which will also be downscaled) - it does make a difference. Moreover, it's a lot of work choosing the right frames from a video, since the image count shouldn't be too high for processing. It's easier to "choose" while taking the photos, making sure the count stays low but the important parts are covered.
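Picking frames from a video by hand is tedious; one simple way to keep the image count manageable is to sample frames at even intervals. A small sketch of that idea (function name and counts are my own, not from any 3DGS tool):

```python
def pick_frames(total_frames: int, n_keep: int) -> list[int]:
    """Choose n_keep roughly evenly spaced frame indices out of total_frames,
    so a long video yields a small, well-distributed photogrammetry set."""
    if n_keep >= total_frames:
        return list(range(total_frames))
    step = total_frames / n_keep
    return [int(i * step) for i in range(n_keep)]

# e.g. reduce ~3000 frames of video to 150 candidate stills
indices = pick_frames(3000, 150)
```

Even spacing won't catch blur or bad overlap, which is exactly why deliberate still photography tends to win - but it's a reasonable first pass before manual culling.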
is there a way to clean up noise?
Yes - the current version of Aras's Unity project uses volumes to "cut out" or hide noisy splats. In this video I used that in 3-4 of the scenes, and left noise in other scenes when it wasn't detrimental to the aesthetic.
Would you be willing to share links to any of these scans if they're already in the cloud? I'd love to play with the models and I'm sure many others would too! 🙌
Hmm you mean the raw data (photos), or the GS point clouds? I'll look into some cloud storage and see if I can get a few uploaded... I'm perpetually at 99% usage on my gdrive
@@MatthewBrennan it would be great to have photo datasets for those and try to recreate it :)
@@MatthewBrennan I would gladly use the photos, or if storage is tight, you could upload the images to Luma AI or Polycam - that way you could test how the results look with their GS pipelines and then share links directly to the models in their web 3D viewers. Whatever is best!