My observations on Gaussian Splatting and 3D scanning

  • Published May 13, 2024
  • The Postshot program has taught me a lot about Gaussian splatting. I have also read interesting articles and found useful material that helped me improve the quality of my 3D scanning.
    First, you should follow:
    radiancefields.com/
    Read this interview with Yulei He
    radiancefields.com/gaussian-s...
    Check Interactive indoor scanning of art gallery:
    current-exhibition.com/labora...
    Overhead 4d YouTube channel:
    / @overhead4d
    Jawset postshot:
    www.jawset.com/
    Game developer Verumbit Channel:
    / @verumbit
    And if you are interested, the Insta360 ONE RS 1-Inch:
    www.insta360.com/sal/one_rs_1...
    #gaussiansplatting #3dscanning
  • Science & Technology

Comments • 112

  • @joannemagi
    @joannemagi 2 days ago

    Thank you for the video! It's great to see so many new discoveries about Gaussian splatting. You help me a lot 🙇‍♀🙇‍♀

  • @johnw65uk
    @johnw65uk 2 months ago +11

    I think the problem with 360 video is that you are walking in a low-light environment (an underground car park), so there must be issues with sensor noise, motion blur and video compression, all influencing the final output. I would personally get a decent DSLR, prime lenses and a monopod with a remote shutter release and just work my way round. It would take hours rather than minutes, but the results would be much cleaner.
    Thanks for sharing

  • @hanskarlsson3778
    @hanskarlsson3778 2 months ago +18

    Excellent, Olli. I tremendously appreciate your presentation style and easy-to-understand narration. We here in Japan are looking hard at incorporating Gaussian Splatting into our cultural heritage work. Really useful. I hope you get a million followers - at least :)

    • @deniaq1843
      @deniaq1843 a month ago +1

      Hello Sir :)
      You say you are looking forward to integrating it more into your cultural heritage work. That sounds very interesting, but I can't quite imagine what you mean exactly. Would you mind sharing some more information about your thoughts? Greetings from Germany :)

    • @hanskarlsson3778
      @hanskarlsson3778 a month ago +1

      @@deniaq1843 Hello, well, many objects and buildings we recreate in VR exist somewhere outside. Right now we are proposing to recreate a wooden art sculpture that stood outside. We will need to model it, as it doesn't exist anymore, but the surrounding area with grass, trees and bushes, little hills etc. would be a nice project for Gaussian splatting, as it works with vegetation.

  • @jorcher
    @jorcher 2 months ago +3

    Keep going! You are a shining star in the Gaussian Splatting community (if there is such a thing :D)!

  • @KalleKarppinen
    @KalleKarppinen 2 months ago +8

    Great tutorial once again! It's always nice to hear your input on these matters and follow your experiments.

  • @KalkuehlGaming
    @KalkuehlGaming 2 months ago +2

    Thank you Olli for all your updates on Gaussian Splatting. You are my favorite YouTuber on this.
    Could you make a video on ways to get an interactive 3D viewer onto your own website and import an edited Gaussian splatting file?
    I am a little bit worried about using third-party websites to embed the viewer into someone else's website.

  • @mankit.mp4
    @mankit.mp4 2 months ago +5

    Wonderful insight again, Olli - learnt an awful lot from your sharing. Please keep it coming, big fan. x

  • @joelface
    @joelface 2 months ago +3

    Great video Olli! I appreciate the website recommendation (radiancefields) and will check that out. I think your latest result with 300K iterations on the parking garage turned out near perfect! I hope to see gaussian splatting used for "spatial video" one day, where a small array of cameras record a scene from multiple angles and compile a frame-by-frame spatial video that can be explored from any angle in virtual reality. In the meantime, creating photo-real static environments is extremely cool.

  • @pixxelpusher
    @pixxelpusher a month ago +11

    This is a great summary of where Gaussian splatting is at. How about instead of shooting video you set the 360 camera to take a timelapse of, say, 1 photo every second? That way you'd end up with far fewer images: 10 minutes would be 600 images.

    • @pedrogorilla483
      @pedrogorilla483 12 days ago

      You can also speed up the video or extract key frames.

    • @pixxelpusher
      @pixxelpusher 12 days ago +1

      @@pedrogorilla483 True, but it's a bit more work. Timelapse photos would also be much higher resolution.
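
A minimal Python sketch of the frame-extraction idea discussed in this thread, assuming OpenCV is installed; the file names and the every_n interval are made up for illustration. It pulls every Nth frame from a walkthrough video so the training set stays small:

```python
# Hypothetical sketch: save every Nth frame of a video as a JPEG.
import os
import cv2

def extract_frames(video_path, out_dir, every_n=30):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# With 30 fps footage, every_n=30 gives roughly one image per second,
# similar to a 1-second timelapse interval.
print(extract_frames("walkthrough.mp4", "frames", every_n=30), "frames saved")
```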

  • @violentpixelation5486
    @violentpixelation5486 2 months ago +1

    Thank you so much for doing this RnD work for all of us interested in this technology! 👍💯⚡

  • @nathanjgl
    @nathanjgl 2 months ago +1

    Awesome work Olli! Thank you for sharing these insights!

  • @fireum92
    @fireum92 2 months ago

    Love your videos! Thank you so much for your time and effort

  • @Meat-N-Fries
    @Meat-N-Fries 2 months ago

    Amazing content as always!

  • @rikvandenreijen
    @rikvandenreijen 2 months ago

    Amazing content. Thanks for helping us stay up to date with actual practical implementation of Gaussian splatting! Keep up the great work!

  • @hp651106
    @hp651106 2 months ago

    I love your Gaussian splatting channel. 👍
    I look forward to every video you post.

  • @vidayperfumes7514
    @vidayperfumes7514 2 months ago +2

    Thank you for all of your advice; it is really useful.

  • @tobias.sieben.360
    @tobias.sieben.360 2 months ago

    Thanks, Olli. Again a great video that I really enjoyed. Keep on going

  • @Aguiraz
    @Aguiraz a month ago

    Great video, this is the same process I would have followed so refreshing and helpful to see it already done. Thanks and keep them coming dude!

  • @nav-unger
    @nav-unger 2 months ago +2

    Thanks. You're doing great stuff...

  • @tribaltheadventurer
    @tribaltheadventurer 2 months ago

    Thank you so much Olli🙌🏿

  • @MrCatoblepa
    @MrCatoblepa 25 days ago

    What an amazing video! Thanks a lot, you provided a huge amount of very useful insights.

  • @DronoTron
    @DronoTron 2 months ago

    Great video and thanx for sharing your thoughts

  • @liuksataka759
    @liuksataka759 2 months ago +1

    Thanks for the information!

  • @ya3d
    @ya3d 2 months ago

    Excellent Olli !!!

  • @chrisfaber9926
    @chrisfaber9926 19 days ago

    This is really good information about shooting and processing. Thank you very much 🙏

  • @MatDeuhMix
    @MatDeuhMix 2 months ago

    Thank you for the video !

  • @rockybalboa8085
    @rockybalboa8085 21 days ago +1

    Dear Olli, thank you for your amazing tutorials - I learned a lot from them! I disagree a bit about the resolution: on larger-scale drone shots it makes sense and brings out so much detail with 2000K iterations :) 3800 px is a bit too much, but 2400-2800 works much better than the standard 1600. I didn't even try 1400 px.

    • @OlliHuttunen78
      @OlliHuttunen78 18 days ago

      Ok. It is good to know. I need to try that.

    • @rockybalboa8085
      @rockybalboa8085 18 days ago

      @@OlliHuttunen78 Do you know how to merge a few Gaussian splat .ply files into one bigger one? I tried to import them into Postshot, but it seems that this is not possible with this app :(
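
A rough Python sketch for the merging question above, using the plyfile package. The file names are placeholders, and it assumes both splats already share the same coordinate frame and scale - which separate training runs usually do not, so the models would have to be aligned first in a 3D tool:

```python
# Naive merge of two Gaussian splat .ply files (no alignment, no deduplication).
import numpy as np
from plyfile import PlyData, PlyElement

def merge_splats(path_a, path_b, out_path):
    a = PlyData.read(path_a)["vertex"].data
    b = PlyData.read(path_b)["vertex"].data
    if a.dtype != b.dtype:
        raise ValueError("splat attribute layouts differ; cannot merge directly")
    merged = np.concatenate([a, b])          # each row is one splat
    PlyData([PlyElement.describe(merged, "vertex")], text=False).write(out_path)

merge_splats("scan_left.ply", "scan_right.ply", "merged.ply")
```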

  • @DLVRYDRYVR
    @DLVRYDRYVR 2 months ago +2

    Thank you 👍

  • @paultoensing3126
    @paultoensing3126 2 months ago +1

    Love it!

  • @grafpez
    @grafpez 2 months ago

    fabulous thanks! ;-)

  • @jafilm3488
    @jafilm3488 2 months ago

    Great video.

  • @FredPauling
    @FredPauling 2 months ago

    I've often wondered if 360 cameras would be suitable for this application. Thanks for sharing useful tips to make it work.

  • @63pixel
    @63pixel a month ago +1

    Thx for this! I used photogrammetry most of the time for my 360 images, but will now head to Postshot and give it a try! I'm curious how the images from the Insta360 Pro 2 will work. It's helpful that the images from each lens are saved separately.

    • @OlliHuttunen78
      @OlliHuttunen78 a month ago +3

      Wow! You've got quite a cool 360 camera. I hope it'll work with Postshot. Remember that images with no fisheye distortion work best.

  • @DroneSlingers
    @DroneSlingers 2 months ago +2

    Hey Olli, I've got a question for you about how Postshot trains the models during the last step.
    Did you notice that the more ksteps you train it to, the smaller the file size becomes? The only reason I can think of for that to happen would be that as it refines the model, it's possibly reducing the rogue splats and trimming down existing ones.
    I've done several tests so far; you can save and export the model at any point during the training without interrupting it. So during a 90-kstep run I exported at 30k, 60k and 90k, and each file was decently reduced in size each time while also being clearer.
    Because I'm using a ShadowPC I unfortunately can't leave things running overnight (after 30 min of inactivity Shadow disconnects you), so I was wondering, if you had the chance, could you test whether there is a limit to how small the file will get depending on the number of ksteps run? The difference between 90k and 300k is a lot, but it would be great to know whether, if a file is too large, I can just keep training it to reduce the size to where I need it, or whether at a certain point it begins to increase in size.
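
One cheap way to check this observation is to compare the splat counts of the exported files, since each splat is one vertex in the .ply header. A small Python sketch; the file names are made up for illustration:

```python
# Read "element vertex N" from a .ply header: N is the number of splats.
def splat_count(ply_path):
    with open(ply_path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="ignore").strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":
                break
    raise ValueError("no vertex element found in header")

for name in ["garage_30k.ply", "garage_60k.ply", "garage_90k.ply"]:
    print(name, splat_count(name), "splats")
```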

  • @breadslinger271
    @breadslinger271 2 months ago +3

    I have already made an interactive, game-like tour of two parts of my home using splats and Spline.design, straight from a web browser. I'm currently working on making full property game tours for real estate.

  • @360_SA
    @360_SA 2 months ago +3

    This is an amazing video. I love taking videos with the 360 RS 1. I would like to know how you came up with 18 videos - is it from one video, or 3 videos, or more? And how did you select the area for the square videos? Thank you; I like to watch your videos because you give us the best explanation in the shortest time, so we don't get lost.

    • @excurze7377
      @excurze7377 6 days ago

      I was thinking the same, to be honest. I tried capturing a garage too, but it always ended up messy; the camera tracking was off every time.
      Maybe I really should switch from my phone to a 360 camera... I'm not sure what I'm doing wrong.

  • @jorgeviloria4315
    @jorgeviloria4315 a month ago

    Amazing vid, dude! Thanks for your time and effort on GS research; I love these techniques. Sorry to ask, but which 3D software is best for placing the detailed models? Thanks a lot, Olli.

  • @ns194
    @ns194 2 months ago +3

    Hi Olli, great video, very informative! Two questions about your process: why not use the 360 camera in several positions to capture stills instead of video? And, I take it that you exported framed videos from your 360 camera (i.e. not equirectangular 360 videos)?

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago +1

      Thanks. Yes, still images would be the better solution, but they just take more time. Equirectangular images would be nice if they worked with Postshot, but perhaps in the future.

    • @ns194
      @ns194 2 months ago

      @@OlliHuttunen78 Indeed, it would be great if the program could parse spatial metadata - certainly a time saver for capturing 3D environments!

    • @dinoscheidt
      @dinoscheidt a month ago

      While it would be mathematically quite straightforward to slice equirectangular images for processing... my question would be: what's the benefit? You would be in the frame (someone needs to hold the thing), making it quite useless 👀

    • @ns194
      @ns194 a month ago +1

      @@dinoscheidt not necessarily, you can leave the frame entirely if it’s a static shot and the tripod can be masked out fairly easily. But if it’s a moving shot, that’s another story. At that point though, the benefit of a 360 camera compared to a high quality video camera of any kind becomes negligible. You end up framing your coverage in post instead of during production.

    • @bradleypout1820
      @bradleypout1820 a month ago

      Nerfstudio lets you do this: for equirectangular images, just run ns-process-data in Nerfstudio and drag the files to Postshot. Processing normal images or videos in COLMAP yourself, or tweaking the settings in Nerfstudio, always produces way better NeRFs and splats for me; Postshot's own data processing isn't that great yet. @@OlliHuttunen78
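
Slicing an equirectangular frame into ordinary pinhole views, as mentioned in this thread, can be sketched with NumPy and OpenCV. The yaw/pitch/FOV values and file names below are assumptions for illustration; the resulting flat views would then be fed to the trainer like any other images:

```python
# Sketch: render a virtual pinhole view out of an equirectangular panorama.
import numpy as np
import cv2

def equirect_to_pinhole(equi, yaw_deg, pitch_deg=0.0, fov_deg=90.0, out_size=960):
    h, w = equi.shape[:2]
    f = 0.5 * out_size / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    # Ray directions for every output pixel of the virtual camera.
    xs, ys = np.meshgrid(np.arange(out_size) - out_size / 2,
                         np.arange(out_size) - out_size / 2)
    dirs = np.stack([xs, ys, np.full(xs.shape, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    r_yaw = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                      [0, 1, 0],
                      [-np.sin(yaw), 0, np.cos(yaw)]])
    r_pitch = np.array([[1, 0, 0],
                        [0, np.cos(pitch), -np.sin(pitch)],
                        [0, np.sin(pitch), np.cos(pitch)]])
    dirs = dirs @ (r_yaw @ r_pitch).T
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])            # -pi .. pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))       # -pi/2 .. pi/2
    map_x = ((lon / (2 * np.pi) + 0.5) * w).astype(np.float32)
    map_y = ((lat / np.pi + 0.5) * h).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_WRAP)

pano = cv2.imread("frame_equirect.jpg")
for i, yaw in enumerate(range(0, 360, 90)):                 # four views per frame
    cv2.imwrite(f"view_{i}.jpg", equirect_to_pinhole(pano, yaw))
```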

  • @robmulally
    @robmulally 2 months ago +6

    Good info. I've been experimenting as well and have come to similar conclusions: 8K 30 fps on my phone, without overshooting, can work better than a really long video. But it's great to know you can chop out parts and use them as multiple clips. Do they have to be from the same camera lens?

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago +1

      It is recommended that scan material be recorded with the same camera, but I haven't tried it. It would be cool if I could scan first with a drone and then walk on the ground and scan with another camera.

    • @robmulally
      @robmulally a month ago

      I could not find how to add a second video; it only seems to accept the first @OlliHuttunen78

  • @jmalmsten
    @jmalmsten 2 months ago

    Now, these techniques look very promising indeed. :)

  • @HosniElmolla
    @HosniElmolla a month ago

    great

  • @echobass3D
    @echobass3D 27 days ago

    Great video, thank you. What's the reason for using video over a series of stills? Surely you could just walk around taking a shot every few degrees of movement? That way you can set the shutter speed to avoid motion blur, fix your depth of field, and avoid taking so many frames in the first place.

  • @tamiopaulalezon9573
    @tamiopaulalezon9573 19 days ago +1

    Postshot with high iterations, or Luma AI with upgrades... Have you compared them on the same video? Which is better, if either? 🧐

  • @punio4
    @punio4 14 days ago +1

    A question for capturing with a smartphone camera: I know you should lock the exposure, ISO, and WB, however should you also lock autofocus? I can't imagine you'd be able to take good photos of an object with a fixed focal length.

    • @OlliHuttunen78
      @OlliHuttunen78 13 days ago

      Yes. Naturally the focus should be sharp. The more sharp images without shallow depth of field you can get, the better for good 3D scanning.

  • @anidea8012
    @anidea8012 a month ago

    Hey, thank you for your awesome guide to Postshot. I'm a newbie here and I have some doubts. I understand that the original Gaussian splatting project expects a unique hyperparameter setting for each and every individual scene, so how is Postshot able to give good outputs without any tuning?

  • @gerardosanchez6045
    @gerardosanchez6045 2 months ago +3

    Really amazing video. I'm 3D scanning environments and started using a 360 cam for this, but haven't achieved great results in Luma AI; maybe this is the solution. Cheers

    • @gerardosanchez6045
      @gerardosanchez6045 2 months ago

      One question: how do you export the video from Insta360 to Postshot?
      - 360 type
      - reframed

  • @antons6146
    @antons6146 2 months ago

    Thank you so much for explaining and doing all these tests. I had a question: would it be possible to feed in larger-resolution still frames plus the videos of the dark areas to improve quality?

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago +1

      Well, high image resolution is not the answer for creating more accurate Gaussian models. It just chokes the calculation, and the higher the resolution of the images you put in, the longer it takes. Images just need to be sharp, and there should not be a lot of noise or motion blur.

  • @buroachenbach703
    @buroachenbach703 2 months ago +2

    Hi Olli,
    Great insight into Postshot's settings - do you have similar info on the resolution of the images? I keep it low because I'm worried about memory, but have you tried the difference between 1600, lower, or even higher than that? Does it affect the quality as much as the training steps?
    Regards, Kai

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago

      Yes, the resolution is an interesting thing. Gaussian Splatting training performs better at lower resolutions. If you put in higher resolutions, like 4K images, the training will choke and it takes a huge amount more time to complete the calculation. So if you are doing the training with Postshot, it is good to stay with the default values it offers when you import images into it.

  • @Grigoriy360
    @Grigoriy360 2 months ago +1

    Great tutorial; thanks to your video I started experimenting with Gaussian Splatting. I do a lot of photogrammetry - what is the best way to combine it with GS?

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago

      3D scanning a Gaussian Splatting model is a very similar process to photogrammetry. The same rules apply here, except you don't have to avoid transparent and reflective surfaces. But GS works best when you scan environments; smaller objects are challenging to capture.

    • @Grigoriy360
      @Grigoriy360 2 months ago

      @@OlliHuttunen78 Thank you!

  • @trollenz
    @trollenz 2 months ago

    I recommend subscribing to YOUR channel 👏🏻👌🏻

  • @sharpvidtube
    @sharpvidtube 2 months ago +2

    Maybe timelapse mode would be ideal for making these?

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago

      Definitely! I thought about it too, but I haven't tested it in practice yet. Timelapse is a good idea!

  • @lucbaxter69
    @lucbaxter69 2 months ago

    Hello Olli, thank you for all the videos. I'm trying to work with Postshot... can I export the cropped model as a PLY? Whenever I export, the model always has the same size. Greetings, André

  • @Healthy_Toki
    @Healthy_Toki 22 days ago +1

    I'm thinking of an array of 3x 360 cameras mounted on a stick at different heights, all connected to a remote shutter button that you manually activate. This would allow all photos to be taken fairly 'still' between stick movements, so details can resolve with minimal motion blur, while also speeding up a full-coverage capture. Has anyone tried this yet?

    • @OlliHuttunen78
      @OlliHuttunen78 22 days ago

      I have had a similar idea. Such a 360 photography stick would be reasonably easy to build; it would only take three 360 cameras, and I only have one. Would Insta360 sponsor such a scientific experiment?

  • @studiodevis
    @studiodevis 2 months ago

    Kiitos Olli! Fantastic videos about GS. I have a Canon R5 C (max video res 8K 50p RAW) and a 360 camera, the QooCam 8K. Which of these two cameras do you recommend I use? I read in the comments that Postshot works better with lower-res videos (max 4K). Does it depend on the PC hardware?
    Keep up the good work!

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago +2

      Well, the 360 camera is more effective because it covers a larger view and you get so much with one shot. And resolution is not the important aspect in Gaussian Splatting.

    • @TheInglucabevilacqua
      @TheInglucabevilacqua 25 days ago

      @@OlliHuttunen78 Thank you Olli. I was just wondering about the influence of the source images' pixel count on the final result. It seemed likely to me that - especially if one strikes the best balance between the number of source images and the number of iterations - the pixel count of the source material can become a factor again... Should we wait for future improvements in the algorithms and in the GPU hardware for that?

  • @hollisatlarge
    @hollisatlarge a month ago

    Hey Olli. If you had to do the car garage project without a 360 camera and just took regular video or images, how would you shoot the videos (or take the pictures) that Postshot uses to train? Do you take a bunch of videos circling different areas of the car garage? I've really only captured aerial images and used Postshot for exterior 3D models; I haven't done an interior yet. Cheers!

  • @AdrienLatapie
    @AdrienLatapie 2 months ago

    This is awesome. Would you recommend buying that 360 camera, or could you achieve the same results with an iPhone? What if you have a really old 360 camera - do the training frames have to be high-res?

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago +2

      Well, yes, the 360 camera is very handy, but you can get the same kind of results with a phone camera too. It is not about resolution; images just need to be sharp. I'm computing these GS models from 1440 x 1080 images. A wide lens with no fisheye distortion is also useful. At the very least you can try your old 360 camera as well.

  • @underbelly69
    @underbelly69 2 months ago

    Is it possible to convert an iPhone 15 LiDAR scan (not photos), e.g. exported as a .ply file from a LiDAR scanning app?

  • @christianblinde
    @christianblinde 2 months ago +2

    What kind of system do you have that you can process 1000 images? My system struggles when I go above 300 images. Is there a way to get around the system limitations, like processing parts and merging them?

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago

      I have an Nvidia RTX 3070 graphics card with 8 GB of VRAM, a Ryzen 7 CPU and 64 GB of RAM. I have also thought about whether the scan could be done in parts. Yes, it is certainly possible. The difficulty arises from the fact that training always creates models at a different scale, so it is quite difficult to get them to fit each other, but it is not impossible.

  • @orbitall3d-capturadarealid774

    Hi guys.
    I have a question, maybe I'm completely wrong.
    I would like to know if it is possible to obtain a visualization of a point cloud obtained by LIDAR, similar to Gaussian Splatting.
    I've always worked with the process of capturing images in the field with a laser scanner, so I know practically nothing about the technical side of files and programming.
    I understand that the images obtained in the 3D scanning process are used to colorize the points.
    So I thought I could skip the training stage with images, and use a point cloud obtained by the scanner (I think it's more accurate and noise-free).
    I apologize if what I've said is nonsense, or if it already exists.
    TIA

  • @GSXNetwork
    @GSXNetwork 2 months ago +3

    Watch some photogrammetry tutorials to eliminate the guesswork.

  • @mishadodger
    @mishadodger a month ago +1

    Hi Olli, thank you for your posts. I'm doing a university project with Gaussian splatting and I'm also trying to make a scan. The problem I'm having is that it's taking very long; what format are you using when making your videos? I've shot some 8K videos of a small room with my Samsung S23; the whole process runs OK but gets stuck when it reaches 10k steps - from that point it's barely moving, and the estimated time jumps from 30 minutes to 20+ hours. I have a 4070 card, so power should not be the problem. I was thinking maybe my videos are bad? Should I maybe shoot FHD 60 fps instead? Please advise if you have had this issue before. I changed the number of shots from 400 to 200 and it still moves slowly after that 10k.

    • @OlliHuttunen78
      @OlliHuttunen78 a month ago +1

      You should lower your image resolution radically; 8K images are choking the training process. Use, for example, 1920x1080 images. I made my tests in a 4:3 aspect ratio where the resolution was 1440x1080, and it seems to be quite a good format for Gaussian splatting training.

    • @mishadodger
      @mishadodger a month ago

      Thank you for coming back to me; I will also try that.
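
A minimal batch-downscale sketch along the lines of this reply; the folder names and the 1440-pixel target width are assumptions, and OpenCV is assumed to be installed:

```python
# Shrink every JPEG in a folder to a fixed width before training.
import glob
import os
import cv2

def downscale_folder(src_dir, dst_dir, target_width=1440):
    os.makedirs(dst_dir, exist_ok=True)
    for path in glob.glob(os.path.join(src_dir, "*.jpg")):
        img = cv2.imread(path)
        scale = target_width / img.shape[1]
        resized = cv2.resize(img, None, fx=scale, fy=scale,
                             interpolation=cv2.INTER_AREA)
        cv2.imwrite(os.path.join(dst_dir, os.path.basename(path)), resized)

downscale_folder("frames_8k", "frames_1440")
```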

  • @melankolistaja3792
    @melankolistaja3792 2 months ago +1

    How about using Google Street View as source material?

  • @VerumBit
    @VerumBit 2 months ago +1

    Thanks for the mention! ;) Check the links in the description - mine and Overhead's are not working :)

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago +1

      Hey! Good that you noticed. Now the links are fixed.

    • @panonesia
      @panonesia 2 months ago

      Can you make a video on how to build a simple game from a Gaussian splatting scan? A basic level is okay...

    • @VerumBit
      @VerumBit 2 months ago

      @@panonesia You just need to add collisions, because Gaussians have none: set a floor plane, then make many cubes, each one with the same dimensions as the objects (cars, walls, columns, roof, etc.). The material of these cubes must be transparent (or set them hidden in game). Finally, put the player and enemies inside the scene and play. This is what I did, nothing more.

  • @jimj2683
    @jimj2683 a month ago

    Someone needs to apply these 3D techniques to Google Street View!

  • @user-yw1lj5et9p
    @user-yw1lj5et9p 2 months ago

    Why is it that after I register, I can't bring up the startup screen for the software, Postshot?

  • @NervusOne
    @NervusOne 18 days ago

    I've downloaded the Unity plugin... but I just see the point cloud... is it under RGB?

  • @CPlat
    @CPlat 2 months ago

    I was wondering how many images I should use for a scan of my street with 54000 frames

    • @bradleypout1820
      @bradleypout1820 a month ago

      As many as you want to - just make the images smaller if you want to use more. Your limits are your PC and your time, lol. Put as many as you want into these programs; you will soon find out from crashes or lag that you have put too much data in!

  • @hanskarlsson3778
    @hanskarlsson3778 a month ago

    Olli, I am mystified by your workflow here. In 360s you appear in the shot, but you reframe to 4:3 and output a section of the full 360, as I understand it. However, do you output several videos in that format from Insta360 Studio to get all angles (except the parts where you appear yourself)? If so, do you frame them so that you have overlap? I would really appreciate more details about the steps in Insta360 Studio. I also don't know how to get the menu you show when you talk about how to get rid of fisheye distortion.

    • @hanskarlsson3778
      @hanskarlsson3778 a month ago

      Sorry, I figured most of it out. You set keyframes in Insta360 Studio (press the plus button). I put one at the start, then copied and pasted it to the last frame to get the same values. I then captured several reframed shots from the same 360 video, trying to cover everything with a bit of overlap. I walked the same route back and forth to get frames of everything without me in the shot. Now trying this, hoping it's the right workflow :)

    • @OlliHuttunen78
      @OlliHuttunen78 a month ago +1

      Yes. The 360 camera has the awkward feature that you, as the photographer, will inevitably be included in the pictures if you use it to shoot a video with a selfie stick. I simply photographed the scanned material so that I limited one 4:3 view to point to the left and one to the right. I didn't render an image that points forward or backwards, because then the stitching seam will be in front and I myself will be behind. So in one round I was able to photograph two directions at once. And I did three of these rounds, each from a different height. At the highest rotation, I turned the viewing angle slightly downward so that the floor could be seen, and at the lowest rotation, correspondingly, slightly upward so that the details of the roof could be seen. Hope these tips help.

    • @hanskarlsson3778
      @hanskarlsson3778 a month ago

      Follow-up: I followed your method, using my little Insta360 EVO. The scene was a path with blooming cherry trees here in Japan. The results were bad, which I blame on the choice of subject. There was no detail in the cherry trees at all; they look like an impressionist painting. I think this kind of subject requires photographs, not video, and a high-grade camera. I also only did one height, which was a mistake. On the other hand, capturing such a large scene with a mirrorless or DSLR camera is a huge challenge, not least because of the height of the trees, and the camera is too heavy for a drone (unless it's a really expensive drone). It's interesting that buildings in the scene look decent, so it seems your method is suitable for "hard" subjects, like your garage with cars. Vegetation, flowers etc. are tough, I think partly because cutting out a small piece of the 360 video like you did here results in very low resolution. You can notice this in 180 VR videos, which look much sharper than 360 from the same camera (like my EVO), because the same number of pixels only needs to cover half the area. But you are 100% right, this is a field that needs a ton of experimentation and experience for success!

  • @boskobuha8523
    @boskobuha8523 a month ago

    I have an issue with the Postshot software or some settings. I have about 200 images and am training Gaussian splatting. Everything goes fine and the result seems to be excellent, but when training finished the result was very bad and messy - useless. That was with 30,000 training steps. What happened? Do you have an explanation?

  • @alejandrocapra8654
    @alejandrocapra8654 2 months ago

    Make a tutorial on exporting from Gaussian splatting to Unreal, and cover the basic questions!

  • @NervusOne
    @NervusOne 19 days ago

    Olli, or anyone with the answer 😁. I'm a newb with Unreal and/or Blender; I come from a filmmaking background. Can you help me understand how to take the PLY file from Jawset and use it in either Blender or Unreal so that I can see the colors? Right now I'm only able to see the point cloud and the camera points...

    • @OlliHuttunen78
      @OlliHuttunen78 19 days ago

      Well, in Unreal you can use Gaussian Splatting PLY files with, for example, the Luma AI plugin, which you can find in the Unreal Marketplace. In Blender it is a little bit more difficult: there is one addon that can display Gaussian Splatting PLYs in Blender, but it is quite slow because it uses the Cycles render engine. I recommend watching my other video where I list these tools: czcams.com/video/9pWKnyw74LY/video.html

  • @dainjah
    @dainjah 2 months ago +2

    what are your system specs? 4090?

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago

      An Nvidia RTX 3070 graphics card with 8 GB of VRAM, a Ryzen 7 CPU and 64 GB of RAM.

    • @dainjah
      @dainjah 2 months ago

      @@OlliHuttunen78 Thank you. I'm currently on a 3050 laptop GPU and wanted to figure out how much of an upgrade I need to speed up my splat creation.