How to Use 360 Video for 3D Gaussian Splatting (and NeRFs!)

  • Published Sep 7, 2023
  • In this video, I show you how to take 360 video or images and use them to train a 3D Gaussian Splatting scene. This is an absolute beginner's guide and is the easiest way to get started. However, it is not the BEST way to do this. I will make a second video in the future that is more involved.
    You will need the 2021 version of Meshroom. Download it here: www.fosshub.com/Meshroom-old....
    Here is the image extraction command:
    ffmpeg -i path/to/360_video.mp4 -vf fps=(fps) -qscale:v 1 (output_folder)/image_%04d.jpg
    Here is the Meshroom command:
    aliceVision_utils_split360Images.exe -i (input 360 image folder) -o (output 2D image folder) --equirectangularNbSplits 8 --equirectangularSplitResolution 1200
    A script chaining both steps is sketched below.
    Find my beginner's tutorial for 360 Gaussian Splatting here: • Getting Started With 3...
    If you get stuck, ask questions in the comments! I will do my best to help!
    Please follow my channel for advanced tips and more informational videos on computer vision!
    Follow me on LinkedIn: / jonathanstephens
    Follow me on Twitter: / jonstephens85
  • Science & Technology
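  • The two commands above can be chained into one small script. Here is a minimal Python sketch, assuming ffmpeg is on PATH; the video path, folder names, extraction rate, and Meshroom install location are placeholders to adjust for your machine:

        # Minimal sketch: chain the ffmpeg frame extraction and the Meshroom
        # 360 split. All paths below are placeholder assumptions.
        import subprocess
        from pathlib import Path

        VIDEO = Path("path/to/360_video.mp4")  # stitched equirectangular video
        FRAMES = Path("frames")                # extracted equirectangular frames
        SPLITS = Path("splits")                # perspective crops for training
        SPLIT_EXE = r"C:\Meshroom-2021\aliceVision\bin\aliceVision_utils_split360Images.exe"  # hypothetical install path
        FPS = 2                                # frames extracted per second

        FRAMES.mkdir(exist_ok=True)
        SPLITS.mkdir(exist_ok=True)

        # Step 1: extract high-quality JPEG frames from the 360 video.
        subprocess.run([
            "ffmpeg", "-i", str(VIDEO),
            "-vf", f"fps={FPS}",
            "-qscale:v", "1",
            str(FRAMES / "image_%04d.jpg"),
        ], check=True)

        # Step 2: split each equirectangular frame into 8 perspective images.
        subprocess.run([
            SPLIT_EXE,
            "-i", str(FRAMES),
            "-o", str(SPLITS),
            "--equirectangularNbSplits", "8",
            "--equirectangularSplitResolution", "1200",
        ], check=True)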

Comments • 143

  • @Aero3D • 4 months ago

    Buying one of these today just for GS generation! I am super excited to try this out!!!

  • @mcmulla2 • 9 months ago +6

    Perfect! I've been messing with this thanks to your Splatting tutorial; excited to mess with some 360 footage I captured too!

    • @thenerfguru • 9 months ago

      Awesome! Follow me on social. If you share anything you come up with, tag me and I’ll repost it.

  • @secondfavorite • 9 months ago +3

    Thanks a bunch! This is what I needed. I have an Insta360 but didn't know where to start.

    • @thenerfguru • 9 months ago

      Great! Let me know if you run into any roadblocks.

  • @360_SA • 9 months ago +2

    Thank you, much-needed video!

    • @thenerfguru • 9 months ago +1

      I was rushing this one out for you! I could use tips on how to get better 360 video footage. I have the two cameras at the start of the video: an Insta360 Pro II and an RS One 1-Inch.

  • @choiceillusion • 9 months ago +1

    Very cool. I'm headed to a cabin on top of a mountain and I'm going to do some loops with a drone in an attempt to turn it into some sort of radiance field. Thank you for this tutorial.

    • @thenerfguru • 9 months ago

      Loops are amazing for this technology!

  • @AD34534 • 9 months ago +1

    This is freaking amazing!

  • @Photonees • 9 months ago +2

    Awesome, def gonna try and play with it. But how do you get the sky/ceiling rendered, since you said you didn't include the top? Also wondering how you can remove yourself if you use a 360 camera. I wonder if this would work with a fisheye and 4K video; then you are always out of the image and can get very high-res images, or just pictures on my Canon R5 with a fisheye. Any idea on what command you would need then?

  • @RelicRenditions • 9 months ago +5

    Such a great video. Thank you. I have been doing historic site and relic capture for a while now using photogrammetry and different NeRF solutions like Luma AI. I am excited to get started with Gaussian Splatting because: 1. it should render a lot faster for my clients, 2. it may look better, and 3. it honestly seems easier to set up than many of the cutting-edge NeRF frameworks I've been experimenting with that require Linux. Much of my workflow involves Windows because I also do a lot of Insta360 captures, Omniverse, etc. This is great stuff!

  • @underbelly69 • 9 months ago +1

    Outstanding! See you in the next one.

  • @deniaq1843 • a month ago

    Thanks for your time and effort. I want to try it out myself soon. Was the whole process in real time? I especially mean the creation of the 3D Gaussian file. I just wonder how fast this can be. Thanks so far and best wishes :)

  • @mattizzle81 • 9 months ago +1

    Insane idea. I was thinking of using iPhone lidar to capture point clouds, but that has a limited field of view and hence more waving the camera around. Capturing in 360 could be much more efficient.

  • @tribaltheadventurer • 8 months ago

    Thank you so much

  • @benbork9835 • 9 months ago +2

    Wow, this is epic! One thing I did not quite understand: in this training data, did you only record the alleyway once, or did you record it multiple times walking different paths as you told us to do?

    • @thenerfguru • 9 months ago

      This was a single walk-through. You can see that I didn't have the best freedom of movement in the end; unless I stuck to the single trajectory, the result falls apart fast.

  • @caedicoes • 8 months ago +3

    This is such an important video you can't even imagine!

  • @marcomoscoso7402 • 9 months ago +3

    Looks so straightforward. I wonder what this technology will look like in 5 years.

    • @c0nsumption • 9 months ago +2

      Can’t help but think that gaming and sims are going to change dramatically 🤔
      Like is this the future of memory? 🤷🏽‍♂️

    • @marcomoscoso7402 • 9 months ago

      @c0nsumption There are implementations with Unreal Engine already for this technology. I think this is the future of games.

    • @c0nsumption • 9 months ago

      @marcomoscoso7402 Dang, I've been switching over to Unreal since about a year ago because I had iffy feelings after being a Unity dev for 10 years. Thank God I did. I gotta try 'em out. Researching over the weekend.

  • @DanyDinho91 • 8 months ago +1

    Hi, thanks for these tutorials. Is it possible to export the point clouds or a 3D model of these results? Thanks

  • @lucho3612 • 8 months ago

    Fantastic technique!

  • @melkorvalar7645 • 9 months ago +1

    You're great!

  • @three-diverse • 5 months ago

    Just found you on YouTube after following you on LinkedIn for a while now! Great stuff! One question: do the scans have correct real-world measurements? For example, could I measure a kitchen counter that is scanned and have it be correct?

  • @JWPanimation • 2 days ago

    Thanks for posting! Would it have been better to shoot stills every 5 ft or 2 m with the 360 1-Inch? As per your suggestion, would a higher-elevation pass walking one way and then a lower-elevation pass going back the other way be ideal?

  • @allether5377 • 8 months ago

    oh nice!

  • @Povilaz • 8 months ago +1

    Very interesting!

  • @bradmoore3778 • 9 months ago +2

    Really great! If you mounted three cameras to one post at different heights, could you combine the three videos to make a better result? Or does the source have to come from the same device, moving the one device to different heights? Thanks

    • @thenerfguru • 9 months ago

      That could work. However, I would want all 3 cameras to be the same camera model.

  • @user-ui2hw5of9l • 9 months ago +1

    thank you very much~!!

  • @mankit.mp4 • 8 months ago +1

    Amazing tutorial, thanks! While I don't have a 360 camera, I do have a full-frame camera and a fisheye lens. How would you compare this workflow to taking 4K video with the fisheye and obviously walking back and forth multiple times at different heights?

    • @thenerfguru • 8 months ago +1

      Walking back and forth works. Just make sure you don't make any sharp rotations with the camera.

  • @brettcameratraveler • 9 months ago +3

    When it comes to NeRFs and GS, can you foresee any advantage to shooting with that larger 360 camera in 3D mode? I have the Canon dual-fisheye 3D 180 8K video camera and am hoping to take advantage of it in new, unintended ways, but it seems like stereoscopic wouldn't help for this purpose since you could just take more pictures with a single lens, no?

    • @Thats_Cool_Jack • 9 months ago +1

      It can help but what helps the most is constantly moving the camera. I have my camera at the end of a stick and I sway it back and forth as I walk to create the most parallax

  • @Thats_Cool_Jack • 9 months ago +10

    I find Meshroom's image outputs from 360 video to be very limiting: it only goes along the middle of the frame, which has you missing out on up-close things and only focusing on things on the horizon. My solution was to put the video on an inverted sphere in Blender, with some cameras (12 of them) facing outwards from the center at varying angles, and then create a bunch of camera markers (Ctrl+B) that switch between all the cameras every frame (see the sketch after this thread). I found I got way better results doing this, especially because I have a lower-end 360 camera that's only 5K res. Hope this helps someone.

    • @Thats_Cool_Jack • 9 months ago +1

      You want to avoid a high FOV to minimize distorted edges, which tend to be useless in photogrammetry.

    • @thenerfguru • 9 months ago

      @Thats_Cool_Jack Interesting. I have almost zero experience with Blender. What is your experience with your method being trained into a NeRF or used for photogrammetry output?

    • @Thats_Cool_Jack • 9 months ago +1

      @thenerfguru It works really well. The images are the same quality as they would be with the Meshroom method, but you can choose the angles the cameras are looking. When I record the 360 video, I sway the camera on the end of a camera stick back and forth while walking to create as much parallax as possible, which gets the best depth information but can be somewhat blurry in low-light situations. I've done both NeRF and photogrammetry. I made a VRChat world using this method in a graffiti alleyway.

    • @jiennyteng • 6 months ago

      Thanks for your awesome work! Could you share more details about how to import the 360 video into Blender and output multi-view perspective images?
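
      For anyone who wants to try this, here is a rough Blender Python sketch of the rig described above; the camera count, FOV, and pitch angles are assumptions, not the commenter's exact setup. Run it from Blender's Scripting tab, then assign the 360 video to the sphere as an emissive texture with flipped normals:

        # Rough sketch: inverted sphere + 12 outward-facing cameras, with a
        # timeline marker per frame to cycle the active camera (the Ctrl+B step).
        import math
        import bpy

        NUM_CAMS = 12  # assumed count, matching the comment above

        # Sphere that will carry the equirectangular video texture.
        bpy.ops.mesh.primitive_uv_sphere_add(radius=10)

        scene = bpy.context.scene
        cams = []
        for i in range(NUM_CAMS):
            # Vary yaw around the sphere and alternate pitch so near-ground
            # and overhead content is covered, not just the horizon.
            yaw = 2 * math.pi * i / NUM_CAMS
            pitch = math.radians(-30 if i % 2 else 20)
            cam_data = bpy.data.cameras.new(f"cam_{i}")
            cam_data.angle = math.radians(60)  # moderate FOV to limit edge distortion
            cam = bpy.data.objects.new(f"cam_{i}", cam_data)
            cam.rotation_euler = (math.pi / 2 + pitch, 0.0, yaw)
            scene.collection.objects.link(cam)
            cams.append(cam)

        # Bind one camera per frame via markers, cycling through all cameras
        # as the video plays; rendering the animation exports every viewpoint.
        for frame in range(scene.frame_start, scene.frame_end + 1):
            cam = cams[frame % NUM_CAMS]
            marker = scene.timeline_markers.new(cam.name, frame=frame)
            marker.camera = cam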

  • @benjaminwoite6136 • 9 months ago

    Can't wait to see how you make a Gaussian Splatting scene from Insta360 Pro footage.

  • @user-nk5oq5mb4s • 8 months ago +2

    Thanks for your detailed and professional video. We followed your steps and can indeed get Gaussian Splatting results, but we also found that when 6K panoramic video (200 Mbps bitrate, H.265) shot with the Insta360 RS One is converted into 360 images by ffmpeg and then into perspective images by AliceVision, the images are not very clear. Could you please give us some guidance on how to improve the clarity of the picture?

    • @RobertWildling • 8 months ago

      Having the same problem. But it seems to be the Insta360 RS One that simply does not deliver good image quality.

  • @joselondono • 2 days ago

    Is there an option within the aliceVision command to also include the view upwards?

  • @Instant_Nerf • 8 months ago

    It seems like the algo has a hard time with more data. For example, you normally go around something and have a small area to look at with NeRF or Gaussian. But how do you combine footage for a larger scene? You go around something, then expand by getting more footage and try to combine all the data so you have more to look at, or just create a larger scene. It seems to have problems with that. Any thoughts?

  • @hangdu4417 • 7 months ago

    Can I measure relative widths in the Gaussian result? Which software do you suggest? Thank you!

  • @frankricardocarrillo1094 • 9 months ago +2

    Hello Jonathan, I already have a NeRF and a Gaussian Splatting of the same scene, and I would like to make a video comparison to show how much better the GS is. Any recommendations on how to do it?
    Thanks

    • @thenerfguru • 9 months ago +1

      You bet! You can either manually resize all of your photos ahead of time, or when you prep the images it should make a half, quarter, and 8th scale version.

  • @vassilisseferidis • 9 months ago

    Great video Jonathan. Thank you. Have you tried any footage with the Insta360 Pro to compare the results with the One RS 1-inch?

    • @thenerfguru • 9 months ago

      I would like that too! Do you mean the X3? If only Insta360 could send me a loaner :)

    • @vassilisseferidis • 8 months ago

      Hi Jonathan,
      When I follow your workflow, the quality of the generated Gaussian Splatting looks good only if you follow exactly the same path as the original (recording) camera.
      In your video you show the 6-camera Insta360 Pro model. Have you tried to create a Gaussian Splatting using that camera? I'd expect that the higher resolution would produce better results(?).
      Keep up your excellent work.

  • @loganliu1573 • 8 months ago

    Sorry, I am using machine-translated English; I hope it comes across.
    Thank you very much for your video, I have learned a lot.
    A small question: I created a .ply model from a video I filmed and found that the ground was missing, so I filmed another video of the ground and created a second .ply model.
    How can we merge these two .ply models into one complete model? If I can merge them, I can shoot more videos in segments and make the scene complete, with no dead corners.

  • @monstercameron • 9 months ago +1

    I'm gonna give it a try

    • @thenerfguru • 9 months ago

      Comment if you get stuck! I was literally losing my voice while making the video. 😅

    • @monstercameron • 9 months ago

      @thenerfguru Well, I ran it last night and I was segfaulting. I think it's my CUDA toolkit version, I hope. Thanks for sharing; I'll reference your videos for help.

    • @thenerfguru • 9 months ago

      This also works for NeRFs and photogrammetry!

  • @ThomasPlaysTheGames • 9 months ago +4

    Regarding the conversion of the equirectangular images to cubemaps: I'm afraid I don't understand the need for this.
    My experience with COLMAP is intermediate, but I typically experienced fewer camera-pose misalignment issues when I didn't perform any operations on the input images. Not to mention the extreme slowdown in bundle adjustment & block matching when you start having tons of image tiles.
    Does Insta360 Studio not allow you to export the raw video from each camera independently? Or are you performing this workflow for some other reason?
    Additionally, I'd love to hear why you're using Meshroom for the cubemaps instead of something like 'ffmpeg -i input_equirectangular.mp4 -vf "v360=e:c3x2" cubemap_output.mp4'

    • @thenerfguru • 9 months ago +3

      Great questions:
      1. I cannot export raw images from each lens. I use that workflow with my Insta360 Pro II, but I still drop a lot of the extremely warped sections of the images.
      2. As for FFmpeg, that shows how far behind on updates I've been with this software! After a few comments, I have written a Python script to extract 8 images and added some additional controls for optimization (a rough sketch of the idea follows below).
      3. For getting 8 cubemapped images, I'm going off what I have tested in the past and what works best. Using just the front, back, left, right, up and down images does not yield a great result.
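
      That script isn't posted here, so as a minimal independent sketch (not the author's script), here is one way to cut 8 perspective views from an equirectangular frame with OpenCV; the FOV, split count, and output size are assumptions chosen to mirror the Meshroom settings used in the video:

        # Minimal sketch: sample N pinhole views around the horizon of an
        # equirectangular image. Not the author's script; parameters are
        # illustrative assumptions.
        import math
        import cv2
        import numpy as np

        def perspective_views(equi, n_splits=8, out_size=1200, fov_deg=60.0):
            """Yield n_splits pinhole views sampled around the horizon."""
            h, w = equi.shape[:2]
            # Focal length (pixels) for the requested field of view.
            f = (out_size / 2) / math.tan(math.radians(fov_deg) / 2)
            xs, ys = np.meshgrid(np.arange(out_size) - out_size / 2,
                                 np.arange(out_size) - out_size / 2)
            # Unit ray directions in camera space (x right, y down, z forward).
            dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
            dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
            for i in range(n_splits):
                yaw = 2 * math.pi * i / n_splits
                # Rotate rays about the vertical axis to this split's heading.
                rot = np.array([[math.cos(yaw), 0, math.sin(yaw)],
                                [0, 1, 0],
                                [-math.sin(yaw), 0, math.cos(yaw)]])
                d = dirs @ rot.T
                # Rays -> spherical coords -> equirectangular pixel lookups.
                lon = np.arctan2(d[..., 0], d[..., 2])       # -pi..pi
                lat = np.arcsin(np.clip(d[..., 1], -1, 1))   # -pi/2..pi/2
                map_x = ((lon / (2 * math.pi) + 0.5) * w).astype(np.float32)
                map_y = ((lat / math.pi + 0.5) * h).astype(np.float32)
                yield cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR,
                                borderMode=cv2.BORDER_WRAP)

        # Example: split one extracted frame (name follows the ffmpeg pattern).
        img = cv2.imread("image_0001.jpg")
        for i, view in enumerate(perspective_views(img)):
            cv2.imwrite(f"split_{i}.jpg", view)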

    • @ThomasPlaysTheGames • 9 months ago

      @thenerfguru Thank you very much for the clarification.

    • @jtogle • 9 months ago +1

      @thenerfguru I have an Insta360 Pro II also and would like to try your workflow! Other than dealing with a bowling ball on a pole overhead, does the workflow for a Pro II differ from this video?

    • @panonesia • 4 months ago

      @thenerfguru If you don't mind, can you share your Python script to extract 8 images, please?

  • @EconaelGaming • 9 months ago +1

    Why do you split the images with Meshroom? Can't COLMAP deal with fisheye lenses?

    • @thenerfguru • 9 months ago

      That's a good question. Give it a shot; I bet you'll have a fun time with COLMAP 🙃. Also, I'm not sure how to export native fisheye images from this camera. I can do it with my Insta360 Pro II, but I still prefer using my own dewarp calibration.

  • @martondemeter4203 • 6 months ago

    Hi!
    What are the exact convert.py parameters you run on the 360 vid?
    I tried with mine. I shoot with an Insta360 X3 (good, slow recording, 4K equirects), I do exactly what you show, and COLMAP only finds 3-6 images... :S

    • @thenerfguru • 6 months ago

      Do you have plenty of parallax in the scene? If all of the objects are far away, there isn't enough parallax and this can happen.

  • @XiaoyuXue-xw9wf • 7 months ago

    What's the camera name?

  • @pixxelpusher • 9 months ago +2

    Asked on a previous video, but wondering if you'd know how to view these in VR?

    • @thenerfguru • 9 months ago +1

      My next video will be how to view these in Unity. I'm no Unity expert, but I think you can do it in there.

    • @pixxelpusher • 9 months ago

      @thenerfguru Sounds great, looking forward to it.

  • @kyle.deisgn4626 • 2 months ago

    Hi, I went through the convert.py process, but 'Mapper failed with code' showed up after hours of processing. 😢

  • @hyunjincho5972 • 3 months ago +1

    Can I know the GPU spec you used to build the Gaussian Splatting model? Thanks

  • @sashachechelnitsky1194 • 9 months ago +1

    @thenerfguru I wonder if, using this method, you can create stereoscopic 3D Gaussian Splatting using a VR180 camera? I have footage I can provide for testing purposes.

    • @thenerfguru • 9 months ago

      Interesting. My next video will be how to display this all in Unity. I bet it can be accomplished in there.

    • @sashachechelnitsky1194 • 9 months ago

      @thenerfguru Rad! I'll be on the lookout for that video. Keep crushing it, man.

  • @27klickslegend • 6 months ago

    Hi, do I need GPS data in my photos for this? The QooCam 3 can only do this by pairing with my phone.

  • @narendramall85 • 9 months ago +2

    How can I download the 3D environment into some .glb or other file format?

    • @thenerfguru • 9 months ago

      Not possible with this current project. However, this workflow will get you okay results with software like Reality Capture or Object Capture.

  • @KeyPointProductionsVA • 9 months ago +2

    I'm still having issues just getting my computer to run Python and such so I can start making NeRFs. But I have a drone with 360 camera attachments I would love to start using for this.

    • @thenerfguru • 9 months ago

      What’s happening with Python? Is it not added to your path?

    • @KeyPointProductionsVA • 9 months ago

      @thenerfguru I am not sure why it wasn't working with my C: drive, as that is where my OS is, but I put it on an old OS drive and now Python is working just fine. Technology, it's weird sometimes 😆

  • @devanshusingh3887 • 3 months ago +1

    What would be the result if you didn't move back or turn around while capturing the video? I tried to create a NeRF after capturing a video inside a room, moving from one end to the other, but it didn't work out. Why is that happening?

    • @thenerfguru • 3 months ago

      Was it a 360 camera? Rooms can be tough if the walls are bare. You end up with cubemapped images without unique features.

  • @GooseMcdonald • 9 months ago +3

    Do you know a way to use a point cloud, i.e., some Leica scans, as the basis for 3D Gaussian Splatting?

    • @thenerfguru • 9 months ago

      You need source images. I am not the most well versed in Leica solutions. Do you get both a point cloud and images from a scan station?

    • @RelicRenditions • 9 months ago

      I know that for the Leica BLK2GO, it captures both the LiDAR scans and the 360 panorama stills as you go. In the Leica SW, you can export the images every x feet that you want. The devices use both the laser and the RGB sensor to do SLAM as you move.

  • @JaanTalvet • 5 months ago

    You mentioned around 15 min in that you could have gone back over the scene again. Would that significantly increase processing time, but also significantly improve image quality (remove floaters, blur, etc.)?

    • @thenerfguru • 5 months ago +1

      It probably wouldn't make training time too much longer. However, it would reduce floaters and bad views. You basically would have a greater degree of freedom.

  • @panonesia • 4 months ago

    Can you set a custom FOV? I'd like to include more of the top in the exported frames.

    • @thenerfguru • 4 months ago

      Maybe. I have not looked into the Python scripts provided by Meshroom; however, you may be able to modify them.

  • @tribaltheadventurer • 8 months ago

    Is anyone getting a "this app can't run on your PC, check software publisher" error, even though this has worked before?

  • @R.Akerblad • 5 months ago

    Looks well made 💪, but a bit unnecessary ;)
    I usually use a long screw, 40 mm: screw it 20 mm into the corner and stick the magnet to it. Completely hidden by the sensor 🤙

  • @Aero3D • 4 months ago

    OK, so I bought one and tried this, and my resulting GS seemed to be as if it were a single frame, a tiny section of the total recorded space. Any ideas why this may happen? I might be doing something wrong; this is my first attempt ever.
    I have all my 360 frames. I split them with ffmpeg and see all the split frames, and I put them into the "input" folder of my COLMAP root. But after it's done, I see only 3 in COLMAP "images", and that is the spot I see in my GS. It only processed 3 of the 4,600 images.

    • @thenerfguru • 4 months ago

      Are you attempting to work with the equirectangular images or splitting them with Meshroom?

    • @Aero3D • 4 months ago

      @thenerfguru Splitting them with Meshroom.

    • @Aero3D • 3 months ago

      I tried with an all-new dataset and got the same result. I must be missing something.

  • @felixgeen6543 • 9 months ago

    Does anyone know how to use equirectangular images without breaking them into separate FOVs? That would seem like the best use of the data.

    • @thenerfguru • 9 months ago

      Perhaps your best bet is to try Nerfstudio's 360 image supported training. Then, convert it to 3D Gaussian Splatting format. I don't have a tutorial for this though.

  • @S41L0R • 9 months ago +2

    How long did this take to convert and train for you?

    • @thenerfguru • 9 months ago

      It really depends. Convert usually takes around 5-20 minutes depending on the scene; it could take longer for a lot of images. Train takes 30-45 minutes.

    • @S41L0R • 9 months ago

      @thenerfguru Hm, that's weird. Small videos I've done have taken hours and hours just to convert. Maybe I missed this in your tutorial video, but do I need to capture at a lower res?

    • @thenerfguru • 9 months ago

      Perhaps. Maybe fewer total images in the end. Set the fps to like 1 or 0.5.

    • @S41L0R • 9 months ago

      @thenerfguru Ohh, OK, I've always done 30 fps.

  • @briancunning423 • 9 months ago +1

    Would this work using Google Street View 360 images?

    • @thenerfguru • 9 months ago

      I have not tried it. Can you get a clean image extract from Google?

    • @briancunning423 • 9 months ago

      Yes, there is a way you can download and view them. I took 1080x1920 stills and fed them into photogrammetry software, but the result was a sphere with the image projected onto it.

  • @wrillywonka1320 • 5 months ago

    So after we get a Gaussian splat, where can we even use it? No Adobe programs can run them, DaVinci can't, Blender does it very poorly, UE5 costs $100; I think maybe Unity is the only program that can use a Gaussian splat. They are awesome, but it's like having 8K video when YouTube only plays 1080p. Where can I actually use these splats to make a cool video?

  • @AnnaMironenko89 • 7 months ago +1

    😮

  • @lolo2k • 9 months ago +1

    I have worked with the Insta360 RS One 1-Inch, and it is not worth the price tag! The bigger sensor is great for low light and higher dynamic range, but this model has a few drawbacks: 1) high price, 2) the flare seen in this video. I suggest buying a QooCam 3 at a much lower price and with better specs. It just released but will be on shelves soon.

    • @thenerfguru • 9 months ago

      That sun flare issue is terrible!

    • @RelicRenditions • 9 months ago

      I have been using the Insta360 RS One 1-Inch, the Insta360 X3, and the iPhone 13 Pro. All three have their place in captures. The higher resolution and the larger sensor on the One 1-Inch are great, but I really find the in-camera, real-time HDR video of the X3 helpful in outdoor scenes. If you can keep your subject in front of you as you orbit, even an older iPhone XR is worlds better than the Insta360s. If you need to get in somewhere tight like a smaller building, out come the 360s. The 13 Pro has much better low-light and high-pixel-density captures than either, if you can orbit your subject. This is especially true now that they added shooting in RAW as an option in the Pro phones. Keep capturing!!

    • @RelicRenditions • 9 months ago

      @thenerfguru Indeed. You already know this, but for others on here: try to walk in the shadows like a thief in Skyrim. You can often pull up a map of your target area, evaluate when you will be there to do the capture, and try to stay in the shade as the sun moves during the day. This is a little easier in towns and cities since you can use the buildings' shadows. Sometimes you just need to sidestep a foot to the right or left and it makes all the difference. Not always an option, but it can help. You can also tape a piece of paper to the camera on the side with the sun (just wide enough) so it will keep the sun off the lens. You will lose some degrees of capture on that side, but what you do capture will be glare-free. Might be a fair trade.

  • @CristianSanz520 • 9 months ago +1

    Is it possible to extract a point cloud?

    • @thenerfguru • 9 months ago +1

      Not currently. I wouldn’t be surprised if a new project comes out where geometry is exportable. I’ve seen a paper on it and a demo code, but it’s not usable today.

  • @alvydasjokubauskas2587 • 7 months ago +1

    How can you remove your head or body from all this?

    • @Thats_Cool_Jack • 7 months ago +2

      When recording the video, always have your body at the end of the camera stick, and turn off horizon stabilization.

  • @spaceghostcqc2137 • 8 months ago +1

    Can you multicam NeRFs and splats?

    • @thenerfguru • 8 months ago

      Do you mean record with multiple cameras at once? Could be achieved if all of the cameras were the same model/lens

    • @spaceghostcqc2137 • 8 months ago

      @thenerfguru Thank you. I'm picturing two 360 cameras, perhaps one on a stick for sweeping around and one on a pole sticking up from a backpack? Or two at different heights on a walking stick. Do you have any guesses as to how two Insta360 X3s used like that would do vs a single RS One 360 edition? Also imagining a frame to hold 3 of them for quick one-pass scanning of cooperative humans.

  • @Legnog822 • 8 months ago

    It would be nice if tools like this could eventually take 360 photos as input natively.

    • @thenerfguru • 8 months ago

      You could batch it and not have to deal with the different steps.

    • @foolishonboards • 3 months ago

      Apparently Luma AI allows you to do that via their cloud service.

  • @hasszhao • 9 months ago +1

    WHAT KIND OF CAMERA?

    • @thenerfguru • 9 months ago

      In this video I used an Insta360 One RS 1-Inch Edition.

    • @hasszhao • 9 months ago

      @thenerfguru Thanks, dude.

    • @hasszhao • 8 months ago

      @thenerfguru Hey, I got the same device and wanted to try reproducing what you did, but I could only generate an almost-one-frame result after rendering, even though aliceVision_utils_split360Images produced a lot of subimages. I checked the resulting "output" directory, and actually only a few images were used.
      Do you have any idea what my problem might be?

  • @lodewijkluijt5793 • 9 months ago

    I just tried a dataset of 1456 images (1200x1200) and my 24 GB of VRAM wasn't large enough; going for 728 (half) now to be safe.

    • @lodewijkluijt5793 • 9 months ago

      727 of the 728 images linked, and it uses around 18 GB of dedicated VRAM.

    • @foolishonboards • 3 months ago

      @lodewijkluijt5793 How does the model look?

  • @marcelwinklmuller5622 • 9 months ago +1

    It would be awesome if it could just process 360 pictures directly to get it all.

    • @thenerfguru • 9 months ago +2

      This all could be batch scripted so you don't have to go through all of the steps one by one.

  • @Moctop • 9 months ago +1

    Feed in all the Street View data from Google Maps.

    • @thenerfguru • 9 months ago

      I don't know how to scrape all of the Street View data, but yes, that would technically work.