Quest 3 Depth API is a GAME-CHANGER for Mixed Reality! (Dynamic Occlusion)

  • Published Oct 30, 2023
  • Demonstrating the Depth API's experimental Dynamic Occlusion feature on the Meta Quest 3, using Meta's Unity demo. Without it, virtual content appears either entirely in front of or entirely behind the real-world video stream in mixed reality. With it, the Quest continuously depth-scans the environment and creates depth estimates from the headset's point of view. This is a must for more believable visual interactions between the virtual and the real world.
    The Quest 3 scans the space for changes in real time and can make real-world objects, and even the human body (e.g. your own hands), appear in front of virtual content.
    This demo shows both Soft Occlusion and Hard Occlusion; soft occlusion requires more GPU computing power. (A hedged Unity sketch of toggling these modes follows below.)
    Disclaimer: This is an experimental feature using Meta's Depth API. The demo is from the Oculus samples repository, compiled in Unity. I'm still learning about the Depth API, so some of this information may be inaccurate.
    #quest3 #occlusion #depthapi
  • Science & Technology
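
    For developers curious how to wire this up in Unity, here is a minimal, hedged sketch of switching between the demo's occlusion modes at runtime. The global shader keywords HARD_OCCLUSION and SOFT_OCCLUSION are assumptions based on Meta's Unity-DepthAPI sample; check the sample's occlusion shaders for the exact names in your SDK version.

        using UnityEngine;

        // Toggles the demo's occlusion variants via global shader keywords.
        // HARD_OCCLUSION / SOFT_OCCLUSION are assumed keyword names taken
        // from Meta's Unity-DepthAPI sample; they may differ between versions.
        public class OcclusionModeToggler : MonoBehaviour
        {
            public enum Mode { None, Hard, Soft }
            [SerializeField] private Mode mode = Mode.Soft;

            private void OnEnable() => Apply(mode);

            public void Apply(Mode newMode)
            {
                mode = newMode;
                // Only one occlusion variant should be active at a time.
                Shader.DisableKeyword("HARD_OCCLUSION");
                Shader.DisableKeyword("SOFT_OCCLUSION");
                if (mode == Mode.Hard) Shader.EnableKeyword("HARD_OCCLUSION");
                else if (mode == Mode.Soft) Shader.EnableKeyword("SOFT_OCCLUSION");
            }
        }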

Comments • 28

  • @SkyGrel19 • 7 months ago • +4

    this is so motivating!!!
    I still cannot fully imagine how many possibilities this new AR tech gives developers and users. I'm gonna go and write code right now!

  • @martinmcgirk2692 • 8 months ago • +2

    Finally! This is awesome

    • @SpookyFairyVR • 8 months ago • +1

      Yes, I thought it didn't exist at all. It is still experimental, but even so, for many apps it can be an amazing or even a must-have feature to avoid ruining immersion in MR. I do hope to see improvements, but even with my current experience, I would rather have this feature in some of the apps I tried than not have it at all. As long as it doesn't hurt performance to the point of ruining the entire experience, of course.

  • @LeoniOrtiz • 8 months ago • +3

    I love this! Occlusion is critical for MR experiences, especially hand tracking experiences.
    It seems weird that they don't seem to be using a segmentation map or the hand tracking info to improve hand segmentation specifically (a hedged sketch of that hand-mesh idea follows this thread). Is it treating hands just like any other object? I bet it's a performance-related problem.
    I can't wait for this to get better, with some kind of implementation of Meta's SAM or something similar to improve hand occlusion to near pixel-level accuracy. Maybe a future Quest with enough processing power, or faster segmentation models.

    • @SpookyFairyVR • 8 months ago

      It works for any object, even the entire body: czcams.com/video/8kHP6uZjmDo/video.html
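
    A hedged sketch of the hand-mesh idea from the comment above: instead of relying only on the per-pixel environment depth test, render the tracked hand mesh with a depth-only material, so virtual objects behind the hand are hidden while passthrough shows the real hand. All names here are hypothetical, and the material is assumed to use a shader with ZWrite On and ColorMask 0; this is not Meta's API.

        using UnityEngine;

        // Hypothetical alternative/complement to per-pixel depth occlusion:
        // use the tracked hand mesh itself as an invisible occluder.
        public class HandDepthOccluder : MonoBehaviour
        {
            [SerializeField] private Renderer handMeshRenderer;  // driven by hand tracking
            [SerializeField] private Material depthOnlyMaterial; // ZWrite On, ColorMask 0

            private void Start()
            {
                // Swap the visible hand material for a depth-only one, so the
                // mesh writes depth (occluding virtual content) but draws no
                // color, letting the passthrough hand show through instead.
                handMeshRenderer.material = depthOnlyMaterial;
            }
        }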

  • @joe-5D • 4 months ago • +1

    Can you provide links so we can do this too!!??

  • @damonhtoo • 8 months ago • +1

    Dynamic occlusion on hands is the obvious one, but I'm also interested in other applications, like being able to sit or lie on a virtual couch and see your real body.

    • @SpookyFairyVR • 8 months ago

      My next video is about just that.

    • @SpookyFairyVR • 8 months ago

      czcams.com/video/8kHP6uZjmDo/video.html

    • @damonhtoo • 8 months ago

      @SpookyFairyVR Works exactly as expected. I know you did what you could with your demo app, but I was thinking a more compelling demo for the general public would be a virtual couch mapped to your real couch, so you appear to be lying on the virtual one. And a virtual table where some of the objects on it are real, like a mug, etc. The effect is almost the inverse of AR: instead, you're bringing your body and select real objects into your virtual world. If you showed yourself lying on your real couch, but a virtually skinned version of it, then maybe changed the color/texture on the fly, I could see it going viral. It can be a longer video, but make the important part fit into a YouTube Short for easy consumption and short attention spans.
      Of course, such a demo would take way more work. I don't know why Meta hasn't teased demos of dynamic occlusion like yours and like the one I'm describing. It would get even non-VR members of the public excited about mixed reality.

    • @LeoniOrtiz • 8 months ago

      @damonhtoo I'm not sure I understand correctly, but if I do, then what you're describing is a little hard to achieve right now.
      You would need to use the inside-out tracking cams to track the position of your body, so it can be shown lying on the virtual couch when you lie down on the real one. Meta is working on upper-body tracking for the Quest 3, so it's not impossible, but it's not there yet.
      For the real mug in virtual reality... you would definitely need some kind of segmentation model telling the headset "this is a cup" so it can track it and show it to you correctly inside the virtual environment. There are models like that, such as Meta's SAM, but the problem is they would take too much processing power on the Quest.

  • @RiverReeves23 • 8 months ago

    This is great. They need to get rid of the virtual hands and just use AR passthrough; it looks way better and doesn't require a high poly count.

    • @SpookyFairyVR • 8 months ago

      I also prefer the real hands seen in many apps, but it needs to be improved.

  • @1kar1m • 2 months ago

    Link???

  • @Fares_Dev883 • 8 days ago

    Where did you get the scene that you tested with?

  • @guillermosanchez1224 • 8 months ago

    Hand tracking would have been much easier if they had integrated it with gloves.

  • @seweso • 8 months ago • +3

    This is worse than what my iPhone does. Why are you impressed?

    • @SpookyFairyVR • 8 months ago • +4

      Hi, thanks for your comment. I'll explain. Whatever a phone does, it does through a phone: I'd have to hold it in my hand, which also means one of my hands is busy holding the phone in the first place, and it has a limited field of view for obvious reasons. Some phones have more advanced ARCore/ARKit features, with the performance, hardware, and optics to back them up, but the phone form factor is a huge limitation and always has been. Coming from the Quest 2, I found having these features on the Quest 3 very impressive.
      Being able to experience it hands-free, like holding that virtual guitar in my hands, felt visually convincing and immersive (it could be even better with grip locking and haptics, but the demo wasn't meant for that; it was a simple demo). It was still an amazing experience, something I never got using a phone with this technology, and this is just one example. I can also see what could be achieved with it in more complex apps and games.
      So yeah, I am super impressed, and I'm positive this will be further optimized and improved in future updates. I stopped using MR on phones quite some time ago, mainly because of the phone form-factor limitations, no matter how sophisticated the features were. With the Quest 3, it felt like how it's supposed to be, and everything else (e.g. hand tracking, color passthrough, depth scanning, wider field of view, sharper display, advanced haptics) supported and enhanced it.
      So experiencing certain technologies, even ones that already exist on phones, can be far more immersive and exciting this way. I hope this answers your question.

    • @LeoniOrtiz • 8 months ago • +6

      'Cause your iPhone doesn't need to do it on two 4K screens at 72 Hz with ultra-low latency, probably while running a full game on top, and the difference matters far more in an immersive experience like this than in a simple AR phone experience.

    • @seweso • 8 months ago

      @LeoniOrtiz How is the resolution the depth map is rendered at relevant? I don't get it. You get 'voxels' that are 1 cm by 1 cm (the width of a finger). At arm's length my iPhone clearly has a higher (real-world) resolution.
      I'm not impressed.

    • @LeoniOrtiz • 8 months ago • +4

      @seweso 1. How is the resolution relevant? The higher the resolution, the more processing power is required to process and render the scene. If you "don't get" that, it's clear you won't get anything else about this. This is literally the simplest of concepts.
      2. Voxels? Why are we talking about voxels? Neither the RGB cameras, nor the depth sensor, nor even the rendering works in voxels. What do you think a voxel is?
      3. The headset can't simply take the camera feed and print it directly on screen the way your phone does. Because of the cameras' positions, it needs to analyze 2 video streams (4 if we count the tracking ones) plus 1 depth stream, do feature extraction, reconstruct a 3D model of the room, and warp the stereo video feed over it, just to show you the room in a consistent 3D view without making you dizzy. All of this in real time, on one core, and with a AAA game running on top.
      4. The headset runs at the same or maybe even higher resolution than your phone, but you have it literally taped to your eyes, making the relatively low-res depth feed look jagged compared to a phone. Tape your phone to your face and tell me you can't see individual pixels in the depth maps.
      5. And most important: nobody is trying to impress you in particular. This is impressive to those who really understand how hard it is to achieve.
      This, running in real time? On one core? While running a SLAM algorithm to track the headset, another ML algorithm to recognize hands and controllers, and maybe even voice recognition, all at the same time? All with a full game running on top, on a mobile chip? Amazing work. The people who built it deserve recognition from those who understand their hard work. (A toy sketch of the per-pixel occlusion test appears at the end of this thread.)

    • @FlorianBuchholz1 • 7 months ago

      @LeoniOrtiz The Quest doesn't have 4K screens.
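
    For readers wondering what the occlusion test itself looks like, here is a toy sketch, written in plain C# for readability, of the per-pixel comparison the occlusion shaders perform. The names, inputs, and the soft-occlusion falloff width are assumptions for illustration, not Meta's actual shader code.

        using UnityEngine;

        public static class OcclusionMath
        {
            // envDepth:  headset's environment depth estimate at this pixel (meters).
            // fragDepth: the virtual fragment's depth at the same pixel (meters).
            // Returns visibility: 1 = fully visible, 0 = fully occluded.
            public static float OcclusionAlpha(float envDepth, float fragDepth, bool soft)
            {
                if (!soft)
                    // Hard occlusion: a binary test. Cheap, but edges look jagged.
                    return fragDepth < envDepth ? 1f : 0f;

                // Soft occlusion: blend across a small band around the depth
                // estimate. Smoother edges, at extra GPU cost per pixel.
                const float band = 0.02f; // assumed 2 cm falloff width
                return Mathf.Clamp01((envDepth + band - fragDepth) / (2f * band));
            }
        }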