How the Kinect Depth Sensor Works in 2 Minutes
- Published 15 Feb 2013
- The Kinect uses a clever combination of a cheap infrared projector and camera to sense depth.
References:
• Video
www.google.com/patents/US20090...
campar.in.tum.de/twiki/pub/Cha... (p. 33)
en.wikipedia.org/wiki/Range_im... (Stereo triangulation) - Science & Technology
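The stereo-triangulation idea from the references can be sketched in a few lines. This is only an illustration of the geometry, not the Kinect's actual pipeline; the focal length and projector-camera baseline below are assumed example values, not official specs.

```python
def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Triangulation: Z = f * b / d.

    A projected speckle that shifts fewer pixels between the expected
    and observed positions is farther away. focal_px and baseline_m
    are illustrative numbers, not measured Kinect parameters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With these assumed parameters, a 20 px shift maps to about 2.2 m.
print(round(depth_from_disparity(20.0), 3))  # → 2.175
```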
I can hardly find a word to express how much you have helped in my research study using kinect.
Thank you!
Glad it helped! The best thanks is a link to this video from a website or forum.
incredibly well explained and simple video.
I believe the important thing about the pattern is that it's random, so that the camera can differentiate between groups of speckles. The broader term for this is "structured lighting". Google Structured-light_3D_scanner
Thanks, you're a good presenter. Simple & Concise :)
I've checked your channel and can confirm, you're a genius.
Very clear and concise. Great ! Thanks !
Fantastic! I knew there was a reason I subscribed!
Please let me know how the Kinect comes to know the angle of the speckle pattern..?
Thanks for your knowledge
Good! Easy to understand the theory!!
Good clarification. Should have said irregular pattern instead of random. I wonder if the dot pattern is the same on each one, or they're all calibrated to their own pattern.
Thank you very much! This was a very helpful video!!! ;-)
Good! Easy to understand the theory! Can you tell me how I can use it with MATLAB?
so does the camera recognise each part, ie the sectors in the red grid example, via unique speckle clusters?
wonderful, thank you for the explanation
thank you! that was great!
Good one. Can you tell me what software you are using for the drawings?
SketchUp, Bamboo Tablet, Serif DrawPlus, CamStudio. All the drawing scenes are usually sped up during the editing process.
How does an IR sensor help to calculate depth better compared to a secondary camera? Can you please explain that part again?
Thank you!
Do you know if the Asus Xtion series of depth sensors work the same way? Would they have the same limitation of a single sensor in a room?
Totally guessing here, but I suspect there's some variance in the manufacturing, and that each unit gets calibrated at the factory.
Thank you
Do all units have the same fixed speckle pattern, or is it learned after it's created?
subbed!
But let's say I were to take the cameras out of the Kinect and put a different distance between them. That would affect the angle, right? So it wouldn't be able to recreate the image?
Well, this is basically for a 3D scanner.
So you're saying somehow the lights are randomized (the leds are moved somehow), then that it's recalibrated? No. The pattern is predetermined. It might have been random at some point, but I doubt it.
You can use multiple Kinects; the main problem is that the USB bandwidth is too high for two Kinects on one computer.
Cool
One way to use 2 Kinects is to have one Kinect shaking side to side while the other one is still. The dots of the moving camera will look stationary to that camera while the other camera's dots are blurred and vice versa. V Motion Project did this. They also used one computer for each camera.
It's not random. It appears random but the device has to be aware of the pattern it is casting.
There is no such limitation with either...
I thought it was a time-of-flight lidar.
+Qinggeng Zhuang The new one is ToF; the old one uses triangulation.
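The time-of-flight principle mentioned here (Kinect v2 style) is just round-trip timing: light goes out, bounces back, and distance is half the round-trip time times the speed of light. A minimal sketch of that formula, not the actual device pipeline:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """d = c * t / 2: the pulse covers the distance twice."""
    return C * round_trip_s / 2.0

# A roughly 13.3 ns round trip corresponds to about 2 metres,
# which shows why ToF sensors need picosecond-scale timing precision.
print(round(tof_distance_m(13.342e-9), 3))
```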
***** That doesn't work; just like before, the two Kinects would confuse each other and wouldn't be able to triangulate points.
What if I told you that you don't need a depth sensor or any software... well, I just did. But will I tell you how? That's a billion-dollar answer... but I'll take a couple hundred million. My name is not 4D for no reason.