3D video capture with Microsoft’s Kinect was a concept first conceived by Oliver Kreylos in 2010. Mr Kreylos extracted the raw data streams from the Kinect’s depth and colour sensors and combined them via custom software to produce a 3D reconstruction of the target.
Fast forward to 2014 and Kreylos has taken the concept a step further. By combining the 3D video data from a trio of Kinects, arranged in an equilateral triangle to capture all sides of an object or person, with VR via an Oculus Rift, he has created a real-time 3D-scanned representation of himself that synchronises with his head and body motions to produce a life-like virtar.
As Kreylos explains, this was done with the first-generation Kinect, which has quite low-resolution cameras: a 640×480 colour stream and a 320×240 depth stream. The Kinect 2 is considerably better on paper, with a full-HD colour stream and a 512×424 depth stream.
Note that the latter is only a nominal depth resolution and is a poor indicator of effective resolution. The first-generation Kinect produced roughly one real depth measurement per 20 pixels, whereas the Kinect 2 has a ‘time-of-flight’ depth camera that captures a depth measurement for every single pixel. So while the Kinect 2’s nominal depth-stream resolution is only marginally increased, its effective resolution is likely substantially higher, potentially by as much as a factor of ten or so.
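A quick back-of-the-envelope calculation, a sketch using only the figures quoted above (and treating the 1-in-20 ratio as approximate), illustrates the gap:

```python
# Rough comparison of effective depth resolution, Kinect 1 vs Kinect 2,
# using the nominal stream sizes and the approximate 1-in-20 ratio above.
k1_pixels = 320 * 240             # nominal depth-stream pixels, Kinect 1
k1_effective = k1_pixels / 20     # ~1 real depth measurement per 20 pixels
k2_effective = 512 * 424          # time-of-flight: one measurement per pixel

area_factor = k2_effective / k1_effective
linear_factor = area_factor ** 0.5  # per-axis (linear) improvement

print(f"Kinect 1 effective measurements: {k1_effective:.0f}")
print(f"Kinect 2 effective measurements: {k2_effective}")
print(f"Improvement: ~{area_factor:.0f}x in measurement count, "
      f"~{linear_factor:.1f}x per axis")
```

On these assumptions the Kinect 2 delivers roughly 57 times as many depth measurements, or about 7.5 times the resolution per axis, broadly in line with the “factor of ten or so” estimate above.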
This is all very technical, but the BLUF (bottom line up front) is that applying Mr Kreylos’ concept to the new Kinect will hopefully result in near-perfect real-time 3D body scanning, and hence pixel-perfect virtars and 3D motion capture for VR. Boom!