It's interesting that he mentions "additional software algorithms". The discussion is about processing postures Kinect may not recognize out of the box, and how much work and processing it takes to recognize those new postures. It sounds like it can be quite intensive, but they're using the GPU more than the CPU. So, my question is: is the basic skeletal tracking that Kinect provides free? I'm guessing no, since it doesn't look like there's any significant memory or a good-sized ASIC/FPGA or processor inside the Kinect itself.
Sounds like they built a pretty flexible API that covers a lot of basic generic cases, but that is also extensible. I'm glad it's not a rigid system. Sounds like devs will really be able to tune their results.
Another interesting thing, from hearing that the GPU is used to process the data, is how that might affect the design of the next 360. Do they go DX11 and compute-shader heavy, or with a heterogeneous CPU with vector units?
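For context on why this workload suits a GPU: Microsoft's published approach to Kinect skeletal tracking classifies every depth pixel independently into body parts using decision forests over depth-difference features, which is embarrassingly parallel. Here's a toy sketch of that per-pixel pattern in NumPy (the feature, offset, and threshold are made up for illustration, nothing like the real classifier) just to show why each pixel maps cleanly onto a shader/compute thread:

```python
import numpy as np

# Toy stand-in for per-pixel body-part classification on a depth map.
# The real Kinect pipeline uses randomized decision forests over
# depth-difference features; here a single hand-picked split stands in,
# purely to show that each pixel is classified independently -- the
# property that makes the workload map naturally onto GPU compute.

def depth_feature(depth, offset):
    # Depth-difference feature: compare each pixel's depth against a
    # neighbor at a fixed (dy, dx) offset (wrap-around at the edges,
    # which a real implementation would handle more carefully).
    dy, dx = offset
    shifted = np.roll(depth, shift=(dy, dx), axis=(0, 1))
    return shifted - depth

def classify_pixels(depth, threshold=50.0):
    # One "split node": label 1 where the feature exceeds the threshold.
    # Every pixel's result depends only on a couple of local reads,
    # so all pixels could be evaluated in parallel shader threads.
    f = depth_feature(depth, offset=(0, 4))
    return (f > threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake 320x240 depth frame, values in millimeters.
    depth = rng.uniform(500.0, 4000.0, size=(240, 320)).astype(np.float32)
    labels = classify_pixels(depth)
    print(labels.shape, labels.dtype)
```

On the GPU the same kernel would just run once per pixel with no cross-thread dependencies, which is exactly the shape of work a DX11 compute shader (or SPU/vector-unit code, for that matter) eats up.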