http://www.roadtovr.com/zero-latency-7-million-investment-expansion-warehouse-scale-vr-tech/
Melbourne-based firm Zero Latency has raised $7 million in venture funding. The company specialises in out-of-home ‘warehouse scale’ local multiplayer VR experiences, and is beginning to open attractions around the world.
EditorVR is designed to make developing both VR and non-VR content much easier by allowing creators to build games without needing to remove their headsets. It uses the Vive wands and Oculus Touch controllers for accurate, intuitive control, letting you access traditional editor panels via 3D windows and also take advantage of new VR-specific tools like faster object placement. All of the features we’ve seen on stage so far are expected to be included in the experimental release, though we’ve reached out to Unity to confirm this.
https://scontent.xx.fbcdn.net/t39.2365-6/15363893_1774761836111478_5342883442994446336_n.pdf
Reading this guide for Oculus's 3-camera roomscale setup I can't help but hear the voice of Vincent Price saying "It's as easy as one, two, three."
I recognize the cause; I was more teasing about how obviously untenable this technology approach is for roomscale tracking. If outside-in IR camera tracking didn't seem like a dead end before, it certainly does now. More seriously though, it makes me wonder just what sort of future the Touch controller can expect to have. Previously I thought Oculus could get away with keeping Touch and carrying it over to a gen-2 Rift, but now I'm thinking they'd be a whole lot better off scrapping all camera-based tracking and moving to a Lighthouse-derived system. If they're already pushing the bandwidth limits of USB controllers, then they have no room to increase the frame rate, the resolution, or, most importantly, the vertical FOV of the cameras. Lighthouse, on the other hand, can be improved in any number of ways and likely remain backward compatible while doing so.
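To put rough numbers on that bandwidth point, here's a quick back-of-the-envelope sketch in Python; the resolution, frame rate, and bit depth are illustrative guesses on my part, not published Rift sensor specs.

    # Back-of-the-envelope bandwidth for uncompressed outside-in camera streams.
    # Resolution, frame rate, and bit depth below are illustrative guesses,
    # not published Rift sensor specifications.
    def camera_bandwidth_mbps(width, height, fps, bits_per_pixel=8):
        """Raw video bandwidth in megabits per second."""
        return width * height * fps * bits_per_pixel / 1e6

    per_camera = camera_bandwidth_mbps(1280, 960, 60)
    print(f"one camera   : ~{per_camera:,.0f} Mbps")
    print(f"three cameras: ~{3 * per_camera:,.0f} Mbps "
          f"(USB 2.0 tops out at 480 Mbps raw; USB 3.0 at ~5,000 Mbps)")

Even if the real sensors compress or downsample, doubling the frame rate or resolution would eat whatever headroom is left very quickly.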
Why? Just increase the sensor FOV; currently it's 120x70. Also, four sensors work well enough in large rooms, so they could simply get the price down while newer hardware handles multiple sensors better. Or you might need fewer cameras if the headset itself has sensors.
Increasing the FOV means sacrificing precision, unless of course you also increase the camera resolution to compensate, but it seems to me we're already well beyond reasonable bandwidth usage. Each camera can't cover the full room volume, and the fact that tracking precision drops off so dramatically with distance means the extra cameras aren't just there to avoid occlusion, but to compensate for the system's own shortcomings (distance tracking and the limited vertical FOV). Having inside-out sensors in the headset would still leave you having to track the controllers through some other means.
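To illustrate the FOV/precision trade-off, here's a small-angle estimate of how many millimetres one pixel spans at a given distance; the 1280-pixel sensor width and the 3 m distance are assumptions for illustration only (sub-pixel blob fitting does better than one raw pixel, but the scaling with FOV is the same).

    import math

    # Millimetres spanned by one pixel at a given distance, for a camera with
    # a given horizontal FOV and pixel width (small-angle approximation).
    # The 1280 px width and the 3 m distance are illustrative assumptions.
    def mm_per_pixel(fov_deg, pixel_width, distance_m):
        return distance_m * 1000 * math.radians(fov_deg) / pixel_width

    for fov in (100, 120, 140):
        print(f"{fov:3d} deg FOV, 1280 px wide: "
              f"~{mm_per_pixel(fov, 1280, 3.0):.1f} mm per pixel at 3 m")

Widening the FOV at a fixed resolution makes every pixel cover more of the room, which is exactly the precision loss I'm talking about.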
Most of the time the controllers should be in view of sensors on the headset; the other, outside sensors would be for the few times when they're not.
I don't think that statement can be made so easily and just assumed to work. Tracking the hands from the HMD itself would be plagued by occlusion from your arms and shoulders to such a degree that you'd have to assume it wouldn't work in many instances, which puts you right back in the same situation where external cameras are required to cover the full volume of the room. On the other hand, you could use something based on Lighthouse, which lets you scale the coverage and precision in any number of ways. My issue is not just that the cameras are poor right now; it's that if I wanted to improve their precision by 2x or 10x over the coming years, there's no obvious way to do that. The idea that an HDD or some other peripheral suddenly accessing the USB controller is potentially enough to make tracking collapse is absurd - it seems to me a fundamental misuse of a technology that's intended to provide a shared bus across many devices simultaneously.
http://www.tomshardware.com/news/mainstream-vr-hmds-intel-microsoft,33217.html
On the lowest end of the spectrum, the new HMDs will offer 1200x1080 resolution per eye. That’s striking, because it’s the same per-eye resolution as the two titans of the industry, the Oculus Rift and the HTC Vive. On the high end, Microsoft expects its hardware partners to offer up to 1440x1440, which would outpace the Rift and Vive.
However, note that a major difference is that these mainstream HMDs will offer just a 60Hz refresh rate at that resolution (versus 90Hz for the Rift and Vive), which could prove problematic for the user experience. At the high end, however, these new HMDs should meet or exceed that refresh rate.
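A quick way to compare these specs is raw per-eye pixel throughput (resolution times refresh rate). The sketch below just uses the figures from the article, with 90 Hz assumed for the high-end case ("meet or exceed" the Rift/Vive refresh rate), and ignores everything else (optics, tracking, reprojection) that also shapes the experience.

    # Per-eye pixel throughput = width x height x refresh rate.
    # Figures come from the article above; 90 Hz for the high-end Windows
    # HMDs is an assumption based on "meet or exceed that refresh rate".
    headsets = [
        ("Rift / Vive (1200x1080 @ 90 Hz)",          1200, 1080, 90),
        ("Low-end Windows HMD (1200x1080 @ 60 Hz)",  1200, 1080, 60),
        ("High-end Windows HMD (1440x1440 @ 90 Hz)", 1440, 1440, 90),
    ]
    for name, w, h, hz in headsets:
        print(f"{name}: ~{w * h * hz / 1e6:.0f} Mpixels/s per eye")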