Other odds and ends:
1) Eye-to-lens relief on both the consumer Rift and the Vive is much improved over DK1/DK2, but I'd still recommend contacts, or roughing it without glasses if your eyesight isn't too bad.
With DK1/DK2 you were encouraged to get your eyeballs as close as possible to the lenses to maximize FOV (to the degree that I trimmed my eyelashes so they wouldn't brush the lenses), which meant wearing glasses was out of the question, and you were constantly cleaning smudges off the lenses from skin contact. With the Rift and Vive I've yet to clean the lenses at all after almost a month (probably one of the most important ergonomic and usability improvements over the devkits), and there appears to be enough relief to fit eyeglasses in both.
But as has been mentioned elsewhere, the Rift's facial gasket isn't wide enough to accommodate the arms of your average pair of glasses (especially the trendy thick black plastic kind). The Vive does have the width to handle glasses comfortably, but I still don't feel 100% comfortable wearing glasses inside it: you risk your glasses coming into contact with the lenses and potentially scratching them, and you have to sacrifice some FOV by adjusting the eye-lens relief further from your face. Without glasses I'm able to wear the Vive at the minimum eye-lens relief setting with no risk of smudges from my eyelids or forehead. The Rift has no adjustment in this regard.
2) The Rift's lenses seem to me to have greater clarity across their full area than the Vive's, but the Vive is still much better than DK1/DK2. The Fresnel ridges on the Vive seem less offensive, likely because its artifacts look much more structured, while the Rift's look like the blur you'd normally associate with smudged lenses. It's also worth noting that most of the Rift loading screens use white text/logos on black backgrounds, which is pretty much the worst case for showcasing the scattering and probably makes public opinion on this worse than it deserves to be.
3) Last week I tried the vrgirlz/veiviev 3D photo-sourced models on the Rift. It's surprising how difficult it is to establish scale and eye-to-floor height simply by eyeballing reference points and adjusting the virtual camera to the appropriate standing height. Even juggling the HMD on and off to compare the virtual floor against the real one is weirdly ambiguous; I couldn't say whether they matched to within even a foot. This may not seem like a big deal, but your proprioception does all sorts of weird shit to try to make sense of what it's seeing and feeling. If you sit down in a chair in VR with your feet touching the ground, facing a human-sized model at eye level (as would be the case with any traditional FPS in seated VR), your brain fights to reconcile the floor distance you feel on the soles of your feet with the virtual floor that should be below them. What this ends up doing is giving you very fuzzy signals about the scales of objects and the distances you see.
With the Vive, however, part of room setup involves calibrating where the floor is. You set your controllers on the floor, it establishes the height of the plane, and that's that. The Vive controllers and base stations are also modeled 1:1 in the environment, so you've got a few different supporting points of reference that establish and maintain scale and distance. You can take your controller and tap it against the floor and see+feel it hitting the floor. You can touch the two controllers together and see+hear+feel the plastic contacting at the same moment (you have to do this very gently, though, because you're at the mercy of the IMUs and prediction - sharp acceleration/deceleration from tapping/banging causes tracking to overshoot by a few millimeters, and you can see the controllers clip through each other before they get corrected). It sounds pretty mundane, but this might be the most impressive thing you can do in VR right now - I struggle to think of anything comparable where a collection of vertex data and texture maps manages to trigger "plastic" in a real way in my brain.
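For the curious, the floor-calibration step boils down to something pretty simple. Here's a toy sketch of the idea using the OpenVR C++ API - to be clear, this is not Valve's actual room setup routine, and the rest-height constant is a made-up number standing in for the real controller geometry:

```cpp
// Toy floor calibration in the spirit of SteamVR room setup: with both
// controllers resting on the physical floor, sample their tracked poses
// and treat their average height (minus a rest offset) as y = 0.
#include <openvr.h>

// Hypothetical constant: distance from a controller's tracking origin
// to the surface it rests on. The real value depends on the device.
static const float kControllerRestHeightM = 0.03f;

float EstimateFloorY(vr::IVRSystem* sys) {
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    sys->GetDeviceToAbsoluteTrackingPose(
        vr::TrackingUniverseRawAndUncalibrated, 0.0f,
        poses, vr::k_unMaxTrackedDeviceCount);

    float sumY = 0.0f;
    int n = 0;
    for (uint32_t i = 0; i < vr::k_unMaxTrackedDeviceCount; ++i) {
        if (!poses[i].bPoseIsValid) continue;
        if (sys->GetTrackedDeviceClass(i) != vr::TrackedDeviceClass_Controller)
            continue;
        // Translation lives in the last column of the 3x4 row-major
        // pose matrix; row 1 is the y (up) component.
        sumY += poses[i].mDeviceToAbsoluteTracking.m[1][3];
        ++n;
    }
    if (n == 0) return 0.0f;  // no controllers tracked; leave floor as-is
    return sumY / n - kControllerRestHeightM;
}
```

In practice you'd average samples over a second or two and reject outliers, but the point stands: two rigidly tracked objects sitting on the real floor pin down the virtual one almost for free.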
4) For me the jury is still out on just what sort of future room-scale VR has in terms of game design. While it's nice to have the flexibility of knowing you have 3-4 full strides of movement in any direction, in content that involves locomotion through larger environments (whether via teleportation mechanics or other trickery) you quickly become accustomed to using those mechanics to move rather than moving your legs. While that bodes well for folks with less space available, the lack of uninterrupted locomotion through an environment does reduce your sense of being in a real space. Even playing something like Quake or Minecraft on a monitor gives you a distinct sense of moving through a space, but a lot of that gets lost in VR using teleportation ("Is the room I'm standing in now 10ft from where I started, or 100ft?"). Definitely an area that needs more research.
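To make the teleportation point concrete, here's roughly what the common mechanic does - an engine-agnostic C++ sketch with illustrative names, not anything from a particular SDK. Notice that the move is a single instantaneous reassignment of the playspace origin; there's no path to integrate over, which is presumably part of why your brain never accumulates a distance estimate:

```cpp
// Minimal teleport-locomotion sketch: intersect the controller's
// pointing ray with the floor plane y = 0, then shift the playspace
// origin so the user's physical position maps onto the hit point.
struct Vec3 { float x, y, z; };

// Returns true and writes the floor hit point if the ray aims downward.
bool RaycastFloor(Vec3 origin, Vec3 dir, Vec3* hit) {
    if (dir.y >= 0.0f) return false;      // aiming at or above the horizon
    float t = -origin.y / dir.y;          // solve origin.y + t*dir.y = 0
    *hit = { origin.x + t * dir.x, 0.0f, origin.z + t * dir.z };
    return true;
}

// Move the tracked-space origin while keeping the user's offset within
// the physical room intact, so their head lands over the target.
void TeleportTo(Vec3* playOrigin, Vec3 headInRoom, Vec3 target) {
    playOrigin->x = target.x - headInRoom.x;
    playOrigin->z = target.z - headInRoom.z;
}
```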
I managed to break a sweat last night shooting arrows at stick men (one of The Lab demos). Feeling the haptics while drawing the arrow back is quite pleasing, and learning the trajectory and draw distance of the bow is very satisfying. An unexpected aspect is how my posture seems to affect how I feel playing - there's something cheesily heroic about striking a pose with an arched back and a strong, properly full-drawn bow stance. While the game itself is probably too short and cartoony to get any real mileage out of that, I could imagine a sword-and-sorcery, medieval-combat, Zelda/ElderScrolls type game being quite powerful.
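The draw haptics are worth a sketch too. I have no idea how Valve actually drives them, but a crude version using OpenVR's TriggerHapticPulse call might look like the following - draw01 is an assumed 0-to-1 normalized draw distance supplied by the game, not anything from The Lab itself:

```cpp
// Sketch of draw-weight haptics on a Vive wand via OpenVR. Pulse
// duration is capped near 4ms, and the device ignores pulses sent more
// often than every ~5ms, so one call per frame at 90Hz is safe.
#include <openvr.h>
#include <algorithm>

void PulseBowHand(vr::IVRSystem* sys, vr::TrackedDeviceIndex_t hand,
                  float draw01) {
    // Map normalized draw distance to pulse length: 0..3500 microseconds.
    float clamped = std::min(std::max(draw01, 0.0f), 1.0f);
    unsigned short usec = static_cast<unsigned short>(clamped * 3500.0f);
    if (usec == 0) return;  // string at rest, no buzz
    sys->TriggerHapticPulse(hand, 0 /* axis id */, usec);
}
```

Called every frame while the string is drawn, the pulses get longer (and so feel stronger) as you pull, which mimics mechanical draw weight surprisingly well.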
Thought this was a cute story:
www.youtube.com/watch?v=wdgYOh5oofY&feature=youtu.be&t=1h35m50s