Microsoft HoloLens [Virtual Reality, Augmented Reality, Holograms]

No, maybe I didn't write that out clearly. The AR world is as large as you want it to be. From what I understand, the AR world is modelled within Unity, so anything outside the Unity application won't show up unless it swaps over to a new Unity level/instance. Your viewport is limited, however. If you want to see everything in your viewport without having to look around, you're going to have to step back, or it's going to have to be smaller.
Ah... okay, I still don't get the full picture here. I thought that you see the real world like you do with your regular glasses (or your own eyes), so you could wear HoloLens and play a game on the TV the way we always have, and wearing HoloLens would always be a seamless transition from the real world.

Say you play a strategy game like Age of Empires on the TV, and you see the TV as usual in order to play, but you could sometimes have AR windows to the sides, to check some buildings, unit creation, etc. Is it possible to play anything like that?
 
Yes, except that the windows on the side won't be visible until you turn to look at them. When looking at the TV, you'll mostly only see the TV because the AR window is about the same size. Superimposing AR content over the TV image probably wouldn't work too well because it's a direct light source. If you turn your head to the side though, the viewport of the glasses could overlap a wider area of content and show floating menus. And a look down at your lap/coffee table could show a 3D map. Then you'd look back at the TV and the Hololens stuff would disappear and you'd see just the TV.
 
Thanks Shifty, that's what I wanted to know. If that's so, then it sounds more appealing now; the less interference with real life, the better. Judging from iroboto's words, you can tune the size of the windows, or when they're enabled, to your liking.
 
http://gamasutra.com/view/news/242441/Handson_Looking_at_AR_game_dev_through_Microsofts_HoloLens.php
[Image: hololens_imagecap.jpg]

This is a still from the sizzle video Microsoft rolled when it unveiled HoloLens in January. If this guy was surveying his AR kingdom through the prototype HoloLens I tried, he'd only see the cottage directly in front of him and a bit of the pastoral plateau behind it.

In practice, it feels like you’re peering at your virtually augmented world through a porthole. For example, I could look directly at a real-world coffee table a few feet away with a virtual game board anchored to it and see the board just fine. But if I walked forward to examine a virtual sphere floating above the board, the board on the low table would quickly scroll off the bottom of the HoloLens display area -- which falls quite a bit short of the edge of the visor itself. Out of the corner of my eye I could still see the real table just fine, but the virtual game table had disappeared like a figment of my imagination.

In roughly two hours of use I never quite got used to this effect, and was regularly dismayed to see virtual objects abruptly flickering into view or having their edges abruptly cut off as I moved my head. I’m concerned that similar limitations in the final version could hold back what game designers can achieve with HoloLens.

On the other hand, AR isn’t really the ideal medium for immersive, vision-eclipsing experiences -- that’s what VR is designed for -- so perhaps it’s for the best that HoloLens games effectively disappear when you aren’t focusing directly on them. It makes them more approachable to players who are easily sickened by VR, or who don’t want to wall themselves away from the outside world while they play.
 
OK, say AR worked perfectly, i.e. no judder, practically 100% of your vision covered, it knows what you're focusing on, etc.

I'm curious as to what games could possibly be better with AR than VR. Any ideas?
 

Technically, if HoloLens or any other AR device had a FOV similar to "current-modern-about-to-be-released VR solutions", wouldn't a snap-on blacked-out lens practically turn it into a VR device?

There are game concepts I guess one could do that are more feasible with AR. Play Rainbow Six where your home is the setting of a hostage situation. Or "Home Self-Defense: The Game (sponsored by the NRA)", where, given the same hardware, all the performance is poured into rendering the NPCs and effects versus rendering a whole scene that's a reproduction of your home.

I guess you can do a lot more graphically when the hardware is only rendering portions or certain aspects of a scene.
 
Construction games
Board games
Things for the young ones
Virtual pets
A rollercoaster simulator that fills an entire room
Things like PS3's Eye of Judgment, and lots of Vita AR experiments, would be more interesting with this headset.

There's a lot of potential if everyone in the family has a headset. Everyone can see a shared virtual environment. An RPG board game would be interesting (but again, wouldn't it be better to be in a VR environment instead?). It has to be something where you specifically do NOT want virtual presence, and you want to keep a focus on your living room.

It's like all the other AR products... at around $1000 per headset, they're not a console peripheral, and they're not consoles. They are being sold as expensive business devices. Maybe later it will be more gaming-oriented and priced accordingly.
 
This one has a pretty good representation of the small AR window you see with the current hardware prototype, which the shipping version is likely based on.

https://www.thurrott.com/windows/windows-10/3251/hands-on-with-a-near-final-microsoft-hololens

It's really small.

Smaller than back at the January event...

Holographic objects that were not dead center with the prototype in January were visible. But with the final hardware, you can’t see anything that is not right in the middle of your field of vision.

And it's pretty universal that people find this to be incredibly constraining. IMO, it's most likely a "safety in the workplace" type of thing. But it would be nice if Microsoft would expand the FOV. And like I said previously, if it's a safety thing, have HoloLens detect when the user is moving (walking) and restrict the FOV through software to a smaller one. But if the user is sitting, then allow it to expand to fill all or most of the user's FOV.
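The movement-based restriction suggested above would amount to a very simple policy. Here's a minimal sketch in Python; the threshold and FOV values are invented for illustration and are not anything HoloLens actually exposes:

```python
def allowed_fov_deg(walking_speed_m_s):
    """Hypothetical policy: restrict the AR viewport while the user walks."""
    WALKING_THRESHOLD = 0.5  # m/s; assumed value, not a HoloLens spec
    return 30.0 if walking_speed_m_s > WALKING_THRESHOLD else 60.0

print(allowed_fov_deg(1.2))  # walking -> restricted viewport (30.0)
print(allowed_fov_deg(0.0))  # seated  -> expanded viewport (60.0)
```

The real device would presumably derive walking speed from its inertial sensors; the point is only that such a safety gate could be purely a software decision.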

However, while that was noted as bad, the HRTF sound is described as absolutely amazing...

The biggest improvement, however, was actually the spatial sound experience. If you think about interacting with holograms in a 3D environment—i.e. real life—you tend to think about the visual stuff, but of course sound—and ambience, which is not sound, but can be impacted by sounds—plays a big role in the experience as well. And the sound stuff is amazing. When you turn away from objects making sounds, the sound audibly moves accordingly so that it is coming from behind you if you turn your back to the source of the sound.

Regards,
SB
 
This image, found at NeoGAF, may show quite well what factors into the FOV:

[Image: jVqyYan.png]
 
That area up close would be a far larger FOV than other reports suggest. That's basically equivalent to the lens size of my spectacles!
 
Is there currently any visual cue as to where the visible area/holographic border lies when you're wearing them?
 
Thanks Shifty, that's what I wanted to know. If that's so, then it sounds more appealing now; the less interference with real life, the better. Judging from iroboto's words, you can tune the size of the windows, or when they're enabled, to your liking.
You can alter the AR world, which was what was demo'd to me. I cannot claim that you can alter your hardware viewport, which is the issue at hand. To have a heads-up display around your TV, your whole TV must fit in your viewport with additional space around its sides for the heads-up display. HoloLens cannot project to the side at this moment; the viewport is locked dead center, and the only way to move it is to turn your head entirely. So if you cut it too close, any head twitching will cause logistical issues: you'll constantly be clipping your own heads-up display unless you can keep your head level and never turned.
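A quick back-of-the-envelope calculation shows why fitting the whole TV plus side margins in the viewport is tight. Both numbers below are assumptions for illustration (Microsoft hadn't published a FOV spec; a 50" 16:9 TV is about 1.11 m wide):

```python
import math

def visible_width(distance_m, fov_deg):
    """Width of the AR viewport at a given viewing distance."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

tv_width = 1.11  # assumed 50" 16:9 TV, metres
for d in (2.0, 3.0, 4.0):
    w = visible_width(d, 30)  # assumed ~30 degree horizontal FOV
    print(f"at {d} m the viewport spans {w:.2f} m "
          f"({'fits' if w > tv_width else 'clips'} the TV)")
```

At 2 m the viewport spans only about 1.07 m, narrower than the TV itself, so under these assumptions you'd have to sit noticeably further back before there's any room left over for a HUD around it.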

HoloLens still requires, at this point in time, a level to be built in Unity. You would have to model your room in Unity with all your stuff and run that as a separate application for HoloLens to know where to place things.
 
Is there currently any visual cue as to where the visible area/holographic border lies when you're wearing them?
I didn't see any boundaries, nothing that ever said, hey, you're at the edge of the level. I want you to picture this.

Walk into a room. You are told to look at a computer screen, regular windows in which you can manipulate a program. You are told to drag the mouse cursor to the edge of the left screen with your mouse.

Then you are told to keep going left, effectively dragging the mouse cursor into the air... and guess what, it's floating right there in the air. A perfect transition from the computer monitor cursor directly into AR. It was pretty seamless. Not sure how they pulled it off, but using that floating mouse cursor in the air, I was somehow still able to manipulate the building.
 
Saw on another forum that it was mentioned that it will only render holograms out to a certain radius. So, for example, if you are using them outside, you would only see within 30 feet or whatever the number is. I guess that would make sense; if you tried to render a whole city you would need more processing power. On the other hand, it could be a sensor limitation, as they can only see so far.
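That kind of radius cut-off is ordinary distance culling. A minimal sketch of the idea; the 9 m (~30 ft) figure and the data layout here are assumptions, not HoloLens internals:

```python
import math

RENDER_RADIUS_M = 9.0  # ~30 feet; the actual cutoff, if any, is unknown

def holograms_to_render(user_pos, holograms):
    """Keep only holograms within the render radius of the user."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [h for h in holograms if dist(user_pos, h["pos"]) <= RENDER_RADIUS_M]

scene = [{"name": "menu", "pos": (1, 0, 2)},
         {"name": "billboard", "pos": (40, 0, 5)}]
print([h["name"] for h in holograms_to_render((0, 0, 0), scene)])  # ['menu']
```

Either explanation fits this pattern: a processing budget would set the radius in software, while a sensor limit would set it by how far the depth cameras can map.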

That FOV area is interesting; it looks much larger than what users are stating. So I wonder if the device already has the lens area for more FOV, but it is all currently limited in software as they work out optimizations. I never noticed that lens area in other pictures.
 
http://optinvent.com/HUD-HMD-benchmark
These are the techniques that are currently being used for AR.
Looking at most older products and prototypes, MS reaching 40 degrees is already a great achievement.

All of these methods have limitations in terms of FOV; some of them are related to the angle of incidence to the diffraction grating, or to TIR limits, which means it has nothing to do with the size of the square in front of the eyes.

The holographic technique is quite close to the diffraction grating technique described above with the exception that a holographic element is used to diffract the light [3]. Holograms work by reflecting certain wavelengths of light. In this way, the incident light is reflected at a certain angle with regard to the hologram. Holograms are intrinsically limited when used in a waveguide due to the fact that the reflected light loses intensity with angular variation.

Only limited angles are possible in order not to lose too much light and to keep good image uniformity. Therefore, this technique is intrinsically limited in FOV. This technique is also plagued by color issues known as the “rainbow effect”. Holographic elements reflect only one wavelength of light so for full color, three holograms are necessary; one that reflects Red, Green, and Blue respectively. This not only adds cost but since the three holograms need to be “sandwiched” together, each wavelength of the light is slightly diffracted by the other color hologram adding color “cross-talk” in the image. Therefore, the eye sees some color non-uniformity or color bleeding when viewing the virtual image. Some of this color non-uniformity can be corrected electronically but there are limits to this as the human eye is extremely sensitive to this phenomenon. The holographic technique is used in the Sony and Konica Minolta as shown in figure below.
If we compare this to MS's vague description of how HoloLens works, it sounds similar to the Holographic Waveguide method above. I'm hoping it's a new method (or a variation), and if so, the patents might appear soon and we'll know for sure.
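To put a number on the TIR limit mentioned in the quote: light only stays trapped inside a waveguide when it strikes the walls beyond the critical angle, asin(1/n), which caps the range of angles (and hence the FOV) the guide can carry. A quick check with an assumed glass index of 1.5 (HoloLens's actual material isn't public):

```python
import math

def critical_angle_deg(n):
    """Critical angle for total internal reflection at a glass/air interface."""
    return math.degrees(math.asin(1.0 / n))

n = 1.5  # typical optical glass; assumed, not a HoloLens spec
print(f"critical angle ~{critical_angle_deg(n):.1f} deg")
# Rays must hit the guide walls steeper than this to keep propagating,
# which bounds the span of angles the waveguide can deliver to the eye.
```

A higher-index glass relaxes the limit somewhat, which is one reason waveguide makers chase exotic materials.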
 
I wonder if it's a limitation of the processors or batteries on board and not the optics. The earlier prototypes were pushing larger FOVs, so it's not a limitation of the tech. And Wired, which had exclusive access to the industrial design and earlier prototypes, didn't mention any difference in FOV between the different units.

HoloLens is producing an image for each eye and has what looks to be essentially Kinect v3. It's using a bevy of sensors and a lot more processors than your average tablet (CPU, GPU, HPU(?), Kinect-related processors) in a form factor that doesn't seem to offer any more space than your average netbook. It's not hard to imagine the hardware chewing through the capacity of your typical mobile-device battery.

Reducing the FOV might have been the easiest way to reduce power consumption, in a case where MS wanted to avoid HoloLenses running out of juice during these 2-4 hour demo sessions.

MS may be waiting for improvements in performance per watt in the x86 space, or better-performing batteries, before HoloLens is launched for primetime.
 
I seriously doubt the reduction is related to "safety" or "processing" issues. The hardest aspect of AR is optical.

Simplest explanation: most journalists are saying the new prototype has enough eye relief for prescription glasses, while the previous one didn't, or not as much. Increasing the distance between the optics and the eyes causes a dramatic loss of FOV, no matter what technique is used.
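The geometry behind that claim is simple: for an optic of fixed width w, the angle it subtends falls off as 2*atan(w / 2d) with eye relief d. The 15 mm width below is purely illustrative, not a HoloLens measurement:

```python
import math

def fov_deg(aperture_width_mm, eye_relief_mm):
    """Angular FOV subtended by an optic of fixed width at a given eye relief."""
    return math.degrees(2 * math.atan(aperture_width_mm / (2 * eye_relief_mm)))

# Illustrative numbers only: a 15 mm-wide combiner at increasing eye relief
for relief in (15, 20, 25):
    print(f"{relief} mm eye relief -> {fov_deg(15, relief):.1f} deg")
```

Going from 15 mm to 25 mm of eye relief drops the subtended angle from roughly 53 to 33 degrees, so even a modest allowance for glasses costs a lot of FOV.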
 
I didn't see any boundaries, nothing that ever said, hey, you're at the edge of the level. I want you to picture this.
Thanks. With a persistent UI/vista bigger than the FOV, the boundaries would be evident from the image cut-off, but I just wondered about more discreet visuals. Of course, a boundary at the limits of the FOV is trivial to add in software.

All of these methods have limitations in terms of FOV; some of them are related to the angle of incidence to the diffraction grating, or to TIR limits, which means it has nothing to do with the size of the square in front of the eyes.
I recall in a past life that full-canopy holography was something many defence companies pursued: rather than having a small HUD directly in front of the pilot, the avionics package could project any piece of information anywhere. Because no two canopies are exactly the same shape (even on the same aircraft) and they deform under stress, like high G or even sudden changes in barometric and hydrostatic pressure, each projector was coupled with a laser that measured the topography of the canopy before projection.

Clever stuff and definitely where we're headed.
 
I seriously doubt the reduction is related to "safety" or "processing" issues. The hardest aspect of AR is optical.

Simplest explanation: most journalists are saying the new prototype has enough eye relief for prescription glasses, while the previous one didn't, or not as much. Increasing the distance between the optics and the eyes causes a dramatic loss of FOV, no matter what technique is used.

I am not asserting "safety" or "processing" but rather power consumption. Plus, they pre-measured everyone and matched them up to pre-configured HoloLenses, so it seems odd that they would reduce everyone's FOV to accommodate the fraction of users who had glasses on.
 
I am not asserting "safety" or "processing" but rather power consumption. Plus they pre-measured everyone and matched them up to pre-configured Hololens, so it seems odd that they would reduce everyone's FOV to accommodate a fraction of the users who wanted to wear glasses.
I'm stubborn and I could be wrong, but I'm still doubting all explanations other than optical so far, for three reasons:
1. There's no indication they are cropping what the display can produce between the 1st and 2nd prototype.
2. With a smaller FOV the prototype could be much smaller, and it isn't.
3. There are clear indications that the eye relief was increased.

#1 and #2 could be for power reason, solved at launch.
BUT... #3 would explain the loss of FOV by itself.
http://arstechnica.com/gadgets/2015/05/01/hololens-still-magical-but-with-the-ugly-taint-of-reality/
The Ars Technica journalist wears glasses and said the previous prototype had the HoloLens glass pressed against his prescription glasses, while the new prototype didn't. So it's clear to me they increased the eye relief, and that would absolutely reduce the FOV.

I wonder if there are issues with accessibility laws. Looking at VR products, they would be much smaller if they didn't have to accommodate glasses wearers, yet all of them do; the ones that didn't were changed to do so. Morpheus had plenty of eye relief in its first public prototype, and it skewed the FOV comparisons with the A vs B lenses of DK2. OTOH, I think it was established that Crescent Bay increased eye relief and the FOV was reduced. Now both provide a similar perceived FOV at a similar eye relief distance. And they are very bulky.
 