Microsoft HoloLens [Virtual Reality, Augmented Reality, Holograms]

Surely VR with cameras achieves the same thing, enabling AR? HoloLens then would only have the advantage of portability, but it also doesn't create as solid an experience. You can't try out an idea with HoloLens and see what its presence in a room is like in the corner of your vision, nor see the whole object up close, because the FOV is too tight. You'll also get a disruptive experience because the content keeps appearing and disappearing through the limited window. With VR, the whole room and the virtual contents are present.

HoloLens has the advantage for hand-tracked interaction, but it hardly stands alone as a solution for industrial applications. Well, except maybe that no-one making VR headsets has thought to include cameras!
 
Yea, I think VR with cameras would work, but it's certainly going to work differently. Let's take an example: I want a new chair to go into my existing office setup.
Using HoloLens I can browse a catalog of chairs, position one by my desk, and take a look at what it would look like in different styles and colours.
You could do this in VR, but it requires so much more power. HoloLens would only need to render the chair, and only the chair, while VR would have to render the entire room, the desk, and all the stuff around it to have the same effect. With VR we're trying to emulate reality (something we're only getting somewhat close to today), while AR relies on reality as a foundation and augments the virtual pieces into your world.
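To put rough numbers on the "only render the chair" point, here's a back-of-the-envelope sketch in Python; the resolution, screen coverage and overdraw figures are just assumptions for illustration, not measurements from either device:

```python
# Back-of-the-envelope comparison of shaded pixels per frame.
# AR only shades the augmented object; VR has to shade the whole virtual scene.
# All figures below are assumed for illustration, not measured.

def shaded_pixels(width, height, coverage, overdraw):
    """Pixels shaded per frame for content covering `coverage` of the screen."""
    return int(width * height * coverage * overdraw)

W, H = 1280, 720                           # assumed per-eye render resolution
chair = shaded_pixels(W, H, 0.10, 1.5)     # chair covers ~10% of view, light overdraw
room = shaded_pixels(W, H, 1.00, 2.5)      # full virtual room, heavier overdraw

print(f"AR (chair only): ~{chair:,} shaded pixels/frame")
print(f"VR (whole room): ~{room:,} shaded pixels/frame")
print(f"Ratio: roughly {room / chair:.0f}x more shading work for the full scene")
```

That ignores the spatial-mapping work, which both approaches need; it's only the shading side of the argument.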
 
Why is that? It can use a video background. There's really no difference other than a small trade-off of video quality for versatility.
 
Even streamed, you're talking about an enormous amount of hardware. You're asking the VR camera to detect all objects in 3D space, gather information about light and texture, and then render them based off what the camera is capable of seeing, all in real time? (I think this is the ask?)

Even if you're right that you're just streaming the data to the headset, it still isn't going to render better than simply leaving it as reality.
 
You use a video feed of the room. You superimpose the VR render on top of this video feed. It'll have exactly the same requirements as HoloLens, which only needs enough room understanding to correctly superimpose objects.

That said, I think MS are well ahead of the curve with regard to computer depth vision and AR integration.
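For what it's worth, the superimposition step itself is pretty simple once you have a registered depth feed; here's a minimal NumPy sketch of the idea (the frame shapes, depths and the simple z-test are my assumptions, and the hard part, tracking/registration, is left out entirely):

```python
import numpy as np

def composite(camera_rgb, camera_depth, overlay_rgb, overlay_alpha, overlay_depth):
    """Alpha-blend a rendered overlay onto a pass-through camera frame,
    hiding virtual pixels that sit behind real surfaces (simple z-test)."""
    visible = (overlay_alpha > 0) & (overlay_depth < camera_depth)
    a = np.where(visible, overlay_alpha, 0.0)[..., None]   # HxWx1 for broadcasting
    return (a * overlay_rgb + (1.0 - a) * camera_rgb).astype(camera_rgb.dtype)

# Toy usage with stand-in frames (a real system would feed camera + render output):
H, W = 720, 1280
cam_rgb = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)   # camera frame
cam_depth = np.full((H, W), 3.0)                  # real surfaces ~3 m away
ovl_rgb = np.zeros((H, W, 3), dtype=np.uint8)     # rendered virtual object goes here
ovl_alpha = np.zeros((H, W))
ovl_alpha[300:500, 500:700] = 1.0                 # object occupies a patch of the view
ovl_depth = np.full((H, W), 1.5)                  # virtual object ~1.5 m away

frame = composite(cam_rgb, cam_depth, ovl_rgb, ovl_alpha, ovl_depth)
print(frame.shape, frame.dtype)
```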
 
All of that has to be done whether it's light coming through a transparent visor or off a video display. You're still analysing the scene and projecting an overlay on it.

The big gains for HoloLens are the subtle visual advantages and the significant power savings. What you give back is VR.
 
Oh, just take a complete feed and don't render everything into 3D. Yea, that would work, I think. Aside from the quality difference, I suppose the only other thing you miss out on is portability.

At best it would be nearly the same experience as HoloLens; at worst it would still be a decent substitute. One thing I found is that I didn't get sick using HoloLens. It felt quite natural: you didn't have to try to see the objects, they just appeared, and your eyes were free to dart around as usual. I do get sick with VR though, and I'm not sure whether the video feed would be an improvement over rendering. I assume it would, but you'd need to capture at a high enough frame rate for there to be fewer issues, I imagine.
 
The fact that your eyes adjust to distance, unlike with VR, might make a notable difference in the long run, but we don't have any studies yet comparing the two to see if one is better than the other.
My rule of thumb would be to pick using my eyes as usual plus an overlay rather than having a screen 2 inches from them...
 
But I want a generalised HoloLens. I want to use it as I drive to get distance information, traffic jams and whatnot; I want to use it in a shop to compare prices with nearby shops...
And of course you could make quite fun advertisements if people were wearing them at all times, like the Jaws hologram in Back to the Future Part II...
I see something that could be useful to everyone one day, unlike VR, which will always be niche.
 
Wait, this is actually going to be for sale? I always thought it was a tech demo, not a real product. You sure?

Yes, it is going on sale at the end of this year or start of next. Some corporations are already being seeded with devices.

It is not going on sale to the general consumer, however; that won't happen for another 2-3 years at the earliest, depending on a variety of factors, the most important of which is what the software ecosystem looks like in 2-3 years and whether consumer-friendly applications have been made or are in the pipeline.

MS have stated that they have no intention of pushing this into the market with their vision of how it should work. They are providing it to corporations and developers and letting them determine the path that HoloLens takes.

You use a video feed of the room. You superimpose the VR render on top of this video feed. It'll have exactly the same requirements as HoloLens, which only needs enough room understanding to correctly superimpose objects.

That said, I think MS are well ahead of the curve with regard to computer depth vision and AR integration.

Which means it'd need all of the depth cameras and sensors and computing units. Additionally, it would need to properly map all of that to an incoming video feed in real time or faster, at the same time as it maps the "holographic" constructions on top of that. It's an additional layer of complexity that doesn't exist when you just use real life.

As well, the experience will be compromised compared to HoloLens if the screen used for the incoming video feed isn't of sufficient pixel density, i.e. if you find yourself able to easily discern each individual pixel of the display (you'd want higher-resolution screens than you will get with Morpheus or Oculus Rift).
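To illustrate the pixel-density point with rough figures (the resolutions and FOVs below are ballpark assumptions standing in for a wide-FOV VR panel and a narrow-FOV AR display, not official specs):

```python
# Rough angular resolution (pixels per degree) comparison.
# The panel resolutions and FOVs below are ballpark assumptions, not official specs.

def pixels_per_degree(h_pixels, h_fov_deg):
    return h_pixels / h_fov_deg

displays = {
    "wide-FOV VR panel (per eye)":     (1080, 90),   # ~1080 px across ~90 degrees
    "narrow-FOV AR display (per eye)": (1280, 30),   # ~1280 px across ~30 degrees
}

for name, (px, fov) in displays.items():
    print(f"{name}: ~{pixels_per_degree(px, fov):.0f} px/deg")

# The narrow-FOV device packs ~3-4x more pixels into each degree of view, which is
# why a pass-through screen needs a much higher resolution to look as sharp.
```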

In other words, to provide a similar experience, the device would not only require more hardware but would be significantly heavier, not least because of the significantly larger batteries needed to power the screens. Additionally, you'd likely require more computing power for similar performance, as the device will be tasked with doing more than the current HoloLens.

Basically it needs to have more of everything the HoloLens already has, except replacing the holographic display with an extremely high-PPI display. The drawbacks are weight and more heat generated, with the benefit of a potentially wider FOV (and the safety implications if one were to move around with full-FOV AR overlaid on reality).

Regards,
SB
 
Something else that VR lacks compared to AR: you can easily use AR while using a variety of computing and non-computing devices. The NASA Mars demonstration (using actual NASA-written software) had someone using HoloLens and a desktop PC at the same time, easily switching between the two and even having both in the same scene simultaneously.

You could also potentially use it with an arc welder. Would you trust your safety to a completely enclosed VR device while handling an arc welder with your own hands? A glitch in the video stream could pose a serious health risk. The same goes for using any variety of power tools. Would you feel safer using them with a completely enclosed VR device and an incoming video stream, or with an AR device which allows you to see what is actually happening in real time because you're actually looking at it?

VR is obviously superior for how games are currently made. AR has potential, but will rely on people to actively exploit that potential.

In other words, as someone else has stated and I questioned earlier in this thread, HoloLens doesn't really belong in the gaming forum, at least not currently. While it has been shown with Minecraft, that isn't Microsoft's current near-term target for the device, even if it's something they're interested in leveraging in the future.

Minecraft's main purpose was to increase exposure of the device: to get developers thinking about what they'd like to do with something that isn't just potentially a product but will be a product, and to drive home the message that you will soon be able to program for HoloLens as the API is integrated into Windows 10. The programs to make AR experiences already exist, as industry-standard content creation tools already support (though not publicly) creating content for HoloLens or any other AR device that works with Windows.

That NASA demo (which is planned to be deployed as a tool at NASA before the end of the year) was created with existing software development tools.

Regards,
SB
 
Which means it'd need all of the depth cameras and sensors and computing units. Additionally, it would need to properly map all of that to an incoming video feed in real time or faster, at the same time as it maps the "holographic" constructions on top of that. It's an additional layer of complexity that doesn't exist when you just use real life.
Processing requirements are exactly the same as HoloLens's. HoloLens needs to read the room in real time and generate graphics to overlay. Considering HL does this with a video feed to the processor, there's no reason VR with a (depth) camera feed should be at any disadvantage.

You could also potentially use it with an arc welder. Would you trust your safety to a completely enclosed VR device while handling an arc welder with your own hands?...
Absolutely. There are plenty of use cases where AR is preferable. The current VR comparison is in relation to a specific industrial application visualising virtual products in their real setting. That's something VR can do nicely, quite possibly better than HL because of the wider FOV and more solid representation of the virtual content.
 
Absolutely. There are plenty of use cases where AR is preferable. The current VR comparison is in relation to a specific industrial application visualising virtual products in their real setting. That's something VR can do nicely, quite possibly better than HL because of the wider FOV and more solid representation of the virtual content.

Sure, there are going to be areas where the devices overlap in sensible ways, providing basically similar experiences and usability. There are also areas where they overlap but one is clearly inferior to the other. And then there are areas where they will overlap and one is neither superior nor inferior but offers a different experience.

FOV is a matter of implementation. There is no reason that FOV can't be wider on HoloLens. In fact, the original (clunkier) demonstration unit that the press used back in January had a wider FOV than the current unit. So it's only of relevance when comparing it to this specific incarnation of HoloLens, which may or may not be representative of a consumer version of HoloLens or even a corporation-specific version (versus the currently more general corporate-focused unit). In the same way you can modify a VR headset to mimic some of the capabilities of HoloLens, you could modify HoloLens to mimic some of the capabilities of a VR headset.

Regards,
SB
 
Absolutely. There are plenty of use cases where AR is preferable. The current VR comparison is in relation to a specific industrial application visualising virtual products in their real setting. That's something VR can do nicely, quite possibly better than HL because of the wider FOV and more solid representation of the virtual content.

Hmm, as much as I want to agree with you here, for some reason I can't, and I can't quite put into words why. Addressing the differences in this case: HL has very little to draw, so it doesn't interrupt the natural flow of your eyes. With HL today the AR is always properly focused, because they measure your pupil distance to determine where you focus looking straight down the middle, which also happens to be the maximum size of the screen. With HL, what you focus on, and ultimately what goes out of focus, depends on you and not a camera. With VR + camera you are relying on the camera to capture something your eyes want to focus on, but it may not be focusing on what your eyes are looking at. So there's that aspect: it's unlikely to keep the whole FOV in perfect focus at all times.

Secondly, the FOV isn't a limitation of the AR itself, but likely just a limitation of drawing the AR wherever your eyes wander. If they manage to do eye tracking, perhaps we will see a larger FOV. As we can see, the camera is capable of displaying all the AR in a given room, but HoloLens itself only has a limited view of the AR.

Over time I expect the FOV to widen, as the technology for drawing the AR onto that glass improves.
 
The fact that your eyes adjust to distance, unlike with VR, might make a notable difference in the long run, but we don't have any studies yet comparing the two to see if one is better than the other.
My rule of thumb would be to pick using my eyes as usual plus an overlay rather than having a screen 2 inches from them...

Counter-intuitively, this might actually be an area where AR-enabled VR (pass-through video AR) could offer a more flexible/general experience in the short term. HoloLens as well as the current crop of VR headsets are all fixed focal depth, but with pass-through AR both virtual objects and the environment could at least be focused on the same depth plane (infinity, as with VR). HoloLens has to choose a best-fit focal distance for virtual objects so you don't get large accommodation mismatches when looking at physical and virtual elements that are side by side. Supposedly the near clipping plane on HoloLens is relatively far away (2 feet?), which would help prevent the more glaring instances of mismatch.
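To put numbers on the accommodation mismatch: demand is usually expressed in diopters (1/distance in metres), so the mismatch between a fixed virtual focal plane and a nearby real object is just the difference. A quick sketch, assuming a 2 m focal plane and some example distances:

```python
# Accommodation mismatch between a fixed-focus virtual image and nearby real objects.
# Demand is measured in diopters: D = 1 / distance_in_metres.
# The 2.0 m focal plane and the example distances are assumptions for illustration.

FOCAL_PLANE_M = 2.0   # assumed fixed focal distance of the virtual imagery

def mismatch_diopters(real_distance_m, focal_plane_m=FOCAL_PLANE_M):
    """Difference in accommodation demand between a real object and the
    virtual imagery sitting next to it."""
    return abs(1.0 / real_distance_m - 1.0 / focal_plane_m)

for d in (0.3, 0.6, 1.0, 2.0, 5.0):   # real object at 30 cm ... 5 m
    print(f"real object at {d:4.1f} m -> mismatch ~{mismatch_diopters(d):.2f} D")

# The mismatch blows up for close objects (~2.8 D at 30 cm) but stays small beyond
# a metre or so, which is why a relatively far near-clip plane helps.
```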
 
There is no reason that FOV can't be wider on HoloLens.

https://www.google.com/patents/US20130250430

It would seem as though increasing FOV for holographic waveguides isn't so simple. After reading that it makes a lot of sense why MS is saying they're not planning on making the FOV bigger for the release hardware - they're likely already bumping up against refractive index limits to get the FOV they've got now.
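A crude way to see the refractive-index connection (this only looks at the total-internal-reflection constraint, not the full grating/waveguide design in the patent): guided rays have to hit the glass/air boundary beyond the critical angle, and that angle shrinks as the index rises, so higher-index glass leaves more angular headroom for the image. Rough sketch:

```python
import math

# Only the total-internal-reflection constraint: guided rays must hit the glass/air
# boundary at more than the critical angle, so the usable internal angle range is
# roughly [critical angle, ~grazing]. Higher index -> smaller critical angle ->
# more angular headroom. The grating/diffraction side of the design is ignored here.

GRAZING_LIMIT_DEG = 85.0   # assumed practical upper bound before rays get too shallow

def critical_angle_deg(n):
    return math.degrees(math.asin(1.0 / n))

for n in (1.5, 1.7, 1.9, 2.0):
    theta_c = critical_angle_deg(n)
    print(f"n = {n:.1f}: critical angle ~{theta_c:.1f} deg, "
          f"usable internal range ~{GRAZING_LIMIT_DEG - theta_c:.1f} deg")
```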
 
But you have no sense of self; you don't see yourself or the objects in the room. You cannot use existing physical areas as boundaries for where the AR should begin and end in VR. But in AR, with HoloLens, it's constantly detecting the space you are in.

Isn't this the point of VR? That you are put in a virtual reality space, somewhere else. I've never seen somebody push VR as a solution for existing in the place you already exist in, but slightly different. That's what AR is for. As for not seeing yourself in VR, you might want to check out some of the Morpheus coverage, because I've seen demos that do exactly this. You look down at your hands, which are in front of you, and you can see them as you expect to. And they are holding that virtual gun you just picked up. And there are my legs and feet.
 
Isn't this the point of VR? That you are put in a virtual reality space, somewhere else. I've never seen somebody push VR as a solution for existing in the place you already exist in, but slightly different. That's what AR is for. As for not seeing yourself in VR, you might want to check out some of the Morpheus coverage, because I've seen demos that do exactly this. You look down at your hands, which are in front of you, and you can see them as you expect to. And they are holding that virtual gun you just picked up. And there are my legs and feet.
It is, which is great for VR applications and games, but for industrial use I feel it's not as ideal as having the AR line up with real life, if that makes sense.
 