Microsoft HoloLens [Virtual Reality, Augmented Reality, Holograms]

Any jitter to the holograms, or are they as stable as shown with the demo camera/video rig?

Do they plan to just sell WiFi eyeballs in a few years? Wonder if it will be cheaper than LASIK?
Stable.

The camera has a full FOV that your HoloLens won't have.
 
Interesting. Watching this video, which is a freeform talk about the HoloLens with the head of the HoloLens team.

Anyway, the camera that they use to grab images of what people see with HoloLens is essentially a HoloLens unit with a camera sensor replacing the holographic lenses. Everything else in the HoloLens unit exists on the camera as well. So you actually get to see what HoloLens users see, only at a higher resolution (they use a 4K x 2K camera). They start talking about it after the 24-minute mark.

So, the footage you see of Hololens is what the users are seeing when using Hololens. Not some marketing rendering or anything like that. Only at a higher resolution.

Pretty cool that the holographic experience is shareable. But it does rely on a client/server (cloud) infrastructure. That was also given as part of the reason they aren't targeting the consumer market for at least 2-3 years.

[edit] One of the 7 outward-facing sensors on the sensor unit is just a standard camera that the user can use to record what they are seeing with HoloLens. It's meant to allow the user to share their experiences with others.

Regards,
SB
 
I'm back from my hololens demo.

To sum it up simply without over-hyping it: it's quite different from VR, and it's quite impressive.

I got to do the architecture demo and I can see its uses. The AR only occurs in a small box in front of your eyes; the rest of the glass around it is clear. At first I thought this was bad, but considering how solidly they can project an image over an object, you could easily just "walk" somewhere unsafe. So having your peripheral vision tell you what's safe versus what's being drawn is good, though you lose the immersion.
I would give VR the immersion title still. The AR feels like you are using your headset to find ghosts lol if that makes sense.

It is powered by two servers, very massive servers, so it's a lot of work on the backend. The device couldn't possibly have that much computational power without heating up at all.

Overall it responds well to gestures and voice commands.

Ask away.
Do you think that it will work well with the environment around you? Did you see everything clearly like in real life? Do you believe it would be feasible to play (and see clearly) games on the TV while wearing Hololens to make the experience more compelling?

News: HoloLens will cost significantly more than the Xbox One, according to the New York Times.

http://www.winbeta.org/news/hololens-will-cost-significantly-more-xbox-one-reports-new-york-times

Holographic games will be shown at E3: http://www.nowgamer.com/microsoft-will-show-holographic-games-at-e3/
 
http://www.winbeta.org/news/i-spent-90-minutes-hololens-and-walked-away-amazed

Some interesting information there...

This time around, it seemed to me that the holographic field of view was smaller compared to January, but the visuals appeared to be much improved. Both colors and translucency were much more vibrant from what I remember, but with the smaller field of view (probably a hardware rather than a software issue), it didn't seem as immersive an experience.

So, FOV is either smaller or unchanged, but color vibrancy and translucency were better. So my guess about the "shield" blocking out more light may be correct. Or the holographic lens/projector has been improved.

As Brandon, one of the HoloLens team members leading the demo, said, "all HoloLens apps are Universal apps, and all Universal apps can be made to be HoloLens apps". This is not a vaporware idea running unproven technology.

Building a holographic application for Hololens is purportedly as easy as building a universal app for Windows 10. This should help immensely with adoption.

In fact, if you've ever worn a baseball cap, that's about what wearing a HoloLens felt like. Well balanced, light, and comfortable.

That's very promising from a wearability/usability standpoint.

It also has HRTF for audio, so you can hear any holographic entities that may be behind you and accurately know their relative position, for instance.
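One of the main cues an HRTF encodes is the interaural time difference: sound from the side reaches the nearer ear slightly earlier. A minimal sketch of that idea using Woodworth's classic spherical-head approximation (the head radius and numbers are generic textbook values, not HoloLens specifics):

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference (ITD) for a distant source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    # Extra path length to the far ear: r * (theta + sin(theta))
    return head_radius_m * (theta + math.sin(theta)) / speed_of_sound

# A source dead ahead arrives at both ears at once; one at the side
# leads by roughly 0.65 ms, which the brain reads as direction.
print(interaural_time_difference(0))   # 0.0
print(interaural_time_difference(90))  # ~0.00066 s
```

A full HRTF also models level differences and the filtering of the outer ear, which is what lets you place sounds behind or above you rather than just left/right.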

When we changed the code to see the 3D grid that mapped the space HoloLens played in, that underlying grid itself was slow to refresh and not very finely defined. Cranking down the complexity of the grid would help to make things more responsive, and for this demo it's understandable that it wasn't fully up to speed. Complexity of the grid is a computing power problem, one that game developers and gamers have been tweaking since the first graphics engines ran the first video games, and I expect that the HoloLens that ships ("sometime in the Windows 10 timeframe", by the way, and yes we asked), will be well cranked up from the units we wore. Still, you can only wrap so much computing power around your head, and just like everything else there are limits and tradeoffs.

Appears to indicate that it maps out the room in real-time whenever you ask it to display a 3D grid of mapped real life surfaces.
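The grid-complexity tradeoff the reviewer describes can be sketched with a toy voxel downsampler; the function name and numbers here are purely illustrative, not the actual spatial-mapping pipeline:

```python
import random

def downsample_to_voxels(points, voxel_size):
    """Collapse a scanned point cloud onto a voxel grid: every point that
    falls into the same voxel is merged into one representative point, so
    a coarser voxel_size yields a cheaper, less finely defined surface."""
    voxels = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels.setdefault(key, []).append((x, y, z))
    # Represent each occupied voxel by the centroid of its points.
    return [tuple(sum(c) / len(c) for c in zip(*pts)) for pts in voxels.values()]

# 10,000 scanned points on a 4 m x 3 m wall collapse to at most
# 40 x 30 = 1200 points at 10 cm resolution.
wall = [(random.uniform(0, 4), random.uniform(0, 3), 0.0) for _ in range(10000)]
coarse = downsample_to_voxels(wall, 0.10)
print(len(coarse) <= 1200)  # True
```

Cranking the voxel size up is exactly the "crank down the complexity of the grid" lever: fewer elements to update per refresh, at the cost of a coarser model of the room.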

Regards,
SB
 
Yes the device is actively scanning. It is the only method of input.

By actively scanning I mean if you were to raise your hand, would it be producing a new depth map of your hand for every rendered frame such that it could accurately clip the AR scene that's obstructed by it. All of the stage demos they've shown have only shown AR objects mapped onto forward facing static surfaces (I would lump your ball/couch example together with those), or the moving robot with tracking LEDs, and have shown no examples of AR objects being occluded by physical objects. Obviously the orientation/position of the HMD is updated frame by frame in real time but I'm still very doubtful of the speed and precision of the mapping, which is the primary hurdle that AR faces compared to VR. This might sound like nitpicking, but the need for robustness and completeness is pretty crucial when you move from carefully chosen demo rooms to real world offices or homes, and considering the great pains that microsoft took to orchestrate the camera work and presenter positions in the stage demos, I'm guessing they feel it's pretty important to hide this if it's indeed a limitation. However cool it might sound to have virtual objects or screens that can be seen irrespective of the foreground physical objects, it's incredibly destructive to the perception of the virtual and physical objects existing in the same space, and pretty annoying when your eyes are battling back and forth to try to converge on a particular depth (akin to trying to watch a TV screen that has glossy reflections.)


edit: Someone's more comprehensive overview of the tech demo:
http://doc-ok.org/?p=1223

So no occlusion of AR content from physical environment, either static or dynamic. Additive only (no opaque objects or objects that are darker than the physical background.) Fixed/single focus plane.
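That "additive only" limitation is easy to illustrate: a see-through display can only add light on top of what passes through the visor, and per-pixel occlusion would require a depth test against the sensed environment. A minimal sketch (function names are mine, purely for illustration):

```python
def composite_additive(background, hologram):
    """An additive see-through display can only add light on top of what
    comes through the visor, so a black hologram pixel is invisible and
    nothing can be rendered darker than the real background behind it."""
    return tuple(min(b + h, 1.0) for b, h in zip(background, hologram))

def hologram_visible(hologram_depth, sensed_depth):
    """The per-pixel occlusion test the demos never showed: draw the
    hologram only where it is nearer than the sensed physical surface."""
    return hologram_depth < sensed_depth

bg = (0.8, 0.2, 0.2)       # bright real-world pixel (linear RGB)
black = (0.0, 0.0, 0.0)    # "opaque black" hologram pixel
print(composite_additive(bg, black))  # (0.8, 0.2, 0.2): background unchanged
print(hologram_visible(1.5, 1.0))     # False: physical object is closer
```

This is also why translucency improves when more ambient light is blocked: the hologram term stays the same while the background term shrinks.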
 
Paul has his impressions up . https://www.thurrott.com/windows/windows-10/3251/hands-on-with-a-near-final-microsoft-hololens
His opinions are interesting because he used the original one shown off and now this one.

What’s interesting to me about the spatial sound capabilities of HoloLens is that no one could have predicted that this would be the device’s most unassailable win. It reminds me of the voice control capabilities in Kinect, where you assume that the big deal is hand gestures but then realize over time that voice works much better. Gestures are the point of Kinect, yes. But voice works better.

But it also served to highlight a problem in the shipping version of the hardware: The field of vision is far too small. That is, as you look forward out through the HoloLens headset, you can of course see peripherally, but the area in which you can see holograms is a small rectangle in the middle of your vision. It’s like looking through a small portal, or a submarine periscope.

So this is a problem, and I’m guessing that Microsoft will get enough feedback about this issue that they will address it. Oddly—and a few I discussed this with agreed—the biggest improvement would be to expand the field of view vertically (up/down), not horizontally (left/right).

He also says that he had to move his head around a lot and he thinks head tracking would fix it. It's interesting that he wants more vertical FOV.

The tech is extremely interesting and I really want it, but I will wait for one with a larger FOV and eye tracking. Hopefully they delay it until they can launch it with those improvements.
 
I'm back from my hololens demo.

To sum it up simply without over-hyping it: it's quite different from VR, and it's quite impressive.

I got to do the architecture demo and I can see its uses. The AR only occurs in a small box in front of your eyes; the rest of the glass around it is clear. At first I thought this was bad, but considering how solidly they can project an image over an object, you could easily just "walk" somewhere unsafe. So having your peripheral vision tell you what's safe versus what's being drawn is good, though you lose the immersion.
I would give VR the immersion title still. The AR feels like you are using your headset to find ghosts lol if that makes sense.

It is powered by two servers, very massive servers, so it's a lot of work on the backend. The device couldn't possibly have that much computational power without heating up at all.

Overall it responds well to gestures and voice commands.

Ask away.

Where did you get your information about the two servers? A friend of mine at Build told me it was running just off the internal hardware, and during his session they even turned off its WiFi for him to use it.
 
Where did you get your information about the two servers? A friend of mine at Build told me it was running just off the internal hardware, and during his session they even turned off its WiFi for him to use it.

It doesn't require a server to run, but many of the applications that were demo'd were being served from a server. Thus multiple people in the same area could see either the same things or what other people were working on.

That makes sense as some business/professional oriented applications are going to require data-sets far greater than can be hosted on the device itself. It also makes it easier to share data and work on the same project at the same time with multiple people on the same or even different teams.

Regards,
SB
 
OK, this is freaking cool as all F...


Go to the 2:47:40-ish mark. (There are a LOT more HoloLens demonstrations before this point, too.)

This also shows HoloLens scanning the room. This particular segment is about using HoloLens to control a real-life robot in the real world. It also shows it can see changes in real time: when one of the presenters stepped into the programmed path of the robot, HoloLens saw it and recognized that the path was no longer valid.

That video isn't all HoloLens, though, but there's quite a bit of it in there. The nice thing is that for a lot of the segments they brought in real developers to relay their experience with each of the things demonstrated (Surface, Win10, Azure, HoloLens, etc.). There was one for HoloLens from an architecture firm, for example. And they brought on the developer of the sheet-music-writing program to show off Pen input on Surface as well. I also didn't know that Azure is used by Steam (Valve) to an extent.

Regards,
SB
 
Anyway, the camera that they use to grab images of what people see with HoloLens is essentially a HoloLens unit with a camera sensor replacing the holographic lenses. Everything else in the HoloLens unit exists on the camera as well. So you actually get to see what HoloLens users see.
I'm confused. The examples seem to have the hologram filling the FOV, but reports are that the view is a small window, maybe even reduced since January. What exactly is the window size of the projection?
 
By actively scanning I mean if you were to raise your hand, would it be producing a new depth map of your hand for every rendered frame such that it could accurately clip the AR scene that's obstructed by it. All of the stage demos they've shown have only shown AR objects mapped onto forward facing static surfaces (I would lump your ball/couch example together with those), or the moving robot with tracking LEDs, and have shown no examples of AR objects being occluded by physical objects. Obviously the orientation/position of the HMD is updated frame by frame in real time but I'm still very doubtful of the speed and precision of the mapping, which is the primary hurdle that AR faces compared to VR. This might sound like nitpicking, but the need for robustness and completeness is pretty crucial when you move from carefully chosen demo rooms to real world offices or homes, and considering the great pains that microsoft took to orchestrate the camera work and presenter positions in the stage demos, I'm guessing they feel it's pretty important to hide this if it's indeed a limitation. However cool it might sound to have virtual objects or screens that can be seen irrespective of the foreground physical objects, it's incredibly destructive to the perception of the virtual and physical objects existing in the same space, and pretty annoying when your eyes are battling back and forth to try to converge on a particular depth (akin to trying to watch a TV screen that has glossy reflections.)


edit: Someone's more comprehensive overview of the tech demo:
http://doc-ok.org/?p=1223

So no occlusion of AR content from physical environment, either static or dynamic. Additive only (no opaque objects or objects that are darker than the physical background.) Fixed/single focus plane.
Correct. When I went for an air tap, or when the presenter did other gestures, I'm pretty sure it did not occlude. It basically draws whatever is in the "project".
 
Ah okay. My friend told me they programmed a game and had it run on the HoloLens.
In my demo there was an operator controlling a desktop; there were two of them. And when I discussed it with everyone else in the one-on-one demos, they also had two full ATX cases in there. Since the operator had control over what I saw and did, my assumption was that that machine must have been serving me material.
 
Yah, seeing a bunch of dummies standing in a circle, each holding up one finger and "clicking" the air is kind of amusing. An even weirder future than having people walking around staring at their smartphones, oblivious to the world around them.
 
Do you think that it will work well with the environment around you? Did you see everything clearly like in real life? Do you believe it would be feasible to play (and see clearly) games on the TV while wearing Hololens to make the experience more compelling?

News: HoloLens will cost significantly more than the Xbox One, according to the New York Times.

http://www.winbeta.org/news/hololens-will-cost-significantly-more-xbox-one-reports-new-york-times

Holographic games will be shown at E3: http://www.nowgamer.com/microsoft-will-show-holographic-games-at-e3/
As of its current setup, no. The system just requires way too much hand-holding in its current iteration. Sure, you could play games, but it's difficult to play a game where the 3D can easily extend outside your field of view (if you are trying to play a game that uses the walls of your house as a level, for instance).
 
I'm confused. The examples seem to have the hologram filling the FOV, but reports are that the view is a small window, maybe even reduced since January. What exactly is the window size of the projection?

I actually believe the size of the FOV may vary to a degree. They measure your pupil distance before you put on your headset, and the headset is configured for it. To give you an idea of roughly the size of the AR, take your current eyesight: the bottom quarter has no screen; the rest is covered by screen.

I would say that the AR is about the distance between your two pupils if you were looking straight ahead, or possibly just beyond it, and the top and bottom fill out in a 16:9/16:10 sort of ratio. It leaves a lot of your peripheral vision open, which may not be a bad thing if you are at a construction site. Probably a bad thing for games, though.
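For a rough sense of what a window like that means in angular terms, the FOV subtended by a virtual rectangle follows from basic geometry. The widths and distances below are illustrative guesses, not measured values:

```python
import math

def angular_fov_deg(apparent_width_m, viewing_distance_m):
    """Horizontal angular field of view subtended by a virtual window of
    the given apparent width at the given distance from the eye."""
    return math.degrees(2 * math.atan(apparent_width_m / (2 * viewing_distance_m)))

# If the holographic window looked roughly as wide as a typical
# interpupillary distance (~6.4 cm) held about 10 cm from the eyes:
print(round(angular_fov_deg(0.064, 0.10), 1))  # 35.5 degrees
```

Whatever the true numbers are, the point stands: a window a few tens of degrees wide leaves most of your peripheral vision uncovered.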
 
[Attached image: a person wearing HoloLens, overlaid with cyan, magenta, and blue rectangles of different sizes]

If this image were a person wearing HoloLens looking at someone else using HoloLens*, do any of the rectangles describe the FOV? Like the cyan rectangle, or the magenta rectangle but only as low as the blue rectangle's base?

And regarding opacity, you've said it can be solid and transparent. Was that set within the rendered image, or basically a product of the brightness of what's behind it? Obviously the transparency can be set in the renderer, but are the glasses capable of ensuring a pretty solid image even with bright scenery, or did it end up looking more like holographic ghostliness when the environment/background was too bright?

*Which is perhaps not accurate to the human FOV but maybe you can find and edit a more representative image?
 
[Attached image: a person wearing HoloLens, overlaid with cyan, magenta, and blue rectangles of different sizes]

If this image were a person wearing HoloLens looking at someone else using HoloLens*, do any of the rectangles describe the FOV? Like the cyan rectangle, or the magenta rectangle but only as low as the blue rectangle's base?

And regarding opacity, you've said it can be solid and transparent. Was that set within the rendered image, or basically a product of the brightness of what's behind it? Obviously the transparency can be set in the renderer, but are the glasses capable of ensuring a pretty solid image even with bright scenery, or did it end up looking more like holographic ghostliness when the environment/background was too bright?

*Which is perhaps not accurate to the human FOV but maybe you can find and edit a more representative image?
I would say that the smallest rectangle captured the AR FOV better than the rest. But you're right, it's hard to capture the FOV of a person, because it will vary a bit from person to person depending on pupil distance.

The opacity test was likely a bit unfair. The room lighting was not super bright; truthfully it was a little dimmed, slightly brighter than a lounge setting. Similar to hotel lighting... well, we were in a hotel, so that might explain it. There were no windows IIRC; they covered them all with a false brick background. But overall, in that particular setting, I found it 'solid' enough. If you look carefully you can see through it; it's not like VR in that regard. You'll always be able to see anomalies, but it's solid enough not to be a distraction by any means. They controlled for the most part where I stood as well, so generally speaking the device was working in its optimized demo state. Nothing could have gone wrong with the amount of control they had over the demo.

The transparency is hard to describe, to be honest; it may have been just a result of seeing the device in lighter settings and darker settings. I had such limited time with the device that I didn't get into the SDK demo where they built a level using the Unity engine.
 
I imagine that, under the usage MS demoed, FOV is not as important as it is to something like VR.

Ultimately it's an AR device that replaces/augments a general PC, not a gaming device. My desire for a bigger display isn't driven by wanting my periphery filled with a blurred portion of a display. From a gaming standpoint, having the visual imagery fill as much of my vision as possible makes sense, especially for VR. But for more general PC uses, I don't really see the point of MS building a more expensive headset to accommodate that type of functionality.

After all, it's just Cherry Trail, and how far you push visuals to the fringes of a person's FOV will come at the expense of the overall image.
 