Microsoft HoloLens [Virtual Reality, Augmented Reality, Holograms]

So from the looks of this, there's a good chance that the unit used on stage during the unveiling was a working prototype.

Anyway, there are some quite obvious changes to the design (functional rather than ergonomic). The actual HoloLens lens itself appears to be wider. If so, it should give a wider FoV than the demo units from a few months back, which would at least partially address one of the biggest drawbacks noted about the experience. The shaded "shield" appears to be darker as well, perhaps to increase contrast and reduce the effect of bright objects.

Interesting to look at the construction. There are at least 7 forward-facing sensors held in a unit that appears to sit 1-1.5 inches away from the user's forehead. Not much detail is shown on the HoloLens (the actual lens) units, so I'm not sure how many sensors that part may have. It'll need at least 2 just to track the user's eyes, and may have more.

All the computing hardware appears to be contained in the upper unit that sits horizontal to the ground, so any heat generated will be above and in front of the user's forehead. That should make it unlikely for the user to feel much, if any, heat when wearing this. I'm assuming it's almost all batteries in the shell surrounding the headband, as that would be a good way to balance the unit without adding extraneous weight.
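Back-of-envelope, the balancing idea is just a moment balance around where the headband rests on the crown. A toy calculation; every mass and distance here is a pure guess on my part:

```python
# Toy moment balance: every mass and distance below is a pure guess,
# just to illustrate why rear batteries could offset the front optics.
front_mass_g = 180.0   # assumed mass of the optics/sensor cluster
front_arm_cm = 9.0     # assumed lever arm from the crown pivot to the optics
rear_arm_cm  = 7.0     # assumed lever arm from the pivot to the battery shell

# Zero net torque on the head: m_front * d_front = m_rear * d_rear
rear_mass_g = front_mass_g * front_arm_cm / rear_arm_cm
print(f"Rear batteries of ~{rear_mass_g:.0f} g would balance the front cluster")
```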

One thing I'll say is that the word "cheap" does not come to mind when I see that cutaway view of the internals. I still think it's going to be in the $600+ range. Maybe I'm wrong.

Far too cheap, especially because of...

Talking to the devs here. The unit is not meant to be commercial for 2-3 years; they're looking at business/enterprise markets so far. It's likely a costly machine.

First units are likely to cost upwards of 2,000 USD, IMO. But if Microsoft aren't looking to make a large margin on the device (and perhaps just break even or take a loss), it could land in the 1,000-1,500 USD range. That said, I do not think this will end up costing less than a mid- to high-end Surface Pro.

Regards,
SB
 
Talking to the devs here. The unit is not meant to be commercial for 2-3 years; they're looking at business/enterprise markets so far. It's likely a costly machine.

I can confirm this, to whatever degree a private tweet from an MS person is worth. ;) Maybe something will be out for public testing in a year. No response on the costs. So I am really bummed...
 
I still have hope for a stripped-down console version, one that's dependent on the internals of the XB1 and Kinect to lessen costs. The optics look like the cheapest component of the whole design.
 
I still have hope for a stripped-down console version, one that's dependent on the internals of the XB1 and Kinect to lessen costs. The optics look like the cheapest component of the whole design.

There's a chance it may come out at a similar point in the XBO's lifetime as Kinect did in the X360's; 2-3 years from now would be roughly the same timeframe as when Kinect hit the X360. By then, there may be at least some economies of scale for the unit.

Regards,
SB
 
No real info here, but it is video of people wearing and using HoloLens, so we know the one they had on stage actually works.

 
Great to see this tech finally coming to fruition, but it's very early days. Even if a product is available in a few years, we're looking at commercial viability only. It's an exciting step, though, with Google Glass and HoloLens both becoming real. It truly is science fiction from the movies becoming reality.
 
I'm back from my HoloLens demo.

To sum it up simply without overhyping it: it's quite different from VR, and it's quite impressive.

I got to do the architecture demo and I can see its uses. The AR only occurs in a small box in front of your eyes; the rest of the glass around it is clear. At first I thought this was bad, but then I considered that, given how convincingly they can project an image over an object, you could easily walk somewhere unsafe. So having your peripheral vision tell you what's real and what's being drawn is a good thing, though you lose the immersion.
I would still give VR the immersion title. The AR feels like you're using your headset to find ghosts, lol, if that makes sense.

It is powered by two servers, very massive servers, so a lot of the work is on the backend. The device itself can't possibly pack that much computational power, given that it doesn't heat up at all.

Overall, it responds well to gestures and voice commands.

Ask away.
 
Is there any indication that the device is actively scanning/mapping the physical environment in real time such that the AR overlay is able to compensate for new or moving physical objects (people, hands, etc)? Were the hands-on demo environments as carefully designed as the stage demos to hide instances where holograms and physical objects might overlap and produce depth ordering mismatches? Do the screens actually perform occlusion such that the AR overlay is able to draw opaque objects as seen in the stage demos?
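To be concrete about what I mean by a depth ordering mismatch: to occlude properly, the renderer has to compare the hologram's depth against the scanned environment's depth at each pixel and cull the fragments that sit behind a real surface. A toy sketch of that per-pixel test; all the names and numbers are invented:

```python
import numpy as np

def composite(hologram_rgb, hologram_depth, scanned_depth):
    """Per-pixel occlusion test against a scanned environment.

    hologram_depth / scanned_depth: metres from the eye, same resolution.
    Where the real world is closer than the hologram, the hologram
    fragment is culled; a stale or missing scan at such a pixel is
    exactly what would produce a depth ordering mismatch.
    """
    visible = hologram_depth < scanned_depth          # True where hologram wins
    out = np.zeros_like(hologram_rgb)
    out[visible] = hologram_rgb[visible]
    return out

# Tiny example: a hologram 2 m away, with a real wall at 1.5 m covering
# the right half of the view -- the right half gets occluded.
rgb    = np.ones((2, 4, 3))
holo_z = np.full((2, 4), 2.0)
real_z = np.array([[9.9, 9.9, 1.5, 1.5],
                   [9.9, 9.9, 1.5, 1.5]])
print(composite(rgb, holo_z, real_z)[..., 0])   # right half comes out 0
```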
 
@iroboto Did you find that the "holograms" looked fairly solid? How was the colour accuracy and resolution?
They can be very solid, yes, and they can also be translucent.

Accuracy was very good. They had me look at a computer monitor and use the mouse to move the cursor. I kept moving it off the monitor screen, and it switched over to the HoloLens, so I had a floating mouse cursor. Using that cursor, I could still manipulate the holograms in a way that felt fairly natural.
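My guess at how that hand-off works, nothing official, just the obvious logic sketched out:

```python
# Hypothetical logic for the monitor-to-HoloLens cursor hand-off.
# None of this is from the actual SDK; it's just the obvious way the
# behaviour could work: whoever "contains" the cursor draws it.
MONITOR_W, MONITOR_H = 1920, 1080   # assumed desktop resolution

def route_cursor(x, y):
    """Return which device should draw the cursor this frame."""
    if 0 <= x < MONITOR_W and 0 <= y < MONITOR_H:
        return "monitor"            # normal desktop cursor
    return "hololens"               # off-screen: draw a floating hologram cursor

print(route_cursor(500, 500))       # -> monitor
print(route_cursor(2100, 500))      # -> hololens
```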

I also had them render a real "world", something you'd take out of Google Maps. It was rendered, and I was standing on the street looking at a textured version of my building as if it were already built. Fairly good.
 
So that brings up something I was wondering: how dependent will this product be on the cloud?

With the FoV, they could always detect when something below is unsafe and then not render any holograms over it, allowing you to walk around a bit more safely.

How did the resolution feel? Could you perceive any pixels? Did you ask how the light engine works in relation to current display tech; is it closer to DLP, or does it use LCD displays, for example? In the cutaway view it was really hard to see what they were doing.

Edit: Thanks for the updates!
 
Is there any indication that the device is actively scanning/mapping the physical environment in real time such that the AR overlay is able to compensate for new or moving physical objects (people, hands, etc)? Were the hands-on demo environments as carefully designed as the stage demos to hide instances where holograms and physical objects might overlap and produce depth ordering mismatches? Do the screens actually perform occlusion such that the AR overlay is able to draw opaque objects as seen in the stage demos?

Yes, the device is actively scanning; it is the only method of input. What gets projected to me is likely the result of the servers doing the number crunching on the HoloLens's input. Whatever was happening, there was enough bandwidth to send and receive it over WiFi.

The presenter also did some motions for me that I didn't know you could do, and they caused the simulation to change.

The SDK is the Unity engine: you develop the hologram world in Unity, and your HoloLens has to figure out how to draw it correctly.

Part of the SDK showed what the HoloLens was able to perceive as a real wall; that's how it knows.

One demo my friend went to included tossing a ball onto a couch. He did that, and the ball rolled and eventually was caught by the gap between the cushions. That was clearly real time.
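My read is that the physics simply runs against the scanned surface mesh. A toy version of that idea, with an invented 1-D heightfield standing in for the couch:

```python
# Toy "ball settles in the couch gap" idea: the scanned couch is reduced
# to a 1-D heightfield (all values invented for illustration).
couch = [0.52, 0.50, 0.48, 0.30, 0.47, 0.50, 0.52]  # the dip is the cushion gap

def settle(heights, pos):
    """Roll the ball downhill one cell at a time until a local minimum."""
    while True:
        lower = [p for p in (pos - 1, pos + 1)
                 if 0 <= p < len(heights) and heights[p] < heights[pos]]
        if not lower:
            return pos                          # settled: no lower neighbour
        pos = min(lower, key=lambda p: heights[p])

print(settle(couch, 0))   # -> 3, the gap between the cushions
```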
 
So that brings up something I was wondering: how dependent will this product be on the cloud?

With the FoV, they could always detect when something below is unsafe and then not render any holograms over it, allowing you to walk around a bit more safely.

How did the resolution feel? Could you perceive any pixels? Did you ask how the light engine works in relation to current display tech; is it closer to DLP, or does it use LCD displays, for example? In the cutaway view it was really hard to see what they were doing.

Edit: Thanks for the updates!
It did not feel like an LCD screen. At all. The resolution being projected must have been very high, but the rendering itself had its own, lower resolution, if that makes sense. Akin to seeing 1080p on a 4K monitor.

By all understanding, I would say it is projecting directly into both eyes. It is not something I see on a screen at all.
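If I had to put numbers on the comparison (all invented; they didn't disclose the panel resolution or FoV), it comes down to pixels per degree:

```python
# Back-of-envelope angular resolution: every number here is assumed,
# since the actual panel resolution and FoV weren't disclosed.
fov_deg = 30.0                      # hypothetical horizontal FoV of the AR box

for name, h_pixels in [("1080p-class render", 1920), ("4K-class panel", 3840)]:
    ppd = h_pixels / fov_deg        # pixels per degree of visual angle
    print(f"{name}: {ppd:.0f} px/deg")

# A high-px/deg panel displaying a lower-resolution render would look
# exactly like the "1080p on a 4K monitor" effect described above.
```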
 
Any jitter to the holograms, or are they as stable as shown with the demo camera/video rig?

Do they plan to just sell WiFi eyeballs in a few years? Wonder if it will be cheaper than LASIK.
 
No real info here, but it is video of people wearing and using HoloLens, so we know the one they had on stage actually works.

Absolutely love the technology and the real-world possibilities. Also loved hearing how excited the developers were to start building applications for it.

But I couldn't help being amused watching people interact with nothing (from the POV of someone without a HoloLens on). What's interesting is that if I were to then put on a HoloLens in that location, I'd immediately see whatever it is they're interacting with (due to the client/server nature of the demo being run), assuming I was connected to the same server they were.
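A minimal sketch of how I imagine that client/server session works; all names here are hypothetical, not anything Microsoft has described:

```python
# Minimal sketch of a shared hologram session: the server owns one
# world-anchored scene, and every connected headset renders the same
# holograms from its own pose. All names are hypothetical.
class SessionServer:
    def __init__(self):
        self.holograms = {}                  # id -> world-space position

    def place(self, holo_id, world_pos):
        self.holograms[holo_id] = world_pos  # one shared source of truth

    def snapshot(self):
        return dict(self.holograms)          # what every client receives

def render(client_name, head_pos, scene):
    for holo_id, pos in scene.items():
        rel = tuple(p - h for p, h in zip(pos, head_pos))
        print(f"{client_name} sees {holo_id} at offset {rel}")

server = SessionServer()
server.place("ball", (2.0, 1.0, 5.0))

# Two people in the room, same server: both see the same ball, each from
# their own viewpoint -- which is why a second headset would immediately
# show you what the first wearer is interacting with.
render("wearer_1",  (0.0, 1.7, 0.0), server.snapshot())
render("bystander", (1.0, 1.7, 2.0), server.snapshot())
```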

Regards,
SB
 