Microsoft HoloLens [Virtual Reality, Augmented Reality, Holograms]

It is impressive, but I still don't like certain aspects they are not being clear about - for example, obviously it's not like having someone in your room due to the FoV, but also the bit where he taps the thing for her to climb onto - that means you would have to have your room laid out exactly the same, or at least the items you wish to interact with.

I don't get why he was 'moving out of the way' - I'm not sure that would be a natural instinct given the limitations of the tech. Also, I'm confused about the onscreen lag versus the live presentation when he was mapped 'in real time' - it seems odd the TV would lag behind so much. Finally, how can the whole thing be replayed if only one angle can be seen at any one time by the camera?

Maybe I missed some stuff?
 
It is impressive, but I still don't like certain aspects they are not being clear about - for example, obviously it's not like having someone in your room due to the FoV, but also the bit where he taps the thing for her to climb onto - that means you would have to have your room laid out exactly the same, or at least the items you wish to interact with.

I don't get why he was 'moving out of the way' - I'm not sure that would be a natural instinct given the limitations of the tech. Also, I'm confused about the onscreen lag versus the live presentation when he was mapped 'in real time' - it seems odd the TV would lag behind so much. Finally, how can the whole thing be replayed if only one angle can be seen at any one time by the camera?

Maybe I missed some stuff?

He moves back because instinctively humans tend to move back if someone appears to be about to walk into them, or to give them space.

Each of those respective rooms has an array of cameras focused on the central space. I'm guesstimating 24 based on what's seen in the video: 3 in each corner and 3 in between each corner. So basically they've got 360-degree coverage of that area. Those are used to create the 3D model of that entire space (including people) as well as to texture those 3D models. It's likely that it's just replaying (reconstructing) the data that was used to create the reprojected scene in the first place. The HoloLens that he is using isn't involved in any of that, other than displaying the recreated scene with correct relative positioning to himself in the scene. For playback later, he can view that reconstructed scene from any angle (it's a 3D replication, not a recording).
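To make that a bit more concrete, here's a rough sketch of my own (purely illustrative, not anything Microsoft Research has published) of what that capture-and-replay loop conceptually looks like: each frame, every calibrated camera's depth image is back-projected into the shared room coordinate frame to build a colored point cloud, and replay is just rendering those stored points from whatever viewpoint you like - which is why playback isn't limited to the angles any physical camera saw.

```python
import numpy as np

def backproject(depth, color, K, cam_to_world):
    """Turn one camera's depth + color image into world-space colored points.
    depth: (H, W) metres, color: (H, W, 3), K: 3x3 intrinsics,
    cam_to_world: 4x4 pose of this camera in the shared room frame."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    valid = z > 0
    # Pixel -> camera-space point, scaled by the measured depth.
    x = (u.ravel() - K[0, 2]) / K[0, 0] * z
    y = (v.ravel() - K[1, 2]) / K[1, 1] * z
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)[valid]
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]
    return pts_world, color.reshape(-1, 3)[valid]

def reconstruct_frame(cameras):
    """Fuse all (~24) cameras into one colored point cloud for this frame.
    cameras: list of (depth, color, K, cam_to_world) tuples."""
    pts, cols = zip(*(backproject(*cam) for cam in cameras))
    return np.vstack(pts), np.vstack(cols)

def replay(points, colors, K, world_to_view, W=1280, H=720):
    """Render a stored frame from an arbitrary (later) viewpoint.
    Because the stored data is a 3D model, the viewpoint doesn't have to
    match any camera that physically existed at capture time.
    W, H are just an arbitrary output resolution for this sketch."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    cam = (world_to_view @ homo.T).T
    in_front = cam[:, 2] > 0.1
    cam, cols = cam[in_front], colors[in_front]
    u = (K[0, 0] * cam[:, 0] / cam[:, 2] + K[0, 2]).astype(int)
    v = (K[1, 1] * cam[:, 1] / cam[:, 2] + K[1, 2]).astype(int)
    image = np.zeros((H, W, 3), dtype=np.uint8)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    image[v[ok], u[ok]] = cols[ok]   # naive splat, no z-buffering
    return image
```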

The lag as seen on the TV likely just shows how processing-heavy the research software is (think of it as pre-alpha type stuff) as well as how un-optimized it is. Combine that with the inherent lag that virtually all TVs have even when in PC/Game mode, and there you have it.

Regards,
SB
 
So essentially this is years and years away from anyone using it like this. I understand why you would step back (in a VR environment), but when you can only see a 'transparent window' I personally don't think you'd step back. I wish MS would show something more realistic and likely, instead of all this 'very much in the future potential' stuff.
 
It's AR. It's new. Everything about it is going to be research about potentials.

VR is easy. VR has been here for over 20 years now. It has yet to be successful and hasn't been done like it is now in a commercial package, but there are already ideas, implementations and games that exist for VR to work off of. AR is something else entirely.

As for stepping back: even in a relatively small "window" the experience is likely to be far more realistic than a similar VR experience. This is happening in your environment, right in front of you in your real space. Those 3D reprojections were real enough that you likely wouldn't even think about moving back; you would just move back.

When I see someone walking towards me, I'm not looking at his feet or his legs or even most of his upper torso. I'm looking at his face, as that gives the most clues about where a person intends to go and what they intend to do. Yes, your peripheral vision will lack some data with AR versus a real person walking towards you. But the brain has an amazing facility for filling in the blanks. There are many people in the world who suffer from partial blindness where their eye cannot see certain areas directly in front of them. Their brain, however, will fill in the blanks, and the person won't ever know they aren't seeing something in that area in front of them until they have an eye examination.

That isn't to say that by seeing that AR "window" your brain will create the rest. It won't. But it will focus on what it can see and relate that to what you usually see, and instinctively react accordingly.

Also, for that particular demo, his child quite likely fit within the AR "window" if he was focusing/looking at her anyway.

Regards,
SB
 
Sorry, I didn't mean VR as in what the current VR headsets can do - I just mean if you are in an environment where all you can see is what is projected (i.e. if the HoloLens view was the same as the VR view). I agree about the brain filling in the gaps, but the girl is transparent and you will know she is not there - I'm sorry, but I don't buy your brain thinking she's solid and that you need to step back (let alone saying that you are doing so - twice; how staged was that?).

Anyway, I want to see what the user sees. I confess I've not used it yet, so I need to keep my mind more open.
 
I confess I've not used it yet, so I need to keep my mind more open.

Sounds like good advice.

The technology & demonstration were by Microsoft Research. I guess they need to keep their cool stuff to themselves until it's gone through years of beta testing & field trials so it's ready for the general buying public. /s

Fortunately for the rest of us, we like the idea that they are willing to release this stuff so it can inspire others, & there will be all kinds of different uses available when HoloLens launches.

Tommy McClain
 
I don't buy your brain thinking she's solid and you need to step back

There are a lot of social spatial cues that seem to be wired in at a low level and produce surprising responses. I know at a conscious level that a Combine soldier from HL2 is not a real thing, but having one walk up to you and look at you at eye level from a foot away does make you want to take a full step back to re-establish whatever your normal comfort space is.

The tele-presence stuff is probably the most exciting element of both AR and VR to me, but there are some pretty giant hurdles to overcome that might push the mature/usable consumer product time frame beyond the predictable tech horizon. I'm especially cautious with any tech that might require another 5-10 fruitful cranks of Moore's Law to see its promised vision fully realized.
 
HoloLens doesn't need that many more turns of Moore's Law. It really just needs a fast wireless connection to a nearby PC that can handle all this information. On the go, you're going to use the headset for games on the level of what a phone can do, and to make calls, surf the web and view videos.

You're not going to do something like what's seen in the demo in the middle of a street or on a crowded train or what have you. That type of stuff will be done at home or in an office.
 
We're looking at needing an order of magnitude improvement on basically every front. Resolution, TDP and battery life, form factor, etc. Accommodation and opacity support. Real time SLAM outside of cherry picked lab environments. And that's just the known missing pieces of the puzzle. The better AR/VR gets, the more pieces we discover are missing. Every time resolution, refresh rate, or tracking precision has improved, new issues have popped up that we didn't even realize were there before.
 
We're looking at needing an order of magnitude improvement on basically every front. Resolution, TDP and battery life, form factor, etc. Accommodation and opacity support. Real time SLAM outside of cherry picked lab environments. And that's just the known missing pieces of the puzzle. The better AR/VR gets, the more pieces we discover are missing. Every time resolution, refresh rate, or tracking precision has improved, new issues have popped up that we didn't even realize were there before.

Yes, which is why Microsoft aren't even really thinking about a potential consumer launch in the near future. Anything consumer-related is at least 2-3 years away, if we're being optimistic. Assuming they are keen to keep it untethered, the experience will always likely lag behind competing hardware in terms of fidelity. However, the hardware is secondary to seeding Windows as the platform for AR development. That, IMO, is far more important to them than HoloLens succeeding at a commercial level.

Whether they succeed with that or not depends on how quickly big competitors can come out with a competitive development environment. But Microsoft does have a bit of an advantage due to the ubiquitous nature of Windows, especially within the business sector, as well as leaving it open for use with any available AR device, assuming competing AR hardware makers decide to use Windows for whatever reasons (lower development cost, for example). It also helps that some of the big names in design are already implementing support for Windows AR development (Autodesk, for example).

Regards,
SB
 
Pretty cool 2nd experience by Brett Howse

http://www.anandtech.com/show/10210/hololens-round-two-augmented-reality-at-build-2016

The very interesting bit was later on, when we linked our Hololens units with the other people in our six-person pods. This way all six people could interact with a single energy ball. People also got to choose an avatar which would float over their heads. That experience was pretty amazing. With very little setup, the holograms were truly linked to a single point that all people could see.

As part of this demo, my coach suggested I walk around the (very large) room and then look back. This was probably the most amazing part of the demo. After walking a hundred feet or more away, and around some tables and pillars, I looked back and the hologram was still floating exactly where I left it. The ability to really lock things to a location is really the one part that needs to be perfect for this experience to work, and they really nailed it. In addition, my pod mates were all around the room with avatars floating over their heads.

The ability to see something anchored in the world no matter where you roam or how far away the AR object is (this one was ~100 feet/30.5 meters away) is pretty key to AR, IMO. Presumably an architect, for example, could place a building in the real world, then walk/drive some distance away, let's say half a kilometer, and still see it anchored in the real world and see how it blends in with the surroundings (at that distance a large building would likely fit within the smallish FOV). Then drive to the other side and see how it looks from that vantage point.
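To put a rough number on that parenthetical (my own back-of-the-envelope, assuming a ~30 degree horizontal FOV, which is roughly what has been reported for HoloLens, and a building around 100 m wide):

```python
import math

fov_deg = 30.0           # assumed horizontal FOV (not an official spec)
building_width_m = 100.0 # assumed size of a "large" building
distance_m = 500.0       # half a kilometre away

# Angle the building subtends at the viewer's eye.
subtended = 2 * math.degrees(math.atan((building_width_m / 2) / distance_m))
print(f"{subtended:.1f} degrees of a {fov_deg:.0f} degree FOV")
# ~11.4 degrees, so a large building at that distance fits comfortably.
```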

As expected, FOV is still relatively small.

I would explain it as something like a mid-sized television, in the 27-inch range, sitting a few feet away from you.

Translucency is variable depending on the background and lighting, as we've noted before. But even with the small-ish FOV, it remains convincing enough that the body and mind instinctively try to avoid potentially dangerous obstacles.

Towards the end of the demo session we did some shooting of orbs which opened up a hole in the floor. Peering down into it, it really felt like this was something you didn’t want to step into. The holograms tend to be a bit translucent, but on this one in particular it was much more solid.

And very wise words.

The experience of AR is much different than VR. Because you are interacting with things in real space, you can easily move around without fear of tripping or walking into a wall. VR is able to offer much more complex graphics and immersion right now, but you are largely bound to a single location. The use cases for AR seem, to me, to be not necessarily the same as VR and both should easily be able to co-exist.

While there is some overlap, AR is not VR and VR is not AR.

For example, with the above architect, perhaps his company did a VR mockup prior to that to get the dimensions and whatever else correct. Then, using AR, they can see its impact on its real-life surroundings. Is it going to be an eyesore? How will it block line of sight to potentially important landmarks (both major and, more importantly, minor landmarks that might otherwise be missed) for surrounding buildings? Something like that could potentially have avoided many real-life situations where companies and individuals were sued because their buildings interfered with established landowners' line of sight to key physical landmarks.

In more gaming-centric terms: imagine putting on an AR device like the HoloLens and seeing Godzilla walking through your city/town/suburb/etc. :D Yes, I know it isn't practical, but it'd certainly be cool. Or, a little more realistically, the black monolith from the movie 2001 sitting in the middle of whatever location a couple of kilometers away. Although I personally don't think gaming will be a strength of AR, there's just the potential for so many cooler things that aren't related to gaming.

Regards,
SB
 
It does sound nice; still a few years off before it becomes practical though.

I can't help but think about your Godzilla example - the problem for me is that it can only work in an open space. So if a building obscured Godzilla, I don't see how they could make the avatar be partially covered.

I know Kinect is kind of able to do that in a small space (it is, isn't it?), but with large open spaces I can't see how it'd work, because the infrared projector wouldn't function at that range or in those lighting conditions.

The technology does have amazing potential; it just seems like we won't have a great implementation in the early stages (same with VR, I guess).
 
I can't help but think about your Godzilla example - the problem for me is that it can only work in an open space. So if a building obscured Godzilla, I don't see how they could make the avatar be partially covered.

I know Kinect is kind of able to do that in a small space (it is, isn't it?), but with large open spaces I can't see how it'd work, because the infrared projector wouldn't function at that range or in those lighting conditions.

In the Godzilla example with a city, they would have to map out the geometry of the city beforehand rather than have HoloLens map the geometry in real time. Then you use that to determine what the user can and can't see - hence why it isn't terribly practical. Although I wouldn't put it past some organization or company to start 3D mapping major cities if AR ever becomes a "big" thing. The potential applications for something like that could be pretty fantastic.
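For what it's worth, the occlusion part isn't conceptually hard once you have that pre-mapped geometry. A very hand-wavy sketch of my own (not how HoloLens actually does it): render the pre-scanned city model into a depth buffer from the user's current pose, then only show the hologram pixels that are nearer than the real geometry.

```python
import numpy as np

def composite_hologram(camera_rgb, city_depth, holo_rgba, holo_depth):
    """Per-pixel occlusion of a hologram against a pre-scanned city model.
    camera_rgb : (H, W, 3) what the user sees (conceptually; a real optical
                 see-through display wouldn't composite onto a camera image)
    city_depth : (H, W) depth of the pre-mapped buildings rendered from the
                 user's current pose, in metres (np.inf where nothing scanned)
    holo_rgba  : (H, W, 4) the rendered hologram, alpha in [0, 1]
    holo_depth : (H, W) depth of the hologram, np.inf where it isn't drawn
    """
    visible = holo_depth < city_depth          # hologram in front of real geometry
    alpha = holo_rgba[..., 3:4] * visible[..., None]
    out = camera_rgb * (1 - alpha) + holo_rgba[..., :3] * alpha
    return out.astype(camera_rgb.dtype)
```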

Imagine a navigation system that, in addition to giving you turn-by-turn instructions, also had an accurate marker visible above the location of the place you want to go. Or, instead of a route being mapped on a handheld device, the route is actually superimposed on the sidewalk you're walking on, or the road you're driving on. The AR device would do local mapping to make sure the route is correctly occluded by people walking over it and correctly superimposed on the world geometry. The 3D model would anchor the floating marker as well as provide the foundation for the route to take. In the case of walking, it'd be some type of wearable. In the case of driving, it'd be an automotive solution (sensors in the car, AR through the windshield, or wearable glasses?).

More ambitiously, imagine something like this out in the wilderness. At any moment you could have it put up a marker or markers to show where civilization is (whether houses, roads, towns, etc.) as well as distance and any significant geographic obstacles (rivers, canyons, etc.). You'd still have to survive the wilderness if you'd gotten lost in the first place, but it'd give you an idea of where to go.
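The "where is civilization" marker is basically just standard geodesy plus the headset's compass heading. A quick sketch of my own (the coordinates below are arbitrary examples):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees) from the
    user's position to a known point of civilization. Standard haversine /
    forward-azimuth formulas; nothing AR-specific here."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

# The headset would then draw the marker at that bearing relative to the
# user's compass heading, with the distance as a floating label, e.g.:
dist, brg = distance_and_bearing(47.615, -122.200, 47.642, -122.137)
print(f"nearest town: {dist / 1000:.1f} km at bearing {brg:.0f} degrees")
```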

Yes. It's definitely some distance in the future. And likely not something that first gen AR devices, like HoloLens, will ever be capable of doing. But it's certainly something that is potentially possible at some point in the future.

Regards,
SB
 
(at that distance a large building would likely fit within the smallish FOV)

We'll probably have the AR FOV solved long before we have anything like an outdoors, use-anywhere SLAM with sub-millimeter precision and the ability to properly composite AR objects with real world environments beyond that of pre-scanned static interiors.
 
Out of the footage I've seen, I somehow like the mundane demo of putting desktop windows around the room best. With high enough resolution and FoV, that would be incredibly useful.

Based on some people's 2d desktops though, I really wouldn't want to step into someone else's AR workspace. :)
 
We'll probably have the AR FOV solved long before we have anything like an outdoors, use-anywhere SLAM with sub-millimeter precision and the ability to properly composite AR objects with real world environments beyond that of pre-scanned static interiors.
I guess it's possible for the headset to scan a room in 3d space without additional hardware. I seem to remember some of the Kinect hacks doing it.
 
We'll probably have the AR FOV solved long before we have anything like an outdoors, use-anywhere SLAM with sub-millimeter precision and the ability to properly composite AR objects with real world environments beyond that of pre-scanned static interiors.

Yes, that would be some Nth gen device in the future. What HoloLens and first gen AR devices will do is let people start playing around with it. Start trying out ideas. Obviously on a small scale at first. As the tech advances, the scale will potentially expand as well.

Regards,
SB
 
I guess I just think it'd be more substantive and intellectually stimulating to talk about applications of tech that will probably (or at least potentially) exist in the next 5 years rather than tech that we have no reason to assume will ever exist. I can only assume that you and I likely disagree on the how/when/what of that 'Nth' device.
 
Maybe, maybe not. I have no predictions on when or if any of this comes about. It's just ideas about the potential. Until AR systems get more robust, it's difficult to predict what will actually prove feasible and, more importantly, whether it's actually a good use of AR.

Similar in many ways to VR: what we thought was good for VR may not actually be the best use of VR. The difference being that VR has been worked on for over two decades now; AR is still in its infancy in comparison.

Regards,
SB
 