Microsoft HoloLens [Virtual Reality, Augmented Reality, Holograms]

But MS isn't going to try to develop all the content for it by itself.
No, but they need to start the ball rolling with some impressive apps that come with the device, if nothing else to showcase what it's truly capable of. When Apple launched the iPhone with its all-touch, multitouch UI, they didn't wait for somebody to develop a cool maps app; they partnered with Google, put Google Maps right there, and made it so you could drag to scroll, pinch to zoom, rotate, and whatnot.

Moving from one technology/UI paradigm to another can be challenging. Things aren't obvious until they are shown to be obvious. Apple's software demonstrated that the lack of buttons and a stylus actually made many things better: hey, you can do this and that, and it's quicker to do, too. Microsoft needs to do the same. This is a new UI and interaction paradigm. The capabilities are entirely new, at least to 99.999% of consumers and developers. Microsoft needs to set the stage to make developers' lives easier, so that bringing their software ideas to this new platform isn't a struggle.
 
10 million a year would be good. That would be an iPad number, and that should draw plenty of developers.

iPad number? The iPad hasn't done less than 10 million in a quarter in years.

If the HoloLens, at a $1,000+ price, did a million in sales in its first year, that would be outstanding.

The first gen iPhone did 6 million in its first year. The first gen iPad did 15 million. The iPod, which started Apple on its road to being the king of the consumer electronics mountain, did 600K in its first 14 months.

Sometimes you have to crawl before you walk.
 
They're going to follow the Oculus Rift model and keep the dev kits in the hands of people who will make software for it, and find the best uses. The hardware will probably go through multiple iterations in the hands of what Microsoft calls "Innovators", who will make software for it before it's ever put on the shelves for consumers. It's not really following the model of Kinect's release.

Beyond that, the discussion about whether it will succeed or fail is basically flipping a coin heavily biased by how much you do or do not like Microsoft, and how overly optimistic/pessimistic you are about new technology.
 
I've heard more than a few horror stories from people who have tried the most recent Oculus prototype and have since had a very hard time using and enjoying their DK2 (akin to the DK1-to-DK2 experience). That's the sort of world we're probably going to be living in for the next 5 years or so with these types of devices. Not simply incremental advances every couple of years, but goalpost redefining on the scale of months.

I think this is probably why you're going to see a very uncharacteristically public display of R&D, as Facebook, MS, Samsung, Sony, etc. want to be seen as innovators, but are also not looking to commit to an all-in 9-figure marketing campaign that necessitates a firm release timeline and multi-year product life cycle. Even Samsung wasn't able to release their $200 piece of plastic GearVR without an "Innovator Edition" cautionary tagline. Oculus have been doing essentially the same tempering of expectations by keeping their "dev kit" name (Carmack mentioned that they've had internal discussions about when they can stop calling their products "dev kits"), and I'll be very surprised if they launch their "consumer" version without some kind of subtitle.
 
I thought this was going to come out by the time W10 comes out.

Supposedly the CPU in the prototype is a version of Atom due this year.

If this product is years away, giving a demo now is kind of pointless.
 
They're going to follow the Oculus Rift model and keep the dev kits in the hands of people who will make software for it, and find the best uses.
And that's fine, but they still need to have showcase software because AR and VR are fundamentally different in that VR typically replaces the screen you stare at whereas AR is augmenting or altering the environment around you.

Moving Elite Dangerous from a conventional PC to Oculus wasn't a massive leap conceptually (if not technically) but Elite Dangerous isn't going to work on HoloLens because it's set in space and needs a bunch of controls. HoloLens is going to require new software that's useful at home - which rules out a ton of stuff Google Glass does because it's "out and about" utility software.

I think HoloLens would be great for something like seeing how furniture would look re-arranged (without actually moving it) or virtually redecorating, but these probably aren't mainstream. What mainstream things will you do, in your home, with your home visible and things laid over top? With no controller device other than what the device detects your hands doing?
 
I thought this was going to come out by the time W10 comes out.

Supposedly the CPU in the prototype is a version of Atom due this year.

If this product is years away, giving a demo now is kind of pointless.

They said the product would be out in the timeframe of Windows 10, meaning from when Windows 10 launches to when Windows 11 launches. That's going to be a couple of years, at least.
 
And that's fine, but they still need to have showcase software because AR and VR are fundamentally different in that VR typically replaces the screen you stare at whereas AR is augmenting or altering the environment around you.

Moving Elite Dangerous from a conventional PC to Oculus wasn't a massive leap conceptually (if not technically) but Elite Dangerous isn't going to work on HoloLens because it's set in space and needs a bunch of controls. HoloLens is going to require new software that's useful at home - which rules out a ton of stuff Google Glass does because it's "out and about" utility software.

I think HoloLens would be great for something like seeing how furniture would look re-arranged (without actually moving it) or virtually redecorating, but these probably aren't mainstream. What mainstream things will you do, in your home, with your home visible and things laid over top? With no controller device other than what the device detects your hands doing?

I totally agree. It needs software. It's on them to make it. They have years to do it. They're just at the point where they felt comfortable with the concepts they'd been working on. Dev kits will be out in the hands of other people. No idea if this product will succeed or fail. Don't really care at this point. It's just a really cool device.
 
Does that mean Windows 11 or Windows 10.1?

I'm not going to go back and watch the presentation, but I know they said the first people to get it when it was available would be select "innovators". We have 9-12 months until Windows 10 is released, and at least another year until Windows 11 or 10.1. I don't expect this product to be on the shelves until late 2016, at the earliest.

Edit:
Put it this way ... It runs Windows 10, but we haven't seen how you interact with Windows 10 at all. It's in a very primitive state. We have no idea how it works. How do you put software on it? It's apparently a standalone device that doesn't require another computer. That means you must be able to download software, install it, and execute it using the device itself. We haven't seen that yet. There must be some way to do it. There are so many questions that need to be answered before we'll understand how it works and how practical it will be. Those answers look to be further off than Q4 of this year.
 
I think HoloLens would be great for something like seeing how furniture would look re-arranged (without actually moving it) or virtually redecorating, but these probably aren't mainstream. What mainstream things will you do, in your home, with your home visible and things laid over top? With no controller device other than what the device detects your hands doing?

Having the equivalent of a 100" TV playing video on your wall while you have the equivalent of a weightless 20" tablet that you can drag into view or out of view whenever you want while you have a Skype video call with friends or relatives pinned to a corner while you casually play Bejeweled (or whatever game is a big hit with casual PC gamers nowadays) while a stock ticker does its thing, set up to notify you when a stock hits certain points, while you edit a shopping list from input gotten from the Skype video call while you coordinate plans for the wedding while you play with your dog or cat while you watch your children to make sure they don't get in trouble (in the real world) while you get ready for a night out on the town while you change your baby's diaper while you clean up the living room to get ready for guests while you re-arrange the furniture in the room for the X-th time because your spouse is tired of the current look while you install a new video card into your PC while you assemble the doll house you just got for your daughter while you...

Obviously not all of that at once, but you'd certainly be able to do many of those things simultaneously. VR would fail here as you wouldn't be able to do many of those while simultaneously interacting with and/or monitoring what's going on in the real world.

Sure, you can do all that with a tablet and an expensive TV and a PC and... But the potential is there to do many/all of those things on one device. Sort of like: yes, you can still get up off the couch and change channels and volume on your TV manually, but why do that when you have a remote? And sure, you can have a remote for every device in your home entertainment system (receiver, Blu-ray player, TV, cable box, DVR, etc.), but why do that when you can have one remote that controls all of them?

That's what I could think of, which probably isn't even a fraction of what some developers out there could potentially think of.

Hell, you could have a Skype call with someone while changing the oil in your car and take notes without getting oil all over your tablet. Or have a Skype call with your physician to take a look at your baby and see if you actually need to come in for a visit because your baby made a strange noise, since the physician would be able to see everything that you see and hear what you hear. The same would apply to veterinarians and pets, cars and mechanics, food and cooks; the sky is the limit. Sure, a tablet is nice for pulling up recipes, but what if the recipe was in view the entire time you're cooking dinner?

Hell for me, I could have someone I know help me pick out clothes (I'm red-green color blind) that actually match without them actually being there. :p

The possibilities are endless. Sure, you can do a lot of these things with a combination of devices. But modern life is full of things that let you consolidate actions into one device. After all, wasn't that the attraction of the PC? And tablets? And smartphones? This has the potential to move all of that onto one device.

Yes, there are still many challenges and hurdles to overcome before it's something that is intuitive to use and offers robust controls for many of the above mentioned tasks. But you only asked about the possibilities of AR. It's up to Microsoft and the developers (and any other company that is trying to do AR) to come up with solutions to some very real problems.

Hell, a potentially very interesting feature would be some form of OCR with auto-translate that could let you read a foreign book/newspaper/pamphlet/menu in your language (to some degree), automatically replacing the words in the book/newspaper/pamphlet/menu with words in your language (you'd never see the foreign text unless you instructed the device to stop auto-translating).
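As a rough illustration of that idea, here's a minimal sketch of the pipeline: detect text regions with bounding boxes, translate each string, and hand the renderer instructions to paint the translation over the source text. Everything here is a hypothetical stub I made up for illustration; a real headset would plug in an actual OCR engine (e.g. Tesseract) and a machine-translation model in place of `detect_text` and `translate`.

```python
# Sketch of an AR auto-translate pipeline. All names and data are
# hypothetical stubs, not any real HoloLens API.

from dataclasses import dataclass

@dataclass
class TextRegion:
    text: str    # recognized source-language text
    bbox: tuple  # (x, y, w, h) in camera-frame pixels

# Stub OCR: a real system would run the camera frame through an OCR
# engine and return recognized words with their bounding boxes.
def detect_text(frame):
    return [TextRegion("Speisekarte", (40, 10, 200, 30)),
            TextRegion("Suppe", (40, 60, 90, 24))]

# Stub translator: a real system would call a machine-translation model.
GLOSSARY = {"Speisekarte": "Menu", "Suppe": "Soup"}

def translate(word, target="en"):
    return GLOSSARY.get(word, word)

def build_overlay(frame):
    """For each detected region, emit an instruction telling the renderer
    to paint the translated string over the original text, hiding it."""
    return [(region.bbox, translate(region.text))
            for region in detect_text(frame)]

# No real camera frame in this sketch, so pass None to the stub OCR.
overlay = build_overlay(frame=None)
```

Each entry in `overlay` says "draw this string at this box, covering the source text," which is exactly the never-see-the-foreign-text behavior described above.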

Regards,
SB
 
^ As much as I hit my head when I am changing my oil or working on a light switch, I will end up with either a new way to charge the headset or an oil-cooled light engine.
 
Seen mixed messaging from the press on this. Some say they can see objects through the hologram, others tend not to notice it. Judging from the photos around, maybe the opacity level is a design choice?

This shot was a capture from the unit the lady was wearing; we were told it was still a feed from her device.

Are you sure that what was shown in the demo was from her device? You have to be skeptical about it. If that's truly "her view," where is it coming from? She's wearing yet another camera on her nose just so we can see her view for the demo? Which is then encoding and streaming the video out as well? I don't think so.
But the visor has forward facing camera(s) and maybe we're seeing THAT. Hmm, no, because we see the GUI composited onto it. And that composite is not what the visor's camera sees, either.

So my theory is that the visor she was wearing was actually just the industrial prototype and nothing more. It's not a working unit. The live demo was legit, but done with the many sensors from the funky camera system that was literally looking over her shoulder. You can see it at :23 into the video.
[Attached screenshot: HOLO.png]
 
Are you sure that what was shown in the demo was from her device? You have to be skeptical about it. If that's truly "her view," where is it coming from? She's wearing yet another camera on her nose just so we can see her view for the demo? Which is then encoding and streaming the video out as well? I don't think so.
But the visor has forward facing camera(s) and maybe we're seeing THAT. Hmm, no, because we see the GUI composited onto it. And that composite is not what the visor's camera sees, either.

So my theory is that the visor she was wearing was actually just the industrial prototype and nothing more. It's not a working unit. The live demo was legit, but done with the many sensors from the funky camera system that was literally looking over her shoulder. You can see it at :23 into the video.

If the camera on the visor captures raw frames, why can't it composite the 3D info for the "hologram" into the frame before it's encoded and sent out?
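That compositing step is cheap in principle. A minimal sketch of alpha-blending a rendered hologram layer onto a camera frame before it's encoded and sent out (array shapes and names are my own assumptions for illustration, not anything Microsoft has described):

```python
# Sketch: composite a rendered "hologram" RGB layer over a camera frame
# using a per-pixel alpha mask, before the result is encoded for streaming.
import numpy as np

def composite(camera_frame, hologram_rgb, hologram_alpha):
    """Alpha-blend the hologram layer onto the camera frame.

    camera_frame:   (H, W, 3) uint8 RGB from the visor's forward camera
    hologram_rgb:   (H, W, 3) uint8 RGB rendered overlay
    hologram_alpha: (H, W) float in [0, 1]; 0 = transparent, 1 = opaque
    """
    a = hologram_alpha[..., None]  # broadcast alpha over the color channels
    blended = a * hologram_rgb + (1.0 - a) * camera_frame
    return blended.astype(np.uint8)

# Toy 2x2 frame: hologram fully opaque in the top-left pixel only.
cam = np.zeros((2, 2, 3), dtype=np.uint8)       # black camera frame
holo = np.full((2, 2, 3), 255, dtype=np.uint8)  # white hologram layer
alpha = np.array([[1.0, 0.0], [0.0, 0.0]])
out = composite(cam, holo, alpha)
```

If the visor has the raw frames and the rendered overlay, this per-pixel blend is all it takes to produce a composited stream, so the "why can't it" question stands.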

I have no idea what the other camera is, but it looks like a stereoscopic camera.

Staging that demo would be incredibly difficult. The voice commands are timed perfectly to the reaction of the software, as are her finger gestures. The parts that she moves seem to track with her. How many individual movements and voice commands does she make in the 8 minutes 30 seconds (approx.) that she demos the tool? I have no idea how you'd fake that and have it sync so well.

Edit:
I see what you're saying. You think it was a real demo, but the sensors were all built into the camera, not into the actual headset.

The headset would at least have to give her a working display. No idea how the display works, but she'd have to be getting data from the camera, feeding her display.

If you read the impressions of the headset from hands-on demos, they clearly have it working for their hands-on.
 
Are you sure that what was shown in the demo was from her device?

Well, it sounds like no one will ever convince you it was real, unless you try it yourself...

Lucky for us, many members of the press actually did try it out that day and reported their experiences with it... none cried fake...
 
It has two forward-facing cameras that we know of, and they did show Skype in sizzle reels where those cameras were used to show the person on the other end of Skype what you, the user, were looking at.

If it is using the Intel "Cherry Trail" chipset with WiDi, then we could have been just seeing a stream from that? (My guess)

Edit: I did look at her visor during the first live viewing, but you could not see any images in it, which led me to rule out that she had any LCD display for the demo.

Stealth Edit:

I want a dev kit, as now I want to create an RTS for multiple players (it'd be expensive) in a household. It would be really fun to move around the room (aka the board) to move troops, build things, etc.

Screw Minecraft, give me Age of Mythology, Halo Wars 2, etc. Space Engineers would be fun, as most everything could float around shoulder height.
 
Lucky for us, many members of the press actually did try it out that day and reported their experiences with it... none cried fake...
Correct, and the press demos describe the heavy, awkward prototype hardware, not the simple visor from the MS demo.
As Gizmodo put it:

The demos begin by lowering a tethered, relatively small, rectangular computer over your head, which hangs around your neck by sling. Like what Flavor Flav would do with a computer. You can literally feel the heat coming off the computer's fans, which face upward. It feels like you're wearing a computer around your neck, because you are.
After that, the headset was carefully handed to me so that I could guide it onto my head while the demonstrator placed it over my eyes. To be completely clear, the headset dev kit I tried literally had exposed circuit boards.
 
If it is using the Intel "Cherry Trail" chipset with WiDi, then we could have been just seeing a stream from that? (My guess)
It can't be, since the visor's forward cameras would show the room in front of her but not the AR overlay graphics, and the AR overlay graphics stream wouldn't show the room. So we saw a custom view, not from her device, but from the camera over her shoulder, which synthesized what she should be seeing in an actual device.
 
^ Watch the video again; you can see the camera device during the "Meyerson" hologram is nowhere close to the right angle to be driving the picture. 0:53 is a good example, which matches up with some of the liveblog shots that showed her with her hand centered on her face while the camera rig was still near the podium, a good 45 degrees off.

I guess there have also been a few reports that it will be at //Build, with units going out to developers this spring. So we'll get more news then, and hopefully better impressions.

Dunno -- I am still leaning toward her device being real, and probably rare enough for MS to not want anyone to touch it. Or the dev units the press used had more sensors for data collection, as it was maybe a good way to have different heights, head-movement styles, etc. tested/collected.

Edit: I do not buy any of the videos they showed as real, that's for sure; those reminded me of the early Kinect reveal vids.
 