Microsoft HoloLens [Virtual Reality, Augmented Reality, Holograms]

Certainly. Reducing brightness is all that's needed, whether with a passive transition lens or an active (i.e., electrically switched) filter. If the external light is reduced to internal levels by the time it reaches the hologram imaging lens, the result of the composition will be the same.
 
I think the reverse could actually be true regarding the distracted driving issue. A lot of accidents that happen because of distracted driving are caused by taking your eyes off the road to look down at a text, to look up contacts to make calls, or to check GPS directions on the dash or near the gear shift.

A HoloLens in the FOV could limit some of these issues with integrated voice commands and text read-back. Manually typing or actually reading texts may become unnecessary.

Some limits on full functionality would be necessary, such as blocking YouTube or video viewing, but some things, if implemented smartly, could enhance safe driving, like real-time weather/traffic alerts along your route of travel.
 
No way people are allowed to drive with these things on. There will be laws passed everywhere to expressly prohibit those.

I'm talking more about people walking around and walking right in front of a car with these things on.
 
The FOV is pretty wide; just have the screen flash a huge warning over everything when it sees a car.
 
The problem isn't obstruction of view. The problem would be being distracted to the point of doing something negligent, like walking in front of a car.
 
The HoloLens should track in real time. If it sees you walking towards a stop sign, it could stop everything playing and pop the stop sign up in your view for you to stop; the same with traffic lights (a rough sketch of this idea follows below).

Now, if someone is going to walk out into the middle of the street without looking, breaking the law by jaywalking, then give them a Darwin Award. It's no different than walking in front of a car while checking an app on your phone, and it could even be less dangerous, since the device would know where it is and what's going on and could warn someone.
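To make the "pop the stop sign up in your view" idea concrete, here's a rough Python sketch of a hazards-preempt-everything policy. Hazard, Compositor, and the 15 m threshold are all made up for illustration; no real HoloLens API is implied:

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    kind: str          # e.g. "stop_sign", "traffic_light", "approaching_vehicle"
    distance_m: float  # estimated distance to the hazard

URGENT_KINDS = {"stop_sign", "traffic_light", "approaching_vehicle"}
WARN_DISTANCE_M = 15.0  # arbitrary threshold for this sketch

class Compositor:
    """Stub standing in for whatever the headset's display layer exposes."""
    def pause_all_media(self):
        print("media paused")
    def render_warning(self, hazard):
        print(f"WARNING: {hazard.kind} {hazard.distance_m:.0f} m ahead")
    def resume_media(self):
        print("media playing")

def safety_tick(hazards, compositor):
    """Run once per frame: urgent hazards pause media and flash a warning."""
    urgent = [h for h in hazards
              if h.kind in URGENT_KINDS and h.distance_m < WARN_DISTANCE_M]
    if urgent:
        compositor.pause_all_media()
        for h in urgent:
            compositor.render_warning(h)  # flash it over everything
    else:
        compositor.resume_media()

safety_tick([Hazard("stop_sign", 8.0)], Compositor())
```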
 
Is there any evidence that HoloLens is operating in environments it's never seen before? I was assuming that the environments are pre-scanned, such that it only needs to calculate its location and then map the augmented geometry appropriately onto the geometry it already knows. Doing live, per-frame environment scanning and AR rendering, especially outdoors where your reference points for positional tracking are at arbitrary distances, is a very different animal. I somehow doubt that getting sub-mm absolute positioning from distant outdoor reference points is going to be that easy. And then inside a car you'd have multiple conflicting sources of information (IMU/acceleration forces, motion relative to the interior of the car, relative to exterior buildings, relative to other cars passing by, etc.). I think usable on-the-go AR with the HoloLens promo's features is a *long* way off yet.
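For a sense of what "multiple conflicting sources" means in practice, here's a toy 1-D sketch of blending high-rate IMU dead reckoning with occasional absolute visual position fixes. It's a generic complementary-filter illustration with invented numbers, not anything HoloLens actually does:

```python
DT = 0.01    # 100 Hz IMU samples
ALPHA = 0.5  # equal trust in the dead-reckoned estimate and a fresh visual fix

def fuse(imu_velocities, visual_fixes):
    """imu_velocities: m/s per sample; visual_fixes: {sample_index: position (m)}."""
    position = 0.0
    for i, v in enumerate(imu_velocities):
        position += v * DT                    # dead-reckon between fixes
        if i in visual_fixes:                 # snap partway to the absolute fix
            position = ALPHA * position + (1 - ALPHA) * visual_fixes[i]
    return position

# Walking at 1 m/s for 1 s, but the IMU over-reads by 10% (drift). Sparse
# visual fixes at the true position pull the estimate back toward reality.
biased_imu = [1.1] * 100
fixes = {i: (i + 1) * DT for i in range(24, 100, 25)}
print(fuse(biased_imu, fixes))  # ~1.02 m; pure dead reckoning would give 1.10 m
```

The hard part the post describes is that in a moving car the "visual fixes" themselves disagree, depending on whether they come from the car's interior or the world outside, so there's no single absolute reference to blend toward.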
 
Additionally, you wouldn't even need anything special to have it feeding you a live news feed, Skype call, YouTube video, shopping list, address, text chat, video game, weather updates, stock ticker, sports scores, etc. Many of these could be a serious and dangerous distraction if someone is walking through a busy metropolitan area.

However, that's some point in the future. I'm sure the first versions will strongly recommend that people not use them in those sorts of conditions.

Regards,
SB
 
I think the trick is to have the "AR" be truly AR - such that you are augmenting the world rather than simply throwing information windows onto some kind of 2D HUD. If your AR is able to texture-map places and things so seamlessly that they behave the way street signs or painted road surfaces already do, then you're not so much throwing *more* information at the user as *better* information in the same places they already expect to look. Sat-nav directions painted on the road surface, personalized menu options and prices on the passing McDonald's sign, improved brake lights on other vehicles, wall-hacks for oncoming cars at blind intersections, virtual billboards over rooftops or textured onto the sides of buildings.

The advertisement part is probably key, though - it would basically allow the device to turn any part of a city into a Times Square-like advertising platform, albeit an entirely personalized one, so even if these devices are freakishly expensive they could be subsidized by ads in a way that smartphones never could be. And much like modern web advertising, it would probably be self-moderating, as any overly obnoxious implementation of ads would be a death knell for the device.

There's a substantial amount of sensory noise in the physical world we live in - most information we see during the course of a day is impersonal, if not completely irrelevant, to the people receiving it, so we naturally spend our day filtering out 99% of the things we see. Having the physical environment behave more like a personalized, selective web experience and less like the shotgun, lowest-common-denominator network/cable TV experience could be pretty compelling.

But until we have some reliable method of doing that kind of open-environment spatial tracking, on-the-go AR will probably not be much more than a smart watch attached to your face, a la Google Glass.
 
Can I say something stupid?
Now that wearables and AR are becoming reality, the next step is to integrate something like this:

[image: brain-powered-gadgets.jpg]
 

EEG isn't going to offer anything worthwhile in terms of brain-computer interfaces for consumer devices. The resolution is too poor and it's too susceptible to noise (simple muscle activation in the head/face). It's no surprise that every new EEG demo you see in the literature ends up using some form of semi-autonomous system (quadcopters, scripted animation systems, brain-to-brain, etc.) to give the perception of highly precise and reliable control, when it's actually being used to mask the absence of it. The only place I see it maybe being useful is if you're a quadriplegic and your only other alternative for computer input is sucking or blowing into a straw (and even then you still have guys like Hawking who use their facial muscles for input, as they're far more reliable).

EEG as a technology is like 100 years old, so it's not like it's some new thing that's yet to mature and offer its true value. The last time I got an EEG at a hospital I got bitched at by the tech for not telling her that she had strapped it on too tightly under the chin - the fact that I was having to put tension in my throat against the elastic band was enough to foul up her readings. And that's with a full cap of wet electrodes and lying down in an otherwise restful state. Imagine 3-4 dry electrodes that are not positioned by a trained technician, and being used in an active manner where you're walking, talking, blinking, swallowing, scratching your head, etc. Trying to get a reliable reading of brain activity in that sort of situation is like trying to measure sea life by the size and shape of the waves on the surface.
 
Myoelectric activity isn't going to interfere with EEG signal acquisition, because the frequencies at which they operate are too far apart (roughly 1-40 Hz for EEG, 100-500 Hz for EMG).
Similarly, electrostatic noise - or even the 50/60 Hz hum from wall plugs - shouldn't be much of a problem if you apply a low-pass filter. That said, wet and dry electrodes shouldn't perform that differently using a modern acquisition IC with decent resolution.
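For what it's worth, the band-separation argument is straightforward to sketch. A minimal Python example, assuming SciPy is available, a 1 kHz sample rate, and the 1-40 Hz / 100-500 Hz bands quoted above (illustration only, not a clinical filter design):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # assumed sample rate in Hz

def eeg_bandpass(signal, low=1.0, high=40.0, fs=FS, order=4):
    # Normalize the band edges to the Nyquist frequency for butter().
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)  # zero-phase, so no time shift

# Synthetic test: 10 Hz "alpha" + 60 Hz mains hum + 200 Hz "EMG" activity.
t = np.arange(0, 2.0, 1 / FS)
raw = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 60 * t)
       + 0.8 * np.sin(2 * np.pi * 200 * t))
clean = eeg_bandpass(raw)  # the 10 Hz component dominates the output
```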

The problem with EEG is that it depends a lot on mood. You weren't bitched at because you were moving the muscles in your head; it was the discomfort caused by the elastic band that triggered a bunch of waves from your brain and messed with the overall readings.

Imagine a peripheral that you can only use if you're calm and comfortable... in a video game. You get hit, feel stressed, can't use the peripheral. Your character falls down a ravine, you feel frustrated, can't use the peripheral. You win, feel radiant, can't use the peripheral anymore.
It's a formula for failure, really.
 
That's exactly what HughJ was saying - the background noise of everyday brain activity can't be effectively filtered out to leave the intended, clean input.
 
I was referring to the mention of myoelectric activity of the head muscles being a problem for EEG acquisition.
I may be wrong, but AFAIK it's not a problem, because of the large difference in frequency spectrum between myoelectric and electroencephalographic signals.
 
I didn't read it as that, but just as the twitching of faces and eyes which we all have. ;) Putting it another way, EEGs are never flat because there's always brain activity, even when you're not doing* or thinking anything (*which never really happens unless you're a corpse). It's like trying to read the operations of a CPU by measuring its radio emissions or power draw - you can tell if it's busy or idle, but not what it's up to.
 
Got a tweet about what the press was testing: "@JCrookedSmile Yes, what the press tested was just a focus portion of what it can do. Future events will go into more detail of the work done ;)" My question was in relation to all the complaints about the narrow FOV of the dev kits the press were using.

How do we embed tweets?
 