Controllers and every other aspect of Orbis and Durango

I've only used Move briefly on MAG, and I didn't get on with it. Others have raved about it. In my improved model, you'd have thumbsticks on both controllers for dual-stick movement (move and rotate), and you could use point+shoot for aiming. But I'm not really thinking about that as a control scheme. I'm thinking of things like using the space and orientation between controllers, or pointing near/far into the screen by angling both controllers inwards and triangulating, or effortless switching between gestures and dual sticks, combining realism with our favourite interface.

Move + shooter is a very strange combination. Some people can't get used to it. They may never get used to it. Some people pick it up slowly. Some swear by it and wield it like second nature.

I belong to the slow camp. I think it makes KZ3 SP more immersive and interesting (replayed it with Move). For MP, I'd rather shoot myself. But there are people who headshot with Move quickly. Very odd spread of experiences.

Move + RTS is interesting too, but a touchscreen or maybe pad is better.

Overall, I like Move for titles like Sports Champions and Edmund's Quest.
 
The new PS Eye just seems to be a culmination of all the controller-less gaming research Sony has done since 2004. Each generation is an evolved form of the last.

- EyeToy (320x240@60fps)

- PS Eye (640x480@60fps and 320x240@120fps with 4 array mic)

- PS Eye 2 (1280x720 stereoscopic @ 60fps? and 640x480@120fps? with 4 array mic)
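
If those rumoured specs are right, the jump in raw data between generations is easy to ballpark. A rough sketch (the 2-bytes-per-pixel figure and the assumption that "stereoscopic" doubles the data are mine, not from the spec):

```python
# Rough uncompressed-bandwidth comparison of the rumoured camera generations.
# Assumes 2 bytes per pixel (YUV422-style) and that stereo doubles the data.
def bandwidth_mb_s(width, height, fps, bytes_per_pixel=2, streams=1):
    return width * height * fps * bytes_per_pixel * streams / 1e6

eyetoy  = bandwidth_mb_s(320, 240, 60)               # ~9.2 MB/s
ps_eye  = bandwidth_mb_s(640, 480, 60)               # ~36.9 MB/s
ps_eye2 = bandwidth_mb_s(1280, 720, 60, streams=2)   # ~221 MB/s if truly stereo

print(f"EyeToy:   {eyetoy:.1f} MB/s")
print(f"PS Eye:   {ps_eye:.1f} MB/s")
print(f"PS Eye 2: {ps_eye2:.1f} MB/s")
```

Even before any depth processing, that kind of growth explains why the camera interface (USB bandwidth, dedicated buses) keeps coming up in these discussions.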

Adding their voice recognition library, which was implemented in SingStar, seems natural.
Adding their face tracking with augmented reality library also seems natural.
Using the Move technology, rooted in Dr. Marks' 2004 presentation, also seems natural.

I think the only reason the technologies have all come together so quickly is because of MS's jump into these Sony-researched areas.

I loved Eye of Judgement! :)

Edit: http://www.youtube.com/watch?v=_XF-QGcnVp0

I :love: Eye of Judgment too. It ranks very high on my list (top 5) in terms of playability.

I think accurate PSEye voice recognition is non-trivial. They also need to localize it for multiple languages around the world. Sony may rely more on motion control. We shall see!
 
I :love: Eye of Judgment too. It ranks very high on my list (top 5) in terms of playability.

I think accurate PSEye voice recognition is non-trivial. They also need to localize it for multiple languages around the world. Sony may rely more on motion control. We shall see!

I think one of those videos said it was localized for 20 languages or something like that.

EDIT: Yep, 20 languages. It's in the voice recognition link :) (20 sec mark).
 


I had argued before the original Move was released (when there was talk of a 3D Xbox controller linked to MS's motion wand patents) that there should be a dual Wiimote-style setup that could be linked/split so as to have a functional standard controller AND dual motion controls.

Seems Sony liked the idea.
 
Move + RTS is interesting too, but a touchscreen or maybe pad is better.

OT, but what was the name of the PSN released RTS that I think had Move controls designed/built early on in its development? I was watching it closely before release then completely lost track of it and now I can't even remember its name. :???:
 
I think one of those videos said it was localized for 20 languages or something like that.

EDIT: Yep, 20 languages. It's in the voice recognition link :) (20 sec mark).

Cool! I also like the SingStar voice recognition feature. It's practical and accurate. Sony has some way to go for a hands-free implementation. They need to somehow address the acoustics of every home (beyond echo cancellation). MS has a head start. It is a difficult problem.


OT, but what was the name of the PSN released RTS that I think had Move controls designed/built early on in its development? I was watching it closely before release then completely lost track of it and now I can't even remember its name. :???:

Under Siege?

There is also RUSE.

Under Siege is nice but difficult. The Move RTS implementation is ok but less efficient than a touchscreen UI.
 


I had argued before the org. Move was released (when there was talk of a 3D Xbox controller linked to MS's motion wand patents) that there should be a dual wii-mote setup that could be linked/split so as to have a functional standard controller AND dual motion controls.

Seems Sony liked the idea.

Here is a 10-second drawing of something I think would be way better, accommodating everything from pure motion control to traditional joystick to trackpad play. The trackpad could even be used on its own as a movie remote, and would have a keyboard pattern printed on it for easier typing, backlit whenever a text entry box is on screen. One joystick turns into three. Oh yeah, no more glowing balls at the end... there would instead be an array of LED lights at the top of each joystick. The two halves connect by sliding into each other on a dual rail where they meet.

[attached drawing: 2ugcm6s.png]
 
http://www.vgleaks.com/durango-next-generation-kinect-sensor/

A few interesting points:

As part of the process of producing the depth stream, the sensor uses an active IR stream. This stream is 512 x 424 at 30 frames per second. The active IR stream is stable across variable lighting conditions. For example, shadows, pixel intensities and noise characteristics are the same for a well-lit room as for no light in the room. As a result, this stream could be used for feature detection in situations where a color stream would be useless.

It seems the active IR stream uses the 'calibrated light source' - I assume the raw source resolution means that the calibrated light source averages out?

Core Kinect functionality that is frequently used by titles and the system itself are part of the allocation system, including color, depth, active IR, ST, identity, and speech. Using these features or not costs a game title the same memory, CPU time, and GPU time. These features also provide advantages. For example, the identity system will run across application switches because it is handled by the system, not individual applications, and avoids having to re-engage and sign-in repeatedly.


Always on in skeletal tracking mode. I was kind of thinking they'd done the opposite.

Kinect’s CPU, GPU and memory usage on Durango are part of the system reservation.

We already knew about a CPU and memory reservation, and it's been suspected that there's a GPU reservation for processing the image (although we have no idea how large this reservation is).
 
If the spec is correct (and I'm interpreting it correctly), the depth requirements (0.8 to 4.0m) are the same.

So for me it's useless, as my couch is about 13 feet from my TV and the current Kinect can't see me. For people whose rooms were too small, the story is unchanged as well.
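
Converting the estimate is worth doing: 13 feet works out to just under the stated 4.0 m limit, so a couch at that distance sits right at the edge of the spec (and in practice the usable range is often shorter than the paper figure). A quick check, using the poster's own 13-foot estimate:

```python
# Is a couch ~13 feet from the sensor inside the stated 0.8-4.0 m tracking range?
FEET_TO_M = 0.3048

couch_m = 13 * FEET_TO_M            # ~3.96 m
in_range = 0.8 <= couch_m <= 4.0    # just barely inside the spec on paper

print(f"{couch_m:.2f} m, in spec range: {in_range}")
```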
 
It may need to see less of you to still determine your movements and position (it has a wider field of view that apparently now accommodates up to 6 people at once).

So I'm still a bit hopeful, because otherwise I'd definitely be out as well - the best I can muster is around 1.8m, and that's being very close to cupboards and couches, which the current version had a lot of problems with. The new version may be able to not be bothered by that however.
 
The higher resolution of the sensor is compensating for the higher FoV, it looks like it won't really be more precise than Kinect 1.
 
The higher resolution of the sensor is compensating for the higher FoV, it looks like it won't really be more precise than Kinect 1.

Even if it was the exact same hardware but hooked up to Durango, then by virtue of not being constrained by the shared USB bandwidth on the 360 it will already be 4x as precise. This was shown by devs who hooked it up to PC.
 
Even if it was the exact same hardware but hooked up to Durango, then by virtue of not being constrained by the shared USB bandwidth on the 360 it will already be 4x as precise. This was shown by devs who hooked it up to PC.
How can the depth sensor resolution increase 4 fold simply by hooking it up to a PC?
 
The higher resolution of the sensor is compensating for the higher FoV, it looks like it won't really be more precise than Kinect 1.
To some degree, but I doubt the FoV increased 6x (which is the increase in resolution).

If it can track some of the things they say it can, then it must be more precise.
Well, some of that could be more to do with processing than resolution, but there must be increased resolution, as I mention above.

Even if it was the exact same hardware but hooked up to Durango, then by virtue of not being constrained by the shared USB bandwidth on the 360 it will already be 4x as precise. This was shown by devs who hooked it up to PC.
That's a good point too, but I can't remember the details. The video feed is 320x240, quarter res, but wasn't the skeleton tracking working on other data at full res? I think for the skeleton tracking, the comparison is 640x480 vs 1920x1080.
 
How can the depth sensor resolution increase 4 fold simply by hooking it up to a PC?
Kinect was providing a quarter resolution video feed to devs, due to limits of the interface on XB360. The same Kinect connected to PC provided a 640x480 video feed. I don't think that affected skeleton tracking though. The particulars are probably out there (didn't the PrimeSense tech use a quarter resolution something-or-other anyway?).
 
Precision is linear; it's the physical distance between discrete samples.
From vgleaks, the new depth sensor has an increase in precision of 512/320 = 1.6x
The increased FoV reduces the spatial resolution by tan(70/2)/tan(57/2) = 1.3x
So the spatial resolution of the new depth sensor is 1.6 / 1.3 = 1.23x

But vgleaks also claims "In addition to higher resolution, the depth sensor is more precise." so there's more to it than the sensor resolution. They improved something, but they don't say what it is technically. I assume it's the depth resolution; maybe the point cloud it gives is less noisy.
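
The back-of-envelope numbers above can be reproduced directly (512/320 horizontal pixels, 57° vs 70° horizontal FoV from the leaks); without the intermediate rounding, the net gain comes out closer to ~1.24x:

```python
import math

# Spatial-resolution comparison: pixel gain vs field-of-view widening.
old_px, new_px = 320, 512        # horizontal depth-map resolution
old_fov, new_fov = 57.0, 70.0    # horizontal field of view, degrees

res_gain = new_px / old_px       # 1.60x more samples
fov_cost = (math.tan(math.radians(new_fov / 2))
            / math.tan(math.radians(old_fov / 2)))  # ~1.29x wider scene per pixel row
net = res_gain / fov_cost        # ~1.24x net spatial precision

print(f"resolution gain: {res_gain:.2f}x")
print(f"FoV widening:    {fov_cost:.2f}x")
print(f"net precision:   {net:.2f}x")
```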
 
Is the latency drop enough though?? I was hoping for a bit more.

All in all it definitely looks like Microsoft is being conservative this generation... don't get me wrong, everything has had a nice upgrade and is up to date... but I just don't get the sense we are looking at DARPA projects here.

Perhaps the nice surprise will come in the RRP...
 
I mean... we have a faster USB connection... better sensors... faster processors... the 3-year advantage of assessing the original's problems... and the best we get is a 1/3 decrease in latency, the single biggest problem with the original Kinect??

It certainly looks like a Wii U design philosophy went into Kinect 2... ok, maybe not that bad... but we're looking at something like a $15 BOM for this thing...
 