Avatar Kinect

That's not a particularly fair view. We (everyone not involved in Project Natal) had two conflicting specs published by retailers, one saying a 640x480 sensor and the other a 320x240 depth feed. We also had the PrimeSense reference design and tech descriptions, which say the depth output is a quarter of the depth camera's resolution. There's also the PrimeSense marketing gumpf saying that one advantage of their tech is that it uses off-the-shelf camera components, nothing special, which suggests that for economies Kinect could use the same camera for both vision and depth. So: a 640x480 image camera, and therefore a 640x480 depth camera using the same component (the resolution one lot of retailer specs listed); depth data downscaled to a quarter according to PrimeSense; and another set of specs claiming a 320x240 depth feed. That logically points to a 320x240 depth output, because it fits all the facts we had available.
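
To put numbers on the "quarter of the resolution" step (assuming that means a quarter of the pixel count, i.e. halved along each axis):

\[
640 \times 480 \;\longrightarrow\; \frac{640}{2} \times \frac{480}{2} = 320 \times 240,
\qquad
\frac{320 \cdot 240}{640 \cdot 480} = \frac{1}{4}
\]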

It may be wrong, as speculations often are, but it's disingenuous to say DF's viewpoint was probably just picking one bit of info and ignoring the rest. I don't see how anyone with a logical mind could look at the info we had and not see 640x480 video and 320x240 depth streams as the most probable configuration.
Except that the DF article was printed _after_ the PC folks had shown that the depth feed is 640x480. They had proof that the depth did not have to be at 320x240, and yet they still kept their assumption that we were for some reason not using that resolution. They should have re-evaluated their data at that point.

The only thing I fault DF for in this case is ignoring the other specs out there detailing a 640x480 feed in favour of the one showing 320x240. Their conclusions were logical, and their assumptions were good, just based off an incorrect datum.

I believe both sensors have the same resolution, and using what you know about the primesense tech, you can probably work out what it is.
 
The only thing I fault DF for in this case is ignoring the other specs out there detailing a 640x480 feed in favour of the one showing 320x240. Their conclusions were logical, and their assumptions were good, just based off an incorrect datum.
I think here the problem is the 320x240 figure we got. It was never substantiated, but you also have to wonder where it came from. DF's explanation managed to integrate all points, although I did wonder how, or rather why, you could select output resolution from the device. Why would MS add switchable output resolutions? Some suggested forward thinking for PC support, but it looks like the real answer is that it doesn't!
I believe both sensors have the same resolution, and using what you know about the primesense tech, you can probably work out what it is.
Ha ha, you tease! The other idea entertained when trying to figure out Natal, on getting that first 640x480 resolution spec, was that it was a 1280x960 sensor. 640x480 does seem mighty low for this day and age - is anyone still making them that titchy? What I personally didn't factor in at the time was USB limits; I looked at the video res as a hardware limit, not a choice of output res to fit the bandwidth, which is a distinct possibility. Of course, I was also thinking of two of the same camera being used for economies, but the tear-downs have shown two different cameras. And there's also more processing gubbins in there than we were expecting. It was an interesting puzzle to have a go at, but we weren't terribly successful at pinning down the nature of Kinect from Natal to release. If Richard gets wind of this conversation, maybe he'll have another investigation? It'd be nice to get some direct grabs of the 360's input, instead of the low-res, blurry off-screen captures we tend to get that don't allow particularly fair comparisons.
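
As a rough back-of-the-envelope on those USB limits (my own numbers, assuming a packed 11-bit depth stream and an 8-bit Bayer colour stream, both at 30fps - not official figures):

```python
# Rough bandwidth estimate for candidate Kinect stream configurations.
# Assumptions (mine, not official specs): depth packed at 11 bits/pixel,
# colour sent as raw 8-bit Bayer data, both uncompressed at 30 fps.
USB2_PRACTICAL_MB_S = 35.0  # usable USB 2.0 throughput, well below the 60 MB/s theoretical peak

def stream_mb_per_s(width, height, bits_per_pixel, fps=30):
    """Megabytes per second for one uncompressed video stream."""
    return width * height * bits_per_pixel * fps / 8 / 1e6

configs = {
    "640x480 colour + 320x240 depth":  stream_mb_per_s(640, 480, 8) + stream_mb_per_s(320, 240, 11),
    "640x480 colour + 640x480 depth":  stream_mb_per_s(640, 480, 8) + stream_mb_per_s(640, 480, 11),
    "1280x960 colour + 640x480 depth": stream_mb_per_s(1280, 960, 8) + stream_mb_per_s(640, 480, 11),
}

for name, mb in configs.items():
    verdict = "fits" if mb < USB2_PRACTICAL_MB_S else "doesn't fit"
    print(f"{name}: {mb:.1f} MB/s ({verdict} ~{USB2_PRACTICAL_MB_S:.0f} MB/s of practical USB 2.0)")
```

On those (assumed) numbers either VGA combination fits, but a 1280x960 colour stream on top of VGA depth wouldn't, which would be one reason to choose the output resolutions to fit the bandwidth rather than the sensors.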
 
It doesn't matter if Avatar Kinect is "great" or if it adds anything we really need. What Microsoft has realized, and what Nintendo has proven, is that if you come up with the right product and present it in the right way, it will appeal and sell by the bucketload.

I bought a Kinect recently, and it got me excited at the beginning when I tried out its controller-free interface and Kinect Adventures with friends.

But after a while it occurred to me that, damn... Adventures actually sucks big time... and the Kinect interface in the dashboard was atrocious, unnecessary, and slow compared to the controller... yet... I was using it.

It's kind of sad if you think about it. And crazy :p

Those feature additions to Kinect work just like the packaging/presentation of the product, or like visual merchandising. Only with Kinect it's fused with the product itself. It works. It's constant marketing and a feature simultaneously.

What most people seem not to realize (probably because of how MSFT demonstrates the feature) is that there is no need to say "XBOX.................[wait for items to appear]........Play Disc"; you can issue the commands as one, "XBOX play disc" [no speakable items appear]. I'm someone who makes full use of the 360's triggers and bumpers to navigate through pages and lists, and I don't find speaking the commands much slower than turning on my controller and right-bumpering over from one horizontal page/tile to the next. EDIT: Both methods are faster in some fashion than using my Harmony remote, if only because of layout.
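
To illustrate, here's a rough sketch (purely hypothetical - the command list, function names and behaviour are mine, not the actual dashboard code) of how a recogniser could accept either the chained or the stepwise form:

```python
# Hypothetical sketch of chained vs. stepwise voice commands.
COMMANDS = {"play disc", "pause", "play", "zune", "skip"}  # made-up command list

def handle_utterance(text, awaiting_command=False):
    """Return (action, awaiting_command) for one recognised utterance."""
    words = text.lower().strip()
    if not awaiting_command:
        if not words.startswith("xbox"):
            return None, False                   # speech not addressed to the console
        words = words[len("xbox"):].strip()
        if not words:
            return "show speakable items", True  # bare "XBOX": wait for a follow-up command
    return (words if words in COMMANDS else None), False

# Chained form: one utterance does it all.
print(handle_utterance("XBOX play disc"))        # -> ('play disc', False)

# Stepwise form: "XBOX" first, then the command once the items appear.
_, waiting = handle_utterance("XBOX")            # -> ('show speakable items', True)
print(handle_utterance("play disc", waiting))    # -> ('play disc', False)
```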
 
Except that the DF article was printed _after_ the PC folks had shown that the depth feed is 640x480. They had proof that the depth did not have to be at 320x240, and yet they still kept their assumption that we were for some reason not using that resolution. They should have re-evaluated their data at that point.

The only thing I fault DF for in this case is ignoring the other specs out there detailing a 640x480 feed in favour of the one showing 320x240. Their conclusions were logical, and their assumptions were good, just based off an incorrect datum.

I believe both sensors have the same resolution, and using what you know about the primesense tech, you can probably work out what it is.

So are you saying the depth feed being used by the 360 is 640x480?

I thought we always knew the spec of the camera itself was 640x480, but that the USB setup on the 360 was limiting the allowed resolution.
 
What most people seem not to realize (probably because of how MSFT demonstrates the feature) is that there is no need to say "XBOX.................[wait for items to appear]........Play Disc"; you can issue the commands as one, "XBOX play disc" [no speakable items appear]. I'm someone who makes full use of the 360's triggers and bumpers to navigate through pages and lists, and I don't find speaking the commands much slower than turning on my controller and right-bumpering over from one horizontal page/tile to the next. EDIT: Both methods are faster in some fashion than using my Harmony remote, if only because of layout.

I haven't used the voice recognition yet.
 
OK, just tried the voice recognition out of curiosity. Obviously you can say "XBOX [command here]" without waiting, but I still find it slow and sluggish compared to the controller. The experience is indeed interesting, but after a few times I just don't see much point in the voice recognition.
 
OK, just tried the voice recognition out of curiosity. Obviously you can say "XBOX [command here]" without waiting, but I still find it slow and sluggish compared to the controller. The experience is indeed interesting, but after a few times I just don't see much point in the voice recognition.

It's still in its infancy where Kinect is concerned, but I found it useful with last.FM when changing to the next song from areas outside of the living room, i.e. the kitchen, hallway, etc.
 
I haven't tested it from a totally different room, but it has recognized my voice commands from rooms adjoining the living room, at least with last.FM.

I don't really use the voice commands very often, other than the "Play disc" or "Play next" commands. I assume that it will get updates as Kinect matures. I'm hoping they add voice commands for movie DVDs and for enriching Kinect games. For example, instead of the pause pose, you say "Xbox pause" or maybe even "Xbox main menu".
 
Ok, how about this.

I have a PS3 and Kinect. We were watching the Scott Pilgrim Blu-ray. The pizza guy showed up. I picked up the PS3 controller and tapped the PS button to power it on. I forgot which button was pause, so I hit triangle, then tapped around to pause. Paid for pizza, got slices, went back to the PS3, and the screensaver had started, which disables the triangle menu on the controller. Hit triangle to remove the on-screen controls and hit X to resume.

VS

On another day, we wanted to watch Inception and I remembered it was on Zune Marketplace. I pressed the power button on the 360 (wait for the 360 to boot and Kinect to find the floor). This is the only time I actually pressed a button, by the way. I waved, Kinect recognized my face and signed me in, so now I have Gold member control over the 360. I say "Xbox zune" and wait as Zune starts up. I hand-over (well, it's not a mouse ;)) the Inception box art. I hold-over to buy. The movie starts streaming (takes about 10 seconds to get up to 1080p). Pizza guy shows up (yeah, we eat a lot of pizza). "Xbox, pause". We get our stuff and come back to the 360. The screen has dimmed (screensaver mode). "Xbox, play". And we're back into it.
Who talked about PS3? I was comparing the Xbox 360 controller vs Xbox 360 Kinect.

How about that as a VS?

And then you assumed that you have a screensaver that starts in a matter of seconds, and that you forgot how to use the controller or aren't familiar with it. Well, someone might forget the voice commands too, or might not be familiar with them. It can go both ways.
 
OK, just tried the voice recognition out of curiosity. Obviously you can say "XBOX [command here]" without waiting, but I still find it slow and sluggish compared to the controller. The experience is indeed interesting, but after a few times I just don't see much point in the voice recognition.

I can understand that. If my controller is on and near me I would hit the right bumper to skip to the next chapter rather than saying "XBOX skip", but I would say "XBOX skip" before I fumbled around on my Harmony 880, since its buttons are not distinct and I don't have the muscle memory for their placement (I don't use that remote routinely). The time I spend staring at the buttons on the remote and then pointing (it's IR) and pressing the button is definitely not faster than verbal commands.

I also use more verbal commands because, for whatever reason, Kinect does not pick up my entire living room. There is a serious void when you look at my living room from the Kinect Tuner (the Kinect Tuner needs to be a meta game, lol). For someone like me who starts all of their equipment from a remote that is not the platform holder's controller, verbal commands are much faster. I would also add that, as easy as Harmony remotes make things, my wife finds it much easier to simply use voice commands coupled with gestures to do what she needs/wants to do once everything has turned on.

I'm afraid I don't understand the "slow and sluggish" though. Once something has been instantiated it should take the same time whether it's a verbal command or a button press, correct? Or am I misunderstanding your meaning?

I loved 1 vs 100, so if Avatar Kinect can bring some of that fun back I will surely like it.
 
I'm afraid I don't understand the "slow and sluggish" though. Once something has been instantiated it should take the same time whether it's a verbal command or a button press, correct? Or am I misunderstanding your meaning?

I get the impression he's waiting for feedback (visual cues) from the Xbox, which is not necessary. It's like navigating an automated phone system: if you know the commands you can zip right through.
 
I get the impression he's waiting for feedback (visual cues) from the Xbox, which is not necessary. It's like navigating an automated phone system: if you know the commands you can zip right through.

If you are referring to me, I just say "XBOX [command]" directly. I don't say XBOX and then wait for it to show me the commands before saying what I need to say (although sometimes I do, because I don't always remember the command for the action I want it to make).

I never used a remote for my 360. I use the normal controller, which I find much easier (than a remote) for remembering what button does what, since we already have the buttons mapped in our minds without looking, thanks to our extensive gaming experience with it.


If you want to use hand navigation, you first have to wave your hand to enter a secondary menu (the Kinect Hub), then you need to see where the hand icon is on the screen, try to move it where you want (which has some lag), and wait for the selection bar to fill before it enters where you want to go.

The voice recognition can be faster than that, but again it takes some seconds to process, and on some occasions I had to say the command twice before it got it.
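
For anyone unfamiliar with the mechanic being described, here's a minimal sketch of a dwell-to-select cursor like the hub's hand icon (the names, timings and fixed dwell threshold are my guesses, not Microsoft's code):

```python
# Minimal dwell-to-select ("hover until the bar fills") sketch. It shows why
# the interaction is inherently slower than a button press: every selection
# costs at least the dwell time on top of steering the cursor.
DWELL_SECONDS = 1.5  # guessed threshold before a hovered item activates

class DwellSelector:
    def __init__(self, dwell_seconds=DWELL_SECONDS):
        self.dwell_seconds = dwell_seconds
        self.current_item = None
        self.hover_time = 0.0

    def update(self, hovered_item, dt):
        """Feed the item under the hand cursor each frame; returns the item once the bar fills."""
        if hovered_item != self.current_item:
            self.current_item = hovered_item   # moved to a new tile: restart the fill bar
            self.hover_time = 0.0
            return None
        if hovered_item is None:
            return None
        self.hover_time += dt
        if self.hover_time >= self.dwell_seconds:
            self.hover_time = 0.0
            return hovered_item                # selection bar full: activate
        return None

selector = DwellSelector()
for frame in range(90):                        # hover over one tile at ~30 fps
    chosen = selector.update("Play Disc", dt=1 / 30)
    if chosen:
        print(f"Activated {chosen!r} after {frame / 30:.2f} s of hovering")
        break
```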
 
My car has voice commands; I can say "temperature 74 degrees" and it will set it, and it works quite well. However, there's no point, as the buttons to adjust the temperature are right at hand. It's only useful when I want to call someone.
 
Sometimes it is. It's not always fast, and sometimes it doesn't get the command right the first time. The controller doesn't make mistakes and is directly responsive.
 
The implementation is still in its early stages. I expect that it will improve as time goes on, just as I expect them to add voice commands to other apps (Hulu Plus and Netflix).

As a bonus, my ISP has added ESPN3 support :) It works on my desktop, but I haven't tried it yet on my X360.
 
I get like 4 ESPN channels if I plug in the cable coming from the wall directly to my TV, and I'm not even a cable customer!
 