I think I've seen this mentioned maybe once across the last few 1,000+ post threads on the next-gen consoles, but the idea seems so natural that I can't help but be amazed we still haven't seen something like it.
I'm talking about combining dual controllers, one for each hand, with Kinect's abilities. The controllers themselves wouldn't need any fancy wizardry like the Move or Wiimote; that's what Kinect is for.
Take an Xbox controller, for example, and essentially split it in half. Each hand gets a comfortable palm-held or pistol-grip style controller body with an analog stick for the thumb. Beside the stick, the ABXY buttons on the right and a d-pad on the left sit in a comfortable, familiar position for the thumb, just like on current controllers. You could easily duplicate the trigger and bumper on each side using pistol-like triggers for digits 2 and 3, or use all four fingers by adding buttons for digits 4 and 5.
Some 25 or so years ago I used the controller in the attached image and loved it... the triggers were very intuitive and you didn't have to move your fingers around. Imagine that scaled down a bit for single-hand use... maybe a trigger with haptic feedback for digit 2 and small buttons at the fingertips of digits 3, 4, and 5 on the underside of a more egg-shaped body.
Anyway, how does that fit with Kinect? Well, for one, although I'm sure Kinect 2 will pleasantly surprise us with its improved fine hand-motion tracking accuracy, there could be circumstances where these controllers would help. The current Xbox One controllers already have a small IR emitter for the purpose of keeping gross track of the controller, and it seems likely that a visible reference point on each half could aid Kinect in precisely localizing the hand/controller in space.
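To make that concrete, here's a rough sketch of what I have in mind (plain Python pseudocode; the Vec3, marker, and confidence names are all invented for illustration and aren't anything from a real Kinect SDK). The skeletal hand joint is always available but coarse, while the IR point is precise when it's visible, so you simply lean on whichever estimate you trust more at that moment:

```python
# Purely illustrative sketch: refine Kinect's hand-joint estimate with an IR marker fix.
# Every class and field name here is made up; none of this is a real Kinect SDK call.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def blend(a: Vec3, b: Vec3, w: float) -> Vec3:
    """Linear blend: w=0 returns a, w=1 returns b."""
    return Vec3(a.x + (b.x - a.x) * w,
                a.y + (b.y - a.y) * w,
                a.z + (b.z - a.z) * w)

def fused_hand_position(skeleton_hand: Vec3,
                        ir_marker: Optional[Vec3],
                        ir_confidence: float) -> Vec3:
    """Prefer the IR reference point when it's visible and confidently detected;
    fall back to the coarser skeletal hand joint when it isn't."""
    if ir_marker is None or ir_confidence < 0.5:
        return skeleton_hand
    # Weight the precise-but-sometimes-occluded marker against the
    # always-available-but-noisier skeletal estimate.
    return blend(skeleton_hand, ir_marker, ir_confidence)
```

A real implementation would presumably smooth this over several frames rather than blending per frame, but one precise-when-visible source backing up one always-available source is the whole point of putting an IR dot on the controller.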
Example... you're playing Halo 10, and to aim you take the normal stance you would with a rifle, using the two thumbsticks as your sights. Squeeze the trigger to fire. Literally. The IR reference points might help ensure adequate accuracy, and there'd be no need for Kinect to try to watch your finger to determine whether you fired (no doubt resulting in utter frustration when you die because the sensor missed it), and no need for some contrived artificial motion like stomping your foot to fire.
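If the two halves really are tracked as points in space, the aim-and-fire loop becomes almost trivial. A hedged sketch (the hand positions would come from something like the fusion above, and the game.set_aim / game.fire calls are purely hypothetical):

```python
# Illustrative sketch of the "two halves as sights" idea: the aim ray runs from the
# rear hand through the front hand, and firing is an ordinary trigger event rather
# than a gesture Kinect has to guess at. All names and game-side calls are made up.

import math
from typing import Tuple

Vec3 = Tuple[float, float, float]  # (x, y, z) in sensor space

def aim_direction(rear_hand: Vec3, front_hand: Vec3) -> Vec3:
    dx = front_hand[0] - rear_hand[0]
    dy = front_hand[1] - rear_hand[1]
    dz = front_hand[2] - rear_hand[2]
    length = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0  # avoid divide-by-zero
    return (dx / length, dy / length, dz / length)

def on_input_frame(rear_hand: Vec3, front_hand: Vec3, trigger_pressed: bool, game) -> None:
    game.set_aim(aim_direction(rear_hand, front_hand))  # hypothetical game-side call
    if trigger_pressed:  # a real digital input, not an inferred finger motion
        game.fire()
```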
Which brings up the second huge point. One of the primary arguments against Kinect, and rightly so, is that to use it you have to put down the controller. Or at least you did. The problem for devs this gen is figuring out how to use Kinect to enhance games while you're holding a controller and your hands are tied up. But why? Why should your hands be tied up? And why should Kinect have to figure out whether you're pretending to hit a button when you could just hit the button? Why not let controllers do what they do best... take advantage of hand and finger dexterity... and let Kinect do what it does best? Simultaneously?
This approach sidesteps the problem of having to invent gestures for things you can't physically do in a living room, things that controllers have handled perfectly well for decades... like pressing the stick forward to walk forward, hitting X to jump, or whatever. The same goes for actions that are simply better suited to a button than a gesture, like bringing up a map. The two input approaches can even be combined in novel ways... the same button can have different meanings/actions depending on controller/arm/body position.
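That last idea is really just a lookup keyed on pose. A minimal sketch, assuming the system hands the game some kind of pose label each frame (the pose names and actions here are invented):

```python
# Illustrative sketch: one physical button, several meanings, chosen by whatever pose
# Kinect currently reports. Pose labels and action names are invented for the example.

from enum import Enum, auto

class Pose(Enum):
    RELAXED = auto()        # hands resting at your sides or on your lap
    RIFLE_STANCE = auto()   # hands raised in front, one ahead of the other
    HAND_RAISED = auto()    # one hand up above the shoulder

# The same A button does different things depending on body position.
A_BUTTON_ACTIONS = {
    Pose.RELAXED: "jump",
    Pose.RIFLE_STANCE: "steady_aim",
    Pose.HAND_RAISED: "signal_squad",
}

def on_a_button(current_pose: Pose) -> str:
    """Resolve the A button to a game action for the player's current pose."""
    return A_BUTTON_ACTIONS.get(current_pose, "jump")
```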
Let's daydream one example: go back to Halo 10. You're sitting on your couch, because that's the comfortable way to play a game. You use a stick or d-pad to walk around in the game, hands comfortably by your sides, dangling, resting on your legs... or out in front of you because you're so into the game, cheeks squeezed tight, that it just feels natural... your choice. You see an enemy in the distance, take aim with your rifle, and tap a fingertip button to zoom. You can aim at enemies on screen, yet still look around your environment with a thumbstick just like you always have. You squeeze off a few shots before realizing you're under fire from up close. You toss a grenade in the direction the fire was coming from (by simply making a tossing motion in that direction, no weapon change required) and run for cover. You see something huge and need more than your rifle... so you assume the position of holding a rocket launcher, one hand above your shoulder, and fire with the same trigger... no weapon change required. It's harder to aim accurately without two sights, but it's a rocket launcher after all. Perhaps you get excited and hop to your feet mid-game. That's fine. You literally duck to hide behind a rock and the game responds appropriately. Or you just hit the take-cover/duck button on the controller because you're still sitting on the couch like me. Whatever floats your boat.
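None of those stances seem hard for Kinect to tell apart, either. A sketch of how the classification might look using nothing but a few skeletal joints (the thresholds are invented; a real version would tune or learn them per player and smooth over several frames):

```python
# Illustrative sketch: classify the stances from the daydream above from joint positions.
# Thresholds are made-up placeholders, not values from any real tracking system.

from typing import Tuple

Vec3 = Tuple[float, float, float]  # (x, y, z), with y pointing up and z toward the sensor

def classify_stance(left_hand: Vec3, right_hand: Vec3,
                    left_shoulder: Vec3, right_shoulder: Vec3) -> str:
    left_up = left_hand[1] > left_shoulder[1] + 0.10      # hand roughly 10 cm above the shoulder
    right_up = right_hand[1] > right_shoulder[1] + 0.10
    staggered = abs(left_hand[2] - right_hand[2]) > 0.15  # one hand extended ahead of the other

    if left_up != right_up:                   # exactly one hand above the shoulder
        return "rocket_launcher"
    if staggered and not (left_up or right_up):
        return "rifle"
    return "relaxed"
```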
I'm just winging it here, but the possibilities seem immense. It also seems like a decent compromise on the lag issue: the trigger press registers instantly because it's a button, and if aiming with your body takes a beat longer, well, we can all fire quicker than we can aim in real life too... what's wrong with a game like CoD being even more lifelike? Who knows what approaches devs might take.
There are some obvious technical issues with splitting the controller, but they seem approachable. You'd have to duplicate some systems like the battery and wireless, so perhaps go with a battery technology a bit more advanced than AA... big deal. Real estate in each half would probably be tighter than in half of a standard controller, so things like the rumble motors might get squeezed down a bit. I'm sure the extra bandwidth wouldn't be an issue. Size and weight wouldn't be trivial to nail. But those all seem relatively minor, even if some end up being solved through compromises. Cost would go up. Big deal. Questions would crop up about whether to support both standard and split controllers, and whether devs could assume everyone has the split version or whether it would just be an accessory like Move, the PS Eye, and the first Kinect. Big deal... that's business and marketing; I'm thinking technical.
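Just to put a number on the bandwidth hand-wave (the report size and poll rate here are my assumptions, not the actual Xbox controller protocol):

```python
# Back-of-envelope check on the wireless bandwidth point. The report size and poll rate
# below are assumptions for illustration, not the real Xbox controller protocol.

REPORT_BYTES = 32    # assumed input report: sticks, buttons, triggers, status
POLL_RATE_HZ = 250   # assumed polling rate
HALVES = 2

bits_per_second = REPORT_BYTES * 8 * POLL_RATE_HZ * HALVES
print(f"~{bits_per_second / 1000:.0f} kbit/s total")  # ~128 kbit/s for both halves combined

# Even with generous assumptions, two halves reporting independently stay in the
# low hundreds of kbit/s, a rounding error next to what a 2.4 GHz link can carry.
```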
Thoughts?