Damn you, PSINext! Always pre-stealing our topics! <shakes fist>
But the next generation of camera interfaces can measure the actual distance to objects using infra-red pulses. And they're extremely precise. They're able to trace the exact contour of any shape, and they can track it as it moves toward or away from the camera. This changes everything!
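For anyone curious about the principle behind that, here's a back-of-the-envelope Python sketch of how distance falls out of timing an infra-red pulse's round trip. This is just the time-of-flight idea from the post above; the function name and the example timing are made up for illustration, not taken from any actual camera SDK.

```python
# Time-of-flight in miniature: an infra-red pulse goes out, bounces off an
# object, and comes back. The distance is half the round-trip path length.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, assuming light travels there and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after roughly 13.3 nanoseconds has travelled about
# 4 metres in total, so the object is about 2 metres away.
print(distance_from_round_trip(13.3e-9))  # ~1.99 m
```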
Hold up, he hasn't even started. Cameras with this kind of resolution can do real-time motion capture. So you can dance in front of the camera, and all of your movements can be tracked and then applied to a digital model rendered on the screen. In his next demo, Dr. Marks moved around, and on the screen a skeletal version of himself moved to match. He'd wave his arms and the skeleton would do the same. Physics was built into the simulation, so when he punched his arms forward, the skeleton punched, and it could hit objects around the virtual room. Because the camera was tracking distances, it could actually track where he was in 3D space -- standing in certain spots triggered certain actions, for instance. The EyeToy's motion tracking looks pretty primitive in comparison. Imagine the gaming possibilities of this kind of interface! You'd literally be involved, full body, in the on-screen action, stepping into another character.
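To make the "standing in certain spots triggers certain actions" part concrete, here's a minimal Python/NumPy sketch of how a game might use a per-pixel depth map: find the player as the large near-range region and test their distance against named trigger zones. This is not Sony's actual SDK; the depth frame, zone names, and thresholds are all invented for illustration.

```python
import numpy as np

def player_position(depth_m: np.ndarray, near: float = 0.5, far: float = 2.5):
    """Return (mean column, mean row, mean depth) of pixels in the player range."""
    mask = (depth_m > near) & (depth_m < far)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return cols.mean(), rows.mean(), depth_m[mask].mean()

def zone_for(position, zones):
    """zones maps a name to a (min_depth, max_depth) band in metres."""
    if position is None:
        return None
    _, _, depth = position
    for name, (lo, hi) in zones.items():
        if lo <= depth < hi:
            return name
    return None

# Fake frame: a 'player' standing about 1.2 m away in an otherwise empty 3 m room.
frame = np.full((240, 320), 3.0)
frame[60:200, 140:180] = 1.2
pos = player_position(frame)
print(zone_for(pos, {"punch_range": (0.5, 1.5), "dance_floor": (1.5, 2.5)}))
# -> "punch_range"
```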
McFly said: All those "buildings" are real. The second webcam (a black one) is on the right side of the white webcam.
ROFLrabidrabbit said: Time to clean up your room, get a haircut, get some better clothes, and work out a bit.
Because next gen, graphics superiority won't be dictated by whether the PS3 is the most powerful, but by how you and your surroundings look!
Ugly people with messy rooms are teh l00sers next gen
Computer-generated scenery can be realistically added to live video footage, using a machine vision system developed at Oxford University, UK.
Researchers Andrew Davison and Ian Reid say the augmented-reality system could also, in the longer term, enable robots to navigate more effectively, or it could be used to virtually decorate a real house or plan engineering work. It allows a computer to build an accurate three-dimensional model of the world using only a video camera feed. It can also keep track of the camera's movement within its environment - all in real time.
Previously, it was necessary to calibrate a computer using several markers added to a scene. The Oxford team's machine only requires an object of known size to be placed in its line of sight to perform a complete calibration.
The system then automatically picks out its own visual markers from a scene. By measuring the way these markers move, the computer can judge how far away each marker is. It can also rapidly determine how the camera is moving.
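Here's a hedged Python sketch of the general idea described in the article: automatically pick out visual "markers" (corner features), follow them between frames, and recover the camera's motion from how they move. It uses stock OpenCV feature tracking and an essential-matrix decomposition, not the Oxford researchers' actual code, and it assumes the camera intrinsics K are already known from calibration.

```python
import cv2
import numpy as np

def camera_motion(prev_gray: np.ndarray, next_gray: np.ndarray, K: np.ndarray):
    """Estimate rotation R and translation direction t between two grayscale frames."""
    # Automatically pick out trackable "markers" in the first frame.
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                   qualityLevel=0.01, minDistance=8)
    # Measure how those markers move into the next frame.
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts0, None)
    good0 = pts0[status.flatten() == 1]
    good1 = pts1[status.flatten() == 1]
    # The pattern of marker motion constrains the camera's own movement.
    E, _ = cv2.findEssentialMat(good0, good1, K,
                                method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K)
    # t is known only up to scale; an object of known size (as in the article)
    # is what pins the result down to real-world metres.
    return R, t

# Typical use with a webcam feed (camera matrix values are placeholders):
# K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1]])
# R, t = camera_motion(frame_n_gray, frame_n_plus_1_gray, K)
```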
Next gen really needs something other than just new FPS games, fighters, football, and racing games... with better graphics.
Acert93 said: Imagine some goggles with a see-through LCD overlay and a GPS system.
Alejux said: IMO, this technology alone has no real use in games.
Now, if you add cheap, high-resolution 3D VR goggles with no eyestrain, then it's a whole new ball game.
StarFox said: Next gen really needs something other than just new FPS games, fighters, football, and racing games... with better graphics.
That's exactly what Nintendo have been saying recently, and I agree. We need more than just more of the same next generation.
Acert93 said: Imagine some goggles with a see-through LCD overlay and a GPS system. Troops could fight mock battles as the GPS pushes out the coordinates of enemies. The fake guns could interact and report hits and misses. Basically the opposite of VR, like someone else said -- bringing the VR world to the real world.
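Just to make that idea concrete, here's a rough Python sketch of how such a system might score a shot: the GPS layer supplies enemy coordinates, and a "hit" is any enemy sitting close to the shooter's aiming direction within range. Everything here (the enemy list, the 5-degree aim cone, the 300 m range) is invented for illustration, not part of any real product.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def local_offset_m(lat0, lon0, lat1, lon1):
    """Approximate east/north offset in metres between two nearby GPS fixes."""
    d_lat = math.radians(lat1 - lat0)
    d_lon = math.radians(lon1 - lon0)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    return east, north

def shot_result(shooter, heading_deg, enemies, cone_deg=5.0, max_range_m=300.0):
    """Return the first enemy inside the aim cone and range, or None for a miss."""
    for name, (lat, lon) in enemies.items():
        east, north = local_offset_m(shooter[0], shooter[1], lat, lon)
        dist = math.hypot(east, north)
        bearing = math.degrees(math.atan2(east, north)) % 360.0
        off_axis = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
        if dist <= max_range_m and off_axis <= cone_deg:
            return name
    return None

# Shooter aims roughly north-east; the lone enemy is ~65 m away on that bearing.
enemies = {"red_team_1": (51.7524, -1.2570)}
print(shot_result((51.7520, -1.2577), heading_deg=45.0, enemies=enemies))
# -> "red_team_1"
```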