I saw this at another board, what do you make of it?

GwymWeepa

Regular
http://appft1.uspto.gov/netacgi/nph...ndo.AS.&OS=an/nintendo&RS=AN/nintendo

Edit: Try this: http://tinyurl.com/3jeej

While such pre-rendered texture maps have been used with substantially advantageous results in the past, they have some shortcomings in interactive video game play. For example, texture-mapping a pre-rendered image onto a 3D surface during interactive video game play can successfully create impressive visual complexity but may let down the user who wants his or her video game character or other moving object to interact with that complexity. The tremendous advantage 3D video games have over 2D video games is the ability of moving objects to interact in three dimensions with other elements in the scene. Pre-rendered textures, in contrast, are essentially 2D images that are warped or wrapped onto 3D surfaces but still remain two-dimensional. One analogy that is apt for at least some applications is to think of a texture as being like a complex photograph pasted onto a billboard. From a distance, the photograph can look extremely realistic. However, if you walk up and touch the billboard you will immediately find out that the image is only two-dimensional and cannot be interacted with in three dimensions.

We have discovered a unique way to solve this problem in the context of real-time interactive video game play. Just as Alice was able to travel into a 3D world behind her mirror in the story "Alice Through the Looking Glass", we have developed a video game play technique that allows rich pre-rendered images to create 3D worlds with depth.

In one embodiment, we use a known technique called cube mapping to pre-render images defining a 3D scene. Cube mapping is a form of environment mapping that has been used in the past to provide realistic reflection mapping independent of viewpoint. For example, one common usage of environment mapping is to add realistic reflections to a 3D-rendered scene. Imagine a mirror hanging on the wall. The mirror reflects the scene in the room. As the viewer moves about the room, his or her viewpoint changes so that different objects in the room become visible in the mirror. Cube mapping has been used in the past to provide these and other reflection effects.

We use cube mapping for a somewhat different purpose--to pre-render a three-dimensional scene or universe such as for example a landscape, the interior of a great cathedral, a castle, or any other desired realistic or fantastic scene. We then add depth to the pre-rendered scene by creating and supplying a depth buffer for each cube-mapped image. The depth buffer defines depths of different objects depicted in the cube map. Using the depth buffer in combination with the cube map allows moving objects to interact with the cube-mapped image in complex, three-dimensional ways. For example, depending upon the effect desired, moving objects can obstruct or be obstructed by some but not other elements depicted in the cube map and/or collide with such elements. The resulting depth information supplied to a panoramically-composited cube map provides a complex interactive visual scene with a degree of 3D realism and interactivity not previously available in conventional strictly 2D texture mapped games.
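The depth-buffer-plus-cube-map idea above can be sketched in miniature. This is a minimal illustrative sketch, not the patent's actual implementation: `cube_face_lookup` and `object_visible` are hypothetical names, the face/u/v conventions are simplified, and the per-face "depth buffer" is just a 2D list of eye distances.

```python
import math

def cube_face_lookup(d):
    """Map a 3D direction (from the cube-map centre) to (face, u, v).
    Faces: +x, -x, +y, -y, +z, -z; u, v in [0, 1]. Conventions simplified."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # x-major direction
        face = '+x' if x > 0 else '-x'
        u, v = -z / ax, y / ax
    elif ay >= az:                     # y-major direction
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, -z / ay
    else:                              # z-major direction
        face = '+z' if z > 0 else '-z'
        u, v = x / az, y / az
    return face, (u + 1) / 2, (v + 1) / 2

def object_visible(obj_pos, depth_maps, resolution=4):
    """True if a moving object at obj_pos (relative to the cube-map centre)
    is nearer to the viewer than the pre-rendered scenery in that direction."""
    dist = math.sqrt(sum(c * c for c in obj_pos))
    face, u, v = cube_face_lookup(obj_pos)
    i = min(int(u * resolution), resolution - 1)
    j = min(int(v * resolution), resolution - 1)
    return dist <= depth_maps[face][j][i]   # compare against stored depth
```

The same comparison that decides visibility here is what would let a moving object be obstructed by a pillar baked into the cube map while passing in front of the wall behind it.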
 
Btw, I only copied and pasted what was posted elsewhere; looking at the link, for the life of me I can't find the text description I copied over. Maybe I'm blind or maybe this is bs :p
 
We have discovered a unique way to solve this problem in the context of real-time interactive video game play. Just as Alice was able to travel into a 3D world behind her mirror in the story "Alice Through the Looking Glass", we have developed a video game play technique that allows rich pre-rendered images to create 3D worlds with depth.
Bullshit. Even assuming it could work at all, this would immediately break as soon as one tries to look around a corner, as flat images do not store information about objects behind other objects (except in the movie Blade Runner, that is... :p). And how would windows be handled?

I guess this patent assumes special, pre-created images that know that a window is a transparent surface and not a picture or painting, and keeps track of what's behind other objects, but then what's the point of making 2D images "deep"? If not, a lamp hanging down from the ceiling would completely f*ck up the effect, as would an armchair with a tall backrest, a large potted plant, etc.
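The disocclusion problem being argued here is easy to demonstrate: forward-warp a single depth image to a shifted viewpoint and holes appear exactly where the near object used to be, because the scenery behind it was never stored. A minimal 1D sketch (hypothetical `reproject_scanline` function; parallax simplified to baseline x focal / depth):

```python
def reproject_scanline(depths, baseline, focal=10.0):
    """Forward-warp a 1-D scanline of depths to a horizontally shifted
    camera; return the target scanline, with None marking holes."""
    width = len(depths)
    out = [None] * width
    for x, z in enumerate(depths):
        shift = int(baseline * focal / z)   # parallax ~ 1/depth
        xt = x + shift
        if 0 <= xt < width and (out[xt] is None or z < out[xt]):
            out[xt] = z                     # keep the nearest sample
    return out

# A near object (depth 2) in front of a far wall (depth 20):
scan = [20, 20, 2, 2, 20, 20, 20, 20]
warped = reproject_scanline(scan, baseline=1.0)
holes = [i for i, z in enumerate(warped) if z is None]
# The near object parallaxes away and leaves holes where it stood;
# a flat image has nothing to fill them with.
```

This is exactly the "look around a corner" failure: the single image simply never recorded what was behind the lamp or the armchair.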

I guess we've all seen render-to-texture effects that do just this sort of thing in realtime. I sure as hell know I have.

[0142] While the invention has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is understood that numerous other modifications and variations can be devised without departing from the scope of the invention.
Now they're milking the patent too, making it as broad as possible... :p
 
If you can clamp the camera down to be in distinct locations, this might work. But why do that? Myst is not Nintendo's type of game.
 
Not sure if I'm misinterpreting the whole thing or what, but it sounds like they're talking about some kind of interactive emboss (relief) mapping.

Anybody else draw that conclusion?

Later

Iridius Dio
 
Is the URL correct? I don't find the word "alice" in the URL.

From the other description given, it sounds like some kind of image-based representation where you warp from one panoramic + depth-field view to another. This would require an enormous amount of memory to capture a building like a castle, say, and as the previous posters said there are cracking, hole-filling, and transparency issues. It would be interesting if they also had a compression technique to go along with this. These types of ideas were explored years ago as layered depth images for massive models.
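For anyone who hasn't run into layered depth images: the idea is that each pixel keeps every surface its camera ray crossed, not just the nearest one, so disoccluded surfaces can be recovered when the viewpoint shifts. A toy sketch (the `LayeredDepthImage` class and its methods are hypothetical names for illustration):

```python
class LayeredDepthImage:
    """Each pixel stores every (depth, color) sample along its ray,
    sorted near-to-far, so hidden surfaces survive a viewpoint change."""
    def __init__(self, width, height):
        self.samples = [[[] for _ in range(width)] for _ in range(height)]

    def add(self, x, y, depth, color):
        layers = self.samples[y][x]
        layers.append((depth, color))
        layers.sort()                       # nearest surface first

    def front(self, x, y):
        """What an ordinary depth image would keep."""
        layers = self.samples[y][x]
        return layers[0] if layers else None

    def behind(self, x, y, min_depth):
        """First surface farther than min_depth: exactly the data
        a plain single-layer depth image throws away."""
        for depth, color in self.samples[y][x]:
            if depth > min_depth:
                return (depth, color)
        return None
```

The memory cost the post worries about is visible in the structure itself: every pixel is a list, not a single sample.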
 
PZ said:
Is the URL correct? I don't find the word "alice" in the URL.

From the other description given, it sounds like some kind of image-based representation where you warp from one panoramic + depth-field view to another. This would require an enormous amount of memory to capture a building like a castle, say, and as the previous posters said there are cracking, hole-filling, and transparency issues. It would be interesting if they also had a compression technique to go along with this. These types of ideas were explored years ago as layered depth images for massive models.

Oh I have no clue, this is the link someone posted, but I don't see any reference to the stated text in that patent.
 
I'm a bit curious about why people aren't more interested in this. It is, after all, probably the feature, or one of the features, Nintendo will be touting as revolutionary on their next console.
GwymWeepa, where did you get this (the text you quoted in your first post)?
 
Well, I'd have to see it in action, but even thinking about the camera system described in that patent gives me a headache.
 
Squeak said:
I'm a bit curious about why people aren't more interested in this. It is, after all, probably the feature, or one of the features, Nintendo will be touting as revolutionary on their next console.
GwymWeepa, where did you get this (the text you quoted in your first post)?

Video game play using panoramically-composited depth-mapped cube mapping

I read it last year when it was published but didn't bother posting it. It just reminded me of a fancy IBR version of Apple's QuickTime VR tech...
 
In one embodiment, we use a known technique called cube mapping to pre-render images defining a 3D scene. Cube mapping is a form of environment mapping that has been used in the past to provide realistic reflection mapping independent of viewpoint. For example, one common usage of environment mapping is to add realistic reflections to a 3D-rendered scene. Imagine a mirror hanging on the wall. The mirror reflects the scene in the room. As the viewer moves about the room, his or her viewpoint changes so that different objects in the room become visible in the mirror. Cube mapping has been used in the past to provide these and other reflection effects.

This paragraph of text is so loosely formulated that it would hardly carry any weight before anybody but the US patent officers... Anybody who has had at least remote experience with cubemaps knows what they can and cannot do. Depicting position-dependent reflections, like those in a mirror, is not their strongest feature. It takes a position-dependent cubemap (a sufficiently different breed of cubemap) to achieve this, which of course is nowhere to be seen in that official US patent paragraph. So "cubemaps commonly used for mirrors" is anything between a fallacy and a half-truth, alas suitable for the patent office.
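The point about position independence is baked into how a standard cube-map lookup works: the sampled texel is a function of the ray direction only. A minimal sketch (hypothetical `cubemap_texel` function, simplified face/u/v conventions): note the function takes no position argument at all, which is precisely why a standard cubemap cannot reproduce mirror parallax.

```python
def cubemap_texel(direction, res=8):
    """Standard cube-map lookup: the result depends ONLY on the ray
    direction. Two surfaces at opposite ends of a room reflecting
    parallel rays sample the same texel - no parallax is possible."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    m = max(ax, ay, az)                       # pick the major axis
    if m == ax:
        face, u, v = ('+x' if x > 0 else '-x'), -z / ax, y / ax
    elif m == ay:
        face, u, v = ('+y' if y > 0 else '-y'), x / ay, -z / ay
    else:
        face, u, v = ('+z' if z > 0 else '-z'), x / az, y / az
    # Quantise u, v from [-1, 1] to integer texel coordinates.
    return face, int((u + 1) / 2 * (res - 1)), int((v + 1) / 2 * (res - 1))
```

A position-dependent variant would need the shading point's location as an extra input (e.g. for parallax-corrected lookups against a proxy volume), which is exactly the "different breed" the post is talking about.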
 
The patent link was all gibberish to me, but the quoted paragraphs seemed to imply the use of prerendered "worlds" like Resident Evil, but with dynamic camera positioning, like in Link's house in The Ocarina of Time. I could see this being useful in some scenarios, but not replacing traditional polygon-based world rendering altogether. This alone is not revolutionary.
 
Iron Tiger said:
The patent link was all gibberish to me, but the quoted paragraphs seemed to imply the use of prerendered "worlds" like Resident Evil, but with dynamic camera positioning, like in Link's house in The Ocarina of Time. I could see this being useful in some scenarios, but not replacing traditional polygon-based world rendering altogether. This alone is not revolutionary.

There are some rumors rumbling around that Nintendo will do something different with Mario's camera work; perhaps they'll take dynamic control away from the user (or limit it greatly), and perhaps this technique can aid in creating lush worlds in a game with a relatively static camera.
 
"Again I can't help but think this is like normal mapping in a sense, where you bake data from a higher-detail model onto a lower one. You take a few panoramic cube maps and mesh them together in an algorithm so they can be painted on simpler geometry and still get the detail of a pre-rendered image.

If what I'm saying is accurate, it could be relatively easy to implement, easier than making normal maps of every little thing you want applied to a lower-poly mesh. Instead of pre-rendering some bricks, the floor, ceiling etc. and making maps of everything individually, you make the room in its entirety, create some cube mapping to capture all the data, visual and dimensional (depth), and encode it to a texture; then you create a new room with far fewer polys and apply all the previous data to it."

I was thinking along similar lines when I ran across this by MP. It sounds like a hybrid with displacement mapping, since the depth data can transform meshes, allowing you to interact with actual poly models. Interpreted correctly, this would make environment creation far easier from a developer's standpoint.
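The "depth data can transform meshes" speculation amounts to displacement: push flat proxy geometry out along the view axis by the stored per-texel depth, giving real polygons to collide with. A tiny sketch of that step (hypothetical `displace_quad_strip` function; real displacement would work on a 2D grid with a full camera model):

```python
def displace_quad_strip(depths, spacing=1.0):
    """Turn a row of per-texel depths into 3D vertex positions by
    pushing a flat strip of vertices away from the camera along z."""
    return [(i * spacing, 0.0, z) for i, z in enumerate(depths)]
```

The resulting vertices could feed a normal mesh/collision pipeline, which is the claimed win over a purely flat pre-rendered backdrop.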
 
Basically, RE:Remake's photorealistic visuals with environmental interactivity. (or partial interactivity?) Accomplished with no serious overhead? I have got to see this.
 
I've placed everything here in bold for a reason.

The whole flat-texture-map-with-virtual-three-dimensional-fields idea sounds like relief mapping. I'd thought you could create an entire gaming environment like this, though there are caveats. Sounds like somebody had the same idea.

Later
 
I only read the passage that was quoted here; the embodiment describing the use of cube mapping is just a 360-degree panoramic texture with depth information.
So you could do your Onimusha or RE where the camera can rotate on the spot. As far as that passage goes, it has no relation to relief maps, normal maps, or anything similar.
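The rotate-on-the-spot point is worth making concrete: when the camera only rotates, rendering a view is just rotating the per-pixel ray directions and sampling the panorama; no reprojection or hole filling is ever needed, because every ray still originates at the panorama's centre. A minimal sketch (hypothetical `view_rays` and `rotate_yaw` functions, 1-pixel-high viewport for brevity):

```python
import math

def view_rays(fov_deg, width):
    """Per-column camera ray directions for a 1-pixel-high viewport
    looking down +z."""
    half = math.tan(math.radians(fov_deg) / 2)
    return [(((2 * (x + 0.5) / width) - 1) * half, 0.0, 1.0)
            for x in range(width)]

def rotate_yaw(ray, yaw_deg):
    """Rotate a ray direction about the vertical (y) axis. Panning the
    camera on the spot is nothing more than applying this to every ray
    before the cube-map lookup."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    x, y, z = ray
    return (c * x + s * z, y, -s * x + c * z)
```

Translate the camera instead of rotating it and this breaks down immediately, which is where the depth buffer (and the disocclusion complaints earlier in the thread) come in.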

Not really sure why it would warrant patenting either, but I guess that's something most US patent applications have in common... :?

Basically, RE:Remake's photorealistic visuals with environmental interactivity. (or partial interactivity?)
Nope - same interactivity as in RE - with a partially mobile camera (instead of completely static).

For others that read it - is it worth reading the rest of the patent?
 
Wasn't really referring to the whole thing, just one specific part sounded like relief mapping, but I concur with the previous statement that the wording is poorly formulated.

Later
 
"Resurrection!" - Gill

There are rumors going around that this is what the Revolution is going to be based on. While trying to refute the technical plausibility of such an idea in another forum, the cogs started turning in my brain, and then I was reminded of this thread. Could it be for real? This "augmented reality" mixed with the tech described in the patent sounds like just the sort of mix that would work. It also seems like something that could be implemented by Sony or MS at the 11th hour, and Nintendo would do well to keep it under wraps.
 