WiiGeePeeYou (Hollywood): what IS it?

Looks like Nintendo is using Robert L. Cook of Pixar's Shade Trees / Reyes rendering architecture. Not entirely, though. I wonder about the complexity of such a model, and also the cons compared to Microsoft's DX8-10 pixel shader technology.
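For anyone unfamiliar with the reference: Cook's 1984 "Shade Trees" paper describes shading as an expression tree evaluated per sample rather than one fixed equation. A toy sketch of that idea in C (every name here is invented for illustration; none of this is from the patent):

Code:
#include <stdio.h>

typedef struct Node Node;
struct Node {
    float (*eval)(const Node *self, float u, float v);
    const Node *a, *b;   /* child expressions, if any */
    float k;             /* constant payload, if any  */
};

static float eval_const(const Node *n, float u, float v) { (void)u; (void)v; return n->k; }
static float eval_mul(const Node *n, float u, float v)
{ return n->a->eval(n->a, u, v) * n->b->eval(n->b, u, v); }
static float eval_checker(const Node *n, float u, float v)
{   /* a procedural "texture" leaf */
    (void)n; return (((int)(u * 8) + (int)(v * 8)) & 1) ? 1.0f : 0.2f;
}

int main(void)
{
    Node tex   = { eval_checker, 0, 0, 0.0f };
    Node light = { eval_const,   0, 0, 0.8f };   /* stand-in diffuse term */
    Node shade = { eval_mul, &tex, &light, 0.0f }; /* the tree: tex * light */
    printf("sample at (0.1, 0.9): %f\n", shade.eval(&shade, 0.1f, 0.9f));
    return 0;
}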
 
pc999 said:
Still isnt 16Mgs
"Mega grams"? What, man? You mean megabyte, write MB. Or Mbit for mega...well, you get it. :)

30-60 hours of battery and a camera thing :!:

Battery life is pretty good if true, considering Bluetooth mice hardly last 60 hours on one charge. About 3 days is the most I've ever gotten out of mine, and that doesn't include the time I spent sleeping or otherwise away from the PC, of course.

The "camera thing" is probably just a misinterpretation of whatever info they were given. Since there is an IR window at the front of the Wiimote, it couldn't take any normal pictures anyway, even assuming there is some kind of CCD or CMOS image sensor behind it.

Fox5 said:
Hmm, how come the Doom 3 windows didn't have that same "low-res texture" look that reflections in many games (the Need for Speed series comes to mind) have? Just higher-res texturing?
Yeah, I would think so. They most likely use a render target that is the same size as the screen buffer.
 
Guden Oden said:
Battery life is pretty good if true, considering Bluetooth mice hardly last 60 hours on one charge. About 3 days is the most I've ever gotten out of mine, and that doesn't include the time I spent sleeping or otherwise away from the PC, of course.

To be exact, it's 30 hours for full use and 60 hours with the accelerometer only.
 
Ooh-videogames said:
Looks like Nintendo is using Robert L. Cook of Pixar's Shade Trees / Reyes rendering architecture. Not entirely, though. I wonder about the complexity of such a model, and also the cons compared to Microsoft's DX8-10 pixel shader technology.

Where is that info from?
 
pc999 said:
Where is that info from?

Looking at the patent, you'll see that many references were used to create the shader, which I assume is for the Wii and, previously, the GC to a small extent.

I've noticed, through some searches on the net and things I've seen on television, that many of the FX used in the gaming industry are connected to film animation studios. The guys behind Shrek 1-2 came up with sub-surface scattering to make textures resemble skin a lot more accurately.
 
Ooh-videogames said:
Looking at the patent, you'll see that many references were used to create the shader, which I assume is for the Wii and, previously, the GC to a small extent.

I've noticed, through some searches on the net and things I've seen on television, that many of the FX used in the gaming industry are connected to film animation studios. The guys behind Shrek 1-2 came up with sub-surface scattering to make textures resemble skin a lot more accurately.

Thanks.

Would the number of pixel pipelines limit what a GPU is capable of through shaders?

What's eating at me is: could a GPU with, say, 8 pixel pipelines, or maybe 4, perform FX such as parallax mapping and sub-surface scattering?

Well, depending on the architecture and the resolution, maybe (and on what you call a pixel pipeline). I would expect a well-pushed X1300 to do well in UE3 games at low resolutions; just consider that games like UT2007/BiA3/Crysis must run on this kind of card, or very few people will be able to play those games.
 
Ooh-videogames said:
Looking at the patent, you'll see that many references were used to create the shader, which I assume is for the Wii and, previously, the GC to a small extent.

I've noticed, through some searches on the net and things I've seen on television, that many of the FX used in the gaming industry are connected to film animation studios. The guys behind Shrek 1-2 came up with sub-surface scattering to make textures resemble skin a lot more accurately.

I believe I recall hearing about sub-surface scattering way before Shrek 1 came out. Or did you not mean to imply that the technique was created for the movie? Or was the movie just in production long enough that the technique was publicized before the movie came out?
 
Fox5 said:
I believe I recall hearing about sub-surface scattering way before Shrek 1 came out. Or did you not mean to imply that the technique was created for the movie? Or was the movie just in production long enough that the technique was publicized before the movie came out?

HBO showed a making-of video for Shrek 2. During the video, one of the guys from the studio said that they created this shader algorithm for Shrek 2, which he called sub-surface scattering. The studio is called PDI. They released papers at SIGGRAPH this year concerning the implementation of SSS.

One of the inventors is Juan Buhler of PDI/DreamWorks; the other is Henrik Wann Jensen of Stanford University.

http://graphics.stanford.edu/~henrik/papers/fast_bssrdf/fast_bssrdf.pdf

http://silicon-valley.siggraph.org/MeetingNotes/Shrek.html

I believe the technique was publicized before the movie. It was used for Gollum/Sméagol.
 
Just to be precise: the real-time implementation and quality of sub-surface scattering is different from the offline-renderer one.

In real time, developers use a PRT/texture-space lighting model to obtain the SSS effect. Star Ocean 3 uses this effect, via PRT (SH) lighting, if I remember correctly.
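To give an idea of the texture-space half of that trick, here is a minimal CPU-side sketch: light the surface into its UV-space texture, blur that texture to imitate light diffusing under the skin, then map it back onto the mesh at draw time. The map size and the single box blur are my own simplifications, not how any particular game does it:

Code:
#include <string.h>

#define W 256
#define H 256

/* lightmap[y][x] holds per-texel diffuse lighting rendered in UV space */
void diffuse_in_texture_space(float lightmap[H][W], int radius)
{
    static float tmp[H][W];
    memcpy(tmp, lightmap, sizeof(tmp));
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float sum = 0.0f; int n = 0;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sx >= W || sy < 0 || sy >= H) continue;
                    sum += tmp[sy][sx]; ++n;
                }
            /* the blurred lighting stands in for subsurface diffusion */
            lightmap[y][x] = sum / (float)n;
        }
}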
darkblu said:
yes. although for full correctness you may need to update both the reflection/illumination map and the perturbance/bump map on a per-frame basis if you want to account for totally dynamic scene configurations.
Well, technically Normal Mapping is a form of Indirect Texturing, just like EMBM is.
 
Vysez said:
Well, technically Normal Mapping is a form of Indirect Texturing, just like EMBM is.

well, i originally meant embm (sorry, i did not make that clear), but if we consider normal mapping as indirect texturing then, again, you will have to additionally transform your bump texture samples before using them for indirection, given you have a dynamic scene setup (i.e. one where viewer, surface and light-source positions all change). which is semantically equivalent to re-generating your bump/perturbance map.
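For readers following along, here is a rough CPU-side model of what an EMBM-style indirect lookup does: a sample from a perturbation (du/dv) map offsets the coordinates used to fetch the environment map. The map sizes and scale factor are invented for the example:

Code:
#define ENV_W 128
#define ENV_H 128
#define BUMP_W 64
#define BUMP_H 64

typedef struct { float du, dv; } Perturb;

float embm_sample(const float env[ENV_H][ENV_W],
                  const Perturb bump[BUMP_H][BUMP_W],
                  float u, float v, float scale)
{
    /* first texture stage: fetch the perturbation */
    int bx = (int)(u * (BUMP_W - 1));
    int by = (int)(v * (BUMP_H - 1));
    Perturb p = bump[by][bx];

    /* note: for a dynamic scene, p would first have to be transformed
       into the proper space, as darkblu points out above */

    /* second stage: perturbed (indirect) env-map lookup */
    float eu = u + p.du * scale;
    float ev = v + p.dv * scale;
    if (eu < 0) eu = 0; if (eu > 1) eu = 1;
    if (ev < 0) ev = 0; if (ev > 1) ev = 1;
    return env[(int)(ev * (ENV_H - 1))][(int)(eu * (ENV_W - 1))];
}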
 
Fox5 said:
Hmm, how come the Doom 3 windows didn't have that same "low-res texture" look that reflections in many games (the Need for Speed series comes to mind) have? Just higher-res texturing?
The engine doesn't use a small, fixed power-of-two texture size for refractions; it uses a dynamically sized non-power-of-two texture that allows a 1:1 mapping. It's basically a secondary backbuffer.
Hence you'll see resizing artifacts only in areas with extreme surface distortion.
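A sketch of why that 1:1 mapping hides the usual low-res look: the refracted fetch lands on a texel that corresponds exactly to a framebuffer pixel, so undistorted areas reproduce the scene losslessly. The buffer dimensions and clamping here are placeholders of mine:

Code:
#define SCR_W 640
#define SCR_H 480

unsigned refract_fetch(const unsigned backbuffer_copy[SCR_H][SCR_W],
                       int px, int py,             /* pixel being shaded    */
                       float offs_x, float offs_y) /* distortion, in pixels */
{
    int sx = px + (int)offs_x;
    int sy = py + (int)offs_y;
    if (sx < 0) sx = 0; if (sx >= SCR_W) sx = SCR_W - 1;
    if (sy < 0) sy = 0; if (sy >= SCR_H) sy = SCR_H - 1;
    /* with zero offset this returns the exact pixel already behind the
       glass, which is why artifacts only show where distortion is strong */
    return backbuffer_copy[sy][sx];
}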
 
pc999 said:
Just one question: could the lighting of the RS trailer (or even the one on that guy in Disaster, in a video posted many pages ago) be done via indirect texturing?

Some screenshots:

http://media.wii.ign.com/media/821/821973/img_3701191.html

http://media.wii.ign.com/media/821/821973/img_3701198.html

http://media.wii.ign.com/media/821/821973/img_3701190.html

Trailer

from what footage i've seen so far of this game, it seems like they use vertex lighting over high-density meshes but with HDR-style luminosity post-processing. they also seem to use tons of static lightmaps, but the most interesting part is the nice (self) shadowing. quality-wise it looks like shadow mapping, but it can't be, given its self-inflicting nature. if i were to heavily speculate, i'd guess some technique derived from the classic light-space depth-based shadows, requiring screen-to-light-space projection (possibly through an on-the-fly-generated indirection map) for a look-up into a (filtered) depth buffer...
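here's a bare-bones version of the depth-compare step i have in mind (matrix layout, map size and bias are my own placeholders, nothing known about the actual game):

Code:
#define SM_RES 512

/* row-major 4x4 multiply of (x, y, z, 1) */
static void xform(const float m[16], const float p[3], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[r*4+0]*p[0] + m[r*4+1]*p[1] + m[r*4+2]*p[2] + m[r*4+3];
}

int in_shadow(const float light_viewproj[16],
              const float depth_map[SM_RES][SM_RES],
              const float world_pos[3], float bias)
{
    float c[4];
    xform(light_viewproj, world_pos, c);
    /* perspective divide, then map from [-1,1] to [0,1] */
    float u = (c[0] / c[3]) * 0.5f + 0.5f;
    float v = (c[1] / c[3]) * 0.5f + 0.5f;
    float z = (c[2] / c[3]) * 0.5f + 0.5f;
    if (u < 0 || u > 1 || v < 0 || v > 1) return 0; /* outside the map */
    float stored = depth_map[(int)(v * (SM_RES-1))][(int)(u * (SM_RES-1))];
    return z - bias > stored; /* 1 = occluded, hence the self-shadowing */
}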

Ooh-videogames said:
Can Flipper Z-texture?

i just _think_ that flipper allows you to copy an arbitrary region out of the edram framebuffer into the shared memory, and subsequently re-introduce that as a texture source. that's highly speculative on my part though (never written a single line of code on the cube, just recalling unofficial docs). but i also think some people on these forums can give an authoritative answer to that *may take some nudging* ; )
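for reference, this is roughly what that path looks like through the homebrew libogc API (these are the unofficial homebrew names; the official SDK surely differs, so treat it as a sketch, not an authoritative answer):

Code:
#include <gccore.h>
#include <malloc.h>

static GXTexObj tex;

/* copy a w x h region of the embedded framebuffer out to main memory
   and rebind it as a texture -- the mechanism speculated about above */
void copy_efb_to_texture(u16 w, u16 h)
{
    /* texture images must sit in main memory, 32-byte aligned
       (a real program would allocate this once, not per call) */
    void *img = memalign(32, GX_GetTexBufferSize(w, h, GX_TF_RGBA8, GX_FALSE, 0));

    GX_SetTexCopySrc(0, 0, w, h);                 /* region of the EFB      */
    GX_SetTexCopyDst(w, h, GX_TF_RGBA8, GX_FALSE);
    GX_CopyTex(img, GX_FALSE);                    /* GX_FALSE = don't clear */
    GX_PixModeSync();                             /* wait for the copy      */

    GX_InitTexObj(&tex, img, w, h, GX_TF_RGBA8, GX_CLAMP, GX_CLAMP, GX_FALSE);
    GX_InvalidateTexAll();                        /* flush the texture cache */
    GX_LoadTexObj(&tex, GX_TEXMAP0);              /* reuse as a texture      */
}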
 
darkblu said:
from what footage i've seen so far of this game, it seems like they use vertex lighting over high-density meshes but with HDR-style luminosity post-processing. they also seem to use tons of static lightmaps, but the most interesting part is the nice (self) shadowing. quality-wise it looks like shadow mapping, but it can't be, given its self-inflicting nature. if i were to heavily speculate, i'd guess some technique derived from the classic light-space depth-based shadows, requiring screen-to-light-space projection (possibly through an on-the-fly-generated indirection map) for a look-up into a (filtered) depth buffer...

Thanks.

Vertex lighting :oops: , given the floor and this (the thing we put our hands on, at the right) I thought it would be something better. In that case it's quite bad, as we'll get either good lighting or complex/big scenes, not both. I had suspected the static lightmaps, and that info about the shadowing.

Would it be that hard to upgrade the GPU a bit so it can give us a few more things? :devilish: :devilish: :devilish:
 
pc999 said:
In that case it's quite bad, as we'll get either good lighting or complex/big scenes, not both.

Doesn't per-pixel lighting require a lot more processing power than per-vertex? Remember, it doesn't matter how it's done as long as the result looks good.
 
pc999 said:
Thanks.

Vertex lighting :oops: , given the floor and this (the thing we put our hands on, at the right) I thought it would be something better. In that case it's quite bad, as we'll get either good lighting or complex/big scenes, not both. I had suspected the static lightmaps, and that info about the shadowing.

Would it be that hard to upgrade the GPU a bit so it can give us a few more things? :devilish: :devilish: :devilish:

pc, unless you're a gamedev, i suggest you don't worry about these things. the best production-level visuals i've seen in my life have been a result of the technical prowess of devs (and talent of artists), and much less of the technical perfection of the hardware. e.g. the best fake global illumination model i've seen to date in-production on a console runs on the ps2.
 
Last edited by a moderator:
fearsomepirate said:
Doesn't per-pixel lighting require a lot more processing power than per-vertex?

I think that depends on how many polys you have? (rough numbers in the sketch below)
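To put crude numbers on it, a back-of-the-envelope comparison: per-vertex lighting scales with vertex count, per-pixel with shaded pixels (resolution times overdraw). Every figure here is made up purely to show the scaling:

Code:
#include <stdio.h>

int main(void)
{
    long pixels_shaded = 640L * 480L * 2;   /* 480p with ~2x overdraw      */
    long sparse_mesh   = 20000;             /* light evaluations per frame */
    long dense_mesh    = 500000;

    printf("per-pixel : %ld light evaluations\n", pixels_shaded);
    printf("per-vertex, sparse mesh: %ld\n", sparse_mesh);
    printf("per-vertex, dense mesh : %ld\n", dense_mesh);
    /* once the mesh is dense enough, per-vertex cost approaches per-pixel
       cost -- but its quality approaches per-pixel too, which is
       presumably the point of lighting a high-density mesh per vertex */
    return 0;
}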

Remember, it doesn't matter how it's done as long as the result looks good

I completely agree with that, but if it is per-vertex then the lighting may get worse very fast as scene complexity grows.

pc, unless you're a gamedev, i suggest you don't worry about these things. the best production-level visuals i've seen in my life have been a result of the technical prowess of devs (and talent of artists), and much less of the technical perfection of the hardware. e.g. the best fake global illumination model i've seen to date in-production on a console runs on the ps2.

I just want to know the potential of the console; plus, good HW helps the devs too. BTW, which PS2 game is that?
 