WiiGeePeeYou (Hollywood) what IS it ?

Sorry, you got it the wrong way around. Per pixel lighting - at least as implemented in Doom3 and similar games - requires normal maps (DOT3 bumpmap). Unfortunately, it also requires cubemapping, which the gamecube ironically CANNOT do!
Per-pixel lighting doesn't require cubemapping at all; I'm assuming you want cubemapping for the normalization.
A/ supply the vectors yourself already normalized (this is how I've done it on lesser hardware)
B/ on better hardware you have a normalize instruction
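For what option A amounts to in practice, here is a minimal, hedged sketch of DOT3 per-pixel diffuse: a tangent-space normal fetched from a normal map is dotted with a light vector that was normalized per vertex. The names and the unpack convention are illustrative assumptions, not any particular SDK's API.

```c
/* Minimal sketch of DOT3 per-pixel diffuse lighting, assuming a
 * tangent-space normal map and a light vector normalized per vertex
 * (option A above).  Names and the 0..255 -> -1..1 unpack convention
 * are illustrative, not any particular SDK's API. */
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Unpack an 8-bit-per-channel normal-map texel into a roughly unit vector. */
static vec3 unpack_normal(unsigned char r, unsigned char g, unsigned char b)
{
    vec3 n = { r / 127.5f - 1.0f, g / 127.5f - 1.0f, b / 127.5f - 1.0f };
    return n;
}

/* One combiner's worth of work: diffuse = max(N.L, 0) * albedo. */
static float dot3_diffuse(vec3 n, vec3 l, float albedo)
{
    float ndotl = dot3(n, l);
    if (ndotl < 0.0f) ndotl = 0.0f;   /* clamp, as fixed-function combiners do */
    return ndotl * albedo;
}

int main(void)
{
    vec3 n = unpack_normal(128, 128, 255);   /* "flat" normal, roughly (0,0,1) */
    vec3 l = { 0.0f, 0.0f, 1.0f };           /* already normalized per vertex  */
    printf("diffuse = %f\n", dot3_diffuse(n, l, 0.8f));
    return 0;
}
```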
 
Guden Oden said:
Sorry, you got it the wrong way around. Per pixel lighting - at least as implemented in Doom3 and similar games - requires normal maps (DOT3 bumpmap). Unfortunately, it also requires cubemapping, which the gamecube ironically CANNOT do!

Yes, as far as I know, Dot3 bumpmapping and cubemapping have to be implemented via Gekko and thus have rather poor performance.
 
If my memory doesn't fail me, Flipper is like the GeForce2 in technology, and the TEV is something between the NSR of the NV15 architecture and the Pixel Shaders of the NV2x architecture. Am I wrong?

I didn't know that some graphics effects on the GCN must be done with the help of Gekko. I am sure that implementing them in the GPU would need a change in the GPU and a new GPU architecture.

Every day I am more sure that Hollywood is an optimized Flipper.
 
Guden Oden said:
Sorry, you got it the wrong way around. Per pixel lighting - at least as implemented in Doom3 and similar games - requires normal maps (DOT3 bumpmap). Unfortunately, it also requires cubemapping, which the gamecube ironically CANNOT do!

Huh? Since when does phong shading require normal and cube maps? And back when Rebel Strike was coming out, Julian Eggebrecht said there were cube maps in it. I know I've at least seen sphere maps in a number of Cube games. Environment mapping is listed on the Flipper specs, so the choice of sphere mapping over cube mapping is probably more due to fillrate/memory issues than featureset.

See also here:
http://www.gamasutra.com/features/20021002/sauer_01.htm
and here:
http://cube.ign.com/articles/094/094556p1.html

Factor 5 used per-pixel lighting on some surfaces for diffuse/specular, but apparently not all.
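As an aside on the sphere-map route mentioned above: the classic sphere-map texcoord generation needs only a single 2D texture rather than six cube faces, which is presumably the fillrate/memory saving in question. A small illustrative sketch follows; the math is the generic sphere-map formula, with no console API implied.

```c
/* Hedged sketch of sphere-map environment mapping: UVs derived from a
 * view-space reflection vector with the classic sphere-map formula.
 * Purely illustrative; no console API implied. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

/* Classic sphere-map mapping: m = 2*sqrt(rx^2 + ry^2 + (rz+1)^2),
 * u = rx/m + 0.5, v = ry/m + 0.5. */
static void sphere_map_uv(vec3 r, float *u, float *v)
{
    float m = 2.0f * sqrtf(r.x * r.x + r.y * r.y + (r.z + 1.0f) * (r.z + 1.0f));
    *u = r.x / m + 0.5f;
    *v = r.y / m + 0.5f;
}

int main(void)
{
    vec3 r = { 0.0f, 0.7071f, 0.7071f };   /* some unit reflection vector */
    float u, v;
    sphere_map_uv(r, &u, &v);
    printf("sphere-map uv = (%f, %f)\n", u, v);
    return 0;
}
```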
 
Guden Oden said:
I think you're trying to overexaggerate what has been said.

Actually, the cube's TEV is less powerful than the "pixel shaders" of the NV2A GPU used in the xbox. And calling those pixel shaders might be somewhat of an exaggeration as well actually, as they're very primitive with limited instructions and very limited instruction slots...

ERP said that both TEV and Pixel Shader were simply acronyms for a programmable color combiner, the only real differences between the hardware being that TEV interleaves its reads with its combines (an advantage over NV2A's pixel shader) but is "a little bit more limited" (ERP's words) in its combiner operations. So I wouldn't say pc999 was exaggerating ERP's words much at all.
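To make the "programmable colour combiner" point concrete, below is a rough, simplified model of one combiner stage. The real TEV and NV2x combiners differ in their exact input selection, stage counts, and operations, so treat this as an assumed abstraction, not either chip's documented equation.

```c
/* Rough, assumed sketch of a single colour-combiner stage, of the general
 * form  out = clamp((d +/- lerp(a, b, c) + bias) * scale).  Inputs a, b,
 * c, d stand in for whatever the stage was wired to read (texture colours,
 * rasterized colour, constants, or a previous stage's output). */
#include <stdio.h>

static float clamp01(float x) { return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x); }

static float combiner_stage(float a, float b, float c, float d,
                            int subtract, float bias, float scale)
{
    float blend = (1.0f - c) * a + c * b;          /* lerp between a and b by c   */
    float out = subtract ? d - blend : d + blend;  /* add or subtract against d   */
    return clamp01((out + bias) * scale);
}

int main(void)
{
    /* With a = 0 the lerp reduces to c*b, i.e. texture colour modulated
     * by a lighting term. */
    float result = combiner_stage(0.0f, 0.8f, 0.6f, 0.0f, 0, 0.0f, 1.0f);
    printf("stage output = %f\n", result);   /* 0.48 */
    return 0;
}
```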
 
zed said:
Per-pixel lighting doesn't require cubemapping at all; I'm assuming you want cubemapping for the normalization.
A/ supply the vectors yourself already normalized (this is how I've done it on lesser hardware)

this works only for directional lights.
for phong with positional lights (i.e. not at infinity), the closer the light source is to the surface, the more you need per-fragment normalisation, and pre-normalisation does not save you (the vectors get de-normalised during the lerp). IOW, for positional lights you need cube maps or a normalisation op, or you live with the dull illumination that the denormalised vectors produce.

alternatively (i've never tried it, but i can't see why not), you should be able to use 3d textures for normalisation. of course, the chances that a piece of hw supports 3d maps but not cube maps are rather minuscule.
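A small numeric illustration of the point about the lerp denormalising the vectors, assuming nothing beyond basic vector math:

```c
/* Two unit light vectors lerped across a polygon yield a shorter-than-unit
 * vector, which dulls N.L; a normalize op (or a normalization cube map)
 * restores the length.  Pure illustration, no particular API assumed. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static float length3(vec3 v) { return sqrtf(v.x*v.x + v.y*v.y + v.z*v.z); }

static vec3 lerp3(vec3 a, vec3 b, float t)
{
    vec3 r = { a.x + (b.x - a.x) * t,
               a.y + (b.y - a.y) * t,
               a.z + (b.z - a.z) * t };
    return r;
}

static vec3 normalize3(vec3 v)
{
    float inv = 1.0f / length3(v);
    vec3 r = { v.x * inv, v.y * inv, v.z * inv };
    return r;
}

int main(void)
{
    /* Per-vertex light vectors for a light close to the surface: they point
     * in very different directions at the two vertices. */
    vec3 l0 = { 1.0f, 0.0f, 0.0f };
    vec3 l1 = { 0.0f, 1.0f, 0.0f };

    vec3 mid = lerp3(l0, l1, 0.5f);   /* what the rasterizer hands the fragment */
    printf("interpolated length = %f\n", length3(mid));             /* ~0.71 */
    printf("renormalized length = %f\n", length3(normalize3(mid))); /* 1.0   */
    return 0;
}
```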
 
I read the comments again and my memory may have tricked me, but only a bit. Anyway, others like Factor 5 said they are on par (although they may be a bit biased).

BTW, how do you do normal mapping without per-pixel lighting?
 
zed said:
Per-pixel lighting doesn't require cubemapping at all; I'm assuming you want cubemapping for the normalization.
Well, as I was saying, the way it's done in Doom3 and similar, there's a cubemap requirement... When there's a context, one has to take that context into account when replying. :p

B/ on better hardware you have a normalize instruction
Naturally, but on flipper this isn't really the case. :)

Edit:
Fearsomepirate,
What I was talking about isn't phong shading. And (spherical) environment mapping is something completely different from cubemapping, particularly as Flipper is utterly and completely unable to perform the latter. It just isn't possible, the chip isn't geared to perform those types of operations. No matter what Julian Eggebrecht may or may not have said, that guy always talked too much for his own damn good anyway. :D


Oh, and PS: What about that damn interview? :devilish: Lookin' more and more fake by each day that passes to me!
 
I just got high-speed yesterday. I haven't talked to Jessica yet about it.

Are you 100% positive the Flipper can't process a cube environment map? All I have to go on is that Eggebrecht said they implemented cube mapping in his Rogue Squadron games. I take it you've also developed on Cube? Did you ever get self-shadowing to work? Because that's another effect that if it weren't so obvious in RL, people would say Flipper can't do it, given the 0% implementation rate in other Cube software.
 
fearsomepirate said:
I just got high-speed yesterday. I haven't talked to Jessica yet about it.

Are you 100% positive the Flipper can't process a cube environment map? All I have to go on is that Eggebrecht said they implemented cube mapping in his Rogue Squadron games. I take it you've also developed on Cube? Did you ever get self-shadowing to work? Because that's another effect that if it weren't so obvious in RL, people would say Flipper can't do it, given the 0% implementation rate in other Cube software.

Cube mapping can be done on the GCN, just not on Flipper; Gekko has to be used for it. It is just not very efficient. Julian Eggebrecht never said he did it with Flipper alone, IIRC.
 
hupfinsgack said:
Cube mapping can be done on the GCN, just not on Flipper; Gekko has to be used for it. It is just not very efficient. Julian Eggebrecht never said he did it with Flipper alone, IIRC.

hupfinsgack, cube map lookups are per-fragment, so to do that on the cpu you'd have to:
1) output your to-be-normalised vectors into a render target
2) run the cpu over that render target, effectively doing the cube lookup
3) re-introduce the render target, with now-normalised vectors, back into the gpu pipeline

note that the 2nd stage would be anything but cheap, and the price would depend on the screen-projected size of the surface you're trying to cube-map. i can hardly see that done in realtime for large amounts of screen real estate. for the record, i'm doing it in sw in thurp, and i can imagine how slow it'd be on a gekko-level cpu - the biggest issue being that the algorithm has some substantial flow control going on, and a potent SIMD set can't help you much with that.
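For a sense of the flow control being described, here is a hedged sketch of a software cube-map lookup per fragment: a branch on the dominant axis to pick a face, then divides to get face UVs. The face ordering and UV conventions are generic assumptions, not any console SDK's.

```c
/* Hedged sketch of a per-fragment software cube-map lookup.  For every
 * vector you branch on the dominant axis to select a face, then divide to
 * get face UVs; this per-fragment branching is what makes it expensive on
 * a CPU.  Face/UV conventions here are generic, not any console SDK's. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

/* Returns the face index (0..5 = +X,-X,+Y,-Y,+Z,-Z) and writes face UVs. */
static int cube_lookup(vec3 d, float *u, float *v)
{
    float ax = fabsf(d.x), ay = fabsf(d.y), az = fabsf(d.z);
    int face;
    float ma, sc, tc;

    if (ax >= ay && ax >= az) {        /* X-major */
        ma = ax;
        face = d.x > 0.0f ? 0 : 1;
        sc = d.x > 0.0f ? -d.z : d.z;
        tc = -d.y;
    } else if (ay >= az) {             /* Y-major */
        ma = ay;
        face = d.y > 0.0f ? 2 : 3;
        sc = d.x;
        tc = d.y > 0.0f ? d.z : -d.z;
    } else {                           /* Z-major */
        ma = az;
        face = d.z > 0.0f ? 4 : 5;
        sc = d.z > 0.0f ? d.x : -d.x;
        tc = -d.y;
    }
    *u = 0.5f * (sc / ma + 1.0f);      /* map [-1,1] to [0,1] */
    *v = 0.5f * (tc / ma + 1.0f);
    return face;
}

int main(void)
{
    float u, v;
    vec3 d = { 0.3f, -0.9f, 0.2f };
    int face = cube_lookup(d, &u, &v);
    printf("face %d, uv = (%f, %f)\n", face, u, v);
    return 0;
}
```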
 
fearsomepirate said:
I just got high-speed yesterday. I haven't talked to Jessica yet about it.

Are you 100% positive the Flipper can't process a cube environment map? All I have to go on is that Eggebrecht said they implemented cube mapping in his Rogue Squadron games. I take it you've also developed on Cube? Did you ever get self-shadowing to work? Because that's another effect that if it weren't so obvious in RL, people would say Flipper can't do it, given the 0% implementation rate in other Cube software.
I've heard that the self-shadowing in RL was possible because the things that cast shadows onto each other (e.g. the X-Wings' foils) aren't actually the same model; they're separate. So it isn't actually self-shadowing in the strictest sense.
 
darkblu said:
hupfinsgack, cube map lookups are per-fragment, so to do that on the cpu you'd have to:
1) output your to-be-normalised vectors into a render target
2) run the cpu over that render target, effectively doing the cube lookup
3) re-introduce the render target, with now-normalised vectors, back into the gpu pipeline

note that the 2nd stage would be anything but cheap, and the price would depend on the screen-projected size of the surface you're trying to cube-map. i can hardly see that done in realtime for large amounts of screen real estate. for the record, i'm doing it in sw in thurp, and i can imagine how slow it'd be on a gekko-level cpu - the biggest issue being that the algorithm has some substantial flow control going on, and a potent SIMD set can't help you much with that.

As I said, it's hugely inefficient, but it is the only way it can be done on the GCN, as there's no support for cube mapping on Flipper. I remember we had a discussion here a few years ago about whether it could actually be done, just for the sake of doing it :p
 
Urian said:
If my memory doesn't fail me, Flipper is like the GeForce2 in technology, and the TEV is something between the NSR of the NV15 architecture and the Pixel Shaders of the NV2x architecture. Am I wrong?

I can't say you're right or wrong, but I would tend to say you're closer to being right. I never thought of it exactly like that. Flipper's TEV might be something between the NSR of NV15 and the Pixel Shaders of NV2x, although in terms of texel fillrate, Flipper is more like an NV10~GeForce 256 (GeForce1), because the TEV outputs 4 texels per clock, like a GPU with 4 TMUs total (1 per pixel pipe). The GeForce2~NV15 outputs 8 texels per clock since it has 8 TMUs (2 per pixel pipe).

I didn't know that some graphics effects on the GCN must be done with the help of Gekko. I am sure that implementing them in the GPU would need a change in the GPU and a new GPU architecture.

I didn't really know that either. All I knew is that Gekko can help Flipper with lighting.

Every day I am more sure that Hollywood is an optimized Flipper.

It could be, but could Hollywood also be a Flipper with more pipelines and more TEVs?
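A quick back-of-envelope check of the 4-vs-8 texels-per-clock point, using commonly cited clock speeds; the clock figures are assumptions here, not taken from this thread.

```c
/* Texel fillrate = texels per clock * core clock.  Clock figures
 * (~162 MHz Flipper, ~200 MHz GeForce2 GTS) are commonly cited values,
 * assumed for illustration. */
#include <stdio.h>

static double texel_rate_mtexels(double clock_mhz, int texels_per_clock)
{
    return clock_mhz * texels_per_clock;   /* MHz * texels/clock = Mtexels/s */
}

int main(void)
{
    printf("Flipper  : ~%.0f Mtexels/s\n", texel_rate_mtexels(162.0, 4));  /* ~648  */
    printf("GeForce2 : ~%.0f Mtexels/s\n", texel_rate_mtexels(200.0, 8));  /* ~1600 */
    return 0;
}
```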
 
Every time I see the title of this thread I think "it should have been WiiPeeYou" (cos that's what ATI calls their graphics chips; VPU). :smile:
 
Megadrive1988 said:
It could be, but could Hollywood also be a Flipper with more pipelines and more TEVs?


If it can do what we saw from RS, Pangya, or Pokemon (there is some good self-shadowing and lighting in there), then it possibly could be, although I doubt that a simply tweaked Flipper could do it.
 
I don't know if it's been mentioned elsewhere in this thread, but ERP once said something to the effect of "In many ways Flipper is actually more advanced than NV2A when it comes to pixel shading, it's just held back by its clock speed". Not sure if I remember it correctly...
 
Squeak said:
I don't know if it's been mentioned elsewhere in this thread, but ERP once said something to the effect of "In many ways Flipper is actually more advanced than NV2A when it comes to pixel shading, it's just held back by its clock speed". Not sure if I remember it correctly...

If I said that it's being taken out of context.....

It can do some things that can't be trivially accomplished on NV2A, but most of them can't be usefully exploited, partly because of no vertex shaders to do useful setup and partly because of the limitations on the inputs to the TEV stages.

The TEV is a bit of a one-trick pony; it's very good at indirect texturing.
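For readers unfamiliar with the term, a hedged sketch of the general indirect-texturing idea follows: a texel fetched from one map is reinterpreted as an offset that perturbs the coordinates of a second fetch. The maps and values below are made up for illustration; this is the technique in general, not the GameCube API.

```c
/* Hedged sketch of indirect texturing: the texel fetched from an
 * "indirect" map is treated as an offset that displaces the coordinates
 * of a second fetch (e.g. for heat haze, water ripples, EMBM-style
 * effects).  Maps, scales and wrap handling are illustrative only. */
#include <stdio.h>

#define W 4
#define H 4

/* A tiny "indirect" map of signed texel offsets and a tiny colour map. */
static const int offset_map[H][W][2] = {   /* dx, dy per texel */
    {{0,0},{1,0},{0,1},{1,1}}, {{0,0},{0,0},{0,0},{0,0}},
    {{1,1},{1,0},{0,1},{0,0}}, {{0,0},{0,0},{0,0},{0,0}},
};
static const float color_map[H][W] = {
    {0.0f, 0.1f, 0.2f, 0.3f},
    {0.4f, 0.5f, 0.6f, 0.7f},
    {0.8f, 0.9f, 1.0f, 0.9f},
    {0.8f, 0.7f, 0.6f, 0.5f},
};

/* Fetch the colour map at (u,v) displaced by the indirect map's offset. */
static float indirect_fetch(int u, int v)
{
    int du = offset_map[v][u][0];
    int dv = offset_map[v][u][1];
    int pu = (u + du) % W;             /* wrap the perturbed coordinates */
    int pv = (v + dv) % H;
    return color_map[pv][pu];
}

int main(void)
{
    printf("texel at (1,0) without indirection: %f\n", color_map[0][1]);
    printf("texel at (1,0) with indirection:    %f\n", indirect_fetch(1, 0));
    return 0;
}
```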
 