The TEV on the Gamecube

Glonk

I've been hearing a lot about this, but no one has any specifics.

From what I've heard, it sounds like a slightly more flexible version of the NSR on the GeForce 2 chips. But at the same time, someone who seems to know what he's talking about (though he could be a kid for all I know) just told me that the TEV is much more flexible than the pixel shaders on the NV2A. Instinct tells me that's incorrect, simply due to the transistor count. ;)

Does anyone know? Can anyone tell? :)
 
From what I understand, there are a few things that can be done with the TEV that can't be done on the NV2A; however, overall the NV2A is more feature-rich.
 
The major difference is that the NV2A does all of its texture reads first, so you can't do a deferred read based on a calculated result (unless it's one of the calculations supported by the texture stage) without resorting to multipass.
In the TEV, texture fetches are interleaved with the combiner operations (though not totally freely), so you can theoretically do arbitrary deferred lookups.
However, in most ways Flipper's texture lookups are more limited, as is its combiner flexibility.
IMO Flipper probably has the better theoretical approach, with the downside of some potential texture cache performance issues.

Realistically, there are operations you can do on the NV2A that you can't on Flipper, and vice versa.
 
Could you give some examples of effects that are easily done on the TEV and not on the NV2A, and vice versa?
 
Anything that requires you to compute f(g(x)), where g(x) requires combiner stages to compute and f(x) is computed as a texture lookup.

It quite honestly doesn't come up very often, but it's nice to have the flexibility.
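To make the f(g(x)) point concrete, here's a toy Python sketch (not real hardware code): g(x) stands in for a combiner-style computation, and f(x) stands in for a texture fetch keyed on that result. The toon-ramp example and all the values are invented for illustration.

```python
# Toy sketch of a dependent texture read f(g(x)).
# g(x): a combiner-style computation, here a clamped N.L diffuse dot product.
# f(x): a 1-D lookup table, standing in for a texture fetch.

def g(normal, light):
    """Combiner stage: clamped dot product, like a per-pixel N.L term."""
    d = sum(n * l for n, l in zip(normal, light))
    return max(0.0, min(1.0, d))

# Hypothetical 4-entry toon ramp: the "texture" being fetched.
TOON_RAMP = [0.1, 0.4, 0.7, 1.0]

def f(x):
    """Texture stage: index the ramp with the combiner result."""
    idx = min(int(x * len(TOON_RAMP)), len(TOON_RAMP) - 1)
    return TOON_RAMP[idx]

# On the TEV this f(g(x)) chain fits in one pass, because the fetch can
# follow a combiner op; on NV2A-style hardware, if g() doesn't fit in the
# texture-address stage, you'd render g() to a texture first (multipass).
shade = f(g((0.0, 0.0, 1.0), (0.0, 0.6, 0.8)))
```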
 
Hey ERP, started job hunting yet? I have a question about Starfox. I remember the 'fur shading' subject coming up before Rare had even shown the pics of Fox's fur. And everyone said that the Flipper just wouldn't be able to do it in realtime (only in cutscenes), and that only the Xbox pixel shaders might be able to do it in-game. And now we've learned that Starfox's fur is in realtime AND around 60 fps, with some REALLY big outdoor areas (and grass blades too). What routine do you think Rare are implementing? And what else do you think we'll see from the XB and GCN as far as grass and fur?
 
Goldni: I'm wondering about that too.

People compare the Flipper to the GF2 and other DX7-level graphics renderers, but it seems to be doing a lot more than those boards could do. Personally, I don't see how the TEV could be more flexible than the NV2A's pixel shading... but hey, I'm not a developer. I know that Factor 5 said that the GCN and Xbox are equal in power overall, but they're a tad bit biased.
 
I'm certain the fur rendering technique is the same as the one used by everyone else. They're all based on the paper by Hugues Hoppe (MS Research); the papers are on his homepage.

Basically you draw a number of transparent concentric shells of the model approximating a volumetric render. You can also draw fins on the model to show the fur cross-section. It doesn't require any clever shading, just fillrate.

Given how far away from Fox you are most of the time they probably need to render only one additional shell, possibly two, but I'm speculating here.

MS has demos of other variations on this technique including Grass, and volumetric trees. The trees in particular look great, but the dataset required ends up being IMO impractically large.
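The shell technique described above is just alpha blending stacked up, nothing GPU-specific, so a minimal Python sketch can show the idea (layer counts and alpha values here are made up for illustration):

```python
# Toy sketch of shell-based fur: each shell is a translucent layer drawn
# over the base surface, and "over" compositing approximates the fur volume.
# All color/alpha values are invented for illustration.

def over(dst, src, alpha):
    """Standard alpha blend: src over dst."""
    return src * alpha + dst * (1.0 - alpha)

def render_fur(base_color, shell_color, num_shells, shell_alpha):
    """Composite num_shells translucent shells over the base surface."""
    color = base_color
    for _ in range(num_shells):
        color = over(color, shell_color, shell_alpha)
    return color

# Far away, one shell may be enough; up close you need more layers,
# which is where the individual shells become visible (as in the screenshots).
far_lod = render_fur(0.2, 0.8, num_shells=1, shell_alpha=0.3)
close_lod = render_fur(0.2, 0.8, num_shells=8, shell_alpha=0.3)
```

The only cost that grows with shell count is fillrate, which matches the point above: no clever shading is required, and the LOD knob is simply how many shells you draw.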
 
starfox_screen001.jpg

starfox_0102_screen017.jpg


Not too hot looking ;)
 
That pic looks terrible... I mean it doesn't even look like a real fox... look how big his eyes are!

And a fox wouldn't be seen dead in that kinda gear - so outdated.

:p
 
Only thing I don't like in those pics is the tail. Icky.

But that's apparently been fixed; latest screenshots in NP look *MUCH* better.
 
Hey ERP, could you please explain a little bit more about the TEVs? It is pretty easy to find some stuff about pixel shaders, but I can't find anything on how the TEVs exactly work and what principles they're based on. Thanks.
 
It loses some up close.

DeathKnight said:
Not too hot looking ;)

He's right. Up close the illusion breaks down, and you can see the layers of transparent textures used to create the effect.

Look at left side of the picture.

Not to mention it's really easy to LOD this effect -- it's simple to avoid rendering this many fur layers when you're far away from the character.
 
Hey ERP, could you please explain a little bit more about the TEVs? It is pretty easy to find some stuff about pixel shaders, but I can't find anything on how the TEVs exactly work and what principles they're based on. Thanks.

The TEV and "pixel shaders" are basically cute acronyms for what used to be called color combiners. The TEV also incorporates the Texture reading part of the pipeline.

A color combiner is in general implemented as a single logic op; in NVidia's case that's public (register combiner docs) and is of the form
A op1 B op2 C op1 D
where op1 is either Dot Product or multiply, op2 is either add or select.
As you can see, by repeating this multiple times with some register manipulation between stages, you can do most basic math. Pixel shaders just provide a simple consistent interface to this (and other vendors' implementations).
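That general form can be modeled in a few lines of Python. This is a rough sketch of the idea only, omitting the real register combiners' input mappings, biasing, clamping, and scaling:

```python
# Toy model of the general combiner form  A op1 B op2 C op1 D,
# where op1 is dot-product or multiply and op2 is add or select.
# RGB values are 3-tuples; real-hardware details (bias, clamp, scale)
# are deliberately left out.

def dot3(a, b):
    """Dot product, replicated to all channels as combiners do."""
    d = sum(x * y for x, y in zip(a, b))
    return (d, d, d)

def mul(a, b):
    """Component-wise multiply."""
    return tuple(x * y for x, y in zip(a, b))

def combine(a, b, c, d, op1="mul", op2="add"):
    """One combiner stage: (A op1 B) op2 (C op1 D)."""
    f = dot3 if op1 == "dot" else mul
    ab, cd = f(a, b), f(c, d)
    if op2 == "add":
        return tuple(x + y for x, y in zip(ab, cd))
    return ab  # "select": pass the AB result through

# Example: diffuse * texture + ambient * 1, a classic single-stage use.
out = combine((0.5, 0.5, 0.5), (1.0, 0.5, 0.0),
              (0.2, 0.2, 0.2), (1.0, 1.0, 1.0))
```

Chaining several such stages, with results written back into registers feeding the next stage, is how multi-instruction pixel math gets built out of this one op.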

The TEV uses a different basic combine operation which is a little more limited. However since the Texture reads can be interleaved with the combiner operations it allows you to do things that would require multipass render on NV2X.

So as an example
on NV2X I have to write


Texture Read
Texture Read
.
.

Combiner Op
Combiner Op
Combiner Op
Combiner Op
.
.
.

On Flipper I can write

Texture Read
Combiner Op
Combiner Op
Texture Read
Combiner Op
Combiner Op
Texture Read
Combiner Op
Combiner Op
.
.
.

I guess the easiest explanation is that Flipper has simpler units for combining and reading textures, but allows more complex arrangements of the units.
So if one of the texture reads is dependent on a previous combiner op and you can't squeeze the ops into the texture addressing instructions, the NV2X would require multipass to do the same thing.
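The ordering constraint above can be sketched as a small Python model (the stage representation and pass-counting rule are my own simplification, ignoring the NV2X texture-address ops that can absorb some dependencies):

```python
# Toy model of the pipeline-ordering point above.
# A shader is a list of stages: ("tex", depends_on_combiner) or ("combine",).
# NV2A-style: all texture reads happen before the combiners, so a read that
# depends on a combiner result forces a new pass (simplified: we ignore the
# texture-address calculations that can absorb some dependencies).
# TEV-style: reads may be interleaved with combiner ops, so one pass suffices.

def nv2a_passes(stages):
    """Count passes needed when all reads must precede combiner ops."""
    passes, seen_combiner = 1, False
    for stage in stages:
        if stage[0] == "combine":
            seen_combiner = True
        elif stage[0] == "tex" and stage[1] and seen_combiner:
            # Dependent read after a combiner: flush results to a texture
            # and start a new pass.
            passes += 1
            seen_combiner = False
    return passes

def tev_passes(stages):
    """Interleaving allows dependent reads within the same pass."""
    return 1

# The interleaved program from the post: read, combine, dependent read, ...
program = [("tex", False), ("combine",), ("combine",),
           ("tex", True),  ("combine",), ("combine",)]
```

Running both counters on the same interleaved program shows the difference: the dependent read in the middle costs the NV2A-style pipeline an extra pass, while the TEV-style pipeline handles it in one.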
 
Yes, at last a good explanation of the TEV - thanks ERP.
It's really frustrating not to find any docs or in-depth information relating to Flipper. The TEV is a 16-stage pixel pipeline, right (I think I read that a while ago in an interview with Greg Buchner)? Basically any stage in this pipeline can be a texture read or a combiner op, unlike Nvidia's register combiners, if I understand correctly. What kind of combiner ops can the Flipper execute?
 
On a second note, it was previously established that the 6:6:6:6 and 8:8:8 color modes of the Flipper are a limitation of its framebuffer. However, do its internal units (TEV, texture coordinate generator, etc.) support a higher level of precision? Is it comparable to the NV2A (integer 10 format)?
 