How does trilinear filtering affect the NV2A/NV20 hardware?

BenSkywalker said:
Legion-

can someone then please provide us with the theoretical maximums?

Single texture layer/ Two/ Three/ Four/ Five/ Six/ Seven/ Eight-

XBox- 932MPixels/ 932Mp/ 466Mp/ 466Mp/ 311Mp/ 311Mp/ 233Mp/ 233Mp

Doesn't Xbox have a 4x2 architecture? That would give it double the fillrate you specified.

XBox- 1864MPixels/ 1864Mp/ 932Mp/ 932Mp/ 622Mp/ 622Mp/ 466Mp/ 466Mp

Am I correct? Or maybe I drank something "bad"? :?


EDIT: I didn't notice he specified it was PIXEL fillrate instead of TEXEL fillrate, but why show pixel fillrate instead of the texel one, which is what is used when you texture?

P.S.: I'm a semi-total newbie in 3D graphics, so be nice if I make some mistakes, which is very, very likely to happen.
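
Just to make the arithmetic concrete, here's a rough sketch of where those pixel numbers come from (the 233 MHz clock and 4x2 layout are taken from later posts in this thread, and the loopback-pass model is my assumption, so take it as a sketch, not gospel):

```python
import math

CLOCK_MHZ = 233   # assumed NV2A core clock (see the clock discussion below)
PIPES = 4         # pixel pipelines
TMUS = 2          # texture units per pipeline

for layers in range(1, 9):
    # each pass applies up to TMUS texture layers per pixel, so more
    # layers than that cost extra loopback passes
    passes = math.ceil(layers / TMUS)
    pixel_rate = CLOCK_MHZ * PIPES / passes   # MPixels/s
    texel_rate = pixel_rate * layers          # MTexels/s
    print(f"{layers} layer(s): {pixel_rate:.0f} MP/s, {texel_rate:.0f} MT/s")
```

The pixel figures match Ben's list; the doubled numbers only make sense as peak texel rates, i.e. with both TMUs busy every cycle.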
 
Personally I'm surprised ERP hasn't taken a gun to his noggin with all the questions thrown at him (and being quoted elsewhere on top of that) :p

I thought a dev mentioned here that the MCP-X throws hundreds of megs of data around the HT bus for audio encoding?

btw, welcome back Ben! ;)

edit: nm, didn't know this thread was so old!!
 
Personally I'm surprised ERP hasn't taken a gun to his noggin with all the questions thrown at him (and being quoted elsewhere on top of that)
I'm surprised that someone hasn't already taken a gun to Apoc's head for dragging up a 14-month-old thread ;)
 
arhra said:
Personally I'm surprised ERP hasn't taken a gun to his noggin with all the questions thrown at him (and being quoted elsewhere on top of that)
I'm surprised that someone hasn't already taken a gun to Apoc's head for dragging up a 14-month-old thread ;)


Ouch! You know... I was reading threads all this time, and I didn't notice this was a VERY OLD one. Sorry boys :( As I said, I think I smoked something....
 
I really can't see the problem with dragging an old thread up from the depths of the forum, if you think it's relevant and you want to add something to it.
I really can't see what harm it could do, unless people wouldn't want to see their old posts again for some reason. :)
 
Why do people call the Xbox GPU a GeForce 4 or a souped-up GeForce 3?
Pixel and vertex shading is far above that of a GeForce 3 (I think 2-3x?), but everything else is a good percentage less. Even with 2x the shader performance, that just brings the Xbox's shader performance into a usable area for games, but in most other areas it is a bit lacking. Whoopee, the Xbox can do Halo 2 (while a GeForce 3 probably maxes out on Halo 1), which still doesn't look or run as well as a GeForce 3 with UT2003, or it can run full-quality Doom 3 at 10 fps while a GeForce 3 does 5. You could maybe call the XGPU a souped-up GeForce 3 Ti 200. Of course, I could be wrong about all of this; I'm not positive about the performance figures for the Xbox and the GeForce 3.
 
Xbox's graphics processor (NV2A) is just short of a GeForce4 (NV25).

Both NV2A and NV25 are essentially just souped-up GeForce3s (NV20).

It's almost the same architecture. NV2A and NV25 both have 2 geometry/lighting units (Vertex Shaders), whereas the NV20 has one.

All of them have a 4x2 configuration
(4 pixel pipelines with 2 texture units per pipe).
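
Since all three share that 4x2 layout, peak fillrate just scales with core clock. A back-of-the-envelope sketch (the clocks are the commonly quoted stock ones, so treat them as assumptions):

```python
# peak single-pass rates for a 4x2 part: clock x 4 pipes gives pixels,
# and x2 TMUs on top of that gives texels
chips = {
    "NV20 (GeForce3)":     200,  # MHz, commonly quoted stock clock
    "NV2A (Xbox XGPU)":    233,
    "NV25 (GF4 Ti 4200)":  250,
}

for name, mhz in chips.items():
    pixels = mhz * 4       # MPixels/s
    texels = pixels * 2    # MTexels/s
    print(f"{name}: {pixels} MP/s, {texels} MT/s peak")
```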
 
OK, well, I believe a stock GeForce 3 runs at 200 MHz core, 230 MHz memory.
From what I can find, the XGPU runs at 233 MHz core (I thought it was 200; I know it got downgraded from the original rating), with 200 MHz DDR memory.
I believe GeForce 4 speeds are something like 300 MHz core and 300 MHz RAM.
The GeForce 3 Ti 500 runs at 240 MHz core and 250 MHz memory.
The GeForce 3 Ti 200 runs at 175 MHz core, 200 MHz memory.

So the XGPU has higher fillrate and more features than the GeForce 3, but lower memory bandwidth, made even lower since it has to share bandwidth with everything else. I guess you could say the XGPU has a good performance advantage over a GeForce 3 (but it's not the huge difference people make it out to be; it still isn't GeForce 3 Ti 500 level), though if it's bandwidth limited, it probably wouldn't be faster than a Ti 200. (I guess it wouldn't be bandwidth limited unless it does HDTV res or AA, and it's really not likely to hit a bandwidth limit if it goes heavy on pixel shaders.)
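
For what it's worth, the bandwidth side works out roughly like this (I'm assuming 128-bit DDR buses on all of these parts):

```python
def ddr_bandwidth_gbs(mem_clock_mhz, bus_bits=128):
    # DDR transfers twice per clock; /8 for bytes, /1000 for GB/s
    return mem_clock_mhz * 2 * (bus_bits // 8) / 1000

for name, clk in [("Xbox (shared with CPU)", 200),
                  ("GeForce 3", 230),
                  ("GF3 Ti 200", 200),
                  ("GF3 Ti 500", 250)]:
    print(f"{name}: {ddr_bandwidth_gbs(clk):.2f} GB/s")
```

Which is why the shared 6.4 GB/s on Xbox looks a lot tighter than the Ti 500's 8.0 GB/s that the GPU gets all to itself.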
Personally, I would choose a GeForce 3 in the Xbox over the XGPU if it meant having Halo with UT2003-like graphics at 60 fps instead of Halo's graphics (assuming the Pentium 3 would be fast enough to get it up to 60 fps).
I'm still a little peeved that from day one with my Ti 200, I found it couldn't actually use any of those fancy pixel shader effects and maintain 30 fps. (Well, it sort of could: I put it in a second computer and it runs Halo at a pretty much solid 30 fps at 640x480, but it looks horrible compared to my Radeon 9700 Pro at 640x480.)
 
Fox5 said:
OK, well, I believe a stock GeForce 3 runs at 200 MHz core, 230 MHz memory.
From what I can find, the XGPU runs at 233 MHz core (I thought it was 200; I know it got downgraded from the original rating), with 200 MHz DDR memory.
I believe GeForce 4 speeds are something like 300 MHz core and 300 MHz RAM.
The GeForce 3 Ti 500 runs at 240 MHz core and 250 MHz memory.
The GeForce 3 Ti 200 runs at 175 MHz core, 200 MHz memory.

The Xbox GPU is closest in terms of core clock and feature set to a GF4 Ti 4200, which ships at a 250 MHz core clock.

The Xbox GPU was originally meant to run at 300 MHz, but was downgraded to 233 MHz for cost and production yield reasons.

You can't directly compare the core clock of the GF3 and the GF4 because of the differences in the design.
 
While the GeForce 3 Ti 500 has higher bandwidth and higher fillrate than the XGPU, all of that is needed for resolutions higher than 640x480.

The XGPU crushes the GeForce 3 Ti 500 in terms of polygon & lighting power, because the XGPU has twice as many Vertex Shaders (2) compared to the Ti 500 (just 1, like all GF3s).
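
Roughly quantifying that (this only captures the units-times-clock ratio, nothing about per-vertex instruction costs, so it's a ceiling, not a benchmark):

```python
# theoretical vertex-shader throughput scales with units * clock
xgpu_rate  = 2 * 233   # 2 vertex shaders at 233 MHz
ti500_rate = 1 * 240   # 1 vertex shader at 240 MHz

print(f"XGPU advantage: {xgpu_rate / ti500_rate:.2f}x")  # ~1.94x
```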
 