Trident's back in the game!

Ouch.

Wish they had done some tests at other resolutions to figure out whether it's simply BW limited, though. Why they didn't run some tests at the standard 1024x768 is beyond me...

Mize
 
Mize, I was just about to say the same thing... While the 16x12 results are revealing, they don't really explain what's going on.

And what's with the 8 TMUs/pipe... that's nuts. I thought that was just a typo at first?!? I guess they must not be "real" (or perhaps "full" is a better term) TMUs in the sense that I think of them?
 
ET also took the term "Hierarchical Tiling" the wrong way. From the way they talk about tiling and AA, it seems they think this is a TBR; however, "Hierarchical Tiling" would suggest an IMR, and hence AA bandwidth restrictions in line with other IMRs (without compression).
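
To put a rough number on that AA bandwidth point, here is a back-of-envelope sketch in Python (illustrative assumptions only: 4x AA at 1024x768, 32-bit colour and 32-bit Z per sample, a single write pass, no compression, no overdraw):

# Rough framebuffer write traffic for an IMR doing 4x AA without compression.
# All figures below are illustrative assumptions, not XP4 specifics.
width, height = 1024, 768
samples = 4                    # assumed 4x AA
bytes_per_sample = 4 + 4       # 32-bit colour + 32-bit Z
fps = 60

bytes_per_frame = width * height * samples * bytes_per_sample
gb_per_second = bytes_per_frame * fps / 1e9
print(f"~{gb_per_second:.1f} GB/s for a single framebuffer write pass")
# ~1.5 GB/s before overdraw, Z reads, blending or texture traffic - an IMR pays
# this against external memory, while a tile-based renderer resolves the
# samples on-chip and writes only the final pixels.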

That's the second erroneous conclusion I've seen in two days. In Tom's 9500 PRO review, Lars concluded that ATI had made alterations to the core when looking at the 3DMark2001 fillrate scores. It didn't occur to him that the 128-bit bus just isn't capable of supporting 8 pixels per clock.
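
For what it's worth, a quick sketch of why the bus runs out first (assuming 32-bit colour plus 32-bit Z per pixel, no compression, and memory clocked no faster than the core):

# Can a 128-bit DDR bus sustain 8 pixel writes per core clock?
# Illustrative assumptions: 32-bit colour + 32-bit Z per pixel, no compression,
# memory clock equal to the core clock.
bus_width_bits = 128
ddr_factor = 2                              # two transfers per memory clock
supply = bus_width_bits * ddr_factor        # 256 bits per core clock

pixels_per_clock = 8
bits_per_pixel = 32 + 32                    # colour + Z, before any texture reads
demand = pixels_per_clock * bits_per_pixel  # 512 bits per clock

print(f"bus supplies {supply} bits/clock, 8 pixels demand {demand} bits/clock")
# Supply is half the demand even before textures, so the fillrate score is
# bus-limited rather than evidence of a changed core.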

Sorry, that type of thing gets my goat a little.
 
Ack! Is it Trident's fault for making a product that can't perform, or is it an actual faulty part? The bandwidth seems to be lacking quite a bit for 32 TMUs, if in fact it has that many.

I too am wondering why they only tested at 1600x1200. Did they choose that resolution because the drivers were still in beta?
 
I don't think they would have made such scathing remarks about the card if it had performed competitively at lower resolutions... IMO, they must have tested it at lower resolutions if for no other reason than to satisfy their own curiosity :)
 
Rancidm said:
I don't think they would have made such scathing remarks about the card if it had performed competitively at lower resolutions... IMO, they must have tested it at lower resolutions if for no other reason than to satisfy their own curiosity :)

And then left those benchmarks out completely?

I'm inclined to believe the 10x7 performance wasn't so horrendous and that the reviewers - like most on the net - went for the most sensational, black-and-white situation they could muster in order to slam Trident as much as possible... very Geraldoesque IMHO. I mean, seriously, if you want to give people a sense of a new card's performance, it's standard practice to at least include the default 3DMark2001 run.

Mize
 
If this chip is bandwidth limited at 7.4 GB/s (230 MHz DDR * 128-bit bus; the same as the first GeForce3), then it has got to have one of the least efficient memory controllers of any GPU I have ever heard of.
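
The raw figure checks out; nothing XP4-specific here, just the arithmetic:

# Theoretical peak bandwidth of a 230 MHz DDR, 128-bit memory interface.
mem_clock_hz = 230e6
ddr_factor = 2                  # DDR: two transfers per memory clock
bus_width_bytes = 128 / 8       # 16 bytes per transfer

bandwidth_gb = mem_clock_hz * ddr_factor * bus_width_bytes / 1e9
print(f"{bandwidth_gb:.2f} GB/s")   # ~7.36 GB/s, the same ballpark as the first GeForce3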

Unless there is something about the 1600x1200 resolution that completely breaks whatever caching scheme this chip uses.
 
Mize said:
Rancidm said:
I don't think they would have made such scathing remarks about the card if it had performed competitively at lower resolutions... IMO, they must have tested it at lower resolutions if for no other reason than to satisfy their own curiosity :)

And then left those benchmarks out completely?

I'm inclined to believe the 10x7 performance wasn't so horrendous and that the reviewers - like most on the net - went for the most sensational, black-and-white situation they could muster in order to slam Trident as much as possible... very Geraldoesque IMHO. I mean, seriously, if you want to give people a sense of a new card's performance, it's standard practice to at least include the default 3DMark2001 run.

Mize

It's possible. But it's also possible that the lower-resolution results just didn't change the picture enough to be worth the effort of publishing (time constraints or whatever).

IMHO, most review sites would have the decency to give a sub-$100 video card a chance even if it didn't perform well at 1600x1200. After all, most people with 19" or 21" monitors can afford to pay more than 100 bucks for a video card :)

I guess we'll find out when more review sites get their boards...
 
Either way, it still doesn't explain why they used 1600x1200. Their excuse was "the drivers were in beta form." Huh? Just use the default resolution of 1024x768.

I'll wait for some other sites to review the XP4. If the poor performance trend continues then.....
 
horvendile said:
From Extremetech:

However, given the lack of performance seen running without AA at 1600x1200, we doubt AA will improve the XP4's performance.

You don't say? :-?

I thought that at first as well, but the statement is not as stupid as it seems. ExtremeTech only benches 1024x768 with AA and 1600x1200 with no AA. Cards are sometimes faster at 1024x768 with AA than 1600x1200 without AA.
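
A rough sense of why that can happen, counting pixels only (assuming a multisampling-style AA where shading cost tracks the base resolution; illustrative, not XP4-specific):

# Pixel counts behind "1024x768 with AA vs. 1600x1200 without AA".
low_res_pixels = 1024 * 768      # 786,432 pixels shaded at 10x7
high_res_pixels = 1600 * 1200    # 1,920,000 pixels shaded at 16x12
aa_samples = 4                   # assumed 4x AA at the lower resolution

print(f"16x12 shades {high_res_pixels / low_res_pixels:.2f}x the pixels of 10x7")
print(f"10x7 with 4x AA touches {low_res_pixels * aa_samples / high_res_pixels:.2f}x "
      f"the samples of 16x12")
# If the card is shader/fill limited, 10x7 + AA can win (fewer pixels shaded);
# if it is purely bandwidth limited, the extra sample traffic makes it lose.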

Also, I think ExtremeTech made a mistake in this preview by not testing a lower resolution. This is a case where sticking to policy might not show the full story. After all, anyone who buys a Trident card expecting to run at 1600x1200 is kidding themselves.
 
I have never read an online article, published by a site as popular as that one, that slams a piece of hardware as much as those guys did.
 
3dcgi said:
I thought that at first as well, but the statement is not as stupid as it seems. ExtremeTech only benches 1024x768 with AA and 1600x1200 with no AA. Cards are sometimes faster at 1024x768 with AA than 1600x1200 without AA.

To be honest, I didn't read the entire article. Still, the way it's formulated, I find it hard to interpret the statement in the (otherwise reasonable) way you suggest. On the other hand, I don't really think ExtremeTech actually believes that AA at a given resolution should be faster than non-AA at the same resolution, so I'll give them some sort of benefit of the doubt.
 
The sad thing is that I don't think Trident can really undercut the 9000 PRO on price. ATI's budget offering doesn't have much more than 30M transistors, and I remember OpenGLguy saying that a transistor on 0.13 um costs more than one on 0.15 um.

Couple that with the far greater volume that 9000 PRO's have, and all Trident has is a low-performing, money-losing project.
 
I find those fillrate numbers highly suspect.
It looks as if the XP4 has ONE pixel pipe with additional texturing capability. How on earth would a 4-pipe GPU running at 250 MHz have a fillrate of only 177 MPixels/s (525 MTex/s)?
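
The arithmetic, using the numbers quoted above and taking the claimed pipe count at face value:

# Theoretical vs. reported fillrate for the figures quoted above.
core_clock_mhz = 250
claimed_pipes = 4

theoretical_mpix = core_clock_mhz * claimed_pipes   # 1000 MPixels/s on paper
reported_mpix = 177                                 # reported single-texture fillrate
reported_mtex = 525                                 # reported multi-texture fillrate

print(f"theoretical {theoretical_mpix} MPix/s vs. reported {reported_mpix} MPix/s")
print(f"texels per pixel: {reported_mtex / reported_mpix:.1f}")
# Under one pixel per clock and roughly three texels per pixel looks far more
# like a single working pipe with extra texture units than four full pipes.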

And if that truly is the fillrate, why would they feed it with 128 bits of 250 MHz DDR when they might as well have used much cheaper memory?

Something, somewhere, is wrong. It just doesn't add up.

Entropy
 