Trident's back in the game!

Actually, they explain right in the article why they only tested at that resolution:

"We normally test at two resolutions: 1024x768x32 with 4X FSAA (Full-Scene Anti-Aliasing) enabled, and 1600x1200x32 with no FSAA. But Trident informed us that the driver build they sent us for this preview didn't yet have AA implemented. We discovered this fact only when we went hunting for a 3D config panel for either Direct3D or OpenGL, since neither were anywhere to be found. "
 
arjan de lumens said:
Unless there is something about the 1600x1200 resolution that completely breaks whatever caching scheme this chip uses.

Something like that IS actually possible...

The good old Voodoo5 5500 sometimes had a problem where SLI would break at 1600x1200, resulting in an obscene performance drop at that resolution compared to all the others. :)
 
kid_crisis said:
Actually, they explain right in the article why they only tested at that resolution:

Actually that doesn't explain why they only tested at those two resolutions, or why they didn't include the more standard and almost-never-bandwidth-limited resolution of 1024x768 with no AA. In all seriousness, this card is meant to compete with the Radeon 9000, and nobody should expect to run a card in that class at 1024x768 with 4xAA at acceptable framerates. The vast majority of sub-$100 cards will be played at 10x7 with no AA, so why not test them there?

[Edit: I guess I'm just spoiled by B3D reviews with plots of performance vs. resolution and the insight they provide]

Mize
 
[Edit: I guess I'm just spoiled by B3D reviews with plots of performance vs. resolution and the insight they provide]

Yes. One day all 3D hardware vendors will realise they need their products sent to B3D first for a fair, just and comprehensive review of the various facets of their products.

:D
 
DaveBaumann said:
Yes. One day all 3D hardware vendors will realise they need their products sent to B3D first for a fair, just and comprehensive review of the various facets of their products.

:D

Splat!

There goes my lunch. ;)
 
DaveBaumann said:
Yes. One day all 3D hardware vendors will realise they need their products sent to B3D first for a fair, just and comprehensive review of the various facets of their products.

:D

Unless, of course, their hardware is lacking, in which case they'd realise they need to send it to clueless reviewers instead, in the hopes that the shortcomings will be overlooked...

So Dave, are you expecting an XP4 any time soon? 8)
 
Having suffered in pain using a Trident Cyberblade XP integrated chipset on my laptop for the last year, I feel slightly embarrassed that I thought, for a second or two, that Trident might pull one out of the bag with this one.

Should've known better. :D
 
Heh, I remember when I first got my new PC around two years ago. Since I only had a Voodoo2 and very little money left for a vid card (figured the V2 was adequate, and w00t, I was right), I got an El Cheapo that turned out to be a Trident 9880, the original Blade3D.

According to Trident's site, it's supposed to be faster than Voodoo2 in many situations, and at least have really decent IQ and such.

<drEvil>Riiiiiiiiiiiiiight...</drEvil>

For the most part, performance was abysmal. The only exception was that my Blade3D actually pulled a massive hat trick in 3DMark2000, and in fact for a VERY long time it was the fastest Blade3D in the database (take a look! The ones that are faster are WAY FUCKING FAST by comparison, so I suspect they're wrongly identified cores). Odd, since my actual performance was MUCH lower than projected.

(edit: I should probably add that the 3dm2k database appears to be down right now, oh well, it'll be baaaaack)

Oh and the IQ sucked outright. :p
 
I don't know where Salvator pulled the specs from, but last time I checked it was a 4x2 setup, with some sort of pipeline-sharing scheme and the capability of 8-layer multitexturing (probably where the texel fillrate numbers come from).

As far as marketing/naming scheme goes: tile based rendering architecture my a** pffffffff.

One thing that struck me when reading it is that he had trouble disabling vsync through the APIs and claimed to have done it on a game-by-game basis. I'll hold off judgement until they iron out their drivers and someone else looks at it, but I don't have high hopes for it (never had any anyway). Looks like an excellent budget laptop chip and that's about it.
 
Geeforcer said:
Anand clearly states that the XP4 is a DX8.1 GPU, yet ET (heh) keeps referring to it as DX9.

*ahem*

  • Eight texturing units per pipe
  • Pixel Fill Rate: 920Mpixels/sec (T2) or 1Gpixel/sec (T3)
  • Texture Fill Rate: 7.36Gtexels/sec (T2) or 8Gtexels/sec (T3)

Thus each texture layer/pass=1TMU :eek:

8)
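
For what it's worth, the quoted numbers do hang together arithmetically if you read them as 4 pipes times 8 texture layers. A quick sanity check (my own back-of-the-envelope assumption, not anything Trident has confirmed):

```python
# Back-of-the-envelope check of the quoted XP4 fill rates, assuming
# 4 pixel pipes and 8 texture layers per pipe (my assumption).
PIPES = 4
LAYERS_PER_PIPE = 8

for name, pixel_fill in (("T2", 920e6), ("T3", 1000e6)):
    implied_clock = pixel_fill / PIPES            # implied core clock in Hz
    texel_fill = pixel_fill * LAYERS_PER_PIPE     # implied texel fill rate
    print(f"{name}: ~{implied_clock / 1e6:.0f} MHz core, "
          f"{texel_fill / 1e9:.2f} Gtexels/sec")

# Prints:
# T2: ~230 MHz core, 7.36 Gtexels/sec
# T3: ~250 MHz core, 8.00 Gtexels/sec
```

Those land exactly on the quoted 7.36 and 8 Gtexels/sec, which is why the "eight texturing units per pipe" line smells like layer-counting marketing arithmetic rather than eight real TMUs.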
 
In my mind, the article was a complete load of :oops: and I will not take it into any account when I think of the XP4.


Extremetech had a budget graphics card running at "non-budget" resolutions. Who would buy a 19" - 24" monitor and then splash out on a sub-£100 graphics card? I am going to wait and see how this card performs at more "budget" resolutions (800x600 - 1024x768) against other budget and mainstream cards.

Extremetech did nothing, in my mind, but attempt to destroy the card before it had fully working drivers, and without actually running it at the resolutions it is intended to run at.
 
Something doesn't add up. In Anandtech's preview article the second-highest-performing Trident XP4 performed fine. It wasn't great, but it wasn't horrible. Now Extreme Tech is claiming it's D.O.A. Sounds like they either screwed up horrendously when testing it, or the drivers they received are just really bad. :rolleyes:
 
sas_simon said:
In my mind, the article was a complete load of :oops: and I will not take it into any account when I think of the XP4.

Extremetech had a budget graphics card running at "non-budget" resolutions. Who would buy a 19" - 24" monitor and then splash out on a sub-£100 graphics card?

Me, for one. I'm now using a 21" Sony G520 with a Radeon 9000.

And so would anybody who uses his computer mainly for "real work" and just occasionally plays games.

Granted, I and others like me don't use resolutions like 1600x1200 for games, but 1280x1024 is OK with the Radeon 9000 in most games.

But I agree that they should've tested with lower resolutions too.
 
Ailuros said:
Geeforcer said:
Anand clearly states that the XP4 is a DX8.1 GPU, yet ET (heh) keeps referring to it as DX9.

*ahem*

  • Eight texturing units per pipe
  • Pixel Fill Rate: 920Mpixels/sec (T2) or 1Gpixel/sec (T3)
  • Texture Fill Rate: 7.36Gtexels/sec (T2) or 8Gtexels/sec (T3)

Thus each texture layer/pass=1TMU :eek:

8)

Or one Trident TMU may be capable of taking just one sample from a texture, so that they need 8 of their "TMUs" to get just one trilinearly filtered texel, or 4 to get one bilinearly filtered texel.

This way they can quote very high theoretical numbers that have almost nothing to do with reality/real situations.
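
To put rough numbers on that interpretation (my arithmetic, building on the assumption above, not anything Trident has stated):

```python
# If each of the 8 "TMUs" per pipe delivers only one texture sample per clock,
# the usable filtered texel rate is the quoted peak divided by the number of
# samples a filter kernel needs (assumption from the post above).
QUOTED_TEXEL_FILL = 8.0e9     # T3's claimed 8 Gtexels/sec
SAMPLES_BILINEAR = 4          # 2x2 texel footprint
SAMPLES_TRILINEAR = 8         # two bilinear footprints across mip levels

print(QUOTED_TEXEL_FILL / SAMPLES_BILINEAR / 1e9, "Gtexels/sec bilinear")    # 2.0
print(QUOTED_TEXEL_FILL / SAMPLES_TRILINEAR / 1e9, "Gtexels/sec trilinear")  # 1.0
```

In other words, the headline 8 Gtexels/sec would shrink to about 1 Gtexel/sec of actually trilinear-filtered texels, which is ordinary one-filtered-texel-per-pipe territory.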
 
hkultala_ said:
Or one Trident TMU may be capable of taking just one sample from a texture, so that they need 8 of their "TMUs" to get just one trilinearly filtered texel, or 4 to get one bilinearly filtered texel.

This way they can quote very high theoretical numbers that have almost nothing to do with reality/real situations.

That still doesn't explain how a 4-pipe GPU running at 250 MHz can have a pixel fillrate of 177 MPix/s.
Even though you wouldn't expect it to reach its theoretical maximum, this is incredibly off the mark, and the rest of the benchmarks indicate that it isn't just an artifact/conflict with the 3DMark fillrate test.

Sure, they are said to use a pipeline architecture where the pipes share some resources, which could make those shared resources the bottleneck in some circumstances, but that pixel fillrate is _less_ than you would expect even of a moderately successful single-pipeline design!
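
Just to spell out how large the gap is, using the clock and pipe count quoted earlier in the thread:

```python
# Theoretical vs. measured single-texturing pixel fill rate for the XP4 T3,
# using the 4-pipe / 250 MHz figures quoted earlier in this thread.
PIPES = 4
CLOCK_HZ = 250e6
theoretical = PIPES * CLOCK_HZ   # 1000 MPix/sec peak
measured = 177e6                 # the fillrate figure quoted above

print(f"theoretical: {theoretical / 1e6:.0f} MPix/sec")
print(f"measured:    {measured / 1e6:.0f} MPix/sec "
      f"({measured / theoretical:.0%} of peak)")
# A single pipe at 250 MHz would already manage 250 MPix/sec.
```

That is under a fifth of the theoretical peak, and below what one pipe alone should deliver.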

Something is wrong, but what?

Entropy
 