Trident XP4 Interview

fremin

I recently had a chance to interview Le T. Nguyen of Trident, where I asked a slew of questions about the Trident XP4 chip. I'll post a few bits that might be of interest. I got some more info out of them that isn't posted yet because of time constraints, but I'll try to post it here later (it's 12:30 AM here right now ;)). Read more here http://www.hardwareaccelerated.com/articles/reviews/graphics/tridentinterview/ and tell me what you guys think of the interview and the site if you can.

How exactly does the XP4 use these 4 pixel pipes? We know that each pipeline isn't identical, unlike in most chips in the past. Are these pixel pipes functionally equivalent to the competition's? Is there any performance loss with such a design? What resources are shared among them? Do the shaders work in a similar fashion?

The XP4 employs advanced mathematics that specifically optimizes for multiple pixel rendering and results in unprecedented transistor savings. As an illustration, the XP4 requires 15 million transistors for the first pixel pipe, 7.5 million for the second pipe, 3.8 million for the third pipe and 2 million for the fourth pipe. The total is roughly 30 million transistors for all 4 pipes, as compared to NVIDIA’s design which required over 60 million transistors in GeForce4 Ti 4600. The performance compromise is not much (20%) and is totally acceptable to 97% of the 3D graphics users. Please note that there is no compromise in functionality or feature set, i.e. full support for DirectX 8.1 and Base support for DirectX 9.0. I know your readers want much more technical details of our trade secrets, but we all have real families to support here, so we must apologize for our lack of forthcoming in this area. Please forgive our sense of secrecy.
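Those per-pipe figures follow a rough halving pattern, so the total converges quickly. A quick sanity check on the arithmetic, using only the numbers Nguyen quotes (the ~30M total and the 60M GeForce4 comparison are his figures, not mine):

```python
# Transistor budget per pixel pipe, in millions, as quoted in the interview.
pipes = [15.0, 7.5, 3.8, 2.0]

total = sum(pipes)
print(f"XP4 total: {total:.1f}M transistors")  # ~30M, as claimed

# Nguyen's comparison point: GeForce4 Ti 4600 at over 60 million.
geforce4 = 60.0
print(f"Savings vs GeForce4: {100 * (1 - total / geforce4):.0f}%")

# Each additional pipe costs roughly half the previous one.
ratios = [round(b / a, 2) for a, b in zip(pipes, pipes[1:])]
print("pipe-to-pipe cost ratios:", ratios)
```

The claimed 20% performance compromise for a better-than-50% transistor saving is the trade-off the rest of the thread picks apart.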

-The XP4 does not support Pixel and Vertex Shaders 2.0, as they are part of the Full set of the DirectX 9.0 standard. However, we do support displacement mapping as the Base feature of DirectX 9.0.
-Of course, Trident will bring out our very exciting DirectX 9.0 product in ample time for Xmas ’03 as well.

-It was stated that you would come within ~80% of NVIDIA's latest graphics card performance-wise. Is this based on benchmarks (if so, which?), polygon rate, or some other indicator?

Our performance target for the XP4 T3 is 80% of GeForce4 Ti 4600 over the majority of industry-standard benchmarks that are commonly used by popular magazines and websites.

If anyone has any more questions they'd like me to ask Trident, email me or just post them here and I'll ask them later.
 
"We are at first very surprised why ATI consumed 110 million transistors to implement the Radeon 9700, which will be very very expensive to manufacture. After analyzing in greater details, we are very delighted to see that ATI still has not discovered our secret to designing performance graphics with minimum transistor count."

Read: ATi owned us :p :p :D

but: "Improving memory bandwidth efficiency is exceedingly important if you don’t want to waste your money in 256-bit memory bus like ATI Radeon 9700."

is quite a pot shot at ATi. To say that ATi is going simply brute force is rather ballsy, since ATi has done a lot to push bandwidth-saving tech in IMRs forward.

"unreliable 256-bit wide memory" I didn't know that....

" you know how horrible your engineer’s lives will become with 256-bit of DDR-II memory interface running at 1,000 MHz next July 2003 ?"

Do you know it doesn't actually run at that frequency?
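For what it's worth, the "1,000 MHz" figure is the effective DDR transfer rate, not the clock: DDR transfers on both edges of the clock. A quick back-of-envelope calculation (my arithmetic, not anything from the interview):

```python
# DDR signals on both clock edges, so "1,000 MHz" DDR-II marketing speak
# means a 500 MHz actual clock driving 1,000 mega-transfers per second.
actual_clock_mhz = 1000 / 2
print(f"actual clock: {actual_clock_mhz:.0f} MHz")

def bandwidth_gbs(bus_bits, mtransfers_per_s):
    # bytes per transfer * transfers per second, expressed in GB/s
    return bus_bits / 8 * mtransfers_per_s * 1e6 / 1e9

# The 256-bit, 1,000 MT/s interface Nguyen is worrying about:
print(f"peak bandwidth: {bandwidth_gbs(256, 1000):.0f} GB/s")
```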

anywho, this guy is a hoot.
 
I don't think they were aiming for that kind of board at any point. But obviously they are hyping their card. One question I would have liked to ask but didn't get a chance was how their yield is doing at UMC, I'll make sure to ask them soon though.
 
Arrrgh! Car analogies!

Thanks for posting. Interesting information, but I'm waiting for benchmarks to reveal the truth behind the marketer's half-truths. He did talk about FSAA and availability:
The XP4 employs various techniques to support anti-aliasing, including super-sampling (2X, 4X …) and multi-sampling (uniform, sparse …). We also have special hardware to accelerate shadows, which are non-trivial to do right in 3D graphics. The performance hit for using anti-aliasing is expected to be around 20%.
[...]
We expect the XP4 on the shelves in the second half of October.
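The ~20% hit quoted for anti-aliasing makes more sense for multisampling than supersampling. A rough relative cost model (my own illustration of the general SSAA/MSAA difference, not Trident's implementation):

```python
# Rough per-pixel cost of the AA modes the interview mentions.
# Assumption (mine): shading cost scales with shaded samples, and
# framebuffer traffic scales with stored samples.

def ssaa_cost(samples):
    # Supersampling shades, textures, and stores every sub-sample.
    return {"shading": samples, "framebuffer": samples}

def msaa_cost(samples):
    # Multisampling shades once per pixel but still stores every
    # sub-sample, so compression is what keeps the hit small.
    return {"shading": 1, "framebuffer": samples}

for n in (2, 4):
    print(f"{n}x SSAA: {ssaa_cost(n)}   {n}x MSAA: {msaa_cost(n)}")
```

A 4x supersampled frame is basically a 4x fill-rate hit, so a 20% figure would point at multisampling plus the bandwidth-reduction tricks they list elsewhere in the interview.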
 
Filled with a bit of marketing crap, but a good read nonetheless. Thanks! :)

Edit: So does it have a T&L unit or what?

"Our performance target for the XP4 T3 is 80% of GeForce4 Ti 4600 over the majority of industry-standard benchmarks that are commonly used by popular magazines and websites."
^^ that sounds like marketing speak for 3dmarks.. :-?
 
So does it have a T&L unit or what?

lol, I asked that same thing once I realized what his reply was. I'll get more information as far as that goes shortly.

^^ that sounds like marketing speak for 3dmarks..

Actually it sounds more like Quake3 and 3DMark ;). I heard that they had a meeting with the Epic guys, so maybe they will perform fairly well in the Unreal benchmark.
 
I found the part about the lower number of transistors for the additional pixel pipelines particularly interesting.

Also interesting was that the XP4 certainly appears NOT to be a deferred renderer (what most people think of when somebody says "tiler"). Instead, it doesn't look like anything more than what ATI has put forth with Hyper-Z.

Anyway, back to the lower number of transistors for the additional pixel pipes. It stands to reason that in order to optimize across the rendering of different pixels, those pixels must be similar in some fashion or another. The most obvious similarity is that they have to be in the same triangle. This suggests that the XP4 is not a forward-looking design: as triangle counts increase and triangles shrink, its method of reducing transistor counts will become useless.

One optimization that would be absolutely lossless would be z-buffer compression. It shouldn't be too hard to use some sort of z-buffer compression between the pixel pipelines, provided they all work on the same triangle, in order to save on die space there.
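To illustrate the idea (this is a generic sketch of lossless Z compression, not anything Trident has described): within one planar triangle, depth varies nearly linearly across a tile, so per-pixel deltas from a base value need far fewer bits than full 24-bit Z values.

```python
def compress_tile(z_values):
    """Lossless base + delta encoding of a tile's depth values."""
    base = min(z_values)
    deltas = [z - base for z in z_values]
    return base, deltas

def decompress_tile(base, deltas):
    return [base + d for d in deltas]

# A 2x2 quad from a single flat-ish triangle: depths cluster tightly,
# so each delta fits in 2 bits instead of a full 24-bit Z value.
tile = [100000, 100002, 100001, 100003]
base, deltas = compress_tile(tile)
assert decompress_tile(base, deltas) == tile  # exactly recoverable
print("base:", base, "deltas:", deltas)
```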

As for the actual rasterization, the only possible way that I could see to save on transistor count would be at a reduction in image quality (note how the representative specifically avoided that?). That is, sharing rasterization power would result in effectively lower MIP lod settings. Perhaps Trident has found a way around this, but I somehow doubt it.

I will be extremely interested to see just how good the image quality of the Trident's texture rendering is. I am also very interested to know what the status of Trident's FSAA and anisotropic filtering support is. If the XP4 doesn't support either of these, or simply doesn't do either well (i.e. no aniso, basic supersampling AA), then the XP4 is certainly not in the same class as the GeForce4 Ti cards, as Trident would have you believe.

Update: I just looked over the rest of the interview, and it looks like the XP4 may well have at least decent AA support. We'll see...but I still didn't see any mention of anisotropic.
 
One optimization that would be absolutely lossless would be z-buffer compression.

Apparently it does have Zbuffer compression, "The XP4 implements just about every trick of the trade in memory bandwidth reduction, including occlusion culling, texture compression, Z-compression, vertex caches"

But perhaps you meant something else... What I'd like to know is how well this technology scales when you, say, want 8 pipelines. I'll add it to my list of future questions for them, I guess...
 
Looks like that Trident guy is trying to be like Dave (Kirk). Interviews where marketing trumps truth, and repeated jabs are taken at ATi. Poor ATi.

Of course, the fact that they seem to be putting inordinate amounts of effort into marketing hype is not indicative of poor or good performance... XP4 could turn out to be a GeForce3, or a Parhelia.
 
"Improving memory bandwidth efficiency is exceedingly important if you don’t want to waste your money in 256-bit memory bus like ATI Radeon 9700."

"unreliable 256-bit wide memory"
LOL! Is this the same guy who said that ATi and nvidia should start buying Trident stock and take them over? Ouch! My side is hurting from laughing so hard :D
 
LOL! Is this the same guy who said that ATi and nvidia should start buying Trident stock and take them over? Ouch! My side is hurting from laughing so hard

I also read that and had the same reaction ;). And I believe there was something about a poison pill, so a buyout wouldn't be possible. Yes, there's lots of hype around this, but hopefully the card will turn out good. Either way, prices for DX8 boards should drop almost to the XP4's level by October.
 
OpenGL guy said:
"Improving memory bandwidth efficiency is exceedingly important if you don’t want to waste your money in 256-bit memory bus like ATI Radeon 9700."

"unreliable 256-bit wide memory"
LOL! Is this the same guy who said that ATi and nvidia should start buying Trident stock and take them over? Ouch! My side is hurting from laughing so hard :D

Obviously they read on all the message boards that ATI was going to "own" them, and got the wrong idea... :LOL:
 
andypski said:
Obviously they read on all the message boards that ATI was going to "own" them, and got the wrong idea... :LOL:
:LOL: I guess their marketing folks did their research well... they just had a slight misunderstanding with the use of the word "own". :D
 
The first one (thank you, Babelfish) gives an outline of the specs and price and then has pictures of models from three different manufacturers: Jetway, Chaintech, and someone else who has already been discussed.

The second one just rehashes Anandtech's UT2003 XP4 benchmarks.
 