Trident: "XP4 T3 will achieve 70% of Radeon 9700 perf&q

Gollum said:
Also, Anand said the XP4 has "a base level of DirectX 9 support" regarding the shaders, so I suspect its pixel and vertex shaders probably go beyond the DX8.1 spec and might even meet the minimal DX9.0 spec.
I can't quote the DX9 specs by heart, but I am pretty sure that a floating-point pixel pipeline is part of PS 2.0. If Trident had such features, it seems like they would be shouting about it, not saying things like "compliant", which could be misunderstood.
 
The XP4 desktop product family is available in three versions:
- XP4 T3 supporting 128MBytes of up to 700MHz DDR memory with a 128-bit bus;
- XP4 T2 supporting 64MBytes of 500MHz DDR memory with a 128-bit bus;
- XP4 T1 supporting 64MBytes of 500MHz DDR memory with a 64-bit bus.

The XP4 T3 tops the performance chart with a 300MHz graphics engine and enables PC OEMs and card makers to deliver advanced DX8.1/9.0 graphics cards with 128MBytes of memory to end users for less than $99 (suggested retail price). The 128-bit Double-Data-Rate (DDR) memory system reaches a bandwidth of up to 11.2 Gbytes/sec and the 3D graphics engine achieves a peak performance of 1.2 billion pixels/sec.

The XP4 T3 has a maximum power dissipation of less than four watts. This is only a fraction of that of competing alternatives, improving chip reliability in typical consumer-oriented operating environments where overheating could be the cause of system failure.

The XP4 T2 provides mainstream DX8.1/9.0 performance with a 250MHz graphics engine and reduces the end-user price of a 64MBytes graphics card to less than $79 (suggested retail price). XP4 T2 memory bandwidth peaks at 8.0 Gbytes/sec and 3D graphics performance hits 1 billion pixels/sec, rivaling other high-end, competitive products.

The XP4 T1 brings entry-level DX8.1/9.0 graphics cards down to a rock-bottom price of less than $69 (suggested retail price), also with a 250MHz graphics engine but with a 64-bit memory bus for lowest cost. The XP4 T1 memory bandwidth reaches 4.0 Gbytes/sec.
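
Quick sanity check on those quoted peaks, by the way; they fall straight out of the listed specs. The 4-pixels-per-clock figure is my inference from the claimed fill rates (1.2e9 / 300e6 = 4), not something stated in the release:

Code:
/* Sanity check of the quoted XP4 peak figures.  Pixels-per-clock
   is inferred from the claimed fill rates, not a published spec. */
#include <stdio.h>

int main(void)
{
    /*                core MHz, eff. mem MHz, bus bits */
    double t3[3] = { 300, 700, 128 };
    double t2[3] = { 250, 500, 128 };
    double t1[3] = { 250, 500,  64 };
    double *v[3] = { t3, t2, t1 };
    const char *n[3] = { "T3", "T2", "T1" };

    for (int i = 0; i < 3; i++)
        printf("XP4 %s: %.1f GB/s, %.2f Gpixel/s\n", n[i],
               v[i][1] * 1e6 * v[i][2] / 8 / 1e9,  /* bandwidth: MHz x bytes/transfer */
               v[i][0] * 1e6 * 4 / 1e9);           /* assumed 4 pixels/clock          */
    return 0;
}

This reproduces 11.2/8.0/4.0 Gbytes/sec and 1.2/1.0 Gpixel/sec exactly, so at least the arithmetic in the press release is internally consistent.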

http://www.tridentmicro.com/press/R...nt&SmallClassName=release&SpecialID=0

I don't know how they can make such a claim. I will eat my hat if the XP4 is DX9.0. :eek:

I don't own a hat though ;)
 
The lack of an FP pipeline means no DX9 compliance. Also, it is my opinion that they cannot support DM or VS2.0 either. Why? A value chip with a laughable transistor count, a chip that is built around the concept of logic sharing, will be hard pressed to squeeze in DX8 units, let alone the more complicated and transistor-consuming DX9 ones. I think that the "DX8.1/DX9" quote is some more aggressive marketing talk that will burst like a bubble with review sites and experienced users, but will be a great opportunity for Trident to sell their chips to the rest of the world, i.e. to the people who fall for such marketing.
 
Testiculus Giganticus said:
The lack of an FP pipeline means no DX9 compliance. Also, it is my opinion that they cannot support DM or VS2.0 either. Why? A value chip with a laughable transistor count, a chip that is built around the concept of logic sharing, will be hard pressed to squeeze in DX8 units, let alone the more complicated and transistor-consuming DX9 ones. I think that the "DX8.1/DX9" quote is some more aggressive marketing talk that will burst like a bubble with review sites and experienced users, but will be a great opportunity for Trident to sell their chips to the rest of the world, i.e. to the people who fall for such marketing.

I don't see why you couldn't create a fully DX9-compatible chip with just 30 million transistors. After all, you could implement the entire DX9 pipeline on a CPU that has just as many transistors or fewer. Of course it wouldn't be fast, but that is a different issue from actually supporting the features.
I'm not saying that it does (support DX9 features), just that you can't discredit it based solely on transistor count.

I hope they are able to deliver.
 
Anyone who spends a minute looking into what Trident has managed to accomplish in the past few years will find themselves with 55 seconds left over, and a much better understanding of just how small the chances are that the XP4 will be worth a damn.
 
I don't think that is a valid comparison. The only thing from DX9 that could/may/will be implemented by means of the host CPU in a VIABLE manner is VS2.0. CPUs have a different design and different purposes. They are still far more programmable, almost infinitely so, and the goal with CPUs each generation is to gain speed, not features, so the transistor count remains fairly low. VPUs, on the other hand, are not as programmable, but are slowly getting there, so the purpose is mainly to add FEATURES at better-than-last-gen speed. Transistor counts will increase until VPUs become programmable enough. That is why I am more than skeptical in this matter, and, considering Trident's track record, I find it impossible to believe that they found the means to do such a complicated VPU with such a low transistor count.
 
I think you missed my point.
You could (no maybe here) implement a software rasterizer with the entire DX9 feature set. It certainly wouldn't be fast. My point is that you could create a DX9-class card with that low a transistor budget.
I do agree, however, that you would be very hard pressed to get full DX9 compatibility and the speed they are claiming with that sort of transistor budget.
 
Nexiss said:
I don't see why you couldn't create a fully DX9-compatible chip with just 30 million transistors. After all, you could implement the entire DX9 pipeline on a CPU that has just as many transistors or fewer. Of course it wouldn't be fast, but that is a different issue from actually supporting the features.
I'm not saying that it does (support DX9 features), just that you can't discredit it based solely on transistor count.

I hope they are able to deliver.

I have yet to see pixel shader effects (1.0, let alone 2.0) run successfully on a CPU.
 
While I doubt that (you have probably seen some more 'mundane' things possible with pixel shaders implemented in old software engines), it is beside the point. Just because you haven't seen it doesn't mean it isn't possible.

I really don't see what is so hard to understand about this.
The CPU is a general-purpose processor. You tell it what to do and it does it. You tell it to do this or that with some texels and pixels and it'll do it. The entire DX9 pipeline could be implemented on a CPU. That doesn't mean you would want to, of course, but it is possible.
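
To illustrate, and purely as a sketch of my own (not anything Trident is doing): a PS 2.0-style mad on a 4-float register is just ordinary C on a CPU. A full software rasterizer would interpret or compile whole shader programs this way, slowly but correctly:

Code:
/* A PS 2.0-style "mad r0, r1, r2, r3": per-component
   multiply-add on 4-float registers, done on the CPU. */
typedef struct { float x, y, z, w; } float4;

static float4 ps_mad(float4 a, float4 b, float4 c)
{
    float4 r;
    r.x = a.x * b.x + c.x;
    r.y = a.y * b.y + c.y;
    r.z = a.z * b.z + c.z;
    r.w = a.w * b.w + c.w;
    return r;
}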

And now we are straying from the point of this topic...
 
Nexiss said:
You could (no maybe here) implement a software rasterizer with the entire DX9 feature set. It certainly wouldn't be fast. My point is that you could create a DX9-class card with that low a transistor budget.
Is this what you think Trident did?
I do agree, however, that you would be very hard pressed to get full DX9 compatibility and the speed they are claiming with that sort of transistor budget.
If you could get 70% of the performance of the Radeon 9700 via software emulation (which is what you are describing), then don't you think people would have done this instead of making such a complex chip? CPUs are very programmable, but they don't lend themselves to massively parallel operations unless they are specifically designed to do so. This is one reason why CPUs aren't replacing video cards. Also, a 30 million transistor CPU isn't going to be parallel enough to give reasonable results as a 3D rasterizer trying to compete with the Radeon 9700.

Think about a Pentium 4 trying to execute pixel shader operations in SSE2. How many cycles does it take to compute a single MAD (multiply and add) operation? Then you can figure out the maximum number of common single-instruction shaders you can execute in a second. I bet it doesn't compare too well to any reasonably fast 3D chip. Don't forget I didn't count vertex operations, depth testing, fog, or alpha blending (to name a few).
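
To put rough numbers on that (the clock speed and cycle count here are ballpark assumptions, not measurements):

Code:
/* Ballpark upper bound on SSE pixel-shader math on a P4.
   Clock and cycle figures are assumed, for illustration only. */
#include <stdio.h>
#include <xmmintrin.h>

int main(void)
{
    /* One float4 MAD = one mulps + one addps in SSE. */
    __m128 a = _mm_set1_ps(1.0f);
    __m128 b = _mm_set1_ps(2.0f);
    __m128 c = _mm_set1_ps(3.0f);
    __m128 r = _mm_add_ps(_mm_mul_ps(a, b), c);

    float out[4];
    _mm_storeu_ps(out, r);
    printf("mad -> %.1f\n", out[0]);          /* 5.0 */

    double clock_hz       = 2.4e9;  /* assumed P4 clock                 */
    double cycles_per_mad = 4.0;    /* assumed throughput, no mem stalls */
    printf("best case: ~%.2f billion float4 MADs/sec\n",
           clock_hz / cycles_per_mad / 1e9);
    /* ...and that is before texture fetch, Z test, fog, blending,
       or even walking the triangle. */
    return 0;
}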
 
Miscommunication has to be the cause of some of the biggest problems in human history...

OpenGL guy said:
Nexiss said:
You could (no maybe here) implement a software rasterizer with the entire DX9 feature set. It certainly wouldn't be fast. My point is that you could create a DX9-class card with that low a transistor budget.
Is this what you think Trident did?
Hardly. I don't think they'll have a full DX9 feature set, and right now I would question whether they will have any (DX9 features). I put my main point in bold.

My main point isn't that Trident might be using the CPU to achieve DX9 features (I don't think they are), merely that you can't dismiss them having a DX9 feature set based on transistor count alone (as the original poster this was intended for seemed to be doing). To demonstrate this point I brought up CPUs as an example of a low-transistor-count processor that could achieve full DX9 compatibility (with the obvious sacrifice of speed).
Nothing more.

I do agree, however, that you would be very hard pressed to get full DX9 compatibility and the speed they are claiming with that sort of transistor budget.
If you could get 70% of the performance of the Radeon 9700 via software emulation (which is what you are describing), then don't you think people would have done this instead of making such a complex chip? CPUs are very programmable, but they don't lend themselves to massively parallel operations unless they are specifically designed to do so. This is one reason why CPUs aren't replacing video cards. Also, a 30 million transistor CPU isn't going to be parallel enough to give reasonable results as a 3D rasterizer trying to compete with the Radeon 9700.

Think about a Pentium 4 trying to execute pixel shader operations in SSE2. How many cycles does it take to compute a single MAD (multiply and add) operation? Then you can figure out the maximum number of common single-instruction shaders you can execute in a second. I bet it doesn't compare too well to any reasonably fast 3D chip. Don't forget I didn't count vertex operations, depth testing, fog, or alpha blending (to name a few).
Once again you are missing my point, but hopefully what I wrote above is enough to clarify.
 
Nexiss said:
Once again you are missing my point, but hopefully what I wrote above is enough to clarify.
Sorry, I thought you were trying to say this is what you thought Trident could be doing. Everything is possible (within reason) on a CPU, but getting reasonable performance is often an issue :)
 
demalion said:
1024x768x32 with no aniso and no AA with pixel and vertex shader support in hardware is by no means ugly, and not at all "stupid". :rolleyes:

It absolutely is, if you're attempting to say it's as good as a GeForce4 Ti 4400, or 70% as good as the Radeon 9700.

I own a GeForce4 Ti 4200, and almost never run without aniso or FSAA. If the "average" user actually believes that Trident's claims are correct (assuming their aniso/FSAA isn't up to par... which is probably a very good assumption given the very small transistor count), then they have been fooled.

That said, I have been saying for some time that I'd like to see a low-end DX8 card. If Trident can indeed produce stable and compatible drivers for these video cards, then they will be good for the gaming market as a whole.

I just have problems with the claims of performance close to their competitors' high-end products if they cannot produce the FSAA/aniso performance as well.
 
Did anyone here consider that scoring 70 or 80% of a competitor's performance can actually be quite bad, depending on the benchmark? Say you take a game that is pretty much completely CPU-limited and your flashy new Trident XP scores 70 or 80% of the competitor's... well, that means your performance is bad, since the CPU is doing a lot of overhead work that the competitors do not need.

When Marketing makes such statements you always need to take it with a huge grain of salt, since you do not know the data they are basing their claims on. Without knowing the actual application, system used, drivers used, settings used, etc... it's impossible to be impressed or disappointed by the claim.

Take a game like Morrowind, which I believe is fairly slow due to CPU limitations; if they claim 70 to 80% for that it would be unimpressive.
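
A toy model of the frame time makes the point (all numbers are invented, just for illustration):

Code:
/* Toy model: in a CPU-limited game the frame time is all CPU work,
   so scoring only 70% of a competitor there implies your driver
   burns extra CPU time.  All numbers are invented. */
#include <stdio.h>

int main(void)
{
    double game_ms     = 14.0;  /* CPU-limited game work per frame */
    double comp_drv_ms = 2.0;   /* competitor's driver overhead    */
    double comp_fps    = 1000.0 / (game_ms + comp_drv_ms); /* 62.5 */

    double xp4_fps      = 0.70 * comp_fps;                 /* the claim */
    double xp4_frame_ms = 1000.0 / xp4_fps;
    printf("implied XP4 driver overhead: %.1f ms/frame vs %.1f ms\n",
           xp4_frame_ms - game_ms, comp_drv_ms);  /* ~8.9 ms vs 2.0 ms */
    return 0;
}

In other words, under these made-up numbers "70% of the competitor" in a CPU-limited title would mean the driver eats more than four times the competitor's CPU overhead, which is nothing to brag about.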

So for now let's keep our heads cool and not get excited about wild marketing claims made during an interview. Trident is in their "Hype Period"; they are trying to get people to notice them.

About using DX8.1/9.0 in their press release: it's the same trick as always. All DX7 boards can run DX9-interface games thanks to Microsoft's backwards compatibility. Does their hardware have any of the new DX9.0 features? Using the CPU for vertex processing they might have VS2.0, but with their claimed chip complexity I doubt they have FP pixel shaders. If they do, they will be pretty slow.
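
For the record, the DX9 runtime will happily run on any of those boards; what the hardware actually supports only shows up in the device caps. A minimal sketch against the real D3D9 API (error handling omitted, link with d3d9.lib):

Code:
/* "Runs under DX9" is not "has DX9 features": the caps tell you
   what the chip really exposes.  Sketch only, no error handling. */
#include <d3d9.h>

int has_ps20(void)
{
    IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
    D3DCAPS9 caps;
    IDirect3D9_GetDeviceCaps(d3d, D3DADAPTER_DEFAULT,
                             D3DDEVTYPE_HAL, &caps);
    IDirect3D9_Release(d3d);
    return caps.PixelShaderVersion >= D3DPS_VERSION(2, 0);
}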

I am still wondering about their "smart-tile (tm)". Is it:

a) Simple tile-based memory layout, i.e. render a triangle in little rectangular areas rather than scan line by scan line.
b) PowerVR front-end system: grab the whole scene, sort the geometry into screen-space tile areas and then render these tiles in an immediate-mode way without advanced early Z. If it's this system, one has to wonder about the efficiency of their sorting: do they have hardware for it, and how and where do they store the scene...
c) something else

My personal hunch is that it's just (a) dusted off and hyped up...
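
For reference, option (a) would look something like this (the 8x8 tile size is just an assumption):

Code:
/* Sketch of option (a): a tiled framebuffer layout.  Pixels within
   an 8x8 tile sit at consecutive addresses, so a small triangle's
   writes stay inside a few DRAM pages instead of being scattered
   across scanlines.  The 8x8 tile size is assumed. */
#define TILE     8
#define SCREEN_W 1024           /* must be a multiple of TILE */

static unsigned tiled_offset(unsigned x, unsigned y)
{
    unsigned tiles_per_row = SCREEN_W / TILE;
    unsigned tile_index    = (y / TILE) * tiles_per_row + (x / TILE);
    unsigned within_tile   = (y % TILE) * TILE + (x % TILE);
    return tile_index * TILE * TILE + within_tile;  /* in pixels */
}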

K-
 
Gollum said:
Since I don't seem to remember DX7-class hardware like a GF4MX or R7500 ever claiming DX8 "compliance"...

Have you ever read a GF4MX box? It certainly claims DX8.x on the ones I've looked at.

Goes back to what Kristof said above about backwards compatibility.
 
rubank said:
Wow, the amount of FUD :eek:
Some seem to get really nervous 8)

I don't think so. Look at the guys who spread FUD; they are nearly the same ones who spread NV30 rumors. So IMHO they are only upset that their "beloved" NV30 is on the back burner now and the XP4 gets all the recognition. :LOL:
 