RV250 benchmark - slower???

Apart from the naming of the new product, which I believe is ATi's right to choose however they want, I am more interested in its performance, or at least its performance/price.

From the numbers given, it looks like it's quite a bit slower than even the 8500LE, despite its substantially higher clockspeeds.

I remember rumors from a while back (a month or two ago) that the RV250 was going to use a 2-pipe design. Now, I know someone mentioned the RV250 being a 4-pipe design in this case, but if we look at the numbers and the basics of economics and marketing, wouldn't a 2-pipe design make much more sense, and more money, for ATi?

1) Fewer pipelines = reduced manufacturing cost

2) Higher integration of formerly external components = higher ASP (and, with the probably simpler board designs, the whole product likely ends up even less expensive than the R8500)

3) Marketing terms can be kept and recycled (you don't have to drop Hyper-Z II, for example)

4) The more refined 0.15µ process (with an optional move to 0.13µ in a few months) allows for higher clockspeeds, making up for the lower fillrate and, especially in bandwidth-limited situations, guaranteeing more efficient use of resources (see the quick sketch below)
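
To put rough numbers on point 4 (the clocks here are purely hypothetical, just to illustrate the trade-off):

```python
# Peak pixel fillrate = pipelines x core clock.
pipes_4p, clock_4p = 4, 275e6   # R200-style 4-pipe part (hypothetical clock)
pipes_2p, clock_2p = 2, 400e6   # higher-clocked 2-pipe part (hypothetical)

print(pipes_4p * clock_4p / 1e6)  # 1100.0 Mpixels/s
print(pipes_2p * clock_2p / 1e6)  #  800.0 Mpixels/s on paper - the gap
                                  # shrinks when bandwidth is the real limit
```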

Well, if I were ATi, I'd probably do it this way, as it could earn them some serious money in the budget and office market.

(Sorry for my imperfect English, I'm just German.)
 
Quasar said:
Apart from the naming of the new product, which I believe is ATi's right to choose however they want, I am more interested in its performance...

Reducing a 4-pipeline ASIC to 2 pipelines isn't as easy as it sounds, but you are definitely right in thinking there will have been extensive optimisation of the design as well as process refinements. This board is apparently 25% cheaper to produce than its nVidia equivalent (NV17? NV18?), and no doubt the high level of integration and the consequently low PCB costs would have helped that. The daughtercards are also extremely cheap.

My guess is that the fixed-function (DX7) T&L engine got the chop. Early specs state a throughput of 50 million tris/sec, which is closer to the RV200's 40 million/sec than to the R200's (75/62.5 million).
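
For what it's worth, dividing those quoted peaks by the usual published core clocks (the clocks are an assumption on my part) gives a per-clock comparison:

```python
# Per-clock triangle throughput implied by the quoted peak figures.
# Core clocks are the commonly published ones - treat them as assumptions.
figures = {
    "RV250: 50 Mtri/s @ 275 MHz": 50e6 / 275e6,
    "RV200: 40 Mtri/s @ 290 MHz": 40e6 / 290e6,
    "R200:  75 Mtri/s @ 275 MHz": 75e6 / 275e6,
}
for name, rate in figures.items():
    print(f"{name} -> {rate:.2f} tri/clock")  # 0.18 vs 0.14 vs 0.27
```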

Am I right in thinking that, as we move towards a shader-based era, this will become less and less significant? Anybody?

MuFu.
 
Well, let's see if I got you right, MuFu:

With the geometry part chopped off, and given your subsequent reference to the shader era, you're implying that the DX7-style T&L units have been removed from the RV250, so that only vertex shaders remain to shade some vertices?

Well, that's definitely possible, but it wouldn't explain the lower performance compared to the R200.
 
Quasar said:
Well, let's see if I got you right, MuFu:

With the geometry part chopped off...

I didn't mean "completely removed" by "got the chop", sorry (although it got me thinking...). :D

I just meant a simplification/slimming-down of the T&L logic. Wouldn't reduced T&L throughput account for such performance losses?

Maybe they have switched to a single T&L unit that performs DX8 ops natively and emulates DX7; in that case, yes, they could remove the fixed-function logic at the cost of some performance in DX7 games. The benchmarks don't make this clear, though, so we'll have to wait and see. There also appears to be some bandwidth limitation coming into play. :-\
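
A rough sketch of what "emulating DX7" means here, if it helps (numpy as a stand-in for what the vertex unit would do; the matrix is just a placeholder):

```python
# The fixed-function "T" in DX7 T&L is a 4x4 matrix multiply per vertex -
# exactly what a DX8 vertex unit can do with four dot products.
import numpy as np

world_view_proj = np.eye(4)                # placeholder combined WVP matrix
position = np.array([1.0, 2.0, 3.0, 1.0])  # object-space vertex, w = 1

clip_space = world_view_proj @ position    # the whole transform stage
print(clip_space)
```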

MuFu.
 
I don't think paring down the T&L alone could cause such a decrease in speed, specifically the drop in FPS as resolution increases. Performance should be basically equal at all resolutions until the fillrate/bandwidth limit is reached, assuming triangle performance is the limiting factor. That shouldn't be the case in any of those benchmarks, though, so I don't think we can draw any real conclusions about T&L performance from these results.
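
A toy model of that argument (all the numbers are made up, just to show the shape of the curve):

```python
# If triangle setup is the bottleneck, FPS stays flat across resolutions;
# it only drops once the fill/bandwidth limit kicks in. Illustration only.
GEOMETRY_LIMIT_FPS = 200      # hypothetical triangle-throughput ceiling
FILLRATE = 1_100e6            # hypothetical peak texels/sec
OVERDRAW = 6                  # hypothetical textured passes per pixel

for w, h in [(640, 480), (1024, 768), (1280, 1024), (1600, 1200)]:
    fill_limit_fps = FILLRATE / (w * h * OVERDRAW)
    print(f"{w}x{h}: {min(GEOMETRY_LIMIT_FPS, fill_limit_fps):.0f} fps")
    # flat at 200 fps until 1280x1024, where the fill limit takes over
```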
 
Yeah, that's a bummer; I mentioned not being able to draw any conclusions from these benchmarks in my post. Not long now till we know for sure, I hope... :)

MuFu.
 
Maybe they decreased the size of internal caches - that would decrease transistor count at the expense of performance.
 
Could be... caches take up a lot of silicon. About 50 million transistors per MB, I believe.
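
That figure checks out if you assume standard 6-transistor SRAM cells:

```python
# ~50 million transistors per MB, assuming 6T SRAM cells and ignoring
# tag/control overhead.
bits = 1024 * 1024 * 8   # one MB = 8,388,608 bits
print(bits * 6 / 1e6)    # -> 50.3 (million transistors)
```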

MuFu.
 
opy already mentioned the lack of Hyper-Z, though I doubt that's it; I'm still sticking to my 2-pipe theory, which would save quite a bit more silicon real estate IMO.

BTW, MuFu, I didn't get you wrong with "chopped geometry"; I just expanded on it a little intentionally, because with a shader, why would one need a DX7-style T&L unit any more, especially when performance in those games is by no means geometry-limited...

edit:
BTW, changing cache sizes would be quite an effective way to reduce transistor count, granted. But considering the controller-logic optimizations and driver-level adjustments that would inevitably follow, especially given ATi's track record in drivers, I doubt that's the route they took.
 
Quasar said:
opy already mentioned the lack of Hyper-Z, though I doubt that's it; I'm still sticking to my 2-pipe theory...

Ah... I misunderstood you, but it did make me think about them removing the fixed-function unit completely, which is an intriguing idea. :D

I totally agree with your edit; I was going to say the same thing myself... the drivers would have to be re-optimised to take into account the changes in the caching/buffer system.

Controller-logic optimisation takes pure skill... I really hope they had some great people working on the R300 to make sure they got it right. Maybe some nVidia engineers... ;)

MuFu.
 
Well, the specs are out, and the previews hint at some advanced functionality with regard to shaders that can explain some of the "9xxx" rationale for the Radeon 9000.

However, the fact remains that the Radeon 9000 and 9000 Pro are currently benchmarking significantly slower (in almost all benchmarks) than an 8500. To me, that is a major issue of dishonesty in naming.

That said, it is confusing to refer to this in the context of the GF4 MX, since the features the GF4 MX lacks compared to any GF3 are rather profound (I've cringed several times in the past few weeks seeing posts from people wondering where their vertex/pixel shader effects were on their brand new GF4... MX), and that isn't the case with the 9000. A more apt analogy, in terms of the dishonesty it represents to the user, is the early Pentium 4 compared to the Pentium III, where people bought weaker systems because they were duped by the "bigger number". I think it will be less confusing to dump on ATi properly :LOL: in that context rather than continue to conflate it with the GF4 MX.
 
Bugger! I posted here yesterday that the 9000 had 1 TMU per pipe and no hardware Truform support, but then I deleted it because I thought the information was incorrect. N/mind... shows a great deal of mistrust on my part, lol. :D

Oh well...

MuFu.

P.S. I don't think the naming is dishonest at all; the 9000 is definitely one technology generation ahead of the 8500. Then again, I don't think it's that important (same goes for the GF4MX thing).

MARKETING SUCKS, always. That isn't going to change. :mad:
 
I don't like the naming convention myself. "9000" simply implies better performance than the 8500. On the other side of the coin, I think ATI putting the R300 in the same "9000" series as the RV250 does a disservice to the R300.

I can certainly understand ATI's reasons for going with the 9000 name, but I still don't like it.

That being said, the 9000 product itself is excellent for what it is: a value part with a street price of $100 and lower ($109 list price). It kicks the snot out of the GeForce4 MX and includes DX8 capability.

I was expecting a 2-pipe DX8 part for the RV250... because I didn't see how ATI could get value pricing out of a 4-pipe DX8 part on 0.15 microns. I was close... ATI cut out one texture unit per pipe. ;) So the end result is that the "3dfx texel rate" is in fact cut in half. In effect, the RV250 is about "half" the chip the R200 is in terms of theoretical texturing performance. It scores much better than half the performance of the R200 clock for clock, and that is to ATI's credit.
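
The back-of-envelope numbers behind that (using the published 275 MHz core clock for both the 8500 and the 9000 Pro; treat the clock as my assumption):

```python
# Theoretical texel rates: pipes x TMUs x clock.
clock = 275e6             # published core clock (assumption here)

r200  = 4 * 2 * clock     # 8500:     2200 Mtexels/s
rv250 = 4 * 1 * clock     # 9000 Pro: 1100 Mtexels/s
print(rv250 / r200)       # 0.5 -> the "3dfx texel rate" cut in half
```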

I've read some reviews that said it best: the trade-offs ATI made to cut transistors and create a "value" part are much better than what nVidia did with the 2-pipe DX7 part that is the GeForce4 MX.
 
I can certainly understand ATI's reasons for going with the 9000 name, but I still don't like it.

I tend to agree. However, if Anand's preview is anything to go by, it seems they may also introduce a cut-down R300 to fill the midrange, so I can understand why they would want this scheme: that part will probably occupy a 9200-9600 naming slot!
 
Again, IMO the biggest reason people in the 'know' did not like the GeForce4 MX naming scheme was that it was a DX7 card that didn't even support EMBM. The same critics can't say that about the RV250; this is the card we NEED in volume in the marketplace, along with the Xabre. The Dells and Compaqs will now ship with full DX 8.1 GPUs, which will hopefully get developers off this two-year-old DX7 garbage. :p
 
You might have a chance with Compaq (they offer an 8500 AIW as an option, but the rest are NVIDIA cards), but Dell seems to be an NVIDIA-only store. HP offers a 7500 AIW amongst NVIDIA cards, eMachines a ProSavage8 or Intel inside, and Sony has SiS only.

My guess is (simply because of the channels and business arrangements) that DX8 won't become mainstream until NVIDIA cans the current GF4MX and replaces it with a GF4MX worthy of the name "GF4" at a similar price point. Even then, it'll be years.

I would love to see DX8 gain mainstream acceptance, but it's going to take a while (either to crack the channel, or until NVIDIA changes their lineup).

(It used to be crappy DX5 cards from ATI that were in everything; now it's crappy DX7 cards from NVIDIA.)
 
Doomtrooper, bringing DX8 to the masses is good, but you have to realize that, despite you thinking otherwise, ATI *is* doing exactly the same thing as Nvidia: naming your rehashed previous-DX-generation product in the same category as your newest-DX-generation enthusiast card. Nothing you can say will change that fact, but hey, I don't even see it as a big deal at all. Whether it's GF4 (DX8)/GF4MX (DX7) or R9700 (DX9)/R9000 (DX8), there's no difference anymore (last year there was, when ATI clearly showed the value part was "older" tech by naming it 7500 instead of 8000).

You shouldn't be surprised to see "people in the know" not approving of ATI's "new" naming scheme; it's just as wrong as the whole GF4MX issue was. It sucks from an "honesty" point of view for both companies, especially further down the road, but we're not in church here.

IMHO, for the casual gamer, what's the difference? A GF4MX performed rather well in 99% of all games at its release, is dead cheap, and continues to perform rather well in the majority of cases. The argument that one generation of DX is a vastly more important leap than previous ones will degrade over time; DX7 was once a giant leap itself, and time changes perspectives! The same argument levelled against the GF4MX today will be used against the R9000 some day ("DX8 is old, we need DX9, these old cards are holding us back!"). The pity is that it took almost a year for a value DX8 part to come along; DX7 value parts were much faster to appear.

The first product to change this emerging trend of DXn/enthusiast vs. DXn-1/value (common practice now for what, over a year?) could be the NV31, but just how "value" this DX9 chip can be six months from now remains to be seen...

Note that this doesn't have anything to do with the RV250 not being a good product; it is one, a very good one in fact. Right now a DX8 value part is a good thing, but the next generation is already here as well, so it won't stay up-to-date for very long...
 
Gollum said:
Doomtrooper, bringing DX8 to the masses is good, but you have to realize that, despite you thinking otherwise, ATI *is* doing exactly the same thing as Nvidia...

Wow... I strongly disagree with you. The naming of the Radeon 9000 might be a bit dishonest, but it's nowhere near the caliber of the GeForce4 MX. There were lots of developers like Carmack and Tim Sweeney bashing the GeForce4 MX naming scheme, but I doubt you'll see the same thing for the Radeon 9000.

The one oversight in your argument is the timeframe. Both the Radeon 9000 and the GeForce4 MX were introduced at the end of the DirectX 7/beginning of the DirectX 8 era. The difference between the two is that the Radeon 9000 IS a DirectX 8-capable card. The GeForce4 MX is not, and consequently it is holding the industry back and is a non-futureproof purchase. That's why enthusiasts and developers are unhappy with the naming scheme.

I doubt the Radeon 9000 will encounter the same problem of "holding the industry back" in the future. By the time developers actually start using DirectX 9 (which is a long way off), the Radeon 10000 (an R300 MX, DX9) or whatever will be on the shelves, and the cycle will start over again. That's the difference between the two.
 