R420 benchmarks (not real - extrapolated results)

digitalwanderer said:
Either something doesn't add up, or nVidia is getting ready to buy back a big chunk of market share by taking a huge loss on 'em.

Can ya imagine if it's like the original 9500 Pro with moddability? I can almost see nVidia doing just that to win back the enthusiast crowd.

That's my preliminary conclusion as well. However, Huang has vowed to restore profit margins to their historical levels, and unless NVDA's yields have soared I can't see this being achieved with such a strategy. There must be something we aren't being told about the vanilla 6800.
 
Doomtrooper said:
OLD PS 2.0 stuff ?? If anything is old it is PS 1.1 which still dominates most game releases.


There is nothing 'old' about PS 2.0; it is about a year old and still waiting for some real uses. Far Cry is about the best example out there presently.
Then factor in the PS 2.0 profiles that allow longer instruction counts, and what we see is Nvidia trying to use the one marketing angle they have... SM 3.0. But as already said, this is no different from what ATI attempted with PS 1.4 vs PS 1.1: visually they are no different, and in this case SM 3.0 may actually be slower in some cases.
The truth of the matter is, there are zero SM 3.0 parts on the market but millions of PS 2.0 boards, still waiting for decent titles to expose them. :LOL:


Ok, maybe I meant older PS 2.0 stuff... nVidia is making a major effort to get developers onto the 3.0 bandwagon. Far Cry will have its patch; I believe that Stalker will be 3.0, and the EA LOTR RTS is supposed to be a 3.0 game. Probably others...

Yes, 2.0 is out there, but I believe that 2004 is the year of the rise of PS 3.0.

I believe a big difference between the 3.0 spec and 1.4 is that 3.0 was part of the original DX9 spec and 1.4 was not - didn't 1.4 come about with DX8.1? ATI could write to PS 3.0 if they want to... it is an open spec.

I am curious about one thing... hasn't it been a year for DX9? Before DX9, how long did we have DX8.1 and PS 1.4? To me this means that shader technology is just evolving to the next logical step.
 
kemosabe said:
That's my preliminary conclusion as well. However, Huang has vowed to restore profit margins to their historical levels, and unless NVDA's yields have soared I can't see this being achieved with such a strategy. There must be something we aren't being told about the vanilla 6800.
The big money from the NV4x architecture won't start coming in until the rest of the line is released (most likely this fall).
 
Chalnoth said:
The big money from the NV4x architecture won't start coming in until the rest of the line is released (most likely this fall).

The problem is that once you set your flagship product's price very low, it's difficult to raise prices in the future. You then have problems fitting in the rest of the line unless you drop those prices too. After all, who will buy an 8-pipe 6500 for $199 if you can buy a 16-pipe 6800U at $299 that gives you two or three times the performance? You end up making less margin on the mid/low end, and it actually hurts you when people buy the expensive high-end cards that you've heavily subsidised with non-existent margins.

In short, you can buy your way back into market share by cutting your prices, hurting your profit margins and subsidising your products, but that's only a short term gain in market share that costs you money, and in the long run devalues your product as you teach your market to expect your products to be cheap.
 
Heathen said:
Ok maybe I meant Older PS 2.0 stuff...

What older PS 2.0 stuff?

Exactly...we have

TR:AOD
Halo
Far Cry
??

Take two of the most anticipated upcoming titles, HL2 and Doom 3. Doom 3 is mostly a DX7/DX8-equivalent OpenGL engine, and HL2 is the most promising title with PS 2.0 shaders... how many more even boast pixel shader support?
Yes, developers can start working on PS 3.0 games, as visually they will look the same, and they should be completed just in time for the R500/NV50. Meanwhile, the games in development now can be patched with PS 3.0 all they want; visually, the PS 2.0 owner will not be missing out on anything except possibly speed... which is not an issue with an R420.
 
Bouncing Zabaglione Bros. said:
The problem is that once you set your flagship product's price very low, it's difficult to raise prices in the future. You then have problems fitting in the rest of the line unless you drop those prices too. After all, who will buy an 8-pipe 6500 for $199 if you can buy a 16-pipe 6800U at $299 that gives you two or three times the performance? You end up making less margin on the mid/low end, and it actually hurts you when people buy the expensive high-end cards that you've heavily subsidised with non-existent margins.
Sorry, but you really should look at past launches. This has not been a problem in the past. Besides, on high-end parts, the margin is pretty much always quite high. A quick example: The GeForce FX 5900 non-Ultra at $200 was a good fraction of the performance of the FX 5950 Ultra at more than twice the cost.

And you can't forget that as time goes on, it becomes easier and easier to reduce costs in producing these cards. That naturally increases margins as time goes on.

In short, you can buy your way back into market share by cutting your prices, hurting your profit margins and subsidising your products, but that's only a short term gain in market share that costs you money, and in the long run devalues your product as you teach your market to expect your products to be cheap.
The thing you have to realize is that the cash cow is still the low-end OEM parts (e.g. FX 5200). We're talking about parts that are built for low cost, low power consumption, and lots of features.
 
Doomtrooper said:
as visually the PS 2.0 owner will not be missing out on anything except possibly speed... which is not an issue with an R420
Not necessarily true. While it may be true that anything that is possible in PS 3.0 is also possible in PS 2.0, the algorithm required may either be just too slow to be worth it, or the developer may have so much of an easier time programming the PS 3.0 shader that he doesn't bother to write the appropriate PS 2.0 fallback that produces an identical image, but instead goes for one that is more of a "hack," to save on either performance or development time.

Now, I don't know how often this will occur, but it does seem that at first we won't see visual differences. It'll take a while for developers to feel that they shouldn't spend the time for PS 2.x hardware.
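Chalnoth's fallback trade-off can be sketched abstractly. Below is a minimal Python analogy (not real shader code - these models are written in HLSL or shader assembly; the function names and constants here are made up for illustration): SM 3.0's dynamic branching lets a pixel skip the expensive path entirely, while a typical SM 2.0 fallback evaluates both paths and blends, producing the same image at a higher per-pixel cost.

```python
def expensive_lighting(pixel):
    # stand-in for a long specular / normal-mapping computation
    return 1.5

def shade_sm3(pixel, in_shadow):
    # SM 3.0 style: a true per-pixel branch skips the costly path entirely
    if in_shadow:
        return pixel * 0.2                      # cheap ambient term only
    return pixel * expensive_lighting(pixel)

def shade_sm2(pixel, in_shadow):
    # SM 2.0 style: no dynamic branching, so evaluate both sides and blend
    lit = pixel * expensive_lighting(pixel)     # always paid for
    shadowed = pixel * 0.2
    mask = 0.0 if in_shadow else 1.0
    return shadowed * (1.0 - mask) + lit * mask
```

Both versions return identical results for every pixel - which is the thread's point that the difference is speed, not image quality - but the SM 2.0 path always pays for the expensive computation, so a developer short on time might replace it with a cheaper "hack" instead.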
 
Chalnoth said:
It'll take a while for developers to feel that they shouldn't spend the time for PS 2.x hardware.

That's exactly the point: how long will this take? Will they pick up SM 3.0 fast enough that the R420's "inferiority" will be obvious to end-users, or will the transition take long enough that, once SM 3.0 is truly here in all its glory, ATi can say "me too, and then some" with the R500?
 
Chalnoth said:
Sorry, but you really should look at past launches. This has not been a problem in the past. Besides, on high-end parts, the margin is pretty much always quite high. A quick example: The GeForce FX 5900 non-Ultra at $200 was a good fraction of the performance of the FX 5950 Ultra at more than twice the cost.

1) The MSRP on the 5900 was not $200. In fact, I think at launch the 5900XT was $229.

2) The 5950 was a refresh part, released after the 5900 Ultra as a new high end.

I don't know that profit margins were particularly great on the FX line; are you sure that's a good example?
 
anaqer said:
That's exactly the point: how long will this take? Will they pick up SM 3.0 fast enough that the R420's "inferiority" will be obvious to end-users, or will the transition take long enough that, once SM 3.0 is truly here in all its glory, ATi can say "me too, and then some" with the R500?
1. End-users have purchased video cards with more features with no evidence of improvement in the past.
2. OEM's typically seem to care more that the features are there than that they'll be used. Particularly since SM 3.0 was pushed by Microsoft, I expect OEM's to really latch on to SM 3.0 parts.

Anyway, here is what I expect:
In the near-term, I expect the NV4x to have a performance advantage in new SM 2.0 games (starting with Far Cry); that is, when using SM 3.0 they'll be faster than when using SM 2.0, but there will be no visual difference.

As SM 2.0 usage increases over the course of the year, so will the usage of SM 3.0, and thus the NV4x will show more advantage. This will make the NV4x look even more attractive for future titles.

In about a year's time or so, we may start to see a game or two that doesn't bother to replicate all effects perfectly in PS 2.0, a trend that will only increase as time goes on.
 
Chalnoth said:
anaqer said:
That's exactly the point: how long will this take? Will they pick up SM 3.0 fast enough that the R420's "inferiority" will be obvious to end-users, or will the transition take long enough that, once SM 3.0 is truly here in all its glory, ATi can say "me too, and then some" with the R500?
1. End-users have purchased video cards with more features with no evidence of improvement in the past.
2. OEM's typically seem to care more that the features are there than that they'll be used. Particularly since SM 3.0 was pushed by Microsoft, I expect OEM's to really latch on to SM 3.0 parts.

Anyway, here is what I expect:
In the near-term, I expect the NV4x to have a performance advantage in new SM 2.0 games (starting with Far Cry); that is, when using SM 3.0 they'll be faster than when using SM 2.0, but there will be no visual difference.
Show me a single shader from Far Cry that compiles differently under SM 3.0 compared to SM 2.0. The fact that the engine supports it is useless unless there is content to take advantage of it.

-FUDie
 
Heathen said:
Ok maybe I meant Older PS 2.0 stuff...

What older PS 2.0 stuff?

Oh come on. Halo was so last month. Do try and keep up!!!


heh, all kidding aside, I hardly see developers abandoning PS 2.0 at this point when there are so many R300 and NV3x cards around. If anything, developers can only just now start to target PS 2.0 as the standard performance level for gamers.

PS 3.0 is a year or more away from mainstream use, IMO.

Yeah, there will be PS 3.0 demos and a few games that may support it with a few small effects, but PS 2.0 is going to be around for quite a while.
 
A bit late, but very interesting:

DaveBaumann said:
IMO. NV40 vs R420 = "scrap".
DaveBaumann said:
Yields on the NV3x line of chips appear to have been a bugbear for NVIDIA over the past year, and that is something they are hoping to address with NV4x. Jen-Hsun spoke of "heavily patented technology" utilised in the design of NV4x in order to bring the yields up, but there was no expansion on just what this technology was. They noted that due to the fast-cycling nature of the market they are not able to get the same types of benefits as CPU vendors by refining a design to bring costs down. It was noted that for these reasons, and due to yield issues that cannot be resolved in the timescales available for one platform, NVIDIA actually has the largest "scrap" (wasted die) of any company in the semiconductor industry, which is probably a good reason for their margin performance over the past year. Jen-Hsun made note of his desire to see a "100% yield coming out of TSMC" - while NVIDIA talked up IBM last year, it's clear that TSMC is rapidly becoming NVIDIA's primary foundry partner in a more vocal sense, as the number of processors produced by IBM for NVIDIA is still likely to be dwarfed by those that have continued to be produced by TSMC.
 
C'mon guys - it'll be a "scrap" because the Pro and Ultra are evenly matched, performance-wise. No other crazy reasons, lol.
 
MuFu said:
C'mon guys - it'll be a "scrap" because the Pro and Ultra are evenly matched, performance-wise. No other crazy reasons, lol.

Frankly, it's still somewhat difficult to imagine that ATI has enough raw power in R420 to match the 6800 Ultra with a 12-pipe part. Brand me a skeptic, but that would make the XT an NV40-killer performance-wise, which seems like a very tall order. :?
 
kemosabe said:
Frankly it's still somewhat difficult to imagine that ATI has enough raw power in R420 to match the 6800 Ultra with a 12-pipe part.

I don't see how it's that difficult. We know (at least, it's widely speculated) that ATI is somewhere in the 500 MHz range for the core. At "only" 533 MHz, the fill rate essentially meets that of the 400 MHz, 16-pipe 6800 Ultra.

This is assuming that both parts have similar "efficiency" for texturing and fill rate... and roughly the same pixel shading ops per clock. There's no way we'll know until benchmarks come out, but it's not a stretch to imagine that ATI could be as efficient, or a little more efficient, than nVidia.

That being said, I anticipate ATI's Pro to be a "notch lower" in bandwidth. (The current rumor is 450 MHz RAM for the Pro vs. nVidia's 550 MHz RAM for the Ultra.) I doubt that ATI has enough of a memory-efficiency advantage to make up for that deficit. So in the end, I am currently anticipating the following with respect to performance:

1) ATI Pro pixel shader performance to be generally on par with 6800 Ultra. Win some, lose some.

2) ATI Pro "traditional app AA performance" to generally be lower than the 6800 Ultra.
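The fill-rate reasoning above is simple arithmetic. A quick sketch using the speculated clocks from the post (and assuming both parts deliver one texel per pipe per clock, which is an assumption, not a confirmed spec):

```python
def fillrate_mtexels(pipes, core_mhz):
    # peak fill rate in megatexels/s, assuming one texel per pipe per clock
    return pipes * core_mhz

r420_12pipe = fillrate_mtexels(12, 533)   # 6396 Mtexels/s
nv40_16pipe = fillrate_mtexels(16, 400)   # 6400 Mtexels/s
# a 12-pipe part at ~533 MHz essentially matches a 16-pipe part at 400 MHz
```

So on raw fill rate alone, the 12-pipe-vs-16-pipe gap disappears once the clock difference is factored in; everything else comes down to per-clock efficiency.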
 
Joe DeFuria said:
kemosabe said:
Frankly it's still somewhat difficult to imagine that ATI has enough raw power in R420 to match the 6800 Ultra with a 12-pipe part.

I don't see how it's that difficult. We know (at least, it's widely speculated) that ATI is somewhere in the 500 MHz range for the core. At "only" 533 MHz, the fill rate essentially meets that of the 400 MHz, 16-pipe 6800 Ultra.

This is assuming that both parts have similar "efficiency" for texturing and fill rate... and roughly the same pixel shading ops per clock. There's no way we'll know until benchmarks come out, but it's not a stretch to imagine that ATI could be as efficient, or a little more efficient, than nVidia.

The preliminary results indicate the 6800 pixel shaders to be on par with (or about 5% faster than) the 9800's, for current benchmarks. Assuming ATi has improved performance (by tuning their current design, if nothing else) seems a safe bet.

That being said, I anticipate ATI's Pro to be a "notch lower" in bandwidth. (The current rumor is 450 MHz RAM for the Pro vs. nVidia's 550 MHz RAM for the Ultra.) I doubt that ATI has enough of a memory-efficiency advantage to make up for that deficit. So in the end, I am currently anticipating the following with respect to performance:

1) ATI Pro pixel shader performance to be generally on par with 6800 Ultra. Win some, lose some.

Better.

2) ATI Pro "traditional app AA performance" to generally be lower than the 6800 Ultra.

But very probably better image quality. And their current best mode (6x) will almost certainly be better than the NV4x 4x, while being faster than the NV4x 8x mode.

Add the higher clockspeed to the mix, and...
 
Btw, if that rumored compression technique can do nice things for their bandwidth, ATi might even equal nVidia there, while using cheaper RAM chips. It doesn't even have to make that much of a difference, as long as the difference is applied to something that behaves like a bottleneck right now.

:D

Well, we'll just have to wait and see, I guess.
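For what it's worth, the rumored memory clocks translate into a concrete bandwidth gap that any such compression scheme would have to hide. A hedged back-of-envelope calculation, assuming both cards use a 256-bit bus and the quoted 450/550 MHz figures are physical DDR clocks (neither assumption is confirmed in the thread):

```python
def bandwidth_gbs(effective_mhz, bus_bits=256):
    # peak memory bandwidth in GB/s for a given effective data rate
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

pro = bandwidth_gbs(450 * 2)     # ~28.8 GB/s for the rumored Pro
ultra = bandwidth_gbs(550 * 2)   # ~35.2 GB/s for the rumored Ultra
# a roughly 20% deficit for the Pro under these assumptions
```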
 