The LAST R600 Rumours & Speculation Thread

So, you think ATi should have done just what nVidia did--and gone ahead and shipped its "DX10/Vista-ready" R600's even if they weren't "DX10/Vista-Ready"...? Based on what nVidia's getting at the moment as a result of doing that, I think ATi would be crazy to do it, myself. I mean, since they've seen how popular this move was for nVidia, *why* should they set themselves up for more of the same?

Well, the key difference with R600 is that Vista is already out, and R600's compatibility with it would be well defined. Hence ATI/AMD could simply make sure their claims for R600's Vista-readiness matched its actual ability. It was NVIDIA's forward-looking claims that caused the problem. ATI can simply launch R600 as Vista-ready, with *qualifications*. No need to delay the launch.

Anyway, I think it's easy to get distracted by G80's DX10 capabilities. G80 is attractive right now because it's by far the fastest DX9 card on the market. Even if it weren't DX10 capable, it would sell well, simply because it's the fastest by a long chalk. DX10 capability is a nice extra and gives peace of mind, but the idea that R600 will be sold largely on its DX10 capabilities is mistaken. R600 will be long dead by the time DX10 game engines are the majority.
 
Thanks for the fun reading guys. :)

I agree with those who say the delay is due to a hardware performance fix.

Perhaps a metal layers revision to improve clocks. AMD knows they need to come out the clear #1, and they are willing to delay as needed to ensure that.
 
Meh. I still think if you combine xbit with hkepc you get the true answer.

"To better align our strategy with current market opportunities. . . targeting a broader market segment in Q2".
 
AMD knows they need to come out the clear #1, and they are willing to delay as needed to ensure that.

I keep thinking about the very odd quote from ATi regarding the delay:

We are going to deliver a competitive configuration to market with an extremely attractive combination of performance, features and pricing, targeting a broader market segment in Q2

It sounds to me like they may be planning to cut R600 down to an (upper) midrange part and hope to dominate there.

If so, it could be a very risky strategy, as NV could have plenty of options available to compete with them by then.

Anyway, hopefully I completely misread the statement.
 
Thanks for the fun reading guys. :)

I agree with those who say the delay is due to a hardware performance fix.

Perhaps a metal layers revision to improve clocks. AMD knows they need to come out the clear #1, and they are willing to delay as needed to ensure that.

I still think it's a software issue: either Vista performance is not up to par, or their DX9 performance is not equal to the 8800's, or both. If it were a hardware respin, they could always use a few samples of the new spin that do work well to at least show everyone what it will do, and just have limited availability at launch. Delaying the whole editors' day for this does not make sense. We may never know the real answer for sure. DAAMIT!
 
Meh. I still think if you combine xbit with hkepc you get the true answer.

"To better align our strategy with current market opportunities. . . targeting a broader market segment in Q2".

I have a similar line of thought. In the last couple of years, post-R300, ATI usually came out with the high end that was competitive & more, only to get beaten in the middle by the likes of the 6800GT, 6600GT, 7600GT, etc.

Management perspective from AMD's side is probably getting ATI to finally give the mainstream more attention.
 
I have a similar line of thought. In the last couple of years, post-R300, ATI usually came out with the high end that was competitive & more, only to get beaten in the middle by the likes of the 6800GT, 6600GT, 7600GT, etc.

Management perspective from AMD's side is probably getting ATI to finally give the mainstream more attention.

Yes, it would certainly benefit AMD to pay more attention to the few notches below the very top end, IMO.

However, as I said a few pages ago, if you're going to make a shift in strategy of that sort, it seems neither necessary nor sensible to do so by canceling the launch event for your latest, greatest (and late) high-end part. It makes you look like headless chickens. Stick with the plan this time, make the strategic changes through the refresh and the R7xx family, and pretend that that's what you meant to do all along. That gives the shareholders the impression that you do actually know what you're supposed to be doing.
 
I have a similar line of thought. In the last couple of years, post-R300, ATI usually came out with the high end that was competitive & more, only to get beaten in the middle by the likes of the 6800GT, 6600GT, 7600GT, etc.

Management perspective from AMD's side is probably getting ATI to finally give the mainstream more attention.

Did they just realize this last week? That wouldn't exactly represent particularly bright planning.

I gotta think that if they could release R600 now, they would. Their board partners, particularly the ATI-only ones, have got to be desperate for it.

The GeForce 5900 Ultra wasn't exactly beating up on the 9800, but NVIDIA got it out there, and then followed it with the lower-cost XT (am I right with those names?). There is nothing that would prevent ATI from launching a 6800 GT-type part later and still getting R600 out now, barring problems of some sort - be they technical, performance-, or yield-related.
 
Meh. I still think if you combine xbit with hkepc you get the true answer.

"To better align our strategy with current market opportunities. . . targeting a broader market segment in Q2".

Yeah but that doesn't explain the "sudden" change in strategy. Where's the catalyst? (no pun intended :D)
 
I said AMD want the 30-40% improvement to beat just that, the 8900GTX.

It's delusional to think that a large group of top GPU specialists would not see a potential 40% performance fix during the 3 years they've been working on it and then suddenly notice it just a few weeks before release. It doesn't work that way.

Some games out there are likely to have shader programs that currently aren't efficiently compiled to the new architecture. A new release of the compiler could plug that hole (we have seen large increases in performance in the past for specific games), but that's the standard way compilers improve over time anyway.
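To make that concrete, here's a toy sketch of why compiler maturity can swing per-game shader throughput so much. It assumes, purely for illustration (this is not ATI's actual ISA), a 5-wide VLIW-style ALU where a weak compiler issues one operation per cycle and a mature one packs independent operations into the same bundle:

```python
# Toy illustration of why shader-compiler maturity matters on a wide
# VLIW-style architecture (hypothetical 5-wide bundles, not ATI's real ISA).
# Each op is (destination_register, set_of_source_registers).
ops = [
    ("r0", {"t0"}), ("r1", {"t1"}), ("r2", {"t2"}), ("r3", {"t3"}),
    ("r4", {"r0", "r1"}), ("r5", {"r2", "r3"}), ("r6", {"r4", "r5"}),
]

def schedule(ops, width):
    """Greedy list scheduler: pack ready ops into bundles of at most `width`."""
    done = {"t0", "t1", "t2", "t3"}  # shader inputs, live on entry
    bundles, pending = [], list(ops)
    while pending:
        # An op is ready once all its sources were produced in earlier bundles.
        bundle = [op for op in pending if op[1] <= done][:width]
        pending = [op for op in pending if op not in bundle]
        done |= {dst for dst, _ in bundle}
        bundles.append([dst for dst, _ in bundle])
    return bundles

print(len(schedule(ops, width=1)))  # 7 cycles: serial issue, 1/5 of the ALU used
print(len(schedule(ops, width=5)))  # 3 cycles: same shader, over twice as fast
```

The gap between the two schedules is exactly the kind of thing a driver release can claw back for a specific game, but it shows up shader by shader, not as an across-the-board jump.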

It's theoretically possible that their compiler was so bad that it lost 30% of theoretical performance across the board and that they're working on a fix, but it's not realistic: to get simulations going, that compiler must have been good enough for at least 2 years for common cases, to make reasonable performance evaluations on existing games.

Everybody who designs an instruction set for performance applications (GPU or not) analyzes cycle counts to death to make sure nothing is left on the table. We did such an exercise in college! ;) It's impossible that ATI didn't do this. This Inq story should be filed in the (rather large) 'magical thinking' drawer.

If I remember back when ATI first started playing with their memory controllers they were getting some crazy performance increases just by tweaking the controller. ... and they were getting boosts of 20-40% back then with just 4x and 6x AA in various games.

Are you claiming that they saw a 20-40% across-the-board performance increase with a new design? Could you point to some examples?

It's a nice rule of thumb that, for coherent traffic, you should be able to get a DRAM running at 70-80% of theoretical bandwidth. Maybe the traffic of a GPU is not coherent enough to get it this high, but I doubt it, because latency is not of the highest importance, so the controller can take its time to schedule the right transaction. If a GPU can't reach it, who could, other than those who have extremely predictable, regular traffic patterns (like packet routers, etc.)?
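As a quick sanity check on that rule of thumb - a minimal sketch with purely illustrative numbers, not R600's actual memory spec:

```python
# Back-of-envelope DRAM bandwidth check. Figures are illustrative only,
# not ATI's actual memory spec.
mem_clock_hz = 900e6         # hypothetical GDDR data clock
bus_bytes = 512 // 8         # hypothetical 512-bit external bus
peak = mem_clock_hz * 2 * bus_bytes   # DDR: two transfers per clock

for eff in (0.5, 0.7, 0.8):
    print(f"{eff:.0%} efficient: {peak * eff / 1e9:.1f} GB/s "
          f"of {peak / 1e9:.1f} GB/s peak")
# 50% -> 57.6 GB/s, 80% -> 92.2 GB/s: going from 50% to 80% efficiency
# would by itself be a 60% bandwidth jump.
```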

So what you're basically saying is that memory controllers before the magical new one were running at only 50% efficiency. One wonders how ATI was ever able to compete in the past.

Once again, while it's possible that specific games had specific traffic patterns that could be corrected by tweaking some parameters, the potential improvement would be rather small.
 
It's delusional to think that a large group of top GPU specialists would not see a potential 40% performance fix during the 3 years they've been working on it and then suddenly notice it just a few weeks before release. It doesn't work that way.

Not delusional at all. Software can do many things; in fact, software IS the thing that can make a card run at 100% or run at 1%. It all depends on how the software works. Especially with Vista still being finalized, etc., I highly doubt ATI had their R600 card working at 100% efficiency at the design stage 3 years ago. If that were the case, why do almost all driver improvements increase performance over the life of the card? The cards aren't changing.

Some games out there are likely to have shader programs that currently aren't efficiently compiled to the new architecture. A new release of the compiler could plug that hole (we have seen large increases in performance in the past for specific games), but that's the standard way compilers improve over time anyway.

It's theoretically possible that their compiler was so bad that it lost 30% of theoretical performance across the board and that they're working on a fix, but it's not realistic: to get simulations going, that compiler must have been good enough for at least 2 years for common cases, to make reasonable performance evaluations on existing games.

Thank you, you just proved my point and contradicted yourself at the same time. Why do you think NVIDIA is having such problems with Vista? By your reasoning they shouldn't be experiencing any problems at all and everything should be rosy.

Everybody who designs an instruction set for performance applications (GPU or not) analyzes cycle counts to death to make sure nothing is left on the table. We did such an exercise in college! ;) It's impossible that ATI didn't do this. This Inq story should be filed in the (rather large) 'magical thinking' drawer.

Maybe you should be helping AMD and then they'd have this 100% efficiency right out of the gate. :D

Also, let me clarify something. I think ATI's R600 design was made to beat or be on par with what an 8900GTX would be (performance-wise) from the beginning. I never said a software tweak would put them 40% past what their card was originally intended for. For that you'd need some hardware changes, of course.
 
Thanks for the fun reading guys. :)

I agree with those who say the delay is due to a hardware performance fix.

Perhaps a metal layers revision to improve clocks. AMD knows they need to come out the clear #1, and they are willing to delay as needed to ensure that.

I don't believe this is the case. If they were just starting a respin now, I doubt they could respin it, validate it, and manufacture enough cards to be available even by the tail end of Q2. Not to mention it would take months to add another metal layer - not exactly a last-minute fix. If they had started a respin earlier, then this wouldn't have been a big surprise and the frantic, sudden editors' day cancellation would never have occurred (and it's likely it wouldn't have been scheduled in the first place).

I don't see this as being a hardware issue, at least not one that would require a respin. Most likely it's a yield issue and/or last-minute changes to hardware configurations that don't require a respin (i.e., how many pipes should we disable to make an XL?).

One thing I'm confident about is that this won't be a repeat of NV30. If anything it'll be a repeat of R520 - a decent card, 4 months too late to market, soon to be overshadowed by its own refresh and its competitor's refresh.
 
So, how come no-one at B3D was invited to the launch, and subsequently received the cancellation? Why is it "other sites" that received the cancellation? It's not as if B3D is short on European staff.

Doesn't AMD like B3D?

Jawed
 
Meh. I still think if you combine xbit with hkepc you get the true answer.

"To better align our strategy with current market opportunities. . . targeting a broader market segment in Q2".
One could take this to mean that R600 was indeed going to be faster than G80 in its current form (however one could interpret that [somehow it's ridiculous to think R600 could gain 30%, but natural to expect G80 to gain 25%])--but significantly more expensive, too. Or are the midrange/budget offerings the real play?
 
Not delusional at all. Software can do many things; in fact, software IS the thing that can make a card run at 100% or run at 1%. It all depends on how the software works.

We can break GPU performance up into multiple pieces: in silicon, memory bandwidth and texture filtering throughput are fixed-function, so little can be done there. Shader performance is pretty much the only variable over which you have major control. The other part is the driver.

In existing GPUs, at reasonably high resolutions, we have seen performance scale almost linearly with the number of hardware pipes/ALUs/etc. This is only possible if the driver is a small part of the performance equation, but let's assume that, worst case, the GPU has to sit idle 20% of the time, waiting for the driver.

Even with an infinitely fast driver, you'd still only gain 25% in performance.
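The arithmetic behind that 25% figure is just Amdahl's law - a minimal sketch, assuming the worst-case 20% idle figure above:

```python
# Amdahl's-law bound: if the GPU idles some fraction of each frame waiting
# on the driver, eliminating that wait entirely caps the possible speedup.
def max_gain(idle_fraction):
    """Best-case % speedup if all driver-induced idle time disappeared."""
    return (1.0 / (1.0 - idle_fraction) - 1.0) * 100

for idle in (0.10, 0.20, 0.30):
    print(f"GPU idle {idle:.0%} of the time -> at most {max_gain(idle):.0f}% faster")
# 20% idle caps the gain at 25%; you'd need ~29% idle to unlock a 40% gain.
```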

So your magical 30-40% increase simply has to come from the compiler. Let's not forget, you were not talking about a specific shader here or there, but about an across-the-board performance increase. Basically, you're talking about a compiler that, up until a couple of weeks ago, just completely, totally, absolutely sucked, without anybody seeming to realize it. ATI had very good shader compilers in the past. Are you suggesting that their entire, very competent compiler team resigned and was replaced by a bunch of fumbling idiots?

But could it be that Vista drivers suddenly demand many more CPU cycles than XP's?
It's possible that DX9 doesn't fit very well into the Vista driver model, but wasn't everybody raving about the high quality of ATI's Vista drivers? Aren't they supposed to be unified anyway, so only a small part is hardware specific? I have yet to see complaints about an overall 40% performance loss from switchers to Vista. If ATI can make efficient Vista drivers for DX9, they should be even more efficient for DX10, unless all the praise for the much higher efficiency of DX10 was just one big lie. Unlikely, don't you think?

Especially with Vista still being finalized, etc., I highly doubt ATI had their R600 card working at 100% efficiency at the design stage 3 years ago.
Strawman argument. Their compiler had to be efficient enough 2 years ago to start validation of expected performance; 40% off the theoretical peak rate is not 'enough' in my book.
During those 2 years, the compiler could be gradually improved to fix corner cases - a process that will continue, as we have seen in previous generations.

If that were the case, why do almost all driver improvements increase performance over the life of the card? The cards aren't changing.
Exactly my point. In the past, we've never seen across-the-board 30-40% performance jumps. They were always gradual.

Thank you, you just proved my point and contradicted yourself at the same time.
O tempora! O mores!

Why do you think NVIDIA is having such problems with Vista? By your reasoning they shouldn't be experiencing any problems at all and everything should be rosy.
Another strawman.

Also, let me clarify something. I think ATI's R600 design was made to beat or be on par with what an 8900GTX would be (performance-wise) from the beginning. I never said a software tweak would put them 40% past what their card was originally intended for. For that you'd need some hardware changes, of course.
Summarizing my arguments above: that automatically implies horrible compiler performance and staggering incompetence. Yes, I suppose it's possible.
 
So, how come no-one at B3D was invited to the launch, and subsequently received the cancellation? Why is it "other sites" that received the cancellation? It's not as if B3D is short on European staff.

Doesn't AMD like B3D?
A. Have you infiltrated our private communications network again? :(

B. Obviously--they hired the previous EIC. :p
 
I have a similar line of thought. In the last couple of years, post-R300, ATI usually came out with the high end that was competitive & more, only to get beaten in the middle by the likes of the 6800GT, 6600GT, 7600GT, etc.

Management perspective from AMD's side is probably getting ATI to finally give the mainstream more attention.

What a scary statement! Ever since the AMD buyout, it's been suggested that ATI will no longer make GPUs for the high-end and focus exclusively on the mid and low-end. Are you suggesting they should be doing that? Or is it merely inevitable?
 