I have serious doubts that AMD would change ATI's product lineup this soon after the buyout. To do so would just be, well, incredibly stupid. So I would tend to expect that this first product cycle after the merger will be essentially identical to what it would have been had there been no merger.

What a scary statement! Ever since the AMD buyout, it's been suggested that ATI will no longer make GPUs for the high-end and focus exclusively on the mid and low-end. Are you suggesting they should be doing that? Or is it merely inevitable?
So, how come no-one at B3D was invited to the launch, and subsequently received the cancellation? Why is it "other sites" that received the cancellation? It's not as if B3D is short on European staff.
Doesn't AMD like B3D?
Jawed
@ Geo et al, can we get a summary of this thread and topic in general?
I haven't been able to follow all the excitement as of late and with the delay the thread has grown quickly. 89 pages... wow.
Maybe a summary with links would be great. There seems to be a lot of confusion about everything from the release date, to why the editors' day was cancelled, to when other products are coming, what we DO know, what we MAY know, and what is FUD. Heck, a "FUD" heading with links to some of the fake news would be great too!
@ Chalnoth: A hardware bug sounds most reasonable to me. A performance deficit, while no good, doesn't seem like reason enough to cancel such an event and delay the launch (how much could they really resolve in a short period of time?). A hardware bug that makes the GPU nearly useless/uncharacteristic of the design (like the R520's soft-ground issue) is a compelling reason to push things back for a respin. Of course, the truth may be one of many, many factors.
How's this: a decision was made regarding the R600 release that will be a very pleasant surprise to many people.
I did not see it posted here but I think it's kind of important for you folks to know.
Here is what KMAX said about R600's delay @ R3D.
:smile:
We can break GPU performance into multiple pieces: in silicon, memory bandwidth and texture filtering throughput are fixed function, so little can be done about them. Shader performance is pretty much the only variable over which you have major control. The other part is the driver.
In existing GPUs, at reasonably high resolutions, we have seen performance scale almost linearly with the number of hardware pipes/ALUs/etc. This is only possible if the driver is a small part of the performance equation, but let's assume that, worst case, the GPU has to sit idle 20% of the time, waiting for the driver.
Even with an infinitely fast driver, you'd still only gain 25% in performance.
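To make the arithmetic explicit (a minimal sketch using the worst-case 20% idle figure assumed above): if the GPU only does useful work 80% of the time, a perfect driver can at best recover the idle fraction:

$$\text{max gain} = \frac{1}{1 - 0.20} - 1 = 0.25 = 25\%$$

So even in this pessimistic scenario, the driver alone can't account for a 30-40% across-the-board jump.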
So your magical 30-40% increase simply has to come from the compiler. Let's not forget, you were not talking about a specific shader here or there, but about an across-the-board performance increase. Basically, you're talking about a compiler that, up until a couple of weeks ago, completely and utterly sucked, without anybody seeming to realize it. ATI had very good shader compilers in the past. Are you suggesting that their entire, very competent compiler team resigned and was replaced by a bunch of fumbling idiots?
But could it be that Vista drivers suddenly demand many more CPU cycles than XP drivers?
It's possible that DX9 doesn't fit very well in the Vista driver model, but wasn't everybody raving about the high quality of ATI's Vista drivers? Aren't they supposed to be unified anyway, so only a small part is hardware specific? I have yet to see complaints about an overall 40% performance loss from switchers to Vista. If ATI can make efficient Vista drivers for DX9, they should be even more efficient for DX10, unless all the praise for the much higher efficiency of DX10 was just one big lie. Unlikely, don't you think?
Strawman argument. Their compiler had to be efficient enough 2 years ago to start validation of expected performance. 40% off theoretical peak rate is not 'enough' in my book.
During those 2 years, the compiler can be gradually improved to fix corner cases, a process that will continue, as we have seen in previous generations.
Exactly my point. In the past, we've never seen across the board 30-40% performance jumps. They were always gradual.
O tempora! O mores!
Another strawman.
Summarizing my arguments above: that automatically implies horrible compiler performance and staggering incompetence. Yes, I suppose it's possible.
It's gonna launch at $400?!
It's obviously not something technical since they don't have enough time to change that so it has to be pricing, doesn't it? Or is there something else that we'd consider "pleasant" that AMD has the power to change at this late stage?

That would be quite sweet, one of the too-good-to-be-true outcomes.
My thinking as well.
Considering how late R600 is, I believe R680 is just about ready to go. I don't think ATI wants another R520/R580 fiasco. With this said, I believe they are holding off R600 to put the finishing touches on R680 for a big product launch that will cover all segments of the market and meet all the price points.
Only thing I can think of. I know it's not performance because they would have known by November. It can't be a hardware problem either.
silent_guy said: Are you claiming that they saw a 20%-40% across-the-board performance increase with a new design? Could you point to some examples?
It's a nice rule of thumb that, for coherent traffic, you should be able to get a DRAM running at 70-80% of its theoretical bandwidth. Maybe the traffic of a GPU is not coherent enough to get it this high, but I doubt it: latency is not of the highest importance, so the controller can take its time to schedule the right transaction. If a GPU can't reach it, who would, other than those with extremely predictable, regular traffic patterns (like packet routers, etc.)?
So what you're basically saying is that memory controllers before the magical new one were running at only 50% efficiency. One wonders how ATI was ever able to compete in the past.
Once again, it's possible that specific games had specific traffic patterns that could be corrected by tweaking some parameters, but the potential improvement would be rather small.
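To put rough numbers on that rule of thumb, here's a minimal sketch; the 256-bit bus and 2 Gbit/s-per-pin data rate are illustrative assumptions, not any particular card's specs:

```python
# Rough DRAM bandwidth estimate: theoretical peak vs. the 70-80%
# rule of thumb for coherent traffic. All figures are illustrative.

BUS_WIDTH_BITS = 256    # hypothetical memory bus width
DATA_RATE_GBPS = 2.0    # hypothetical effective data rate per pin (Gbit/s)

theoretical_gb_s = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8  # bits -> bytes

for efficiency in (0.50, 0.70, 0.80):
    print(f"{efficiency:.0%} efficiency: "
          f"{theoretical_gb_s * efficiency:.1f} GB/s of {theoretical_gb_s:.1f} GB/s peak")
```

On these assumptions, going from a 50%-efficient controller to an 80%-efficient one would be a 60% jump in delivered bandwidth, which is exactly why a mature design sitting at 50% seems so implausible.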
http://www.hexus.net/content/item.php?item=3668

Hexus said: Improvements 'of up to 35%' in Doom3 are explicitly mentioned by sources within ATI.
I've thought about this too. After they first found out about the 8800 GTX's performance (which was probably months before it launched), AMD decided to go straight to the R680. It would be sweet, but that is highly unlikely, yet I still concede it's possible.
No. NO NO NO. THEY ARE LAUNCHING R600.
Heh, what if they did a run of 65nm R600s and it came back so clean that they're dumping the entire 80nm line and ramping 65nm from the get-go?
Just thought I'd get in on the fairy-tales too!
No. NO NO NO. THEY ARE LAUNCHING R600. /me goes to find Wavey's frying pan.