AMD: R9xx Speculation

TechRadar talked to AMD's David Hoff recently:

Hoff was also happy to speak about its upcoming new range of DirectX 11 graphics cards. "It's certainly more than changing a sticker," he said with a grin. "I wouldn't say it's an absolute, complete, from-the-ground-up new architecture. It's a nice, different architecture."

No concrete information, but better than a fake slide.
 
The earlier snippet, plus:

"A one year cadence is about the best you're ever going to see on a new architecture," he continued.
does, at least, make it sound like a substantial change. I have to admit I'd started wondering whether it was only going to be a refresh, constrained by 40nm.
 
False... TSMC was completely bottlenecked by its own problems.

And yet nvidia still got wafers for their gt200 series.

The point is AMD didn't get enough wafers; the ones they did get had excellent yield -- well, at least much better than Nvidia's.

News flash: that's not AMD's fault (well, except, like I said, for bad planning), nor can they do anything about it except book more wafers next time.

Only a yield problem would be a real production problem here. So you are the one with completely misleading information.
 
Sorry, but I really HOPE for the sake of AMD that whoever "leaked" that slide deliberately changed at least ONE spec and/or the performance targets (the screenshot was taken in edit mode, after all ...).

You don't have to hope. Such products will never see the light of day when you can simply get a 5830 with 32 ROPs running at ~725MHz, which would have the upper hand in TDP, cost, performance and time-to-market.

AFAIK back in late 2008
That's at most one year before the production date, and NI would have been (or was) only a few months away from tape-out by then. Your target process should be set alongside your floor plan -- it's not something you can just change that late. And if you do change it that close to first silicon, at least half a year goes straight to waste.

So that barts slide is real :LOL:

You can safely bet on the other horse. But I won't get in your way if you insist.
 
And yet nvidia still got wafers for their gt200 series.

The point is AMD didn't get enough wafers; the ones they did get had excellent yield -- well, at least much better than Nvidia's.

News flash: that's not AMD's fault (well, except, like I said, for bad planning), nor can they do anything about it except book more wafers next time.

Only a yield problem would be a real production problem here. So you are the one with completely misleading information.

So... your conclusion is that AMD didn't "book" enough wafers solely because Nvidia had 40nm GT2x0 wafers?
TSMC had stated a certain output volume of 40nm wafers for summer '09; they didn't reach those numbers until around the end of '09/beginning of '10.
 
And yet nvidia still got wafers for their gt200 series.

The point is AMD didn't get enough wafers; the ones they did get had excellent yield -- well, at least much better than Nvidia's.

News flash: that's not AMD's fault (well, except, like I said, for bad planning), nor can they do anything about it except book more wafers next time.

Only a yield problem would be a real production problem here. So you are the one with completely misleading information.

Uh...

There was no bad planning involved. According to what TSMC was telling everyone, they were planning to ramp up wafer production as normal.

AMD made wafer allocations based upon the assumption that wafer ramping would be similar to past nodes (55 nm, 65 nm, etc.) with a chance of it being slightly slower. The fact that TSMC then had significant problems producing enough wafers to meet demand on 40 nm wasn't anything AMD could have planned for in advance.

Sure, if AMD could have looked into the future and known 40 nm wafer supply was going to be severely constrained, I'm sure they would have secured a larger initial allotment.

If that was bad planning, then ATI/AMD have been doing bad wafer allocation planning at TSMC going all the way back to the 90's. :p

And enough for GT200? Sure, Nvidia took a huge gamble and secured a large allotment of wafers, hoping to be able to launch GT200 on a hot lot even if yield was non-optimal. That was, what, around Oct./Nov. 2009? Half a year later they finally launched and couldn't sell all of the few cores they did produce. Then they had to go and sell back some of their allotment. And they suffered massive losses. Good planning? Or yet another company suffering from TSMC's problems at 40 nm? Hopefully you'll at least be consistent in which of those choices you go with. :p

Regards,
SB
 
@ neliz: I feel Rick-Roll'd.

This is the AMD R9xx Speculation thread - and you post a link to a badly written Cypress OC analysis based on the experiences of a pubescent kid trying to run Crysis at different clock and memory speeds?
 
@ neliz: I feel Rick-Roll'd.

This is the AMD R9xx Speculation thread - and you post a link to a badly written Cypress OC analysis based on the experiences of a pubescent kid trying to run Crysis at different clock and memory speeds?

Yes yes, but don't you see! Barts only needs a 128-bit bus!

Like that will change the game...
 
Have you spotted this yet or not? :LOL:
Charlie says: :oops:


If an x4 cluster is only as fast as an x5 cluster, then (1920*5/4) = 2400. 50% faster there.

Now I think the real speed gain will be from the front end, if they can up efficiency by a good chunk in parsing the workloads, you would get an increase on top of that 50%. Think about having a 25% more efficient front end, that would mean an effective 2400 *1.25 = 3000 shaders.

Can you say twice as fast from a 10-20% area increase?

-Charlie


http://semiaccurate.com/forums/showpost.php?p=69973&postcount=182
http://semiaccurate.com/forums/showthread.php?t=3249&page=19
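For what it's worth, Charlie's arithmetic can be sketched out against a Cypress baseline. This is just a restatement of his speculation: the 1920-ALU figure and the 25% front-end gain are his guesses, and the 1600-ALU Cypress (HD 5870) count is assumed here as the comparison point that makes his "50% faster" line work.

```python
# Sketch of the arithmetic in Charlie's post (all figures are speculation, not confirmed specs).
cypress_alus = 1600         # Cypress (HD 5870) ALU count, assumed as the baseline
vliw5_equiv = 1920 * 5 / 4  # 1920 4-wide ALUs matching 5-wide throughput -> 2400 "effective" ALUs
print(vliw5_equiv / cypress_alus)  # 1.5 -> the "50% faster there" claim

frontend_gain = 1.25        # hypothetical 25% more efficient front end
effective = vliw5_equiv * frontend_gain  # 2400 * 1.25 = 3000 effective shaders
print(effective / cypress_alus)    # 1.875 -> roughly the "twice as fast" claim
```

Note the compounding: the front-end gain multiplies on top of the cluster-width gain, which is why two modest changes add up to such a large headline number.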



Now, I am really excited and happy. Charlie, thank you for making me so happy. :p
 
Yes yes, but don't you see! Barts only needs a 128-bit bus!

Like that will change the game...

The 256-bit bus is there for the 32 ROPs, and not just a plain bandwidth increase, if the specs are true. The writer completely missed the point that the 5770 had 16 ROPs.
 
The 256-bit bus is there for the 32 ROPs, and not just a plain bandwidth increase, if the specs are true. The writer completely missed the point that the 5770 had 16 ROPs.

I'm jesting!
I just put it up to show people that at just 14 years of age, with a PC, you too could write articles on the interwibblez that would seem deep and informative to the ill-informed.
 