AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup?

  • Within 1 or 2 weeks: 1 vote (0.6%)
  • Within a month: 5 votes (3.2%)
  • Within a couple of months: 28 votes (18.1%)
  • Very late this year: 52 votes (33.5%)
  • Not until next year: 69 votes (44.5%)

Total voters: 155 (poll closed)
Not to feed too much into this sideshow that I've only superficially followed, but did any part of that response refute the accusation/allegation in question?

We need hardware sample leaks now, because the waiting is forcing the journo corps to resort to devouring themselves. ;)
 
Never before have I seen so much horticultural discussion in these forums! :LOL:
Except for the Conium error, the discussion has been restricted to trees, so "disproportionate dendrological dialogue" might be a better way of putting it.

Looking at the codenames, there are quite a few and no obvious order. The only possible pattern I have left is that each tree grows in the location where the chip of the same name is being developed: i.e. Tsuga in Canada or Taiwan, Redwood in California, Juniper in Texas, say. Cypress and Cedar are hard to pin down to a particular location, though, as they grow everywhere.

After puzzling for a while and not being able to see any rhyme or reason, I saw this job ad (sadly removed yesterday) for a new graphics product planner at AMD.

....so it appears the guy who liked trees got the axe.
 
I don't see why ATI would want to put everything on a single piece of Si, period. The whole point of the sweet spot strategy is to reduce die size and use multiple dice where the market demands it.

If you make a big piece of Si, then you should just make a big GPU like NV. ATI's strategy doesn't involve large pieces of Si, AFAICT. And that's probably good.

I don't think this is the right way to look at it, not now anyway.

At the time, the sweet spot strategy was a fig leaf to cover the fact that ATI no longer had the dollars to develop a really large GPU. It caused problems at first, as it gave their competitor free rein and quite good profits from temporarily having no competition.

Times change, though. Since then, like sand in the desert, the millions of customers have shifted, so that today the landscape looks completely different from that of a few years ago. Now there just aren't enough grains left to cover the cost of developing really huge GPUs.

This would be quite bleak, if not for another development over the last year or two. The design process appears to have become much more automated, such that they can scale the different parts of a design up and down very rapidly to cover all segments of the market.

Last year AMD produced 3 chips and a dual card within 3 months to overhaul their complete lineup. Similarly, Nvidia was pursuing the same course with 4 chips top to bottom: GT212 through GT218. Nvidia had some issues with the top 2: some combination of lack of market, poor price/performance against existing offerings, or technical difficulties (i.e. excessive leakage in their larger parts).

Come September, AMD appears to be introducing 3 or 4 chips plus a dual card. From the notebook schedule it appears the smallest is delayed somewhat, most likely until TSMC can build out 40nm capacity late this quarter sufficiently to handle the orders.

I haven't got a good term for this rapid scaling, but it appears to be the normal development strategy now.
 
There's also the theory that the large GT21x chips were cancelled before they even reached tape-out, in order to divert those resources elsewhere.
 
At the time, the sweet spot strategy was a fig leaf to cover the fact that ATI no longer had the dollars to develop a really large GPU. It caused problems at first, as it gave their competitor free rein and quite good profits from temporarily having no competition.
No, the strategy was in place before R600 arrived, hence RV670 was small and the rumours of the X2 version were part and parcel of the expectation of its arrival. It was only ~190mm² (perhaps <180mm² if you allow 0.5mm of packaging margin). G92, its theoretical competitor, is 256mm² on the same process (240mm² if allowing for packaging margin?). You could say RV670 was actually competing with G94; not sure what die size that ended up at on 55nm.
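
Incidentally, those bracketed figures fall straight out of square-die arithmetic. A minimal sketch (assuming square dice and that the quoted 0.5mm margin comes off the edge length once) reproduces both estimates:

```python
import math

def core_area(measured_area_mm2, margin_mm=0.5):
    """Die area after trimming an assumed 0.5 mm packaging margin
    off the edge length of a square die."""
    edge = math.sqrt(measured_area_mm2)
    return (edge - margin_mm) ** 2

print(f"RV670: {core_area(190):.0f} mm^2")  # ~176, i.e. <180 mm^2
print(f"G92:   {core_area(256):.0f} mm^2")  # ~240 mm^2
```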

Times change, though.
Back in 2004, R420 was 281mm², half the size of GT200 in 2008. No wonder the prices of the halo cards have gone up.

Since then, like sand in the desert, the millions of customers have shifted, so that today the landscape looks completely different from that of a few years ago. Now there just aren't enough grains left to cover the cost of developing really huge GPUs.
Huge GPUs also demand a significant testing period on a process if you want to be assured of a successful launch (whoops, R520), hence NVidia's process reticence all these years.

So if you want to adopt a new process rapidly with a huge GPU that launches first, then the effort naturally hinders the rest of the chips. Just cutting out the 400mm²+ chip saves a lot of engineering time and cost.

R5xx GPUs were supposed to be a top-to-bottom launch over ~5-6 months (~May-October). That was 3 GPUs.

This would be quite bleak, if not for another development over the last year or two. The design process appears to have become much more automated, such that they can scale the different parts of a design up and down very rapidly to cover all segments of the market.
AMD simply not doing a 400mm²+ GPU leaves resources for the other parts. In theory the ring bus was part of making it easier to scale a GPU, since it's not just the count of units that needs scaling, but the connectivity.

(I can't help thinking that a ring bus might return... Larrabee has one.)

Last year AMD produced 3 chips and a dual card within 3 months to overhaul their complete lineup. Similarly, Nvidia was pursuing the same course with 4 chips top to bottom: GT212 through GT218. Nvidia had some issues with the top 2: some combination of lack of market, poor price/performance against existing offerings, or technical difficulties (i.e. excessive leakage in their larger parts).
GT200b took 3 revisions on a "mature" 55nm process.

NVidia's cancelled G100, GT206 (though did that ever exist?), GT212 and GT214. I'm now wondering if NVidia has cancelled the original GT300 chip, because the conflicting tape-out rumours, coupled with talk of a chip at <<500mm², conflict with a chip that's just taped out at >500mm². Maybe the GT300 that gets launched is another re-think, like G100->GT200?

Jawed
 
GT200b took 3 revisions on a "mature" 55nm process.

There are 3 revisions, yes, but it didn't take GT200b 3 revisions to get onto shelves (just to avoid misinterpretations).

http://www.pcgameshardware.com/aid,...16-with-55-nanometer-GT200b-reviewed/Reviews/

NVidia's cancelled G100, GT206 (though did that ever exist?), GT212 and GT214. I'm now wondering if NVidia has cancelled the original GT300 chip, because the conflicting tape-out rumours, coupled with talk of a chip at <<500mm², conflict with a chip that's just taped out at >500mm². Maybe the GT300 that gets launched is another re-think, like G100->GT200?

Jawed
GT206 appeared as a codename on that infamous Elsa roadmap as a 55nm chip; I don't think anyone knows for sure what it stood for. It could very well have been a revamped GT2x0, or what we saw as GT200b after all.

G100 is a pretty weird story; I recall someone saying years ago that G80 would be followed by a "G200" and after that by a D3D11 "G100", while others insist it was a rather odd project that was canned and replaced by GT200.

IHVs change codenames and roadmaps all the time, but I have severe doubts that any of them has the luxury to radically change architectural aspects for refresh X or generation Y when each of them lies years in development. Yes, of course there can be "plan Bs" for each chip, but I severely doubt radical changes there, and no, I don't believe that either AMD or NVIDIA has operated for the past few years on a constant string of plan Bs.

Can you really make out, in all honesty, what is what when it comes to ATI's own codenames of the past and today? What's the mythical "R700" that was to follow the R600 years ago? RV770, or "RV870"? No wait, make that Evergreen to be completely correct after all, since I recall rumours from the ancient past wanting "R700" to be a D3D11-compliant architecture.

If you want me, as a layman and observer, to put a different perspective on the whole X vs. Y affair: years ago there was a saying circulating that one should let ATI design GPUs and NVIDIA do the execution. NVIDIA nowadays needs to learn how to execute, or, more simply, to learn to walk again; if anything, ATI/AMD has cornered NV tremendously since RV670, driving a damn strong and admirable execution timetable.

The rest, how and what changed on each side's roadmap, and the meaningless codenames, are such worthless bubblegum to me that they aren't even worth noting. Call "GT300" or "G300" D12U and it's the safest bet you can get at the moment; or, as my personal joke went, Little Red Riding Hood (LRRH) as a nice answer to both Evergreen and LRB :devilish:
 
At the time, the sweet spot strategy was a fig leaf to cover the fact that ATI no longer had the dollars to develop a really large GPU.
GPUs are highly modular; developing a larger chip isn't free, but the cost scaling is highly sublinear. What would it have gained them to have competition at the high end but still be stuck with no competitive parts in the high-volume markets for 6 months, though?
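
To put a toy number on that sublinearity: a GPU is mostly replicated SIMD blocks, so the one-off block design dominates and extra instances mainly add integration work. A hypothetical sketch (all cost figures invented purely for illustration):

```python
# Toy model (invented numbers!) of sublinear design-cost scaling:
# the SIMD block is designed once; each extra instance mostly adds
# integration and physical-design effort.
def design_cost(simd_blocks, one_off_block_cost=100.0, per_block_cost=5.0):
    return one_off_block_cost + per_block_cost * simd_blocks

small = design_cost(8)    # hypothetical mainstream part
large = design_cost(16)   # hypothetical part with 2x the units
print(f"2x the units costs {large / small:.2f}x as much")  # ~1.29x, not 2x
```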

Their market share was pathetic; going from the trickle-down release schedule to mainstream-part-first was quite obviously the correct choice. It diminished the profitability of 3D graphics hardware as a whole, but that wasn't really their main concern at the time.

The shift really made life difficult for NVIDIA ... the sheer brilliance of G80 made it possible for them to stick to the trickle-down schedule for a while longer, but with the DX11 shift it's going to really hurt them if they try to do it again.
 
According to this, the main scheduler is more than twice as complex as in the previous generation (HD4000). The whole chip is much more complex too. They beefed up a lot of the hardware.

The relatively simpler/cheaper scheduler was part of their advantage over Nvidia, though. If they start spending more transistors on hardware thread control and scheduling, the flops/mm² gap might close.
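
For a rough sense of that gap's size, here's a back-of-envelope sketch using the commonly quoted HD 4870 and GTX 280 figures (peak rates assume MAD on RV770 and MAD+MUL dual issue on GT200):

```python
# Back-of-envelope flops/mm^2 from commonly quoted figures for
# HD 4870 (RV770) and GTX 280 (GT200).
chips = {
    # name: (ALU lanes, shader clock GHz, flops/lane/clock, die area mm^2)
    "RV770 (HD 4870)": (800, 0.750, 2, 256),  # MAD = 2 flops
    "GT200 (GTX 280)": (240, 1.296, 3, 576),  # MAD + MUL = 3 flops
}
for name, (lanes, ghz, flops, area) in chips.items():
    gflops = lanes * ghz * flops
    print(f"{name}: {gflops:.0f} GFLOPS, {gflops / area:.1f} GFLOPS/mm^2")
# RV770 lands around 3x GT200 per mm^2 -- the gap that heavier
# scheduling hardware would eat into.
```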
 
Who knows, maybe they pulled off another miracle and managed to beef up the rest of the chip without increasing size, similar to what they did with the ALUs for RV770.

I seriously doubt it, but at this point I'm not putting anything past them.

Regards,
SB
 
I am disappointed at the reduced scope for flop improvement because of this, but I hope they did something smart to make GPUs more programmable.
 
Who knows, maybe they pulled off another miracle and managed to beef up the rest of the chip without increasing size, similar to what they did with the ALUs for RV770.

I seriously doubt it, but at this point I'm not putting anything past them.

Regards,
SB

Would you be disappointed if the chip grew to, say, G92 size, while G300 stayed at GT200 size?
 