AMD Analyst Day - Lots of Info

I think HardOCP is reading waaay too much into the Bulldozer slides.

The most definitive parts are the presence of an extra HT link on a new HT protocol, rough TDP ranges, and on-die PCIe.

For the rest, the most tangible info amounts to deferred promises of performance leadership.

Fusion seems to be moving even further into the future if the first Fusion part is to use a Bulldozer CPU derivative; that's mid-2009 at the earliest.

(edit: What does this mean for SOI on future CPUs?
If TSMC is to fab some Fusion chips, will it be on SOI? It doesn't seem so right now.
The costs of having parallel SOI and bulk Bulldozer designs would be significant, and it seems like AMD has been reaping fewer and fewer rewards from having SOI on smaller processes.)

The R700 info slide was even more vague than the Bulldozer material. It had zip in the way of detail, just indistinct promises of improvement.
 
Most seem to be reacting with a loud "Enough about future plans, do something today." Many have lost faith in the company since it hasn't even released Barcelona and the R600 was extremely underwhelming.

My take is 2009 is too far away -- AMD might not even be around then. They need to be worrying more about their current situation, not about further pipe-dreams.
 
Banks won't let it go down, at least not initially; that's why we still see them funding Dresden. All they have to do is balance their sheets. Going to 65nm will give them even better margins, well, if yields are good I guess.
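
To put the margins point in numbers, here's a rough back-of-the-envelope sketch in C. Every figure in it (die area, defect density, wafer size) is a made-up illustration, not an actual AMD number, and the Poisson yield model is just the textbook first approximation:

```c
/* Back-of-the-envelope: why a 90nm -> 65nm shrink improves margins,
 * but only if yields hold. All numbers are illustrative assumptions,
 * not actual AMD figures. Yield uses the simple Poisson model
 * Y = exp(-defect_density * die_area). */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979

/* Crude dies-per-wafer estimate: usable area / die area, ignoring
 * edge loss and scribe lines. */
static double dies_per_wafer(double wafer_diam_mm, double die_area_mm2)
{
    double wafer_area = PI * (wafer_diam_mm / 2.0) * (wafer_diam_mm / 2.0);
    return wafer_area / die_area_mm2;
}

int main(void)
{
    const double wafer   = 300.0;  /* 300mm wafer */
    const double die_90  = 200.0;  /* assumed 200 mm^2 die at 90nm */
    const double die_65  = die_90 * (65.0 * 65.0) / (90.0 * 90.0); /* ~0.52x */
    const double defects = 0.005;  /* assumed defects per mm^2 */

    double good_90 = dies_per_wafer(wafer, die_90) * exp(-defects * die_90);
    double good_65 = dies_per_wafer(wafer, die_65) * exp(-defects * die_65);

    printf("good dies/wafer at 90nm: %.0f\n", good_90);
    printf("good dies/wafer at 65nm: %.0f\n", good_65);
    /* Same defect density: the smaller die fits ~2x more per wafer AND
     * yields a higher fraction of them. */
    return 0;
}
```

With the same defect density, the ~0.52x area scaling roughly doubles the candidate dies per wafer and lifts the yielded fraction on top of that; the catch is that a fresh process usually starts out with a worse defect density, which is exactly the "if yields are good" caveat.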
 

Most likely Samsung will buy AMD, sending B3D into a tizzy fit of rage that AMD was not destroyed :LOL:
 
Nice colourful slides there, full of promises. I hope they won't turn out to be empty again.

BRiT: everything since R420 was underwhelming in my eyes. They haven't had a "winner" since R300.
 
Steady on, there was nothing wrong with R580.

Never said there was anything wrong, but it was way late and didn't crush the competition or make any impact whatsoever (btw I'm still using an X1800, so no, I'm not anti-ATI, but I think their management should be fired altogether).
 
_xxx_ said:
Never said there was anything wrong, but it was way late and didn't crush the competition or make any impact whatsoever
R520 was late, R580 was quite prompt. And no 3D chip ever "crushes" the competition unless the competition simultaneously f*cks up. R580 was a better chip than the actually-quite-good G70. You can't blame ATI because Nvidia didn't screw up, and they deserve credit for producing a product that was even better than an un-screwed-up rival product. By contrast, almost anything would have been far better than NV30. :)

Getting back on topic, this is from AnandTech's article:

The R7xx GPU will be built on a 55nm process and it appears that, at least on the high-end, there won't be any UVD support. AMD's roadmaps clearly outline UVD as a part of the mainstream R7xx feature set, but the high end platforms are completely missing the checkbox. We'll find out next year for sure if the lack of UVD and Purevideo HD on high end parts will continue.

We can't say much more about R7xx, other than AMD is quite confident in its abilities despite the lackluster reception of the R600. AMD has its reasons...
You'd have thought ATI would have learned to avoid half-node processes by now....
 
Didn't mean "late according to the planning" but late in real life, from the consumer's POV. It couldn't move anything, had zero impact on the market, and again wasn't cheaper than the competition, rather the opposite.

R420 lacked features; R520 was much too late and carried the useless bulk of silicon for the ring bus and DB, which made it unnecessarily complicated; R580 was "on time" but too late to change buyers' perception. And R600 was The Bomb, with all the stuff that happened around its release, all the delay and broken promises.

So now we're suddenly expected to believe ANYTHING coming from this same management? Not me, sorry. And that's my opinion as a businessman and not as a simple consumer. Trick me once... and all that.
 

I think R600 is a great piece of tech, but three things are screwing it up:

1) Dev relations seem to keep missing the boat on brand-new releases, meaning that initially R600 is screwed up in the big-name games. They probably fix these quickly, but you generally don't hear much about the fixes, just the initial screw-up.

2) Drivers still seem a bit immature, i.e. some games where R580 still performs better, poor AA performance, etc.

3) G80 was so spectacularly good that unless ATI produced a miracle, anything they came up with was going to look average by comparison. I think the same thing afflicted NV30, actually. It was practically twice as fast as the already extremely powerful Ti 4600 and came with a far more advanced featureset. Had it been released 50MHz slower and without competition, it would have been relatively cool and quiet, and hailed as an excellent performer (though, like G80, not so good at the next-gen DX). But in light of R300's even better performance, which forced them to raise clocks/noise/heat, and R300's relatively good DX9 performance as well, it was hailed as a total failure.

However, as a standalone product it was pretty good IMO. Don't get me wrong though, you can't look at it in isolation, so I'm not defending NV. I'm just saying it wasn't necessarily bad, it was just a lot worse than the amazing GPU ATI came up with. If R300 had simply been average and NV30 still much worse, THEN it would have been bad.
 
The one thing that surprised me very positively is that they said Shanghai isn't a straight shrink a la Brisbane; there will be IPC improvements too, even beyond those related to the cache size. While that's unlikely to work miracles (I'd presume Nehalem will have IPC improvements over Penryn too), it will definitely make the fight more interesting.

They are also proposing Bobcat for Imageon. Why not, but I'd like to see what the wattage is compared to, say, an ARM Cortex-A8 at a given level of performance. Until I see that, I refuse to be impressed by the approach. They said Bobcat will scale down to a 1W TDP, but that's arguably still too much, so what I really want to know is what the wattage looks like at lower levels of CPU activity.
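
For what it's worth, here's a minimal sketch of the iso-performance comparison being asked for. TDP alone doesn't settle it, since energy per task is power times time; the Bobcat and Cortex-A8 figures below are pure placeholders, not measured numbers:

```c
/* Iso-performance comparison sketch: a "1W TDP" headline doesn't
 * decide anything by itself: energy per task = power * time.
 * Both sets of numbers below are placeholders, NOT measured
 * Bobcat or Cortex-A8 figures. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical: both cores run the same workload to completion. */
    double bobcat_watts = 1.0, bobcat_seconds = 1.0;  /* assumed */
    double cortex_watts = 0.3, cortex_seconds = 4.0;  /* assumed */

    printf("Bobcat:    %.2f J per task\n", bobcat_watts * bobcat_seconds);
    printf("Cortex-A8: %.2f J per task\n", cortex_watts * cortex_seconds);

    /* With these made-up numbers the higher-TDP core actually wins on
     * energy; flip the runtimes and it loses. Idle and low-activity
     * power, which the post asks about, matter even more in a handheld. */
    return 0;
}
```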

Besides all this, I am optimistic about Bulldozer, not least because some variants of it will likely use Z-RAM. Errr, and possibly more importantly, because it's an AMD California design. Those are the ex-DEC/NexGen guys. No offense intended to AMD Texas, but you know... these guys know their shit, and they have an AAA track record.

The problems at this point IMO are execution and, possibly even more importantly, Intel's execution. AMD's roadmap is satisfactory, but it is nothing absolutely mind-blowing. As such, they remain highly vulnerable from a technological perspective, and even more so from a financial perspective (more on the latter in another post...)

EDIT: Regarding Fusion, my impression at this point is that AMD's plan is to have an MCM-based Fusion in 2H08 and a monolithic implementation in 2H09. I guess that's both better and worse at the same time than our earlier impression of a monolithic solution in 1H09, heh.
 
I like the Bulldozer Fusion slide; I think that's a good place to aim for. Whether it can beat discrete components by then I'm not sure (and I'm secretly hoping it can't), but from a system point of view I can see it taking the entire low-to-mid-range market for the masses.
 
The current issue is not the quantity of RAM that the graphics boards use, but the requirements of the app itself.

Besides, right now this is a good reason to go 64-bit, at least where Vista is concerned, since anyone who builds their own PC and has bought a retail version of Vista has both the 64-bit and 32-bit versions in the box.
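
To illustrate why (with an assumed 768MB card standing in for the general case): a 32-bit process tops out at 4GB of virtual address space, and the video card's memory aperture gets mapped into that same space, which is exactly what big-VRAM boards plus hungry apps run into:

```c
/* Why the "go 64-bit" advice: a 32-bit process has at most 4 GB of
 * virtual address space (typically 2-3 GB usable on 32-bit Windows),
 * and the video card's memory aperture is mapped into that same space.
 * The 768 MB figure below is just an example card, an assumption. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    printf("pointer size: %zu bits\n", sizeof(void *) * 8);

    /* 32-bit: 2^32 bytes of VA. 64-bit x86 hardware exposes a 48-bit
     * virtual address width, far more than any app or aperture needs. */
    uint64_t va_bytes = (sizeof(void *) == 4) ? (1ULL << 32) : (1ULL << 48);
    double va_mb   = va_bytes / (1024.0 * 1024.0);
    double vram_mb = 768.0;  /* assumed example: a 768 MB card's aperture */

    printf("virtual address space: %.0f MB\n", va_mb);
    printf("left after VRAM aperture: %.0f MB\n", va_mb - vram_mb);
    return 0;
}
```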
 