The AMD Execution Thread [2007 - 2017]

Status
Not open for further replies.
AMD doesn't have direct control over the manufacturing process, although being the customer for the SOI node may afford them some leeway in requesting things of GlobalFoundries. AMD is rapidly moving away from the position of being the customer that gets catered to, which has the side effect of leaving them on the hook for special charges.

Have they ever thought to somehow try to either improve GF or get rid of it? It is ridiculous that they are a customer and cannot require their partner to keep its processes up to date the way Intel does...

There should be some legal way for improvement...
 
If it were that simple to keep up with Intel, Globalfoundries would still be AMD's fab division.
The foundry exists because developing and running advanced fabrication processes is a multibillion dollar proposition that AMD couldn't afford, and now it wants to pay even less.
Getting out of those obligations is not without cost, as AMD had to agree not to abandon its fabs altogether.

There's no guarantee that you can match the best processor fabs in the business, even with matching funds, and AMD doesn't have matching funds.
Besides, a number of its problems were decisions made while AMD still ran the fabs.
 
I'd suggest it was a stab at a benchmark hero (which makes perfect sense) but as it's OEM-only it's not going to be reviewed by the tech press at large.

Those same OEMs won't touch it with a 40-foot bargepole unless AMD is giving it to them ridiculously cheap. It won't quite be as fast as a 12-thread Intel CPU that costs less and draws half the power... who the fuck is going to use that in a system when they can just have a 3930K instead?

Either the price (rumours of $800) is way wrong, the TDP is wrong or the clock speeds are wrong. AMD is not going to be able to sell a chip that loses to a 12 thread Intel CPU, is more difficult to integrate, and costs more money.
 
Yeah, it might make sense at $250~300 or so, for people who want high performance and don't care about load power, otherwise I don't see it.

But I guess AMD needs some sort of answer to Haswell.
 
Have they ever thought to somehow try to either improve GF or get rid of it? It is ridiculous that they are a customer and cannot require their partner to keep its processes up to date the way Intel does...

There should be some legal way for improvement...

While they've got their lawyers out they should sue the Laws of Physics too. It's about as useful.
 
Wow, forget about Prescott. We've a new bit of perf/watt hilarity coming to a CPU review near you. Fortunately today's CPU coolers are incredibly superior to 2005's options.

I saw somebody say it might be an interesting value for a budget VM server, since AMD isn't playing games with disabled features (not that they have much of a choice). I assume the thing can idle as coolly as any other member of the line.
 
Wow, forget about Prescott. We've a new bit of perf/watt hilarity coming to a CPU review near you. Fortunately today's CPU coolers are incredibly superior to 2005's options.

I saw somebody say it might be an interesting value for a budget VM server, since AMD isn't playing games with disabled features (not that they have much of a choice). I assume the thing can idle as coolly as any other member of the line.

It's probably a little leakier than your average Vishera, but the main problem is that Vishera's idle power is pretty high, or rather the AM3 platform's idle power is high. Either way, something is wrong, especially compared to what AMD can do on FM2.

http://www.hardware.fr/articles/897-11/consommation-efficacite-energetique.html
 
AMD must be bleeding a lot of money now :D

[Image: 316u3rq.jpg]


This price should be reduced by $100 :p
 
An Anandtech article outlines AMD's server roadmap through 2014.

http://www.anandtech.com/show/7079/amd-evolving-fast-to-survive-in-the-server-market-jungle-/
http://www.theregister.co.uk/2013/06/18/amd_opteron_arm_server_chips/

It makes for grim but unsurprising reading on the x86 side.
There will be no Steamroller based successors to Seoul/Abu Dhabi, only a 'tweaked Piledriver' incremental update due in 1Q14.
What is presumably Kaveri (two Steamroller modules plus Radeon cores) will make an appearance as a low-end server/workstation APU under the codename Berlin in 1H2014, replacing the existing single-socket Opterons.

They are, however, making a big push into the web server market with ARM cores: Seattle in 2014 integrates 8-16 A57 cores, 10GbE, and SeaMicro technologies into a 28nm SoC to replace the recently launched and apparently short-lived Jaguar-based X-series Opterons.
Where this leaves future development of the Jaguar core for other markets is unclear, but continuing it in parallel would appear to be pointless duplication of effort.
 
I can't think of much to say about the absolute stagnation in the main x86 server market for AMD.
It's apparently reached the point that AMD won't expend the effort and money to design and validate a chip for that market. Absent a node transition, the upside may be too limited to even contemplate it.
The way microservers are described, they can get away with a lower level of RAS and with smaller chips.

The Kyoto-to-Seattle comparison is a bit unclear to me. I didn't see metrics, or any clarity on what it means, for Seattle to perform similarly to Jaguar, which will probably be a year old or more by the time Seattle makes it out there.
Perhaps it was iteration time that got the SoC elements pinned to an ARM implementation so quickly.
I wasn't sure about AMD going octocore for the ARM implementation, as there were some statements at other times that left the number up in the air.

The lack of numbers makes me want to reserve judgement on the cores until later. There's so much that's different in the SoC surrounding them that vague PR blurbs can obscure implementation details, such as whether AMD is using standard A57 cores and what power levels the ARM chip runs at.
It would be quite a feather in ARM's cap, and a potential indictment of the x86 engineers at AMD, if an off-the-shelf A57 can beat what should have been a more customized x86 backed by AMD's supposed design expertise.

edit:
One thing I forgot to mention is that this shows how critically AMD needs a node transition, all other fluff aside. The next good node to hop to might be a ways away. Planar 20nm may not be a good target.
 
Absent a node transition, the upside may be too limited to even contemplate it.

That sentence, I'm inclined to believe, is key.

They have limited resources, so with their own APUs, the semi-custom business, and future microserver initiatives being better bets for profitability, not spending resources on what would likely make little difference in dedicated server (and desktop) products is pretty understandable.

When (and from whom) is the next node (suitable for the kind of high-performance profile AMD needs) expected to show up anyway? GlobalFoundries' 28nm-HPP is ridiculously late, but we'll probably see with Kaveri if the power and performance characteristics are anything to write home about. My guess would be we won't get anything new until 20nm, so maybe late 2014 (if ever)?

Edit: To your edit; yes indeed.
 
I can't think of much to say about the absolute stagnation in the main x86 server market for AMD.
It's apparently reached the point that AMD won't expend the effort and money to design and validate a chip for that market. Absent a node transition, the upside may be too limited to even contemplate it.
The way microservers are described, they can get away with a lower level of RAS and with smaller chips.

The Kyoto-to-Seattle comparison is a bit unclear to me. I didn't see metrics, or any clarity on what it means, for Seattle to perform similarly to Jaguar, which will probably be a year old or more by the time Seattle makes it out there.
Perhaps it was iteration time that got the SoC elements pinned to an ARM implementation so quickly.
I wasn't sure about AMD going octocore for the ARM implementation, as there were some statements at other times that left the number up in the air.

The lack of numbers makes me want to reserve judgement on the cores until later. There's so much that's different in the SoC surrounding them that vague PR blurbs can obscure implementation details, such as whether AMD is using standard A57 cores and what power levels the ARM chip runs at.
It would be quite a feather in ARM's cap, and a potential indictment of the x86 engineers at AMD, if an off-the-shelf A57 can beat what should have been a more customized x86 backed by AMD's supposed design expertise.

edit:
One thing I forgot to mention is that this shows how critically AMD needs a node transition, all other fluff aside. The next good node to hop to might be a ways away. Planar 20nm may not be a good target.

This slide indicates A57 cores.
[Slide: SeattleSlide2.png]


As for A57 being better than Jaguar, I don't know. It's probably better suited to this kind of market when you put 16 of them into a single SoC and clock them at 2GHz, but in a high-end tablet or ultrathin notebook, I suspect Jaguar still retains a valuable IPC advantage—even more so when you consider that Jaguar will probably be updated in 2014.
 
Being in all 3 consoles is starting to show its advantage: All Frostbite 3 Titles to Ship Optimized Exclusively for AMD.

"Starting with the release of Battlefield 4, all current and future titles using the Frostbite 3 engine — Need for Speed Rivals, Mirror's Edge 2, etc. — will ship optimized exclusively for AMD GPUs and CPUs. While Nvidia-based systems will be supported, the company won't be able to develop and distribute updated drivers until after each game is released."
 
Feldman was talking about 20% better perf/Watt with the updated PD so they can still make decent gains at low cost and on the same node. While not an ideal situation the cost savings are surely worth it for a company in AMD's situation.

Jaguar is basically paid for many times over because of the console wins, remember - even if they ditch it completely in favour of ARM cores it's still been an incredible success. The Xbox One (if the rumours of fabbing at GF are true) should be able to use up a decent amount of their ~$1bn yearly WSA as well.

I would guess that AMD is in no hurry to waste more money at GF (and to a lesser extent TSMC) and is leaving them to fix their processes properly before taking them up at cheaper cost.
 
This slide indicates A57 cores.
...
As for A57 being better than Jaguar, I don't know. It's probably better suited to this kind of market when you put 16 of them into a single SoC and clock them at 2GHz, but in a high-end tablet or ultrathin notebook, I suspect Jaguar still retains a valuable IPC advantage—even more so when you consider that Jaguar will probably be updated in 2014.

There's the implied performance equivalence of a >2GHz A57 to Jaguar, which becomes less compelling when it barely reaches 2.0 with only 4 cores.
My question is what undisclosed factors give the A57 equivalent per-core performance plus a compute/watt advantage at a quadrupled core count, unless AMD wants to reopen the "RISC is superior to x86" debate all over again.
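The implied comparison boils down to simple multiplication: aggregate throughput scales with core count, but perf/watt only improves if power grows more slowly. A back-of-envelope sketch, where every number (IPC, clocks, TDPs) is an illustrative assumption rather than anything AMD disclosed:

```python
# Back-of-envelope throughput and perf/watt comparison.
# All figures are hypothetical placeholders, not disclosed numbers.

def aggregate_throughput(cores, clock_ghz, ipc):
    """Peak instruction throughput in billions of instructions per second."""
    return cores * clock_ghz * ipc

# Hypothetical: 4-core Jaguar-based Kyoto vs. 16-core A57-based Seattle,
# taking the claimed per-core equivalence at face value (ipc=1.0 for both).
kyoto   = aggregate_throughput(cores=4,  clock_ghz=2.0, ipc=1.0)
seattle = aggregate_throughput(cores=16, clock_ghz=2.0, ipc=1.0)

print(seattle / kyoto)  # 4.0 -- the quadrupled core count does all the work

# Perf/watt only wins if SoC power grows by less than 4x.
kyoto_watts, seattle_watts = 15.0, 25.0  # invented TDPs for illustration
print((seattle / seattle_watts) / (kyoto / kyoto_watts))  # ~2.4x perf/watt
```

The point of the exercise: the slide's perf/watt claim is satisfiable purely by core count and power scaling, so it tells us nothing about whether an individual A57 actually matches a Jaguar core.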

At least in theory, ARM cores have to be somewhat generalized in their design choices to be palatable to a lot of commodity platforms and to reduce engineering effort. Bog-standard ARM has not impressed many in terms of its memory pipeline and microarchitecture. The benefit AMD promises from the ARM variant seems to be that a standard core anybody can use simplifies design and shortens time to market, precisely by not drawing on most of AMD's design expertise.

AMD will have a little shy of 20 years of experience with high-performance OoO cores by the time Seattle ships in volume. So I'd ask: where did the A-team go wrong? Or did AMD fire them all?
 
Being in all 3 consoles is starting to show its advantage: All Frostbite 3 Titles to Ship Optimized Exclusively for AMD.

"Starting with the release of Battlefield 4, all current and future titles using the Frostbite 3 engine — Need for Speed Rivals, Mirror's Edge 2, etc. — will ship optimized exclusively for AMD GPUs and CPUs. While Nvidia-based systems will be supported, the company won't be able to develop and distribute updated drivers until after each game is released."

I'm sure DICE will pay special attention to GCN and collaborate closely with AMD, but I don't see why they should deny NVIDIA access to early builds, nor do I think they will. And this article has no direct quote suggesting otherwise.
 