*re-spin* IHV choice in consoles

Energy-efficient Athlon II X4s and Phenom IIs are still much better performing chips and will use very little power.
Well, I'm not sure how those parts were produced, were they high-bin parts?
Other than that, Piledriver wins in many cases against the Stars core / K10 / K10.5, not thanks to intrinsically better IPC but to higher clock speeds, a better turbo mode, and support for new instructions.
Steamroller should definitely leave K10 behind in every multi-threaded workload. Sometime in 2013, though...

In my opinion CMT is doomed anyway; they are beating a dead horse. In some slides at the last IDF they spoke about a possible convergence between their low-power and high-power cores (after Excavator and the v2 of the Jaguar cores). I hope they get rid of CMT at that point.

I'm a believer that three cores including all the improvements and refinements AMD put into both BD and PD would still be a better performer than the two aforementioned products.
I also believe that, temporarily, AMD should have done what they did with GPUs and gone with smaller die sizes (I mean until their new architecture had been ironed out).
AMD can't work on many chips concurrently; that's why we usually get one APU plus its salvage parts, and the FX (4 modules) plus its salvage parts.
They should have made it so that, in the desktop realm, the only chip they sell is the APU.
The chip is too big; anything above ~190 mm^2 seems to imply production headaches.
They should have made their mainstream desktop chip fit within that constraint.
A 3-core design with a much lesser GPU (Cayman class EDIT Caicos class, oops) could have fit the bill; as many reviewers pointed out, APUs are still searching for their audience, such GPU power is more than casual users care about and not enough for "Western" gamers.
Taking into account AMD's process disadvantage, it could have been a great idea.
The part could have been a fair competitor for the Core i3 while making AMD more money (more chips per wafer, higher yields).
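To put rough numbers on the "more chips per wafer, higher yields" point, here is a back-of-the-envelope sketch in Python. The 300 mm wafer, the defect density and the three die areas are assumptions for illustration only, not actual AMD or foundry figures:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Classic approximation: wafer area / die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2.0
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2=0.4):
    """Simple Poisson yield model; the defect density is an assumed figure."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

# ~240 mm^2 (Trinity-sized), the ~190 mm^2 "headache threshold", and a smaller cut-down part
for area in (240, 190, 130):
    gross = gross_dies_per_wafer(area)
    good = gross * poisson_yield(area)
    print(f"{area:>3} mm^2: {gross:>3} gross dies, ~{good:.0f} good dies per wafer")
```

Even with such crude models the smaller die comes out well ahead on good dies per wafer, which is the whole point of the argument.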

Actually, I wonder if it would have been possible for AMD to pull off something akin to the Tegra 3, like having two cores on a high-power process and the third one on a low-power process. It may be tricky from the OS point of view, I don't know.
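For what it's worth, here is a toy sketch (Python) of the kind of policy such a Tegra 3-style setup would need on the software side; the thresholds, cluster names and the assumption that only one cluster is active at a time are illustrative, not a description of any real scheduler:

```python
# Toy model of a Tegra 3-style "companion core" switch: run everything on a
# single low-power core when load is light, hand off to the big cores when it
# isn't. Thresholds and cluster names are made up for illustration.

LOW_THRESHOLD = 0.20   # below this average load, drop back to the companion core
HIGH_THRESHOLD = 0.60  # above this, wake the high-power cluster

def choose_cluster(avg_load, current):
    """Hysteresis between the two thresholds avoids ping-ponging."""
    if current == "companion" and avg_load > HIGH_THRESHOLD:
        return "big"        # migrate all threads to the high-power cores
    if current == "big" and avg_load < LOW_THRESHOLD:
        return "companion"  # park the big cores, run on the low-power core
    return current

cluster = "companion"
for load in (0.05, 0.10, 0.75, 0.50, 0.15, 0.08):
    cluster = choose_cluster(load, cluster)
    print(f"load {load:.2f} -> {cluster} cluster")
```

The tricky part hinted at above is exactly this hand-off: the OS (or firmware hiding it from the OS) has to migrate state between cores that may not even share the same process node or cache hierarchy.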
 
If AMD did a sucky GPU it would only compete head-on with Intel and still lose.
What is sure is that an A10-5800 can be good for "family gamers". Go back a couple of years: such people looking to upgrade with a cheap graphics card ended up with a GT 220 DDR2 or a 4650 or something sucky like that. Low-end graphics cards are the land of 64-bit buses and bottom-of-the-barrel memory, or 128-bit buses with a GPU still crippled by bandwidth. Many even enjoyed it, as they were getting a 3x to 5x increase in graphics power over what they had before.
 
They still lose, and they sell a 240 mm^2 chip for barely above $100.
With a lesser chip and better performance they would be more competitive; they would still lose, but be closer, and they would make more money, which they need.
I have an APU and a discrete GPU; I bought the laptop not because of the APU but because of the decent (Redwood) GPU. The GPU in the APU (another Redwood) handles everything outside of games. A lesser GPU would be no different for most users.
There is IMHO still a significant market for APUs until the bandwidth issue is solved. Once it is solved we should be at the 22 nm process node, which would allow more transistors to be allocated to the GPU.
As it is now, I would suspect you get more bang for your buck by going with the latest Pentium and matching it with an HD 6670 or the Nvidia part competing in the same category.
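To give a rough sense of the bandwidth gap being discussed, here is a tiny Python sketch using the usual peak-bandwidth formula (bus width in bytes × effective transfer rate); the memory speeds are nominal example figures, not measured numbers:

```python
def peak_bandwidth_gb_s(bus_bits, transfer_rate_mt_s):
    """Theoretical peak = bus width (bytes) * effective transfer rate."""
    return bus_bits / 8 * transfer_rate_mt_s / 1000.0

configs = [
    ("64-bit DDR2-800 budget card",              64,  800),
    ("128-bit DDR3-1600, shared by CPU and APU", 128, 1600),
    ("128-bit GDDR5 @ 4 GT/s (HD 6670 class)",   128, 4000),
]
for name, bits, rate in configs:
    print(f"{name}: ~{peak_bandwidth_gb_s(bits, rate):.1f} GB/s peak")
```

The ~6 GB/s class of card is what those "family gamers" used to buy, the APU has to split ~25 GB/s with the CPU, and even a cheap discrete HD 6670 gets the whole ~64 GB/s to itself, which is why the bandwidth issue mentioned above matters so much for APU graphics.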

It's still about price; a lesser part might allow AMD to price its chip even more competitively or to keep higher margins.
If there is some way to get a Tegra 3-like set-up (but better, with the three cores possibly being active at the same time), it would be a huge win, especially for laptops, where the process disadvantage AMD faces is even more crippling. They have no choice but to sell their highest-bin parts for a bargain (or close to it). It's also the biggest market in personal computing.

Overall, it is an assumption of mine that a standard core would perform better. I might be wrong, but honestly I don't think I am. Even with the improvements across the board, PD still significantly underperforms the old Phenom when the new instructions are not used.

There is also an interesting discussion going on about the Jaguar core, and especially its L2 latency. What I get from it is that even running at half speed the latencies are comparable, but BD/PD has more means to hide those latencies. Big cores would have close to the same ability to hide those latencies, which in my view means that AMD could have used the same (saner) type of cache hierarchy as in Jaguar for its high-end CPU cores. A nice thing is that it is shared, and running at half speed it must be quite power efficient.
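The half-speed point is easier to see with the trivial conversion from L2 cycles to core cycles; the latency figure used below is a placeholder, not a measured Jaguar or BD/PD value:

```python
def l2_latency_in_core_cycles(latency_in_l2_cycles, core_clock_ghz, l2_clock_ghz):
    """A cache clocked at half the core speed doubles its latency as seen by the core."""
    return latency_in_l2_cycles * core_clock_ghz / l2_clock_ghz

core = 3.0  # GHz, example core clock
print(l2_latency_in_core_cycles(13, core, core / 2))  # half-speed L2 -> 26 core cycles
print(l2_latency_in_core_cycles(13, core, core))      # full-speed L2 -> 13 core cycles
```

So a shared, half-speed L2 only works if the core has enough in flight (out-of-order depth, prefetching, more threads) to hide the doubled effective latency, which is the point being made about BD/PD versus Jaguar.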

Anyway, I'm betting on AMD throwing away CMT as soon as they can, which is after Excavator. Let's see in two years (or more).
 
AMD only looks bad in comparison to Intel; their CPUs are light years beyond Xenon and Cell. No one would use Intel because the cost would be prohibitive and their usage would be highly restrictive. AMD CPUs are probably the best choice for a high-performance part in a console.

If Steamroller is half as good as the rumours suggest then AMD could deliver a part with very good performance per watt, performance per mm^2 and absolute performance.
Sorry for the pedantic question, but why is Intel cost prohibitive? What makes IBM and AMD and Nvidia easy to work with, with a willingness to do a custom chip, but not Intel? I would think that the prospect of winning a console contract would make most companies reduce their margins enough to get that contract...
 

In theory nothing, in practice...
Intel does its own fabbing, and historically won't sell the masks or IP.
This really doesn't matter at launch but it does years later as a manufacturer tries to shrink the parts, and combine them to reduce cost.
It's the primary reason that first MIPS then Power parts were so popular in consoles.

Also, if anyone is going with a combined GPU/CPU out of the gate, you'd pretty much have to use a single vendor for both parts (Intel doesn't have a competitive GPU) or use "old" tech from whoever wasn't doing the integration.
 
Sorry for the pedantic question, but why is Intel cost prohibitive? What makes IBM and AMD and Nvidia easy to work with, with a willingness to do a custom chip, but not Intel? I would think that the prospect of winning a console contract would make most companies reduce their margins enough to get that contract...
Easy: Intel Inside is a damn strong brand, they lead performance and public perception by a mile over the competition, and they have no incentive to bargain.
They sell what they want at the price they want, with good margins. They are willing to sell extra production capacity to make money, but only to people that are not even remotely competing for their money.
IBM has extra capacity and engineering teams; they can afford to live with that, but as the money is spent anyway, why not make some profit out of it? AMD is still in financial turmoil. I don't see where you see that Nvidia is willing to let anything go for a bargain. They may in the mobile realm with Tegra, because they are fighting an uphill battle, with quite some success.
 
I don't see where you see that Nvidia is willing to let anything go for a bargain.
They did make GPUs for the previous two generations. They also said they were confident they'd make one for the next gen; that implies a willingness to play by the rules of competition. It's not like they have some amazing tech which would allow them to charge a premium compared to AMD.
 
They're not better performing, let alone "much".

Unless you just mean per clock.

They are better performing; when both are OC'd, the FX line just gets hammered by Phenom IIs.

And before you go off on one saying I'm wrong: I've not only owned pretty much every FX and Phenom II CPU, I also run them on a phase-change unit at -50 °C, so I've run them at silly overclocks. Trust me, Phenom IIs are the much better performing CPUs, with FX being around 20% slower per clock.

IPC would be better suited to consoles than raw clock speed, as clock speed is generally one of the things that gets scaled back to ensure a chip hits certain power budgets.
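To make the per-clock argument concrete with the usual perf ≈ IPC × clock approximation (the 20% figure is the one claimed above, the clock speeds are just example values):

```python
def relative_perf(ipc_relative, clock_ghz):
    """perf ~ IPC * clock, with IPC expressed relative to a Phenom II baseline."""
    return ipc_relative * clock_ghz

phenom_ii = relative_perf(1.0, 4.0)   # Phenom II at an example 4.0 GHz overclock
fx        = relative_perf(0.8, 4.0)   # FX ~20% slower per clock at the same 4.0 GHz
print(fx / phenom_ii)                 # -> 0.8
print(4.0 / 0.8)                      # -> 5.0 GHz: what FX would need just to break even
```

In a console, where the clock is capped by the power budget rather than by how far the silicon can be pushed, that extra ~25% of clock headroom simply isn't available, which is why per-clock performance matters more there.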
 
IBM seems to have plenty of mojo, with POWER8 in the works and work with 3M to stack 100 CPU cores by 2013. The claim of a 1000x increase in CPU performance by IBM also struck me as a hint of wanting to do another custom console CPU. It was Ken Kutaragi who went to IBM wanting a 1000x increase in performance with Cell, so for IBM to make such a statement about a next generation of CPU design came across as a bit odd.
 
There's very little context around their 1000x claim; maybe it was just marketing speak, without any timeframe. The 3M tech will allow much, much bigger chips (in total die area), and some super-fast communication between cores, cache and memory (shorter, wider paths). But I don't understand how those advantages can really multiply performance per watt, or performance per dollar. I'd accept twice or even four times, but 1000x? My pessimist side thinks it's only there to allow huge 400 W chips for servers with 10000 mm^2 total area, and they'll sell them at $20,000 per chip. My optimist side thinks they have something up their sleeve which will revolutionize CPU design but requires a large number of stacked dies, a real 3D CPU design.
 
What I find interesting is that IBM is heavily involved with the Hybrid Memory Cube, and we know Microsoft is a member of that consortium. IBM touts bionic packaging, which the 3M chip-stacking material seems to be a part of. I don't expect a 1000x increase anytime soon, but I think it is possible IBM and Microsoft could be working on a 3D chip.
 