So if I follow this, you're suggesting that Microsoft and Sony have actually invested a fair bit in their APU designs. Hundreds of millions to billions is not small change (and certainly exceeds what Apple spends on R&D for a single chip each year, based on their public financials). But two posts back you had this view:
Which looks to be the reverse, because you are saying that you do not think consoles are worth that investment. Of course it's what Microsoft and Sony think that matters, but I'm confused about your position.
This only contradicts my view if the roll-your-own-and-play-with-the-big-dogs approach is cheaper than contracting with a company that knows what it's doing for hundreds of millions of dollars.
If you believe AMD, the head of the server group presented an estimate last year of 400 million dollars and three years to design an x86 server processor, versus 30 million and 1.5 years for an ARM one.
I don't actually believe that was a fair comparison (it was for investors, not people AMD wants to educate), nor is it entirely applicable in this case, but a complex OoO design can take 3-5 years and some hundreds of millions of dollars all on its own, without including a GPU or additional SOC hardware.
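To put those investor-deck figures side by side (the numbers are the marketing-grade estimates quoted above, not audited costs), a quick back-of-the-envelope comparison:

```python
# Back-of-the-envelope comparison of AMD's quoted design-cost figures.
# Values are the investor-presentation estimates cited above, nothing more.
x86_cost_musd, x86_years = 400, 3.0  # x86 server processor
arm_cost_musd, arm_years = 30, 1.5   # ARM server processor

cost_ratio = x86_cost_musd / arm_cost_musd
time_ratio = x86_years / arm_years
print(f"x86: {cost_ratio:.1f}x the cost, {time_ratio:.1f}x the schedule")
# prints "x86: 13.3x the cost, 2.0x the schedule"
```

Even taken at face value, the gap is in NRE and time-to-market, not in the resulting chip's capability, which is part of why the comparison flatters ARM.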
My view of the situation is that a console manufacturer starting with no presence in performant, manufacturable modern APU design faces more expense and risk for a device whose volumes are a fraction of what is considered sustainable on the modern market, and whose software-as-razorblades model nearly broke completely in the last generation.
Even with those costs, the consoles started with an effectively finished and unmodified Jaguar CPU and the fundamental framework of standard GCN.
Starting fresh would have incurred the above costs and then added on years and tens to hundreds of millions of dollars more.
Instead, Jaguar and GCN have their costs amortized over a much wider range of SKUs than the comparatively small volume of consoles shipped per quarter.
The tens of millions of units in aggregate of Jaguar/Puma APUs and various Sea Islands GPUs, and even Kaveri help provide sales volume where the consoles cannot.
The less said about how mobile SOCs sell an order of magnitude above that quarterly, the better.
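The amortization argument can be made concrete with a rough sketch. All the figures below are illustrative placeholders, not actual AMD numbers; the point is only the shape of the arithmetic:

```python
# Illustrative per-unit amortization of a fixed design cost (NRE).
# Every figure here is a made-up placeholder to show the shape of the argument.
def nre_per_unit(nre_usd: float, units: float) -> float:
    """Fixed design cost spread evenly across every unit shipped."""
    return nre_usd / units

NRE = 300e6                           # hypothetical CPU+GPU design cost
console_units = 10e6                  # consoles shipped in an early year
shared_units = console_units + 40e6   # plus PC APUs/GPUs reusing the same IP

print(f"console-only: ${nre_per_unit(NRE, console_units):.2f}/unit")   # $30.00
print(f"shared IP:    ${nre_per_unit(NRE, shared_units):.2f}/unit")    # $6.00
```

The same fixed cost shrinks per unit as the IP is reused across more SKUs, which is exactly the leverage a from-scratch console design forgoes.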
If we believe Vgleaks and the diagram of an old Steamroller-based PS4 design, Sony had the choice between two expensive multiyear CPU projects and got to shrug one off.
To do the same in a from-scratch in-house effort would have been ruinous.
But it's also difficult to compare the design process of a chip costing around $17 to go in a phone with one costing $100 to go in a console. You can do a lot better with 5x the budget, more power, and active cooling.
It's actually possible that the lower-cost, high-volume chip had more R&D go into it than the one that accepts higher per-unit costs. That's the general story of much of AMD's product mix versus Intel's.
An Intel Core i3 might cost tens of dollars in silicon, but there is little doubt that Intel spent AMD into the ground to design it, given the process co-development and the significant research and design costs that went into the cores, the superior memory subsystem, the LGA package, higher levels of integration, and lower per-unit costs.
Spend more now, and save money over a hundred million units. Be better than AMD, and charge more to boot.
Well, that's one motherlode of an assumption there. In terms of cores and clocks, Apple's A7 is well behind the curve. It tops out at a modest 1.4GHz (iPad Air) and is dual-core. You think Apple simply couldn't have managed a quad-core design like so many others, or offered higher clocks? Why? I personally don't think they needed to, but not needing to and not being able to are quite different.
I haven't seen an architectural deep dive concerning Cyclone's features.
The apparent emphasis is on being physically optimized to have a lower max frequency in the current implementation.
Its width is fine at the low clocks it targets, but would be prohibitive if they were higher.
It has a 4-cycle load latency at low clocks, which might point to a cache subsystem that adds extra stages to help drive signals at lower voltages than a tighter load pipeline would permit.
Memory bandwidth is an order of magnitude below what the consoles work with, and getting a good memory subsystem is an even darker art than getting a good-enough core.
AMD's memory subsystem has some glaring deficiencies, but it is still beyond what mobile designs have tried to do, and outside of a few initial custom forays, memory performance is a critical disadvantage that ARM is now working to improve. The A7 is notable in that it improves upon things to some unspecified level.
The core is double the area of Jaguar, so there are likely physical tradeoffs for reduced leakage, from which a lower clock ceiling can result.
I haven't seen discussion of the features of the A7's L2 and L2 interface, which is a rather impressive consumer of area in Jaguar.
To compound the comparison, AMD's Puma core is Jaguar with functional turbo and bug-fixed power management.
This goes to show the complexity of Jaguar's less glamorous attributes and the limited engineering resources AMD had to devote to all its initiatives. For what it's worth, AMD's not-best autonomous power management is still quite good relative to most of the supposed competitors.
I personally struggle with calling either console a 'high performance' device.
For a consumer device of this type there is AMD, Intel, ARM in mobiles, and the last dregs of IBM's old designs.
The latter two are not in the same area code in terms of performance, at least not until the much higher core count A57 chips finally make their way to market 1.5 years after Jaguar (and possibly only this close because AMD's R&D was significantly disrupted at some point in the last few years).
AMD's chips are clearly inferior to Intel's, but they can still reach some level of parity in some measures with low- to mid-tier Intel consumer chips, and in that regard AMD has little company.
A console APU can manage the multithreaded CPU throughput of a single Steamroller chip, which nips at the lower end of Intel's i5 range and more solidly matches the i3.
Single-threaded, not so much.
If you want to go a bit further afield, there's POWER and perhaps Oracle's chips, but those are well out of consideration.
For graphics, the number of players in the same zip code as the consoles is two.
This is the second generation of HD consoles, and 1080p still isn't being delivered consistently. Of the last four console generations, the current crop is fairly lacklustre compared to technology in the mainstream PC space.
It's a price, power, and size constrained entertainment appliance whose traditional economic model is borderline unsustainable.
There are hundreds of watts of TDP the consoles can't have, and the prospects for cost reduction and power savings through node transitions are much poorer than they once were.
The last gen had a worse time of things than was expected, and if that were known in advance they might have scaled back as well.
Of course I offered ARM as a contender [for processor] in PlayStation 5. Microsoft and Sony don't need the best or fastest processor in a games console (look at what we have now), they just need to be good enough.
As I mentioned, if AMD doesn't implode, then one good-enough contender is going to have a good-enough custom ARM, or a good-enough x86, plus IP that is closely aligned with the existing platform.
I'm not sure what this comment is in regard to. If it's about the choice of an AMD APU this generation, Mark Cerny made an interesting comment that suggested it really wasn't a choice at all. It was the only viable option given their target launch window.
It was the viable option given their constraints: that it all be one chip, have a common CPU architecture, have a performant GPU, and be able to reach a final design in several years.
Had they relaxed the SOC constraint and release date, several other combinations would have been readily applicable.
AMD's CPUs are not the best CPUs, and Jaguar is not the fastest AMD CPU.
AMD's GPUs are not the best GPUs.
AMD's memory controllers and cache subsystems are not the best.
AMD's DVFS, physical design, IO, connectivity, compiler design, driver quality, 64-bit ISA, and so on are not the best either.
On the other hand, the ones that beat AMD in some of those categories were no-shows in others, which is why some unspecified defense contractor's super-ARM may not be compelling if the designer's overall portfolio and expertise don't provide the total package.