Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

I imagine that reality was a product of having Ken Kutaragi at the helm, and perhaps of the lack of continuity between hardware iterations of the PlayStation.

Also, I presume it is much easier for Sony to make BC a prerequisite without the need to plug in old hardware, given that AMD has intimate knowledge of the overall hardware, combined with its expertise in modern CPU/GPU design and its desire not to lose the majority of its semi-custom business.
The PS3 was designed when Sony was very intimately aware of the prior generations' hardware designs. The mixture of a more standard cache-based processor and the SPEs was a compromise between IBM's desire for a more standard multicore setup and Sony+Toshiba's history of providing specialized non-cached DSP-like processors.

It was compatibility with the old ways of doing things that led Kutaragi to split the difference and give one portion of Cell to IBM's way, and the other to what Sony and Toshiba were comfortable with.

We know IBM's philosophy also went into the Xbox 360, Wii, and essentially everything else; included alongside it was an outside GPU from ATI/AMD.
The addition of a GPU to the PS3 appears to have been the result of a later-stage realization that the old way Cell was counting on didn't compete in realized power or performance.

It wouldn't be unprecedented if it's AMD's turn this time to be the old way Sony shouldn't be too slavish to. Backwards compatibility might be something in AMD's interest to easily provide, but it is probably even more in its interest to not have GPU IP that is so fully beaten across so many metrics.
That AMD is in that position means either that it is blind to those realities or, more likely, that it is aware and what we see is its best effort.
It's not Sony's concern why, just whether it's potentially damaging to remain chained to it.

To be honest, I think the biggest barrier to Nvidia entering the console market is just whether they feel it's a good business. Low margin and high volume doesn't really seem like their thing.
On the other hand, there's the Switch. It's low margin, potentially higher-volume as a portable than at least one of the major consoles, and Nvidia's doing way more software and support work than AMD is.
Having so much infrastructure handled by Nvidia might make losing backwards compatibility worthwhile. Nvidia might even be making the case: if it was good enough for Nintendo...

It's also a case where Nvidia's mobile GPU tech outpaced AMD's, although in this case the AMD tech in question was outdated.
Nvidia's mobile IP fed into the GPUs that are now beating GCN, and perhaps that cycle is more likely to be aligned for the coming generation of standard consoles.
 
The PS3 was designed when Sony was very intimately aware of the prior generations' hardware designs. The mixture of a more standard cache-based processor and the SPEs was a compromise between IBM's desire for a more standard multicore setup and Sony+Toshiba's history of providing specialized non-cached DSP-like processors.

It was compatibility with the old ways of doing things that led Kutaragi to split the difference and give one portion of Cell to IBM's way, and the other to what Sony and Toshiba were comfortable with.

We know IBM's philosophy also went into the Xbox 360, Wii, and essentially everything else; included alongside it was an outside GPU from ATI/AMD.
The addition of a GPU to the PS3 appears to have been the result of a later-stage realization that the old way Cell was counting on didn't compete in realized power or performance.

It wouldn't be unprecedented if it's AMD's turn this time to be the old way Sony shouldn't be too slavish to. Backwards compatibility might be something in AMD's interest to easily provide, but it is probably even more in its interest to not have GPU IP that is so fully beaten across so many metrics.
That AMD is in that position means either that it is blind to those realities or, more likely, that it is aware and what we see is its best effort.
It's not Sony's concern why, just whether it's potentially damaging to remain chained to it.


On the other hand, there's the Switch. It's low margin, potentially higher-volume as a portable than at least one of the major consoles, and Nvidia's doing way more software and support work than AMD is.
Having so much infrastructure handled by Nvidia might make losing backwards compatibility worthwhile. Nvidia might even be making the case: if it was good enough for Nintendo...

It's also a case where Nvidia's mobile GPU tech outpaced AMD's, although in this case the AMD tech in question was outdated.
Nvidia's mobile IP fed into the GPUs that are now beating GCN, and perhaps that cycle is more likely to be aligned for the coming generation of standard consoles.
I would assume that PS5 hardware is already decided on, at least if Sony is targeting a holiday 2019 launch (or a six-year cycle). AMD actually delivered quite well with the PS4, and the subsequent Slim and Pro.
I'm sure they could provide a compelling slide deck for the PS5. And cynicism aside, I believe they could do a decent job with Zen and Navi as a foundation within a reasonable power draw.
Nvidia would have to come up with something compellingly better than what Sony is currently enjoying with AMD, and it's not certain that it would be worth it for them. The potential is there though, and that helps in negotiations with AMD.
 
To be honest, I think the biggest barrier to Nvidia entering the console market is just whether they feel it's a good business. Low margin and high volume doesn't really seem like their thing.

Nintendo Switch..?

Either way, I kind of dream of what Nvidia could cram into a ~200W TDP on a 7nm node. Their first Volta (Tesla) PCIe card has a 250W TDP and does ~14 TFLOPS... and that's on a 12nm process...
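Back-of-the-envelope, and with every scaling factor below being my own guess rather than a vendor figure, it pencils out something like this:

```python
# Back-of-the-envelope scaling of the Volta numbers above to a 7nm, ~200W
# console budget. Every factor here is an assumption, not a vendor spec.
V100_TFLOPS = 14.0   # FP32 at ~250W on TSMC 12nm (from the post above)
V100_TDP_W  = 250.0

CONSOLE_TDP_W      = 200.0
PERF_PER_WATT_GAIN = 1.35  # assumed 12nm -> 7nm efficiency gain (a guess)

tflops_per_watt = V100_TFLOPS / V100_TDP_W
console_tflops  = tflops_per_watt * PERF_PER_WATT_GAIN * CONSOLE_TDP_W
print(f"~{console_tflops:.1f} TFLOPS in {CONSOLE_TDP_W:.0f} W")  # ~15.1 TFLOPS
```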
 
I would assume that PS5 hardware is already decided on, at least if Sony is targeting a holiday 2019 launch (or a six-year cycle). AMD actually delivered quite well with the PS4, and the subsequent Slim and Pro.
I'm sure they could provide a compelling slide deck for the PS5. And cynicism aside, I believe they could do a decent job with Zen and Navi as a foundation within a reasonable power draw.
Nvidia would have to come up with something compellingly better than what Sony is currently enjoying with AMD, and it's not certain that it would be worth it for them. The potential is there though, and that helps in negotiations with AMD.
Not sure it's decided on. They probably have several PS5 spec options, each tied to a different launch timing, and are still deciding which one to go with. It gives them more flexibility for changing market forces.
 
I think there's zero chance of the PS5 using an ARM-based CPU or an Nvidia mobile GPU like the Switch. I suppose there's a slight chance Sony would work with Nvidia to give the PS5 a full standalone GPU, even though it's very unlikely. The most likely option is a beefy AMD APU with a Zen 2 based mobile/lite CPU and a Navi and/or next-gen GPU.

The main question on my mind (it depends on PS5's time frame) is how much of a difference in transistor budget and clock speeds 7nm+ manufacturing could make over plain 7nm.

Edit: Referring to TSMC's 7nm and 2nd-gen 7nm+ nodes. I wasn't thinking about GloFo's 7nm roadmap. I'm assuming PS5 silicon would be manufactured by TSMC, like the PS4 Pro and XB1X.
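For a sense of scale, a toy comparison at a fixed die size; the density and clock factors here are my assumptions, not TSMC's official numbers:

```python
# Rough transistor-budget comparison for a fixed die size, using assumed
# (not official) scaling factors for TSMC 7nm vs 7nm+.
DIE_MM2 = 350.0              # hypothetical console APU die size
N7_MTR_PER_MM2 = 91.0        # often-quoted ballpark for N7 logic density
N7PLUS_DENSITY_GAIN = 1.18   # assumed ~15-20% density uplift for 7nm+
N7PLUS_CLOCK_GAIN = 1.10     # assumed ~10% frequency uplift at iso-power

n7_budget = DIE_MM2 * N7_MTR_PER_MM2           # in millions of transistors
n7plus_budget = n7_budget * N7PLUS_DENSITY_GAIN
print(f"N7:  ~{n7_budget / 1000:.1f}B transistors")
print(f"N7+: ~{n7plus_budget / 1000:.1f}B transistors, "
      f"~{(N7PLUS_CLOCK_GAIN - 1) * 100:.0f}% clock headroom")
```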
 
Nintendo Switch..?

Either way, I kind of dream of what Nvidia could cram into a ~200W TDP on a 7nm node. Their first Volta (Tesla) PCIe card has a 250W TDP and does ~14 TFLOPS... and that's on a 12nm process...
That's pure compute; Vega on a 15-20% better process wouldn't be that far away in compute-only terms.
To me, AMD and NV look quite comparable in terms of power consumption on the ALU side; I think NV is getting its power wins in all the other areas of the pipeline.

I think everyone agrees TSMC's 16nm process is a little bit better than 14nm LPP, and 12nm is a little bit better than 16nm.

Unless someone brings a big ARM core to market, it will be like bringing a knife to a gun fight. If you want the next level of GTA5-style immersion, you're going to need all the raw CPU you can get. And if your logic is to use the GPU to do the CPU's job to make up for it, you've just lost the advantage you sought by going with an NV GPU.
 
X1 and PS4 launched with incredibly low-end CPU cores and I'd expect that to continue. Going with many ARM cores doesn't seem that ridiculous to me, but I think x86 would definitely be preferable.
 
That's pure compute; Vega on a 15-20% better process wouldn't be that far away in compute-only terms.
To me, AMD and NV look quite comparable in terms of power consumption on the ALU side; I think NV is getting its power wins in all the other areas of the pipeline.

I think everyone agrees TSMC's 16nm process is a little bit better than 14nm LPP, and 12nm is a little bit better than 16nm.

Unless someone brings a big ARM core to market, it will be like bringing a knife to a gun fight. If you want the next level of GTA5-style immersion, you're going to need all the raw CPU you can get. And if your logic is to use the GPU to do the CPU's job to make up for it, you've just lost the advantage you sought by going with an NV GPU.
Is ARM actually a weak CPU for gaming, though? I get that it's not lighting the world on fire for OS and workstation use cases, but as a pure game-code cruncher with very little running in the background, would it really be that weak in comparison?
 
X1 and PS4 launched with incredibly low-end CPU cores and I'd expect that to continue. Going with many ARM cores doesn't seem that ridiculous to me, but I think x86 would definitely be preferable.

The trend for consoles now is a big GPU relative to the CPU... which makes sense given what their purpose is.
 
Is ARM actually a weak CPU for gaming, though? I get that it's not lighting the world on fire for OS and workstation use cases, but as a pure game-code cruncher with very little running in the background, would it really be that weak in comparison?

They seem to work pretty well in iPhones/iPads...
 
Is ARM actually a weak CPU for gaming, though? I get that it's not lighting the world on fire for OS and workstation use cases, but as a pure game-code cruncher with very little running in the background, would it really be that weak in comparison?
The reason for going with x86 for the next generation of consoles would be backwards compatibility. You could make the case for this being a weakness of the current architectures, since maintaining backwards compatibility is most easily achieved by staying with x86, which locks Sony/MS to AMD or Intel and reduces their bargaining power.
Since the current x86 processors are so weak, however, it may actually be possible to emulate them well enough using ARM cores on 7nm, giving way more freedom CPU-wise. Realistically, they are still limited to a very small number of GPU providers. But having a choice at all makes for a better bargaining position than not having any real competition at all!
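As a toy model of that headroom question (all three factors below are assumptions, not measurements):

```python
# Toy model of the emulation argument: can a big ARM core at 7nm emulate a
# 1.6 GHz Jaguar fast enough? All numbers are illustrative assumptions.
JAGUAR_CLOCK_GHZ = 1.6
JAGUAR_IPC = 1.0               # normalized baseline
ARM_CLOCK_GHZ = 3.0            # assumed big 7nm ARM core
ARM_IPC_VS_JAGUAR = 1.8        # assumed native IPC advantage
EMULATION_EFFICIENCY = 0.4     # assumed: binary translation keeps ~40%

native_jaguar   = JAGUAR_CLOCK_GHZ * JAGUAR_IPC
emulated_on_arm = ARM_CLOCK_GHZ * ARM_IPC_VS_JAGUAR * EMULATION_EFFICIENCY
print(f"emulated/native ratio: {emulated_on_arm / native_jaguar:.2f}")  # ~1.35
```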
 
X1 and PS4 launched with incredibly low-end CPU cores and I'd expect that to continue. Going with many ARM cores doesn't seem that ridiculous to me, but I think x86 would definitely be preferable.

Thinking about this a little bit more deeply: for decades we had consoles with very little RAM, and all we heard from developers were "if only there was more RAM!" types of complaints.

This gen, we got a lot of RAM. I daresay quite a lot more than most were expecting before launch.
And now all we hear is "if only we had better CPUs!"

My point is, just because something hasn't been fixed yet, doesn't mean that it won't.

We have a precedent where RAM-starved consoles were the problem, and finally they're not RAM-starved anymore.

I can see a future where consoles won't be CPU starved anymore.
 
X1 and PS4 launched with incredibly low-end CPU cores and I'd expect that to continue. Going with many ARM cores doesn't seem that ridiculous to me, but I think x86 would definitely be preferable.

Why? This generation was really not that typical. The original Xbox's Celeron wasn't that weak, and Xenon and Cell were both very big CPUs thanks to massive vector units/SPEs. This generation there just weren't that many choices: who could deliver an OoOE core stronger than Jaguar in 2013, with as much throughput across 8 cores, that could be integrated into an SoC with a ~2TF GPU?

Is ARM actually a weak CPU for gaming, though? I get that it's not lighting the world on fire for OS and workstation use cases, but as a pure game-code cruncher with very little running in the background, would it really be that weak in comparison?
It comes down to what you want. If your code has serial sections that are branchy, a big OoOE core with a large window can extract as much IPC as possible: predict earlier, prefetch earlier, recover from stalls earlier, etc. You can kind of see this with something like an i3-2100: two big cores @ 3.1GHz delivering more performance in many current-gen games than a ~7-core 1.6GHz SoC.
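A quick Amdahl-style sketch shows why; the IPC ratios here are illustrative assumptions:

```python
# Amdahl-style sketch of the i3-2100 vs Jaguar example: with a serial,
# branchy fraction, two fast cores beat seven slow ones.
def effective_perf(cores, per_core_perf, serial_fraction):
    """Time-based Amdahl model, normalized so higher is better."""
    serial_time = serial_fraction / per_core_perf
    parallel_time = (1 - serial_fraction) / (per_core_perf * cores)
    return 1.0 / (serial_time + parallel_time)

# per-core perf ~ clock * relative IPC (assumed: Sandy Bridge ~2x Jaguar IPC)
i3     = lambda s: effective_perf(cores=2, per_core_perf=3.1 * 2.0, serial_fraction=s)
jaguar = lambda s: effective_perf(cores=7, per_core_perf=1.6 * 1.0, serial_fraction=s)

for s in (0.1, 0.3, 0.5):
    print(f"serial={s:.0%}: i3/jaguar = {i3(s) / jaguar(s):.2f}")
# The gap widens as the serial fraction grows.
```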

You only need to look at Zen to see how important latency is to games. Within a CCX you have best-in-class latencies; outside the CCX you just have "good" latencies. Now look how much performance you leave on the table if you pin threads to cores in different CCXs.
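As a minimal sketch of CCX-aware pinning on Linux (the core numbering here is an assumption; check your actual topology with lscpu first):

```python
# Minimal sketch of CCX-aware pinning on Linux (os.sched_setaffinity is
# Linux-only). Which cores share a CCX/L3 is an assumption here.
import os
from concurrent.futures import ThreadPoolExecutor

CCX0 = {0, 1, 2, 3}   # assumed: first four logical cores share a CCX/L3

def pin_to_ccx0():
    # Affinity here is process-wide and inherited by threads created after;
    # a real engine would pin individual worker threads so cooperating jobs
    # share one L3 and avoid cross-CCX hops.
    os.sched_setaffinity(0, CCX0)

pin_to_ccx0()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda x: x * x, range(8)))
```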

The other thing about a sea of cores is that you still need to be cache coherent, and every extra core increases the latency for all cores. Those big AMD cores also have SMT, so if your branchy code does stall threads, you can still keep your throughput up.

A 7nm 6-core Zen+ CCX with a 25W TDP would be mid-to-high 2GHz, maybe even 3.0GHz; what ARM option is going to come close to that?


The trend for consoles now is a big GPU relative to the CPU... which makes sense given what their purpose is.

One generation does not make a trend...

They seem to work pretty well in iPhones/iPads...

And those are very big cores with lots of cache. How many of those are there on the open market for people to integrate into an SoC? M2, nope; A73, nope; Snapdrag... I mean, A73...
 
One generation does not make a trend...

I don't see it changing though... They are going to want to cram in as much GPU grunt as they can while having just enough CPU to feed it and run one other task like a streaming/music app. That's all it has to be able to do, really. It's not a PC...
 
I don't see it changing though... They are going to want to cram in as much GPU grunt as they can while having just enough CPU to feed it and run one other task like a streaming/music app. That's all it has to be able to do, really. It's not a PC...

I disagree. If anything, the only meaningful way they can distinguish a next-gen console from the mid-gen upgrades, PS4Pro and XB1X, is by providing a significant update in CPU performance.

From a consumer perspective, "if I already own a Pro/X1X and game at 4k, why do I need to upgrade to PS5/XBN?"

The surefire way to answer that value proposition is to go hard on a beefy Zen CPU to enable simulation complexity that simply will not run [well] on a Jaguar...

If anything, I'd argue that the existence of the Pro/XB1X makes a next-gen CPU upgrade all the more important, both for consumers and for Sony/MS, who will be trying to convince console owners to upgrade.
 
I disagree. If anything, the only meaningful way they can distinguish a next-gen console from the mid-gen upgrades, PS4Pro and XB1X, is by providing a significant update in CPU performance.

From a consumer perspective, "if I already own a Pro/X1X and game at 4k, why do I need to upgrade to PS5/XBN?"

The surefire way to answer that value proposition is to go hard on a beefy Zen CPU to enable simulation complexity that simply will not run [well] on a Jaguar...

If anything, I'd argue that the existence of the Pro/XB1X makes a next-gen CPU upgrade all the more important, both for consumers and for Sony/MS, who will be trying to convince console owners to upgrade.

Unless they make a more power-hungry console (in watts), I don't really see how they can start going to a big CPU. There are also probably practical limits to the size of the APU/SoC: the bigger you make the CPU, the smaller you make the GPU, and vice versa. Guaranteed, the CPU they provide will be an upgrade over Jaguar; it'll have to be, alongside a much bigger GPU. But I'm not expecting anything mind-blowing.
 
I disagree. If anything, the only meaningful way they can distinguish a next-gen console from the mid-gen upgrades, PS4Pro and XB1X, is by providing a significant update in CPU performance.

From a consumer perspective, "if I already own a Pro/X1X and game at 4k, why do I need to upgrade to PS5/XBN?"

The surefire way to answer that value proposition is to go hard on a beefy Zen CPU to enable simulation complexity that simply will not run [well] on a Jaguar...

If anything, I'd argue that the existence of the Pro/XB1X makes a next-gen CPU upgrade all the more important, both for consumers and for Sony/MS, who will be trying to convince console owners to upgrade.

Not implying they won't use Zen or that it won't be a more powerful CPU than Jaguar... but I still think that, relatively speaking, the GPU will be the priority.
 
Unless they make a more power-hungry console (in watts), I don't really see how they can start going to a big CPU. There are also probably practical limits to the size of the APU/SoC: the bigger you make the CPU, the smaller you make the GPU, and vice versa. Guaranteed, the CPU they provide will be an upgrade over Jaguar; it'll have to be, alongside a much bigger GPU. But I'm not expecting anything mind-blowing.

When I say "bigger" CPU, of course I'm talking in relative terms with the amount of GPU ALU they have on the maximum economical APU size.

Not implying they won't use Zen or that it won't be a more powerful CPU than Jaguar... but I still think that, relatively speaking, the GPU will be the priority.

E.g. if Sony is deciding between a 4c/8t Zen+ CPU with something like a 64-80 CU GPU and an 8c/16t Zen+ CPU with something like a 56 CU GPU, I'm arguing there's more reason to choose the latter. There's a limit to how much benefit those extra GPU CUs will provide compared with the greater depth of simulation complexity the bigger CPU enables; and with the mid-gen consoles changing the equation, next-gen consoles will have a harder time demonstrating what "next-gen" means to gamers with just "more GPU ALUs".
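Rough GCN-style math makes the asymmetry concrete; the clock is an assumption:

```python
# GCN-style peak-compute math for the trade-off above:
# FLOPS = CUs * 64 lanes * 2 ops (FMA) * clock.
CLOCK_GHZ = 1.4  # assumed

def tflops(cus, clock_ghz=CLOCK_GHZ):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"64 CU: {tflops(64):.1f} TF, 80 CU: {tflops(80):.1f} TF")  # 11.5 / 14.3
print(f"56 CU: {tflops(56):.1f} TF")                              # 10.0
# Dropping from 64 to 56 CUs costs ~12% GPU compute while doubling the CPU
# cores; that asymmetry is the argument for the 8c/16t option.
```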
 
When I say "bigger" CPU, of course I'm talking in relative terms with the amount of GPU ALU they have on the maximum economical APU size.



E.g. if Sony is deciding between a 4c/8t Zen+ CPU with something like a 64-80 CU GPU and an 8c/16t Zen+ CPU with something like a 56 CU GPU, I'm arguing there's more reason to choose the latter. There's a limit to how much benefit those extra GPU CUs will provide compared with the greater depth of simulation complexity the bigger CPU enables; and with the mid-gen consoles changing the equation, next-gen consoles will have a harder time demonstrating what "next-gen" means to gamers with just "more GPU ALUs".
The direction of moving draw calls from the CPU to the GPU is going to continue. With draw calls taking up a large chunk of CPU time, if next-gen CPUs are also restrictive, we will have matured enough from an API and support perspective that a lot of the dispatch will happen GPU-side to free up CPU cycles. Right now I don't think anyone is working all that much with it, because it's either not as fully functional as they want it to be, or there isn't enough support for it yet. But I'm pretty sure the move to ExecuteIndirect, and the better Nvidia version of it, is definitely coming; it's definitely going to be part of the next set of games within the next 5 years.

There's always been the rebuttal argument of batching and draw-call reduction, making larger systems, etc., and that's the way it is today. But all of this costs money, and a lot of time, to constantly have assets prepped for different batched calls, or to model whole systems (like a flock of birds as opposed to a single bird). The move to more draw calls will be inevitable once the hardware supports it. It's bound to eventually save both time and money and provide more flexibility as a result, addressing a lot of these concerns about what would constitute a next-generation game. Draw calls are a big limitation here, and a lot of work goes into getting that number down.
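A toy budget makes the point; the per-draw cost and the submission-thread share are illustrative assumptions:

```python
# Toy draw-call budget: why moving submission to the GPU matters on a weak
# CPU. Per-draw cost and thread share are illustrative assumptions.
FRAME_MS = 16.6          # 60 fps frame budget
SUBMIT_SHARE = 0.25      # assumed fraction of one core's frame time
                         # available for command submission
COST_PER_DRAW_US = 15.0  # assumed CPU cost per draw on a slow core

budget_us = FRAME_MS * 1000 * SUBMIT_SHARE
print(f"max CPU-issued draws/frame: {budget_us / COST_PER_DRAW_US:.0f}")  # ~277

# With ExecuteIndirect-style submission the CPU records one indirect call
# and the GPU expands it, so draw count stops being a CPU-side limit.
```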
 