PlayStation 4K - Codename Neo - Technical analysis

For the purposes of discussion in this thread, it makes sense that we accept the conceit that the leaked specs are solid. If they aren't solid then there's no firm base on which to *reasonably* speculate. At a certain point when you start building speculation on top of speculation, the discussion becomes pointless.

Not sure they are final... I believe they are correct for now, but I'm not so sure they will remain so for the final console. Especially on the CPU side!
As was once mentioned in this forum, taking a Jaguar CPU from 1.6 GHz to 2 GHz means a roughly 66% increase in TDP. In this case we are going to 2.1 GHz: a 31.25% increase in performance for about 70% more TDP.

CPU heat could be a problem, so I think there is a chance the CPU may yet change!
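For what it's worth, here is a back-of-the-envelope sketch of where a figure like that could come from, assuming dynamic power scales roughly as f·V² and that the higher clock needs a voltage bump (the voltage numbers below are purely hypothetical, picked only to illustrate why a ~31% clock increase can cost ~70% more power):

```python
# Back-of-the-envelope dynamic power scaling: P_dyn ~ C * V^2 * f.
# The voltages below are hypothetical, NOT real Jaguar figures.

def relative_power(f_base, v_base, f_new, v_new):
    """Dynamic power at (f_new, v_new) relative to (f_base, v_base)."""
    return (f_new * v_new ** 2) / (f_base * v_base ** 2)

base = (1.6, 1.00)  # 1.6 GHz at a normalized 1.00 V
neo = (2.1, 1.14)   # 2.1 GHz with an assumed ~14% voltage bump

perf_gain = neo[0] / base[0] - 1               # +31.25% clock
power_gain = relative_power(*base, *neo) - 1   # ~+71% dynamic power

print(f"clock: +{perf_gain:.1%}, dynamic power: +{power_gain:.1%}")
```

The point being that past a design's sweet spot, power grows much faster than clock, which is exactly why the 1.6 to 2.1 GHz jump looks so expensive.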
 
You are not taking the process node change into account. Going to 2.1 GHz at 14nm could very well mean a reduction in TDP. In fact, Puma+ was a 28nm revision of Jaguar that already reduced TDP considerably (without architecture changes).

With the rumored specs, PS4 Neo's consumption could end up lower than the original PS4's (which was, what, around 150 watts?).
 
I actually have a tough time believing that somebody would use a "cat"-type CPU on 14/16nm when AMD itself won't do it and try to get into a couple of Windows tablets/nettops/laptops. No matter the rumors, I believe the PS4 Neo will use that good old 28nm lithography we know and like.
From there I expect a pretty huge chip. Looking at the announced specs, mostly the GFLOPS throughput and clock speed, I believe the power budget would be a little high even for a system bigger than the PS4.
I expect Sony to have refined its system and to now have the proper power-management features from the later "cat" APUs. So I think the rumored figures are top clock speeds, not sustained performance. The reasons I believe that are the following:
To push higher frame rates on the CPU side, you are going to need higher CPU clocks, which raises the power budget significantly by itself; before you ever render anything, you need extra power. You have to make room for that power.
Say you have accounted for the extra power and are in a position to feed the GPU: how much GPU do you need to render at, say, twice the frame rate or twice the resolution (depending on how the experience is tweaked)? Twice as much sounds right-ish. Now, I believe the numbers we have are too high for just that, especially as the system could/should be built around newer IP (which further improves real-world performance).

So my take on all of that is that an extensive power-management system will let the console do what it has to do within a beefier, yet more reasonable, TDP than you would expect for such a big GPU (plus the CPU cores) in the PC realm.
Going further down that rabbit hole (a 28nm one, then), power may have been a major concern in the design, especially if Sony wants the system to be relatively sleek and silent and to sell it at fair margins.
Memory and memory controllers burn power, so if both run faster, the end result is easy to predict: more power, on top of the extra CPU power already being burnt. So I think it would make sense for Sony to have moved to a wider bus, 320 bits, with memory running (on average) at a slower pace.
How would that match the GPU? I could see the GPU consisting of three shader engines, each one comprising the appropriate fixed-function hardware (geometry engine, etc.), 12 CUs, and 16 ROPs. So the GPU would total 36 CUs and 48 ROPs.
I could see that 36 CU figure being the total number of CUs on the die, not the number of CUs activated (which may depend on what comes back from the foundry). That is already big on 28nm, and even more so if the die carries more than 36 CUs...
It would not shock me if the active count ends up somewhere in the 30s (I'll pass on a random bet).
While the system is in PS4 mode, one shader engine would be shut off, and the two remaining ones would run at a slower speed to emulate the two available in today's PS4 (I have no idea about the level of control AMD has here; disabling a whole engine sounds reasonable, a cluster of CUs? I don't know).
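To put rough numbers on that layout (my own sketch; the 911 MHz clock is from the leaked spec, the per-CU figures are standard GCN arithmetic):

```python
# Theoretical throughput of the speculated 3-shader-engine layout.
# GCN: 64 lanes per CU, 2 FLOPs per lane per clock (fused multiply-add).

shader_engines = 3
cus_per_engine = 12
neo_clock_ghz = 0.911  # GPU clock from the leaked Neo spec

cus = shader_engines * cus_per_engine               # 36 CUs
neo_tflops = cus * 64 * 2 * neo_clock_ghz / 1000.0  # ~4.20 TFLOPS

# For comparison, today's PS4: 18 CUs at 800 MHz.
ps4_tflops = 18 * 64 * 2 * 0.800 / 1000.0           # ~1.84 TFLOPS

print(f"Neo: {neo_tflops:.2f} TFLOPS ({neo_tflops / ps4_tflops:.2f}x PS4)")
```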
Going back to the memory: I also believe a wider bus makes sense because I otherwise can't explain to myself where the extra memory available for the OS or games would come from (unclear at this point, to me at least, yet the extra memory has to come from somewhere). A 320-bit bus would explain it:
Sony may have hooked the SoC up to 9GB of GDDR5 (mixing chips of different capacities operating at the same speed); in a case akin to what happened with a certain Nvidia GPU (the GTX 970 comes to mind), 512MB may end up slower, and Sony may prefer to hide it away (or keep it for themselves, who knows).
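A quick sanity check on the 320-bit/9GB idea (my numbers; the mix of chip densities is an assumption, and 5.5 Gbps is simply the original PS4's per-pin rate):

```python
# A 320-bit bus with one 32-bit GDDR5 chip per channel = 10 chips.
channels = 320 // 32                 # 10 chips

# 9 GB falls out naturally from mixing densities:
# eight 8Gb (1 GB) chips + two 4Gb (0.5 GB) chips.
capacity_gb = 8 * 1.0 + 2 * 0.5      # 9.0 GB

# Bandwidth at the PS4's 5.5 Gbps per pin, i.e. memory "acting at a
# slower pace" than a narrower bus would need for the same bandwidth.
bandwidth_gbs = 320 / 8 * 5.5        # 220 GB/s (PS4: 256/8 * 5.5 = 176 GB/s)

print(f"{channels} chips, {capacity_gb} GB, {bandwidth_gbs:.0f} GB/s")
```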

So in the end the system could be pretty reasonable in power draw; Sony may have traded power for an increase in silicon area and proper turbo mechanics.
 
I wonder why the Xbox and PS4 still reserve 3GB for OS stuff.
Well, the Xbox at least runs apps alongside a game... but still, 3GB. That is six times the total memory the Xbox 360 had.

Instant switching in and out of a game. Otherwise I don't see the point either.

Some of the popular current set-top media players (Roku 4, Fire TV, 4th-gen Apple TV) generally seem to ship with around 1.5-2 GB of memory. The previous versions of those same devices were working with roughly 512 MB, from what I can tell. I suspect the next generation of those devices will release within the lifespan of the current crop of consoles (the next 1 to 2 years, based on previous releases) and have a similar boost in memory (and likely CPU) capacity.

How those media devices (which do even less multi-tasking than phones) will use that memory, or for that matter how they use their current memory, I'm not sure, as that's well outside my area. But I suspect that as the baseline specs for media devices increase, the developers of the media apps released on those devices will continue to update their apps and take advantage of that additional capacity in some manner (probably not 1 for 1, but more of a slow evolution). As such, simply from the perspective of remaining a viable media playback device (in addition to a games device), ensuring that a healthy chunk of resources stays available to apps running in the app space, even if those resources aren't being used today, seems like a smart move.

I'm going well outside my comfort zone with this next statement, but I get the impression that when it comes to performance bottlenecks on the current consoles, other issues crop up long before memory capacity. If that's the case, I'm not sure there's much impetus to release additional capacity for games from the OS/app reserve (particularly since they can't take that capacity back later).
 
Maybe the higher memory reservation is about offloading more I/O buffers to the south bridge. It's 256MB total right now, which probably includes an OS, leaving very little for I/O buffering. That chip could easily be swapped for a drop-in 1GB part, and if 512MB of main memory is currently used for I/O buffers, those could easily be moved over. The speed requirements there are really low.

I was thinking that even if they freed up an additional 512MB of main memory, the reservation could be left the same on PS4 on purpose. With a game tested extensively on PS4 and then run at a higher target resolution on PS4K, that additional 512MB would remove the possibility of going OOM because of the bigger frame buffers, plus the buffers needed for 4K upscaling. Otherwise devs would have to cripple the PS4 version anyway. One less thing to think about.
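For a rough sense of scale (my own sizing sketch; the formats and target counts are assumptions), a healthy set of full-resolution 4K buffers fits comfortably inside 512MB:

```python
# Rough render-target sizing at 4K, assuming 4 bytes per pixel
# (e.g. RGBA8 color or a 32-bit depth/stencil format).
width, height, bytes_per_pixel = 3840, 2160, 4
mb_per_target = width * height * bytes_per_pixel / 1024**2   # ~31.6 MB

# Hypothetical budget: double-buffered output (2), depth (1),
# a small G-buffer (4), and an intermediate upscaling buffer (1).
target_count = 2 + 1 + 4 + 1
total_mb = target_count * mb_per_target                      # ~253 MB

print(f"{mb_per_target:.1f} MB per target, {total_mb:.0f} MB total")
```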

I still don't understand what they are doing with so much memory reserved for the OS. It can only be buffers/cache for UI responsiveness, or the unpredictability of web browser memory usage.
 
It could be, though. While the inner architecture is anyone's guess, I actually like those two ideas a lot (removing the fat from my previous post): 28nm and a wider bus, along with a slightly different memory arrangement.
28nm is the single best explanation for how Sony could have the system up and running this fall, when we may just be seeing the first 14/16nm GPUs around that time, and there is even more mystery on the CPU side of things. I mean, if AMD pushed 14/16nm CPUs to consoles before anywhere else, I could see investors left in disbelief; as it is, AMD is out of the picture for any low-power type of device. While Puma+ may not be the greatest thing around, it is far from sucking; Intel's process advantage is killing it. At 14/16nm those cores would make quite amazing tiny things. Intel is soloing that segment, selling dual cores with pretty lame GPU performance because it is alone there; power-wise, the cat APUs are in the same bracket as Intel's main Core line :(
 
I think that's impossible. They're not going to release a 200-250W console.
Well, who knows? It is more likely to me than cat cores at 14/16nm. North of 200 watts is manageable for a system that may sell at an unusual premium for the console realm. A huge yet relatively slow chip (compared to PC parts) should definitely be coolable, and without sounding like a full-on vacuum cleaner. It is just a matter of putting the extra money in.
Sorry to bring business considerations into the discussion, but they are needed: Morpheus will cost around the same as the PS4, IIRC, while competing headsets start way above the combined price of Morpheus + PS4. IF the Neo + Morpheus experience is actually really good, Sony can ask for good money; to me, $999 is not insane. With that money you can buy a better cooler and also design in a bigger internal PSU.

NB: I'm not a believer in VR myself, at least not in the living room; I actually like AR, or a VR/AR blend, a lot more.
 
Polaris is expected soon; it's not crazy to think the PS4K APU could be ready for an October launch on 14nm.

If the decision to make this console was taken a long time ago, AMD could have done the work on the PS4K APU in parallel without a huge amount of engineering effort. It could be practically the same part as whatever AMD was planning for their own PC APU with GDDR5 support, because their respective needs are practically identical. (This is not the case with the XB1, because of the ESRAM, the "Amiga blitter" move engines, and the audio processors.)
 
Doubt we'll see it, but it would be funny to see a 450~500 mm^2 28nm chip. And I suppose 28nm should be cheap and low-defect by now...
 
There is a lot of wishful thinking going on; we have yet to hear or read anything from AMD saying that their synthesizable designs are available below 28nm, and the same applies to their high-density libraries. Those are not small things you hide away from investors; ARM is pretty clear about which IP is available as hard IP at which node.
I see NO reason to assume that the cat cores are available at the node Polaris should use.
Another thing we have yet to see is AMD making an APU that uses its latest GPU IP anywhere close to the release of that same IP. I think they can't, most likely due to a blend of their internal processes and manpower.

My best bet, until serious proof shows up, is a big chip.
 
Cat cores?

Edit: Ah.
 

Maybe it's just that 5GB is all that's needed for games on a console with less than 2 TFLOPS, so it's better to keep it in reserve for the console upgrades they had planned.
 

Interestingly, and potentially in support of the 28nm idea, mobile chip makers have long since shifted to 20 and 16nm, and AMD and Nvidia will begin shifting to 16/14nm this summer for their mainstream GPUs, with their big chips following early next year. This means that TSMC will soon have acres of spare 28nm capacity and be in need of customers. There are likely to be some good discounts available for a high-volume chip consumer such as an AMD/Sony partnership, and the option of 14nm at GF might make quite a bargaining chip...

On top of that, the cost of moving a design to 16/14nm is likely to be very high, at least in the short run:

http://semiengineering.com/finfet-rollout-slower-than-expected/

When AMD has 28nm libraries for everything you need, and you know yields will be excellent, perhaps there is a case for staying on a big old node for a while!
 

The PC would suggest that 2TF is far from enough to make the most of 5GB of RAM! :eek:
 
There's no such thing as a RAM maximum for performance. You can often trade processing solutions for memory solutions: e.g. calculate dynamic LODs with processing, or store 100 different precalculated LOD models and just fetch the required one; calculate procedural materials in real time, or calculate them once and store the result.
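A toy sketch of that compute-for-memory trade (names and the stand-in "procedural" function are hypothetical, just to illustrate the idea):

```python
# Two ways to get the same "procedural" value: burn ALU every call,
# or burn memory once and fetch.
import math

def procedural_noise(x: int, y: int) -> float:
    """Compute-on-demand: costs math every single call."""
    return (math.sin(x * 12.9898 + y * 78.233) * 43758.5453) % 1.0

# Precompute once into a 256x256 table: ~512 KB as raw doubles buys
# back all of that per-call computation.
TABLE = [[procedural_noise(x, y) for y in range(256)] for x in range(256)]

def cached_noise(x: int, y: int) -> float:
    """Fetch-from-memory: one table lookup, zero recompute."""
    return TABLE[x % 256][y % 256]

# Same result either way; only the compute/memory balance differs.
assert procedural_noise(3, 7) == cached_noise(3, 7)
```

The memory cost scales with how much you precompute, which is why extra RAM keeps being useful even when the GPU is the bottleneck.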
 
Isn't one of the rumours concerning the PS4 Neo that the whole machine fits in a smaller case? From what I have read, Neo is planned to come in a smaller box than the original PS4.
 

I haven't inferred that from the rumors. I think most people expect a PS4 Slim and a PS4K.
 