PlayStation 5 [PS5] [Release November 12, 2020]

I am technically illiterate regarding these things, so what does all this mean in practice?
Is the PS5 bottlenecked somewhere because they removed something?
What is it they removed, and how does it fare next to the competition?

If they removed it, there is a good reason: if it were mandatory for games, they would not have removed it. Don't forget a console is tailored for games; a PC needs to be performant in every situation.
 
The removal of certain features/logic is a cost/performance/heat/die-space compromise. It's all about (the right) compromises, nothing new. 'Zen 3 features', Infinity Cache etc. were the secret sauce of this generation to a very small group of people until yesterday.
Aside from that, no one cares about all of this when investing in a 400/500 dollar machine. It's interesting to discuss the APU die pictures on a technical forum, though.

So, I take it that the full AVX1/2 instruction sets are there? In the world of PC gaming, I think AVX(2) did start to see usage; BFV comes to mind. Haswell and later for Intel do support those and, if memory serves correctly, it resulted in a small performance increase (a bit above 5% in framerates) for BFV.

To add to the above, games supporting DX12 utilize AVX/AVX2 (BFV for example). On an OC'ed Intel CPU, enabling AVX2 does induce higher temperatures and load.
With DX12 supporting AVX2, I doubt Sony would omit the instruction set given the lifespan of a console.
 
Jaguar CPUs have FADD, right? Or was that removed on PS4's APU? Did no PS4 software use it? Not even apps for streaming video or content like that?
 
So, I take it that the full AVX1/2 instruction sets are there? In the world of PC gaming, I think AVX(2) did start to see usage; BFV comes to mind. Haswell and later for Intel do support those and, if memory serves correctly, it resulted in a small performance increase (a bit above 5% in framerates) for BFV.

To add to the above, games supporting DX12 utilize AVX/AVX2 (BFV for example). On an OC'ed Intel CPU, enabling AVX2 does induce higher temperatures and load.
With DX12 supporting AVX2, I doubt Sony would omit the instruction set given the lifespan of a console.
PS5 is especially challenging because the CPU supports native 256-bit instructions that consume a lot of power.
 
A comment from sebbbi about the PS5 die:

Slightly off topic, but as someone who works on hardware performance, I love looking at profile diagrams like this.

My question would be: what can we do to increase hardware utilization in the cases where it looks to be under 75%? Both of the new combined stages don't seem to tax the hardware to 100% other than in short peaks. Is this an algorithmic limit, or some hardware limitation that leaves theoretical performance on the table?

The RT-related blocks, on the other hand, do seem to indicate that there would be a benefit from additional hardware. However, those RT blocks account for only ~30% of the new frame, and, for example, doubling the hardware resources to cut that time in half (theoretically) wouldn't be a good trade-off: 2x hardware for maybe a 15% overall gain. Simplified assumptions, I know. But I think to get RT running well, eventually some new hardware techniques are going to be needed.
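For reference, the trade-off eyeballed above is just Amdahl's law. A minimal sketch of the arithmetic (my own illustration, not from the post; the 30% RT share and the 2x RT speedup are the post's assumptions):

#include <cstdio>

// Amdahl's law: overall speedup when `fraction` of the frame time
// is accelerated by `speedup` and the rest is unchanged.
double amdahl(double fraction, double speedup) {
    return 1.0 / ((1.0 - fraction) + fraction / speedup);
}

int main() {
    // RT blocks occupy ~30% of the frame; doubling the RT hardware
    // (at best) makes that portion 2x faster.
    std::printf("overall speedup: %.2fx\n", amdahl(0.30, 2.0));
    // Prints ~1.18x, i.e. roughly the ~15% overall gain mentioned above.
    return 0;
}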
 
So, the first die shots of the PS5 APU are available:
Fritzchens Fritz on Twitter: "A first quick and dirty die-shot of the PS5 APU (better SWIR image will follow). It looks like some Zen 2 FPU parts are missing. https://t.co/PefXCxc3G1"

Maybe someone can point out which part is the GPU, the CPU, etc.

Edit:
Seems like the caches are still split, so no Infinity Cache (never really expected that).

That's highly surprising given a couple of certain YouTubers kept hammering home about a unified L3$ xD. Same with Infinity Cache. Guess we can finally mark those two off the list.

Eyeballing it, it looks like 2 x 4 MB L3 for the CPU, so the same size as XSX / Renoir.

Surprise to no one, I'm sure.

Also lmao Red Gaming Tech.

Also Moore's Law Is Dead. MLID in particular was REALLLLY pushing a unified L3$ on the CPU and specifically Infinity Cache, at least a few times last year. Always found it odd how the people pushing this stuff (or, let's also say, specific customizations to the Geometry Engine, which I'm still not necessarily doubting have been made) could never go into more specifics, but just kept saying Sony were under NDA with AMD not to talk about it, or that negative feedback from Road to PS5 convinced them to keep quiet on technical specifics.

I don't think a few kneejerk 'lolz gahmerz' chat comments would be enough to sway their course, considering the vast majority liked the Road to PS5 presentation (myself included).
 
Always found it odd how the people pushing this stuff (or, let's also say, specific customizations to the Geometry Engine, which I'm still not necessarily doubting have been made) could never go into more specifics, but just kept saying Sony were under NDA with AMD not to talk about it, or that negative feedback from Road to PS5 convinced them to keep quiet on technical specifics.
More views result in more money. PS is a gigantic brand, and the PS5 topic gave them lots of exposure beyond their usual audience.
 
I get what you're saying, but isn't this idea a little insane? It makes no sense: AVX is useful, and I don't believe the work to remove instructions from the FPU's transistors would be worth it.
Also, Zen 2's energy consumption on 7nm shouldn't be a problem for high-performing hardware.



Or maybe Fritz was drunk and shot the wrong die?
Man... the fanboy conspiracy theorists will have a field day with this. I can already hear them claiming PS5 not only uses RDNA1 for its GPU, but that the CPU was now confirmed to be Zen 1.
No idea how they can justify the thing performing almost the same, and sometimes better, than the Series X, however...

I think, at the very least, this might take off the table all the other explanations that had to rely on phantom tech customizations not already mentioned by Cerny, like the cache scrubbers or the Cache Coherency Engines in the I/O block. THOSE things could still be aiding in relative performance parity between the platforms, and you don't need to jump to "secret sauce" like IC or a unified L3$ when the evidence now seems to disprove either of those existing in the design whatsoever.

There are other factors that could logically explain the relative performance parity. One is a better I/O subsystem; this could particularly be true if parts of Xbox's Velocity Architecture are not readily available yet (DirectStorage, for example, still isn't available on PC; the closest thing is Nvidia's offshoot, GPUDirect Storage, and even that doesn't seem to be fully ready yet). Another is different teams handling different versions of a game: the "A" team could be handling the PS5 version while the "B" team handles the Series versions, for example. I remember this being a regular thing during the SNES/Mega Drive and later PS1/Saturn/N64 eras, and I suspect it still occurs, though likely with better parity between the teams handling the different versions. There's also PS5's devkit tools being easier to work with (all word says they're basically supercharged PS4 dev tools), and yeah, even the pseudo-meme of Series dev tools coming in hot (again, some parts, like DirectStorage, aren't even available for use yet; that's all Microsoft's "fault", but still...).

Those seem to be the best explanations going forward, IMHO, and realistically it can be a mix of any of them. A fourth potential explanation would be the segmented memory in the Series systems having a higher-than-desired bandwidth penalty when data spills out of the GPU-optimized pool, but a few people on the board here have already gone into that, and it seems to be an issue with the GDK still being ironed out. I forget the specifics, but they covered the possible issues pretty well. Suffice to say, if it's mainly a software issue, then it's temporary and should be fixed sooner rather than later. As for the new feature updates Microsoft will talk about later this month, I imagine at least some of them would not be worth discussing at this point if the things that could impact their usage within the GDK environment (such as the aforementioned memory allocation between the pools) weren't at least on track to be resolved internally through updates.

More views result in more money. PS is a gigantic brand, and the PS5 topic gave them lots of exposure beyond their usual audience.

Definitely true. Gonna be fun to see how they pivot this to keep things flowing as they are. MLID had some topic on his whiteboard called "AMD vs Microsoft" and said he was waiting until 3P perf results from other games came out before committing to it later in the year.

Which could really just be them covering their bases with some supposed evidence for whatever topic they want to push as real, though it could very well be conflation on his end, given that these PS5 die shots now prove the system doesn't contain what he claimed, with "insider knowledge", was there.

It's all entertaining to watch for me, I guess, and given the amount of stuff online that's actually toxic, this type of thing is essentially nothing. At the very least, it lets us see how people can come up with semi-convincing narratives to drive engagement, even if they're only based on partial truths (or fibbing through obfuscation/absence of details) xD
 
That's highly surprising given a couple of certain YouTubers kept hammering home about a unified L3$ xD. Same with Infinity Cache. Guess we can finally mark those two off the list.

Also Moore's Law Is Dead. MLID in particular was REALLLLY pushing a unified L3$ on the CPU and specifically Infinity Cache, at least a few times last year. Always found it odd how the people pushing this stuff (or, let's also say, specific customizations to the Geometry Engine, which I'm still not necessarily doubting have been made) could never go into more specifics, but just kept saying Sony were under NDA with AMD not to talk about it, or that negative feedback from Road to PS5 convinced them to keep quiet on technical specifics.

I don't think a few kneejerk 'lolz gahmerz' chat comments would be enough to sway their course, considering the vast majority liked the Road to PS5 presentation (myself included).

I said it here: I didn't like it when they were talking about secret sauce without actually saying anything. When they talked about the unified cache, it was possible to find out the truth. I don't watch RedGamingTech much, but I will watch the next video. It will be funny... :mrgreen:
 
THOSE things could still be aiding in relative performance parity between the platforms

There's a reason DF uses PC GPUs to test the relative performance of the consoles. And with that, the PS5 performs where it should overall: a bit better than an RX 5700 XT, which, on pure paper specs, it should. This means there's no secret sauce or anything boosting its rendering performance.
 
Damn, imagine how big the PS5 would be if it had 256, when it's already that big with 128 :runaway::mrgreen:

SpaceX's next project. Will it fly? Find out in five years!

I said it here: I didn't like it when they were talking about secret sauce without actually saying anything. When they talked about the unified cache, it was possible to find out the truth. I don't watch RedGamingTech much, but I will watch the next video. It will be funny... :mrgreen:

Exactly. Some of these people just got drunk on exoticism because, to be fair, Sony does have a history of exotic customizations in their systems. Even the PS5, I guess, has at least a couple (there are no other GPU designs in the consumer entertainment space with cache scrubbers, IIRC), but they certainly aren't as prolific as in systems past (nor do they need to be).

What's more, we already heard all of their customizations back in March. Even the things Cerny discussed, like future PC GPUs having hardware implementations bearing the fruit of their partnership with AMD, we now see with the SmartShift and SAM designs for AMD RDNA 2 GPUs and Zen 3 CPUs on compatible motherboards, mimicking PS5's flexible shared power delivery and memory-access parameters (the latter, though, more of a thing to get around PCs being NUMA designs).

But just watch, we will see more contortions and twists coming from quite a lot of people on ways to spin these x-rays. It will be hilarious.

There's a reason DF uses PC GPUs to test the relative performance of the consoles. And with that, the PS5 performs where it should overall: a bit better than an RX 5700 XT, which, on pure paper specs, it should. This means there's no secret sauce or anything boosting its rendering performance.

Pretty much. And since console games are optimized much more than on PC, we should see the PS5 (and Series X/S, for that matter) performing well even against higher-tier low-mid/mid-range PC GPUs, at least those releasing within the next year or so.

Having millions of consoles out there all at the same guaranteed spec provides incentives for optimization that PC simply can't match (though again, PC doesn't need to; they can just keep releasing more advanced cards that outstrip anything the consoles can do, as tends to happen).
 
So for the

10 Print "Hello World!"
20 Goto 10

crowd, what does it mean, for practical purposes, that they cut the FADD unit?

I am technically illiterate regarding these things, so what does all this mean in practice?
Is the PS5 bottlenecked somewhere because they removed something?
What is it they removed, and how does it fare next to the competition?

So FADD was removed as it would not really be used by games? But it is something that might be used in encoding, e.g. compression and decompression. So it was redundant anyway due to the Kraken block?

FADD is one of the most fundamental operations and is used very widely in gaming. The PS5 will absolutely need to be able to do it.

However, the reason people are speculating that the FADD unit was cut from the PS5 is that the Zen 2 core actually has two different places that can do FADD. The FMA pipes can do FADD with a throughput of 2 per clock and a latency of 5 cycles, but in addition to this there are dedicated FADD pipes that can also do FADD with a throughput of 2 per clock, but with a latency of 3 cycles.

That is, FADD is in general such an important instruction that they added completely separate execution units just to cut 2 cycles of latency from it. It would appear that Sony felt this was a waste and just has the FMA units calculate those adds instead.

This is definitely a downgrade, but it might be a very small one.
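To make the latency point concrete, here's a minimal sketch (my own, not from the post) of where the extra 2 cycles would and wouldn't show up, assuming the latency figures quoted above:

#include <cstddef>

// Latency-bound: every add depends on the previous one, so the loop runs
// at roughly one add per FADD latency (3 cycles on stock Zen 2, ~5 if the
// adds go through the FMA pipes instead, as speculated for PS5).
float sum_serial(const float* x, std::size_t n) {
    float acc = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        acc += x[i];  // dependency chain through acc
    return acc;
}

// Throughput-bound: four independent accumulators hide the add latency,
// so the loop is limited by issue width (2 adds/clock either way) and
// would be largely unaffected by the missing FADD pipes.
float sum_unrolled(const float* x, std::size_t n) {
    float a0 = 0.0f, a1 = 0.0f, a2 = 0.0f, a3 = 0.0f;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        a0 += x[i];
        a1 += x[i + 1];
        a2 += x[i + 2];
        a3 += x[i + 3];
    }
    for (; i < n; ++i)
        a0 += x[i];
    return (a0 + a1) + (a2 + a3);
}

If the speculation is right, code dominated by long dependent FP chains would feel the hit, while well-optimized, throughput-bound code mostly wouldn't.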
 
It's all guesswork for now, right? The layout is different from Zen/Zen 2, and some stuff seems missing... For all we know, the "blocks" marked as similar aren't similar at a low level...

Anyway, we all know the secret sauce is an Emotion Engine and a Cell, because PS5 = 2 + 3. Think people, think.
 
It's all guesswork for now, right? The layout is different from Zen/Zen 2, and some stuff seems missing... For all we know, the "blocks" marked as similar aren't similar at a low level...

Anyway, we all know the secret sauce is an Emotion Engine and a Cell, because PS5 = 2 + 3. Think people, think.

Guess we need an x-ray of the power supply now, too :S

I only pray that PlayStation fans don’t have to live through someone declaring a 3D stacked L3$

I think they should be spared of that, but then again it is Youtube :p

FADD is one of the most fundamental operations and is used very widely in gaming. The PS5 will absolutely need to be able to do it.

However, the reason people are speculating that the FADD unit was cut from the PS5 is that the Zen 2 core actually has two different places that can do FADD. The FMA pipes can do FADD with a throughput of 2 per clock and a latency of 5 cycles, but in addition to this there are dedicated FADD pipes that can also do FADD with a throughput of 2 per clock, but with a latency of 3 cycles.

That is, FADD is in general such an important instruction that they added completely separate execution units just to cut 2 cycles of latency from it. It would appear that Sony felt this was a waste and just has the FMA units calculate those adds instead.

This is definitely a downgrade, but it might be a very small one.

How big would the hit from lacking the additional FADD execution units be, would you guess? And does it have a perceptible impact on AVX-256 instructions? (I've seen some people discussing it earlier in the thread; I don't know a lot about AVX-256 instructions beyond them being "particularly taxing", as Cerny alluded to. But maybe that was just in reference to their own design, due to the removal of these FADD units?)
 
How big would the hit from lacking the additional FADD execution units be, would you guess? And does it have a perceptible impact on AVX-256 instructions? (I've seen some people discussing it earlier in the thread; I don't know a lot about AVX-256 instructions beyond them being "particularly taxing", as Cerny alluded to. But maybe that was just in reference to their own design, due to the removal of these FADD units?)
As you need to do processing for more and more objects, not using vectorized instructions starts to slow you down dramatically. If you need to check whether the player is colliding with one of thousands of possible collision entities for game-code logic to happen, 256-bit math allows you to stuff many more objects into a single calculation. So if the position value for each object is 16 bits, then you can calculate the collisions for 16 objects in one go. Provided the number of objects you are working with is large, you're going to get an advantage using the AVX instructions over iterating the array normally (a minimal sketch of the idea is below).

This is my general understanding from what I've read, but I don't know how often it's actually used.
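A minimal sketch of that idea (my own illustration; the circle-vs-points test, the SoA layout, and all the names here are assumptions, not anything from the thread). It uses 32-bit floats, so a 256-bit AVX register tests 8 objects per iteration rather than 16:

#include <immintrin.h>
#include <cstddef>

// xs/ys: object positions in struct-of-arrays layout, n a multiple of 8.
// Returns the index of the first object within `radius` of the player at
// (px, py), or -1 if none. Requires AVX; __builtin_ctz is GCC/Clang.
int first_collision_avx(const float* xs, const float* ys, std::size_t n,
                        float px, float py, float radius) {
    const __m256 vpx = _mm256_set1_ps(px);
    const __m256 vpy = _mm256_set1_ps(py);
    const __m256 vr2 = _mm256_set1_ps(radius * radius);
    for (std::size_t i = 0; i < n; i += 8) {
        __m256 dx = _mm256_sub_ps(_mm256_loadu_ps(xs + i), vpx);
        __m256 dy = _mm256_sub_ps(_mm256_loadu_ps(ys + i), vpy);
        // Squared distance for 8 objects at once (an FMA could fuse this).
        __m256 d2 = _mm256_add_ps(_mm256_mul_ps(dx, dx),
                                  _mm256_mul_ps(dy, dy));
        // One sign bit per lane: set if that object is inside the radius.
        int mask = _mm256_movemask_ps(_mm256_cmp_ps(d2, vr2, _CMP_LT_OQ));
        if (mask)
            return static_cast<int>(i) + __builtin_ctz(mask);
    }
    return -1;
}

The scalar equivalent tests one object per iteration; this is exactly the kind of wide 256-bit floating-point work the power comments earlier in the thread are about.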
 
If they removed it, there is a good reason: if it were mandatory for games, they would not have removed it. Don't forget a console is tailored for games; a PC needs to be performant in every situation.
It depends on a lot of factors: cost, thermals, production deadlines, form factor. So if the pressure is there, I wouldn't be surprised if they made some compromises we wouldn't expect.
 