> Do AMD laptop Zen 3 CPUs get cut down like that too? Seeing as their power draw is lower, and there are cooling and space savings, etc.

The APU reduces the amount of L3, which is a common sacrifice for mobile.
> There aren't that many benchmarks of games using AVX vs SSE that you can find, but here's one. This is on an 8700K. The difference in current games is small or negligible.
> SSE vs AVX how much performance different ? Grid 2 and Project Cars 2 1080p low test - YouTube

I'm not sure that provides a comparable scenario to what the benchmarking for the 4700S showed, which makes it a standard Zen 2 vs PS5 comparison, where the PS5 side has its SSE throughput cut in half as well.
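For reference, what an SSE-vs-AVX comparison like the one in that video is varying is just the vector width per instruction: 4 floats at a time versus 8. A minimal sketch in C intrinsics, assuming an AVX-capable CPU and a compiler flag like `gcc -O2 -mavx`:

```c
/* Same arithmetic done at SSE width (128-bit) and AVX width (256-bit).
 * Illustrative sketch only; requires an AVX-capable CPU. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float in[8]  = {1, 2, 3, 4, 5, 6, 7, 8};
    float out[8];

    /* SSE: two 128-bit operations to cover 8 floats */
    __m128 lo = _mm_loadu_ps(in), hi = _mm_loadu_ps(in + 4);
    _mm_storeu_ps(out,     _mm_mul_ps(lo, lo));
    _mm_storeu_ps(out + 4, _mm_mul_ps(hi, hi));

    /* AVX: one 256-bit operation covers all 8 -- double the width per
     * instruction, which is where the theoretical speedup comes from */
    __m256 v = _mm256_loadu_ps(in);
    _mm256_storeu_ps(out, _mm256_mul_ps(v, v));

    printf("out[7] = %g\n", out[7]);  /* 8 * 8 = 64 */
    return 0;
}
```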
> One thing that did come up in the review is that there's an apparent drop in FPU ports:

Wait what?
> Does that bear out in the games we have seen on the system?

No game we've seen released on console has switched to AVX yet. It may or may not ever happen this generation.
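For context on what "switching to AVX" usually means in practice: the SIMD baseline is typically picked at build time, much like a PC compiler's architecture flag, and consoles target fixed hardware. A hedged sketch below; the flags are GCC/Clang's real options, but the loop is a stock SAXPY example, not code from any shipped game:

```c
/* The same C code compiled for different SIMD baselines; the compiler
 * widens the vectorized loop accordingly. Illustrative commands:
 *   gcc -O3 -msse2 saxpy.c   -> 128-bit (4-wide float) vector loop
 *   gcc -O3 -mavx2 saxpy.c   -> 256-bit (8-wide float) vector loop */
#include <stdio.h>

#define N 1024

/* A classic auto-vectorizable loop (SAXPY): y = a*x + y */
void saxpy(float a, const float *x, float *y) {
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }
    saxpy(2.0f, x, y);
    printf("y[10] = %g\n", y[10]);  /* 2*10 + 1 = 21 */
    return 0;
}
```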
> Does that bear out in the games we have seen on the system?

Most games can exceed 60fps on a 3+GHz 8-core Bulldozer, and they wouldn't have been as well optimized as the console releases. Also, just because they have the same theoretical peak performance doesn't mean that the Zen 2 FPU isn't more efficient in practice. So, I don't know if it's borne out in games yet.
> Not many next-gen/new features are being used really extensively as of yet, be it advanced CPU features or things like the fast SSD, RT, mesh shaders, VRS, etc. At this start of the generation, where most if not all cross-platform games are cross-generation, we have a hard time judging each system's true capabilities.
> Furthermore, the PS5 CPU might be weaker than standard Zen 2 CPUs, but it's a very healthy increase over the tablet Jaguar CPUs that never should have happened.

In 2013, at the price point and power budget both consoles needed to hit, it would have been an 8-core Jaguar, a quad-core Atom (without hyperthreading), or a dual-core i3. Also, I'm unaware of any tablet running a Jaguar CPU with more than 2 cores or more than 1GHz. They never even made 8-core laptop APUs. If the enhanced consoles have proven anything, it's that most games of their generation are still held back by the GPU.
> Wait, does that mean PS5's CPU core has the same theoretical floating point IPC as PS4's Jaguar cores? Back when Series X was announced they talked about the CPU being 4x faster than Xbox One: 2x the IPC and 2x the clocks. Obviously real performance would be better on PS5 vs PS4 because the clocks are higher, but still.

I believe the average FP IPC would still be higher, but at peak, yes.
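To make the "same at peak, higher on average" reading concrete: Jaguar is commonly described as having two 128-bit FP pipes, and Zen 2 four 256-bit-capable ones. The halved PS5 figure below is only this thread's hypothesis from the 4700S tests, not a confirmed spec:

```c
/* The "same at peak" arithmetic made explicit. Jaguar and Zen 2 port
 * counts are the commonly published figures; the PS5 value is the
 * thread's halving hypothesis, labeled as such. */
#include <stdio.h>

int main(void) {
    int jaguar_fp_pipes  = 2;  /* Jaguar: two 128-bit FP pipes          */
    int zen2_fp_pipes    = 4;  /* Zen 2: four FP pipes, 256-bit capable */
    int ps5_fp_pipes_hyp = zen2_fp_pipes / 2;  /* hypothetical halving  */

    printf("peak FP instructions/cycle: Jaguar %d, Zen 2 %d, PS5(?) %d\n",
           jaguar_fp_pipes, zen2_fp_pipes, ps5_fp_pipes_hyp);
    /* Peak would match Jaguar, but Zen 2's wider OoO core should still
     * keep the *average* FP IPC higher, which is the distinction above. */
    return 0;
}
```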
That means Sony certainly removed a feature not used in the console space for games; that's why it's not missed and doesn't show a difference against the XSX.

Maybe MS will leverage some features in its exclusives.
> That means Sony certainly removed a feature not used in the console space for games; that's why it's not missed and doesn't show a difference against the XSX.
> Maybe MS will leverage some features in its exclusives.

There's no reason it should be useless to games. That video above shows a ~10% difference when doubling the SIMD width, at least for *some* games that make use of AVX.

Barring some Sony-specific reasoning, cutting the FPU doesn't have any clear justification. The laptop chips are not performance-consistent, they aren't in a super-constrained power range, and it's a tweak of the DVFS to adjust the ceiling if need be.

I'm still not confident that what Sony did was really necessary, given the number of other adjustments that can be made without potentially paying for a revised CPU that loses capabilities rather than gains them.
I'm pretty sure SSE has been useful for games for many years now. There are lots of Vector3 and Vector4 operations that you do on the CPU even for relatively simple 3D games, especially if you use physics. Unreal offers you types that use various vector optimisations in the background, so bums like me don't have to bother remembering maths from 20 years ago, or learn intrinsics.
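As a concrete example of the sort of Vector4 work those engine types wrap, here's a minimal SSE sketch. The vec4_add/vec4_dot names are made up for illustration and are not Unreal's API:

```c
/* Minimal Vector4 math on SSE, the kind of thing engine SIMD types hide.
 * Compile with e.g. gcc -O2 -msse4.1 vec4.c (x86-64). */
#include <smmintrin.h>  /* SSE4.1 for _mm_dp_ps */
#include <stdio.h>

typedef __m128 vec4;  /* four packed floats: x, y, z, w */

static vec4 vec4_add(vec4 a, vec4 b) { return _mm_add_ps(a, b); }

static float vec4_dot(vec4 a, vec4 b) {
    /* mask 0xF1: multiply all four lanes, sum, write result to lane 0 */
    return _mm_cvtss_f32(_mm_dp_ps(a, b, 0xF1));
}

int main(void) {
    vec4 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);  /* set order is w,z,y,x */
    vec4 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);
    float sum[4];
    _mm_storeu_ps(sum, vec4_add(a, b));
    printf("add = (%g, %g, %g, %g), dot = %g\n",
           sum[0], sum[1], sum[2], sum[3], vec4_dot(a, b));  /* dot = 70 */
    return 0;
}
```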
SSE has been used in games for the past 20 or so years, but OK. Anyway, it was either the cut-down SSE/AVX performance or much lower clocks, and Sony probably made the right decision to go with a 3.5GHz max clock instead of not even 3GHz for when the AVX/SSE units are steaming at max usage.
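The clocks-vs-width tradeoff is easy to put rough numbers on. A toy model only: full width is assumed to double peak vector throughput per cycle, and 3.0GHz stands in for the hypothetical lower clock; none of this is measured PS5 behavior:

```c
/* Toy model of the clock-vs-width tradeoff described above.
 * All figures are illustrative assumptions, not PS5 measurements. */
#include <stdio.h>

int main(void) {
    double half_width_ghz = 3.5, full_width_ghz = 3.0;

    /* relative peak vector throughput: width factor x clock */
    double half_width_simd = 1.0 * half_width_ghz;  /* = 3.5 */
    double full_width_simd = 2.0 * full_width_ghz;  /* = 6.0 */

    /* scalar/integer-heavy code scales with clock alone */
    printf("SIMD-bound  : half-width %.1f vs full-width %.1f\n",
           half_width_simd, full_width_simd);
    printf("clock-bound : half-width %.1f vs full-width %.1f\n",
           half_width_ghz, full_width_ghz);
    return 0;
}
```

Under that toy model the full-width option only wins when code is genuinely SIMD-throughput-bound, which loops back to the earlier point about games not leaning on AVX yet.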
I'm just thinking that perhaps there's something more fundamental going on in how the PS5 is forced to operate.
> Wait what?

The mix tested points to a noticeable drop in peak IPC. There could be mixture-dependent differences, since it's not clear whether the tests shown indicate more or less contention with non-arithmetic ops, things like moves and shuffles.
If that's the case, it means that the IPC of floating point instructions on PS5's SoC in general is lower than a standard Zen 2's, not just for AVX256 instructions. (Not necessarily half the IPC, since some code doesn't exhibit much inherent ILP, but lower.)
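The ILP caveat can be demonstrated directly: a single dependent chain runs at instruction latency no matter how many FP pipes the core has, so only code with enough independent work would feel a halved FPU. A rough, compiler-dependent sketch:

```c
/* Why low-ILP code wouldn't see the full effect of halved FP throughput.
 * Illustrative only; timings vary by CPU and compiler. Build: gcc -O2 ilp.c */
#include <stdio.h>
#include <time.h>

#define N 200000000L

int main(void) {
    volatile float seed = 1.000001f;  /* volatile blocks constant folding */
    float x = seed;

    /* One dependent chain: each multiply waits on the previous result,
     * so extra FP ports sit idle either way. */
    clock_t t0 = clock();
    float a = x;
    for (long i = 0; i < N; i++) a = a * 0.9999999f;
    clock_t t1 = clock();

    /* Four independent chains: enough ILP to keep multiple pipes busy,
     * so this is where a halved FPU could actually show up. */
    float b0 = x, b1 = x, b2 = x, b3 = x;
    for (long i = 0; i < N; i += 4) {
        b0 *= 0.9999999f; b1 *= 0.9999998f;
        b2 *= 0.9999997f; b3 *= 0.9999996f;
    }
    clock_t t2 = clock();

    printf("1 chain: %.2fs  4 chains: %.2fs  (%g %g)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, a, b0 + b1 + b2 + b3);
    return 0;
}
```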
A reminder that since AMD64, SSE is the floating point path, as x87 is deprecated.
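To illustrate: the function below uses no intrinsics at all, yet an x86-64 compiler will typically emit scalar SSE instructions (such as mulsd/subsd) for it, which is why any cut to the SSE units touches ordinary floating-point code too, not just hand-vectorized paths. The damage() function is an invented example:

```c
/* Plain scalar C floating point, no vectors. Inspecting the output of
 * `gcc -O2 -S scalar.c` on x86-64 typically shows mulsd/subsd (scalar SSE);
 * exact codegen depends on the compiler and version. */
#include <stdio.h>

static double damage(double base, double multiplier, double armor) {
    return base * multiplier - armor;  /* scalar SSE on x86-64, not x87 */
}

int main(void) {
    printf("%f\n", damage(100.0, 1.5, 20.0));  /* 130.0 */
    return 0;
}
```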
> No game we've seen released on console has switched to AVX yet. It may or may not ever happen this generation.

Some of the CPU-limited frame times we've seen, where people chalked up a minor deficit for the PS5, might have been worsened by this. Since clock speed shouldn't give ideal linear scaling, perhaps the results where it seemed like there was such scaling are evidence of an FPU contribution.
And wrt the FP performance, it's probably negligible unless you're benchmarking 120fps performance across a lot of titles. A real bottleneck would have cropped up by now, I think, if it were an issue at 60fps or below.
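The framerate point is just budget arithmetic: a fixed per-frame FP cost doubles its share of the budget going from 60fps to 120fps. The 0.5ms figure below is invented purely for illustration:

```c
/* Rough frame-budget arithmetic behind the 120fps remark.
 * The 0.5 ms of FP-bound work per frame is a made-up figure. */
#include <stdio.h>

int main(void) {
    double budget60  = 1000.0 / 60.0;    /* 16.67 ms per frame */
    double budget120 = 1000.0 / 120.0;   /*  8.33 ms per frame */
    double fp_cost   = 0.5;              /* hypothetical extra FP time, ms */

    printf("at 60fps : %.1f%% of the frame budget\n", 100.0 * fp_cost / budget60);
    printf("at 120fps: %.1f%% of the frame budget\n", 100.0 * fp_cost / budget120);
    return 0;  /* prints 3.0% vs 6.0%: the same cost, twice the share */
}
```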
> Could it possibly be related to hardware-level backwards compatibility? Something related to vector instructions per cycle?

I feel like this isn't a useful angle, given that the comparison is between two OoO superscalar processors, so which ports get hit in which cycle is already variable. Since the PS5 is generally not down-clocking to equivalent speeds, the raw performance aspect doesn't seem to be an issue presently.
It would seem really odd, but could there be some part of PS4's software ecosystem that would break if certain performance limitations were exceeded?
> The changes aren't to increase performance, or to significantly reduce footprint, and if they aren't about power management, could they be about something fundamental to how the PS5 must operate?

A particularly stringent power ceiling, or a quirk in AMD's DVFS that somehow didn't give Sony what it wanted, could be at issue. I don't know if Sony got anything else tweaked or added when this was done to the FPU.