PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Going back to the Substance Engine benchmark, I guess we can rule out the theory that the PS4 has fewer reserved cores. Does this suggest the PS4's Jaguars are clocked higher, or are they at the same clock with some overhead that drags down the Xbox's performance?

No. The last tweet, about how many PS2 CPUs it would take to match the PS4's CPU, was calculated with 1.6 GHz. It was the first official statement about the clock speed. The Xbox One CPU may be a bit slower because of the virtualization, but this can change with newer updates.
 
The standard Jaguar module's L2 is 2 MB. That is not consistent with the presentation's labeling.

Whoops, you're right; so for some reason they've cut half of the L2 out. The reasoning that it's slow going to the "wrong" L2 still holds, though.
 
Seeing as all the fabbed chips will have 8 Jaguar cores and 2×2 MB of L2, and the two reserved cores can therefore always be the same physical cores on the die, is there any reason why they might reserve L2 on the other module :?: Reserving half of it on the module where they reserve the two cores makes some sense at least.

edit:

Perhaps as "just-in-case" spill-over and/or for non-latency-sensitive OS tasks :?:
 
Seeing as all the fabbed chips will have 8 Jaguar cores and 2×2 MB of L2, and the two reserved cores can therefore always be the same physical cores on the die, is there any reason why they might reserve L2 on the other module :?: Reserving half of it on the module where they reserve the two cores makes some sense at least. Perhaps as "just-in-case" spill-over and/or for non-latency-sensitive OS tasks :?:

Isn't that assuming the reserved cores are in the same module? Perhaps I'm misunderstanding you.

EDIT

Saw your update. To rephrase: is there any reason to assume both reserved cores are on the same module? I believe it was previously mentioned that doing so would help prevent the other module's cache from being polluted by the OS cores (or something along those lines), but do we have any solid info stating they are?
 
Isn't that assuming the reserved cores are in the same module? Perhaps I'm misunderstanding you.

One of the later diagrams at least implies they are cores 6/7 on a single module. I'm not sure having the reservation split between modules is that great an idea for cache purposes, but that's one for the CPU folks to maybe help out with here. :p

edit:

I mean, basically the situation might be that your one reserved thread/core for some reason uses more than 512 KB of L2. It'd seem better for performance to be able to make use of the adjacent L2 on the same module "just in case", but that's just a guess. *shrug*

Put another way, what would the advantage be in splitting the reserved cores between modules?
 
One of the later diagrams at least implies they are cores 6/7 on a single module. I'm not sure having the reservation split between modules is that great an idea for cache purposes, but that's one for the CPU folks to maybe help out with here. :p

Ah, okay, I didn't realize there was a diagram floating out there implying that. Thanks.
 
It was in the video; however, it can't be known for sure whether it was accurate in removing specifically cores 6 and 7, or whether it just removed two cores from the end for illustration purposes, when they could have been any of the 8 available cores.
 
So again, because all 8 cores are present by design and desire, what purpose would it serve to "randomize" or split up the reserved cores between the two modules :?:
 
Resilience against compromise, but I can only think of a few attack vectors which are predicated on knowing which CPUs in a multicore system are running the hypervisor or code with escalated privileges. And TrustZone (or whatever equivalent solution Sony is using for security) would already need to have been compromised. That'd be AAA paranoia.
 
I can only think of a few attack vectors which are predicated on knowing which CPUs in a multicore system are running the hypervisor or code with escalated privileges.
Err... would you mind sharing your thoughts? You lost me on those (or maybe it's the music in my ears right now, but still...).

And TrustZone (or whatever equivalent solution Sony is using for security) would already need to have been compromised. That'd be AAA paranoia.

...and here I am totally lost.
 
Err... would you mind sharing your thoughts? You lost me on those (or maybe it's the music in my ears right now, but still...).
I'll pm you.
...and here I am totally lost.
I'm fairly confident that Sony is using a TrustZone solution in the PS4, and you'd need to compromise that first.
 
Going back to the Substance Engine benchmark, I guess we can rule out the theory that the PS4 has fewer reserved cores. Does this suggest the PS4's Jaguars are clocked higher, or are they at the same clock with some overhead that drags down the Xbox's performance?

The compiler for the PS4 is better than the one provided by Microsoft. I think the CPU performance difference is entirely explained by that.
 
The compiler for the PS4 is better than the one provided by Microsoft.

...again, I cannot believe that, sorry. MSVC has more resources and more experience optimizing for multiple platforms and for AMD chips.
And they have been making compilers since the early '90s (and hired away loads of people from Borland as well).
 
...again, I cannot believe that, sorry. MSVC has more resources and more experience optimizing for multiple platforms and for AMD chips.
And they have been making compilers since the early '90s (and hired away loads of people from Borland as well).
Ok, based on what data have you reached this conclusion?
 
I thought it was openly and publicly reported by developers that, for whatever reason, MS's tools are in a worse state than the already-less-than-stellar PS4 tools?
 
I am not discussing toolchain status - rather the compiler's backend.
My conclusion comes simply from the fact that MS is/should be highly advantaged in developing an optimized backend for AMD: they just need to add a custom profile for those parts among the existing ones. Code generation is otherwise the same, given that it is x64.
 
I wasn't aware that code generation had been a solved problem since the K8 and Prescott P4.

Obviously not. For example, GCC usually optimizes code better than MSVC (it has, e.g., better constant propagation/folding), among other interesting amenities, if anyone cares to test.

Fact is, MSVC has a long-standing team behind it, one with quite a grasp of CPU microarchitecture, especially for AMD (and Intel).
Sony obviously has not been developing x86/x64 compilers for two decades.
 