SUBSTANCE ENGINE

Fair enough. But you would have to question what's going wrong with their engine if a 3.4GHz+ i7 core is less than twice as fast as a 1.6GHz Jaguar core in this operation.
Which is why they are more likely talking about the whole CPU and not a single core: 8 (or 7, or 6) Jaguar cores at 1.6 GHz producing the 12/14 figures, versus 4 cores at 3.4 GHz producing 26.

It's decidedly unclear either way, but ignoring the labelling of '1 CPU', the rest of the metrics line up more plausibly with the entire CPU in each device rather than single cores (there's no way a PS4 core is 1/6th better than an XB1 core unless there's frickin' Secret Sauce).

Even then, the Jaguar is doing very well here, as the scaling (back in 2011, anyhow) wasn't linear. The scaling across more cores on the consoles must be much better than the 2011 results, or there's something else entirely going on that we're not privy to.
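For what it's worth, here's a quick back-of-the-envelope normalisation in Python of the whole-CPU reading, using the 12/14/26 MB/s figures quoted above; the core counts available to games and the 3.4 GHz i7 clock are this thread's guesses, not confirmed figures:

    # Normalise the quoted Substance figures per core and per GHz-core,
    # under the "whole CPU" interpretation. All inputs are thread guesses.
    def per_core(total_mb_s, cores):
        return total_mb_s / cores

    def per_ghz_core(total_mb_s, cores, ghz):
        return total_mb_s / (cores * ghz)

    print(per_core(12, 6), per_core(14, 6), per_core(26, 4))
    # -> 2.0, 2.33, 6.5 MB/s per core: the i7 core is ~2.8x a Jaguar core
    print(per_ghz_core(12, 6, 1.6), per_ghz_core(14, 6, 1.6), per_ghz_core(26, 4, 3.4))
    # -> ~1.25, ~1.46, ~1.91 MB/s per GHz-core: clock-for-clock the gap is ~1.3-1.5x

Read that way, the i7's per-core advantage looks far more plausible than the per-core labelling would imply.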
 
I think it's per core and not per CPU. Otherwise, the iPhone 5 vs Tegra 4 result wouldn't make any sense. Assuming it's a dual-core A6 @ 1.3GHz vs a quad-core T4 @ 1.9GHz, I'd expect the T4 to be at least twice or 3x faster (4 x 1.9 vs 2 x 1.3 is roughly 2.9x the aggregate clock).
 
If it really is per core, any chance that could have anything to do with a hypervisor or other abstraction layer being less efficient? Differences in code compilation were already mentioned; I guess that would be similar. I have a hard time believing the PS4 is clocked higher, especially that much higher. We had rumours and leaks on pretty much everything that came to fruition, and on lots of stuff that didn't, for both platforms, and I don't think a higher clock was even entertained as a fantasy.
 
That's the latter interpretation, which would place Jaguar very well against the i7, although it's a poor turn of phrase. With 2x the number of cores it'd have comparable performance at half the clock rate. Factoring in the small core size, Jaguar would be the 'down-layerer-of-the-smack'. Is there any reason for MS's and Sony's compilers to be that different, seeing as it's x86? I assume a lot of the compiler will come from AMD.

MS would probably write their own compiler from the ground up. I believe AMD uses the GNU toolchain, and MS wouldn't touch that for licensing reasons.

It almost has to be a per-core benchmark, otherwise the Jaguar looks awful in comparison to the old iPad 2 CPU. I think that number only makes sense on a per-core basis.

Likewise, we don't know which i7 is in this benchmark: Nehalem, Haswell, or any one of the i7s in between.
 
They mean texture effects. The substance demos are very impressive. The shaders support parameters like 'age' and you can easily adjust age of a material and have it computed in realtime, adding decay.

From 1:10


He mentions scalability. 3x improvement across 4 cores versus 1 core.
I am totally impressed with this technology. Imagine it with Tiled Resources: fabulous textures at a marginal cost. This technology would be perfect for the PS3 and Xbox 360 to keep up; they have multicore CPUs, and a 70% improvement by using 4 cores sounds really nice. The problem with those consoles is that they'd have very little CPU left over for other processes, like sound and physics.
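The 3x-on-4-cores figure from the video fits an Amdahl's-law-style parallel fraction of roughly 8/9; that framing is mine rather than anything stated in the video, but it gives a feel for what the same code might do across more cores (a minimal Python sketch):

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
    # Pick the parallel fraction p that gives 3x on 4 cores,
    # then see what the same p would give on 8 cores.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    p = 8.0 / 9.0                 # from solving 1 / ((1 - p) + p / 4) = 3
    print(speedup(p, 4))          # ~3.0x
    print(speedup(p, 8))          # ~4.5x, if the serial fraction stays the same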

Also surprised at how well the Jaguar performs compared to the i7.

Sony could upclock the CPU without mentioning it anywhere. I think that at 2.0 GHz it would fit the monster GPU of the PS4 just perfectly. At 1.6GHz it seemed to be a somewhat gimpy CPU for such a GPU.

Xbox One should be pretty fine with what it has. A 1.31 teraflops GPU is a nice fit for the CPU and I just hope they don't try to upclock anything. Just sayin'
 
I am curious as to why it was generally accepted that the PS4's CPU was clocked at 1.6GHz. Sony never said anything about the CPU other than '8-core Jaguar'.
 
Seems implausible though. A 2GHz announcement would be free PR; they gain nothing from being silent about that investment.
Maybe they had second thoughts when one of the competitors announced a CPU upclock, and that changed their perspective. Since it was pretty much a known fact that the PS4 is the more powerful console, they didn't announce it. I wonder what all those sites that took a 1.6GHz CPU for granted are going to do now, if the rumours about an upclock aren't confirmed by Sony.

So is 1.75GHz with 7 cores available the most plausible guess?
Following Allnets maths....
Clearly it's 14/12 ~= 2.0/1.75

Clearly.


1.75 x 7 would be 12.25. 1.75 x 6 is 10.5, which rounds to 10, + 2 extra cores = about 12, and the CPU has two cores reserved for the OS.

2 x 7 would be 14. But then 2x6 would be 12.

This is grandma's maths, as the saying goes here, and I'm not sure what multiplying these numbers actually proves.
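Indulging the grandma's maths for a moment, here is the whole table of cores x clock products, just to show which combinations land near the 12 and 14 figures; the premise that the benchmark scales this way, and all of these clocks and core counts, are thread speculation (Python):

    # Cores x clock for the candidate configurations being thrown around.
    # None of these clocks or core counts are confirmed.
    for cores in (6, 7, 8):
        for ghz in (1.6, 1.75, 2.0):
            print(f"{cores} cores @ {ghz} GHz -> {cores * ghz:.2f}")
    # 6 @ 2.0  -> 12.00    7 @ 1.75 -> 12.25    8 @ 1.6  -> 12.80
    # 7 @ 2.0  -> 14.00    8 @ 1.75 -> 14.00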
 
I am curious as to why it was generally accepted that the PS4's CPU was clocked at 1.6GHz. Sony never said anything about the CPU other than '8-core Jaguar'.
Agreed. The only thing we have from Sony was the KZ slide from way back in February AFAIK.

Following Allnets maths....
Clearly it's 14/12 ~= 2.0/1.75

Clearly.
Al was being sarcastic. I highly doubt the PS4's CPU is clocked at 2GHz.

1.75 x 7 would be 12.25. 1.75 x 6 is 10.5, which rounds to 10, + 2 extra cores = about 12, and the CPU has two cores reserved for the OS.

2 x 7 would be 14. But then 2x6 would be 12.

This is grandma's maths, as the saying goes here, and I'm not sure what multiplying these numbers actually proves.
You don't multiply the cores by the clock speed, do you?

We know the X1's CPU is 1.75GHz and that there are 6 cores available, so 2MB/s of textures are generated per core @ 1.75GHz. So that means 7 cores are available on the PS4, running at the same clock speed as the X1's CPU, doesn't it? Or am I way off here? (long day)
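Put as a sketch (Python), that inference rests on two assumptions, both unconfirmed: that the benchmark scales linearly with core count, and that a PS4 Jaguar core at the same clock does exactly the same work as an X1 core:

    # X1: 6 cores @ 1.75 GHz producing 12 MB/s -> per-core rate at that clock.
    xb1_total, xb1_cores = 12.0, 6
    per_core_rate = xb1_total / xb1_cores      # 2.0 MB/s per core @ 1.75 GHz

    # How many identical cores would the PS4 need for its 14 MB/s?
    ps4_cores_needed = 14.0 / per_core_rate    # 7.0
    print(per_core_rate, ps4_cores_needed)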
 
Agreed. The only thing we have from Sony was the KZ slide from way back in February AFAIK.


Al was being sarcastic. I highly doubt the PS4's CPU is clocked at 2GHz.


You don't multiply the cores by the clock speed to determine how many MB/s of textures the CPU can generate, do you?

We know the X1's CPU is 1.75GHz and that there are 6 cores available, so 2MB/s of textures are generated per core @ 1.75GHz. So that means 7 cores are available on the PS4, running at the same clock speed as the X1's CPU, doesn't it?
lulz, Allnets, then we would have to ask him to calculate our mantissas. Get in our mainframe Al. :p

Now seriously, what about 1.6GHz and 7 cores then? Still a bit of a gimpy clock for such a GPU, but upclocking the CPU to the exact same clock as the Xbox One's makes me wonder why Sony would do that... What do they gain by following Microsoft's CPU specs without testing? Microsoft announced the upclock fairly late in the development cycle, when the PS4 couldn't change its specs, AFAIK.

Then there is the 3MB of eSRAM close to the CPU blocks on the Xbox One -what is it used for?- and now someone who allegedly knows what he's talking about -Matt; dunno who he is, I'm not into NeoGAF at all- says that the PS4's CPU has some extra grunt compared to the Xbox One's.

Why did a trustworthy and reliable person once say here that the Xbox One has a 50% faster cache for the CPU? Was it because of the maximum bandwidth? And why limit the PS4's CPU to 20GB/s of bandwidth when developers have 7 cores at their disposal instead of the 6 CPU cores on the Xbox One, yet those cores can use up to 30GB/s on the X1?
 
Is the Xbox One CPU's Game/WinOS allocation 6/2 physical cores, or ~75%/25% of each core's processing time?

1.75 * 0.75 = 1.3125

1.3125 / 1.6 = 82%

11.7 / 14.27 => 12/14
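A sketch of that alternative reading (Python); the 14.27 MB/s figure is the one used in the post above, and the 75/25 time-slice is only one of the two possibilities being asked about:

    # If the X1 reserves ~25% of every core's time for the OS instead of
    # two whole cores, its effective per-core clock for games drops:
    xb1_effective_ghz = 1.75 * 0.75        # 1.3125 GHz
    ratio = xb1_effective_ghz / 1.6        # ~0.82 vs a 1.6 GHz PS4 core
    print(ratio)                           # 0.8203...
    print(ratio * 14.27)                   # ~11.7, i.e. the 12-vs-14 split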
 
and now someone who allegedly knows what he's talking about -Matt; dunno who he is, I'm not into NeoGAF at all- says that the PS4's CPU has some extra grunt compared to the Xbox One's.
If you are referring to the post that was linked earlier, that is not how I would interpret it.
The wording was more that one could extract more from it, which may also be a matter of overheads, current tools, or differences in the OS and service architecture.

There seems to be an assumption that the engine is doing the exact same thing for every platform, but without knowing exactly what it is doing in terms of code and system calls, we can't be sure about that.

Why did a trustworthy and reliable person once say here that the Xbox One has a 50% faster cache for the CPU?
Where?
The cache has clocks linked tightly to the CPU, which isn't 50% faster.
 
If you are referring to the post that was linked earlier, that is not how I would interpret it.
The wording was more that one could extract more from it, which may also be a matter of overheads, current tools, or differences in the OS and service architecture.

There seems to be an assumption that the engine is doing the exact same thing for every platform, but without knowing exactly what it is doing in terms of code and system calls, we can't be sure about that.


Where?
The cache has clocks linked tightly to the CPU, which isn't 50% faster.

http://forum.beyond3d.com/showpost.php?p=1808041&postcount=327
 