Digital Foundry Article Technical Discussion Archive [2014]

Don't forget, guys, it's not as if the X1 is a rock-solid 30. It drops in all the same places; it's just that it's "not as bad", not that it's actually good either.
 
That chart is... strange.

The 9590 is, by all accounts, a faster CPU than the i5 2500K, usually even besting the 4670K in most benchmarks. So how come it's so much slower here? It seems really strange, especially because the game was probably made mostly for the consoles, and thus actually for AMD hardware. But I guess it's the same as with ACB on my end: worse single-thread performance. Cinebench single-core rates the Intels far higher (obviously)... the chart I just saw (http://www.computerbase.de/2013-07/amd-fx-9590-prozessor-test/5/#diagramm-cinebench-r115 , German page) rates ST performance about 6 percentage points lower for the 9590 vs. the 2500K. Pretty much exactly what we're seeing here.

No wonder this game runs as it does on consoles. Their single thread performance is WAY lower.
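For a rough sense of that gap, here's a back-of-envelope sketch; the per-clock throughput factors are assumptions picked only to roughly line up with the Cinebench-style numbers above, not measurements:

```python
# Rough single-thread comparison. The per-clock throughput factors are
# assumptions for illustration only, not measured Cinebench scores.
cpus = {
    # name: (clock in GHz, assumed relative per-clock throughput)
    "i5-2500K":   (3.3,  1.00),
    "FX-9590":    (4.7,  0.66),  # module-based FX, weaker per clock
    "PS4 Jaguar": (1.6,  0.55),
    "XB1 Jaguar": (1.75, 0.55),
}

baseline = cpus["i5-2500K"][0] * cpus["i5-2500K"][1]
for name, (ghz, per_clock) in cpus.items():
    st = ghz * per_clock
    print(f"{name:11s} ~{st / baseline:.0%} of i5-2500K single-thread throughput")
```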
 
Yeah. They also have CPU Utilization snapshots:
http://gamegpu.ru/action-/-fps-/-tps/assassin-s-creed-unity-test-gpu.html

[CPU utilization charts]
 
Thanks AlNets!
That is starting to paint a picture as to what is happening. And wow those AMD numbers, talk about a need for a stronger processor.
I'm guessing both consoles should chart close to the load of the 8-core? (I assume that includes the OS.) Or should we be comparing it to the load of the 6 cores, since that is what's available for games?
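For reference, here's a quick conversion sketch; the 72% whole-chip figure is hypothetical, and the two-reserved-core split is an assumption about the 2014-era OS reservation:

```python
# Convert a whole-chip utilisation reading into "percentage of the cores a
# game can actually use", assuming 2 of the 8 Jaguar cores are OS-reserved.
total_cores = 8
reserved_cores = 2                 # assumption: OS reservation on both consoles
game_cores = total_cores - reserved_cores

chip_utilisation = 0.72            # hypothetical whole-chip reading from a chart

core_seconds = chip_utilisation * total_cores        # core-seconds of work per second
game_core_utilisation = core_seconds / game_cores    # against the 6 game-visible cores
print(f"{chip_utilisation:.0%} of 8 cores is about {game_core_utilisation:.0%} of the 6 game-visible cores")
```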
 
XB1 has the same CPU BW blocking, no?
Well, actually the XB1 has more memory bandwidth available for the CPU, and it's also more dynamic.

So it seems the resolution is lower on Xbox One because of the GPU, and on PS4 because of CPU and memory bandwidth.
The available memory bandwidth on PS4 could fall below 100GB/s under CPU-intensive tasks, according to the Sony chart (the chart doesn't show the maximum memory usage of the CPU, just one direction).
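To make the "falls below 100GB/s" point concrete, here's a toy contention model; the penalty factor is an assumption chosen only to reproduce the shape of the chart being described, not a measured value:

```python
# Toy model of CPU/GPU contention on shared GDDR5. Assumption: every GB/s the
# CPU pulls costs the GPU more than 1 GB/s of effective bandwidth, because
# mixed CPU/GPU traffic lowers DRAM efficiency. The factor is illustrative only.
PEAK_BW = 176.0          # GB/s, PS4 GDDR5 peak
CONTENTION_FACTOR = 4.0  # assumed penalty multiplier

for cpu_bw in (0, 5, 10, 15, 20):
    gpu_bw = PEAK_BW - CONTENTION_FACTOR * cpu_bw
    print(f"CPU using {cpu_bw:2d} GB/s -> roughly {gpu_bw:5.1f} GB/s left for the GPU")
```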

But the biggest flaw is that the engine is just too new and most likely still unoptimized. AC Unity is the first game with this engine. The game had a fixed release date, so lowering the resolution was probably the easiest/fastest way to achieve a mostly fluid framerate.
 
My last post before I jump on the train for a bit. But if the CPU loads on both consoles are like those 8-core graphs, that's pretty intensive, almost like a Prime95 stress test.
Can someone check fan noise on their PS4 and X1 if they have the game? That level of CPU utilization combined with that level of GPU utilization would be an interesting result to see thermally.

Thanks,
 
What about draw call limits? Is it brute forcing DX limitations?
According to Ubisoft, the engine behind Unity was optimized for very little draw-call overhead, so maybe this isn't something the Xbox suffers from too badly despite its DX11 nature.
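To give a rough sense of why draw-call overhead matters at 30fps, here's a budget sketch; the per-call CPU costs are illustrative assumptions, not measured driver numbers:

```python
# How many draw calls fit in one core's frame budget at 30fps, for a few
# assumed per-call CPU costs. The microsecond figures are illustrative only.
FRAME_BUDGET_MS = 1000.0 / 30.0   # ~33.3 ms per frame

assumed_cost_per_call_us = {
    "high-overhead API path":   50.0,  # assumption
    "fast-path / low-overhead": 10.0,  # assumption
    "console-style thin API":    2.0,  # assumption
}

for path, cost_us in assumed_cost_per_call_us.items():
    calls = (FRAME_BUDGET_MS * 1000.0) / cost_us
    print(f"{path:26s}: ~{calls:7.0f} draw calls per frame on one core")
```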
 
Why are people assuming that the XB1's on-paper CPU clock advantage will translate to the real world?

Last time I checked, the XB1 CPU wasn't realising its potential, possibly due to driver or virtualisation overhead.

For one, there were all those CPU benchmarks and this DF interview:

http://www.eurogamer.net/articles/d...its-really-like-to-make-a-multi-platform-game

Closer to launch there were some complaints about XO driver performance and CPU overhead on GNMX.
Oles Shishkovstov: Let's put it that way - we have seen scenarios where a single CPU core was fully loaded just by issuing draw-calls on Xbox One (and that's surely on the 'mono' driver with several fast-path calls utilised). Then, the same scenario on PS4, it was actually difficult to find those draw-calls in the profile graphs, because they are using almost no time and are barely visible as a result.

What's your take on the differences between Xbox One and PlayStation 4?
Oles Shishkovstov: Well, you kind of answered your own question - PS4 is just a bit more powerful. You forgot to mention the ROP count, it's important too - and let's not forget that both CPU and GPU share bandwidth to DRAM [on both consoles]. I've seen a lot of cases while profiling Xbox One when the GPU could perform fast enough but only when the CPU is basically idle. Unfortunately I've even seen the other way round, when the CPU does perform as expected but only under idle GPU, even if it (the CPU) is supposed to get prioritised memory access. That is why Microsoft's decision to boost the clocks just before the launch was a sensible thing to do with the design set in stone.
Has the situation changed recently due to XB1 driver/SDK updates?
 

Having not looked at CPU scaling graphs for a while, I was surprised to see that the game runs at comparable framerates, and with nearly identical CPU utilization, on an Intel i3-2100 and an AMD FX-8350. After looking around a bit, it seems that's not unusual, at least as far as raw performance goes; CPU utilization shows more variability. Intel CPUs are certainly known for better single-threaded performance. Still, it's confusing to me that a game can use 72% of the CPU time on an 8-core FX-8350 and just barely outperform an i3-2100. Is it fair to think this indicates that the engine in question is just poorly optimized for multi-core CPUs?
 
The graphs are pretty unfathomable. Every time I think I might have an explanation, there's some data point which completely screws it up. For example, I was going to suggest that maybe they're just limited by single-threaded performance (despite being able to fill other threads where available), but then how would we explain the 5960X's dominance despite it having slightly less single-threaded performance than the i3?

The only thing I can really take away from these charts, in complete contradiction to the general consensus, is that this game does seem to be awesomely optimized for multiple cores, and that's something I'd love to see more of in future, as it should help to drive us (finally) beyond the quad-core rut we're stuck in. It's great to see an 8-core at 3GHz kicking the butt of a 6-core at 3.5GHz.
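One way to square the i3 vs. FX-8350 result with the 5960X's dominance is a simple Amdahl-style model mixing single-thread speed with core count. The per-core speed factors, the parallel fraction, and treating HT threads as full cores are all assumptions for illustration:

```python
# Amdahl-style sketch: frame rate depends on a serial portion (single-thread
# speed) plus a parallel portion spread across threads. All numbers assumed.
def relative_fps(st_speed, threads, parallel_fraction=0.85, work=1.0):
    serial = (1.0 - parallel_fraction) * work / st_speed
    parallel = parallel_fraction * work / (st_speed * threads)
    return 1.0 / (serial + parallel)

cpus = {
    # name: (assumed single-thread speed relative to the i3, usable threads)
    "i3-2100 (2C/4T)":   (1.00, 4),
    "FX-8350 (8 cores)": (0.70, 8),
    "i7-5960X (8C/16T)": (0.95, 16),
}

base = relative_fps(*cpus["i3-2100 (2C/4T)"])
for name, (st, threads) in cpus.items():
    print(f"{name:19s}: ~{relative_fps(st, threads) / base:.2f}x the i3's framerate")
```

Under these assumed numbers the FX-8350 lands roughly level with the i3 while the 5960X pulls well ahead, which is the same shape the charts show.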
 
It's a combination of both then, single-threaded and multi-threaded performance; one of the elements has more weight than the other, but nevertheless it seems both are important here.

It also goes to show that even with targeted optimizations for AMD CPUs, Intel hardware will still come out on top, which nullifies any software advantage AMD might have hoped to gain after reaping all the console contracts.

GPU-wise, the game seems to favor NVIDIA hardware as well, giving it huge leads over AMD counterparts, though this could be attributed to GameWorks effects; AMD could use this to attack GameWorks more rigorously.

[GPU benchmark chart: Assassin's Creed Unity at 1920x1080 with MSAA]
 
And those FX chips are Piledrivers, aren't they?
Not really cores, but modules (2 integer pipes and 1 FPU per module).

I'd like to see a 4-core Jaguar or Puma with a dGPU; anyway, console clocks are so low :(
 
Both GPUs are seriously punching above their weight in the consoles then; the 7870 is basically the PS4's GPU, and the R7 260 is similar to the 7770-based Xbox One.

Console wars aside, this is a pretty piss-poor situation as far as Ubi are concerned. It seems the public and press have dubbed them the new EA, which is a touch unfair to EA in my opinion; most of their stuff has at least passed some sort of QA before release.
 
Intel is Sonic the Hedgehog, and AMD is Robotnik. :p

Can Tails be Hyperthreading??

Why are people assuming that the XB1's on-paper CPU clock advantage will translate to the real world?

The most recent directly comparable bench shows the X1 CPU about 15% faster. There have been two occasions now where MS have released reserved CPU.

This is a separate issue to mono driver performance, which is an unknown at this point, but it will probably always be slower than DX12.
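For what it's worth, here's where a figure around 15% could plausibly come from; the clocks are the public ones, and the extra-core-time fraction is purely an assumption:

```python
# Back-of-envelope on the XB1's theoretical CPU-time advantage per frame.
ps4_clock, xb1_clock = 1.6, 1.75          # GHz, public figures
clock_only = xb1_clock / ps4_clock - 1    # ~9% from the clock bump alone

extra_core_time = 0.05                    # assumed 5% more schedulable core time from freed reserve
combined = (1 + clock_only) * (1 + extra_core_time) - 1
print(f"clock alone: ~{clock_only:.0%}, with assumed extra core time: ~{combined:.0%}")
```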

Both GPUs are seriously punching above their weight in the consoles then; the 7870 is basically the PS4's GPU, and the R7 260 is similar to the 7770-based Xbox One.

To be fair, those PC charts are for 1080p with 4xMSAA, which will be proper expensive on a deferred renderer.

Agree that frame rate isn't really good enough on either console though.

Even admirable ambition has to be tempered by realism.
 
True, but from my own tests, increasing AA when CPU-limited makes little to no difference, as does increasing resolution.

Both consoles could go for 1080p 4xMSAA and still be in the same ballpark they are now.
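That matches the usual CPU-bound picture; here's a minimal "frame time = max(CPU, GPU)" sketch, with all millisecond costs assumed purely for illustration:

```python
# Minimal CPU-vs-GPU bound model. All millisecond costs are assumed; the point
# is the shape, not the exact numbers.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 36.0                                  # assumed CPU cost per frame (sim + draw submission)
gpu_900p_ms, gpu_1080p_4xmsaa_ms = 24.0, 34.0  # assumed GPU costs at two settings

print(f"900p, no MSAA : ~{fps(cpu_ms, gpu_900p_ms):.1f} fps")
print(f"1080p + 4xMSAA: ~{fps(cpu_ms, gpu_1080p_4xmsaa_ms):.1f} fps")
```

With the assumed 36 ms CPU cost dominating, both settings land at roughly the same framerate, which is the "same ballpark" point above.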
 