Xbox One (Durango) Technical hardware investigation

I'm no GAF expert but I don't recall Matt being a reliable GAF insider, much less a dev.

He was one of the "yield issues and downclock" FUD crowd IIRC.

No, from my memory he was mostly involved in confirming the leaked specs and the PS4's GPU advantage, all of which came to pass. He never really waded into the CBOAT stuff.

3dilettante said:
I don't believe the debate was ever really concluded as to what was going on with that benchmark, nor how it can be reconciled with this:

http://gamingbolt.com/new-benchmark-...-frames-faster

(4 cores in use)

Even those results don't make sense if we accept Microsoft's assertion that their CPU is running at a higher clockspeed than the PS4's unless there is performance overhead on the Xbox One OS.
 
There are multiple ways that can come to pass, including some unknown quirk with the testing or the platforms themselves.

Microsoft, or just a spokesperson for Microsoft, was really only in the position of asserting that their CPU was faster than their original base clock. They couldn't really speak for Sony, and Sony isn't talking.

There were weaknesses to the specs they intimated Orbis had, like the alleged 100% coherent memory bandwidth disparity, that implied they were going on limited information themselves and weren't doing a perfect job of interpreting what they were seeing.
 
Except MS execs did on multiple occasions claim explicitly that their CPU was faster, and at the time it was pointed out that the information was not public and they did not have the standing to conclude as much. They have never retracted or disavowed those, or other statements about the relative power arguments despite the fact that most of what they tried to argue has been demonstrated to be false.
 
Even those results don't make sense if we accept Microsoft's assertion that their CPU is running at a higher clockspeed than the PS4's unless there is performance overhead on the Xbox One OS.

The article from Gamingbolt really doesn't make much sense at all.

The PC CPU benchmark time is 4ms, PS4/X1 CPU result is 11ms.

http://www.radgametools.com/bnkmain.htm

Reading the rest of the updated article from Gamingbolt:

"PC (2.8Ghz Core i5 with 4 cores and AMD R9 290x): CPU: 1.3 ms. GPU: 1.4 ms.
PS4: CPU 2.3 ms. GPU: 1.6 ms.
Xbox: CPU 2.3 ms. GPU: 2.3 ms.

As you can see, the GPU time is less on the PlayStation 4 since it has more compute units but the time is still limited by the CPU and GPU on both the Xbox One and PS4, resulting into an effective speed of 2.3 ms on both consoles."

So the CPU part of the results above is the impact of the CPU on the GPU benchmark. So the PC CPU speeds up the GPU result by 0.1ms, the PS4 CPU causes it to slow down by 0.7 ms while the X1 CPU has no effect on the result at all?
 
They have never retracted or disavowed those, or other statements about the relative power arguments despite the fact that most of what they tried to argue has been demonstrated to be false.
I do not know if they ever will. I do not know how far up the chain of command it was cleared to make the comparisons, but after it went over like a brick at GAF, I'm pretty sure they abandoned the attempt.

It was a few posts on a web forum, and no specific reasons for the disparate results have been definitively given, so they'd have wiggle room. Since stonewalling is probably more effective than trying to wade into that again, they'll say nothing.

The article from Gamingbolt really doesn't make much sense at all.

There is a 4K CPU-only time of 11ms for both consoles on the Bink page as well.
 
I don't know if that's what it means.
The writing could do with a clearer description, but one interpretation is that the decoding process runs the GPU and CPU phases in parallel, with the times for each side being independent. This leaves the longer time as the actual limiter for the overall system.

Perhaps a GPU of the Xbox One's capacity takes 2.3ms, and it so happens to match the CPU time. The percentage difference is close to the CU gap.
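
If that reading is right, the per-platform numbers reduce to a simple max() over the two stages. A minimal sketch of that interpretation (my assumption, just reusing the figures Gamingbolt quoted earlier in the thread, not new measurements):

    // Hypothetical sketch: if the CPU and GPU halves of a frame decode overlap,
    // the effective per-frame cost is simply the longer of the two stages.
    #include <algorithm>
    #include <cstdio>

    int main() {
        struct Result { const char* platform; double cpu_ms; double gpu_ms; };
        const Result results[] = {
            { "PC (i5 + R9 290X)", 1.3, 1.4 },
            { "PS4",               2.3, 1.6 },
            { "Xbox One",          2.3, 2.3 },
        };
        for (const Result& r : results) {
            // Effective frame time = max(CPU stage, GPU stage) under this assumption.
            std::printf("%-18s -> %.1f ms\n", r.platform, std::max(r.cpu_ms, r.gpu_ms));
        }
        return 0;
    }

That gives 1.4 ms for the PC and 2.3 ms for both consoles, which lines up with the "effective speed of 2.3 ms on both consoles" wording in the article.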
 
Actually I misremembered, it was a claim of a 200% gap.

Durango was stated as having 30 GB/s of coherent bandwidth, or 3x for GPGPU. The Hot Chips diagrams for the SoC indicate the link with an arrow going in both directions, but in the same diagram the 204 GB/s eSRAM link is denoted in the same way.
Consistency would indicate that the coherent bandwidth is not 30 GB/s in both directions simultaneously.

The Vgleaks documents for Orbis had 10 GB/s next to the diagram arrows, but other text that indicated it was that amount in each direction.
 
It was also misleading in that PS4 has an additional bus that bypasses the GPU cache to enhance performance for certain situations. The most common interpretation is that Sony sacrificed a portion of the CPU bus speed to gain that capability. Microsoft's spin would suggest this puts the PS4 at a disadvantage when it's likely the opposite is true.
 
And even cooler, Bink 2 can be much faster than Bink 1, due to its multi-core scaling and SIMD design (up to 70% of the instructions executed on a frame are SIMD). It is really fast - it can play 4K video frames (3840x2160) in 4 ms on PCs and 11 ms on PS4/Xbox One using the CPU only (or 1.4 ms PC and 2.3 ms PS4/Xbox using GPU acceleration)!
http://www.radgametools.com/bnkmain.htm

I can't see that 1.6ms number for the PS4 on the official "Bink Video" page.

PC: CPU ---> 4ms, CPU&GPU ---> 1.4ms
PS4/Xbox One: CPU ---> 11ms, CPU&GPU ---> 2.3ms

These are very different from gamingbolt claims.

-PC (2.8Ghz Core i5 with 4 cores and AMD R9 290x): CPU: 1.3 ms. GPU: 1.4 ms.
-PS4: CPU 2.3 ms. GPU: 1.6 ms.
-Xbox: CPU 2.3 ms. GPU: 2.3 ms.
Take a look at their PC results, 1.3ms for CPU alone & 1.4ms for CPU and GPU acceleration ! lol
 
http://www.radgametools.com/bnkmain.htm

I can't see that 1.6ms number for the PS4 on the official "Bink Video" page.

PC: CPU ---> 4ms, CPU&GPU ---> 1.4ms
PS4/Xbox One: CPU ---> 11ms, CPU&GPU ---> 2.3ms

These are very different from gamingbolt claims.

Take a look at their PC results, 1.3ms for CPU alone & 1.4ms for CPU and GPU acceleration ! lol

Assuming 3dilettante's interpretation is correct - that the two parts are run in parallel and thus the longer of the two is the total time - which is how I'll interpret it until additional information is presented, then it does add up for the most part, since 2.3ms is the total time it takes on both consoles due to the CPU limitation.

There is another time of 1.3ms further down the page for the PC CPU+GPU but I assume that's just a typo.

The interesting thing for me from these results is the 290X time of 1.4ms vs the PS4 GPU time of 1.6ms. The only explanation I have for that is that the PC GPU time also accounts for the latency of sending the data back and forth over PCI-E. This gives us a pretty good idea of what kind of benefits the HSA architecture of the consoles might offer over a discrete GPU.

We can assume that, given the power disparity between the PS4 GPU and a 290X, the actual GPU time would be around 0.5ms, which means the PCI bus is adding about 0.9ms to the overall operation. I bet if they used a 280X for this the overall PC GPU time would be slower than the PS4's, although obviously faster overall since the PS4 is still held back by the CPU.
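
A rough back-of-envelope version of that estimate (the ~3x compute advantage for the 290X over the PS4 GPU is my own assumption, not a measured figure):

    // Back-of-envelope sketch of the PCI-E overhead guess above.
    // Inputs are the Gamingbolt GPU-stage figures plus an assumed compute ratio.
    #include <cstdio>

    int main() {
        const double ps4_gpu_ms    = 1.6;  // PS4 GPU stage
        const double r290x_gpu_ms  = 1.4;  // PC (R9 290X) GPU stage
        const double compute_ratio = 3.0;  // assumed 290X vs PS4 GPU compute advantage

        const double est_compute_ms  = ps4_gpu_ms / compute_ratio;     // ~0.5 ms of pure compute
        const double est_transfer_ms = r290x_gpu_ms - est_compute_ms;  // ~0.9 ms left for PCI-E traffic

        std::printf("estimated 290X compute: %.2f ms, implied transfer overhead: %.2f ms\n",
                    est_compute_ms, est_transfer_ms);
        return 0;
    }

The real split could obviously be different, but it shows how the ~0.9 ms figure falls out of those assumptions.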
 
Except MS execs did on multiple occasions claim explicitly that their CPU was faster, and at the time it was pointed out that the information was not public and they did not have the standing to conclude as much. They have never retracted or disavowed those, or other statements about the relative power arguments despite the fact that most of what they tried to argue has been demonstrated to be false.

Their CPU is clocked faster, and it is the exact same design. It's a fact unless you're willfully being obtuse. Other than that we have a couple of shady "benchmarks" (which don't even agree - one has the PS4 CPU faster, the other has them tied) that some have used to imply there's more overhead on the X1 CPU, but they prove nothing concrete so far really.
 
Assuming 3dilettante's interpretation is correct - that the two parts are run in parallel and thus the longer of the two is the total time - which is how I'll interpret it until additional information is presented, then it does add up for the most part, since 2.3ms is the total time it takes on both consoles due to the CPU limitation.

There is another time of 1.3ms further down the page for the PC CPU+GPU but I assume that's just a typo.

The interesting thing for me from these results is the 290X time of 1.4ms vs the PS4 GPU time of 1.6ms. The only explanation I have for that is that the PC GPU time also accounts for the latency of sending the data back and forth over PCI-E. This gives us a pretty good idea of what kind of benefits the HSA architecture of the consoles might offer over a discrete GPU.

We can assume that, given the power disparity between the PS4 GPU and a 290X, the actual GPU time would be around 0.5ms, which means the PCI bus is adding about 0.9ms to the overall operation. I bet if they used a 280X for this the overall PC GPU time would be slower than the PS4's, although obviously faster overall since the PS4 is still held back by the CPU.

I'm kinda confused by the Gamingbolt report. As I said before, there is no sign of that 1.6ms CPU&GPU number for the PS4 on the official "Bink Video" page (actually there is no "1.6" anywhere on the page), nor of the 1.3ms for the PC CPU (the 1.3ms typo is about CPU&GPU acceleration on the PC, not the CPU result).

If you go by the official page results there is no problem. 4ms for CPU and 1.4ms for CPU&GPU on the PC is acceptable. On the other hand, 11ms for CPU and 2.3ms for CPU&GPU on PS4/X1 is acceptable too. The benefits of using the coherent bandwidth on both consoles' APUs and the lower overhead are very obvious in these results (going from 11ms with a weaker CPU to 2.3ms by using GPU compute shaders and the coherent bandwidth).
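
Just to put numbers on that, a quick sketch of the speed-up implied by the official page's figures (my arithmetic, not RAD's):

    // Speed-up implied by the CPU-only vs CPU+GPU decode times on bnkmain.htm.
    #include <cstdio>

    int main() {
        std::printf("consoles: %.1fx faster with GPU assist\n", 11.0 / 2.3);  // ~4.8x
        std::printf("PC:       %.1fx faster with GPU assist\n", 4.0 / 1.4);   // ~2.9x
        return 0;
    }

So the consoles gain roughly 4.8x from offloading to the GPU versus roughly 2.9x on the PC, which is the kind of benefit I'm pointing at.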
 
I'm kinda confused by the Gamingbolt report. As I said before, there is no sign of that 1.6ms CPU&GPU number for the PS4 on the official "Bink Video" page (actually there is no "1.6" anywhere on the page), nor of the 1.3ms for the PC CPU (the 1.3ms typo is about CPU&GPU acceleration on the PC, not the CPU result). If you go by the official page results there is no problem. 4ms for CPU and 1.4ms for CPU&GPU on the PC is acceptable. On the other hand, 11ms for CPU and 2.3ms for CPU&GPU on PS4/X1 is acceptable too. The benefits of using the coherent bandwidth on both consoles' APUs and the lower overhead are very obvious in these results (going from 11ms with a weaker CPU to 2.3ms by using GPU compute shaders and the coherent bandwidth).

Apparently the 1.6ms figure was given to Gamingbolt directly, along with the spec of the PC they were benchmarking. I asked them about the numbers and they are hoping to go into more detail as to the reasons the numbers come out like that in a planned interview with RAD.
 
Apparently the 1.6ms figure was given to Gamingbolt directly, along with the spec of the PC they were benchmarking. I asked them about the numbers and they are hoping to go into more detail as to the reasons the numbers come out like that in a planned interview with RAD.

A developer from Rad Game Tools got in touch with us clarifying that the numbers were incorrectly phrased on their website [which they have corrected now].
http://gamingbolt.com/new-benchmark...ay-4k-video-frames-faster#BJzorBArCDEz2k2V.99

Then what website were they talking about?!
 
I'm kinda confused by the Gamingbolt report. As I said before, there is no sign of that 1.6ms CPU&GPU number for the PS4 on the official "Bink Video" page (actually there is no "1.6" anywhere on the page), nor of the 1.3ms for the PC CPU (the 1.3ms typo is about CPU&GPU acceleration on the PC, not the CPU result).

As Zachriel said, the broken-down CPU+GPU results were given to Gamingbolt directly. There's no real reason to advertise the CPU-only or GPU-only times on the official website because they're not really relevant on their own. They are only sub-components of the total decode time (when considering the CPU+GPU result).

It seems the consoles are being held back by the CPU component of the CPU+GPU decode, while the PC is being held back by the GPU component - likely because of PCI-E latency. However, CPU+GPU decode is obviously still a lot faster than CPU decode alone on the PC, since the CPU portion of the work is a lot smaller.
 
It was also misleading in that PS4 has an additional bus that bypasses the GPU cache to enhance performance for certain situations. The most common interpretation is that Sony sacrificed a portion of the CPU bus speed to gain that capability. Microsoft's spin would suggest this puts the PS4 at a disadvantage when it's likely the opposite is true.

Maybe because Durango might sport an additional link too. Kaveri has three buses: the Radeon bus and two additional links, one non-coherent and the other a coherent bus that bypasses the GPU caches.
 
I like the spin that MS had somehow made technical comments on Sony's hardware. Do we have a source for that?
 
I like the spin that MS had somehow made technical comments on Sony's hardware. Do we have a source for that?

Albert Penello said:
Performance: I’m not dismissing raw performance. I’m stating – as I have stated from the beginning – that the performance delta between the two platforms is not as great as the raw numbers lead the average consumer to believe. There are things about our system architecture not fully understood, and there are things about theirs as well, that bring the two systems into balance.

People DO understand that Microsoft has some of the smartest graphics programmers IN THE WORLD. We CREATED DirectX, the standard API’s that everyone programs against. So while people laude Sony for their HW skills, do you really think we don’t know how to build a system optimized for maximizing graphics for programmers? Seriously? There is no way we’re giving up a 30%+ advantage to Sony. And ANYONE who has seen both systems running could say there are great looking games on both systems. If there was really huge performance difference – it would be obvious.

I get a ton of hate for saying this – but it’s been the same EVERY generation. Sony claims more power, they did it with Cell, they did it with Emotion Engine, and they are doing it again. And, in the end, games on our system looked the same or better.

I’m not saying they haven’t built a good system – I’m merely saying that anyone who wants to die on their sword over this 30%+ power advantage are going to be fighting an uphill battle over the next 10 years…

http://www.neogaf.com/forum/showpost.php?p=80168873&postcount=708
 