Xbox One (Durango) Technical hardware investigation

No. That's just an assumption he makes. He doesn't know firsthand. Maybe most of his sources don't know the final clock yet either. You can't keep parroting his assumption that 800MHz remains the target as fact. The story he's breaking pretty specifically throws that figure into doubt, whether or not he recognized the incongruity.

Well, according to sources who have been briefed by Microsoft, the original bandwidth claim derives from a pretty basic calculation - 128 bytes per block multiplied by the GPU speed of 800MHz offers up the previous max throughput of 102.4GB/s. It's believed that this calculation remains true for separate read/write operations from and to the ESRAM.

"Sources who've been briefed by Microsoft" doesn't imply the information is old.

While none of our sources are privy to any production woes Microsoft may or may not be experiencing with its processor, they are making actual Xbox One titles and have not been informed of any hit to performance brought on by production challenges. To the best of their knowledge, 800MHz remains the clock speed of the graphics component of the processor, and the main CPU is operating at the target 1.6GHz. In both respects, this represents parity with the PlayStation 4.

It really can't get a lot clearer there. You have to jump through hoops to see what you're seeing, not vice versa.

It's also pretty fanciful to suggest "oh, the ESRAM is capable of 2X what we thought, we never knew you could write to it!", which is why I have some questions about the article altogether.

But let's say the claims are true: we should say it has 192 GB/s + 68 GB/s = 260 GB/s combined BW available to the GPU, all feeding fewer ALUs, so the real discrepancy per ALU is much higher. Wow! XB1 could have a pretty big advantage in a lot of alpha blending scenarios (just as X360 did).
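For what it's worth, here's the raw arithmetic behind those figures as a minimal sketch (the 128 bytes/block and 800MHz values are the article's; treating the 192 GB/s as a straight addition to the 68 GB/s DDR3 figure is this post's assumption, not a confirmed spec):

```python
# Back-of-the-envelope bandwidth figures from the thread (decimal GB/s).
block_bytes = 128            # bytes per ESRAM block, per the article
gpu_clock_hz = 800e6         # originally leaked 800 MHz GPU clock

esram_one_way = block_bytes * gpu_clock_hz / 1e9   # 102.4 GB/s, the original claim
ddr3_bw = 68.0                                     # DDR3 bandwidth from the leaks
rumoured_esram = 192.0                             # new figure reported in the article

print(esram_one_way)                # 102.4
print(rumoured_esram + ddr3_bw)     # 260.0 GB/s combined, as claimed above
```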
 
"Sources who've been briefed by Microsoft" doesn't imply the information is old.

It does if you don't ignore the "original bandwidth claim" part of the sentence you quoted. I'm sure this is difficult for you, but try not to quote parts of the article that say the opposite of what you think they prove.
 
"remains true"

The OVERWHELMING evidence of the article (and outside it) points to no downclock. A small, vague part can, if you try really hard, be interpreted to mean something else.

That's what you're doing, fine, just admit it though.

For example, they claim "real world" usage of 133 GB/s (only a modest uptick on the original 102.4 claim). So only 69% efficiency against 192 GB/s. Unless they're dealing in something different than A cross B = 192 GB/s, that doesn't sound right to me.

Would you say PS4 can only utilize 0.69 × 176 = 121 GB/s? Do PC GPUs hold to these kinds of efficiencies?
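Spelling out that efficiency comparison (the 176 GB/s is the commonly quoted PS4 GDDR5 peak; the rest are the article's numbers, and applying the same ratio to PS4 is purely for argument's sake):

```python
# Implied ESRAM efficiency if 133 GB/s "real world" sits against a 192 GB/s peak,
# and what the same ratio would mean applied to PS4's 176 GB/s GDDR5 peak.
real_world = 133.0
peak = 192.0
efficiency = real_world / peak            # ~0.69
ps4_peak = 176.0

print(round(efficiency, 2))               # 0.69
print(round(efficiency * ps4_peak, 1))    # 121.9 GB/s, the ~121 figure above
```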

I know you have disdain for MS engineers, but would they really screw something up that badly?

If not, then these ESRAM calculations altogether do not appear to be operating under normal "A cross B" rules. The 192 could refer to darn near anything; it's coincidence. The whole article is difficult to parse, but it does address and confirm 800MHz directly.

Again, the article is weird. I wish Richard would write an entire follow up tbh.

Hell, it would be more believable to me that MS was flat-out lying about 102 GB/s in order to deceive the competition, than "oh whoops, it's actually a lot more".
 
It does if you don't ignore the "original bandwidth claim" part of the sentence you quoted. I'm sure this is difficult for you, but try not to quote parts of the article that say the opposite of what you think they prove.


This is really not even in question based on the article...

While none of our sources are privy to any production woes Microsoft may or may not be experiencing with its processor, they are making actual Xbox One titles and have not been informed of any hit to performance brought on by production challenges. To the best of their knowledge, 800MHz remains the clock speed of the graphics component of the processor, and the main CPU is operating at the target 1.6GHz. In both respects, this represents parity with the PlayStation 4.
 
This is really not even in question based on the article...

Yeah, you basically have to be like "well, this part of the article is infallible, but this other part of the same article means nothing".

At any rate, though, I don't believe it's been down-clocked; 50MHz would certainly not be catastrophic, unlike the previous rumors, and might even be more than canceled out by the (alleged) "found" bandwidth in real-world usage. Something sure made those E3 titles look good.

260 GB/s feeding only 1.2 TFLOPS could be quite a lot of effective bandwidth, much more than equivalent PC GPUs.
 
133 GB/s while alpha blending makes some sense if they figured out some way to overlap reads and writes on the ESRAM.

To do an alpha blend you need to read pixel a, read pixel b, and write pixel a+b. That's 3 memory ops (two reads, one write).

Suppose you can overlap the write for a+b with the first read for the next pixel: read pixel a, read pixel b, write pixel a+b / read pixel c, read pixel d, write pixel c+d / read pixel e ...

So you get 133 GB/s vs 100 GB/s while alpha blending. Seems plausible.
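One rough way to model that overlap claim (this is my own sketch of the reasoning above, not anything stated in the article; it just brackets where 133 GB/s could plausibly sit):

```python
# An alpha blend touches 3 values per pixel: read a, read b, write a+b.
# If reads and writes cannot overlap, all 3 ops share the one-way 102.4 GB/s peak.
# If every write can be fully hidden behind the next pixel's reads, you move
# 3 ops' worth of data in the time of 2 reads, i.e. 1.5x the one-way peak.

one_way_peak = 128 * 800e6 / 1e9        # 102.4 GB/s
no_overlap = one_way_peak               # writes serialized with reads
full_overlap = one_way_peak * 3 / 2     # writes completely hidden: 153.6 GB/s

print(no_overlap, full_overlap)         # 102.4 153.6
# The article's 133 GB/s "real world" figure lands between those bounds,
# consistent with the write only partially overlapping the reads.
```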
 
133 GB/s while alpha blending makes some sense if they figured out some way to overlap reads and writes on the ESRAM.

To do an alpha blend you need to read pixel a, read pixel b, and write pixel a+b. That's 3 memory ops (two reads, one write).

Suppose you can overlap the write for a+b with the first read for the next pixel: read pixel a, read pixel b, write pixel a+b / read pixel c, read pixel d, write pixel c+d / read pixel e ...

So you get 133 GB/s vs 100 GB/s while alpha blending. Seems plausible.

How would this explain the alleged 192 GB/s peak though?
 
How would this explain the alleged 192 GB/s peak though?

I think this amplification of bandwidth comes from doing framebuffer ops, similar to the way that writes to the eDRAM in the 360 can only hit 256 GB/s if you are doing 4xMSAA, which means that if you are not doing MSAA the effective bandwidth is 64 GB/s. So in the X1's case, it seems that for typical or general use its effective bandwidth is 102 GB/s, but if you are doing writes with alpha blending you have an effective bandwidth of 133 GB/s, and the 192 GB/s effective bandwidth only applies if you are doing a particular type of read and/or write operation.

Anyway, this is just my opinion based on what is available in the article and how the 360 eDRAM operates.
 
Rangers said:
"remains true"

Yeah, the laws of mathematics haven't changed. The same math that gave us 102.4GB/s still gives us that figure if you assume no data points have changed. Miraculous!

This is really not even in question based on the article...

Third parties saying they haven't heard about a change is not the same as incontrovertible proof that nothing has changed. Not when the math is giving us reasons to believe it has.
 
Yeah, the laws of mathematics haven't changed. The same math that gave us 102.4GB/s still gives us that figure if you assume no data points have changed. Miraculous!



Third parties saying they haven't heard about a change is not the same as incontrovertible proof that nothing has changed. Not when the math is giving us reasons to believe it has.

May not count for much, but I've heard directly from a first party, and not an insignificant one, that there hasn't been any change to the clocks, and that they've yet to even hear anything of the sort, outside of what's been popping up on game forums, and by extension, gaming news sites. They are still targeting the specs as released by vgleaks.

I've said it before, but if this person hasn't heard anything about a potential downclock, then that's a pretty scary thought.
 
Yeah, the laws of mathematics haven't changed. The same math that gave us 102.4GB/s still gives us that figure if you assume no data points have changed. Miraculous!

You just have conflicting math, and choose which you think is right.

All the evidence favors 800MHz, but you can choose to believe the math the evidence doesn't favor (actually it is more like you are choosing to believe a specific application of math that may not apply, e.g. the referenced 192GB/s likely is not derived from clock × bytes).

Math also says the clock is 800 (800 × 128 = 102.4, "remains true"). Choose which text is correct.

Also you didn't address my other issues (efficiency). Most likely because you agree with them in truth.

Edit: after reading pages of prior comments on the article in this thread, plus my own very limited knowledge, the article borders on illogical. Mr. Leadbetter needs to write a follow-up if possible, explaining it better.

BTW, I finally got around to looking up "Albert Panello"s comments on Neogaf and...yikes (if it's really him, and it certainly seems to be from his post history).
 
You just have conflicting math, and choose which you think is right.

Where do you see a conflict in my math?

Can you point to it?

Cause I don't believe you can.

Yes, all the evidence supports your position, as long as you exclude counter evidence and twist other evidence so that it means what you want it to mean and not what it actually says or simply don't understand how English works.

The problem is the 88% improvement figure makes very little sense, and I would wager a guess that it's a number Richard derived assuming 800MHz was still true and was not one supplied to him by his new insider source.

So what is more likely, that an 800MHz memory interface can both read and write on 7 out of every 8 clock cycles by virtue of some ludicrous, unexplained hardware quirk to produce the 192GB/sec figure, or it's simply the product of doubling the bandwidth of a 750MHz part?
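For reference, the "7 out of every 8 clock cycles" framing falls out of the numbers like this (my own arithmetic restating the poster's point, not anything quoted from the article):

```python
# Extra write traffic an 800 MHz, 128 B/cycle ESRAM would need on top of the
# 102.4 GB/s read stream to hit 192 GB/s total.
read_bw = 128 * 800e6 / 1e9       # 102.4 GB/s if a read happens every cycle
extra = 192.0 - read_bw           # 89.6 GB/s of writes needed on top
write_duty = extra / read_bw      # 0.875 -> a write on 7 of every 8 cycles

print(write_duty)                 # 0.875
```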
 
Where do you see a conflict in my math?

Can you point to it?

Cause I don't believe you can.

Yes, all the evidence supports your position, as long as you exclude counter evidence and twist other evidence so that it means what you want it to mean and not what it actually says or simply don't understand how English works.

The problem is the 88% improvement figure makes very little sense, and I would wager a guess that it's a number Richard derived assuming 800MHz was still true and was not one supplied to him by his new insider source.

So what is more likely, that an 800MHz memory interface can both read and write on 7 out of every 8 clock cycles by virtue of some ludicrous, unexplained hardware quirk to produce the 192GB/sec figure, or it's simply the product of doubling the bandwidth of a 750MHz part?

The conflicting math is 128 × 800 = 102.4, and 750 × 128 × 2 = 192.

One points at 800, one at 750, and both are in the article as facts. For one (128 × 800) the derivation is outlined clearly; for the other (192) it is not stated how it is derived.
 
Yeah, that's been my point all along. 128*800=102.4 was the original assumed bandwidth figure. The new rumor being reported suggests that figure ignores the ability to read and write at the same time doubling the theoretical peak. That should give us 128*800*2=204.8, but instead we are presented with the figure 192GB/s. Richard assumes the base clock hasn't changed and makes a very strained argument about an 88% efficiency improvement through some undisclosed voodoo. The much simpler answer is that 128*750*2=192 and that the "upgrade" that is being leaked here actually implies a downgrade in the final clock speed, even if that hasn't been disclosed to third party developers yet.
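Laying the two candidate readings side by side (straightforward arithmetic on figures already quoted in the thread; which reading the article actually intends is the open question):

```python
# Two ways to arrive at (or near) the article's 192 GB/s ESRAM figure.
bytes_per_cycle = 128

# Reading 1: the clock stays at 800 MHz and read+write are fully doubled.
full_duplex_800 = bytes_per_cycle * 800e6 * 2 / 1e9   # 204.8 GB/s, not 192

# Reading 2: the clock drops to 750 MHz and read+write are fully doubled.
full_duplex_750 = bytes_per_cycle * 750e6 * 2 / 1e9   # exactly 192.0 GB/s

# The "88%" uplift discussed around the article:
uplift = 192.0 / 102.4 - 1                            # 0.875, i.e. ~88% more

print(full_duplex_800, full_duplex_750, round(uplift, 3))
```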
 
Yeah, that's been my point all along. 128*800=102.4 was the original assumed bandwidth figure. The new rumor being reported suggests that figure ignores the ability to read and write at the same time doubling the theoretical peak.
If the bus was bidirectional and capable of 204.8 GB/s, we'd have heard it. That would have been the BW figure in the tech doc leaks, just as every BW figure is labelled clearly so devs know what they have to work with.

http://www.vgleaks.com/durango-memory-system-overview/

Audio/camera bus = 9 GB/s Read, 9 GB/s Write, clearly labelled
DDR3 = 68 GB/s Read and Write, clearly labelled
ESRAM = 102 GB/s Read and Write, clearly labelled

What you're suggesting is that in reality the ESRAM BW was 102 GB/s Read, 102 GB/s Write, but MS didn't tell anyone this or label their tech documents as such!

The 192 GB/s figure is also not a clear bidirectional BW figure, otherwise there wouldn't be any confusion about what the BW rate is. It wouldn't be reported as "The new bus is 192 GB/s but for some reason we only get 133 GB/s from it in general use."

That should give us 128*800*2=204.8, but instead we are presented with the figure 192GB/s. Richard assumes the base clock hasn't changed and makes a very strained argument about an 88% efficiency improvement through some undisclosed voodoo. The much simpler answer is that 128*750*2=192 and that the "upgrade" that is being leaked here actually implies a downgrade in the final clock speed, even if that hasn't been disclosed to third party developers yet.
I agree that the voodoo of the memory is a mystery, but there's no way the original bus was capable of 204.8 GB/s communication yet devs weren't told. Neither explanation works!
 
DF also says:
"No, people are suggesting that GPU has been downclocked to 750MHz, but it hasn't. 88% more is combining read/write at same time"

So that 88% increase is either adding BW or [insert believable explanation here]
 