Xbox One (Durango) Technical hardware investigation

Asking MS about more CUs is never going to get you a straight or completely honest answer. Their system was designed around 12; they have no interest in telling anyone that more than 12 is a good thing, because then people will make the obvious comparison to 18. Their narrative is the Goldilocks story: they chose everything just right, and anything else would "unbalance" the system. 12 probably is a magic number for their system. Of course you can design systems that perform great with more or fewer CUs, hence AMD having a whole line of cards from 8 to 30 CUs.

Starting off your thought process with that statement immediately red flags it. You assume that a) they are lying or b) what they are saying isn't true. You have an assumption that numerically larger is better. But looking at the system holistically, that may not be true. They made a tradeoff, and that tradeoff limited them in some areas but benefits them in others, and may be better at producing a systemic benefit.
 
I reckon they knew what was going to happen when they tested the upclock vs the extra 2 CUs. For those guys who are so deep in the tech it must have been a no-brainer. The real benefit to MS was the opportunity for PR: giving the impression that their faster 12 CUs were more effective than Sony's 14 slower ones...
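For a rough sense of the raw numbers in that comparison (a back-of-the-envelope sketch, assuming the standard GCN figures of 64 ALUs per CU and 2 FLOPs per ALU per cycle):

```python
# Back-of-the-envelope GCN shader throughput: CUs x 64 ALUs x 2 FLOPs/cycle x clock.
def gflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz

print(gflops(12, 0.853))  # 12 CUs upclocked to 853 MHz   -> ~1310 GFLOPS
print(gflops(14, 0.800))  # 14 CUs at the original 800 MHz -> ~1434 GFLOPS
print(gflops(18, 0.800))  # 18 CUs at 800 MHz (PS4)        -> ~1843 GFLOPS
```

On paper the two extra CUs win on peak throughput, which is presumably why MS frames its claim around whole-system utilization rather than raw FLOPs.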
 
The DMEs and a number of sundry units like the display controller share a connection capable of 25 GB/s in each direction.

I believe this is the expansion hub that is part of AMD GPUs that allows them to more freely add or subtract units from the GPU.

Wouldn't the bandwidth of the DMEs have increased with the increase in the GPU speed, giving 27.3 GB/s versus 25.6?
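If that link runs at the GPU clock, it would scale with the upclock (a quick sketch, assuming the 25.6 GB/s figure is tied to the original 800 MHz clock):

```python
# If the 25.6 GB/s link is clocked with the GPU, it scales with the 800 -> 853 MHz upclock.
base_bw = 25.6                # GB/s at 800 MHz
scaled_bw = base_bw * 853 / 800
print(round(scaled_bw, 1))    # -> 27.3 GB/s
```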
 
In truth I have always been under the impression that 12-14 CUs is somehow the point of balance for the system.
Sony originally had the 14+4 arrangement (as reported by VGleaks, based upon their tech documentation) before realizing that 18 CUs could be a much more powerful marketing tool.
But I do not believe that the next-generation console competition will be remembered as the era of the CU. No, there is much more to it than that.

Now, I am curious to know what ekim already seems to know.
 
Starting off your thought process with that statement immediately red flags it. You assume that a) they are lying or b) what they are saying isn't true. You have an assumption that numerically larger is better.

No, he's just assuming that Microsoft would never admit to the public that their system is weaker than Sony's. They'll do whatever they can (true or not) to avoid having a portion of their customers jump ship if convinced that Sony will provide better visuals, physics, etc.

Which is a pretty good assumption, IMO.
 
In truth I have always been under the impression that 12-14 CUs is somehow the point of balance for the system.
Sony originally had the 14+4 arrangement (as reported by VGleaks, based upon their tech documentation) before realizing that 18 CUs could be a much more powerful marketing tool.
But I do not believe that the next-generation console competition will be remembered as the era of the CU. No, there is much more to it than that.

Now, I am curious to know what ekim already seems to know.

Well you are simply taking the available data and making it match MS's message, IMO. Sony has always had 18 CUs; that number has nothing to do with marketing. We know you can make a 30 CU video card, given enough video memory bandwidth. Sony is pushing GPGPU with their added ACEs and has given talks saying you can shift those CU resources back and forth between rendering and compute; that is not an admission that some number of CUs is somehow wasted. There is a leaked talk that mentions this: the speaker says you can shift those resources per frame, as little or as much as you like. Don't read flexibility as a weakness. Even if there is some small drop due to scaling as CU count goes up, more is always better assuming you can feed them, and 176 GB/s is more than you need for 14 CUs (the 7870 has 20 CUs with 154 GB/s).
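To put the "can you feed them" point in numbers (a rough comparison of my own, taking the quoted peak figures at face value):

```python
# Peak bandwidth available per CU, using the quoted theoretical figures.
cards = [("Radeon 7870", 154, 20), ("PS4 @ 14 CUs", 176, 14), ("PS4 @ 18 CUs", 176, 18)]
for name, bw_gbs, cus in cards:
    print(f"{name}: {bw_gbs / cus:.1f} GB/s per CU")
# Radeon 7870:   7.7 GB/s per CU
# PS4 @ 14 CUs: 12.6 GB/s per CU
# PS4 @ 18 CUs:  9.8 GB/s per CU
```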
 
Well you are simply taking the available data and making it match MS's message, IMO. Sony has always had 18 CUs; that number has nothing to do with marketing. We know you can make a 30 CU video card, given enough video memory bandwidth. Sony is pushing GPGPU with their added ACEs and has given talks saying you can shift those CU resources back and forth between rendering and compute; that is not an admission that some number of CUs is somehow wasted. There is a leaked talk that mentions this: the speaker says you can shift those resources per frame, as little or as much as you like. Don't read flexibility as a weakness. Even if there is some small drop due to scaling as CU count goes up, more is always better assuming you can feed them, and 176 GB/s is more than you need for 14 CUs (the 7870 has 20 CUs with 154 GB/s).

That 176 GB/s figure is a theoretical peak. The real world will be less. I asked over on the Orbis thread what people thought this might be, but couldn't get an answer. I said I would assume the same ratio between theoretical and real world as on the X1, about 75%, and nobody had any better ideas... or if they did they kept them to themselves!

So for the sake of argument that would leave the PS4 with about 130 GB/s, less anything the CPU needs (a max of 20 GB/s)... so if the X1 is apparently balanced with its 200 GB/s (less a max of 20 GB/s for the CPU), the 14 CU figure may already be a bit much.

Of course I could be up the pole with that 75% figure, but can't see it being miles off....
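Spelling that arithmetic out (a sketch built entirely on that assumed 75% figure, which is not a measured number):

```python
# Hypothetical real-world bandwidth under an ASSUMED 75% efficiency factor.
theoretical = 176.0   # GB/s, GDDR5 theoretical peak
efficiency = 0.75     # assumption, not a measured figure
cpu_max = 20.0        # GB/s, maximum the CPU can pull

real_world = theoretical * efficiency   # -> 132 GB/s, i.e. "about 130"
gpu_floor = real_world - cpu_max        # -> 112 GB/s if the CPU takes its max
print(real_world, gpu_floor)
```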
 
Any discussion about the 14 + 4 CU split should be taboo.
At this point it's nothing more than noise.

Aren't people referencing it in relation to the general idea MS is selling (right or wrong), that there are diminishing returns with more CUs?

I don't think anybody is saying there's a physical 14/4 split anymore, or has for a long time.
 
That 176 GB/s figure is a theoretical peak. The real world will be less. I asked over on the Orbis thread what people thought this might be, but couldn't get an answer. I said I would assume the same ratio between theoretical and real world as on the X1, about 75%, and nobody had any better ideas... or if they did they kept them to themselves!

So for the sake of argument that would leave the PS4 with about 130 GB/s, less anything the CPU needs (a max of 20 GB/s)... so if the X1 is apparently balanced with its 200 GB/s (less a max of 20 GB/s for the CPU), the 14 CU figure may already be a bit much.

Of course I could be up the pole with that 75% figure, but can't see it being miles off....

All bandwidth numbers are peak; so are the ones AMD publishes with their card specs, so that point is moot. Thinking that MS has 200 GB/s available for 12 CUs is kind of silly. They may see 200 GB/s aggregate at times, but in general their available bandwidth is much lower; remember the eSRAM is only ~0.4% of the RAM.
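For the record, here's how that fraction works out (32 MB of eSRAM against 8 GB of main RAM):

```python
# The eSRAM as a fraction of total RAM: 32 MB out of 8 GB.
esram_mb = 32
ram_mb = 8 * 1024
print(f"{esram_mb / ram_mb:.2%}")  # -> 0.39%
```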
 
All bandwidth numbers are peak; so are the ones AMD publishes with their card specs, so that point is moot. Thinking that MS has 200 GB/s available for 12 CUs is kind of silly. They may see 200 GB/s aggregate at times, but in general their available bandwidth is much lower; remember the eSRAM is only ~0.4% of the RAM.

They said 150 GB/s can be a common number.
 
At the very least it shouldn't be in the Xbox thread.
Why not? MS are talking about balance and their upclock being more effective than an additional 2 CUs in whatever they were testing (for X1). "Hardware balanced at 14 CUs" does seem somewhat relevant.
 
I like that people will apply an arbitrary 100% utilization to one design to boast of its elegance, but invoke an edge-case scenario to show how the other design would run like crap.

I also love the fact that the words from one horse's mouth are always taken as deceptive and lying, while the other horse's words are generously read as "what is actually meant is blah blah blah..."

But hey, what's new.
 
No, he's just assuming that Microsoft would never admit to the public that their system is weaker than Sony's. They'll do whatever they can (true or not) to avoid having a portion of their customers jump ship if convinced that Sony will provide better visuals, physics, etc.

Which is a pretty good assumption, IMO.

I think it's bollocks unless you also assume that Sony would "admit that their system is weaker". Whatever "weaker" means.

My assumption, which I think is the most reasonable one, is that MS believes they designed a great system: full stop. Here's why: full stop. We made tradeoffs, here's what they are, and this is why you will enjoy our system regardless of what other people (i.e., the competitor, the internet fora, or the digerati) say.

Those descriptions have nothing to do with the other guy except to countervail a perception that was wholly manufactured.
 
All bandwidth numbers are peak; so are the ones AMD publishes with their card specs, so that point is moot. Thinking that MS has 200 GB/s available for 12 CUs is kind of silly. They may see 200 GB/s aggregate at times, but in general their available bandwidth is much lower; remember the eSRAM is only ~0.4% of the RAM.

No, the 200 GB/s is real world... according to the DF doc, anyway... The combined peak of DDR3 and eSRAM is something like 270 GB/s.

That 0.4%, if used correctly, will be used really heavily, as the stages in the pipeline use it to store intermediate results. It punches well above its weight for such a little guy...
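For reference, the addition behind that figure (assuming the commonly quoted ~68 GB/s DDR3 peak and the ~204 GB/s post-upclock eSRAM read+write peak):

```python
# Combined theoretical peak, taking the commonly quoted figures at face value.
ddr3_peak = 68.3    # GB/s (256-bit DDR3-2133)
esram_peak = 204.0  # GB/s (post-upclock simultaneous read/write peak)
print(ddr3_peak + esram_peak)  # -> ~272 GB/s, i.e. "something like 270"
```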
 
They said 150 GB/s can be a common number.

That is not very meaningful IMO. Is that an average or a particular operation? We won't know these numbers until developers start leaking real world experiences with real engines. Now we just have some MS PR.
 
No, the 200 GB/s is real world... according to the DF doc, anyway... The combined peak of DDR3 and eSRAM is something like 270 GB/s.

That 0.4%, if used correctly, will be used really heavily, as the stages in the pipeline use it to store intermediate results. It punches well above its weight for such a little guy...

This is just a numbers game. By the same argument, having caches on processors would be pointless because they are merely a fraction of the system RAM, so why bother?
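To put illustrative numbers on that analogy (assuming the 4 MB of total Jaguar L2 in these consoles against 8 GB of RAM):

```python
# A CPU's L2 cache is an even smaller fraction of system RAM than the eSRAM is,
# yet nobody calls it pointless: what matters is the traffic that goes through it.
l2_mb = 4           # 2 Jaguar modules x 2 MB shared L2 each
ram_mb = 8 * 1024
print(f"{l2_mb / ram_mb:.3%}")  # -> 0.049%
```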
 
That 176 GB/s figure is a theoretical peak. The real world will be less. I asked over on the Orbis thread what people thought this might be, but couldn't get an answer. I said I would assume the same ratio between theoretical and real world as on the X1, about 75%, and nobody had any better ideas... or if they did they kept them to themselves!

So for the sake of argument that would leave the PS4 with about 130 GB/s, less anything the CPU needs (a max of 20 GB/s)... so if the X1 is apparently balanced with its 200 GB/s (less a max of 20 GB/s for the CPU), the 14 CU figure may already be a bit much.

Of course I could be up the pole with that 75% figure, but can't see it being miles off....

So from your perspective the NOT XB1 only has 110 GB/s, which is basically the 102 GB/s of the original eSRAM bandwidth spec. :LOL: Nicely done. The tables have turned and the XB1 is the bandwidth MONSTER !!! ;)

We also have X1 Balance (tm) at 200 GB/s and 12 CUs, meaning NOT XB1s will limp along starving for bandwidth and falling by the wayside. :devilish:

More seriously, I would step back and think about how many developers over how many years have been able to access the bandwidth of GDDR5 memory. Is every game on every GDDR5-based GPU throwing away 25% of its bandwidth all these years?
 