Xbox One (Durango) Technical hardware investigation

eSRAM average 150GB/s. DDR3, 68GB/s. Magically, 218GB/s.

French you are correct...that's a lot of convoluted work. The main constraints on xb1 were Kinect and RAM choice. Different choices in both regards and you would get a design closer to PS4.
You forget power consumption, though it applies to both systems.
 
They quote peak bandwidth, as does everyone. There has to be a scenario where this is possible.

Around and above 200GB/s is realistic with their eSRAM and DDR3.

From what I understand, peak/theoretical maximum is only possible for very short amounts of time. For example, if you are reading a consecutive block of data into the GPU from system RAM, you may get 68GB/s... for a few nanoseconds (it would take about a tenth of a second to read the entire 8GB! ...and what would be the point of that...), then you are on to something else, which means setup work for the memory controllers, which, whilst blisteringly fast, does knock you back a bit. So over an entire second you get a fair chunk less than the theoretical peak.
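
A quick back-of-the-envelope sketch of where the ~218GB/s figure comes from and why 68GB/s works out to roughly a tenth of a second for the whole 8GB. This uses only the numbers quoted in this thread (150GB/s average eSRAM, 68GB/s DDR3, 8GB of RAM), which are claims rather than official specs:

Code:
# Figures are the ones quoted in this thread, not official specs.
esram_avg_gbps = 150.0   # claimed average eSRAM bandwidth
ddr3_peak_gbps = 68.0    # DDR3-2133 on a 256-bit bus
ram_size_gb    = 8.0

combined = esram_avg_gbps + ddr3_peak_gbps
print(f"combined headline bandwidth: {combined:.0f} GB/s")            # ~218 GB/s

t = ram_size_gb / ddr3_peak_gbps
print(f"time to stream all {ram_size_gb:.0f}GB over DDR3 at peak: {t:.2f} s")   # ~0.12 s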
 
You forget power consumption, though it applies to both systems.

True... but unless you are aiming for top eco credentials, the presumed power savings of going with eSRAM will only be useful if you apply the power elsewhere in the SoC budget.

It's fair to assume that hasn't been the case here.
 
True... but unless you are aiming for top eco credentials, the presumed power savings of going with eSRAM will only be useful if you apply the power elsewhere in the SoC budget.

It's fair to assume that hasn't been the case here.

For the most part. However, it could have helped enable the modest upclocks.
 
To a certain extent they are obviously playing up the clock speed increase over enabling all 14/14 CUs because of yield and cost issues.

Imagine the hit on yields if they couldn't accommodate defects in any of the compute units.

I'm disappointed that, this being a "versus" article, he didn't immediately pick up on that and question their statements.

That struck me as well. I guess it is an implicit suggestion from DF that if Sony were to be so kind as to do a "deep dive" with the PS4, they would get a similar "VS" treatment ;)

Glad for the article, but I am still mildly annoyed at the PR seeping in. Before, the upclocks were just nice little bumps in performance because MS could do it without risking yields, so why not. Now they are fundamental to keeping "parity" with PS4 and needed to get consistent frame rates. MY GOD, how would the XB1 work without those single-digit percentage increases in clock rates? What kind of dysfunctional machine would we be dealing with sans the upclocks? :LOL:

I was also not a fan of drawing a fallacious comparison between the 360/PS3 and PS4/XB1. Sure, the eDRAM-driven 360 was easier to program for, but that has nothing at all to do with what is going on in the next generation.
 
To a certain extent they are obviously playing up the clock speed increase over enabling all 14/14 CUs because of yield and cost issues.

Imagine the hit on yields if they couldn't accommodate defects in any of the compute units.

I'm disappointed that, this being a "versus" article, he didn't immediately pick up on that and question their statements.

I think you may be assuming too much. Does activating all 14 CUs have a thermal impact as well as a performance impact?

If it does, they may very well not have been able to do both 14 CUs *and* the up-clock due to the compounding impact it would have on power draw and heat.

Keep in mind the up-clock essentially gives them 13 CUs in terms of compute power, but it also speeds up other parts of the subsystem as well, which I could easily see having an overall advantage over just enabling all 14 CUs.
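
A rough sketch of that trade-off, using figures that are assumed here rather than stated in the post (12 active CUs, 800 to 853MHz upclock, 14 CUs as the alternative, 64 ALUs per CU, 2 FLOPs per ALU per clock):

Code:
# Assumed figures: 12 CUs @ 853MHz (shipped) vs 14 CUs @ 800MHz (the alternative).
ALUS_PER_CU   = 64    # GCN compute unit width
FLOPS_PER_CLK = 2     # multiply-add

def gflops(cus, mhz):
    return cus * ALUS_PER_CU * FLOPS_PER_CLK * mhz / 1000.0

print(f"12 CUs @ 853MHz: {gflops(12, 853):.0f} GFLOPS "
      f"(~{12 * 853 / 800:.1f} '800MHz CUs', i.e. roughly the 13 CUs mentioned above)")
print(f"14 CUs @ 800MHz: {gflops(14, 800):.0f} GFLOPS")
print(f"Clock bump alone: +{(853 / 800 - 1) * 100:.1f}%, and it applies to the whole GPU, "
      f"not just the shader array")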
 
And if the PS3 only used the PPU for CPU tasks, you'd be right. But audio, AI, and almost everything they could manage was shunted off to the SPEs. In some games, the PPU was mainly a glorified scheduler, handing out jobs for the rest of the CPU. So, yes, if you only count the PPU, it cannot outclass an 8-core Jaguar. But if you include the entire CPU, it's twice as fast as an 8-core Jaguar. In theory. In practice, not so much. The X1 CPU and the 360 CPU have the same FLOPS, in theory. In real running code, the X1 is about 6-8x faster. But optimize a VMX calculation on the 360, and it will execute twice as fast as the X1. Same for the PS3.

Jesus, is the situation that dire for the next-gen consoles CPU-wise? Good god.
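
To put rough numbers on the "twice as fast in theory" claim in the quote above, here's a hedged paper-FLOPS sketch. The per-unit figures (3.2GHz PPU/SPEs at 8 SP FLOPs per cycle each, ~8 SP FLOPs per cycle per Jaguar core) are commonly cited assumptions, not anything from this thread, and as the quote says, real code lands nowhere near these numbers:

Code:
# Paper peak single-precision FLOPS only; per-unit figures are assumed.
cell_gflops = (1 + 6) * 8 * 3.2    # PPU + 6 game-available SPEs @ 3.2GHz -> ~179 GFLOPS
jaguar_16   = 8 * 8 * 1.6          # 8 Jaguar cores @ 1.6GHz -> ~102 GFLOPS
jaguar_175  = 8 * 8 * 1.75         # after the XB1 CPU upclock -> ~112 GFLOPS

print(f"Cell (PPU + 6 SPEs): {cell_gflops:.0f} GFLOPS")
print(f"8x Jaguar @ 1.6GHz:  {jaguar_16:.0f} GFLOPS")
print(f"8x Jaguar @ 1.75GHz: {jaguar_175:.0f} GFLOPS")
print(f"Cell vs Jaguar on paper: ~{cell_gflops / jaguar_16:.1f}x")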
 
That struck me as well. I guess it is an implicit suggestion from DF that if Sony were to be so kind as to do a "deep dive" with the PS4, they would get a similar "VS" treatment ;)

Glad for the article, but I am still mildly annoyed at the PR seeping in. Before, the upclocks were just nice little bumps in performance because MS could do it without risking yields, so why not. Now they are fundamental to keeping "parity" with PS4 and needed to get consistent frame rates. MY GOD, how would the XB1 work without those single-digit percentage increases in clock rates? What kind of dysfunctional machine would we be dealing with sans the upclocks? :LOL:

I was also not a fan of drawing a fallacious comparison between the 360/PS3 and PS4/XB1. Sure, the eDRAM-driven 360 was easier to program for, but that has nothing at all to do with what is going on in the next generation.
I think MS did a pretty good job of trying to avoid it turning too much into a versus discussion, but as we know from this forum it's pretty much impossible. :LOL: (Nature of the beast.) Also, a lot of editorial content was included.

There were two upclocks; I know they said that the GPU one was free, but did they also say that about the CPU one?
The GPU one being free doesn't mean they didn't have the option of either enabling the CUs or doing the upclock (both may have been too costly together).
We don't know how much it would have cost to enable the CUs, but in the end they believed that the upclock was better overall.

As for the CPU upclock, they're just saying that you get more bang for your buck with it.
Without it, sure, it would've been fine; with it, it makes a bigger impact than people assumed.

Also worth remembering, we don't know officially what the PS4 CPU is clocked at; it could easily have changed, especially if their analysis showed similar benefits to a CPU upclock, etc.

Jesus, is the situation that dire for the next-gen consoles CPU-wise? Good god.
Yes, but they will work around it; no choice in the matter. 8 cores sounds impressive though. :)
 
From the DF piece

"Did we do a good job when we did all of our analysis and simulations a couple of years ago, and guessing where games would be in terms of utilisation.

Indeed the crux of the issue. Working within the parameters they were given (thermal/power budget, Kinect, AV stuff, etc.), what should they choose to do design-wise? They chose a reasonable course of creating the "xbox720" to run "xbox720" games. Take some 360 titles and find the strengths and weaknesses of the 360 in grinding through that game code. Design to maximize the strengths and minimize the weaknesses. The MS engineers can then make use of the fact that the XB1 will only really need to be optimized to run DirectX 11.2 code. Fewer issues to worry about means more engineering time and SoC area to spend on making the sequel to the 360/DX9 system that much faster.

The only way to view how the word "balance" is being [over]used is through the lens of MS's goal for the XB1. The XB1 is "balanced" not as a gaming console but as a Kinect-enabled AV system that runs DirectX 11.2 game code efficiently, taking into account how 360 devs would most likely deal with next-gen type games.
 
And if the PS3 only used the PPU for CPU tasks, you'd be right. But audio, AI, and almost everything they could manage was shunted off to the SPEs. In some games, the PPU was mainly a glorified scheduler, handing out jobs for the rest of the CPU. So, yes, if you only count the PPU, it cannot outclass an 8-core Jaguar. But if you include the entire CPU, it's twice as fast as an 8-core Jaguar. In theory. In practice, not so much. The X1 CPU and the 360 CPU have the same FLOPS, in theory. In real running code, the X1 is about 6-8x faster. But optimize a VMX calculation on the 360, and it will execute twice as fast as the X1. Same for the PS3.

These FLOPS. Now, it was a LONG time ago, but when I was doing asm programming I would try to avoid floating-point arithmetic like the plague; it was slow, used loads of cycles, and was hard (no hardware support).
However, when discussing the merits of CPUs today, FLOPS performance seems to be a significant KPI. Has something changed? Are CPUs no longer doing so much compare, move, branch, integer arithmetic etc. and doing loads of FLOPS instead?
If there is a requirement to do lots of FLOPS on these consoles' CPUs, surely it would be better done using the compute functions of the GPU?
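
On that last question, a hedged comparison of paper throughput (assumed figures: 768 GPU ALUs at 853MHz doing 2 FLOPs per clock vs 8 Jaguar cores at 1.75GHz doing ~8 SP FLOPs per cycle) shows why wide FP work tends to migrate to GPU compute while branchy, latency-sensitive code stays on the CPU:

Code:
# Assumed figures, not official: XB1-class GPU = 12 CUs x 64 ALUs = 768 lanes @ 853MHz;
# CPU = 8 Jaguar cores @ 1.75GHz, ~8 single-precision FLOPs per cycle per core.
gpu_gflops = 768 * 2 * 853 / 1000   # ~1310 GFLOPS
cpu_gflops = 8 * 8 * 1.75           # ~112 GFLOPS

print(f"GPU peak ~{gpu_gflops:.0f} GFLOPS vs CPU peak ~{cpu_gflops:.0f} GFLOPS "
      f"(~{gpu_gflops / cpu_gflops:.0f}x)")
# The catch: GPU compute only pays off for wide, data-parallel, latency-tolerant work;
# game logic full of branches and pointer chasing still lives on the CPU.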
 
The only way to view how the word "balance" is being [over]used is through the lens of MS's goal for the XB1. The XB1 is "balanced" not as a gaming console but as a Kinect-enabled AV system that runs DirectX 11.2 game code efficiently, taking into account how 360 devs would most likely deal with next-gen type games.

It's a possible explanation but not the only one. Can you build a 'balanced' system around 16 CUs? 18 CUs? 24 CUs? Sure, of course you can, but there are other bottlenecks present in this console generation that may make it pointless. In this equation they could be:

1. CPU Power
2. 1080p maximum resolution
3. Power, which equals heat + noise
 
From the DF piece



Indeed the crux of the issue. Working within the parameters they were given (thermal/power budget, Kinect, AV stuff, etc.), what should they choose to do design-wise? They chose a reasonable course of creating the "xbox720" to run "xbox720" games. Take some 360 titles and find the strengths and weaknesses of the 360 in grinding through that game code. Design to maximize the strengths and minimize the weaknesses. The MS engineers can then make use of the fact that the XB1 will only really need to be optimized to run DirectX 11.2 code. Fewer issues to worry about means more engineering time and SoC area to spend on making the sequel to the 360/DX9 system that much faster.

The only way to view how the word "balance" is being [over]used is through the lens of MS's goal for the XB1. The XB1 is "balanced" not as a gaming console but as a Kinect-enabled AV system that runs DirectX 11.2 game code efficiently, taking into account how 360 devs would most likely deal with next-gen type games.
Wow, how biased... you know MSFT can profile games on the 360 and on PC; they talk with the industry at large (from hardware vendors to software publishers), they are writing the APIs, and so on.
I think that a couple of years ago MSFT engineers had a better (and wider) view and understanding of where the industry (again at large, both software and hardware) was heading than anybody here (including devs).
Now, they were a bit conservative, it seems, not with the silicon (at large, including RAM) but with power draw; no matter the slight overclock, they are hardly pushing the bar. It is not like comparable hardware in the PC world pulls an insane amount of power.
But it also applies to Sony (different design choices); it seems both systems are set to burn less power than their predecessors.
How is the XB1 not a proper gaming system? I would be surprised if anybody could do better iso-power outside of Intel. Yes, Intel can do better, but that doesn't tell us much about either Sony's or MSFT's choices, as Intel's tech is out of their reach.
 
It boils down to this for me: why were thermals so important that MS would allow them to "limit" (air quotes) power? What driving factor could have been so strong that heat dissipation to the extreme was the primary goal of their design? EU power restrictions? RROD?

I think they went too far. In fact, I think nearly all of their design decisions for nearly every product over the last 15 months were totally off base. Only the unification of the OSes, from an engineering standpoint, makes sense.
 
I think MS did a pretty good job of trying to avoid it turning too much into a versus discussion, but as we know from this forum it's pretty much impossible. :LOL: (Nature of the beast.) Also, a lot of editorial content was included.

There were two upclocks; I know they said that the GPU one was free, but did they also say that about the CPU one?

Nothing that I am aware of made the distinction between the two, so I would assume (with all the pitfalls that come from assuming) that they were clock bumps made because they were essentially free. Seems to me that if the CPU clock speed was so fundamental, it would have been designed with that upclock speed in mind. Since it was 1.6GHz in the leaks, I don't think it was as fundamental as they are claiming right now. Maybe such speculation will have to wait for the "book".

As for the CPU upclock, they're just saying that you get more bang for your buck with it.
Without it, sure, it would've been fine; with it, it makes a bigger impact than people assumed.

We don't exactly know how much BIGGER the impact really is, so any speculation on the validity of "assumptions" made by "people" is itself an assumption as well :devilish:

In any case it's nice to have.
 
And if the PS3 only used the PPU for CPU tasks, you'd be right. But audio, AI, and almost everything they could manage was shunted off to the SPEs. In some games, the PPU was mainly a glorified scheduler, handing out jobs for the rest of the CPU. So, yes, if you only count the PPU, it cannot outclass an 8-core Jaguar. But if you include the entire CPU, it's twice as fast as an 8-core Jaguar. In theory. In practice, not so much. The X1 CPU and the 360 CPU have the same FLOPS, in theory. In real running code, the X1 is about 6-8x faster. But optimize a VMX calculation on the 360, and it will execute twice as fast as the X1. Same for the PS3.

Those SPUs were doing a lot of GPU offloading too; of course you are right that some calculations, like audio, were offloaded as well. With the PS4 GPU not needing any help from the CPU, and some dedicated audio hardware (mixing, decoding), it seems the PS4 cores should be left to do traditional CPU work and should be a huge step up in power from last gen. So it begs the question: why would the CPU be a bottleneck now?

CPU requirements are also per game. So how is it that an MS engineer can make a blanket statement about dropping frames due to the CPU? How many games does he have profiling data from?


Jesus, is the situation that dire for the next-gen consoles CPU-wise? Good god.

I don't think that was the takeaway. 6-8x faster with real running code. The VMX is only for specific calculations, stuff Sony might want to do on the GPU and that MS has support for in the audio block? I know these are not all interchangeable, but there is overlap, I'm sure.

How does the AVX in the Jaguars compare to VMX?
 
It boils down to this for me: why were thermals so important that MS would allow them to "limit" (air quotes) power? What driving factor could have been so strong that heat dissipation to the extreme was the primary goal of their design? EU power restrictions? RROD?

I think they went too far. In fact, I think nearly all of their design decisions for nearly every product over the last 15 months were totally off base. Only the unification of the OSes, from an engineering standpoint, makes sense.
I would think costs: a better cooler has to be a tad more expensive, and a higher clock could slightly impact yields. MSFT did not want to lose money on hardware, even at launch, so I think they were under high pressure to cut costs as much as they could.
Say MSFT launched at $400 but, like Sony, lost $60 on every unit; I guess they could have afforded the slightly lower yields from pushing the clocks significantly higher (1.8GHz/950MHz looks reasonable looking at PC parts), a better cooling solution and so on.
Maybe they were a bit too concerned with reliability too; I guess we will never know :(
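
Just to quantify the hypothetical clocks floated above against the shipped (reported) 1.75GHz CPU / 853MHz GPU, assuming 768 GPU ALUs at 2 FLOPs per clock; purely illustrative:

Code:
# Hypothetical 950MHz GPU / 1.8GHz CPU vs the shipped (reported) 853MHz / 1.75GHz.
GPU_ALUS = 768   # 12 CUs x 64 ALUs (assumed)

for label, mhz in [("shipped", 853), ("hypothetical", 950)]:
    print(f"GPU {label:12s} @ {mhz}MHz -> {GPU_ALUS * 2 * mhz / 1e6:.2f} TFLOPS")

print(f"GPU clock gain: +{(950 / 853 - 1) * 100:.0f}%")
print(f"CPU clock gain: +{(1.8 / 1.75 - 1) * 100:.1f}% over 1.75GHz, "
      f"+{(1.8 / 1.6 - 1) * 100:.1f}% over the original 1.6GHz")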
 
Those SPUs were doing a lot of GPU offloading too; of course you are right that some calculations, like audio, were offloaded as well. With the PS4 GPU not needing any help from the CPU, and some dedicated audio hardware (mixing, decoding), it seems the PS4 cores should be left to do traditional CPU work and should be a huge step up in power from last gen. So it begs the question: why would the CPU be a bottleneck now?

CPU requirements are also per game. So how is it that an MS engineer can make a blanket statement about dropping frames due to the CPU? How many games does he have profiling data from?
Man, they have data for all the games on the 360 and on PC (and so with multiple CPU configurations).
Look at how some games behave with AMD vs Intel processors; it is not a blanket statement, and it should surprise nobody.
 
I guess. It's important, for those reading between the PR lines and concluding that there might be a second GPU or that the custom processors are something magical, to finally put those notions to rest. The 8 processors in the audio block explain where the 15 comes from, rather than several secret processors not revealed before. Likewise, it's SI and not VI or GCN 2 or anything uber fancy.

I fearlessly predict that folks will point to "synergistic qualities" in the XB1 and claim that any increase in the clock will have a two- or three-fold (or more) increase in performance over what the "mere" upclock would suggest. Taking the possible upclocks and synergistic qualities of the XB1 all into account gives you, practically, a 2nd GPU performance-wise. Someone check MisterX and see if I'm right. :devilish:
 
Nothing that I am aware of made the distinction between the two, so I would assume (with all the pitfalls that come from assuming) that they were clock bumps made because they were essentially free. Seems to me that if the CPU clock speed was so fundamental, it would have been designed with that upclock speed in mind. Since it was 1.6GHz in the leaks, I don't think it was as fundamental as they are claiming right now. Maybe such speculation will have to wait for the "book".



We don't exactly know how much BIGGER the impact really is, so any speculation on the validity of "assumptions" made by "people" is itself an assumption as well :devilish:

In any case it's nice to have.
I've only heard them say that the GPU upclock was free. I'm not saying the CPU one wasn't, just that I'm not making that assumption, as that isn't what was said.
Fundamental? The developers would've worked around it, but given the chance to up the clock now, prior to release, knowing that things are CPU-bound, why not? If it did cost something, then they deemed it worth the extra cost.

BIGGER is just that: bigger than most people on the collective net think, i.e. the people who are totally dismissive of it.
Fact is, like I said, developers would've worked around it regardless.
They're just saying things are a bit more CPU-bound than is assumed.
 
I would think costs: a better cooler has to be a tad more expensive, and a higher clock could slightly impact yields. MSFT did not want to lose money on hardware, even at launch, so I think they were under high pressure to cut costs as much as they could.
Say MSFT launched at $400 but, like Sony, lost $60 on every unit; I guess they could have afforded the slightly lower yields from pushing the clocks significantly higher (1.8GHz/950MHz looks reasonable looking at PC parts), a better cooling solution and so on.
Maybe they were a bit too concerned with reliability too; I guess we will never know :(

I dunno. I think you have to look at the total cost of development and sale. There's no way to know how much MS spent versus Sony on the development of their systems. However, what is telling is that MS is spending tons more effort convincing buyers, and maybe developers, that their design choices were worthwhile. Never mind silly things like releasing a system designed around Sea Islands architecture AFTER the Volcanic Islands discrete parts become available.

Obviously they knew the hardware roadmap, and they missed on nearly every gamble this time around.
 