Xbox One (Durango) Technical hardware investigation

"Andrew said it pretty well: we really wanted to build a high performance, power-efficient box,"

"Having ESRAM costs very little power and has the opportunity to give you very high bandwidth. You can reduce the bandwidth on external memory - that saves a lot of power consumption and the commodity memory is cheaper as well so you can afford more. That's really a driving force behind that... if you want a high memory capacity, relatively low power and a lot of bandwidth there are not too many ways of solving that."

Or you could highlight it like this:
"Having ESRAM costs very little power and has the opportunity to give you very high bandwidth. You can reduce the bandwidth on external memory - that saves a lot of power consumption and the commodity memory is cheaper as well so you can afford more. That's really a driving force behind that... if you want a high memory capacity, relatively low power and a lot of bandwidth there are not too many ways of solving that."

Anyway, my point is that it's not really OR, they're AND, so you can't really just divide it up like that.
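For reference, here's the arithmetic behind that AND, using the publicly reported figures (treat the sketch as a rough illustration, not a spec sheet):

```python
# Back-of-envelope for the DDR3 + eSRAM combination the quote describes,
# using the publicly reported Xbox One figures.
ddr3_gbs  = 2133e6 * 256 / 8 / 1e9   # DDR3-2133 on a 256-bit bus -> ~68 GB/s
esram_gbs = 853e6 * 128 / 1e9        # 853 MHz x 128 bytes/cycle  -> ~109 GB/s

# Commodity DDR3 supplies the 8GB capacity cheaply and at low power; the
# small eSRAM pool supplies the bandwidth the DDR3 bus lacks.
print(f"DDR3 ~{ddr3_gbs:.0f} GB/s + eSRAM ~{esram_gbs:.0f} GB/s per direction")
```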
 
I don't think it makes much difference, because that would imply the PS4 still has 400 GFLOPS on the compute side that must be compensated by something in the Xbox One hardware anyway.

14+4 or just plain 18, in the end I think it's all the same.

Well, the XB1 does seem designed to have special-function hardware reduce the need for GPGPU usage.
 
What I really miss is an explanation of how the paper specs require a 215W PSU. I would really like to understand where I made a mistake in my assumptions.
 
What I really miss is an explanation of how the paper specs require a 215W PSU. I would really like to understand where I made a mistake in my assumptions.

Perhaps it's the safety margin combined with the efficiency margin, and maybe there's no benefit in making a lower-rated PSU? Sort of like how the cost of making a 150GB mechanical drive is nearly the same as making a 215GB mechanical drive. At some level the costs no longer decrease. (I don't know, just grasping at wild ideas.)

Or perhaps it's the safety margin plus the efficiency margin with all USB3 ports loaded. *shrug*
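As a rough illustration of how those margins could stack up, a back-of-envelope sketch. Every component figure in it is a guess for illustration, not a measured number:

```python
# Back-of-envelope: how safety and loading margins could stack up to a
# 215W rating. All component figures below are guesses, not measurements.
soc       = 100.0    # APU under full load (guess)
drives_io = 15.0     # HDD, BD drive, fans, WiFi (guess)
usb3      = 3 * 4.5  # three USB3 ports at full spec load
kinect    = 15.0     # Kinect is powered by the console (guess)

peak_draw = soc + drives_io + usb3 + kinect   # worst-case draw
derating  = 0.7      # common practice: run a PSU well below its rating

print(f"peak draw ~{peak_draw:.0f}W -> rating >= {peak_draw / derating:.0f}W")
# peak draw ~144W -> rating >= 205W, i.e. in the neighbourhood of 215W
```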
 
Such as? The only thing I can see is audio, which is generally a CPU job, and not a huge one at that.

It kinda depends on how you want to split it up, in the sense that giving devs SHAPE et al. means they have a place to do advanced modern audio calculations, whereas on PS4 they might have to use those CUs for that kind of stuff. On the one hand, those extra CUs can be used for other general-purpose stuff too. Then again, general processors carry much larger processing overhead compared to specialized, targeted, fixed-function processors. So the end result is that having fixed-function blocks like SHAPE presumably keeps the X1's CUs more open for business for other types of calculations (non-audio stuff). So do we give them credit on that front for leveraging SHAPE to make GPGPU usage of their CUs more targeted towards non-audio stuff? It's kinda in the eye of the beholder, imho.
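A toy way to see that accounting, with all timings invented purely for illustration:

```python
# Toy accounting of the SHAPE argument: offloading audio to fixed-function
# hardware leaves more GPU time for other GPGPU work. Numbers are invented.
FRAME_MS = 16.7  # 60fps frame budget

def free_gpgpu_ms(graphics_ms: float, audio_on_gpu_ms: float) -> float:
    # Whatever the frame budget doesn't spend on graphics or audio
    # is available for general-purpose compute.
    return FRAME_MS - graphics_ms - audio_on_gpu_ms

print(f"{free_gpgpu_ms(14.0, 1.5):.1f} ms")  # audio on the CUs -> 1.2 ms free
print(f"{free_gpgpu_ms(14.0, 0.0):.1f} ms")  # audio on SHAPE   -> 2.7 ms free
```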
 
It's funny that we get so much insight into the workings of the console and its design only because the technical team was so pissed off at the internet haters. :D
Actually, it's quite logical. It's the same reason ATI gave Dave the Xenos interview when the Xbox 360 was launching. The architects didn't think the design was getting the credit it deserved.

Was this ever in doubt? I assumed the system was perfectly flexible, as the GPU has read/write access to both pools. It'd be odd to limit the GPU to only outputting to ESRAM.

There's not a great deal of additional info, I guess. It's important, though, for those reading between the PR lines and concluding that there might be a second GPU, or that the custom processors are something magical, to finally put those notions to rest. The 8 processors in the audio block explain where the 15 comes from, rather than several secret processors not revealed before. Likewise, it's SI and not VI or GCN 2 or anything uber fancy.
It was likely in doubt for people who remembered how the eDRAM in the Xbox 360 worked and were afraid of the same limitation. Also, the Sea Islands family is CI; SI is Southern Islands.

We have been used to having more than enough CPU power for gaming for so long that one forgets how relatively puny those Jaguar cores are. So while this quote shouldn't come as a surprise, it is definitely worth keeping in mind, IMO.
Your statement is true for PCs, but not for consoles.
 
No, astrograd thought it was a quantum effect. MS says you can just take advantage of access patterns. It also means the 204GB/s figure is basically never reached.

No, I speculated it might be related to the size of the logic elements, which could have altered the timings (distance between elements may have affected quantum parameters known to govern transistor timings). The timings are what I was told about a priori, and they were in line with the original DF article on the subject. It also fit well with the math behind the 7/8-cycle theory.

Those access patterns are a result of the overlap in timings. You have to be able to fit the timings into a cycle to get both reads and writes in that window, but if the timings are slightly longer than half the cycle width, you will eventually shift the pairs of ops to straddle a cycle in such a way that only one of them fits inside.
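For what it's worth, the arithmetic lines up with the quoted figures. A minimal sketch, assuming the 7-of-8-cycles reading of the timings described above (clock and path width are the publicly reported values):

```python
# Reconstructing the eSRAM bandwidth figures from the 7/8-cycle theory.
# Clock and path width are public; the 7/8 duty factor is the assumption.
clock_hz  = 853e6   # eSRAM clock after the upclock (853 MHz)
bus_bytes = 128     # 1024-bit path = 128 bytes per cycle, per direction

one_way = clock_hz * bus_bytes / 1e9   # read OR write every cycle
peak    = 2 * one_way                  # if every cycle carried both ops
real    = one_way * (1 + 7/8)          # read every cycle, write on 7 of 8

print(f"{one_way:.1f} / {peak:.1f} / {real:.1f} GB/s")
# 109.2 / 218.4 / 204.7 GB/s -- the last matching the ~204GB/s figure
```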

That article confirmed just about everything I've been arguing over the past 6 months or so. The only other thing I was hoping to see was a link between the GPU clock boost (its magnitude, not its payoff vs. extra CUs) and the eSRAM bandwidth timings. Such is life. :)


Who was it... Engadget? Were they the ones saying the 7790 was basically the X1 GPU? Sounds like they deserve props there; that was pretty close to being spot on. I also wanna see them explain the thin black arrow connecting the eSRAM to the CPU in their Hot Chips presentation, or have we figured that out already?
 
Apparently they have separate ports for reads and writes, and as long as there is no bank conflict, they can carry out both simultaneously. That's probably all, and a technique almost as old as SRAM itself. It doesn't require techniques nobody has ever heard of or that defy common rules of logic design. MS probably just confused themselves with the spec they gave to devs. :rolleyes:
 
Oh, and I am also disappointed they cheaped out on enabling the two redundant CUs along with doing the upclock. You are charging $499 for this system, Microsoft...

They found out that upping the clock was better than having 2 more CUs.
It didn't matter at all that 2 additional CUs cost more money while more Hz cost nothing. ;)
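For anyone who wants the raw ALU numbers behind that tradeoff (public figures; the sketch only computes peak FLOPS, which was exactly MS's point about it not telling the whole story):

```python
# Peak ALU throughput for the two options MS evaluated (public figures).
SP_PER_CU = 64  # shader processors per GCN compute unit

def tflops(cus: int, mhz: int) -> float:
    # 2 FLOPs per SP per clock (fused multiply-add)
    return cus * SP_PER_CU * 2 * mhz * 1e6 / 1e12

print(f"12 CUs @ 853 MHz: {tflops(12, 853):.2f} TF")  # shipped config
print(f"14 CUs @ 800 MHz: {tflops(14, 800):.2f} TF")  # more peak ALU
# MS's argument: the upclock also speeds up the rest of the GPU (front
# end, ROPs), which helped real titles more than extra peak ALU.
```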
 
I don't think it makes much difference, because that would imply the PS4 still has 400 GFLOPS on the compute side that must be compensated by something in the Xbox One hardware anyway.

14+4 or just plain 18, in the end I think it's all the same.

You could add 1000 CUs, but if the CPU is the bottleneck, what's the point?

"Interestingly, the biggest source of your frame-rate drops actually comes from the CPU, not the GPU," Goosen reveals. "Adding the margin on the CPU... we actually had titles that were losing frames largely because they were CPU-bound in terms of their core threads. In providing what looks like a very little boost, it's actually a very significant win for us in making sure that we get the steady frame-rates on our console."
 
You could add 1000 CUs, but if the CPU is the bottleneck, what's the point?
That quote could be misleading. MS evaluated current software and found the CPU was a bottleneck (although why Jaguar, then?). However, if the software isn't optimised for the hardware (compute), then the bottlenecks seen may not be representative of the bottlenecks devs will experience in future. We may find that the CPU has enough grunt to drive more CUs once more workloads are shifted off the CPU onto the GPU. Might. I'm only identifying it as a weakness in the argument that frame-drops are typically CPU bottlenecks; devs can write any sort of engine that could be throttled by any subsystem, as the sketch below illustrates. Having said that, it's worth noting that ERP also sees the CPU as the primary bottleneck, although again coming off current software designs. A lot depends on how things change on the software side in the coming years.
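A minimal sketch of the bottleneck argument, with all timings invented for illustration:

```python
# Toy frame-time model: the frame is gated by whichever processor
# finishes last. All timings below are invented for illustration.
def frame_ms(cpu_ms: float, gpu_work: float, num_cus: int) -> float:
    gpu_ms = gpu_work / num_cus  # idealised: GPU time scales with CU count
    return max(cpu_ms, gpu_ms)   # frame waits on the slower of the two

cpu_ms, gpu_work = 18.0, 192.0   # a CPU-bound scenario (budget is 16.7 ms)
for cus in (12, 18, 1000):
    print(cus, "CUs ->", frame_ms(cpu_ms, gpu_work, cus), "ms")
# 12, 18 or 1000 CUs all give 18.0 ms: the CPU stays the long pole until
# work moves off the CPU onto the GPU and cpu_ms shrinks.
```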
 
Perhaps it's the safety margin combined with the efficiency margin, and maybe there's no benefit in making a lower-rated PSU? Sort of like how the cost of making a 150GB mechanical drive is nearly the same as making a 215GB mechanical drive. At some level the costs no longer decrease. (I don't know, just grasping at wild ideas.)

Or perhaps it's the safety margin plus the efficiency margin with all USB3 ports loaded. *shrug*

I presume it's the most commonly produced PSU, and therefore the cheapest.

All the talk about power efficiency, though... they have a huge box, an external brick... Perhaps their box is going to be the quietest, at least?
 
Apparently they have separate ports for reads and writes, and as long as there is no bank conflict, they can carry out both simultaneously. That's probably all, and a technique almost as old as SRAM itself. It doesn't require techniques nobody has ever heard of or that defy common rules of logic design. MS probably just confused themselves with the spec they gave to devs. :rolleyes:

Basically, the bandwidth version of double buffering, except people usually don't claim they have 512KB of cache when they're double buffering 256KB.
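A minimal model of what's being described, with the bank count and address mapping made up for illustration:

```python
# Toy model of an SRAM with separate read and write ports: both ops can
# land in the same cycle unless they hit the same bank. The bank count
# and interleaving below are illustrative guesses, not the real layout.
NUM_BANKS  = 8
LINE_BYTES = 128

def bank_of(addr: int) -> int:
    # Consecutive 128-byte lines map to consecutive banks.
    return (addr // LINE_BYTES) % NUM_BANKS

def cycles_for(read_addr: int, write_addr: int) -> int:
    # One cycle when the ports hit different banks, two on a conflict.
    return 1 if bank_of(read_addr) != bank_of(write_addr) else 2

print(cycles_for(0x0000, 0x0080))  # different banks -> 1 cycle
print(cycles_for(0x0000, 0x0400))  # same bank       -> 2 cycles
```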
 
You could add 1000 CUs, but if the CPU is the bottleneck, what's the point?

Then you go back and rewrite your code to remove the bottleneck, and it's not 1000 vs. 12 CUs anyway. :)

It's a great interview that surely demonstrates Microsoft did not make choices blindly, and it's obvious they think the way they went with their hardware was the correct one. I would be surprised to read an interview without them defending themselves and coming up with good arguments.
 
A good interview. Kind of weird that it took people on forums questioning the eSRAM for MS to talk about it, and that they were shocked people questioned it. They never explained it, so people were going to question it, duh.

The dynamic resolution stuff sounds awesome! I'm not trying to turn this into a versus thing, but wouldn't this mean that multiplatform games would be equal to each other? Being able to dynamically change the resolution would mean solid framerates, right? If the HUD stays the same resolution thanks to display planes, then Xbox games might perform better than the competition if the competition drops frames every once in a while due to not having hardware for dynamic resolution scaling.

I think that's a bigger deal than anything else, no?

Also, the memory access stuff with the crossbar: is that like how the PS3 could access both the GDDR3 and XDR? Would that mean there's a latency penalty for the Xbox One?
 
What I don't understand with the dynamic resolution is: how do you know when to drop the resolution, and by the time you know, wouldn't it be too late? Is there some sort of counter indicating the time to render the frame, so that you have to make the decision on the resolution at the start of rendering the frame?
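The usual answer (hedging, since the article doesn't spell out MS's implementation) is that you don't decide within the frame: you run a feedback loop off previous frames' GPU timings and pick the next frame's render scale up front. A minimal sketch:

```python
# Minimal dynamic-resolution feedback loop: choose the next frame's
# render scale from the last frame's measured GPU time. The constants
# are invented; real engines smooth over several frames.
TARGET_MS = 16.7  # 60fps budget

def next_scale(gpu_ms: float, scale: float) -> float:
    # GPU cost is roughly proportional to pixel count, i.e. scale^2,
    # so correct the per-axis scale by the square root of the headroom.
    new = scale * (TARGET_MS / gpu_ms) ** 0.5
    return min(1.0, max(0.5, new))  # clamp to a sane range

# Last frame took 20 ms at full res -> render next frame at ~91% per axis.
print(round(next_scale(20.0, 1.0), 2))  # 0.91
```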
 
The dynamic resolution stuff sounds awesome! I'm not trying to turn this into a versus thing, but wouldn't this mean that multiplatform games would be equal to each other? Being able to dynamically change the resolution would mean solid framerates, right? If the HUD stays the same resolution thanks to display planes, then Xbox games might perform better than the competition if the competition drops frames every once in a while due to not having hardware for dynamic resolution scaling.

I think that's a bigger deal than anything else, no?

Did you miss that part where DF mentioned Wipeout HD, a PS3 game? This technique is not new.

You could add 1000 CUs, but if the CPU is the bottleneck, what's the point?

Well, why is your resolution dynamically dropping if you are CPU-bound? It's the GPU that's the issue in those cases.
 