Xbox One (Durango) Technical hardware investigation

Am I wrong in thinking a 256-bit bus sounds prohibitively expensive for a console? I would have thought 128-bit GDDR5 was both less expensive and easier to manufacture (as well as plain faster) than such a config, especially in the long term.

Wouldn't the lower volume of GDDR5 produced ultimately make it more expensive? DDR3 is already plentiful and cheap. And the 256-bit lanes would be split over multiple chips on the mainboard, converging at the endpoint (north bridge/memory controller) to create a bus that wide. The first Xbox did the same: it used four chips of PC1600 to achieve 6.4GB/s of bandwidth, IIRC.

What's DDR3 memory density like at this point? How many chips would you need to reach 8GB?
 
16 at the most. Commercial 8GB sticks are dual-sided, with eight 4Gb chips on each side.

edit: Maybe only 8 chips. I found a 16GB DIMM with 16 chips.
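For a quick sanity check on those counts (a rough sketch; the 4Gb/8Gb densities and x8/x16 widths are just the DDR3 parts common at the time, not anything confirmed for Durango):

```python
# Rough arithmetic: DDR3 chip counts for an 8GB pool, and the combined
# bus width they would give. Densities/widths are era-typical assumptions.
TARGET_GB = 8

for density_gbit in (4, 8):
    chips = TARGET_GB * 8 // density_gbit        # total Gbit / Gbit per chip
    for width_bits in (8, 16):                   # x8 or x16 parts
        print(f"{density_gbit}Gb x{width_bits}: {chips} chips, "
              f"{chips * width_bits}-bit combined bus")
```

Sixteen 4Gb x16 parts also happens to be exactly what a 256-bit bus would want; eight 8Gb chips halve the count, but also halve the natural bus width.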
 
At 1080p, subsampled particles will still be high quality, and deferred rendering could take shortcuts: e.g. effectively render a 4:2:2 YUV-style image by rendering lighting at 1080p and albedo etc. at 720p (rough numbers are sketched at the end of this post).

Remember, a lot of people think BLOPS on PS3 runs at 1920*1080, whereas it's 960*540 with 2x MSAA.

Native 1920*1080 is overrated, I don't think too many games would go for that.
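To put rough numbers on the mixed-resolution idea (purely illustrative; the two-buffer split is an assumption, not a known Durango technique):

```python
# Pixel budget for a deferred renderer that keeps lighting at 1080p
# but drops albedo etc. to 720p, in the spirit of 4:2:2 subsampling.
# Illustrative only; the buffer split is an assumption.
p1080 = 1920 * 1080
p720  = 1280 * 720

both_full = 2 * p1080          # lighting + albedo, both at 1080p
mixed     = p1080 + p720       # lighting at 1080p, albedo at 720p

print(f"both at 1080p: {both_full:>9,} pixels")
print(f"mixed:         {mixed:>9,} pixels ({mixed / both_full:.0%})")
```

That's roughly a 28% saving on those two buffers alone, before any bandwidth effects.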
 
If this is Durango, then Microsoft is committing suicide. UE4 won't run on it, nor will the next engine from Crytek. This is total nonsense; I believed Crytek was involved in some way in developing Durango. Then I hope Orbis won't disappoint too, or I will stick to PC. I have no idea what the DME accelerator units are, but they will hardly make up for the weakness of the system.
 
That would be suicide for Epic, not MS.
I would bet UE4 runs on both next-gen consoles; in fact it will be tailored to them, just because that's where Epic makes their money.
 
We're very clueless here. I think it's wrong to assume they are DMA units as they'd be called DMA units in that case. ;) There must be more they are doing with the memory, although what, we can only guess. I assume scaling and filtering of sorts, saving the GPU. Mipmaps and compressed textures and who-knows-what. My custom hardware thread didn't really uncover any clear, obvious uses that convinced me of their value, but here we are with them reportedly in, so they must have some noteworthy value, at which point I feel we should explore all the possible uses and see what they could bring to the table.

I don't know if a dedicated chip for those very functions is really worth adding over extra shaders. Why couldn't those improvements happen within the GPU itself?
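Whatever the units actually do, the generic argument for them is overlap: work done on a dedicated engine doesn't eat shader time. A toy timing model of that (all numbers invented for illustration):

```python
# Toy model: frame cost if the GPU performs its own memory moves
# (tiling, decompression, scaling...) versus offloading them to
# dedicated engines that run in parallel. Timings are invented.
compute_ms = 12.0    # hypothetical shader work per frame
moves_ms   = 3.0     # hypothetical move/format work per frame

serial  = compute_ms + moves_ms        # GPU does everything itself
overlap = max(compute_ms, moves_ms)    # moves hidden behind compute

print(f"GPU does the moves:    {serial:.1f} ms/frame")
print(f"Dedicated move units:  {overlap:.1f} ms/frame")
```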
 
Wouldn't the lower volume of GDDR5 produced ultimately make it more expensive? DDR3 is already plentiful and cheap. And the 256-bit lanes would be split over multiple chips on the mainboard, converging at the endpoint (north bridge/memory controller) to create a bus that wide. The first Xbox did the same: it used four chips of PC1600 to achieve 6.4GB/s of bandwidth, IIRC.

What's DDR3 memory density like at this point? How many chips would you need to reach 8GB?

I'm not sure, but that would still add to motherboard complexity. It only makes sense if the price of DDR3 is low enough to offset the increased cost of GDDR5. And my point is (I think) that the price of GDDR5 is likely to fall much faster than the cost of making a 256-bit mainboard over the entire lifespan of the console.

There's a reason low-end Radeon cards use 128-bit GDDR5 over 256-bit DDR3 to achieve the same bandwidth: it's ultimately cheaper.
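The underlying bandwidth math (the data rates below are typical figures for the period, chosen so the two configurations come out roughly equal; they're assumptions, not Durango specs):

```python
# Peak bandwidth = bus width in bytes * effective per-pin data rate.
# Data rates are era-typical assumptions, not anything Durango-specific.
def peak_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(f"128-bit GDDR5 @ 4.3 Gbps:  {peak_gb_s(128, 4.3):.1f} GB/s")
print(f"256-bit DDR3-2133:         {peak_gb_s(256, 2.133):.1f} GB/s")
# And the earlier Xbox example: four PC1600 chips at 1.6 GB/s each.
print(f"4 x PC1600:                {4 * 1.6:.1f} GB/s")
```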
 
That would be suicide for Epic, not MS.
I would bet UE4 runs on both next-gen consoles; in fact it will be tailored to them, just because that's where Epic makes their money.

Of course you're right; Epic will make a cut-down version of the engine for Durango and Wii U, and maybe for PS4 (but let me hope to see at least one high-end system).
What I want to say is that we can forget Samaritan-level quality: Epic said that to run Samaritan the minimum requirement is 2.5 TFLOPS, and here we have less than half that number.
The only way out is for MS to produce two versions of the next Xbox, a lite one and a pro one, with different power and different prices; but again, by common sense, using the old 360 as the low-end model is a better choice than using a new system.
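Taking the numbers in this thread at face value (Epic's 2.5 TFLOPS figure for Samaritan, and the 1.2 TF GPU rumour discussed further down), the shortfall is easy to state:

```python
# Both figures come from the thread (Epic's quoted Samaritan minimum
# and the rumoured Durango GPU throughput); neither is official.
samaritan_tflops = 2.5
rumoured_tflops  = 1.2

print(f"{rumoured_tflops / samaritan_tflops:.0%} of the stated minimum")
```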
 
If this is Durango, then Microsoft is committing suicide. UE4 won't run on it, nor will the next engine from Crytek. This is total nonsense.

You have that backwards. It would be a suicidal business act for engine providers not to support what may well remain the dominant market in the US.
 
Of course you're right; Epic will make a cut-down version of the engine for Durango and Wii U, and maybe for PS4 (but let me hope to see at least one high-end system).
What I want to say is that we can forget Samaritan-level quality: Epic said that to run Samaritan the minimum requirement is 2.5 TFLOPS, and here we have less than half that number.
The only way out is for MS to produce two versions of the next Xbox, a lite one and a pro one, with different power and different prices; but again, by common sense, using the old 360 as the low-end model is a better choice than using a new system.

Samaritan is a UE3 demo, which may not even be used for next-gen systems. That also doesn't take into account the efficiency improvements since then: GCN had only just come out, and the demo ran on Nvidia hardware at the time. Nor does it account for the fact that console code is more optimized.

He's referring to either the ARM security core or a Kinect block, so no, they haven't left out the hw ray tracer ;)

Aegies was claiming to do so to protect his source. I doubt his source would be in hot water over a Kinect block. Maybe an ARM core, but I'd guess it's neither of those.
 
What I want to say is that we can forget Samaritan-level quality: Epic said that to run Samaritan the minimum requirement is 2.5 TFLOPS, and here we have less than half that number.

You could have forgotten that level of quality from the get-go, given the thermals and cost of such a system. It was going to be a non-starter from the beginning for 2013/2014 hardware.
 
They likely couldn't have launched in November of 2012 due to poor 28nm volume and availability.

You launch when the market is right; otherwise you could wait indefinitely for more power.


You are mostly making my point for me. 28nm is old tech at this point, and more than sufficient volume would have been available for a launch of, say, 750K units. The market was right for a late-2012 launch; the tech is right for a 2014 launch. 2013 is the odd one out: late to market but too early for real tech advances. All in all, 2013 is the worst possible year to launch. MS took on significant and unnecessary business risk from the Wii U, which failed to generate a threat, and especially from phones, tablets, phablets, and the Apple and Google set-top boxes, if they materialize.
 
Aegies is a tease; he hasn't told us anything particularly useful or interesting, unlike bgassassin, thuway, andyh or even Rangers.

He was even implying the 1.2 TF rumours weren't true and that the GPU was more powerful than that. He was wrong.

So forgive me if I don't believe that vgleaks left out docwiz's discrete 2TF application GPU or the hw ray tracer.
 
I'm not sure, but that would still add to motherboard complexity. It only makes sense if the price of DDR3 is low enough to offset the increased cost of GDDR5. And my point is (I think) that the price of GDDR5 is likely to fall much faster than the cost of making a 256-bit mainboard over the entire lifespan of the console.

There's a reason low-end Radeon cards use 128-bit GDDR5 over 256-bit DDR3 to achieve the same bandwidth: it's ultimately cheaper.

The problem is that the latencies of GDDR5 may make it less than ideal for the CPU part of the chip.
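A minimal way to see why latency (rather than bandwidth) is the CPU's problem: a dependent pointer chase can issue only one memory request at a time, which is what a lot of general-purpose code looks like. A rough sketch (Python overheads dominate the absolute numbers; the access pattern is the point):

```python
# Dependent pointer chase: each load needs the previous result, so the
# loop is bound by access latency, not peak bandwidth. Illustrative only.
import random
import time

def sattolo(n):
    """Random permutation forming a single cycle (Sattolo's algorithm)."""
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)      # j < i keeps everything in one cycle
        p[i], p[j] = p[j], p[i]
    return p

N = 1 << 20
chain = sattolo(N)

start = time.perf_counter()
i = 0
for _ in range(N):                   # one request in flight at a time
    i = chain[i]
elapsed = time.perf_counter() - start

print(f"{elapsed / N * 1e9:.1f} ns per dependent access")
```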
 