The pros and cons of eDRAM/ESRAM in next-gen

Anyway, just enable the two redundant CUs

...yes, and then go explain why your Xbox has become unstable, or why it crashes after 1-2 years, or whatever problem you end up with.

You cannot change the game once it's on the market, or you'd be sued to death over insurance problems, etc.

Same for frequencies. They won't even do it silently, as the risk of being discovered would be too high.

About costs, remember that Sony is sporting clamshell memory, which means they can likely halve the number of chips in the future.
Also, if you look at the motherboards, one is beautiful and the other looks like it was laid out with a random number generator.
A cost factor will come from shrinking the (8-layer, it seems) motherboard, which I see as very doable on the PS4 and not so much on the Xbox (GDDR5 uses training).
Still, my first impression of the PS4 was: why did the motherboard have to be so big?
 
Perhaps, and I mostly agree, but clocking an APU up is different from clocking a GPU or CPU alone. That is why the Steamroller-based APUs actually saw a CPU clock drop vs Richland. AMD claimed to have moved to a process that was a better middle ground between CPU and GPU, rather than optimized for either, which actually resulted in a lower CPU clock.

Basically, I don't think an APU is necessarily as easy to clock up as an individual CPU or GPU.

The new 28 nm GloFo process is supposed to be a compromise between performance and density. The 32 nm process of Richland was higher performance, and TSMC 28 nm is higher density. On 32 nm, the CPU in Richland had no trouble reaching Piledriver clocks. Llano clocks sucked, of course, but that was to do with moving K10 to 32 nm and not, AFAIK, anything to do with being on an APU.

Kaveri (Steamroller) APUs don't appear to have trouble getting the GPU to 1 GHz in the hands of an overclocker, despite it being AMD's first GPU on the process. The reason for the conservative GPU clocks on Kaveri seems to be power consumption, and the fact that the GPU is now powerful enough that it's becoming BW limited anyway.

X1 is made in GCN's natural environment of TSMC, so I don't see any reason why AMD couldn't have got the GPU portion of X1 to discrete GPU clocks, if MS had wanted that kind of power consumption. If the sram could clock higher too, you could have had a fast GPU with huge amounts of BW.

Anyway, just enable the two redundant CUs (which they considered) and clock to 900 MHz; you're already at ~1.6 teraflops, functionally very close to the PS4, if that's your aim, with the current ESRAM architecture.
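For anyone who wants to sanity-check that ~1.6 figure, the back-of-the-envelope GCN arithmetic looks like this (just a sketch, assuming only the standard 64 ALUs per CU and 2 FLOPs per ALU per clock):

[code]
# Rough GCN FLOPS arithmetic behind the "14 CUs at 900 MHz" idea.
ALUS_PER_CU = 64             # shader ALUs in a GCN compute unit
FLOPS_PER_ALU_PER_CLOCK = 2  # a fused multiply-add counts as 2 FLOPs

def gflops(cus, clock_mhz):
    return cus * ALUS_PER_CU * FLOPS_PER_ALU_PER_CLOCK * clock_mhz / 1000.0

print(gflops(12, 853))  # shipping X1 config: ~1310 GFLOPS
print(gflops(14, 900))  # 14 CUs at 900 MHz:  ~1613 GFLOPS
print(gflops(18, 800))  # PS4 for comparison: ~1843 GFLOPS
[/code]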

Indeed. MS could have gone head to head with the PS4 and still decided that they wanted to use esram. esram isn't "to blame" for anything IMO.
 
Indeed. MS could have gone head to head with the PS4 and still decided that they wanted to use esram. esram isn't "to blame" for anything IMO.

A little late to turn on the remaining CUs.

Not too late to clock higher :devilish::devilish::devilish::devilish::devilish:
 
They could implement thermal throttling for when the chips get 'too hot', so yes, the clock speed could certainly and safely be upped in a firmware update. The noise level would increase, but so would the frame rates.
 
Would they really upclock the processor? It seems a bit unrealistic. The launch consoles would find it a bit difficult; they already run quite hot.
 
Yes: when the chips, at their normal frequency, reach (for example) 90 degrees Celsius, they throttle down. That would only happen in a hot environment anyway.

What I proposed is that instead of (for example) 500 MHz, they clock the Xbox One at 650 MHz.
As soon as it reaches 85 degrees, it throttles down.
Meanwhile, usage diagnostics measure how everyone's Xboxes are doing. If none are nearing the 85-degree point under stress, then it's safe to assume that the Xbox One can operate at 650 MHz, no problem.

However, should MS get data showing that the 85-degree point is reached within an hour, then of course the upclock would not be compatible with people's homes/environments/temperatures.
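Just to make the idea concrete, here's a minimal sketch of the policy I'm describing. Every name and number in it is made up for illustration; it's not how the actual firmware works:

[code]
# Hypothetical sketch: run at a higher clock, throttle back past a
# temperature limit, and log how often the limit is hit so fleet-wide
# telemetry can show whether the higher clock is safe in practice.
BOOST_MHZ, BASE_MHZ, LIMIT_C = 650, 500, 85

def control_step(soc_temp_c, telemetry):
    if soc_temp_c >= LIMIT_C:
        telemetry.append(("throttled", soc_temp_c))
        return BASE_MHZ   # fall back to the known-safe clock
    telemetry.append(("ok", soc_temp_c))
    return BOOST_MHZ      # otherwise hold the higher clock

telemetry = []
for temp in (62, 71, 86, 79):   # pretend SoC temperature samples
    clock = control_step(temp, telemetry)
print(clock, telemetry)
# If aggregated reports show consoles never nearing LIMIT_C under stress,
# the higher clock is assumed viable across the installed base.
[/code]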

Would they really upclock the processor? It seems a bit unrealistic. The launch consoles would find it a bit difficult; they already run quite hot.

Well no, ask anyone on this forum with an Xbox One: it is virtually silent. That means the fan is never running anywhere near full speed, which in turn means it could dissipate a lot more thermal energy and handle a higher frequency.
 
Would they really upclock the processor? It seems a bit unrealistic. The launch consoles would find it a bit difficult; they already run quite hot.

I think there could be some room to breathe here; MS might be overly conservative.
[image: xfKIZXK.jpg - the Xbox One SoC heatsink and fan]


That fan and heatsink are massive, and they're only responsible for cooling that SoC.
 
Is there a CPU design from the last decade that doesn't? (Serious question; I just assume thermal throttling is in every modern CPU/GPU design these days.) It's the major reason why a lot of Core i tablets offer worse performance than notebooks based on the same CPUs.

I believe this type of gate-voltage cutting/thermal throttling made its first appearance in the Radeon 7790.

Which many would claim (and rightly so) shares many similarities with the X1.
 
Is there a CPU design from the last decade that doesn't? (Serious question; I just assume thermal throttling is in every modern CPU/GPU design these days.) It's the major reason why a lot of Core i tablets offer worse performance than notebooks based on the same CPUs.
Generally speaking, everything has "death prevention" measures, but there are plenty of gradations of sophistication between them. On the GPU side, the more sophisticated levels are much more recent developments as well.
 
I think there could be some room to breathe here; MS might be overly conservative.

That fan and heatsink are massive, and they're only responsible for cooling that SoC.
Oh right I forgot I was in an Xbone thread. I had the PS4 in mind. The Xbox has a lot more breathing room when it comes to heat and power draw.
 
A little late to turn on the remaining CUs.

Not too late to clock higher :devilish::devilish::devilish::devilish::devilish:

I wonder how much headroom there is in terms of power supply? Not so much the power brick, which probably has loads, but the on-board power ... stuff ... that keeps the chip supplied with stable power.

A "turbo" option that boosted clocks to 900+ at the cost of a little additional noise would be nice. The cooler certainly has the headroom. Perhaps they could shut down one of the reserved CPU cores to minimise additional overall draw from the APU when in "boost" mode.

The nice thing about esram with regard to changing clocks is that on the X1, upping the GPU clock brought the same proportional increase in esram BW without having to re-spec the main memory (which would have been impossible at that late stage).
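As a rough illustration of that scaling (assuming the widely reported 128 bytes per cycle per direction for the esram, and treating DDR3 as fixed by its own bus):

[code]
# esram runs on the GPU clock, so its bandwidth scales linearly with it.
BYTES_PER_CYCLE = 128   # widely reported per-direction figure for X1 esram

def esram_gbps(gpu_mhz):
    return BYTES_PER_CYCLE * gpu_mhz / 1000.0  # GB/s in one direction

print(esram_gbps(800))  # ~102.4 GB/s at the original 800 MHz target
print(esram_gbps(853))  # ~109.2 GB/s at the shipping 853 MHz clock
print(esram_gbps(900))  # ~115.2 GB/s at a hypothetical 900 MHz boost
[/code]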
 
What I proposed is that instead of (for example) 500 MHz, they clock the Xbox One at 650 MHz.
As soon as it reaches 85 degrees, it throttles down.
Already discussed in another appropriate thread. Your points have nothing at all to do with ESRAM. If you want to talk business choice, use the business thread. If you want to talk hardware, use the hardware thread or open a new thread asking if such-and-such is possible.
 
Oh right I forgot I was in an Xbone thread. I had the PS4 in mind. The Xbox has a lot more breathing room when it comes to heat and power draw.
Sorry if this is getting really off-topic... but the XB1 heatsink has about the same surface area as the PS4 heatsink, with the XB1 fan being more pressure-limited and the PS4 fan being more flow-limited. Very different approaches, but neither looks over-designed.
 
I wonder how much headroom there is in terms of power supply? Not so much the power brick, which probably has loads, but the on-board power ... stuff ... that keeps the chip supplied with stable power.

A "turbo" option that boosted clocks to 900+ at the cost of a little additional noise would be nice. The cooler certainly has the headroom. Perhaps they could shut down one of the reserved CPU cores to minimise additional overall draw from the APU when in "boost" mode.

The nice thing about esram with regard to changing clocks is that on the X1, upping the GPU clock brought the same proportional increase in esram BW without having to re-spec the main memory (which would have been impossible at that late stage).

Bad question:
If you set your APU to XXX MHz, is it always running at that rate at all times?

Is there a possibility that the OS is behind in terms of controlling the hardware? That they had intended a higher clock rate, but didn't want the X1 running at that higher clock rate for something like watching TV, so they set it to something in between until they could get the OS to vary the clock rate for different applications?
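Purely speculating, but something like this hypothetical per-application clock table is what I have in mind; all the values are made up:

[code]
# Hypothetical per-application GPU clock selection, nothing more than a
# sketch of the idea that the OS could vary the clock by workload.
CLOCK_TABLE_MHZ = {
    "game":      853,   # full clock for games
    "tv":        400,   # made-up lower state for the TV/media app
    "dashboard": 600,   # made-up intermediate state
}

def pick_gpu_clock(active_app_type):
    # Unknown workloads fall back to the highest state.
    return CLOCK_TABLE_MHZ.get(active_app_type, max(CLOCK_TABLE_MHZ.values()))

print(pick_gpu_clock("tv"))    # 400
print(pick_gpu_clock("game"))  # 853
[/code]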
 
Sorry if this is getting really off-topic... but the XB1 heatsink has about the same surface area as the PS4 heatsink, with the XB1 fan being more pressure-limited and the PS4 fan being more flow-limited. Very different approaches, but neither looks over-designed.

It's off topic, but not your fault; mine, really. The likelihood that the APU will be clocked higher is next to nil. But clocking the APU higher would result in additional bandwidth for the ESRAM, and it would scale perfectly in step with the GPU in this regard.

Pressure, as in RPM?
The fan is whisper quiet, indicating to me that it's not running anywhere near the maximum it could.
There were earlier reports that the first X1 SDK kits had no control over fan speed, so they just ran it at maximum, and it was 'very loud' then.
 
Do esram<->ddr3 reads/writes consume a sizeable amount of the DDR3 bandwidth in most games?

Quick answer is yes, as DDR3 alone would not be able to sustain the BW need, but what exactly do you mean by "esram<->ddr3 reads/writes", and what's "sizeable"?
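For a feel of the numbers, here's the rough arithmetic for moving a single 1080p RGBA8 render target between esram and DDR3 every frame (illustrative figures only, not a claim about any particular game):

[code]
# Cost of one esram<->DDR3 copy, per direction, against the DDR3 budget.
width, height, bytes_per_pixel, fps = 1920, 1080, 4, 60
DDR3_GBPS = 68.0   # widely quoted X1 DDR3 bandwidth

bytes_per_frame = width * height * bytes_per_pixel   # ~8.3 MB per target
gbps_per_direction = bytes_per_frame * fps / 1e9     # ~0.5 GB/s
print(gbps_per_direction, gbps_per_direction / DDR3_GBPS * 100)
# ~0.5 GB/s, i.e. under 1% of DDR3 bandwidth per copy direction per target
[/code]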
 