Predict: The Next Generation Console Tech

Status
Not open for further replies.
I don't see the point of touting the low power draw of the always-on system subsection, not for a plugged-in system with an alleged 150W TDP.

There are a number of things about the chimera of ISAs and processing cores that make the so-called savings a rounding error.
The biggest one is that a power supply or power brick specified to handle 150W is likely to be active in the "always-on" mode, and the efficiency outside of the optimal load range is very poor.
The <500mW SOC would be attached to a power supply that is radiating ten watts just for being on.

There could be ways around it, like a more complex power supply or more complex power delivery mechanism that decouples the always on portion from the regular power circuitry.
It doesn't seem like a big win for a plugged-in device. Modern multicore CPUs can have very low power consumption if the bulk of their functionality is gated off and a single core drops to a low clock state. It may not be milliwatts, but what consumer is going to feel cheated by a 15.2W idle device versus a 14.2W one?

If that subsection is still somehow linked to or embedded in the CPU section, the difference would be even smaller.
 
I don't see the point of touting the low power draw of the always-on system subsection, not for a plugged-in system with an alleged 150W TDP.

Then you don't understand the issues of current gen DVRs and the emerging standards/requirements wrt EnergyStar and consumer electronics.

There are a number of things about the chimera of ISAs and processing cores that make the so-called savings a rounding error.
The biggest one is that a power supply or power brick specified to handle 150W is likely to be active in the "always-on" mode, and the efficiency outside of the optimal load range is very poor.
The <500mW SOC would be attached to a power supply that is radiating ten watts just for being on.

That's not actually true. Modern designs can handle much better scaling from max load to idle. It is certainly possible to support what is effectively a standby voltage plane that provides on the order of 5-10W max while maintaining 75%+ efficiency. For a bit more money you can get that up to 85%+ efficiency.
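A quick back-of-envelope check of what those figures mean at the wall, using the 5W standby load and the 75%/85% efficiency numbers from the post above (the function name is just for illustration):

```python
# Wall draw for a dedicated standby plane; the 5 W load and the
# 75 % / 85 % efficiency figures come from the discussion above.
def wall_draw(load_w, efficiency):
    """Power pulled from the outlet to deliver load_w to the system."""
    return load_w / efficiency

print(round(wall_draw(5.0, 0.75), 1))  # 6.7 W at the wall
print(round(wall_draw(5.0, 0.85), 1))  # 5.9 W with the pricier supply
```

Either way, that is well under the ten watts a full-size supply can radiate just by being on at the wrong point of its load curve.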

There could be ways around it, like a more complex power supply or more complex power delivery mechanism that decouples the always on portion from the regular power circuitry.
It doesn't seem like a big win for a plugged-in device. Modern multicore CPUs can have very low power consumption if the bulk of their functionality is gated off and a single core drops to a low clock state. It may not be milliwatts, but what consumer is going to feel cheated by a 15.2W idle device versus a 14.2W one?

The device described is always running and doing something in the envisioned environment. The vast majority of the time the processing requirements are low but non-zero, which means that using a large SOC with a monolithic power plane would burn a non-negligible amount of power. In the DVR/media server mode they are targeting, it is likely on the order of ~5W total system power, which is well within the envelope of modern technology. It would be impossible to hit this with a monolithic system approach and reasonable levels of clock gating/power saving. Modern laptops already employ similar strategies by using on-SoC graphics the majority of the time and only powering on the discrete graphics chip when more performance is required.

If that subsection is still somehow linked to or embedded in the CPU section, the difference would be even smaller.

Not really. Any half decent engineer is going to use a separate power plane for the always-on sub-SoC, and given the performance requirements, it is likely that it would always run at significantly lower voltage and frequency. Assuming 1/2 base frequency and a 20% voltage reduction, you are looking at on the order of a 70% reduction in max power. Likely more once you add in a reasonable level of automatic frequency reduction, with the possibility of additional voltage reduction on top. Also, given the realistic performance requirements, it is likely that the always-on portion is designed with lower-power circuits than the game sub-SoC.
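The ~70% figure checks out against the usual first-order rule that dynamic power scales with frequency times voltage squared (a rough model; it ignores static leakage):

```python
# First-order dynamic-power scaling: P ~ f * V^2. The 1/2 frequency
# and 20 % voltage reduction figures come from the post above.
def dynamic_power_scale(freq_scale, voltage_scale):
    return freq_scale * voltage_scale ** 2

scale = dynamic_power_scale(0.5, 0.8)       # 0.5 * 0.8^2 = 0.32
print(f"{(1 - scale) * 100:.0f}% reduction")  # 68% reduction
```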
 
by always-on, you mean the power it uses when it's turned off?
sorry, if something is sneakily using power while showing a red LED or something, I'd rather have it use 1W, not 15W. Or maybe a little more, but only briefly spiking when receiving specific network activity, or waking itself up to download or update in the middle of the night and then sleeping again, etc.

the power supply is a solved problem, it's in every ATX power supply and called +5VSB.
 
by always-on, you mean the power it uses when it's turned off?
sorry, if something is sneakily using power while showing a red LED or something, I'd rather have it use 1W, not 15W. Or maybe a little more, but only briefly spiking when receiving specific network activity, or waking itself up to download or update in the middle of the night and then sleeping again, etc.

Yep, right now both the 360 and PS3 basically burn significant power just doing stuff like downloading. The goal with the next gen is going to be to support things like DVR + download while sipping power. AKA, even when "off", the 720 is envisioned to still be active as a DVR + media center and needs to be able to do that at really low power, in the 5W-or-less range depending on whether the HDD is spun up.

If it has a flash cache, I could certainly see them using it as a write cache for DVR/download, bursting to the HDD and then powering the HDD back down.
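A minimal sketch of that flash-as-write-cache idea: buffer incoming DVR/download writes and only spin the HDD up for an occasional burst. The class, method names, and threshold here are all invented for illustration, not any actual console firmware:

```python
# Hypothetical sketch: accumulate writes in a flash buffer, spin the
# HDD up only when the buffer fills, flush in one burst, spin it down.
class WriteCache:
    def __init__(self, flush_threshold_bytes):
        self.flush_threshold = flush_threshold_bytes
        self.buffer = []
        self.buffered_bytes = 0

    def write(self, chunk, hdd):
        self.buffer.append(chunk)
        self.buffered_bytes += len(chunk)
        if self.buffered_bytes >= self.flush_threshold:
            self.flush(hdd)

    def flush(self, hdd):
        hdd.spin_up()                 # one spin-up per burst, not per write
        for chunk in self.buffer:
            hdd.write(chunk)
        hdd.spin_down()               # HDD stays powered down between bursts
        self.buffer.clear()
        self.buffered_bytes = 0
```

The win is that the HDD's spin-up and idle power are paid once per burst instead of continuously during a recording.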

the power supply is a solved problem, it's in every ATX power supply and called +5VSB.

Yep, though in this case, we're likely to see a +12VSB instead of +5.
 
I wouldn't mind a link as to why EnergyStar makes any difference at all in a consumer luxury choice. Seems a bit like buying a Ferrari and complaining about the gas mileage.
 
I wouldn't mind a link as to why EnergyStar makes any difference at all in a consumer luxury choice. Seems a bit like buying a Ferrari and complaining about the gas mileage.

Almost all EnergyStar products are consumer luxury goods: dishwashers, clothes washers/dryers, refrigerators, etc.

Also, one of the larger users of power in homes right now is DVRs. There are also incentives and some regulations tied to EnergyStar.

Right now consoles are pretty bad about non-gaming power, which is increasingly an issue as they are used for many workloads other than gaming and the manufacturers want to expand that over time.
 
Then you don't understand the issues of current gen DVRs and the emerging standards/requirements wrt EnergyStar and consumer electronics.
The numbers that initially came to my mind were for DVR set-top boxes. If the ceiling for acceptability has dropped from tens of watts to single digits, then I failed to keep track of those changes and the premises I was operating under were wrong.

Not really. Any half decent engineer is going to use a separate power plane for the always-on sub-SoC, and given the performance requirements, it is likely that it would always run at significantly lower voltage and frequency. Assuming 1/2 base frequency and a 20% voltage reduction, you are looking at on the order of a 70% reduction in max power. Likely more once you add in a reasonable level of automatic frequency reduction, with the possibility of additional voltage reduction on top. Also, given the realistic performance requirements, it is likely that the always-on portion is designed with lower-power circuits than the game sub-SoC.
My thinking was that if the always-on portion of the system is physically in the same chip as the rest of the cores, it would have to at some point wake up the uncore and external interfaces, so that share of the power consumption would not go away like it would if the gaming chip were physically separate and all its components off.
 
My thinking was that if the always-on portion of the system is physically in the same chip as the rest of the cores, it would have to at some point wake up the uncore and external interfaces, so that share of the power consumption would not go away like it would if the gaming chip were physically separate and all its components off.

You have a lot of options. You don't have to have all the interfaces up and running. For one, if you've blocked your, say, 256b memory interface into 32 or 64b chunks, you can have only one chunk running. You can power down or put the storage interfaces in standby when you don't need them. Devices already do this kind of thing.
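A toy model of that interface-chunking point, just to put a number on it (the widths are taken from the post above; the function is illustrative, not any real PHY's power model):

```python
# If an N-bit memory interface is split into independently gated chunks,
# standby mode only pays for the chunks it keeps powered.
def active_fraction(total_width_bits, chunk_width_bits, active_chunks=1):
    chunks = total_width_bits // chunk_width_bits
    return active_chunks / chunks

print(active_fraction(256, 64))  # 0.25  -> 1/4 of the interface powered
print(active_fraction(256, 32))  # 0.125 -> 1/8 with finer 32b chunks
```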
 
I wouldn't mind a link as to why "energy Star" makes any difference at all in a consumer luxury choice. Seems a bit like buying a Ferrari and complaining about the gas mileage.

The state of California has to deal with tremendous energy demands, especially during the summer, so it passes a lot of regulations to curb wasteful use of power. That legislation often becomes a nationwide standard simply because it's silly to design just for California.
 
So what would be a feasible amount of eDRAM for Xbox 3 based on the early info we have?

If the Wii U at 45nm supposedly has around 32MB of eDRAM, I don't think it's unreasonable to think the next Xbox at 32nm/28nm could have 64-96MB. I believe IBM is already manufacturing eDRAM at 32nm.
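The 64-96MB guess lines up with a naive area-scaling estimate, assuming capacity per die area scales with the square of the linear feature-size ratio (real node-to-node scaling is less than ideal, so treat these as optimistic upper bounds):

```python
# Naive eDRAM capacity scaling: same die area, density scaling with
# the square of the feature-size ratio. Inputs from the post above.
def scaled_capacity(base_mb, base_nm, target_nm):
    return base_mb * (base_nm / target_nm) ** 2

print(round(scaled_capacity(32, 45, 32)))  # ~63 MB at 32 nm
print(round(scaled_capacity(32, 45, 28)))  # ~83 MB at 28 nm
```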
 
If the Wii U at 45nm supposedly has around 32MB of eDRAM, I don't think it's unreasonable to think the next Xbox at 32nm/28nm could have 64-96MB. I believe IBM is already manufacturing eDRAM at 32nm.

Well, with Wii U we still don't know what process is being used for the GPU, and I would assume the eDRAM will be on an MCM with the GPU (or maybe we'll see an SoC).

I don't see MS using IBM for that if they are going all AMD. NEC/Renesas has been working on 28nm eDRAM for the last couple of years now and they first provided the eDRAM for 360.

Just adding some possible details.
 
Well, with Wii U we still don't know what process is being used for the GPU, and I would assume the eDRAM will be on an MCM with the GPU (or maybe we'll see an SoC).

I don't see MS using IBM for that if they are going all AMD. NEC/Renesas has been working on 28nm eDRAM for the last couple of years now and they first provided the eDRAM for 360.

Just adding some possible details.

Really? I always thought IBM did all the eDRAM. Interesting
 
I don't see MS using IBM for that if they are going all AMD. NEC/Renesas has been working on 28nm eDRAM for the last couple of years now and they first provided the eDRAM for 360.

MS also moved away from NEC/Renesas likely for cost reasons. :p

There is also the tech sharing that seems to be going on between IBM and GlobalFoundries, so take that as you will - they're planning to fab chips with IBM eDRAM technology at Fab8:
http://www.xbitlabs.com/news/other/...uce_First_Commercial_Chips_for_Customers.html

Really? I always thought IBM did all the eDRAM. Interesting
They switched to TSMC in 2007, but I don't know if they've changed that since. The slim's eDRAM die size is highly indicative of 65nm still, so I'd imagine they'd only go with IBM once they have the final integrated chip fabbed at IBM.
 
MS also moved away from NEC/Renesas likely for cost reasons. :p

There is also the tech sharing that seems to be going on between IBM and GlobalFoundries, so take that as you will - they're planning to fab chips with IBM eDRAM technology at Fab8:
http://www.xbitlabs.com/news/other/...uce_First_Commercial_Chips_for_Customers.html


They switched to TSMC in 2007, but I don't know if they've changed that since. The slim's eDRAM die size is highly indicative of 65nm still, so I'd imagine they'd only go with IBM once they have the final integrated chip fabbed at IBM.

I had tried to see real quick, before making that post, whether TSMC had 28nm eDRAM in the works. I'm probably misremembering, but I was thinking TSMC made the GPU from the beginning while NEC provided the eDRAM, before everything went to TSMC. And with AMD's shift to using TSMC, I was envisioning a repeat of that, at least in the beginning.
 
Well, up to this point I think the leaks would give Sony a pretty good idea of the kind of performance to expect in a next-gen Xbox. It seems like Sony can take this as an advantage to finalize the PS4.
 