Predict: The Next Generation Console Tech

Games are supposed to be built around a specification, and having a turbo mode in a console not only increases the ambiguity of development and how well software performs, but could lead to another RROD fiasco if some piece of software keeps turbo mode going too long, even on just one core of many. I would highly discourage such an idea; multi-thread your goddamn code if it's overloading just one core/thread.

Controlling the clock speed for non-gaming tasks is, I would think, already implemented though, as pulling 100W just to run a DVD is madness.

I wasn't advocating that developers would have control over the overclocking portion of the system. I don't know how smart the system would have to be to automatically overclock based on certain code criteria, or even if it would be beneficial. My thought was that the developer wouldn't even have to know it was happening; for all they know, that is simply the speed they are getting for that particular code, whether it came from an overclock or not. This could, however, lead to some developers writing code in such a way as to cheat the system into providing that overclock more often than not.

Was just curious whether certain small amounts of code would benefit from a slight overclock that the system would just throttle to and fro depending on the data it needs to crunch. I would also hope throttling of some kind is currently being performed during less demanding tasks.

I only bring this up because I have a rooted Droid and have set it to overclock and underclock with the ondemand governor. It has given me both better performance and increased battery life: my stock 550MHz processor drops to 250MHz when the screen is off and/or the load is small, and goes up to 800MHz when it's doing intensive tasks. The only problem is the inherent delay with ondemand; by the time the system realizes it needs to clock up, it has already started to crunch data at the slower speed.
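
For anyone curious, here's roughly what that ondemand logic looks like, as a minimal C sketch. The 250/800MHz endpoints are from my phone above, but the threshold and the load-reading stub are made up; the real governor lives in the Linux cpufreq code:

```c
#include <stdio.h>

/* Minimal sketch of an ondemand-style cpufreq governor.
 * Threshold and load sampling are illustrative stand-ins. */

#define FREQ_MIN_KHZ  250000   /* 250 MHz floor   */
#define FREQ_MAX_KHZ  800000   /* 800 MHz ceiling */
#define UP_THRESHOLD  80       /* % load that triggers a jump to max */

static int read_cpu_load_percent(void)
{
    /* Stand-in for sampling /proc/stat over one interval. */
    return 42;
}

static int pick_frequency(int load)
{
    if (load > UP_THRESHOLD)
        return FREQ_MAX_KHZ;   /* burst straight to max */

    /* Otherwise scale frequency roughly in proportion to load. */
    return FREQ_MIN_KHZ +
           (FREQ_MAX_KHZ - FREQ_MIN_KHZ) * load / 100;
}

int main(void)
{
    /* The sampling interval is exactly the lag I complained about:
     * the governor only reacts *after* a period of high load. */
    int load = read_cpu_load_percent();
    printf("load %d%% -> %d kHz\n", load, pick_frequency(load));
    return 0;
}
```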
 
Overclocking isn't an option as these things already run hot. The way to think of it is throttling down, which I'm surprised they don't do. Have a peak performance for games, but throttle back for low power use especially in menus, dashboard/XMB, playing media etc.

That can be incorporated into any tech these days though, so wouldn't in any way define what hardware should go in.
 
http://www.zdnet.com/blog/btl/exclusive-microsoft-looking-to-2015-for-next-gen-xbox-release/46247



Very interesting if true. It would mean they are absolutely confident that the Xbox 360 will continue to grow until 2015 and that Sony will not launch their next-generation system until then. Their growth has already stopped in Japan, and I expect Europe to follow next year at the latest. Besides, it also suggests that they are not aiming to beat Sony to the console launch this time around, but rather trying to align their launch with Sony's.
 
Overclocking isn't an option as these things already run hot. The way to think of it is throttling down, which I'm surprised they don't do. Have a peak performance for games, but throttle back for low power use especially in menus, dashboard/XMB, playing media etc.


The other option for in-game is to have hardware units that are flexible enough to do different things like... unified shaders. XD
 
:p That's still no use when they plain aren't needed. No reason why the 'idling' power draws of these systems couldn't be way down. PS3's a shocker for guzzling watts even in XMB. What are they doing with all those electrons?! Or are you thinking the PS3/George Foreman Grill combo, turning RSX heat into lean cooking?

But this is all moot really, with no bearing on the topic. No hardware design will have an 'overdrive' option; they'll be designed with a peak heat output and use that or lower.
 
Oh... so it was just attention whoring that he labeled it with a blatantly false description... :runaway: :p
 
:p That's still no use when they plain aren't needed. No reason why the 'idling' power draws of these systems couldn't be way down. PS3's a shocker for guzzling watts even in XMB. What are they doing with all those electrons?! Or are you thinking the PS3/George Foreman Grill combo, turning RSX heat into lean cooking?

But this is all moot really, with no bearing on the topic. No hardware design will have an 'overdrive' option; they'll be designed with a peak heat output and use that or lower.

Agree, both the PS3 and 360 seem to be using an excessive amount of power for such trivial things. Introducing any kind of overclocking feature would probably raise quality concerns that no company would want to be responsible for fixing.

I do wish, though, that the next crop of consoles would come out with an "environment" type of display like some hybrid cars have. Something in the corner you can access that shows how efficient the machine is at that moment. At the XMB or Dashboard these machines would show a full tree or something, and then during a game you'd expect that tree to be missing a lot of leaves or whatnot.

People are becoming more environmentally conscious, and this could be an area where the consoles could differentiate themselves from other equipment on the market. Imagine a hippy wanting a Blu-ray player and seeing that the 720 and/or PS4 can play Blu-rays with less energy than that $120 BD player next to them. Or those of us who are not so hung up on the environment but still want to know our impact on it.

This would also maybe ("maybe") give people a reason to upgrade their console when a "newer" version of it comes out. Generally a newer version means the console has been redesigned to save money, but at the same time it reduces energy consumption. I think this is something the console manufacturers should promote: "Made with 20% less material than the original and up to 9% more efficient."

For me that is much better than the branding done on "organic" and other products that just use a label hoping to dupe people. At least the console manufacturers could easily back up their claims.

Just a thought
 
The real problem with the overclocking scenario is that at the beginning of a console's life, when a chip spec has to be set in stone and production binned for parts that meet that spec, selective overclocking rights given to the developer could push forward the failure dates of systems sitting within some range of the binning spec, and possibly push certain systems to immediate failure.

Of course no one is necessarily advocating for the above position to begin with.

As far as marketing goes, I do agree touting some of the "green" features of later console iterations is generally a plus, but the way to get the word out is via PR and press, or organically as in these forums. Reason being that for an active advertising campaign (read: money), in all likelihood there are other more pertinent points you need to be highlighting as they relate to your competition.
 
How much power does a CPU core or GPU core consume when it's idling? Think about a situation where you have a hypothetical 8-core CPU+GPU. What if only 2 CPU cores are needed for some calculations at time x, and at that same time the GPU has more load than it can handle? Assuming (random number) 2W could be saved by disabling a core, then 6 × 2W = 12W of thermal capacity could be used to throttle the GPU higher.

If there is power to be saved by underclocking or gating cores, and said core throttling can be fast enough, it would surely make sense to have dynamic clocks for the CPU and GPU. Similarly, what if you have loads in the game engine which cannot be parallelized and must run on 1 core only? Closing 7 cores and clocking the remaining core higher would surely be a good idea... In a closed box the dev could manually hint the clocks instead of the CPU+GPU combo trying to be smart.
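
The bookkeeping for that budget shift is trivial; a rough C sketch using my hypothetical 2W-per-core number (the GPU base clock and MHz-per-watt factor are equally made up):

```c
#include <stdio.h>

/* Sketch of shifting thermal budget from parked CPU cores to the GPU.
 * All figures are the hypothetical ones from the post, not real data. */

#define TOTAL_CORES       8
#define WATTS_PER_CORE    2     /* saved by gating one core      */
#define GPU_MHZ_PER_WATT  10    /* made-up boost headroom factor */
#define GPU_BASE_MHZ      500

int main(void)
{
    int active_cores = 2;                    /* game only needs 2 right now */
    int parked = TOTAL_CORES - active_cores;
    int budget_w = parked * WATTS_PER_CORE;  /* 6 * 2W = 12W reclaimed */

    int gpu_mhz = GPU_BASE_MHZ + budget_w * GPU_MHZ_PER_WATT;
    printf("%d cores parked, %d W freed, GPU boosted to %d MHz\n",
           parked, budget_w, gpu_mhz);
    return 0;
}
```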

I would hazard a guess that due to thermal constraints a console CPU+GPU will be underclocked at launch, so there would be room for dynamic clock adjustment, assuming the adjustments can be done fast enough and there are savings to be had in the first place. For integrated chips it's quite typical that some sort of clock compromise must be reached, as otherwise running all the blocks at their maximum clock speed would cause excessive heat production and failure.

How much do the new Intel chips gain on single-threaded tasks due to dynamic clocking, 10%? How much performance could we gain in a closed console box? Probably more than what Intel gains with its CPUs, as the GPU would be part of the equation too. Designing the whole system for dynamic loads within a power envelope could further increase the computing power we see in a closed-box environment.
 
Overclocking isn't an option as these things already run hot. The way to think of it is throttling down, which I'm surprised they don't do. Have a peak performance for games, but throttle back for low power use especially in menus, dashboard/XMB, playing media etc.

That can be incorporated into any tech these days though, so wouldn't in any way define what hardware should go in.

What about something like Intel's turbo mode, where a developer can selectively increase the clocks of X number of cores whilst reducing the clocks on the rest, in order to deal with certain workloads which may not be parallelisable, or may be inefficient if parallelised (or however you spell it)? Sometimes more single-thread performance is worth the tradeoff of less overall performance, even if it isn't the best tradeoff at all times.
 
Turbo is useful in that it allows a chip to push closer to the power limit for the device. Otherwise, the conservative estimates of what a chip may or may not consume tend to leave performance on the table.

I can see some benefit to an algorithmically derived turbo, which AMD's latest chips have. This is based on unit utilization and a somewhat conservative calculation of how much power the chip would be consuming if unit X is in use but unit Y is not.
If based on utilization, the performance should be more deterministic.
It could get a little unpredictable based on the software mix, such as a few FPU instructions hitting frequently enough that some threshold in the algorithm is tripped and turbo scales back.

A turbo in the vein of Sandy Bridge may not be a good idea, since it is more aggressive and may actually exceed TDP for a short (unpredictable) period of time, depending on measures of power draw such as temperature.
The variability may not be acceptable for a uniform experience.
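
Something like this toy C model is what I picture for the deterministic version: the clock derives purely from which units are busy, never from a temperature reading, so the same code always sees the same frequency. All the wattage and clock numbers here are invented:

```c
#include <stdio.h>

/* Toy utilization-based turbo: estimate power from which units are
 * active and clock up only while the estimate stays under TDP.
 * Unit costs and TDP are invented for illustration. */

#define TDP_WATTS       100
#define BASE_CLOCK_MHZ  3000
#define STEP_MHZ        100
#define WATTS_PER_STEP  5

struct unit_activity {
    int int_cores_busy;   /* each costs 8 W in this model */
    int fpu_busy;         /* FPU adds 15 W when active    */
};

static int estimate_watts(const struct unit_activity *u)
{
    /* Conservative, worst-case cost per busy unit: deterministic,
     * unlike a measured temperature. */
    return u->int_cores_busy * 8 + (u->fpu_busy ? 15 : 0);
}

static int turbo_clock(const struct unit_activity *u)
{
    int clock = BASE_CLOCK_MHZ;
    int watts = estimate_watts(u);

    /* Add turbo steps until the *estimated* draw would exceed TDP.
     * A few stray FPU instructions flipping fpu_busy is exactly the
     * threshold effect that makes this look jumpy to software.     */
    while (watts + WATTS_PER_STEP <= TDP_WATTS) {
        clock += STEP_MHZ;
        watts += WATTS_PER_STEP;
    }
    return clock;
}

int main(void)
{
    struct unit_activity light = { .int_cores_busy = 2, .fpu_busy = 0 };
    struct unit_activity heavy = { .int_cores_busy = 8, .fpu_busy = 1 };
    printf("light load: %d MHz\n", turbo_clock(&light));
    printf("heavy load: %d MHz\n", turbo_clock(&heavy));
    return 0;
}
```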

However, I can see an argument against having turbo in the case of a console, where there is one speed grade and a focus on yields. In the case of desktop chips, the conservative clock is the base clock, with turbo being a non-guaranteed bit of icing on the top.
If a chip cannot meet power and clock targets within device parameters, it can be binned down to a different SKU.
That doesn't look like it would work for a console, where the goal is to get as many chips that meet a consistent spec as possible. A chip either meets the criteria, or it is discarded.
Having a chip that works fine at base, but doesn't hit turbo level 2 is just as broken as one that can't hit stock speeds.
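
To put that last point in code form, a toy contrast (SKU names, clocks and the spec numbers are all invented) between desktop-style binning and a console's pass/fail spec:

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy contrast between desktop and console binning. */

struct chip { int max_stable_mhz; int watts_at_max; };

/* Desktop: a chip that misses the top bin can still be sold
 * as a lower SKU with a lower base/turbo pair. */
static const char *desktop_bin(const struct chip *c)
{
    if (c->max_stable_mhz >= 3400) return "high SKU (full turbo)";
    if (c->max_stable_mhz >= 3000) return "mid SKU (modest turbo)";
    return "low SKU (no turbo)";
}

/* Console: one spec, pass or fail. A part that runs fine at base
 * but can't hold the turbo bin is as useless as one that fails base. */
static bool console_bin(const struct chip *c, int spec_mhz, int spec_watts)
{
    return c->max_stable_mhz >= spec_mhz && c->watts_at_max <= spec_watts;
}

int main(void)
{
    struct chip marginal = { 3100, 95 };
    printf("desktop: %s\n", desktop_bin(&marginal));
    printf("console (3200 MHz spec): %s\n",
           console_bin(&marginal, 3200, 100) ? "ship it" : "discard");
    return 0;
}
```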
 
Turbo is only useful when the rest of the chip is "idling". This is useful with PCs, where code often benefits from higher single-threaded performance due to inadequate threading. I think in consoles this wouldn't be so important. Maybe if the GPU and CPU were on the same die, you could dynamically change their clocks based on game requirements.
 
Throttling certainly makes sense going forwards, as the consoles host a wide range of titles now, including simple 2D download games. No point burning the whole system just for drawing some sprites. Turbo shouldn't matter for consoles, where fixed hardware means games are designed for multiple cores. Powerful single-threaded performance in a console would just be a matter of convenience, and one that can't really be afforded any more. Besides, we already have 3 GHz cores in these machines!
 
Is it possible that Microsoft would use a CPU like this for the next generation:

*3 GHz.
*ARMv8-based.
*2, 3 or 4 CPU modules.
*Each module is composed of 16 custom ARMv8 cores.
*Each core is OoO, with SIMD8 support and FMADD (16 ops/cycle).

The minimum configuration would have a performance of 1.5 TFLOPS on the CPU side, a huge increase for the next gen, but I believe that this type of increase is needed in the future. What do you think?
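
(Quick sanity check on that figure from the stated specs: the minimum configuration is 2 modules × 16 cores = 32 cores, and 32 cores × 16 FLOPs/cycle × 3 GHz = 1,536 GFLOPS ≈ 1.5 TFLOPS, so the arithmetic holds.)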
 
Is it possible that Microsoft would use a CPU like this for the next generation:

*3 GHz.
*ARMv8-based.
*2, 3 or 4 CPU modules.
*Each module is composed of 16 custom ARMv8 cores.
*Each core is OoO, with SIMD8 support and FMADD (16 ops/cycle).

The minimum configuration would have a performance of 1.5 TFLOPS on the CPU side, a huge increase for the next gen, but I believe that this type of increase is needed in the future. What do you think?
What would be the purpose of such a CPU? How big would the thing be? ARMv8, that's the A15, right?
 
I think you are far better off going with an improved Cell-type design than going absolutely bonkers with the number of CPU cores. There is only so much parallelism to exploit.
 
I think you are far better off going with an improved Cell-type design than going absolutely bonkers with the number of CPU cores. There is only so much parallelism to exploit.
Well, that's why I ask: with that much power at hand you might consider moving some graphics processing off the GPU.

I believe that if one were to do so, simpler "pure" vector processors in the same vein as the SPUs would do a better job.
 
Why not just take the original Xbox 360 CPU, add another core, add OoO execution, improve the caches and clock frequency, add more cache, add an on-die memory controller, improve vector operation efficiency, improve hyper-threading, and be done with it?
 
Why not just take the original Xbox 360 CPU, add another core, add OoO execution, improve the caches and clock frequency, add more cache, add an on-die memory controller, improve vector operation efficiency, improve hyper-threading, and be done with it?

Because you want cpu/gpu joined at the hip next gen.
 