Predict: The Next Generation Console Tech

How much power would Xenos have consumed if it were a 256MB add-in board? An X1800 was using something like 150 watts.


We don't know for sure, but the closest comparison might be something like the Radeon 2600 (192 GFLOPS) through the 2900 GT (288 GFLOPS)*, which draw 50 to 150 watts at 65nm (Xenos would run hotter, being 90nm at launch) while delivering roughly the same processing power.

* http://en.wikipedia.org/wiki/Radeon_R600
" The R600 is the first personal computer graphics processing unit (GPU) from ATI based on a unified shader architecture. It is ATI's second generation unified shader design and is based on the Xenos GPU implemented in the Xbox 360 game console, which used the world's first such shader architecture ".


New info: I just found this link* (I don't know how accurate it is), but it suggests that going from 256MB to 512MB does not increase power consumption as much as one might imagine.

http://www.tomshardware.com/reviews/geforce-radeon-power,2122-6.html
 
How come a "half clocked" 6870, which the 6990M essentially is, has more TFLOPs per Megahertz?

Going by AMD's own figures (assuming they are real)... the 6870 reaches 2016 GFLOPS at 900 MHz, while the Radeon 6990M, with the same 1120 stream processors at 715 MHz, reaches 1600 GFLOPS per GPU, so 1600 × 2 ≈ 3.2 TFLOPS for the pair (the 3.3 I put earlier was a mistake, sorry). Those are, roughly, the numbers straight from AMD; a quick sanity check of the arithmetic is sketched below the links.

* http://www.amd.com/us/products/note...md-radeon-6990m/Pages/amd-radeon-6990m.aspx#3

http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units
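
As a rough sanity check on those figures (a minimal sketch; the stream-processor counts and clocks are the ones quoted above, and it assumes the usual 2 FLOPs per ALU per clock for a multiply-add):

Code:
# Theoretical single-precision throughput: stream processors x 2 ops (MAD) x clock.
def gflops(stream_processors, clock_mhz, ops_per_clock=2):
    return stream_processors * ops_per_clock * clock_mhz / 1000.0

hd6870  = gflops(1120, 900)   # ~2016 GFLOPS, matching AMD's 6870 figure
hd6990m = gflops(1120, 715)   # ~1602 GFLOPS per GPU
print(f"HD 6870:  {hd6870:.0f} GFLOPS")
print(f"HD 6990M: {hd6990m:.0f} GFLOPS per GPU, {2 * hd6990m / 1000:.1f} TFLOPS for the pair")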
 
Yes, 100 watts is too high for the early RSX and Xenos; something in the 80 to 100 watt range is the best guess. But here* we can see the power consumption of the GeForce 7800 GTX 512 and 7950 GT, which share some characteristics with RSX (RSX has more transistors, more cache, and access to more RAM than the GTX 512 and 7950 GT) at similar clocks, and they reach somewhere around 80 to 100 watts. Xenos/C1 plus its eDRAM (332/337 million transistors) might draw even more than RSX.


* http://www.xbitlabs.com/articles/graphics/display/geforce7800gtx512_5.html

http://www.guru3d.com/article/geforce-7950-gt-512mb-shootout-review/5

Well, the 7950 GT (90nm and launched just before the PS3) has a peak power consumption of 61.1W in this peak power test, and the entire board probably contains some components that could be removed for the PS3 (or that would also be used by the rest of the system):

http://www.xbitlabs.com/articles/graphics/display/geforce7950gt_3.html#sect0

The 7950 GT also has more ROPs than RSX (twice as many), runs 10% faster (so likely >10% more power consumption on a comparable process), has twice as much RAM, and its RAM runs about 25% faster. So that's 61.1W peak power in a hi-res 3DMark benchmark, and there's a good chance it's drawing more than RSX.
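
To put a very rough number on the clock difference alone (a back-of-envelope sketch; it assumes dynamic power scales roughly linearly with clock at the same voltage and 90nm process, and deliberately ignores the extra ROPs, the larger RAM and the faster RAM clock, all of which would push the 7950 GT higher still):

Code:
# Scale the 7950 GT's measured peak power down by its ~10% clock advantage over RSX.
# This ignores ROP count, RAM size and RAM clock, so the real gap is likely larger.
peak_7950gt_w = 61.1                 # measured peak from the xbitlabs test above
clock_ratio   = 1.10                 # 7950 GT runs about 10% faster than RSX
rsx_upper_estimate = peak_7950gt_w / clock_ratio
print(f"Clock-scaled estimate for RSX: ~{rsx_upper_estimate:.1f} W")   # ~55.5 W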

Whether it's 360, PS3 or WiiU I think people generally have massively overinflated ideas about the power budget of consoles, while also focusing unfairly on the power consumption of expensive, carefully binned mobile parts.
 
Going by AMD's own figures (assuming they are real)... the 6870 reaches 2016 GFLOPS at 900 MHz, while the Radeon 6990M, with the same 1120 stream processors at 715 MHz, reaches 1600 GFLOPS per GPU, so 1600 × 2 ≈ 3.2 TFLOPS for the pair (the 3.3 I put earlier was a mistake, sorry). Those are, roughly, the numbers straight from AMD.

* http://www.amd.com/us/products/note...md-radeon-6990m/Pages/amd-radeon-6990m.aspx#3

http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units

Well then, my mistake... I thought it had much lower clocks.

Whether it's 360, PS3 or WiiU I think people generally have massively overinflated ideas about the power budget of consoles, while also focusing unfairly on the power consumption of expensive, carefully binned mobile parts.

I guess this really comes from early PS3 power usage numbers at the wall, which were in the realm of 200 watts. I am not sure how much power the PS2 chipset uses, or whether it was even running when playing PS3 software, but other than that and the super companion chip, I can't see much else drawing that much power. That just leaves Cell and RSX, and even after subtracting 50 watts for "misc", there are still 150 watts left for those two chips. Cell might draw 90 watts and RSX 60 in this console, but that also means that with a cooler CPU there would easily be enough headroom for a 100 watt GPU.
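
Putting those guesses into numbers (a minimal sketch; the 200 watt wall figure and the 50 watt "misc" allowance are the assumptions from the paragraph above, and PSU losses are ignored):

Code:
# Back-of-envelope launch-PS3 power budget from the at-the-wall figure quoted above.
wall_power_w  = 200                           # early PS3 measured at the wall
misc_w        = 50                            # assumed: southbridge, PS2 chips, drives, RAM, fans...
chip_budget_w = wall_power_w - misc_w         # ~150 W left for Cell + RSX
cell_w        = 90                            # guess for Cell
rsx_w         = chip_budget_w - cell_w        # ~60 W implied for RSX
print(f"Cell + RSX budget: {chip_budget_w} W -> Cell ~{cell_w} W, RSX ~{rsx_w} W")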
 
Well, the 7950 GT (90nm and launched just before the PS3) has a peak power consumption of 61.1W in this peak power test, and the entire board probably contains some components that could be removed for the PS3 (or that would also be used by the rest of the system):

http://www.xbitlabs.com/articles/graphics/display/geforce7950gt_3.html#sect0

The 7950 GT also has more ROPs than RSX (twice as many), runs 10% faster (so likely >10% more power consumption on a comparable process), has twice as much RAM, and its RAM runs about 25% faster. So that's 61.1W peak power in a hi-res 3DMark benchmark, and there's a good chance it's drawing more than RSX.

Whether it's 360, PS3 or WiiU I think people generally have massively overinflated ideas about the power budget of consoles, while also focusing unfairly on the power consumption of expensive, carefully binned mobile parts.

Looking at your numbers and the Tom's Hardware link, I agree: at around 60 watts (not the 80-100 watt range I had assumed before for this GPU), the 7950 GT probably dissipates more than RSX, since it has more ROPs and so on. But we should remember that the 7950 GT has 278* million transistors on 196 mm², while RSX, as we have seen in several comments, has more cache (plus FlexIO for access to XDR) and is listed at at least 300 million transistors and 258 mm²** (roughly the same size as Cell at 90nm). So the 7950 GT and RSX could end up with at least similar numbers.

For Wii, PS3 and X360 power figures we have this link***. But today manufacturers have a lot more experience (or so I pray... fingers crossed) dealing with GPUs in the 50-100 watt range than they did in 2005/2006, and maybe they could handle something like two 6950M or 6990M GPUs, tweaked/customized on the same package, in a closed console box at 28nm and in the 100 watt range.



* http://www.rage3d.com/reviews/video/nvidia7950gt/index.php?p=2
http://maps.thefullwiki.org/GeForce_7_Series

** http://www.edepot.com/playstation3.html#PS3_RSX_GPU

*** http://www.hardcoreware.net/reviews/review-356-2.htm
Xbox 360: 186.5 watts peak, and PS3: 199.7 watts peak.
Very interesting... the X360 almost reaches the PS3, and Xenos probably surpasses the Xenon CPU in wattage (165 million transistors, almost 2/3 of Cell), whereas on PS3 Cell may reach 90 watts and RSX gets the "lower" wattage number. So after all, Xenos plus its eDRAM at 90nm could reach the 80-100 watt range...
 
Well then, my mistake... I thought it had much lower clocks.



I guess this really comes from early PS3 power usage numbers at the wall, which were in the realm of 200 watts. I am not sure how much power the PS2 chipset uses, or whether it was even running when playing PS3 software, but other than that and the super companion chip, I can't see much else drawing that much power. That just leaves Cell and RSX, and even after subtracting 50 watts for "misc", there are still 150 watts left for those two chips. Cell might draw 90 watts and RSX 60 in this console, but that also means that with a cooler CPU there would easily be enough headroom for a 100 watt GPU.


I fully agree here (wow, the super companion chip is very large). It also seems likely that the console manufacturers can deal with numbers like a 100 watt GPU, or something similar, using existing methods of heat dissipation.

And that brings the question back again... do today's console manufacturers think at the level of 150-200+ watts?

Today I think so, if manufacturers want to ship consumers a product that feels genuinely capable (thinking as a consumer here) for a cycle of at least five years, and if they believe they can keep improving their manufacturing processes to reduce TDP/wattage and costs.
 
That answers two questions...

1) Kinect will live on for xb720.

2) xb720 will be dx11 based, meaning we won't have to wait long to find out the details of nextgen. ;)

No offense, but it's not like they can ask them to be proficient in technologies that aren't accessible to the public :p
 
The Windows version of Kinect announced for 2012 is also interesting.
 
How much power would Xenos have consumed if it were a 256MB add-in board? An X1800 was using something like 150 watts.
Geizhals/skinflint lists x1800 XLs at around 60 Watts. You're probably thinking of x1900/x1950, but those are specced way beyond Xenos.
 
Quick info: there was a rumor that MS would stick with the two-model setup, even going so far that the "set-top box" model would be a Kinect-enabled, Netflix, lower-end gaming machine, with the "hardcore" model having the optical drive, HDD, and backwards compatibility.

If they took it that far, what if the lower-end model had a "single" GPU to run Live games and the like, and the "hardcore" model had two? That seems like it could actually work. No difference in architecture, same chips, just a different board. For the people playing only with Kinect and Netflix (and ONLY those forms of gaming and general usage), just how cheap could it be made?
 
Quick info: there was a rumor that MS would stick with the two-model setup, even going so far that the "set-top box" model would be a Kinect-enabled, Netflix, lower-end gaming machine, with the "hardcore" model having the optical drive, HDD, and backwards compatibility.

If they took it that far, what if the lower-end model had a "single" GPU to run Live games and the like, and the "hardcore" model had two? That seems like it could actually work. No difference in architecture, same chips, just a different board. For the people playing only with Kinect and Netflix (and ONLY those forms of gaming and general usage), just how cheap could it be made?

It would be interesting, but I'd think it would make more sense to utilize the existing Xbox 360 architecture for these "lighter" gaming and multimedia experiences, as it will be significantly cheaper to produce.

A variant of this idea may be to utilize a more multi-GPU, multi-CPU architecture for the xb720.

Why?

Binning & Yields.

Suppose the xb720 is a 9 core xcpu and a 4-8 "core" gpu. For backwards compatibility, all they may need is 3 active xcpu cores and 1 active gpu core. These may also only need to run at a fraction of the speed of the higher spec xb720.

Thus, MS wouldn't be throwing away CPUs and GPUs which aren't up to spec on cutting-edge 28nm, and at the same time it would allow MS to freely "experiment" with a cutting-edge manufacturing node.


At the end of the day, they would still have xb360 and xb720 as the only two architectures to support, but the xb360 going forward would essentially be a "gimped" version of the newer xb720 with chips that couldn't cut the mustard as a true xb720.

:cool:
 
Binning & Yields.

Suppose the xb720 is a 9 core xcpu and a 4-8 "core" gpu. For backwards compatibility, all they may need is 3 active xcpu cores and 1 active gpu core.

That is a LOT of wasted silicon if they intend to keep producing the 360 model in volumes.

I think they need to be much closer to each other with regard to used die area to make any business sense.

I don't think the dual-chip model is that bad, and it could still be used together with a yield-binning scheme as well. The crux of a dual-chip model, as I see it, is that you need some high-speed communication between the chips, which will require additional logic and pins, but if that can be kept low, why not?

For example, the PS4 could fairly easily use two Cells (possibly with 8 working SPEs each) via the glueless dual-CPU setup that is part of the Cell architecture. There are already PPE commands that let a PPE start up to 16 SPE threads distributed over the two chips, so from a software point of view it should be easy to scale. But to get a full-speed coherent memory setup it would require some of the XIO ports that are currently used for the RSX, so we will not see this happen without some heavy modification of the current chips. But who knows, maybe a merge of Cell and RSX is in the works at 32nm? Xenon and Xenos were already merged at 45nm.
 
Doing different cpu/gpu levels for different sku's = horrible mistake. Part of the reason people move to consoles is to get away from that.
 
Doing different cpu/gpu levels for different sku's = horrible mistake. Part of the reason people move to consoles is to get away from that.

I agree. I think it would only make sense to use the current XB360 architecture (maybe shrunk to 32nm) for such a set-top box design, and the new design only for Loop (or whatever its name will be).
 
That is a LOT of wasted silicon if they intend to keep producing the 360 model in volumes.

You're missing the point.

If MS are producing the xb720 GPU(s) and CPU(s) anyway, and the yields are such that a good portion of them are unfit for the xb720 but perfectly usable for the xb360, then they are getting better utilization out of the runs they are making while waiting for the yields to improve.

Granted, it would be better to have yields high enough not to be a concern, but as we saw with Cell only having 7 active SPUs instead of 8, yields are likely to be an issue at first.

Another way MS/Sony might want to get around this issue would be to have a use for the GPU outside of the strict specs of a nextgen console.

Using off-the-shelf GPUs would enable them to utilize dies which can't quite cut it in a console, but are fine in a low/mid-range add-on card.

If the die-size/transistor budget is anything like what I think it will be for nextgen consoles (4B trans), the GPU's budget will be a huge part of that (~2.8B trans), as GPUs will be taking over more number-crunching duties from the CPU along with doing more work for graphics. With such a large die budget, the chances of getting each chip perfect are pretty low if it is indeed one large GPU. Splitting the die into two enables significantly better yields, and splitting it again increases the yields even further. I don't imagine they would want to go too far with this approach, but 4 dies on a package is doable, and using one of these xb720 GPUs as the GPU replacement in a future Xbox 360 slim2 would be a good way to utilize leftovers that couldn't meet spec in the xb720.
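
The yield argument can be made concrete with the usual Poisson defect model (a minimal sketch; the idea of splitting one big GPU is from the paragraph above, but the die area and defect density below are placeholder assumptions purely to illustrate the effect):

Code:
import math

# Poisson defect-yield model: fraction of good dies = exp(-defect_density * die_area).
def die_yield(area_mm2, defects_per_mm2):
    return math.exp(-defects_per_mm2 * area_mm2)

D0       = 0.002      # assumed defect density (defects/mm^2) on an immature process
full_gpu = 400.0      # hypothetical area of one big monolithic GPU die, mm^2
half_gpu = full_gpu / 2

y_monolithic = die_yield(full_gpu, D0)   # one defect anywhere scraps the whole GPU
y_half_die   = die_yield(half_gpu, D0)   # good halves can be picked and paired freely

# Good packages per wafer scale with per-die yield: a defect now only scraps half the silicon.
print(f"Monolithic {full_gpu:.0f} mm^2 die yield: {y_monolithic:.1%}")
print(f"Per-half-die yield ({half_gpu:.0f} mm^2):  {y_half_die:.1%}")
print(f"Good packages per wafer improve roughly {y_half_die / y_monolithic:.2f}x when split")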

More expensive than a 28nm APU designed just for being put into an xb360? Absolutely. But I'm sure at some point, utilizing the leftover dies of xb720 GPUs which couldn't make spec DOES make sense.

I just have no idea where that point is, nor if it is even necessary as yields may be good enough to not be a concern.
 
You're missing the point.

If MS are producing the xb720 GPU(s) and CPU(s) anyway, and the yields are such that a good portion of them are unfit for the xb720 but perfectly usable for the xb360, then they are getting better utilization out of the runs they are making while waiting for the yields to improve.

Granted, it would be better to have yields high enough not to be a concern, but as we saw with Cell only having 7 active SPUs instead of 8, yields are likely to be an issue at first.
That was redundancy against a single defect per die, at the cost of 10% unused area. What you are proposing is in an entirely different league.

TheChefO said:
If the die-size/transistor budget is anything like I think they will be for nextgen consoles (4T trans), the GPU's budget will be a huge part of that (~2.8T trans) as they will be taking over more number crunching duties from the CPU, along with more work for graphics. With such a large die budget, the chances of getting each chip perfect are pretty low if it is indeed one large GPU. Splitting the die into two enables significantly better yields, and splitting it again increases the yields even further. I don't imagine they would want to go too far with this approach, but 4 dies on a package is doable and using one of these xb720 gpus for the gpu replacement in a future xbox360 slim2 would be a good way to utilize leftovers that couldn't meet spec in xb720.
Never. Power and cost constraints will keep the chips small enough that multi-GPU (and its inherent inefficiencies) will never even enter the picture.

3 billion transistors today (never mind trillions...) is GTX580 league, which is a 250W part on 40nm, and probably still a 180W part on 28nm. You will not see anything even close to that in a console that will at best launch on a 28nm process.
 
Granted, it would be better to have yields high enough not to be a concern, but as we saw with Cell only having 7 active SPUs instead of 8, yields are likely to be an issue at first.
Wasn't it so that IBM sold a ton of fully-functioning Cells for servers/supercomputers and Sony got the things that had 7 SPEs working? I don't think the yield on Cell was that bad, just that IBM wanted the best for itself :)
 