Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil

    We don't know for sure, but if we want a point of comparison it might be something like the Radeon 2600 (192 GFLOPS) through the 2900 GT (288 GFLOPS)*, which draw roughly 50 to 150 watts at 65nm (Xenos runs "hotter", being 90nm at launch) while delivering more or less the same processing power.

    * http://en.wikipedia.org/wiki/Radeon_R600
    " The R600 is the first personal computer graphics processing unit (GPU) from ATI based on a unified shader architecture. It is ATI's second generation unified shader design and is based on the Xenos GPU implemented in the Xbox 360 game console, which used the world's first such shader architecture ".


    New info: I just found this link* but don't know how accurate it is; it suggests that going from 256MB to 512MB doesn't increase power consumption as much as you might imagine.

    http://www.tomshardware.com/reviews/geforce-radeon-power,2122-6.html
     
  2. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil
    Going by AMD's own figures (assuming they are real...): the 6870 reaches 2016 GFLOPS at 900 MHz, and the Radeon 6990M, with the same 1120 stream processors but at 715 MHz, reaches about 1600 GFLOPS, so 1600 * 2 = ~3.2 TFLOPS for a pair (I wrote 3.3 earlier by mistake, sorry). Those are roughly AMD's own numbers.*

    * http://www.amd.com/us/products/note...md-radeon-6990m/Pages/amd-radeon-6990m.aspx#3

    http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units
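
    For reference, those theoretical peaks follow from the usual stream processors x 2 FLOPs (MAD) x clock formula; a minimal Python sketch:

    [code]
    # Theoretical peak = stream processors * 2 FLOPs per clock (MAD) * clock.
    def peak_gflops(stream_processors, clock_mhz):
        return stream_processors * 2 * clock_mhz / 1000.0

    print(peak_gflops(1120, 900))      # HD 6870  -> 2016.0 GFLOPS
    print(peak_gflops(1120, 715))      # HD 6990M -> 1601.6 GFLOPS
    print(2 * peak_gflops(1120, 715))  # two 6990Ms -> ~3203 GFLOPS (~3.2 TFLOPS)
    [/code]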
     
  3. function

    function None functional
    Legend Veteran

    Joined:
    Mar 27, 2003
    Messages:
    5,135
    Likes Received:
    2,248
    Location:
    Wrong thread
    Well, the 7950 GT (90nm and launched just before the PS3) peaks at 61.1W in this power consumption test, and the entire board probably contains some stuff that could be dropped for the PS3 (or that would instead be handled by the rest of the system):

    http://www.xbitlabs.com/articles/graphics/display/geforce7950gt_3.html#sect0

    The 7950 GT also has more ROPs than RSX (twice as many), runs 10% faster (so likely >10% more power consumption on a comparable process), has twice as much RAM, and that RAM runs about 25% faster. So that's 61.1W peak power in a hi-res 3DMark benchmark, and there's a good chance it's drawing more than RSX.
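
    As a very rough illustration only (assuming dynamic power scales roughly with clock on the same 90nm process, and ignoring the extra ROPs and the faster/larger memory entirely):

    [code]
    # Crude scaling of the 7950 GT's measured peak down by the ~10% clock difference.
    # This is a loose upper-bound style estimate for RSX: the 7950 GT's extra ROPs
    # and memory mean the real gap is probably bigger.
    p_7950gt = 61.1          # W, xbitlabs peak measurement linked above
    clock_ratio = 1 / 1.10   # RSX runs roughly 10% slower

    print(f"crude RSX estimate: {p_7950gt * clock_ratio:.1f} W")  # ~55.5 W
    [/code]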

    Whether it's 360, PS3 or WiiU I think people generally have massively overinflated ideas about the power budget of consoles, while also focusing unfairly on the power consumption of expensive, carefully binned mobile parts.
     
  4. TheWretched

    Regular

    Joined:
    Oct 7, 2008
    Messages:
    830
    Likes Received:
    23
    Well then, my mistake... I thought it had much lower clocks.

    I guess this really comes from early PS3 power usage numbers at the wall, which were in the realm of 200 watts. I'm not sure how much power the PS2 chipset used, or whether it was even running when playing PS3 software, but other than that and the super companion chip I can't see much else drawing serious power. That leaves Cell and RSX, and even after subtracting 50 watts for "misc" there are still 150 watts for those two chips. Cell might draw 90 watts and RSX 60 in this console, but that also means that with a cooler CPU there's easily enough headroom for a 100 watt GPU.
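
    Laying that arithmetic out explicitly (all of these are the assumed round numbers from above, not measurements):

    [code]
    # Rough early-PS3 power budget, using the figures discussed above.
    wall_power = 200   # W, early PS3 draw measured at the wall
    misc       = 50    # W, allowance for drives, RAM, I/O, PSU losses, etc.
    cell       = 90    # W, guess for Cell

    rsx_budget = wall_power - misc - cell
    print(f"left for RSX: {rsx_budget} W")                               # 60 W
    print(f"GPU headroom with a ~50 W CPU: {wall_power - misc - 50} W")  # 100 W
    [/code]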
     
  5. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil
    Seeing your numbers and the Tom's Hardware link, I agree: at around 60 watts (not the 80/100 watt range I assumed before for this GPU) the 7950GT probably dissipates more than the RSX, since it has more ROPs etc. But remember that the 7950GT has 278* million transistors in 196 mm², while the RSX, as we have seen in several comments, has more cache (plus FlexIO to access XDR) and is listed as reaching at least 300 million transistors and 258 mm²** (roughly the same size as Cell at 90nm). So the 7950GT and RSX could have at least broadly similar numbers.

    For Wii, PS3 and X360 power figures we have this link***. But today the manufacturers have a lot more experience (or so I pray... crossing fingers here) dealing with GPUs in the 50/100 watt range than they did in 2005/2006, and maybe they could handle something like two tweaked/customized 6950M or 6990M GPUs on the same package in a closed console box at 28nm, in the ~100 watt range.



    * http://www.rage3d.com/reviews/video/nvidia7950gt/index.php?p=2
    http://maps.thefullwiki.org/GeForce_7_Series

    ** http://www.edepot.com/playstation3.html#PS3_RSX_GPU

    *** http://www.hardcoreware.net/reviews/review-356-2.htm
    Xbox 360: 186.5 watts peak and PS3: 199.7 watts peak.
    Very interesting... the X360 almost reaches the PS3, and Xenos probably surpasses the Xenon CPU (165 million transistors, almost 2/3 of Cell) in wattage, whereas on the PS3 Cell may reach 90 watts while RSX has the "low" wattage number. After all, Xenos plus its eDRAM at 90nm could perhaps reach the 80/100 watt range...
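
    A quick density check on those transistor/die-size figures (the numbers are just the ones quoted above, so treat them as approximate):

    [code]
    # Transistor density from the quoted figures (approximate).
    chips = {
        "7950 GT (G71)": (278e6, 196.0),   # transistors, die area in mm^2
        "RSX":           (300e6, 258.0),
    }

    for name, (transistors, area_mm2) in chips.items():
        print(f"{name}: {transistors / 1e6 / area_mm2:.2f}M transistors per mm^2")
    # ~1.42M/mm^2 vs ~1.16M/mm^2 -- RSX's larger die isn't packed any denser,
    # consistent with the extra area going to cache/FlexIO rather than more ALUs.
    [/code]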
     
    #8345 Heinrich4, Nov 23, 2011
    Last edited by a moderator: Nov 24, 2011
  6. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil

    I fully agree here (wow, the super companion chip is very large). It's also likely the console manufacturers can handle something like a 100 watt GPU with existing methods of heat dissipation.

    And that raises the question again... do today's console manufacturers think at the level of 150/200+ watts?

    Today I think so, if manufacturers want to ship consumers a product that stays genuinely effective (in the consumer's mind) over a cycle of at least five years, and if they believe they can keep improving their manufacturing processes to reduce TDP/wattage and costs.
     
    #8346 Heinrich4, Nov 23, 2011
    Last edited by a moderator: Nov 24, 2011
  7. DopeyFish

    Newcomer

    Joined:
    Jun 20, 2004
    Messages:
    134
    Likes Received:
    1
    No offense, but it's not like they can ask them to be proficient in technologies that aren't accessible to the public :p
     
  8. Arwin

    Arwin Now Officially a Top 10 Poster
    Moderator Legend

    Joined:
    May 17, 2006
    Messages:
    17,674
    Likes Received:
    1,194
    Location:
    Maastricht, The Netherlands
    The Windows version of Kinect announced for 2012 is also interesting.
     
  9. Rolf N

    Rolf N Recurring Membmare
    Veteran

    Joined:
    Aug 18, 2003
    Messages:
    2,494
    Likes Received:
    55
    Location:
    yes
    Geizhals/skinflint lists x1800 XLs at around 60 Watts. You're probably thinking of x1900/x1950, but those are specced way beyond Xenos.
     
  10. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    Touché!
     
  11. Silenti

    Regular

    Joined:
    May 25, 2005
    Messages:
    457
    Likes Received:
    41
    Quick info: there was a rumor that MS would stick with the two-model setup, even going so far that the "set top box" model would be a Kinect-enabled, Netflix, lower-end gaming machine, and the "hardcore" model would have the optical drive, HDD, and backwards compatibility.

    If they took it this far, what if the lower-end model had the "single" GPU to run Live games and the like, and the "hardcore" model had two? That seems like it could actually work: no difference in architecture, same chips, just a different board. For the people playing with Kinect and Netflix (and ONLY those forms of gaming and general usage), just how cheap could it be made?
     
  12. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    It would be interesting, but I'd think it would make more sense to use the existing Xbox 360 architecture, as it will be significantly cheaper to produce, to provide these "lighter" gaming and multimedia experiences.

    A variant of this idea would be to use a multi-GPU, multi-CPU architecture for the xb720.

    Why?

    Binning & Yields.

    Suppose the xb720 is a 9 core xcpu and a 4-8 "core" gpu. For backwards compatibility, all they may need is 3 active xcpu cores and 1 active gpu core. These may also only need to run at a fraction of the speed of the higher spec xb720.

    Thus, MS wouldn't be throwing away cpus and gpus which aren't up to spec on cutting edge 28nm, and at the same time, it would allow for MS to freely "experiment" with a cutting edge manufacturing node.


    At the end of the day, they would still have xb360 and xb720 as the only two architectures to support, but the xb360 going forward would essentially be a "gimped" version of the newer xb720 with chips that couldn't cut the mustard as a true xb720.

    :cool:
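
    As a toy illustration of the salvage idea (the per-core survival probability is a made-up parameter, and cores are assumed to fail independently):

    [code]
    from math import comb

    # Chance that at least k of n cores come out working (simple binomial model).
    def p_at_least(n, k, p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p = 0.90  # hypothetical per-core survival probability on a young 28nm process

    full_spec = p_at_least(9, 9, p) * p_at_least(4, 4, p)  # all 9 CPU + all 4 GPU cores
    salvage   = p_at_least(9, 3, p) * p_at_least(4, 1, p)  # >=3 CPU + >=1 GPU core

    print(f"dies fit for a full xb720:        {full_spec:.1%}")  # ~25%
    print(f"dies fit for at least xb360 duty: {salvage:.1%}")    # ~100%
    [/code]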
     
    #8352 TheChefO, Nov 25, 2011
    Last edited by a moderator: Nov 25, 2011
  13. Crossbar

    Veteran

    Joined:
    Feb 8, 2006
    Messages:
    1,821
    Likes Received:
    12
    That is a LOT of wasted silicon if they intend to keep producing the 360 model in volumes.

    I think they need to be much closer to each other with regard to used die area to make any business sense.

    I don't think the dual chip model is that bad, and it could still be used together with a yield binning scheme as well. The crux of a dual chip model, as I see it, is that you need some high speed communication between the chips, which will require additional logic and pins, but if that can be kept low, why not?

    For example, the PS4 could fairly easily use two Cells (each possibly with 8 working SPEs) via the glueless dual-CPU setup that is part of the Cell architecture. There are already PPE commands that let a PPE start up to 16 SPE threads distributed over the two chips, so from a software point of view it should be easy to scale. But to get a full speed coherent memory setup it would require some of the XIO ports that are currently used for the RSX, so we will not see this happen without some heavy modification of the current chips. But who knows, maybe a merge of Cell and RSX is in the works at 32nm? Xenon and Xenos were already merged at 45nm.
     
  14. AlphaWolf

    AlphaWolf Specious Misanthrope
    Legend

    Joined:
    May 28, 2003
    Messages:
    8,359
    Likes Received:
    216
    Location:
    Treading Water
    Doing different CPU/GPU levels for different SKUs = horrible mistake. Part of the reason people move to consoles is to get away from that.
     
  15. Crossbar

    Veteran

    Joined:
    Feb 8, 2006
    Messages:
    1,821
    Likes Received:
    12
    Apart from that it would be kind of pointless if you continue to sell a prev gen console.
     
  16. stiftl

    Newcomer

    Joined:
    Jun 24, 2006
    Messages:
    107
    Likes Received:
    2
    I agree. I think it would only make sense to use the current XB360 architecture (maybe shrunk to 32nm) in such a set-top-box design, and the new design only for Loop (or whatever its name will be).
     
  17. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    You're missing the point.

    If MS are producing the xb720 GPU(s) and CPU(s) anyway, and yields are such that a good portion of them are unfit for the xb720 but plenty useful for the xb360, then they are getting better utilization of the runs they are making while waiting for yields to improve.

    Granted, it would be better to have yields high enough not to worry about, but as we saw with Cell shipping with only 7 active SPEs instead of 8, yields are likely to be an issue at first.

    Another way MS/Sony might want to get around this issue would be to have a use for the gpu outside of the strict specs of a nextgen console.

    Using off the shelf gpus would enable them to utilize dies which can't quite cut it in a console, but are fine in a low/mid-range add-on card.

    If the die-size/transistor budget is anything like I think it will be for next-gen consoles (~4B transistors), the GPU's share will be a huge part of that (~2.8B transistors), as GPUs will be taking over more number crunching duties from the CPU along with more graphics work. With such a large die budget, the chances of getting each chip perfect are pretty low if it is indeed one large GPU. Splitting the die in two enables significantly better yields, and splitting it again increases yields even further. I don't imagine they would want to go too far with this approach, but 4 dies on a package is doable, and using one of these xb720 GPUs as the GPU replacement in a future Xbox 360 Slim 2 would be a good way to use leftovers that couldn't meet spec in the xb720.

    More expensive than a 28nm APU designed just to go into an xb360? Absolutely. But I'm sure at some point, utilizing the leftover xb720 GPU dies which couldn't make spec DOES make sense.

    I just have no idea where that point is, nor if it is even necessary as yields may be good enough to not be a concern.
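
    To put rough numbers on the die-splitting point, a minimal sketch using the classic Poisson yield model (the defect density and die area are made-up, purely illustrative values):

    [code]
    from math import exp

    # Classic Poisson yield model: per-die yield = exp(-D * A),
    # with D in defects per cm^2 and A the die area in cm^2.
    def die_yield(area_cm2, defects_per_cm2):
        return exp(-defects_per_cm2 * area_cm2)

    D = 0.5            # defects/cm^2 -- made-up figure for an immature 28nm process
    total_area = 4.0   # cm^2 of GPU silicon per console -- likewise illustrative

    for n_dies in (1, 2, 4):
        y = die_yield(total_area / n_dies, D)
        print(f"{n_dies} die(s) of {total_area / n_dies:.1f} cm^2 each: per-die yield {y:.1%}")
    # -> ~13.5%, ~36.8%, ~60.7%.  Since only known-good dies get packaged, the
    # usable fraction of wafer silicon tracks the per-die yield, which is why
    # splitting one huge GPU into smaller dies helps so much early in a process.
    [/code]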
     
    #8357 TheChefO, Nov 25, 2011
    Last edited by a moderator: Nov 25, 2011
  18. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    Agreed.

    xb360 & xb720 are enough to carry forward.
     
  19. Rolf N

    Rolf N Recurring Membmare
    Veteran

    Joined:
    Aug 18, 2003
    Messages:
    2,494
    Likes Received:
    55
    Location:
    yes
    That was redundancy against a single defect per die, at the cost of 10% unused area. What you are proposing is in an entirely different league.

    Never. Power and cost constraints will keep the chips small enough that multi-GPU (and its inherent inefficiencies) will never even enter the picture.

    3 billion transistors today (never mind trillions...) is GTX 580 league, which is a 250W part on 40nm, and probably still a 180W part on 28nm. You will not see anything even close to that in a console that will at best launch on a 28nm process.
     
  20. hoho

    Veteran

    Joined:
    Aug 21, 2007
    Messages:
    1,218
    Likes Received:
    0
    Location:
    Estonia
    Wasn't it so that IBM sold a ton of fully-functioning Cells for servers/supercomputers and Sony got the things that had 7 SPEs working? I don't think the yield on Cell was that bad, just that IBM wanted the best for itself :)
     