Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. tunafish

    Regular

    Joined:
    Aug 19, 2011
    Messages:
    627
    Likes Received:
    414
    AFAIK the PS3's Cell only had fast 32-bit floats (its double precision ran roughly an order of magnitude slower), making it practically useless for most scientific computing tasks. (And much saner for gaming.)
     
  2. hoho

    Veteran

    Joined:
    Aug 21, 2007
    Messages:
    1,218
    Likes Received:
    0
    Location:
    Estonia
    Didn't the version with proper 64-bit floats emerge sometime in 2008?
     
  3. AzBat

    AzBat Agent of the Bat
    Legend

    Joined:
    Apr 1, 2002
    Messages:
    7,747
    Likes Received:
    4,845
    Location:
    Alma, AR
    The new rumor is from our very own GrandMaster...

    http://www.gamesindustry.biz/articles/digitalfoundry-next-gen-xbox-in-2012-analysis?page=1

    He also delves into the idea of dual GPUs...

    The dual GPUs idea sounds plausible. Just not sure it's likely. However, I am coming around to the idea of the two-SKU approach. Makes sense that they would use a 360 that's updated & streamlined for the DVD-less set-top box SKU. Then have the high-end SKU be the 720 with hard drive, DVD and/or Blu-ray.

    Tommy McClain
     
  4. function

    function None functional
    Legend

    Joined:
    Mar 27, 2003
    Messages:
    5,854
    Likes Received:
    4,400
    Location:
    Wrong thread
    Was Grandmaster's source TheChefO? :p
     
  5. Acert93

    Acert93 Artist formerly known as Acert93
    Legend

    Joined:
    Dec 9, 2004
    Messages:
    7,782
    Likes Received:
    162
    Location:
    Seattle
    Dual GPU in current designs = duplication of memory, with all the added drawbacks of board complexity, power, cost, etc.

    I don't think it is a horrible idea for the reasons given (I echoed them long ago), but it does pose hurdles. If they do go with an SI part, maybe they could invest in a cross-chip traffic/memory controller for shared memory. Then going with 2x130mm^2 chips has manufacturing benefits. That said, there could be structural losses within the GPUs, like schedulers and whatnot that would be duplicated, plus the dedicated logic needed to make the cross-GPU link work. I wonder if, with a dedicated chip design, a memory controller / side-port link could be worked out efficiently enough to minimize such issues.

    Part of me says another option, if it could be pulled off, would be to use the PC market for binning of usable parts:

    10% Top Bin = $500 tier PC parts / no defects, best speed/TDP bin
    11-50% Bin = Xbox 3 Bin / Mid-PC bin, 80% frequency, 10% block disabled
    51-100% Bin = Mid-Range & Low-End PC Binning, various disabled blocks, reduced frequencies

    This would only be helpful for the 12-18 months until new DX models come out. But if MS could coordinate this with a chip maker, it could be a boost to the chip maker: "Gaming GPU as used in the Xbox 3", or even better, "Faster Xbox Chip". And while it would only be helpful the first year, it would allow more usable parts. So even if the off-bin chips are sold at a loss by MS (let's say ATI doesn't need 1M extra parts at $40 a chip, so they buy them at $30, BUT MS loses $10 instead of $40 on each chip unusable for the Xbox), they could come out ahead until the process matures to the 80% usable rate and/or the next process shrink. Of course, a PC-compatible part is going to be larger than a console-specific one.
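
    A minimal sketch of that salvage math, collapsing the three bins into console-grade vs. everything else; the bin fraction, chip cost, and salvage price are just the illustrative figures above, not real data:

    ```python
    # Hypothetical salvage economics for sharing one die between PC and console SKUs.
    # All figures (bin fraction, chip cost, salvage price) are the illustrative
    # numbers from the post above.

    CHIPS_NEEDED = 1_000_000   # console-grade chips required
    CONSOLE_YIELD = 0.40       # the 11-50% bin usable as Xbox 3 parts
    CHIP_COST = 40             # dollars paid to fab each chip
    SALVAGE_PRICE = 30         # dollars ATI pays for an off-bin chip it can resell

    def loss_scrapped(chips_fabbed: int) -> int:
        """Every chip outside the console bin is a total write-off."""
        rejects = int(chips_fabbed * (1 - CONSOLE_YIELD))
        return rejects * CHIP_COST

    def loss_salvaged(chips_fabbed: int) -> int:
        """Off-bin chips are sold into PC SKUs at a discount instead."""
        rejects = int(chips_fabbed * (1 - CONSOLE_YIELD))
        return rejects * (CHIP_COST - SALVAGE_PRICE)

    fabbed = int(CHIPS_NEEDED / CONSOLE_YIELD)  # dies fabbed to hit the console quota
    print(f"scrapped rejects: ${loss_scrapped(fabbed):,}")   # $60,000,000
    print(f"salvaged rejects: ${loss_salvaged(fabbed):,}")   # $15,000,000
    ```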

    I bet there is a lot on the table...
     
  6. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    I can't take credit for the dual GPU solution as the concept was brought about much earlier in this very thread!

    I believe Acert93 was the first to propose the possibility.

    All I've been doing is proposing reasons for which they might consider a multi-GPU solution.

    edit: And voilà! There's Acert! Happy Thanksgiving!
     
  7. Dominik D

    Regular

    Joined:
    Mar 23, 2007
    Messages:
    782
    Likes Received:
    22
    Location:
    Wroclaw, Poland
    Also, certification and testing would become much more involved.
     
  8. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil
    Very important aspect: size matters a lot!

    Your post makes me think about complexity, size, etc. It really would be big trouble combining doubled memory access/bandwidth for 2 GPUs, but what if they could put both under some kind of memory controller/crossbar, or whatever would be more efficient than the current Radeon HD 6990? Is that possible?

    About GPU size: in fact, anything bigger than what we found in the Xbox 360 and PS3, specifically around the 250mm² RSX (90nm), would be very difficult to place in a closed console box. Please forgive my dreaming mode ON: seeing what we have today, perhaps the best option under these limitations (power/wattage/size) is 2 Juniper-like chips (Radeon HD 6770/6870M, 800 stream processors each), currently 170mm² at 40nm, or about 250mm² total at 28nm (2 x 120/125mm² Juniper-like). Depending on the (under)clock, that becomes something like a 70-watt "double GPU".
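
    A back-of-envelope check on that shrink; the 0.72 area-scaling factor is an assumption picked to reflect that real shrinks recover less than the ideal (28/40)² ≈ 0.49:

    ```python
    # Back-of-envelope die shrink: ideally, area scales with the square of the
    # feature-size ratio; real shrinks recover noticeably less than that.
    JUNIPER_AREA_40NM = 170.0    # mm^2, Juniper (Radeon HD 6770-class) die at 40nm

    ideal = JUNIPER_AREA_40NM * (28 / 40) ** 2   # ~83 mm^2, optimistic best case
    realistic = JUNIPER_AREA_40NM * 0.72         # ~122 mm^2, assumed practical factor

    print(f"ideal 28nm shrink:     {ideal:.0f} mm^2")
    print(f"realistic 28nm shrink: {realistic:.0f} mm^2 (x2 = {2 * realistic:.0f} mm^2 total)")
    ```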

    (I was dreaming of something with the power of 2 x 6990M on the same package, but for all the debating, I can dream as much as I like; after all, what actually comes to us will probably be 40% of what we imagined.)
     
  9. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    Hmm ... Multiple Radeon 6770's ... you don't say! :razz:


     
  10. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil
    Sorry, I didn't see your post, forgive me. But two GPUs is already very difficult (more complex memory access, caches to hide some latencies, "perfect" efficiency syncing two processors, etc.), and three is too much size and wattage even counting on 28nm (3 x 125 = 375mm² and 100-110+ watts).
     
  11. Rolf N

    Rolf N Recurring Membmare
    Veteran

    Joined:
    Aug 18, 2003
    Messages:
    2,494
    Likes Received:
    55
    Location:
    yes
    You don't need multiple GPU dice to guard against manufacturing defects. You can just add redundancy onto the chip and disable portions that carry a defect. Or disable a working portion to have performance parity.

    ATI does this exact thing with the Radeon 6870/6850 (which are the same physical chip, where the lesser SKUs have portions disabled).

    NVIDIA does this exact thing with GTX580/GTX570 (which are the same physical chip, where the lesser SKUs have portions disabled).

    IBM/Toshiba/Sony have done this exact thing with Cell BE for the PS3 (8 SPUs built, one permanently disabled for redundancy).
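
    As a minimal sketch of why that one spare SPU matters, here is the redundancy effect under a simple Poisson defect model; the defect density and block areas are assumed illustrative numbers, not IBM/Sony data:

    ```python
    # One spare SPU rescues yield: Poisson defect model over per-block areas.
    # Defect density and areas below are illustrative assumptions only.
    from math import comb, exp

    D = 0.4             # defects per cm^2 (assumed)
    SPU_AREA = 0.15     # cm^2 per SPU (assumed)
    OTHER_AREA = 1.0    # cm^2 of non-redundant logic (assumed)

    p_spu = exp(-D * SPU_AREA)      # probability a single SPU is defect-free
    p_other = exp(-D * OTHER_AREA)  # the non-redundant logic must be clean

    def die_yield(built: int, needed: int) -> float:
        """Die is good if at least `needed` of `built` SPUs are defect-free."""
        p_enough = sum(comb(built, k) * p_spu**k * (1 - p_spu)**(built - k)
                       for k in range(needed, built + 1))
        return p_other * p_enough

    print(f"all 8 SPUs must work: {die_yield(8, 8):.1%}")  # ~41%
    print(f"7 of 8 must work:     {die_yield(8, 7):.1%}")  # ~62%
    ```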

    The idea that now, somehow, splitting a relatively small GPU into multiple pieces has become a better guard against manufacturing defects is complete and utter hogwash. If that were the case, where are ATI's reference designs coupling two lower-end chips on one board? 120mm² per die has been quoted. These boards do not exist because they make no sense whatsoever. No one benefits from them.

    Multi-GPU is an inefficient enthusiast-only crutch to produce more performance than you can manufacture on a single die. It's a waste of transistors for anything below that. Never mind the significant software overhead to get any performance scaling out of it.

    It's also certainly not a developers' dream feature as quoted above. It's sort of acceptable in a "doesn't necessarily suck as much as you might think" way, but there is no benefit over single GPU with the same aggregate specs. Only drawbacks. Less performance. Higher manufacturing cost. Complete nonsense.
     
  12. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    So the argument here is that a 2.8b transistor chip built on a 28nm node will have better yields than a pair of 1.4b transistor chips on that same 28nm node.

    I strongly disagree.

    I see what you're saying in building in redundancy, but with a chip that big, that is a lot of redundancy!



    I don't think anyone would argue with you about one 2.8B-transistor GPU being better than (2) 1.4B-transistor GPUs, but the issue at hand is: can you get high enough yields out of a chip that big on 28nm, running cool enough to fit in a console TDP, without it costing an arm and a leg?

    I'm not saying it's impossible, I just don't see how.
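
    For what it's worth, here is a toy Poisson-yield comparison of the two options. Defect density, die area, and wafer area are assumed numbers for an immature 28nm node, and the model ignores the on-die redundancy Rolf N described, which would narrow the gap:

    ```python
    # Toy yield comparison: one big die vs. two half-size dies per console.
    # Poisson model: P(die is defect-free) = exp(-defect_density * die_area).
    # All numbers below are illustrative assumptions for a young 28nm process.
    from math import exp, floor

    D = 0.5              # defects per cm^2 (assumed)
    BIG_AREA = 3.5       # cm^2 for the monolithic 2.8B-transistor die (assumed)
    WAFER_AREA = 700.0   # cm^2 of usable area on a 300mm wafer (approx.)

    def consoles_per_wafer(die_area: float, dies_per_console: int) -> int:
        candidates = floor(WAFER_AREA / die_area)   # dies that fit on one wafer
        good = candidates * exp(-D * die_area)      # expected defect-free dies
        return floor(good / dies_per_console)       # good dies pair freely after test

    print("monolithic:", consoles_per_wafer(BIG_AREA, 1))      # ~34 consoles/wafer
    print("two dies:  ", consoles_per_wafer(BIG_AREA / 2, 2))  # ~83 consoles/wafer
    ```

    The split only wins in this toy model because dies are tested before pairing, so a defect scraps half the silicon instead of all of it; build equivalent redundancy into the one big die and much of the gap closes.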
     
  13. Rolf N

    Rolf N Recurring Membmare
    Veteran

    Joined:
    Aug 18, 2003
    Messages:
    2,494
    Likes Received:
    55
    Location:
    yes
    2.8B is enthusiast-level high end. Never mind that we don't know NVIDIA's/ATI's yields on the current chips in that class; you're looking at an estimated power draw of 180W for the GPU alone, which is so far beyond the budget that you can drop the whole idea entirely.
     
  14. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    It's 8x the current generation transistor budget:

    500M x 8 = 4B

    I'm not expecting anything less than this for next gen.


    How this budget is broken up is debatable, but with the current trend of GPUs taking more of the workload off the CPU, I'd bet on 3/4ths of the budget, or roughly 3B transistors, going to the GPU.
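
    Spelled out (the 8x multiplier and the 3/4 GPU share are this post's own assumptions):

    ```python
    # The budget math from above: 8x current-gen transistors, ~3/4 spent on the GPU.
    CURRENT_GEN = 500e6   # ~500M transistors in a current-gen console (assumed baseline)
    MULTIPLIER = 8        # assumed generational jump
    GPU_SHARE = 0.75      # assumed GPU share of the budget

    total = CURRENT_GEN * MULTIPLIER   # 4.0B transistors
    gpu = total * GPU_SHARE            # 3.0B for the GPU
    rest = total - gpu                 # 1.0B for the CPU and everything else

    print(f"total {total / 1e9:.1f}B | GPU {gpu / 1e9:.1f}B | CPU+rest {rest / 1e9:.1f}B")
    ```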



    As for wattage, I'm not sure how they get around this, but there are possibilities outlined in this very thread. Binning is a possibility. Even more so if the GPU is broken up into multiple pieces instead of one monster chip.

    Another possibility is not confining the console to a micro machine. Let it breathe in a standard 17-inch-wide AV case.

    Maybe both, maybe neither, I don't know, but anything less than that transistor budget would be a waste of everyone's time.
     
  15. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    Heat and power do not scale similarly.
     
  16. Rolf N

    Rolf N Recurring Membmare
    Veteran

    Joined:
    Aug 18, 2003
    Messages:
    2,494
    Likes Received:
    55
    Location:
    yes
    Splitting helps with neither wattage nor cooling, not logically and not in practice. Look at the EVGA dual 560 Ti card: its cooler is even bigger and more obnoxious than those of the highest-end single-GPU cards.
    Binning for "mobile" style power consumption reduces yields, just as binning for higher clock frequencies reduces yields. Only a sliver of any given run is good enough to cut it as a mobile part. Which is fine in PCs, since high-end mobile GPUs are super niche. Console GPUs aren't.
    Prepare to have your time wasted then.
     
  17. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    Did I suggest otherwise?
     
  18. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    Cooling is easier over a larger surface than a smaller one.

    I never said wattage would be absolutely less, but it could be with binning.

    Indeed, but binning two low-power 1.4B-transistor GPUs would produce higher yields than binning a single low-power 2.8B-transistor GPU.

    Of course, an engineer could just take your approach and say "can't do it", slap in whatever fits in a single GPU package in a small box, and call it a day.



    But then we'd have the xb720 competing with the Wuu instead of the PS4. Which I'm sure some folks around here would be thrilled with.
     
  19. MarkoIt

    Regular

    Joined:
    Mar 1, 2007
    Messages:
    392
    Likes Received:
    0
    I don't see where the current Xbox 360 would find its target in the two-model line-up rumor. The Xbox 360 can still generate a lot of profit. It would make sense for MS to launch a revised Xbox 360, a truly slim design with a new SoC at 32nm, maybe as early as CES 2012, and then launch the next generation next fall. The Xbox 360 would target mainstream/casual gamers in the $149-249 range, and the next Xbox would be sold at a much higher $399-499+ range, targeting hard-core gamers.
    I think people are willing to pay more for entertainment and electronics today than a few years ago.
     
  20. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    Indeed.

    It would help them sell consoles.

    A more targeted approach on the marketing front: xb360+Kinect helps them home in on the casual market, while top-end hardware in the xb720 helps win back the hardcore gamer.

    For this reason, I expect much out of xb720.

    Perhaps more than others on this board.
     