Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. antwan

    Banned

    Joined:
    May 26, 2012
    Messages:
    200
    Acert93 was using it as some kind of... performance measurement (?) though.
    I like the number of transistors a lot more for that kind of comparison (it still doesn't really relate to performance that much, if at all).

    edit: maybe someone has a list of "fps per mm2" or "transistors per fps" for a number of GPUs; then you would see how much sense that makes (or doesn't make) when comparing architectures.
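    As a rough illustration of how such a list would be computed (all die sizes, transistor counts and fps numbers below are made-up placeholders, not real GPU data):

        # Hypothetical illustration only: placeholder figures, not real GPU data.
        gpus = {
            # name:        (die area mm^2, transistors in millions, avg fps in some benchmark)
            "GPU_A_2005": (258, 300, 30),
            "GPU_B_2012": (212, 2800, 120),
        }

        for name, (area_mm2, xtors_m, fps) in gpus.items():
            print(f"{name}: {fps / area_mm2:.2f} fps per mm^2, "
                  f"{xtors_m / fps:.1f} Mtransistors per fps")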
     
  2. bgassassin

    Regular

    Joined:
    Aug 12, 2011
    Messages:
    507
    I don't see it as straight up raw power. Someone earlier was talking about MS' multiplier and what it possibly meant, I believe, and I see it similarly for the PS4: this "10x" includes the efficiency of modern hardware along with raw power to achieve that target.
     
  3. SKYSONY

    Newcomer

    Joined:
    Jun 12, 2012
    Messages:
    131
    Back when the target specs of the PS4 were leaked, people thought it was going to have an APU + discrete GPU. That would have been nice for the PS4. Because if it is only going to have the APU, with a mid-range GPU and 2 GB of fast RAM, then the specs appear to be quite weak IMO (without knowing much about hardware). Maybe the only hope is what lherre said about the specs still being open to change. But if what sweetavatar said at neogaf was true, and Sony has changed Steamroller cores for Jaguar ones, this would mean they are targeting a mid-to-low range APU.

    I hope at least one system offers interesting performance. These last months we've been reading interesting things about the 720 (something that has not happened with the PS4); I hope MS comes out with good hardware at least.
     
  4. Mianca

    Regular

    Joined:
    Aug 7, 2010
    Messages:
    330
    That wikipedia number is misleading because it seems to be calculated in a weird way.

    As far as my own math goes, RSX is a ~250 GFLOPS chip [(24 * 2 * 4-way pixel ALUs + 8 * 5-way vertex ALUs) @ 550 MHz].

    Also, consider that Xenos is rated at 240 GFLOPS [48 * 5-way unified ALUs @ 500 MHz] - and is generally considered faster than RSX.

    Curiously enough, Pitcairn XT offers ~2,500 GFLOPS. So that should basically be the target spec (although, as I mentioned earlier, Pitcairn @ 1 GHz is way too power hungry to make its way into a console). An optimized and underclocked Oland derivative should probably be capable of reaching those target numbers within a reasonable power budget, though.

    They'd roughly need 24 CUs running @ 800 MHz to end up with the target of ~2,500 GFLOPS.
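    For reference, here's the arithmetic behind those figures (counting a MADD as 2 FLOPs per ALU lane per clock, as is the usual convention, and using the 550 MHz RSX figure quoted above):

        # Peak-FLOPS arithmetic for the figures above (MADD counted as 2 ops per lane per clock).
        def peak_gflops(lanes, clock_ghz, ops_per_lane_per_clock=2):
            return lanes * ops_per_lane_per_clock * clock_ghz

        rsx         = peak_gflops(24 * 2 * 4 + 8 * 5, 0.55)  # ~255 GFLOPS (pixel + vertex ALUs @ 550 MHz)
        xenos       = peak_gflops(48 * 5, 0.5)               # 240 GFLOPS (unified ALUs @ 500 MHz)
        pitcairn_xt = peak_gflops(20 * 64, 1.0)              # 2560 GFLOPS (20 CUs @ 1 GHz)
        target      = peak_gflops(24 * 64, 0.8)              # ~2458 GFLOPS (24 CUs @ 800 MHz)
        print(rsx, xenos, pitcairn_xt, target)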
     
  5. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    34,882
    Location:
    Under my bridge
    Xenos and RSX work in fundamentally different ways, giving Xenos a significant efficiency advantage. All things being equal, two GPUs built around the same sort of design will have their performance defined by the number of transistors (summarised as mm^2 at the same manufacturing process) and clock. Where the compared GPUs differ in design, it's impossible to gauge performance by counting mm^2, transistors, FLOPs, or anything else, other than to have them broadly identify the performance brackets they come under.
     
  6. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,636
    That's the Nvidia marketing number; it bears about as much relation to reality as if Intel marketed Ivy Bridge as being a 10+ TeraOp processor (which it is, for 1-bit ANDs and ORs!).
     
  7. antwan

    Banned

    Joined:
    May 26, 2012
    Messages:
    200
    Great explanation!
    I guess that comparing 2005 and 2012 GPU designs on the basis of mm2 is pretty useless then.
     
  8. onQ

    onQ
    Veteran

    Joined:
    Mar 4, 2010
    Messages:
    1,540
    How often are Dev-Kits updated?
     
  9. ERP

    ERP
    Moderator Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Location:
    Redmond, WA
    It depends.
    With 360 we had Macs, got a graphics card update, and then final boxes.
    There were probably also very small runs of near-final kits that we never saw.

    Generally they get updated if there is value in the update. The Macs were not good indicators of anything in the final hardware, but they ran the prerelease OS; the 9600s were underpowered and did affect development, so they were changed later. After that the leap is to final hardware.
    On PS3 it was basically some variation of the final hardware all the way to release. I only remember 2 kits, but I might be forgetting one.

    It should also be noted that traditionally MS has based its dev machines on retail hardware; Sony hasn't.

    Also, it should be noted these upgrades aren't dropped on a whim. There is a roadmap based on the release schedule, any change to any of the hardware requires additional work by the OS team, and the schedules are usually very tight to ship anything.
     
  10. upnorthsox

    Veteran

    Joined:
    May 7, 2008
    Messages:
    1,845
    In other words, don't expect large scale changes once dev-kits have gone out unless already telegraphed by the platform holder.
     
  11. ERP

    ERP
    Moderator Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Location:
    Redmond, WA
    Things do change, but it's not usually radical.

    The most common case is that for whatever reason they can't manufacture the original design at the original specs, or a part provided by a vendor doesn't meet spec.
    But things change for other reasons too, like reactions to a competitor: MS doubled the memory on 360.
    3DO probably made the biggest change I've ever seen when they added a second CPU to M2, but that never had a release date.
    There are rumors that N64 had a fairly significant change (downgrade), but the only teams affected were the "dream team" members.

    Things are a lot more stable now than they used to be. The original Saturn devkits were the size of a small fridge, were missing half the hardware, ran really hot, and tended to last about 5 minutes before dying, but that was when all the console manufacturers did was ship hardware with badly translated register documentation.

    It should also be noted that not all developers get initial devkits at the same time, and not all developers see all of the devkits that are produced. For example, there will probably be a small run of pre-release hardware used by the OS team, since you can't ship devkits without the OS.
     
  12. bkilian

    Veteran

    Joined:
    Apr 22, 2006
    Messages:
    1,539
    The Xbox GPU (according to a presentation the Xbox guys gave us when we were starting HD DVD development) can and does routinely achieve max throughput, and apparently RSX doesn't. So in real terms, the stated max numbers are misleading. I don't know if current gen GPUs are similarly misleading.
     
  13. archangelmorph

    Veteran

    Joined:
    Jun 19, 2006
    Messages:
    1,551
    Location:
    London
    Do you have a source/citation for that?
     
  14. liolio

    liolio French frog
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,519
    Location:
    Bx, France
    Do we need a source for that? I mean, ain't that the whole point behind the shift to a unified shader architecture?
    Xenos, thanks to the eDRAM, may also suffer from fewer bandwidth bottlenecks?
     
  15. kagemaru

    Veteran

    Joined:
    Aug 23, 2010
    Messages:
    1,350
    Location:
    Ohio
    I thought the biggest complaint regarding eDRAM in the 360 was how the frame buffer had to reside in the eDRAM without any way to bypass it. Isn't there some way to use eDRAM for bandwidth without forcing the frame buffer to sit in the eDRAM? Apologies if this is a dumb question, but some of the dev complaints seem to indicate the eDRAM could have been implemented differently in the 360.

    Not a big deal, but RSX is 500 MHz.

    I'm guessing he's estimating this through the efficiencies gained through a unified shader architecture versus a split vertex/pixel shader model, where you can't tailor your game's shader load specifically to your GPU spec 100% of the time. Meaning at some point your GPU may be underutilized. I could be wrong of course. :razz:
     
  16. ERP

    ERP
    Moderator Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Location:
    Redmond, WA
    On a non-unified GPU it's underutilized at all points: you're either pushing simply-shaded tris, in which case the pixel shaders are underutilized, or you're doing complex shading, in which case your vertex shaders are underutilized.
    Real games do both at different points in a frame; drawing shadows you're not doing any pixel shading, and when you're doing the pretty lighting model, pixel shading is dominant.

    On 360 the eDRAM also makes a difference, since it means the frame buffer memory is never the bottleneck.

    Having said that, I would be surprised if real games saw 100% utilization on a 360 for a significant portion of a frame; there are still other things that gate throughput and cause ALUs to sit idle: texture fetches, triangle setup, number of ROPs, etc.
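    To put toy numbers on that (purely illustrative, not from any real profile): split a frame into a vertex-bound shadow pass and a pixel-bound lighting pass, and a fixed vertex/pixel split spends much of its ALU time idle where a unified pool of the same size wouldn't:

        # Toy model: fixed vertex/pixel split vs a unified pool of the same size.
        # All workload numbers are invented for illustration.
        VERTEX_ALUS, PIXEL_ALUS = 8, 24
        UNIFIED_ALUS = VERTEX_ALUS + PIXEL_ALUS

        # (vertex work, pixel work) per pass, in arbitrary ALU-cycles
        passes = {"shadow pass": (160, 0), "lighting pass": (40, 480)}

        split_time   = sum(max(v / VERTEX_ALUS, p / PIXEL_ALUS) for v, p in passes.values())
        unified_time = sum((v + p) / UNIFIED_ALUS for v, p in passes.values())
        print(f"split: {split_time} cycles, unified: {unified_time} cycles")
        # split: 40.0 cycles, unified: 21.25 cycles -- the unified pool never leaves
        # vertex or pixel ALUs idle while there is still work of the other kind.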
     
  17. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    34,882
    Location:
    Under my bridge
    Depends what you're comparing. For cost purposes, taking 1 mm^2 of transistors to be about the same price no matter what year it's made, a console with a given silicon area will cost about the same. Hence the idea that if 300 mm^2 was the cost-effective limit for a $300 console in year A, 300 mm^2 would also be the limit for a $300 console in year B. That might be an unrealistic assumption, and other factors are at play, but it seems an okay ballpark reference to me.

    Another reference point is Moore's law. If transistor density increases 2x every 18 months, then a 10x increase in power in the same chip area happens every 5-ish years, which is our expected console generation, and the OP's (and most of our) original expectation.
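    The compounding arithmetic behind that, for anyone who wants to check:

        # Moore's-law compounding: doubling every 18 months over a 5-year (60-month) gap.
        months, doubling_period = 60, 18
        print(f"{2 ** (months / doubling_period):.1f}x")   # ~10.1x transistors in the same area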
     
  18. antwan

    Banned

    Joined:
    May 26, 2012
    Messages:
    200
    Sorry, but I don't quite get the logic:
    You are comparing (estimated) die sizes of 90nm 2005 GPU architectures against 28nm 2012 ones. The 28nm fab process allows for a lot more transistors on the same die size, not to mention the GPUs themselves having improved vastly (to (over)simplify it, even the same number of transistors would yield a lot more performance in the 2012 GPUs).
    So I believe your conclusion based on the comparison of the die sizes is not correct.

    A chip that is 50% smaller on the outside could have 5 times the performance (2012 vs 2005).
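    As a rough sanity check of that claim (ideal scaling only; real-world density gains are smaller, and architectural improvements come on top of it):

        # Ideal area scaling between nodes: density grows roughly with the square
        # of the feature-size ratio. Real processes fall short of this.
        old_nm, new_nm = 90, 28
        print(f"~{(old_nm / new_nm) ** 2:.1f}x transistors per mm^2")   # ~10.3x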
     
  19. Sonic

    Sonic Senior Member
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,833
    Location:
    San Francisco, CA
    5 times the performance isn't going to cut it, and I think that is part of Acert's point. If they can go bigger and still be profitable then they should do it, as that will help them in the long run. It would be truly sad to see these companies release sub-par hardware, especially in the graphics department, and then get rocked when their biggest competitor comes in with overpriced hardware and still manages to sell millions in minutes on day one.

    But for real, any new details about the PS4 APU? The current rumors are that the cores have changed from Bulldozer to Jaguar. Are Jaguar cores significantly less powerful than Bulldozer cores?
     