Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
    Strictly speaking Xenon was equalled (in peak GFLOP terms) in January 2007 with the launch of the Q6600. Both CPUs are theoretically capable of pushing 76.8 GFLOPS.

    Cell wasn't exceeded for another four years, until Sandy Bridge launched in January 2011 with AVX. At that point a 2500K was capable of a 211.2 GFLOPS peak vs 204.8 for the PS3 implementation of Cell.

    The highest end Sandy Bridge-E today can push 316.8 GFLOPS.
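The peak figures above all come from the same back-of-the-envelope formula: cores × FLOPs per core per cycle × clock. A minimal sketch, assuming the usual per-cycle throughputs (VMX128 4-wide FMA = 8 flops/cycle for Xenon; SSE 4-wide add + 4-wide mul = 8 for the Q6600; AVX 8-wide add + 8-wide mul = 16 for the 2500K):

```python
def peak_gflops(cores, flops_per_cycle, clock_ghz):
    # Theoretical peak: every vector unit issuing its maximum every cycle.
    return cores * flops_per_cycle * clock_ghz

# Xenon: 3 cores, 4-wide FMA (8 flops/cycle), 3.2 GHz -> 76.8
xenon = peak_gflops(3, 8, 3.2)
# Q6600: 4 cores, SSE add + mul (8 flops/cycle), 2.4 GHz -> 76.8
q6600 = peak_gflops(4, 8, 2.4)
# i5-2500K: 4 cores, AVX add + mul (16 flops/cycle), 3.3 GHz -> 211.2
i5_2500k = peak_gflops(4, 16, 3.3)
print(xenon, q6600, i5_2500k)
```

Real workloads never sustain these numbers, of course; the formula only bounds what the vector units could do with perfectly scheduled code.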
     
  2. Acert93

    Acert93 Artist formerly known as Acert93
    Legend

    Joined:
    Dec 9, 2004
    Messages:
    7,782
    Likes Received:
    162
    Location:
    Seattle
    Thanks pjb, I stand corrected. The lack of progress in core counts and flops, with the focus on IPC and on adding a GPU, has left PC CPUs lagging IMO. Two- and four-core CPUs dominate; I guess the problem is chicken-and-egg. You need good examples of software that needs many cores, but until software asks for them Intel/AMD have little prompting them to offer such. And we won't get software pushing that way until there is hardware to target.
     
  3. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
    Yes, it does seem CPU performance has been significantly slower to progress than GPU performance. But then I guess that's down to the fact that CPU code isn't particularly parallelizable, and thus it can't just scale with Moore's law like GPUs do. CPUs have to get smarter rather than wider, and that's seemingly a lot more difficult.

    Hell, if CPU code were parallelizable then we'd probably just have 1000-core Pentiums by now ;)
     
  4. RudeCurve

    Banned

    Joined:
    Jun 1, 2008
    Messages:
    2,831
    Likes Received:
    0
    Larrabee says hello...:wink:
     
  5. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
    Indeed! But it was designed to run GPU code rather than CPU code, so it's kind of a case in point ;)
     
  6. RudeCurve

    Banned

    Joined:
    Jun 1, 2008
    Messages:
    2,831
    Likes Received:
    0
    But it could run x86 CPU code as well. There would have been no point in building Larrabee out of x86 cores if it couldn't run x86 software. Larrabee is a CPU/GPU hybrid.
     
  7. Acert93

    Acert93 Artist formerly known as Acert93
    Legend

    Joined:
    Dec 9, 2004
    Messages:
    7,782
    Likes Received:
    162
    Location:
    Seattle
    While I don't doubt that making game engines and whatnot run across many threads is very difficult, we are already seeing developers turn the corner--maybe not as much on the PC, but definitely on consoles (ironic, eh?). There really is no running away from using a job model across many cores. This was known way back in 2004 when the consoles were being discussed: serial CPU performance had hit a wall. The pace of serial performance gains has been at a crawl for almost a decade. It is sad when a new process, a new design, and an inflated silicon footprint bring excitement over 15% gains in IPC.

    The multicore future has been here for a long time. Like it or not. As someone else mentioned memory is also a big part of this (look how slow that has moved).

    So I say CPUs had better get wider, and fast, or we are going to stay stuck in this rut. But I think Intel wants this. They have won the serial performance war. AMD committed to APUs when they bought ATI, so there really is no pressure. But the only way to find best practices, develop new multicore-friendly languages and tools, and start advancing on this front is to get the hardware out...

    Or for the consoles to push forward with core-heavy designs and start pushing software in a direction that utilizes the hardware. Once there is software that can use those kinds of resources there will be more pressure on Intel to offer such. As it stands they have no interest in providing an 8 or 12 core desktop chip when it *undervalues their lucrative server market.* But this is the same market that has shipped tens of millions, maybe hundreds of millions, of quad core PCs with IGP-class GPUs.

    I am not confident it will work out of the box, but the concept of AMD's HSA should also help long term. Using the shaders on an APU as a giant SIMD engine (or however they are going to market it) offers a chance to offload some major work, and giving tasks that benefit from a much wider chip a big enough resource pool (a 4x bump in flops in the first HSA models over the peak flops of a high end CPU, much more over the pedestrian models) should also encourage this. Of course, with the PC market so fragmented and 90% of sales at the much lower end, it will take time. A lot of time. Unless someone in the console business decides there is more to be done with CPUs. Or the fruit may hang so high, and the ROI for consumers be so low, that they just say meh--why should we lose our shirt when the CPU industry just doesn't care?
     
  8. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    hardware is getting out, just relatively slowly. we have GCN, Kepler and Maxwell attacking the problem from the GPU side, and Intel's Larrabee follow-up from the CPU side.

    it's not the big thing right now because it's hard, and not only on the software programming side. you can buy "tile processors" from a few vendors, where 64 small cores (or some other number) are arranged in a grid and pass data to each other.
    so the pitch says "64 cores in 20 watts!" or something.
    but as you may imagine, the cores in the middle of the grid are starved for data.

    so rather than go through all that--design a PCB or order an evaluation board, and hire PhDs to program it--you can just put a Sandy Bridge on a $50 motherboard with everything (ethernet, sata, serial interface, usb etc.)
     
  9. Acert93

    Acert93 Artist formerly known as Acert93
    Legend

    Joined:
    Dec 9, 2004
    Messages:
    7,782
    Likes Received:
    162
    Location:
    Seattle
    If MS or Sony were looking at NV's and AMD's current architectures for GPGPU, it seems AMD has a pretty wide lead right now with GCN over Kepler. Kaotik posted this in another thread and noted: "suurempi on parempi" means a bigger bar is better, while "pienempi on parempi" means a smaller bar is better.

    So, say, if Sony's PS4 had an APU like the A8-3850 plus a discrete GPU, and wanted to offer the APU as an evolutionary take on Cell (you can offload some physics as well as post-process and pre-process tasks to the APU's GPU), it would seem AMD has the most relevant architecture right now. Oh, and they also have a CPU, a relatively good one per core, to go along with it.
     
  10. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    Saw this on GAF concerning the Wii U:

    http://www.neogaf.com/forum/showthread.php?t=469930&page=32
     
  11. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    that's interesting; nvidia might still have an edge with software, and with the fact that you write in CUDA's latest version.

    but there's a problem with these benchmarks: you have to rewrite the applications for the GTX 680 for them to make sense.
    it's not like SMP CPUs, where you can flip a compile switch and have things mostly optimized.
     
  12. novcze

    Regular

    Joined:
    Jan 5, 2007
    Messages:
    689
    Likes Received:
    211
    APU + GPU with some sort of asymmetrical CrossFire is something that came to my mind when I first heard about an APU in the PS4. One could use the GPU part of the APU for GPGPU work, or for pure graphics work in CrossFire with the discrete GPU.

    I also thought the GPU part of the APU could be used to emulate the SPUs, but that is unlikely, as someone on this forum said.
     
  13. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil
    Excellent post. I couldn't put it better: however favorably we imagine the efficiency of the APU, these initial reports of alleged SDKs at the 1-1.25 TFLOP level are disappointing from every point of view (developers, gamers; even competitors won't have much pushing them to accelerate their technology...).

    (We're talking 1-1.25 TFLOPs of shader power for 2013/2014! Even at sustained maximum efficiency... that is very little for consoles that should last at least 5 years.)
     
  14. TheWretched

    Regular

    Joined:
    Oct 7, 2008
    Messages:
    830
    Likes Received:
    23
    Timothy Lottes (designer of FXAA et al) expects next gen to be about 6 times as powerful as the 360. That is just a tad faster than today's very high end laptop GPUs. I find this quite a small leap, considering the 7+ year gap in technology.
     
  15. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    Because of the power wall and issues with process shrinks, we can't expect the technology curve to carry on to the exponential degree we are used to.
     
  16. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,661
    Likes Received:
    1,114
    Six times the performance of Xenos is roughly a 7850 (240 GFLOPS vs. 1.8 TFLOPS). Sounds reasonable to me.

    Cheers
     
  17. DuckThor Evil

    Legend

    Joined:
    Jul 9, 2004
    Messages:
    5,996
    Likes Received:
    1,062
    Location:
    Finland
    Well let's hope MS's "roughly" also ends up closer to 7.5x instead of 4.5x :)
     
  18. french toast

    Veteran

    Joined:
    Jan 5, 2012
    Messages:
    1,667
    Likes Received:
    9
    Location:
    Leicestershire - England
    Even a modern 1TF design would be 6 times :smile:
     
  19. anexanhume

    Veteran

    Joined:
    Dec 5, 2011
    Messages:
    2,078
    Likes Received:
    1,535
    Indeed. The TDP sounds reasonable too, given it will likely be at least partially based on Sea Islands if it debuts holiday 2014. Peak 7850 TDP is around 100W, leaving 80 to 90W for the CPU and the remainder for other peripherals in a 200W budget.
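The power budget above is simple subtraction, but writing it out shows how little headroom is left for everything else. A quick sketch (all figures are the guesses from this post, not any official spec; 85W is just the midpoint of the 80-90W CPU estimate):

```python
# Hypothetical next-gen console power budget (assumed figures, not a spec)
total_budget_w = 200   # guessed wall-power envelope for the whole console
gpu_w = 100            # peak TDP of a 7850-class GPU
cpu_w = 85             # midpoint of the 80-90 W CPU guess
peripherals_w = total_budget_w - gpu_w - cpu_w
print(peripherals_w)   # watts left for memory, I/O, optical drive, fans, etc.
```

With only ~15W remaining, bumping either chip's TDP would eat straight into the budget for the rest of the box.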
     
  20. TheWretched

    Regular

    Joined:
    Oct 7, 2008
    Messages:
    830
    Likes Received:
    23
    I was thinking more of "6x across the board", not something as meaningless as flops. But I might have been off anyway, as I was thinking of my 6870 as "at least 6x the power".
     