Is everything on one die a good idea?

Discussion in 'Architecture and Products' started by punchinthejunk, Jul 21, 2014.

  1. homerdog

    homerdog donator of the year
    Legend Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,294
    Likes Received:
    1,075
    Location:
    still camping with a mauler
    People get it stuck in their head that in some number of years integrated graphics will be as fast as discrete graphics, without considering that in a few years discrete graphics will be an order of magnitude or more faster than it is today. By the time Intel or AMD IGP is on par with GM107, NVIDIA will be rocking Volta or Einstein.

    So until we reach the point where, to the average PC gamer, the IGP is good enough to render nearly indistinguishable dGPU-quality visuals at 60+Hz, the dGPU lives on. We are a long way from that.
     
  2. dnavas

    Regular

    Joined:
    Apr 12, 2004
    Messages:
    375
    Likes Received:
    7
    Indeed. I don't know the performance numbers in detail, but I got a good laugh looking at the price :)
    I'll know that GPUs are in the same place when only the Quadros get faster from year to year....

    Maybe. I get the sense that Apple went and scared Intel into action re:APUs, and once the desktop sales lagged because none of us saw the need to upgrade (among other reasons), it all turned into a non-virtuous cycle. Now tablets are the hotness, laptops are the stodgy business item, and there seems little desktop market beyond workstations. I'm not sure a competitive AMD would change that.

    To bring this back on topic -- I definitely do not think that single-die / APUs are a "good idea" -- not for me or my use-case, anyway. It's practically a necessity in the high-volume tablet/phone markets though. I'm in the process of resigning myself to the notion that I'll be paying significantly more for my next desktop, and taking comfort that some of my first machines were quite expensive as well.
     
  3. Sure, we'll always have discrete graphics cards, at least well into the 22nd century because oh my god don't let this change... I hate change! :roll:

    Likewise, I could go to the Handheld forum and retrieve some posts from 8 years ago with people claiming "it's pointless to invest in ultra-low-power GPUs because we'll always need lots of power to make 3D gaming work"

    Always, never, forever... Gotta love looking at those words in a tech forum.


    Exactly. And looking at roadmaps, stacked memory in APUs should happen at least within the next 3-4 years. Killing the dGPUs (at least for the consumer) during the 6 years following that... isn't hard to imagine at all.

    Also, keeping the GPU very close to the CPU cores should become increasingly efficient for compute loads, which are already playing a part in games.
     
  4. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    There's a decent chance at that point that large chips like Einstein are going to have CPUs on them, so what does that mean as to having everything on the same die?
     
  5. constant

    Newcomer

    Joined:
    Feb 9, 2014
    Messages:
    22
    Likes Received:
    8
    The future is definitely heterogeneous.

    The large dGPU will live on, but as a heterogeneous system with ARM+GPU and/or x86+GPU.

    This is basically already a reality with AMD's APUs (although they're not really in dGPU form factor... yet). NVIDIA has plans to place Denver cores on the dGPU to handle more of the gaming workload there, although these seem to be delayed, as we don't appear to be getting the ARM cores with Maxwell as was previously roadmapped.
     
  6. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    GM107 is a 150mm² GPU. Right now, putting that much graphics silicon on an APU makes little sense because of bandwidth limitations, but once those are gone, there will be nothing keeping APUs from matching GM107 (or the Pascal/Volta/Einstein equivalent).
     
  7. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    Consider this: in 5 years, an APU with stacked memory achieves 300GB/s of bandwidth, while a dGPU with much wider, higher-clocked stacked memory achieves 900GB/s. How is that different from now?
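    A quick back-of-envelope of that comparison, in Python. The 2014 figures are my own rough assumptions (dual-channel DDR3 APU vs. a high-end GDDR5 card), not measured values; the future figures are the ones from the post above:

```python
# Back-of-envelope bandwidth ratios. All figures are illustrative
# assumptions, not measured values.

apu_2014 = 34.0      # GB/s, roughly dual-channel DDR3-2133 on an APU
dgpu_2014 = 336.0    # GB/s, roughly a high-end GDDR5 dGPU of 2014

apu_future = 300.0   # GB/s, hypothetical stacked-memory APU
dgpu_future = 900.0  # GB/s, hypothetical stacked-memory dGPU

print(f"2014 dGPU/APU ratio:   {dgpu_2014 / apu_2014:.1f}x")
print(f"Future dGPU/APU ratio: {dgpu_future / apu_future:.1f}x")
```

    Under these assumptions the dGPU keeps an absolute lead, but the ratio between the two narrows considerably, which is where the two sides of this thread part ways.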
     
  8. homerdog

    homerdog donator of the year
    Legend Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,294
    Likes Received:
    1,075
    Location:
    still camping with a mauler
    I did a Ctrl+F and the only one using the word Always is you. Beware the strawman! :razz:

    Oh just those pesky TDP and power density concerns. GM107 may not have been the best example - how about GK104? And by the time it is feasible to make a GK104+i7-4770 APU, GK104 will be nothing special.

    Of course at some point the dGPU will go away as all things tech do, but that day is a good decade+ off.
     
    #28 homerdog, Jul 22, 2014
    Last edited by a moderator: Jul 22, 2014
  9. Why are you assuming that the dGPU will have a much wider and higher clocked memory?
    If we're talking about heat dissipation, last I checked, APU/CPU coolers can generally be much larger than the dGPU ones.

    Again, having much lower latencies between CPU and GPU in an APU should become very important as well.

    Check what is written in the post. This was a reference to the handheld forums and the "3D in handhelds will never take off" naysayers.
     
  10. Pixel

    Veteran

    Joined:
    Sep 16, 2013
    Messages:
    1,008
    Likes Received:
    477
    So going even beyond the disappearance of the dGPU, eventually we might have a PCB with only one giant monolithic chip? Even memory stacked on the chip.
     
  11. Yes. One PCB with an APU socket, power conversion components, I/O only for peripherals (USB, HDMI, audio out, etc.). I guess even external RAM is bound to disappear eventually when the APUs start carrying enough memory.
    On desktops, Mini-ITX should be the standard form factor by then, IMO.

    Of course, such chips are likely to cost less than the CPU+GPU+RAM discrete equivalents, but not much less since I think the tech companies will take advantage of the cheaper BoM to make more money in the end.
     
  12. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    Not if you're comparing current console CPUs with current desktop CPUs. A quad-core is faster now; it will still be faster in 5 years' time.

    PS: we've also had "AVX2 will be the death of dGPUs" ;)
     
  13. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,426
    Likes Received:
    10,320
    Someday, perhaps... Things are continually being moved onto the CPU/APU/whatever: math co-processors, storage controllers (unthinkable 20 years ago), memory controllers, audio, and low-end video are all migrating off the motherboard and into the CPU/APU, etc.

    There's always going to be a place for discrete components. There are still uses for discrete storage controllers in a small market segment, and for discrete audio in an even smaller one. I'm sure the same will eventually hold true for discrete graphics controllers.

    But that isn't going to be the case for the average consumer, or even for casual gamers. And in the future, who knows whether integrated video will serve the needs of "hard core" gamers. The question isn't so much "if" it will happen but "when" it will happen for the vast majority of consumers, including most gamers.

    Regards,
    SB
     
  14. dnavas

    Regular

    Joined:
    Apr 12, 2004
    Messages:
    375
    Likes Received:
    7
    Oh, I can hope so, because it would be infinitely more interesting to program. There's something that appeals to me about two optimized solutions with a bit of cross-fertilization. But I do think we need to consider the possibility that there's no 'definitely' about it. Discrete is likely to survive under the alternate scenario, but only at quadro-style pricing. Similarly for high-core-count, low-latency cpus.
     
  15. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    Power isn't that much of a problem. If you have a good reason to draw ~150W on an APU, you can dissipate that just as well as you can on a GPU.

    GK104 is big (almost 300mm²) and GPUs of this class will last longer. But 150~200mm² GPUs don't make sense if you already have that much silicon dedicated to graphics on an APU with sufficient bandwidth. And stacked memory means sufficient bandwidth.

    Besides, APUs are sort of converging towards GPUs anyway. What I mean by that is that both Intel and AMD seem to agree that 4 CPU cores are enough. Those cores tend to grow a little bit (in transistor count) but not as fast as processes evolve, which means that 4-core blocks are shrinking. Therefore, the proportion of silicon dedicated to graphics in APUs is increasing.

    Give it a few generations, and PC chips will be 4 tiny CPU cores + a massive GPU and whatever else is necessary (cache, memory controllers, I/O, etc.). AMD's Ontario and Temash/Mullins already look very much like that.
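    A toy sketch of that shrinking-CPU-share argument. The area numbers are purely illustrative assumptions: a fixed 4-core block whose transistor count grows slowly while the die's transistor budget roughly doubles each node:

```python
# Illustrative sketch: if the 4-core CPU block roughly holds its
# transistor budget while the total die budget grows each node, the
# share of silicon left for graphics climbs. All numbers invented.

die_budget = 100.0  # arbitrary transistor-budget units at the first node
cpu_block = 40.0    # 4-core CPU block, same units

for node in ["28nm", "20nm", "14nm"]:
    gpu_share = (die_budget - cpu_block) / die_budget
    print(f"{node}: non-CPU share of die ~ {gpu_share:.0%}")
    die_budget *= 2.0  # assume transistor budget doubles per node
    cpu_block *= 1.2   # assume CPU block grows only slowly
```

    With these made-up growth rates, the non-CPU share climbs from 60% toward roughly 86% in two shrinks, which is the "tiny CPU cores + massive GPU" picture described above.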
     
  16. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    Why are you assuming they will not?!
    Because they can; technology is always pushing boundaries.
    Do you seriously think dGPUs will have the same memory bandwidth as APUs?
    Even with those bigger coolers, APUs can't handle a mid-range GPU alongside a powerful CPU.
     
  17. Yes.
     
  18. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    Allow me, but that is a naive and ridiculous idea. Stacked memory is not a one-time-only feature that gets slapped onto a processor and called a day; it will come in many configurations with varying frequencies, data rates, and power consumption. APUs will get the lower end of the stack; dGPUs will naturally incorporate the higher variants.
     
  19. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    Unless the high end is an APU as well. As 3dilettante pointed out, it is not so much a question of GPUs coming to CPUs but rather the other way around.

    Ultimately, the fate of such devices will be determined by how much CPU power one needs for a particular application relative to GPU power, and whether or not that can be accommodated in a reasonably small amount of die area.

    Then one weighs that inefficiency against the cost of producing another unique chip (which will only address a much smaller subset of the market), and other market factors such as the competitiveness of others' solutions.

    I don't know whether the tradeoff is worth it at 16/14nm, but at 10nm and beyond I imagine a unified product stack will be very tempting indeed.
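    The tradeoff described above can be framed as a toy calculation. Every number below is invented for illustration; the point is only the shape of the comparison, NRE for an extra die versus wasted area carried on every unified chip:

```python
# Toy framing of the unified-vs-separate-die tradeoff.
# All numbers are invented for illustration, not real costs.

nre_extra_die = 50e6     # $ to design and tape out a separate dGPU die
units_highend = 2e6      # units sold into the high-end segment
wasted_area_cost = 15.0  # $/unit of under-used CPU silicon on a unified die

# Cost of serving the high-end with the unified (APU-only) stack:
unified_penalty = wasted_area_cost * units_highend

print(f"Extra-die NRE:        ${nre_extra_die / 1e6:.0f}M")
print(f"Unified-die overhead: ${unified_penalty / 1e6:.0f}M")
```

    Whenever the per-unit overhead times volume stays below the NRE of another unique chip, a unified product stack looks tempting, and shrinking nodes push the NRE side up.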
     
  20. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    Ray tracing, physics and particle simulations, ultra-crazy resolutions (beyond 4K and 6K), multiple monitors, hologram decks, 3D, VR goggles (like the Oculus Rift), etc. The future is stuffed full of crazy things that necessitate dGPUs, and those are just the ones we know about; in 10 years' time there will probably be more, so dGPUs are here to stay. Progress requires more data and thus more processing, not the other way around.
     
    #40 DavidGraham, Jul 23, 2014
    Last edited by a moderator: Jul 23, 2014
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.