Is everything on one die a good idea?

People get it stuck in their heads that in some number of years integrated graphics will be as fast as discrete graphics, without considering that in a few years discrete graphics will be an order of magnitude or more faster than they are today. By the time Intel or AMD IGPs are on par with GM107, NVIDIA will be rocking Volta or Einstein.

So until we reach the point where, for the average PC gamer, the IGP is good enough to render visuals nearly indistinguishable from dGPU quality at 60+Hz, the dGPU lives on. We are a long way from that.
 
I mean, compare the Westmere-EX 10-core Xeon E7-2870 to the Ivy Bridge-EX 15-core Xeon E7-2880 v2. Now you're looking at a 3x performance increase at the same TDP.

Indeed. I don't know the performance numbers in detail, but I got a good laugh looking at the price :)
I'll know that GPUs are in the same place when only the Quadros get faster from year to year....

If AMD could compete with that stuff, maybe we'd see Intel offer it to desktop users at sub-facemelting prices.

Maybe. I get the sense that Apple went and scared Intel into action re:APUs, and once the desktop sales lagged because none of us saw the need to upgrade (among other reasons), it all turned into a non-virtuous cycle. Now tablets are the hotness, laptops are the stodgy business item, and there seems little desktop market beyond workstations. I'm not sure a competitive AMD would change that.

To bring this back on topic -- I definitely do not think that single-die / APUs are a "good idea" -- not for me or my use-case, anyway. It's practically a necessity in the high-volume tablet/phone markets though. I'm in the process of resigning myself to the notion that I'll be paying significantly more for my next desktop, and taking comfort that some of my first machines were quite expensive as well.
 
Sure, we'll always have discrete graphics cards, at least well into the 22nd century because oh my god don't let this change... I hate change! :rolleyes:

Likewise, I could go to the Handheld forum and retrieve some posts from 8 years ago with people claiming "it's pointless to invest in ultra-low-power GPUs because we'll always need lots of power to make 3D gaming work".

Always, never, forever... Gotta love looking at those words in a tech forum.


The main reason discrete GPUs are still alive is bandwidth. When you're limited to a 128-bit bus of DDR3, you're just not going to get much out of any integrated graphics, even if you throw a lot of computing resources at it.

Stacked memory will change that, and only the biggest dedicated GPUs will still make sense.
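
As a rough back-of-the-envelope check of the bandwidth argument, here's a minimal sketch; the parts and transfer rates below are illustrative assumptions, not figures from this thread:

```python
# Rough peak-bandwidth arithmetic: bus width (bits) / 8 * transfer rate (GT/s) = GB/s.
def peak_bandwidth_gbs(bus_width_bits, transfer_rate_gtps):
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * transfer_rate_gtps

# Illustrative configurations (assumed figures, not measurements):
configs = {
    "IGP on 128-bit DDR3-2133 (dual channel)":           (128, 2.133),
    "GM107-class dGPU, 128-bit GDDR5 @ 5.4 GT/s":        (128, 5.4),
    "Single stack of first-gen HBM, 1024-bit @ 1 GT/s":  (1024, 1.0),
}

for name, (width, rate) in configs.items():
    print(f"{name}: ~{peak_bandwidth_gbs(width, rate):.0f} GB/s")
# Prints roughly 34, 86 and 128 GB/s respectively.
```

Even one assumed stack of wide, stacked memory would put an APU past a 128-bit GDDR5 card, which is the point about bandwidth being the limiter rather than compute.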

Exactly. And looking at roadmaps, stacked memory in APUs should happen within the next 3-4 years, if not sooner. dGPUs being killed off (at least for the consumer) in the 6 years after that... isn't hard to imagine at all.

Also, keeping a GPU very close to the CPU cores should become increasingly efficient for compute loads, which are already playing their part in games.
 
People get it stuck in their heads that in some number of years integrated graphics will be as fast as discrete graphics, without considering that in a few years discrete graphics will be an order of magnitude or more faster than they are today. By the time Intel or AMD IGPs are on par with GM107, NVIDIA will be rocking Volta or Einstein.

There's a decent chance that, at that point, large chips like Einstein are going to have CPUs on them, so what does that mean for having everything on the same die?
 
The future is definitely heterogeneous.

The large dGPU will live on, but it will instead be a heterogeneous system with ARM+GPU and/or x86+GPU.

This is basically already a reality with AMD's APUs (although they're not really in dGPU form factor... yet). NVIDIA has plans to place Denver cores on the dGPU to handle more of the gaming work there, although those seem to be delayed, as we don't appear to be getting these ARM cores with Maxwell as previously roadmapped.
 
People get it stuck in their heads that in some number of years integrated graphics will be as fast as discrete graphics, without considering that in a few years discrete graphics will be an order of magnitude or more faster than they are today. By the time Intel or AMD IGPs are on par with GM107, NVIDIA will be rocking Volta or Einstein.

So until we reach the point where, for the average PC gamer, the IGP is good enough to render visuals nearly indistinguishable from dGPU quality at 60+Hz, the dGPU lives on. We are a long way from that.

GM107 is a 150mm² GPU. Right now, putting that much graphics silicon on an APU makes little sense because of bandwidth limitations, but once those are gone, there will be nothing keeping APUs from matching GM107 (or the Pascal/Volta/Einstein equivalent).
 
GM107 is a 150mm² GPU. Right now, putting that much graphics silicon on an APU makes little sense because of bandwidth limitations, but once those are gone, there will be nothing keeping APUs from matching GM107 (or the Pascal/Volta/Einstein equivalent).
Consider this: in 5 years, an APU with stacked memory achieving 300 GB/s of bandwidth, and a dGPU with much wider, higher-clocked stacked memory achieving 900 GB/s. How is that different from now?
 
Sure, we'll always have discrete graphics cards, at least well into the 22nd century because oh my god don't let this change... I hate change! :rolleyes:

Likewise, I could go to the Handheld forum and retrieve some posts from 8 years ago with people claiming "it's pointless to invest in ultra-low-power GPUs because we'll always need lots of power to make 3D gaming work".

Always, never, forever... Gotta love looking at those words in a tech forum.

I did a Ctrl+F and the only one using the word "always" is you. Beware the strawman! :p

GM107 is a 150mm² GPU. Right now, putting that much graphics silicon on an APU makes little sense because of bandwidth limitations, but once those are gone, there will be nothing keeping APUs from matching GM107 (or the Pascal/Volta/Einstein equivalent).

Oh just those pesky TDP and power density concerns. GM107 may not have been the best example - how about GK104? And by the time it is feasible to make a GK104+i7-4770 APU, GK104 will be nothing special.

Of course at some point the dGPU will go away as all things tech do, but that day is a good decade+ off.
 
Consider this: in 5 years, an APU with stacked memory achieving 300 GB/s of bandwidth, and a dGPU with much wider, higher-clocked stacked memory achieving 900 GB/s. How is that different from now?

Why are you assuming that the dGPU will have a much wider and higher clocked memory?
If we're talking about heat dissipation, last I checked the APU/CPU coolers can generally be much larger than the dGPU ones.

Again, having much lower latencies between CPU and GPU in an APU should become very important as well.

I did a Ctrl+F and the only one using the word Always is you. Beware the strawman! :p

Check what is written in the post. This was a reference to the handheld forums and the "3D in handhelds will never take off" naysayers.
 
So going even beyond the disappearance of the dGPU, eventually we might have a PCB with only one giant monolithic chip? Even the memory stacked on the chip.
 
So going even beyond the disappearance of the dGPU, eventually we might have a PCB with only one giant monolithic chip? Even the memory stacked on the chip.

Yes. One PCB with an APU socket, power conversion components, and I/O only for peripherals (USB, HDMI, audio out, etc.). I guess even external RAM is bound to disappear eventually, once APUs start carrying enough memory.
On desktops, Mini-ITX should be the standard by then, IMO.

Of course, such chips are likely to cost less than the CPU+GPU+RAM discrete equivalents, but not much less since I think the tech companies will take advantage of the cheaper BoM to make more money in the end.
 
That advantage will slowly become less important over time. An 8-core CPU will likely last "better".
Not if you're comparing current console CPUs with current desktop CPUs. A quad core that is faster now will still be faster in 5 years' time.

PS: we've also had "AVX2 will be the death of dGPUs" ;)
 
So going even beyond the disappearance of the dGPU, eventually we might have a PCB with only one giant monolithic chip? Even the memory stacked on the chip.

Someday, perhaps... Things are continually being moved onto the CPU/APU/whatever. Math co-processors, storage controllers (unthinkable 20 years ago), memory controllers, audio, and low-end video are all being moved off the motherboard and into the CPU/APU, etc.

There's always going to be a place for discrete components. There are still uses for discrete storage controllers in a small market segment, and still uses for discrete audio in an even smaller one. I'm sure the same will eventually hold true for discrete graphics controllers.

But that isn't going to be the case for your average consumer, or even for casual gamers. And in the future, who knows whether integrated video will serve the needs of "hard core" gamers. The question isn't so much "if" it will happen but "when" it will happen for the vast majority of consumers, including most gamers.

Regards,
SB
 
The future is definitely heterogeneous.

Oh, I can hope so, because it would be infinitely more interesting to program. There's something that appeals to me about two optimized solutions with a bit of cross-fertilization. But I do think we need to consider the possibility that there's no "definitely" about it. Discrete is likely to survive under the alternate scenario, but only at Quadro-style pricing. Similarly for high-core-count, low-latency CPUs.
 
Oh just those pesky TDP and power density concerns. GM107 may not have been the best example - how about GK104? And by the time it is feasible to make a GK104+i7-4770 APU, GK104 will be nothing special.

Of course at some point the dGPU will go away as all things tech do, but that day is a good decade+ off.

Power isn't that much of a problem. If you have a good reason to draw ~150W on an APU, you can dissipate that just as well as you can on a GPU.

GK104 is big (almost 300mm²) and GPUs of this class will last longer. But 150~200mm² GPUs don't make sense if you already have that much silicon dedicated to graphics on an APU with sufficient bandwidth. And stacked memory means sufficient bandwidth.

Besides, APUs are sort of converging towards GPUs anyway. What I mean by that is that both Intel and AMD seem to agree that 4 CPU cores are enough. Those cores tend to grow a little bit (in transistor count) but not as fast as processes evolve, which means that 4-core blocks are shrinking. Therefore, the proportion of silicon dedicated to graphics in APUs is increasing.

Give it a few generations, and PC chips will be 4 tiny CPU cores + a massive GPU and whatever else is necessary (cache, memory controllers, I/O, etc.). AMD's Ontario and Temash/Mullins already look very much like that.
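
To put rough numbers on that shrinking-CPU-block argument, here's a toy sketch; every figure in it is an invented assumption for illustration, not a real die measurement:

```python
# Toy model: constant die budget, CPU block shrinks each node, GPU gets the rest.
# All numbers are invented for illustration, not real die measurements.
total_die_mm2 = 180.0   # assume a roughly constant die-size budget per generation
cpu_block_mm2 = 60.0    # assumed 4-core CPU block (plus cache) at the first node

for node in ["22nm", "14nm", "10nm"]:
    gpu_and_uncore_mm2 = total_die_mm2 - cpu_block_mm2
    share = cpu_block_mm2 / total_die_mm2
    print(f"{node}: CPU block {cpu_block_mm2:5.1f} mm2 ({share:.0%}), "
          f"GPU + uncore {gpu_and_uncore_mm2:5.1f} mm2")
    # Assume ~2x density per node but only ~1.3x more CPU transistors,
    # so the CPU block's area shrinks to about 65% of what it was.
    cpu_block_mm2 *= 1.3 / 2.0
```

Under those assumed scaling factors the CPU share of the die drops from about a third to around 15% in two nodes, which is the "4 tiny cores + a massive GPU" picture.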
 
Why are you assuming that the dGPU will have a much wider and higher clocked memory?
Why are you assuming they will not?! Because they can; technology is always pushing boundaries.
Do you seriously think dGPUs will have the same memory bandwidth as APUs?
If we're talking about heat dissipation, last I checked the APU/CPU coolers can generally be much larger than the dGPU ones.
Even with that, they can't handle a mid-range GPU with a powerful CPU.
 
Allow me, but that is a naive and ridiculous idea. Stacked memory is not a one-time-only feature that will be slapped onto a processor and called a day; it will come in many configurations with varying frequencies, data rates and power consumption. APUs will get the lower end of the stack, and dGPUs will naturally incorporate the higher variations.
 
DavidGraham said:
Allow me, but that is a naive and ridiculous idea. Stacked memory is not a one-time-only feature that will be slapped onto a processor and called a day; it will come in many configurations with varying frequencies, data rates and power consumption. APUs will get the lower end of the stack, and dGPUs will naturally incorporate the higher variations.
Unless the high end is an APU as well. As 3dilettante pointed out, it is not so much a question of GPUs coming to CPUs but rather the other way around.

Ultimately, the fate of such devices will be determined by how much CPU power one needs for a particular application relative to GPU power, and whether or not that can be accommodated in a reasonably small amount of die area.

Then one weighs that inefficiency against the cost of producing another unique chip (which will only address a much smaller subset of the market), and other market factors such as the competitiveness of others' solutions.

I don't know whether the tradeoff is worth it at 16/14nm, but at 10nm and beyond I imagine a unified product stack will be very tempting indeed.
 
Ultimately, the fate of such devices will be determined by how much CPU power one needs for a particular application relative to GPU power, and whether or not that can be accommodated in a reasonably small amount of die area.

Then one weighs that inefficiency against the cost of producing another unique chip (which will only address a much smaller subset of the market), and other market factors such as the competitiveness of others' solutions.
Ray tracing, physics and particle simulations, ultra-crazy resolutions (beyond 4K and 6K), multiple monitors, hologram decks, 3D, VR goggles (like the Oculus Rift), etc. The future is stuffed full of crazy things that necessitate dGPUs, and those are just the things we know about; in 10 years' time there will probably be more, so dGPUs are here to stay. Progress requires more data and thus more processing, not the other way around.
 