Will there be 300W Discrete GPUs in 5 years? 10?

http://jonpeddie.com/download/media/slides/JPR_Slides_from_GDC_Press_Conf_2015.pdf

Slide 23 is the clincher really.

Slide 21 is also interesting, because PC is only "maintaining" its scale gap over console and mobile because power consumption has ~tripled since 2005. One could use that as an argument for 500W discrete GPUs in 2025.
I think the power draw observation is the crucial one. Lithographic advances for logic are largely driven by mobile platforms. If a SoC/GPU is allowed, say, 4W maximum, how would performance grow with power draw from there?
In graphics you could at best hope for linear scaling with silicon area (not really, but close enough for back-of-the-envelope estimations), so say a factor of 5-10. On top of that, you can push frequencies a bit, which increases power draw depending on where on the voltage/frequency curve you lie, roughly as f^3 or at best f^2. So for something like the Titan X, you could estimate a factor of ten for die area, growing to 40W, and frequencies just over doubling for a further factor of 4-9, giving a power draw of 160-360W. Meaning that the very highest end discrete GPUs can be assumed to maintain a lead of roughly a factor of 20 over high-end phone SoCs. That actually turns out to be a ballpark-accurate estimate, and sustainable over time, if relative die sizes remain as they are now and we don't care too much about dGPUs realistically lagging a bit on nodes, since getting good yields on 600mm2 dies is non-trivial.
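To make that arithmetic explicit, here's a minimal back-of-the-envelope sketch in Python; the 4W baseline, the 10x area factor, and the ~2.1x frequency factor are illustrative assumptions taken from the paragraph above, not measured figures.

```python
# Back-of-the-envelope dGPU vs. phone SoC scaling, following the estimate above.
# All inputs are illustrative assumptions, not measured data.

base_power_w = 4.0    # assumed power budget of a high-end phone SoC GPU
area_factor = 10.0    # assumed die-area ratio (Titan X-class die vs. phone SoC GPU)
freq_factor = 2.1     # assumed clock ratio, "just over doubling"

# Performance: optimistically linear in area, linear in frequency.
perf_factor = area_factor * freq_factor            # ~21x lead over the phone SoC

# Power: linear in area, between f^2 and f^3 in frequency.
power_low_w  = base_power_w * area_factor * freq_factor ** 2   # ~176 W
power_high_w = base_power_w * area_factor * freq_factor ** 3   # ~370 W

print(f"Estimated performance lead: ~{perf_factor:.0f}x")
print(f"Estimated power draw: {power_low_w:.0f}-{power_high_w:.0f} W")
```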
 
http://jonpeddie.com/download/media/slides/JPR_Slides_from_GDC_Press_Conf_2015.pdf

Slide 23 is the clincher really.

Slide 21 is also interesting, because PC is only "maintaining" its scale gap over console and mobile because power consumption has ~tripled since 2005. One could use that as an argument for 500W discrete GPUs in 2025.

Or, er, maybe not.

I haven't found a reason to recommend watching the video, but here it is:

On a slightly different topic, slide 33 was also fascinating, although I have no idea how to interpret it.
 
Personally I found slide 24 very interesting, albeit completely unrelated to this thread.

That said, slide 23 does show very graphically how small the portion of the dGPU market is that is likely to remain relatively immune to absorption into the iGPU market once high-bandwidth memory devices become more widespread for use with APUs and CPUs.

The XBO/PS4 already show that the mainstream and much of the high end can be serviced quite well by integrated graphics on an APU that uses ~100 watts. Granted, a desktop version would likely need a more powerful CPU to deal with a full OS as well as non-gaming applications. But with Dx12 that should still allow for a ~150 watt APU with the same gaming performance as the PS4, which is solidly in the high-end spectrum (albeit not the upper high end) with regards to gaming.

Regards,
SB
 
Well the PS4 and especially the XBOX One are pretty darn weak compared to the PC I've had for almost 3 years. They are super weak with respect to CPU performance, but I suppose my GTX970 runs circles around both consoles as well.

I think as long as there are people like me who will pay for a better experience AMD and NVIDIA will gladly keep selling us graphics cards.
 
http://jonpeddie.com/download/media/slides/JPR_Slides_from_GDC_Press_Conf_2015.pdf

Slide 23 is the clincher really.

Slide 21 is also interesting, because PC is only "maintaining" its scale gap over console and mobile because power consumption has ~tripled since 2005. One could use that as an argument for 500W discrete GPUs in 2025.

Or, er, maybe not.

I haven't found a reason to recommend watching the video, but here it is:
Slide 23 is probably unit share, instead of revenue share.
 
Er, yeah, it says that in big bold letters at the top. It basically means that the current desktop enthusiast sector is about 2 million cards per year (if that).
 
Er, yeah, it says that in big bold letters at the top. It basically means that the current desktop enthusiast sector is about 2 million cards per year (if that).

Meaning that enthusiast + performance desktop would come in at around 8 million per year, and adding in high-end notebooks as well you're looking at around 18 million PC GPUs sold per year that outstrip console performance. That's a huge market for high-end games.
 
Well the PS4 and especially the XBOX One are pretty darn weak compared to the PC I've had for almost 3 years. They are super weak with respect to CPU performance, but I suppose my GTX970 runs circles around both consoles as well.

I think as long as there are people like me who will pay for a better experience AMD and NVIDIA will gladly keep selling us graphics cards.

Of course, the GTX 970 is a performance-level card. It SHOULD run rings around the consoles. Titan and 980 would be solidly enthusiast. 970 is performance/upper high-end. 960 is high end. Your computer fits in the upper range of that 1-4% of computer systems in the world. Considering that the graph, now that I look at it again, includes high-end dGPUs within the performance category, I'd say your system likely fits into the top 1-2% of computer systems put together each year.

The point is that anything below a GTX 960 will likely only really be useful for legacy systems, and likely won't offer any benefits over integrated graphics once HBM or other high-speed memory technologies become common in consumer devices. Even something along the lines of the GTX 960 (or a future equivalent tier) could likely be replaced by an APU.

So my conclusion was actually more dire for dGPUs than I originally thought, as I was counting the 5% high-end segment as desktop dGPUs when it's actually notebook dGPUs.

If this all comes about, I'd expect dGPU prices to double or more relative to what they are now as Nvidia loses revenue from the mainstream/lower high end. In other words they will potentially need to rely on revenue from only the Titan/980/970 class (and future equivalents) as well as possibly 960 class GPUs depending on how aggressive Intel/AMD are with their integrated solutions.

Probably mostly AMD as I don't imagine Intel are too interested in chasing the high-end graphics business. They'll likely be happy with mainstream/midrange GPU performance (so 950 class, if it existed, and lower).

Regards,
SB
 
GPUs below enthusiast desktop cards at 300W or higher are off-topic ;)

Maybe; however, will the enthusiast card around 300W today still be 300W in 5 years? Or will it be 400W/500W? Or will it stay at 300W but only see minor incremental (5-10%) performance gains each year/generation as we get to the limits of fabrication technology? That's assuming something new and revolutionary doesn't come out between now and then.

Regards,
SB
 
I find myself wondering about another question: Will there be Discrete RAM sticks?

It seems that HBM APUs are bound to kill off the bulk of that market, other than for upgrades on existing stock.
 
The real question is, if APUs are going to take over the world, where on earth are they? I mean, you have the PS4 and XBOX1, which have moderately powerful APUs, but you see nothing even remotely close in the PC space. Why is this?

I suspect a large part of that answer has to do with upgradeable memory. For instance, I have a system with 48 GB, of which I have a program that uses 35. Thing is, this is pretty much impossible without socketed memory. Without sockets, you're looking at a ceiling of 8 GB or so currently, which is inadequate for a lot of users.

Speaking of which, DDR3 is the big reason the XBOX1 is quite wimpy compared to the PS4. In order to have reasonable bandwidth, they had to use an awful lot of the chip for local memory. The PS4, by comparison, has GDDR5, which is likely impossible to socket, and not only gets more bandwidth for its global memory than the XBOX1's local memory provides, but also fits nearly twice as powerful a GPU on a slightly smaller chip!
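For reference, here's a peak-bandwidth back-of-the-envelope for that comparison, using the commonly quoted launch figures (256-bit DDR3-2133 plus 32MB ESRAM on the XBOX1, 256-bit 5.5Gbps GDDR5 on the PS4) - treat the numbers as approximate:

```python
# Peak memory bandwidth from bus width (bits) and transfer rate (GT/s).
def peak_bw_gbs(bus_bits: int, gtps: float) -> float:
    return bus_bits / 8 * gtps

xb1_ddr3  = peak_bw_gbs(256, 2.133)  # ~68 GB/s main memory
xb1_esram = 109.0                    # ~109 GB/s commonly quoted for the 32 MB ESRAM
                                     # (higher with concurrent read/write)
ps4_gddr5 = peak_bw_gbs(256, 5.5)    # ~176 GB/s unified GDDR5

print(f"XBOX1 DDR3 main memory: {xb1_ddr3:.0f} GB/s")
print(f"XBOX1 ESRAM (local):    {xb1_esram:.0f} GB/s")
print(f"PS4 GDDR5 (unified):    {ps4_gddr5:.0f} GB/s")
```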
 
The real question is, if APUs are going to take over the world, where on earth are they? I mean, you have the PS4 and XBOX1, which have moderately powerful APUs, but you see nothing even remotely close in the PC space. Why is this?

I suspect a large part of that answer has to do with upgradeable memory. For instance, I have a system with 48 GB, of which I have a program that uses 35. Thing is, this is pretty much impossible without socketed memory. Without sockets, you're looking at a ceiling of 8 GB or so currently, which is inadequate for a lot of users.

Speaking of which, DDR3 is the big reason the XBOX1 is quite wimpy compared to the PS4. In order to have reasonable bandwidth, they had to use an awful lot of the chip for local memory. The PS4, by comparison, has GDDR5, which is likely impossible to socket, and not only gets more bandwidth for its global memory than the XBOX1's local memory provides, but also fits nearly twice as powerful a GPU on a slightly smaller chip!

I think Intel's edram "apu" is reasonably close. It won't be long until they catch the consoles (and tbh if they "tried" now they could probably do it). But in general the answer is memory bandwidth and cost. I've heard amd even planned a GDDR5 apu at one point but it was scrapped (I'm guessing for cost reasons). So there's no technical limitation, but it's not cheap to get around the memory bandwidth problem (either some local memory like X1's esram/intel's edram or some expensive memory configuration like GDDR5/something greater than dual-channel/HBM/etc.). I believe we're at the point where amd could increase the CU count on its apu and not really change performance. So they can't turn it up to 11 without solving the memory bandwidth problem in a cost-efficient way.
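To put rough numbers on the bandwidth problem, here's a small sketch comparing what a conventional dual-channel APU gets against GDDR5 and a single first-generation HBM stack; the speed grades (DDR3-2133, 5.5Gbps GDDR5, 1Gbps HBM) are assumptions picked as typical 2015-era figures:

```python
# Why "just add more CUs" doesn't help a dual-channel APU: peak bandwidth comparison.
def peak_bw_gbs(bus_bits: int, gtps: float) -> float:
    return bus_bits / 8 * gtps  # GB/s

dual_channel_ddr3 = peak_bw_gbs(128, 2.133)   # ~34 GB/s: 2 x 64-bit DDR3-2133
gddr5_256bit      = peak_bw_gbs(256, 5.5)     # ~176 GB/s: PS4-style GDDR5 setup
hbm_single_stack  = peak_bw_gbs(1024, 1.0)    # ~128 GB/s: one first-gen HBM stack

print(f"Dual-channel DDR3 APU: {dual_channel_ddr3:.0f} GB/s")
print(f"256-bit GDDR5:         {gddr5_256bit:.0f} GB/s")
print(f"Single HBM stack:      {hbm_single_stack:.0f} GB/s")
```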

Finally, I'm sure amd isn't in any rush to replace their dgpus with apus (complete wild guess but I assume in this hypothetical scenario they would make more on a mid-range dgpu than a high end apu).
 
Finally, I'm sure amd isn't in any rush to replace their dgpus with apus (complete wild guess but I assume in this hypothetical scenario they would make more on a mid-range dgpu than a high end apu).

That may be true; however, if they can make a compelling APU (one with good gaming performance, around console level), they could theoretically claw back some marketshare from Intel. Which may or may not mean more to their bottom line, as they can inevitably charge more for that APU than for a dGPU of equivalent gaming performance, since you also get the CPU included. It'd obviously still be more limited in CPU performance in general tasks, but Dx12 should go a long way towards making it a relatively even playing field when it comes to gaming.

It's also potentially attractive in a situation such as now, when they are getting their pants handed to them by their dGPU competitor in terms of marketshare. Again, that's if they can offer a competitive all-in-one solution that is more affordable not only than the competitor's GPU, but than the competitor's GPU + competitor's CPU combined.

It's basically the one area they have available to them that neither competitor can match in the next few years.

Regards,
SB
 
I have a system with 48 GB, of which I have a program that uses 35.
Well yeah, OK when you put it that way o_O
At home I have 16GB & rarely go past 8 used, we have only 4GB at work & I seem to be rare there in maxing it fairly often...
In that context I'm thinking an HBM APU (or Intel equivalent) with 8GB could really hurt the bulk mid-range RAM suppliers, but on reflection, yeah, servers & workstations/top-end gaming setups will still be needing discrete RAM.

I mean, you have the PS4 and XBOX1, which have moderately powerful APUs, but you see nothing even remotely close in the PC space.
Well I may be getting a bit over-enthusiastic on the implications of HBM :runaway: but it seems reasonable to expect a pretty big performance bump with 14nm & HBM.
Especially so if they put out something GPU-heavy in the 200-300W range.

How would discrete RAM co-exist with an HBM APU, like an extra cache level?
 
Well yeah, OK when you put it that way o_O
At home I have 16GB & rarely go past 8 used, we have only 4GB at work & I seem to be rare there in maxing it fairly often...
In that context I'm thinking an HBM APU (or Intel equivalent) with 8GB could really hurt the bulk mid-range RAM suppliers, but on reflection, yeah, servers & workstations/top-end gaming setups will still be needing discrete RAM.

Well I may be getting a bit over-enthusiastic on the implications of HBM :runaway: but it seems reasonable to expect a pretty big performance bump with 14nm & HBM.
Especially so if they put out something GPU-heavy in the 200-300W range.

How would discrete RAM co-exist with an HBM APU, like an extra cache level?

An extra cache level is what Intel did with Crystalwell. It's arguably the most elegant solution, as it lets CPU cores use the embedded memory transparently.

But a simpler solution would be to make the stacked memory a separate pool, which would be fine for graphics, since drivers are already designed to manage a separate pool of VRAM. CPU cores wouldn't be able to access this memory except explicitly, though, and history shows that very few developers would bother. This solution would be sub-optimal, especially for HSA, but then again two channels of DDR4 dedicated to the (nearly) exclusive use of 4~8 CPU cores wouldn't be too bad.
 
An extra cache level is what Intel did with Crystalwell. It's arguably the most elegant solution, as it lets CPU cores use the embedded memory transparently.

But a simpler solution would be to make the stacked memory a separate pool, which would be fine for graphics, since drivers are already designed to manage a separate pool of VRAM. CPU cores wouldn't be able to access this memory except explicitly, though, and history shows that very few developers would bother. This solution would be sub-optimal, especially for HSA, but then again two channels of DDR4 dedicated to the (nearly) exclusive use of 4~8 CPU cores wouldn't be too bad.

I'm with Hoom on this. Over Easter I went shopping for a laptop in a number of shops, and pretty much none of the hundred or so on display had more than 8GB of memory. Either all of it was soldered on without any SO-DIMM slot at all, or they had 4GB soldered and 4GB in a single slot, making upgrades awkward. (Neglecting of course all the 4GB laptops. Ick.) It seems a given that offering a 14nmFF APU with 4-16GB of HBM as total system memory would work brilliantly in laptops, and would actually work pretty damn well for pretty much all consumers apart from the lunatic fringe to which most of us here belong. Look at page 23 of the Jon Peddie PDF again, and you'll see that integrated GPUs are already overwhelmingly the platform for gaming in the wider sense.

It is an opening AMD has to go for in 2016 and onwards. What Intel will do is uncertain - the Crystalwell approach is indeed elegant in that it provides good performance along with upgradability of total system memory, but the fact of the matter is that the vast majority of systems just don't get upgraded. Intel has to mull over whether and how they want to split their market in terms of memory interfaces. AMD however has nothing to lose by offering CPU/GPU/memory solutions, and I would love to be a fly on the wall in their discussions with Lenovo/Acer/Dell et al.
 
the Crystalwell approach is indeed elegant
Whatever happened to that?
I remember being all excited as heck about the prospect of a giant 128MB L4 eDRAM desktop CPU :runaway:
But they don't seem to have released a proper desktop version & all talk of eDRAM seems to be absent on Broadwell :|
 
Whatever happened to that?
I remember being all excited as heck about the prospect of a giant 128MB L4 eDRAM desktop CPU :runaway:
But they don't seem to have released a proper desktop version & all talk of eDRAM seems to be absent on Broadwell :|

From what I've read, desktop versions are incoming, but I think it's with Skylake, and for some reason the cache is reduced to 64MB. I'm on my phone atm so I can't provide a link.
 