dGPU vs APU spin-off

  • Thread starter: Deleted member 13524 (Guest)
I find that last sentence from the author particularly amusing:

Not really....but hey whatever floats anyones....errr autonomous boat.... *cough*

Well, discrete graphics cards will probably be extinct over the long term (~10 years?). We'll eventually just be swapping high-performance SoCs/APUs with HBM/HMC/H-whatever on desktops. If Nvidia is failing as a handheld SoC maker, Nvidia as an automotive SoC maker could be their future.
 
Well, discrete graphics cards will probably be extinct over the long term (~10 years?). We'll eventually just be swapping high-performance SoCs/APUs with HBM/HMC/H-whatever on desktops. If Nvidia is failing as a handheld SoC maker, Nvidia as an automotive SoC maker could be their future.

The VR market will ensure the continued development of high-end GPUs. APUs will never be able to compete with high-end GPUs. APUs don't even compete with low-end GPUs today. Even Volta will probably not offer enough GPU horsepower for next-gen VR (8K). Both automotive and VR will be huge markets for Nvidia in the next few years.

ModEdit: relevant bits copied
 
APUs will never be able to compete with high-end GPUs.

"Never" is such a dirty word in tech.

I remember perfectly well suggesting, back in 2004-2005 (when the Bitboys mobile graphics IP rumors started to appear), that people would be able to play 3D games on handheld consoles with "HL2 IQ" graphics, and some connoisseurs stated there would never be a market for such a thing because sub-10W GPUs would never get that kind of performance.
 
"Never" is such a dirty word in tech.

I remember perfectly well suggesting, back in 2004-2005 (when the Bitboys mobile graphics IP rumors started to appear), that people would be able to play 3D games on handheld consoles with "HL2 IQ" graphics, and some connoisseurs stated there would never be a market for such a thing because sub-10W GPUs would never get that kind of performance.
The consoles are somewhat interesting APUs. But on the desktop they're not really going anywhere exciting. APUs are mostly about higher integration and lower costs on the desktop. Really, the consoles are the same story, though.

I'm not sure why someone would have said handhelds would never reach Half-Life 2 graphics. Even though smartphones didn't exist yet, gaming handhelds were inevitably going to get there. The original PSP was almost there.
 
The consoles are somewhat interesting APUs. But on the desktop they're not really going anywhere exciting. APUs are mostly about higher integration and lower costs on the desktop.

We'll have this talk again when socketed APUs with interposers, HBM and >8 TFLOPS GPUs arrive on the consumer market.
 
We'll have this talk again when socketed APUs with interposers, HBM and >8 TFLOPS GPUs arrive on the consumer market.
Those things add cost. They're not even really happening with discrete GPUs yet because of that, and they're apparently not worthwhile in consoles yet either. I guess you think there will be a market for a super beefed-up desktop/laptop APU?

It's 2016 and we've had PC IGPs since like 1998. Look where they still fit in on the excitement scale relative to their contemporaries on discrete cards. ;)
 
The consoles are somewhat interesting APUs. But on the desktop they're not really going anywhere exciting. APUs are mostly about higher integration and lower costs on the desktop. Really, the consoles are the same story, though.

Bingo! You hit the nail right on the head!
I'm not sure why someone would have said handhelds would never reach Half-Life 2 graphics. Even though smartphones didn't exist yet, gaming handhelds were inevitably going to get there. The original PSP was almost there.

Arguably, Palm PDAs were the smartphones of the day, but I don't think anyone gamed on one. But yeah, the PSP was a good enough indication of what mobile gaming had in store. Either way, those people who said that had clearly never heard of Moore's law.
We'll have this talk again when socketed APUs with interposers, HBM and >8 TFLOPS GPUs arrive on the consumer market.

And discrete, high-end GPUs will be 30+ TFLOPS when that happens.
 
Those things add cost.
Integration in general results in lower costs and power consumption in the long run and in the bigger picture. This is pretty much a fact.

I guess you think there will be a market for a beefed-up APU on the desktop?
Yes, I do.
I think that when 7nm is available, Socket AM4 is getting a consumer APU with 8 Zen cores, HBM2/3 and a >5 TFLOPS GPU. This is within 2 years.
Raven Ridge is a 4-core Zen + 12 CU GPU, coming next year, with a 35W max TDP.



In 10 years? Yeah, I think the big, unwieldy dedicated graphics cards that need big slots, dedicated VRMs, dedicated VRAM, etc. will be pretty much gone from the picture. Upgrades will be done by swapping the socketable package with a SoC and stacked memory beneath the same heatspreader.


And discrete, high-end GPUs will be 30+ TFLOPS when that happens.
I don't think you'll have "30+ TFLOPS" discrete graphics cards in 2 years, except maybe for multi-GPU solutions (though you could get that right now, so...).
Point is, the performance difference between the "highest-performing graphics card" and the "highest-performing iGPU" will follow a downward path, until the graphics card simply stops making sense for consumers.
 
Integration in general results in lower costs and power consumption in the long run and in the bigger picture. This is pretty much a fact.

Integrating the NB+SB onto a CPU lowers costs. Integrating additional things will not. Interposers and HBM are expensive. This IS a fact.
Yes, I do.
I think that when 7nm is available, Socket AM4 is getting a consumer APU with 8 Zen cores, HBM2/3 and a >5 TFLOPS GPU. This is within 2 years.
Raven Ridge is a 4-core Zen + 12 CU GPU, coming next year, with a 35W max TDP.

7nm at GlobalFoundries within 2 years? I'm not so optimistic. Neither am I optimistic about AMD releasing a new APU on a new process a year after its last one. But let's take Raven Ridge and give it a 1.2 GHz clock speed (I'm being charitable; I expect less). That's ~1.8 TFLOPS in late 2017. I think that's about the best you're going to get until 2019, and I don't think graphics performance is going to triple by then. Heck, Xavier will probably beat Raven Ridge in graphics.
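
For reference, here's the back-of-the-envelope math behind that ~1.8 TFLOPS figure: GCN has 64 shaders per CU, and each shader does 2 FLOPs per clock (one fused multiply-add). A minimal sketch in Python; the 40 CU configuration at the end is purely a hypothetical, not a rumored part:

Code:
# Theoretical FP32 throughput of a GCN-style GPU:
# shaders = CUs * 64, each doing 2 FLOPs per clock (one fused multiply-add).
def fp32_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(fp32_tflops(12, 1.2))  # Raven Ridge's 12 CUs at a charitable 1.2 GHz: ~1.84 TFLOPS
print(fp32_tflops(40, 1.0))  # hypothetical 40 CU part at 1.0 GHz: ~5.12 TFLOPS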

Either way, the average consumer neither needs nor wants to pay for additional graphics performance and HBM. And nobody is going to spend money developing such a part for a niche market, which is better served by a discrete GPU anyway.
In 10 years? Yeah, I think the big, unwieldy dedicated graphics cards that need big slots, dedicated VRMs, dedicated VRAM, etc. will be pretty much gone from the picture. Upgrades will be done by swapping the socketable package with a SoC and stacked memory beneath the same heatspreader.
Yes, because people are totally going to be OK with significantly reduced flexibility AND significantly increased cost when upgrading.
I don't think you'll have "30+ TFLOPS" discrete graphics cards in 2 years, except maybe for multi-GPU solutions (though you could get that right now, so...).
Point is, the performance difference between the "highest-performing graphics card" and the "highest-performing iGPU" will follow a downward path, until the graphics card simply stops making sense for consumers.

I said you'd see 30+ TFLOPS discrete GPUs by the time you see 8+ TFLOPS APUs. So are you now claiming that we'll see 8+ TFLOPS APUs in 2 years?

The gap between the highest-performance discrete GPU and APU has actually been growing, not shrinking.
 
Point is, the performance difference between the "highest-performing graphics card" and the "highest-performing iGPU" will follow a downward path, until the graphics card simply stops making sense for consumers.
NEVER going to happen; it's just against evolution and nature. People make the mistake of thinking that, because of HBM, iGPUs will level the playing field with dGPUs, but HBM can be beefed up in dGPUs beyond what iGPUs make feasible. dGPUs will also have tremendous processing capabilities, far greater than iGPUs', and with the crazy race for denser resolutions, VR, and the hunt for ever-increasing graphics detail and realism, a cheap integrated solution will remain a no-go for the majority of graphics seekers. A quick look at the market today reveals that even though APUs made very low-end GPUs obsolete, they helped expand the mid-range and high-end GPU market to a greater degree. People want even more graphics fidelity and resolution. Current APUs just fall flat on their faces any time a 1080p resolution and anything more than Low settings are selected (and they deliver horrendous fps at that).

People DO want graphics and fidelity; that's the only reason so many switched to the PS4 and XO at the start of the generation, and the reason so many preferred the PS4 over the XO. It's also the reason PC gaming is booming at the moment, with most developers under the sun porting console exclusives to the PC and bragging about their higher resolutions and frame rates. Heck, that's why we are seeing an upgrade path for consoles for the first time in years, with Sony and Microsoft confident there is a BIG market for more graphics-capable hardware.

While consoles can serve as a good example of APUs gaining ground, they are not really a good showcase for that at all: they are machines tailored specifically for gaming, with a large amount of high-speed memory, custom point-to-point connections and a different memory hierarchy. Desktop APUs are completely different at the moment. Most APUs have poor system/video memory capacity for cost reasons, and they are coupled with low-performing CPUs to keep power consumption adequate, because nobody in their right mind would couple two power-hungry chips on the same die. Not to mention the far lower bandwidth (which will remain true even with HBM).

We've heard this argument before, 5 years ago, and it was a fantasy; it just continues to be even more so today.
 
The gap between the highest-performance discrete GPU and APU has actually been growing, not shrinking.
Absolutely!
1/ Graphics demand is a moving target. 4K, VR and soon 8K are pushing the envelope. 8 TFLOPS in 2020 will be nothing...
2/ dGPUs have a much higher TDP limit than APUs, because people buying dGPUs want performance. APUs, on the other hand, are for the mainstream market, where price/power/efficiency is much more important than pure performance.
 
There's an obvious, plainly logical argument that a dGPU can always be faster than an iGPU: no matter what you put in your iGPU, there's always something you can remove and replace with more graphics units, and that's the CPU :mrgreen:
 
It's a huge market. The potential revenues from it could far exceed those from the GPU market.

For which NV isn't and won't be the only contender, either. They're concentrating mostly on the high-end part of the automotive market, which as always means higher margins and lower volumes. Given the cadence at which the automotive market moves, how much lower the volume of high-end cars is than that of any kind of GPU, and how much less often car owners replace their cars, I'm obviously parked diagonally in a parallel universe for not seeing it happen all that soon.

We haven't reached fully autonomous driving yet, and I severely doubt we ever will: not because the technology won't be able to cover the needs, but most likely because of legal and moral reasons. NVIDIA is obviously doing the right thing investing in the automotive market right now, but its secure future is and will remain GPUs, IMHO.

Any severe change of balance in the less foreseeable future of automotive could actually support TottenTranz's theory of SoCs being the actual future.
 
Well, discrete graphics cards will probably be extinct over the long term (~10 years?). We'll eventually just be swapping high-performance SoCs/APUs with HBM/HMC/H-whatever on desktops. If Nvidia is failing as a handheld SoC maker, Nvidia as an automotive SoC maker could be their future.

Considering all the other posts above: why would you think it's better to swap SoCs in and out instead of dedicated units in any sort of high-end system? Since that part of the debate originated with Xavier: as a SoC it should weigh in at around 270mm2 on 16FF+, if not more, and that includes a 512SP Volta GPU block. By what magnitude would you estimate full Volta would exceed that block, and wouldn't you have extreme problems hosting the full high-end Volta GPU on a SoC together with a CPU that can keep pace with it, rather than just 8 ULP ARM-derived CPU cores? Want to dare an estimate of the die sizes?
 
The gap between the highest-performance discrete GPU and APU has actually been growing, not shrinking.


Sigh...


2010:
Highest-performance MCM GPU+CPU: X360 Trinity, with a 240 GFLOPS GPU.
Discrete consumer GPU with the highest FP throughput in 2010: Cypress, at 2700 GFLOPS.
Difference: 2700/240 ≈ 11.3x

2011:
First APU: Llano, with a 480 GFLOPS GPU.
Discrete consumer GPU with the highest FP throughput in 2011: Cayman, at 2700 GFLOPS.
Difference: 2700/480 ≈ 5.6x

2013:
Highest-performance APU: the PS4's Liverpool, with a 1.8 TFLOPS GPU.
Discrete consumer GPU with the highest FP throughput in 2013: Hawaii, at 5.6 TFLOPS.
Difference: 5.6/1.8 ≈ 3.1x

2016:
Highest-performance APU: PS4 Pro, with a 4.3 TFLOPS GPU.
Discrete consumer GPU with the highest FP throughput in 2016: Pascal GP102 Titan X, at 10 TFLOPS.
Difference: 10/4.3 ≈ 2.3x

2017:
Highest-performance APU: Scorpio, with a 6 TFLOPS GPU.
Discrete consumer GPU with the highest FP throughput in 2017 (rumored): Vega 10, at 12 TFLOPS.
Difference: 12/6 = 2x



For those who want to see it, here's a neat little graphic:

[graph: FP32 FLOPS ratio between the top discrete GPU and the top APU, 2010-2017]


This graph only covers the past 7 years. I suggested discrete GPUs would disappear around the ~10-year mark. That's a very long time, so there's no need to set your pants on fire.

This is just one metric (theoretical FP32 compute throughput), but anyone is free to do the same exercise for fillrate or bandwidth.
The conclusion will be the same: integration is the future.
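
For those who'd rather check the numbers than eyeball a graph, here's the same exercise in a few lines of Python, using the FP32 figures listed above:

Code:
# Ratio of the top discrete GPU's FP32 throughput to the top APU's, per year
# (all numbers in GFLOPS, taken from the comparison above).
data = {
    2010: (2700, 240),    # Cypress vs. X360 Trinity
    2011: (2700, 480),    # Cayman vs. Llano
    2013: (5600, 1800),   # Hawaii vs. PS4 Liverpool
    2016: (10000, 4300),  # Titan X (Pascal) vs. PS4 Pro
    2017: (12000, 6000),  # Vega 10 (rumored) vs. Scorpio
}
for year, (dgpu, apu) in sorted(data.items()):
    print(f"{year}: {dgpu / apu:.2f}x")
# 11.25x, 5.62x, 3.11x, 2.33x, 2.00x: the same downward path the graph shows.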


Some people seem mentally incapable of imagining anything in tech past a 1 year period. Or rather, they're incapable of imagining anything at all because they can only come up with whatever's for sale right now and short-term public roadmaps.
Thankfully, technological progress doesn't depend on that kind of people. Never did and probably never will.
 
Some people seem mentally incapable of imagining anything in tech past a 1 year period. Or rather, they're incapable of imagining anything at all because they can only come up with whatever's for sale right now and short-term roadmaps.
Again, you are comparing consoles to dGPUs, which is a failure on your part. Consoles are not desktop APUs: they are not for sale as APUs, they are not upgradable or exchangeable like APUs, and they don't have memory setups like APUs. You are just grasping at straws here to come up with something to justify your absurd and illogical claims.

What's even worse is you comparing FLOPS between different architectures as a metric for performance; that's just total, deliberate ignorance! Let alone choosing specific timelines to suit and cater to your claim! That's not educated imagination, that's just fantasy.
 
I thought Scorpio was rumored for holiday 2017 at the earliest? If so, why would I want to compare those rumored TFLOPS against a TFLOPS value from a desktop GPU from almost exactly a year earlier? Let's do Scorpio vs. Volta in that case, and use a less lame metric than sterile FLOPS; if anything, at least something like DGEMM or SGEMM.

http://www.extremetech.com/wp-content/uploads/2015/03/Pascal1.png

So early 2018 will bring an APU/SoC capable of around 72 GFLOPS/W in SGEMM? I'd have reason to pop a champagne cork if a console APU reaches even a third of that by then, it being a console chip to start with.
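
To put that efficiency figure in perspective, a rough sketch; the power budgets below are illustrative assumptions, not sourced numbers:

Code:
# Sustained SGEMM throughput implied by an efficiency figure and a power budget.
def sgemm_tflops(gflops_per_watt, watts):
    return gflops_per_watt * watts / 1000.0

print(sgemm_tflops(72, 30))       # slide's 2018 SoC at an assumed ~30W: ~2.2 TFLOPS
print(sgemm_tflops(72 / 3, 150))  # a console APU at 1/3 the efficiency, assumed ~150W: ~3.6 TFLOPS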
 
NEVER going to happen; it's just against evolution and nature. People make the mistake of thinking that, because of HBM, iGPUs will level the playing field with dGPUs, but HBM can be beefed up in dGPUs beyond what iGPUs make feasible. dGPUs will also have tremendous processing capabilities, far greater than iGPUs', and with the crazy race for denser resolutions, VR, and the hunt for ever-increasing graphics detail and realism, a cheap integrated solution will remain a no-go for the majority of graphics seekers. A quick look at the market today reveals that even though APUs made very low-end GPUs obsolete, they helped expand the mid-range and high-end GPU market to a greater degree. People want even more graphics fidelity and resolution. Current APUs just fall flat on their faces any time a 1080p resolution and anything more than Low settings are selected (and they deliver horrendous fps at that).

...

We've heard this argument before, 5 years ago, and it was a fantasy; it just continues to be even more so today.

It was actually 6 years ago.

5-11-2010

I would ask the question in a more general sense: will GPUs exist in 5 years? The answer there would be no.

The low end dies this year, or at least starts to do a PeeWee Herman at the end of Buffy the Vampire Slayer (the movie, not the show). There goes the volume. 2012 sees the same happening for the high end. The middle isn't enough to sustain NV.

They have 2 years to make compute and widgets profitable. Good luck there guys.

-Charlie

http://www.semiaccurate.com/forums/showpost.php?p=48497&postcount=10
 
Some people seem mentally incapable of imagining anything in tech past a 1 year period. Or rather, they're incapable of imagining anything at all because they can only come up with whatever's for sale right now and short-term public roadmaps.
Thankfully, technological progress doesn't depend on that kind of people. Never did and probably never will.

And yet you are the one who was going lalalala about upcoming competition to Polaris 11, refusing to analyse the situation in the long term. How hypocritical is that? Your view goes only as far as it benefits AMD. That is why you are only comparing AMD hardware, when it is a known fact that theoretical FLOPS mean nothing when the competitor has fewer FLOPS but higher performance.
 
Again, you are comparing consoles to dGPUs, which is a failure on your part. Consoles are not desktop APUs: they are not for sale as APUs, they are not upgradable or exchangeable like APUs, and they don't have memory setups like APUs. You are just grasping at straws here to come up with something to justify your absurd and illogical claims.

What's even worse is you comparing FLOPS between different architectures as a metric for performance; that's just total, deliberate ignorance! Let alone choosing specific timelines to suit and cater to your claim! That's not educated imagination, that's just fantasy.

If you don't mind, I will add to your argument that these APUs have a terribly weak, low-power CPU, which is already limiting what developers can do even if they had all the GPU FLOPS in the world! An APU with a CPU at Core i5/i7 performance levels would not have the power or transistor budget necessary to accommodate the GPUs these APUs have! When they were designed, a decision was clearly made to trade CPU power for GPU power. That fact alone should show him that comparing the FLOPS of these APUs with the FLOPS of dedicated graphics cards, which can benefit from faster CPUs, is utterly pointless!
 