dGPU vs APU spin-off

Skylake + EDRAM is pretty nice. But it probably can't be built much larger, given the power-consumption restrictions of the designs it's usually implemented into.
 
If NVIDIA can build a 500mm^2 GeForce, I don't see why Intel couldn't build a 500mm^2 Skylake + IGP chip if they saw fit to do so. They'd probably want to price it something like Xeon Phi though. :)

Which defeats the argument that integration reduces costs. Additionally, Intel's high-performance CPUs have TDPs of around 95W. That leaves around 155W for the GPU + HBM. At that point you would barely be touching mid-end.
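As a rough sketch of where that figure comes from (the ~250W total package envelope is an assumption on my part, roughly what a single high-end graphics card draws today):

```python
# Hypothetical power budget for a single-package CPU + GPU + HBM part.
total_package_w = 250   # assumed envelope, comparable to a high-end dGPU board
cpu_w = 95              # typical high-performance Intel desktop CPU TDP
gpu_and_hbm_w = total_package_w - cpu_w
print(f"{gpu_and_hbm_w}W left for GPU + HBM")  # 155W
```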
 
Which defeats the argument that integration reduces costs. Additionally, Intel's high-performance CPUs have TDPs of around 95W. That leaves around 155W for the GPU + HBM. At that point you would barely be touching mid-end.
I wasn't being serious. ;) I also rewrote that post before you replied because I felt it was ridiculous.
 
And yet you are the one who was going "lalala" about upcoming competition to Polaris 11, refusing to analyse the situation in the long term. How is that not hypocritical?
Where exactly did I gratuitously throw accusations of doom and gloom about specific GPUs based on unreleased competition?
It's completely different to what I did here: look at the big picture, how things are evolving in terms of volatile memory form factors and integration, long-term roadmaps, etc. and come up with a completely IHV-independent opinion about how things will be in 10 years.


Your view goes only as far as it benefits AMD.
Lol I don't even know if AMD will exist within the next 10 years.
It doesn't change my opinion that discrete GPUs for consumer PCs will be gone within 10 years, at least in their current cumbersome slot form.
I don't know if it'll benefit AMD. I don't know if Intel's iGPUs start punching above AMD's, or if Intel starts putting GPU IP from nvidia in their higher-end CPUs, or if nvidia gets an x86 license and starts making their own PC APUs, or if AMD gets bought by nvidia or apple or google. This is definitely not an "AMD is going to win in the long term because..." statement. That would be stupid.


That is why you are only comparing AMD hardware, when it is a known fact that theoretical FLOPs mean nothing when the competitor has fewer FLOPs but higher performance.
Theoretical FLOPs mean something, but they certainly don't mean everything. Had you actually read the whole post before bringing out the pitchfork:

This is just one metric, which is theoretical compute throughput at FP32, but anyone is free to do the same exercise regarding fillrate or bandwidth.
The conclusion will be the same: integration is the future.

Yeah, I wish we could run the exact same benchmarks across multiple platforms, PCs and consoles alike, but we can't. Therefore all we can do is compare theoretical specs.
It's true that you can't compare per-theoretical-GFLOP performance across architectures, but you can definitely establish expected performance brackets across different orders of magnitude (e.g. a 10 TFLOPs FP32 GPU is most probably going to perform better than a 1 TFLOPs FP32 one, no matter the architecture).

You want to take away the GP102 data point I put in there, so that it only has GPUs from one vendor? Is that what bothers you so much? You want me to fill in the years without a reasonable breakthrough in APU performance?

[chart: peak theoretical FP32 throughput of the highest-performance APUs vs. highest-performance dGPUs over time]


APUs iterate slower than GPUs, so it's natural to see the GPU-to-APU gap widen a little every other year. It doesn't change the general landscape, though, which is that the highest-performance APUs are getting closer to the highest-performance dGPUs.


As for your comment about refusing to answer what you deemed "reasonable arguments" from another user, I'll go over this only once.
See that post again and compare it, for example, to Ailuro's post. One invites discussion, the other invites shit flinging ("you are just grasping at straws", "your absurd claims", "that's just total deliberate ignorance").
It's garbage because it's shit flinging. I'm not answering posts of that level because I know I'll eventually go down to that level and hurt both my own standing and the forum. This is me recognizing my own flaws.
Regardless, I've been on tech forums for something like 18 years and this user has consistently shown some of the most extreme bias I've ever seen. Past a certain level of bias, you just know discussion with this kind of person is worthless. That said, rest assured this will be my last confrontation with the user, as he now stands as the sole entry in my ignore list (my first ever in 18 years).





I thought Scorpio was rumored for holiday 2017 at the earliest? If so, why exactly would I want to compare those rumored TFLOPs against a TFLOP value from a desktop GPU almost a year earlier? Let's do Scorpio vs. Volta in that case, and use a less lame metric than sterile FLOPs. If anything, at least something like DGEMM or SGEMM.

Vega 10 is coming in H1 2017, Scorpio in Q4 2017. Technically, the gap between the two could be anywhere from almost a year down to as little as 3 months.
Regardless, people got upset that I compared theoretical TFLOPs between GPUs from different IHVs, because they took it as some kind of bias. I simply picked the dGPU with the highest theoretical FP32 throughput at the time of each new relevant APU.


http://www.extremetech.com/wp-content/uploads/2015/03/Pascal1.png
So early 2018 will have an APU/SoC capable of around 72 GFLOPs/Watt in SGEMM? I'd have reason to pop a champagne cork if a console APU reaches even a third of that by then, and that's a console chip to start with.
A discrete Volta GPU still needs a CPU to drive it, whereas an APU does not. The separate CPU will lower those performance/watt numbers.
Regardless, I'm not sure where I suggested there would be a console APU with better performance/watt than Volta in 2018. I'm pretty sure I didn't.
I wrote that I think socket AM4 in 2018 will see an APU with a >5 TFLOPs GPU and HBM2/3 on the same MCM, beneath the same heatspreader. This isn't that much of a long shot; it just assumes that a variation of the long-rumored Zeppelin/Greenland MCM will find its way into the consumer space. You think it won't?




Skylake + EDRAM is pretty nice. But it probably can't be built much larger, given the power-consumption restrictions of the designs it's usually implemented into.
Intel makes 22-core CPUs with 135W TDP. If the 8-core Haswell E had 2.6B transistors, these chips probably have 8B transistors or more.
The only reason they don't make larger CPUs for the consumer space is because they have an overwhelming performance advantage with much smaller chips, and they love the $/die-area they're getting with every CPU sold (who wouldn't?).
Let's not mistake what Intel chooses to make due to lack of competition to what Intel could be making if they had competition.
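As a back-of-envelope check on that transistor guess (a rough sketch: the linear per-core scaling is an assumption, though Intel's published figure of roughly 7.2B transistors for its largest Broadwell-EP die lands in the same ballpark):

```python
# Scale the 8-core Haswell-E transistor count (2.6B) linearly to 22 cores.
# Assumes transistors grow roughly in proportion to core count (cores + L3 slices
# + a share of the uncore); purely illustrative, not a published figure.
haswell_e_cores, haswell_e_transistors = 8, 2.6e9
per_core = haswell_e_transistors / haswell_e_cores   # ~0.33B per core

estimate_22_cores = per_core * 22                    # ~7.2B
print(f"~{estimate_22_cores / 1e9:.1f}B transistors for a 22-core part")
```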
 
ToTTenTranz said:
Intel makes 22-core CPUs with 135W TDP. If the 8-core Haswell E had 2.6B transistors, these chips probably have 8B transistors or more.
The only reason they don't make larger CPUs for the consumer space is because they have an overwhelming performance advantage with much smaller chips, and they love the $/die-area they're getting with every CPU sold (who wouldn't?).
Let's not mistake what Intel chooses to make due to lack of competition to what Intel could be making if they had competition.
Yeah, I am aware of that, and of hardware like Xeon Phi, where they use their own form of high-bandwidth memory. I don't think they're gonna build your super APU though.

I wonder if their MCDRAM will replace the EDRAM in desktop APUs at some point, however.
 
I don't think they're gonna build your super APU though.

I don't think so either, not anytime soon. Not within the next 3-4 years, unless there's already a GPU IP licensing agreement with e.g. nvidia taking shape.
Intel is much more likely to cling to discrete GPUs for quite a bit longer than AMD, since the former's GPU IP is generally weaker at the moment.
 
Saying that IGPs will kill dGPUs due to performance is like saying that a 25W chip will have similar performance to a 250W chip of the same generation.
 
Where exactly did I gratuitously throw accusations of doom and gloom about specific GPUs based on unreleased competition?
It's completely different to what I did here: look at the big picture, how things are evolving in terms of volatile memory form factors and integration, long-term roadmaps, etc. and come up with a completely IHV-independent opinion about how things will be in 10 years.

You are not making any sense and are trying to twist my words here. I formed an opinion based on short-term information, which is more predictable than the long term. After all, Pascal already has not one but three chips out in the wild (four if you count GP100, but we don't know anything about its graphics performance). So you think your prediction for 10 years out, with everything that can happen in the meantime, is more accurate than one with a time span of 2-3 months? Really? Where is the logic in that?


Lol I don't even know if AMD will exist within the next 10 years.
It doesn't change my opinion that discrete GPUs for consumer PCs will be gone within 10 years, at least in their current cumbersome slot form.
I don't know if it'll benefit AMD. I don't know if Intel's iGPUs start punching above AMD's, or if Intel starts putting GPU IP from nvidia in their higher-end CPUs, or if nvidia gets an x86 license and starts making their own PC APUs, or if AMD gets bought by nvidia or apple or google. This is definitely not an "AMD is going to win in the long term because..." statement. That would be stupid.

True, you don't, but you have shown time and time again that you have a strong bias against anything nVIDIA. Blaming nVIDIA for the failure of the Android gaming ecosystem was laughable, really, as was your complete silence after someone who actually has experience developing Android apps showed you the several non-nVIDIA factors that contribute to it. In other words, anything that paints nVIDIA in a bad light or predicts a bad future for them seems to be your thing.

Theoretical FLOPs mean something, but they certainly don't mean everything. Had you actually read the whole post before bringing out the pitchfork:

Yes, I did read that. The problem here is that even though you recognise it, you still go ahead and insist on your theory. Anyone with a genuine interest in discussing a subject, rather than just maintaining an "I know better than all of you" attitude, would not be as sure as you seem to be when confronted with a lack of information. In other words, if you do not have enough data to back it up (especially over a 10-year time frame), listen to others and consider what they say. Most of the time you ignore them or hide behind a very convenient "he's bad, I'm not answering", just like you did with David Graham when he actually made good points. You should have enough maturity to ignore the "eye candy" and concentrate on the important information. The moderator has just shown you this path by deleting your whining, self-entitled post (calling for HIS post to be removed) and keeping David's!

You want to take away the GP102 data point I put in there, so that it only has GPUs from one vendor? Is that what bothers you so much? You want me to fill in the years without a reasonable breakthrough in APU performance?

No, that was not my point at all. My point is that you have to compare actual performance and not FLOPs. See, for example, Star Wars Battlefront. It runs at 900p and 60 FPS on PS4, at what is equivalent to High settings on PC. A GTX 1080, which has 8.2 TFLOPs (4.5 times the FLOPs of the PS4's GPU), runs it at 72 FPS at 2160p on Ultra settings! 2160p is 5.76 times the resolution of 900p, and it is still 20% faster on a higher quality preset! It clearly runs the game faster than you would predict by FLOPs alone (quick sanity check on those numbers below the links).

http://www.legitreviews.com/nvidia-geforce-gtx-1080-founders-edition-video-card-review_181298/8

http://wccftech.com/star-wars-battlefront-pc-ultra-vs-ps4-graphics-comparison/
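Rough numbers behind that comparison (a crude sketch: pixels per second is obviously a blunt proxy for rendering work, and the PS4's 1.84 TFLOPs figure and the reported frame rates are taken at face value):

```python
# Does the GTX 1080's real-world advantage over the PS4 track its FLOPs advantage?
ps4_tflops, gtx1080_tflops = 1.84, 8.2
ps4_pixels_per_s = 1600 * 900 * 60    # 900p @ 60 FPS, ~High-equivalent settings
gtx_pixels_per_s = 3840 * 2160 * 72   # 2160p @ 72 FPS, Ultra settings

flops_ratio = gtx1080_tflops / ps4_tflops           # ~4.5x
pixel_ratio = gtx_pixels_per_s / ps4_pixels_per_s   # ~6.9x, and at a higher preset

print(f"FLOPs ratio ~{flops_ratio:.1f}x, pixel-throughput ratio ~{pixel_ratio:.1f}x")
```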

APUs iterate slower than GPUs, so it's natural to see the GPU-to-APU gap widen a little every other year. It doesn't change the general landscape, though, which is that the highest-performance APUs are getting closer to the highest-performance dGPUs.

Why do you keep comparing APUs to GPUs without mentioning their CPU cores? Are they free? Don't they have a power and transistor budget? Like I said above (with no answer from you whatsoever), the APUs you are comparing have very low-performance CPUs (hell, some ARM CPUs you can get in a mobile phone are already more powerful!), which allowed those APUs to invest more in GPU power. As GPU power demand increases with things like VR, 8K, etc. (which you keep ignoring time and time again, no matter how many times they are thrown at you), it will get harder and harder to fit a CPU decent enough to avoid bottlenecking the GPU inside an APU.
I also don't think you are taking into account the 7nm cliffhanger, plus Intel's change from tick-tock to three product families on the same node. GPUs are parallel by nature and may work around that debacle, but for CPUs it's not as easy. IPC has been stagnating, and the current situation does not paint a nice picture.

As for your comment about refusing to answer what you deemed "reasonable arguments" from another user, I'll go over this only once.
See that post again and compare it, for example, to Ailuro's post. One invites discussion, the other invites shit flinging ("you are just grasping at straws", "your absurd claims", "that's just total deliberate ignorance").
It's garbage because it's shit flinging. I'm not answering posts of that level because I know I'll eventually go down to that level and hurt both my own standing and the forum. This is me recognizing my own flaws.
Regardless, I've been on tech forums for something like 18 years and this user has consistently shown some of the most extreme bias I've ever seen. Past a certain level of bias, you just know discussion with this kind of person is worthless. That said, rest assured this will be my last confrontation with the user, as he now stands as the sole entry in my ignore list (my first ever in 18 years).

It is your own problem if you cannot see past that. Like I said above, why do you think the moderators deleted your post and kept his?

Intel makes 22-core CPUs with 135W TDP. If the 8-core Haswell E had 2.6B transistors, these chips probably have 8B transistors or more.
The only reason they don't make larger CPUs for the consumer space is because they have an overwhelming performance advantage with much smaller chips, and they love the $/die-area they're getting with every CPU sold (who wouldn't?).
Let's not mistake what Intel chooses to make due to lack of competition to what Intel could be making if they had competition.

And yet this 22-core CPU offers less single-threaded performance on average than an 8-core Sandy Bridge from 4 years before, and even on multi-threaded loads it needed more cores to achieve the performance of its predecessor. There are no free lunches.

http://www.anandtech.com/show/10158/the-intel-xeon-e5-v4-review/8
 
I don't think you'll have "30+ TFLOPs" discrete graphics cards in 2 years, except maybe for multi-GPU solutions (though you could get that right now, so...).
Point is, the performance difference between "highest-performing graphics card" and "highest-performing iGPU" will follow a downward path, until the graphics card simply stops making sense for consumers.

I definitely echo that sigh... We were discussing GPUs and integrated APUs for the consumer market, and as usual you move the goalposts to include consoles. I'd suggest you go ahead and buy a console APU to satisfy your desire.
integration is the future.
Integration is the past... and the present. Just not for everything.
Again, you are comparing consoles to dGPUs, which is a failure on your part: consoles are not desktop APUs, they are not for sale as APUs, they are not upgradeable or exchangeable like APUs, and they don't have memory setups like APUs. You are just grasping at straws here to come up with something to justify your absurd and illogical claims.

What's even worse is you comparing FLOPs between different architectures as a metric for performance; that's just total deliberate ignorance! Let alone choosing specific timelines to suit and cater to your claim! That's not educated imagination, that's just fantasy.

The constantly changing goalposts and metrics make it completely pointless to have a discussion, really.
Skylake + EDRAM is pretty nice. But it probably can't be built much larger, given the power-consumption restrictions of the designs it's usually implemented into.

Skylake + EDRAM is quite disappointing, actually. The GPU has a larger die size than GM108 (not even counting the EDRAM or the node advantage) and yet manages to match it at best. I don't really see any point in making it bigger.
Which defeats the argument that integration reduces costs. Additionally, Intel's high-performance CPUs have TDPs of around 95W. That leaves around 155W for the GPU + HBM. At that point you would barely be touching mid-end.

Actually, 155W is already GTX 1070 territory, and that's not including the power savings of HBM. With HBM, a 1080 may well fit in that power budget. That's not exactly mid-end. But given Skylake's GPU performance, it would seem scaling it up wouldn't really be worth it anyway.

AMD has done better with their APUs, but the lack of bandwidth hurts them. Hopefully they can do better with Raven Ridge and high-speed DDR4. DDR4-2400 plus a 20% or better improvement in efficiency results in about 80% more effective bandwidth vs. DDR3-1600.
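A quick sketch of where that ~80% comes from (assuming dual-channel configurations for both and treating the "20% or better improvement in efficiency" as a flat multiplier on peak bandwidth):

```python
# Effective-bandwidth comparison: dual-channel DDR4-2400 (+20% efficiency) vs. DDR3-1600.
def peak_bw_gb_s(mt_per_s, channels=2, bus_bits=64):
    return mt_per_s * channels * bus_bits / 8 / 1000  # GB/s

ddr3_1600 = peak_bw_gb_s(1600)               # ~25.6 GB/s
ddr4_2400 = peak_bw_gb_s(2400)               # ~38.4 GB/s
gain = (ddr4_2400 * 1.20) / ddr3_1600 - 1    # 1.5x raw * 1.2 efficiency ~= 1.8x
print(f"~{gain:.0%} more effective bandwidth")  # ~80%
```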
 
Actually, 155W is already GTX 1070 territory, and that's not including the power savings of HBM. With HBM, a 1080 may well fit in that power budget. That's not exactly mid-end. But given Skylake's GPU performance, it would seem scaling it up wouldn't really be worth it anyway.

For me that is mid-end, with high-end being GP102-level performance, irrespective of price. Why? Because we are discussing the end of discrete GPUs, so I have to compare against the top chip. The X104 line was mid-end not that long ago, and probably still would be if AMD managed to compete on metrics other than price.
 
There's an obvious, plain logical argument that a dGPU can always be faster than an iGPU: no matter what you put in your iGPU, there's always something you can remove and replace with more graphics units, and that's the CPU :mrgreen:

True, but the portion of silicon allocated to CPU cores has been shrinking steadily for years. It may soon bounce back up a bit, as Intel apparently intends to move to 6 cores with its APUs, but I'd expect the overall downward trend to continue.

Small GPUs, (under 100mm² and with 64-bit buses, roughly speaking) are already pretty much dead. Bigger 128-bit GPUs will likely survive until APUs move to some form of high-bandwidth memory, should that actually happen.

But ≥256-bit GPUs are probably safe for a while. Technically, you could build an APU with that kind of bandwidth and computing power, but I don't think there's a market for that, or that such a market will appear within the next decade.
 
You guys who are saying that SoCs will replace dGPUs and are looking at console SoCs are forgetting the impact on SoC CPUs. Look how low-clocked the CPU cores are in order to maintain specific thermals.

Even in 10 years, if the need for CPU performance improvements stalls enough (just look at how the 2500K is still great now), there will be enthusiast gamers and certain workstation users who want the 10-20% GPU performance advantage a dGPU offers, simply because the CPU adds die size, which impacts cost, which impacts whatever price/performance level they are looking at.
 
Vega 10 is coming in H1 2017, Scorpio in Q4 2017. Technically, the gap between the two could be anywhere from almost a year down to as little as 3 months.
Regardless, people got upset that I compared theoretical TFLOPs between GPUs from different IHVs, because they took it as some kind of bias. I simply picked the dGPU with the highest theoretical FP32 throughput at the time of each new relevant APU.

The peak values of a discrete GPU are still going to be more than 3x ahead; real-world performance probably a bit less. Scorpio is projected for a timeframe very close to GV100's.

A discrete Volta GPU still needs a CPU to drive it, whereas an APU does not. The separate CPU will lower those performance/watt numbers.

A console APU's CPU is going to be leaps and bounds behind any high-end CPU you could pair with any high-end GPU, and that's exactly why it's utter nonsense to compare a console SoC, which is meant for a specific resolution and a synced maximum framerate, against any sort of high-end machine. If things were as favorable as you want them to be, I seriously wonder why none of the HPC or other professional PC configurations have gone for SoCs already. Could it be that, not by accident, the die area of any given SoC is never going to be enough to replace both a high-end CPU and a high-end GPU? Unless you're willing to accept serious tradeoffs, it doesn't sound favorable for SoCs. If manufacturing process technology weren't increasingly problematic, things could be different; but since IHVs are bound to a practical maximum of, say, 600+mm² per chip on current processes, there's a hard limit on how many transistors you can fit within those boundaries.

Regardless, I'm not sure where I suggested there would be a console APU with better performance/watt than Volta in 2018. I'm pretty sure I didn't.
I wrote that I think socket AM4 in 2018 will see an APU with a >5 TFLOPs GPU and HBM2/3 on the same MCM, beneath the same heatspreader. This isn't that much of a long shot; it just assumes that a variation of the long-rumored Zeppelin/Greenland MCM will find its way into the consumer space. You think it won't?

It could very well make it into desktops (albeit not being meant for them); however, by the time that happens, the available high-end GPUs and CPUs will still be leaps and bounds ahead. What exactly makes you think the SoC in question would be that much ahead of an RX 480 right now?

Intel makes 22-core CPUs with 135W TDP. If the 8-core Haswell E had 2.6B transistors, these chips probably have 8B transistors or more.
The only reason they don't make larger CPUs for the consumer space is because they have an overwhelming performance advantage with much smaller chips, and they love the $/die-area they're getting with every CPU sold (who wouldn't?).
Let's not mistake what Intel chooses to make due to lack of competition to what Intel could be making if they had competition.

Now try to explain why no workstation, HPC machine, or anything of the sort has dumped separate CPUs and GPUs yet. Technology points more than ever in the direction of dedicated hardware rather than garden-variety "jack of all trades, master of none" SoCs. We already have dedicated HPC-only GPUs from both Intel and NVIDIA.
 
Skylake + EDRAM is quite disappointing, actually. The GPU has a larger die size than GM108 (not even counting the EDRAM or the node advantage) and yet manages to match it at best. I don't really see any point in making it bigger.
Isn't it likely a matter of being limited by power restrictions in its typical implementations though?
 
True, but the portion of silicon allocated to CPU cores has been shrinking steadily for years. It may soon bounce back up a bit, as Intel apparently intends to move to 6 cores with its APUs, but I'd expect the overall downward trend to continue.

Small GPUs, (under 100mm² and with 64-bit buses, roughly speaking) are already pretty much dead. Bigger 128-bit GPUs will likely survive until APUs move to some form of high-bandwidth memory, should that actually happen.

But ≥256-bit GPUs are probably safe for a while. Technically, you could build an APU with that kind of bandwidth and computing power, but I don't think there's a market for that, or that such a market will appear within the next decade.

Even if Intel adds 2 cores, I don't think the die size is going to increase all that much. AFAIK the pure CPU portion of the die is under 20% today, and the GPUs have been getting larger every generation (bar Kaby Lake).

64-bit GPUs aren't dead yet. Look at the sales figures for the 940M and its predecessors; they've sold reasonably well. I'd expect GP108 to sell reasonably well too.

There might be a niche market for such an APU, but I don't think it's anywhere near big enough to justify development. Anyone requiring that kind of GPU power is much better off with discrete GPUs.
Isn't it likely a matter of being limited by power restrictions in its typical implementations though?

I don't think so. An Intel i7-6770HQ, which is a Skylake GT4e part with a 45W TDP, in a NUC can barely keep up in gaming with a GM108-powered laptop, which is ~30W for the GPU plus ~15W for the CPU. And this is despite the larger die size and the node advantage. Intel still has quite a way to go to catch up to either AMD or Nvidia.
 
You guys who are saying that SoCs will replace dGPUs and are looking at console SoCs are forgetting the impact on SoC CPUs. Look how low-clocked the CPU cores are in order to maintain specific thermals.

Even in 10 years, if the need for CPU performance improvements stalls enough (just look at how the 2500K is still great now), there will be enthusiast gamers and certain workstation users who want the 10-20% GPU performance advantage a dGPU offers, simply because the CPU adds die size, which impacts cost, which impacts whatever price/performance level they are looking at.

Those enthusiasts wouldn't bring in enough margin to justify a separate chip. The problem with making a bigger SoC has been bandwidth; a good-enough quad-core at 45W inside an overall 300W SoC wouldn't be far enough behind a 300W dGPU to justify a separate chip. You already run into diminishing performance improvements from increasing power in that range.

The better way to kill off dGPUs is to render the current programming model obsolete. It doesn't matter if a bigger chip has more power under the hood if it doesn't have enough VRAM while the smaller chip does.

Now try to explain why no workstation, HPC machine or any of the sorts hasn't dumped CPUs and GPUs yet. Technology points more and more in the direction of dedicated hw than ever before than garden variety "jack ot all trades, master of none" SoCs. We already have dedicated HPC only GPUs both from Intel and NVIDIA.

Because AMD's Bulldozer has been a massive failure and HBM is quite recent. You expect dedicated hardware when tasks are more specialized, and integration when they become general purpose. The major bane of GPGPU has been the CPU-to-GPU transfer, where a CPU can often perform the task faster than it takes to move the data to and from the GPU. AMD has a good chance here to rectify the situation, at least on the hardware front. We'll know how well they did soon enough.
 
Because AMD's Bulldozer has been a massive failure and HBM is quite recent. You expect dedicated hardware when tasks are more specialized, and integration when they become general purpose. The major bane of GPGPU has been the CPU-to-GPU transfer, where a CPU can often perform the task faster than it takes to move the data to and from the GPU. AMD has a good chance here to rectify the situation, at least on the hardware front. We'll know how well they did soon enough.

The resulting APU, with its peak GPU and CPU performance, is still going to be leaps and bounds below any high-end professional solution. Replacing low-end professional dGPUs is of course perfectly feasible and makes absolute sense; what the majority here objects to is the idea or notion (call it whatever you want) that APUs/SoCs will replace all dedicated GPUs in the foreseeable future.
 
Those enthusiasts wouldn't bring in enough margin to justify a separate chip. The problem with making a bigger SoC has been bandwidth; a good-enough quad-core at 45W inside an overall 300W SoC wouldn't be far enough behind a 300W dGPU to justify a separate chip. You already run into diminishing performance improvements from increasing power in that range.
What about the professional/commercial/industrial/scientific fields, where e.g. 8-, 16-, or 32-core CPUs are needed in combination with a very powerful GPU? Combining a 300W-TDP GPU with a huge-core-count CPU? I think yields and chip sizes would create issues for such a chip.
 
There are different requirements and constraints for a CPU and a GPU.

The cost structure of the PC market means CPUs are limited by power and bandwidth and GPUs are limited by memory capacity.

A CPU is required to work with the cheapest memory technologies and support a variety of capacities. The majority of PCs still use LPDDR3 or slowish DDR4. This means CPUs, in general, are limited by bandwidth. It would be senseless to put more than four cores on a die; they'd be bandwidth-starved. Cubic/quadratic power scaling means we only get modest performance out of a higher TDP. That, and an increasing focus on mobile solutions, means lower-TDP SKUs.

A GPU is a bandwidth engine with a stack of ALUs on top. Everything is done to maximize bandwidth. The price paid is high power consumption and inflexible memory configurations.

The single biggest hurdle for APUs is the lack of cheap, flexible, high-bandwidth memory. In order to have a high-end APU, you need a high-end memory system. That means GDDR5/5X/6, Wide IO2, HBM/HBM2, or HMC. They vary a bit, but in general they only allow limited capacity and need close integration, either on the substrate or in the package. As a CPU producer you tie up a lot of cost in memory chips; as a system vendor you sacrifice a lot of configurability and production flexibility.

There are several advantages though:
1. You only pay for one memory subsystem; lower cost and no artificial segmentation of memory pools.
2. Lower aggregate power; no need to drive a PCIe interface when exchanging data.
3. More bandwidth for CPUs; 32 cores? Hell yeah!
4. More capacity for GPUs.

An APU doesn't have to be limited by the TDPs we see for CPUs today. There is no reason why an APU couldn't be allowed a 200-300W TDP, since it is replacing both the CPU and the GPU.
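To put point 3 above in rough numbers (a sketch only: the dual-channel DDR4-2400 and four-stack HBM2 figures are illustrative configurations, not a specific product):

```python
# Peak bandwidth per core for a hypothetical 32-core part:
# conventional dual-channel DDR4 vs. an HBM2-fed APU.
ddr4_2400_dual = 2 * 2400e6 * 8     # ~38.4 GB/s (2 channels x 64-bit @ 2400 MT/s)
hbm2_4_stacks  = 4 * 256e9          # ~1 TB/s   (4 stacks x ~256 GB/s each)

cores = 32
for name, bw in [("DDR4-2400 dual-channel", ddr4_2400_dual),
                 ("HBM2 x4 stacks", hbm2_4_stacks)]:
    print(f"{name}: {bw / cores / 1e9:.1f} GB/s per core")
```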

Cheers
 
There's nothing that speaks against a 600+mm² SoC right now. A high-end dGPU paired with a high-end CPU is still going to be at least a magnitude ahead in performance.
 