If NVIDIA can build a 500mm² GeForce, I don't see why Intel couldn't build a 500mm² Skylake + IGP chip if they saw fit to do so. They'd probably want to price it something like Xeon Phi though.
I wasn't being serious. I also rewrote that post before you replied because I felt it was ridiculous.

Which defeats the argument that integration reduces costs. Additionally, Intel's high-performance CPUs have TDPs around 95W. That leaves you with around 155W left for GPU + HBM. At that point you would barely be touching mid-end.
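(For what it's worth, a rough back-of-the-envelope of that power budget, as a Python sketch. The ~250W total package figure is purely my assumption for a hypothetical big APU; only the 95W CPU TDP comes from the post above.)

# Power-budget sketch for a hypothetical big APU package.
# 250W total is an assumed ceiling, not a figure from any roadmap.
package_budget_w = 250
cpu_tdp_w = 95                               # high-performance desktop CPU TDP
gpu_plus_hbm_w = package_budget_w - cpu_tdp_w
print(gpu_plus_hbm_w)                        # 155 -> the "155W left for GPU + HBM" above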
Where exactly did I gratuitously throw accusations of doom and gloom about specific GPUs based on unreleased competition?

And yet you are the one who was going lalalala about upcoming competition to Polaris 11, refusing to analyse the situation in the long term. Hypocrite much?
Lol I don't even know if AMD will exist within the next 10 years.

Your view goes only as far as it benefits AMD.
Theoretical Flops mean something, but they certainly don't mean everything. Had you actually read the whole post before bringing out the pitchfork:

That is why you are only comparing AMD hardware, when it is a known fact that theoretical Flops mean nothing when the competitor has fewer Flops but higher performance.
This is just one metric, which is theoretical compute throughput at FP32, but anyone is free to do the same exercise regarding fillrate or bandwidth.
The conclusion will be the same: integration is the future.
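(As an aside, a minimal sketch of how that theoretical FP32 metric is usually derived, in Python. The chip figures below are made-up placeholders for illustration, not claims about any specific product.)

# Theoretical FP32 throughput: shader ALUs x 2 ops/clock (FMA) x clock speed.
def fp32_tflops(alus, clock_ghz):
    return alus * 2 * clock_ghz / 1000.0

print(fp32_tflops(2304, 1.2))   # hypothetical part: ~5.5 TFLOPs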
I thought Scorpio is rumored for holiday 2017 at the earliest? If yes, why exactly would I want to compare those rumored TFLOPs against a TFLOP value from a desktop GPU almost a year earlier? Let's do Scorpio vs. Volta in the given case, and use a less lame metric than sterile FLOPs. If anything, at least something like DGEMM or SGEMM.
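(If anyone wants to play along at home, here's a rough sketch of measuring achieved SGEMM throughput instead of quoting the theoretical number. It assumes NumPy linked against a reasonable BLAS; the matrix size is an arbitrary choice.)

import time
import numpy as np

n = 4096                                     # arbitrary problem size
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b                                        # warm-up run
t0 = time.perf_counter()
a @ b
dt = time.perf_counter() - t0

flops = 2 * n**3                             # multiply-adds in an n x n x n GEMM
print(f"achieved SGEMM: {flops / dt / 1e9:.1f} GFLOPS")
# divide by measured package/board power to get the GFLOPS/Watt figure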
A discrete Volta GPU still needs a CPU to drive it, whereas an APU does not. The separate CPU will lower those performance/watt numbers.

http://www.extremetech.com/wp-content/uploads/2015/03/Pascal1.png
So early 2018 will have an APU/SoC capable of around 72 GFLOPs/Watt in SGEMM? I'd have reason to pop a champagne cork if a console APU manages even a third of that by then, and that's a console chip to start with.
Intel makes 22-core CPUs with 135W TDP. If the 8-core Haswell E had 2.6B transistors, these chips probably have 8B transistors or more.

Skylake + EDRAM is pretty nice. But it probably can't be built much larger when it usually has power consumption restrictions in the designs it's implemented into.
Yeah, I am aware of that, and of hardware like Xeon Phi where they use their own form of HBM.

ToTTenTranz said:
Intel makes 22-core CPUs with 135W TDP. If the 8-core Haswell E had 2.6B transistors, these chips probably have 8B transistors or more.
The only reason they don't make larger CPUs for the consumer space is because they have an overwhelming performance advantage with much smaller chips, and they love the $/die-area they're getting with every CPU sold (who wouldn't?).
Let's not mistake what Intel chooses to make due to lack of competition for what Intel could be making if they had competition.
I don't think they're gonna build your super APU though.
Where exactly did I gratuitously throw accusations of doom and gloom about specific GPUs based on unreleased competition?
It's completely different to what I did here: look at the big picture, how things are evolving in terms of volatile memory form factors and integration, long-term roadmaps, etc. and come up with a completely IHV-independent opinion about how things will be in 10 years.
Lol I don't even know if AMD will exist within the next 10 years.
It doesn't change my opinion that discrete GPUs for consumer PCs will be gone within 10 years, at least in their current cumbersome slot form.
I don't know if it'll benefit AMD. I don't know if Intel's iGPUs start punching above AMD's, or if Intel starts putting GPU IP from nvidia in their higher-end CPUs, or if nvidia gets an x86 license and starts making their own PC APUs, or if AMD gets bought by nvidia or apple or google. This is definitely not an "AMD is going to win in the long term because.." statement. That would be stupid.
Theoretical Flops mean something, but they certainly don't mean everything. Had you actually read the whole post before bringing out the pitchfork:
You want to take away the GP102 data point I put in there, so that it only has GPUs from one vendor? Is that what bothers you so much? You want me to fill in the years without a reasonable breakthrough in APU performance?
APUs iterate more slowly than GPUs, so it's natural to see the GPU pull a bit further ahead of the APU every other year. It doesn't change the general landscape, though, which is that the highest-performance APUs are getting closer to the highest-performance dGPUs.
As for your comment about refusing to answer what you deemed as "reasonable arguments" from another user, I'll go over this only once.
See that post again and compare it, for example, to Ailuro's post. One is inviting discussion, the other is inviting shit flinging (you're grasping at straws, your absurd claims, that's just total deliberate ignorance).
It's garbage because it's shit flinging. I'm not answering posts of that level because I know I'll eventually go down to that level and hurt both my own standing and the forum. This is me recognizing my own flaws.
Regardless, I've been on tech forums for like 18 years and this user has consistently shown some of the most extreme bias I've ever seen. Past a certain level of bias, you just know discussion with this kind of person is worthless. That said, rest assured this will be my last confrontation with the user, as he now stands as the sole entry in my ignore list (my first ever in 18 years).
Intel makes 22-core CPUs with 135W TDP. If the 8-core Haswell E had 2.6B transistors, these chips probably have 8B transistors or more.
The only reason they don't make larger CPUs for the consumer space is because they have an overwhelming performance advantage with much smaller chips, and they love the $/die-area they're getting with every CPU sold (who wouldn't?).
Let's not mistake what Intel chooses to make due to lack of competition for what Intel could be making if they had competition.
I don't think you'll have "30+ TFLOPs" discrete graphics cards in 2 years, except maybe for multi-gpu solutions (though you could get that right now, so..).
Point is, the performance difference between "highest-performing graphics card" and "highest-performing iGPU" will follow a downward path, until the graphics card simply stops making sense for consumers.
Sigh...
Integration is the past... and the present. Just not for everything.

integration is the future.
Again, you are comparing consoles to dGPUs, which is a fail on your part: consoles are not desktop APUs, they are not for sale as APUs, they are not upgradable or exchangeable like APUs, and they don't have the same memory as APUs. You are just grasping at straws here to come up with something to justify your absurd and illogical claims.
What's even worse is you comparing FLOPs between different architectures as a metric for performance; that's just total deliberate ignorance! Not to mention choosing specific timelines to suit and cater to your claim! That's not educated imagination, that's just fantasy.
Skylake + EDRAM is pretty nice. But it probably can't be built much larger when it usually has power consumption restrictions in the designs it's implemented into.
Which defeats the argument that integration reduces costs. Additionally, Intel's high-performance CPUs have TDPs around 95W. That leaves you with around 155W left for GPU + HBM. At that point you would barely be touching mid-end.
Actually, 155W is already GTX 1070 territory, and that's not including the power savings of HBM. With HBM a 1080 may well fit in that power budget. That's not exactly mid-end. But given Skylake's GPU performance, it would seem scaling it up wouldn't really be worth it anyway.
There's an obvious, plain logical argument that a dGPU can always be faster than an iGPU: no matter what you put in your iGPU, there's always something you can remove and replace with more graphics units, and that's the CPU.
Vega 10 is coming H1 2017, Scorpio is coming Q4 2017. Technically, the distance between the two could be between almost a year and as little as 3 months.
Regardless, people got upset because I compared theoretical TFLOPs between GPUs from different IHVs; they took it as some kind of bias. I simply picked the dGPU with the highest theoretical FP32 throughput at the time of each new relevant APU.
A discrete Volta GPU still needs a CPU to drive it, whereas an APU does not. The separate CPU will lower those performance/watt numbers.
Regardless, I'm not sure where I suggested there would be a console APU with better performance/watt than Volta in 2018. I'm pretty sure I didn't.
I wrote that I think socket AM4 in 2018 will see an APU with >5 TFLOPs GPU and HBM2/3 in the same MCM beneath the same heatspreader. This isn't that much of a long shot, it's just assuming that a variation of the long-rumored Zeppelin/Greenland MCM will find its way into consumer space. You think it won't?
Intel makes 22-core CPUs with 135W TDP. If the 8-core Haswell E had 2.6B transistors, these chips probably have 8B transistors or more.
The only reason they don't make larger CPUs for the consumer space is because they have an overwhelming performance advantage with much smaller chips, and they love the $/die-area they're getting with every CPU sold (who wouldn't?).
Let's not mistake what Intel chooses to make due to lack of competition for what Intel could be making if they had competition.
Isn't it likely a matter of being limited by power restrictions in its typical implementations though?

Skylake + EDRAM is quite disappointing, actually. The GPU has a larger die size than GM108 (not even counting the EDRAM and the node advantage) and yet manages to match it at best. I don't really see any point in making it bigger.
True, but the portion of silicon allocated to CPU cores has been shrinking steadily for years. It may soon bounce back up a bit, as Intel apparently intends to move to 6 cores with its APUs, but I'd expect the overall downward trend to continue.
Small GPUs (under 100mm² and with 64-bit buses, roughly speaking) are already pretty much dead. Bigger 128-bit GPUs will likely survive until APUs move to some form of high-bandwidth memory, should that actually happen.
But ≥256-bit GPUs are probably safe for a while. Technically, you could build an APU with that kind of bandwidth and computing power, but I don't think there's a market for that, or that such a market will appear within the next decade.
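(Side note: a quick Python sketch of where those bus widths land in terms of bandwidth. The per-pin data rates are ballpark GDDR5-era figures for illustration only, not tied to any specific product.)

# Peak memory bandwidth (GB/s) = bus width in bytes x per-pin data rate (Gbps).
def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gbs(64, 7))    # ~56 GB/s  - the small-GPU class
print(bandwidth_gbs(128, 7))   # ~112 GB/s
print(bandwidth_gbs(256, 8))   # ~256 GB/s - the >=256-bit class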
Isn't it likely a matter of being limited by power restrictions in its typical implementations though?
You guys who are saying that SoCs will replace dGPUs and are looking at console SoCs are forgetting the impact on the SoC's CPU. Look how low-clocked the CPU cores are to maintain specific thermals.
Even in 10 years, if the need for CPU performance improvements stalls enough (just look at how the 2500K is still great now), there will be enthusiast gamers and certain workstation users who want that 10-20% GPU performance advantage a dGPU offers, simply because the CPU adds to die size, which impacts cost, which impacts whatever price/performance level they are looking at.
Now try to explain why no workstation, HPC machine, or anything of the sort has dumped separate CPUs and GPUs yet. Technology points more than ever in the direction of dedicated hardware, rather than garden-variety "jack of all trades, master of none" SoCs. We already have dedicated HPC-only GPUs from both Intel and NVIDIA.
Because AMD's Bulldozer has been a massive fail and HBM is quite recent. You expect dedicated hardware when tasks are more specialized, and integration when they become general purpose. The major bane of GPGPU has been transfer between CPU and GPU: cases where the CPU can perform the task faster than it takes to move the data to and from the GPU. AMD has a good chance here to rectify the situation, at least on the hardware front. We'll know how well they did soon enough.
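(To make the transfer-overhead point concrete, a tiny break-even model in Python. Every number below is an illustrative assumption, not a measurement.)

# Is offloading to a dGPU worth it once the PCIe round trip is included?
data_bytes   = 256e6     # working set shipped to the GPU and back (assumed)
pcie_bps     = 12e9      # effective PCIe 3.0 x16 bandwidth in bytes/s (assumed)
cpu_time_s   = 0.050     # time the CPU alone would need (assumed)
gpu_kernel_s = 0.005     # time the GPU kernel itself would need (assumed)

transfer_s  = 2 * data_bytes / pcie_bps      # copy over and copy back
gpu_total_s = transfer_s + gpu_kernel_s

print(f"transfer {transfer_s*1e3:.1f} ms, GPU total {gpu_total_s*1e3:.1f} ms")
print("offload wins" if gpu_total_s < cpu_time_s else "CPU wins")
# On an APU with shared memory the transfer term largely disappears.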
What about the professional/commercial/industrial/scientific field, where e.g. 8/16/32-core CPUs are needed in combination with a very powerful GPU? Combining a 300W-TDP GPU with a huge-core-count CPU? I think the yields and die sizes will create issues on such a chip.

Those enthusiasts wouldn't bring in enough margins to justify a separate chip. The problem with making a bigger SoC has been bandwidth; a 300W SoC with a good-enough 45W quad-core in it wouldn't be far enough behind a 300W dGPU to justify a separate chip. You already run into diminishing performance improvements from increasing power in that range.