Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Mobile is not dGPU; it's constrained by power, memory bandwidth, and years of sub-optimal old architectures, so it's easy to achieve great scaling numbers on new nodes because of that.
Precisely because they're constrained by a ton more metrics, they are forced to innovate on architecture and design; Apple has vastly beaten Nvidia's mobile GPUs in all PPA metrics over the years, and that's actually a proper product comparison you can make. The process node here is largely irrelevant to the discussion: look at the A13, which got only a ~5% node improvement at the same density and the same die area, yet we've seen a 30-35% improvement in performance. Again, don't say it's because they're starting from a worse-off architecture, as that's demonstrably false since they actually beat Nvidia's best in the category.
You are saying that within the same "7nm" node, a high-end GPU will be 250% faster than its mid-range sibling, with both using the same overall architecture. Excuse me if I don't believe this one iota; it's simply absurd.
Quit trolling; "same overall arch" just isn't factual when we know there are going to be larger changes. The matter of fact is that today you can underclock Navi 10 by 15-20% and essentially halve its power. Double that up into an actually larger GPU for the real high end, add the expected ~10% process node boost, plus whatever architectural improvements they can make, and it all sounds perfectly reasonable.

Also in this respect, don't forget that AMD has a lot of die-size headroom for going slower and wider; their transistor density is only at 60-65% of what's actually possible at 7nm.
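
A minimal back-of-the-envelope sketch in Python of the claim above, assuming the 15-20% underclock really does roughly halve power as stated; every figure is an assumption from this post, not measured data:

```python
# Back-of-the-envelope for "underclock ~15-20%, halve power, then go twice as wide".
# All inputs are assumptions from the post above, not measurements.

navi10_power_w = 225       # 5700 XT reference board power
clock_scale    = 0.80      # run each CU ~20% slower...
power_scale    = 0.50      # ...for roughly half the per-CU power (claimed)
width_scale    = 2.0       # double the CU count in a bigger die
node_scale     = 1.10      # assumed ~10% from the improved process

perf  = clock_scale * width_scale * node_scale        # relative to a 5700 XT
power = navi10_power_w * power_scale * width_scale    # back to ~the same TDP

print(f"~{perf:.2f}x a 5700 XT at ~{power:.0f} W")    # -> ~1.76x at ~225 W
```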
 
Just to be clear, a 12 TF+ Navi GPU should be faster than a 2080, right? And at what point does it reach a 2080 Super? Raytracing capability aside.
 


Source at 21:16:


Probably a stupid question, but how would Nvidia know the power of AMD-based next-gen GPUs?
 
look at the A13, which got only a ~5% node improvement at the same density and the same die area, yet we've seen a 30-35% improvement in performance
NVIDIA did the same with Kepler and Maxwell on the same 28nm node, but they never came anywhere near a 250% boost.
Apple has vastly beaten Nvidia's mobile GPUs in all PPA metrics over the years
NVIDIA stopped trying years ago, so it really doesn't matter here. Better to compare Apple to other competitors that are actually trying.
The matter of fact is that today you can underclock Navi 10 by 15-20% and essentially halve its power. Double that up into an actually larger GPU for the real high end
The same thing was said about Vega, by the way, and guess what? AMD never bothered to do it. Why? Because it wouldn't have worked; the architecture wasn't capable of scaling that wide. Navi could be under the same constraints, or at best under rapidly diminishing returns as it goes wider.
Quit trolling; "same overall arch" just isn't factual when we know there are going to be larger changes. The matter of fact is that today you can underclock Navi 10 by 15-20% and essentially halve its power. Double that up into an actually larger GPU for the real high end, add the expected ~10% process node boost, plus whatever architectural improvements they can make, and it all sounds perfectly reasonable.
None of what you are describing can achieve a 250% boost; we've seen bigger node transitions and complete architectural overhauls fail to reach even a quarter of that, so I don't really know where you are pulling these numbers from. You could say the same thing about any company: Intel can learn from mobile and boost its core performance by 300%, and AMD's Zen 3 will do the same and be 200% faster than Zen 2!

See where I am going with this? This is not useful. There are physical limitations to any design, and you of all people should know about this. Comparing mobile scaling, where it's easy to achieve certain gains due to forced limitations, with discrete, where you are essentially free, doesn't work; scaling is never linear, and even mobile scaling usually never reaches half of the "250% boost" you are talking about.
 
See where I am going with this? This is not useful. There are physical limitations to any design

because it wouldn't have worked, the architecture wasn't capable of scaling that wide
You're being purposefully obtuse; you're confusing physical limitations with commercial viability and design-resource availability. There's nothing in Vega's "architecture" that limited it; it was just a matter of the resources needed to design a micro-architecture that actually did it. They very much said this: they didn't see the return on investment compared to just working on other parts of the µarch, on top of the increase in manufacturing cost.
and even mobile scaling usually never reaches half of the "250% boost" you are talking about.
https://images.anandtech.com/doci/15156/PowerVR-GPU-Slides9.png ??

It's the exact same comparison of a current market GPU against a next-generation design on the same process node, so why are they allowed to make such improvements while desktop vendors like AMD aren't?

It's the same idiotic rhetoric Intel internally had a few years ago about hitting an "IPC wall" and rationalising yearly 8-10% improvements, until a competitor suddenly came sprinting and jumping over that wall to the point where they're now 80% ahead, and companies like Nuvia were created simply because the better engineers in the industry saw that the incumbents were asleep at the wheel and saw an opening to disrupt the market.

You brought up that 250%, but it's less than that. The 5700XT is already 20% faster than a 1080, so to get to that 2.7x target you only need a 2.25x improvement; as I said, going to roughly 75% speed at half power gets you a 1.5x perf increase if you go wider at the ~same TDP. Say the process brings you 10%, which takes you to 1.65x, and you only have to make up another 36% through architectural or design improvements, or you simply increase the TDP above 225W, which for a high-end GPU from AMD again isn't something very surprising. So please do tell me how those numbers are not realistic?
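
To make that chain of numbers explicit, here is a small Python sketch that simply multiplies the stated factors together; the 2.7x target, the 20% 5700XT-over-1080 figure, and the 75%-speed-at-half-power assumption are all taken from the argument above, not from benchmarks:

```python
# The post's own arithmetic, spelled out step by step (assumed figures, not benchmarks).
target_vs_1080 = 2.7                 # the "2.7x over a GTX 1080" target under discussion
xt_vs_1080     = 1.2                 # 5700 XT assumed ~20% faster than a 1080

needed_vs_xt = target_vs_1080 / xt_vs_1080       # 2.25x over the 5700 XT

wider_same_tdp = 0.75 * 2            # 75% clocks at half power, doubled wide -> 1.5x
with_node      = wider_same_tdp * 1.10           # +10% from process -> 1.65x

gap = needed_vs_xt / with_node       # what architecture (or extra TDP) must cover
print(f"need {needed_vs_xt:.2f}x, have {with_node:.2f}x, "
      f"remaining ~{(gap - 1) * 100:.0f}%")      # need 2.25x, have 1.65x, remaining ~36%
```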

I know that AMD presented Samsung with a blueprint so rosy and promising that it actually made them drop their own high-end plans and go with something totally unproven - there have to be some material improvements in RDNA2+ that made such a big multi-$100m gamble seem like a worthwhile business choice.
 
until a competitor suddenly came sprinting and jumping over that wall to the point where they're now 80% ahead
Don't forget to factor in their failing 10nm process; this is the main reason for their lagging behind.
as I said, going to roughly 75% speed at half power gets you a 1.5x perf increase if you go wider at the ~same TDP
All I see are a lot of assumptions and ninja half-assed math. Once more, scaling that large doesn't work linearly like this. Your assumption that AMD can increase performance by 150% at the same 225W TDP just from downclocking and going wider is simply amusing, I'll give you that, but there's no data or historical precedent to back it up.

I guess we shall see then; the big Navi announcement is drawing near anyway.
 
Don't forget to factor in their failing 10nm process; this is the main reason for their lagging behind.
I'm talking microarchitecture only - them not being able to produce 10nm isn't related to their lack of advancement on architecture.
I'll give you that, but there's no data or historical precedent to back it up.
What do you mean there's no precedent? It's exactly what Nvidia has been doing for years now. The Ti series is exactly that counterpart to the smaller regular parts, going slower and wider; the difference here is simply that AMD didn't have the resources to actually make that big counterpart until now.
 
The Ti series is exactly that counterpart to the smaller regular parts, going slower and wider -
And a higher TDP too; the 2080Ti is a 280W part, yet it's never 250% faster than its 225W 2080 variant. This never happened with the 2080Ti, nor the 1080Ti, 980Ti, or 780Ti, going all the way back to the original 8800GTX. In fact, all the Ti versions ever delivered was 30~40% more performance than their regular non-Ti variants.

Big Navi can be treated the same: expect 30% more performance, and if you believe in major architectural enhancements then maybe add another 15~20%, and that's it. The best you can really hope for is 50% faster than the 5700XT at a 280W TDP.
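
Spelled out as a tiny Python check, that counter-estimate works out like this; both percentages are this post's assumptions, nothing more:

```python
# The skeptical estimate above, spelled out; percentages are the post's assumptions.
ti_style_gain = 1.30        # typical "Ti over non-Ti" uplift cited (30~40%)
arch_gain     = 1.175       # optional extra 15~20% if RDNA2 brings big changes

best_case = ti_style_gain * arch_gain
print(f"~{best_case:.2f}x a 5700 XT at ~280 W")    # -> ~1.53x, i.e. "about 50% faster"
```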
 
And a higher TDP too; the 2080Ti is a 280W part, yet it's never 250% faster than its 225W 2080 variant. This never happened with the 2080Ti, nor the 1080Ti, 980Ti, or 780Ti, going all the way back to the original 8800GTX. In fact, all the Ti versions ever delivered was 30~40% more performance than their regular non-Ti variants.

Big Navi can be treated the same: expect 30% more performance, and if you believe in major architectural enhancements then maybe add another 15~20%, and that's it. The best you can really hope for is 50% faster than the 5700XT at a 280W TDP.
Are you REALLY just trying to troll now with these arguments? Nvidia's Ti variants also came out at the same time as the regular variants, not 1+ year (and a process, and an architecture) apart, unlike the vs-5700XT situation we're talking about right now. It's also 250W, not 280W - that's 35W / 16% more than the 2080's 215W. There's also the difference that Nvidia's cards aren't riding the voltage curve as absurdly as AMD's: the 5700XT's power curve (all AMD cards' in fact) is far steeper at its peak performance point than Nvidia's on the 2080, for example, so going slower gives them a significantly better efficiency boost / power reduction. The performance hit on a 5700XT limited to 150W is very minor precisely because of this.

At 1600MHz a 5700XT is essentially half (90-110W) the power of stock, with only a 15% perf decrease.
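
A rough illustration of why a modest clock drop can nearly halve power once a part is pushed past the knee of its voltage/frequency curve, using the simplified dynamic-power relation P ∝ f·V²; the ~20% voltage reduction at 1600MHz is an illustrative assumption, not a measured 5700XT value:

```python
# Simplified dynamic-power model: P is proportional to f * V^2.
# Voltage figures are illustrative assumptions, not measured 5700 XT values.

def rel_power(f_rel, v_rel):
    """Power relative to stock, for relative frequency f_rel and voltage v_rel."""
    return f_rel * v_rel ** 2

stock   = rel_power(1.00, 1.00)           # ~1.9 GHz at peak voltage
reduced = rel_power(1600 / 1900, 0.80)    # ~1.6 GHz, assuming ~20% lower voltage

print(f"power at 1.6 GHz ~= {reduced / stock:.0%} of stock")   # -> ~54%
```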
 
Are you REALLY just trying to troll now with these arguments?
No, trolling would be anyone thinking that at the same TDP, on the same node, and with the same architecture, AMD can give you a variant with 150% more performance just by playing with the voltages and the die size. This is not a technical discussion based on logical numbers; at this point, it's an exercise in futility.

No one argues that with a different enough architecture and on an upgraded node (5nm?) AMD can achieve significant gains, but 150% on the same node, same arch, same TDP? That's just fantasy.
 
No, trolling would be anyone thinking that at the same TDP, on the same node, and with the same architecture, AMD can give you a variant with 150% more performance just by playing with the voltages and the die size. This is not a technical discussion based on logical numbers; at this point, it's an exercise in futility.

No one argues that with a different enough architecture and on an upgraded node (5nm?) AMD can achieve significant gains, but 150% on the same node, same arch, same TDP? That's just fantasy.
Yes, I agree it's not a technical discussion anymore; you seem keen on ignoring the very real facts of PPA and GPU configuration scaling in the industry, and the various products out there exemplifying such different implementations.
 