Speculation: GPU Performance Comparisons of 2020 *Spawn*

Status
Not open for further replies.
I think the big issue AMD are going to have this time around is the apples-to-oranges problem in benchmarks. Are reviewers going to compare everything at native resolution and then create a second set of results comparing AMD native to Nvidia DLSS? I can see that happening a lot.

We don't know; there aren't that many games with DLSS support right now. The real question is whether AMD has improved FidelityFX, added ML to it, or has any new features.
 
I don't think that will be a problem at all. As it stands today, reviewers always include DLSS-off numbers, and there's no reason for that to change.

It's a problem if DLSS is significantly more performant than AMD's native rendering, since DLSS is being written up as having "better than native" image quality.
 
I think the big issue AMD are going to have this time around is the apples-to-oranges problem in benchmarks. Are reviewers going to compare everything at native resolution and then create a second set of results comparing AMD native to Nvidia DLSS? I can see that happening a lot.

No, it's raytracing. With Minecraft, Fortnite, and Cyberpunk you have three of the biggest games at the end of the year. All of them will be included in benchmark suites. DLSS is just the cherry on top.
 
He's had more hits than misses. I don't think he ever said 8nm; he just said Samsung. And the jury is still out on that: all we have so far is that Gainward spec sheet listing 7nm.

His early 3090/GA102 info has been 100% accurate so far.

He did say "10nm, you can call it 8nm". And it is 8nm :)
 
I can see a new era of fanboyism starting if RDNA 2 delivers, now that Nvidia has gone for doubled FP32 ALUs. The good old "1 AMD ALU is weaker than 1 Nvidia SP/CUDA core" talk is going to flip direction.

It would also be interesting to see how AMD marketing counters the halo numbers from Nvidia, e.g. 10,000+ CUDA cores in the RTX 3090 vs. (allegedly) 5,120 ALUs in Navi 21.
:LOL:
The issue is that you have people like Troyan who believe these big numbers mean anything.

Even if you have 10,000 shaders, you need to fill them. Other major figures, like rasterizer count, ROPs, and TMUs, are also missing.

If you look at UE5's Nanite engine, with polygons smaller than a pixel, you will see that the rasterizer is not dead. With higher clocks and one more rasterizer, AMD may have the better hardware to offer.
 
Plus, Nvidia loves to shake up the market from time to time, as with the 8800 GT and the 980/970, cards that destroyed previous Nvidia cards $200-300 more expensive, with absolutely no AMD competition on the horizon.
 
If Big Navi can do 2x the 5700 XT, it should be within 10% of the 3080. If AMD manage to improve on that by another 10-20%, it'd be really quite uncomfortable for Nvidia.

It'd end up close enough to the 3090, and with a decent amount of RAM, that the 3090 wouldn't really matter for gaming.
 
Plus, Nvidia loves to shake up the market from time to time, as with the 8800 GT and the 980/970, cards that destroyed previous Nvidia cards $200-300 more expensive, with absolutely no AMD competition on the horizon.
? "Nvidia shakes up the market"?
The 8800 GT beat out the 3870 because Nvidia knew what AMD was doing. It was a great card, but Nvidia wasn't shaking up the market for funsies.
The GTX 970 (not sure why you mention the GTX 980) was due to AMD pricing Hawaii aggressively (for obvious reasons). The GTX 970 was a great deal for those who needed a card at that time, but it launched against Hawaii, which had been on the market for a year.

Nvidia is a business, and when talking about GPUs that take at least 2-3 years from design to hitting shelves, you need to plan your moves while thinking about your competitor's. Nvidia has been doing exceptionally well at that.
There is also the factor of which market those GPUs were targeting. Both G92 and the GTX 970 were aiming at the performance tier, with a large TAM, where you need to make a splash.
That is exactly what AMD did after R600/RV670 by moving to the small-die/sweet-spot strategy, which they tried again with Polaris (and, one could argue, Navi 10).
 
Yeah, they have to have their $500 GPU beat the consoles by at least a decent margin, or else people won't buy them.

And the Series X is already close enough to a 3070 to be basically a wash as far as that's concerned. If I had exactly $500 and wanted "max graphics", I'd know which one I'd pick (the one I don't have to pair with a bunch of other expensive components).

Nvidia's reveal today does leave open the possibility that AMD can beat it this year across the board. But that's just a possibility.
 
And the Series X is already close enough to a 3070 to be basically a wash as far as that's concerned. If I had exactly $500 and wanted "max graphics", I'd know which one I'd pick (the one I don't have to pair with a bunch of other expensive components).

If you actually wanted "max graphics", you'd get the 3070. It's faster than a 2080 Ti, which is in turn faster than the XSX. It's also likely far faster in RT or heavily ALU-limited scenarios, before even considering DLSS. If you want "near max graphics" (taking a 3070 to be "max") at a much lower cost, then you'd get an XSX.
 
? "Nvidia shakes up the market"?
The 8800 GT beat out the 3870 because Nvidia knew what AMD was doing. It was a great card, but Nvidia wasn't shaking up the market for funsies.
The GTX 970 (not sure why you mention the GTX 980) was due to AMD pricing Hawaii aggressively (for obvious reasons). The GTX 970 was a great deal for those who needed a card at that time, but it launched against Hawaii, which had been on the market for a year.

What the hell are you even talking about? The 8800 GT released practically two months before the HD 3000 series; there was zero motivation for the 8800 GT's low price: https://www.techpowerup.com/review/zotac-geforce-8800-gt/

Its only competitors were the 2900 XT and Nvidia's own 8800 GTS 640 and 8800 GTX, all of which were more than $100 more expensive.

Same with Maxwell, where they priced it well below AMD's cards or their own, especially the 970: https://www.techpowerup.com/review/nvidia-geforce-gtx-980/
 
If you actually wanted "max graphics", you'd get the 3070. It's faster than a 2080 Ti, which is in turn faster than the XSX. It's also likely far faster in RT or heavily ALU-limited scenarios, before even considering DLSS. If you want "near max graphics" (taking a 3070 to be "max") at a much lower cost, then you'd get an XSX.

Utter nonsense. DLSS isn't special in any way; devs will get there soon enough. The GPUs are within less than 10% of each other, and over its lifetime the XSX is a far better investment, as a 3070 will be outclassed by both consoles within a few years due to targeted optimizations. And if you've only got $500, then you don't have enough money for an NVMe SSD, an 8+ core CPU, etc.

Regardless, one of the biggest questions now is whether AMD will feel free to make its own 345 W GPU. They've done it before, and with Nvidia putting out a massive three-slot card, it's not a detrimental comparison. Heck, maybe we'll see the return of their liquid-cooled cards; the engineers who did those coolers might still be kicking around the company. Is anyone familiar with how long it would take to engineer the cooler specifically? Binning the right sort of chips, etc., shouldn't be that hard.
 
What the hell are you even talking about? The 8800 GT released practically two months before the HD 3000 series; there was zero motivation for the 8800 GT's low price: https://www.techpowerup.com/review/zotac-geforce-8800-gt/

Its only competitors were the 2900 XT and Nvidia's own 8800 GTS 640 and 8800 GTX, all of which were more than $100 more expensive.

Same with Maxwell, where they priced it well below AMD's cards or their own, especially the 970: https://www.techpowerup.com/review/nvidia-geforce-gtx-980/
Ummm... did you even read my post? I mentioned and explained what you did, with some additional information.
What didn't you understand?
 