Speculation: GPU Performance Comparisons of 2020 *Spawn*

Well, hopefully there won't be scaling problems with RDNA2. 80 CUs clocked high enough should be able to more than double the performance of the 5700 XT, which would make it competitive with the 3080. AMD has been really quiet, but stuff should start coming out now that Nvidia has announced their stuff.
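As a rough back-of-envelope check of that claim (the clock speeds here are my own assumptions, not announced specs, and paper TFLOPS obviously aren't game performance):

```python
# Back-of-envelope FP32 throughput. Assumes RDNA-style CUs with 64 shader
# ALUs each and 2 FLOPs per ALU per clock (FMA). The 80 CU part's clock is
# a pure guess on my end.

def tflops(cus, clock_ghz, alus_per_cu=64, flops_per_clock=2):
    return cus * alus_per_cu * flops_per_clock * clock_ghz / 1000

navi10   = tflops(40, 1.905)  # 5700 XT at its ~1.9 GHz boost -> ~9.8 TF
big_navi = tflops(80, 2.2)    # hypothetical 80 CU RDNA2 at an assumed 2.2 GHz

print(f"5700 XT ~{navi10:.1f} TF, 80 CU part ~{big_navi:.1f} TF "
      f"({big_navi / navi10:.2f}x)")  # ~9.8 vs ~22.5 TF, ~2.3x on paper
```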
 
What the hell are you even talking about? The 8800 GT released practically 2 months before the HD 3000 series, so there was zero competitive pressure behind the 8800 GT's low price: https://www.techpowerup.com/review/zotac-geforce-8800-gt/

Its only competitors were the 2900 XT and Nvidia's own 8800 GTS 640 or 8800 GTX, all of which were more than $100 more expensive.

Same with Maxwell, where they priced it well below AMD's cards or their own, especially the 970: https://www.techpowerup.com/review/nvidia-geforce-gtx-980/

He explained it pretty clearly: G92 was priced and positioned in anticipation of RV670, while the GTX 970/980 were launched to compete with the then-old and discounted R9 290/290X.

Neither AMD nor Nvidia prices their chips out of the kindness of their hearts. If anything, it's Nvidia that's been caught by surprise by AMD's performance/price ratios (e.g. RV770 triggering refunds and price cuts on GT200 cards, and Hawaii disrupting Kepler Titan pricing).
 
Well, hopefully there won't be scaling problems with RDNA2. 80 CUs clocked high enough should be able to more than double the performance of the 5700 XT, which would make it competitive with the 3080. AMD has been really quiet, but stuff should start coming out now that Nvidia has announced their stuff.

Yeah, unleash that 20+ TF beast, AMD.
 
Utter nonsense. It's not about "special", it's about time to market. Two years after DLSS's introduction, there's still not a single competitive solution on the horizon...
To be fair, DLSS wasn't good until a few months ago. The first iteration was worse than a post-processing filter. Although I doubt AMD would be able to provide a solution similar to DLSS 2.0 any time soon, other companies might have more of a chance.
 
To be fair, DLSS wasn't good until a few months ago. The first iteration was worse than a post-processing filter. Although I doubt AMD would be able to provide a solution similar to DLSS 2.0 any time soon, other companies might have more of a chance.
Fully agree with this :yes:
 
Probably similar to this gen, with more HZD situations, since Sony is planning to release more games on PC.
I'm not sure what HZD PC is indicative of. That game had a ton of patches on the PS4 and was/is prone to crashing. And that's on the closed platform! And even with the PS4's fancy crash video recording and debug upload system, which I thought was pretty cool lol. Hauling it over to the PC was sure to bring forth more joyful problems. It seems very Bethesda-like in its robustness. :D

https://horizon.fandom.com/wiki/Horizon_Zero_Dawn_updates#PlayStation_4_updates
 
I'm not sure what HZD PC is indicative of. That game had a ton of patches on the PS4 and was/is prone to crashing. And that's on the closed platform! And even with the PS4's fancy crash video recording and debug upload system, which I thought was pretty cool lol. Hauling it over to the PC was sure to bring forth more joyful problems. It seems very Bethesda-like in its robustness. :D

https://horizon.fandom.com/wiki/Horizon_Zero_Dawn_updates#PlayStation_4_updates
In that you will need noticeably more powerful hardware to offer an equivalent experience.
 
Utter nonsense. It's not about "special", it's about time to market. Two years after DLSS's introduction, there's still not a single competitive solution on the horizon...
I'd say that two years after its introduction, this solution is starting to deliver what was described in that two-year-old introduction. In selected games.
 
Utter nonsense, DLSS isn't special in any way, devs will get there soon enough,

Well, aside from the fact that it's the only machine-learning-based resolution upscaling solution on the market from any vendor, has been for the last 2 years, and no one else has even announced they're working on a competing solution at this stage (outside of a few high-level patents that may or may not lead to something).

And of course there's the fact that the only truly good version of it (DLSS 2.0) requires Tensor cores with huge INT8/INT4 performance, giving the 3070 something like a 5x performance advantage over the XSX (rough numbers sketched below).

But other than that, no, nothing special at all.
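For what it's worth, here's a rough sketch of where a "~5x" ballpark could come from. The throughput figures are approximate paper peaks (XSX numbers as publicly disclosed by Microsoft, 3070 numbers from Nvidia's Ampere tensor specs), and whether structured sparsity applies to a given workload is an assumption:

```python
# Approximate peak INT8 throughput (TOPS). Paper numbers, not measured
# DLSS-style workload performance.

xsx_int8 = 49               # XSX shader-core INT8 dot products, ~49 TOPS
rtx3070_int8_dense = 163    # 3070 tensor cores, dense INT8
rtx3070_int8_sparse = 326   # with 2:4 structured sparsity

print(f"dense:  ~{rtx3070_int8_dense / xsx_int8:.1f}x")   # ~3.3x
print(f"sparse: ~{rtx3070_int8_sparse / xsx_int8:.1f}x")  # ~6.7x
```

So depending on which peak you count, "5x" sits roughly in the middle of that range.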

the GPUs are within less than 10% of each other,

This is baseless speculation, unsupported by any evidence at present. Our only data points so far for the performance of the XSX are the Gears 5 and Minecraft RTX demos, and both put it at or below (in the ray-tracing case) 2080-level performance. I acknowledge that 2 data points are not enough to reach a decent conclusion, and I do expect it to move up from there, at least in regular rasterization, when more data points become available. But for the time being that's all we have, so it's not logical to assume something that contradicts the only 2 available data points without some other evidence. A 2080 Ti is almost 30% faster than a 2080 at 4K, and the 3070 is faster still, so claiming a 10% difference seems a stretch at best. And what of ray tracing performance? Or ALU-heavy workloads, where the 3070 has a 66% advantage over the XSX?
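To make that chain of reasoning explicit (the 2080 Ti gap and the TFLOPS figures are the usual published ballpark values; the "XSX ≈ 2080" starting point is only an assumption drawn from those two demos):

```python
# Rough relative-performance chain behind the argument above. Every number
# is an approximation; the point is only that the implied gap is well
# beyond 10%.

xsx_vs_2080 = 1.00   # assumption from the Gears 5 / Minecraft RTX data points
ti_vs_2080  = 1.30   # 2080 Ti is ~30% faster than a 2080 at 4K
r3070_vs_ti = 1.00   # treat the 3070 as at least 2080 Ti level

print(f"implied 3070 vs XSX: ~{ti_vs_2080 * r3070_vs_ti / xsx_vs_2080:.2f}x")  # ~1.30x

# Paper FP32 ALU throughput:
rtx3070_tf, xsx_tf = 20.3, 12.15
print(f"FP32 advantage: ~{rtx3070_tf / xsx_tf - 1:.0%}")  # ~66-67% depending on rounding
```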

over a lifetime the Xsx is a far better investment as a 3070 will be outclassed by both consoles within a few years due to targeted optimizations,

Not if you're after "max graphics" like you stated above. For the next 2-3 years "max graphics" is more than likely going to require a 3070 given a choice between the two, especially where ray tracing is involved. While I agree that longer term, a 3070 will fall back due to lack of driver support from Nvidia and game developers moving on to more modern and powerful architectures, that doesn't change the situation in the here and now. Nor is it a particular problem, given that the vast majority of PC gamers who purchase an x070-class GPU don't intend to keep it for an entire console cycle and would usually upgrade to something much more powerful within 2-4 years.

So yes, as a value proposition over the long term, the XSX is likely better. But if you want "max graphics" you'll want to go with the 3070, and then reconsider your options again in a few years.

and if you've only got $500 then you don't have enough money for an NVME SSD, 8+ core CPU, etc. etc.

That goes without saying.
 
While I agree that longer term, a 3070 will fall back due to lack of driver support from Nvidia and game developers moving on to more modern and powerful architectures
Only thing I disagree with in the wider context of your post.
The 3070 will not fall behind the XSX, even if it were within that 10% possibility.
At worst you could set it to console-level graphics, if performance was actually that close anyway.
But the performance is high enough that you wouldn't need to do that; the same things that would make the 3070 have such a big drop in performance would also affect the consoles, so devs wouldn't do it without a fallback implementation to use.
 
Well, aside from the fact that it's the only machine-learning-based resolution upscaling solution on the market from any vendor, has been for the last 2 years, and no one else has even announced they're working on a competing solution at this stage (outside of a few high-level patents that may or may not lead to something).

And of course there's the fact that the only truly good version of it (DLSS 2.0) requires Tensor cores with huge INT8/INT4 performance, giving the 3070 something like a 5x performance advantage over the XSX.

But other than that, no, nothing special at all.



This is baseless speculation, unsupported by any evidence at present. Our only data points so far for the performance of the XSX are the Gears 5 and Minecraft RTX demos, and both put it at or below (in the ray-tracing case) 2080-level performance. I acknowledge that 2 data points are not enough to reach a decent conclusion, and I do expect it to move up from there, at least in regular rasterization, when more data points become available. But for the time being that's all we have, so it's not logical to assume something that contradicts the only 2 available data points without some other evidence. A 2080 Ti is almost 30% faster than a 2080 at 4K, and the 3070 is faster still, so claiming a 10% difference seems a stretch at best. And what of ray tracing performance? Or ALU-heavy workloads, where the 3070 has a 66% advantage over the XSX?



Not if you're after "max graphics" like you stated above. For the next 2-3 years "max graphics" is more than likely going to require a 3070 given a choice between the two, especially where ray tracing is involved. While I agree that longer term, a 3070 will fall back due to lack of driver support from Nvidia and game developers moving on to more modern and powerful architectures, that doesn't change the situation in the here and now. Nor is it a particular problem, given that the vast majority of PC gamers who purchase an x070-class GPU don't intend to keep it for an entire console cycle and would usually upgrade to something much more powerful within 2-4 years.

So yes, as a value proposition over the long term, the XSX is likely better. But if you want "max graphics" you'll want to go with the 3070, and then reconsider your options again in a few years.



That goes without saying.

The tests were done in March 2020 without the final devkit, which was only delivered in June. We can compare at the end of the year with the Digital Foundry tests... I suppose the 3070 will be faster, but we have no idea by how much.
 
The tests were done in March 2020 without the final devkit, which was only delivered in June. We can compare at the end of the year with the Digital Foundry tests... I suppose the 3070 will be faster, but we have no idea by how much.

Yes, agreed, they're not great data points. Much better to wait a few weeks for the proper RDNA2 benchmarks and derive real performance from there. I was just pointing out that when you only have 2 data points, it doesn't make much sense to assume something that completely contradicts them both, without some further data point upon which to base your assumption. That's more "faith" than reasoning.
 
The tests were done in March 2020 without the final devkit, which was only delivered in June. We can compare at the end of the year with the Digital Foundry tests... I suppose the 3070 will be faster, but we have no idea by how much.
New GPU releases also take a couple of driver revisions to mature, so you could say we haven't seen the full potential there either.

Either way, I took it as: the data points we currently have don't match what is being presented. When we know more, we can adjust our views accordingly.
 
And the Series X is already close enough to a 3070 to be a basic wash as far as that's concerned.

Almost double the rasterization performance for the 3070. Then there's RT, DLSS, etc. Not even close.


both consoles within a few years due to targeted optimizations

10 TF vs a 20 TF GPU, hell of an optimization there.

But if you want "max graphics" you'll want to go with the 3070, and then reconsider your options again in a few years.

A 3070 will last an entire gen. A 7870 did, and it plays HZD fine. And that's closer to the console than the 3070 is.
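As a rough sanity check on that "closer to the console" comparison, using launch-spec paper FP32 numbers (which is all this can be):

```python
# Paper FP32 throughput ratios: last gen's HD 7870 vs the PS4, and the
# RTX 3070 vs the Xbox Series X. Launch-spec ballpark figures.

hd7870_tf, ps4_tf  = 2.56, 1.84
rtx3070_tf, xsx_tf = 20.3, 12.15

print(f"7870 vs PS4: ~{hd7870_tf / ps4_tf:.2f}x")   # ~1.39x
print(f"3070 vs XSX: ~{rtx3070_tf / xsx_tf:.2f}x")  # ~1.67x
```

On paper, the 7870 actually started relatively closer to its console generation than the 3070 does to this one.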
 
Almost double the rasterization performance for the 3070. Then there's RT, DLSS, etc. Not even close.




10 TF vs a 20 TF GPU, hell of an optimization there.



A 3070 will last an entire gen. A 7870 did, and it plays HZD fine. And that's closer to the console than the 3070 is.

We will wait until they release more drivers, but currently Ampere seems less efficient than Turing in real game performance per FLOP.

The 3070 will be faster than the Xbox Series X, the most powerful console, but we need to wait to know the gap.
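A quick illustration of the "performance per flop" point, with the caveat that this is a crude paper comparison and the "3070 ≈ 2080 Ti in games" starting point is taken from Nvidia's own positioning rather than independent benchmarks:

```python
# Crude perf-per-TFLOP comparison. Ampere doubled the FP32 ALUs per SM, so
# its paper TFLOPS inflate relative to delivered game performance.

rtx2080ti_tf = 13.4   # 2080 Ti paper FP32 TFLOPS
rtx3070_tf   = 20.3   # 3070 paper FP32 TFLOPS

# If the 3070 roughly matches a 2080 Ti in games (assumption), then per
# paper TFLOP it delivers only about:
print(f"~{rtx2080ti_tf / rtx3070_tf:.0%} of Turing's per-TFLOP output")  # ~66%
```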
 
Btw, I wonder if Big Navi could be more than 80 CUs? That would be barely enough to compete with the 3070, but not more.
But I think all the rumors said 80, and only Arcturus is 128?
 