Speculation: GPU Performance Comparisons of 2020 *Spawn*

Sigh, AMD, is it so difficult to give a hint about perf. compared to the RTX 3080 (or their last best)... They make it so easy for nV and, more importantly, so difficult for me :D
 
Sigh, AMD, is it so difficult to give a hint about perf. compared to the RTX 3080 or something... They make it so easy for nV and, more importantly, so difficult for me :D

And let's say they hint it's faster in one specific cherry-picked benchmark. It's not hard to predict what the response will be.
 
Computerbase.de did a comparison between multiple architectures.

They clocked both the 5700 and the 2070 at 1500 core / 1700 mem, with 2304 ALUs each. On average, the 5700 was 1% faster, which is pretty much within margin of error.

https://www.computerbase.de/2019-07.../4/#diagramm-performancerating-navi-vs-turing
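For context on why that setup isolates the architecture: with both cards forced to the same ALU count and clock, their theoretical FP32 throughput is identical, so the remaining ~1% gap is pure IPC. A minimal sketch of that arithmetic (the 2304 ALUs / 1500 MHz figures come from the test above; the 2-FLOPs-per-ALU-per-clock FMA convention is the usual assumption):

```python
# Peak FP32 throughput = ALUs * 2 FLOPs per clock (FMA) * clock rate.
def peak_tflops(alus, clock_mhz):
    return alus * 2 * clock_mhz * 1e6 / 1e12

# Both cards were forced to the same configuration in the Computerbase test,
# so the theoretical number is identical and only IPC can differ.
rx5700  = peak_tflops(2304, 1500)   # RX 5700 at the forced 1500 MHz
rtx2070 = peak_tflops(2304, 1500)   # RTX 2070 at the forced 1500 MHz
print(f"RX 5700: {rx5700:.2f} TF, RTX 2070: {rtx2070:.2f} TF")  # ~6.91 TF each
```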

OK, I'm not sure why it's relevant to the Neon Noir results, where the gap between the 2070 Super and 5700 XT is much wider than usual. Clearly the IPC in that particular application isn't identical.
 
OK, I'm not sure why it's relevant to the Neon Noir results, where the gap between the 2070 Super and 5700 XT is much wider than usual. Clearly the IPC in that particular application isn't identical.
True. But in RDNA2 every CU is an RT core, so to speak. So if IPC was already this close in rasterization, not really THAT far behind in Neon Noir, and games won't be pure RT for the time being, RDNA2 has a fair chance to compete on this front. I have no doubt it will beat Turing. We don't know how it will compete with Ampere, though.
 
NV did, to say the least. AMD probably will too; 30 TF GPUs incoming, with ray tracing features. Can't wait for the Zen 3 event either.
People need to stop staring at theoretical numbers that much; that 30 TF GPU isn't even twice as fast as the same company's last-gen 11 TF GPU.
 
People need to stop staring at theoretical numbers that much; that 30 TF GPU isn't even twice as fast as the same company's last-gen 11 TF GPU.

Yes it is, in different workloads. Don't worry, I'm certain AMD's new GPUs will match or even surpass what NV has in terms of raw TF metrics. At least at the 3070/80-ish level. The 3090 is a class of its own, though; AMD's answer to that probably isn't prioritized for now.

Edit: You can't just claim that now either way. The 3080 is a modern 30 TF GPU, no more, no less. And don't mistake me for an NV fanboy or something; to be clear, I want AMD to compete, it's in a PC gamer's interest.
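For what the theoretical numbers actually say: peak FP32 is just shaders × 2 × clock, and it only translates into frame rates when everything else scales with it. A rough sketch below using the thread's own 30 TF / 11 TF figures; the ~1.7x in-game speedup is purely an assumed illustrative number ("not even twice as fast"), not a measurement:

```python
# Paper TFLOPS ratio vs. an assumed real-world speedup.
theoretical_new = 30.0   # TF, new-gen card (figure used in the thread)
theoretical_old = 11.0   # TF, last-gen card (figure used in the thread)
assumed_speedup = 1.7    # illustrative in-game gain, "not even twice as fast"

paper_ratio = theoretical_new / theoretical_old   # ~2.7x on paper
realized    = assumed_speedup / paper_ratio       # fraction of the paper gain seen in games
print(f"paper: {paper_ratio:.1f}x, realized in games: {realized:.0%} of that")
```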
 
If the 290X is a furnace, then what are the RTX 30 series cards to you?

This new Nvidia gen, and all the people who are now super excited about it, just proves that absolute power consumption on desktop graphics cards was never a real concern, but one manufactured by marketing divisions and further picked up by fanboys.


Now that Nvidia cards pull over 300W, let's see how many times we'll see people - and reviewers - doing that napkin math of how much more they'll need to pay in annual power bills, using completely bonkers numbers like assuming everyone plays 8 hours a day, every single day of the year.
All of a sudden I also don't see many people complaining that there aren't any mITX cards in the RTX 30 range to put in the tiniest cases they saw on the Internet.

What matters is whether or not the card fits the PSU (most decent 650W ones will), and whether or not the cooling is adequate for its chip and quiet.

Nvidia seemingly went above and beyond with their new coolers, which is great, but it also proves that the FUD they and their fanboys generated over cards that consume 300W+ is completely manufactured.
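For reference, the napkin math being mocked above looks roughly like this; the 100W delta, the 8 h/day figure, and the electricity price are all assumed numbers of the same kind those estimates use:

```python
# Annual electricity cost of a GPU's extra power draw, using the kind of
# deliberately generous assumptions described above (all numbers assumed).
extra_watts   = 100    # e.g. a 320W card vs. a 220W card
hours_per_day = 8      # "everyone plays 8 hours a day, every day"
usd_per_kwh   = 0.15   # assumed electricity price

kwh_per_year  = extra_watts / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * usd_per_kwh
print(f"{kwh_per_year:.0f} kWh/year, roughly ${cost_per_year:.0f}/year")
```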
I was surprised people missed the 80 series going from a 220W card on 16nm to a 320W card on 8nm. That is effectively going from mid-to-high-range wattage to OC Titan wattage (on a smaller node and a new arch).

The 2080 Ti was a 250W card, so for the ~30-40% increase you get with the 3080, the card pulls ~25-30% more power. Got to hand it to Nvidia, their marketing was on point again, but perf per watt is not looking too hot (or is it?), so it surprises me that people think AMD, with a ~50% increase in perf/watt for RDNA2, cannot match it.
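Spelling out that perf-per-watt comparison as a quick sketch; the 250W / 320W board powers are the reference TDPs, and the +35% performance is just the midpoint of the estimate above, so the result is illustrative only:

```python
# Relative perf/W of a 3080-class card vs. a 2080 Ti-class card,
# using the rough numbers from the post above (performance gain is an estimate).
old_power, new_power = 250.0, 320.0   # W, reference board powers
perf_gain = 1.35                      # midpoint of the assumed ~30-40% gain

power_ratio   = new_power / old_power    # ~1.28x more power drawn
perf_per_watt = perf_gain / power_ratio  # relative perf/W improvement
print(f"power +{power_ratio - 1:.0%}, perf/W improvement x{perf_per_watt:.2f}")
```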
 
It's not about the ROPs. The 2070 only has 3 GPCs, so at most 48 rasterized pixels per clock.

My bad.

It is, but there are rasterizers; these blocks are responsible for producing pixels.
How on earth would ROPs process more pixels than what's carried to them by rasterizers?
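As an aside on the 48-pixels-per-clock point: each GPC's raster engine emits at most 16 pixels per clock, so a 3-GPC part tops out at 48 per clock no matter how many ROPs sit behind it. A minimal sketch, treating the 16 px/clock-per-GPC and 64-ROP figures as the commonly cited Turing assumptions:

```python
# Front-end (rasterizer) vs. back-end (ROP) pixel throughput per clock.
# Whichever is smaller is the effective limit; figures are assumptions
# based on commonly cited Turing numbers.
gpcs             = 3    # RTX 2070 (TU106)
px_per_gpc_clock = 16   # raster engine output per GPC per clock
rops             = 64   # pixels the back end could write per clock

raster_limit = gpcs * px_per_gpc_clock   # 48 px/clock from the front end
effective    = min(raster_limit, rops)   # ROPs can't be fed more than this
print(f"raster limit {raster_limit}, ROPs {rops}, effective {effective} px/clock")
```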

My bad, I forgot about the front end. However, it is also true that a comparison of the architectures at ISO clocks cannot be done without forcing the frequency. Saying "the RTX 2070 Super is faster" is OK, but the 2070 Super also has a real boost clock much higher than the real boost clock of the 5700 XT:

https://forums.tomshardware.com/threads/2070-super-clock-speed-higher-than-expected.3538316/

So... even this is not a completely equal comparison.
 
I highly doubt PC gamers getting GPUs like that care in the least how many watts they draw. They most likely just care about performance and noise levels, of course. I for one don't care.
 
It's been a very important metric ever since Maxwell, AFAIR.

No idea; it surely has some importance, but for the PC gaming market watts are less of an issue. It's probably the only way forward to get massive leaps. It's a brute-force approach to be honest (if true); otherwise we wouldn't see those gains and would be stuck at half that performance.

AMD can and will follow to compete; fun times ahead.
 
It depends; there is a desktop PC gaming market and a laptop PC gaming market (I belong to the latter). In the latter case this metric is simply the most important, as in a laptop you are most if not all of the time limited by power and cooling, to the point that it often happens that a notebook with a 2080 is outrun by another notebook with a 2070/Super that has better cooling or different power limits.
 