Nvidia Ampere Discussion [2020-05-14]

iXBT RTX 3070 Review

[iXBT benchmark charts: fillrate tests, LuxMark, and Bright Memory RTX]
 
Yes, I agree. People were criticizing the architecture as not having sufficient gains over Turing because the power consumption on the 3080 is so high. But now we can see that, thanks to smart improvements, they've managed to equal a 2080 Ti with a lower-power part that's on paper more limited in many ways.

The architecture itself doesn't seem to have improved perf/W in a meaningful way though, and the power savings are in line with what can be expected from a node transition (or less, in the case of the 3080/3090). Part of the shortfall is attributable to the choice of node/implementation, and the rest is due to the fact that Nvidia chose to push the voltage and clocks higher than they traditionally have. The resulting increase in absolute power with only a minor increase in perf/W is what people are unimpressed by.
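As a back-of-envelope illustration (all numbers here are assumptions for the sake of the arithmetic, not measurements: 3080 ≈ 1.3x a 2080 Ti at 320 W vs 260 W, 3070 ≈ a 2080 Ti at 220 W):

```python
# Back-of-envelope perf/W comparison. All figures are assumed round
# numbers for illustration, not measured results.
specs = {
    "2080 Ti": {"rel_perf": 1.00, "power_w": 260},
    "3070":    {"rel_perf": 1.00, "power_w": 220},
    "3080":    {"rel_perf": 1.30, "power_w": 320},
}

baseline = specs["2080 Ti"]["rel_perf"] / specs["2080 Ti"]["power_w"]
for name, s in specs.items():
    ratio = (s["rel_perf"] / s["power_w"]) / baseline
    print(f"{name}: {ratio:.2f}x perf/W vs 2080 Ti")
```

On those assumptions the 3080 only gains ~6% perf/W over the 2080 Ti while the 3070 gains ~18%, which is exactly the pattern being described.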

You have to keep in mind that the xx70 cards have traditionally been more power efficient than the flagships, so the fact that the 3070 sort of lives up to that reputation is not unexpected as such (aside from the slight increase in absolute power). While the 3070 seems to be a better implementation of Ampere, the transistor counts are very similar if we compare it to the 2080 Ti: TU102 has 18.6B transistors vs 17.4B for GA104, with no significant differences in features between the two. The 2080 Ti runs a 352-bit memory bus (cut down from TU102's 384-bit) with 4 of 72 SMs disabled, versus the full 256-bit bus and 2 of 48 SMs disabled on the 3070, so it has a bandwidth advantage there. I did expect the 3070 to trail the 2080 Ti slightly, so the fact that it matches it is a good thing, but not a huge surprise as such.
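For reference, a quick sketch of the spec-sheet numbers behind this comparison (memory bandwidth in GB/s is bus width in bits divided by 8, times the per-pin data rate in Gbps):

```python
# Public spec-sheet comparison behind the paragraph above.
# Bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps).
cards = {
    #                  transistors, SMs (enabled, total), bus bits, Gbps
    "2080 Ti (TU102)": (18.6e9, (68, 72), 352, 14),
    "3070 (GA104)":    (17.4e9, (46, 48), 256, 14),
}

for name, (xtors, (sm_on, sm_all), bus, gbps) in cards.items():
    bw = bus / 8 * gbps
    print(f"{name}: {xtors / 1e9:.1f}B transistors, "
          f"{sm_on}/{sm_all} SMs, {bw:.0f} GB/s")
```

That works out to 616 GB/s for the 2080 Ti against 448 GB/s for the 3070.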
 

Fun to see the 3070 beat the 3080 in the 3DMark Vantage color fill test in the iXBT review.

The 96 ROPs are clearly pulling their weight for the 3070, although I'm not quite sure why the 3080 is slower: given the 3070's memory bandwidth deficit, I was expecting it to be a bit behind, even though the 3070 and 3080 both have 96 ROPs.

Might be a bottleneck in the fabric connecting the ROPs to cache/memory somewhere in GA102?
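For what it's worth, the theoretical peak fill rates (ROPs times reference boost clock) are nearly identical on paper, which is what makes the result odd:

```python
# Theoretical peak pixel fill rate = ROPs * boost clock (reference specs).
cards = {
    "3070": {"rops": 96, "boost_ghz": 1.725},
    "3080": {"rops": 96, "boost_ghz": 1.710},
}
for name, c in cards.items():
    print(f"{name}: {c['rops'] * c['boost_ghz']:.0f} Gpixels/s theoretical")
```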

[3DMark Vantage color fill chart from the iXBT review]


EDIT: DegustatoR below is absolutely right, I missed the text further down in the iXBT article that mentions this. Comparing apples to apples is quite tough, it seems.
 
PCGH with an excellent technical dive into VRAM limitations on the new 3070 cards:
https://www.pcgameshardware.de/Gefo...76747/Tests/8-GB-vs-16-GB-Benchmarks-1360672/

You can also see from the first benchmark (Horizon Zero Dawn) that the frame time 'spikes' when overflowing local memory are about 1/2 to 2/3 the size on the 3070, presumably due to PCIe 4.0.
I think it's very likely the 3080 and 3070 will start displaying semi-regular performance drops due to VRAM limitations within a year or two.
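If PCIe 4.0 is the explanation, the numbers are at least consistent: a 4.0 x16 link moves spilled-over data about twice as fast as 3.0 x16. Rough sketch of the usable link bandwidth (after 128b/130b encoding, ignoring protocol overhead):

```python
# Approximate usable bandwidth of an x16 link after 128b/130b encoding.
lanes = 16
for gen, gt_per_lane in {"PCIe 3.0": 8.0, "PCIe 4.0": 16.0}.items():
    gb_s = gt_per_lane * lanes * (128 / 130) / 8
    print(f"{gen} x16: ~{gb_s:.1f} GB/s")  # ~15.7 vs ~31.5 GB/s
```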
 
I think it's very likely the 3080 and 3070 will start displaying semi-regular performance drops due to VRAM limitations within a year or two.

At 4K I think it's almost certain we will. The 3060 with 6 GB VRAM is the bigger issue I feel. Even for 1440p gaming it doesn't seem sufficient.
 
6GB should be the sub-$250 segment now. They could have at least used 16 Gbps GDDR6 in the 3070 so that the 3060 could differentiate with 14 Gbps while still keeping a 256-bit bus and 8GB. It doesn't really look like the 3070 needed 16 Gbps, but hampering any card with 6GB seems like it should be out of the question these days, even for 1080p.
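A quick sketch of the bandwidth split that would allow (the 3070/3060 configurations here are the hypothetical ones suggested above, not shipping specs):

```python
# Bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
# Hypothetical product split, not a shipping spec.
def bandwidth_gb_s(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(f"hypothetical 3070, 256-bit @ 16 Gbps: {bandwidth_gb_s(256, 16):.0f} GB/s")
print(f"hypothetical 3060, 256-bit @ 14 Gbps: {bandwidth_gb_s(256, 14):.0f} GB/s")
```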
 
An added consumer issue with VRAM is that, given the current market situation, VRAM capacities could increase substantially at each segment as soon as a "refresh" range arrives next year, which puts those looking to buy now in a bit of a sticky situation. There's also the possibility of an impact on resale value for those looking to recoup costs down that avenue.
 
From the look of it, AnandTech doesn't review graphics cards at all anymore.

A lot of their hardware reviews, even CPU ones, have been quite delayed of late. This is what Ryan had to say in the 3070 launch article: "This will be followed by our long-awaited (and badly delayed) NVIDIA Ampere series review in about a week, going over the architecture in depth as well as our complete performance breakdown for the RTX 3080 and RTX 3070."

So no 3090 review, it looks like.
 
PCGH with an excellent technical dive into VRAM limitations on the new 3070 cards:
https://www.pcgameshardware.de/Gefo...76747/Tests/8-GB-vs-16-GB-Benchmarks-1360672/

You can also see from the first benchmark (Horizon Zero Dawn) that the frame time 'spikes' when overflowing local memory are about 1/2 to 2/3 the size on the 3070, presumably due to PCIe 4.0.

That's a really interesting review. I hadn't realised that the consequences of overrunning 8GB could be so severe in current games. These are obviously corner cases, but they will grow over time. The Minecraft and Wolfenstein benchmarks are real eye-openers. This would certainly make me think twice about a 3070 8GB unless I'm sure I'll be replacing it within 2 years or so.
 
Yes, the PCGH article is excellent. Very clear evidence that 1% lows are misleading and 0.1% lows capture stutter. In the HZD Percentiles graph, the 3070 and 2080 Ti are identical at the 99% mark.

In my opinion, averages should not be used in reviews. 0.1% results are enough to rank GPUs.
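For anyone unfamiliar with how these "lows" are derived, here's a minimal sketch with synthetic data (conventions vary between reviewers; this is one common definition). Note how a rare stutter barely moves the average and 1% figures but craters the 0.1% figure:

```python
import numpy as np

# One common definition of "1% / 0.1% lows": the FPS corresponding to
# the worst pct percent of frame times in a capture.
def percentile_low_fps(frame_times_ms: np.ndarray, pct: float) -> float:
    return 1000.0 / np.percentile(frame_times_ms, 100 - pct)

# Synthetic capture: ~10 ms frames with rare 60 ms stutters (0.2% of frames).
rng = np.random.default_rng(0)
frames = rng.normal(10.0, 0.5, 10_000)
frames[rng.choice(10_000, 20, replace=False)] = 60.0

print(f"average FPS : {1000.0 / frames.mean():.1f}")          # barely affected
print(f"1% low FPS  : {percentile_low_fps(frames, 1.0):.1f}")  # barely affected
print(f"0.1% low FPS: {percentile_low_fps(frames, 0.1):.1f}")  # captures the stutter
```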
 
Yes, the PCGH article is excellent. Very clear evidence that 1% lows are misleading and 0.1% lows capture stutter. In the HZD Percentiles graph, the 3070 and 2080 Ti are identical at the 99% mark.

In my opinion, averages should not be used in reviews. 0.1% results are enough to rank GPUs.

An issue is that 0.1% results are going to be much more context-sensitive, which would exacerbate issues such as testing in play sections that are not repeatable, or deciding what is more representative of a "neutral" test section in a game.

Solely giving 0.1% results might also not provide enough context on how well a game would actually play. For instance, horrible 0.1% lows might happen in one specific part or sequence of the game while the remaining 99.9% of it runs well above playable frame rates.

In the VRAM context it might also be problematic, as stutter will in most cases likely go away with a single settings drop (typically texture size), and like most graphics options, how impactful that drop is can be subjective, with opinions varying widely.

So I don't feel 0.1% data by itself provides enough information.

Ideally I still think you would want all three data points, perhaps with additional over-time metrics (e.g. graphs). The other thing I'd like to see more commonly given is specifics on what the test sequence actually is (and why that sequence was chosen). At the risk of being impractical, I'd also like to see much longer test sequences than what is common (e.g. one representative of an actual gameplay cycle, if a short one).
 
I always set game graphics options for "0.1% lows". I find the worst case in a game during early gameplay and tweak from there.

A stutter every 5 to 10 seconds is not playable in my opinion. Variable refresh rate can soften the blow, but how well is up for debate.

Reviews are primarily there to guide purchase decisions. For example, the 3070 is clearly being signalled as a 1440p card, and some games like Wolfenstein are not going to be "2080 Ti equivalent" at maximum settings.

Reviews are meant to highlight the limits; they're not marketing. Otherwise, let's just look at Nvidia's pretty slides and talk trash.
 
I always set game graphics options for "0.1% lows". I find the worst case in a game during early gameplay and tweak from there.

A stutter every 5 to 10 seconds is not playable in my opinion. Variable refresh rate can soften the blow, but how well is up for debate.

Reviews are primarily there to guide purchase decisions. For example, the 3070 is clearly being signalled as a 1440p card, and some games like Wolfenstein are not going to be "2080 Ti equivalent" at maximum settings.

Reviews are meant to highlight the limits; they're not marketing. Otherwise, let's just look at Nvidia's pretty slides and talk trash.

But how would you consistently account for this in practice? Games can have very uneven performance from start to finish. I know from experience that in some games the most demanding part might be just the opening sequence, while in others the performance demands get much heavier much later in the game. There are games in which only one specific part, or even one brief moment, causes problems. Do you completely adjust settings for that one, maybe ten-second, sequence (a 0.1% low could be just that moment) out of a 20hr+ game? And how would you isolate and find it in the first place before playing all the way through?

In terms of the "max settings" issue, I've always found it's more of a psychological benchmark point, which I'll admit I fall for as well (e.g. I have to play through a game at "max settings"). From what I've heard, Wolfenstein's highest texture setting (a texture streaming pool or something? I don't own the game, nor have I ever played it) adjusts how much VRAM gets reserved for texture data that might not even be used. Is the difference between max and max-1 in this setting noticeable in normal play? If not, it may not be important for everyone. (Much like how many people felt "max settings" should actually mean max settings including ray tracing, which is an interesting commentary by itself.)
 
Right, which is why reviewers should provide more information, not less.
0.1% performance, which is missing from pretty much every review, shows that the 3070 isn't 2080 Ti-equivalent in some key ways.

Notice from the PCGH article that the 2080 Ti's percentile charts are close to a "flat line", without the hockey stick, which indicates the card is working well. This would lead a thorough reviewer to say that the 3070 will provide 2080 Ti-equivalent performance only when the gamer reduces settings.
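A hypothetical sketch of how such a percentile chart is built from a frame-time capture (synthetic data, matplotlib assumed): sort the frames and plot instantaneous FPS against percentile rank, and a VRAM-spill "hockey stick" becomes obvious at the tail.

```python
import numpy as np
import matplotlib.pyplot as plt

# Build a PCGH-style percentile chart from a frame-time capture:
# sorted frame times plotted as FPS against percentile rank.
def percentile_curve(frame_times_ms):
    ft = np.sort(np.asarray(frame_times_ms))
    pct = np.linspace(0.0, 100.0, ft.size)
    return pct, 1000.0 / ft

rng = np.random.default_rng(1)
healthy = rng.normal(11.0, 0.4, 5_000)                 # near-flat line
spilling = rng.normal(10.0, 0.4, 5_000)
spilling[rng.choice(5_000, 25, replace=False)] = 50.0  # rare spill stutters

for label, capture in [("no spill", healthy), ("VRAM spill", spilling)]:
    pct, fps = percentile_curve(capture)
    plt.plot(pct, fps, label=label)
plt.xlabel("percentile of frames")
plt.ylabel("FPS")
plt.legend()
plt.show()
```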

It's as if Tech Report's frame time analysis had never happened.
 