Nvidia Ampere Discussion [2020-05-14]

$1500 after two years for a 2080Ti replacement. Who's buying one?

I'm just not going to spend $1500 on a gaming card and another $200+ on a waterblock & backplate. For the first time since Kepler I'm going with the x80 model instead of the Ti or Titan. Probably even sticking with the stock cooler for once too. Feels weird to only spend $700 on a video card and still be getting a massive upgrade. I guess I shouldn't complain about getting 80%+ of the performance for less than half the price. Put the extra money towards another OLED display :LOL:
 
These specs are confusing. Are the die shots on Nvidia's homepage accurate? If so, we're looking at 7 GPCs (and rasterizers), which isn't much for this SM count.
7 GPC is accurate, I believe.
*Ahem* Can we please keep baseless tweets out of the thread? There should be plenty of solid technical information to keep everyone busy discussing.
Apologies.
 
Is nvidia anticipating stronger competition this time?

If you mean from AMD, I'd say it's complicated. In general I feel that enthusiasts overvalue how much the direct AMD vs Nvidia (or Intel) product competition affects the business side. I'm not saying it's no consideration at all, just that there is way more to it.

This is of course all going to be idle speculation and just some ramblings but -

1) Nvidia competes against more than AMD's PC GPUs. Regardless of what AMD does, the upcoming consoles need to be considered.

2) Nvidia also to some extent needs to compete against themselves, as you need to convince existing users to upgrade. It's been mentioned in the past that Nvidia's marketing really targets two-generation upgrades. This is why I speculated they would be aggressive on perf/cost relative to Pascal.

3) There's also an opportunity here to grow the overall market: the Covid pandemic has been increasing interest in gaming as an activity in lieu of other physical options.

4) The word is that the cost structure of using Samsung is much better in terms of both wafer costs and mask costs. Availability might also be better, meaning a volume push is more viable. It doesn't make sense to push for volume if you can't supply it, after all. As for FE pricing, my opinion has always been that it was done to capture more of the money going to resellers during initial launch availability issues. It's interesting to note that with the "refresh" releases (1080 Ti, Super) there was never a higher FE price.

5) Part of Turing's pricing issue was likely coming off the heels of the mining bubble.

6) Personally I've always felt that the common view of AMD vs Nvidia pricing is off. Nvidia has more often taken the initiative to push perf/$ forward with new releases and set the benchmarks; AMD is the more reactive one. They often price lower out of market necessity, which gets interpreted as AMD setting the price to keep Nvidia "honest."

7) We might also want to wait and see how the current "theory" with regard to performance actually works out in practice.
 
So two FP32 instructions per clock. What are the implications of that, in terms of being able to actually utilize it effectively?

An FMA (fused multiply-add) is 2 flops and can be issued every cycle, so each FP32 unit counts as 2 flops per clock. That's always been the case for this and previous gens too.
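To put rough numbers on how those peak figures are counted, here's some back-of-the-envelope arithmetic using Nvidia's published core counts and boost clocks (a sketch only; real clocks vary):

```python
# Peak FP32 throughput = cores * flops per core per clock * clock speed.
# Each core retires one FMA per clock, which counts as 2 flops.
def peak_tflops(cuda_cores, boost_clock_ghz, flops_per_core_per_clock=2):
    return cuda_cores * flops_per_core_per_clock * boost_clock_ghz / 1000.0

# RTX 3080: 8704 FP32 cores at ~1.71 GHz boost
print(round(peak_tflops(8704, 1.71), 1))   # ~29.8 TFLOPS
# RTX 2080 Ti: 4352 FP32 cores at ~1.545 GHz boost
print(round(peak_tflops(4352, 1.545), 1))  # ~13.4 TFLOPS
```

Which is exactly why the marketing flops numbers look so dramatic: the doubled FP32 datapath doubles the theoretical peak, whether or not real workloads can feed it.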
 
Prices actually went down; the prices listed in the presentation were FE prices.

https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/

RTX 3080 FE $699
RTX 2080 FE was $799

RTX 3070 FE $499
RTX 2070 FE was $599
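Putting numbers on that drop (just simple arithmetic on the FE prices listed above):

```python
# Generation-over-generation FE price change, as a percentage
def price_drop_pct(old_price, new_price):
    return round((old_price - new_price) / old_price * 100, 1)

print(price_drop_pct(799, 699))  # 3080 vs 2080 FE: 12.5% lower
print(price_drop_pct(599, 499))  # 3070 vs 2070 FE: 16.7% lower
```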
Anchoring works well. Now people think the prices actually went down, whereas really they're just not charging the FE "tax" this time. Do I think Ampere is worth the asking price for what you get? Yes, definitely. But I'm not going to get all warm and fuzzy at Nvidia for not gutting consumers completely this time around.
 

So if I read that chart correctly, 2x FP32 cores will not translate into ~2x rasterization performance. I suspect only 1x FP32 will be used for shading, with the other FP32 path reserved exclusively to support raytracing. Or is it because engines aren't programmed to use both FP32 cores concurrently?
 
From the limited info, looks like 40-50% improvement at each respective tier when we ignore Nvidia marketing shenanigans.
 

Performance depends on more than just flops. 3070 bandwidth is lower than the 2080 Ti.
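For reference, memory bandwidth is just bus width times effective data rate. Plugging in the published specs (both cards use 14 Gbps GDDR6) shows the gap:

```python
# Bandwidth (GB/s) = (bus width in bits / 8 bits per byte) * data rate in Gbps
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gbs(256, 14))  # RTX 3070:    448.0 GB/s (256-bit bus)
print(mem_bandwidth_gbs(352, 14))  # RTX 2080 Ti: 616.0 GB/s (352-bit bus)
```

So the 3070 has to match 2080 Ti-class shading with roughly three quarters of the bandwidth.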
 