Nvidia Turing Product Reviews and Previews: (Super, TI, 2080, 2070, 2060, 1660, etc)

Yeah, but what’s the point? It has the same bus width as TU104 and 75% of the SMs. Are yields so wacky on 12nm that it made sense to spin a separate chip instead of salvaging TU104 dies? Maybe yields are pretty good and they didn’t want to sell an expensive-to-manufacture TU104 chip with 1/4 of the units disabled.

I could understand it if the RTX 2060 used a salvaged TU106, but if there’s really a TU116, the whole setup seems strange.
Yeah, the only reason I can think of, with my limited understanding of salvage, is that the RT and/or Tensor core structures don’t support having portions disabled.
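
To put rough numbers on that trade-off, here’s a toy cost-per-chip sketch; the wafer cost, die areas and yields are placeholder assumptions, not actual TSMC 12nm figures:

Code:
# Toy model: cost per sellable chip when salvaging a big die vs spinning a smaller native one.
# All numbers below are placeholder assumptions, not real TSMC 12nm data.

WAFER_COST = 6000.0    # assumed cost per 12nm wafer, USD
WAFER_AREA = 70685.0   # gross area of a 300mm wafer in mm^2, ignoring edge losses

def cost_per_good_die(die_area_mm2, yield_rate):
    """Cost of one functional die, with a flat yield and no defect clustering."""
    dies_per_wafer = WAFER_AREA // die_area_mm2
    return WAFER_COST / (dies_per_wafer * yield_rate)

# Option A: sell a cut-down big die (a TU104-sized chip with a chunk of units fused off).
salvaged = cost_per_good_die(die_area_mm2=545, yield_rate=0.80)   # assumed area/yield

# Option B: spin a smaller native die with only the units you intend to ship.
native = cost_per_good_die(die_area_mm2=445, yield_rate=0.85)     # assumed area/yield

print(f"cost per chip, salvaged big die: ${salvaged:.0f}")
print(f"cost per chip, native small die: ${native:.0f}")

With decent yields the smaller die wins on per-unit cost, and the extra mask-set cost gets amortized if the part sells in volume, which would fit the “didn’t want to sell expensive dies with 1/4 disabled” theory.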
 
NVIDIA RTX 2080 & 2080Ti spotted in FINAL FANTASY XV Benchmark

http://benchmark.finalfantasyxv.com/result/

Meh

1080 Ti 24/7 OC

1440p High
[benchmark result image]


2160p High
[benchmark result image]


Considering those are pre-overclocked FE GPUs with half-decent cooling, these results aren't impressive at all in FFXV. The 2080 is 9-13% slower than my results, with the 2080 Ti being 9-15% faster...
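
For reference, here's the simple arithmetic behind those percentages; the scores below are placeholder values, not the actual numbers from the screenshots:

Code:
# Relative performance from FFXV benchmark scores (higher score = faster).
# The scores here are hypothetical placeholders, not the leaked results.

def relative_diff(score_a, score_b):
    """Percent difference of score_a relative to score_b."""
    return (score_a / score_b - 1.0) * 100.0

my_1080ti_oc = 5000   # placeholder: my overclocked 1080 Ti score
leaked_2080 = 4500    # placeholder: leaked 2080 score
leaked_2080ti = 5600  # placeholder: leaked 2080 Ti score

print(f"2080 vs my 1080 Ti OC:    {relative_diff(leaked_2080, my_1080ti_oc):+.1f}%")
print(f"2080 Ti vs my 1080 Ti OC: {relative_diff(leaked_2080ti, my_1080ti_oc):+.1f}%")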
 
The comparison is between a 1080 Ti FE @ stock clocks vs the new FE design + overclock + extended power limit :LOL:

Not the first time Nvidia has done something like this, of course. Pascal was the same case vs Maxwell, which overclocked extremely easily to +25-30% over the advertised stock clocks. Now they can't increase clocks, so they improve the thermal design, overclock the FE beyond the stock clock, and raise the power limit to look comparatively better. This makes for great marketing, but I'm still curious about the actual results versus good 1080 Tis.
 
3840x2160 Standard FFXV benchmark (no gameworks)

[benchmark result images]


What I assume to be an FE 2080 Ti is about 12% faster at 2160p than an overclocked 1080 Ti.
 
Seems to me that disabling GameWorks would shift the bottleneck away from the GPU for this title, no? I mean, if consoles can run the game at "4K" without GameWorks effects, then how demanding can it be? Since this benchmark doesn't provide average FPS, let alone frame times, it's hard to know what kind of performance difference is observable between the tested GPUs.
 
It's all GPU bound. This game is very demanding on the GPU, especially at 3840x2160; there's no way the 2080 or 2080 Ti are CPU bound at those settings.
 
"Standard quality" as indicated by the benchmark is equivalent to a "normal" preset (i.e. low detail by the standards of most PC gamers). This is exactly the type of settings preset that is used in CPU bottleneck testing, while keeping things relatively comparable to real-world scenarios (as opposed to a "low" preset). 4k or not, if we have the same CPU horsepower as yesterday but add say 50% more GPU power, then turn off GPU-intensive features, you have a recipe for a CPU bottleneck. Sure, it's *less* of a CPU bottleneck than at 1080p, but there's no getting around the fact that you're asking the GPU to do less work. Naturally, this presents a scenario wherein the CPU is being asked to feed the GPU data more quickly in order to generate frames quicker. At some point the CPU won't be able to keep up.
 
"Standard quality" as indicated by the benchmark is equivalent to a "normal" preset (i.e. low detail by the standards of most PC gamers). This is exactly the type of settings preset that is used in CPU bottleneck testing, while keeping things relatively comparable to real-world scenarios (as opposed to a "low" preset). 4k or not, if we have the same CPU horsepower as yesterday but add say 50% more GPU power, then turn off GPU-intensive features, you have a recipe for a CPU bottleneck. Sure, it's *less* of a CPU bottleneck than at 1080p, but there's no getting around the fact that you're asking the GPU to do less work. Naturally, this presents a scenario wherein the CPU is being asked to feed the GPU data more quickly in order to generate frames quicker. At some point the CPU won't be able to keep up.

What are you on about? This benchmark is not CPU bound even at 1080p.
 
The lack of performance leaks is surprising. I fully expected that once the driver was out in the wild, it would eventually spread to folks who had access to hardware and cared nothing for NDAs.

Is the driver installation process authenticated somehow? I know you have to log in to download the driver, but does the installation process also require a connection to Nvidia?
 