Nvidia Turing Product Reviews and Previews (Super, Ti, 2080, 2070, 2060, 1660, etc.)

Isn't the MSRP baseline for RTX cards the Founders Edition price?
The Founders Edition is the baseline in practice, yes. After all, why would an AIB sell better cards for less than the FE? Even blower models cost the same as, or only slightly less than, the FE.

If you're talking about the official marketing MSRP, the prices Nvidia wants people to remember as opposed to actual street prices, then no, the Founders cards are not the MSRP baseline. The MSRPs were announced during the Turing launch and provided to all reviewers to show.

https://www.anandtech.com/show/13346/the-nvidia-geforce-rtx-2080-ti-and-2080-founders-edition-review

So if he was basing the value of the 2070 on the MSRP, then it seems like a decent investment.
 
Regarding power consumption, the just-released version of GPU-Z now shows nVidia GPU power consumption in watts.

https://www.techpowerup.com/download/techpowerup-gpu-z/
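
If anyone would rather log these readings than watch GPU-Z's window, here's a rough sketch using the pynvml bindings to NVML, the same counters nvidia-smi reads; device index 0 and the half-second poll are just assumptions for illustration:

```python
# Poll power, core clock, and temperature via NVML (pip install pynvml).
# Device index 0 and the half-second interval are arbitrary choices.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
try:
    while True:
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports mW
        mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"{watts:6.1f} W  {mhz:4d} MHz  {temp:2d} C")
        time.sleep(0.5)
finally:
    pynvml.nvmlShutdown()
```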

I just ran the Metro Last Light Redux benchmark on my 2080 FE. By the end of it my clock speed was at 2040 MHz, fwiw. Peak power consumption, which lasted only a second, was 121% of TDP (275 watts), according to GPU-Z. I run a custom fan profile; temperature peaked at 72C (22C ambient). Overclocking was done in Afterburner: memory +1000, core clock +165, temp limit 88, power limit 124, voltage left unchanged. Though I started the overclock from the curve that the new automatic overclocking supplied, and that was actually very decent; I only gain 2%, or a little more, when benchmarking with the manual overclock settings. When using the curve, I still raise the power limit and memory to the max while leaving voltage alone.
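
Fwiw, the percentage and wattage readouts are consistent if you assume the 2080 FE's 225 W board power (my assumption here, since GPU-Z doesn't state its baseline):

```python
# Cross-check GPU-Z's %TDP readout against the wattage it reported.
# The 225 W figure is the 2080 FE's board power, assumed here.
tdp_watts = 225
print(1.21 * tdp_watts)   # 272.25 W, in line with the 275 W peak shown
print(275 / 1.21)         # ~227 W implied baseline, close to 225 W
```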

Benchmark numbers, fwiw.

METRO REDUX BENCHMARK RESULTS
10/15/2018 12:36:56 PM
Preset 0
Options: Resolution: 3840 x 2160; Quality: Very High; SSAA: Off; Texture filtering: AF 16X; Motion Blur: Normal; Tesselation: Very High; VSync: Off; Advanced PhysX: Off;
Run 0 (Scene 1 )
  • Total Frames: 11727, Total Time: 171.1911 sec
  • Average Framerate: 68.54
  • Max. Framerate: 150.08 (Frame: 11405)
  • Min. Framerate: 8.21 (Frame: 8)
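
The reported average checks out against the totals, fwiw:

```python
# The average follows from total frames over total time.
total_frames = 11727
total_time_s = 171.1911
print(total_frames / total_time_s)  # ~68.5, matching the reported 68.54
```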
 
I just ran the benchmark of the previous Tomb Raider installment, Rise of the Tomb Raider, and it was both a bit more demanding of power overall (it stayed closer to its peak than Metro Last Light Redux did) and had a higher peak, 276.3 watts. I can't max out all the settings while maintaining a constant 60/near-60 fps, but simply lowering VXAO to HBAO+ made enough of a difference for me: 67 fps in the benchmark.

So this is interesting. nVidia's FE is an extremely efficient piece of kit, and when tweaked it appears to max out the potential of its GPU within its power profile, but it comes very close to bumping into the limits of its power connectors/the set limits.

If you add another fan, that uses more power; if you beef up the power phases and connectors, you can deliver more power, but you also draw more. So the question remains: what is the sweet spot for this GPU in terms of voltage and available power? Where does the line of diminishing returns sit, and to what degree does the silicon lottery determine whether a given chip gets much, or any, benefit from more power and voltage?
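
One way to probe that sweet spot empirically would be to step the power limit and watch sustained clocks, say via nvidia-smi. A rough sketch; it needs admin rights, and the 180-280 W range is just my guess at a plausible window for a 2080:

```python
# Rough sketch of a power-limit sweep to find where returns diminish.
# Needs admin rights for -pl; the 180-280 W range is a guess for a 2080,
# and the benchmark step is left as a placeholder for your own workload.
import subprocess

def set_power_limit(watts):
    # Set the board power limit; the value must fall within the range
    # shown by `nvidia-smi -q -d POWER` for your card.
    subprocess.run(["nvidia-smi", "-pl", str(watts)], check=True)

def read_graphics_clock_mhz():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=clocks.gr",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

for watts in range(180, 281, 20):
    set_power_limit(watts)
    # ...start a benchmark pass here, then sample clocks while it runs...
    print(watts, "W ->", read_graphics_clock_mhz(), "MHz")
```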

I have the two most recent Hitman games, which I can benchmark and check power consumption on if anyone is very interested. But I already have the sense that they won't be more demanding than Rise of the Tomb Raider, as I've previously checked how many watts my whole system used while monitoring with my CyberPower UPS's software.

In conclusion, I'm guessing there's maybe another two to five percent of performance available over the FE cards by using a beefy AIB partner card with a BIOS updated to exceed the limits nVidia initially imposed/suggested. But that will depend on the silicon lottery, and it raises the question of whether any binning is going on beyond what nVidia has said it's already doing, with its distinction between the chips it supplies for boards marked as "overclocked" and those that aren't.

If an AIB partner card is using the limits nVidia suggested/imposed, then it might be hitting a wall regardless of its beefed-up power delivery, and regardless of its cooling.
 
Check to see whether there is a BIOS update for your card. EVGA has since released a BIOS update increasing the power target from 123% to 130%.
 
I have the FE, from nVidia. :) I'm not holding my breath waiting for them to free up the last few/20? watts between what they allow now and the nominal 300 watts its power connectors offer. That EVGA BIOS is likely applicable to other Turing cards, but I'm guessing nVidia's own cards are an exception. Lol, we wouldn't want over a hundred watts coming down a six-pin connector, though I've read that they should be able to take it.
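
For reference, the nominal 300 watts falls straight out of the PCIe spec for the FE's connector loadout (one 6-pin plus one 8-pin, plus the slot):

```python
# Nominal PCIe power budget for the 2080 FE's connector loadout.
slot_w = 75        # PCIe x16 slot, per spec
six_pin_w = 75     # 6-pin connector, per spec
eight_pin_w = 150  # 8-pin connector, per spec
print(slot_w + six_pin_w + eight_pin_w)  # 300 W nominal ceiling
```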

Anyway, I watched the readings closely, and my card will downclock to 2040 MHz even though it supposedly isn't exceeding its power budget or hitting its temp limit. But Afterburner will report that I'm hitting the power limit most of the time, and the voltage limit part of the time.

I'll be interested to see what's what when a card with that EVGA BIOS, fed all the power its non-stock power connectors can supply, gets overclocked and benchmarked in a review.

My uninformed guess is that these chips have redundant internal constraints.
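
Fwiw, some of those internal constraints are visible through NVML's throttle-reason bitmask, which reports why the card is holding clocks back even when the usual limits don't look hit. A rough sketch:

```python
# Ask the driver why clocks are being held back (pip install pynvml).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)

# A few of the documented bits; the full list includes more (sync boost, etc.).
flags = {
    "power cap": pynvml.nvmlClocksThrottleReasonSwPowerCap,
    "thermal (sw)": pynvml.nvmlClocksThrottleReasonSwThermalSlowdown,
    "hw slowdown": pynvml.nvmlClocksThrottleReasonHwSlowdown,
    "idle": pynvml.nvmlClocksThrottleReasonGpuIdle,
}
for name, bit in flags.items():
    if reasons & bit:
        print("limiting:", name)
pynvml.nvmlShutdown()
```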
 
Whoops, I forgot to mention a possibly important factor that might make the beefier AIB partner cards more desirable: what happens when a game uses ray tracing and/or DLSS? How does power get budgeted to the transistors dedicated to ray tracing?
 
DLSS doesn't seem to take up any more power than TAA, which is relatively little. Ray tracing, on the other hand: I don't expect the 2070 to be very capable there if the quoted performance targets turn out to be true. Is the "60 fps at 1080p" target for a 2080 or a 2080 Ti? I've seen it thrown around a lot, but I don't know which card it refers to.

Still, while the 2080 seems vastly overpriced, the 2070 is actually a very nice improvement in value! I don't know why NVIDIA didn't trumpet this card more instead of the 2080/Ti and ray tracing. They'd probably have done a lot better on PR if they'd shown off a card with above-1080 performance for just $499, instead of getting a bunch of "RTX On/Off" memes thrown at them. Ah well, it's still there for anyone looking to upgrade.
 
I think the relationship between real prices and MSRPs that we've seen with the 2080 Ti and the 2080 will carry over to the RTX 2070.

Though on the other hand, there's an incentive here for nVidia to aggressively expand the base of people with cards that have these new features.

When will they get listed at retailers? This will be interesting.
 
MSRPs:

2070 $499
2080 $699
2080 Ti $999

Now you tell me whether they're being sold at MSRP :)

With Pascal pricing where it is, Nvidia still wanting to sell Pascal cards, a lack of competition in that market segment, and Nvidia being who they are, it's going to be a long time until Turing sells at those PR bullet-point prices.

Never witnessed a GPU launch with an early-adopter tax? Nearly every launch of faster chips needs two or three months to reach MSRP; retailers and distributors always price gouge in the beginning. In Europe you can already get a 2080 for $740 (before tax, converted from euros). I bet we'll be at $700 around Black Friday. Prices in the US will also come down; it just needs a bit more time, as the European retail market is more competitive.
 
Well, apparently EVGA are releasing a 2070 at $500, which is very surprising! It's apparently the cheapest build they can make, but given the die sizes and the brand-new architecture their margins can't be much, and it directly competes with their 1080s, which are currently more expensive. That doesn't make much sense, since the 2070 basically makes the 1080 worthless at the same price, let alone at a higher one.

We'll see how long the apparent $500 price lasts; I imagine they're going to sell quite fast.

 
That's good news.
EVGA historically doesn't increase card prices even when supply is short. Instead, they tend to limit the number of cards an individual can buy.
 
We'll see what the reality is. Remember the Vega launch and its MSRP? It applied only to very limited stock, and the rest sold at higher prices. Retailers will often adjust prices automatically based on demand anyway, unless they're locked to a certain price.
 
Nvidia RTX 2070 Reviews

4Gamer MSI RTX 2070 GAMING Z
AnandTech NVIDIA RTX 2070 Founders Edition
ComputerBase ASUS RTX 2070 TURBO
Expreview NVIDIA RTX 2070 Founders Edition
Eteknix MSI RTX 2070 ARMOR
Guru3D MSI RTX 2070 ARMOR & ASUS RTX 2070 TURBO
HardwareBattle EMTEK RTX 2070 BLACK OC
HardwareCanucks ASUS RTX 2070 TURBO
Hardware.info
HardwareUnboxed MSI RTX 2070 GAMING Z
Hexus PALIT RTX 2070 DUAL
HotHardware EVGA RTX 2070 XC
JayzTwoCents EVGA RTX 2070 XC
JokerProductions NVIDIA RTX 2070 Founders Edition
KitGuru MSI RTX 2070 GAMING Z
LesNumeriques MSI RTX 2070 ARMOR
Overclock3D MSI RTX 2070 GAMING Z & RTX 2070 ARMOR
OverclockersClub MSI RTX 2070 ARMOR
Paul’s Hardware MSI RTX 2070 GAMING Z & RTX 2070 ARMOR
PCGamer NVIDIA RTX 2070 Founders Edition
PCGamesHardware NVIDIA RTX 2070 Founders Edition
PCPerspective EVGA RTX 2070 XC
PCWorld NVIDIA RTX 2070 Founders Edition & EVGA RTX 2070 XC
SweClockers MSI RTX 2070 ARMOR
TechSpot MSI RTX 2070 ARMOR
Tom’s Hardware USA NVIDIA RTX 2070 Founders Edition
Tom’s Hardware Germany MSI RTX 2070 ARMOR
Tweakers GIGABYTE RTX 2070 GAMING
TweakTown NVIDIA RTX 2070 Founders Edition
 
Well, apparently EVGA are releasing a 2070 at $500, which is very surprising! It's apparently the cheapest build they can make, but given the die sizes and the brand-new architecture their margins can't be much, and it directly competes with their 1080s, which are currently more expensive. That doesn't make much sense, since the 2070 basically makes the 1080 worthless at the same price, let alone at a higher one.

We'll see how long the apparent $500 price lasts; I imagine they're going to sell quite fast.

I wonder if that much less expensive card will use the "B-grade", non-OC GPU from nVidia. We've heard about them, but afaik nobody is reviewing them. Maybe they're mostly going to be used in blower units and sold to companies like Dell, though I guess they'd also work well in smaller form factor cards using only one or two small-diameter fans. The 2080 Ti, and even the 2080, would likely be different; people would still pay a lot for a decently cooled non-A variant.

https://www.techpowerup.com/247660/...overclocking-forbidden-on-the-cheaper-variant

We reached out to industry sources and confirmed that for Turing, NVIDIA is creating two device IDs per GPU to correspond to two different ASIC codes per GPU model (for example, TU102-300 and TU102-300-A for the RTX 2080 Ti). The Turing -300 variant is designated to be used on cards targeting the MSRP price point, while the 300-A variant is for use on custom-design, overclocked cards. Both are the same physical chip, just separated by binning, and pricing, which means NVIDIA pretests all GPUs and sorts them by properties such as overclocking potential, power efficiency, etc.
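
Since the -300 and -300-A variants carry different PCI device IDs, you could in principle tell which bin a card got without opening it. A sketch with pynvml; note that which hex ID maps to which variant would need to be confirmed against the PCI ID database, so nothing here assumes a mapping:

```python
# Read the PCI device ID to see which GPU variant a card carries.
# Which hex ID corresponds to the "A" bin vs the plain -300 part must
# be checked against the PCI ID database; no mapping is assumed here.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
pci = pynvml.nvmlDeviceGetPciInfo(handle)
device_id = pci.pciDeviceId >> 16  # upper 16 bits: device, lower 16: vendor
print(f"PCI device ID: 0x{device_id:04X}")
pynvml.nvmlShutdown()
```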
 
Does it even matter what the clocks are, when the "one-click Scanner" gets you a nearly 200 MHz bump on the boost clock, to around 1950 MHz, even on the low-end $499 EVGA model?

With a quick run through NVIDIA Scanner and an increase to the maximum available power target, we were able to achieve a clock speed increase of almost 200 MHz, to around 1950 MHz overall. Temperatures increased a few degrees but stayed right around the 60C mark.

https://www.pcper.com/reviews/Graph...eaturing-EVGA/Detailed-Power-Consumption-and-
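
There's no public API for driving Scanner itself, as far as I know, but the equivalent manual bump can be scripted. A rough Linux-only sketch; it assumes Coolbits is enabled in xorg.conf, the offset mirrors the review's roughly 200 MHz, and the power-limit value and performance-level index [3] are placeholders that vary per card:

```python
# Rough scripted equivalent of a manual bump (not NVIDIA Scanner itself).
# Linux-only sketch: assumes Coolbits is set in xorg.conf, root for -pl.
# The 200 MHz offset mirrors the review; the 200 W power limit and the
# performance-level index [3] are placeholders that vary per card.
import subprocess

subprocess.run(["nvidia-smi", "-pl", "200"], check=True)
subprocess.run(["nvidia-settings", "-a",
                "[gpu:0]/GPUGraphicsClockOffset[3]=200"], check=True)
```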
 
I'm seeing that its boost clock is above that of the reference card, which suggests they're using nVidia's "A" chip.
Clock speed: 1,410MHz base, 1,710MHz boost (1,620MHz reference)
https://www.pcworld.com/article/3313423/components-graphics/evga-geforce-rtx-2070-xc-review.html

That makes it an even better bargain at $499. I'm curious whether it actually sells at that price.
Edit: I didn't notice this at first glance.
Price: $550
https://www.pcworld.com/article/3313423/components-graphics/evga-geforce-rtx-2070-xc-review.html


I have my fingers crossed that DLSS matures into an excellent option in the games that can most benefit from such a technique. I hate digital noise, and I'm hoping it removes it.
 