Nvidia Turing Product Reviews and Previews: (Super, TI, 2080, 2070, 2060, 1660, etc)

A friend was in the market for a 500-600€ graphics card and I just told him to snatch a day-one MSI RTX 2070 for 530€.
I doubt these cards will stay at this price for long. Most GTX 1080 cards cost over 550€ in my country.
 
Weird. Was looking at Newegg and there were about ten 2070 models, the cheap ones on back order and some $549+ ones available. All of a sudden only one model is available and there are none on Amazon?
 
not sure if this is the best thread or not, mods move if necessary

https://www.golem.de/news/geforce-r...e-an-leistung-kosten-koennen-1810-137115.html

Remedy's Northlight demo spends 9.2 ms per frame just on ray tracing and denoising at 1080p on an RTX 2080 Ti, and the result is still clearly noisy

Which all but confirms it's simply a slapped-on feature for marketing purposes (paid for by Nvidia, obviously). Spending a third of your frame budget at 1080p on a single graphical feature that is barely noticeable (compare the regular Control footage to the RT trailer..) and runs on less than 1% of your install base is genius /s.. But it is, at the very least, some good R&D for the future.
 
Which all but confirms it's simply a slapped-on feature for marketing purposes (paid for by Nvidia, obviously). Spending a third of your frame budget at 1080p on a single graphical feature that is barely noticeable (compare the regular Control footage to the RT trailer..) and runs on less than 1% of your install base is genius /s.. But it is, at the very least, some good R&D for the future.
How does the performance and quality compare with the algorithms it replaces?
 
not sure if this is the best thread or not, mods move if necessary

https://www.golem.de/news/geforce-r...e-an-leistung-kosten-koennen-1810-137115.html

Remedy's Northlight demo spends 9.2 ms per frame just on ray tracing and denoising at 1080p on an RTX 2080 Ti, and the result is still clearly noisy

Did they ever show this demo running on the 2080? As far as I know this is from when MS's D3D RT extensions were announced, before Nvidia's hardware acceleration was disclosed. When the 2080 was demoed, Remedy showed their new WIP title, Control. Different demo altogether.
 
Looks like 9.2 ms is an unoptimized result for the Remedy engine, and it still fits within the 60 fps budget at 1080p. Can't wait to see the finished result next year after they play with and optimize the engine.
Edit: The video above is the wrong one.

 
How does the performance and quality compare with the algorithms it replaces?
This is obviously something that none of us can answer until the game is released next year. But based on the content already shown and the tech used (similar to Quantum Break with a slightly improved SSR implementation), I think it's safe to assume that, just like all the other RT implementations already shown (TR, Metro, etc.), the impact of RT is rather extreme compared to its benefits (I'm fairly sure that any mid-range GPU and console will run it at 1080p 30/60 fps without breaking a sweat, while a $1000 GPU is apparently needed to hit similar performance with RT on). There's some new PS4 Pro gameplay footage available on YouTube which looks really great, and you'll be hard pressed to go "ah shit, it would look so much better with RT and some noise..".
 
Remedy's Northlight demo spends 9.2 ms per frame just on ray tracing and denoising at 1080p on an RTX 2080 Ti, and the result is still clearly noisy

9.2 ms out of 16.6 ms at 1080p, on a completely scripted demo, using the top-end $1200 GPU.
Get gameplay in there plus non-planned viewpoints, viewpoint displacements and wider FOVs, and performance probably goes down a lot. With an RTX 2070 it's either back to 2005's 720p or less than 30 FPS.


Yeah I don't think real-time RT in games is going anywhere with Turing, to be honest.
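
For a sense of scale, here's the arithmetic behind those numbers as a rough sketch; the 60% RT-throughput ratio for a 2070 is an assumption loosely based on the 6 vs 10 Giga Rays/s figures Nvidia quoted, and the linear pixel-count scaling is a simplification, not a measurement:

```python
# Rough frame-budget arithmetic for the figures quoted above.
# Assumptions: RT cost scales roughly linearly with pixel count, and a 2070 has
# ~60% of a 2080 Ti's RT throughput (loosely based on NVIDIA's quoted
# 6 vs 10 Giga Rays/s marketing figures, not on measurements).

FRAME_BUDGET_60FPS_MS = 1000.0 / 60.0      # ~16.7 ms per frame at 60 fps
rt_cost_2080ti_1080p_ms = 9.2              # figure quoted from the Golem article

remaining_ms = FRAME_BUDGET_60FPS_MS - rt_cost_2080ti_1080p_ms
print(f"Budget left for everything else at 60 fps: {remaining_ms:.1f} ms "
      f"({rt_cost_2080ti_1080p_ms / FRAME_BUDGET_60FPS_MS:.0%} spent on RT)")

# Hypothetical scaling to an RTX 2070 (assumed 60% of the 2080 Ti's RT throughput).
rt_cost_2070_1080p_ms = rt_cost_2080ti_1080p_ms / 0.6
print(f"Same workload on a 2070 (assumed): ~{rt_cost_2070_1080p_ms:.1f} ms")

# Resolution scaling: 720p is ~44% of 1080p's pixel count.
scale_720p = (1280 * 720) / (1920 * 1080)
print(f"Estimated 2070 RT cost at 720p: ~{rt_cost_2070_1080p_ms * scale_720p:.1f} ms")
```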
 
9.2 ms out of 16.6 ms at 1080p, on a completely scripted demo, using the top-end $1200 GPU.
Get gameplay in there plus non-planned viewpoints, viewpoint displacements and wider FOVs, and performance probably goes down a lot. With an RTX 2070 it's either back to 2005's 720p or less than 30 FPS.


Yeah I don't think real-time RT in games is going anywhere with Turing, to be honest.
But that's for both shadows and reflections. Other games (such as BF, SOTR or Metro) will use RT only for shadows or reflections.

The mere fact that those games will effectively have RTRT in some form is something.
 
Isn't RTRT performance a function of how many rays are being used, and wasn't there talk about gamers having the ability to "dial back" the number of rays? They don't mention how many rays were used in the 9.2 ms demo, but it was likely a punishing amount.
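
As a rough illustration of what such a dial could do, here's a sketch assuming cost scales linearly with ray count; the baseline rays-per-pixel figure below is a pure guess, since the demo's actual ray count wasn't disclosed:

```python
# Toy "ray dial": if the quoted 9.2 ms corresponds to some baseline rays-per-pixel
# count, cost should scale roughly linearly as the dial cuts or raises the ray count.
# BASELINE_RPP is a pure guess; denoising is largely per-pixel work, so the real
# curve would be flatter than this at low ray counts.

QUOTED_RT_MS = 9.2     # RTX 2080 Ti at 1080p, ray tracing + denoising (Golem article)
BASELINE_RPP = 2.0     # hypothetical rays per pixel in that demo (assumption)

def dialed_cost_ms(rays_per_pixel: float) -> float:
    """Estimated RT time if the ray count is dialed up or down (linear model)."""
    return QUOTED_RT_MS * rays_per_pixel / BASELINE_RPP

for rpp in (0.5, 1.0, 2.0, 4.0):
    print(f"{rpp:>4} rays/pixel -> ~{dialed_cost_ms(rpp):.1f} ms")
```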
 
This is obviously something that none of us can answer until the game is released next year. But based on the content already shown and the tech used (similar to Quantum Break with a slightly improved SSR implementation), I think it's safe to assume that, just like all the other RT implementations already shown (TR, Metro, etc.), the impact of RT is rather extreme compared to its benefits (I'm fairly sure that any mid-range GPU and console will run it at 1080p 30/60 fps without breaking a sweat, while a $1000 GPU is apparently needed to hit similar performance with RT on). There's some new PS4 Pro gameplay footage available on YouTube which looks really great, and you'll be hard pressed to go "ah shit, it would look so much better with RT and some noise..".
4K is even less of a perceptual improvement, yet people are willing to sacrifice half their performance or worse for it...
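
For scale, the pixel-count arithmetic behind that trade-off (treating the performance hit as roughly proportional to pixel count is a simplification; real games are not purely pixel-bound):

```python
# 4K renders four times the pixels of 1080p.
pixels_1080p = 1920 * 1080    # 2,073,600
pixels_4k = 3840 * 2160       # 8,294,400
print(f"4K / 1080p pixel ratio: {pixels_4k / pixels_1080p:.1f}x")
```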
 
But that's for both shadows and reflections. Other games (such as BF, SOTR or Metro) will use RT only for shadows or reflections.

The mere fact that those games will effectively have RTRT in some form is something.

Reflections take a ton of non-ray-tracing processing. Short-range, high-gloss reflections should be doable, but that's the limit if you want any performance; BFV, for example, uses way too high a reflection range for its own good. They could probably more than double the RT performance by halving the range.

Otherwise the only performant ray tracing we'll see for a while is shadows and short-range ambient occlusion. Once shadows are done entirely via ray tracing, without needing shadow-map fallbacks, they'll look great: far more shadow-casting lights than we can have now (doable with or without shadow maps) and not nearly as sharp and unnatural a falloff (not doable with a fallback). Hell, RT shadows can be faster than shadow maps if done smartly; they already are in UE4, and that's without RTX support at all and sans some major optimizations.

I'm not sure how great ray-traced ambient occlusion will look over a really good SSAO though, and it'd be hard to do reflections AND ambient occlusion together.
 
If there ever is a "B" chip, it will likely be used by OEMs.

Thanks to a guru3d poster with sharp eyes, Hilbert amended his review to include the following.

In the photos below you can see a number of things: the GDDR6 memory chips are fabbed by Micron and labeled D9WCW. This is a 1750 MHz (14 Gbps effective) type of GDDR6 graphics memory. The photos show the TU106-400 GPU, which is not tagged as the A revision (the A meaning a binned/sorted, overclocking-ready GPU). These can still overclock, though.

So now I/we know about that, and as HH states, these can still overclock. Lol, more work for reviewers; people will soon start wanting to know what they're getting. ASUS is selling its Strix version for a lot more than MSI's Armor, and presumably their use of the A-revision chips for the Strix will partly be used to justify the price difference, though the card itself is obviously more expensive to make. Same as with the 2080/2080 Ti cards, all the vendors offer beefier versions.

https://www.guru3d.com/articles-pages/msi-geforce-rtx-2070-armor-8g-review,5.html
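
As a side note, the D9WCW spec lines up with the 2070's published memory bandwidth. A quick check; the 256-bit bus width is the RTX 2070's published spec, not something stated in the review excerpt above:

```python
# Sanity check on the D9WCW figures: the 14 Gbps "effective" rate is 8x the listed
# 1750 MHz clock, and over the RTX 2070's 256-bit memory bus it works out to the
# card's published peak bandwidth.
memory_clock_mhz = 1750
data_rate_gbps_per_pin = memory_clock_mhz * 8 / 1000    # -> 14 Gbps per pin

bus_width_bits = 256                                     # RTX 2070 memory interface (published spec)
bandwidth_gb_s = data_rate_gbps_per_pin * bus_width_bits / 8
print(f"{data_rate_gbps_per_pin:.0f} Gbps x {bus_width_bits}-bit bus = {bandwidth_gb_s:.0f} GB/s")
```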

 
I must have missed it, but did Nvidia specifically state there would be "A" and "B" chip versions, and what their purpose is? Or is that speculation based on rumor?
 
I must have missed it, but did Nvidia specifically state there would be "A" and "B" chip versions, and what their purpose is? Or is that speculation based on rumor?
Not A and B, but A and non-A. "B" has a connotation nVidia would likely want to avoid (EVGA, for one, uses it for returned/refurbished stock, as does Newegg for its scratched/dented refurbs); I used it in quotes as a convenience. But I'll stick to "non-A" when referring to it in the future.

I posted a link a bit back, I'll get it.

Here we go: https://www.techpowerup.com/247660/...overclocking-forbidden-on-the-cheaper-variant

We reached out to industry sources and confirmed that for Turing, NVIDIA is creating two device IDs per GPU to correspond to two different ASIC codes per GPU model (for example, TU102-300 and TU102-300-A for the RTX 2080 Ti). The Turing -300 variant is designated to be used on cards targeting the MSRP price point, while the 300-A variant is for use on custom-design, overclocked cards. Both are the same physical chip, just separated by binning, and pricing, which means NVIDIA pretests all GPUs and sorts them by properties such as overclocking potential, power efficiency, etc.

Edit: Another link: https://www.guru3d.com/news-story/nvidia-sells-two-of-each-turing-gpu-(a-normal-and-oc-model).html
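
For anyone who wants to check which variant they actually got: if the split really does surface as two PCI device IDs, as the TechPowerUp report describes, a minimal Linux sketch like this would at least show the ID. Which exact ID corresponds to the A or non-A part isn't publicly documented as far as I know, so this can only list IDs, not label them:

```python
# List NVIDIA PCI device IDs on Linux by reading sysfs.
# If the A / non-A split is exposed as two device IDs (per the report above),
# this is where it would be visible; the mapping of specific IDs to variants
# is not publicly documented, so the script cannot label them.
from pathlib import Path

NVIDIA_VENDOR_ID = "0x10de"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()
    if vendor == NVIDIA_VENDOR_ID:
        device_id = (dev / "device").read_text().strip()
        print(f"{dev.name}: NVIDIA PCI device ID {device_id}")
```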
 
But that's for both shadows and reflections. Other games (such as BF, SOTR or Metro) will use RT only for shadows or reflections.
Control is actually the game that uses RTX the most among those announced so far: they use it for reflections, shadows AND Global Illumination, which is why RTX takes 9.2 ms. They simply use too much RTX.
starting with glossy Ray Traced Reflections, Ray Traced Diffuse Global Illumination and Contact Shadows for most influential light sources.
https://www.nvidia.com/en-us/geforce/news/control-game-rtx-ray-tracing-technologies/
 