Nvidia Turing Product Reviews and Previews: (Super, TI, 2080, 2070, 2060, 1660, etc)

Nvidia Turing: Mobile RTX graphics chips 2050/2060/2070 to be announced during CES
At the upcoming CES in Las Vegas, Nvidia might introduce the first mobile parts based on the Turing graphics architecture. If correct, the first Turing notebooks could already be available in about two months.
Nvidia reportedly plans to launch the first laptops with a Turing graphics unit in February. Initially there would be two versions: the GeForce RTX 2070 as a temporary mobile flagship, and a lower-clocked Max-Q variant.

GeForce 2060, 2050 Ti and 2050 are also mentioned, though the prefix (GTX/RTX) remains unclear: while the 2070 versions support the RTX-exclusive features ray tracing and DLSS, the smaller chips might not have dedicated RT and Tensor cores.
...
A possible RTX 2060 also surfaced in a benchmark, in both the normal and the Max-Q variant. With over 19,000 points in 3DMark, the RTX 2060 is expected to rank between the GTX 1070 and the 1070 Max-Q.


https://www.guru3d.com/news-story/n...-chips-and-2050-2060-released-during-ces.html
 
So it's a TU106 with 6 fewer SM units (36 → 30) and two fewer 32-bit GDDR6 chips.
I'll guess the first cards will even use RTX 2070 PCBs.

That's 50% more raw compute resources compared to the GTX 1060, with supposedly 75% higher bandwidth. Given Volta's higher performance per ALU, it seems to make sense.
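A rough back-of-the-envelope check of those two numbers, assuming the rumored 1,920 CUDA cores and 14 Gbps GDDR6 on a 192-bit bus against the GTX 1060 6GB's 1,280 cores and 8 Gbps GDDR5:

```python
# Sanity check of the "+50% compute, +75% bandwidth" claim.
# Assumed/rumored RTX 2060 specs: 1920 CUDA cores, 14 Gbps GDDR6, 192-bit bus.
gtx1060 = {"cores": 1280, "mem_gbps": 8,  "bus_bits": 192}   # GTX 1060 6GB
rtx2060 = {"cores": 1920, "mem_gbps": 14, "bus_bits": 192}   # rumored

def bandwidth_gbs(gpu):
    # per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte
    return gpu["mem_gbps"] * gpu["bus_bits"] / 8

print(f"compute:   {rtx2060['cores'] / gtx1060['cores']:.2f}x")               # 1.50x
print(f"bandwidth: {bandwidth_gbs(rtx2060) / bandwidth_gbs(gtx1060):.2f}x")   # 1.75x (336 vs 192 GB/s)
```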

Those 6GB may become a hindrance in the long run, though. Does nvidia implement anything close to Vega's HBCC on Turing?
 
More RTX 2060 speculation ...
12 December 2018
In terms of specs, the GeForce RTX 2060 is set to be a cut-down version of the RTX 2070, with 1,920 CUDA cores compared to the 2,304 its bigger brother offers. The GPU will also feature 30 ray-tracing cores, which might allow it to offer some of the very slick rendering capabilities, but likely not enough to make it a major selling point at the moment.

And the RTX 2060 will have 240 Tensor cores in comparison to the 288 of the RTX 2070, which should give it enough chops for carrying out some smart machine-learning related graphics wizardry.
https://www.theinquirer.net/inquire...s-navi-graphics-cards-to-the-mid-range-market
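Those numbers line up with simply disabling 6 of TU106's 36 SMs, with each Turing SM contributing 64 CUDA cores, 8 Tensor cores and 1 RT core. A quick consistency check:

```python
# Check that the rumored RTX 2060 specs are consistent with TU106 minus 6 SMs.
cores_per_sm, tensor_per_sm, rt_per_sm = 64, 8, 1   # per Turing SM

for label, sms in (("RTX 2070", 36), ("RTX 2060 (rumored)", 30)):
    print(label, sms * cores_per_sm, "CUDA,", sms * tensor_per_sm, "Tensor,",
          sms * rt_per_sm, "RT cores")
# RTX 2070: 2304 CUDA, 288 Tensor, 36 RT
# RTX 2060 (rumored): 1920 CUDA, 240 Tensor, 30 RT
```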
 
In the cases where ray intersect calculations are the limiting factor, how can that be scaled? If you reduce ray counts, would that simply result in softer traced graphics (denoised more aggressively)? That probably wouldn't be too bad. If the results are noisy, I think that'd be very offputting.
 
You could also ray trace over several frames but the lighting would be slightly laggy.
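A minimal sketch of what amortizing rays over several frames can look like, just an exponential moving average per pixel (the blend factor and names here are illustrative, not taken from any shipping engine):

```python
import numpy as np

def accumulate_lighting(history, current, alpha=0.2):
    """Blend this frame's noisy ray-traced lighting into a running history.
    Lower alpha = smoother but laggier lighting; higher alpha = more
    responsive but noisier. history/current are HxWx3 radiance arrays."""
    if history is None:                      # first frame: nothing to reuse yet
        return current.copy()
    return (1.0 - alpha) * history + alpha * current

# Toy usage: a "light turns on" event shows the lag discussed above.
history = None
for frame, light_on in enumerate([0, 0, 1, 1, 1, 1]):
    noisy = np.full((1, 1, 3), float(light_on))       # stand-in for a 1-sample trace
    noisy += np.random.normal(0.0, 0.1, noisy.shape)  # ray-count-limited noise
    history = accumulate_lighting(history, noisy)
    print(frame, round(float(history[0, 0, 0]), 2))   # converges toward 1.0 over a few frames
```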
 
I find laggy lighting really distracting. One of the volumetric lighting demos was a few frames behind on the lighting and it really stuck out. Softer with a little bleed is probably preferable in all cases, as it'll be less perceptible.
 
Screenspace bleed (if properly taking depth discontinuities into account) is acceptable. But spatial light bleed can cause light to leak through walls and such, which is less passable. That is the worst plague of voxel lighting. But hey, they shipped AC Unity with these exact light-bleeding problems near roofs and corners of buildings and called it a day.

Still, in screenspace the big problem is: what do you do when you have thin structures, like 1 or 2 pixels thin, with no RT sample around to borrow sampling from? That's potential for some really disconcerting artifacts, or maybe very ignorable ones. It depends a lot on the scene.
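For what "taking depth discontinuities into account" means in practice, here is a minimal depth-aware (bilateral-style) blur sketch; the radius and sigma are made-up illustrative values, not anything a real denoiser ships with:

```python
import numpy as np

def depth_aware_blur(color, depth, radius=2, depth_sigma=0.05):
    """Average neighbouring samples, but weight them down across depth
    differences so lighting doesn't bleed over depth discontinuities.
    color: HxWx3 float array, depth: HxW float array."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            acc, wsum = np.zeros(3), 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        dd = depth[ny, nx] - depth[y, x]
                        wgt = np.exp(-(dd * dd) / (2.0 * depth_sigma ** 2))
                        acc += wgt * color[ny, nx]
                        wsum += wgt
            out[y, x] = acc / wsum
    return out
```

Thin 1-2 pixel structures are exactly where this falls apart: there may be no neighbour at a similar depth to borrow from, so the weights collapse to just the pixel's own noisy sample.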
 
Afaik DLSS isn't limited to just making 2560 x 1440 look nearly as anti-aliased/good as 3840 x 2160. It was also intended to offer the option of trying to make 1920 x 1080 look close to 2560 x 1440. Maybe the RTX 2060's limited number of Tensor cores will work just fine with that, and that would dovetail with its general rendering power in the latest, most demanding titles. My friend who got my GTX 1070 is just able to play Monster Hunter World at 1080p, 60 fps, with everything maxed out, so I guess a title like that would benefit from DLSS on an RTX 2060. Just gotta get developers and nVidia to put in the work.
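For a rough sense of why rendering low and upscaling helps, here's just the pixel-count arithmetic (the actual DLSS cost and quality obviously depend on the network and the game):

```python
# Pixels shaded per frame at the render resolution vs. the upscale target.
pairs = [((1920, 1080), (2560, 1440)),
         ((2560, 1440), (3840, 2160))]

for (rw, rh), (tw, th) in pairs:
    ratio = (rw * rh) / (tw * th)
    print(f"{rw}x{rh} -> {tw}x{th}: shades {ratio:.0%} of the target's pixels")
# 1920x1080 -> 2560x1440: 56%;  2560x1440 -> 3840x2160: 44%
```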

I'm guessing that work is more than a little daunting, as we're not being flooded with news of upcoming titles planning on having it. We have that existing list nVidia published, but I'm unaware of it being expanded yet. But since it is scheduled for some titles, I hope and expect that it will be easier to implement as time goes on, and nVidia expands the hardware resources needed to put it in place.
 
So it's a TU106 with 6 fewer SM units (36 → 30) and two fewer 32-bit GDDR6 chips.
A 445 mm² die versus a ~200 mm² die for a modest 30% improvement and likely higher power consumption. Not to mention that 1060 cards can easily be overclocked to 2000 MHz (many are already factory OCed) with good performance gains. It does not make a lot of sense to use such a large die in a midrange card, unless Nvidia is sitting atop a huge pile of defective TU106 cores.
 
Huh, apologies if this was discussed, but on a whim I just now connected my USB-C to USB-3 hub to the USB-C connector on my Founders Edition RTX 2080, and all the connected drives are showing up; I was able to transfer a file between two of the drives connected to it. I also have a USB-3 hub connected to the USB-C one.

I only have the one regular USB-C connection, so if this works, trouble free, it will be a nice find. I was thinking of adding some external SSDs down the road once prices drop another couple of notches, and afaik USB-C offers better performance. I have one SSD in a USB-3 enclosure, and its file read and write speeds aren't bad (though lower than it was when using SATA 3), but the other metrics are notably lower (not that they matter much to me).

Edit: My external SSD benches a bit faster using my PC's regular USB-C port, but the speeds suggest it would be more than ample for regular USB-3 hard drives. The only question now is whether accessing drives through the card's USB-C connection limits gaming performance. I have the Final Fantasy XV benchmark installed on the external SSD, so I'll see if it even runs.

Edit 2: Benchmark score was consistent with the one from a while back, and I didn't see any glaring hitching. Now that I think about it, the bi-directional ability of USB-C and PCI-E should mean that reading from a drive shouldn't impinge on outputting to a display.

Whatever, this is just something I might want to use down the road for my non-SSDs. It might be neat to see a pro-site/reviewer look at this though.
 
Back in October, we saw about 1 GByte/s in both directions when connected to a fast USB-3 SSD.
edit: correction: It was of course a USB 3.1 SSD.
 