Nvidia Turing Product Reviews and Previews: (Super, Ti, 2080, 2070, 2060, 1660, etc.)

Anybody else really confused with the naming scheme? I honestly thought this was a joke and couldn't take it seriously as a rumor, unless it was meant as a notebook part, in which case the lack of Tensor cores would make sense. I can see who this would be for, but I think Nvidia might be shooting themselves in the foot with a product like this; it only muddies the waters and makes RTX seem like a gimmick. We'll wait and see, I guess, but wow, this looks weird right now.
 
Anybody else really confused with the naming scheme? I honestly thought this was a joke and couldn't take it seriously as a rumor, unless it was meant as a notebook part, in which case the lack of Tensor cores would make sense. I can see who this would be for, but I think Nvidia might be shooting themselves in the foot with a product like this; it only muddies the waters and makes RTX seem like a gimmick. We'll wait and see, I guess, but wow, this looks weird right now.
Maybe they're just being realistic for once and understand that RT isn't something that will scale down to low end GPUs?
 
Good point, but it was still GeForce 4, which is exactly what everyone was complaining about at the time; it wasn't called GeForce 4 Fixed Point or GeForce 2 Uber :p. I don't know, I just find this situation really weird.
 
Anybody else really confused with the naming scheme?
It's a non-issue. Give it a week or two and it'll be completely forgotten. Just like the discussion about how 1180 made sense and 2080 did not.

I can see who this would be for, but I think Nvidia might be shooting themselves in the foot with a product like this; it only muddies the waters and makes RTX seem like a gimmick. We'll wait and see, I guess, but wow, this looks weird right now.
One could argue that having ray tracing cores on a device that’s too slow to make practical use of them would be the real gimmick.

Meanwhile, having tensor cores on a mid-range GPU makes sense because those can also be used in the data center. (See Nvidia's Tesla P4 and T4 products.) I don't think there's a comparable use case for smaller GPUs.
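For context, here's a rough sketch of the kind of data-center workload those tensor cores accelerate: a mixed-precision GEMM through cuBLAS with tensor-op math opted in. The sizes and buffers below are purely illustrative assumptions, not tied to any specific product.

```cpp
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

int main() {
    // Illustrative sizes for an inference-style matrix multiply.
    const int m = 1024, n = 1024, k = 1024;

    __half *dA, *dB; float *dC;
    cudaMalloc(&dA, sizeof(__half) * m * k);
    cudaMalloc(&dB, sizeof(__half) * k * n);
    cudaMalloc(&dC, sizeof(float)  * m * n);

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Opt in to tensor-core math (Volta/Turing) for eligible routines.
    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

    // FP16 inputs, FP32 accumulation: C = alpha * A * B + beta * C
    const float alpha = 1.0f, beta = 0.0f;
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                 &alpha, dA, CUDA_R_16F, m,
                         dB, CUDA_R_16F, k,
                 &beta,  dC, CUDA_R_32F, m,
                 CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

That sort of throughput-bound inference math is where a Tesla T4 earns its keep; there's no equivalent market pull for a small gaming part.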
 
Yes, you are right in saying that putting tensor cores in a low-end part makes zero sense; I'm with you on that. The thing is, reports for this card have been all over the place, from it being almost identical to the 2060 barring Tensor cores, to it being a notebook part or a 1050 Ti replacement. If the latter is true, then why not name it GTX 2050 and call it a day? Is there another chip that will fill that role? If not, where does this part fit in?
I also don't see how the 1180 vs. 2080 debate has any relevance here, since that happens almost every generation, e.g. how NV moved from the 780 to the 980. Once they lock in a naming scheme, they usually follow it for the entire series, even for the inevitable rebrands. But having two concurrent naming/numbering schemes running at once is just weird to me, because Nvidia's marketing department is usually pretty good at this. Strikes me as odd, which is why I'm waiting to see what this is exactly.

I don't know, it's just the naming scheme that bothers me; perhaps I'm just super neat and pedantic!
 
The 2080 vs. 1180 debate is relevant because it was another example (of many) of people being bothered by, or opinionated about, what is in the end just a name.

As for GTX 2050 instead of GTX 1660: I guarantee you that people would then be complaining about evil Nvidia deliberately confusing things by not having enough naming differentiation between ray-tracing and non-ray-tracing GPUs.
 
Anybody else really confused with the naming scheme? I honestly thought this was a joke and couldn't take it seriously as a rumor, unless it was meant as a notebook part, in which case the lack of Tensor cores would make sense. I can see who this would be for, but I think Nvidia might be shooting themselves in the foot with a product like this; it only muddies the waters and makes RTX seem like a gimmick. We'll wait and see, I guess, but wow, this looks weird right now.

I was lost until I saw a website explaining what it was in detail. I literally thought it was yet another bin of the previous gen, and was wondering why they were still producing those.

Nvidia, I know I criticized the price of Turing, and it should be criticized. There's too much specialty hardware in a consumer-focused, game-playing GPU. Removing tensor cores is a good idea.

But... uhhh, this whole thing is definitely weird, right? The naming, removing raytracing??? I thought that was Nvidia's whole thing. Like, what? Kind of feels like this is a panicked overreaction to a tanking stock price, at least as a guess.
 
I was lost until I saw a website explaining what it was in detail. I literally thought it was yet another bin of the previous gen, and was wondering why they were still producing those.

Nvidia, I know I criticized the price of Turing, and it should be criticized. There's too much specialty hardware in a consumer-focused, game-playing GPU. Removing tensor cores is a good idea.

But... uhhh, this whole thing is definitely weird, right? The naming, removing raytracing??? I thought that was Nvidia's whole thing. Like, what? Kind of feels like this is a panicked overreaction to a tanking stock price, at least as a guess.

It is weird; it looks panicky and very un-Nvidia.
Complicating the lineup like this and sending a mixed message about RTX and DLSS cannot lead to a stock price recovery.

Then again, the stock market is weird itself: losing tens of billions of USD and 40% of the stock price over 500 million USD of unsold inventory; and what's more, everyone knew months ahead that the mining party was over.
 
Well, I don't really see a problem here. Based on the benchmarks so far, scaling ray tracing below the 2060 doesn't seem to make a whole lot of sense; that'll probably happen at 7nm. There is a large gap between $350 and the integrated solutions, and it makes sense to go there with a more cost-effective product. The naming scheme, at least to me, immediately puts this between the 1000 and 2000 series, and the x60 classifies the performance segment. No confusion there.
 
Surely you can't be serious right now?

Also, "RTX Sales less than expected" in official stock price warning: https://www.engadget.com/2019/01/28/nvidia-earnings-warning-on-china-and-rtx/

Looks like the "too expensive versus the last gen" criticism was valid from a business perspective. Gives some context for the sudden creation of these new, cheaper cards too.

Kind of like Apple's fortunes. They kept pushing the price thinking that people will pay whatever Apple wants them to pay. Turns out, there's a limit to how much people are willing to pay for a smartphone...even one made by Apple. Hopefully, NV is running into that with graphics cards as well, as I'm close to being priced out of the PC gaming space if Turing is an indicator of where GPU prices are headed.

Which is making me hope that MS will push for universal KB/M support in games for their next console, so that I'd have a viable alternative if I end up abandoning high-end gaming on the PC.

Regards,
SB
 
Thinking about how expensive PC gaming was in the '90s and early 2000s, I sure hope we aren't going back to that. Here's hoping AMD has a nice Navi or Arcturus competitor around the $500 price mark. Also, control input isn't really a thing that separates platforms anymore; one can play with KB/M on PS4 or a DualShock on PC. XIM fixes that on consoles, and it might help in online FPS games too :p

PC gaming ain't going anywhere, and the same goes for consoles.
 
Kind of like Apple's fortunes. They kept pushing the price thinking that people will pay whatever Apple wants them to pay. Turns out, there's a limit to how much people are willing to pay for a smartphone...even one made by Apple.
Also because the delta between generations was actually decreasing, so you paid more for a smaller increase over the previous gen. If someone has a working graphics card and, say, spends $300 when they do upgrade to something faster, offering a typical advance but at twice the cost is going to position the new hardware well beyond the average upgrade cycle. If you want to charge more, it needs to be justified.

What I'm curious about, though, is nVidia's expectations. Sales to professional imaging should be stellar. Did nVidia posit that sales to gaming would be bonkers gang-busters? Or is RTX not yet being adopted in pro imaging? I guess the software isn't there to make use of it yet. I don't think any of the main renderers are working with RTX enhancements.
 
Also because the delta between generations was actually decreasing, so you paid more for a smaller increase over the previous gen. If someone has a working graphics card and, say, spends $300 when they do upgrade to something faster, offering a typical advance but at twice the cost is going to position the new hardware well beyond the average upgrade cycle. If you want to charge more, it needs to be justified.

What I'm curious about, though, is nVidia's expectations. Sales to professional imaging should be stellar. Did nVidia posit that sales to gaming would be bonkers gang-busters? Or is RTX not yet being adopted in pro imaging? I guess the software isn't there to make use of it yet. I don't think any of the main renderers are working with RTX enhancements.

Pro sales are not taking off either. There's currently only one piece of software that makes use of the RT cores (Substance Designer, during the baking process, and they are currently working on another solution to support non-Turing GPUs; the company was also bought by Adobe last week, so a switch to OpenCL wouldn't be surprising), and all you can use the Tensor cores for is OptiX denoising acceleration... or not. Actually, you can't (!), because Turing GPUs are still not supported by the latest release (the OptiX 5.1.1 SDK). Also, what matters most, the number of CUDA cores and the amount of VRAM, is barely an improvement compared to the 1080 series (less RAM, actually, for the 2070/2080) for nearly double the price...
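For reference, this is roughly what that OptiX denoising path looks like on GPUs the SDK does support, hooking up the built-in "DLDenoiser" post-processing stage from the OptiX 5.x C++ wrapper. The function name, buffer setup, and launch index below are assumptions for illustration, not the SDK sample verbatim.

```cpp
#include <optixu/optixpp_namespace.h>  // OptiX 5.x C++ wrapper

// Rough sketch: append the built-in AI denoiser after the render launch.
// Buffer creation and the ray-generation program are assumed to exist already.
void addDenoiserPass(optix::Context context,
                     optix::Buffer noisyBeauty,   // RGBA32F render output
                     optix::Buffer denoisedOut,   // RGBA32F denoised result
                     unsigned width, unsigned height)
{
    optix::PostprocessingStage denoiser =
        context->createBuiltinPostProcessingStage("DLDenoiser");
    denoiser->declareVariable("input_buffer")->set(noisyBeauty);
    denoiser->declareVariable("output_buffer")->set(denoisedOut);

    optix::CommandList commands = context->createCommandList();
    commands->appendLaunch(0, width, height);                     // render pass (entry point 0)
    commands->appendPostprocessingStage(denoiser, width, height); // denoise pass
    commands->finalize();
    commands->execute();
}
```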
 
Exactly. Until software is using the RTX cores to accelerate raytracing, it's redundant hardware. Sales should pick up once the software is up to speed, but who's going to update the software for RTX-specific paths if no one's using it? The old chicken-and-egg issue. Once DXR is widespread across hardware, it'll make sense for the software companies to target it.
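As a rough illustration of what "targeting DXR" looks like on the software side, assuming a plain D3D12 device: an engine can query the raytracing tier at startup and fall back to its raster path when the hardware or driver doesn't expose it, which is what makes shipping an optional RTX path feasible at all.

```cpp
#include <d3d12.h>

// Minimal sketch: gate an RTX/DXR code path behind a runtime capability check,
// so the same build runs on GPUs with and without raytracing support.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```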

nVidia would perhaps have been better off working to create pro imaging renderers rather than games.
 
I personally feel like NVIDIA's mistake in the short term was taping out *3* GPUs with raytracing and tensor cores. It's an interesting differentiator for the high end, where pricing is less of an issue and AMD doesn't have anything competing at that level of performance right now anyway, but at the RTX 2070/2060 level it doesn't really make any sense until a lot more games support it - and even then, it only makes sense if those games don't need more raytracing performance than an RTX 2060 can provide...

On the other hand, the only way NVIDIA can avoid the chicken-and-egg problem is by taking that short-term hit and putting raytracing HW into as many consumers' hands as they can so that developers decide it's worth their time to add raytracing support. So what they are doing feels like it's still the best long-term strategy to me, except that: 1) their RTX pricing is too high to make it really successful, 2) the magnitude of their earnings miss shows they've clearly underestimated how much of an impact this would have... (then again, if GTX inventories were too high at RTX launch, lower RTX prices would have made that even worse - so really the biggest mistake there was producing too many Pascals).

It will be interesting to see how area/power efficient the rumoured TU116 is (if it exists). My suspicion is that without RTX/Tensors/fp64/fp16 (don't forget FP16 also costs area vs Pascal but isn't very beneficial in games yet), Turing should be quite a bit more area efficient than Pascal. If so, their best strategy against Navi might be a 7nm chip with RTX 2070-level performance but without raytracing/tensors/fp64/fp16. But that could hurt the long-term penetration of RTX, which could be an advantage against AMD once there are more games with raytracing, so it's a complex trade-off.
 