Nvidia Turing Product Reviews and Previews (Super, Ti, 2080, 2070, 2060, 1660, etc.)

Pro sales are not taking off either. There's currently only one piece of software that makes use of the RT cores (Substance Designer, during the baking process, and they are currently working on another solution to support non-Turing GPUs... the company has also been bought by Adobe last week, so a switch to OpenCL wouldn't be surprising), and all you can use the Tensor cores for is OptiX denoising acceleration... or not. Actually, you can't (!), because Turing GPUs are still not supported by the latest release (OptiX 5.1.1 SDK). Also, what matters most, the number of CUDA cores and the VRAM, is barely an improvement over the 1080 series (less RAM, actually, for the 2070/2080) for nearly double the price...


I came here to post exactly this.
I might be needing a new laptop soon for SolidWorks, and I tend to use raytraced images and videos for presentations, so I was feeling enamoured with those MSI slim GS65 laptops with the RTX 2060/70.
After researching a bit, I was really surprised to find that there still isn't any working OptiX plugin for Visualize with RTX hardware denoising (despite it being announced half a year ago), and apparently a real-time raytracing engine isn't even in the works.

So if I bought a laptop with an RTX GPU right now, I'd probably use… Radeon Pro Render?
I thought Turing was going to kill it at offline rendering, but it turns out it still doesn't.



As for the rest, sure, they hit a wall with how much they thought they could charge their consumers, but I don't think nVidia was counting on this much inertia from game devs to actually launch games or updates that support DLSS or raytracing.
RT is only in BF5, but that game is pretty much a sales failure. DLSS was promised for dozens of games, but so far we've got… zero? Half a year after release?
Anthem is a high-profile launch coming late February, but they already confirmed it's also not coming with DLSS at launch (supposedly coming "shortly" after), and it seems they don't really know if it'll have raytracing at all. The game is from BioWare and it's supposed to use Frostbite like BF5.

Maybe it's best to take a step back and bring RT back when it's viable for at least mid-range cards at $150-200.
 
Like many have said, it's the price that's the problem, not the hardware itself. The professional market I don't know about, but for gaming it's a nice bonus on top of the massive rasterization performance Turing offers. Quite a few games in the works support RTX; obviously one cannot expect it to be available right off the bat with new tech.
 
If a few people were able to create something like q2vkpt as a spare-time project, then Nvidia should work with game devs and the modding community to make raytraced mods of other classics like Quake 1, Half-Life, Unreal, BioShock, etc.
That could probably be done rather quickly; it would bring new life to old games and also be a compelling reason for lots of people to buy RTX cards.
 
Like many have said, it's the price that's the problem, not the hardware itself. The professional market I don't know about, but for gaming it's a nice bonus on top of the massive rasterization performance Turing offers. Quite a few games in the works support RTX; obviously one cannot expect it to be available right off the bat with new tech.

The hardware is what dictated the price, though.
 
No. It was their desired profit margins.

They are physically bigger chips which contain more transistors relative to their Pascal analogues (by performance level). I can't speak to how much this added to the cost to manufacture them, but it did add something. Sure, Nvidia could have eaten this by lowering their profit margin on each unit, but that wouldn't thrill investors either.
 
Now it all depends on how they plan to correct this shortfall. It can be a very complicated situation with many options. Do they drop profit margins, which means dropped prices, which may mean higher market uptake? Do they cut wafer orders to produce fewer Turing GPUs? Do they leave margins/prices alone while sitting on an over-abundance of Pascal cards and an unknown inventory of Turing cards? With all that said, their choice may have been made for them already, given the latest massive wafer accident at TSMC.

Anyway, that seems largely off-topic for this thread, other than to say the product reception hasn't fared well because of price and the software situation.
 
If you've got a new feature that the market is crying out for, like raytracing, and you have a monopoly on it, you charge the max to maximise profits.

If you have a new feature you're trying to establish so you become the de facto standard with a controlling stake in the industry, like raytracing, then you price as low as possible to drive adoption.

It would appear nVidia are pricing for a must-have feature that is only must-have on paper. As I've mentioned, while hardware RT is definitely a must-have for pro imaging and could be priced accordingly, without the software to drive it, it's worthless. To get devs to target RT in their software, it needs to be in a sizeable market segment, which means pricing it low to drive adoption.

It sounds, too, like with DXR being DX12-only and DX12 not being popular, RT itself is facing an uphill struggle for adoption.

I don't think nVidia's business plan was well considered, and we're seeing the economic impact of that. If I were in charge, I'd have looked only at pro imaging, got on board with the key players, and had RT software come out with the hardware on day one: buy an RTX 2080 (maybe not bother with the 2070?) and get a 10x speed-up in your workflows. That gives you 6 months of raytracing being developed and used, and then you supply free cards to engine devs etc. to integrate, and then roll out RT cards to gamers when there's software to make use of it and give them a reason to buy it. At the moment nVidia are only selling a promise - some day this card is going to result in awesome graphics - but we all know we can just wait for that day and get cheaper/better options. It's like selling a new console at $600 with a library of two games. Sales will be crap, as gamers know that in a year's time it'll be $400-500 and there'll be a library worth playing...
 
A fairly good description of the landscape here.

It would appear Nvidia is taking the brunt of the cost to move this feature forward.
As you write, it's unclear why Nvidia decided to push this forward now, but they wouldn't move forward without good reason.
Perhaps the risk was worth the gain, or they are attempting to capture the industry to get a full leg up on their competitors before those competitors arrive.

We are all waiting for that 'system seller' game with RT that will drive sales of the hardware, but that could take a while - perhaps as long as next gen arriving (where I still expect RT to arrive).
 

Maybe when Turing was in the initial planning stages, they expected they'd be on 7nm by the time it released. Then, once it was determined that wasn't going to happen they decided to push forward anyway.
 
It all comes down to money. I think RTX's issue is that it wasn't priced well, and I agree about the flood of Pascal. I mean, I owned 3 of them at one point in time, and my friends owned 9 of them.

It was a bad follow-up that relied on a lot of factors lining up for people to be wowed.

Having said that, I'm not sure the alternative is any better, which is to just introduce a more powerful GPU with no real additional functionality.
Perhaps the bubble was bound to pop regardless and they felt that this was their best play.
 
I came here to post exactly this.
I might be needing a new laptop soon for SolidWorks, and I tend to use raytraced images and videos for presentations, so I was feeling enamoured with those MSI slim GS65 laptops with the RTX 2060/70.
After researching a bit, I was really surprised to find that there still isn't any working OptiX plugin for Visualize with RTX hardware denoising (despite it being announced half a year ago), and apparently a real-time raytracing engine isn't even in the works.

So if I bought a laptop with an RTX GPU right now, I'd probably use… Radeon Pro Render?
I thought Turing was going to kill it at offline rendering, but it turns out it still doesn't.



As for the rest, sure, they hit a wall with how much they thought they could charge their consumers, but I don't think nVidia was counting on this much inertia from game devs to actually launch games or updates that support DLSS or raytracing.
RT is only in BF5, but that game is pretty much a sales failure. DLSS was promised for dozens of games, but so far we've got… zero? Half a year after release?
Anthem is a high-profile launch coming late February, but they already confirmed it's also not coming with DLSS at launch (supposedly coming "shortly" after), and it seems they don't really know if it'll have raytracing at all. The game is from BioWare and it's supposed to use Frostbite like BF5.

Maybe it's best to take a step back and bring RT back when it's viable for at least mid-range cards at $150-200.

You might want to wait and see if Navi has raytracing support before buying a laptop. The sheer advantage of 7nm vs 12nm is going to be unbeatable for laptops; the power, and thus TDP, advantage is nearly double.

As for gaming, of course game devs aren't going to jump on the train ASAP just for the sake of it. An entirely new rendering architecture, only usable by however many people happen to buy the only cards that support it, with no console support? Nvidia would have to be idiots indeed to assume rapid adoption, and I'm not sure they were. Having RT support for the future is very nice, and they heavily advertised that. Charging as much as they did for it seems to be more of the mistake.

But the worst scenario for Nvidia is that AMD comes out with different, more popular raytracing support. RTX concentrates entirely on BVH acceleration, but that's not the only acceleration structure you can use. If AMD comes up with something more flexible, a way for specialized, hardware-accelerated cone tracing and sphere tracing to exist efficiently, RTX could feel dead in the water. Out-of-order, bandwidth-heavy memory accesses are the primary cost of raytracing in things like BFV, and that's exactly the kind of traffic BVH and triangle tracing generate.
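To make the BVH point concrete, here's a rough sketch in plain C++ of what "BVH + triangle tracing" boils down to. The struct layout, names, and helpers are illustrative assumptions on my part, not DXR's API or Nvidia's actual implementation:

```cpp
// Illustrative BVH traversal sketch (not DXR, not any engine's real data layout).
#include <algorithm>
#include <cfloat>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Ray { Vec3 origin, dir, invDir; };   // invDir = 1/dir, precomputed per ray
struct Tri { Vec3 v0, v1, v2; };
struct BVHNode {
    Vec3 boxMin, boxMax;                    // bounds of everything under this node
    uint32_t leftChild;                     // right child is leftChild + 1
    uint32_t firstTri, triCount;            // triCount > 0 marks a leaf
};

// Slab test: does the ray enter the box before the current closest hit?
static bool hitAABB(const Ray& r, Vec3 bmin, Vec3 bmax, float tMax) {
    float tEnter = 0.0f, tExit = tMax;
    float t1 = (bmin.x - r.origin.x) * r.invDir.x, t2 = (bmax.x - r.origin.x) * r.invDir.x;
    tEnter = std::max(tEnter, std::min(t1, t2)); tExit = std::min(tExit, std::max(t1, t2));
    t1 = (bmin.y - r.origin.y) * r.invDir.y; t2 = (bmax.y - r.origin.y) * r.invDir.y;
    tEnter = std::max(tEnter, std::min(t1, t2)); tExit = std::min(tExit, std::max(t1, t2));
    t1 = (bmin.z - r.origin.z) * r.invDir.z; t2 = (bmax.z - r.origin.z) * r.invDir.z;
    tEnter = std::max(tEnter, std::min(t1, t2)); tExit = std::min(tExit, std::max(t1, t2));
    return tEnter <= tExit;
}

// Moller-Trumbore ray/triangle test; shrinks closestT when a nearer hit is found.
static void hitTriangle(const Ray& r, const Tri& t, float& closestT) {
    Vec3 e1 = sub(t.v1, t.v0), e2 = sub(t.v2, t.v0), p = cross(r.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return;     // ray parallel to the triangle plane
    float inv = 1.0f / det;
    Vec3 s = sub(r.origin, t.v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return;
    Vec3 q = cross(s, e1);
    float v = dot(r.dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return;
    float tHit = dot(e2, q) * inv;
    if (tHit > 1e-4f && tHit < closestT) closestT = tHit;
}

// Walk the tree with an explicit stack; returns the closest hit distance or FLT_MAX.
float traceClosestHit(const Ray& ray, const std::vector<BVHNode>& nodes,
                      const std::vector<Tri>& tris) {
    float closestT = FLT_MAX;
    uint32_t stack[64];                     // assumes tree depth < 64
    int sp = 0;
    stack[sp++] = 0;                        // start at the root node
    while (sp > 0) {
        const BVHNode& n = nodes[stack[--sp]];
        if (!hitAABB(ray, n.boxMin, n.boxMax, closestT)) continue;  // prune this subtree
        if (n.triCount > 0) {               // leaf: test its triangles
            for (uint32_t i = 0; i < n.triCount; ++i)
                hitTriangle(ray, tris[n.firstTri + i], closestT);
        } else {                            // interior: visit both children
            stack[sp++] = n.leftChild;
            stack[sp++] = n.leftChild + 1;
        }
    }
    return closestT;
}
```

Every pop of that stack is a dependent, scattered memory read - that's the bandwidth/latency cost I mean, and the part Turing's RT cores move into fixed-function hardware.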

Cone tracing, on the other hand, reduces bandwidth cost a lot at the expense of light leaking and the like, but it has already shipped in titles like Kingdom Come: Deliverance without any special hardware needed. Sphere tracing (signed distance fields) can do things like ambient occlusion, shadows, and indirect shadows very quickly. That too has already shipped (Claybook) without any special hardware; hell, it runs on an iPhone. If, say, Navi or Arcturus or whatever the next-gen consoles use has specialized hardware for such techniques, especially if it can accelerate structure generation (the really expensive part of these techniques), RTX could end up being a dead end put out too quickly.
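For context, sphere tracing against an SDF is simple enough to sketch in a few lines of plain C++. The scene here (a sphere on a ground plane) and all the names are just illustrative assumptions, not Claybook's or anyone's shipping code:

```cpp
// Illustrative sphere-tracing (SDF raymarching) sketch; scene and names invented for the example.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a)         { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Signed distance to the scene: negative inside, positive outside, zero on the surface.
// Here the "scene" is just a unit sphere resting on a ground plane.
static float sceneSDF(Vec3 p) {
    float sphere = length({p.x, p.y - 1.0f, p.z}) - 1.0f;
    float ground = p.y;
    return std::min(sphere, ground);
}

// March from 'origin' along unit-length 'dir'; returns the hit distance, or -1 on a miss.
// Each step advances by the reported distance, which can never overshoot the nearest surface.
static float sphereTrace(Vec3 origin, Vec3 dir, float tMax) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < tMax; ++i) {
        float d = sceneSDF(add(origin, scale(dir, t)));
        if (d < 1e-3f) return t;            // close enough: call it a hit
        t += d;                             // safe step size straight from the field
    }
    return -1.0f;
}

// Cheap soft shadow: march toward the light and track how close the ray came to
// geometry relative to how far it travelled (the classic SDF penumbra estimate).
static float softShadow(Vec3 p, Vec3 toLight, float k) {
    float res = 1.0f;
    float t = 0.02f;                        // small offset to avoid self-shadowing
    for (int i = 0; i < 64 && t < 20.0f; ++i) {
        float d = sceneSDF(add(p, scale(toLight, t)));
        if (d < 1e-3f) return 0.0f;         // fully occluded
        res = std::min(res, k * d / t);     // narrower misses => darker penumbra
        t += d;
    }
    return res;                             // 1.0 = fully lit
}
```

No BVH and no per-triangle tests - the cost per ray is a handful of distance evaluations, which is why it already runs on hardware without any RT cores.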

Since sphere tracing and cone tracing are usable on older hardware as well, using these techniques would have the bonus of not needing to worry about backwards compatibility with older PC hardware and such. I kind of wonder if Nvidia has been tipped off that this is what AMD is doing, and that's why it has decided to abruptly put out cards without specialized RT hardware.
 
Cone tracing quality is poor compared to ray tracing. SDF tracing is only applicable to static and procedural geometry.

Right now RTX allows developers to get their feet wet in preparation for next-gen consoles.
 
Maybe when Turing was in the initial planning stages, they expected they'd be on 7nm by the time it released. Then, once it was determined that wasn't going to happen they decided to push forward anyway.
This theory would make a lot of sense; however, the foundry roadmaps going back to at least 2014 would not support it, as no foundry was expected to be ready for 7nm in mid-2018 (well, other than Intel's 10nm...).
Although possibly one of the 10nm nodes was expected to be ready for high performance and not just mobile back then, I've no idea really (the roadmaps usually don't give too many details).
 
Having said that, I'm not sure the alternative is any better, which is to just introduce a more powerful GPU with no real additional functionality.
Perhaps the bubble was bound to pop regardless and they felt that this was their best play.

Why not just keep selling Pascal and wait until 7nm to add all the extra transistors?
 