Nvidia Turing Product Reviews and Previews: (Super, Ti, 2080, 2070, 2060, 1660, etc.)

Turing RTX is not really oriented at end users; it's oriented at developers. Developers absolutely need hardware to run their code on, and unless Nvidia wants to somehow try to take half the AAA game industry in house, they have to release hardware for them. Likewise, the RTX announcement is aimed at developers, not end users. Nvidia needs to get *them* hyped so that they will actually target the hardware in the first place. There are also indie developers to think of, who tend to be the ones exploring novel techniques. They have to buy the hardware just like everyone else - they don't get special channels with Nvidia like AAA developers do.

As for industry adoption, you already have things like Unreal Engine incorporating RTX/DXR, and UE4 is pretty huge in the industry. And last time I checked, BF5 was a AAA game. But for ray tracing to really live up to its potential, we will have to wait for the next generation of engines to be built around it, and that will take *years* from now until games using those engines actually ship. As it is, there are just too many bottlenecks in the way ray tracing and existing engines interact. BF5 wasn't slow because of bad ray tracing hardware; it was slow because of all the extra work it had to do to accommodate a very intrusive system that the engine was not designed for. Consider that there was almost no performance difference between low quality DXR and ultra quality DXR, but that enabling either massively slowed down the entire system.
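To make that fixed-overhead point concrete, here is a toy cost model. None of the millisecond figures below come from actual BF5 profiling; they are made up purely to illustrate how a large per-frame cost that does not scale with the DXR quality setting produces exactly the pattern described above.

```cpp
// Toy model only: hypothetical frame-time budget showing why "low" vs "ultra" DXR
// can look nearly identical while turning DXR on at all tanks performance.
#include <cstdio>

int main() {
    const double base_raster_ms   = 8.0;  // hypothetical: frame time with DXR off
    const double fixed_rt_cost_ms = 9.0;  // hypothetical: BVH updates, denoising, extra engine passes
    const double per_quality_ms[] = {1.0, 1.5, 2.0, 2.5};  // hypothetical: cost that scales with ray budget
    const char*  names[]          = {"low", "medium", "high", "ultra"};

    std::printf("DXR off    : %.1f ms (%.0f fps)\n", base_raster_ms, 1000.0 / base_raster_ms);
    for (int i = 0; i < 4; ++i) {
        const double frame_ms = base_raster_ms + fixed_rt_cost_ms + per_quality_ms[i];
        std::printf("DXR %-6s : %.1f ms (%.0f fps)\n", names[i], frame_ms, 1000.0 / frame_ms);
    }
    // With numbers like these, low vs ultra differ by ~1.5 ms, while enabling DXR at all
    // more than doubles the frame time: the quality slider never touches the fixed cost.
    return 0;
}
```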
 
They did. It takes time. nVidia is not going to sit on finished hardware for six months before releasing it.
True, and I think all IHVs basically follow the same principle. Vega's Rapid Packed Math feature did not suddenly appear in games overnight either; it took developers months before it showed up in even three supported games.
 
Please do not turn this into a PC vs Console issue...
 
RT needs the 2080 Ti as the top-performing chip and as the proof of concept. If all you have out there is just the 2080, then the case for RT is not really going to be that amazing.

RT needs the hardware to be in people's hands more than anything. If you can sell the hardware on other merits, the RT hardware's effectiveness isn't carrying that burden. BTW, your stating that RTX requires $1,200 hardware to impress people is one of the more damning statements I've seen about the tech.


Yes, historically the middle dies have been here first, but that's because they introduced a much bigger jump in performance compared to the previous gen; that isn't the case with Turing.

It isn't a jump because they chose to launch on the same process.
 
They did.
No, they didn't. They released the hardware before the software was ready. Now they're playing catchup. nVidia worked with DICE to get RT into BFV but not with Autodesk to get RT into Arnold.

And if it takes time, and the ray tracing software can't be ready for the ray tracing hardware until a year after its launch, what's the point in buying RT hardware now? Wait a year for when the software is ready for it.

I don't understand the counter-arguments here. People seem to be suggesting that the best move for nVidia was to release a product with no software for it, that they were right to expect people to spend huge amounts of money buying it ahead of any software appearing, and that the missed sales targets are somehow a strange reaction by the market. "I don't know why people aren't buying RTX cards now - it's going to be really good once software comes out for it."
 
If not with someone outside the company, they at least could have primed their internal OptiX team in order to have support ready at launch. But apparently, OptiX 6 has no decided-upon launch date yet.
 
No, they didn't. They released the hardware before the software was ready. Now they're playing catchup. nVidia worked with DICE to get RT into BFV but not with Autodesk to get RT into Arnold.
Indeed. From everything we knew at the launch, DICE had been working on their RT tech for 8 months before getting the hardware, using Volta, and then updated their code to work with Turing in the 3 weeks before the RTX release.
 
If not with someone outside the company, they at least could have primed their internal OptiX team in order to have support ready at launch. But apparently, OptiX 6 has no decided-upon launch date yet.
That's exactly what I was saying. People can't blame it on ISVs when it's Nvidia themselves who weren't even ready. Once OptiX 6 is deployed, every DCC software currently using it will be able to use the Tensor Cores in Turing GPUs to accelerate denoising.
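For context, this is roughly what that looks like from the application side today: a minimal sketch of the OptiX 5-style built-in "DLDenoiser" post-processing stage driven through the optixpp wrapper, as recalled from the SDK samples. Names, buffer formats and the blend setting should be treated as approximate; the point is simply that applications already call a built-in stage, and OptiX 6 is expected to run that same stage on Turing's dedicated hardware.

```cpp
// Sketch of the OptiX 5.x built-in AI denoiser via the optixpp C++ wrapper.
// Buffer sizes/formats are placeholders; error handling is omitted for brevity.
#include <optixu/optixpp_namespace.h>

void denoise_frame(optix::Context ctx, unsigned width, unsigned height)
{
    // Noisy beauty image in, denoised image out (RGBA float buffers).
    optix::Buffer noisy    = ctx->createBuffer(RT_BUFFER_INPUT_OUTPUT, RT_FORMAT_FLOAT4, width, height);
    optix::Buffer denoised = ctx->createBuffer(RT_BUFFER_INPUT_OUTPUT, RT_FORMAT_FLOAT4, width, height);

    // The trained denoiser ships with the SDK as a built-in post-processing stage.
    optix::PostprocessingStage stage = ctx->createBuiltinPostProcessingStage("DLDenoiser");
    stage->declareVariable("input_buffer")->set(noisy);
    stage->declareVariable("output_buffer")->set(denoised);
    stage->declareVariable("blend")->setFloat(0.0f);  // 0 = fully denoised result

    // Queue the stage in a command list and run it after the render launch.
    optix::CommandList cmds = ctx->createCommandList();
    cmds->appendPostprocessingStage(stage, width, height);
    cmds->finalize();
    cmds->execute();
}
```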
 
If not with someone outside the company, they at least could have primed their internal OptiX team in order to have support ready at launch. But apparently, OptiX 6 has no decided-upon launch date yet.

Geez, that thread...
All this time with the cards on the market and they can't even commit to a release window for supporting the RTX features in professional applications?
 
If not with someone outside the company, they at least could have primed their internal OptiX team in order to have support ready at launch. But apparently, OptiX 6 has no decided-upon launch date yet.
I expected to see more views and replies in that link, but that could be due to OptiX support already working for existing clients. According to Nvidia, they are working with NDA partners on issues regarding the OptiX 6 SDK, which would take advantage of RTX acceleration. I can't see holding up a hardware launch that has a number of major platforms to support (Deep Learning, Data Center, Autonomous Machines, Healthcare, HPC, Self-Driving Cars, Gaming & Entertainment, Design & Pro Visualization) whose customers are not affected by the late OptiX 6 SDK. I'm guessing the hardware currently works with the OptiX 5 SDK, similar to Volta.

 
True or false - if nVidia had ensured a couple of major applications had RTX acceleration at launch, interest and sales of RTX cards would be stronger than they are now?
True, but that's not how software adoption has functioned in the last decade. As I stated above, CUDA, OpenCL and browser acceleration all came after the hardware was released, not before.
your stating that RTX requires $1,200 hardware to impress people is one of the more damning statements I've seen about the tech.
It doesn't need $1,200 per se, but it needs big dies (for the shader power and RT/Tensor cores) - larger than is possible at 7nm at the moment.

It isn't a jump because they chose to launch on the same process.
Then what do you think are the reasons for launching at 12nm?
 
It doesn't need $1,200 per se, but it needs big dies (for the shader power and RT/Tensor cores) - larger than is possible at 7nm at the moment.

You agreed that the 2080 wouldn't be that big at 7nm. So, if you need a bigger die than that, you're talking about a product that at 12nm costs $1,200, or another that costs even more.

Then what do you think are the reasons for launching at 12nm?

Because they misread the market and their place in it. When everything you touch has been turning to gold, you tend to think you can do no wrong and start thinking in terms of what products best fit your goals rather than what products will best appeal to the consumer.
 
Then what do you think are the reasons for launching at 12nm?
One can imagine that they didn't have many other options: either send out something akin to Radeon VII, ship a Volta consumer derivative, ship Turing RTX, or ship nothing at all.
Of the three alternatives to Turing RTX, the last one (shipping nothing) seems the worst, and the first two seem like a waste of time.

I don't think they did the wrong thing, but they are going to eat it until everyone else catches up. It's not like there are 15 TF Radeons out there eating their lunch, dominating the playing field, and reversing years of goodwill. As far as I can see, aside from the missed projections, which is what we are debating, they can still safely continue this RT program.
 
One can imagine that they didn't have many other options: either send out something akin to Radeon VII, ship a Volta consumer derivative, ship Turing RTX, or ship nothing at all.
Of the three alternatives to Turing RTX, the last one (shipping nothing) seems the worst, and the first two seem like a waste of time.

Edit: Why not, "ship nothing at all"?
 
If they knew about the soon-to-be overabundance of Pascal products in the channel, they could have shipped nothing at all and ended up better off.
 
I suspect that shareholder/business/cash-flow/timing/manufacturing issues are all factors at play here. We must suspect that there is a Turing/post-Turing derivative lined up for 7nm already. Shipping now may be costly, sure, but it sets up their 7nm product.

It's going to take a long time for developers to start moving towards RT. A long time. There's no reason to delay the discussion if you can get developers working on it for the next 2 years, in time for a real RT launch. And when the competition actually comes into play, Nvidia would have their RT matured by 2 years of input and driver fixes.
 
If they knew about the soon-to-be overabundance of Pascal products in the channel, they could have shipped nothing at all and ended up better off.
It was hard to predict that ETH and the other GPU-mined cryptos would die off so quickly, resulting in a flooded market. If ETH had stayed above 400 USD, I wouldn't have sold my 1070s either, and there wouldn't have been a flood. Crypto had a massive boom and bust all in a matter of 12 months; these cards are in development for much longer.
 