Nvidia Turing Product Reviews and Previews: (Super, Ti, 2080, 2070, 2060, 1660, etc.)

How many games are released with these features? How many people are going to buy an expensive GPU now with a view to using its RT features in the future, rather than just wait until the games they want to play are out and then get an RT GPU?

Well, you're right, most aren't released yet, and people who know beforehand what they're getting shouldn't buy it and should wait instead. It's Nvidia that takes the hit. People who can't wait and want to shell out $600 for a mid-range RTX, that's their decision. Like I said, I'll wait and get one in a year or two; things will be totally different by then, that's for sure.

I personally don't mind that Nvidia launched their RTX; let them take the hit, they can afford to. Those GPUs are excellent hardware otherwise, and prices will come down eventually as AMD comes out with better products too.

I don't see it as any different from the PS4: when it launched I didn't really see any titles for it, and people complained about a bad library and 'no games'. I bought one second-hand in 2018, when the library was bigger and had games that interest me.
 
The games are coded for DXR not RTX. So yes, the drivers will play a role here for sure.

Can't run BFV RT without Shader Model 6_3 or 6_4, which comes in a Windows release with DXR.
Games are coded for DX11, DX12, Vulkan or OpenGL. Drivers obviously play a role everywhere. That does not mean vanilla code achieves great performance out of the box on all hardware.

If you want to achieve great performance you need to optimize for specific vendors and even generations (or better, with all vendors and generations in mind), because a technique that works well on, say, Vega runs like s**t on Pascal. Same thing with DXR. Vanilla DXR code may not yield good performance on any hardware. When AMD (or Intel) comes up with their own RT tech (software or hardware based), everybody will have tuned their code to work great on Turing RTX hardware, because that is the hardware that is available now and that everybody is profiling and testing with. That does not mean it will work great on AMD or Intel RT tech (because of caches, memory access, shaders, scheduling, etc.). That's how it has always been in the past and I don't see a reason why this would suddenly change with DXR.
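To make the DXR-vs-RTX point concrete, here is a minimal sketch (C++, D3D12) of how a game gates its ray tracing path: it asks the API for the generic raytracing tier instead of detecting any vendor-specific "RTX" capability, and falls back to pure rasterization if the tier isn't there. This assumes you already have a valid ID3D12Device* called device; the function name SupportsDXR is just for illustration.

```cpp
#include <windows.h>
#include <d3d12.h>

// Sketch: gate the renderer's RT path on the generic DXR feature check,
// not on vendor detection. Assumes `device` is a valid ID3D12Device*.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // Tier 1.0 is the baseline the first wave of DXR titles requires;
    // anything below it means staying on the pure rasterization path.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

The per-vendor and per-generation tuning described above then lives inside that RT path (how acceleration structures are built, how rays are batched, payload sizes, etc.), not in the API check itself.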
 
I personally don't mind that Nvidia launched their RTX.
I don't mind either. It makes zero difference to me whether RTX exists or not. However, the discussion is a theoretical one about what their best move would have been, and you don't need to care about the answer to find the discussion interesting and worth taking part in.
 
...everybody will have tuned their code to work great on Turing RTX hardware, because that is the hardware that is available now and that everybody is profiling and testing with.
Really? What's your definition of 'everyone'? I see 'some devs', and probably a very small proportion of devs, unless RTX becomes widespread enough to be financially viable to target.

If you were a developer working now, what's the financial incentive to buy RTX cards and invest the time and effort in adding RTX rendering for an audience of some thousands? Is that audience big enough to justify the expense? There'll be some larger studios who invest in pioneering R&D for future projects, but they'll be the ones to optimise accordingly for whatever hardware the market has. The rest will use whatever technologies are mainstream, typically a few years old.
 
If you were a developer working now, what's the financial incentive to buy RTX cards and invest the time and effort in adding RTX rendering for an audience of some thousands?
Agreed for smaller devs, but for AAA devs... the incentive is NVIDIA handing them a big bag of cash and/or engineering support for other parts of the project on the condition that they support RTX. Are we seriously forgetting all the historical controversy around The Way It's Meant To Be Played?

I honestly don't know what NVIDIA is doing on that front and haven't kept up to date on the nature of their incentives compared to 5+ years ago, but does anyone seriously believe they're letting something of this strategic importance depend entirely on game devs who, unlike NVIDIA, don't have a financial incentive to make this work? At the very least, they must be doing a fair bit of the DXR implementation for AAA devs themselves.

So it's actually quite interesting, IMO, that RTX and DLSS support are taking this long to be added to games... For DLSS specifically, which should be relatively simple on paper, I wonder if there are some unexpected technical issues, e.g. image quality for specific kinds of shaders, or games where different scenes have different visual styles...
 
Really? What's your definition of 'everyone'? I see 'some devs', and probably a very small proportion of devs, unless RTX becomes widespread enough to be financially viable to target.

If you were a developer working now, what's the financial incentive to buy RTX cards and invest the time and effort in adding RTX rendering for an audience of some thousands? Is that audience big enough to justify the expense? There'll be some larger studios who invest in pioneering R&D for future projects, but they'll be the ones to optimise accordingly for whatever hardware the market has. The rest will use whatever technologies are mainstream, typically a few years old.
Now you are nitpicking. I mean 'everybody' as in 'everybody that is currently doing RTRT in hardware with DXR on Windows or nVidia's Vulkan extension'.

I do expect that, say, 18-24 months from now a lot of AAA titles will have some sort of RT support; some will be more impressive than others. We have already seen preliminary RT support in Unreal Engine and CryEngine, I think. Unity will surely follow, maybe as an upgrade to the HDRP. That means the technology will be usable by smaller studios and even indie devs who use these engines. We could see RT adoption faster than you think.
 
We don't know how RT-RT hardware and software will advance. We could just as well see that, by the time widespread RT-RT software hits (say, your 18-24 months), it won't be suitable to run on preliminary/first-generation RTX hardware. Such is the life of "version 1.0" hardware.

So again, the question of "why invest now"?
 
It's chicken and egg; however, the IHV really does have to get it out to the regular consumer in some manner before developers even begin to consider it. Only the top-tier developers with the time and money for R&D will be able to further that agenda, and even then the IHV will have to provide very strong support à la GameWorks or TressFX.

At least there is a non-zero chance if regular consumers start to snap up the first/second-gen HW for superficial add-on features. By the time it hits the proper mainstream, these devs will have had some experience in optimizing, so the greater market in the lower performance tiers will have some chance.
 
It wasn't so much deprecated as the feature sets were actually expanded. The problem with the 360 was that it was a single platform with limited performance and capabilities, whereas the API was much more limited until the DX11 generation. That said, the tessellator was used when performance could be spared, even for somewhat rudimentary things like terrain.

At the very least, MS has been actively working towards putting together an entire API around RT and ML that is perhaps in a much better state than things were for anything in general back in 2005 (DX9+).

So while the software environment may not be wholly comparable, I find it a little hard to believe the notion that nVidia shouldn't be trying to push a consumer-level piece of tech even if it traditionally takes time for the performance to catch up for more advanced usage.

Would it have been better for nV to create a gamer-focused 800mm^2 piece of HW? Perhaps, but we don't know if there are extant issues that prevent that sort of scaling when we have to consider the implications for power density and scaling up the rest of the HW to balance it.

Just so - nVidia could have kept the fancy features in the super duper expensive market and gimped the consumer version strictly for performance with older features, creating a clearer divide, but evidently it's a little more confusing now, isn't it?
 
We don't know how RT-RT hardware and software will advance. We could just as well see that, by the time widespread RT-RT software hits (say, your 18-24 months), it won't be suitable to run on preliminary/first-generation RTX hardware. Such is the life of "version 1.0" hardware.

So again, the question of "why invest now"?
I think one of the reasons nVidia rushed Turing out the door was to get early feedback from actual production game code and incorporate it into their next-generation hardware. The best way to verify whether a concept works and to identify bottlenecks is actual real-life data; working with simulations and test code will only get you so far. That's why I think Turing is a beta test.

I'm wildly speculating here, but imagine that nVidia has maybe 2 or 3 different improved RT engines for their next-gen GPU and is using the data they currently gather from working with devs to decide which one will really make it into their next GPU.

And because they are working closely with devs, they can ensure that code being written now will work great with their next-gen RT hardware from day one. That reminds me of the following twitter post:


Why invest now? Because RT is the way to go in the future (actually hybrid RT/rasterization); it's the Next Big Thing. Optimizing engines, production pipelines, etc., for RT will take time. I doubt that next-gen RT hardware will come with a secret performance sauce. Devs have to understand the concepts, limitations, etc.; it's a paradigm shift. Better to start now.

As a consumer I'd say don't buy a Turing card (for RT) unless you absolutely need a new GPU or have excess money.
 
It's chicken and egg; however, the IHV really does have to get it out to the regular consumer in some manner before developers even begin to consider it.
The customer in this case could have been professionals. Leave gaming for a generation, sell high-margin products to pro imaging, work on best practices in the R&D department, then roll out RTX2 to gamers in affordable cards, which means all that research will be used.

Unlike every other GPU feature, RTRT has huge value to pro imaging, meaning it could (and IMO should) have been targeted exclusively there.
 
You guys are running into a paradox here: on one hand you want NVIDIA to ensure support for pro apps and games before the hardware is released, on the other hand you wonder why developers would care to code for something that doesn't exist in plentiful numbers! See the paradox here?! You only get to pick one, because developers sure ain't going to develop anything on hardware that doesn't even exist yet.

Unlike every other GPU feature, RTRT has huge value to pro imaging, meaning it could (and IMO should) have been targeted exclusively there.
The kinds of workloads and optimizations needed for DXR are going to be completely different from those in pro apps.
 
I don't think they did the wrong thing, but they are going to eat it until everyone else catches up. It's not like there are 15 TF Radeons out there eating their lunch and dominating the playing field, reversing years of goodwill. As far as I can see, aside from missed projections, which is what we are debating, they can still safely continue this RT program.
Exactly my thoughts. AMD GPUs are still suffering from the crypto punch, and they are still behind technologically and architecturally. It's not like people are flocking to the other side here. If they are not buying Turing, they are buying Pascal.
 
You guys are running into a paradox here: on one hand you want NVIDIA to ensure support for pro apps and games before the hardware is released, on the other hand you wonder why developers would care to code for something that doesn't exist in plentiful numbers! See the paradox here?!
Yes, but the third option is nVidia steps up and bears the cost of the software implementations. They fund the applications implementing RTX, which gives users a reason to buy the cards, which leads to an install base, which leads to more devs adding RTX acceleration on their own because there's now an established market for it.

Many new consumer techs face a chicken-and-egg problem. Businesses trying to launch them need to do something to overcome that or else it'll flunk out (or, in this case, cost money). By and large that means up-front investment to secure content and to give people a reason to buy, or acceptance of slow growth which one should factor into one's sales expectations.
 
You guys are running into a paradox here: on one hand you want NVIDIA to ensure support for pro apps and games before the hardware is released, on the other hand you wonder why developers would care to code for something that doesn't exist in plentiful numbers! See the paradox here?! You only get to pick one, because developers sure ain't going to develop anything on hardware that doesn't even exist yet.

And the widespread availability won't ever be there on products over $300.
 
And the widespread availability won't ever be there on products over $300.
Not for video games at least.

For professional applications the chicken-and-egg problem wouldn't exist.
If nvidia showed rendering app developers they had new hardware that would accelerate their tasks by 10x, those devs would be lining up at nvidia's HQ even if they had to set up camp on the street and the Quadros in question only released 6 months later.
 
Yes, but the third option is nVidia steps up and bears the cost of the software implementations.
Or treat this like any other feature, save costs, and work it out as they release varying levels of hardware capability. That's also a valid option, especially if they are ahead of the competition.
And the widespread availability won't ever be there on products over $300.
I disagree; the most popular GPU on Steam was the 970, and then the 1060. The 2060 is set to repeat the same thing.

Also, ultra cutting-edge graphics effects are not made for low-end options, the kind you typically see under $300. Those cards barely run games at 1080p60 on semi-high settings anyway, so it doesn't make sense to target them with a new ultra feature. I don't remember ultra levels of tessellation working on low-end GPUs, and I don't remember heavy GPU-accelerated particles doing that either, nor high-res soft shadows, heavy DoF, max view distance, ultra GI lighting, high-poly-count meshes, ultra-high-res textures, max levels of AA or SSAA, cloud/cloth/wave simulation, reflections, etc. Yet some or all of these things are already featured in most games.
 