Nvidia Turing Product Reviews and Previews: (Super, TI, 2080, 2070, 2060, 1660, etc)

I don't think there will be a lot of people willing to pay more for a high-end graphics card to play 15-20 year-old games with RT (which will still look a lot worse than modern rasterized games).

Like Sony people paying for 15/20-year-old remasters with no graphics upgrade at all? When your PS4 (or any PlayStation) launched, there weren't many games either, mostly multiplats with inferior graphics. Yes, the price wasn't that high, but you got modest hardware at best.

If people want to enjoy RT now they can, and more titles will follow. Your PS5 will have RT too, so it's not a bad thing that devs can get their feet wet.
 
Shareholders.

Surely still having the promise of a new architecture on the way, while your existing architecture (after enjoying massive success) absorbs the glut of inventory the mining crash created, is a better position to be in than having your new architecture turn out to be a (sales) disappointment.
 
Like Sony people paying for 15/20-year-old remasters with no graphics upgrade at all?
I don't live in a world where cheap PS1/PS2 remasters are considered system sellers for $350-$1200 PlayStations, so I have no idea what you're talking about.
 
As of today, January 29, 2019, only ONE publicly available piece of software uses the RT cores
Professional applications take time to implement new features, then test and validate them, and then validate them again with NVIDIA drivers on the Quadro/GeForce lines. It's just a matter of time. The same thing happened with CUDA acceleration, OpenCL acceleration, and GPU browser acceleration. Things take time to implement. I don't get the obsession with counting months after the introduction of a totally new architecture with new features, like it's some sort of sprint race to the finish line. There is no finish line.

Having said that, I'm not sure the alternative is any better, which is to just introduce a more powerful GPU with no real additional functionality.
Perhaps the bubble was bound to pop regardless and they felt that this was their best play.
The situation with the crypto bubble was bound to happen regardless. Pricing Turing competitively means stagnating Pascal sales, which means bigger losses. They needed to move both of them.

I don't see the RT situation as any different from the T&L situation. In fact, the RT situation is much better: it's supported directly by DX and Vulkan right out of the gate, supported by major engines, and quite possibly by consoles as well. None of these things were present in the early days of T&L.

Why not just keep selling Pascal and wait until 7nm to add all the extra transistors?
NVIDIA most likely wants to make a bigger bang on 7nm, which means big dies, which means waiting for 7nm to be mature enough, which means waiting till Q4 2019. That means 15 months without a new product, on top of Pascal's 26-month cycle. That's too long to go without a new product.

Since sphere tracing and cone tracing are usable on older hardware as well, using these techniques would have the bonus of not needing to worry about backwards compatibility for older PC hardware and such.
Usable at what quality level? Can they provide true reflections? Soft PCF shadows? Area shadows? Dynamic GI? Proper refractions? Nope. RT is an elegant solution that encompasses everything. See Quake 2 on Vulkan RTX for a proper demonstration of a complete path-tracing solution.
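
For anyone unfamiliar with the term, sphere tracing just marches a ray through a signed distance field, stepping each iteration by the distance to the nearest surface. A minimal Python sketch of the idea (the toy scene and names are mine, not taken from any engine):

```python
import math

def scene_sdf(p):
    """Signed distance to a toy scene: a unit sphere at the origin plus a ground plane at y = -1."""
    x, y, z = p
    sphere = math.sqrt(x * x + y * y + z * z) - 1.0
    ground = y + 1.0
    return min(sphere, ground)

def sphere_trace(origin, direction, max_steps=128, max_dist=100.0, epsilon=1e-4):
    """March along the ray, stepping by the SDF value, until we hit a surface or give up."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + d * t for o, d in zip(origin, direction))
        dist = scene_sdf(p)
        if dist < epsilon:   # close enough to a surface: report the hit distance
            return t
        t += dist            # the SDF guarantees this step cannot overshoot a surface
        if t > max_dist:
            break
    return None              # ray escaped the scene

# A ray starting at z = -3 looking down +Z hits the unit sphere at t ~= 2.
print(sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))
```

Since this only needs a distance function evaluated per pixel, it runs on any shader-capable GPU, which is the backwards-compatibility argument above; the catch is that the scene has to be expressible as distance fields, which is where the objections about true reflections, area shadows and GI on arbitrary game geometry come in.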
I kind of wonder if Nvidia has been tipped off to this being what AMD is doing, and that's why it's decided to abruptly put out cards without specialized RT hardware.
Highly unlikely, considering it seems AMD isn't even ready with a concept for doing accelerated ray tracing. They don't have an architecture, nor fallback-layer drivers, nor anything really. All they do is talk about waiting for the proper circumstances to do RT, which, quite frankly, is AMD's way of saying they are not yet ready to do RT. Navi is highly unlikely to support DXR at this point; AMD isn't even teasing it. They wouldn't do a Radeon VII without DXR if Navi were going to have DXR.
 
The introduction of hardware RT would have provided that at that time.
It still needs big dies to do that with the proper shading power, which means waiting even longer. They also can't allow AMD to have a leg up on them, or even match them.

It's obvious this 7nm node isn't really the revolution we expected. The transition from 28nm to 16nm provided much, much bigger gains than this. If Intel's failed 10nm was significantly better than TSMC's 7nm, then this isn't really a true 7nm, not even a 10nm by the strict definitions. So any big jump in performance is going to need big dies. As big as Pascal's or bigger. Which means waiting for the process to be mature enough.

NVIDIA most definitely weighed the options of using 12nm or waiting for 7nm, and they chose 12nm; heck, they practically co-developed it for Volta.
 
The Radeon VII is 13.2B transistors @ 331mm2. How "big" would the 2080's 13.6B have been?
Slightly larger, ~360mm2. Why take the 2080, though? Let's take the Titan RTX first: it needs ~520mm2 at 7nm, which is probably not feasible at this stage.
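
As a rough back-of-envelope for where estimates like that come from, you can scale die area linearly by transistor count from Radeon VII's published 7nm density; that's a simplification (real designs don't scale perfectly), so treat the results as optimistic lower bounds:

```python
# Naive die-size scaling from Radeon VII (Vega 20): 13.2B transistors in 331 mm2 at 7nm.
# Linear scaling by transistor count is an assumption, not how real chips shrink.
VEGA20_TRANSISTORS = 13.2e9
VEGA20_AREA_MM2 = 331.0

def scaled_area_mm2(transistors):
    """Area a chip would need if it hit Vega 20's transistors-per-mm2."""
    density = VEGA20_TRANSISTORS / VEGA20_AREA_MM2
    return transistors / density

print(f"TU104 (RTX 2080, 13.6B transistors):  ~{scaled_area_mm2(13.6e9):.0f} mm2")  # ~341 mm2
print(f"TU102 (Titan RTX, 18.6B transistors): ~{scaled_area_mm2(18.6e9):.0f} mm2")  # ~466 mm2
```

The ~360mm2 and ~520mm2 figures sit a bit above these naive numbers, presumably allowing for NVIDIA not matching Vega 20's density with an RT/Tensor-heavy layout.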

And this is assuming NVIDIA would be content with doing just a die shrink, which I think they won't be. They will do big dies; they will advance RT and AI performance even further. They could be thinking about doing another 700mm2 die for the RTX 3000 series.
 
Slightly larger, ~360mm2. Why take the 2080, though?

Because it's what they successfully did with Pascal, and launching the 2080/2070 at 1080/1070 price points but with 1080 Ti/1080 performance plus RT would have been a "no-brainer" purchase for most. The bigger dies could have come later when, as you put it, the process was more mature.
 
Because it's what they successfully did with Pascal, and launching the 2080/2070 at 1080/1070 price points but with 1080 Ti/1080 performance plus RT would have been a "no-brainer" purchase for most. The bigger dies could have come later when, as you put it, the process was more mature.
I don't think it would be that much different; people would still debate whether RT is worth it or not. And again, bigger dies can't come later, because NVIDIA needs a halo product at the top that is faster than the previous gen. On 12nm they had this with the 2080 Ti/Titan RTX/Volta; 7nm wouldn't have given them any of that in the current time frame.

You're also not factoring in the cost of 7nm, which is still high on its own regardless of die size.
 
I don't think it would be that much different; people would still debate whether RT is worth it or not. And again, bigger dies can't come later, because NVIDIA needs a halo product at the top that is faster than the previous gen.

They would debate it to a much lesser degree than they are now if it were a value-add on top of getting better performance. Bigger dies can and have frequently come later, especially around process transitions. And it literally just happened with Pascal. Why would it be such a disaster this time?
 
They would debate it to a much lesser degree than they are now if it were a value-add on top of getting better performance.
RT needs the 2080 Ti as the top-performing chip and as the proof of concept. If all you have out there is just the 2080, then the case for RT is not really going to be that amazing.

Bigger dies can and have frequently come later. And it literally just happened with Pascal. Why would it be such a disaster this time?
Yes, historically the middle dies have come first, but that's because they introduced a much bigger jump in performance compared to the previous gen; that isn't the case with Turing. Hence why NVIDIA launched big Turing out of the door first.
 
I don't get the obsession with counting months after the introduction of a totally new architecture with new features, like it's some sort of sprint race to the finish line. There is no finish line.
It's no obsession. It's a discussion about nVidia's financials and undershooting their expectations. What were their expectations and why? Could they have handled them better? The number of months (or years) you have to wait until software supports your hardware feature directly affects its appeal, and therefore sales. Ergo, the idea that nVidia should have pushed for pro software to have better support.

True or false - if nVidia had ensured a couple of major applications had RTX acceleration at launch, interest and sales of RTX cards would be stronger than they are now?
 
True or false - if nVidia had ensured a couple of major applications had RTX acceleration at launch, interest and sales of RTX cards would be stronger than they are now?
Can we even try to guess until these features are actually adopted on a wider scale?

I initially thought DLSS sounded like a breakthrough of sorts - apparent wide adoption and "free" performance at minimal IQ cost. Considering the immense cost of native 4K rendering, it should be great for 4K TVs at least.
But for all intents and purposes, DLSS is slowly drifting into vaporware status at the moment.
 
True or false - if nVidia had ensured a couple of major applications had RTX acceleration at launch, interest and sales of RTX cards would be stronger than they are now?

What are you suggesting? That developers try to write a complex RTX/DXR application without any hardware that can run it at a reasonable speed (no, Volta doesn't count)? That NVIDIA build Turing and release it exclusively to a handful of selected developers with the vague promise of consumer hardware at some later date?

It's a chicken and egg problem.
 
What are you suggesting?
That nVidia design the hardware, approach a couple of big application vendors offering to integrate RTX acceleration for them, and then release a product with software that actually uses it.

It's a chicken and egg problem.
Exactly, if everyone waits for everyone else to do something. However, nVidia were in a position to solve that, and ensure the chicken and egg were developed at the same time.

If that's not the case, then the lacklustre sales and lack of interest were inevitable the moment nVidia decided to release RTX as-is. It meant knowingly releasing a costly product that could never sell at launch because there was no demand for it. I don't think that's true. I think the demand is there in the pro imaging sector, and that demand for the hardware simply needs software that actively uses it, which is something nVidia should have addressed at launch instead of pushing gaming.
 
Surely still having the promise of a new architecture on the way, while your existing architecture (after enjoying massive success) absorbs the glut of inventory the mining crash created, is a better position to be in than having your new architecture turn out to be a (sales) disappointment.
Yes. Without a doubt hindsight is 20/20, and looking back, perhaps that's where Jensen now knows his mistake lies.

Perhaps it's elsewhere entirely; I'm sure there is a post-mortem happening.

But at the same time, this is all new for everyone. RT is new. Cryptomining is new. Moore's law is done. Leaders of companies can only learn from their mistakes and make the right decisions to bounce back.

The realities of business demand failure from time to time, no matter how big you are; eventually a mistake is made, whether that was back with Turing or with Pascal. You cannot win forever, and there's no proper playbook for this type of thing.

Perhaps it was inevitable, and Jensen felt that turning this into the next generation of graphics could dominate the narrative. Sometimes it works (Steve Jobs). Sometimes it doesn't.
 
What are you suggesting? That developers try to write a complex RTX/DXR application without any hardware that can run it at a reasonable speed (no, Volta doesn't count)? That NVIDIA build Turing and release it exclusively to a handful of selected developers with the vague promise of consumer hardware at some later date?

I'd have suggested they publicly announce and sell the GeForce RTX line once the RT/DLSS-enabled games were ready, and do the same for Quadro RTX with the applications.
It's not like nvidia was quickly losing market share back in August 2018.


What will happen now is that, at best, games and applications will start supporting the RTX features long after the marketing push from the RTX announcement has faded away.
The valid criticism people make about FineWine on AMD also applies here. Yeah, it's good that four years after launch AMD GPUs tend to outperform their contemporary competition, but from a marketing and sales perspective it's a wasted opportunity.

Imagine if Apple or Samsung had announced features like AI-enhanced photos and a fingerprint reader for their 2015 flagships, but these weren't available for the first six months after release.
A good proportion of their potential customers would probably have kept their older phones another year, or flocked to cheaper flagships from other companies.
 