I disagree, the most famous product on Steam was a 970, and then a 1060.
The 970 launched at $329 and the 1060 6GB launched at $299. The 970 was "above $300", but still in the same price class.
Indeed. On the other hand, it would require nV to do it on a unilateral, IHV-independent basis, but as we've seen with GameWorks and TWIMTBP that's incompatible with their self-serving/competitive business. *ahem* :|

Yes, but the third option is nVidia steps up and bears the cost of the software implementations. They fund the applications implementing RTX, which gives users a reason to buy the cards, which leads to an install base, which leads to more devs adding RTX acceleration on their own because there's now an established market for it.
Many new consumer techs face a chicken-and-egg problem. Businesses trying to launch them need to do something to overcome that or else it'll flunk out (or, in this case, cost money). By and large that means up-front investment to secure content and give people a reason to buy, or acceptance of slow growth which one should factor into one's sales expectations.
The 2060 is in the same league.

The 970 launched at $329
That would have made zero, ZERO sense. All the devs need to buy Quadro cards (= more expensive). They would still need to produce Game Ready Drivers, so Devs can test their stuff. And a lot of enthusiasts would have bought the Quadro cards anyway because of the better performance and would have felt ripped off, and rightly so. The backlash would have been enormous. And the stock market would have shown nVidia the bird, because the cards yield no profit. Sorry, this strategy of yours would have been a disaster of epic proportions for nVidia. Plus Turing is not only RTX; there are a couple of other worthwhile features like tensor cores, mesh shaders and variable-rate shading that need to be in the wild to gather support.

The customer in this case could have been professionals. Leave the gaming for a generation, sell high-margin products to pro imaging, work on best practices in the R&D department, then roll out RTX2 to gamers in affordable cards, which means all that research will be used.
In Shifty's opinion nVidia should have released Turing as "high-margin products to pro imaging". That means Quadros. Please correct me if I'm wrong.

Game devs buy Quadros? Why? I thought they were very different drivers and use case GPUs.
Depends on what type of game devs it is... Artists (modelers, animators...etc) will/can buy Quadros because of the "better" drivers for DCC apps... in reality nobody in the game dev world buys Quadros because they are too damn expensive for what they offer compared to "regular" models. It's a bit different in the VFX & ArchiViz sector, where the use of DCC apps is more prominent and the need for more VRAM is greater.

Game devs buy Quadros? Why? I thought they were very different drivers and use case GPUs.
All the devs need to buy Quadro cards (= more expensive). They would still need to produce Game Ready Drivers, so Devs can test their stuff.
The 2060 is in the same league.
970 wasn't available for $329 until quite some time after the launch either.
The 2060 is in the same league.
The problem is that nVidia is already acknowledging the TU106 can't be scaled down in price too much, by launching the TU116 cards.

Close but around 20% costlier at $360-390 range depending on the model and store and special. Once you see it hitting $300 or below, it'll take off. Of course that's assuming there aren't better bargains out there on higher-performing older gen cards (like 1070 / 1080).
Judging by the popularity of the 4GB 970 and the 6GB 1060, it's not.

Of course that also assumes the 6GB memory footprint isn't a limitation that negatively impacts product desirability.
The 970 and 1060 had the same things applied to them as the 2060; in reality they are $10 apart.

Close but around 20% costlier at $360-390 range depending on the model and store and special.
https://www.techspot.com/article/17...

Today we're investigating claims that the new GeForce RTX 2060 is not a good buy because it only features 6GB VRAM capacity. The RTX 2060 offers performance similar to the GTX 1070 Ti, but that card packs an 8GB memory buffer, as did its non-Ti counterpart.
...
It's clear that right now, even for 4K gaming, 6GB of VRAM really is enough. Of course, the RTX 2060 isn’t powerful enough to game at 4K, at least using maximum quality settings, but that’s not really the point. I can hear the roars already, this isn't about gaming today, it’s about gaming tomorrow. Like a much later tomorrow…
The argument is something like, yeah the RTX 2060 is okay now, but for future games it just won’t have enough VRAM. And while we don’t have a functioning crystal ball, we know this is going to be both true, and not so true. At some point games are absolutely going to require more than 6GB of VRAM for best visuals.
The question is, by the time that happens will the RTX 2060 be powerful enough to provide playable performance using those settings? It’s almost certainly not going to be an issue this year and I doubt it will be a real problem next year. Maybe in 3 years, you might have to start managing some quality settings then, 4 years probably, and I would say certainly in 5 years time.
Usable at what quality level? Can they provide true reflections? Soft PCF shadows? Area shadows? Dynamic GI? Proper refractions? Nope. RT is an elegant solution that encompasses everything. See Quake 2 on Vulkan RTX for a proper demonstration of a complete path tracing solution.
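For what it's worth, the "encompasses everything" point is easy to see in code: in a path tracer the same ray-cast primitive gets reused for visibility, shadows, reflected light and GI, instead of each effect needing its own rasterization-era trick. Below is a deliberately tiny CPU-side C++ sketch of that idea; the scene, the names (trace, radiance) and the crude diffuse scatter are all made up for illustration and aren't taken from Quake 2 RTX, Vulkan or DXR.

#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <optional>
#include <vector>

struct Vec3 {
    double x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(const Vec3& b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec3 mul(const Vec3& b) const { return {x * b.x, y * b.y, z * b.z}; }
    double dot(const Vec3& b) const { return x * b.x + y * b.y + z * b.z; }
    Vec3 norm() const { double l = std::sqrt(dot(*this)); return {x / l, y / l, z / l}; }
};
struct Sphere { Vec3 center; double radius; Vec3 albedo; Vec3 emission; };
struct Hit { double t; Vec3 point; Vec3 normal; const Sphere* obj; };

// The one primitive everything is built on: closest hit of a ray against the scene.
std::optional<Hit> trace(const std::vector<Sphere>& scene, Vec3 o, Vec3 d) {
    std::optional<Hit> best;
    for (const Sphere& s : scene) {
        Vec3 oc = o - s.center;
        double b = oc.dot(d), disc = b * b - (oc.dot(oc) - s.radius * s.radius);
        if (disc < 0) continue;
        double t = -b - std::sqrt(disc);
        if (t < 1e-4) t = -b + std::sqrt(disc);
        if (t < 1e-4 || (best && t >= best->t)) continue;
        Vec3 p = o + d * t;
        best = Hit{t, p, (p - s.center).norm(), &s};
    }
    return best;
}

double rand01() { return std::rand() / (RAND_MAX + 1.0); }

// Recursing on trace() yields shadows, bounce lighting (GI) and reflected light in one
// mechanism: occluders simply block the paths that would otherwise reach the emitter.
Vec3 radiance(const std::vector<Sphere>& scene, Vec3 o, Vec3 d, int depth) {
    if (depth > 4) return {};
    auto hit = trace(scene, o, d);
    if (!hit) return {};                                  // black "sky" in this toy
    Vec3 r = Vec3{rand01() * 2 - 1, rand01() * 2 - 1, rand01() * 2 - 1}.norm();
    Vec3 bounce = (hit->normal + r).norm();               // crude diffuse scatter
    Vec3 indirect = radiance(scene, hit->point, bounce, depth + 1);
    return hit->obj->emission + hit->obj->albedo.mul(indirect);
}

int main() {
    std::vector<Sphere> scene = {
        {{0, 0, -3}, 1.0, {0.7, 0.7, 0.7}, {0, 0, 0}},    // grey ball
        {{0, 3, -3}, 1.0, {0, 0, 0}, {4, 4, 4}},          // emissive ball acting as the light
    };
    Vec3 sum;
    for (int i = 0; i < 256; ++i)                         // average a few paths for one "pixel"
        sum = sum + radiance(scene, {0, 0, 0}, Vec3{0, 0, -1}.norm(), 0);
    Vec3 c = sum * (1.0 / 256);
    std::printf("radiance: %.3f %.3f %.3f\n", c.x, c.y, c.z);
}

Real renderers obviously add importance sampling, denoising and a proper camera loop, but the structural point stands: one traversal primitive, many effects.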
It's not BS. Workstation sales pay for driver optimizations for professional apps. These optimizations are the main thing you're paying for in markets like CAD/CAM.

The "better driver" part is also BS. In reality, it's only some hardware error-concealing features activated (ECC RAM, error concealment in the video decoder, etc.), and a couple of virtualization features unlocked in the firmware.
They're not removing the compute units, so no need to panic about "purely fixed function hardware".

Yes, better, better, better, etc. Sphere tracing is just raytracing with a different acceleration structure, one not as amenable to animation, and without triangles it isn't as directly usable today straight from modeling programs.
I mean, if you really want to know what sphere tracing is: http://mathinfo.univ-reims.fr/IMG/pdf/hart94sphere.pdf is a decent overview plus a look at an interesting use case of sphere tracing that BVH RT can't do.
I wouldn't necessarily want to use it for reflections due to its scaling problem; a hybrid solution of larger-scale SDF tracing with low-level polys still used might overcome the exponential data update problem that makes small, non-basic shape details too hard to do with SDFs. But programmability is always preferred over narrow, fixed-function hardware, given just a choice between the two without consideration for other externalities.
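For anyone skimming who wants the one-paragraph version of the Hart paper: sphere tracing is a ray march whose step size at each point is the scene's signed distance, so you can never step through a surface. A minimal C++ sketch, with a made-up single-sphere sceneSDF standing in for a real distance field:

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Hypothetical scene SDF: distance from p to the nearest surface.
// Here just a unit sphere at the origin; a real field would min() many primitives.
double sceneSDF(Vec3 p) { return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0; }

// Sphere tracing (Hart '94): step along the ray by the distance bound each iteration.
// Returns the hit distance t, or -1.0 on a miss / step-count exhaustion.
double sphereTrace(Vec3 o, Vec3 d, double maxDist = 100.0, int maxSteps = 128) {
    double t = 0.0;
    for (int i = 0; i < maxSteps && t < maxDist; ++i) {
        Vec3 p{o.x + d.x * t, o.y + d.y * t, o.z + d.z * t};
        double dist = sceneSDF(p);
        if (dist < 1e-4) return t;   // close enough to the surface: call it a hit
        t += dist;                   // the SDF guarantees no surface closer than dist
    }
    return -1.0;
}

int main() {
    // Ray from z = -3 straight at the unit sphere: expect a hit around t = 2.
    std::printf("hit at t = %.4f\n", sphereTrace({0, 0, -3}, {0, 0, 1}));
}

The scaling/update problem mentioned above lives inside sceneSDF(): animating or adding fine detail means regenerating that field, which is the part that doesn't scale as gracefully as refitting a triangle BVH.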
Given that AMD has stated they're [doing raytracing in conjunction with game developers], I take this to mean they're asking game devs what they want to do with raytracing, and how they want to do it, rather than the MS and Nvidia strategy of doing so by fiat. The former is probably the better decision: AMD decided to put in tessellation hardware way back, before any open standard was made for it and fairly fixed-function, without asking game devs how they'd want to use it, and I can't remember much use being made of it.
Some games might use pure BVH, and support RTX hardware for doing so. But others might use pure signed distance fields or cone tracing, or some hybrid of all three. Having purely fixed-function hardware for only one of these might not be worth the cost of silicon, relatively fixed as it has become versus the past, compared to hardware with more multi-use capabilities.
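To make the "hybrid" idea concrete, here's a hypothetical C++ sketch of what that flexibility buys: the renderer only ever asks an abstract backend for the closest hit, and that backend could be a triangle BVH (hardware-accelerated or not), a distance-field sphere trace, or a mix of the two. All the names and the stub hit values below are invented for illustration; this isn't any actual engine's API.

#include <cstdio>
#include <optional>

struct Ray { float ox, oy, oz, dx, dy, dz; };
struct HitInfo { float t; int materialId; };

// Renderer-facing interface: "give me the closest hit", nothing about how it's found.
struct TraceBackend {
    virtual std::optional<HitInfo> closestHit(const Ray& r) const = 0;
    virtual ~TraceBackend() = default;
};

// Placeholder backends -- real ones would wrap DXR/RTX-style triangle traversal and an
// SDF sphere trace respectively; here they are stubs just to show the shape of the idea.
struct BvhBackend : TraceBackend {
    std::optional<HitInfo> closestHit(const Ray&) const override { return HitInfo{5.0f, 1}; }
};
struct SdfBackend : TraceBackend {
    std::optional<HitInfo> closestHit(const Ray&) const override { return HitInfo{3.0f, 2}; }
};

// Hybrid: e.g. BVH for hero geometry, coarse SDF for far-field; keep whichever is nearer.
struct HybridBackend : TraceBackend {
    BvhBackend bvh; SdfBackend sdf;
    std::optional<HitInfo> closestHit(const Ray& r) const override {
        auto a = bvh.closestHit(r), b = sdf.closestHit(r);
        if (a && b) return a->t < b->t ? a : b;
        return a ? a : b;
    }
};

int main() {
    HybridBackend scene;
    auto hit = scene.closestHit({0, 0, 0, 0, 0, 1});
    if (hit) std::printf("closest hit t = %.1f, material %d\n", hit->t, hit->materialId);
}

The argument above is essentially about whether the silicon budget is better spent below this interface (fixed-function BVH traversal) or on general compute that could implement any of the backends.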