Nvidia Turing Architecture [2018]

Only 12 GB?

I expected a Turing TITAN to have more memory, especially since the TITAN line has had 12 GB since 2015 (except for the TITAN Z and the TITAN V CEO Edition) and high-end GeForce GPUs have gotten memory increases since then.

Keep in mind that these new cards have the added value of having actually passed through all the proper quality assurance steps.
The probability of having to return these will be way lower than with a 2080 Ti. :yep2:
:thumbsup:
 
What's the customer for a Titan like that vs the Quadro range? You get better than a Quadro 5000 for more money but no access to Quadro drivers?
 
What's the customer for a Titan like that vs the Quadro range?
With this release they are mainly catering to data scientists, AI and ray tracing content creators, and rich gamers.
You get better than a Quadro 5000 for more money but no access to Quadro drivers?
A Titan RTX is essentially a slightly faster Quadro RTX 6000, but without the Quadro drivers, which explains the price difference.
 
What's the customer for a Titan like that vs the Quadro range? You get better than a Quadro 5000 for more money but no access to Quadro drivers?

I think after the Vega FE opened up pro driver optimizations to "prosumers", nvidia followed suit with the Titan range.
IIRC the Titan drivers now carry the optimizations but not the proper certifications from professional software makers.

In practice, I think this means people who buy Titan/FE add-in boards get about the same performance as the Quadro/Radeon-Pro counterparts, but those cards can't go into new graphics workstations from Dell, HP, etc.

But this is for Quadro, not Tesla. nvidia has forbidden (or tried to forbid) compute farm builders from using Titans and Geforces in their systems, through the driver EULAs, as far as I know.
 
Titan RTX is officially announced: 24 GB of GDDR6, 130 teraflops of deep learning performance and 11 GigaRays/s of ray tracing performance.

So it's faster than a Titan V/Tesla V100 in AI workloads, and pushes RT performance even further. Price is $2,500!!

https://videocardz.com/press-releas...07cVOp99CuNQ0lp2hkfVkbhfZfqvtfwK_AG3vCl9oDW0A

Why would anyone buy this if you can have two RTX 2080 Tis for that price?
At least not for gaming graphics.
It's clearly intended for the NN training folks only (as can be seen from the removal of the half-speed mixed-precision training limitation found on GeForce).
This is not the kind of Titan that used to be. :(
 
Probably for people that want something even faster than a 2080 Ti. 24 GB of GDDR6 also enables higher resolutions, and its RT performance is better.
The 2070 is a nice mid-range alternative, the 2080 high end, the Ti ultra, and the Titan the elite range.
 
Are there 16 Gb GDDR6 modules available? You can't do over 12 GB with 8 Gb modules on a 384-bit GDDR6 bus (at least I'm assuming that splitting each device into 2x 16-bit channels to begin with means you can't do clamshell mode anymore).
Micron's datasheet for GDDR6 includes a clamshell mode; the individual 16-bit channels get narrowed to 8 bits per device.
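For reference, here is a rough back-of-the-envelope sketch of the capacity options on a 384-bit bus (illustrative Python; it assumes the usual two-16-bit-channels-per-device GDDR6 organisation and isn't tied to any specific board):

```python
# Rough capacity arithmetic for a 384-bit GDDR6 bus. Assumes the standard
# organisation of two 16-bit channels per 32-bit device; numbers are
# illustrative, not board-specific.
BUS_WIDTH_BITS = 384
DEVICE_WIDTH_BITS = 32  # two x16 channels per GDDR6 device

def capacity_gb(device_density_gbit, clamshell=False):
    devices = BUS_WIDTH_BITS // DEVICE_WIDTH_BITS
    if clamshell:
        # Clamshell: each channel drops from x16 to x8 per device, so two
        # devices share one 32-bit position and the device count doubles.
        devices *= 2
    return devices * device_density_gbit / 8  # Gbit per device -> GB total

print(capacity_gb(8))                   # 12.0 -> 12 x 8 Gb devices
print(capacity_gb(8, clamshell=True))   # 24.0 -> 24 x 8 Gb devices, clamshell
print(capacity_gb(16))                  # 24.0 -> 12 x 16 Gb devices
```

Either route reaches 24 GB; 16 Gb devices avoid the clamshell wiring, while 8 Gb devices in clamshell typically mean chips on both sides of the PCB.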
 
What's the customer for a Titan like that vs the Quadro range? You get better than a Quadro 5000 for more money but no access to Quadro drivers?
You get full FP32 accumulate on the tensor cores for a fraction of the price of a Quadro RTX 6000. That alone will sell it to droves of data scientists.

Tesla: Servers
Quadro: Content creation & CAD
Titan: Developers & data scientists; develop on Titan now, then sell these users some Teslas when they need to scale out
GeForce: Consumer gaming & everyone who can't afford a better card.
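A minimal NumPy sketch of why that full-rate FP32-accumulate point matters for training (my own toy illustration on the CPU, not an actual tensor-core kernel; array size and seed are arbitrary): summing many FP16 products into an FP16 accumulator rounds on every step, while an FP32 accumulator stays close to a double-precision reference.

```python
import numpy as np

# Toy illustration of FP16 multiply with FP16 vs FP32 accumulation.
# Runs on the CPU and only mimics the numerics, not the hardware.
rng = np.random.default_rng(0)
a = rng.standard_normal(1 << 14).astype(np.float16)
b = rng.standard_normal(1 << 14).astype(np.float16)

acc16 = np.float16(0.0)   # accumulate in half precision
acc32 = np.float32(0.0)   # accumulate in single precision
for x, y in zip(a, b):
    p = x * y                          # FP16 product
    acc16 = np.float16(acc16 + p)      # rounds to FP16 every step
    acc32 = acc32 + np.float32(p)      # keeps a 24-bit mantissa

ref = float(np.dot(a.astype(np.float64), b.astype(np.float64)))
print("FP16-accumulate error:", abs(float(acc16) - ref))  # typically much larger
print("FP32-accumulate error:", abs(float(acc32) - ref))
```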
 
Very impressive indeed what the RTX series is doing. As DF mentioned, extremely impressive graphics with RT are now possible at 60 FPS; you basically get 2080 Ti performance with a 2070 after the new patch. Deep learning and DLSS can only improve things, as do new game patches and nvidia drivers.

Variable rate shading seems interesting too. Damn, these new RTX GPUs are loaded with new features, and on top of that normal rasterisation performance is much faster than the 10 series.
 
Using Turing Mesh Shaders: NVIDIA Asteroids Demo
December 18, 2018
Turing introduces a new programmable geometric shading pipeline built on task and mesh shaders. These new shader types bring the advantages of the compute programming model to the graphics pipeline. Instead of processing a vertex or patch in each thread in the middle of fixed function pipeline, the new pipeline uses cooperative thread groups to generate compact meshes (meshlets) on the chip using application-defined rules. This approach greatly improves the programmability of the geometry processing pipeline, enabling the implementation of advanced culling techniques, level-of-detail, or even completely procedural topology generation. You can find more information on mesh shading in the detailed technical introduction to Turing mesh shaders.
In addition to dynamic LOD, mesh shading also allows the implementation of smart culling systems, greatly improving rendering efficiency. Culling takes place hierarchically in the demo.
  • First the task shader checks the entire asteroid for visibility and determines which LOD(s) to use.
  • Sub-parts, or meshlets are then tested by the mesh shader.
  • Finally, the remaining triangles are culled by the GPU hardware.
Prior to the arrival of the Turing architecture, GPUs would be forced to cull every triangle individually, creating massive workloads on both the GPU and CPU.

By combining together efficient GPU culling and LOD techniques, we decrease the number of triangles drawn by several orders of magnitude, retaining only those necessary to maintain a very high level of image fidelity. The real-time drawn triangle counters can be seen in the lower corner of the screen. Mesh shaders make it possible to implement extremely efficient solutions that can be targeted specifically to the content being rendered.

Tessellation is not used at all in the demo, and all objects, including the millions of particles, are taking advantage of Mesh Shading.
https://devblogs.nvidia.com/using-turing-mesh-shaders-nvidia-asteroids-demo/
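To make the three-level hierarchy above a bit more concrete, here is a hedged CPU-side sketch of the same control flow in Python. The class names, the distance-based visibility test, and the LOD rule are invented for illustration; real task and mesh shaders run as cooperative thread groups on the GPU and emit meshlets rather than counting triangles.

```python
from dataclasses import dataclass

# Illustrative only: the names and tests below are made up to mirror the
# task-shader -> mesh-shader -> hardware culling hierarchy described above.

@dataclass
class Meshlet:
    center: tuple        # bounding-sphere center (x, y, z)
    radius: float
    triangle_count: int

@dataclass
class Asteroid:
    center: tuple
    radius: float
    lods: list           # list of LOD levels, each a list of Meshlets

def visible(center, radius, camera, max_dist):
    # Stand-in for a frustum/cone test: a plain distance check.
    d = sum((c - p) ** 2 for c, p in zip(center, camera)) ** 0.5
    return d - radius < max_dist

def select_lod(asteroid, camera):
    d = sum((c - p) ** 2 for c, p in zip(asteroid.center, camera)) ** 0.5
    return min(int(d // 50), len(asteroid.lods) - 1)   # farther -> coarser LOD

def draw_scene(asteroids, camera, max_dist=200.0):
    drawn = 0
    for ast in asteroids:
        # Level 1 ("task shader"): whole-object visibility + LOD selection.
        if not visible(ast.center, ast.radius, camera, max_dist):
            continue
        for meshlet in ast.lods[select_lod(ast, camera)]:
            # Level 2 ("mesh shader"): per-meshlet culling.
            if not visible(meshlet.center, meshlet.radius, camera, max_dist):
                continue
            # Level 3: surviving triangles go to the hardware rasterizer,
            # which culls back-facing/off-screen triangles individually.
            drawn += meshlet.triangle_count
    return drawn

# Tiny usage example: one distant asteroid falls back to its coarse LOD.
scene = [Asteroid((0, 0, 100), 5.0,
                  [[Meshlet((0, 0, 100), 5.0, 1024)],    # LOD 0 (fine)
                   [Meshlet((0, 0, 100), 5.0, 128)]])]   # LOD 1 (coarse)
print(draw_scene(scene, camera=(0, 0, 0)))               # -> 128
```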

Edit: Clarification.
 