Nvidia Turing Speculation thread [2018]

We're probably going to see raytracing effects (shadows, AO, reflections) offered as slower, higher-quality alternatives to their raster counterparts in games.

Yup, my money is also on raytraced shadows and reflections as tacked on bonus features with DLAA thrown in as an upgrade to FXAA. Hopefully it’s reliably accurate and not blurry.

The increase in raw flops from Pascal to Turing doesn't look that impressive on paper, but it may not matter. It looks like cache and off-chip bandwidth are getting a major bump.
 
I'm thinking it's within the realm of possibility that either the GeForce RTX cards will come much, much later than anticipated, OR they will have a top-of-the-line GeForce RTX and, below it, GTX cards based on the Volta architecture.
 
AMD would be totally screwed if they release Navi without any RT block. Especially in the professional market.

For the professional market, they have a ray tracing renderer in their Radeon ProRender. I don't know about its performance or adoption rate, though...
 
Adoption has increased in the past year, and as I noted earlier, Radeon Rays is now part of Unity via its progressive lightmapper. But now AMD will really need to have HW RT support, and it will once again probably be a bit too little, too late, as Nvidia has support for its solution in most pro apps (and renderers).
 
Unveiled today by NVIDIA CEO Jensen Huang at the annual SIGGRAPH conference, Turing brings together dedicated hardware acceleration of four core elements: AI, ray tracing, programmable shading and simulation.
...
Designers can now iterate their product model or building and see accurate lighting, shadows and reflections in real time. Previously, they would have to use a low-fidelity approximation to get their design more or less right, then ship files out to a CPU farm to be rendered and get the results back in minutes or even hours, depending on complexity.

For artists in the entertainment world, the same is true for visualizing their creations for animation or visual effects. But the benefits don’t stop there. NGX, new NVIDIA technology for bringing AI into the graphics pipeline, is part of the RTX platform. And NVIDIA is providing an SDK that makes it easy for developers to incorporate AI-powered effects into their apps.

NGX technology brings capabilities such as taking a standard camera feed and creating super slow motion like you’d get from a $100,000+ specialized camera. Or using AI to increase the resolution and clarity of archived images. Or removing wires from a photograph and automatically replacing the missing pixels with the right background. Learn more about these NVIDIA Research papers that led to this work.

These new capabilities are combined with increases in the speed and fidelity of drawing raster graphics through newly advanced shaders. And up to 4,608 CUDA cores for parallel compute processing means that software developers have a hardware platform unlike anything before.

And, perhaps unsurprisingly, application developers are jumping at the chance to bring to their customers amazing new capabilities and up to 30x speed increases vs. CPU only for rendering.
https://blogs.nvidia.com/blog/2018/08/13/turing-industry-support/
 
AMD would be totally screwed if they release Navi without any RT block. Especially in the professional market.
My one and only contact doing rendering (small scale) is using CPUs and was very happy with what AMD has started offering in that space. I guess "the professional market" is rather diverse.
 

Can ray tracing benefit from higher precision levels available through AVX512 or is it inherently a series of low-precision calculations?
 
Possible GeForce RTX 2080 (Turing GPU) Ashes of the Singularity Benchmarks Leak?
What makes this a plausible score is that in the Crazy 4K and 5K settings, the scores are abnormally high. The new unit detected is obviously an unverified one; however, it offers a score faster than a Titan V in most measurements. The entry is spotted under the profile name NVGTLTest009.

If it is correct, this score could also be from a new Quadro family product as announced today, or from the GeForce GTX/RTX 2080. The card tested shows ~62 fps on average in the Crazy 4K preset, and that is quite a lot. Also, this entry from the same user might be of interest.
https://www.guru3d.com/news-story/p...ashes-of-the-singularity-benchmarks-leak.html
 
So is this card made for pure RT scenes or hybrid raster + ray for games? Just wondering if having an RT-specific core was more for non-gaming apps. I figure integration into the shader pipe would be more efficient for hybrid rendering, at the cost of transistors and possibly layout changes per shader. But with a fully dedicated RT core, it can be omitted on less powerful GPUs that would be too slow to benefit, just like the tensor cores. I just wonder if the data bandwidth is there to keep the shader cores and RT core connected.
 
RT core specs: 10 Gigarays/s, ray/triangle intersection, BVH traversal

Are these coherent primary rays or incoherent secondary rays?
How can you specify a number of rays per second for raytracing, when AFAIK it depends on scene complexity, e.g. the number of triangles?
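For anyone wondering what those two bullet points (BVH traversal plus ray/triangle intersection) actually mean as fixed-function work, here's a minimal CPU-style sketch of the inner loop an RT core would be offloading. All the data layouts and names are my own illustration, not anything Nvidia has published, and it's plain FP32 throughout. The variable amount of work per ray in the traversal loop is also why a rays/second figure can't really be scene-independent.

```cpp
// Minimal sketch of a BVH traversal + ray/triangle intersection loop in
// software. Illustrative only -- not Nvidia's data layout.
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float comp(Vec3 v, int a)   { return a == 0 ? v.x : (a == 1 ? v.y : v.z); }

struct Ray      { Vec3 origin, dir; float tMin, tMax; };   // 8 x FP32
struct Triangle { Vec3 v0, v1, v2; };
struct BVHNode {                       // 2-wide BVH node (hypothetical layout)
    Vec3 boundsMin, boundsMax;
    int  left, right;                  // child indices (interior nodes)
    int  firstTri, triCount;           // triCount > 0 marks a leaf
};

// Moeller-Trumbore ray/triangle test; writes hit distance to t.
bool intersectTriangle(const Ray& r, const Triangle& tri, float& t) {
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p = cross(r.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;              // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(r.origin, tri.v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(r.dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > r.tMin && t < r.tMax;
}

// Slab test against a node's axis-aligned bounding box.
bool intersectAABB(const Ray& r, const BVHNode& n) {
    float t0 = r.tMin, t1 = r.tMax;
    for (int a = 0; a < 3; ++a) {
        float inv   = 1.0f / comp(r.dir, a);
        float tNear = (comp(n.boundsMin, a) - comp(r.origin, a)) * inv;
        float tFar  = (comp(n.boundsMax, a) - comp(r.origin, a)) * inv;
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;
    }
    return true;
}

// Depth-first traversal with an explicit stack. How many nodes and triangles
// each ray touches depends entirely on the scene.
bool traceClosestHit(Ray r, const std::vector<BVHNode>& nodes,
                     const std::vector<Triangle>& tris, float& closestT) {
    bool hit = false;
    closestT = r.tMax;
    int stack[64], sp = 0;
    stack[sp++] = 0;                                       // root node
    while (sp > 0) {
        const BVHNode& n = nodes[stack[--sp]];
        if (!intersectAABB(r, n)) continue;
        if (n.triCount > 0) {                              // leaf: test triangles
            for (int i = 0; i < n.triCount; ++i) {
                float t;
                if (intersectTriangle(r, tris[n.firstTri + i], t) && t < closestT) {
                    closestT = t;
                    r.tMax = t;                            // shrink the ray interval
                    hit = true;
                }
            }
        } else {                                           // interior: push children
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
    }
    return hit;
}
```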
 
So is this card made for pure RT scenes or hybrid raster + ray for games? Just wondering if having an RT-specific core was more for non-gaming apps. I figure integration into the shader pipe would be more efficient for hybrid rendering, at the cost of transistors and possibly layout changes per shader. But with a fully dedicated RT core, it can be omitted on less powerful GPUs that would be too slow to benefit, just like the tensor cores. I just wonder if the data bandwidth is there to keep the shader cores and RT core connected.
It's everything from the looks of it, with regular shader/compute cores, tensor and dedicated RT.
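For the hybrid question, here's a rough sketch of how a frame is expected to be put together under DXR-style hybrid rendering: rasterize first, ray trace only the effects that raster approximations handle poorly, then denoise the sparse ray results. All the types and function names below are placeholders I made up for illustration, not any real engine or driver API:

```cpp
// Placeholder types and functions -- purely illustrative.
struct Scene   { /* geometry, BVH, lights */ };
struct Camera  { };
struct GBuffer { /* depth, normals, albedo, roughness */ };
struct Image   { };

GBuffer rasterizeGBuffer(const Scene&, const Camera&)         { return {}; }  // classic raster pass
Image   traceReflections(const Scene&, const GBuffer&)        { return {}; }  // RT cores: traversal + intersection
Image   traceShadows(const Scene&, const GBuffer&)            { return {}; }  // ditto, shadow rays
Image   denoise(const Image& in, const GBuffer&)              { return in; }  // AI denoising is what the tensor cores get pitched for
Image   composite(const GBuffer&, const Image&, const Image&) { return {}; }

// One hybrid frame: the shader cores, RT cores and tensor cores all touch the
// same G-buffer and ray results, which is where the bandwidth question comes in.
Image renderHybridFrame(const Scene& scene, const Camera& cam) {
    GBuffer gbuf  = rasterizeGBuffer(scene, cam);
    Image reflect = denoise(traceReflections(scene, gbuf), gbuf);  // ~1-2 rays/pixel, noisy
    Image shadows = denoise(traceShadows(scene, gbuf), gbuf);
    return composite(gbuf, reflect, shadows);
}
```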

[Images: Pascal_vs_Turing.jpg and Turing.jpg, courtesy of AT]
 
Raja Koduri on Twitter:

Never a dull millisecond in real-time computer graphics. Nvidia keeps moving the bar higher.


That comparison is kind of misleading. They are comparing the GTX 1080's GP104 (2,560 shaders) against the Quadro RTX 6000, which we assume is GT102 (4,608 shaders). I hope that was a typo and it was supposed to be a 1080 Ti in the comparison. Otherwise it's still an improvement, but not by the big factor they are portraying.
 
Some good stuff in this interview. Definitely worth a read.

SIGGRAPH: Players would love to see this kind of photorealism in their games. When might we see ray tracing in games?

KL: At the end of this year we are planning to have official Unreal Engine 4 support for DXR checked-in and available to the GitHub community. As the first official feature, this will include the ray-traced area light shadows. You could see hybrid features such as ray-traced area lights on high-end hardware next year, depending on developer adoption. More features and algorithms will come online as people become more familiar with the API and performance characteristics.

https://blog.siggraph.org/2018/08/e...t-twitter&utm_campaign=Oktopost-UE+-+SIGGRAPH
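For anyone wondering why "ray-traced area light shadows" keeps coming up as the first feature: area lights produce soft penumbras that shadow maps approximate poorly, whereas with rays you just sample visibility toward points on the light's surface and average. Rough sketch below with made-up names (this is not UE4 or DXR code); the occlusion query is the part that would run on the RT hardware, and the tiny sample counts are why a denoiser is needed afterwards:

```cpp
// Soft shadows from a rectangular area light via stochastic shadow rays.
// Illustrative only; the scene occlusion test is injected as a callback.
#include <functional>
#include <random>

struct Vec3 { float x, y, z; };

struct RectAreaLight {
    Vec3 corner;           // one corner of the light's rectangle
    Vec3 edgeU, edgeV;     // the two edge vectors spanning it
};

// Returns the fraction of the light visible from point p:
// 1.0 = fully lit, 0.0 = umbra, anything in between = penumbra.
float areaLightVisibility(const Vec3& p, const RectAreaLight& light, int samples,
                          std::mt19937& rng,
                          const std::function<bool(const Vec3& from, const Vec3& to)>& occluded) {
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    int unoccluded = 0;
    for (int i = 0; i < samples; ++i) {
        float u = u01(rng), v = u01(rng);          // random point on the light surface
        Vec3 target = { light.corner.x + u * light.edgeU.x + v * light.edgeV.x,
                        light.corner.y + u * light.edgeU.y + v * light.edgeV.y,
                        light.corner.z + u * light.edgeU.z + v * light.edgeV.z };
        if (!occluded(p, target)) ++unoccluded;    // shadow ray reached the light
    }
    // Real-time budgets keep 'samples' tiny (often 1-2 per pixel), hence the
    // noisy result that gets cleaned up by a denoiser.
    return static_cast<float>(unoccluded) / static_cast<float>(samples);
}
```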
 
Is it possible for Nvidia to simply cut out the tensor cores and reduce the number of ray tracing cores, integrating this into a much smaller die suitable for the consumer market with no significant bump in manufacturing cost? They are, after all, going from 14nm to 12nm and have a little more room...

I can't imagine why they wouldn't extend this high level micro-architecture down to the consumer side w/ cut down specs.
Also, the discussions here have been quite illuminating. I am currently researching how exactly ray tracing gets accelerated in hardware. Is Nvidia likely to go into technical detail soon about this and the micro-architectural layout? When does that typically happen? Would people in the know say PowerVR is the best company to look into if I'm interested right now in how you accelerate ray tracing in hardware?

Very interesting development IMO, and the more exciting aspect of these new GPUs. Anyone know the bit width the ray-tracing cores likely operate on? What data format is used today in software/CPU processing? Lastly, if anyone can provide some good resources for looking into ray tracing in both hardware and software, that would be greatly appreciated.
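On the bit-width question: as far as I know, both production CPU ray tracers (Embree, for instance) and the DXR API describe rays and triangle vertices in plain 32-bit floats, so FP32 seems like the safe assumption for the RT cores too; higher precision rarely shows up in the hot path. A single-precision ray is just eight floats. The layout below is illustrative, not any particular library's struct:

```cpp
// Typical single-precision ray layout: 8 x FP32 = 32 bytes.
// Illustrative only, not Embree's or DXR's actual struct.
#include <cstddef>

struct Ray32 {
    float originX, originY, originZ;   // ray origin         (12 bytes)
    float tMin;                        // near clip along t   (4 bytes)
    float dirX, dirY, dirZ;            // ray direction      (12 bytes)
    float tMax;                        // far clip along t    (4 bytes)
};
static_assert(sizeof(Ray32) == 32, "eight 32-bit floats per ray");
```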
 
Nvidia still hasn't divulged any details of the hardware tessellator, which is old news by now, so I wouldn't expect them to say much about their raytracing tech.

This is all very exciting but my enthusiasm is dampened somewhat by the reality that even the latest games are still shipping with blurry textures, low poly objects and blocky shadows.
 
Given the leaked info regarding the naming RTX 2080/2070... obviously it will be there. GTX below that likely means lower tiers won't get it until 7nm refresh.
 
To me, Turing looks like Volta without FP64. Where else is the difference?
Besides the RT cores mentioned by @Malo, another difference is the streaming multiprocessor.
The Turing SM (streaming multiprocessor) has also been reworked, with a new ability to issue floating-point and integer operations in parallel. That gives Turing a maximum speed of 16 TFLOPS for floating-point operations (presumably FP32), and 16 TOPS of integer operations.
https://www.pcgamer.com/nvidia-unve...ding-a-glimpse-inside-the-next-geforce-cards/
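The 16 TFLOPS figure lines up with the 4,608-core part if you assume the usual 2 FLOPs per CUDA core per clock (one FMA) and a boost clock around 1.74 GHz; the clock is my back-of-the-envelope inference, not an announced spec:

```cpp
// Quick sanity check on the quoted 16 TFLOPS FP32 peak.
#include <cstdio>

int main() {
    const double cudaCores   = 4608.0;   // Quadro RTX 6000/8000 core count
    const double flopsPerClk = 2.0;      // one fused multiply-add = 2 FLOPs
    const double peakTflops  = 16.0;     // figure quoted above

    // Boost clock the chip would need to reach that peak (assumption, not a spec):
    const double impliedGHz = peakTflops * 1e12 / (cudaCores * flopsPerClk) / 1e9;
    std::printf("Implied boost clock: ~%.2f GHz\n", impliedGHz);   // ~1.74 GHz
    return 0;
}
```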
 