Nvidia Turing Architecture [2018]

Why do they have such a problem with attribution? This statement basically sh*ts on the guys bringing cluster-culling into practice.
This has been Nvidia's thing since... forever. The company apparently "invented" everything in the graphics industry... and it's sh@t unless they do it. Then it's the best thing ever... (invented by them... in the future). Sort of like Apple's reality distortion field.
 
Primitive shaders are close, or am I wrong? They're not working on PC / Vega, but on a closed system with a semi-custom GPU, I guess they can make them work / available to devs.
 
True. AMD has stated that primitive shaders, as an operation, are similar to the GPU culling demonstrated in Wolfenstein II, using the shader array for accelerated geometry processing.
Unlike some Vega features, AMD says that "primitive shader support isn't something we can just turn on overnight and it happens," noting instead that it'll need to work with the development community to bring this feature to future API versions. Game developers would then presumably need to take advantage of the feature when programming new games in order to enjoy the full performance benefits of the Next-Generation Geometry path.

The company notes that it can work with developers to help them employ rendering techniques similar to primitive shaders today, however, and it cited Wolfenstein II: The New Colossus' GPU culling graphics setting as an operation similar in principle to that of primitive shaders. When this feature is enabled on AMD graphics cards, the game uses the copious Vega shader array to accelerate geometry processing. We didn't perform any directed testing of this feature when we last benchmarked Wolfenstein II, but ComputerBase has done so, and the site saw negligible performance differences when the setting was enabled versus when it was disabled. We may need to revisit this feature in the near future.
https://techreport.com/news/33153/radeon-rx-vega-primitive-shaders-will-need-api-support
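To make the principle concrete, here's a rough CPU-side sketch of what such a GPU culling pass does; on the card the same loop runs as a compute shader, one thread per cluster, writing into a buffer that indirect draw calls consume directly. All names and structures here are illustrative, not engine code:

```cpp
// CPU-side sketch of cluster frustum culling: test each cluster's bounding
// sphere against the six frustum planes and keep only the survivors. On the
// GPU this runs as a compute shader, one thread per cluster, appending to a
// buffer that indirect draw calls consume without any CPU round trip.
#include <cstdint>
#include <vector>

struct Plane  { float x, y, z, d; };          // normalized normal + distance
struct Sphere { float x, y, z, radius; };     // cluster bounding sphere

// Signed distance from the sphere's center to a plane.
static float planeDistance(const Plane& p, const Sphere& s) {
    return p.x * s.x + p.y * s.y + p.z * s.z + p.d;
}

// Visible unless the sphere is entirely on the negative side of some plane.
static bool sphereInFrustum(const Plane planes[6], const Sphere& s) {
    for (int i = 0; i < 6; ++i)
        if (planeDistance(planes[i], s) < -s.radius)
            return false;
    return true;
}

// Compact the cluster list down to the visible cluster IDs.
std::vector<uint32_t> cullClusters(const Plane planes[6],
                                   const std::vector<Sphere>& clusters) {
    std::vector<uint32_t> visible;
    for (uint32_t i = 0; i < clusters.size(); ++i)
        if (sphereInFrustum(planes, clusters[i]))
            visible.push_back(i);
    return visible;
}
```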
 
Nvidia's Turing implementation is said to be more versatile/superior.
We don't know how AMD implements primitive shaders. There is no API code written for primitive shaders, and in Wolfenstein it's an emulation, not true primitive shaders.

What I find interesting is that NVIDIA also needs an API extension. So it really doesn't look as simple as AMD stated.
 
Mesh shaders sure have a lot of potential, but the tasks they are being put to in this asteroids demo are no different from what console game engines have been doing through GPU compute since the start of the PS4/XB1 gen. Not that mesh shaders can't get other, more interesting stuff done, just that the asteroids demo itself is not doing those things. Nothing world-changing there, just a proof of concept implementing an already common system the Nvidia way, which is an important step, but nothing to brag about.
 
Yeah, mesh shading is at least as interesting a new feature as RT is. Perhaps the new Halo could benefit from mesh shading, thinking of its setting.
 
Mesh shaders sure have a lot of potential, but the tasks they are being put to in this asteroids demo are no different from what console game engines have been doing through GPU compute since the start of the PS4/XB1 gen. Not that mesh shaders can't get other, more interesting stuff done, just that the asteroids demo itself is not doing those things. Nothing world-changing there, just a proof of concept implementing an already common system the Nvidia way, which is an important step, but nothing to brag about.
Still. Mesh shaders are on chip and integrated into the graphics pipeline. The compute approach needs a round trip through memory.
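For illustration, here's a minimal sketch of what those on-chip clusters ("meshlets") can look like as data. The 64-vertex / 126-triangle limits follow Nvidia's published recommendation for Turing; the struct layout and the greedy builder are invented for this example:

```cpp
// Minimal meshlet sketch: a mesh pre-split into small clusters that a mesh
// shader can expand and cull entirely on chip, instead of a compute pass
// writing a culled index buffer back to memory for a later draw call.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Meshlet {
    uint32_t vertices[64];       // indices into the real vertex buffer
    uint8_t  indices[126 * 3];   // per-triangle indices, local to 'vertices'
    uint8_t  vertexCount   = 0;
    uint8_t  triangleCount = 0;
};

// Greedily pack consecutive triangles into meshlets, deduplicating vertices
// within a meshlet, until either the vertex or the triangle limit is hit.
std::vector<Meshlet> buildMeshlets(const std::vector<uint32_t>& indexBuffer) {
    std::vector<Meshlet> meshlets(1);
    std::unordered_map<uint32_t, uint8_t> remap;  // global index -> local slot

    for (size_t i = 0; i + 2 < indexBuffer.size(); i += 3) {
        const uint32_t tri[3] = {indexBuffer[i], indexBuffer[i + 1],
                                 indexBuffer[i + 2]};
        Meshlet* m = &meshlets.back();

        int newVerts = 0;                   // vertices this triangle would add
        for (uint32_t v : tri) newVerts += remap.count(v) ? 0 : 1;

        if (m->vertexCount + newVerts > 64 || m->triangleCount + 1 > 126) {
            meshlets.emplace_back();        // start a fresh meshlet
            remap.clear();
            m = &meshlets.back();
        }
        for (int k = 0; k < 3; ++k) {
            auto it = remap.find(tri[k]);
            if (it == remap.end()) {        // first use in this meshlet
                it = remap.emplace(tri[k], m->vertexCount).first;
                m->vertices[m->vertexCount++] = tri[k];
            }
            m->indices[m->triangleCount * 3 + k] = it->second;
        }
        m->triangleCount++;
    }
    return meshlets;
}
```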
 
Why do they have such a problem with attribution? This statement basically sh*ts on the guys bringing cluster-culling into practice.
If you look at the technical article about mesh shaders or the SIGGRAPH talk, which are linked in that blog post as well, you will see I do mention the various compute approaches. It always depends on the perspective. "Anything" has been possible with compute shaders for a long time (raytracing etc.), but what is actually built into the hw pipeline is the difference being highlighted here.
 
Mesh shaders sure have a lot of potential, but the tasks they are being put to in this asteroids demo are no different from what console game engines have been doing through GPU compute since the start of the PS4/XB1 gen. Not that mesh shaders can't get other, more interesting stuff done, just that the asteroids demo itself is not doing those things. Nothing world-changing there, just a proof of concept implementing an already common system the Nvidia way, which is an important step, but nothing to brag about.

The combination of draw indirect + compute shaders that allows easy GPU LOD/culling per drawcall has been around longer than the consoles you mention. I agree the video could have shown a bit more, but the target audience was not really developers (who would already have looked at the other articles). This demo actually does do cluster culling via encoded meshlets, not just LOD picking. Previous approaches with LOD at the per-draw level, like the compute one, can have some inefficiencies when the LOD level is very high (low-complexity drawcalls). So the big question is basically how much time you spend explaining all these "low-level" details in such videos/articles. Furthermore, with a feature-packed architecture like Turing there is a lot of work to prepare all that material, and at some point you have to account for the factor of time. The CAD GitHub sample focused more on the culling aspect and has been out for a while now as well.
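For reference, a rough sketch of that older per-drawcall LOD path: a pass (here a plain loop) picks an LOD per object and emits ready-to-consume indirect draw commands. The command struct matches OpenGL's glDrawElementsIndirect layout; LodRange, the distance cutoffs, and the rest are made up for the example:

```cpp
// Sketch of the "draw indirect + compute" LOD path: pick an LOD per object
// by camera distance and write one indirect draw command per object. On the
// GPU, a compute shader fills this command buffer and
// glMultiDrawElementsIndirect consumes it, with no CPU round trip.
#include <cmath>
#include <cstdint>
#include <vector>

struct DrawElementsIndirectCommand {  // layout of glDrawElementsIndirect
    uint32_t count, instanceCount, firstIndex, baseVertex, baseInstance;
};

struct LodRange { uint32_t firstIndex, indexCount; };  // one LOD's index slice

struct Object {
    float x, y, z;       // world position
    LodRange lods[3];    // LOD 0 = full detail ... LOD 2 = coarsest
};

std::vector<DrawElementsIndirectCommand>
buildDrawList(const std::vector<Object>& objects,
              float camX, float camY, float camZ) {
    std::vector<DrawElementsIndirectCommand> cmds;
    for (const Object& o : objects) {
        float d = std::sqrt((o.x - camX) * (o.x - camX) +
                            (o.y - camY) * (o.y - camY) +
                            (o.z - camZ) * (o.z - camZ));
        int lod = d < 50.0f ? 0 : d < 200.0f ? 1 : 2;  // arbitrary cutoffs
        cmds.push_back({o.lods[lod].indexCount, 1,
                        o.lods[lod].firstIndex, 0, 0});
    }
    return cmds;
}
```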
Hope that gives some background. In a perfect world we would have spent more time illustrating more of that information.
 
Never mind, I misinterpreted your "start of PS4/XB1 gen" as just consoles rather than the timeframe of them and DX11-class hw.
Yeah, that was the idea. And just to clarify, I'm not dismissing the importance of Nvidia's work on that demo, nor the potential of mesh shaders. Implementing robust systems takes time, even if they are not completely new. Someone had to do it at some point, and it is a very important step in the development of mesh shader tech. What I wanted to point out is that this one specific demo was not reinventing nor revolutionizing the wheel; rather, it was proving the viability of the wheel with a new material, so to speak. Weird analogy, but I think you get the point. Arguably, that is the best possible start: first implement something general and very useful that can benefit everybody. It's a perfect proof of concept. But what I'm holding my breath for the most is the more inventive uses of the tech, both under Nvidia's mesh shader approach and AMD's next-gen primitive shader thing, which I'm sure are brewing. I can't wait.
 
Yes, I get your point. In general, most of the "low-level" Turing features are not easy to present to consumers, given that they bring evolutionary pipeline improvements that developers understand, rather than something flashy like RT reflections ;)
And for true procedural research (plants etc.) there was not yet enough time, but I also think that has high potential.
 
If you look at the technical article about mesh shaders or the SIGGRAPH talk, which are linked in that blog post as well, you will see I do mention the various compute approaches. It always depends on the perspective. "Anything" has been possible with compute shaders for a long time (raytracing etc.), but what is actually built into the hw pipeline is the difference being highlighted here.

As a scientist you should be well aware that it's very important to express yourself exactly right when you talk to a lay audience, because what you say almost guaranteed sticks, and won't be overwritten later on, because they're not exposed to the topic at hand on a constant basis. That's exactly how myths are born.
Instead of simplifying something for your audience (and making it inherently wrong in one way or another), have a bit of faith and patience, and explain things exactly as ambiguous or complex or crazy as they often really are. It works rather well, for the audience and the whole community.
Otherwise, just don't throw sentences like that around. What is it for? What's the objective information in it? What exactly do we learn from it? Does Nvidia not have a higher responsibility to present precise and sound information to their consumers?
 
Otherwise, just don't throw sentences like that around. What is it for? What's the objective information in it? What exactly do we learn from it? Does Nvidia not have a higher responsibility to present precise and sound information to their consumers?
What is your problem?
 
What is your problem?

I ask the author of the lines to question his choice or adequacy of words, using a rhetorical device, one which sadly contains bitterness for a reason. Maybe he will reconsider them, for the benefit of his readership, which might be delighted to experiment with alternative forms of primitive submission.
This is not my problem, it's actually yours (impersonal "you"). You should want to be informed precisely and adequately.
My problem is that, as an R&D engineer, I don't want to end up in the real situation of my invention, or contribution to an invention, being brushed away/skipped by "history". It'd be bad for anyone's career, or willingness or ability to keep contributing. This is a very big and deep topic; it has been talked about endless times, and it leads to conventions and standards. I don't think this thread is the place for it, but I would say there should always be some place to talk about it, because it is very important.
 
What I'm asking myself is how many mesh shader units TU102 and TU104 have.

So TU102 has 6 rasterizers and TU104 also has 6 rasterizers. That means both have the same capabilities for rasterization. But do they also have the same capabilities for mesh shaders? Where do the mesh shaders sit in the architecture?
 
My problem is that, as an R&D engineer, I don't want to end up in the real situation of my invention, or contribution to an invention, being brushed away/skipped by "history".
There is nothing you can really do about that. Unless you happen to come up with a contribution similar to the "theory of relativity" or something as monumental, there is always the possibility that it will become part of the archive. If your peers deem it worthy enough, it will hold up against the flow of time, and this is true for any profession. But we digress ...
 