Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Have we as a forum discussed the possibility that AMD's VRS solution is based on MS's patent? Who is to say that Sony or MS can't develop patented tech that AMD will readily incorporate into their PC designs?

I've definitely raised that question here and discussed it. When we knew that MS were using VRS incorporating their own patents, and that Sony were silent on VRS (as they still are) people were saying that Sony must have it because it's in PC RDNA 2 parts coming later this year. But that's not necessarily the case.

I said it was entirely plausible that AMD had an agreement with MS and that they could be sharing an implementation. It could make sense from an engineering perspective, and it would fit with the timescale of these products coming to market.

It's never come up. AMD has its own patent. If it used MS's patent, I'd suspect a long-term licensing fee agreement. Considering how tight margins can be, I can't see this happening.

If AMD has their own patent they could change their implementation for RDNA 3 or whatever suits their timeline.

MS clearly wanted VRS for XSX, and they want it to be adopted as part of DX12U. AMD don't want to be releasing RDNA 2 for PC missing notable features that Nvidia have had for more than two years, especially now they're being pushed as part of the latest DX12 spec. MS want VRS in Xbox and on desktop, and so do AMD. I expect they could come to a mutually beneficial agreement.

Besides, any licensing fee would have to be large to outweigh the hit to competitiveness against Nvidia and Intel (a ~15% performance feature is not insignificant), and it might even be in MS's interests to waive a fee altogether for this particular generation of PC GPUs - especially if Sony can't have it.

The question of "why would Sony disable VRS when it's already in RDNA 2" might be answerable with "Sony's RDNA options never included it!" :no:
 
Any link to this AMD VRS patent?

Tommy McClain

https://hexus.net/tech/news/graphics/128066-amd-navi-gpus-might-use-variable-rate-shading-tech/

Filed in 2017, apparently!

Edit: can anyone work out if this is tier 1 or tier 2? I can't. Maybe it covers both ...:???:

Edit 2: Erm ....?

Shading rates are applied to the render target based on one or more of the following techniques: a tile-based rate determination technique, in which the render target is divided into a set of shading rate tiles and each shading rate tile is assigned a particular shading rate; a triangle-based rate determination technique, in which a particular shading rate is assigned to each primitive; and a state-based rate determination technique, in which shading rate state changes propagate through the pipeline, and, at the rasterizer stage 314, set the shading rate for subsequent pixels until the next shading rate state change is processed by the rasterizer stage 314. These techniques may be combined, with different techniques being given different priorities. In one example, the state-based rate determination technique defines a default shading rate for samples of a triangle. In this example, the tile-based rate determination technique overrides the state-based values, and the triangle-based shading rates override the state-based values and the tile-based values. Other priorities may alternatively exist, or the different techniques may be independently applied.
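For what it's worth, the priority scheme in that last paragraph can be sketched in a few lines (a toy model of my own, not anything from the patent itself): the state-based rate is the default, the tile-based rate overrides it, and the triangle-based rate overrides both.

```python
# Toy model of the AMD patent's example priority order for shading rates.
# Rates are just strings here; real hardware would use packed enum values.

def resolve_shading_rate(state_rate, tile_rate=None, triangle_rate=None):
    """Return the shading rate that wins under the patent's example priorities."""
    rate = state_rate              # state-based: default for the triangle's samples
    if tile_rate is not None:      # tile-based: overrides the state-based value
        rate = tile_rate
    if triangle_rate is not None:  # triangle-based: overrides state and tile values
        rate = triangle_rate
    return rate
```

So `resolve_shading_rate("1x1", tile_rate="2x2", triangle_rate="1x2")` yields `"1x2"`, matching the "triangle-based overrides the state-based values and the tile-based values" example. The patent notes other priority orders may exist, so treat this ordering as one worked case only.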
 
On XSX low latency: wondering now if this is related. From the HAGS background:
This approach to scheduling the GPU has some fundamental limitations in terms of submission overhead, as well as latency for the work to reach the GPU. These overheads have been mostly masked by the way applications have traditionally been written. For example, an application would typically do GPU work on frame N, and have the CPU run ahead and work on preparing GPU commands for frame N+1. This buffering of GPU commands into batches allows an application to submit just a few times per frame, minimizing the cost of scheduling and ensuring good CPU-GPU execution parallelism.

An inherent side effect of buffering between CPU and GPU is that the user experiences increased latency. User input is picked up by the CPU during “frame N+1” but is not rendered by the GPU until the following frame. There is a fundamental tension between latency reduction and submission/scheduling overhead. Applications may submit more frequently, in smaller batches to reduce latency or they may submit larger batches of work to reduce submission and scheduling overhead.
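To put rough numbers on that buffering (a back-of-envelope model of my own, not anything from the HAGS write-up): if the CPU runs some number of frames ahead, input sampled while preparing a frame's commands lands on screen roughly that many frame times plus one GPU frame later.

```python
# Crude latency model: input picked up during CPU command preparation
# is displayed after the buffered frames drain plus one frame of GPU work.

def input_latency_ms(fps, buffered_frames):
    """Approximate input-to-display latency in milliseconds."""
    frame_time = 1000.0 / fps
    # each buffered frame of CPU run-ahead, plus one frame of GPU execution
    return frame_time * (buffered_frames + 1)
```

At 60 fps with one frame of buffering that's about 33 ms; deeper buffering or lower frame rates make it worse, which is exactly the tension between submission overhead and latency the quoted text describes.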

Hmmm... all this time I thought it was normal for that to be N and N-1. Going to have to think on this more.

The new GPU scheduler will be supported on recent GPUs that have the necessary hardware, combined with a WDDMv2.7 driver that exposes this support to Windows. Please watch for announcements from our hardware vendor partners on specific GPU generations and driver versions this support will be enabled for.
 
https://hexus.net/tech/news/graphics/128066-amd-navi-gpus-might-use-variable-rate-shading-tech/

Filed in 2017, apparently!

Edit: can anyone work out if this is tier 1 or tier 2? I can't. Maybe it covers both ...:???:

Edit 2: Erm ....?
http://www.freepatentsonline.com/y2019/0066371.html

Actual patent above.

It doesn't read like it's Tier 2 to be honest.
Tier 1
  • Shading rate can only be specified on a per-draw-basis; nothing more granular than that
  • Shading rate applies uniformly to what is drawn independently of where it lies within the rendertarget
  • Use of 1x2, programmable sample positions, or conservative rasterization may cause fall-back into fine shading
Tier 2
  • Shading rate can be specified on a per-draw-basis, as in Tier 1. It can also be specified by a combination of per-draw-basis, and of:
    • Semantic from the per-provoking-vertex, and
    • a screenspace image
  • Shading rates from the three sources are combined using a set of combiners
  • Screen space image tile size is 16x16 or smaller
  • Shading rate requested by the app is guaranteed to be delivered exactly (for precision of temporal and other reconstruction filters)
  • SV_ShadingRate PS input is supported
  • The per-provoking vertex rate, also referred to here as a per-primitive rate, is valid when one viewport is used and SV_ViewportIndex is not written to.
  • The per-provoking vertex rate, also referred to as a per-primitive rate, can be used with more than one viewport if the SupportsPerVertexShadingRateWithMultipleViewports cap is marked true. Additionally, in that case, it can be used when SV_ViewportIndex is written to.
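The Tier 2 "set of combiners" works roughly like this (a simplified sketch of my own: D3D12's actual combiners are enum values along the lines of passthrough / override / min / max applied in two stages, and real shading rates are NxM pairs rather than the single coarseness factor used here).

```python
# Sketch of Tier 2 rate combination. A rate here is one coarseness factor
# (1 = full rate, 2 = half, 4 = quarter) instead of a full NxM pair.

PASSTHROUGH = lambda a, b: a        # keep the previous result, ignore the new source
OVERRIDE    = lambda a, b: b        # take the new source unconditionally
FINEST      = lambda a, b: min(a, b)  # finer (smaller factor) wins
COARSEST    = lambda a, b: max(a, b)  # coarser (larger factor) wins

def combined_rate(per_draw, per_primitive, screen_image, c1, c2):
    # combiner 1 merges the per-draw rate with the per-primitive rate;
    # combiner 2 merges that result with the screen-space image tile's rate
    return c2(c1(per_draw, per_primitive), screen_image)
```

For example, `combined_rate(1, 4, 2, OVERRIDE, COARSEST)` gives 4: the per-primitive rate replaces the per-draw rate, then the coarsest of that and the screen-space image wins. The real API's combiner names and exact semantics are in the D3D12 VRS spec; this is just the shape of the mechanism.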

Just to be clear, I don't know what I'm doing. Reading patents is a head scratcher.
But it seems to me, comparing the two, that the MS patent, at each stage of the unified shader pipeline, has the option to take in shading rate parameters or output them.

Compared to AMD's patent, which also runs VRS through the unified shader pipeline but doesn't seem to indicate that.

If you look at the Tier 2 highlights, it does appear that, yes, there is some form of support for optional shading rate parameters at different stages.

MS Patent
Is here https://patents.google.com/patent/US20180047203A1/en

  • The input assembler stage 80 supplies data (triangles, lines, points, and indexes) to the pipeline. It also optionally processes shading rate parameters per object (SRPo), per primitive (SRPp), or per vertex (SRPv), generally referenced at 112, as determined by the application 46 (FIG. 1). As generally indicated at 114, input assembler stage 80 may output the SRPp, or an SRPv if the SRPv is not generated by a vertex shader stage 82.
  • [0042]
    The vertex shader stage 82 processes vertices, typically performing operations such as transformations, skinning, and lighting. Vertex shader stage 82 takes a single input vertex and produces a single output vertex. Also, as indicated at 110, vertex shader stage 82 optionally inputs the per-vertex shading rate parameter (SRPv) or the per-primitive shading rate parameter (SRPp) and typically outputs an SRPv, that is either input or calculated or looked up. It should be noted that, in some implementations, such as when using higher-order surfaces, the SRPv comes from a hull shader stage 84.
  • [0043]
    The hull shader stage 84, a tessellator stage 86, and a domain-shader stage 88 comprise the tessellation stages—The tessellation stages convert higher-order surfaces to triangles, e.g., primitives, as indicated at 115, for rendering within logical graphics pipeline 14. Optionally, as indicated at 111, hull shader stage 84 can generate the SRPv value for each vertex of each generated primitive (e.g., triangle).
  • [0044]
    The geometry shader stage 90 optionally (e.g., this stage can be bypassed) processes entire primitives 22. Its input may be a full primitive 22 (which is three vertices for a triangle, two vertices for a line, or a single vertex for a point), a quad, or a rectangle. In addition, each primitive can also include the vertex data for any edge-adjacent primitives. This could include at most an additional three vertices for a triangle or an additional two vertices for a line. The geometry shader stage 90 also supports limited geometry amplification and de-amplification. Given an input primitive 22, the geometry shader can discard the primitive, or emit one or more new primitives. Each primitive emitted will output an SRPv for each vertex.
  • [0045]
    The stream-output stage 92 streams primitive data from graphics pipeline 14 to graphics memory 58 on its way to the rasterizer. Data can be streamed out and/or passed into a rasterizer stage 94. Data streamed out to graphics memory 58 can be recirculated back into graphics pipeline 14 as input data or read-back from the CPU 34 (FIG. 1). This stage may optionally stream out SRPv values to be used on a subsequent rendering pass.
  • [0046]
    The rasterizer stage 94 clips primitives, prepares primitives for a pixel shader stage 96, and determines how to invoke pixel shaders. Further, as generally indicated at 118, the rasterizer stage 94 performs coarse scan conversions and determines a per-fragment variable shading rate parameter value (SRPf) (e.g., where the fragment may be a tile, a sub-tile, a quad, a pixel, or a sub-pixel region). Additionally, the rasterizer stage 94 performs fine scan conversions and determines pixel sample positions covered by the fragments.
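My (possibly wrong) reading of those stages, as a toy trace: each stage can optionally supply or refine the shading rate parameter, with later stages taking precedence over earlier ones. None of the names below are from the patent's claims; they just label the SRPo/SRPv/SRPf flow described above.

```python
# Toy trace of shading-rate-parameter (SRP) flow through the pipeline,
# per my reading of the MS patent: input assembler supplies a per-object
# value, shader stages may output a per-vertex value, and the rasterizer
# may derive a per-fragment value that wins.

def run_pipeline(srp_per_object=None, srp_per_vertex=None,
                 srp_per_fragment=None):
    srp = srp_per_object          # input assembler: SRPo / SRPp
    if srp_per_vertex is not None:
        srp = srp_per_vertex      # vertex/hull/geometry stages: SRPv
    if srp_per_fragment is not None:
        srp = srp_per_fragment    # rasterizer: SRPf for the fragment
    return srp
```

The point of the sketch is just that the parameter is optional at every stage, which is what seems to distinguish the MS pipeline description from AMD's.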

As I review this, and the words 'Hololens' pop up in this patent, it would appear that MS has been working on their own form of VRS for some time, in association with trying to extract as much shader power as possible while reducing the amount of power required to do it.

That research got them to this point, and instead of going with AMD's solution, their solution (the Hololens one) was better and they plopped it here.

That may imply that Hololens, and any other VR-type device they've been working on with foveated rendering in mind (and thus VRS), could in this case be driven by XSX. For context, Hololens 3 is supposed to launch with foveated rendering.

In determining the shading rate for different regions of each primitive (and/or different regions of the 2D image), the described aspects take into account variability with respect to desired level of detail (LOD) across regions of the image. For instance, but not limited hereto, different shading rates for different fragments of each primitive may be associated with one or more of foveated rendering (fixed or eye tracked), foveated display optics, objects of interest (e.g., an enemy in a game), and content characteristics (e.g., sharpness of edges, degree of detail, smoothness of lighting, etc.). In other words, the described aspects, define a mechanism to control, on-the-fly (e.g., during the processing of any portion of any primitive used in the entire image in the graphic pipeline), whether work performed by the pixel shader stage of the graphics pipeline of the GPU is performed at a particular spatial rate, based on a number of possible factors, including screen-space position of the primitive, local scene complexity, and/or object identifier (ID), to name a few.
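As an illustration of the foveated case mentioned there (hypothetical radii and rates of my own, not anything from the patent): a fragment's shading rate could be picked purely from its distance to the gaze point.

```python
# Hypothetical foveated-rendering rate pick: coarser shading the further
# a fragment lies from the tracked (or fixed) gaze point.
import math

def foveated_rate(frag_xy, gaze_xy, inner_radius=200.0, outer_radius=500.0):
    d = math.dist(frag_xy, gaze_xy)  # screen-space distance to gaze point
    if d < inner_radius:
        return "1x1"   # full rate in the fovea
    if d < outer_radius:
        return "2x2"   # quarter rate in the near periphery
    return "4x4"       # sixteenth rate in the far periphery
```

The same shape of function could weight in the patent's other factors (objects of interest, edge sharpness, lighting smoothness) instead of or alongside gaze distance.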
 
Nvidia / Intel / Qualcomm / AMD / MS all have VRS patents, in roughly similar timeframes. It probably came about from a usergroup collaboration between them all; there were even presentations from certain graphics folks calling out the different OEM VRS implementations, making it sound like an entire group collaboration.
 

This was one such graphics team presentation; it was in the ancillary slides at the end of the presentation.

[attached slide: upload_2020-7-1_15-11-21.png]
 

The MS patent only describes how the shading rate is generated for a fragment of a primitive, to be used by a GPU "configured for variable pixel shading". How the variable shading rates are actually used in a GPU "configured for variable pixel shading", and what this "configuration" actually consists of, is described in an all-encompassing patent from the inventors involved in the variable shading patent (both patents were filed simultaneously): "Multiple Shader Processes in graphics processing" (https://patentimages.storage.googleapis.com/1d/60/ab/e8498e7a9089d4/US20180232936A1.pdf)

Of particular interest here is Fig 7. describing how a particular pixel shader can be selected from a plurality of pixel shaders to shade a particular fragment from a plurality of fragments of a primitive using configurable parameters like a variable shading rate.
MS's VRS solution seems to cover both software and hardware.
 
It's never come up. AMD has its own patent. If it used MS's patent, I'd suspect a long-term licensing fee agreement. Considering how tight margins can be, I can't see this happening.
Yup. Patents are currency in the tech world. You either want to license the right to use patented technology or it's trade use of your patents for use of somebody else's.
 
What about this patent from SIE/Cerny?

An example of such a metadata configuration is shown schematically in FIG. 4D. FIG. 4D illustrates an example of how the metadata MD could be configured to specify different active pixel samples (or active color samples) for different subsections 401 of the display screen 316. In the example illustrated in FIG. 4D, central subsections of the screen 316 are desired to have full resolution, and subsections further from the center have progressively lower resolution.

[attached figure: upload_2020-7-1_16-34-16.png]

https://patents.google.com/patent/US10614549B2/en
 
I'm afraid my reading of Patentese failed in this case:

1. A method for graphics processing with a graphics processing system having a graphics processing unit coupled to a display device, comprising: receiving metadata specifying an active sample configuration for a particular region of a screen of the display device among a plurality of regions of the screen, wherein the metadata specifies different active sample configurations for regions of the screen that have different resolutions; receiving pixel data for one or more pixels of an image in the particular region, wherein the pixel data specifies the same number of color samples for each pixel; wherein the number of color samples for each pixel specified by the pixel data is the same over an entire surface of the screen, and for each pixel in the particular region that is covered by a primitive, invoking a pixel shader only for color samples for the pixel specified to be active samples by the active sample configuration, wherein pixel shading computations of the pixel shader super-sample each pixel by virtue of being invoked for each active color sample of each pixel in the particular region that is covered by the primitive.​

Got as far as it being relevant to conditional rendering quality specified by a display device, which would suit a VR headset and foveated rendering.
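My reading of that claim, as a toy calculation (an interpretation, not the patent's own wording): every pixel carries the same number of color samples everywhere, but the per-region metadata marks only some of them active, and the pixel shader is invoked once per active sample of each covered pixel. Regions with all samples active are effectively supersampled, while peripheral regions do proportionally less shading work.

```python
# Toy invocation count for the claimed scheme: the pixel shader runs once
# per *active* color sample of each pixel covered by the primitive.

def shader_invocations(pixels_covered, samples_per_pixel, active_mask):
    """active_mask marks which of each pixel's color samples are active."""
    assert len(active_mask) == samples_per_pixel
    active = sum(1 for bit in active_mask if bit)  # active samples per pixel
    return pixels_covered * active
```

So a central region with all 4 of 4 samples active does 4x the shading of an edge region with 1 of 4 active, which would square the "selective sampling but also super-sampling" reading above: the center is supersampled, the periphery isn't.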
 
I'm afraid my reading of Patentese failed in this case:
No failure there. It's exhausting; I hate patent diving.

This patent, as well as the VRS one from MS, are both related to and driven by the research from the Hololens team. They may have found its usage useful elsewhere, as it really assists with reducing the power footprint as much as possible.
 
I lose it from this point:

wherein the number of color samples for each pixel specified by the pixel data is the same over an entire surface of the screen, and for each pixel in the particular region that is covered by a primitive, invoking a pixel shader only for color samples for the pixel specified to be active samples by the active sample configuration, wherein pixel shading computations of the pixel shader super-sample each pixel by virtue of being invoked for each active color sample of each pixel in the particular region that is covered by the primitive.

On the one hand, it seems like selective sampling based on an active sampling configuration, but then they mention super-sampling. ¯\_(ツ)_/¯
 

This patent is for the reduction of pixel density at the periphery of an HMD screen to decrease resolution of areas under peripheral vision and increase the resolution of areas under central vision. Quoting relevant part of the patent:

"Another way of looking at this situation is shown in FIG. 2C, in which the screen 102 has been divided into rectangles of approximately equal “importance” in terms of pixels per unit solid angle subtended. Each rectangle makes roughly the same contribution to the final image as seen through the display. One can see how the planar projection distorts the importance of edge rectangles 202 and corner rectangles 203. In fact, the corner rectangles 203 might make less of a contribution to the center rectangles due to the display optics, which may choose to make the visual density of pixels (as expressed as pixels per solid angle) higher towards the center of the display."
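The planar-projection distortion described there can be put in numbers (a worked example of my own, not from the patent): for a flat screen at fixed distance, a fixed-size pixel at angle theta from the screen centre subtends a solid angle that falls off roughly as cos³(theta), so edge and corner pixels contribute much less to the perceived image per unit of screen area.

```python
# Solid angle of a unit-area pixel on a flat screen, relative to a pixel
# at the centre. Distance to the pixel grows as 1/cos(theta) and its
# projected area shrinks by cos(theta), giving a cos(theta)**3 falloff.
import math

def relative_pixel_importance(theta_deg):
    return math.cos(math.radians(theta_deg)) ** 3
```

A pixel 45 degrees off-centre is worth only about a third of a central pixel by this measure, which is why rendering the periphery at full rate wastes work on an HMD.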

The only thing it has in common with VRS is that it enables variance of resolution. However, VRS achieves the latter by varying shading rate on a per-primitive basis and, instead of being tied to screen area, is completely arbitrary insofar as calculating the shading rate parameter and applying it goes (at least in MS's implementation). Again, quoting from this very important patent from MS (https://patentimages.storage.googleapis.com/1d/60/ab/e8498e7a9089d4/US20180232936A1.pdf):

"In one example, a mesh shader (which may be part of a rasterizer stage, as described further herein) may operate to execute one or more thread vectors, each of which can include a plurality of lanes (e.g., threads) for independent or parallel execution (e.g., 64 lanes in some examples). In this example, the mesh shader may launch a pixel shader to operate on each of the plurality of lanes to provide substantially simultaneous shading of a plurality of pixels of the primitive, where the pixel shader can, in each lane, execute (e.g., concurrently) the same instructions for shading different sets of one or more pixels.

The mesh shader, in an example, may be capable of providing different pixel shader parameter values for portions of a given primitive. For example, the different pixel shader parameter values may include a variable rate shading parameter, such that different shading rates (e.g., 1 pixel per pixel shader thread, 2 pixels per pixel shader thread, 4 pixels per pixel shader thread, etc.) can be applied for different portions of a given primitive. In another example, the different pixel shader parameter values may include different stencil values from a stencil buffer that can be used to determine pixel values, etc.
"

The shading rate parameter can be calculated using a temporal reprojection transform (described here: https://patentimages.storage.googleapis.com/31/f4/6b/42fca3322ccda9/US20190005714A1.pdf), a metric over screen space (Apple patent), or any other clever algorithm a developer comes up with. The ability to invoke pixel shaders and to vary shading rate on a per-primitive basis enables an interesting form of deferred rendering that aims to trump checkerboard rendering:
https://patentimages.storage.googleapis.com/6c/82/9e/652053ef900d5f/US20190005713A1.pdf.
 

Yeah, I'd clicked through to the actual patent to (unsuccessfully) try and decipher it, and should have linked it like you did.

It doesn't read like it's Tier 2 to be honest.
Tier 1
  • Shading rate can only be specified on a per-draw-basis; nothing more granular than that
  • Shading rate applies uniformly to what is drawn independently of where it lies within the rendertarget
  • Use of 1x2, programmable sample positions, or conservative rasterization may cause fall-back into fine shading
Tier 2
  • Shading rate can be specified on a per-draw-basis, as in Tier 1. It can also be specified by a combination of per-draw-basis, and of:
    • Semantic from the per-provoking-vertex, and
    • a screenspace image
  • Shading rates from the three sources are combined using a set of combiners
  • Screen space image tile size is 16x16 or smaller
  • Shading rate requested by the app is guaranteed to be delivered exactly (for precision of temporal and other reconstruction filters)
  • SV_ShadingRate PS input is supported
  • The per-provoking vertex rate, also referred to here as a per-primitive rate, is valid when one viewport is used and SV_ViewportIndex is not written to.
  • The per-provoking vertex rate, also referred to as a per-primitive rate, can be used with more than one viewport if the SupportsPerVertexShadingRateWithMultipleViewports cap is marked true. Additionally, in that case, it can be used when SV_ViewportIndex is written to.

Just to be clear, I don't know what I'm doing. Reading patents is a head scratcher.
But it seems to me, comparing the two, that the MS patent, at each stage of the unified shader pipeline, has the option to take in shading rate parameters or output them.

Compared to AMD's patent, which also runs VRS through the unified shader pipeline but doesn't seem to indicate that.

If you look at the Tier 2 highlights, it does appear that, yes, there is some form of support for optional shading rate parameters at different stages.

MS Patent
Is here https://patents.google.com/patent/US20180047203A1/en

The bit that threw me (who am I kidding, it was all of it o_O ) was that it was talking about being able to move between state-based, tile-based, and triangle-based coverage rates. But I couldn't work out whether you could do that within a draw call, based on dynamic variables.

Tier 2 talks about per draw (definitely covered), per 16x16-or-smaller tile (probably covered [1], I didn't see a specified size), and per vertex plus conditions (maybe covered [2], as per vertex probably means per triangle primitive in e.g. a strip, but can you do it based on changing vertex data?).

And could "state based rate changes" [3] cover the results of various combiners? I mean, if the results of various combiners are deterministic (which I think they should be), then wouldn't that essentially constitute a number of "states"??

Shading rates are applied to the render target based on one or more of the following techniques: [1] a tile-based rate determination technique, in which the render target is divided into a set of shading rate tiles and each shading rate tile is assigned a particular shading rate; [2] a triangle-based rate determination technique, in which a particular shading rate is assigned to each primitive; and [3] a state-based rate determination technique, in which shading rate state changes propagate through the pipeline, and, at the rasterizer stage 314, set the shading rate for subsequent pixels until the next shading rate state change is processed by the rasterizer stage 314. These techniques may be combined, with different techniques being given different priorities. In one example, the state-based rate determination technique defines a default shading rate for samples of a triangle. In this example, the tile-based rate determination technique overrides the state-based values, and the triangle-based shading rates override the state-based values and the tile-based values. Other priorities may alternatively exist, or the different techniques may be independently applied.

Anyhoo...

As I review this, and the words 'Hololens' pop up in this patent, it would appear that MS has been working on their own form of VRS for some time, in association with trying to extract as much shader power as possible while reducing the amount of power required to do it.

That research got them to this point, and instead of going with AMD's solution, their solution (the Hololens one) was better and they plopped it here.

That may imply that Hololens, and any other VR-type device they've been working on with foveated rendering in mind (and thus VRS), could in this case be driven by XSX. For context, Hololens 3 is supposed to launch with foveated rendering.

I hadn't thought of this, but I think you're absolutely on the money. MS have been working on Hololens since long before XSX, and so their interest in hardware implementations of this type is far from hypothetical. It wouldn't surprise me in the least if MS were ahead of AMD in this particular area, which might make partnering with MS wise, especially given how far behind Nvidia they have been.

Just another thought, but MS would probably want this stuff in their (potentially) millions of XCloud devices over the next few years. Any saved performance is saved money on power consumption and cooling of enormous server clusters. Like with Hololens, it's not just about buying the silicon, it's about the power it needs to run.

It appears to be Tier 1, because it is not clear that the shading rate can be varied depending on where the primitive lies in the render target. Not sure though.

This is one of the things I've been trying to work out. o_O The AMD patent states that changing shading rate per primitive is covered, and you'd definitely know where in screen space that was, and what angle the normal was to the camera, after transform.

The question is, does the AMD patent mean you can change the shading rate based on a primitive's value, or that you can only bake it in during submission of the draw call?

"a triangle-based rate determination technique, in which a particular shading rate is assigned to each primitive"

But when is it assigned, AMD? When you submit the call, or at the end of your transform or later? Bah! :mad:

Edit: because I can't handle two even vaguely related things at once without getting confused.
 
Can it change the shading rate within the same draw? As I understand it, that's the main difference between Tier 1 and Tier 2.
 