AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

This appears to be an elaboration of the triangle sieve method for the PS4, or a more formal adoption of a compute culling shader from a game engine as a new step at the graphics front end.
The primitive shader itself shows signs of similar concepts in its heritage, but the patent's new shaders run as a filter in front of the whole graphics pipeline, which is not where primitive shaders sit.

Vega's pipeline creates a merged version of the vertex shader and culling shader somewhere after input assembly or the tessellation stage, whereas this method sits wholly outside.
The idea of compiling a vertex shader and setting a compiler mode that declares all non-position code dead harks back to Mark Cerny's description of the optional triangle sieve, though the idea likely applies to all three schemes.
 
I suspect there'll be another patent application along at some point that describes the formation of attribute shading (for non-position attributes) from automatic compilation. Remember, attribute shading is something that can be deferred until the pixel shader, so a description of that process (automatic extraction of non-position shading instructions, vertex index collation, generation of code and whatever else) has its own complexities and is "off topic" for a patent on culling.
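As a toy illustration of what such a front-end sieve does, here is a minimal sketch in Python, assuming a simple screen-space back-face test. The function names and the 2D setup are invented for illustration and are not AMD's or Sony's actual interface:

```python
def signed_area(a, b, c):
    # Twice the signed area of triangle abc in screen space;
    # <= 0 means back-facing or degenerate under a CCW convention.
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def sieve(positions, indices):
    # Walk the index buffer three indices at a time and keep only
    # front-facing triangles, producing a compacted index stream
    # for the downstream graphics pipeline.
    survivors = []
    for i in range(0, len(indices), 3):
        tri = indices[i:i + 3]
        a, b, c = (positions[j] for j in tri)
        if signed_area(a, b, c) > 0:
            survivors.extend(tri)
    return survivors

pos = [(0, 0), (1, 0), (0, 1), (1, 1)]
idx = [0, 1, 2,   # counter-clockwise: kept
       1, 0, 3]   # clockwise: culled
print(sieve(pos, idx))  # -> [0, 1, 2]
```

The point of running this as a separate pass is that it only needs position data, so the full vertex shader (with all its attribute work) never executes for culled triangles.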
 
I think you are referring to the JIT compiler; skimming the paper, I read explicitly that shaders for the graphics pipeline are automatically generated.
 
Yes. The PS4's method was described as an option explicitly invoked by a developer, while the patent describes this being done automatically. The shared concept is taking a vertex shader as the source, then compiling a variant containing only the transformation elements, which sends a culled stream to the front end. Unlike primitive shaders, the other two methods keep the vertex shader as-is, in the same place.

In the apparently cancelled automatic scheme, a primitive shader would have had a similar vertex shader source, though it would relocate non-position calculations and make them conditional. Perhaps the developer-exposed version AMD is promising would behave like the PS4's scheme: an explicit compile, and the option to use it if it helps.

It stands out to me that the PS4's method was noted as needing some testing just to determine whether the culling stage was worthwhile, rather than assuming it should be done, while the automatic methods appear to be unused or cancelled.
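The "declare all non-position code dead" step described above can be sketched as a backward slice over a toy instruction list. The three-tuple instruction format and register names below are invented for illustration, assuming SSA-like single assignment per destination:

```python
def position_only_variant(instrs, pos_out="pos"):
    # Keep only the instructions that the position output transitively
    # depends on; everything else is dead code for a culling-only pass.
    live = {pos_out}
    kept = []
    # Walk backwards: an instruction survives only if its destination
    # is needed by something already marked live.
    for dest, op, srcs in reversed(instrs):
        if dest in live:
            live.update(srcs)
            kept.append((dest, op, srcs))
    kept.reverse()
    return kept

shader = [
    ("world",  "mul", ["model", "in_pos"]),
    ("pos",    "mul", ["viewproj", "world"]),
    ("normal", "mul", ["model_it", "in_nrm"]),  # dead for culling
    ("uv",     "mov", ["in_uv"]),               # dead for culling
]
# Keeps only the "world" and "pos" instructions.
print(position_only_variant(shader))
```

A real compiler would do this at the IR level, but the shape of the transform is the same: mark the position export live, sweep everything that doesn't feed it.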
 
Anyone have any inside details on current HBM2 prices? The closest thing I found was speculation from last year estimating the 8 GB for Vega at around $160 on its own. $80 for 4 GB seems like a tough pill to swallow for Kaby Lake-G if prices really are still that high.
 
All indications point towards there being a shortage of all kinds of memory chips (NAND, RAM, HBM2). It's unlikely that prices will come down; quite the opposite, actually.
 
It's really hard to believe that all types of volatile and non-volatile memory have been suffering massive shortages all over the world for almost a year now, especially when smartphones had their first-ever sales volume decline in Q4 2017.


I think we're in for some worldwide price fixing lawsuits during 2018.
 
I think it has more to do with SSDs replacing magnetic storage. Once you start stacking, the amount of silicon wafer consumed can be huge. Fabs are likely price gouging a bit, but they are also trying to build more facilities, which takes a substantial amount of capital. The situation gets worse if Korea somehow gets blown up, but it stands to reason they are diversifying a bit geographically at some expense, along with the move away from globalism. It may be price fixing, but there are also legitimate reasons for prices to be higher.
 
RAM (and by extension, NAND) is traditionally an extremely cyclical market. Gluts and shortages take years to resolve as you have to wait for more capacity to come online or for someone to go bankrupt and exit the industry.
 
Specifically, a discussion post on a GTX 1080 Ti on Massdrop from tech community lead Brian Hutchins described a visit from Nvidia to Massdrop HQ, in which they discussed the causes of the current shortages. As we've reported for a while now, and as the green team now states, the two main reasons GPU prices keep increasing month after month are the mining craze and the graphics memory shortage.
...
Miners are still purchasing every new high-end graphics card they can get their hands on and as a result of that, all NVIDIA and AMD partners have a hard time filling up inventory and stock. On the other hand, Apple and Samsung are willing to pay more for the memory that will be used in their smartphones. Factories are using the same production lines for the memory that is used in graphics cards and in smartphones and this has created a shortage of memory for companies like MSI, Gigabyte, Asus, and EVGA to make graphics cards.
http://www.guru3d.com/news-story/nvidia-gpu-prices-will-continue-increasing-through-q3-2018.html
 