AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Complexity depends on how they are using it. For certain HPC tasks with large read-only datasets, most of that overhead would be nonexistent. That should be the case for the oil and gas guys, with the implementation somewhat proprietary. Large-scale rendering or raytracing could be similar: multiple GPUs each working a subset of the screen space, with HBCC caching pages that get hit. As each GPU would be completely independent, the control-flow issues go away. CCIX and CAPI are interesting for a certain segment of problems, but shouldn't be necessary for an SSG-type problem with a SAN. Once the GPUs have to start synchronizing it can get more difficult, but automated paging makes that far easier. It's no different from CPU programming, where data pages in automatically, and that model would be familiar to many researchers with limited programming ability. It becomes a question of efficiency, and any gaps are filled with other work thanks to async compute where practical. So long as all the jobs don't generate stalls simultaneously, the chips should stay near peak performance. If that is occurring, the implementation will be problematic on any hardware.
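To make the CPU analogy concrete, here is a minimal sketch of the demand-paging model the post compares HBCC to, using plain POSIX mmap: the OS faults in only the pages a program actually touches, much as HBCC pulls GPU pages from system memory or storage on first access. The file name and sampling stride below are hypothetical, purely for illustration.

```cpp
// Minimal sketch: demand paging of a large read-only dataset via mmap.
// Only the pages actually touched are faulted into physical memory,
// analogous to HBCC caching the GPU pages "that get hit".
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    const char* path = "large_dataset.bin";  // hypothetical multi-GB file
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    // Map the whole file; no physical memory is committed yet.
    void* map = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }
    const auto* data = static_cast<const uint8_t*>(map);

    // Touch a sparse subset (one byte per MiB); the resident working set
    // stays far smaller than the file, even though all of it is mapped.
    uint64_t sum = 0;
    for (off_t i = 0; i < st.st_size; i += 1 << 20)
        sum += data[i];
    printf("checksum of sampled bytes: %llu\n", (unsigned long long)sum);

    munmap(map, st.st_size);
    close(fd);
    return 0;
}
```

In the rendering example above, each GPU doing this over its own slice of a read-only dataset needs no explicit transfers or coherence traffic, which is why the overhead largely disappears until the GPUs have to synchronize.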
Unfortunately, that is my point on complexity: they cannot work independently. You still need coherence/control-management mechanisms in place if one is using this as an HPC scale-up/scale-out implementation with said HPC applications; hence CCIX/CAPI/Gen-Z/Mellanox in general should be added to that list, and their solutions make up over 50% of HPC implementations.
Bear in mind my context is purely HPC, coming back to Vega 20.

If you go back to AMD's Hot Chips 2017 "Radeon Next Generation GPU Architecture" presentation, they mention HBCC in a footnote in the context you raise (storage at a high level, rather than specifically a multi-accelerator/node implementation):
This feature (Inclusive Cache Model) is still in development and may be better utilized in future releases of Radeon Software, SDKs available via GPUOpen, or updates from the owners of 3D graphics APIs.
The Inclusive Cache model is what is involved when working with storage, but like I said, for HPC it would need to be highly complex, with overheads and higher-level support; look at solutions involving the technologies I mentioned earlier.

Edit:
I should be clearer: the Inclusive Cache model is what is required when the product is not the Pro SSG as a workstation solution.
 
Testing HBCC should have been a relevant concern when reviewing the Hades Canyon NUC, but it seems no one did it (no one that I can find, at least).

Most reviewers stuck to testing the unit at 1080p, since it overlaps in performance with the desktop GTX 1050 Ti, but I can think of some titles that could perfectly well be played at 1600p to 4K while being held back by the 4GB of VRAM.
For example, Doom with the Nightmare graphics setting. I'm sure the Vega M GH would run it comfortably at over 30 FPS and it would be a great opportunity to test HBCC in a gaming scenario.


Regardless, when laptops with the discrete Vega M are released, with higher TDPs and maybe all 28 CUs enabled at higher clock rates, we may get to the bottom of this.
 
I don't recall him claiming he was personally involved in that architectural feature. He discussed it as part of his role as a public face for RTG, but executives tout many projects that didn't require their personal involvement.
 
Yes, but features like HBCC, where Raja was named on the patent, are working. Maybe there was an internal conflict over which features should have more priority.
 
Maybe, or maybe not. Engineers do the work, management shows slides in front of media.

Even Raja had a partly managerial role. He might have been more directly involved in some technical areas, less involved in others. It's therefore a sign of health that he appears in only a few patents.
 
Yeah, there are plenty of senior engineers/scientists who do not appear on all the relevant patents, have only a few themselves, and are still active in managing development and architecture; the same applies to RFCs and other organisations such teams engage with.
I think most do not appreciate that even a "team engineer" below someone such as Raja in these companies is incredibly senior with regards to knowledge/experience, and usually has some experience with the patent system.
 
Maybe, or maybe not. Engineers do the work, management shows slides in front of media.

Even Raja had a partly managerial role. He might have been more directly involved in some technical areas, less involved in others. It's therefore a sign of health that he appears in only a few patents.
You’re not allowed to put somebody on a patent who wasn’t involved in the technical work of the invention, so him not being mentioned should indeed be the rule, not the exception.
 
You’re not allowed to put somebody on a patent who wasn’t involved in the technical work of the invention, so him not being mentioned should indeed be the rule, not the exception.
By "put somebody" I imagine you're stating the "inventors", because you can damn well put people in the patent who were not part of the technical work (e.g. investors, grant holders, etc.).

Regardless, who's the omnipotent entity that stops a higher-up from being included as an inventor on a patent?
Same thing with a scientific article. You're supposedly not allowed to put the name of someone who didn't make "considerable technical contributions" to a peer-reviewed article, but somehow some leaders of large scientific teams get their names on 50 articles per year or more.
 
By "put somebody" I imagine you're stating the "inventors", because you can damn well put people in the patent who were not part of the technical work (e.g. investors, grant holders, etc.).

Regardless, who's the omnipotent entity that stops a higher-up from being included as an inventor on a patent?
IANAL. I’m just repeating the guidelines of my patent lawyer. If people get included in the patent who were not actually involved in the invention process, the patent can be invalidated or weakened one way or the other.

So one could say that the omnipotent entity might be called “the law”.

Maybe your patent lawyer told you something else.

Same thing with a scientific article. You're supposedly not allowed to put the name of someone who didn't make "considerable technical contributions" to a peer-reviewed article, but somehow some leaders of large scientific teams get their names on 50 articles per year or more.
Patents are governed by pretty strict rules and laws. Scientific articles are just scientific articles. Consequently, the customs around scientific articles have zero relevance in this context.
 
You mean the rules and laws that permitted patenting slide to unlock, rectangle with rounded corners and god knows what else?
Slide to unlock and a host of other similar software patents were invalidated by a U.S. Supreme Court ruling that nuked all "real-life operation, except on a computer" type patents.

The rounded-corner rectangle patent was a design patent, which isn't the same thing as a regular patent. Basically, design patents exist to stop others from plagiarizing your physical product design, from what I understand.
 
Slide to unlock and a host of other similar software patents were invalidated by a U.S. Supreme Court ruling that nuked all "real-life operation, except on a computer" type patents.

Phew, that's good to know.
 
Are they, though? Do we actually know anything else about Vega 20 other than its 4096-bit HBM2 memory and the 7nm manufacturing process?

I think AMD is working on implementing their exascale APU architecture with Vega 20, as outlined by AMD Research in their paper.

EPYC seems to be following along with this design. There are rumors that it will have a 5th chip which will serve as the interconnect between the CPUs and GPUs, with 256MB of cache and interfaces to I/O and memory. The CPUs will be chiplets with small caches and high-speed interfaces to the interconnect chip. This rumor mirrors the diagram below from the research paper.

It is possible that Vega 20 might be a chiplet as well, although I doubt it because the EPYC package is too small and the heat would cause issues. I think it is most likely going to remain a PCIe 4.0 device for now.

http://www.computermachines.org/joe/publications/pdfs/hpca2017_exascale_apu.pdf

Looking at AMD's recent patent applications, you can clearly see that they are all about caching: coordinating cache between multiple devices and offloading memory operations.

[Diagram: exascale APU design from the AMD Research paper linked above]
 
Slide to unlock and a host of other similar software patents were invalidated by a U.S. Supreme Court ruling that nuked all "real-life operation, except on a computer" type patents.

The rounded-corner rectangle patent was a design patent, which isn't the same thing as a regular patent. Basically, design patents exist to stop others from plagiarizing your physical product design, from what I understand.
Design patents are a bit strange. Reasonably they should fall under "copyright" legislation, but copyright has a much longer duration than patents. Patents are basically meant to encourage the sharing of ideas and help in their dissemination, in exchange for legally protected exclusive rights for a certain time; thus their duration is relatively short compared to copyright. The system doesn't quite work as it was meant to, obviously, and it was always a bit of a weakness that the duration of patents, any patent, is always the same, regardless of field.
 
Copyrights should be substantially shorter than they are today, but they were contorted because of megacorporations like Disney.
 