AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

You can optimize for the cache size by managing your dataset so that it fits into the cache. But you cannot prevent some async background task, or an access to some minor piece of data, from thrashing your caches.
Having some control over that sounds like a nice option to have in some cases, and hearing that they are thinking about extensions for it sounds even better.
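As a concrete (if CPU-side) illustration of the "make the working set fit the cache" idea, here is a minimal cache-blocking sketch; the 4 MiB budget is an arbitrary assumption for illustration, not a figure for any particular RDNA2 cache.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal sketch of cache blocking: walk a large dataset in tiles small
// enough to fit a chosen cache budget, so repeated passes over a tile hit
// cache instead of streaming from memory again.
// The 4 MiB budget is an arbitrary assumption for illustration only.
constexpr std::size_t kCacheBudgetBytes = 4u * 1024u * 1024u;

void process_in_tiles(std::vector<float>& data, float factor)
{
    const std::size_t tile_elems = kCacheBudgetBytes / sizeof(float);

    for (std::size_t base = 0; base < data.size(); base += tile_elems) {
        const std::size_t end = std::min(base + tile_elems, data.size());
        // Two passes over the same tile: the second pass reuses data that is
        // (hopefully) still resident in cache.
        for (std::size_t i = base; i < end; ++i) data[i] *= factor;
        for (std::size_t i = base; i < end; ++i) data[i] += 1.0f;
    }
}

int main()
{
    std::vector<float> data(16u * 1024u * 1024u, 1.0f);  // 64 MiB working set
    process_in_tiles(data, 2.0f);
}
```

Of course, as noted above, nothing in such a scheme stops an unrelated background task from evicting the tile, which is exactly where explicit cache-control extensions would help.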
 
RDNA2 enters the workstation space:
https://www.heise.de/news/AMD-bring...tation-Karten-Radeon-Pro-W6000-M-6065275.html

Radeon Pro W6800, W6600 and W6600M - the former is based on the RX 6800, while the latter two are Navi 23 products with only 100 W and 65-90 W TDP respectively.
edit: The following paragraph is wrong because I made a stupid Excel mistake, for which I am sorry.
We've run SPEC Viewperf on the W6800 in UHD resolution and saw an average of 25 % better performance compared to its predecessor, the W5700, with lows of +3 % in energy-03 and highs of +40 % in PTC Creo (creo-03).
edit: Correction: We've run SPEC Viewperf on the W6800 in UHD resolution and saw an average of 60 % better performance compared to its predecessor, the W5700, with lows of +11 % in creo-03 and highs of +117 % in energy-03.

AMD itself provided benchmarks from the older Viewperf 13, where the W6800 especially outshines the W5700 in energy-02, leading by 134 %.


W6800 can enable ECC memory protection, which W5700 could not.
 
What does ECC typically do for a GPU? Is it for virtualization?
 

For scientific workloads it's important to catch errors before they can propagate. Once that starts, the errors get increasingly compounded until they easily obscure or alter the results of whatever you're attempting to do or find. The same goes for any industry where accuracy is important.

For consumer workloads these errors are acceptable, as ECC incurs both a monetary and a performance cost.

Regards,
SB
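To make the compounding concrete, here is a toy sketch (not tied to any GPU or to ECC hardware) that flips a single bit in an intermediate value of an iterative calculation and compares the end results:

```cpp
#include <cstdint>
#include <cstring>
#include <iostream>

// Toy illustration of error compounding: flip a single low mantissa bit in an
// intermediate value of an iterative calculation and compare the end results.
// Purely illustrative; nothing here is specific to any GPU or to ECC itself.
double iterate(bool inject_fault)
{
    double x = 0.4;
    for (int step = 0; step < 100; ++step) {
        x = 3.9 * x * (1.0 - x);               // error-amplifying recurrence
        if (inject_fault && step == 5) {
            std::uint64_t bits;
            std::memcpy(&bits, &x, sizeof bits);
            bits ^= 1ull;                      // flip the least significant bit
            std::memcpy(&x, &bits, sizeof bits);
        }
    }
    return x;
}

int main()
{
    std::cout << "clean : " << iterate(false) << "\n"
              << "faulty: " << iterate(true)  << "\n";
}
```

After enough iterations the two results bear no resemblance to each other, which is the "obscure or alter the results" scenario described above.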
 
Interesting. I've only ever had to consider overflow issues; this is the first time I've thought about ECC on a GPU for compounding errors. I mean, I get the theory, but I've not seen the DS community up in arms about not having ECC on their GPUs.

But I suppose at the professional level, yes, this makes sense. I think all the professional DS cards have ECC.

I guess I should ask the obvious question then: what do memory errors materialize as in graphics rendering? A graphical artifact of some sort?
 

Basically, yes. If the industry you're in requires as much accuracy as possible, then it will be investing in GPUs with ECC memory.

For rendering, ECC is generally not needed, as any errors that arise are unlikely to seriously impact what is rendered to the screen, unless the workload requires many passes and the errors accumulate to a significant degree.

Even CAD requires speed more than absolute accuracy, and thus which memory is used matters far less than how fast the card can render.

OTOH, scientific research not only requires a high degree of (non-graphical) accuracy, it often involves hundreds, thousands, or millions of calculations that build on previous calculations. Errors become quite significant at that point.

Basically, non-graphical uses of GPUs often require a degree of accuracy far beyond what is required for graphics rendering.

Regards,
SB
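For reference, the sketch below shows the basic mechanism ECC relies on, using a tiny Hamming(7,4) code. Real GPU ECC uses wider SECDED codes over whole memory words, so this is only a conceptual toy:

```cpp
#include <array>
#include <iostream>

// Minimal Hamming(7,4) sketch to show the idea behind ECC memory: store extra
// parity bits so that a single flipped bit can be located and corrected.
// Real GPU ECC works on much wider words; this is only a conceptual toy.

// Encode 4 data bits d[0..3] into a 7-bit codeword, positions 1..7.
std::array<int, 8> encode(const std::array<int, 4>& d)
{
    std::array<int, 8> c{};                 // index 0 unused
    c[3] = d[0]; c[5] = d[1]; c[6] = d[2]; c[7] = d[3];
    c[1] = c[3] ^ c[5] ^ c[7];              // parity over positions 1,3,5,7
    c[2] = c[3] ^ c[6] ^ c[7];              // parity over positions 2,3,6,7
    c[4] = c[5] ^ c[6] ^ c[7];              // parity over positions 4,5,6,7
    return c;
}

// Locate and fix a single-bit error; returns the corrected data bits.
std::array<int, 4> decode(std::array<int, 8> c)
{
    const int s1 = c[1] ^ c[3] ^ c[5] ^ c[7];
    const int s2 = c[2] ^ c[3] ^ c[6] ^ c[7];
    const int s3 = c[4] ^ c[5] ^ c[6] ^ c[7];
    const int bad = s1 + 2 * s2 + 4 * s3;   // 0 means "no error detected"
    if (bad != 0) c[bad] ^= 1;              // correct the flipped bit
    return {c[3], c[5], c[6], c[7]};
}

int main()
{
    std::array<int, 4> data{1, 0, 1, 1};
    auto word = encode(data);
    word[6] ^= 1;                           // simulate a cosmic-ray bit flip
    auto fixed = decode(word);
    for (int b : fixed) std::cout << b;     // prints 1011: the original data
    std::cout << "\n";
}
```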
 

The value of ECC is safeguarding. Even if under normal circumstances it's highly unlikely that anything goes bad, you have added insurance. Occasionally a super-fast particle from space comes around and reacts with the wrong wire; it doesn't even need to be faulty hardware.

In general, the public uses the GPU for display, which is harmless even if an error destroys a rendering in the end. In scientific contexts, you often use the GPU to produce data which you store and reuse, e.g. you optimize the shape of an object according to fluid behaviour. If that data gets damaged, it's not just the display that's broken; maybe it's the hull of a real ship. Maybe the error doesn't show up when the data is displayed, but when it's exported, and QA gives the green light even though it's broken.
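A hedged sketch of the kind of application-level safeguard such a pipeline might add in the absence of ECC: record a checksum when the data is produced and verify it again before export/QA sign-off. The FNV-1a hash and the "hull" buffer here are arbitrary illustrative choices, not part of any real tool.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Sketch of an application-level integrity check for data a GPU pipeline
// produces, stores, and later reuses (e.g. an optimized hull geometry).
// ECC would catch the corruption in memory; a stored checksum at least lets
// the export/QA step notice that the data changed after it was computed.
// FNV-1a is an arbitrary choice here, used only for illustration.
std::uint64_t fnv1a(const std::vector<float>& data)
{
    std::uint64_t h = 0xcbf29ce484222325ull;
    const auto* bytes = reinterpret_cast<const unsigned char*>(data.data());
    for (std::size_t i = 0; i < data.size() * sizeof(float); ++i) {
        h ^= bytes[i];
        h *= 0x100000001b3ull;
    }
    return h;
}

int main()
{
    std::vector<float> hull(1024, 0.5f);        // stand-in for exported geometry
    const std::uint64_t recorded = fnv1a(hull); // stored alongside the data

    hull[200] = 0.5000001f;                     // simulate silent corruption

    if (fnv1a(hull) != recorded)
        std::cout << "data changed since it was computed, do not sign off\n";
}
```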
 
Please see my corrected numbers above.
 
The W6800 is about the same as a Quadro RTX 5000 in performance, which is equal to an RTX 2080 Super. How is this possible? That seems like a low point from a performance perspective.
 
AMD have missed the A in front of 5000 in their promo materials.

No they haven't: they estimate themselves to be less than 30 % faster than a Pro W5700, which is about an RX 5700, and that puts the W6800 right around a 2080 Super.

[AMD slide: "The AMD Radeon Pro W6800 is a top performer in floating point", based on these benchmarks from AMD]
 
The W6800 is about the same as a Quadro RTX 5000 in performance, which is equal to an RTX 2080 Super. How is this possible? That seems like a low point from a performance perspective.
Doesn't this depend on what you're doing with it?

IIRC, not everyone uses Quadros for rendering. I recall video processing being a big part of it: Nvidia limits the number of encode/decode streams on the gaming GPUs, but on the Quadros you can run a lot of streams. I think Quadros are the hardware of choice if you're planning to host a Plex server, etc.

The professional lines of GPUs have always been interesting to me, as they seem to lack a very focused niche, trying to support a variety of different industries, whereas the data-science cards are fairly optimized for ML processing.
 