AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Azalia as in Intel High Definition Audio?
Could this be USB-C Audio, similar to what we see in smartphones without jack?
Maybe there's a Tempest in RDNA2 PC GPUs? I mean TrueAudio already failed twice, but third time's a charm and this time they have raytracing.
Though honestly I don't know why we'd need sound coming from the graphics card. So far the motherboard's USB buses have been more than enough.
Azalia Audio shows up as a standard HD Audio device under Linux; the implementation is derived from Intel sources.
In DCN3 there is a code path added to drive DP/HDMI over USB-C.
 
It seems like a sizable investment to overcome a 20% bandwidth deficit, for a GPU that is in many metrics ~20% lower in performance.
Yes, I would tend to agree. On the other hand, Sony's design contains hardware features (faster hardware decompressor, cache scrubbers, 12-channel SSD) that appear to be refinements beyond what Microsoft has done.

Area-wise, I think the extra GDDR6 channels would be more compact.
Agreed. What about power usage?

Hopefully, such a cache is more generally useful than just enhancing the ROPs. Alternatively, a PS5 with such a cache could opt for a smaller capacity, since it would generally be operating at sub-4K resolutions compared with the settings enthusiast PC cards can be pushed to.
Both of the new top consoles are squarely targeting 4K.
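Since the cache-capacity question ties directly to target resolution, here is a rough sense of scale (illustrative numbers only, nothing confirmed for either console): the footprint of a few common full-screen targets at 4K versus 1440p.

```python
# Rough render-target footprints (illustrative; ignores compression, MSAA
# and alignment/tiling overheads).
BYTES_PER_PIXEL = {
    "colour RGBA8": 4,
    "HDR RGBA16F": 8,
    "depth D32": 4,
}

def footprints_mb(width, height):
    pixels = width * height
    return {name: pixels * bpp / 2**20 for name, bpp in BYTES_PER_PIXEL.items()}

for w, h in [(3840, 2160), (2560, 1440)]:
    sizes = footprints_mb(w, h)
    summary = ", ".join(f"{name} ~{mb:.0f} MB" for name, mb in sizes.items())
    print(f"{w}x{h}: {summary} (sum ~{sum(sizes.values()):.0f} MB)")
```

Just those three targets already approach 128 MB at 4K, while at 1440p roughly half that capacity would cover the same set, which is where the "smaller cache for a mostly sub-4K console" idea comes from.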
 
Yes, I would tend to agree. On the other hand, Sony's design contains hardware features (faster hardware decompressor, cache scrubbers, 12-channel SSD) that appear to be refinements beyond what Microsoft has done.
Microsoft's compression choice may give it better average compression ratios for game content, since the numbers both vendors gave for sustained SSD bandwidth seemed to close the gap more than raw bandwidth would indicate.
I'm curious about the cache scrubbers in terms of what use cases they are meant for, and whether the improvement is substantial.
There may be area devoted elsewhere, such as 4x or higher rates for packed operations aimed at inference. At a minimum, the PS5 may have double-rate FP16 (2xFP16), but going further could be optional.
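As a back-of-the-envelope check on the compression point, using the figures both vendors quoted publicly (5.5 GB/s raw and "8-9 GB/s typical" with Kraken for the PS5, 2.4 GB/s raw and 4.8 GB/s with BCPack for the Series X), the implied average ratios come out quite different:

```python
# Implied average compression ratios from the vendors' quoted SSD throughput
# figures (raw vs. effective/"typical" compressed).
quoted = {
    "PS5 (Kraken)":      (5.5, 8.5),  # midpoint of the "8-9 GB/s typical" claim
    "Series X (BCPack)": (2.4, 4.8),
}

for name, (raw, effective) in quoted.items():
    print(f"{name}: {raw} GB/s raw -> {effective} GB/s effective "
          f"(~{effective / raw:.2f}x)")
```

So the raw link is a bit more than 2.2x faster on the PS5 side, but the quoted effective throughput narrows that to roughly 1.8x, which is consistent with Microsoft's format getting better average ratios on game content.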

Agreed. What about power usage?
Micron's GDDR6X comparison gives GDDR6 7.5 pJ/bit versus GDDR6X at 7.25 pJ/bit.
560 GB/s of GDDR6 works out to roughly 34 W versus ~27 W for 448 GB/s.
That would be about 7 W for the interface alone, although I think modules are ~3 W or so each, so maybe 13 W+ in total?
128 MB of SRAM would have measurable power consumption both in active use and on standby, but I haven't found recent figures for how much that might be.
I assume it would be lower power than the extra modules, though Sony would need to weigh if ~10W power matters sufficiently.
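Spelling out that arithmetic as a quick sketch (7.5 pJ/bit is Micron's interface-energy figure; the ~3 W per module is my own rough assumption, and the device counts follow from the 320-bit versus 256-bit buses):

```python
# Interface power = bandwidth (bits/s) * energy per bit (J/bit).
PJ_PER_BIT = 7.5        # Micron's GDDR6 interface energy figure
W_PER_MODULE = 3.0      # rough per-device power assumption

def interface_watts(bw_gb_per_s):
    return bw_gb_per_s * 1e9 * 8 * PJ_PER_BIT * 1e-12

w_xsx = interface_watts(560)    # 320-bit bus, 10 devices
w_ps5 = interface_watts(448)    # 256-bit bus, 8 devices
delta = w_xsx - w_ps5 + (10 - 8) * W_PER_MODULE

print(f"interface: {w_xsx:.1f} W vs {w_ps5:.1f} W")
print(f"total delta incl. 2 extra modules: ~{delta:.0f} W")
```

Which lands right around the 13 W+ figure above, and is the ballpark a 128 MB SRAM pool would have to beat.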

Both of the new top consoles are squarely targeting 4K.
I'm curious how dominant native 4K will be long-term, and I think there may be some compromises given how many systems will be paired with sub-4K TVs. If some kind of DLSS-like solution did take root, the internal resolution could be even lower for much of the frame's processing.
The GDDR6 bus is also much closer in bandwidth to the competition than the ESRAM setup in the Xbox One was, and as a cache it can rotate contents in and out more flexibly.
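To put a number on how much room a reconstruction approach leaves, here is the pixel-count scaling for a few plausible internal resolutions (a first-order proxy only; the reconstruction pass itself isn't free):

```python
# Fraction of native-4K pixel work at common internal resolutions.
native = 3840 * 2160
for w, h in [(3200, 1800), (2560, 1440), (1920, 1080)]:
    print(f"{w}x{h}: {w * h / native:.0%} of native 4K pixels")
```

Even an 1800p internal target cuts the per-pixel work by about a third, which also eases the bandwidth picture discussed above.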
 
https://lists.freedesktop.org/archives/amd-gfx/2020-September/053779.html
"
With these patches the Linux kernel driver will expose the AV1 decoding capabilities of VCN 3.0 to the userspace drivers (VA-API / VDPAU). At the moment there are two GPUs with VCN 3.0 that are supported in Linux: Navi 21 (Sienna Cichlid) and Navi 22 (Navy Flounder).

Also in this patch series we have initial support for variable rate shading (VRS)."
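Once those patches land in a kernel and Mesa build, a quick userspace sanity check would be something along these lines (a hedged sketch; it just shells out to vainfo from libva-utils and looks for AV1 profiles in its report):

```python
# Check whether the VA-API driver reports any AV1 profiles.
# Assumes vainfo (libva-utils) is installed and a driver with the
# VCN 3.0 patches is in use.
import shutil
import subprocess

def has_av1_decode() -> bool:
    if shutil.which("vainfo") is None:
        raise RuntimeError("vainfo not found; install libva-utils")
    result = subprocess.run(["vainfo"], capture_output=True, text=True)
    # Profile names look like VAProfileAV1Profile0, so a substring
    # match is enough for a yes/no answer.
    return "AV1" in result.stdout

if __name__ == "__main__":
    print("AV1 decode exposed via VA-API:", has_av1_decode())
```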

Good to hear
 
 
Any chance 1T-SRAM is making a comeback as a higher-density SRAM option in a GPU?
 
I assume it would be lower power than the extra modules, though Sony would need to weigh if ~10W power matters sufficiently.
I was thinking beyond that: to the proportion of the frame time that the GPU is bandwidth-limited, even at, say, 1TB/s. But having no way to assess that, I decided to leave the question dangling.

I suppose DCC tests would provide some clue on this subject (against cards of "same spec" without DCC)...
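For whatever it's worth, the crudest possible model of what a big on-die cache buys looks like this (all numbers hypothetical, just to show the shape of the trade-off, not an estimate of any real part):

```python
# Crude effective-bandwidth model: hits are served at cache bandwidth,
# misses fall through to DRAM.
def effective_bw(hit_rate, cache_bw_gbs, dram_bw_gbs):
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * dram_bw_gbs

DRAM_BW = 512     # hypothetical GDDR6 bandwidth, GB/s
CACHE_BW = 2000   # hypothetical on-die cache bandwidth, GB/s

for hit in (0.3, 0.5, 0.7):
    print(f"hit rate {hit:.0%}: ~{effective_bw(hit, CACHE_BW, DRAM_BW):.0f} GB/s effective")
```

That still says nothing about what fraction of a frame is actually bandwidth-bound, which is the part we can't measure from outside; DCC on/off comparisons are probably the closest public proxy.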
 
Coolers that extend above the PCIe bracket that much won't fit in my wife's Ncase M1, which means no hand-me-downs to her rig...

They do seem to be the future of high-end cards though, especially with TDPs reaching record highs. After she gets my 5700 XT and uses it until it's past its use-by date, I might be forced to buy her a new case. This new trend doesn't align with the recent mITX trend, unfortunately.
 

Honestly, a lot of these cards push the limits of pretty standard cases. Mine has a max card length of 31 cm if the front case fans are installed, and lots of the high-end cards are really close to that.
 