Bondrewd
Veteran
> What am I supposed to see in that firmware header code commit?
GDDR6 inits in their usual BIOS <> driver interface added for N21.

> Wouldn't a 384-bit bus limit AMD to using 12 GB or 24 GB?
12, 24 or 48 GByte.

> It would indeed be very sad, if AMD had to pick the xx70 as competition.
Techtubers and their sources. And they always have a few of them. To cover all the bases.

> GDDR6 inits in their usual BIOS <> driver interface added for N21.
If I were AMD, I would write some really weird shit in there, just for the LULz.

> Why not throw 96 and 192 GB there as well while we're at 48?
Because the question was what kind of limit a 384-bit bus imposes. And 48 GByte is what's possible and has been done already with today's tech, not some fancy yet-to-be-produced memory dies.

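To spell out the capacity math behind those three numbers (a quick sketch of my own; the 32-bit device width and the 8Gb/16Gb die densities are the real GDDR6 parameters, the rest is just illustration):

```python
# A 384-bit bus hosts 384/32 = 12 GDDR6 devices, or 24 in clamshell
# mode (two devices sharing each 32-bit channel at half width each).
BUS_WIDTH_BITS = 384
DEVICE_WIDTH_BITS = 32        # one GDDR6 IC
DENSITIES_GBIT = (8, 16)      # GDDR6 die densities shipping in 2020

devices = BUS_WIDTH_BITS // DEVICE_WIDTH_BITS   # 12
for density in DENSITIES_GBIT:
    for mode, factor in (("normal", 1), ("clamshell", 2)):
        capacity_gb = devices * factor * density // 8
        print(f"{density}Gb ICs, {mode}: {capacity_gb} GB")
# -> 12, 24 GB with 8Gb ICs and 24, 48 GB with 16Gb ICs: exactly the
#    12/24/48 options, with 48 GB being the clamshell 16Gb config the
#    Quadro RTX 8000 already ships with.
```
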
This is getting a bit ridiculous. 10-12 GB may be the lower acceptable boundary for 4K+RT, but anything higher than 16 won't be used in gaming till PS6 or so.

> This is getting a bit ridiculous
That's the point of silly VRAM configs.

> Why 384-bit?
Because.

> Why not throwing 512-bit
Because it's not possible with G6; also, just stack 3 HBMs side by side if you're this desperate (and AMD's squarely mobile-first uArch is obviously not).

> since it lines up with rumours of 8GB/16GB configurations
That's N22.

(Also, how the fuck do I make an 8-gig setup on 512b with G6? No 4Gb G6 IC exists.)

> Because.
How is AMD going to feed a supposedly 80 CU monster with a 384-bit bus? What speed GDDR6 do you imagine them pairing with that?

> How is AMD going to feed a supposedly 80cu monster with a 384 bit bus?
Miracles and magic.

> What speed gddr6 do you imagine them pairing with that?
14/16/17Gbps depending on segment (or the AIC vendor in question).

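For reference, those speed grades on a 384-bit bus work out as follows (my own quick arithmetic, not from the thread):

```python
# Peak bandwidth = (bus width in bytes) x (per-pin data rate in Gbps).
BUS_WIDTH_BITS = 384

for gbps in (14, 16, 17):
    gb_per_s = BUS_WIDTH_BITS / 8 * gbps
    print(f"{gbps} Gbps on {BUS_WIDTH_BITS}-bit -> {gb_per_s:.0f} GB/s")
# 14 Gbps -> 672 GB/s, 16 Gbps -> 768 GB/s, 17 Gbps -> 816 GB/s
```
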
> Why 384-bit? Why not throwing 512-bit into the party, since it lines up with rumours of 8GB/16GB configurations and having 16 memory channels (the “HBM confirmed” patch)?
I'm pretty sure there's no 4 Gbit GDDR6, so 512-bit would mean 16 GB at minimum.

> I'm pretty sure there's no 4 Gbit GDDR6 so 512-bit would mean 16GB at minimum
At this point I feel like we need 12Gb GDDRs for oddball but needed 6/9/12/15/18GB configs.

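Both points fall out of the same per-channel math (again a sketch of my own; the 12Gb density is the hypothetical part, 8Gb is the smallest real die):

```python
# One GDDR6 device per 32-bit channel; capacity scales with die density.
def capacity_gb(bus_bits: int, density_gbit: int) -> float:
    return (bus_bits // 32) * density_gbit / 8

# Hypothetical 12Gb (1.5 GB) ICs hit the oddball capacities named above:
for bus in (128, 192, 256, 320, 384):
    print(f"{bus}-bit with 12Gb ICs: {capacity_gb(bus, 12):.0f} GB")
# -> 6, 9, 12, 15, 18 GB

# And with 8Gb as the smallest shipping die, a 512-bit bus can't go
# below 16 devices x 1 GB = 16 GB, hence no 8 GB config there:
print(f"512-bit with 8Gb ICs: {capacity_gb(512, 8):.0f} GB minimum")
```
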
> In my opinion that tells us that RX 5700 XT was severely unbalanced
They had to hit 8 gigs of VRAM, and 128b isn't really an option.

> If nothing else these new Navi GPUs should fix that ridiculous waste of bandwidth.
No, we really-really need 12Gb ICs.

RTX 2080 has the same bandwidth as RX 5700 XT, yet is 10-30% faster (excluding 1080p), usually well over 20% faster; e.g. Doom Eternal:
https://www.techspot.com/article/1999-doom-eternal-benchmarks/
In my opinion that tells us that RX 5700 XT was severely unbalanced. If nothing else these new Navi GPUs should fix that ridiculous waste of bandwidth.

> The presenter even went out of their way to avoid talking about TDP during the Q&A, like they signed an NDA for it.
Yeah, and the actual SoC overview was much shorter vs the comparable Renoir session, aka neither MS nor AMD wanted to really talk details.

> AMD playing shit close, real close to the vest.
3070 appears to have the same bandwidth as 2080 and RX 5700 XT. And it performs like a 2080 Ti. So Navi now has to catch up "2x": it has to get 50%+ more efficient with its bandwidth. That seems extremely unlikely to me.

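Roughly reconstructing that estimate (my own back-of-envelope; the ~1.2x and ~1.25x uplift factors are ballpark readings of public benchmarks, not numbers from this thread):

```python
# 5700 XT, 2080 and 3070 all sit on 256-bit GDDR6 @ 14 Gbps = 448 GB/s.
BANDWIDTH_GBS = 448

rtx2080_vs_5700xt = 1.20   # "usually well over 20% faster" (above)
rtx2080ti_vs_2080 = 1.25   # ballpark 2080 Ti uplift over the 2080

# If the 3070 lands at 2080 Ti performance on the 5700 XT's bandwidth,
# RDNA2 needs this much more performance per GB/s to match it:
needed = rtx2080_vs_5700xt * rtx2080ti_vs_2080
print(f"~{(needed - 1) * 100:.0f}% more perf per GB/s")   # -> ~50%
```
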
> That seems extremely unlikely to me.
Everything about AMD's new GPU IP is in the "extremely unlikely" tier, yet it is real.

To be fair, I believe the thinking is that the PS5 has had so many revisions/respins of its silicon to reach that clock speed; it didn't start out like that.

> RX5700 and seeing how much the performance drops?
5600XT is the living example.
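
For context on why the 5600 XT fits here: it's essentially the same Navi 10 silicon as the 5700 on a narrower, slower memory setup. The spec numbers below are the public launch figures as I recall them, so treat them as assumptions:

```python
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

rx5700   = bandwidth_gbs(256, 14)   # 448 GB/s
rx5600xt = bandwidth_gbs(192, 12)   # 288 GB/s at the original launch spec

print(f"5600 XT runs on {rx5600xt / rx5700:.0%} of the 5700's bandwidth")
# -> ~64%; the last-minute 14 Gbps BIOS raises it to 336 GB/s (75%)
```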