AMD: RDNA 3 Speculation, Rumours and Discussion

Wasn't that the one with twin vertex shaders, way before NV came out with that? It was a capable GPU spec-wise, I remember. Had one in 2001, played Wolfenstein on it.

Well, it was a late-2001 card competing with the GF3 Ti 500, and it was slower than that. It had just awful drivers, glitchy AA, funky filtering limitations, and hardware oversights that killed its pixel shader performance if you tried to do much with PS 1.4. :)

It did have the TruForm thing that could tessellate blocky models into something a bit nicer, but it was implemented in only a few games.
 
Well, it was a late-2001 card competing with the GF3 Ti 500, and it was slower than that. It had just awful drivers, glitchy AA, funky filtering limitations, and hardware oversights that killed its pixel shader performance if you tried to do much with PS 1.4. :)

It did have the TruForm thing that could tessellate blocky models into something a bit nicer, but it was implemented in only a few games.
You can choke anything if you try to do too much with it. ;) PS 1.4 was also something NVIDIA couldn't match until the FX series and DX9. The drivers were eventually fixed, no matter how awful they were in the beginning.
edit: oh, and it launched a couple of months earlier, in August
 
X1800 was ~6 months late and barely fast enough to match GF7, and of course it did not in OpenGL. X1900 was finally competitive, but eh, I think the damage had been done.
To be fair, ATi had the performance crown for 4 consecutive generations: 9700 Pro, X850 XT PE, X1800 XT and X1900 XTX, but they had horrible, horrible drivers back then, lacked Shader Model 3 for quite a long time, and lagged behind on several features.

Generally, ATi followed the formula of big dies to take the performance crown during that era, but had significantly worse drivers (far less stable and less usable) and fewer features, so their marketing presence gradually diminished compared to NVIDIA, who followed the strategy of lean dies and second place in performance, but solid, stable drivers and better features, which expanded their mindshare and market presence. That lasted until NVIDIA cleaned house with the GeForce 8 series, after which they never lost the performance crown again (gen vs gen).
 
Hmmm...

384-bit bus, 192MB cache, twin 96CU GCDs, 2.3GHz clocks, 24GB 24Gbps memory, $1599.
384-bit bus, 192MB cache, 2x 80CU GCD, 2.2GHz clockspeed, 12GB 20Gbps memory, $999.
384-bit bus, 96MB cache, 1x 96CU GCD, 3.2GHz clockspeed, 12GB 18Gbps, $749
256-bit bus, 128MB cache, 1x 60(64?)CU GCD, 3.4GHz clockspeed, 16GB 20Gbps, $549
256-bit bus, 64MB cache, 1x 52(56?)CU GCD, 2.8GHz clockspeed, 8GB 16Gbps, $400
128-bit bus, 32MB cache, 32CU, 3GHz, 8GB 20Gbps, $300
128-bit bus, 32MB cache, 28CU, 2.5GHz, 8GB 16Gbps, $249.

?
 
Hmmm...

384-bit bus, 192MB cache, twin 96CU GCDs, 2.3GHz clocks, 24GB 24Gbps memory, $1599.
384-bit bus, 192MB cache, 2x 80CU GCD, 2.2GHz clockspeed, 12GB 20Gbps memory, $999.
384-bit bus, 96MB cache, 1x 96CU GCD, 3.2GHz clockspeed, 12GB 18Gbps, $749
256-bit bus, 128MB cache, 1x 60(64?)CU GCD, 3.4GHz clockspeed, 16GB 20Gbps, $549
256-bit bus, 64MB cache, 1x 52(56?)CU GCD, 2.8GHz clockspeed, 8GB 16Gbps, $400
128-bit bus, 32MB cache, 32CU, 3GHz, 8GB 20Gbps, $300
128-bit bus, 32MB cache, 28CU, 2.5GHz, 8GB 16Gbps, $249.

?

It sort of makes sense, but I doubt a top-end card that's not 2x some other card.
I.e., the 256-bit bus + 128MB cache looks whack to me.

Does the cache live in the GCD or in the memory controller chiplet?
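
For what it's worth, here's a quick back-of-the-envelope check of that "2x" doubt, using only the rumoured top two SKUs quoted above. The assumption that raw throughput scales with CU count × clock is mine, as is using the standard peak-bandwidth formula (bus width ÷ 8 × data rate):

```python
# Rough sanity check of the rumoured top two SKUs listed above.
# Assumption (mine, not from the leak): raw compute scales with CU count * clock.

def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_bits / 8 * gbps

# (CUs, clock in GHz, bus width in bits, data rate in Gbps) from the rumoured lineup
flagship = (2 * 96, 2.3, 384, 24)   # twin 96CU GCDs
single   = (96,     3.2, 384, 18)   # single 96CU GCD

compute_ratio   = (flagship[0] * flagship[1]) / (single[0] * single[1])
bandwidth_ratio = bandwidth_gbs(flagship[2], flagship[3]) / bandwidth_gbs(single[2], single[3])

print(f"compute ratio:   {compute_ratio:.2f}x")    # ~1.44x, nowhere near 2x
print(f"bandwidth ratio: {bandwidth_ratio:.2f}x")  # ~1.33x
```

So under those rumoured numbers the flagship would only be roughly 1.4x the single-GCD part in raw throughput, which is presumably what looks off about the lineup.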
 
It sort of makes sense, but I doubt a top-end card that's not 2x some other card.
I.e., the 256-bit bus + 128MB cache looks whack to me.

Does the cache live in the GCD or in the memory controller chiplet?
Currently it's presumed to be in the MCD(s).
 
Hmmm...

384-bit bus, 192MB cache, twin 96CU GCDs, 2.3GHz clocks, 24GB 24Gbps memory, $1599.
384-bit bus, 192MB cache, 2x 80CU GCD, 2.2GHz clockspeed, 12GB 20Gbps memory, $999.
384-bit bus, 96MB cache, 1x 96CU GCD, 3.2GHz clockspeed, 12GB 18Gbps, $749
256-bit bus, 128MB cache, 1x 60(64?)CU GCD, 3.4GHz clockspeed, 16GB 20Gbps, $549
256-bit bus, 64MB cache, 1x 52(56?)CU GCD, 2.8GHz clockspeed, 8GB 16Gbps, $400
128-bit bus, 32MB cache, 32CU, 3GHz, 8GB 20Gbps, $300
128-bit bus, 32MB cache, 28CU, 2.5GHz, 8GB 16Gbps, $249.

?
You made some questionable decisions regarding specs in your lineup, so I made my own imaginary lineup.

Imaginary N34: 2x 32WGP GCD, 3.2GHz clockspeed, 32GB 20Gbps, 512-bit bus, 128MB cache. Processing power: 104.9 Tflops (+25.5%), $1499

Full N31: 48WGP, 3.4GHz clockspeed, 24GB 20Gbps, 384-bit bus, 96MB cache. Processing power: 83.6 Tflops (+24%), $1199
N31: 40WGP, 3.3GHz clockspeed, 20GB 18Gbps, 320-bit bus, 80MB cache. Processing power: 67.6 Tflops (+29%), $999

Full N32: 32WGP, 3.2GHz clockspeed, 16GB 16Gbps, 256-bit bus, 64MB cache. Processing power: 52.4 Tflops (+33%), $749
N32: 24WGP, 3.2GHz clockspeed, 12GB 16Gbps, 192-bit bus, 48MB cache. Processing power: 39.3 Tflops (+33%), $549

Full N33: 16WGP, 3.6GHz clockspeed, 8GB 20Gbps, 128-bit bus, 32MB cache. Processing power: 29.5 Tflops (+29%), $399
N33: 14WGP, 3.2GHz clockspeed, 8GB 16Gbps, 128-bit bus, 24-32MB cache. Processing power: 22.9 Tflops (100%), $299

It looks good in my opinion. :)
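
The Tflops figures in that lineup appear to follow a simple dual-issue RDNA 3 estimate, with each percentage being the step up over the SKU one rung below. Here is a minimal sketch that reproduces them; the 512 FLOPs per WGP per clock constant (128 FP32 lanes × dual-issue × 2 ops for FMA) is my assumption based on the dual-issue rumours, not a confirmed spec:

```python
# Reproduce the Tflops numbers in the lineup above.
# Assumption (dual-issue RDNA 3 rumour, not a confirmed spec):
# 128 FP32 lanes per WGP * dual-issue * 2 ops (FMA) = 512 FLOPs per WGP per clock.
FLOPS_PER_WGP_PER_CLOCK = 512

def tflops(wgp: int, ghz: float) -> float:
    # WGPs * FLOPs/clock * GHz gives GFLOPS; divide by 1000 for Tflops
    return wgp * FLOPS_PER_WGP_PER_CLOCK * ghz / 1000

lineup = [  # (name, WGPs, clock in GHz) from the post above
    ("Imaginary N34", 2 * 32, 3.2),
    ("Full N31",      48,     3.4),
    ("N31",           40,     3.3),
    ("Full N32",      32,     3.2),
    ("N32",           24,     3.2),
    ("Full N33",      16,     3.6),
    ("N33",           14,     3.2),
]

prev = None
for name, wgp, ghz in reversed(lineup):  # bottom of the stack first
    t = tflops(wgp, ghz)
    step = f"(+{(t / prev - 1) * 100:.1f}% over the SKU below)" if prev else "(baseline)"
    print(f"{name:14s} {t:6.1f} Tflops {step}")  # matches the quoted figures to within rounding
    prev = t
```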
 
X1800 was ~6 months late and barely fast enough to match GF7, and of course it did not in OpenGL. X1900 was finally competitive, but eh, I think the damage had been done.

Then R600 was ~6 months late as well, and we know how that turned out, as you say.

But yeah, they did have the best filtering and anti-aliasing until the GF 8800. I was a Radeon guy from the post-Matrox G400 days until the GF 8800.
Sorry, but that's just short memory. Unless you are cherry-picking games, the X1800 XT was faster than the 7800 GTX on average. The launch driver had a bug related to OpenGL + MSAA 4×, where performance was slower than with MSAA 6×. It was fixed almost immediately, and together with other optimizations the card became significantly faster than the 7800 GTX even in OpenGL games. That's well documented in the reviews and post-launch updates. The GeForce 7 series had lower MSAA quality, lower AF quality, and lacked support for HDR (FP16) + MSAA. Its fixed pipeline resulted in poor performance in ALU-demanding games, which Nvidia tried to compensate for with the paper launch of the 7800 GTX 512 and later with dual-GPU configurations like the 7950 GX2, with a bunch of AFR issues.
 
Sorry, but that's just short memory. Unless you are cherry-picking games, the X1800 XT was faster than the 7800 GTX on average. The launch driver had a bug related to OpenGL + MSAA 4×, where performance was slower than with MSAA 6×. It was fixed almost immediately, and together with other optimizations the card became significantly faster than the 7800 GTX even in OpenGL games. That's well documented in the reviews and post-launch updates. The GeForce 7 series had lower MSAA quality, lower AF quality, and lacked support for HDR (FP16) + MSAA. Its fixed pipeline resulted in poor performance in ALU-demanding games, which Nvidia tried to compensate for with the paper launch of the 7800 GTX 512 and later with dual-GPU configurations like the 7950 GX2, with a bunch of AFR issues.
People forget how much better AMD was from R300 onwards. It wasn't until G80 that Nvidia took the lead, which they held for several generations until GCN.
 
It looks good in my opinion. :)
At least one really should come with 24 Gbps memory, considering Samsung said this:
With customer verifications starting this month, Samsung plans to commercialize its 24Gbps GDDR6 DRAM in line with GPU platform launches, therein accelerating graphics innovation throughout the high-speed computing market.
And we already know it wasn't with the NV launch.
 
They were so great back in the day. The 9700 Pro was my favorite GPU I've ever owned.

I want RDNA3 to be another 9700 Pro... just smoking the competition!
Same for me. Not only was it the top performer for a reasonable price, it aged great: it didn't start performing poorly in new titles 18 months in.
 
At least one really should come with 24 Gbps memory, considering Samsung said this:
And we already know it wasn't with the NV launch.
Memory speed was just my guess.
It could also be changed to 20Gbps instead of 16Gbps, 22Gbps instead of 18Gbps, and 24Gbps instead of 20Gbps.
If I compare N23 vs N33 specs, then N33 would need that 24Gbps memory.
 
Continued:
I am not sure if this would be enough, or what AMD is planning.
For example, N33 has >2x more Tflops than the RX 6650 XT, yet even with 24Gbps memory the bandwidth would increase by only 37%.
The Infinity Cache is faster, but still only 32 MB.
This also applies to N31 and N32, and 3D cache will most likely be used only for the top N31 model.
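
A quick check of that 37% figure, as a minimal sketch: the RX 6650 XT's 128-bit bus and 17.5 Gbps GDDR6 are the published specs, while the N33 numbers are the rumoured ones from above.

```python
# Bandwidth comparison: rumoured N33 with 24 Gbps memory vs RX 6650 XT.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_bits / 8 * gbps

rx6650xt   = bandwidth_gbs(128, 17.5)  # 280 GB/s (published spec)
n33_24gbps = bandwidth_gbs(128, 24.0)  # 384 GB/s (rumoured 128-bit bus, Samsung's 24 Gbps parts)

print(f"RX 6650 XT:   {rx6650xt:.0f} GB/s")
print(f"N33 @ 24Gbps: {n33_24gbps:.0f} GB/s (+{(n33_24gbps / rx6650xt - 1) * 100:.0f}%)")  # ~ +37%
```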
 
Continued:
I am not sure if this would be enough, or what AMD is planning.
For example, N33 has >2x more Tflops than the RX 6650 XT, yet even with 24Gbps memory the bandwidth would increase by only 37%.
The Infinity Cache is faster, but still only 32 MB.
This also applies to N31 and N32, and 3D cache will most likely be used only for the top N31 model.
Looks to me like they made compromises to save on die area, and it will affect performance at higher resolutions. If you want the real deal, you'll want the N31 with 3D-stacked IC.
 