AMD: RDNA 3 Speculation, Rumours and Discussion

What I meant is that those are the two choices, but the patch did not suggest which.

If anything, an earlier VOPD patch on the bundling algorithm suggested it might still be sticking with 4 VGPR banks, though that could be updated at any time.
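For a sense of what that patch-level hint implies, here is a toy version of the kind of bank-compatibility check a VOPD bundler has to make. The rules are my simplification (corresponding sources of the two halves in different banks, destinations one even / one odd), not the actual LLVM code, and the 4-bank mapping is the assumption in question:

```python
# Toy illustration (not the actual LLVM patch): assume VGPR v sits in bank
# (v % NUM_BANKS). Simplified pairing rules: corresponding sources of the two
# halves must come from different banks, and the two destinations must be one
# even / one odd register.

NUM_BANKS = 4  # the assumption the earlier patch seemed to encode

def can_pair(srcs_x, srcs_y, dst_x, dst_y):
    """Return True if op X and op Y could share a VOPD bundle (simplified)."""
    src_banks_ok = all((sx % NUM_BANKS) != (sy % NUM_BANKS)
                       for sx, sy in zip(srcs_x, srcs_y))
    dst_parity_ok = (dst_x % 2) != (dst_y % 2)
    return src_banks_ok and dst_parity_ok

# Sources land in different banks, destinations are even/odd -> can bundle.
print(can_pair([4, 8], [5, 9], dst_x=0, dst_y=1))   # True
# First sources share bank 0 -> cannot bundle.
print(can_pair([4, 8], [8, 12], dst_x=0, dst_y=1))  # False
```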
I would look at the allocation granularity for hints (it grew by a factor of 2 from RDNA1 to RDNA2 and goes up another 50% [3 times that of RDNA1] with N31/N32).
How do you do register allocation that is both relatively easy to do and also has the property of maintaining an equal distribution of the allocated registers over all banks? For sure you don't want to allocate different numbers of registers in different banks. That means the allocation granularity will always be a multiple of the number of register banks, right?
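A minimal sketch of why that follows, with the bank mapping assumed for illustration (VGPR v in bank v % num_banks is my toy model, not anything from the patch):

```python
# If VGPR v lives in bank (v % num_banks), any allocation whose size is a
# multiple of num_banks touches every bank equally often, so the banks stay
# balanced; any other granularity lets the per-bank counts drift apart.

def bank_usage(alloc_sizes, num_banks):
    """Count how many registers land in each bank for contiguous allocations."""
    usage = [0] * num_banks
    next_reg = 0
    for size in alloc_sizes:
        for v in range(next_reg, next_reg + size):
            usage[v % num_banks] += 1
        next_reg += size
    return usage

print(bank_usage([8, 8, 16], num_banks=4))  # [8, 8, 8, 8] -> balanced
print(bank_usage([6, 6, 6], num_banks=4))   # [5, 5, 4, 4] -> uneven
```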
By the way, the number of register banks is not necessarily the same as the number of ports to the register file (in GCN and RDNA1 it may have been), as you usually have some kind of crossbar between the register file banks and the register file ports (the actual SRAM for the GPRs in GPUs is likely single-ported, so we have a pseudo-multiported setup, as it is way cheaper to implement). Keeping the number of ports constant while increasing the number of banks just reduces the probability of bank conflicts but does not increase the register file bandwidth.
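To put a number on that last point, here is a toy Monte Carlo sketch (my own simplified model, not RDNA internals): each bank supplies one operand per cycle and the crossbar forwards at most `ports` operands per cycle, so extra banks reduce conflict stalls but can never push throughput past the port count.

```python
import random
from math import ceil

def cycles_to_read(operand_regs, num_banks, ports):
    """Cycles to fetch all operands if each bank serves one per cycle and the
    crossbar forwards at most `ports` operands per cycle."""
    per_bank = [0] * num_banks
    for r in operand_regs:
        per_bank[r % num_banks] += 1
    return max(ceil(len(operand_regs) / ports), max(per_bank))

def avg_cycles(num_banks, ports, num_operands=6, trials=20000):
    total = 0
    for _ in range(trials):
        regs = random.sample(range(256), num_operands)  # 6 distinct VGPRs
        total += cycles_to_read(regs, num_banks, ports)
    return total / trials

for banks in (4, 8, 16):
    print(f"{banks:>2} banks, 4 ports: ~{avg_cycles(banks, 4):.2f} cycles for 6 operands")
# More banks -> closer to the port-limited floor of ceil(6/4) = 2 cycles,
# but never below it: conflicts drop, peak bandwidth does not rise.
```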

TL;DR: The increased allocation granularity tells us in pretty certain terms that the number of banks went up, but that does not necessarily mean increased register file bandwidth. More bandwidth would definitely help the dual/co-issue stuff, though, as we need in total (source and destination) 6 operands per clock for the VOPD FMACs, for instance. And having some reserve for the other stuff going on may be good.
 
Once upon a time they were bigger than NVidia. They also dominated notebooks for a few years, until NVidia decided to crush them there too heh.
 
Did they regress after AMD bought them?
Not really. ATI was never much good at executing or predicting competition. Radeon 9700 was a standout because NVidia had their terrible GeForce FX that year. After that it was NVidia outmaneuvering them at every generation with superior hardware, software, initiatives, etc. In ~2008 ATI tried to get marketshare back with a price war and a cheap-but-competitive product (RV770) but it didn't really amount to much.

Which sounds awful heh. ATI did make some great stuff over the years but for the most part they just couldn't bring all the necessary ingredients together as well as NVidia has.
 
If AMD ends up on top in raster numbers, I just don't see them pricing it lower than the $1200 4080; it will definitely be closer to the 4090, if not the same. So the crazy prices are here to stay.

That's only true if the market accepts Nvidia's pricing structure. If Nvidia priced the 4080 12 GB at $2K, AMD would not blindly follow Nvidia into that stupidity, since then neither company would be able to sell cards. Granted, it's possible that Ada sells well initially and then falls off a cliff, but by Nov 3rd AMD should be able to come up with a good estimate via a combination of sales data and polling.
 
They've offered better (faster) products at cheaper prices several times in history without it having a major effect on marketshare.

Which is a bit weird. What if AMD offers an RDNA3 product that competes with or even beats NV in normal raster and comes close to Ampere in RT, but at a lower price?

Mobile.
Their GPU IP roadmap exists to power their APUs.

That's going to lead to trouble down the line as well.
 
Not really. ATI was never much good at executing or predicting competition. Radeon 9700 was a standout because NVidia had their terrible GeForce FX that year. After that it was NVidia outmaneuvering them at every generation with superior hardware, software, initiatives, etc. In ~2008 ATI tried to get marketshare back with a price war and a cheap-but-competitive product (RV770) but it didn't really amount to much.

Which sounds awful heh. ATI did make some great stuff over the years but for the most part they just couldn't bring all the necessary ingredients together as well as NVidia has.

They didn't really outmaneuver them immediately. The 9800, X800, X1800, and X1900 series (including the x50 and xx50 mid-gen refreshes) all traded blows back and forth with NV. ATi and NV offered basically the same performance during those years. I went with AMD just because their MSAA was superior to what NV had at the time. I know people who went with NV just because they overclocked better. The HD 2900 XT is when ATi fell quite a bit behind WRT both perf and perf/watt.

That said, NV's marketing department was always better than ATi/AMD's, with some rare exceptions (AMD marketing capitalized on the 48xx series relatively well due to NV stumbling with the 2xx series).

Even during the FX fiasco, NV's marketing outmaneuvered ATi's marketing department.

Regards,
SB
 
Clockspeeds. With the exponential f-max to heat generation curve, a 50% improvement in clockspeed per watt gives us maybe a 25%-ish increase in clockspeed. Chiplet spacing and better cooling from a bigger cooler/better compounds might just end up countering the higher chip density, or maybe it can be pushed a bit higher?

"Almost 4ghz!" might just be the "overclock speed" AMD already hypes up its CPU overclock, so it's not out of character. Regardless the increase in register/cache size at the lowest levels do indicate support for higher clockspeeds without getting starved.

AMD's concentration on what the average gamer cares about (perf per $) might well pay off; the number of people complaining about Nvidia's over-the-top prices is noticeably high.

The Angstronomics leak post has been cleaned up, making me feel better about it. But the GCD count dropping to just 1 doesn't square with a 384-bit bus and the imminent availability of 24 Gbps GDDR6 from Samsung (they have a customer and it's not Nvidia). That bandwidth increase ends up at 2x a 6950 XT's, yet a 256-bit bus with 24 Gbps memory could accommodate 96 CUs and higher clockspeeds (if not the fastest), so the wider bus doesn't make economic or engineering sense for a single GCD. If Angstronomics is correct(ish), I'd bet there are at least 2 GCDs for the topmost card. I'd also be disappointed to see 32 MB stick around for the lowest end; 1440p could use 48 MB, but that's another story. Maybe there could be a stacked 64 MB version for the highest low end.
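For reference, the simple peak-bandwidth arithmetic behind the "2x a 6950 XT" remark, using the bus widths and data rates mentioned above:

```python
# Peak GDDR6 bandwidth: (bus width in bits / 8) * data rate per pin in Gbps.

def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(256, 18))  # 6950 XT, 256-bit @ 18 Gbps: 576 GB/s
print(bandwidth_gb_s(384, 24))  # 384-bit @ 24 Gbps: 1152 GB/s (exactly 2x)
print(bandwidth_gb_s(256, 24))  # 256-bit @ 24 Gbps: 768 GB/s
```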
 
They didn't really outmaneuver them immediately. The 9800, X800, X1800, and X1900 series (including the x50 and xx50 mid-gen refreshes) all traded blows back and forth with NV. ATi and NV offered basically the same performance during those years. I went with AMD just because their MSAA was superior to what NV had at the time. I know people who went with NV just because they overclocked better. The HD 2900 XT is when ATi fell quite a bit behind WRT both perf and perf/watt.

That said, NV's marketing department was always better than ATi/AMD's, with some rare exceptions (AMD marketing capitalized on the 48xx series relatively well due to NV stumbling with the 2xx series).

Even during the FX fiasco, NV's marketing outmaneuvered ATi's marketing department.

Regards,
SB
I think the X800 and X850 were honestly a plan B, because they were still SM2 and only a basic extension of R300. Plan A was probably some kind of Xenos-like R400; there are hints all over this board about that. The SM2 limitation led to the cards becoming useless in like two years as games started requiring SM3: Bioshock wouldn't run, Unreal Engine 3 games often wouldn't run on them, Oblivion was bloom-only, etc.

X1800 was ~6 months late and barely fast enough to match GF7 (and of course didn't in OpenGL). X1900 was finally competitive, but eh, I think the damage had been done.

Then R600 was ~6 months late as well and, as you say, we know how that turned out.

But yeah, they did have the best filtering and anti-aliasing until the GF 8800. I was a Radeon guy from after the Matrox G400 until the GF 8800.
 