AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

RTX 10% in 2½ years vs PS5 4% in 2 months (pretty sure it's safe at this point to assume active PSN users cover the userbase; Sony sold 4.5M consoles in 2 months, which is 3.95% of 114 million PSN users).
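As a rough sanity check on those figures, here is a back-of-the-envelope calculation using only the numbers quoted above (NVIDIA's ~10% claim is simply taken at face value):

```python
# Back-of-the-envelope check using only the figures quoted above.
ps5_sold = 4.5e6      # PS5 consoles sold in ~2 months
psn_users = 114e6     # active PSN users, used here as a proxy for the userbase

ps5_share = ps5_sold / psn_users
print(f"PS5 share of PSN userbase after ~2 months: {ps5_share:.2%}")   # ~3.95%

# NVIDIA's claim, taken at face value: ~10% of GeForce users have RTX after ~2.5 years.
print(f"RTX: 10% in 2.5 years vs PS5: {ps5_share:.1%} in 2 months")
```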

Yes, well, no clear numbers then (10% of the total GeForce userbase??? lol). The PC market is arguably larger than the PS market, and since no GPU manufactured since last year lacks ray tracing, I see no problems there. Actually, the largest market will probably be the PC.

But this is going pretty off-topic already; should there be another thread for this?

Yes, somehow, userbase and consoles made their appearance here.

According to NVIDIA, about 10% of their user base has RTX; I'm not sure if that was just gamers or if it included professional cards too.
 
10% of the total GeForce users would be a good solid number, and I optimistically want it to be at least that to progress the use of RT in game development. One problem I have, though: is Nvidia simply counting the number of units sold? For Ampere, that's going to be a majority of their units in mining rigs, not in gamers' hands. There's a bunch of folks right here who have 3070s and 3080s sitting in mining rigs doing nothing.

I like to think that those GPUs will eventually end up in the hands of gamers, but that's only going to be true for a portion of those cards, even after another eventual bust of the mining economy, whenever that may be. By then we'll likely just have the next generation, and you'll have this black hole of the Ampere generation.
 
Yup, the number of NV RT-capable cards in gamers' machines is likely significantly lower than the number of cards NV has sold. Same goes for AMD. At least for the next few years, consoles will be the driving force for the adoption of RT in most AAA titles, but I expect developers with a big enough budget may include RT above and beyond what the consoles are capable of.

I'm now in a position where I'm interested in a new card that has HDMI 2.1 (to use with my new TV), but there just isn't anything available to buy easily without camping the stores or paying an exorbitant price. Hopefully the market crashes soon; then maybe I can pick up a used card for cheap on eBay or something. I don't even care if it has RT or not. :p

Alternatively, a DP to HDMI 2.1 adapter would be great, but those are still mostly non-existent. The only one I saw on Amazon was REALLY unreliable and flaky.

Regards,
SB
 
That's the one I saw. It's hugely buggy and unreliable. It can and does just stop working while connected if your display goes to sleep and then you wake it back up, for example.

Regards,
SB
 
Guess the quality has gone downhill with the HDMI 2.1 requirements, or the ones I know of using older generations were lucky.

Well, it is a first-gen product, so who knows whether the problems are inherent in the chip that's being used or in Club 3D's implementation of it.

But considering it's the only DP to HDMI 2.1 converter that I've seen so far, I'm guessing the chip may be buggy and thus other companies are steering clear of it for now.

Regards,
SB
 
At least for the next few years, consoles will be the driving force for the adoption of RT in most AAA titles, but I expect developers with a big enough budget may include RT above and beyond what the consoles are capable of.

BFV did ray tracing, and so have many more titles since 2018.
By that logic, low-end GPUs will be, or already are, the 'driving force'. Anyway, even today, RT on PC is already far above what's in the console versions. There's such a thing as scaling, even on consoles (XSS, Switch/Nintendo consoles).
 
Gigabyte Radeon RX 6700 XT Gaming OC review - Introduction (guru3d.com)
April 9, 2021
It might be so that the reference RX 6700 XT sits close to RTX 3060 Ti and sometimes RTX 3070 performance, but only in shading performance. Raw raytracing performance is a notch slower than what the competition offers. We also cannot comprehend that AMD still has not implemented any form of machine-learning super-sampling dedicated in hardware, much like NVIDIA offers with Tensor cores. For these two reasons (RT perf and the lack of MLAA), we cast doubt on why AMD is trying to justify that starting price of 479 USD. The true competitor here is the RTX 3060 Ti with its 399 USD MSRP. We want to remind AMD that NVIDIA introduced its Tensor cores back in the summer of 2018, yet AMD still has no answer to that technology implemented. You can also argue that while the Infinity Cache works most of the time, it's designed as a workaround to cover for an imperfection in the choice of a more affordable memory type (GDDR6 as opposed to GDDR6X); the current AMD GPUs are memory-bandwidth deprived, even with GDDR6 at 16 Gbps, but more so due to the 192-bit wide memory bus. And that's going to bite this product in the ass every time you get GPU limited or the L3 cache runs out and gets fewer hits.
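For context on the bandwidth figures in that quote, raw DRAM bandwidth follows directly from bus width and per-pin data rate; a minimal sketch (the RTX 3060 Ti's commonly listed 256-bit / 14 Gbps configuration is included only for comparison):

```python
def raw_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak DRAM bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# RX 6700 XT: 192-bit bus with 16 Gbps GDDR6 (figures from the quoted review)
print(raw_bandwidth_gbs(192, 16))   # 384.0 GB/s, before any Infinity Cache hits

# RTX 3060 Ti for comparison: 256-bit bus with 14 Gbps GDDR6
print(raw_bandwidth_gbs(256, 14))   # 448.0 GB/s
```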
 
"Because we fucking can you dumb motherfucker."

-AMD
And so does Nvidia; they just didn't update the MSRP to mining-craze levels.


I don't get these reviewers who live in a bubble / denial / sheer dishonesty over the fact that 99.9% of cards aren't being sold at MSRP, yet disparage AMD for the 6700 XT's MSRP.

A $480 6700 XT sold on AMD's website has (unfortunately for consumers) an insanely better price/performance ratio than all the other modern graphics cards being sold at e-tail and retail, yet these guys bend over backwards to make it look like a bad deal.


This guru3d guy is even complaining about the Infinity Cache by pretending to know its hit rates will somehow be worse in the future.
How far that site has fallen... it used to be a reference for me. Sad.
 
Is Hilbert really this biased, or does he just not know better?
We also cannot comprehend that AMD still has not implemented any form of machine-learning super-sampling dedicated in hardware, much like NVIDIA offers with Tensor cores.
...
We want to remind AMD that NVIDIA introduced its Tensor cores back in the summer of 2018, yet AMD still has no answer to that technology implemented.
NVIDIA doesn't have "machine learning super-sampling dedicated in hardware". They have DLSS running on Tensor cores, aka matrix units, but those aren't dedicated to, or even designed specifically to accelerate, DLSS.
Also, AMD has implemented matrix units just like NVIDIA has; they just didn't put them into consumer hardware.
You can also argue that while the Infinity Cache works most of the time, it's designed as a workaround to cover for an imperfection in the choice of a more affordable memory type (GDDR6 as opposed to GDDR6X)
One could just as well argue that "GDDR6X is designed as a workaround to cover for an imperfection in GDDR6", or make tons of other similarly ridiculous comparisons. There are several roads to a goal (which in this case is usable memory bandwidth), so why would a big cache be any more of a "workaround" than any other means to the end?
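To make the "several roads to the same goal" point concrete, here is a minimal, illustrative model of a cache acting as a bandwidth amplifier; the bandwidth and hit-rate numbers below are placeholders, not measured values:

```python
def effective_bandwidth(dram_gbs: float, cache_gbs: float, hit_rate: float) -> float:
    """Usable bandwidth when a fraction `hit_rate` of traffic is served from cache.

    DRAM only has to carry the misses, so demand can grow until
    (1 - hit_rate) * demand reaches dram_gbs; the cache itself caps the total.
    """
    if hit_rate >= 1.0:
        return cache_gbs
    return min(cache_gbs, dram_gbs / (1.0 - hit_rate))

# Placeholder numbers, purely for illustration:
for h in (0.0, 0.3, 0.5, 0.7):
    print(f"hit rate {h:.0%} -> {effective_bandwidth(384, 2000, h):.0f} GB/s usable")
```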
 
One could just as well argue that "GDDR6X is designed as a workaround to cover for an imperfection in GDDR6", or make tons of other similarly ridiculous comparisons
How is dodging the shittiest memory standard since RDRAM even a bad thing anyway?
why would a big cache be any more of a "workaround" than any other means to the end?
"Nvidia good, AMD bad" has been the GPU market since ~2012, and nothing will change for some time yet.
 
why would a big cache be any more of a "workaround" than any other means to the end?
Because it has more performance pitfalls?
Pointer-chasing workloads, such as ETH, BVH traversal, any random sampling, etc., do not scale well with cache size.
Cache hit rates tend to fall off a cliff with higher resolutions, larger data sets and increasingly random memory accesses.
Cache as a bandwidth amplifier is OK for certain things, but that's not a magic wand which was all of a sudden just discovered by AMD.
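A crude way to illustrate the "falls off a cliff" argument: for uniformly random accesses, the hit rate is roughly capped by cache size over working-set size, so it degrades quickly once the data set (e.g. at higher resolutions) outgrows the cache. The working-set sizes below are hypothetical:

```python
def random_access_hit_rate(cache_mb: float, working_set_mb: float) -> float:
    """Approximate hit rate for uniformly random accesses over a working set."""
    return min(1.0, cache_mb / working_set_mb)

CACHE_MB = 96  # e.g. the 6700 XT's Infinity Cache size
for ws in (64, 128, 256, 512, 1024):  # hypothetical working-set sizes in MB
    print(f"{ws:4d} MB working set -> ~{random_access_hit_rate(CACHE_MB, ws):.0%} hit rate")
```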
 
For a gaming card it seems to be good enough, IMO. And maybe it's a first step towards a memory architecture with chiplets sharing data?
 
It's not a standard, though, it's just Micron's product. JEDEC hasn't touched it.
Not all mem standards are JEDEC, lol.
but that's not a magic wand
Yes it is (just a bit tricky and pricey in its own way).
When was the last time you saw a GPU just scale with moar MHz pumped into it?
And maybe it's a first step towards a memory architecture with chiplets sharing data?
No shit.
G6X is just a dumb attempt to get more off-chip bandwidth without relying on pricey and hardly ever readily available HBM.
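On the "scale with moar MHz" point above, a toy roofline-style model shows why raising the core clock alone stops helping once a workload is bandwidth-bound; all numbers here are illustrative placeholders, not measurements:

```python
def attainable_tflops(peak_tflops: float, bandwidth_gbs: float, flops_per_byte: float) -> float:
    """Roofline model: performance is capped by either compute or memory bandwidth."""
    return min(peak_tflops, bandwidth_gbs * flops_per_byte / 1000.0)

BANDWIDTH_GBS = 384.0    # fixed memory bandwidth (illustrative)
INTENSITY = 40.0         # FLOPs per byte of DRAM traffic (illustrative workload)
for peak in (10, 13, 16, 20):   # pretend these come from pumping up the core clock
    print(f"{peak:2d} TFLOPS peak -> {attainable_tflops(peak, BANDWIDTH_GBS, INTENSITY):.2f} TFLOPS attainable")
```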
 