AMD: RDNA 3 Speculation, Rumours and Discussion

I feel it's worth reminding a few folks: Moore's Observation (aka "Law") was not about $ per transistor, nor overall transistor density, nor any power, performance or compute-capability metric either. Rather, his observation was that the total number of transistors in an integrated circuit roughly doubles every two years.
That is not correct. It was *always* about cost. You could always do (and can still do) fancy things in a research lab that would never be commercially viable.
Gordon Moore said:
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.
 
I get that you're quoting his 1965 paper. The observation he made while at Intel, which famously became his "law", was specifically transistors per IC.
 
But that paper (or rather article) *is* "Moore's Law". I agree that it's an observation, or a self-fulfilling prophecy. But it all stems from that quote. Transistors-per-IC doesn't really make much sense without cost being a factor. What's an IC anyway? The reticle limit? I'm not aware of any other formulation made by Moore that doesn't include cost. Now, as you correctly point out, there are a host of bastardizations/corollaries. And then there's Dennard scaling, which is about power.
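To make the "complexity for minimum component costs" wording concrete, here's a toy back-of-envelope model (entirely made-up numbers, nothing from Moore): cramming more components onto a chip amortises the fixed die cost, but yield falls with complexity, so the cost per *good* component has a minimum at some intermediate complexity. That optimum is the thing Moore observed doubling.
```cpp
// Toy illustration of "complexity for minimum component cost".
// All numbers are assumptions for the sake of the example.
#include <cmath>
#include <cstdio>

int main() {
    const double chip_cost   = 100.0;   // assumed fixed cost to fabricate one die
    const double defect_rate = 1e-4;    // assumed per-component defect probability
    double best_cost = 1e300;
    int    best_n    = 0;
    for (int n = 100; n <= 100000; n += 100) {
        double yield = std::exp(-defect_rate * n);            // toy Poisson-style yield model
        double cost_per_component = chip_cost / (n * yield);  // cost of one *working* component
        if (cost_per_component < best_cost) { best_cost = cost_per_component; best_n = n; }
    }
    std::printf("Cost per component is minimised at roughly %d components per chip\n", best_n);
    return 0;
}
```
With these made-up numbers the optimum sits around 10,000 components; better processes push that optimum (and the achievable complexity at minimum cost) upward over time, which is the cost-centric reading of the law.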

Regardless, our argument is moot. It's all going to shit. Dennard, Moore, everything.
 
Scaling is no longer possible...
 
Maybe at some point this will force a rethink of the uarch vs. "just" increasing the number of units (I know it's not that simple). Maybe with a longer life period for the products too, without a new arch every 2 years. My wet dream is somebody going full TBDR to help solve the bandwidth problem, but I guess ImgTec has too many patents for that to happen...
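For what it's worth, here's a minimal sketch of why full TBDR eases the bandwidth problem (my own toy code with hypothetical names; real hardware does the binning and per-tile resolve in fixed function): depth tests and blending stay in a small on-chip tile buffer, and external memory only sees one write of the finished tile.
```cpp
#include <cstdint>
#include <vector>

constexpr int TILE = 32;  // assumed 32x32-pixel tiles

struct Fragment { int x, y; float depth; uint32_t color; };

// Resolve one tile's binned fragments entirely in "on-chip" buffers, then
// write the finished tile out once. The binning pass that builds the per-tile
// fragment lists is not shown.
void resolveTile(const std::vector<Fragment>& binned, uint32_t* fbTile /* TILE*TILE pixels */) {
    float    zbuf[TILE * TILE];
    uint32_t cbuf[TILE * TILE];
    for (int i = 0; i < TILE * TILE; ++i) { zbuf[i] = 1e30f; cbuf[i] = 0; }

    // All depth-test read-modify-write traffic stays in these local arrays
    // (on-chip SRAM in hardware), no matter how much overdraw there is.
    for (const Fragment& f : binned) {
        int idx = (f.y % TILE) * TILE + (f.x % TILE);
        if (f.depth < zbuf[idx]) { zbuf[idx] = f.depth; cbuf[idx] = f.color; }
    }

    // The only external-memory traffic: one contiguous write of the final tile.
    for (int i = 0; i < TILE * TILE; ++i) fbTile[i] = cbuf[i];
}
```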
 
Necessity is the mother of all invention.
Economies of scale demand that anything actually fancy dies.
Every single proposed CMOS replacement is exactly the kind of fancy that's badly suited to economies of scale.

You know, DRAM should've been replaced like 30 years ago but alas!
Still here with no signs of dying.
 

Isn't that the case for every new technology? It always starts with poor economies of scale and improves from there. I think it's way too early to lose faith.
 
Isn't that the case for every new technology?
We've had like a bazillion promising DRAM replacements dying without ever reaching said scale.
And that's like, replacing pretty basic memory tech.
Imagine how miserable killing CMOS would be.
I think it’s way too early to lose faith.
We're basically bending physics over with EUV just to keep a semblance of CMOS scaling going.
That should about tell you the chances actual CMOS replacements have in the wild.
 
There are some fully optical switches and transistors in development, some of them even CMOS-compatible, but I'm not sure it pans out (considering the "successes" of FeRAM, ReRAM and the like)... It'd be pretty naive to expect something as versatile and cheap as the current-gen technologies to appear without the same level of investment into production / development (inflation-adjusted, of course). It also doesn't help that science now is mostly short-term / small-project oriented; big ideas and mega-science projects (to borrow from our agitprop language) are few and far between...
 
The first patent seems to aim at traversing the BVH tree even when part of the BVH data is missing from cache, in the hope of getting a hit within the resident data, at least while the missing data is still being fetched.
It seemed to me this document specifies discarding the ray query for a node when the node is not resident.

Perhaps the next version of DXR is going to bring partially resident BVHs? Or maybe BVH Query Feedback, similar to Sampler Feedback?
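For what it's worth, a rough CPU-side sketch of the first reading of that patent (node layout, cache interface and helper names are all made up for illustration, not actual hardware behaviour): instead of stalling on a non-resident node, the traversal requests it and keeps walking whatever is resident, flagging the query as incomplete so it can be re-issued later.
```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical flat BVH node; real layouts differ.
struct Node {
    bool     leaf = false;
    uint32_t child[2] = {0, 0};   // child indices for internal nodes
};

// Hypothetical cache interface for a partially resident BVH.
struct BvhCache {
    std::vector<Node>     nodes;         // backing store
    std::vector<bool>     residentFlags; // which nodes are currently on-chip
    std::vector<uint32_t> pendingFetch;  // nodes we asked to be streamed in

    bool resident(uint32_t id) const { return residentFlags[id]; }
    void requestFetch(uint32_t id)   { pendingFetch.push_back(id); }
    const Node& get(uint32_t id) const { return nodes[id]; }
};

struct Ray { /* origin, direction, tMax ... omitted */ };

// Stand-in intersection tests; a real traversal would test AABBs / triangles.
using BoundsTest = std::function<bool(const Ray&, uint32_t)>;
using LeafTest   = std::function<bool(const Ray&, uint32_t)>;

// Returns true on a confirmed hit. 'incomplete' is set if any subtree was
// skipped because its node was not resident; the caller could re-issue the
// query once the fetches land (or, per the other reading, just discard it).
bool traverse(const Ray& ray, uint32_t root, BvhCache& cache,
              const BoundsTest& hitBounds, const LeafTest& hitLeaf,
              bool& incomplete) {
    incomplete = false;
    std::vector<uint32_t> stack{root};
    while (!stack.empty()) {
        uint32_t id = stack.back();
        stack.pop_back();
        if (!cache.resident(id)) {
            cache.requestFetch(id);   // don't stall on the miss...
            incomplete = true;        // ...but remember the result is partial
            continue;                 // keep testing whatever is resident
        }
        const Node& n = cache.get(id);
        if (n.leaf) {
            if (hitLeaf(ray, id)) return true;
        } else {
            for (uint32_t c : n.child)
                if (hitBounds(ray, c)) stack.push_back(c);
        }
    }
    return false;
}
```
The "discard the query" reading would simply bail out at the first non-resident node instead of setting the flag, which is where something like BVH query feedback to the application could come in.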
 
Ray tracing performance is all that's going to matter, and so Navi 33 achieving Navi 21 ray tracing performance at 1080p is not going to be enough, as a solid 120-144 fps won't be reached.


67fps for High ray tracing at 1080p (I don't think he found a good test scene though).


34fps for Psycho ray tracing at 1080p.

Regarding the second video, he comments that driver 21.9.1 improved ray tracing performance: "I remember getting way less FPS on 1080p with ray tracing. So something must have improved."
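Quick frame-time arithmetic on those quoted numbers (assuming they're representative of Navi 21 at 1080p), just to show the gap to the 120-144 fps targets mentioned above:
```cpp
#include <cstdio>

int main() {
    const double measured[] = {67.0, 34.0};   // High RT / Psycho RT at 1080p, as quoted
    const double targets[]  = {120.0, 144.0};
    for (double fps : measured) {
        std::printf("%.0f fps = %.1f ms per frame\n", fps, 1000.0 / fps);
        for (double t : targets)
            std::printf("  needs a %.1fx speedup for %.0f fps (%.1f ms budget)\n",
                        t / fps, t, 1000.0 / t);
    }
    return 0;
}
```
If those numbers hold, High RT would need roughly a 1.8-2.1x speedup and Psycho more like 3.5-4.2x to reach those targets.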
 
Oh noes, no, not even close.
I also don't get the "everything that matters will be RT performance" statements for GPUs releasing next year.

After UE5 with Lumen was shown, we started to see the big AAA games actually moving away from RTRT or making only light use of it. Battlefield V was one of the first showcases of ray tracing, but Battlefield 2042 is skipping it entirely. COD MW Remake and COD Black Ops had ray tracing, but COD Vanguard might be missing out after so many calls for disabling the feature in the previous games. Far Cry 6 apparently uses ray tracing as an afterthought, Halo Infinite AFAIK doesn't have any, nor does Age of Empires 4.

I'm not suggesting Ray Tracing isn't the future or that it will eventually become the number one performance factor. It just doesn't seem all that important for the games being released within the next couple of years, which is what matters the most for people buying graphics cards in 2022.
 