Bondrewd
Veteran
> It should’ve been called “7800 XT” then

It smacks 4080 a bit too nice to be called that.

> Then “7950 XTX” for an R580+…

7970XT 3GHz edition and I'll just throw nostalgiabuxx at the screen.
> For some, RT performance is all that matters, and this "only" seems to match last gen's fastest card, if even that.

That, and the view that if performance was higher then AMD would’ve focused on benchmarks throughout their presentation.
> It smacks 4080 a bit too nice to be called that.
> 7970XT 3GHz edition and I'll just throw nostalgiabuxx at the screen.

I miss my X1900 XTX. Its fan screamed like hell, but damn was it an amazing GPU.
> For some, RT performance is all that matters, and this "only" seems to match last gen's fastest card, if even that.

I can appreciate the sentiment, but whether we really have enough info to make that determination right now is what I'm trying to figure out. Seems like a lot of people are doing a lot of guessing and predictions based on very little data.
> I can appreciate the sentiment, but whether we really have enough info to make that determination right now is what I'm trying to figure out. Seems like a lot of people are doing a lot of guessing and predictions based on very little data.

Well, AMD isn't really promising more either. We know 6950 XT performance, so we can estimate pretty well where the 7900 XTX lands.
> I can appreciate the sentiment, but whether we really have enough info to make that determination right now is what I'm trying to figure out. Seems like a lot of people are doing a lot of guessing and predictions based on very little data.

> Well, AMD isn't really promising more either. We know 6950 XT performance, so we can estimate pretty well where the 7900 XTX lands.

Again, did I miss something? How can y'all be doom-and-glooming about this part when we really haven't seen any kind of real performance evaluation of it?
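For what it's worth, the kind of estimate being talked about here is just simple scaling: take known 6950 XT results and apply AMD's claimed "up to" uplift (roughly 1.5x-1.7x at 4K per the announcement slides). A minimal Python sketch of that math, with made-up baseline numbers rather than real benchmark data:

```python
# Back-of-the-envelope estimate: scale known 6950 XT results by AMD's
# claimed "up to" uplift range (roughly 1.5x-1.7x at 4K per the slides).
# Baseline fps values here are placeholders, not real benchmark data.
baseline_6950xt_fps = {
    "Game A (4K)": 60,  # hypothetical
    "Game B (4K)": 90,  # hypothetical
}

CLAIMED_UPLIFT_LO, CLAIMED_UPLIFT_HI = 1.5, 1.7

for game, fps in baseline_6950xt_fps.items():
    lo = fps * CLAIMED_UPLIFT_LO
    hi = fps * CLAIMED_UPLIFT_HI
    print(f"{game}: 6950 XT {fps} fps -> 7900 XTX est. {lo:.0f}-{hi:.0f} fps")
```

Of course, "up to" claims are the top end of a marketing range, so the low bound is the safer read until independent reviews land.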
> I can't think of a reason AMD would have underplayed or sandbagged its numbers.

Honestly, the first thing that comes to mind for me is they don't know the real numbers yet. A month of driver development can make an enormous difference, and it wouldn't be the first time they weren't sure of the performance at the announcement.
> More accurately, it can rasterise 33B tris/s and transform/cull 198B/s.

That's because it's 1 polymorph engine/TPC and they cull 0.5x prims/PM engine per clock, iirc? Then 1 triangle rasterised per GPC per clock.
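Those quoted rates do decompose cleanly under simple per-clock math if you assume an AD102-style layout; the unit counts and clock in this sketch are assumptions for illustration, not confirmed specs:

```python
# Where the quoted 33B/198B figures could come from, assuming an AD102-like
# layout: 12 GPCs, 6 TPCs per GPC, 1 polymorph engine per TPC, 1 triangle
# rasterised per GPC per clock, 1 prim transformed per PM engine per clock.
# Clock and unit counts are assumptions for illustration, not confirmed specs.
GPCS = 12
TPCS_PER_GPC = 6
CLOCK_GHZ = 2.75

pm_engines = GPCS * TPCS_PER_GPC            # 72 polymorph engines
raster_btris = GPCS * CLOCK_GHZ             # ~33B tris/s rasterised
transform_bprims = pm_engines * CLOCK_GHZ   # ~198B prims/s transformed
cull_bprims = 0.5 * pm_engines * CLOCK_GHZ  # ~99B/s at 0.5 prims/PM/clock

print(f"rasterise: {raster_btris:.0f}B tris/s")
print(f"transform: {transform_bprims:.0f}B prims/s")
print(f"cull at 0.5/PM/clock: {cull_bprims:.0f}B prims/s")
```

Worth noting: 0.5 prims culled/PM/clock would give only half the 198B/s transform rate, which may be why the post hedges with "iirc".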
> doubled cull rates so that's 4 primitives culled/SE per clock?

Yea.
> Big L0 and L1 increases will be very important for APUs.

For N33 too, to lessen that relatively teeny 32MiB LLC pressure.
> Could be interesting with nanite

Nanite is mostly pure compute, doesn't use GPU geometry h/w. They are still looking into using mesh shaders there, AFAIR.
> Nanite is mostly pure compute, doesn't use GPU geometry h/w. They are still looking into using mesh shaders there, AFAIR.

Of course it is; I was completely forgetting one of the main points of it. Thanks.
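For anyone wondering what "mostly pure compute" means in practice: Nanite's software path rasterises its tiny triangles in a compute shader, resolving visibility with 64-bit atomic-min writes into a visibility buffer instead of going through the fixed-function front end. Below is a heavily simplified scalar Python analogue of that idea (edge-function coverage plus an atomic-min style depth/ID buffer); it's an illustration of the technique, not Unreal's actual code:

```python
# Heavily simplified sketch of compute-style rasterisation as Nanite does it
# conceptually: walk a triangle's pixel bounding box, test edge functions,
# and resolve visibility with an atomic-min over a packed (depth, id) value.
# Real Nanite runs this in a GPU compute shader with 64-bit atomics; this is
# a scalar Python analogue for illustration, not Unreal's implementation.

def edge(ax, ay, bx, by, px, py):
    """Signed edge function: sign tells which side of edge a->b point p is on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, tri_id, visbuf, width):
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    xmin, xmax = int(min(x0, x1, x2)), int(max(x0, x1, x2))
    ymin, ymax = int(min(y0, y1, y2)), int(max(y0, y1, y2))
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            # Inside when all three edge functions agree (consistent winding).
            if (edge(x0, y0, x1, y1, x, y) >= 0 and
                    edge(x1, y1, x2, y2, x, y) >= 0 and
                    edge(x2, y2, x0, y0, x, y) >= 0):
                depth = (z0 + z1 + z2) / 3  # crude flat depth per triangle
                packed = (depth, tri_id)    # depth first, so tuple compare
                idx = y * width + x         # mimics a 64-bit atomic-min
                if packed < visbuf[idx]:    # "atomic-min": keep nearest frag
                    visbuf[idx] = packed

# Usage: 8x8 target; visbuf holds (depth, triangle id), initialised to "far".
W = H = 8
visbuf = [(float("inf"), -1)] * (W * H)
rasterize(((1, 1, 0.5), (6, 1, 0.5), (1, 6, 0.5)), 0, visbuf, W)
print(sum(1 for d, _ in visbuf if d < float("inf")), "pixels covered")
```

The packed (depth, id) value is the whole trick: one atomic compare-and-keep both depth-tests and records which triangle won, which is why it avoids the geometry hardware entirely.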
Both of these thumbnails are destroying me.
The main thrust of my argument was that if we are to say RT performance doesn't matter because we'll just turn it off anyway, then you are accepting that on your $900 GPU, you are getting a lesser experience in some respects than even an Xbox Series S gamer.
On the other hand, if you turn RT on, then the AMD GPU is much slower than its competing Nvidia GPU. Both options are bad IMO.

But when RT comes into play - as it does in many big titles and will in many more moving forwards - it's potentially only competing with a 3080 Ti, which can be picked up for much less.
> Again, did I miss something? How can y'all be doom-and-glooming about this part when we really haven't seen any kind of real performance evaluation of it?

What was he thinking anyways? He put out that video that was clearly overhyping things and ignoring all credible-looking information out there on what Navi 31/RDNA3 would be, and is now acting like the sky has fallen when it didn't work out that way.
> I’m curious if AMD is basing their claim of this being the most advanced gaming GPU solely on the basis of it being a chiplet design.

Question fixed; added an important detail of their slogan.