This kind of silly hand-waving platform fanboy stuff is when this forum is at its worst.
This is exactly the kind of detail-oriented discussion Beyond3D should be about. If you think it's irrelevant, maybe just move on.
What the hell does that have to do with your post, or this thread at all? What does the PS5 have to do with this? If you have a problem with HUB where you think their bias is so prevalent that it has actually caused them to manufacture these benchmarks, then say as much. Otherwise, create another thread with this ridiculous ResetEra content.
Like the idea some have that RT and reconstruction tech should be omitted from benchmarks and discussions because the PS5 doesn't perform all that well at those tasks?
You might want to learn how to better understand the deeper complexities of a bar graph.
So with a slow CPU a GeForce card is at worst as slow as a Radeon card. Then, when you add more CPU power so the CPU isn't a bottleneck, you get better performance with an Nvidia card. What exactly should be a surprise here?
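To make that reasoning concrete, here's a minimal sketch, assuming the usual simplification that frame rate is capped by the slower of the CPU-side and GPU-side limits. All numbers are hypothetical, not benchmark results:

```cpp
#include <algorithm>
#include <cstdio>

// Simplified model, not measured data: a frame can't complete faster than
// the slower of the CPU-side limit (game + driver work) and the GPU-side
// limit (raw rendering throughput).
static double effective_fps(double cpu_side_fps, double gpu_side_fps) {
    return std::min(cpu_side_fps, gpu_side_fps);
}

int main() {
    // Made-up illustrative numbers, NOT benchmark results.
    const double gpu_limit_geforce = 130.0;  // faster GPU
    const double gpu_limit_radeon  = 110.0;
    const double cpu_limit_slow    = 80.0;   // slow CPU caps both cards
    const double cpu_limit_fast    = 220.0;  // fast CPU gets out of the way

    std::printf("slow CPU: GeForce %.0f fps, Radeon %.0f fps\n",
                effective_fps(cpu_limit_slow, gpu_limit_geforce),
                effective_fps(cpu_limit_slow, gpu_limit_radeon));
    std::printf("fast CPU: GeForce %.0f fps, Radeon %.0f fps\n",
                effective_fps(cpu_limit_fast, gpu_limit_geforce),
                effective_fps(cpu_limit_fast, gpu_limit_radeon));
    return 0;
}
```

HUB's contested result is the case where driver overhead pushes the GeForce CPU-side limit *below* the Radeon one rather than merely tying it; the same min() model captures that too, just with a lower `cpu_limit_slow` for the GeForce.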
So apparently this shitpost is fine according to the moderation team here. I'll quote it again then:
It's not like any amount of tests would change your mind.
Could you... provide a little more detail on what you believe this is illustrating? It's being run on a 10700K at 5 GHz, I'm not sure how it relates.
To add some food for thought - from GN's recent 1060 retrospective:
Those are interesting findings. I wonder if the same holds true for DX11 games on AMD GPUs? Did RDNA fix their high overhead on that API as well?
I would also like to see HUB do an extensive review of the state of RT performance, considering the number of games releasing with RT these days. They have clearly shown they are capable of running more than 1000 benchmarks on something they deemed important enough, but I guess that's too much to hope for considering their current "Agenda".
Why would you go from a 3090 to a 6800XT?
That's the thing: AMD didn't have higher overhead on DX11, they just couldn't multithread it the way Nvidia did, so everything ended up loading a single thread, which bottlenecked the whole frame. Now on DX12, where the application can load more threads independently of the GPU driver, the higher overhead of Nvidia's driver becomes a bottleneck.
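For anyone who hasn't written against both APIs, here's a minimal headless D3D12 sketch of the pattern described above (error handling omitted, worker count arbitrary): several threads record their own command lists in parallel, and submission is a single cheap call at the end. DX11's immediate context has no true equivalent, which is why per-draw driver work there tends to pile onto one thread unless the driver itself threads it behind the scenes.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // One allocator + command list per worker thread.
    const int kWorkers = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(kWorkers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kWorkers);
    for (int i = 0; i < kWorkers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Each worker records its slice of the frame on its own thread --
    // this is the part DX11's immediate context can't spread across cores.
    std::vector<std::thread> workers;
    for (int i = 0; i < kWorkers; ++i) {
        workers.emplace_back([&lists, i] {
            // ... SetPipelineState / draw calls would go here ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // Single submission of everything that was recorded in parallel.
    ID3D12CommandList* raw[kWorkers];
    for (int i = 0; i < kWorkers; ++i) raw[i] = lists[i].Get();
    queue->ExecuteCommandLists(kWorkers, raw);

    // Wait for the GPU so the objects can be released safely.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    queue->Signal(fence.Get(), 1);
    HANDLE evt = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, evt);
    WaitForSingleObject(evt, INFINITE);
    CloseHandle(evt);
    return 0;
}
```

The flip side, per the post above: once the application spreads recording across cores like this, any fixed per-draw cost the driver adds becomes visible as its own bottleneck instead of hiding inside an idle core.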
I doubt that will change now. I moved to a 6800XT from a 3090 last week and it seems the same as before: DX11 games hitch more and perform slightly worse.
I just assumed that if he bought a 3090 to begin with, he has money to burn.
Size, and money (if he sold the 3090?)?
What is there to provide? The results show a CPU-limited case from a DX12 game which is known to be very CPU limited. They also don't show anything like what HUB is showing.
Could you... provide a little more detail on what you believe this is illustrating? It's being run on a 10700K at 5 GHz, I'm not sure how it relates.
I've said that more tests are needed. This is another such test which shows that the issue isn't really universal and may not even be CPU related.
These results are obviously CPU limited, since everything between the 5700XT and the 6800 shows the same fps. The 6800XT being a bit faster than the rest of the bunch is an interesting artefact, though.
That benchmark is run at custom high/ultra settings, so it could be GPU limited at times.
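A quick sanity check for that reading, as a sketch with made-up numbers: if frame rates barely spread across GPU tiers in the same scene, the cap is almost certainly not the GPU. The function name, the 3% tolerance, and the fps values below are all hypothetical choices, not anything from the chart:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical diagnostic: if fps barely varies across GPU tiers in the same
// scene, the bottleneck is almost certainly something other than the GPU.
bool looks_cpu_bound(const std::vector<double>& fps_per_gpu,
                     double tolerance = 0.03) {
    auto [lo, hi] = std::minmax_element(fps_per_gpu.begin(), fps_per_gpu.end());
    // Spread within ~3% of the top result => treat the run as CPU capped.
    return (*hi - *lo) / *hi <= tolerance;
}

int main() {
    // Made-up numbers, NOT real results. First run: cards clustered.
    std::vector<double> clustered = {142.0, 143.0, 142.0, 141.0};
    // Second run: fps scales with GPU tier.
    std::vector<double> scaling = {96.0, 118.0, 131.0, 150.0};

    std::printf("clustered: %s\n",
                looks_cpu_bound(clustered) ? "likely CPU-bound" : "likely GPU-bound");
    std::printf("scaling:   %s\n",
                looks_cpu_bound(scaling) ? "likely CPU-bound" : "likely GPU-bound");
    return 0;
}
```

The custom high/ultra caveat above is exactly why a spread test like this can mislead on a whole-run average: a benchmark can be CPU capped in one section and GPU limited in another, so per-section frame-time data is more trustworthy than a single fps number.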