GPU Ray Tracing Performance Comparisons [2021-2022]

But Microsoft and Sony, whose consoles move the majority of AAA game sales at launch, decided to stick to an architecture that repurposes the TMUs for RT acceleration and don't use dedicated tensor cores.
A) MS and Sony "stick" to what is generally cheaper.
B) RDNA2 does not "repurpose TMUs for RT acceleration".
C) Tensor cores have nothing to do with RT. And if you meant RT cores, then RDNA2 has them, just cut down to hardware intersection testing only, with BVH traversal left to the shaders, versus Turing's hardware traversal and evaluation. Which is coincidentally the main reason for A), and for RDNA2's RT h/w being considerably slower in practice than that of Turing and Ampere (see the sketch below).
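To make B) and C) concrete, here's a minimal, purely illustrative C++ sketch of a stack-based BVH traversal loop (made-up structs and stubbed intersection functions, not any vendor's actual hardware or driver code). The ray/box and ray/triangle intersection tests are the part RDNA2's Ray Accelerators execute in fixed-function hardware, while the traversal loop itself runs as ordinary shader code; Turing/Ampere RT cores keep both the loop and the tests in dedicated hardware, which lines up with the efficiency gap described here.

```cpp
// Illustrative sketch only: made-up types, stubbed intersection tests.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Ray  { float origin[3], dir[3]; };
struct Node { int32_t left, right, firstTri, triCount; bool isLeaf; };
struct Hit  { float t; int32_t triIndex; };

// Ray/box and ray/triangle tests: the part RDNA2's Ray Accelerators run in
// fixed-function hardware (Turing/Ampere accelerate these too).
bool intersectBox(const Ray&, const Node&)               { return true; }
bool intersectTri(const Ray&, int32_t /*tri*/, float& t) { t = 1.0f; return true; }

// The traversal loop itself: on RDNA2 this stays in shader code, while
// Turing/Ampere RT cores also handle the loop in dedicated hardware.
Hit traverse(const Ray& ray, const std::vector<Node>& nodes) {
    Hit hit{1e30f, -1};
    std::vector<int32_t> stack{0};                 // start at the root node
    while (!stack.empty()) {
        const Node& node = nodes[stack.back()];
        stack.pop_back();
        if (!intersectBox(ray, node)) continue;    // hardware box test
        if (node.isLeaf) {
            for (int32_t i = 0; i < node.triCount; ++i) {
                float t;
                if (intersectTri(ray, node.firstTri + i, t) && t < hit.t) {
                    hit.t = t;                     // hardware triangle test
                    hit.triIndex = node.firstTri + i;
                }
            }
        } else {                                   // push children, keep looping
            stack.push_back(node.left);
            stack.push_back(node.right);
        }
    }
    return hit;
}

int main() {
    std::vector<Node> nodes{{-1, -1, 0, 1, true}}; // a single-leaf "BVH"
    Hit hit = traverse(Ray{{0, 0, 0}, {0, 0, 1}}, nodes);
    std::printf("closest hit: t=%f tri=%d\n", hit.t, hit.triIndex);
}
```

The exact hardware partitioning differs per vendor and generation; the sketch is only meant to show which stage of a ray query each design accelerates.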

With how the power situation is evolving in general, I'll be rather surprised if AMD doesn't adopt an approach similar to Nvidia's around RDNA3 or 4. Ampere is beating RDNA2 in RT power efficiency by a large multiple at the moment.
 
All companies? Intel and Nvidia are, for their dGPUs.
But Microsoft and Sony, whose consoles move the majority of AAA game sales at launch, decided to stick to an architecture that repurposes the TMUs for RT acceleration and don't use dedicated tensor cores.
The first RT implementation in a smartphone is apparently coming in the form of Exynos 2200 which uses RDNA2 just like the consoles. The first RT implementation in a handheld console is arguably the Steam Deck, with another RDNA2 GPU.
Is Apple expected to use exclusively dedicated RT units in their future iGPUs? Is there a chance Nintendo will actually pay for the footprint of a SoC that has dedicated RT units, for their next handheld?
Yeah, I think it's safe to say all major companies. All companies have tensor cores, right down to Google. AMD has tensor cores too; just because they aren't in mainstream consoles doesn't mean they don't exist. And MS and Sony still went forward with ML-based features on their compute units, which is a compromise compared to dedicated silicon.

To be fair, MS stripped away some of its ROPs and Sony did away with some FPU capability on the CPU as well; silicon was clearly at a premium for both companies.

RT on mobile is driven by PowerVR, and as you say, we will see RDNA2 on mobile.

The reason we don't see it prominently in cheaper devices is that it isn't yet cheap enough to come down to that tier. But given a long enough period, the technologies that drive the high end are eventually brought to the low end.

RT methods have to compete with 20+ years of T&L tooling and skill sets. It's going to be a while before it becomes mainstream, simply because games still need to be made and deadlines met. You can't pay a studio to just sit there learning and solving problems; they are ultimately required to deliver a product too.

ML techniques as they exist today are significantly sped up by tensor cores; once ML techniques are fully integrated into software stacks, tensor accelerators will likely be the solution here.

We see AI accelerator chips in a variety of phones and mobile devices today because they have a greater need for them. In games, ML is perhaps still largely in its infancy, but over time I expect to see ML arrive there too, and not just in the form of super resolution.
 
- Raytracing being used at a level that is universally perceivable as being substantially better than a rasterization trick (and without sacrificing everything else, like you see in e.g. LEGO or RTX Minecraft)
You are inserting these adjectives to artificially create an insurmountable wall. Why does it have to be universal? Why not a majority? What is the definition of substantial? Or will you define it however you need to, so that it never passes your test?
- Getting good enough performance with raytracing
Weasel adjective. As others have observed, you can get 4K/60fps with a 3080 in Control and other titles. You can get 1440p/60fps in Spider-Man on a freaking $500 console.
- Using affordable hardware
Another weasel adjective. What is affordable? Are you complaining about pandemic/crypto prices? Yes, they suck. But that's a universally sucky story and has nothing to do with the narrative you're trying to weave.
 
Ugh.
Q2RTX, fully path traced in real time, isn't compelling?
Metro Exodus's real-time RTGI wasn't compelling for him but suddenly became so in EE? Why, I wonder?
Control's top-notch RT implementation wasn't compelling either?
I mean, come the fuck on. The guy is moving his own goalposts now because they were blatantly wrong in the first place; it's clear as day.

It's also funny how FC6's borked RT implementation is being dismissed by them, since apparently "AMD's RT h/w simply isn't up to par, so what can you do, move on, nothing to see here".
And this is after they've spent several years describing to us in great detail how Metro Exodus's RTGI "isn't very compelling".
Jeez.
Quake 2 isn't very compelling outside of being cool from a tech standpoint. Metro EE is a huge improvement over the initial RT implementation and makes it the only truly compelling usage of RT so far IMO. Control is much more on the side of image refinement than anything else.
 
He does not have to be an expert in anything
Of course he has to be when his words are referenced as ground truth.
Has he already proved these theses with real facts? Is he knowledgeable enough to even conduct proper testing of RT quality? Did he test proper scenes at all? Does he know what a ground truth image should look like? Did he compare against ground truth? If not, did he explain the theory and math behind his conclusions?
The answer to these questions would be no, so using his words as arguments is simply disrespectful to the people here.

and people watching it are free to agree with it or not
Of course people are free to do whatever they want, but imagine an utterly silly situation where someone criticizes a whole scientific field with arguments taken from entertainment-style YouTube content. Would you expect the scientific community to give up and accept the criticism?

The images for comparisons are there, the performance figures are there, that is all you need as a gamer to get an estimation of what you prefer to use.
These images must be captured in appropriate scenes, compared to a reference, or explained theoretically; that's how the education process works.

got curious though, since I am not a follower of Pewdiepie
It seems you didn't get the joke.

I have never even claimed that a reviewer needs to be an expert
To be quoted and referenced he has to be an expert, and his defence had better be good.

Honestly, not going to waste even more time explaining obvious stuff.
 
I'm pretty sure that Steve has stated that he doesn't care about RT many, many times, on every possible occasion.
The fact that Metro EE can't run without RT h/w now means little - Q2RTX and Minecraft RTX have both required RT h/w since 2019.
Come off it. You're far too intelligent to actually believe this.
 
What is affordable?
The 2060S was very affordable for about two years, and it can do RT at 60 fps just fine in most RT titles.
He is just peddling his agenda which has nothing to do with reality.

Quake 2 isn't very compelling outside of being cool from a tech standpoint.
I don't get the distinction between being "cool" and "compelling".

Metro EE is a huge improvement over the initial RT implementation and makes it the only truly compelling usage of RT so far IMO.
Just no. EE's improvements to RTGI over the original RTGI implementation are substantially smaller than the difference between non-RT and RTGI graphics in the original release.

Control is much more on the side of image refinement than anything else.
Again, I don't think I understand the point. RT is "image refinement" even when doing full-on path tracing.

There have been compelling RT implementations for years now, and HUB starting to use RT in their regular benchmarks is nothing but an admission of that on their part.
Nothing has changed in the RT landscape with the release of MEEE and F1 2021.

Come off it. You're far too intelligent to actually believe this.
Believe what? I've heard him say that many times in different videos on their channel. And it was rather funny when he suddenly started saying the opposite a couple of months ago.
 
It's reasonable because it's the hard truth. ¯\_(ツ)_/¯
And despite that, he still recommends an RTX 3080 over an RX 6800XT if you can find them at similar prices.
They want to keep their 5700XT fanbase happy. That's why they downplay RT at every opportunity they get and also ignore the future possibilities of DX12 Ultimate. They heavily recommended the 5700XT over Turing in 2019, so if they admitted that the 5700XT was already outdated at its release, especially for true next-gen games, they would lose face in front of their audience.

Look at the 5700XT hype comments in their Far Cry 6 video and you have solid proof of my theory. It's the only explanation. They know the 5700XT is very popular with their fanbase. This has nothing to do with Ampere or RDNA2.

But it's probably wrong to generalize about the whole HW Unboxed team. Tim is pretty objective; I feel like this is coming mostly from Steve.
 
Quake 2 isn't very compelling outside of being cool from a tech standpoint. Metro EE is a huge improvement over the initial RT implementation and makes it the only truly compelling usage of RT so far IMO. Control is much more on the side of image refinement than anything else.

You have never played any games with raytracing, huh? "Deliver Us The Moon" is 18 months old and has one of the best raytracing implementations. Raytracing transforms the visuals and solves one of the biggest problems for such games: missing reflections.
 
You have never played any games with raytracing, huh? "Deliver Us The Moon" is 18 months old and has one of the best raytracing implementations. Raytracing transforms the visuals and solves one of the biggest problems for such games: missing reflections.

Cyberpunk 2077 with all five ray tracing modes, at rainy night time in Night City, is probably the most impressive thing I have seen.
 
Control is much more on the side of image refinement than anything else.

Shadows, fog, particles, volumetric lighting, shaders, tessellation, ambient occlusion, reflections, DOF, AA are all refinements. They all add to the final result.

The irony is that one of the main reasons people don't fully appreciate RT yet is that games aren't designed with RT as a baseline. If you look at a scene with one or two shadow casting lights it's going to be hard to see where RT helps because advanced shadow mapping (and caching) techniques look really good. But games don't include scenes with many dynamic shadow casting lights because shadow mapping would be laughably slow. So when people say RT shadows are blah it's understandable because they have no frame of reference for what real-time shadows "should" look like in more complex scenes.
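As a rough sketch of that point, here's the structural difference in C++ pseudocode (stubbed functions and made-up names, not any engine's actual code): every dynamic shadow-casting light costs a full scene depth pass and its own shadow map, whereas ray-traced shadows reuse one BVH per frame and each extra light only adds shadow rays per shaded pixel for the RT hardware to traverse.

```cpp
// Purely illustrative: stubbed functions and made-up names to show how per-frame
// work differs between shadow mapping and ray-traced shadows as light counts grow.
#include <vector>

struct Light {};
struct Scene {};

// Full geometry pass from the light's point of view, plus a shadow map texture
// per light; this is the cost games avoid by keeping dynamic casters rare.
void renderDepthFromLight(const Scene&, const Light&) {}

// One visibility query against a BVH that is built/refit once per frame.
bool traceShadowRay(const Scene&, const Light&, int /*pixel*/) { return true; }

void shadowMappedFrame(const Scene& scene, const std::vector<Light>& lights) {
    for (const Light& light : lights)          // N lights => N full depth passes
        renderDepthFromLight(scene, light);
    // ...the main pass then samples and filters those maps per pixel.
}

void rayTracedShadowFrame(const Scene& scene, const std::vector<Light>& lights,
                          int pixelCount) {
    for (int pixel = 0; pixel < pixelCount; ++pixel)
        for (const Light& light : lights)      // extra lights cost rays, not passes
            traceShadowRay(scene, light, pixel);
}

int main() {
    Scene scene;
    std::vector<Light> lights(8);              // hypothetical: 8 dynamic shadow casters
    shadowMappedFrame(scene, lights);
    rayTracedShadowFrame(scene, lights, 1920 * 1080);
}
```

Real implementations obviously budget this with stochastic light sampling and denoising rather than brute-forcing every light per pixel; the sketch is only meant to show where the fixed per-light cost sits.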
 
Of course he has to be when his words are referenced as ground truth.
Has he already proved these theses with real facts? Is he knowledgeable enough to even conduct proper testing of RT quality? Did he test proper scenes at all? Does he know what a ground truth image should look like? Did he compare against ground truth? If not, did he explain the theory and math behind his conclusions?
The answer to these questions would be no, so using his words as arguments is simply disrespectful to the people here.


Of course people are free to do whatever they want, but imagine an utterly silly situation where someone criticizes a whole scientific field with arguments taken from entertainment-style YouTube content. Would you expect the scientific community to give up and accept the criticism?


These images must be captured in appropriate scenes, compared to a reference, or explained theoretically; that's how the education process works.


It seems you didn't get the joke.


To be quoted and referenced he has to be an expert, and his defence had better be good.

Honestly, not going to waste even more time explaining obvious stuff.

I have not seen anyone referencing his words as ground truth. What I do see, however, is that every time someone links to his videos, the pro-raytracing camp tries to invalidate them because they believe they are The Authority deciding what settings people are allowed to use, even though multiple tests and diversity are actually good for making decisions and ought to be encouraged. People are free to choose their settings in PC games, and if one or two reviewers decide not to use raytracing because the gains are too small for the performance cost, I do not see why it is so hard to simply accept their existence and move on, especially when there are multiple other sites that do test with raytracing as standard.

To be frank, Hardware Unboxed has probably been getting a lot more attention on here than they otherwise would because of you guys' obsession with them.

It is hilarious that you are actually prepared to throw away decades' worth of benchmarks if they cannot show proof that they are actual experts.

Their videos are not made for presentation at tech conferences or for any kind of expert audience.
Their videos are directed towards gamers wanting to see benchmarks and performance for various games and graphics cards. The average Joe looks at the comparisons and the performance, and no theory or math can save the product if they are not impressed by what they see.

But yeah, I do not want to waste more time on this specific ridiculous criterion either.
 
Shadows, fog, particles, volumetric lighting, shaders, tessellation, ambient occlusion, reflections, DOF, AA are all refinements. They all add to the final result.

The irony is that one of the main reasons people don't fully appreciate RT yet is that games aren't designed with RT as a baseline. If you look at a scene with one or two shadow casting lights it's going to be hard to see where RT helps because advanced shadow mapping (and caching) techniques look really good. But games don't include scenes with many dynamic shadow casting lights because shadow mapping would be laughably slow. So when people say RT shadows are blah it's understandable because they have no frame of reference for what real-time shadows "should" look like in more complex scenes.
Control with RT offers minor improvements while slashing performance in half. Metro EE looks entirely different while costing a smaller amount of performance. It is not offering up a similar image with some refinement here and there.
 
Shadows, fog, particles, volumetric lighting, shaders, tessellation, ambient occlusion, reflections, DOF, AA are all refinements. They all add to the final result.

The irony is that one of the main reasons people don't fully appreciate RT yet is that games aren't designed with RT as a baseline. If you look at a scene with one or two shadow casting lights it's going to be hard to see where RT helps because advanced shadow mapping (and caching) techniques look really good. But games don't include scenes with many dynamic shadow casting lights because shadow mapping would be laughably slow. So when people say RT shadows are blah it's understandable because they have no frame of reference for what real-time shadows "should" look like in more complex scenes.
20 years of training gamers on static lighting and static shadows.
People are just used to it because they've never seen the alternative. And I don't blame them; they've never seen what they've never seen. Minecraft RTX is about the only title to portray this correctly, and most people will not play it.

Every game leading up to now, including even some RT titles, has light sources in the most awkward of places.

Find a hidden passageway? No worries, we've conveniently left candles lit for you to rummage about.
Find a hidden tomb that's over 1000 years old? We're still burning fires here and there, just for you!

Whenever you have to make a compromise for baked lighting, you get the most awkward of setups. Game designs built around this kind of graphics all start to feel the same because, ultimately, the lighting forces the game to be played a very specific way. All scenes need to be lit, even the dark ones. Collapsing environments and buildings that allow light to fill a dark room? Never going to happen. Gamers have been trained to ignore these things and instead focus on texture detail, model and geometry density, and resolution as the indicators of higher quality. Lighting, shadows and physically based rendering were the major items of last generation, but dynamic lighting/GI and shadows are never something people bring up. And that will be the story for this coming generation.

When we finally let go of baked lighting, we're going to see some really different gameplay scenarios. People should be excited for what RT brings to the table, and it's going to be a lot more than just 'marginal improvements on existing ultra settings'.
 
That second shot is the sauce. I've experimented with RT on and off in Control quite a bit and I think that shot objectively captures the improvements that RT brings to this game. Subjectively though, I don't know if it communicates the sum-of-its-parts effect to someone who's never experienced it in motion. To me playing it with RT off feels like playing a very good-looking video game. Playing with RT on feels like guiding a toy character through a physical miniature world. The stable reflections on vinyl floors and wooden walls, the contact shadows that ground objects to surfaces, and Jessie's self-shadowing just makes everything feel *solid* and not a bunch of texture-mapped triangles. And all that at 4K/60fps.
 
It's understandable that some regular run-of-the-mill dude will often not give a flying fuck about graphical progress and enhancements, but it's not understandable when the tech elite and graphics enthusiasts do that exact same thing on a forum dedicated to graphics and tech.

But I guess that's what happens when you affiliate yourself with a certain vendor: you convince yourself that RT brings nothing to the table so that you can buy current/previous-gen AMD hardware and not feel bad about missing out.
 
the pro-raytracing camp constantly tries to invalidate his videos because they believe they are The Authority deciding what settings people are allowed to use
No, it's because he has ruined the reputation of a once venerable site: TechSpot.

People are free to choose their settings in PC games
People, not tech reviewers; reviewers represent products and what they are capable of.

There is a standard called DX12U, and with it come new graphics features such as ray tracing; not testing your new hardware with the new graphics standard is the most stupid and misleading thing I have seen in tech journalism in decades.

HUB had no problem using early DX12 tests in their reviews despite them not offering anything new over DX11, not even performance enhancements (on the contrary, DX12 often made performance worse than DX11), but now that DX12 has new graphics features they don't test them because apparently it doesn't matter! That's not tech journalism, that's hypocrisy.

And it bit them in their asses; now they have to jump through hoops to justify favoring the featureless RDNA1 GPUs over Turing, to save face in front of an audience that followed their words without much thought.
 