Optimizations of PC Titles *spawn*

You’re assuming that "faster at gaming" and "faster at ray tracing" are going to be two different things this generation from a typical gamer’s perspective. Nvidia’s marketing team has a pretty easy job if big titles continue to embrace RT.

Practically speaking, Nvidia has no reason to change course. AMD has a very compelling product, but it’s not the first time that’s happened and we know how that usually goes in terms of market penetration. All the other stuff that helps Nvidia stay ahead - brand recognition, software, feature set, developer relations - is still there. Of course, if Nvidia’s advantage in RT solidifies they will beat us over the head with it, but that’s expected.

Indeed, NV doesn't have to do much. Their Ampere GPUs are, well, faster at normal rasterization, especially at the most important resolution, 4K. Their ray tracing hardware is about a generation ahead; it's much faster, as reported by DF, who have sources for this they didn't want to name.
DLSS is another thing that gives Ampere extra performance, and it's almost a necessity given how big an impact RT still has and the need for higher frame rates.

Ampere is simply better suited to today's and tomorrow's gaming than RDNA2 as it stands, especially considering where we are going (RT, heavy compute, upscaling). That doesn't mean AMD has a bad product, far from it. I think the 6800 XT (20+ TF) is a really good GPU, and it comes with their very first ray tracing hardware. But NV still has the edge.

Aside from that, ray tracing is here and it's here to stay: consoles have it, AMD has it, NV has it, and probably Intel too. Just about every game launching now has it.
 
4K is the most important resolution to whom, in the PC space?


You’re assuming that "faster at gaming" and "faster at ray tracing" are going to be two different things this generation from a typical gamer’s perspective.
There's nothing wrong with "faster at ray tracing" defining game performance in the medium to long term, IMO.

But as I mentioned before, defining game performance in the short term by how fast the GPU is at accelerating incoherent rays would mean devs start putting in unnecessary mirrors and mirrored surfaces that needlessly bounce entire scenes three times before reaching the player's POV.
It would be there just to hurt AMD's performance more than it does Nvidia's, and it would be a disservice to advancing video games towards prettier graphics.


We already had two entire generations of graphics cards whose performance in many games depended mostly on their ability to render triangles smaller than pixels (i.e. complete trash with regard to optimization). We would all do better with fewer of those schemes.
 
There's nothing wrong with "faster at ray tracing" defining game performance in the medium to long term, IMO.

But as I mentioned before, defining game performance in the short term by how fast the GPU is at accelerating incoherent rays would mean devs start putting in unnecessary mirrors and mirrored surfaces that needlessly bounce entire scenes three times before reaching the player's POV.
It would be there just to hurt AMD's performance more than it does Nvidia's, and it would be a disservice to advancing video games towards prettier graphics.

We already had two entire generations of graphics cards whose performance in many games depended mostly on their ability to render triangles smaller than pixels (i.e. complete trash with regard to optimization). We would all do better with fewer of those schemes.

This is pure conjecture and conspiracy theorizing, correct? Or is there actual evidence of developers using unnecessarily small triangles and other nefarious schemes to undermine AMD? As I said earlier, there is zero benefit to Nvidia in gimping RT performance on Turing or Ampere. Every ray is precious.
 
Huh? What makes you think it's a best-case scenario for AMD?
The feature test uses exactly the things ToTTenTranz proposed:
Only primary rays and a more coherent ray traversal. The scene is static and the acceleration structure is only built once, with no refitting or rebuild. That should be very helpful for the Infinity Cache.
And yet Ampere delivers twice the performance.
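
For anyone wondering why a primary-ray-only workload is considered the friendly case: below is a minimal sketch of my own (not the feature test's actual code) contrasting pinhole-camera ray generation, where neighboring pixels get nearly parallel directions and therefore tend to walk the same acceleration-structure nodes, with randomized bounce rays that scatter in unrelated directions.

```python
import math
import random

def primary_ray(px, py, width, height, fov_deg=60.0):
    """Camera ray through pixel (px, py); adjacent pixels yield nearly parallel rays."""
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) * 0.5)
    x = (2.0 * (px + 0.5) / width - 1.0) * aspect * scale
    y = (1.0 - 2.0 * (py + 0.5) / height) * scale
    d = (x, y, -1.0)
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)

def random_bounce_ray():
    """Incoherent secondary ray: a random direction on the unit sphere."""
    while True:
        d = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        length = math.sqrt(sum(c * c for c in d))
        if 0.0 < length <= 1.0:
            return tuple(c / length for c in d)

# Two adjacent pixels produce almost identical directions (coherent traversal)...
print(primary_ray(400, 300, 1920, 1080))
print(primary_ray(401, 300, 1920, 1080))
# ...while two bounce rays point in unrelated directions (incoherent traversal).
print(random_bounce_ray())
print(random_bounce_ray())
```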

Ampere has 50% more compute performance, too. It doesn't matter how you want to optimize for Navi; Ampere has so much brute-force performance that the gap will always widen between the two.
 
Indeed, NV doesn't have to do much. Their Ampere GPUs are, well, faster at normal rasterization, especially at the most important resolution, 4K. Their ray tracing hardware is about a generation ahead; it's much faster, as reported by DF, who have sources for this they didn't want to name.
DLSS is another thing that gives Ampere extra performance, and it's almost a necessity given how big an impact RT still has and the need for higher frame rates.

Ampere is simply better suited to today's and tomorrow's gaming than RDNA2 as it stands, especially considering where we are going (RT, heavy compute, upscaling). That doesn't mean AMD has a bad product, far from it. I think the 6800 XT (20+ TF) is a really good GPU, and it comes with their very first ray tracing hardware. But NV still has the edge.

Aside from that, ray tracing is here and it's here to stay: consoles have it, AMD has it, NV has it, and probably Intel too. Just about every game launching now has it.


That is not what game developers are saying^

I think you are stuck in the past and truly don't realize that AMD's RDNA is technically superior to Ampere in gaming. Game developers have two new consoles and RDNA to play with. Nvidia can only pay/bribe/market so many "proprietary" games... but I doubt many game studios will be interested. So it's 100% up to Nvidia. And 27 months of DLSS have told us that Jensen isn't backing up his lip service just yet. Console sales will be in the tens of millions.

Additionally, AMD has acknowledged they were being conservative on their 6800-series clocks, because they know enthusiasts will unlock the cards' true potential. We also know that with an AMD CPU and GPU plus a new PCIe 4.0 NVMe drive... the 6800 XT will certainly hold more stable frames than Ampere...!

We also need a truly vendor-agnostic ray tracing game before you can claim anything about Nvidia dominance in ray tracing.
 
Game developers have two new consoles and RDNA to play with. Nvidia can only pay/bribe/market so many "proprietary" games... but I doubt many game studios will be interested.
I don't think Nvidia is too concerned about console politics, and they have good working relationships with independent studios, some of which are part of GeForce Now. Intel is also getting started by collaborating with IO Interactive today to incorporate VRS and ray tracing in Hitman 3, so it looks like it's no longer just about Nvidia and AMD.
 
This is pure conjecture and conspiracy theorizing, correct? Or is there actual evidence of developers using unnecessarily small triangles and other nefarious schemes to undermine AMD?
The Internet is forever:

https://www.pcr-online.biz/2015/07/...y-damages-the-performance-on-nvidia-hardware/

Richard Huddy said:
Number one: Nvidia Gameworks typically damages the performance on Nvidia hardware as well, which is a bit tragic really. It certainly feels like it’s about reducing the performance, even on high-end graphics cards, so that people have to buy something new.

Richard Huddy said:
If you look at the way the performance metrics come out, it’s damaging to both Nvidia’s consumers and ours, though I guess they choose it because it’s most damaging to ours. That’s my guess.

He goes into more detail in some of the video interviews he gave on the subject.

Richard Huddy was in developer relations at Nvidia, then at ATI -> AMD, then at Intel, and here he was back at AMD.
He knows damn well what he's talking about.



As I said earlier, there is zero benefit to Nvidia in gimping RT performance on Turing or Ampere. Every ray is precious.
There is benefit in hurting Nvidia's own GPUs by 20% if it hurts AMD's by 60%, for example.



The Witcher 3's HairWorks ran like a dog on Nvidia's Kepler and Maxwell hardware, but it ran like a snail on AMD's GCN.
People eventually found out that forcing the tessellation factor to 32x or 16x, down from the game's predefined 64x, resulted in a large boost in performance with no discernible difference in image quality.
The problem here is that the large performance deltas the game showed between e.g. a GTX 980 and a 290X were now much smaller.
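
To put rough numbers on why dropping the factor helped so much, here's a back-of-the-envelope sketch. The assumptions are mine and purely illustrative (isoline-style patches where the factor roughly sets segments per strand, each segment expanded to a two-triangle ribbon, and a made-up strand count); none of these figures come from HairWorks itself.

```python
# Toy model, not HairWorks' actual pipeline: assume the tessellation factor
# roughly equals the number of segments per hair strand, and each segment is
# later expanded into a two-triangle camera-facing ribbon.

def hair_triangles(tess_factor, strands, tris_per_segment=2):
    """Rough per-frame triangle count for `strands` hairs at a given factor."""
    return strands * tess_factor * tris_per_segment

STRANDS = 20_000  # made-up strand count for one character's hair/fur

for factor in (64, 32, 16):
    print(f"tess factor {factor}x -> ~{hair_triangles(factor, STRANDS):,} triangles")

# tess factor 64x -> ~2,560,000 triangles
# tess factor 32x -> ~1,280,000 triangles
# tess factor 16x -> ~640,000 triangles
```

Even in this crude model, 16x is a quarter of the geometry work of 64x, which is why the driver-level override barely changed the image but noticeably shrank the frame rate gap.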
 
When Lisa Su put Raja in charge of the new graphics division under the RTG brand, I believe he decided to limit the PR part of Huddy's work, eventually replacing him. Now you find that Raja is the one doing the interviews and the podcasts.

Huddy came under fire from some of the press for pushing "too hard" against GameWorks. Jason Evangelho, a Forbes and PCWorld editor at the time, called him out publicly in an article.

The irony of fate I guess is that Jason now works for AMD.

With all of that said, I really liked Huddy. He was genuine, supremely passionate and direct.
what happened to Richard Huddy? : Amd (reddit.com)
 
Only primary rays and a more coherent ray traversal. The scene is static and the acceleration structure is only built once, with no refitting or rebuild. That should be very helpful for the Infinity Cache.

Should be best case on all current hardware.
 
This is pure conjecture and conspiracy theorizing, correct? Or is there actual evidence of developers using unnecessarily small triangles and other nefarious schemes to undermine AMD? As I said earlier, there is zero benefit to Nvidia in gimping RT performance on Turing or Ampere. Every ray is precious.
It’s been a while, but I think I remember AMD claiming Nvidia used sub-pixel triangles in HairWorks, and that isoline tessellation is an all-around poor approach, just one that's less slow on Nvidia GPUs.
 
The Witcher 3's HairWorks ran like a dog on Nvidia's Kepler and Maxwell hardware, but it ran like a snail on AMD's GCN. People eventually found out that forcing the tessellation factor to 32x or 16x, down from the game's predefined 64x, resulted in a large boost in performance with no discernible difference in image quality. The problem here is that the large performance deltas the game showed between e.g. a GTX 980 and a 290X were now much smaller.

That HairWorks stuff is pretty damning. It’s Nvidia’s code and it puts the hurt on their own hardware for no reason. So I’ll give you that there’s precedent.
 
That HairWorks stuff is pretty damning.
I wonder how it was any different from TressFX in this regard.
Wasn't TressFX exploiting the lack of shared-memory atomics on Kepler?
Wasn't Forward+ doing the same?
NVIDIA fixed this within one generation and struck back with HairWorks.

AMD claiming Nvidia used sub-pixel triangles
What's wrong with sub-pixel triangles?
They're the new gold now; look at all the praise for tessellation in Demon's Souls and sub-pixel triangles in UE5.
What I can conclude from this is that, if not for AMD, geometry-rich games could have happened 4 years earlier.
 
What's wrong with sub-pixel triangles?
They're the new gold now; look at all the praise for tessellation in Demon's Souls and sub-pixel triangles in UE5.
What I can conclude from this is that, if not for AMD, geometry-rich games could have happened 4 years earlier.

Tessellation was probably a no-go on consoles, but model quality could have been higher in games; according to an ex technical art director of Naughty Dog, streaming was the limitation.

 
I wonder how it was any different from TressFX in this regard.
Wasn't TressFX exploiting the lack of shared-memory atomics on Kepler?
TressFX is open source. Nvidia could, and can, at any time, implement a patch that supports TressFX and optimizes it for their architecture.
HairWorks is a black box save for NDA signers.


What's wrong with sub-pixel triangles?
They're the new gold now; look at all the praise for tessellation in Demon's Souls and sub-pixel triangles in UE5.
What I can conclude from this is that, if not for AMD, geometry-rich games could have happened 4 years earlier.

Neither of those games is rendering sub-pixel triangles.
In fact, it's a bit ironic that you mention UE5, whose greatest achievement is implementing triangle culling, LOD control and asset streaming techniques to guarantee that the system never renders sub-pixel triangles.

Without sub-pixel triangles (GameWorks), we would have had faster and/or better-looking games on the PC, and less of a monopoly in the discrete GPU space that ultimately drove PC graphics card prices up for everyone.
 
TressFX is open source. Nvidia could, and can, at any time, implement a patch that supports TressFX and optimizes it for their architecture.
HairWorks is a black box save for NDA signers.

Neither of those games is rendering sub-pixel triangles.
In fact, it's a bit ironic that you mention UE5, whose greatest achievement is implementing triangle culling, LOD control and asset streaming techniques to guarantee that the system never renders sub-pixel triangles.

Without sub-pixel triangles (GameWorks), we would have had faster and/or better-looking games on the PC, and less of a monopoly in the discrete GPU space that ultimately drove PC graphics card prices up for everyone.

In UE5 the smallest triangle size is the same as a pixel, and in Demon's Souls the devs talk about pixel-sized triangles as the smallest size.
 
In UE5 the smallest triangle size is the same as a pixel, and in Demon's Souls the devs talk about pixel-sized triangles as the smallest size.

Exactly. UE5's goal is to have 1 pixel = 1 triangle. Sub-pixel triangles mean more than 1 triangle per pixel.

Having two triangles within the space of one pixel means you're already processing twice the geometry you should for that particular pixel. One pixel = one color, so it's useless to have more than one triangle within the space of a pixel.
Now imagine hundreds or thousands of pixels for which the GPU is processing two, three or more triangles, and you're processing geometry to a point that just hurts performance with absolutely no benefit to image quality.
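
As a toy illustration of the "one pixel, one color" point (my own numbers, not anything measured from UE5, Demon's Souls or GameWorks): once the average rasterized triangle shrinks below a pixel, the ratio of triangles processed per pixel climbs above 1, which is pure extra geometry work for the same final color.

```python
# Crude ratio only: how many triangles compete for each pixel's single color
# as a function of average on-screen triangle size. Real GPUs add further
# costs (2x2 shading quads, culling, overdraw) that this ignores.

def triangles_per_pixel(avg_triangle_area_px):
    """Approximate triangles processed per covered pixel."""
    return 1.0 / avg_triangle_area_px

for area in (4.0, 1.0, 0.5, 0.25):  # average triangle area in pixels
    print(f"avg triangle area {area} px -> ~{triangles_per_pixel(area):.2f} triangles per pixel")
```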

Here, at the 7-minute mark, Robert Hallock explains what's happening in The Witcher 3 in more detail:

 
So what’s the correct term for this new over-tessellation?

Is it over-raytracing, or ray over-tracing, or what?

I just want to make sure I’m hip with the latest talking points. Thanks.
 