Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

I haven't seen any evidence of that whatsoever, on the contrary.

5700 XT at launch:
[Image: relative-performance_2560-1440.png]

Today:
[Image: relative-performance_2560-1440.png]
I would say it has more to do with AMD drivers being piss poor, so there's naturally more performance to eke out of the cards through driver updates.
I don't know if that's the only reason. GCN was very strong in compute, and games became more compute heavy. That may be a byproduct of the architecture being in both major consoles, or it could be just the trend we were moving towards anyway.


The add-in cards are too slow as well; the ones I have are basically useless in games.



It's very dead; there hasn't been a decent game released with PhysX for years now.

A shame, as it's by far my favorite tech to see in games, Cryostasis being one of the best examples.

Drivers have nothing to do with it; there hasn't been an AAA game released with PhysX for years now. It's dead.
So I've argued before that it's not an AAA game, and I stand behind that, but Cyberpunk 2077 uses PhysX. There aren't any settings to tweak in the menus, but the PhysX .dlls are in the bin folder. Honestly, it's probably one of the reasons the game runs so poorly on Xbox One and PS4, since there is no GPU acceleration there and running PhysX on a Jaguar has got to be painfully slow. I don't know if it's still true, but I remember there was criticism back in the day that PhysX on the CPU was single-threaded and didn't use SSE either.
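
For what it's worth, the PhysX SDK has had a multithreaded CPU path for a long time now. Purely as an illustration (this uses the public PhysX 4.x API; the setup and thread count are my own, not anything lifted from Cyberpunk's actual integration), a minimal CPU-only scene looks roughly like this:

// Minimal CPU-only PhysX 4.x setup (illustrative sketch, not from any shipping game).
// The point: PxDefaultCpuDispatcherCreate() spreads the simulation across worker
// threads, so the CPU path is no longer inherently single-threaded.
#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main()
{
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4);  // 4 CPU worker threads (arbitrary)
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    for (int frame = 0; frame < 600; ++frame)
    {
        scene->simulate(1.0f / 60.0f);   // kick off a fixed 60 Hz step on the workers
        scene->fetchResults(true);       // block until the step completes
    }

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}

How a given console port actually schedules this on a Jaguar core is another matter entirely, of course.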

Correct me if I'm wrong, but isn't PhysX the default physics engine used by UE up to the latest iterations of UE4?
Up to 4.25. But they are only on 4.27 now, so this was a fairly recent change.

-edit- Looks like they introduced the replacement (Chaos) in 4.23 but only ended support for PhysX in 4.25, so it's a bit more nebulous when developers would have stopped using it.
 
I’m not sure how that graph is supposed to disprove my claim?
Yeah, compare the same Nvidia model with the same AMD model years apart, not different models; too many variables otherwise.
That TechPowerUp is quite a good site; I'd never heard of it before.
This will surely age very poorly. Just like all of the other “we’ll never need more than…” predictions in the history of technology.
Notice how phone screens aren't really getting higher DPI; that's because there's no real benefit, as they're at the limit of human vision.
The first phone with a 4K screen came out six years ago; why haven't they taken over the marketplace?
Though I think there will be a need for 8K TV screens, especially if your screen takes up the whole wall. 8K phones, not so much (though I have no doubt some company will do it, just for the lols).
 
Though I think there will be a need for 8K screens, especially if your screen takes up the whole wall

Exactly. Also 4K computer monitors are certainly not at the limit of rendered high frequency detail given how close we sit to them.
 
I’m not sure how that graph is supposed to disprove my claim?
The gain happened a few months after the launch of the 5700XT and is due to driver enhancements (the 5700 series suffered huge driver problems that persisted for a long time after launch); it's not due to game-specific workloads that favor AMD because of console influence, as you postulated.
 
The gain happened a few months after the launch of the 5700XT and is due to driver enhancements (the 5700 series suffered huge driver problems that persisted for a long time after launch); it's not due to game-specific workloads that favor AMD because of console influence, as you postulated.

My memory could be failing me but my recollection is that the 5700xt and 2070 super were consistently trading blows soon after they both launched. This isn't a new thing.
 
With regards to SSAA, that is perhaps a poor example to get your point across. SSAA never made sense in the first place.

You'll also need to provide further clarification as to what you mean by the cost getting too high. If you're referring to computational cost, I'd argue against that line of thinking as the driving factor for the adoption of RT accelerators. In my view, the real driving factor is the lack of scalability in man hours.
Yeah, computational costs. If we look at technologies that developers have largely bypassed, there's a whole slew of features that never got anywhere close to the adoption rate that DLSS will reach: tessellation, geometry shaders, tiled resources, etc. We have RT accelerators because we have physical limits on chip power and chip sizes. These arguments have to be made against a clear ceiling: you can only shrink a chip so much and pack so much power into it. Eventually either your chips are enormous, or your clock speed is so high that your wattage/cm^2 rivals a nuclear power plant. In either case, we can't move forward.

A good example of a technology that should be deprecated but cannot be is the 3D pipeline, if you will. It's arguably overshadowed by compute shaders in almost every single way: compute is more efficient at dispatching kernels, its direct ALU-to-memory access and scheduling are both more efficient and faster, it doesn't run into the same issues the fixed-function pipeline does, and it can take much greater advantage of all the available ALUs on a chip.
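
To make the "overshadowed by compute" point concrete, the coverage and depth test that the fixed-function rasterizer does for you is simple enough to express yourself. Here's a toy sketch written as plain C++ for readability; a GPU version would be a compute shader doing the same per-pixel edge tests with an atomic depth compare, and none of this is taken from a real engine:

// Toy "rasterization in software" sketch: the same coverage and depth test a
// compute shader would run per pixel, written as plain C++. Illustrative only.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Signed area of the parallelogram (a->b, a->c); the sign tells which side c is on.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Rasterize one triangle into a depth + colour buffer of size w*h.
void rasterizeTriangle(const Vec2 v[3], const float z[3], uint32_t colour,
                       int w, int h, std::vector<float>& depth, std::vector<uint32_t>& target)
{
    // Bounding box of the triangle, clamped to the render target.
    int minX = std::max(0,     (int)std::floor(std::min({v[0].x, v[1].x, v[2].x})));
    int maxX = std::min(w - 1, (int)std::ceil (std::max({v[0].x, v[1].x, v[2].x})));
    int minY = std::max(0,     (int)std::floor(std::min({v[0].y, v[1].y, v[2].y})));
    int maxY = std::min(h - 1, (int)std::ceil (std::max({v[0].y, v[1].y, v[2].y})));

    float area = edge(v[0], v[1], v[2]);
    if (area <= 0.0f) return;                            // back-facing or degenerate

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            Vec2 p{ x + 0.5f, y + 0.5f };
            float w0 = edge(v[1], v[2], p);
            float w1 = edge(v[2], v[0], p);
            float w2 = edge(v[0], v[1], p);
            if (w0 < 0.0f || w1 < 0.0f || w2 < 0.0f) continue;   // pixel outside the triangle

            // Barycentric depth interpolation plus a depth test. A GPU compute
            // path would need an atomic min/compare here instead of this plain
            // read-modify-write.
            float zi = (w0 * z[0] + w1 * z[1] + w2 * z[2]) / area;
            size_t idx = (size_t)y * w + x;
            if (zi < depth[idx]) { depth[idx] = zi; target[idx] = colour; }
        }
}

The catch is exactly the one raised below: the dedicated hardware still handles the common case faster, which is why nobody actually throws it away.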

But even then, very few companies have moved to all-compute-based engines, because if they actually did, we could do away with all the silicon spent on ROPs, geometry units, schedulers, etc., and just add even more ALUs for processing. But we haven't, and we likely never will.

It's a case in point that technology sticks around because there are still edge cases where the 3D pipeline is desirable to have. The argument stretches beyond the idea that power alone is enough to deprecate technology; it usually isn't. To this end, MSAA sticks around even though TAA can produce better AA without shimmering (but with blurring during motion). And to that end, deep learning techniques, which have their own pros and cons, will likely stick around too. It all comes down to application, and DLSS is still largely in its infancy compared to the other two, which have been around for a very long time.

I actually believe that these upsampling techniques will go away with time. They might be replaced with more efficient techniques, or we might get to a point where there's sufficient power that they're not needed. I don't share the belief that power will be used to drive increased screen resolutions. Consumers' spending patterns on display devices suggest that we're reaching a point where resolution is good enough. I expect to see extremely poor adoption of 8K and a paradigm shift in display devices.
Eventually you're going to hit a wall, whether in the number of pixels that need to be rendered or in how dramatically the quality per pixel increases. If you push for more realistic visuals and push the boundaries of what can be rendered, then to make frame time you're going to have to render less. In those cases you need upsampling techniques if you want to maintain a higher resolution or a higher frame rate.

I don't really see a way around this, unless you see a way around the obvious ceiling we're hitting with silicon right now. There's simply no answer but to ask developers to do more with less; code for: do less work and approximate the rest.
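
To put rough numbers on that wall (my own arithmetic, nothing vendor-specific), the shaded-pixel count alone explains why rendering at a lower internal resolution and reconstructing is so attractive:

// Back-of-the-envelope pixel budgets: how much shading work each resolution
// represents relative to native 4K. Illustrative arithmetic only.
#include <cstdio>

int main()
{
    struct Res { const char* name; long w, h; };
    const Res res[] = { {"1080p", 1920, 1080}, {"1440p", 2560, 1440},
                        {"4K",    3840, 2160}, {"8K",    7680, 4320} };

    const double native4k = 3840.0 * 2160.0;
    for (const Res& r : res)
    {
        long pixels = r.w * r.h;
        std::printf("%-6s %10ld pixels/frame  (%.2fx native 4K)\n",
                    r.name, pixels, pixels / native4k);
    }
    // A 1440p internal resolution is ~0.44x the shaded pixels of native 4K,
    // which is roughly where a "quality" upscaling mode tends to sit.
    return 0;
}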
 
My memory could be failing me but my recollection is that the 5700xt and 2070 super were consistently trading blows soon after they both launched. This isn't a new thing.
The 5700XT was roughly equal to a 2070 at launch in many game averages. Now a 5700 is roughly equal to a 2070, while a 5700XT is up there with a 2070S. Check the two images I posted earlier. I'd say it's likely the 5700XT will gain a few more percentage points over the next 12-18 months.
 
I'd say it's likely the 5700XT will gain a few more percentage points over the next 12-18 months.
I disagree. The gains happened due to driver improvements in the first few months after launch; that's over now, and the 5700XT's position has remained stationary for the past 18 months. In the future, I expect a quick nosedive as games start utilizing DX12U features.
 
What problems do you propose can be solved by throwing more man hours at current tech? Art, animation? There are fundamental mathematical limitations to current graphics rendering techniques that no amount of developer time can fix.

From a game development perspective, the bottleneck is man hours, leading to skyrocketing development costs. The average consumer doesn't care what techniques are used: ray-traced shadows or shadow maps, ray-traced GI or baked lighting, SSR or ray-traced reflections; as long as it looks good to them, that's all that matters. Developers benefit from the adoption of these technologies to speed up their workflow and reduce man hours, thus reducing cost.

This will surely age very poorly. Just like all of the other “we’ll never need more than…” predictions in the history of technology.

If you think display devices are only going to continue to offer higher resolutions as the main selling point, then let's just agree to disagree. I don't share the belief that resolution will be a primary factor for adoption in the future. Display devices as we know them might change, and how we consume media might change. We can already see companies dabbling in VR, augmented reality devices, holographic devices, etc.
 
It isn't irrelevant, because currently 5700XT owners are enjoying the benefits. It is a relevant data point that being on both consoles benefits AMD in the PC space. It's also looking like it will be years before the transition you mention occurs.
Even in your incorrect scenario, a 5% difference doesn't really change your gaming experience much; however, being locked out of graphical features like ray tracing and mesh shaders is a lot worse for any player.

I said proprietary Nvidia technology... Name me a single piece of Nvidia-only tech that's become standard in games.
Well, PhysX is a library for processing physics simulation on the CPU. It is still used in a variety of engines; Control uses it, as do Cyberpunk, Hitman, etc.

What you are talking about is GPU-accelerated PhysX, where the GPU accelerates some effects instead of the CPU. PhysX popularized this concept back when no game dared to do so; right now most games have GPU-accelerated particles and other effects.

G-Sync is an all-encompassing solution now; it swallowed FreeSync and became its own beast, allowing GeForce users to access features not available on GPUs from other vendors.

TXAA popularized temporal AA; it morphed into TAA later on (without the MSAA component of TXAA).
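
For anyone who hasn't looked at it, the "temporal" part is not magic: the core of a TAA-style resolve is an exponential blend between the current frame and a reprojected history buffer, with the history clamped to the current frame's neighbourhood to limit ghosting. A bare-bones sketch in plain C++, with the 0.1 blend weight picked arbitrarily:

// Bare-bones TAA-style temporal resolve for one pixel, in plain C++ for
// illustration. Real implementations add motion-vector reprojection, YCoCg
// neighbourhood clamping, and careful resampling filters on top of this.
struct Colour { float r, g, b; };

static float clampf(float v, float lo, float hi) { return v < lo ? lo : (v > hi ? hi : v); }

Colour temporalResolve(Colour current,       // this frame's shaded colour
                       Colour history,       // last frame's resolved colour, reprojected
                       Colour nbMin,         // min of this frame's 3x3 neighbourhood
                       Colour nbMax,         // max of this frame's 3x3 neighbourhood
                       float  blend = 0.1f)  // arbitrary weight for the new frame
{
    // Clamp the history into the current neighbourhood's range so stale samples
    // (disocclusions, fast motion) can't ghost indefinitely.
    history.r = clampf(history.r, nbMin.r, nbMax.r);
    history.g = clampf(history.g, nbMin.g, nbMax.g);
    history.b = clampf(history.b, nbMin.b, nbMax.b);

    // Exponential moving average: most of the pixel comes from accumulated
    // history, which is what smooths edges and suppresses shimmer over time,
    // and also what causes the blur-in-motion trade-off mentioned earlier.
    return { history.r + (current.r - history.r) * blend,
             history.g + (current.g - history.g) * blend,
             history.b + (current.b - history.b) * blend };
}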

HBAO+ is pretty popular and much more widespread than any other SSAO solution.
 
Well, PhysX is a library for processing physics simulation on the CPU. It is still used in a variety of engines; Control uses it, as do Cyberpunk, Hitman, etc.

What you are talking about is GPU-accelerated PhysX, where the GPU accelerates some effects instead of the CPU. PhysX popularized this concept back when no game dared to do so; right now most games have GPU-accelerated particles and other effects.

G-Sync is an all-encompassing solution now; it swallowed FreeSync and became its own beast, allowing GeForce users to access features not available on GPUs from other vendors.

TXAA popularized temporal AA; it morphed into TAA later on (without the MSAA component of TXAA).

HBAO+ is pretty popular and much more widespread than any other SSAO solution.

PhysX is not an industry standard

G-Sync is not an industry standard

TXAA is not an industry standard

You see where I'm going with this?

Nvidia comes out with some nice gaming tech, but it never becomes an industry standard.
 
Yeah, computational costs. If we look at technologies that developers have largely bypassed, there's a whole slew of features that never got anywhere close to the adoption rate that DLSS will reach: tessellation, geometry shaders, tiled resources, etc. We have RT accelerators because we have physical limits on chip power and chip sizes. These arguments have to be made against a clear ceiling: you can only shrink a chip so much and pack so much power into it. Eventually either your chips are enormous, or your clock speed is so high that your wattage/cm^2 rivals a nuclear power plant. In either case, we can't move forward.

A good example of a technology that should be deprecated but cannot be is the 3D pipeline, if you will. It's arguably overshadowed by compute shaders in almost every single way: compute is more efficient at dispatching kernels, its direct ALU-to-memory access and scheduling are both more efficient and faster, it doesn't run into the same issues the fixed-function pipeline does, and it can take much greater advantage of all the available ALUs on a chip.

But even then, very few companies have moved to all-compute-based engines, because if they actually did, we could do away with all the silicon spent on ROPs, geometry units, schedulers, etc., and just add even more ALUs for processing. But we haven't, and we likely never will.

It's a case in point that technology sticks around because there are still edge cases where the 3D pipeline is desirable to have. The argument stretches beyond the idea that power alone is enough to deprecate technology; it usually isn't. To this end, MSAA sticks around even though TAA can produce better AA without shimmering (but with blurring during motion). And to that end, deep learning techniques, which have their own pros and cons, will likely stick around too. It all comes down to application, and DLSS is still largely in its infancy compared to the other two, which have been around for a very long time.


Eventually you're going to hit a wall, whether in the number of pixels that need to be rendered or in how dramatically the quality per pixel increases. If you push for more realistic visuals and push the boundaries of what can be rendered, then to make frame time you're going to have to render less. In those cases you need upsampling techniques if you want to maintain a higher resolution or a higher frame rate.

I don't really see a way around this, unless you see a way around the obvious ceiling we're hitting with silicon right now. There's simply no answer but to ask developers to do more with less; code for: do less work and approximate the rest.

I think the disagreements we're encountering are based entirely on the scope at which we view the discussion. Your arguments are predicated on the limitations and understanding of technology today. I'm not thinking about computer graphics on a 10- or 20-year scope; I'm thinking about how it will evolve over the next few hundred years. The TV of today is not the same TV that was invented in 1927. Neither is the phone the same as the first phone invented. The same can be said for computers, computer chips, and rendering techniques.

Computing and computer graphics are fields in their infancy compared to other professional fields of study. I expect drastic changes as the field continues to mature. We won't rely on silicon forever, and we're already exploring alternatives like graphene nanoribbons. Even the way we make chips will change; photonic processors are an area that is being heavily researched.
 
I doubt it. DLAA looks much better than the TAA in Elder Scrolls Online, and the ~10% performance hit is worth it.
Please understand, the idea that something looks better is entirely subjective and not objective. Let’s refrain from trying to pass off subjective opinions as objective observations.
 
From a game development perspective, the bottleneck is man hours, leading to skyrocketing development costs. The average consumer doesn't care what techniques are used: ray-traced shadows or shadow maps, ray-traced GI or baked lighting, SSR or ray-traced reflections; as long as it looks good to them, that's all that matters. Developers benefit from the adoption of these technologies to speed up their workflow and reduce man hours, thus reducing cost.

You didn't answer the question :)

If you think display devices are only going to continue to offer higher resolutions as the main selling point, then let's just agree to disagree. I don't share the belief that resolution will be a primary factor for adoption in the future. Display devices as we know them might change, and how we consume media might change. We can already see companies dabbling in VR, augmented reality devices, holographic devices, etc.

I'm not so naive as to think we've already hit the limits of display technology after a few decades. Whatever advancements are ahead of us will require orders of magnitude higher rendering performance.
 
Developers benefit from the adoption of these technologies to speed up their workflow and reduce man hours, thus reducing cost.

I agree with this. Give it another 10 years and ray tracing hardware should be powerful enough that developers can do away with all the pre-baking of lighting, shadows, reflections and loads of other things, thus saving man hours and money.

The extra graphical polish ray tracing will bring is just the cherry on the top.

I dread to think how many man hours someone spent doing all the cube maps in Spider-Man.
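
As a toy illustration of why ray tracing removes the baking step: the visibility question is answered per frame against the live scene geometry instead of being pre-computed into shadow maps or cube maps by hand. The sphere scene below is made up purely for the example and has nothing to do with how Insomniac actually built things:

// Toy example: answering "is this point in shadow?" with a ray query against
// live scene geometry, instead of looking it up in a pre-baked shadow map.
// The sphere scene here is invented purely for illustration.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 centre; float radius; };

// Does anything block the segment from 'origin' towards 'target' before it gets there?
static bool occludes(const Sphere& s, Vec3 origin, Vec3 target)
{
    Vec3  d    = sub(target, origin);
    float tMax = std::sqrt(dot(d, d));              // distance to the light
    Vec3  dir  = { d.x / tMax, d.y / tMax, d.z / tMax };
    Vec3  oc   = sub(origin, s.centre);

    float b = dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;                  // ray misses the sphere entirely
    float t = -b - std::sqrt(disc);
    return t > 1e-3f && t < tMax;                   // hit lies between surface and light
}

// Shadow query: trace one ray from the shaded point to the light.
bool inShadow(Vec3 point, Vec3 lightPos, const std::vector<Sphere>& scene)
{
    for (const Sphere& s : scene)
        if (occludes(s, point, lightPos)) return true;   // any blocker => shadowed
    return false;
}

The same shape of query, pointed along a reflection vector instead of at a light, is what replaces hand-placed cube maps.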
 