DavidGraham
Veteran
> The 5700XT has steadily been gaining on its Turing competitor.
I haven't seen any evidence of that whatsoever; on the contrary.
And this is after a few months of the 5700XT; it's already a few fps behind the 2070 Super.
[attached chart: 5700XT launch performance]
I’ve seen DLAA in action and in my opinion, it’s something that need not exist. I don’t foresee myself using it at any time in the future.
> I would say it has more to do with AMD drivers being piss poor, so there's naturally more performance to eke out of the cards through driver updates.
I don't know if that's the only reason. GCN was very strong in compute, and games became more compute heavy. That may be a byproduct of the architecture being in both major consoles, or it could just be the trend we were moving towards anyway.
The add-in cards are too slow too, the ones I have are basically useless in games.
It's very dead, there's not been a decent game release with PhysX for years now.
Shame, as it's by far my favorite tech to see in games, Cryostasis being one of the best examples.
> Drivers have nothing to do with it, there's not been a AAA game released with PhysX for years now, it's dead.
So I've argued before that it's not a AAA game, and I stand behind that, but Cyberpunk 2077 uses PhysX. There aren't any settings to tweak in the menus, but the PhysX .dlls are in the bin folder. Honestly, it's probably one of the reasons the game runs so poorly on Xbox One and PS4, since there is no GPU acceleration there and running PhysX on a Jaguar has got to be painfully slow. I don't know if it's still true, but I remember that before nVidia bought PhysX, there was criticism that PhysX on the CPU was single threaded and didn't use SSE or AVX either.
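If you want to check this yourself, a quick way is to look for the PhysX runtime DLLs in a game's binaries folder. A minimal sketch, assuming a hypothetical install path (adjust it for your own setup):

```python
# Rough sketch: list PhysX-related DLLs shipped with a game.
# The path below is a placeholder example, not the actual install location.
from pathlib import Path

game_bin = Path(r"C:\Games\Cyberpunk 2077\bin\x64")  # hypothetical path

for dll in sorted(game_bin.glob("*.dll")):
    # PhysX runtime libraries typically contain "PhysX" in the file name.
    if "physx" in dll.name.lower():
        print(dll.name)
```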
> Correct me if I am wrong, but isn't PhysX the default physics engine used by UE up to the last iteration of UE4?
Up to 4.25. But they are only on 4.27 now, so this was a fairly recent change.
> And this is after a few months of the 5700XT, it's already a few fps behind the 2070 Super.
I'm not sure how that graph is supposed to disprove my claim?
https://www.techpowerup.com/review/powercolor-radeon-rx-5600-xt-red-devil/28.html
> I'm not sure how that graph is supposed to disprove my claim?
Yeah, compare the same NVIDIA model against the same AMD model years apart, not different models; too many variables otherwise.
> This will surely age very poorly. Just like all of the other "we'll never need more than…" predictions in the history of technology.
Notice how phone screens aren't really getting higher DPI any more; it's because there's no real benefit, as they're already at the limit of human vision.
Though I think there will be a need for 8K screens, especially if your screen takes up a whole wall.
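For a rough sense of the numbers behind the "limit of human vision" point, here's a back-of-the-envelope calculation; the panel specs and viewing distance are illustrative assumptions, and ~60 pixels per degree is a commonly cited approximation of 20/20 acuity:

```python
import math

# Illustrative assumptions: a ~460 PPI phone viewed from about 12 inches away.
ppi = 460                 # pixels per inch (hypothetical phone panel)
distance_in = 12.0        # viewing distance in inches

# Width on the screen covered by one degree of visual angle, in inches.
inch_per_degree = 2 * distance_in * math.tan(math.radians(0.5))

pixels_per_degree = ppi * inch_per_degree
print(f"{pixels_per_degree:.0f} pixels per degree")  # ~96, well above the ~60 PPD rule of thumb
```

By that rough measure, current phone panels already exceed what a typical viewer resolves at normal viewing distances; a wall-sized screen viewed up close is a different story.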
> I'm not sure how that graph is supposed to disprove my claim?
The gain happened a few months after the launch of the 5700XT. It's due to driver enhancements (the 5700 series suffered huge driver problems that persisted for a long time after launch), not to game-specific workloads that favor AMD due to console influence, as you postulated.
With regards to SSAA, that is perhaps a poor example to get your point across. SSAA never made sense in the first place.
You'll also need to provide further clarification as to what you mean by "the cost got too high." If you're referring to computational cost, I'd argue against that line of thinking as the driving factor for the adoption of RT accelerators. In my view, the real driving factor is the lack of scalability in man-hours.
I actually believe that these upsampling techniques will go away with time. They might be replaced with more efficient techniques, or we might get to a point where there's sufficient power that they're not needed. I don't share the belief that power will be used to drive increased screen resolutions. Consumers' spending patterns on display devices suggest that we're reaching a point where the resolution is good enough. I expect to see extremely poor adoption of 8K and a paradigm shift in display devices.
> My memory could be failing me, but my recollection is that the 5700XT and 2070 Super were consistently trading blows soon after they both launched. This isn't a new thing.
The 5700XT was roughly equal to a 2070 at launch in many game averages. Now a 5700 is roughly equal to a 2070, while a 5700XT is up there with a 2070S. Check the two images I posted earlier. I'd say it's likely the 5700XT will gain a few more percentage points over the next 12-18 months.
> I'd say it's likely the 5700XT will gain a few more percentage points over the next 12-18 months.
I disagree. The gains happened due to driver improvements in the first few months after launch; that's over now, and the position of the 5700XT has remained stationary for the past 18 months. In the future, I expect a quick nosedive as games start utilizing DX12U features.
What problems do you propose can be solved by throwing more man hours at current tech? Art, animation? There are fundamental mathematical limitations to current graphics rendering techniques that no amount of developer time can fix.
This will surely age very poorly. Just like all of the other “we’ll never need more than…” predictions in the history of technology.
> It isn't irrelevant, because currently 5700XT owners are enjoying the benefits. It is a relevant data point that being on both consoles benefits AMD in the PC space. It's also looking like it will be years before the transition you mention will occur.
Even by your incorrect scenario, a 5% difference doesn't really change your gaming experience much; however, being locked out of graphical features like ray tracing and mesh shaders is a lot worse for any player.
> I said proprietary Nvidia technology... Name me a single piece of Nvidia-only tech that's become standard in games.
Well, PhysX is a library for processing physics simulation on the CPU. It is still used in a variety of engines: Control uses it, as do Cyberpunk 2077, Hitman, etc.
What you are talking about is GPU-accelerated PhysX, where the GPU accelerates some effects instead of the CPU. PhysX popularized this concept back when no game dared to do so; right now most games have GPU-accelerated particles and other effects.
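For anyone unfamiliar with what "GPU-accelerated particles" boils down to: it's the same tiny update applied independently to a huge number of particles, which is exactly the kind of work a GPU spreads across thousands of threads. A minimal CPU-side sketch of that update step (plain NumPy standing in for a GPU kernel, not PhysX's actual API; all figures are illustrative):

```python
import numpy as np

# Toy particle state: positions and velocities for N particles (N is arbitrary here).
N = 100_000
pos = np.zeros((N, 3), dtype=np.float32)
vel = np.random.uniform(-1.0, 1.0, (N, 3)).astype(np.float32)
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)
dt = 1.0 / 60.0  # one 60 fps frame

# One simulation step: independent math per particle, trivially parallel.
vel += gravity * dt
pos += vel * dt
```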
G-Sync is an all-encompassing solution now; it swallowed FreeSync and became its own beast, allowing GeForce users to access features not available on GPUs from other vendors.
TXAA popularized temporal AA; it morphed into TAA later on (without the MSAA component of TXAA).
HBAO+ is pretty popular and much more widespread than any other SSAO solution.
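On the TAA point above: the core of most temporal AA implementations is just an exponential blend of the current frame with a reprojected history buffer. A stripped-down sketch of that accumulation step; the blend factor and array shapes are illustrative, and real implementations add motion-vector reprojection and history clamping:

```python
import numpy as np

def taa_accumulate(current, history, alpha=0.1):
    """Blend the current frame into the accumulated history.

    current, history: float arrays of shape (H, W, 3) in linear color.
    alpha: weight of the new frame; smaller values smooth more
           (and invite more ghosting in motion).
    """
    return alpha * current + (1.0 - alpha) * history

# Toy usage: accumulate noisy frames of a constant gray image.
h, w = 4, 4
history = np.zeros((h, w, 3), dtype=np.float32)
for _ in range(64):
    noisy_frame = 0.5 + np.random.normal(0.0, 0.1, (h, w, 3)).astype(np.float32)
    history = taa_accumulate(noisy_frame, history)
print(history.mean())  # settles near 0.5 as the per-frame noise averages out
```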
Yeah, computational costs. If we look at technologies that developers have largely bypassed, there is a whole slew of features that never got anywhere close to the adoption rate that DLSS will, namely tessellation, geometry shaders, tiled resources, etc. We have RT accelerators because we have physical limits on chip power and chip sizes. These arguments have to be made with a clear ceiling: you can arguably only shrink a chip so much and pack it with so much power. There comes an eventuality where either your chips are enormous, or your clock speed is so high that your wattage per cm^2 is greater than a nuclear power plant's. In either case, we can't move forward.
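To put a rough number on the wattage-per-area point, a quick back-of-the-envelope calculation; the board power and die size below are illustrative, only roughly in the ballpark of a large modern GPU rather than any specific product:

```python
# Illustrative figures only: a big GPU dissipating ~350 W over a ~600 mm^2 die.
board_power_w = 350.0
die_area_mm2 = 600.0

die_area_cm2 = die_area_mm2 / 100.0           # 1 cm^2 = 100 mm^2
power_density = board_power_w / die_area_cm2  # watts per cm^2

print(f"{power_density:.0f} W/cm^2")  # ~58 W/cm^2 concentrated in a tiny area
```

Whatever the exact comparison, the point stands: you can't keep raising clocks and packing more logic into the same area without the heat flux becoming unmanageable.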
A good example of a technology that should be deprecated but cannot be is the 3D pipeline, if you will. It's arguably overshadowed by compute shaders in almost every single way: compute is more efficient at dispatching kernels, and its direct ALU-to-memory access and scheduling are both more efficient and faster. It doesn't run into the same issues that the fixed-function pipeline does, and it can take much larger advantage of all the available ALU on a chip.
But even then, very few companies have moved to all-compute-based engines, because if they actually did, we could do away with all the silicon space used for ROPs, geometry, schedulers, etc., and just add even more ALU units for processing. But we haven't. And we likely never will.
It's a case in point that technology sticks around because there are still edge cases where the 3D pipeline is desirable to have around. It's an argument that stretches beyond the idea that power alone is enough to deprecate technology; it usually isn't. To this end, MSAA sticks around even though TAA can draw better AA without shimmering (but with blurring during motion). And to that end, deep learning techniques, which have their own pros and cons, will likely stick around too. It all comes down to application, and DLSS is still largely in its infancy compared to the other two, which have been around for a very long time.
Eventually you're going to hit a wall, whether it's in the number of pixels that need to be rendered or in how dramatically quality per pixel increases. If you push for more realistic visuals and push the boundaries of what can be rendered, then to make frame time you're going to have to render less. In those cases you need upsampling techniques if you want to maintain a higher resolution or a higher frame rate.
I don't really see a way around this, unless you see a way around the obvious ceiling we're hitting with silicon right now. There's simply no answer but to ask developers to do more with less; code for doing less work and approximating the rest.
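As a concrete illustration of the "render less to make frame time" trade-off, here's the pixel-count arithmetic behind rendering at 1440p and upsampling to 4K (the 60 fps budget and resolutions are just the usual example figures):

```python
# Frame budget at 60 fps, in milliseconds.
frame_budget_ms = 1000.0 / 60.0   # ~16.7 ms

# Pixel counts for native 4K versus a 1440p internal render.
native_4k = 3840 * 2160           # 8,294,400 pixels
internal_1440p = 2560 * 1440      # 3,686,400 pixels

shading_ratio = native_4k / internal_1440p
print(f"Frame budget: {frame_budget_ms:.1f} ms")
print(f"4K shades {shading_ratio:.2f}x the pixels of 1440p")  # 2.25x
```

That 2.25x reduction in shaded pixels is the headroom an upsampler buys back, which is why these techniques keep showing up once per-pixel cost climbs.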
> I doubt it. DLAA looks much better than the TAA in Elder Scrolls Online, and the ~10% performance hit is worth it.
Please understand, the idea that something looks better is entirely subjective, not objective. Let's refrain from trying to pass off subjective opinions as objective observations.
From a game development perspective, the bottleneck is man-hours, leading to skyrocketing development costs. The average consumer doesn't care what techniques are used: ray-traced shadows or shadow maps, ray-traced GI or baked lighting, SSR or ray-traced reflections; as long as it looks good to them, that's all that matters. Developers benefit from the adoption of these technologies because they speed up their workflow and reduce man-hours, thus reducing cost.
If you think display devices are only going to continue to offer higher resolutions as the main selling point, then let's just agree to disagree. I don't share the belief that resolution will be a primary factor for adoption in the future. Display devices as we know them might change, and how we consume media might change. We can already see companies dabbling in VR, augmented reality devices, holographic devices, etc.