Upscaling Technology Has Become A Crutch

It’s not just you. As it stands, UE5’s performance cost relative to the visuals it delivers is one of the most unjustifiable engine trade-offs I’ve seen in a long time.
It has to be a developer issue more than an engine one. Perhaps game development has just gotten too complex? Game optimization has been dreadful for the past 1-2 years.
 
Each game built on the one before it. Uncharted 1 required a new engine. Uncharted 2 refined the engine. Uncharted 3 took it further. Same for Gears, or BG: Dark Alliance, or whatever. When companies had their own in-house engines, they used them for multiple titles, so costs were lower and were recouped sooner.

Is engine development really most of the expense? I would assume art/sound assets, planning, design, QA, and promotion/marketing are a bigger chunk. I still believe that 3 small/medium games are more costly to make than 1 large one.

I saw a talk with Matt Damon where he described how, when the DVD market disappeared, a whole layer of medium-big movies died with it, because they relied on DVD sales to recoup costs and turn a profit.
Now that sales channel is gone, studios have to go really big and make the money back on worldwide theater releases. That costs a lot of money, so they gamble on big “sure thing” projects rather than somewhat smaller titles with possibly smaller audiences.

I think the same thing has happened to games: AA-type games are going away because publishers are risk-averse :)
 
Building a game engine for an AAA game probably takes 5+ years if you start from scratch and want to equal the top end of what's available. And that's not just a renderer; that's the whole content pipeline and editing tools. Then you look at companies that threw money at games trying to bring old engines up to speed, and it just ended up being a money pit, like whatever the Halo Infinite engine is, or Frostbite. Halo Infinite is one of the most expensive games ever made, and it does not show. Battlefield 2042 was an absolute mess. A lot of that has to do with the fact that the tools were glued together and unproductive. Frostbite has a bad track record now. Even Battlefield 4 was a mess at launch and took a year to get patched into a good state. Cyberpunk is the same: it launched in a bad state and took a lot of patches to get up to speed.
 
It has to be a developer issue more than an engine one. Perhaps game development has just gotten too complex? Game optimization has been dreadful for the past 1-2 years.
Perhaps, but when most of the devs are having issues, it’s no longer just a dev issue.
 
Yes!!!

No sarcasm: wait, people here are upset by this?
Yes, because a “good TAA implementation” is an oxymoron. Even the best we currently have, DLSS, has way too many artifacts: ghosting galore, blurring, and so on are just some of its many obvious problems. I mean, there’s a whole subreddit dedicated to disdain for TAA.
 
I think many more people would be upset if graphics didn’t evolve. The demand to have games run better at native resolution doesn’t make a lot of sense either, at least from a certain perspective: no one wants to be stuck with last-generation graphics forever.

Nvidia triggered the end of native rendering when it introduced the 2000-series GPUs. Since then, the silicon budget devoted to AI calculations on their cards has continued to take up a greater portion of the chip while standard rendering hardware dwindles.

All hardware companies are on their way to phasing out native rendering. Your only recourse is to wait it out for these algorithms to improve.

Yelling at developers doesn’t make a lot of sense either; they target the configurations that the hardware vendors set up. You’re paying top dollar for a 4090 to play with upscaling, AA and frame generation, not without them.
 
I’d argue that TAA and its derivatives are not necessary for graphics to improve. In fact, TAA has had a negative impact on image clarity and motion clarity, and what we got in return was image stability? A fair trade-off for some, but not for others.

Personally speaking, I paid for a 4090 so that I could avoid the use of TAA or DLSS where possible by brute-forcing everything at a higher rendering resolution. I am absolutely not a fan of frame generation and steer clear of it.
 
How would you argue that it’s not necessary?

Gaming is running into the same hard wall that every computational problem eventually hits: more computation requires significantly more bandwidth. Doing more computation per pixel means you need to keep reading and writing those results somewhere. The way we’ve gotten around computation is baking, so that nothing needs to be computed at run time, and we have done that for decades now.

We’ve hit the wall of what baking can achieve, which is why we are seeing movement back towards hardware accelerators like RT and AI silicon. So how are graphics going to improve if we continue with another 8 years of baking lighting and geometry? Outside of better textures, nothing else will improve without moving to run-time calculations.
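
To put that baking trade-off in the abstract, here’s a toy sketch (not real engine code; the struct and function names are made up): the work either happens per sample every frame in ALU, or it happens once offline and every frame afterwards just pays for a memory read.

```cpp
// Toy sketch: baking trades per-frame computation for storage and memory bandwidth.
#include <vector>

struct Light { float x, y, z, intensity; };

// Runtime path: evaluate lighting per sample, every frame (ALU-bound).
float shadeRuntime(float px, float py, float pz, const std::vector<Light>& lights) {
    float sum = 0.0f;
    for (const Light& l : lights) {
        float dx = l.x - px, dy = l.y - py, dz = l.z - pz;
        sum += l.intensity / (dx * dx + dy * dy + dz * dz + 1.0f); // simple falloff
    }
    return sum;
}

// Baked path: precompute the same result into a lightmap once, offline.
// At run time every lookup is just a read -- cheap ALU, but it costs bandwidth
// and storage, and nothing that was baked is allowed to move.
std::vector<float> bakeLightmap(int w, int h, const std::vector<Light>& lights) {
    std::vector<float> lightmap(static_cast<size_t>(w) * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            lightmap[static_cast<size_t>(y) * w + x] =
                shadeRuntime(static_cast<float>(x), static_cast<float>(y), 0.0f, lights);
    return lightmap;
}
```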
 
Hmm, what does TAA have to do with the transition to hardware accelerators like RT? TAA is not a requirement for using these accelerators; it's just there to help performance because the hardware is lagging behind. That won't remain the case forever, though, if history is anything to go by. We always have gaps between the new discoveries that lead to drastic improvements in technology. I can't accept that the computational limitations we face today will still be relevant tomorrow, because history has proven that not to be true.
 
Yes!!!

No sarcasm: wait, people here are upset by this?
Playing devil's advocate, I think the concern here is basically a more 'swimming', less solid rendering than before, which some find aesthetically more jarring. I saw a recent gameplay clip set in space where the shadows lagged behind the motion of the objects, and it looked really wrong. Temporal artefacts may be more disagreeable than other IQ issues. So yeah, perhaps simpler shadows that lose resolution at distance are actually preferable to realistic shadows that come and go as you move? Not too dissimilar to people not wanting fancy-pants shaders when they result in constant movement judder - the interruption to motion is more bothersome than more primitive graphics.

Even if people can't articulate their issues, if they voice them, I think there's something there that needs to be understood rather than just dismissed. If people aren't happy with various upscaling measures, let's find out why, to better understand which rendering shortcomings we should be focussing on - particularly in case we start backing ourselves into a disagreeable future.
 
We are moving towards accelerators because standard computation is not fast enough. TAA sits in the same category as DLSS, except DLSS requires dedicated silicon. To me these are the same thing; more teraflops will not solve the issue.

At the end of the day, I respect your position on IQ, but you’re not going to get that level of native rendering prowess except on titles that are still locked to the old way of rendering. I think UE5 is basically the epitome of the move to real-time rendering, as it handles both geometry and lighting at run time, and the pressure it places on VRAM must be enormous, which is why you’re getting poor performance when rendering natively. That’s why we see companies moving towards TAAU, RT and so forth using dedicated silicon: it relieves the pressure of all that computation and keeps bandwidth demands down, lowering the cost to consumers.

Otherwise, take a gander at what’s happening in the AI space. We paid top dollar for P100s, V100s and A100s, and despite these having similar computational power to consumer cards, the bandwidth and VRAM are on a completely different scale because Nvidia links them together. We pay well over $15K for these types of boxes.

We also work on reducing the number of bits and the amount of bandwidth required, and the end result is that we use the savings up almost immediately. But those savings turn into a revolution in AI quality, and those revolutions were made possible through hardware advancements.

We’re running into the same wall here with gaming and UE5. Sure, there may be some software revolutions in the near future, but they’re not going to happen without support from hardware vendors. Epic will need to advocate for changes in hardware to make Nanite and Lumen run faster.
 
Playing devil's advocate, I think the concern here is basically a more 'swimming', less solid rendering than before, which some find aesthetically more jarring. I saw a recent gameplay clip set in space where the shadows lagged behind the motion of the objects, and it looked really wrong. Temporal artefacts may be more disagreeable than other IQ issues. So yeah, perhaps simpler shadows that lose resolution at distance are actually preferable to realistic shadows that come and go as you move? Not too dissimilar to people not wanting fancy-pants shaders when they result in constant movement judder - the interruption to motion is more bothersome than more primitive graphics.

Even if people can't articulate their issues, if they voice them, I think there's something there that needs to be understood rather than just dismissed. If people aren't happy with various upscaling measures, let's find out why, to better understand which rendering shortcomings we should be focussing on - particularly in case we start backing ourselves into a disagreeable future.
I have no issue with saying DLSS could be better or improved. Without a doubt, that set of technologies will continue to improve over time.

I’m pushing back against the idea that these technologies are a crutch. If I went into Excel and drew a graph of teraflops and bandwidth across the family of GeForce cards, you’d see the slope go negative.

If I did a graph of overall silicon budget versus accelerator budget, we’d see accelerators taking up more and more of that chip space.

I definitely believe that if we are to review anything, we should review it for how well it meets the goal of what it is trying to be, not what we want it to be.

And this whole conversation seems to be the opposite of that. People want the 4090 to perform a certain way, but it was not designed to; it was designed to run DLSS 3 with frame generation. And games from developers who target that will undoubtedly perform poorly when not running it.

Saying you want more from DLSS is fine. Saying developers suck for targeting a game specifically at DLSS is wrong, imo.
 
Targeting a vendor-specific technology stack that is not open source is most definitely wrong.

Vendors can provide hardware accelerators, but the software should be vendor-agnostic. So from that perspective, it’s definitely wrong.
 
In the real world, software is never vendor-agnostic. Simple software might be made to run well enough on most hardware, but high-performance software is almost always designed with specific hardware (or a range of hardware) in mind.
For example, an open API such as Vulkan or DX12 might enable software to run on most compatible hardware, but to run optimally you almost always need to optimize for specific hardware characteristics. Sometimes the majority of hardware vendors have similar characteristics, so you don't have to optimize separately, but most of the time that's not the case, and you do need different code paths for different hardware.
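
For illustration, here's a minimal sketch of what that can look like in practice with Vulkan (the enum and the example upscaler choices in the comments are hypothetical; real engines key off a lot more than the vendor ID):

```cpp
// Minimal sketch: picking a vendor-specific code path at startup.
// Assumes a VkPhysicalDevice has already been selected.
#include <vulkan/vulkan.h>

enum class GpuPath { Nvidia, Amd, Intel, Generic };

GpuPath selectCodePath(VkPhysicalDevice device) {
    VkPhysicalDeviceProperties props{};
    vkGetPhysicalDeviceProperties(device, &props);
    switch (props.vendorID) {                 // PCI vendor IDs
        case 0x10DE: return GpuPath::Nvidia;  // e.g. a DLSS-capable path
        case 0x1002: return GpuPath::Amd;     // e.g. prefer FSR
        case 0x8086: return GpuPath::Intel;   // e.g. prefer XeSS
        default:     return GpuPath::Generic; // portable fallback
    }
}
```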
 
That’s technically true, but in the context of the DLSS discussion it’s a false equivalence. Designing a game specifically for DLSS is not equivalent to designing a game for DX12; the consequences of doing so are vastly different, and I’m frankly not sure why this was even brought up. Look at the post I quoted?
 
Targeting a vendor-specific technology stack that is not open source is most definitely wrong.

Vendors can provide hardware accelerators, but the software should be vendor-agnostic. So from that perspective, it’s definitely wrong.
Unless I’m mistaken, that’s how Voodoo graphics cards accelerated the adoption of 3D accelerators on PC, at a time when the PC was falling dramatically behind consoles in terms of 3D performance.

If developers hadn’t targeted those bespoke technologies, the CPUs back in the day couldn’t have cut it, and the PC would never have had a chance at running those 3D titles.

Now we are back there again: the standard makeup of a GPU is unable to cross the threshold of graphical power we require to move to the next generation of graphics, so we need to leverage a technology on the market today that makes the performance viable while we move in this direction. It doesn’t matter that it is proprietary as long as it works. The rest of the market will move into this space, competition will happen again, and all of these technologies will be supported.
 
Once again with the false equivalence. The situation with Voodoo accelerators is drastically different from today's. Firstly, there's no reason for DLSS to be proprietary, as XeSS has shown us. DLSS could run on other GPUs; all that's needed is for hardware vendors to implement their own hardware accelerators. I will say it again: targeting vendor-specific technology is wrong, and consumers and devs will pay the price. As it stands, the proprietary nature of DLSS has essentially created a monopoly in the GPU space. We're paying through the nose for GPUs, and it's why you see enthusiasts cheering on Intel. It's wrong for developers to do this, and they'll suffer the consequences.
 