Current Generation Games Analysis Technical Discussion [2024] [XBSX|S, PS5, PC]

When Crysis was released, people said they couldn't tell the difference between the highest settings and one step down (I don't remember the verbiage they used in game), and that may have been true. But today, with today's higher monitor resolutions and our more trained eyes, I bet many of us could tell the difference. Sometimes those settings aren't for today's hardware.

Another example was STALKER. It had an option labeled something like "full realtime lighting". Enabling it tanked performance and appeared to have little image quality benefit. But, like the name suggests, the lighting was done in realtime, so it had all of the benefits of realtime lighting. And those benefits weren't really appreciated back in the day, especially when you were comparing screenshots and not the game in motion over a period of time. And to be fair, I think enabling this option basically turned my computer into a screenshot generator. Single-digit FPS if I remember correctly. In some ways, I see an analogy between this setting and current titles that take a huge performance hit with RT enabled. Maybe it's for the next generation of hardware.
I recall the realtime lighting option in STALKER looking hugely different. IIRC it was tied to DX9 and you could no longer use MSAA.
 
From what I remember there was a dynamic lighting option even in the first STALKER? There was a huge performance hit, and a visual difference in the scenarios where it applied, like using the flashlight indoors or areas with those swinging lights. I vaguely recall a spot fairly early in the game, in a tunnel with barrels and swinging lights, where you ran into some of those four-legged monsters; if you used a grenade on them it caused a huge FPS drop because the lighting actually reacted dynamically.

But otherwise I don't think you'd notice in other environments, especially outdoors.
 

I wish this had more direct quotes from id software. Not sure what this Nvidia Editor's Day event was. But looks like the industry is now fully in on ray tracing, frame gen, machine learning etc. The next five years are going to be rough on a lot of gamers that have a very different idea of what games should be. If id software, the kind of gold standard example of pc performance and optimization, is on this train then it seems there's no stopping it.
 

I wish this had more direct quotes from id software. Not sure what this Nvidia Editor's Day event was. But looks like the industry is now fully in on ray tracing, frame gen, machine learning etc. The next five years are going to be rough on a lot of gamers that have a very different idea of what games should be. If id software, the kind of gold standard example of pc performance and optimization, is on this train then it seems there's no stopping it.
AI is the future.

It's the only thing that can do the heavy lifting with a reasonable power and silicon footprint. Brute force has scaling issues.

Feels a bit like we've taken the steps to modernize traditional rendering; AI models seem to be operating as the new 'baking' for modern hardware.
Instead of pre-baking your assets, you're just training them and shipping the model out.
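
As a sketch of what that could look like (made-up sizes, plain PyTorch, not anything a real engine ships): rather than writing the baked texels to disk, you overfit a small network to them and write its weights instead.

```python
# Illustrative only: overfit a tiny MLP to a baked lightmap so the network
# weights, rather than the texels, are what gets shipped. Names and sizes
# are made up; real approaches add positional encodings or latent grids to
# capture fine detail, which this bare MLP would blur.
import torch
import torch.nn as nn

baked = torch.rand(128, 128, 3)              # stand-in for a baked lightmap (H, W, RGB)

ys, xs = torch.meshgrid(
    torch.linspace(0, 1, 128), torch.linspace(0, 1, 128), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)    # (N, 2) UV inputs
targets = baked.reshape(-1, 3)                           # (N, 3) RGB outputs

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 3))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):                         # "training" replaces the bake step
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords), targets)
    loss.backward()
    opt.step()

torch.save(model.state_dict(), "lightmap_net.pt")   # ship weights instead of texels
```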
 

I wish this had more direct quotes from id software. Not sure what this Nvidia Editor's Day event was. But looks like the industry is now fully in on ray tracing, frame gen, machine learning etc. The next five years are going to be rough on a lot of gamers that have a very different idea of what games should be. If id software, the kind of gold standard example of pc performance and optimization, is on this train then it seems there's no stopping it.
The majority of gamers will be fine.
A small minority of people (perhaps lacking the technical knowledge) will be butthurt.

Same, same as always.

But yeah,
AMD, Intel, NVIDIA, Microsoft and Sony all seem to be walking the same road in the same direction... so the whiny "crowd" will do nothing but create noise, and it can be ignored, as the combined industry ignores it.
 
I wish this had more direct quotes from id software. Not sure what this Nvidia Editor's Day event was. But looks like the industry is now fully in on ray tracing, frame gen, machine learning etc. The next five years are going to be rough on a lot of gamers that have a very different idea of what games should be. If id software, the kind of gold standard example of pc performance and optimization, is on this train then it seems there's no stopping it.

I wouldn’t worry about them too much. They’ll accept the new reality and find something else to complain about. I’m more interested in what new interesting things developers will come up with if AI and RT become first class citizens in the real-time rendering scene.
 
Instead of pre-baking your assets, you're just training them and shipping the model out.

Have been trying to wrap my head around this. When you train a model to produce a fixed output should that really be considered “intelligence”? Is it not just “compression”?

The neural network in these cases is just another compression format.
 
Have been trying to wrap my head around this. When you train a model to produce a fixed output should that really be considered “intelligence”? Is it not just “compression”?

The neural network in these cases is just another compression format.
It would be unfair to call NN models just another compression format :)

A NN takes inputs and outputs and learns the rules for how the inputs reach the outputs, over the dataset and over many passes. It's guessing how to get from A to Z. When you're done you have a model that allows you to go from A' to H'.

With compression you take an input and apply a function f(x) that removes as much data as possible to produce the output.

Then you take that output and apply the inverse f'(x), hoping to get the input back.

The largest difference is that for compression you’re trying to get back to the source.

With models, you’re trying to get the right results for inputs it hasn’t been trained on before.

The nuance is that compression and decompression have their limits.

With a NN, if you drew a landscape with green colour resembling hills and valleys and blue paint in the shape of a lake, the NN can actually generate an entire picture with fully detailed hills and valleys and a very beautiful lake!

If you compressed that image all the way down to just 2 colours, there's just no way for the inverse function to recreate the original; all that data is entirely lost.

From this we see that the NN is generating what it believes we want displayed, based on how we trained it. The compression algorithm, on the other hand, has limits on how much data it can remove if it wants any real chance at getting back to the source.
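
A toy way to show the distinction (all made-up numbers; the "model" is just a polynomial fit standing in for a NN):

```python
# Toy contrast of the two ideas above. Everything is illustrative: the
# "compressor" is just decimation + interpolation, and the "model" is a
# small polynomial fit playing the role of a trained network.
import numpy as np

x = np.linspace(0.0, 1.0, 257)
signal = np.sin(2 * np.pi * 3 * x) + 0.3 * np.sin(2 * np.pi * 20 * x)

# Compression: f(x) throws data away, f'(x) tries to get the source back.
kept = signal[::16]                                  # keep 1 of every 16 samples
restored = np.interp(x, x[::16], kept)               # "decompress"
print("round-trip error:", np.abs(restored - signal).max())
# The detail above the kept sample rate is simply gone; the goal was a
# faithful reconstruction, and past a point it can't be met.

# Model: learn rules from (input, output) pairs, then answer unseen inputs.
train_x, train_y = x[::16], signal[::16]
coeffs = np.polyfit(train_x, train_y, deg=8)         # stand-in for training
unseen = np.array([0.013, 0.407, 0.911])             # inputs not in the training set
print("model guesses:", np.polyval(coeffs, unseen))
# The goal here is plausible answers for new inputs, not recovery of stored data.
```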
 
With models, you’re trying to get the right results for inputs it hasn’t been trained on before.

Yep that’s the intelligence part.

I’m thinking specifically about neural materials and neural textures. In those cases are you really passing in inputs never seen before, or has it essentially devolved to a deterministic f(x)? What are the new inputs that will be passed to these models at runtime?
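
For what it's worth, here is my guess at the runtime shape of it, loosely based on the published neural texture compression idea (a low-resolution latent grid plus a tiny decoder MLP). Every name, size, and the decode step below is hypothetical.

```python
# A guess at what a "neural texture" lookup might look like at runtime:
# a shipped low-res latent grid is filtered at the shaded UV, and a small
# MLP decodes the filtered features into material channels. Illustrative only.
import numpy as np

latents = np.random.randn(64, 64, 8).astype(np.float32)   # shipped latent grid
w0 = np.random.randn(8, 32).astype(np.float32)            # shipped decoder weights
w1 = np.random.randn(32, 7).astype(np.float32)            # albedo(3)+normal(3)+rough(1)

def sample_neural_material(u: float, v: float) -> np.ndarray:
    """The per-sample input is the continuous UV, which the network never saw
    exactly during training; the latents and weights are fixed at ship time."""
    x, y = u * (latents.shape[1] - 1), v * (latents.shape[0] - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, latents.shape[1] - 1), min(y0 + 1, latents.shape[0] - 1)
    fx, fy = x - x0, y - y0
    # bilinear filter of the latent grid at the shading sample's UV
    feat = ((1 - fx) * (1 - fy) * latents[y0, x0] + fx * (1 - fy) * latents[y0, x1]
            + (1 - fx) * fy * latents[y1, x0] + fx * fy * latents[y1, x1])
    return np.maximum(feat @ w0, 0.0) @ w1               # tiny MLP decode

print(sample_neural_material(0.3141, 0.2718))             # 7 material channels
```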
 
Yep that’s the intelligence part.

I’m thinking specifically about neural materials and neural textures. In those cases are you really passing in inputs never seen before or it has it essentially devolved to a deterministic f(x)? What are the new inputs that will be passed to these models at runtime?
Hmm that’s a great question. I would love to learn more about it.

IIRC they said 10x better than compression. I suspect that if you’re getting 10x better than standard compression, that’s very little data to work with, there has to be trained intelligence in there to make up for so much data loss.
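
As a rough sanity check on what a 10x claim would imply (my own numbers and an assumed BC7 baseline, not anything from the presentation):

```python
# Back-of-the-envelope only: the 10x figure is recalled from memory and the
# baseline of BC7 at 8 bits per texel is my assumption.
texels = 4096 * 4096                      # one 4K texture layer
bc7_bytes = texels                        # BC7 = 8 bits/texel -> 16 MiB
neural_bytes = bc7_bytes / 10             # "10x better" -> ~1.6 MiB
print(bc7_bytes / 2**20, "MiB ->", neural_bytes / 2**20, "MiB")
print(8 * neural_bytes / texels, "bits per texel")   # ~0.8 bits/texel left
```

At well under a bit per texel there isn't much room for stored detail, so whatever comes out at that point has to be inferred rather than simply decoded.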
 
I don’t watch much LTT but he’s been putting out some decent content on this RT/AI/FG stuff and the additional challenges facing reviewers.

tldw; We’re in the throes of the RT transition. It’s expensive and frustrating and maybe one day the promise will be realized, but it’s not going away. FG creates all sorts of new headaches for reviewers, but this isn’t a new thing. The era of expecting all hardware to produce the same image is a short-lived anomaly and we’re heading back to a time where reviewers will have to talk about image quality again.

Can’t say I disagree with any of that.


 

I wish this had more direct quotes from id software. Not sure what this Nvidia Editor's Day event was. But looks like the industry is now fully in on ray tracing, frame gen, machine learning etc. The next five years are going to be rough on a lot of gamers that have a very different idea of what games should be. If id software, the kind of gold standard example of pc performance and optimization, is on this train then it seems there's no stopping it.
But it seems like Doom is not an RT-only title like Indy; they state in the article that you can turn it off.
 
I would not count on that necessarily. I was in that room listening to Billy and our transcription does not make mention of such a thing. If that is referring to something, it is referring to the ability to turn off PT.
That's great. I wonder why no outlet reported on that event other than TweakTown?
 
I doubt Nvidia will share, but I’m curious as to what new connections are enabled by the transformer approach. I’m still very fuzzy on ML stuff, but at a high level transformers are meant to find connections between “distant” data points - e.g. two words in a paragraph that are far apart that help provide context. CNNs, on the other hand, are wired such that data points that are close together have more influence (like neighboring pixels in an image).

So if CNNs have reached their limit with DLSS3, I wonder what “distant” data Nvidia is pulling in to further enhance the model. Is the color of a pixel really influenced by other pixels halfway across the screen? Unlikely. Maybe the transformer model is pulling in a lot more temporal data to help with things like disocclusion in the Alan Wake ceiling fan example. So “distant” here can mean far away in time.
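
A toy way to picture that locality difference (nothing about DLSS internals, just the receptive-field argument; shapes and layers are arbitrary):

```python
# Toy contrast of the receptive-field point above; it only shows which pixels
# can influence which in a single layer, nothing about how DLSS actually works.
import torch
import torch.nn as nn

img = torch.randn(1, 8, 64, 64)                 # (batch, channels, H, W)

# A 3x3 convolution: each output pixel only sees its immediate neighbours, so
# influence from "distant" pixels needs many stacked layers to propagate.
conv = nn.Conv2d(8, 8, kernel_size=3, padding=1)
local = conv(img)

# Self-attention over the flattened pixels: every position can attend to every
# other position (spatially, or temporally if frames were concatenated as extra
# tokens) in a single layer, at the cost of comparing all pairs.
tokens = img.flatten(2).transpose(1, 2)         # (batch, 64*64 tokens, 8 features)
attn = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)
global_mix, weights = attn(tokens, tokens, tokens)
print(local.shape, global_mix.shape, weights.shape)   # weights: (1, 4096, 4096)
```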
 