Digital Foundry Article Technical Discussion [2023]

I wouldn't say those are 'unplayable' settings either, but I'm going by the chart provided, which has it at 51fps at 1440p. Is that at native resolution with no DLSS 2, just frame gen?

I'm not sure which chart we're talking about now, but this is with DLSS Auto plus FG on a 4070Ti.

It'll sometimes dip into the high 50s, but it's remarkably consistent around 60fps.

I'll happily record and post a video at some point if there's any doubt, although I'm on a phone atm while the wife plays TLOU so it won't be tonight.
 
Actually you can. It automatically turns on DLSS if you turn on frame gen, but if you turn DLSS off again after that and apply, then it will be native + FG.

Well I'll be damned!!!

That's another option for me to play around with, and if there's one thing I like, it's options.
 
Personally I'm not convinced that 10-12 GB of VRAM is the cutoff for modern IQ on midrange PC GPUs.

I think past PC ports just benefited from consoles having 8 GB with only 3-4 GB used for GPU-exclusive data. Pair that with 40-50 MB/s of bandwidth to the HDDs and you get a formula that gave PC ports a lot of legroom.

Now the PS5 and XS have 7-10 GB of what is effectively VRAM, plus SSDs, while DX12 forces more intimate memory management on PC devs. All of a sudden, if you are porting from the PS5/XS, a lot of that legroom has disappeared.

Devs will be forced to work harder on memory management. DirectStorage will probably help if devs buy into it. But devs will have to find memory management solutions that (1) scale and (2) use memory efficiently.

Because the idea that low-midrange performance paired with 8 GB of VRAM will leave some of your textures at a quality from two gens ago is absurd. The PS3 and 360 gave us 720p off 512 MB of total RAM, while the 360 was forced to contend with pulling data off a DVD at 16 MB/s.

I think devs have a tendency not to worry about certain aspects of rendering when no bottlenecks are present. It's like how devs made efficient use of the DVD's limited data rate and space, but game sizes ballooned really fast the moment the HDD became standard. However, when devs are forced to deal with bottlenecks, they tend to find clever solutions to overcome them.
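
To put rough numbers on that legroom (a back-of-the-envelope using commonly quoted ballpark figures, nothing official, so treat the exact values loosely):

```python
# Rough streaming-budget comparison; all figures are ballpark, not official specs.
LAST_GEN_HDD_MB_S = 50        # ~40-50 MB/s sustained from a last-gen console HDD
CURRENT_GEN_SSD_MB_S = 5500   # ~5.5 GB/s raw PS5 SSD throughput, before compression

LAST_GEN_GPU_DATA_GB = 4      # ~3-4 GB of the 8 GB pool used for GPU-exclusive data
CURRENT_GEN_GPU_DATA_GB = 10  # ~7-10 GB of unified memory acting as VRAM

def seconds_to_refill(gigabytes: float, mb_per_second: float) -> float:
    """Time to stream a full GPU-resident working set from storage."""
    return gigabytes * 1024 / mb_per_second

print(f"Last gen:    ~{seconds_to_refill(LAST_GEN_GPU_DATA_GB, LAST_GEN_HDD_MB_S):.0f} s")
print(f"Current gen: ~{seconds_to_refill(CURRENT_GEN_GPU_DATA_GB, CURRENT_GEN_SSD_MB_S):.1f} s")
# ~82 s vs ~1.9 s: consoles can now repopulate a much larger working set almost
# on demand, which is exactly the legroom PC ports used to get for free from
# carrying extra VRAM.
```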
 
I'm not sure which chart we're talking about now, but this is with DLSS Auto plus FG on a 4070Ti.

Thanks, the chart was what troyan posted, which I quoted in my initial post.

It'll sometimes dip into the high 50s, but it's remarkably consistent around 60fps.

I'll happily record and post a video at some point if there's any doubt, although I'm on a phone atm while the wife plays TLOU so it won't be tonight.

Not accusing you of lying; my comments were based on the post that used that chart as evidence of the 4070Ti's longevity for this generation. I guess they, and GameGPU in particular, picked an absolute worst-case scenario to demonstrate that, but your performance, especially factoring in the considerably higher resolution you're running at, made me wonder if there were settings differences. I mean, that's a huge gap. I guess they just picked a stress test (or your use of DLSS Auto means you're dipping into DLSS Ultra Performance to maintain that 60fps, not sure how it works)?
 
I believe RE titles sell well on PC, but compared to the entire installed console base? I highly doubt this game was designed for PC first and foremost, with the console versions ported as an afterthought.

Going by the Steam concurrent-user numbers and then Capcom's announcement of total sales, PC should be in the ballpark of half the game's sales. Capcom has claimed for some time now that it's aiming for PC to become its main platform, and they put out a press release a while back saying they had achieved that, in half the time they expected to. Steam is so big these days that nearly every game that sells, sells well there, routinely above consoles. Ubisoft has had quarters or half-years where Uplay was equal to PS4/PS5 and more than Switch and Xbox combined in its financial reports. Hogwarts Legacy and Elden Ring probably have half or more of their entire player base on Steam.

Though I wouldn't think the state of this game is due to them focusing on PC, since it has issues everywhere. There's another word for that :D
 
Thanks, the chart was what troyan posted, which I quoted in my initial post.

Not accusing you of lying; my comments were based on the post that used that chart as evidence of the 4070Ti's longevity for this generation. I guess they, and GameGPU in particular, picked an absolute worst-case scenario to demonstrate that, but your performance, especially factoring in the considerably higher resolution you're running at, made me wonder if there were settings differences. I mean, that's a huge gap. I guess they just picked a stress test (or your use of DLSS Auto means you're dipping into DLSS Ultra Performance to maintain that 60fps, not sure how it works)?

Auto DLSS is Quality for 1440p and below, Performance for 4K, and Ultra Performance for 8K.
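
In practice Auto just seems to key off the output resolution. A rough sketch of that mapping (the scale factors are the standard DLSS 2 per-axis ratios; the thresholds are just my reading of the Auto behaviour, not anything documented):

```python
# Hypothetical sketch of how the DLSS "Auto" preset appears to pick a mode.
# Per-axis scale factors are the standard DLSS 2 ratios; the resolution
# thresholds are only my interpretation of the observed Auto behaviour.
DLSS_SCALE = {
    "Quality": 1 / 1.5,            # ~66.7% of output resolution per axis
    "Performance": 1 / 2,          # 50%
    "Ultra Performance": 1 / 3,    # ~33.3%
}

def auto_mode(output_height: int) -> str:
    if output_height <= 1440:
        return "Quality"
    if output_height <= 2160:
        return "Performance"
    return "Ultra Performance"

for height in (1440, 2160, 4320):
    mode = auto_mode(height)
    internal = round(height * DLSS_SCALE[mode])
    print(f"{height}p output -> {mode} (~{internal}p internal)")
# 1440p -> Quality (~960p), 2160p -> Performance (1080p), 4320p -> Ultra Performance (1440p)
```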
 
It doesn't matter what the excuse is, bad ports, etc. Those will always exist. Nvidia's job is to deliver value with their products and so far, the whole Ada line fails to deliver much value.

No, the company Nvidia's job is to deliver money/ROI for its shareholders. Everything they do is a means to that end.
It does not matter whether that means "delivering value with products" or just price gouging; that is what the job is.
 
Thanks, the chart was what troyan posted, which I quoted in my initial post.

Not accusing you of lying; my comments were based on the post that used that chart as evidence of the 4070Ti's longevity for this generation. I guess they, and GameGPU in particular, picked an absolute worst-case scenario to demonstrate that, but your performance, especially factoring in the considerably higher resolution you're running at, made me wonder if there were settings differences. I mean, that's a huge gap. I guess they just picked a stress test (or your use of DLSS Auto means you're dipping into DLSS Ultra Performance to maintain that 60fps, not sure how it works)?

I'm not 100% sure, but I think Auto just automatically sets the most appropriate DLSS quality level for your resolution and hardware combo; I haven't noticed anything dynamic going on. And Ultra Performance is pretty noticeable!

My guess is the particular benchmark troyan posted isn't using FG despite it saying so in the title, as it seems the results can be filtered. The performance differential between the 4070Ti and 7900XTX doesn't look large enough.

Here's the benchmark run on my system which seems to be a bit heavier than normal gameplay. I'm also using VRR.

[Attached benchmark screenshot: CP2077.jpg]

EDIT: this is also using the HD Reworked textures mod.
 
My guess is the particular benchmark troyan posted isn't using FG despite
Yes it isn't; it's using DLSS2/FSR2 Quality only. DLSS3 is selectable on another tab, and its result is 83fps for the 4070Ti.

 
No, the company Nvidia's job is to deliver money/ROI for its shareholders. Everything they do is a means to that end.
It does not matter whether that means "delivering value with products" or just price gouging; that is what the job is.
I strongly disagree. Nvidia does not exist without its customers. Their job is to design and manufacture products that are compelling to their customers. The side benefit of that is that I get a nice return on my investment. What you're talking about is putting the cart before the horse, and that's how companies crumble.
 
I strongly disagree. Nvidia does not exist without its customers. Their job is to design and manufacture products that are compelling to their customers. The side benefit of that is that I get a nice return on my investment. What you're talking about is putting the cart before the horse, and that's how companies crumble.

You seem to be confusing purpose with strategy. Companies change products and strategy all the time. Their purpose always remains the same.
 
It doesn't matter what the excuse is, bad ports, etc. Those will always exist. Nvidia's job is to deliver value with their products and so far, the whole Ada line fails to deliver much value. If you purchase a graphics card for $800 USD, as in the case of the 4070Ti, the card should be designed with longevity in mind. The 4070Ti is not.

Bad ports will indeed always exist, but there's a difference between a port that uses the additional VRAM for something of true benefit to the gamer, and one that simply inflates the VRAM requirements artificially as a marketing tactic for one particular IHV.

You can literally save something like 1.5GB of VRAM in RE4 for example by turning the shadow quality down from max to high. And the change is basically invisible.

The real question is why are you defending the card so strongly? Most people look at the Ada lineup and think it's a bad deal all around. I don't know why you have a vested interest in defending Nvidia.

Lol, I've zero "vested interest in defending Nvidia". If you looked at my posting history, you'd see I've been one of the more vocal people on here about the poor value offered this generation from both Nvidia and AMD. I may have a 4070Ti, but as I've posted on here, I held my nose when I bought it because I knew it was poor value relative to past generations. The thing is, though, it's still, in my estimation, the best value GPU available at the moment if you want 3080+ level performance. And 12GB doesn't change that.

As noted, while there may be a very small number of extreme corner cases with settings that can breach 12GB, the number of those that offer any kind of meaningful benefit to the game for that breach will likely be countable on one hand until the next-gen consoles release.

And the number of games that will become unplayable with 12GB (in terms of some massive compromise like PS3 style textures having to be selected, or running the game at 1080p specifically due to VRAM limitations) will be precisely zero.

12GB will at best become a rare and minor inconvenience for this card for the duration of this generation. And IMO there is simply no alternative available in the price range. Yes, the 7900XT will never be VRAM limited at all, but it will be much more regularly RT limited, before we even mention reconstruction or frame generation techniques. So I don't see it as a viable alternative for my gaming preferences, which are maximum graphics with acceptable image quality and reasonable frame rates (meaning 60 is perfectly fine, and in a pinch I'll settle for a bit less).

You say it's not built for longevity, but it will last the entire current generation with ease, likely better on average than anything else available right now in its price range or below. And after that, the game's probably up for every currently available GPU bar perhaps the 4090.

Since when is 30% far faster? I chose the bus to highlight the stagnation that's occurring with the 4070. The whole card is bad; I could have chosen the lack of change in CUDA cores or other specifications.

I was talking about the 4070Ti, which is over 50% faster than the 3070 despite having "only 13%" more bandwidth. This amply illustrates the point that judging a GPU purely on its memory bandwidth is silly.
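
For reference, the back-of-the-envelope I'm working from (spec-sheet numbers as I understand them, so double-check if it matters; the ~50% performance figure is the claim above, not something I've measured):

```python
# Spec-sheet memory bandwidth vs. claimed gaming performance uplift.
# Bus widths and memory speeds are my understanding of the public specs.
rtx_3070_bw_gb_s = 256 / 8 * 14    # 256-bit bus at 14 Gbps GDDR6  -> 448 GB/s
rtx_4070ti_bw_gb_s = 192 / 8 * 21  # 192-bit bus at 21 Gbps GDDR6X -> 504 GB/s

bandwidth_gain = rtx_4070ti_bw_gb_s / rtx_3070_bw_gb_s - 1
print(f"Bandwidth gain: {bandwidth_gain:.1%}")   # 12.5%, i.e. the "only 13%"
# Yet the claimed gaming uplift is ~50%+, helped in large part by Ada's much
# larger L2 cache cutting down how often the GPU needs to hit VRAM at all.
```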

That's really not the point. Nvidia went 7 years, or 3 architectures (1070 -> 2070 -> 3070), and delivered no improvement in memory capacity while increasing the price significantly. 16GB is the base expectation at the prices they're charging for a "70" class GPU, which is really a 4060 in disguise. We know they're playing funny games, as they tried to pass off a 4070 as a 4080 but got caught red-handed. They then successfully passed off a 4070 as a 4070Ti and people lapped it up, claiming Nvidia had self-corrected. What a joke.

You've moved on to a different argument entirely here. I agree the current offerings from both vendors are poor value. And obviously it would have been better to have more VRAM and/or cheaper prices. But that doesn't mean 12GB is going to obsolete the card before the next generation of consoles launches. It quite obviously isn't.

Who says the redesigned architecture has anything to do with it? They went from Samsung's bad 8nm process to TSMC's "4N" process. If you just put the Ampere architecture on that process, you'd have gotten a huge boost in performance simply by doing nothing. All the things you're talking about are process related. Like I said, when I see evidence that Ada is actually significantly faster clock for clock than Ampere, then I'll gladly give credit where credit is due. So far, I haven't seen any evidence of that at all.

I'm not sure what you're trying to argue now. Who cares whether they achieved the performance boost through a better node, faster clock speeds, improved architecture, or magic fairy dust? What matters is the end result: it's faster. And it doesn't matter how wide its memory bus is, or how many CUDA cores it has. You're paying for performance (and efficiency), not bullet points on a spec sheet.
 
I strongly disagree. Nvidia does not exist without its customers. Their job is to design and manufacture products that are compelling to their customers. The side benefit of that is that I get a nice return on my investment. What you're talking about is putting the cart before the horse, and that's how companies crumble.

All for-profit companies, and even many so-called "non-profit" companies, go into it with the primary goal of making money.

Obviously, in order to make money you need to convince people that they want to buy your product or services.

I think the part that you are getting caught up on is that not all companies target all consumers; in fact, no company targets all consumers.

Mercedes targets a different consumer demographic than Ferrari or Ford, although there may be some overlap between their target demographics.

Just because I don't see any value in current generation NV or AMD offerings doesn't mean that others don't.

Plenty of people still find value in NV products and thus they continue to buy their products. Some might complain about the cost, some might not. But they are still buying the product. NV aren't going to suddenly decide that they don't like making money and start lowering the prices of their products as long as people still buy them.

Now we can all argue that NV are pricing their products out of the average PC gamer's budget which may end up killing the PC gaming market, but that's beside the point because there are still many people with a lot of disposable income willing to buy their products.

So, until people stop buying their products, NV can only assume that they are pricing their products correctly in order to make a profit.

Regards,
SB
 
And the number of games that will become unplayable with 12GB (in terms of some massive compromise like PS3 style textures having to be selected, or running the game at 1080p specifically due to VRAM limitations) will be precisely zero.

I hope you're right, and I think you might be, but there are two things about this console gen that make me wonder a little about 12GB's robustness through the whole generation.

Firstly, and as I think @dobwal said further up, resource management may be a bigger issue this gen due to DX12, developer time and talent, and trying to balance fast NVMe drive performance with SATA SSD performance. Projects are also getting more and more complicated, and for every awesome tech like ... everything in UE5 ... there are people somewhere not using it right, and people somewhere else not using it at all.

Secondly, there's maybe the issue of RT. The 4070 Ti is an amazingly fast card, and DLSS and FG allow it to really deliver very impressive RT at high resolutions and very good frame rates. CP 2077 with Overdrive is a real testament to what the 4070Ti can deliver. But along with that RT performance might come a memory cost down the line.

Consoles have, let's face it, bare-bones RT performance compared to the RTX series. The 4070Ti is probably 3 or 4 times as fast in RT-heavy games (ballpark, don't quote me!) compared to the consoles, and that's excluding DLSS and FG. Along with such an increase in performance may, down the line, come much larger acceleration structures stored in VRAM to extend the range and increase the detail of what you're RT-ing. And that could have a significant cost in terms of VRAM.

None of this will be a problem for games that scale well, wisely auto-select options, and give users all the settings and overrides they could want. But if your texture options are 'Ultra <-> High <-> Dogshit' and RT settings are 'Ultra <-> High <-> Low-res puddle reflections out to 6 feet', then more VRAM may have ended up helping a 12 GB card like the 4070Ti preserve its potency. Because it really is a banger of an RT card.
 
I strongly disagree. Nvidia does not exist without its customers. Their job is to design and manufacture products that are compelling to their customers. The side benefit of that is that I get a nice return on my investment. What you're talking about is putting the cart before the horse, and that's how companies crumble.

You might disagree but the rest of the world that adheres to capitalism does not :)
Especially if you listen to the people in charge of companies.
 
Damn... 30 fps is not possible for me using DLSS Performance, even with the mod.

My laptop 2060 is usually on par with a desktop 3050 in most other games. But with path tracing, it appears these second-gen RT cores really are optimized a lot better for this kind of workload. The 2080Ti gets outperformed by the 3070 too.
 