The Last of Us, Part 1 Remaster [PS5, PC]

The PS4 uses GNM, a low-level API, so it was easier to go with DX12.
Didn't Nixxes experience the opposite? Their Spider-Man port ran easily on DX11 at first, but they needed DX12 for ray tracing, so they had to write for DX12 from scratch. They even described their first DX12 rendering experience as being a black screen!
 
Didn't Nixxes experience the opposite? Their Spider-Man port ran easily on DX11 at first, but they needed DX12 for ray tracing, so they had to write for DX12 from scratch. They even described their first DX12 rendering experience as being a black screen!

I remember all the other Sony ports (Detroit, GoW, etc.) being DX11 too?
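For anyone wondering how a first DX12 frame ends up black: unlike DX11, where the driver tracks resource state behind the scenes, DX12 makes the application transition the back buffer explicitly every frame. A generic, minimal sketch of that per-frame bookkeeping (not Nixxes' code; it assumes the command list, back buffer and RTV handle were created elsewhere):

```cpp
// Generic sketch of the explicit per-frame state management DX12 requires.
// DX11's driver did this bookkeeping automatically; getting it wrong (or
// never executing the list you recorded) is a classic way to get a black frame.
#include <d3d12.h>

void RecordFrame(ID3D12GraphicsCommandList* cmdList,
                 ID3D12Resource* backBuffer,
                 D3D12_CPU_DESCRIPTOR_HANDLE rtvHandle)
{
    // 1. Transition the back buffer from PRESENT to RENDER_TARGET.
    D3D12_RESOURCE_BARRIER toRT = {};
    toRT.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    toRT.Transition.pResource   = backBuffer;
    toRT.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
    toRT.Transition.StateAfter  = D3D12_RESOURCE_STATE_RENDER_TARGET;
    toRT.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    cmdList->ResourceBarrier(1, &toRT);

    // 2. Bind and clear the render target (draw calls would follow here).
    const float clearColor[4] = { 0.1f, 0.1f, 0.1f, 1.0f };
    cmdList->OMSetRenderTargets(1, &rtvHandle, FALSE, nullptr);
    cmdList->ClearRenderTargetView(rtvHandle, clearColor, 0, nullptr);

    // 3. Transition back to PRESENT before the swap chain is allowed to flip.
    D3D12_RESOURCE_BARRIER toPresent = toRT;
    toPresent.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    toPresent.Transition.StateAfter  = D3D12_RESOURCE_STATE_PRESENT;
    cmdList->ResourceBarrier(1, &toPresent);

    cmdList->Close();
    // The list then has to be executed on a command queue and fenced before
    // IDXGISwapChain::Present(); none of that was an application concern in DX11.
}
```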
 
You mean relative to the PS5? Not sure why that matters. A 3070 is hitting 60fps at 1440p medium, which seems reasonable.

The performance on PS5 tells us how well optimised the PC version is. 60fps at 1440p on a 3070 might be acceptable enough, but based on the PS5, a 3060 should be achieving that. Or at the very least a 3060Ti.
 
Valhalla had a PS5 performing like a 2080, not slightly below a 3080. I don’t recall ever seeing PS5 punching up in performance comparisons for recent COD games.
 
I don't remember people kicking up such a fuss when the consoles were punching well above their GPU weight in AC: Valhalla or COD.
It was later discovered that some effects in COD were being rendered at around a quarter resolution compared to PC’s full resolution.

Furthermore, COD saw the consoles being close matches for a 2080, not a 3080, which is 60% faster than a 2080 and 80-90% faster than the consoles in most games.
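(Working those numbers through: a 3080 at ~1.6x a 2080 and ~1.8-1.9x the consoles puts the consoles at roughly 84-89% of a 2080, i.e. close to a 2080 rather than anywhere near 3080 territory.)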
 
Valhalla had a PS5 performing like a 2080, not slightly below a 3080. I don’t recall ever seeing PS5 punching up in performance comparisons for recent COD games.
PS5 isn't performing like a 3080.

People need to chill out on the comparisons until we get a DF video comparing settings.

@Dictator I assume one is in the works?
 

The 2080 Ti fares pretty well vs Ampere. The commentary in these articles is kinda useless though, because the author's conclusions are based only on ultra settings.
 

[Benchmark chart image]
 

The 2080 Ti fares pretty well vs Ampere. The commentary in these articles is kinda useless though, because the author's conclusions are based only on ultra settings.

It's because of the VRAM differences. Look at the 12GB 3060 outperforming the 8GB 3070 there. Even the 4070Ti's 12GB is limiting it at 4K.
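For anyone who wants to see the budget/usage numbers on their own card, the real DXGI call for this is IDXGIAdapter3::QueryVideoMemoryInfo; the little program below is just an illustrative sketch (it reports the calling process, so it shows your own headroom rather than the game's internal allocations):

```cpp
// Illustrative sketch: read the dedicated-VRAM budget and current usage that
// Windows reports for this process via IDXGIAdapter3::QueryVideoMemoryInfo.
// Link against dxgi.lib.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (factory->EnumAdapters1(0, &adapter) == DXGI_ERROR_NOT_FOUND) return 1;

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    std::printf("VRAM budget: %.1f GB, current usage: %.1f GB\n",
                info.Budget / (1024.0 * 1024.0 * 1024.0),
                info.CurrentUsage / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```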
 
Extremely high praise from CB:

I don't like these kinds of games, but the reviews are really giving it heavy praise for graphics.

Check out our screenshots, these are excellent graphics, and they are paired with VERY well designed environments. The designers didn't just rely on the rendering tech, but they made smart use of their assets to create a believable, realistic experience. One highlight is without doubt the characters and their facial expressions which are among the best I've ever seen. What's also impressive is the quality of the textures, which are detailed, sharp and crisp, even if you walk right up to them, but this increases VRAM usage of course.


Putting that aside, The Last of Us Part I is a gorgeous, at times almost photorealistic, must-see game. Although ray tracing is missing, the high-quality pre-baked global illumination (GI) together with the believable material representation (physically-based shading) creates a very nice lighting atmosphere. The hand-built game world is bursting with details, such as overgrown street canyons, abandoned restaurants and many detailed assets. The textures are beyond reproach and mostly in very high resolution.
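A quick gloss on the terms in that quote, since they explain where the look comes from: "pre-baked GI with physically-based shading" means direct lighting is computed live against PBR materials, while bounce lighting is looked up from data precomputed offline. A deliberately simplified, generic sketch of that split (my own illustration, not anything from the game's renderer):

```cpp
// Simplified, generic sketch of the "baked GI + physically based shading"
// combination the review describes: direct light is evaluated per light with
// a diffuse BRDF, indirect light comes from data precomputed offline
// (lightmaps / irradiance probes). Not the game's actual shader code.
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3 mul(Vec3 a, Vec3 b)    { return { a.x * b.x, a.y * b.y, a.z * b.z }; }
static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

constexpr float kPi = 3.14159265f;

// albedo: surface colour; bakedIrradiance: indirect light sampled from a
// lightmap or probe that was computed offline ("pre-baked GI").
Vec3 ShadeDiffuse(Vec3 albedo, Vec3 normal, Vec3 lightDir,
                  Vec3 lightRadiance, Vec3 bakedIrradiance)
{
    Vec3 diffuseBrdf = scale(albedo, 1.0f / kPi);          // Lambertian term

    // Direct: analytic light, reacts to the light's position in real time.
    float nDotL  = std::max(0.0f, dot(normal, lightDir));
    Vec3  direct = mul(diffuseBrdf, scale(lightRadiance, nDotL));

    // Indirect: baked irradiance; cheap at runtime but static by definition,
    // which is how the lighting can look this good without ray tracing.
    Vec3 indirect = mul(diffuseBrdf, bakedIrradiance);

    return add(direct, indirect);   // specular / metalness terms omitted
}
```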

 
I've just got to the University.

The more I play it, the more I understand why it's using so much memory; there are just so many unique assets used. I went into a large house and really struggled to find instances where an asset was used more than once.
 
Even the 4070Ti's 12GB is limiting it at 4K.

The 4070ti is right behind the 3090 in the techpowerup review at 4k. Also remember that while the 4070ti is on par with a 3090/3090ti in terms of power, it's massively down on memory bandwidth.

The minimum frame rate at 4k on the techpowerup article has the 4070ti right between the 6800XT and 6900XT, both of which are 16GB cards.

And techpowerup even highlighted in their review of the 4070ti that its performance at 4k dropped off more than they expected it to.
 
The 4070ti is right behind the 3090 in the techpowerup review at 4k. Also remember that while the 4070ti is on par with a 3090/3090ti in terms of power, it's massively down on memory bandwidth.

The minimum frame rate at 4k on the techpowerup article has the 4070ti right between the 6800XT and 6900XT, both of which are 16GB cards.

And techpowerup even highlighted in their review of the 4070ti that its performance at 4k dropped off more than they expected it to.

Yeah it drops off a cliff at 4K because the high resolution is taking up additional VRAM to the point the 4070Ti's memory capacity is exceeded. We've seen in other reviews that the game can demand over 14GB at 4K Ultra.

At sub-4K the 4070Ti outperforms the 3090 and nips at the heels of the 3090Ti, as we would expect, but at 4K it falls well behind the 3090. That's not its normal behavior at 4K, despite its lack of bandwidth on paper. It's making up that bandwidth deficit through bigger caches and generally more efficient bandwidth usage.
 
Yeah it drops off a cliff at 4K because the high resolution is taking up additional VRAM to the point the 4070Ti's memory capacity is exceeded. We've seen in other reviews that the game can demand over 14GB at 4K Ultra.

Most games fall off a cliff at 4k because the 4070ti isn't a 4k card. It has nothing to do with the amount of VRAM, as games that are well within the 12GB also scale poorly to 4k.
  • 4070ti: 504.2GB/s
  • 3090: 936.2GB/s
  • 3090ti: 1TB/s
Even the 3070ti has more bandwidth at 608.3GB/s

Now granted, the 4070ti has some slight architectural improvements over the 3000 series regarding bandwidth efficiency and some additional cache, but those improvements will only get you so far.

The 3090ti's actual bandwidth figure is 1008GB/s, literally double the 4070ti's. Cache isn't going to come close to making up for such a massive drop in bandwidth, and the 3090 is the same: a huge bandwidth advantage.
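Those bandwidth figures are easy to sanity-check from the published memory speed and bus width of each card (bandwidth in GB/s = per-pin rate in Gbps x bus width in bits / 8); a quick sketch using the commonly listed specs, so treat the inputs as approximate:

```cpp
// Quick sanity check of the bandwidth numbers above:
// bandwidth (GB/s) = effective memory rate (Gbps per pin) * bus width (bits) / 8.
// Rate and bus figures are the commonly published specs for each card.
#include <cstdio>

int main()
{
    struct Card { const char* name; double gbps; int busBits; };
    const Card cards[] = {
        { "RTX 4070 Ti", 21.0, 192 },   // GDDR6X, 192-bit bus
        { "RTX 3070 Ti", 19.0, 256 },   // GDDR6X, 256-bit bus
        { "RTX 3090",    19.5, 384 },   // GDDR6X, 384-bit bus
        { "RTX 3090 Ti", 21.0, 384 },   // GDDR6X, 384-bit bus
    };

    for (const Card& c : cards) {
        double gbPerSec = c.gbps * c.busBits / 8.0;
        std::printf("%-12s %7.1f GB/s\n", c.name, gbPerSec);
    }
    // Prints roughly 504, 608, 936 and 1008 GB/s, matching the list above.
    return 0;
}
```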
 