DavidGraham
Veteran
"No RT or DLSS"

It ran much better under DX11. And I don't even know why they switched to a DX12 renderer. DLSS doesn't require DX12; it works very well under DX11 and was featured in many DX11 titles.
RE engine is such a beast. Also, yea just like the other remakes... loading times are incredibly fast on PC. Even Steam Deck is lightning quick... game also looks and runs incredibly!
"And I assume that's without Direct Storage. Very impressive!"

Yea, no DirectStorage. All the other ones load extremely fast too.
"A demonstration designed to highlight the load time, streaming performance, frame rate and player experience difference between DirectStorage and standard asset loading will be presented."
It's going to be interesting once games start utilizing DirectStorage more and more. I'm looking forward to AMD's upcoming GDC talk/demo about DirectStorage.
This is (hopefully) exactly what I've been waiting for. Something to comprehensively demonstrate upfront load speeds, streaming improvements, and how it will improve player experience compared directly against standard loading. Hopefully their demonstration has examples from actual upcoming games and not just synthetic ones.
Yes, that'll be very interesting. I wonder if they'll demonstrate their Smart Access Storage as well.
"The hair rendering takes an additional 3.1ms for 10% of the screen area - that is 66% of the time a whole frame takes to get rendered without advanced hair rendering. And it is so unoptimized for Nvidia that the GPU gets stalled by it..."

This is an expensive feature you can turn on or off which costs multiple ms to render. It easily fits within a 60fps budget on a high-end GPU (runs fine on my 3080) -- it's not "unoptimized" just because it's expensive or doesn't scale the way you expect from other features in other games. (I know it may appear that way after what feels like decades of PS4 ports, but not every feature is fragment-shader bound or scales with screen size.)
"207 fps dropping to 129fps for hair on one character alone is quite drastic though; that's a 38% fps drop. In my tests without RT at much lower framerates, it was more like a 20% drop. So there could be a bottleneck with how this is calculated once we're getting into very high frame rates, when it's holding up the scene rendering; in that sense the feature could be considered 'unoptimized' in that it doesn't scale that well... perhaps (possibly a release-day driver could alleviate this somewhat too). I don't think it's completely out of pocket to see a performance drop you regularly get with something like RT reflections for slightly more detailed hair and not say 'wtf'."

The percentages here can paint a misleading picture; it's probably only 40% of the frame at super high-end framerates -- most games are not really designed to run at ~200+fps. Unless your computer is hyper fast (like, from a decade from now), there's just so much that goes into scheduling and parallelizing work to get an entire frame rendered in ~5ms every time that it's impractical. In my opinion, 144fps is the absolute maximum "reasonable" target framerate for a super high-end PC, and it's sitting right under that with everything turned on for you -- that's basically perfect performance.
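The frame-time arithmetic behind these numbers is easy to check: a roughly fixed per-frame cost produces a much larger percentage fps drop at high framerates than at low ones, which is exactly the 38% vs ~20% discrepancy being discussed. A quick sketch (only the 207/129 fps figures come from the posts above; the rest is plain arithmetic):

```python
def frame_time_ms(fps: float) -> float:
    """Convert frames per second to milliseconds per frame."""
    return 1000.0 / fps

def fps_with_fixed_cost(base_fps: float, cost_ms: float) -> float:
    """FPS after adding a fixed per-frame cost (assumes no overlap with other work)."""
    return 1000.0 / (frame_time_ms(base_fps) + cost_ms)

# The quoted numbers: 207 fps dropping to 129 fps with hair enabled.
base, with_hair = 207.0, 129.0
added_ms = frame_time_ms(with_hair) - frame_time_ms(base)
print(f"implied hair cost: {added_ms:.2f} ms")            # ~2.92 ms
print(f"fps drop: {(1 - with_hair / base) * 100:.0f}%")   # 38%

# The same ~3 ms cost at a 60 fps baseline is a much smaller relative hit.
print(f"60 fps -> {fps_with_fixed_cost(60.0, added_ms):.0f} fps")  # ~51 fps, ~15% drop
```

The ~2.9ms implied here also lines up with the 3.1ms figure quoted earlier in the thread, so the "drastic" percentage is mostly an artifact of the very high baseline framerate.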
"The hair rendering is decoupled from the resolution - so it doesn't scale with pixel count. Here is 1080p:
Rendering w/o hair: 3.1ms
Hair rendering: 3.6ms"

Of course it doesn't! The work is almost surely in doing the simulation and generating the geometry.
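That decoupling is easy to model: if the hair cost is roughly fixed per frame (simulation plus geometry generation) while the rest of the frame scales with pixel count, the hair's share of the frame actually grows as resolution drops. A rough sketch using the 3.1ms/3.6ms figures above (the linear pixel-scaling assumption is a simplification, not a measurement):

```python
# Rough model: rest-of-frame cost scales with pixel count, hair cost is fixed.
# 3.1 ms (frame w/o hair) and 3.6 ms (hair) are the quoted 1080p figures.
BASE_1080P_MS = 3.1   # resolution-dependent work at 1920x1080
HAIR_MS = 3.6         # fixed cost: simulation + geometry, not per-pixel

def frame_ms(width: int, height: int, hair: bool) -> float:
    """Estimated frame time under the simplifying linear pixel-scaling assumption."""
    scale = (width * height) / (1920 * 1080)
    return BASE_1080P_MS * scale + (HAIR_MS if hair else 0.0)

for w, h in [(1920, 1080), (2560, 1440), (3840, 2160)]:
    total = frame_ms(w, h, hair=True)
    share = HAIR_MS / total * 100
    print(f"{w}x{h}: {total:.1f} ms/frame, hair = {share:.0f}% of frame")
```

Under this toy model the fixed hair cost is over half the frame at 1080p but only about a fifth at 4K, which is why judging the feature by its percentage share at low resolution and high fps is misleading.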
PS5 uses a terrible CBR; even on XSS the grass is not as pixelated as on PS5. I don't know what the hell is going on here. But it would explain why it runs better and at a slightly higher resolution than XSX - a cheaper reconstruction method.
"Of course it doesn't! The work is almost surely in doing the simulation and generating the geometry."

A 4090 can render a 1080p path-traced frame in Portal RTX within 10ms. And yet it needs 3.6ms to render and simulate hair... Can't believe that there is someone who defends this unoptimized mess...
Also, wait, your "after" number is 156 fps? It's OK to play a game at 1080p if you want to ensure a locked 144Hz.
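The headroom math behind that suggestion: a locked 144Hz needs every frame delivered in under 1000/144 ≈ 6.94ms, and 156 fps only averages ≈ 6.41ms per frame, leaving about half a millisecond of slack before any frame-time spike breaks the lock. A quick check:

```python
# Frame-time budget for a locked refresh rate vs. the quoted average fps.
TARGET_HZ = 144
QUOTED_FPS = 156                       # the "after" number from the post above

budget_ms = 1000.0 / TARGET_HZ         # ~6.94 ms per frame to hold 144 Hz
avg_ms = 1000.0 / QUOTED_FPS           # ~6.41 ms average at 156 fps
print(f"budget: {budget_ms:.2f} ms, average: {avg_ms:.2f} ms, "
      f"headroom: {budget_ms - avg_ms:.2f} ms")  # ~0.53 ms of slack
```

With that little margin, dropping to 1080p to buy extra headroom is a reasonable call.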
What about the one in FIFA? Performs pretty well, no?