What do you prefer for games: Framerates and Resolutions? [2020]

What would you prioritize?



ultragpu

I think it's safe to say the vast majority, including DF, would prefer 1440p/30 with balls-to-the-wall visuals over 4K/60 with raytracing in the console space after reading all the feedback. And this aligns with my own preference perfectly too; hate to say I told ya so, but it's true. You just won't get the same visual impact if you spend all your resources on raytracing, 60fps and native 4K. With how advanced frame reconstruction techniques are and how indistinguishable they are from native res, it's simply a no-brainer to go this route. Also, a full real-time GI solution like Lumen is a perfect alternative to RT GI; people should stop dissing rasterization and embrace its full potential a bit more. Maybe the only meaningful use of RT is for reflections?
 
I would be very happy with 1440p/30 single-player/co-op story-driven console content which pushes the hardware to the max. I would be equally happy with some DLSS 2.0-like solution taking 1080p content and upscaling it to 4K. Anyway, if I play some multiplayer game requiring a high framerate, I play that on PC; pads be damned as controllers when accuracy/speed is needed. The only exceptions for me to this rule are Gran Turismo and Tekken. I don't take stills and zoom in to find faults; as long as things look great in motion most of the time, I'm OK.
 
I would take 1440p60.

I think if you're capable of 1440p/30, you might be capable of 4K/30 with some upscaling help.
I would always take 60 fps over resolution, especially for games that really need it.

The blow-away portion of that demo was its ability to cull, scale, draw and texture so many triangles. I think GPUs with more compute performance will do better because of it. That being said, games have limits on their install sizes, so I don't expect that level of fidelity in shipping games. So somewhere in there, 60fps is achievable, and there should be extra horsepower left for RT.

It's entirely possible that Lumen could run faster with RT hardware than without. They just showcased what it would be like to run without RT hardware; they didn't show the performance with it on.

The advantage there is that Lumen runs on this gen. If Lumen only ran on RT hardware, you'd be locking a lot of people out. All the core features of UE5 run on this gen because they're all compute-shader based. But if you want the performance, you need to bring the muscle.
 
Wait until we see what performance is like on this gen to see if 'running on' equates to 'usable in games'.
 
I suspect, largely, it'll look as good on PS4 as their current top exclusives do. Which is massive, I think, considering how much time and budget goes into those.

And for Xbox, likewise: comparable to their best exclusives, or RDR2-level quality.

This is a fairly big lift.

i.e. if a AA game can get away with making something that looks like HZD, yeah, that's a huge win.
 
I would always take 60 fps over resolution, especially for games that really need it. [...] It's entirely possible that Lumen could run faster with RT hardware than without.
Yes, certain games would benefit from different framerates respectively. But in this case it is the 30fps demo that made everyone go gaga. The takeaway I got from Nanite and the Eurogamer interview was that you need both a high-teraflops GPU and really fast I/O hardware working in tandem to truly realize this kind of visual. So if you're pairing a 2080 with a standard HDD, or an SSD multiple times slower than the one in the PS5, you would compromise on the amount of polygons and detail to be streamed, no? Then again, the engine is highly scalable, so it'd be interesting to see how everyone fares.
And I really doubt Lumen can run faster with RT on; just about every game I've seen with RT on drains the hell out of the fps.
 
I think you're misjudging what does what here.

The tech of the engine is that it can take ZBrush-level models and just dump them into the game. The engine handles mostly everything else for you. The rendering is happening at the smallest possible level, which is 1 triangle per pixel.
That means that of the ~20 billion source triangles in that scene, everything is culled down to about 20 million polygons per frame, and then only about 3.7 million of those triangles are actually drawn, since 1440p is ~3.7 million pixels and 1 triangle per pixel works out to ~3.7M triangles.

So, simplistically, _any_ device that is running a higher resolution than 1440p using UE5 dynamic resolution + Nanite is rendering more triangles and showing more texture detail than what you saw on PS5. So if you are doing, say, 4K, which is ~8.3 million pixels, you are seeing roughly 2x the amount of detail on screen if we keep to 1 triangle per pixel.
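As a quick back-of-the-envelope check of that arithmetic (the exact resolutions and the idealized 1-triangle-per-pixel assumption are just illustrative):

Code:
# Back-of-the-envelope: pixel count = drawn-triangle budget at 1 triangle/pixel.
# Purely illustrative numbers.
resolutions = {
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.2f}M pixels -> ~{pixels / 1e6:.2f}M triangles drawn")

# 1440p: ~3.69M, 4K: ~8.29M -> roughly 2.25x the drawn detail at 4K,
# even though both start from the same ~20M post-cull triangles.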

But even 8.3M triangles is still a significant cull from the 20M triangles that could be on screen. Which means, ultimately, that a 2080 Ti, or a PC with an SSD fast enough to stream in even slightly lower quality assets or meshes, just by a smidge, will still be outputting vastly more detail.

So this is where the I/O argument starts to fall flat. PS5 with dynamic resolution only has enough muscle to output 1440p/30. The amount of model data it can stream in per second is vastly higher, but nearly all of that detail is culled away because it can't show it. I think someone could make the argument that PS5 could have reduced the mesh quality and achieved the exact same visual quality. But Epic wanted to sell how powerful their engine is at absorbing _any_ level of detail (e.g. for TV productions like The Mandalorian). It also showcases how powerful the PS5 SSD is, something professional-grade SSDs can match and exceed, which means there can be budget savings on hardware for that level of professional realtime rendering.

On this exact line of thinking, however, it now showcases how important 4K and 60fps are (spatial and temporal resolution), because with Nanite 4K will actually show roughly 2x more detail than 1440p, as opposed to just 2x better anti-aliasing at 4K.
 
You're assuming the transfer rate isn't limiting. Perhaps PS5 has enough GPU power to process 4 million triangles, but is also the only platform with a drive fast enough to supply 4 million triangles per frame. Moving to another machine, you may have enough GPU power to drive 8 million triangles but only enough storage performance to supply the renderer with 1 million triangles, and have to draw them larger than one per pixel. Moving down to HDDs, regardless of how much GPU power you have, the drive might only be able to serve up 50,000 triangles, what with all the random drive seeks for the different portions of the datasets you are trying to load.

Unless I've missed something (and I haven't gone looking, so may well have!), the bottlenecks haven't been talked about and we don't know the relationship between storage performance profile and rendering performance. It may yet be that God-tier storage is the difference here.
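To make that concrete, here's a toy calculation; every number below (bytes per triangle, drive speeds, the assumption that geometry is streamed cold each frame) is a placeholder, since real engines compress, cache and reuse data aggressively:

Code:
# Toy model: how many *new* triangles per frame a drive could feed the renderer
# if geometry had to be streamed cold every frame. All figures are made up.
BYTES_PER_TRIANGLE = 32      # assumed: quantized positions + attributes
FPS = 30

drives_mb_per_s = {
    "HDD (seek-bound)": 30,
    "SATA SSD": 550,
    "PCIe 4.0 NVMe": 5500,
    "PS5-class I/O (compressed)": 9000,
}

for name, mbps in drives_mb_per_s.items():
    bytes_per_frame = mbps * 1_000_000 / FPS
    tris_per_frame = bytes_per_frame / BYTES_PER_TRIANGLE
    print(f"{name:28s} ~{tris_per_frame / 1e6:.2f}M new triangles/frame at {FPS} fps")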
 
So this is where the I/O argument starts to fall flat. PS5 with dynamic resolution only has enough muscle to output 1440p/30. The amount of model data it can stream in per second is vastly higher, but nearly all of that detail is culled away because it can't show it.
So the way I understand it, Nanite is built for 8K-16K-capable PCs and PS5's SSD is best suited for them? Is the SSD simply over-designed for the PS5? I thought Tim Sweeney said it's a very balanced console.
 
There are professional SSDs that go up to like 15GB/s for that type of work.
Yes, UE5 is built for rendered productions like The Mandalorian and for other production studios that need to do render work. Doing it in UE5 gives them exactly what they need without having to go to a render farm, and once they're satisfied they can still send it to a render farm for the final pass if they want to.

It is also built to scale down as low as Android.

PS5 completely removes the bottleneck on I/O. If PS5 had more muscle, you'd see more. There may be even higher demands now that UE5 exists: 16K resolution, 16K textures, etc., all getting us ever closer to real-time CGI.
 
Hold on, wouldn't Unreal's upscaling reconstructing 1440p to 4K effectively give you 8.3 million polys on PS5?
I don't believe so. You rendered 1 triangle per pixel at 1440p and then up-res to 4K. So you've still only got 3.7M triangles, but on 8.3 million pixels. You want 8.3 million triangles on 8.3 million pixels if you want to take advantage of 4K.
 
I thought PS5 can render 8.3 million tris, as well as stream them fast enough, but can only render the final frame at 1440p. So don't you effectively have 8.3 million tris at your disposal and just need to reconstruct the final frame to 4K?
Or is it that in order to render 8.3 million tris you would need to render 8.3 million pixels? I'm slow, sorry.
 
The information is streamed in from the SSD.
The GPU looks at the available information and, based on load, selects a resolution.
Based on that resolution, it removes all triangles and vertices that are sub-pixel.
The remaining points, if mergeable, are merged with each other.
Then the texture is wrapped over the remaining vertices.
You now have 1 triangle per pixel at 1440p.
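A minimal, self-contained sketch of those steps as I read them (the resolutions, thresholds and function names are my own placeholders, not actual Nanite code; the merge and texturing steps are omitted):

Code:
import random

# Toy model of the flow above: pick a resolution from GPU load, cull anything
# that projects smaller than a pixel, then keep at most ~1 triangle per pixel.
# Everything here is illustrative, not real engine code.

def select_resolution(gpu_load):
    # Dynamic resolution: heavier load -> lower internal resolution.
    return (2560, 1440) if gpu_load > 0.75 else (3840, 2160)

def cull_to_pixel_budget(projected_areas_px, width, height):
    pixel_budget = width * height
    visible = [a for a in projected_areas_px if a >= 1.0]   # drop sub-pixel triangles
    return visible[:pixel_budget]                           # cap at ~1 triangle per pixel

# Pretend geometry: 2 million candidate triangles with random on-screen sizes.
candidates = [random.uniform(0.01, 4.0) for _ in range(2_000_000)]

w, h = select_resolution(gpu_load=0.9)        # pretend we're GPU-bound -> 1440p
drawn = cull_to_pixel_budget(candidates, w, h)
print(f"{len(candidates):,} candidates -> {len(drawn):,} drawn at {w}x{h}")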

When you up-res, you are just increasing the pixel count, not the triangle count. To up the triangle count, you need to process more triangles.

Fixed-function hardware cannot render that many triangles at 1 triangle per pixel. It kills the fixed-function pipeline; i.e. it can't even process a 1080p image at reasonable framerates attempting that.
1 triangle per pixel is managed entirely (with exception cases) through compute shaders, or through mesh shaders if you have the newer pipeline.

So the number of triangles that can be generated and the number of triangles that can be culled are all dependent (mesh shader or not) on the number of compute units and the bandwidth available. More parallelism is going to be more advantageous than clockspeed here, because you will definitely have all your CUs filled with this much work to process; you'll have more of everything going. At lower precision (if that's available), you're just going to output that much more.

Radeon VII and Vega 64 owners can rejoice.
 
There is no I/O bottleneck. The assets are 8K and 16K respectively, but it loads them fine. The bottleneck is on compute; that's why the final output is 1440p.
In talking about the tech, Sweeney said drive performance was important, PS5's drive was godlike, and a PC with fast SSD could still 'be amazing'.

In my mind, the engine is selectively loading the portions of the models needed from storage, requiring lightning-fast seeks and fast transfer. That's why asset complexity doesn't matter, but also why I/O performance does. That's the whole basis of virtualised assets: caching small portions instead of the whole thing.
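A minimal sketch of that idea, i.e. paging in only the pieces of an asset the current view needs and keeping them in a small cache; the page size, cache budget and names below are assumptions of mine, not how UE5 actually lays out its data:

Code:
from collections import OrderedDict

# Minimal sketch of virtualised asset streaming: an asset is split into fixed
# size "pages" on disk, and each frame we fetch only the pages the current view
# needs, keeping a small LRU cache in memory. All sizes/names are assumptions.

PAGE_SIZE_BYTES = 128 * 1024      # assumed page granularity
CACHE_PAGES = 4096                # assumed in-memory budget (~512 MB)

class PageCache:
    def __init__(self, read_page):
        self.read_page = read_page               # callable: (asset_id, page_no) -> bytes
        self.pages = OrderedDict()               # LRU: (asset_id, page_no) -> bytes

    def get(self, asset_id, page_no):
        key = (asset_id, page_no)
        if key in self.pages:
            self.pages.move_to_end(key)          # mark as recently used
            return self.pages[key]
        data = self.read_page(asset_id, page_no) # cache miss: hit the SSD
        self.pages[key] = data
        if len(self.pages) > CACHE_PAGES:
            self.pages.popitem(last=False)       # evict least recently used page
        return data

def fake_read(asset_id, page_no):
    return bytes(PAGE_SIZE_BYTES)                # stand-in for an SSD read

# The renderer asks only for pages covering visible clusters, so a huge source
# asset never has to be resident all at once.
cache = PageCache(fake_read)
for asset_id, page in [("statue_A", p) for p in range(10)]:
    cache.get(asset_id, page)
print(f"resident pages: {len(cache.pages)}")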
 
Drive speed is definitely important. They showcased the insane level of models they could accommodate on PS5. But accommodating that didn't change the fact that the renderer could not get anywhere close to the native resolution of those assets. So it's important, but only so important. Nowhere close to the notion that a PS5-level SSD is the only way you can get 1440p/30 at this fidelity. Far from it: high-end PCs and XSX will render this at higher resolutions (and thus better image quality) than PS5. I believe this demo is significantly more compute-bound than it is I/O-bound.
 
So the way I understand it, Nanite is built for 8K-16K-capable PCs and PS5's SSD is best suited for them?
On PC you can cache in main RAM and stream from there to the GPU at five times the speed of the PS5's SSD, and above all on PC you can scale based on your setup.
 