A Generational Leap in Graphics [2020] *Spawn*

Did high-end PC games ever run at 60 fps on the hardware available at the time?..

PCs seem to run console ports just fine, of course. But Doom and Cyberpunk got me thinking.
 
I hope they will stop with baked lighting. I watched Digital Foundry's John Linneman go through his top 10 games of 2020, and in TLOU2 it is becoming very visible that dynamic elements during gameplay don't use the same lighting as the static elements.
I hope their next game still targets 30 fps and upscaled/dynamic 4K.
 
Did high-end PC games ever run at 60 fps on the hardware available at the time?..

PCs seem to run console ports just fine, of course. But Doom and Cyberpunk got me thinking.

No, to me that kinda defines a "high-end PC game" -- they just don't come around very often, because once a console generation is a few years old, PCs have outpaced consoles so far that it's hard to make anything scalable enough to stress a PC but also run on consoles. (Although this gen might change that; RT is super scalable by its nature.)

"Crysis"-like games need the right timing and a focus on PCs as the target platform, but when they do appear they're a huge generational leap, and high-end graphics cards run them at 30 fps at the target resolution.
 
Well, we can compare this:
[Image: Witcher 3 4K Ultra benchmark chart (witcher-bench-4k-u.jpg)]


To this:
[Image: Cyberpunk 2077 benchmark chart (upload_2020-12-23_22-12-14.png)]
This time they have pushed the barrier even harder, and I expect the DLCs and mods to push this even further.

And this is EXACTLY what PC gaming is about.
It is not about +200 FPS on a console port...it's about cranking the image quality up to "11". ;)

Like I said...CDPR did it RIGHT this time...they did not let the lowest common denominator (the consoles) hold the PC back.
 
If that's what PC gaming is about, then there has not been a lot of PC gaming these past few years :p
Aside from Flight Simulator, Star Citizen, and maybe this, everything has basically been games designed to run on the Xbox One (2013).
I would like there to be more games that push the technology. Now that console development seems to be focusing on 60 fps, there might not be many graphically intensive games on those platforms.

I hope this thread will be updated with 2021 in the title of course :)
 
Dunno why there's so much PC talk in the console forum. Isn't there a section for that on B3D? It's boring to even comment here now due to the nonsensical nature of the posts.

We've had guys suggesting that their disposable income somehow qualifies them more than someone who chooses to buy a console. It's nauseatingly stereotypical and blindingly naive.

Can @PSman1700 and @HLJ stick to the PC section? That would be much appreciated. Thank you.
 
If that's what PC gaming is about, then there has not been a lot of PC gaming these past few years :p
Aside from Flight Simulator, Star Citizen, and maybe this, everything has basically been games designed to run on the Xbox One (2013).
I would like there to be more games that push the technology. Now that console development seems to be focusing on 60 fps, there might not be many graphically intensive games on those platforms.

I hope this thread will be updated with 2021 in the title of course :)

Like I said...CDPR did it RIGHT this time...also why people are calling this the new "Crysis".
And your "optimism" that ~11 TF of GPU performance can deliver the arbitrary "60 FPS" is something I don't quite understand.

60 FPS on consoles usually means "dynamic resolution" at lower settings than a PC...because the TFLOPS just are not there.

Let's go back and look at the performance, generation to generation, around launch time:

PlayStation 2: 6.2 GFLOPS
Xbox: 10 GFLOPS
GeForce 2 Ti: 1.8 GFLOPS

PlayStation 3: 230.4 GFLOPS
Xbox 360: 240 GFLOPS
GeForce 7800 GTX: 165 GFLOPS

PlayStation 4: 1.85 TFLOPS
Xbox One: 1.31 TFLOPS
GeForce GTX 680: 3.1 TFLOPS

PlayStation 4 Pro: 4.2 TFLOPS
Xbox One X: 6 TFLOPS
GeForce GTX 1080 Ti: 8.9 TFLOPS

PlayStation 5: 10.2 TFLOPS (~22 TFLOPS total with raytracing)
Xbox Series X: 12 TFLOPS (~25 TFLOPS total with raytracing)
GeForce RTX 3080: 29.7 TFLOPS (~87 TFLOPS total with raytracing)(**excluding Tensor cores performance here**)

Never before have new consoles launched with such a performance deficit compared to PC GPUs.
In 2 years the deficit will be even bigger.

Your hope of matching/exceeding PC GPU performance isn't based on any facts that I can see?
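
To make the trend concrete, here's a quick back-of-the-envelope sketch (my own arithmetic, just using the paper-spec numbers listed above) of the PC-to-console ratio at each launch:

```python
# PC flagship vs. console GPU throughput at each console launch,
# using the paper-spec numbers quoted above (GFLOPS throughout).
launches = {
    "PS2 era (GeForce 2 Ti)":    (6.2,     1.8),
    "PS3 era (7800 GTX)":        (230.4,   165.0),
    "PS4 era (GTX 680)":         (1850.0,  3100.0),
    "PS4 Pro era (GTX 1080 Ti)": (4200.0,  8900.0),
    "PS5 era (RTX 3080)":        (10200.0, 29700.0),
}

for era, (console_gf, pc_gf) in launches.items():
    print(f"{era:28s} PC/console = {pc_gf / console_gf:.2f}x")
# 0.29x -> 0.72x -> 1.68x -> 2.12x -> 2.91x: by these figures the
# PC lead at launch has grown every single generation.
```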
 
Like I said...CDPR did it RIGHT this time...also why people are calling this the new "Crysis".
[...]
Your hope of matching/exceeding PC GPU performance isn't based on any facts that I can see?
192 GFLOPS for the PS3 GPU. Let's not rewrite history.
 
Dunno why there's so much PC talk in the console forum. Isn't there a section for that on B3D? It's boring to even comment here now due to the nonsensical nature of the posts.

The biggest game of the year unfortunately doesn't have anything close to a real next-gen console version -- consider this whole argument a head start on discussing the Cyberpunk next-gen patch when it lands... someday.
 
Sony throwing the game out of the store could be meant to motivate CDPR to create a good PS5 version? But maybe it will do the opposite.

@HLJ
With the consoles targeting 60 fps, the graphics will obviously be 'worse' than at 30 fps, where they would have literally double the rendering budget per frame. They will still do great things, but I understand that TLOU2 would not have looked as good as it did if they had cut the render budget in half.
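
For reference, the frame-budget arithmetic behind that claim (a trivial sketch, nothing game-specific):

```python
# Frame-time budget at each target frame rate: doubling the frame
# rate halves the time the GPU has to render each frame.
for fps in (30, 60):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")
# 30 fps -> 33.3 ms per frame
# 60 fps -> 16.7 ms per frame
```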

Star Wars Battlefront, however, is for me one of the graphical greats of the past generation, but they did it with clever engineering and visual trickery. Maybe the same goes for Call of Duty: Infinite Warfare.
But again, those games could have benefited from 30 fps; we will never know.

Edit: slightly off topic, but lol, is 30 TFLOPS for the 3080 for real? How come the 6800 XT is beating it in the vast majority of games with only 20 TFLOPS?
I believe the 30 TFLOPS is just a marketing number, not actual performance; same with the Xbox's 12 TFLOPS. It's not a real-world TFLOP, only a paper-spec PR TFLOP, which is weaker than an IRL TFLOP (if that makes sense).
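
For what it's worth, the headline TFLOPS figure is computed the same way for every vendor; what differs is how much of it a real game can extract. A quick sketch using the public spec-sheet ALU counts and boost clocks:

```python
# Paper TFLOPS = shader ALUs x boost clock (GHz) x 2 ops per cycle
# (a fused multiply-add counts as two floating-point operations).
def paper_tflops(alus: int, boost_ghz: float) -> float:
    return alus * boost_ghz * 2 / 1000.0

print(f"RTX 3080:   {paper_tflops(8704, 1.710):.1f} TF")  # ~29.8
print(f"RX 6800 XT: {paper_tflops(4608, 2.250):.1f} TF")  # ~20.7
print(f"XSX:        {paper_tflops(3328, 1.825):.1f} TF")  # ~12.1
print(f"PS5:        {paper_tflops(2304, 2.230):.1f} TF")  # ~10.3
```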
 
Like I said...CDPR did it RIGHT this time...also why people are calling this the new "Crysis".
[...]
Your hope of matching/exceeding PC GPU performance isn't based on any facts that I can see?
The TFLOPS of the RTX 3000 series are less "effective"; the almost-30 TF 3080 is not even 2x faster than the 10 TF PS5 GPU without RT. (BTW, performance/TF doesn't mean shit; all that matters for an architecture's effectiveness is performance per watt.)
 
This gen of consoles is mid-range PC hardware. Pound sand to your heart's desire if you don't like hearing that.

Like every other generation, 2nd- and 3rd-gen games will benefit from a mature toolset, but the hardware itself is mid-range parts.

Mid-gen, it'll be updated with the latest mid-range parts. Yay.
 
This gen of consoles is mid-range PC hardware. Pound sand to your heart's desire if you don't like hearing that.

Like every other generation, 2nd- and 3rd-gen games will benefit from a mature toolset, but the hardware itself is mid-range parts.

Mid-gen, it'll be updated with the latest mid-range parts. Yay.

That's the first time I've heard "mid-range". Last generation it was low-end parts, right?

I agree that the difference now is smaller than ever; last gen, multiplatform games that were 30 fps on console (everything outside of fighting games, shooters, and some racing games) were easily 60 on PC. Now we even have Call of Duty with ray tracing at 60 fps.

Maybe the increased frame rate on consoles allows the PC to have much higher detail again.
 
192 GFLOPS for the PS3 GPU. Let's not rewrite history.

That's just the pixel shaders. It was 232 GFLOPS with both vertex and pixel shaders, which is a better comparison to modern-day unified-shader numbers.
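
In other words (taking the thread's own figures at face value), the gap between the two quoted numbers is just the vertex units:

```python
# PS3 RSX: which GFLOPS figure you get depends on which shader
# units you count (numbers as quoted in this thread).
pixel_only_gflops = 192   # pixel shaders alone
combined_gflops = 232     # vertex + pixel together
print(combined_gflops - pixel_only_gflops)  # 40 GFLOPS implied for vertex
```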

Edit: slightly off topic, but lol, is 30 TFLOPS for the 3080 for real? How come the 6800 XT is beating it in the vast majority of games with only 20 TFLOPS?
I believe the 30 TFLOPS is just a marketing number, not actual performance; same with the Xbox's 12 TFLOPS. It's not a real-world TFLOP, only a paper-spec PR TFLOP, which is weaker than an IRL TFLOP (if that makes sense).

The TFLOPS of the RTX 3000 series are less "effective"; the almost-30 TF 3080 is not even 2x faster than the 10 TF PS5 GPU without RT. (BTW, performance/TF doesn't mean shit; all that matters for an architecture's effectiveness is performance per watt.)

You should never use TFLOPS alone to gauge a GPU's performance. It's just an indicator of one component among several that determine overall performance. The 3080's 30 TFLOPS are real but can only be achieved under certain circumstances, when the workload is right and other bottlenecks aren't prevalent. It's likely that there are points within certain frames where those 30 TF are all brought to bear to complete a task much faster than the 6800 XT, but that task is only part of the overall workload of the frame, and thus its impact on the overall frame time won't scale directly in line with the TFLOP rating in comparison to other GPUs.

Also, because Ampere's CUDA cores now handle both floating-point and integer math, the real-world average throughput of the 3080 in a typical 2:1 float:integer workload would be more like 20 TF. Although under those same circumstances the 6800 XT would be more like 13-14 TF.
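
To put rough numbers on that last point, here's a simplified issue model (my own sketch, not a vendor figure): in a 2:1 float:int instruction mix, integer work displaces roughly a third of the potential FP32 issue slots on both architectures.

```python
# Simplified model: in a 2:1 FP:INT instruction mix, roughly 1/3 of
# issue capacity goes to integer work, leaving ~2/3 of peak FP32.
# (Ampere shares half its FP32 lanes with INT32; RDNA2 shares all.)
def effective_fp32(peak_tf: float, fp_share: float = 2 / 3) -> float:
    return peak_tf * fp_share

print(f"RTX 3080:   ~{effective_fp32(29.7):.1f} TF effective")  # ~19.8
print(f"RX 6800 XT: ~{effective_fp32(20.7):.1f} TF effective")  # ~13.8
```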
 