A Generational Leap in Graphics [2020] *Spawn*

Consoles are already running some settings at medium (WDL?). Even Godfall isn't at fully 'max' settings compared to the PC version's Epic preset, and it's lacking RT (which can be had on RDNA2 PCs).

We're speaking about real-world performance, and there the 6900 XT is a bit under two times more powerful.

Theoretically, the highest-end PC GPUs are actually close to 3x the PS5 (3090 / 6900 XT OC) in non-RT performance, and they aren't bandwidth constrained like the PS5 either.
We can't judge yet whether those GPUs won't get to use their TFs; current games aren't even all that optimized for RDNA2/Ampere. I don't think AMD is making things up either.
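For rough context, here's the back-of-the-envelope peak-FP32 arithmetic behind that figure (a minimal sketch using publicly listed shader counts and boost clocks; paper TFLOPS is obviously not the same thing as delivered frame rate):

```python
# Peak FP32 throughput = 2 ops per clock (FMA) x shader ALUs x clock.
# Shader counts / boost clocks are the publicly listed figures, so treat the
# resulting ratios as rough paper numbers, not measured performance.
gpus = {
    "PS5":        (2304,  2.23),  # 36 CUs x 64 ALUs, up to 2.23 GHz
    "RX 6900 XT": (5120,  2.25),  # 80 CUs at reference boost; an OC pushes this higher
    "RTX 3090":   (10496, 1.70),  # Ampere double-counts FP32 lanes, which inflates this
}

def peak_tflops(alus, clock_ghz):
    return 2 * alus * clock_ghz / 1000.0

ps5_tf = peak_tflops(*gpus["PS5"])
for name, (alus, clock) in gpus.items():
    tf = peak_tflops(alus, clock)
    print(f"{name}: {tf:.1f} TFLOPS ({tf / ps5_tf:.1f}x the PS5 on paper)")
```

Whether real games close in on those paper ratios is exactly the open question above.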

That, and Valhalla is probably the least interesting game to performance-test anyway.
If we start including RT, the PS5 falls even shorter; a 6800 XT will do a lot better there with twice the compute capability.

Also, why would any serious PC gamer try to match the consoles and end up with a mid-range machine?
Zen 2 is kinda... why go with that, or with 2018-era 2060 RT performance, or settle for a 10 TF (close to a 5700 XT OC) class GPU with subpar (per DF) RT?

@pjbliverpool

Cyberpunk is great, but the PC enjoys the middle ground: you get all PC games (there's really good indie stuff on Steam, some great lookers among them), BC all the way back, all MS games, and Sony games here and there. And for all the multiplats, you'll have the best versions.
Not a bad place at all.
 

Yeah I don't have enough time to play even half of the games I really want to play so I'm really not losing too much sleep over missing out on a few Sony exclusives no matter how well they're rated (which I could play on PS Now if I really cared that much). I've owned HZD since the day it launched on PC (thanks to my wife who games far more than me) and have yet to play it - although it's next on my list after AC:Odyssey. Meanwhile I've spent a ton of time over the last week playing Mario Kart 8 with my son on the PC, because.... emulators rock.
 
By your earlier logic, Watch Dogs is the best PC game ever because it runs at an even lower frame rate than 2077, right?
On a side note: the 2080 Ti must have been the worst investment ever; with less than two years on the market, it is seemingly unable to run modern games maxed out even at 4K 30 fps.

What on earth gave you that idea?
Quote please.

And I was being very nice here.
I didn't even mention the lower settings on console vs PC.
You stumbled again ;)
 

The holiday season has given me too much time :runaway:
 

Well, we can compare this:
[witcher-bench-4k-u.jpg: The Witcher 3 4K benchmark chart]


To this:
[Attachment 5175: Cyberpunk 2077 benchmark chart]
This time they have pushed the barrier even harder, and I expect the DLCs and mods to push it even further.

And this is EXACTLY what PC gaming is about.
It is not about 200+ FPS on a console port... it's about cranking the image quality up to "11". ;)

Like I said... CDPR did it RIGHT this time... they did not let the lowest common denominator (the consoles) hold the PC back.

On PC, Watch Dogs has an even lower frame rate than 2077.

"this is EXACTLY what PC gaming is about. It is not about 200+ FPS on a console port."

43 fps on a console port on the most powerful PC GPU ever created... according to you, that should be "EXACTLY what PC gaming is about".

If you don't want to hit 60 fps on console ports, there are cheaper ways to go about it. No need to buy a 3090; even a 2080 shouldn't be able to hit 60, as you can see in your benchmarks.
 
I see the conversation has turned into comparing the PC as a platform to the console as a platform... which seems pretty off-topic and like a purposeful deflection from the topic at hand. Not sure what the price of components, or the historical development of the PC, has to do with the next-generation difference in rendering fidelity!
 
Some people argued that the worse a (high-end) PC runs a game, the more technically advanced it is, since high frame rates indicate that development could have been constrained by other (lesser) platforms.

If Legion runs even worse than CP2077, then that could mean it has more advanced technology in some areas.

The rain on the streets combined with the ground reflections looks much more convincing in Legion, for example, unless somebody sabotaged CP2077 in that comparison by lowering the settings.
 

Again, maybe you play theoretically, but people play in the real world. If everything were theoretical, websites could stop benchmarking games on different GPUs and/or consoles; the paper specs would be enough.

https://www.techpowerup.com/gpu-specs/radeon-vii.c3358

Better than a 1080 Ti and a 5700 XT, a great GPU, what a value: 13.44 FP32 TFLOPS and 1 TB/s of memory bandwidth. It eats the PS5 GPU for dinner. Wrong...

EDIT:

And no, consoles will not keep high settings in games. At the beginning of the generation, because of cross-gen titles, consoles run high or very high settings in many parts of their games, but often with some functionality at lower settings. It is also common for consoles to have some settings go lower than the lowest PC settings. But as the generation continues, consoles drop to lower and lower settings.

This is the reason the platform holders introduce mid-gen consoles. They sit at mid-range PC level and help the consoles stay acceptable from a performance point of view.

This time it will be better because they don't have the CPU as a handicap. The mid-gen refresh will really be a mid-range PC part in three years.
 
In the case of the RX 6900 XT then yes, we're "only" talking about double the overall real-world performance from what we know so far. But you have to remember that Ampere dedicates die space to both RT and ML performance, so both of those elements should be factored into its total potential performance.

If a game doesn't use RT or ML, or only utilises them lightly, then Ampere (or Turing for that matter) isn't being used to its full potential. Just look at CB2077 or WD:L performance at max RT and high resolutions with DLSS vs RDNA2. We're talking a performance advantage of between 3x (without DLSS) and 8x (with DLSS) over the next-gen consoles in that scenario.

Obviously that doesn't apply to all situations, so you could argue it doesn't matter, that it's just unwisely used die space, an inefficient architecture. But the way I like to look at it is: what would the result be if, for example, these consoles used a 3070 instead of what they're using right now? What if every console title used DLSS (they obviously would, since DRS, which is used in every title, is a much bigger compromise) along with a heavier dose of RT? How much extra performance would those consoles demonstrate compared to what they do currently?
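To give a rough sense of where the DLSS part of that gap comes from, here's the simple pixel-count arithmetic (a sketch only; real scaling also depends on fixed per-frame costs and the cost of the upscale pass itself):

```python
# Pixels shaded per frame at common DLSS internal resolutions vs. native 4K.
# DLSS Quality renders at ~67% per axis, Performance at 50% (1080p -> 4K).
native_4k = 3840 * 2160

modes = {
    "native 4K":        (3840, 2160),
    "DLSS Quality":     (2560, 1440),
    "DLSS Performance": (1920, 1080),
}

for name, (w, h) in modes.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.2f} MP per frame "
          f"({native_4k / pixels:.1f}x less pixel-shading work than native 4K)")
```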



This isn't really relevant. Both of the latest PC architectures are able to take advantage of Sampler Feedback as well. The streaming element simply makes MIP transitions smoother and/or cheaper.



This doesn't make a lot of sense. Assuming you mean the PC-equivalent settings, then PC settings requirements increase over time. What requires a 3080 today for Ultra settings will probably require a 4080 in a couple of years. Consoles have always had to scale down the settings ladder over time compared to PCs for that very reason. A few years won't help the situation; it'll hinder it.
The fixed-function RT hardware currently used in the Ampere architecture will soon become unusable thanks to AMD's shader-based solution, which promises full programmability for ray tracing and brings Super Resolution, which will be a clear answer to DLSS. The new consoles and AMD cards will set this direction in the near future.

In response to the SF vs. SFS question, from Microsoft's test environment, streaming the same content:

Memory for traditional HDD mip streaming: 2159 MB active
Memory for XVA with the PC's SF: 1053 MB active
Memory for XVA with Xbox Series SFS: 330 MB active

There is no need for a more advanced solution on the PC side, because there will be more VRAM over time, but on a console, SFS will provide the solution to this.
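For anyone wondering what those numbers actually measure, here's a toy model of sampler-feedback-driven residency (purely illustrative Python, not the real D3D12/XVA machinery): the GPU records which tiles and mip levels were actually sampled, and only those stay resident instead of whole mip chains.

```python
# Toy residency model: compare "keep the whole mip chain" against
# "keep only the 64 KB tiles the GPU reported as sampled".
TILE_BYTES = 64 * 1024  # tiled/reserved resources use 64 KB tiles

def mip_chain_bytes(width, height, bytes_per_texel=4):
    """Naive streaming: the full mip chain stays resident."""
    total = 0
    while width >= 1 and height >= 1:
        total += width * height * bytes_per_texel
        width //= 2
        height //= 2
    return total

def feedback_resident_bytes(sampled_tiles):
    """Feedback-driven streaming: sampled_tiles is a set of
    (mip, tile_x, tile_y) entries reported by the GPU's feedback map."""
    return len(sampled_tiles) * TILE_BYTES

# Hypothetical 4096x4096 RGBA texture of which only 40 tiles were ever sampled.
full = mip_chain_bytes(4096, 4096)
resident = feedback_resident_bytes({(0, x, y) for x in range(8) for y in range(5)})
print(f"full mip chain: {full / 2**20:.1f} MB, feedback-driven: {resident / 2**20:.1f} MB")
```

The 2159 / 1053 / 330 MB figures quoted above are presumably the same idea measured on real content: the finer-grained the feedback and eviction, the fewer tiles need to stay in memory at any moment.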
 
I think everyone's being a touch premature about the consoles and, in fact, about current PC hardware. They have not peaked; we've got a bit of time to go before that happens. All we've currently got is a new baseline.

The irrational is creeping into B3D.
 

Apart from the RT hardware's intersection units, there is no fixed-function hardware on Turing and Ampere; RT traversal is done on a MIMD unit. If tomorrow DXR allows traversal to be programmable for more flexibility, Nvidia can open access to the traversal unit of the RT core, and it will continue to be faster than on RDNA2 GPUs because MIMD is better than SIMD for BVH traversal.

And we don't even know what Super Resolution is...

As for SFS, you don't understand what it is doing. Someone explained it to you before.

B3D is getting creepy; we have more and more console, PC, and IHV warring and less and less constructive discussion.
 
There will most likely be no mid-gen consoles this time. The current consoles were much stronger and more advanced at launch than at the 2013 generational change. Manufacturing costs are much higher than in the previous generation, so both manufacturers are selling their consoles at a significant loss and will continue to do so for years to come. Thus, it is unlikely that even more expensive machines would be brought out in the meantime. Furthermore, programming for the SSDs and the new I/O holds a lot of potential in itself, and developers will be very pleased with these consoles for the next 5-6 years.
 
Dirt 5 looks amazing as well in terms of scalability.


It is a very optimised game with a lot of performance headroom. If Dirt 6 targets the new hardware as a baseline, it will be unbelievable IMO.
 
I've been coming around to the idea that we're likely "stuck" with these machines for gen 9, but we might see a shorter 5 year generation with a move to 2nm in 2025.

With that being said, we'll get some gorgeous games. If we got Gears 5 and TLOU2 on 1.3 and 1.8 TF machines, I'm sure there are going to be impressive games for 12 and 10 TF consoles.

I hope the focus is on IQ and frame rate rather than resolution this generation. If we get lucky, techniques will come along to give DLSS a run for its money.
 
In particular, the current, old triangle-based ray-tracing method makes little use of the performance of the new consoles. In this regard, more efficient BVHs and voxels will be the key.

Don't expect a magical "more efficient BVH" to solve problems. The entire way ray tracing works (all ray tracing, let alone real-time ray tracing via RTX) is already built on super-efficient acceleration structures (typically BVH trees). There's room for improvement when it comes to setting up scenes and choosing what does or doesn't go into the ray-tracing structure, but everybody is already generating great BVH trees very fast. Big speed boosts will come from faster hardware, cheaper subject matter (at the bottom of the BVH tree you still need to do ray-triangle intersection tests, although 99.9% of games are made out of triangles, so...), and I suppose maybe clever shaders that minimize the need for samples.
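For readers who haven't looked at one before, here's a minimal stack-based BVH traversal sketch (a conceptual illustration only, not RTX's or any driver's actual implementation): interior nodes are culled with cheap ray-box slab tests, and only the surviving leaves fall through to the per-triangle intersection work the RT cores accelerate.

```python
# Minimal BVH traversal sketch (conceptual; real implementations use packed
# node layouts, ordered descent, and hardware/SIMD traversal).
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    bounds_min: tuple
    bounds_max: tuple
    children: list = field(default_factory=list)   # interior node if non-empty
    triangles: list = field(default_factory=list)  # leaf triangles as (v0, v1, v2)

def ray_hits_aabb(origin, inv_dir, lo, hi):
    """Slab test: the cheap check that lets one interior node cull a whole subtree."""
    t_near, t_far = 0.0, math.inf
    for o, inv, a, b in zip(origin, inv_dir, lo, hi):
        t0, t1 = (a - o) * inv, (b - o) * inv
        t_near, t_far = max(t_near, min(t0, t1)), min(t_far, max(t0, t1))
    return t_near <= t_far

def ray_triangle_t(orig, d, v0, v1, v2, eps=1e-7):
    """Moller-Trumbore: the per-triangle test that still has to run at the leaves."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    cross = lambda a, b: (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle plane
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) / det
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(d, q) / det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) / det
    return t if t > eps else None

def closest_hit(origin, direction, root):
    inv_dir = tuple(1.0 / c if c != 0.0 else math.inf for c in direction)
    best, stack = None, [root]
    while stack:
        node = stack.pop()
        if not ray_hits_aabb(origin, inv_dir, node.bounds_min, node.bounds_max):
            continue                       # whole subtree culled by one cheap box test
        if node.children:
            stack.extend(node.children)    # interior node: keep descending
        else:
            for tri in node.triangles:     # leaf: the expensive part the hardware accelerates
                t = ray_triangle_t(origin, direction, *tri)
                if t is not None and (best is None or t < best):
                    best = t
    return best
```

Note the asymmetry the post describes: the tree does its job with one cheap box test per node, so the remaining cost is the leaf work plus how many rays and samples you need in the first place.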
 
"Theoretically" doing a ton of work here.

Yes, but that's the same for the RDNA2 RT argument...
Or the XSX's performance in some titles.

It goes both ways; it's too early to say whether current games are a true example of what the new architectures can do, especially Ampere.

Right now someone getting a 6900 XT at default clocks will be sitting at 2.5x the PS5's performance, with an OC making that closer to three times. That could come in handy if things are done on the CUs (RT, DLSS, etc.), or just for raw performance.
I'd guess that when real next-gen stuff appears and you go ultra settings, those GPUs get stretched better.
They're surely not bandwidth bound either.

Ampere is a different story, due to better-specced RT and tensor hardware. Next gen is all about RT and reconstruction tech anyway.
 

You confuse FPS with fidelity.

You asked for it, and since I'd rather game than waste my time on your strawmen, go watch this:

You are comparing apples and oranges, and the game has a lot FEWER DXR features than CB2077, but as with screen-space reflections, you do not see/perceive that.

Add to that WDL looking too "shiny new", something that most games suffer from.
CB2077's world in contrast looks gritty, used, and a lot more realistic... but at the same time with a lot more DXR (e.g. ray-traced global illumination).
But again, you demonstrated that you do not see image fidelity; you just see "shiny" (and not the errors)... so I will go play CB2077... and let you strawman some more.

But it's very telling which GPU can match those settings on console... and it is not from this generation ;)
 