A Generational Leap in Graphics [2020] *Spawn*

5.5 is 25% better than 4.4. We have seen other games with up to 9x improvement (from PS4 to PS5) when GPU limited.
- The Last Guardian: PS4 at 1080p ~20fps to PS5 at 1890p ~60fps, that's up to 9x (and running via BC; rough math below).
- Spider-Man Remastered gets about 7-8x more performance on PS5 in its 60fps mode (so using RDNA2 features).
Another indie game whose name I don't remember has up to an 8x performance improvement using BC on PS5.
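Just to show where that kind of number comes from, here's a rough back-of-the-envelope sketch (my own figures, and I'm assuming 1890p means 3360x1890):

```python
# Back-of-the-envelope pixel-throughput comparison for The Last Guardian.
# Assumed figures: PS4 at 1080p / ~20fps, PS5 BC mode at ~1890p / ~60fps,
# with 1890p taken to be 3360x1890 (16:9).
ps4_rate = 1920 * 1080 * 20   # pixels rendered per second on PS4
ps5_rate = 3360 * 1890 * 60   # pixels rendered per second on PS5

print(f"~{ps5_rate / ps4_rate:.1f}x more pixels per second")  # ~9.2x
```

Pixel throughput isn't the same as overall GPU performance, of course, but it's where the "up to 9x" figure comes from.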

Games have different bottlenecks and perform differently when put on a newer architecture such as PS5/XSX. Not sure what is so shocking about it. Also, we have seen more than a handful of full games on Xbox One X not only performing better, but rendering beyond 2x the pixels compared to PS4 Pro. So 8x or whatever is not that far-fetched really!
 
It's Rainbow Six Siege; I don't think it's that impressive, but that's me. It's going into diminishing-returns territory there. For example, both my 1070 and 2080 Ti can do 500fps in Counter-Strike: Source. Try to do the same with another game, take Watch Dogs or something, and see where performance goes.
...yes, I agree that a completely different and much older game is less impressive? I mean, what is the point of this reply? You do realize that you don't have to respond to every single post in a thread since you last visited it, right?
 
IQ improvements aren't strictly hardware performance-based. A big reason for that is consoles, where devs find more elegant solutions to pull more performance out of the fixed-hardware platforms over time.

I agree with this, although I think DLSS bucks this trend in a pretty disruptive way.

Given all the bugs and glitches and the way even new Nvidia cards struggle, CP77 comes off as a straight brute force endeavor.

I don't agree with this though. CB2077 is hugely heavy on hardware, yes. But IMO the visuals warrant the performance cost.
 
I agree with this, although I think DLSS bucks this trend in a pretty disruptive way.



I don't agree with this though. CB2077 is hugely heavy on hardware, yes. But IMO the visuals warrant the performance cost.
If games with notably better visuals running on hardware over 3x weaker become the norm, are they really warranted? Going back to Crysis, it's important to remember that the consoles never came close to matching it, much less surpassing it.
 
I don't agree with this though. CB2077 is hugely heavy on hardware, yes. But IMO the visuals warrant the performance cost.

Not saying it doesn't. Just saying that it seems CDPR addressed CP77's needs by simply throwing a ton of TFLOPS at it.

It's understandable given we are just at the beginning of a new gen, but I don't think CP77-like IQ is going to take a 3090 with DLSS set to Quality in the future.
 
If games with notably better visuals running on hardware over 3x weaker become the norm, are they really warranted? Going back to Crysis, it's important to remember that the consoles never came close to matching it, much less surpassing it.



As for CP2077, this is how a console from 7 years ago and over 17 times weaker* handles it:


*at least going by Nvidia's (fake PR) specs
 
Streaming. I'm talking about SSD drives replacing fast GDDR5/6 RAM. I'm not saying the SSD cannot mitigate the lack of VRAM, I'm just saying it can't account for all of it. If that theory were true, they could just as well have stuck with 8GB of RAM 'because of the SSD'. Don't forget either that the PS4 has a rather low bandwidth to play with as well, where the SSD has to help out?



Again, there's a reason some compare only cross-gen titles? I think native games designed around the systems give a better picture of what the systems can do, as you always like to point out. That would be Shadow Fall and 1886 for the PS4, Demon's Souls/Rift Apart for the PS5.
Also, you're forgetting that PS3 and 4 were on more different architectures than PS4 and 5 are. Besides that, cross-gen porting wasn't as good as it is nowadays, with the focus now much more heavily on that.



Yes, exactly. It's also almost two years older and basically a 2012 mid-ranger at that.



I too have no idea why cross-gen games are so hot in this discussion right now. Obviously, cross-gen wasn't such a thing back then, and people are totally forgetting that PS3 and PS4 were on totally different architectures. If game A was done with, say, the PS3 in mind and ported to PS4, that could explain the harder-to-optimize case.



CP2077 really has some impressive NVMe streaming tech going for it on PC, so yes, if games are designed around the PC, it has no problem doing these things.



This, I was thinking the same thing. As a PC gamer who upgrades a lot (before, at least), this came to mind, but I didn't or couldn't explain it as well.
Anyway, when comparing graphical leaps, it's generally best to compare games developed and designed around the system, not old games/cross-gen titles.
The avoidance of this tells me enough.




It's Rainbow Six Siege; I don't think it's that impressive, but that's me. It's going into diminishing-returns territory there. For example, both my 1070 and 2080 Ti can do 500fps in Counter-Strike: Source. Try to do the same with another game, take Watch Dogs or something, and see where performance goes.



Hm, not underpowered, consoles never are. They get mid-range hardware and devs optimize for it. What I am saying is that the leap isn't as great as we are used to. That's a different thing from underpowered. A 2070 system is not underpowered either.



A 3060 Ti is more powerful, but that's considered high-end I guess. No idea where people place hardware anymore.



It's a topic for concerned people. I'm not concerned: with DirectStorage, NV and probably AMD, together with MS, are finding solutions that will fix those issues. NV promised direct access between the SSD and GPU, at speeds higher than any console. I have no doubt that's possible if they want it.



True, they have always done so. But the PC has gotten much better in that regard, especially since consoles have become more and more like PCs. Optimization used to be really bad, like 15 to 20 years ago. Looking at last generation, it's quite amazing that, say, a 7870 can still even run modern AAA titles that are ported badly.
Remember the days when MGS2 got ported to PC.


Again, it shows you don't understand how streaming works. There is enough memory for streaming assets, but you have some data structures generated at runtime, like the BVH, which takes a good amount of memory, up to 2GB, and you have other things generated at runtime as well. For the BVH, you can build it during development for static objects, stream it from the SSD, and only update it for dynamic objects. I am not talking about replacing RAM, and it does not mean they did not need more RAM this generation: asset quality will be much better, the CPU is more powerful (meaning more RAM for the systems running on it), and you have data structures for raytracing or other forms of GI (voxel data structures, or maybe one day point data structures).

In the Unreal Engine 5 demo, they were able to stream film-quality assets for static objects. If the consoles had more RAM, for example 32 GB, they would need to use the streaming system less often. I am not sure next generation we will need to double the memory; maybe 24 GB is enough, and memory bandwidth improvement is more important. The limit for game asset quality is not RAM or the streaming system, it is the size of assets on storage. The situation is not as dramatic as with cartridges, but this is the reality. I think if it were possible, a 10TB game-size limit would be enough for an open world with the UE5 demo's level of quality.

In theory, you can stream all the content of a 100 GB game in about 10 seconds on PS5 (Oodle Texture and Oodle Kraken) using the average 11 GB/s given by the RAD Game Tools figures. On Xbox Series, you can stream all the content in about 20 seconds using the 4.8 GB/s number.
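A quick sanity check on those last two numbers, just restating the figures quoted above (nothing assumed beyond them):

```python
# Time to stream an entire 100 GB game at the quoted effective rates.
game_size_gb = 100.0
ps5_rate_gb_s = 11.0   # average Oodle Kraken/Texture figure quoted above
xsx_rate_gb_s = 4.8    # Xbox Series effective figure quoted above

print(f"PS5: ~{game_size_gb / ps5_rate_gb_s:.0f} s")  # ~9 s, i.e. roughly 10 seconds
print(f"XSX: ~{game_size_gb / xsx_rate_gb_s:.0f} s")  # ~21 s, i.e. roughly 20 seconds
```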


From a guy working at RAD Game Tools (Oodle Kraken/Oodle Texture):

The GPU leap is the same as in other generations, and most of the time the CPU leap is bigger; storage gets an SSD, and there are some new coprocessor components to take workload off the CPU: the I/O system in PS5 and Xbox Series X (no I/O operations on the CPU at all on the PS5 side), the Tempest Engine for 3D audio, and some DSPs on Xbox Series. The leap is a bit smaller for memory bandwidth, but again, because of compression inside the GPU it is more efficient, and it is not like it was much bigger last generation: PS3 (48 GB/s) to PS4 (176 GB/s) was a 3.7x increase in memory bandwidth, while from PS4 to PS5 (448 GB/s) it is only 2.54x, but like I said, RDNA2 GPUs need less bandwidth than GCN GPUs. The difference is not huge.
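Those two bandwidth ratios, spelled out with the same numbers as above:

```python
# Generational memory bandwidth ratios, using the figures from the post.
ps3_bw, ps4_bw, ps5_bw = 48.0, 176.0, 448.0   # GB/s

print(f"PS3 -> PS4: {ps4_bw / ps3_bw:.2f}x")  # ~3.67x
print(f"PS4 -> PS5: {ps5_bw / ps4_bw:.2f}x")  # ~2.55x
```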

No, there is a reason: in PC benchmarks they use existing titles to compare hardware differences. You need the same workload to compare.
https://www.guru3d.com/articles_pages/sapphire_radeon_rx_6800_nitro_review,23.html

And for cross-gen, on PS4 they used the PC path as the base. The PS3/360 generation was long, and at the end those consoles were destroyed by the PC. The PS3 was the problem, with inferior multiplatform games compared to the 360 and PC. This is useful and the best way to compare two different consoles' hardware on the pure performance aspect.

I know Demon's Souls is not a PS4 game, and it is gorgeous, one of the best-looking games, but it is difficult to compare the first year of games when last generation we had no cross-gen titles on the Sony and Microsoft side. A Horizon 2 comparison against Killzone Shadow Fall, for example, will not be interesting because Horizon is cross-gen. This is the reason I think 2022 and 2023 will be a better point of comparison. For example, Unreal Engine 5 titles will begin to arrive in 2022/2023.

I don't compare next-generation consoles to PC SSDs; again, in two years PCIe 5 will be there and will go much faster than the PS5 SSD, but apart from loading a bit faster*, it will change nothing, because the limit is not on the streaming side but the size of games. This is the same on PC. The consoles' advantage is temporary.

* If the PS5 loads a level in 2 seconds, it will load it in 1 second on PC; there is a diminishing-returns effect.
 
With all of the praise and hyperbole going around, the base 2013 PS4 version is extremely, extremely underrated.

With how impressed people are that the Switch can run Witcher 3 or Wolfenstein, the PS4 running Cyberpunk 2077 is really the equivalent of the PS Vita running Crysis.
 
Again, it shows you don't understand how streaming works. There is enough memory for streaming assets, but you have some data structures generated at runtime, like the BVH, which takes a good amount of memory, up to 2GB, and you have other things generated at runtime as well. For the BVH, you can build it during development for static objects, stream it from the SSD, and only update it for dynamic objects. I am not talking about replacing RAM, and it does not mean they did not need more RAM this generation: asset quality will be much better, the CPU is more powerful (meaning more RAM for the systems running on it), and you have data structures for raytracing or other forms of GI (voxel data structures, or maybe one day point data structures).

In the Unreal Engine 5 demo, they were able to stream film-quality assets for static objects. If the consoles had more RAM, for example 32 GB, they would need to use the streaming system less often. I am not sure next generation we will need to double the memory; maybe 24 GB is enough, and memory bandwidth improvement is more important. The limit for game asset quality is not RAM or the streaming system, it is the size of assets on storage. The situation is not as dramatic as with cartridges, but this is the reality. I think if it were possible, a 10TB game-size limit would be enough for an open world with the UE5 demo's level of quality.

In theory, you can stream all the content of a 100 GB game in about 10 seconds on PS5 (Oodle Texture and Oodle Kraken) using the average 11 GB/s given by the RAD Game Tools figures. On Xbox Series, you can stream all the content in about 20 seconds using the 4.8 GB/s number.

Look, I'm not saying 16GB is paltry, or that it's not enough to drop some jaws down the line. What I mean is that, in quantity, it's less of an increase for memory than before. The SSD is going to mitigate some of it, but not all. RAM management is going to be the key there, indeed.

The GPU leap is the same as in other generations, and sometimes the CPU leap is bigger; storage gets an SSD, and there are some new coprocessor components to take workload off the CPU: the I/O system in PS5 and Xbox Series X (no I/O operations on the CPU at all on the PS5 side), the Tempest Engine for 3D audio, and some DSPs on Xbox Series. The leap is a bit smaller for memory bandwidth, but again, because of compression inside the GPU it is more efficient, and it is not like it was much bigger last generation: PS3 (48 GB/s) to PS4 (176 GB/s) was a 3.7x increase in memory bandwidth, while from PS4 to PS5 (448 GB/s) it is only 2.54x, but like I said, RDNA2 GPUs need less bandwidth than GCN GPUs. The difference is not huge.

The jump in performance for the GPU is absolutely not the same as before: it's roughly 5x in pure metrics, half of what we got going from PS3 to PS4. Architectural improvements happen all the time, for every generational shift. It could be argued that going from G70 to GCN 1.1 was a larger improvement than going from GCN to RDNA2. For that we'd need someone's input, but at least it can be said that architectural improvements, or IPC, account for both shifts.
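For what it's worth, here is roughly where those ratios land if you plug in commonly quoted peak FP32 figures; the RSX number in particular is debatable (it's the programmable-shader figure, not Sony's marketing total), so treat this as ballpark only:

```python
# Peak FP32 ratios across generations, using commonly cited figures.
rsx_tf = 0.23    # PS3 RSX, programmable shaders (assumed, contested figure)
ps4_tf = 1.84    # PS4
ps5_tf = 10.28   # PS5

print(f"PS3 -> PS4: ~{ps4_tf / rsx_tf:.1f}x peak FP32")  # ~8x
print(f"PS4 -> PS5: ~{ps5_tf / ps4_tf:.1f}x peak FP32")  # ~5.6x
```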

CPU-wise, even with all the credit Cell got on some forums and in articles, it was actually about four times weaker than the 8-core Jaguar in the PS4. That's aside from a very bad vs. a much better efficiency. The Jaguars got a lot of flak, and compared to PC CPUs they were low-end, but compared to what was in the PS3 (Cell) it actually was a great improvement across the board for gaming-related tasks. While the Zen 2 CPU is a much-needed leap, it's not directly more of a leap than going from Cell to Jaguar.

Yes, the coprocessor is going to offload I/O tasks, but remember that the PS4 didn't really need one, as games didn't rely heavily on fast loading.
The leap for memory bandwidth is not 'a bit less', it's a lot less. The GPU in the PS3 basically had to make do with about 20GB/s for the 256MB allocated to VRAM. That went all the way to 176GB/s for the PS4. RDNA2 needs less bandwidth than GCN, but who's to say bandwidth efficiency wasn't improved going from G70 to GCN? Nothing happened in those 8 years of development?
I'd say that even there the difference is quite large; dGPU variants have Infinity Cache for a reason, or in Ampere's case are closing in on 900GB/s of raw bandwidth.

No, there is a reason: in PC benchmarks they use existing titles to compare hardware differences. You need the same workload to compare.

Thing is, scaling has improved a lot, and cross-gen sure has been the focus now. Still, I see SF and 1886 as bigger leaps over the previous best graphics than what DS and Rift Apart deliver. But then we get into that territory again where some think GTA3 looks better than 4, etc. :p

And for cross-gen, on PS4 they used the PC path as the base. The PS3/360 generation was long, and at the end those consoles were destroyed by the PC. The PS3 was the problem, with inferior multiplatform games compared to the 360 and PC. This is useful and the best way to compare two different consoles' hardware on the pure performance aspect.

Hm, seems like a disadvantage for the PS4 then, using a PC path on the console? Anyway, scaling has come a long way since then, and the focus on cross-gen has put much more optimization towards that, I'm sure.

I know Demon's Souls is not a PS4 game, and it is gorgeous, one of the best-looking games, but it is difficult to compare the first year of games when last generation we had no cross-gen titles on the Sony and Microsoft side. A Horizon 2 comparison against Killzone Shadow Fall, for example, will not be interesting because Horizon is cross-gen. This is the reason I think 2022 and 2023 will be a better point of comparison. For example, Unreal Engine 5 titles will begin to arrive in 2022/2023.

It's an amazing title graphically, and it's up there with the others. But it's not the leap Shadow Fall brought us. I mean, it's something DF has shared the same opinion on. Generational shifts have decreased since the PS2, it's no secret :)

Then you'd need to compare 2015/2016 PS4 games to 2022/2023 PS5 games or something, not launch ones like Shadow Fall. I pick DS and SF because they both are launch titles and basically what we got for PS5.
Time will tell if we see the same leap as PS2 to PS3 and PS3 to PS4. I'm still saying leaps have gotten smaller; it's a general thing basically everyone knows.

I don't compare next-generation consoles to PC SSDs; again, in two years PCIe 5 will be there and will go much faster than the PS5 SSD, but apart from loading a bit faster*, it will change nothing, because the limit is not on the streaming side but the size of games. This is the same on PC. The consoles' advantage is temporary.

* If the PS5 loads a level in 2 seconds, it will load it in 1 second on PC; there is a diminishing-returns effect.

The console advantage right now is DirectStorage not being ready yet on PC. Whatever that advantage is anyway, PCIe 4 NVMe setups load games almost as quickly? I'd have to do some side-by-side testing with the same games.
 
Look, I'm not saying 16GB is paltry, or that it's not enough to drop some jaws down the line. What I mean is that, in quantity, it's less of an increase for memory than before. The SSD is going to mitigate some of it, but not all. RAM management is going to be the key there, indeed.



The jump in performance for the GPU is absolutely not the same as before: it's roughly 5x in pure metrics, half of what we got going from PS3 to PS4. Architectural improvements happen all the time, for every generational shift. It could be argued that going from G70 to GCN 1.1 was a larger improvement than going from GCN to RDNA2. For that we'd need someone's input, but at least it can be said that architectural improvements, or IPC, account for both shifts.

CPU-wise, even with all the credit Cell got on some forums and in articles, it was actually about four times weaker than the 8-core Jaguar in the PS4. That's aside from a very bad vs. a much better efficiency. The Jaguars got a lot of flak, and compared to PC CPUs they were low-end, but compared to what was in the PS3 (Cell) it actually was a great improvement across the board for gaming-related tasks. While the Zen 2 CPU is a much-needed leap, it's not directly more of a leap than going from Cell to Jaguar.

Yes, the coprocessor is going to offload I/O tasks, but remember that the PS4 didn't really need one, as games didn't rely heavily on fast loading.
The leap for memory bandwidth is not 'a bit less', it's a lot less. The GPU in the PS3 basically had to make do with about 20GB/s for the 256MB allocated to VRAM. That went all the way to 176GB/s for the PS4. RDNA2 needs less bandwidth than GCN, but who's to say bandwidth efficiency wasn't improved going from G70 to GCN? Nothing happened in those 8 years of development?
I'd say that even there the difference is quite large; dGPU variants have Infinity Cache for a reason, or in Ampere's case are closing in on 900GB/s of raw bandwidth.



Thing is, scaling has improved a lot, and cross-gen sure has been the focus now. Still, I see SF and 1886 as bigger leaps over the previous best graphics than what DS and Rift Apart deliver. But then we get into that territory again where some think GTA3 looks better than 4, etc. :p



Hm, seems like a disadvantage for the PS4 then, using a PC path on the console? Anyway, scaling has come a long way since then, and the focus on cross-gen has put much more optimization towards that, I'm sure.



It's an amazing title graphically, and it's up there with the others. But it's not the leap Shadow Fall brought us. I mean, it's something DF has shared the same opinion on. Generational shifts have decreased since the PS2, it's no secret :)

Then you'd need to compare 2015/2016 PS4 games to 2022/2023 PS5 games or something, not launch ones like Shadow Fall. I pick DS and SF because they both are launch titles and basically what we got for PS5.
Time will tell if we see the same leap as PS2 to PS3 and PS3 to PS4. I'm still saying leaps have gotten smaller; it's a general thing basically everyone knows.



The console advantage right now is DirectStorage not being ready yet on PC. Whatever that advantage is anyway, PCIe 4 NVMe setups load games almost as quickly? I'd have to do some side-by-side testing with the same games.

This is not how you count; the metric means nothing on its own. What is important is real-world performance, and here we see, depending on the title, 5 to 8 times more pixels, or in Spider-Man Remastered's performance mode, 7-8 times more pixels pushed.

https://www.techpowerup.com/gpu-specs/geforce-gtx-1080-ti.c2877

The 1080 Ti's TFLOPS are higher than the 2070 Super's or 2080's, but the GPU performance is behind. This is the same thing, and it is better to buy a Turing GPU, with or without raytracing. Stop your bullshit. What you say makes no sense at all; this is pure trolling.

We will not rewrite history: the Jaguar is weak, and the main reason we had a regression in gameplay physics. It was not just stagnation, because we had less physics in PS4/XB1 titles than in PS3/360 ones.

On G70 vs. GCN GPUs: memory delta colour compression inside the GPU was a domain where Nvidia was much better than AMD, no chance for you. This is the reason GCN GPUs need so much memory bandwidth and Vega GPUs use HBM2. It improved on the Polaris side and continued to improve with RDNA. You need to learn a bit about PC GPU technology. ;) The PS3 had an Nvidia GPU. ;)


Again, DF is not an absolute authority, this is a matter of taste, and the work on Demon's Souls* began in 2017 on a GPU without raytracing acceleration, and Bluepoint did not have the time or the workforce to use it. Bluepoint is not GG or ND, they are a smaller studio. It would have been more interesting to compare GG's work on PS4 and PS5 if Horizon 2 were not cross-gen. But we can compare 2015/2016 titles like Uncharted 4 or TO1886 to 2022/2023 titles if you want; I am sure the gap will be wide.

*IMO it is, along with Flight Simulator 2020 and Cyberpunk 2077, the best-looking title of 2020. TLOU2 would be the 4th best title, ahead of Spider-Man MM.
 
If games with notably better visuals running on hardware over 3x weaker become the norm, are they really warranted? Going back to Crysis, it's important to remember that the consoles never came close to matching it, much less surpassing it.

The quality of graphics at any given time is relative to the period they're released in. We're barely scratching the surface of what DX12U-class architectures are capable of, so at this point, very early on in the optimisation cycle, I'd say that yes, they are worth it (although see my response below for further context on this). As time goes on, developers will naturally learn more about the new hardware capabilities and develop better ways to use the available power and features, which will result in better-looking games, as is the case every console generation. However, for where we are right now in the optimisation cycle, I'd say that CB2077's visuals warrant their high performance cost vs other games currently available.

Not saying it doesn't. Just saying that it seems CDPR addressed CP77's needs by simply throwing a ton of TFLOPS at it.

It's understandable given we are just at the beginning of a new gen, but I don't think CP77-like IQ is going to take a 3090 with DLSS set to Quality in the future.

Actually I do agree with you. My earlier statements about the graphics being worth the performance cost weren't fully thought through, and thinking about it more, I was really talking about the non-RT version of the game (as that's what I'm playing) when I made them. The requirements of those graphics seem quite in line with the result IMO, compared with other games available at the moment in my own experience. Adding RT seems to give significant additional benefits, but yes, an argument can certainly be made that the extra graphical fidelity isn't worth losing 2/3rds of your performance, and there will be games later this generation which use that power for more visual impact.



As for CP2077, this is how a console from 7 years ago and over 17 times weaker* handles it:


Despite YT compression underselling the vast difference in image quality and framerate between 4K 40fps+ and 720p 20fps+, I think the CB comparison holds up very well against the other 2 videos. The Crysis difference is probably bigger, but the Crysis 3 difference is smaller IMO. Just look at the scene from 7:14 for an example in the CB comparison. That scene gets a few seconds in the video but could represent hours of gameplay, both in car and on foot within the game, and that's a very clear generational difference between the two platforms. Here it is for reference:


*at least going by Nvidia's (fake PR) specs

This is silly. I assume you're referring to the available TFLOPS, but there is nothing fake or PR about what Nvidia rates the Ampere GPUs at. It's a simple factual report of the architecture's raw float shader throughput that is easily testable. If you're concluding that Nvidia is lying about its FLOPS because there isn't a real-world 17x difference between the 3080 and PS4 (when not using RT or DLSS), then that simply highlights a lack of understanding of the relevance of FLOPS in the overall system architecture context.
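To be concrete about what that rated figure actually is, here's a minimal sketch using the published shader counts and boost clocks (spec-sheet values, not measurements; performance per FLOP obviously differs between architectures):

```python
# Rated TFLOPS is just peak FP32 throughput: an FMA counts as 2 ops per
# shader ALU per clock. Inputs are published spec-sheet values.
def peak_fp32_tflops(shader_alus: int, boost_clock_ghz: float) -> float:
    return 2 * shader_alus * boost_clock_ghz / 1000.0  # GFLOPS -> TFLOPS

print(f"RTX 3080: ~{peak_fp32_tflops(8704, 1.71):.1f} TFLOPS")  # ~29.8
print(f"PS4 GPU:  ~{peak_fp32_tflops(1152, 0.80):.2f} TFLOPS")  # ~1.84
```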
 
The quality of graphics at any given time is relative to the period they're released in. We're barely scratching the surface of what DX12U-class architectures are capable of, so at this point, very early on in the optimisation cycle, I'd say that yes, they are worth it (although see my response below for further context on this). As time goes on, developers will naturally learn more about the new hardware capabilities and develop better ways to use the available power and features, which will result in better-looking games, as is the case every console generation. However, for where we are right now in the optimisation cycle, I'd say that CB2077's visuals warrant their high performance cost vs other games currently available.
I agree up to a point. I'd say the performance gap between a 3090+DLSS and a PS5 is clearly bigger than the one between the G80 and the PS3. You can look at any gen-7 console title throughout its lifetime and not a single one ever came close to matching Crysis. 7 years of developer improvement and nothing came close. Similar to DX12U being new now, there were several new paradigms back then. Learning them improved things dramatically, but not nearly enough to close the gap. GPU-wise, Crysis's performance was justified. Compare the best-looking games on a base PS4 to Cyberpunk on a 3090 and there is just no way the visuals aren't punching well below their weight with 35-40x more GPU capability available.
 
On G70 vs. GCN GPUs: memory delta colour compression inside the GPU was a domain where Nvidia was much better than AMD, no chance for you. This is the reason GCN GPUs need so much memory bandwidth and Vega GPUs use HBM2. It improved on the Polaris side and continued to improve with RDNA. You need to learn a bit about PC GPU technology. ;) The PS3 had an Nvidia GPU. ;)

G70 is based on the CineFX architecture, which had so many problems with pipeline bubbles. And even G7x doesn't have twice the pixel shader output per pipeline. GCN 1.1 is so much better that the architectural improvements alone are on the same level as the raw compute performance improvement from PS4 -> PS5.
 
This is not how you count; the metric means nothing on its own. What is important is real-world performance, and here we see, depending on the title, 5 to 8 times more pixels, or in Spider-Man Remastered's performance mode, 7-8 times more pixels pushed.

I'm talking metrics because that's what we do every generation. Even MS and Sony themselves did so (four times the CPU, twice the GPU power of the One X, etc.). On a forum like this, there will be metrics tossed around.
I agree on real-world performance, but pushing 'between 5 to 8 times more pixels' doesn't really tell the whole story either. It doesn't flat out mean that there's an 8x improvement.

Aside from that, when a new generation of consoles arrives, it's usually the native games that really show off the capabilities, where graphical fidelity is upped. Shadow Fall did just that, and so does Demon's Souls, but the latter didn't show as huge a leap as SF did.

The 1080 Ti's TFLOPS are higher than the 2070 Super's or 2080's, but the GPU performance is behind. This is the same thing, and it is better to buy a Turing GPU, with or without raytracing. Stop your bullshit. What you say makes no sense at all; this is pure trolling.

What does a 1080 Ti vs. a 2070S have to do with what I said? You're talking about architectural improvements there; of course, TF for TF, Turing is more efficient than Pascal, it should be. I have no clue why you even need to bring that up.
Going from PS3 to PS4, or G70 to GCN, there were architectural improvements too. TF for TF, the PS4 has an advantage there as well.
Aside from that, keep things civil. You think the leap is as large as going from PS3 to 4. I think it's not. We agree to disagree; that's how forums work.

We will not rewrite history: the Jaguar is weak, and one of the reasons we had a regression in gameplay physics. It was not just stagnation; we had less physics in PS4 titles than in PS3/360 ones.

Jaguar is weak; I have never said it was a strong CPU for the time. What I am saying is that the performance of Jaguar relative to Cell wasn't all that bad: it was a four-times increase, not even talking efficiency. Which means the leap in CPU going from PS4 to 5 isn't that much different, despite the CPU being much better than the Jaguar. You can, perhaps, blame the Cell for that.

On G70 vs. GCN GPUs: memory delta colour compression inside the GPU was a domain where Nvidia was much better than AMD, no chance for you. This is the reason GCN GPUs need so much memory bandwidth and Vega GPUs use HBM2. You need to learn a bit about PC GPU technology. ;) The PS3 had an Nvidia GPU.

Lol, at least you keep it funny with the 'no chance for you' :) Anyway, NV had an advantage over AMD with G70, you say? Is that what you mean? Either way, I seriously doubt that, going from an Nvidia G70 (7800 GT derivative) to an AMD GCN 7870 derivative, memory efficiency wasn't improved at all, or was even behind.

Again, DF is not an absolute authority, this is a matter of taste, and the work on Demon's Souls* began three years ago on a GPU without raytracing acceleration, and Bluepoint did not have the time or the workforce to use it. Bluepoint is not GG or ND, they are a smaller studio. It would have been more interesting to compare GG's work on PS4 and PS5. But we can compare 2015/2016 titles like Uncharted 4 or TO1886 to 2022/2023 titles if you want; I am sure the gap will be wide.

DF is not an absolute authority, no, why would they be? They are, though, very highly regarded here. The work on Demon's Souls began three years ago, yeah, I can believe that. But the same would go for Shadow Fall; I doubt they made that game in half a year. The development time was probably quite close to that three-year development time you talk about.

GG wasn't back then what they are today either. Also, an advantage for PS4 to 5 is that we stay on the same AMD/x86 platform, whereas PS3 to PS4 went from NV to AMD and from Cell to out-of-order x86.

*IMO it is, along with Flight Simulator 2020 and Cyberpunk 2077, the best-looking title of 2020. TLOU2 would be the 4th best title, ahead of Spider-Man MM.

Yeah, I can agree with that, it's up there with those (Demon's Souls). Still, I'd rate CP2077 at number 1 and can see why Alex and John did.

not a single one ever came close to matching Crysis.

Sure about that? Uncharted 3 did look very nice graphically. A larger gap than DS to CP2077 on PC, perhaps, but don't forget that leaps have gotten smaller on PC too, though to a much lesser degree (hardware-wise). Blame power usage, I'd say.
 
There is only one place where we will see visual diminishing returns, and that is cutscenes. We can already model good characters, and the remaining work is in the details: some very visible, like hair rendering and peach fuzz, and some more subtle effects, like better subsurface scattering.

Slide 106
http://advances.realtimerendering.com/s2020/NaughtyDog_TechArt_TLOU2-final.pptx

Comparison between cutscene model and ground truth offline rendered model.



This will be an interesting generation leap. This is the first time many of our techniques are “correct” and can be carried over without rewriting; leaving us to focus on missing details (both in terms of geometry and rendering.) Even in this side-by-side image there are major elements missing, like peach fuzz, localized light bounce, and hair detail. There are also a few things that are present but “wrong”, like pore detail and specular response.

But that’s not a bad place to be. If we continue to fight for correctness and reusability, and healthy collaboration, the next generation of consoles and PCs should yield a massive improvement in quality.

I'm talking metrics because that's what we do every generation. Even MS and Sony themselves did so (four times the CPU, twice the GPU power of the One X, etc.). On a forum like this, there will be metrics tossed around.
I agree on real-world performance, but pushing 'between 5 to 8 times more pixels' doesn't really tell the whole story either. It doesn't flat out mean that there's an 8x improvement.

Aside from that, when a new generation of consoles arrives, it's usually the native games that really show off the capabilities, where graphical fidelity is upped. Shadow Fall did just that, and so does Demon's Souls, but the latter didn't show as huge a leap as SF did.



What does a 1080 Ti vs. a 2070S have to do with what I said? You're talking about architectural improvements there; of course, TF for TF, Turing is more efficient than Pascal, it should be. I have no clue why you even need to bring that up.
Going from PS3 to PS4, or G70 to GCN, there were architectural improvements too. TF for TF, the PS4 has an advantage there as well.
Aside from that, keep things civil. You think the leap is as large as going from PS3 to 4. I think it's not. We agree to disagree; that's how forums work.



Jaguar is weak; I have never said it was a strong CPU for the time. What I am saying is that the performance of Jaguar relative to Cell wasn't all that bad: it was a four-times increase, not even talking efficiency. Which means the leap in CPU going from PS4 to 5 isn't that much different, despite the CPU being much better than the Jaguar. You can, perhaps, blame the Cell for that.



Lol, at least you keep it funny with the 'no chance for you' :) Anyway, NV had an advantage over AMD with G70, you say? Is that what you mean? Either way, I seriously doubt that, going from an Nvidia G70 (7800 GT derivative) to an AMD GCN 7870 derivative, memory efficiency wasn't improved at all, or was even behind.



DF is not an absolute authority, no, why would they be? They are, though, very highly regarded here. The work on Demon's Souls began three years ago, yeah, I can believe that. But the same would go for Shadow Fall; I doubt they made that game in half a year. The development time was probably quite close to that three-year development time you talk about.

GG wasn't back then what they are today either. Also, an advantage for PS4 to 5 is that we stay on the same AMD/x86 platform, whereas PS3 to PS4 went from NV to AMD and from Cell to out-of-order x86.



Yeah, I can agree with that, it's up there with those (Demon's Souls). Still, I'd rate CP2077 at number 1 and can see why Alex and John did.



Sure about that? Uncharted 3 did look very nice graphically. A larger gap than DS to CP2077 on PC, perhaps, but don't forget that leaps have gotten smaller on PC too, though to a much lesser degree (hardware-wise). Blame power usage, I'd say.

This is exactly the same problem: you can't compare GCN 1.1 TFLOPS to RDNA 2 TFLOPS, just like you can't compare Pascal TFLOPS to Turing TFLOPS. Your comparison is bad. In the end the GPU is probably around 7 to 8 times more powerful than the 2013 PS4 without even pushing new features, and that tells a big part of the whole story. The other part of the story is new features, and this is an advantage for the PS5 GPU, not the PS4 GPU. ;)

And the same goes for @troyan: take the Cell SPUs + G70, and you had a compute-shader equivalent with the SPUs for some workloads on the vertex side, or you could do post-processing with the Cell SPUs too. ;)

Maybe you need to go back to some GDC documents on Cell SPU usage for graphics.
 
This is exactly the same problem: you can't compare GCN 1.1 TFLOPS to RDNA 2 TFLOPS, just like you can't compare Pascal TFLOPS to Turing TFLOPS. Your comparison is bad. In the end the GPU is probably around 7 to 8 times more powerful than the 2013

Yes. The same goes for G70 to GCN 1: you can't compare them TF for TF either. The fact remains that in TF metrics alone, the improvement was a 10x increase, as opposed to the 5x we got now. That's pure metrics. Factor in going from the G70 architecture to GCN 1.1 and the improvement is even larger than what we see from GCN to RDNA. If the PS5 GPU is 7 times more powerful than the PS4's, the PS4's GPU was close to 20 times more powerful. Architectural changes account for both.
And no, basing performance increases purely on resolution is faulty to begin with, especially considering last-gen games.

Maybe you need to go back

Maybe use common sense.

Dark Souls 3 to Demon's Souls Remake.

Still doesn't change what we got as the best visuals possible on PS5 at launch vs. the PS4 at launch. No matter what, if the leap were as big as Chris1515 promises, we surely should have seen it in a game natively designed around the PS5 (Bluepoint claims so). Especially if the development time was three years.
 
Yes. The same goes for G70 to GCN 1: you can't compare them TF for TF either. The fact remains that in TF metrics alone, the improvement was a 10x increase, as opposed to the 5x we got now. That's pure metrics. Factor in going from the G70 architecture to GCN 1.1 and the improvement is even larger than what we see from GCN to RDNA. If the PS5 GPU is 7 times more powerful than the PS4's, the PS4's GPU was close to 20 times more powerful. Architectural changes account for both.
And no, basing performance increases purely on resolution is faulty to begin with, especially considering last-gen games.



Maybe use common sense.



Still doesn't change what we got as the best visuals possible on PS5 at launch vs. the PS4 at launch. No matter what, if the leap were as big as Chris1515 promises, we surely should have seen it in a game natively designed around the PS5 (Bluepoint claims so). Especially if the development time was three years.

Sorry, I didn't understand. What was Chris's promise? I just came into the thread, read the title and replied. :LOL:
 