General Next Generation Rumors and Discussions [Post GDC 2020]

I don't need native 4K or ray tracing; just give me 1600-1800p with CBR to 4K, use the extra power for eye candy, then finish off at a stable 30fps and looking like that Rebirth demo. If Horizon 2's visuals could hit that mark, I'll be super content.
Because that is an in-your-face "holy shit, next gen is here" moment no matter how you slice it.
But of course, if you can chuck in some ray tracing while maintaining that asset quality, those effects, the scale, res, etc., then by all means use ray tracing. But I highly suspect it really comes down to one or the other.
 

If you don't want any dynamic anything in your game, I think developers could pull this off. It's just massive front-loaded work, resulting in big budgets for I/O streaming and memory footprint, and heavy use of hard drive space.

I don't think you understand the importance of 4K or ray tracing when you say both are not needed. You need to understand textures and how they are presented.
First and foremost:
a) 4K resolution

Texture sizes and the technology around them are based on powers of 2. So 4K textures are 2^12 (4096) texels per side, 8K are 2^13 (8192), and 16K are 2^14 (16384). These are the resolution sizes.

Our compute shaders are now responsible for building the mips for textures, from MIP 0 (LOD 0), which is the full 2^12 texels per side, working their way down to MIP 12, which is 2^0, i.e. 1 texel. So each MIP drops one power of 2.
Put another way, each step up the MIP chain halves the resolution you can visually see along each axis.
So MIP 0 begins right in front of you, and the MIP level increases the further out you look, until the texture falls below 2^0, i.e. less than 1 pixel on screen.

If your display resolution is not capable of presenting high-resolution textures, then at LOD 0 you can only see, at best, your native resolution's worth of pixels. Even at the drop-off to MIP 1, which is 2^11, you won't be able to see any discernible difference, because the texel count is still higher than your screen resolution.

That means at MIP 0 and MIP 1, depending on how the texture is wrapped (4K is not a big texture if you're using one texture for the entire world), on, say, a pot, chair or gun you won't see any difference from a 4K texture.

Whereas someone at 4K resolution will see a massive difference all the way down the chain.
At MIP 2, someone at 4K resolution sees roughly what a 1080p player would see at MIP 0, but they're seeing it from much further out, and so forth.

So if you want gorgeous landscapes, having two extra levels of discernible texture detail further out is paramount to image quality.
Perhaps the only reason you haven't seen it is that you haven't seen proper 4K with HDR coupled with large 4K textures, so your attention is elsewhere, while mine is on how much detail can be seen.
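To put rough numbers on the mip argument above, here's a small Python sketch of my own (not from any engine; the texture size, display heights and full-screen coverage are assumptions for illustration) that walks the mip chain of a 4096x4096 texture and estimates the finest mip a given display height can actually resolve:

```python
# Illustrative mip-chain arithmetic for a square 4K (4096x4096) texture.
# Numbers are for intuition only, not taken from any particular engine.
import math

def mip_chain(base_size=4096):
    """Yield (mip_level, texels_per_side) pairs down to 1x1.
    Each mip halves the resolution along each axis."""
    level, size = 0, base_size
    while size >= 1:
        yield level, size
        level, size = level + 1, size // 2

def finest_useful_mip(texture_size, screen_height, screen_coverage=1.0):
    """Roughly the finest mip a display can resolve when the texture spans
    `screen_coverage` of the screen height (1.0 = fills the screen).
    Finer mips hold more texels than the screen has pixels to show them."""
    pixels_available = screen_height * screen_coverage
    return max(0, math.ceil(math.log2(texture_size / pixels_available)))

if __name__ == "__main__":
    for level, size in mip_chain(4096):
        print(f"MIP {level:2d}: {size}x{size}")
    # A screen-filling 4K texture: 1080p only resolves down to ~MIP 2,
    # while a 2160p display resolves roughly one level finer.
    print("1080p finest useful MIP:", finest_useful_mip(4096, 1080))  # -> 2
    print("2160p finest useful MIP:", finest_useful_mip(4096, 2160))  # -> 1
```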

b) 60 frames per second creates a significant amount of additional motion clarity that you don't get at 30fps, and this helps stabilize all those small details you otherwise won't be able to see at 4K.

c) Ray tracing allows for what you see in Rebirth, but dynamically. And dynamic will be the future of next gen, as this generation is all baked. We can bake all sorts of lighting and light maps, but it's not going to work as well as a completely dynamic image in which everything works together. So I think another 7 years of purely static lighting at sub-4K resolution is really just going to extend the graphics we have today.
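As a toy illustration of the baked-versus-dynamic point (a sketch of my own; the scene, light and occluder positions are made up), the baked path just looks up a visibility value that was computed offline, while the dynamic path traces a shadow ray against the scene as it is this frame, so it reacts when something moves:

```python
# Toy contrast between baked and dynamic shadowing (hypothetical scene).
# Baked: visibility was precomputed offline and cannot react to an occluder
# that has moved since the bake. Dynamic: a shadow ray is traced against the
# scene as it exists this frame.
import math

def ray_hits_sphere(origin, direction, center, radius):
    """True if the ray origin + t*direction (t > 0, direction normalized) hits the sphere."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    return (-b - math.sqrt(disc)) / 2.0 > 1e-4

def dynamic_shadowed(point, light_pos, occluders):
    """Trace a shadow ray from the shaded point toward the light."""
    d = [light_pos[i] - point[i] for i in range(3)]
    length = math.sqrt(sum(x * x for x in d))
    d = [x / length for x in d]
    return any(ray_hits_sphere(point, d, center, radius) for center, radius in occluders)

# Visibility baked when the occluder was somewhere else: the lightmap still says "lit".
baked_lightmap = {("floor", 5, 5): 1.0}

point = (5.0, 0.0, 5.0)
light = (5.0, 10.0, 5.0)
occluders = [((5.0, 5.0, 5.0), 1.0)]  # the sphere has since moved between point and light

print("baked says lit:  ", baked_lightmap[("floor", 5, 5)] > 0.5)          # True (stale)
print("dynamic says lit:", not dynamic_shadowed(point, light, occluders))  # False (correct)
```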

It seems to me, though, that what you're most wowed by is not graphics from a technical perspective, but the production level of graphics. You really love high-production stuff.

And certainly high-production stuff will almost always surpass low-production stuff. But once the technologies are working in harmony, ray tracing combined with high production values, I don't think you're going to want to just throw away 4K and ray tracing. There's going to be a notable lift in graphics fidelity that you can't make up with 1080p and static baked lighting.

Having said that, most people need to find a way to separate technology from production values when it comes to graphics.

I think, if you're really serious about what SSD can do for graphics and gameplay, throwing away those massive textures by pairing them with super low resolution doesn't make a lot of sense.
 
I don't need native 4K or ray tracing; just give me 1600-1800p with CBR to 4K, use the extra power for eye candy, then finish off at a stable 30fps and looking like that Rebirth demo. If Horizon 2's visuals could hit that mark, I'll be super content.
Because that is an in-your-face "holy shit, next gen is here" moment no matter how you slice it.
But of course, if you can chuck in some ray tracing while maintaining that asset quality, those effects, the scale, res, etc., then by all means use ray tracing. But I highly suspect it really comes down to one or the other.

Or this


[image]


 
I think native 4K is a waste. You'll gain some clarity in textures and have less visible aliasing, but at huge expense to complex lighting models and complex image processing. Upscaling or reconstruction from somewhere around 1440p will be good enough, assuming improvements are made in image reconstruction and we're not just doing basic upscaling with TAA applied on top. I know the light sensitivity of the eye is different from our ability to distinguish pixels, so even if the pixels are small enough that you can't make out their individual shape, there's still a benefit to adding more pixels because they can each have their own luminance value. It's just a question of where you draw the line and what the cost is.
 
I don't need native 4K or ray tracing; just give me 1600-1800p with CBR to 4K, use the extra power for eye candy, then finish off at a stable 30fps and looking like that Rebirth demo. If Horizon 2's visuals could hit that mark, I'll be super content.
Because that is an in-your-face "holy shit, next gen is here" moment no matter how you slice it.
But of course, if you can chuck in some ray tracing while maintaining that asset quality, those effects, the scale, res, etc., then by all means use ray tracing. But I highly suspect it really comes down to one or the other.

Meh. 60fps minimum or it didn’t happen.
 
The big jump from last gen to this gen wasn't resolution so much as the introduction of the PBR pipeline. Combine that with higher resolution and people were satisfied.

This coming next gen, without ray tracing we're not going to build a lot of distance from this gen. Perhaps that builds a very strong argument for compute shaders being as flexible as they are. 4K is already there with the mid-gen consoles, leaving 60fps as the largest differentiator. And anything in the 1080p range without ray tracing is effectively the same as this gen with respect to graphics.
Audio is going to be the hardest thing to market; it's not going to happen in a sizzle reel.

SSD performance is hard to market too; hard to showcase in a sizzle reel as well.

This is where MS fell flat: the best things about next gen are things people can't put their finger on.

This happened last gen as well; some people couldn't see a big difference at launch. We're in for a repeat for a couple of years.

I still disagree. Powerful CPUs (people seem to be ignoring this now, when it's been the politically correct thing for ages), SSDs (huge game changer, and puts consoles above a base gaming PC in a major spec for the first time in forever), 2080-level GPUs, 16GB of GDDR: I think these are monsters. I'm more excited about this console jump than any in a while, because on top of the normal CPU/GPU/RAM increases we're adding SSDs into the mix. That's not even to mention ray tracing, though I wonder if these machines are really powerful enough to do justice to that....

But again, watching a cross-platform/cross-gen game like AC on YouTube or a stream, it might not come across. If you watch a YT comparison of a 360 game against the current-gen version, you'll often be left thinking the 360 version is 80-90% as good. But in reality, on your own display, the difference is much larger.

Now granted, diminishing returns are a thing, and IMO they really hit with the PS4/One generation. That's not going away.

I'm sure Sony will go all out to wow us with footage of their PS5-only exclusives, possibly soon, and that's where you might start to see major differences.
 
Sure, I don't disagree, but that's something we know as avid enthusiasts of technology.
It's difficult to explain to someone who knows nothing, and can only watch trailers, what to look for.
Compute shaders can do absolutely everything; from a feature-level perspective everything can be done via compute shaders, but the performance may not be there.

Most of what you write about, though, is difficult for someone to understand in a sizzle reel.
While I know the biggest differences, the reality is that most people won't understand why hardware will hold back a game. At a certain point, a game can be presented in many different ways. You can present a title like Dark Souls, where there are few enemies to combat but each battle can cost you your life. Or you can present it as killing unlimited hordes of enemies. Or you can have smart AI, etc. People aren't going to get why the CPU and SSD would hold back level design or creative vision, because we only see the end product, not what the product could have been.

As for graphical differences, sadly most people can't tell the difference between low/medium settings vs high/ultra.

Some folks can't tell resolution differences beyond 1080p.

Some folks can't tell the differences in HDR.

Some folks can't tell what is 60fps.

And with the mid-gen consoles here for 4-odd years before launch, we already got most of those visual upgrades. It's certainly going to be hard to differentiate.
 
Even though I'm older, I really don't want to sit on the couch in front of the big screen for hours to game.

I'll probably get the PS5 at some point, though my PS4 is not getting much use right now.

I think it's possible there will be a new paradigm, gaming on a second screen as opposed to an immersive audio/visual experience in front of a big screen with a surround sound system.


We will see if Nintendo doubles down on the Switch by releasing a next-gen version, and whether it gains similar sales momentum. Not saying it will outsell the other consoles, but it will be interesting if it carves out a similarly big market share.

We see that kids are already trained to watch longer video content on their phones rather than big screens. Could be that they're being similarly trained to play games on devices with smaller screens.
 

Reminds me of "we are mandating 720p on 360".

I honestly find these kinds of claims annoying. Devs have their own priorities that these hardware manufacturers can't account for. And that's how it should be. If a dev wants to prioritize a higher fps, or sacrifice fps to spend CPU power on other things, that should be the standard on console.

Nobody should be in the business of forcing devs to do anything, especially as they push the hardware with things like RT as the gen goes on.
 
SSDs (huge game changer, and puts consoles above a base gaming PC in a major spec for the first time in forever)

I'm not sure this is the case. Even as recently as the PS4 generation, the consoles were touted as being a step above all PCs and capable of feats PCs wouldn't be able to match, thanks to HSA.

Then going back a generation earlier, Xenos was arguably more powerful, and certainly more advanced, than any GPU on the market at the time of its launch, while Cell left all PC CPUs in the dust in some respects (not all, of course).

Going back a further generation, the original Xbox also had a more powerful GPU than anything available on PC at the time of its launch.

This new generation feels like it stands up to PCs better than the PS4 generation did, but I'd say it's not without precedent. And while the SSDs will certainly put these consoles ahead of the majority of gaming PCs on the storage front at the time of launch, that's pretty much always been the case with a new generation of consoles, i.e. you generally need a pretty high-end PC to beat or even maintain parity with the consoles at the point of their launch (if that's possible at all).
 
@iroboto I think highly cleaned up 1080p that has no aliasing and isn't blurred could actually look very good without having to push for 4k rendering. Then the higher you push the resolution the better it'll look. I'm hoping for improvements to temporal anti-aliasing that can clean up the image. I think Morgan McGuire was part of a team that was working on selectively super-sampling blurred images with ray-tracing. There are hopefully ways that we can push lower resolution to look more like a 1080p film than a current 1080p game. Now obviously you can push that resolution higher than 1080p, but there's some middle ground trade-off for performance between 1080p and 4k.
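For readers wondering what "temporal" reconstruction actually does, here's a minimal sketch (my own, using NumPy; the stand-in renderer, sizes and blend factor are assumptions) of the exponential history blend at the core of TAA-style techniques. Real implementations also reproject the history with motion vectors and clamp it against a local neighborhood to reject stale samples, which this leaves out:

```python
# Minimal sketch of TAA-style temporal accumulation: blend each new jittered
# frame into a running history so noise/aliasing averages out over time.
# Omits reprojection and history clamping; values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def render_frame(width, height, jitter):
    """Stand-in renderer: a gradient plus per-frame noise, mimicking shading
    aliasing that changes with the subpixel jitter offset."""
    x = np.linspace(0.0, 1.0, width)[None, :] + jitter
    base = np.tile(x, (height, 1))
    return np.clip(base + rng.normal(0.0, 0.05, (height, width)), 0.0, 1.0)

def taa_accumulate(num_frames=32, width=64, height=36, alpha=0.1):
    history = None
    for _ in range(num_frames):
        current = render_frame(width, height, jitter=rng.uniform(-0.01, 0.01))
        # Exponential moving average: keep most of the history, blend in a
        # little of the current frame each iteration.
        history = current if history is None else (1 - alpha) * history + alpha * current
    return history

if __name__ == "__main__":
    clean = np.tile(np.linspace(0.0, 1.0, 64)[None, :], (36, 1))
    single = render_frame(64, 36, 0.0)
    resolved = taa_accumulate()
    # Error against the noise-free gradient is noticeably lower after accumulation.
    print("single-frame error:", float(np.abs(single - clean).mean()))
    print("accumulated error: ", float(np.abs(resolved - clean).mean()))
```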
 
Indeed, you don't need native 4K to have a 4K experience. I'm totally fine with varying methods of reconstruction, DLSS, etc. I would just be disappointed by a generation of low-resolution assets when the goal of the generation is to unshackle the I/O limitations.
 
I'm not sure this is the case. Even as recently as the PS4 generation, the consoles were touted as being a step above all PCs and capable of feats PCs wouldn't be able to match, thanks to HSA.

Then going back a generation earlier, Xenos was arguably more powerful, and certainly more advanced, than any GPU on the market at the time of its launch, while Cell left all PC CPUs in the dust in some respects (not all, of course).

Going back a further generation, the original Xbox also had a more powerful GPU than anything available on PC at the time of its launch.

This new generation feels like it stands up to PCs better than the PS4 generation did, but I'd say it's not without precedent. And while the SSDs will certainly put these consoles ahead of the majority of gaming PCs on the storage front at the time of launch, that's pretty much always been the case with a new generation of consoles, i.e. you generally need a pretty high-end PC to beat or even maintain parity with the consoles at the point of their launch (if that's possible at all).

Consoles were more of a match for PCs back then than they are nowadays, yes. XSX/PS5 will be rather mid-range compared to, let's say, a 3080 Ti, a 12-core Zen 3 and an Optane drive. Very expensive, though.

Dunno about Cell for gaming, btw. I think an Intel quad was much better at most things except specialized GPU-like tasks, like Folding@home, but there GPUs like the X1900 shone even more. I remember reading a discussion about it here somewhere back then (or was it PVC forums?). A Q6600 was better for gaming; a quad core at 2.4GHz, I think it was. Cell was doing GPU things too.
The OG Xbox was a hybrid GF3/4; the closest was probably a Ti 500 with an added vertex shader, but with less speed and bandwidth than a Ti 500, not to forget memory. Anyway, close to its launch, or even before it (in the EU), the Ti 4600 was already here. I'm almost forgetting the R8500, which was available before the OG Xbox and, on specs at least, much more powerful. I think that GPU aged quite well. But then the rest of the Xbox wasn't really up to it: a P3 733 and 64MB of RAM... :p
Historically, if consoles were able to match the PC on some component, it was for a very short amount of time.
For something like the PS2, it was a totally different architecture; even to this day it's hard to compare it to anything else of its time.
 

Agreed on pretty much everything, but don't forget the X1900 XTX didn't release until 2 months after the Xbox 360, and while clearly much more powerful, it still lagged behind in feature set by quite a bit.
 
People be looking at what is equivalent to Black Flag on PS4 and saying "cancel next gen!"

Really silly stuff. Graphics are already super good by this point, but this new hardware still has a lot of more subtle things to offer from a variety of perspectives, especially after the cross-gen grace period.
 
Reminds me of "we are mandating 720p on 360".

I honestly find these kinds of claims annoying. Devs have their own priorities that these hardware manufacturers can't account for. And that's how it should be. If a dev wants to prioritize a higher fps, or sacrifice fps to spend CPU power on other things, that should be the standard on console.

Nobody should be in the business of forcing devs to do anything, especially as they push the hardware with things like RT as the gen goes on.

If you read the article that Iroboto linked right below that:

Microsoft has talked about the framerates Series X will enable. But are you saying the Xbox Series X effectively ends sub 60 frames per second games, either from Xbox itself or from third-parties?

Jason Ronald:
I wouldn't say it ends it, but now the creative control is in the developers' hands. Ultimately, we view resolution and framerate as a creative decision. Sometimes, from a pure gameplay aspect, 30 is the right creative decision they can make. But in previous generations, sometimes you had to sacrifice framerate for resolution. With this next generation, now it's completely within the developers' control. And even if you're building a competitive game, or an esports game, or a twitch fighter or first-person shooter, 60 frames is not the ceiling anymore. As we've seen on PC and other ecosystems, ultra high framerates and ultra low latency input, that is the precision they prefer to prioritise. So we've designed the system to put that creative control in developers hands.

It's still in the developer's hands. No one is forcing anyone to do 60 FPS. But doing 60 FPS is going to be so much easier that I'm willing to bet most developers will be able to hit 60 FPS.

So, if a dev really wants to do 30 FPS (barf), they can. If they want to do 60, they can. If they want to do 120, they can.

Regards,
SB
 
Mesh shaders / primitive shaders should allow for much better LOD management and smooth transitions, much better than the adaptive tessellation this gen, which sadly wasn't used much by games. I only remember noticing it in Battlefront 1, Battlefield 1 and Just Cause 4.

You can see it used here on the tree trunks. I like this tech because it doesn't swap brutally to a higher LOD; it's done smoothly.
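The "smooth" part can be thought of as picking a fractional LOD from distance and morphing between the two nearest discrete levels instead of hard-switching, the kind of per-cluster decision mesh/primitive shaders make cheap to do on the GPU. A hypothetical sketch (distances, thresholds and LOD counts are made up for illustration):

```python
# Hypothetical continuous LOD selection: rather than snapping to a discrete
# LOD at fixed distances (visible pops), compute a fractional LOD from the
# distance and blend/morph between the two nearest levels. Numbers are
# illustrative, not from any engine.
import math

NUM_LODS = 5          # assumed number of discrete LODs per mesh
LOD0_DISTANCE = 10.0  # assumed distance (meters) out to which LOD 0 is used

def fractional_lod(distance):
    """LOD 0 out to LOD0_DISTANCE, then one level per doubling of distance."""
    if distance <= LOD0_DISTANCE:
        return 0.0
    return min(NUM_LODS - 1.0, math.log2(distance / LOD0_DISTANCE))

def lod_blend(distance):
    """Return (fine_lod, coarse_lod, weight_toward_coarse) for smooth morphing."""
    f = fractional_lod(distance)
    fine = int(math.floor(f))
    coarse = min(fine + 1, NUM_LODS - 1)
    return fine, coarse, f - fine

if __name__ == "__main__":
    for d in (5, 15, 40, 160, 640):
        fine, coarse, w = lod_blend(d)
        print(f"distance {d:4d}m -> morph {w:.2f} of the way from LOD {fine} to LOD {coarse}")
```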

 
If you read the article that Iroboto linked right below that:



It's still in the developer's hands. No one is forcing anyone to do 60 FPS. But doing 60 FPS is going to be so much easier that I'm willing to bet most developers will be able to hit 60 FPS.

So, if a dev really wants to do 30 FPS (barf), they can. If they want to do 60, they can. If they want to do 120, they can.

Regards,
SB

Technically the system will output 60/120fps at all times (as the current gen does), in a similar vein to the "all games are output at 1080p" Xbox One tweet. Did they not review the prior launch to catch these easy traps they keep walking into?

Glad there was a more detailed answer out there.

Everything 60fps minimum: it's a wonderful debate that comes back every generation, but it's an illusion to believe this time is going to be any different. Literally nothing has changed in that respect.

Hardware capacity aside, this time halving the resolution (if we assume 4K targets) is still a very viable output, and one that is still probably larger than a good many of the displays in use.

We also saw performance modes on the enhanced current-gen consoles.

Let's be hopeful the stars are aligning.
 
If you don't want any dynamic anything in your game, I think developers could pull this off. It's just massive front-loaded work, resulting in big budgets for I/O streaming and memory footprint, and heavy use of hard drive space.

I don't think you understand the importance of 4K or ray tracing when you say both are not needed. You need to understand textures and how they are presented.
First and foremost:
a) 4K resolution

Texture sizes and the technology around them are based on powers of 2. So 4K textures are 2^12 (4096) texels per side, 8K are 2^13 (8192), and 16K are 2^14 (16384). These are the resolution sizes.

Our compute shaders are now responsible for building the mips for textures, from MIP 0 (LOD 0), which is the full 2^12 texels per side, working their way down to MIP 12, which is 2^0, i.e. 1 texel. So each MIP drops one power of 2.
Put another way, each step up the MIP chain halves the resolution you can visually see along each axis.
So MIP 0 begins right in front of you, and the MIP level increases the further out you look, until the texture falls below 2^0, i.e. less than 1 pixel on screen.

If your display resolution is not capable of presenting high-resolution textures, then at LOD 0 you can only see, at best, your native resolution's worth of pixels. Even at the drop-off to MIP 1, which is 2^11, you won't be able to see any discernible difference, because the texel count is still higher than your screen resolution.

That means at MIP 0 and MIP 1, depending on how the texture is wrapped (4K is not a big texture if you're using one texture for the entire world), on, say, a pot, chair or gun you won't see any difference from a 4K texture.

Whereas someone at 4K resolution will see a massive difference all the way down the chain.
At MIP 2, someone at 4K resolution sees roughly what a 1080p player would see at MIP 0, but they're seeing it from much further out, and so forth.

So if you want gorgeous landscapes, having two extra levels of discernible texture detail further out is paramount to image quality.
Perhaps the only reason you haven't seen it is that you haven't seen proper 4K with HDR coupled with large 4K textures, so your attention is elsewhere, while mine is on how much detail can be seen.

b) 60 frames per second creates a significant amount of additional motion clarity that you don't get at 30fps, and this helps stabilize all those small details you otherwise won't be able to see at 4K.

c) Ray tracing allows for what you see in Rebirth, but dynamically. And dynamic will be the future of next gen, as this generation is all baked. We can bake all sorts of lighting and light maps, but it's not going to work as well as a completely dynamic image in which everything works together. So I think another 7 years of purely static lighting at sub-4K resolution is really just going to extend the graphics we have today.

It seems to me, though, that what you're most wowed by is not graphics from a technical perspective, but the production level of graphics. You really love high-production stuff.

And certainly high-production stuff will almost always surpass low-production stuff. But once the technologies are working in harmony, ray tracing combined with high production values, I don't think you're going to want to just throw away 4K and ray tracing. There's going to be a notable lift in graphics fidelity that you can't make up with 1080p and static baked lighting.

Having said that, most people need to find a way to separate technology from production values when it comes to graphics.

I think, if you're really serious about what SSD can do for graphics and gameplay, throwing away those massive textures by pairing them with super low resolution doesn't make a lot of sense.
You don't need native 4K to see all that; reconstruction from a 1600-1800p base to 4K is good enough with 4K textures, as has been proven time and time again, except in half-hearted efforts like RDR2 on the Pro. And I don't just simply love high-production stuff; it's the sum of all parts and a perfect balance of things. You can't afford to have everything in the console space, but some sacrifices are more bang for the buck than others. RT and native 4K are not the ones that scream "omg next gen" most of the time; you pretty much need an on/off Digital Foundry dissection to see them. Control is currently the poster child of ray tracing, isn't it? Yet it looks like a PS3 game next to this Rebirth demo, even if we render the demo at 1080p/30fps vs 4K/60fps for Control.
In short, ray tracing, native 4K and 60fps are not what make a game look next gen. It's a combination of other things: tens or hundreds of millions of polygons of environment detail, fluid sim, GPU particles, extensive use of photogrammetry, high-fidelity character models, motion-matched animation, high-res textures of great variety. Put all of that in a voxel cone tracing global illumination renderer with SSR, cubemaps, etc., and you'll have a game looking a million times better than a 4K/60fps ray-traced basic bitch that hardly goes above what the current gen can offer :). But of course, when you have a 20TF Titan X, then all shall be rendered to the last giga ray.
 