Next Generation Hardware Speculation with a Technical Spin [2018]

Bad example. Ultra settings are always very inefficient afterthoughts meant to put an extra cherry on top of a niche market's ice-cream. They are definitely not representative of what a game built for a 1080Ti from the ground up would look like. There were PS360-gen games that ran sluggishly on ultra settings on cards faster than the PS4, and still didn't look anywhere near as good as any PS4 exclusive.
Absolutely I agree.
That's due to technology, feature and architectural differences.
Compute shaders didn't exist back in DX9. It's like asking a PS4 to do reflections, refraction, transparencies and all sorts of soft shadows in real time... without RT capabilities...
Do you really think that throwing more power at the solution will solve it?
Would a 12TF Xbox 360 be able to generate better graphics than a X1X?

The irony of this discussion is that, sarcasm aside, this is the actual debate.
Are we not at the point of diminishing returns today, such that the only way to break the next graphical threshold is to move to RT?

Would we be satisfied with next gen only being higher resolution and higher frame rate than what we have today? More baked lighting? SVOGI at best?

Unlike the PC market, we're locking the generation in for 6 full years starting in 2020. I can really see this debate on both ends of the argument. Believe me, it's like the twirling ballerina: some people see her twirling right, others left.

Part of where I'm looking is not just the technical piece, which is what we get hung up on a lot here. I'm also looking at the business aspect: Nvidia, the partnership with MS, streaming tech, the PC space. MS still very much wants you to move to Windows 10. They very much want casuals who have low-resolution screens to experience next-gen gaming over streaming.

If I were MS, could I really sell 4K... twice? Many of you have already discussed to death that you felt 4K was a complete waste of power; drop the resolution and increase the quality per pixel, yet here we are...
 
Yeah, sure.

People have been talking about the $400 launch price, and how we can't expect to see it alongside a generational upgrade, when continuing to push 4K. I say the base model should be focused on 1440p-1800p, and the Pro model should be focused on 4K/dual gaming.

-- Base PS5 --

$350

- 8c/16t Zen2 CPU clocked at 3GHz
- 70CU Navi GPU clocked for 8-10TF
- 16GB GDDR6 memory
- 64GB NVME
- 1TB HDD
- UHD BR drive

-- PS5 Pro Duo --

$550

- 2 x 8c/16t CPU clocked at 3.8GHz
- 2 x 70CU Navi GPU clocked higher than the base model's for just over double the performance (think PS4 -> PS4Pro)
- 2 x 16GB GDDR6 memory
- 2 x 64GB NVME
- 2TB HDD
- UHD BR drive
- 2 x HDMI 2.1 ports, 2 x camera ports

That's based on the fact that the APU+memory in the PS4 came to something like $190, whereas this time I reckon it'll be more like $250.

They can enter the market aggressively to gather both cost conscious and performance conscious gamers, and they can also negate the risk of a competitor having more performance.

Base PS5 owners who subscribe to PSNow will be able to stream from the Pro, or play locally if preferred. PS4 owners, or PlayStation tablet owners, who subscribe to PSNow can stream from the PS5 Pro. When Sony's PSNow PS5 Pro blades are overloaded, any number of them can split into two.

Still think my original guess is the most likely out of the 2 tier approach:

Normal console @ £329:
4c/8t Zen at 3ghz
~8Tflops (3072 Alu x ~1300mhz)
16GB Ram
120GB ssd
1TB hdd

Top tier @ £499
4c/8t Zen at 3.8ghz
~16Tflops (4096 Alu x 1950mhz; see the quick sanity check on both TF figures after this list)
16GB - 24GB Ram
120GB ssd
2TB hdd
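Quick sanity check on both TF figures above, assuming the usual GCN/Navi-style 2 FLOPs (one fused multiply-add) per ALU per clock. This is just arithmetic on the numbers already quoted, not a claim about real hardware:

[CODE]
# Peak FP32 throughput, assuming 2 FLOPs (one fused multiply-add) per ALU per
# clock, as on GCN/Navi-class GPUs. Purely illustrative arithmetic.
def peak_tflops(alus, clock_mhz):
    return alus * 2 * clock_mhz * 1e6 / 1e12

print(peak_tflops(3072, 1300))   # ~7.99  -> the "~8Tflops" normal console
print(peak_tflops(4096, 1950))   # ~15.97 -> the "~16Tflops" top tier
[/CODE]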
 
Are we not at the point of diminishing returns today, such that the only way to break the next graphical threshold is to move to RT?

If I were MS, could I really sell 4K... twice? Many of you have already discussed to death that you felt 4K was a complete waste of power; drop the resolution and increase the quality per pixel, yet here we are...

So basically the X is 4K, and you don't see Microsoft releasing until they can do RT, as that seems to be the future and console generations are too short to miss the boat by a year?
 
Are we not at the point of diminishing returns today, such that the only way to break the next graphical threshold is to move to RT?
No. RT is definitely the next big step, but the current scope for compute and rasterisation is nowhere near peaked. Having the same tech now but with substantially more horsepower will enable plenty. There is a lot of progress with realtime global illumination without needing specialist raytracing hardware. First-gen realtime RT may be comparable to latest-gen optimised compute-based lighting, perhaps a little better in quality but at half the framerate and resolution. So it's a significant trade: true reflections and shadows at far lower fidelity, or 'good enough' shadow and reflection hacks at far greater fidelity. We need RT to get beyond its first/second/third generation (depending on what's being counted as a generation) for it to offer better results in every way. Raytraced shadows on compute are a thing already, for example. Throwing in all the tech advances you've mentioned that aren't tied to RT, there may be significant optimisations to be made further down the line too, enabling far better quality from the existing shader + rasterisation tech. It can also be argued that a move away from some of the fixed-function rasterising could be all that's necessary, moving the GPUs towards programmability (and so adapting towards raytracing as a result) without needing dedicated RT hardware.
 
So basically the X is 4K, and you don't see Microsoft releasing until they can do RT, as that seems to be the future and console generations are too short to miss the boat by a year?
I am leaning towards that, yes, though there is enough advancement in the non-RT space to proceed without it as well. But it's hard to sell 4K to someone who doesn't have a 4K TV and has no intention of upgrading their 1080p screen.

Even as an RT console, we're looking at 1080p RT quality, 1080p performance or 4K as display modes. Adding RT to a console does not limit its non-RT abilities.
 
No. RT is definitely the next big step, but the current scope for compute and rasterisation is nowhere near peaked.
Agreed, but what makes having a 14TF GPU with no RT better than an 11.5TF GPU with RT?
Hypothetical scenario here:
PS5 goes pure compute, pushes the latest in rasterization architecture, all silicon dedicated towards it, and hits, say, 15 TF.
X1 goes the RT route, pushes the latest in rasterization architecture, but can only hit 11 TF.

Are we saying that, because of this 4 TF difference (the 4 TF that X1 put towards RT hardware), it's incapable of producing what PS5 does, albeit at a lower resolution or slightly lowered settings? That doesn't make a lot of sense to me. We have a 30% power gap between Xbox and PS4 today, and all AAA titles run at roughly 30% lower resolution. Unless that 30% of power can translate into something X1 would otherwise be missing out on entirely, I don't see the debate here.

But if X1 used that RT hardware, I'm pretty confident that PS5 could not match Xbox settings without significantly penalizing resolution or frame rate.
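For what it's worth, that current-gen comparison roughly checks out if you assume pixel count scales about linearly with compute; a quick check with the public figures (XB1 ~1.31 TF, PS4 ~1.84 TF) and the common 900p/1080p AAA targets:

[CODE]
# Rough check: pixel count vs compute for the current-gen comparison above.
xb1_tf, ps4_tf = 1.31, 1.84          # public peak FP32 figures
xb1_pixels = 1600 * 900              # common 900p multiplatform target on XB1
ps4_pixels = 1920 * 1080             # common 1080p target on PS4

print(xb1_tf / ps4_tf)               # ~0.71 of the compute
print(xb1_pixels / ps4_pixels)       # ~0.69 of the pixels
[/CODE]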
 
I've never known new consoles to have trouble selling and to need some tech spiel to get people to buy them. If next-gen doesn't have raytracing, do you think millions of console gamers will give up their hobby even if the games look notably better?
Adding RT to a console does not limit its non-RT abilities.
But it does, and that's the problem. At least, the current implementation does. The more silicon you dedicate to raytracing, the less you have for shaders. If adding raytracing were free, it'd be a no-brainer. And there also lies the question of what can be done in compute, such as shadows and various hacky GI techniques that aren't as good as RT but might be good enough for the next gen until RTRT has matured.

Are we saying that, because of this 4 TF difference (the 4 TF that X1 put towards RT hardware), it's incapable of producing what PS5 does, albeit at a lower resolution or slightly lowered settings?
Going by the current demo, which obviously doesn't represent the absolute limit of what RT can do, but does provide some data rather than total guesswork, I think the problem is that RT can reach further than rasterisation but does so more slowly. Your hypothetical PS5 could not create the reflections of your X1 (X2?), but your X1 will be slower in creating the same content that the PS5 can create. So taking the Remedy RTX demo, imagine a GPU the size of an RTX 2080 which is all shaders, with no Tensor Cores or RT hardware. You're looking at roughly twice the potential shader power. It could render the same sort of game at 4K60, say, with faked reflections. If you choose instead the RTX 2080, you will get true reflections but can never reach beyond that 1080p30 because the RT hardware can't work any faster than that. To represent it with crude numbers, rasterising can get four times as much data on screen per second, even if it's rather hacky data. Back in the bad old days, offline rendering used rasterising because it was soooo much faster, and it was only when the workflow and push for realism exceeded rasterisation's ability to fake it that they made the transition. We're definitely not at the limit of game rasterising's ability to fake it.
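For scale, here is the raw pixel throughput of the two targets in that comparison. It only counts pixels per second, not what each pixel costs, which is presumably where the cruder "four times" figure comes from:

[CODE]
# Raw pixel rates of the two hypothetical targets above (ignores per-pixel cost).
def pixels_per_second(width, height, fps):
    return width * height * fps

raster_4k60 = pixels_per_second(3840, 2160, 60)   # ~498M pixels/s
rt_1080p30  = pixels_per_second(1920, 1080, 30)   # ~62M pixels/s
print(raster_4k60 / rt_1080p30)                   # 8.0x more raw pixels/s
[/CODE]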
 
If you choose instead the RTX 2080, you will get true reflections but can never reach beyond that 1080p30
Or
you can get the same ~ FauxK@60 without the RT hardware features enabled.

It's a choice.
Debating power at this point in time is a discussion of clarity, softness, or frame rate.
Discussing feature set is a discussion about enabling.
 
If you choose instead the RTX 2080, you will get true reflections but can never reach beyond that 1080p30 because the RT hardware can't work any faster than that.
The problem with this hypothesis is that it is based on the first crude wave of RTX demos; it ignores optimizations down the line, better hardware utilization, and even better combinations of hybrid rasterization that achieve higher IQ without too much brute-force RT. That 1080p30 could very well turn into 1800p30.
 
Would just the addition of tensor cores be useful? Since they're used to denoise and upscale, they presumably can be utilised with RT and non-RT games. If so, they could be added without burdening developers, who could deploy resources however they see fit - just traditional rasterising, or forms of hybrid rendering - as the ray tracing landscape settles down over a few years.
 
Would just the addition of tensor cores be useful? Since they're used to denoise and upscale, they presumably can be utilised with RT and non-RT games.
I think the answer is: it depends on what developers can access.
Take all this talk of simulation: if one could model it online, the tensor cores could receive the inputs and spit out a solid output. So for physics, cloth, water and so on, multiple scenarios could be modelled on a massive network, then packaged and sent down with the game for the console to run against.

Extremely complex AI could be modelled online and, once again, packaged and run against with the tensor cores.

So the answer is: it depends on what developers are allowed to do, what they are capable of doing, and what makes sense. No one knows, because the area is heavily researched, but not from a game developer's perspective.

Imagine drivatars, just far better implemented, for everything.
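As a rough illustration of the "model it online, package it, run it against the tensor cores" idea, here is a minimal sketch. Everything in it is hypothetical (file name, shapes, the drivatar-style use case); the point is only that the console-side work reduces to dense matrix math, which is exactly what tensor-core-like hardware, or plain compute, accelerates:

[CODE]
import numpy as np

# Hypothetical: weights trained offline on a server farm, shipped with the game.
class ShippedPolicy:
    def __init__(self, weight_file):
        data = np.load(weight_file)               # e.g. "drivatar_weights.npz" (invented name)
        self.w1, self.b1 = data["w1"], data["b1"]
        self.w2, self.b2 = data["w2"], data["b2"]

    def act(self, game_state):
        # Two dense layers: the kind of matrix math that tensor-core-style
        # hardware (or ordinary compute shaders) chews through.
        h = np.maximum(game_state @ self.w1 + self.b1, 0.0)   # hidden layer, ReLU
        return h @ self.w2 + self.b2                          # e.g. steering + throttle

# Usage sketch: a 32-float state vector in, 2 control values out.
# policy = ShippedPolicy("drivatar_weights.npz")
# controls = policy.act(np.random.rand(32).astype(np.float32))
[/CODE]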
 
Please don't shoot me for being a dumbass (be it a stupid question, or it's already been covered and I missed it/forgot), but I recall a recent demo where the game was being rendered locally with very basic gfx and the fancy part was rendered in the cloud, with the images lining up to produce a kind of hybrid streaming solution.

So, could RT be the bit in the cloud?
 
So, could RT be the bit in the cloud?
I guess if you're already doing that type of mixed rendering, that would work.
 
So for physics, cloth, water and so on, multiple scenarios could be modelled on a massive network, then packaged and sent down with the game for the console to run against.

Could it mean, in theory, that games could keep patching themselves to run and look better? :runaway:

Both the PS4 and Xbox One have video recording. Could that be expanded upon, in order to let an ML algorithm go over recordings, coupled with e.g. the AA data from the time of recording, and improve said AA in future?

Parsing recordings could be an OS feature, but flexibility seems key, so it's probably best to expose as much as possible for developers to tinker with. And letting developers monkey around with hardware that's likely (assuming it is) to make an appearance in any future RTRT hardware solution seems like a good way of getting tools and techniques in place prior to widespread deployment.
 
Both the PS4 and Xbox One have video recording. Could that be expanded upon, in order to let an ML algorithm go over recordings, coupled with e.g. the AA data from the time of recording, and improve said AA in future?
From what I understand, the recordings they use online to model against are at a super high resolution. So you'll get more return out of using the DLSS models that ship with the title.
 
From what I understand, the recordings they use online to model against are at a super high resolution. So you'll get more return out of using the DLSS models that ship with the title.

For console, I think it would make far more sense to make tools available to game makers to generate their own DLSS profiles so they can ship them as part of the package.

MS have an absolutely insane amount of server power to make available to licensed console developers, if a developer doesn't have the resources themselves.

Driver black magic is not ideal for anyone other than the black magic provider / dealer.
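A minimal sketch of what the data-prep half of such a per-title tool might look like: the studio captures super-sampled frames, derives (low-res, high-res) pairs from them, and the actual model training happens on the publisher's or platform holder's servers. The function name and the 4x factor are invented for illustration:

[CODE]
import numpy as np

def make_training_pair(hi_res_frame, factor=4):
    """hi_res_frame: HxWx3 float array captured at super-sampled resolution."""
    h, w, c = hi_res_frame.shape
    h, w = h - h % factor, w - w % factor
    hi = hi_res_frame[:h, :w]
    # Box-filter downsample stands in for the game's real low-resolution output.
    lo = hi.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))
    return lo, hi    # network input, reconstruction target

# frames = capture_supersampled_frames()              # hypothetical capture step
# pairs = [make_training_pair(f) for f in frames]     # uploaded to the training farm
[/CODE]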
 
Or
you can get the same ~ FauxK@60 without the RT hardware features enabled.
But you can't. If it takes 14 TF to produce a FauxK@60 and you have a 10 TF part with additional RT, you can't render the same quality/framerate. The RT silicon can't be repurposed to help with rasterising.

Debating power at this point in time is discussion of clarity, softness, or frame rate.
Discussing feature set is a discussion about enabling.
It's not as black and white as that. We already have raytraced shadows on compute - see Claybook on a 1.4 TF XB1. The better the rasterising hardware becomes, the more versatile it becomes and the more blurred the feature sets become.

The problem with this hypothesis is that it is based on the first crude wave of RTX demos, it ignores optimizations down the line, it ignores better hardware utilization, or even better combinations of hybrid rasterization that achieves higher IQ without too much brute force RT. That 1080p30 could very well turn into 1800p30.
Well sure. What other data are we supposed to use? "I propose next gen incorporates RTX because I imagine performance will quadruple between the demos and what'll actually be achieved"? :-?
 
For console, I think it would make far more sense to make tools available to game makers to generate their own DLSS profiles so they can ship them as part of the package.
I believe this is what is done today, or at least that's how I assume the process works for Nvidia today.
 
But you can't. If it takes 14 TF to produce a FauxK@60 and you have a 10 TF part with additional RT, you can't render the same quality/framerate. The RT silicon can't be repurposed to help with rasterising.
I would question that possibility as a theoretical limit, or as a case in point where a developer purposely decided to make a 14TF game run at 1080p@30fps. I doubt there is that much quality per pixel to be had using fast and efficient approximations, with each approximation getting you closer to a ray-traced solution but without the drawbacks of ray tracing. There is a fallacy in that logic.

I'm almost certain that, at that much quality per pixel, you're running up against what ray-tracing hardware is meant to do properly, only at significantly worse accuracy or performance.

It's not as black and white as that. We already have raytraced shadows on compute - see Claybook on a 1.4 TF XB1. The better the rasterising hardware becomes, the more versatile it becomes and the more blurred the feature sets become.
:-?
SDF is the big enabler there. But SDF isn't the right technology for the majority of AAA games out there.
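For anyone curious what the SDF trick actually looks like, here is a minimal sketch of sphere-traced soft shadows against a signed distance field, the Claybook-style approach that runs on general-purpose compute with no RT hardware. The scene (a single sphere) and constants are invented; a real shader would do the same per pixel in HLSL:

[CODE]
import numpy as np

def scene_sdf(p):
    # A unit sphere at the origin stands in for the whole scene's distance field.
    return np.linalg.norm(p) - 1.0

def soft_shadow(origin, light_dir, k=8.0, t_max=20.0):
    """Returns 0 (fully shadowed) .. 1 (fully lit)."""
    shade, t = 1.0, 0.02
    while t < t_max:
        d = scene_sdf(origin + t * light_dir)
        if d < 1e-4:
            return 0.0                    # ray hit geometry: full shadow
        shade = min(shade, k * d / t)     # near-misses darken the penumbra
        t += d                            # sphere-tracing step
    return shade

# A point beside the sphere with the light straight overhead lands in the
# penumbra, so this prints a partial shadow value of roughly 0.5.
p = np.array([1.1, -1.5, 0.0])
l = np.array([0.0, 1.0, 0.0])
print(soft_shadow(p, l))
[/CODE]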
 
For console, I think it would make far more sense to make tools available to game makers to generate their own DLSS profiles so they can ship them as part of the package.

To go further, there's no reason the AI-derived algorithm couldn't target standard compute shader specs rather than requiring Tensor cores. DLSS is designed specifically to address Nvidia's business needs.
 