Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

What I mean by that is: I strongly believe the kind of gamer Joe who waits until the PS5 reaches $299 to buy it would still not buy it at launch even if a $299 SKU were available then. This dude simply is not in a rush to buy the new cutting-edge toy. He'd rather wait for the thing to get more must-have software and hope they drop the price further to motivate him even more.
I wish Sony and/or MS shared this view. They could even open up the Pro/X for games that no longer have to run on the base models as well. I doubt this will happen, but it would be fairer to customers and would limit the risk of them switching vendor.

The question 'can we assume GPU requirements scale with screen pixels?' is a difficult one. It's easy to say yes looking at current games, but the picture changes when you consider promising tech like variable rate shading or object space shading.
If there is a 1:4 ratio between the HD and 4K models, those techniques become less attractive to pursue. So I would want something like 1:2.5 maybe, which seems a bit extreme, but 1:3 is already quite a big gap.

In general I do not like having different console models at launch. Too confusing, too much danger of splitting people into first- and second-class gamers. Risk of lazy optimization. Fragmentation and destabilization of a market that I guess won't grow any more anyway.
A weak streaming console might make sense, but otherwise HD vs. 4K alone does not justify different specs. Targeting 1440p and upscaling / checkerboarding is good enough. The focus has to be mostly on better games anyway.
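To make the resolution ratios above concrete, here is a quick illustrative pixel-count calculation (purely counting pixels; real GPU cost rarely scales exactly linearly with them):

```python
# Pixel counts behind the 1:4 (HD vs 4K) figure and the 1440p middle ground.
# Illustrative only; actual GPU cost rarely scales exactly linearly with pixels.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "2160p": (3840, 2160),
}
base = resolutions["1080p"][0] * resolutions["1080p"][1]
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:>9,} pixels  ({pixels / base:.2f}x of 1080p)")
# 1080p: 2,073,600 pixels  (1.00x of 1080p)
# 1440p: 3,686,400 pixels  (1.78x of 1080p)
# 2160p: 8,294,400 pixels  (4.00x of 1080p)
```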
 
The question 'can we assume GPU requirements scale with screen pixels?' is a difficult one. It's easy to say yes looking at current games, but the picture changes when you consider promising tech like variable rate shading or object space shading.
If there is a 1:4 ratio between the HD and 4K models, those techniques become less attractive to pursue. So I would want something like 1:2.5 maybe, which seems a bit extreme, but 1:3 is already quite a big gap.
Why would variable rate shading not scale with resolution?
I've always thought that Anaconda would be what they put in xCloud 2, but what if it's Lockhart?
Powerful enough to do 1080p streaming, remote desktop, remote rendering for things like HoloLens 2.
4TF is more than necessary if you just scaled for resolution, which gives it headroom for outliers that don't scale down 1:1.
And the devs don't need to do any work; it just renders at quarter resolution in both the cloud and offline.
Only very few games would need it, or maybe devs would like to tailor for the lower-end spec.
Could be a lot less optimisation required for the lower end than people think.

Edit: some games may also be tailored for things like touch-screen input when streaming, instead of using the default control scheme.
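A minimal sketch of the headroom argument above, assuming a hypothetical ~12 TF 4K-targeted part (a number speculated later in this thread, not a confirmed spec) and cost scaling purely with pixel count:

```python
# Hypothetical numbers for illustration only: ~12 TF 4K part vs. a 4 TF Lockhart.
anaconda_tf = 12.0
lockhart_tf = 4.0
pixel_ratio = (3840 * 2160) / (1920 * 1080)        # 4.0
tf_needed_for_1080p = anaconda_tf / pixel_ratio    # 3.0 TF if cost scaled purely with pixels
headroom = lockhart_tf / tf_needed_for_1080p       # ~1.33x, i.e. ~33% margin for outliers
print(tf_needed_for_1080p, round(headroom, 2))     # 3.0 1.33
```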
 
Why would variable rate shading not scale with resolution?
With variable rate shading you have higher geometric complexity per pixel at lower resolution, even if you use lower LOD levels, so high resolution benefits more from it.
And object space lighting can be completely independent of resolution. Shading natively for 4K is a bit insane anyway, so if you shade for a 2K target and there is nothing to compare against, nobody would complain, because geometry is still sharp. This way you can invest cycles in realistic lighting and get better end quality. A low-spec console could prevent this.
But just to say: 4TF is still good, and the advantages could outweigh all this.
Agreed about the little extra work, but you have to design around the low-spec model, and that decides what's possible.
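A toy comparison of the two shading models discussed here, just to illustrate why object space shading cost is roughly decoupled from output resolution (all numbers are made up for illustration):

```python
# Screen-space shading work grows with output pixels (scaled by the VRS rate),
# while object/texture-space shading works from a fixed texel budget.
def screen_space_shades(width, height, shade_rate=1.0):
    # shade_rate < 1.0 models variable rate shading (e.g. 0.5 = one shade per 2 pixels)
    return int(width * height * shade_rate)

def object_space_shades(texel_budget):
    # Shading happens in texture space; output resolution only affects raster/resolve cost.
    return texel_budget

for res in [(1920, 1080), (3840, 2160)]:
    print(res, screen_space_shades(*res), object_space_shades(4_000_000))
# (1920, 1080) 2073600 4000000
# (3840, 2160) 8294400 4000000
```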

The worst thing that could happen is gamers moving from consoles to PC. If console tech becomes short-lived but streaming from PC to the living room TV works well, this could happen a lot.
Sony does not want this, which is likely the reason we hear multi-model rumors only from MS. But PlayStation is the healthiest market in games, and I think we still need it that way, especially now.
As long as games are in such an immature state (development is so expensive and the problems of interactive storytelling remain unsolved), we should not risk destabilizing this.
So my argument here is less a technical one. I'm afraid making single-player games could become too risky in general, and the only way to make money becomes multiplayer games on PC.

But I might just be wrong. Microsoft's idea has many interesting aspects otherwise. Kinect, XBone+Cloud, Windows on phones... all of this was interesting, though :|
 
I think the way to make single-player games more profitable is to have a larger install base, which a lower-end model and streaming may actually help with.

Would it be financially beneficial for MS to put Anaconda in xCloud 2 if you're not going to be streaming in 4K?
You'd be using all that power to render 4K, supersample down to HD, and then stream. Sounds costly.

If they use Lockhart in xCloud 2, the games need to run on it, and I'm sure they said it will be able to play every game.

If that's the case, it makes sense to release Lockhart as a console as well.

But I might just be wrong. Microsoft's idea has many interesting aspects otherwise. Kinect, XBone+Cloud, Windows on phones... all of this was interesting, though :|
You just had to go there :LOL:
 
If anyone's interested in ray tracing, then that - like traditional rasterisation - scales very well with resolution. I'm hoping ray tracing is a thing next gen, and that there's some level of hardware acceleration.

Looking forward to seeing what efficiency gains Navi brings, and what customisation MS and Sony specify. MS's tweaks in the X1X see it now running AAAs like RDR2 and Metro at double the performance of the PS4 Pro, four times the PS4, and about a million times the resolution of the ESRAM-constrained X1. Neither Sony nor MS will have been sitting on their hands.

But I might just be wrong. Microsoft's idea has many interesting aspects otherwise. Kinect, XBone+Cloud, Windows on phones... all of this was interesting, though :|

The technology in Kinect and Windows Phone was fantastic. It's the understanding of the market that was flawed. MS's tech is almost always somewhere between good and fantastic. Their choices in the consumer market? ... eeeehhhhhhh, not so reliable. But that's a matter for a different part of the forum...
 
If anyone's interested in ray tracing, then that - like traditional rasterisation - scales very well with resolution.
Usually. But you can do RT in object space too, and then the same points apply again. (Object space is still unlikely to happen, but just to mention it.)

However, what if it was like this:
NV: "We have awesome RT now, can you make it API standard? Nobody will use it if it's just an extension."
MS: "Hmmm, sure, if you give us RTX grid servers for our awesome xcloud!"
NV: "Deal!"

Maybe that's how RT is planned to be used in next gen? Streaming lightmaps and such?

I think the way to make single-player games more profitable is to have a larger install base, which a lower-end model and streaming may actually help with.
I do not think the games market can still grow; assuming it will can lead to wrong decisions. But that's just a gut feeling and pessimism - I'm no expert here.
 
What if we had a technical discussion?

Curious:
Somewhat in the same area as the variable resolution method they used for PSVR, but I wonder if next-gen hardware will be purposely changed to make it easier to implement sparse rendering without resorting to complicated masking.

https://patents.justia.com/inventor/tobias-berghoff
https://patents.justia.com/patent/20180374195
https://patents.justia.com/patent/10068311
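For illustration, a toy sketch of the general sparse rendering idea mentioned above (full shading rate near a focal point, reduced rate toward the periphery). This is only a rough approximation of the concept, not the method described in the linked patents; all thresholds and rates are arbitrary:

```python
import math

# Toy per-tile sample-rate map: full rate near the screen centre, reduced
# toward the edges. Thresholds and rates are arbitrary illustration values.
def tile_sample_rate(tile_x, tile_y, tiles_x, tiles_y):
    dx = (tile_x + 0.5) / tiles_x - 0.5
    dy = (tile_y + 0.5) / tiles_y - 0.5
    d = math.hypot(dx, dy) / math.hypot(0.5, 0.5)   # normalised distance from centre
    if d < 0.4:
        return 1.0      # full shading rate
    elif d < 0.7:
        return 0.5      # half rate
    return 0.25         # quarter rate at the periphery

rate_map = [[tile_sample_rate(x, y, 16, 9) for x in range(16)] for y in range(9)]
```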
 
If two systems are sufficiently similar then for most stages of optimisation they can effectively be treated as the same platform.

For example, the same CPU, with the same compiler, accessing the same API and with the same memory access overheads (e.g. same VM) can effectively be treated the same way. Optimise once and run it on either.

GPUs work in repeating blocks. Things like geometry processors, ROPs, L1 cache, LDS, SIMDs per CU all scale on a per block level, so they should benefit from the same optimisations. As long as you get stuff like the command processor and the memory bandwidth / L2 cache right, the same game with the same optimisations should scale really, really well between the two performance profiles with minimal additional work (if any). Get your high end version running as you want, then just adjust the resolution of the low end one to match the performance profile.

Choosing the right balance of LODs for the assets could end up being the most SKU-specific thing, and if you keep everything the same but use an automated tool for streaming textures, even that might end up requiring little additional work.
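As a sketch of "adjust the resolution of the low end one to match the performance profile": assuming cost scales roughly with pixel count and using the hypothetical 12 TF / 4 TF split discussed elsewhere in the thread (not confirmed specs):

```python
import math

# Hypothetical TF numbers for illustration; per-axis scale = sqrt of the perf ratio.
high_tf, low_tf = 12.0, 4.0
high_res = (3840, 2160)
scale = math.sqrt(low_tf / high_tf)                      # ~0.577
low_res = tuple(round(d * scale) for d in high_res)      # ~(2217, 1247)
print(low_res)   # in practice you'd snap to something like 1440p or 1080p
```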
That's all nice and should work perfectly... in theory. But not in reality. With 5 SKUs to develop for, devs take less time to optimize the game for each version; it's inevitable.

Here you can see 100 ms frame-time spikes on both the Pro and the XBX in the latest Trials, Trials Rising (source: VG Tech). This is not tolerable in such a game and has never happened in any previous Trials game.

And it's particularly not tolerable on 4.2 and 6 TFLOPS premium machines.
 
That's all nice and should work perfectly... in theory. But not in reality. With 5 SKUs to develop for, devs take less time to optimize the game for each version; it's inevitable.

Here you can see 100 ms frame-time spikes on both the Pro and the XBX in the latest Trials, Trials Rising (source: VG Tech). This is not tolerable in such a game and has never happened in any previous Trials game.

And it's particularly not tolerable on 4.2 and 6 TFLOPS premium machines.

Sebbi isn't there anymore, that's why.
 
That's all nice and should work perfectly... in theory. But not in reality. With 5 SKUs to develop for, devs take less time to optimize the game for each version; it's inevitable.

I don't think you can use the number of SKUs as a cause-and-effect explanation.

edit:
to clarify,

Yes, that might mean less time per SKU/family, but it's part of the overall schedule, and who is to say it would have been less, more, or the same otherwise?

There are enough examples from other studios where only 2 SKUs still do not result in well-optimized games. It really depends on the team and circumstances.

What @function is getting at (I think) is that optimizing for the base architecture at the low level is one thing, and then scaling it up is relatively straightforward.
 
I don't think you can use the number of SKUs as a cause-and-effect explanation.
Also, both the Pro and the 1X differ from the base consoles in more than just TFLOPS.
The 4Pro has checkerboard rendering hardware functionality.
The base Xbox has ESRAM.
No one is saying that next-gen dual consoles would have different feature sets.
It would literally be 100% the same engine.
 
I did some research on historical node transitions, especially between TSMC 28nm and 16nm, and what that meant for the consoles and their pro versions. Based on TSMC's stated figures for perf increases, the actual flops for TSMC-made Nvidia GPUs and the consoles match up. I won't get into Nvidia here.

TSMC's past PR numbers for perf improvements across node transitions:
28nm to 20nm: a 15% improvement.
20nm to 16nm: a 50% improvement.
16nm to 16nm+: a 15% improvement.

28nm to 16nm+ thus comes out to a 1.984x improvement. By my estimates, the GPU power went from about 110 watts in the PS4 to 130 watts in the Pro, so total perf should have been a 2.34x increase. 1.84TF (PS4) x 2.34 = 4.3TF, which is close to the 4.2 teraflops in the real world.

The GPU went, by my estimates, from 70 watts in the Xbone to 150 watts in the 1X. Total perf is a 4.25x increase. 1.31TF x 4.25 = 5.57TF. It seems MS found a way to increase perf beyond what the TSMC node offered. Perhaps credit goes to the Hovis method?

Using figures TSMC published to extrapolate improvement for next generation:

16nm+ to 7nm should be a 40% improvement.
7nm to 7nm++ might be a 15% improvement.

For the same TDP, we can expect a part 1.61 times faster. If 150 watts were again allocated ENTIRELY to the GPU, I'd expect 9.66 teraflops.

The only way we're getting close to a 12 teraflop system with Zen 2 is a 225+ watt console.
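For clarity, the chain of multipliers above as a small calculation (the node percentages are as quoted; the wattages are the poster's estimates, not official figures):

```python
# 28nm -> 20nm -> 16nm -> 16nm+ perf/watt multipliers as quoted above.
node_28_to_16p = 1.15 * 1.50 * 1.15                 # ~1.984

pro_tf  = 1.84 * node_28_to_16p * (130 / 110)       # PS4 -> Pro estimate, ~4.3 TF
onex_tf = 1.31 * node_28_to_16p * (150 / 70)        # XB1 -> One X estimate, ~5.57 TF

node_16p_to_7pp = 1.40 * 1.15                       # 16nm+ -> 7nm -> 7nm++, ~1.61
next_gen_tf = 6.0 * node_16p_to_7pp                 # One X's 6 TF at ~150 W reused, ~9.66 TF
print(round(pro_tf, 2), round(onex_tf, 2), round(next_gen_tf, 2))
```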

Yeah, please temper expectations regarding the FLOPS. We're bumping up against Moore's law and the limits of silicon HARD. One positive going for us is that there won't be a massive resolution bump this time around to hoard all the improvements away. Also, I expect custom silicon, AI, and AMD architectural improvements to carry the torch for next gen. If AMD can get their flops performing on par with Nvidia's, ~10TF in the consoles should perform like a ~15TF AMD GCN GPU.
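The closing claim as plain arithmetic, under the assumption (from the post above, not a confirmed figure) that Navi flops would do roughly 1.5x the work of GCN flops:

```python
# Assumed per-FLOP efficiency gain of ~1.5x for Navi over GCN (speculation, not confirmed).
navi_tf = 10.0
gcn_equivalent_tf = navi_tf * 1.5    # ~15 TF of GCN-class throughput
print(gcn_equivalent_tf)             # 15.0
```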
 
A 40% gain is what we should expect from a node transition on the same product.
Navi will be a different product, so those gains are not the only ones to take into account.
Looking at the performance-per-watt charts from AMD, they claimed Polaris offered a 2.5x increase in performance/watt over 28 nm products. That's a 150% increase!
On the chart, that translates into a 136 pixel height difference!
If the scale is right, Navi should offer about a 122% increase in performance/watt over Vega. So the node transition alone should not be the whole picture.


[AMD performance/watt chart: 88g9xap.jpg]
 
Yes Proelite... I expect Sony to be at 10 TF or a little above... more isn't needed right now. Actually a bit less would be enough, but marketing needs a double-digit number...
 
A 40% gain is what we should expect from a node transition on the same product.
Navi will be a different product, so those gains are not the only ones to take into account.
Looking at the performance-per-watt charts from AMD, they claimed Polaris offered a 2.5x increase in performance/watt over 28 nm products. That's a 150% increase!
On the chart, that translates into a 136 pixel height difference!
If the scale is right, Navi should offer about a 122% increase in performance/watt over Vega. So the node transition alone should not be the whole picture.


[AMD performance/watt chart: 88g9xap.jpg]

That graph really needs to stop being quoted...
 