Middle Generation Console Upgrade Discussion [Scorpio, 4Pro]

Status
Not open for further replies.
If you wanted to keep current resolutions and go to 60fps, you're going to need double the fill. Even if you just want to be as fast as PS4 at shadow rendering you're going to need double the fill.

Depends I suppose. There have been cases of XO games using lower res shadows, though I don't recall if they were 1/2 x 1/2 compared to PS4 or something in between. In the cases where they match, it's harder to pinpoint without profile tools. In a lot of cases we've seen identical features, but the pixel resolution scaled according to the CU difference (approximately).

Also have to consider the CPU, of course, and associated interconnect bandwidths.

Easier when devs target 60fps in the first place though. :p

Anyways
 
Speaking of resolution and VR, the Rift is 2160 x 1200.

A typical Xbox One game renders at around 1600 x 900. To keep the same proportion of frame buffers in esram at Rift resolution you'd need 1.8 times the esram.

64 MB of esram would appear to be a reasonable amount of super-fast scratchpad for a 1080p+ and Rift-focused system, potentially offering Fury X levels of BW.
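A quick back-of-envelope check of that 1.8x figure (a sketch, assuming the XO's 32 MB of ESRAM and the resolutions quoted above):

```python
# Sanity-check the 1.8x ESRAM scaling claim.
xo_pixels = 1600 * 900      # typical Xbox One render resolution
rift_pixels = 2160 * 1200   # Rift panel, both eyes combined
scale = rift_pixels / xo_pixels
esram_mb = 32               # Xbox One's ESRAM capacity
needed_mb = esram_mb * scale
print(scale, needed_mb)     # 1.8 57.6 -> 64 MB is the next tidy size up
```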
 
In order to use the panel at its optimum resolution, VR needs a frame buffer 1.4x the panel resolution in both directions before the fisheye transform, so it would need 3.6x the ESRAM.

Maybe new architectures could allow rendering directly to a transformed target?
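The ~3.6x figure follows from applying the 1.4x pre-warp factor per axis; a quick sketch of the arithmetic (using the same panel and XO resolutions discussed above):

```python
# Pre-warp eye buffer: 1.4x the panel resolution in each direction.
buf_pixels = (2160 * 1.4) * (1200 * 1.4)   # 3024 x 1680 pre-distortion buffer
xo_pixels = 1600 * 900                     # typical XO frame for comparison
ratio = buf_pixels / xo_pixels
print(round(ratio, 2))                     # ~3.53, i.e. the ~3.6x quoted
```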
 

For the shadow rendering speeds I'm going off what sebbbi has said: 16 ROPs are generally enough in current games, as if you pack your data properly you end up BW-limited before you're fill-limited, and the X1 has tons of BW for its ROPs. The exception, however, is shadow map rendering, where the PS4 is loads faster. Perhaps some games are keeping shadows the same and taking the hit elsewhere?

I'd guess there's only so far you can push your 16 ROPs even with the sauce of esram ...

(But yeah, I wish more developers would target 60 fps in the first place!)
 

Right, my point was that sometimes you'll see higher res shadows on PS4, other times not - in the latter case it's either a RAM consumption issue or PS4 isn't fill-limited on that part (or both). In the former case, if they're just doing 1/2 x 1/2 vs PS4, then it's 1/4 the fill cost there.

Of course, for 30fps games, you don't know whether it's a CPU issue (devs stuck on CPU particles/sorting etc) or if GPU-side is doing 30-50. idk

Case by case I guess?
 

Isn't that one of the features Nvidia's simultaneous multi projection should be able to help with? By rendering to virtual viewports that more closely resemble a curved surface, it should be able to cut down greatly on the amount of ... oversampling? ... you need to do the whole fisheye thing.
 
You could render the lower-res borders to main memory. Multi-res viewport thing.

Duct tape.

Yeah, I like it. Towards the edges you'd be sampling more pixels to calculate the increased distortion (I think) so you might be able to get away with a lower res buffer and cut down effects (and the player's less likely to notice, probably).

Come to think of it, didn't games on X1 have access to a secondary display plane with a rectangular "cut out" area where it could efficiently detect that nothing should be rendered (no z-test needed)?


Yeah, case by case I suppose.

For cases where you're limited by your ability to read, write or blend from buffers you're either going to need more ROPs or more BW. Taking on board what sebbbi said, I think an increase in both would be nice for a new system. That's why I'm still banging the esram drum, in case we don't get HBM2.
 
Right, I was wondering if simultaneous viewports require a new hardware feature, or if that's an API/driver thing that could be done already on current gen.

The PSVR method is extremely weird: it's a full-res buffer (1.4x), but the rendering progressively "skips" pixels in a gaussian pattern toward the edges, to be blended afterward. It seems even more duct-tape-ish than multiple viewports. Maybe it's about the geometry being fed only once?
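For illustration only, a toy version of that kind of edge-skipping scheme might look like the sketch below. Everything here is an assumption: the real PSVR pattern isn't public, and a linear density falloff stands in for the "gaussian" one described.

```python
import math

def keep_pixel(x, y, w, h):
    """Illustrative only: shade every pixel near the centre, skip a growing
    fraction toward the edges (NOT the actual PSVR algorithm)."""
    cx, cy = w / 2, h / 2
    # normalized distance: 0 at the centre, 1 at the corners
    r = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)
    # linear falloff stand-in for the "gaussian" pattern; floor at 25%
    density = max(0.25, 1.0 - r)
    # deterministic hash so the skipped pixels form a stable, blendable pattern
    return (hash((x, y)) % 100) < density * 100

# count how many pixels of a 64x64 buffer actually get shaded
shaded = sum(keep_pixel(x, y, 64, 64) for y in range(64) for x in range(64))
```

The point of the sketch is just that shading work drops off toward the periphery while the buffer itself stays full resolution, which matches the "full-res buffer but skipped pixels" description.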
 

You might be right about feeding the geometry only once. Multiple viewports (from different angles) should require multiple transforms, but I'm guessing that Nvidia has developed a form of custom hardware acceleration, while Sony has developed a software solution to reduce unnecessary shading work. Different display planes being fed at different resolutions might be yet another approach.

Perhaps Polaris has some AMD developed tech to make VR more efficient. AMD have been pretty tight lipped about it so far.
 
Theoretically speaking, suppose that MS wants the slim to be the better alternative to base PS4 in every aspect. They decide that they have to match or exceed the performance.

They do the following.
Use a cut-down version of the low-end Polaris 11 as the GPU. Clock it at 1.2 GHz.
Boost the CPU to 2.0 GHz+.

function mentioned that PS4 has 32 ROPs. However, that's running at 800 MHz. Assuming P11 has 16 ROPs at 1.2 GHz, the gap is smaller: essentially 24 ROPs vs 32 ROPs, normalized to 800 MHz.

Esram clocks would need to be increased in tandem to stay in sync, thus increasing the BW for free.
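Rough numbers for the ROP and esram-clock claims above, as a sketch: it assumes idealized one-pixel-per-ROP-per-clock fill, and the commonly quoted 109 GB/s minimum esram bandwidth at the XO's 853 MHz GPU clock.

```python
# Peak fill rate: ROPs x clock (idealized, one pixel per ROP per cycle).
ps4_fill = 32 * 0.8             # 25.6 Gpix/s at 800 MHz
bumped_fill = 16 * 1.2          # 19.2 Gpix/s at 1.2 GHz
equiv_rops = bumped_fill / 0.8  # = 24 "800 MHz ROPs", matching the post

# ESRAM BW scales linearly with clock (109 GB/s minimum at 853 MHz assumed).
esram_bw_new = 109 * (1200 / 853)   # ~153 GB/s from the clock bump alone
```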

Because of memory BW efficiencies in the new Polaris, a ~50% increase in GPU capability can be had on the same DDR3 memory setup.

It seems to me a big waste to port the XB1 to 14 nm FinFET and not take advantage of the new efficiencies.

MS updates the existing XB1 SDK and makes it optional for devs to use the extra power. MS uses some of the extra power for a smoother OS and such.

As for people who think there are too many versions for devs to support, I'd argue that Scorpio and PC should be counted together, so essentially it's:

Xb1, Xb1 slim, and Scorpio / PC.

Now for the unknowns, which may or may not be cons. I think Microsoft would need to weigh the cons against the benefits of this spec bump.

Unknowns:
1. How much would the R&D cost differ between porting GCN 1.1 to a smaller fab and redesigning the SoC to use Polaris?

2. What is the increase in TDP and heat, and more importantly BOM cost, of using Polaris at 1.2 GHz versus a simple shrink?

3. What's the cost, to MS, of supporting the extra target in the SDK? Would the adoption rate by developers be high enough to justify the cost in manpower and time?
 


4) What's the cost of MS pricing the Xbox One slim at $100 under the PS4? You'd get almost the same experience for over a third less in price.

5) What if the Xbox One can fit in a small Intel NUC-sized enclosure and sell for even less?


The upgrade makes no sense because now they'd have 3 consoles to support, and they're already behind Sony in consoles sold. The PS4 will get support for a long time because there are almost twice as many PS4s as Xbox Ones. Add in an Xbox 1.5 and it's even more of a reach for developers to support it.
 
You think so? They typically haven't shown much off for the last six or seven ....

... oh. :(
$400 for 290X-level hardware per eye is not a bad cost at all. You could do $800 and have two cards, one per eye, for rendering.

If they have something in the $300 range that performs like a 1070, then they could have dual 1070s (one per eye) for $600, cheaper than a single 1080 currently.
 

I would argue that Scorpio support would be trivial for devs who plan to release their game on W10. It's Neo that would be the bigger burden for the same devs.
 

As a dev, in what order would you target the platforms?

I would wager: 1) PS4 with 4XM sold, 2) Xbox One with 2XM sold, 3) Neo, 4) Scorpio/PC (just tweak the PC port), 5) Xbox 1.5 as suggested here.
 