Middle Generation Console Upgrade Discussion [Scorpio, 4Pro]

How much ESRAM do you need for 4k, assuming some type of reconstructive or interpolating approach to rendering? How much die space do you lose that could provide for a bigger gpu?

I have a feeling they'll ditch it.

Edit:

UE4 render target @ 4k

8294400 pixels * ~52 bytes (gbuffer layout as best as I can tell) = ~431308800 bytes = ~412 megabytes

That's native 4k rendering, which we know the console isn't going to do, but you can see how it quickly gets really hard to make an eSRAM pool that's large enough to accommodate a large render target. The smaller it gets, the more limited its uses become. Maybe if it gets paired with really fast main memory that becomes less of an issue, whereas the Xbox One is too reliant on eSRAM bandwidth. You need engines like UE and Frostbite to run very well.
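If you want to play with the arithmetic, here's a minimal sketch; the ~52 bytes/pixel is just my estimate of UE4's G-buffer layout, so treat it as an assumption:

```python
# Rough G-buffer footprint estimate. The ~52 bytes/pixel figure is my own
# estimate of UE4's default G-buffer layout, so treat it as an assumption.
BYTES_PER_PIXEL = 52

def gbuffer_mib(width, height, bytes_per_pixel=BYTES_PER_PIXEL):
    """G-buffer size in MiB for a given render resolution."""
    return width * height * bytes_per_pixel / (1024 ** 2)

for label, (w, h) in {"1080p": (1920, 1080),
                      "1440p": (2560, 1440),
                      "2160p": (3840, 2160)}.items():
    print(f"{label}: {gbuffer_mib(w, h):.0f} MiB")

# 1080p: ~103 MiB, 1440p: ~183 MiB, 2160p: ~411 MiB -- far beyond any
# plausible eSRAM pool.
```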

Not to mention that eSRAM management just adds complexity, which runs counter to making PC and Xbox development more similar.
 
It's relatively low for the resolution. If we think 150 GB/s is enough for 1080p (current consoles), linear scaling would suggest 600 GB/s is needed for the same fidelity at 4k (which I'm going to call 2160p from now on because it's more accurate and a comparative term).

600GB/s might have been the requirement before lossless color compression arrived in GCN GPUs.
Even with that in mind, 320GB/s might be too little for 2160p with the same IQ and framerates that we get today with the PS4 @ 1080p.
Regardless, raw bandwidth may not be the only obstacle to 2160p. They would need a much higher fillrate too. The PS4 does 25 GPixels/s with 32 ROPs and only the GTX 1080 does 4x that with 64 ROPs at a whopping (prohibitive for a console?) 1.6GHz.

If there's a 384-bit interface, we might be looking at 96 ROPs, and ~100 GPixels/s would be attainable at 1 GHz. But if it's "only" 48 ROPs, then there's no way a 14/16FF embedded GPU will reach 4x the PS4's fillrate.
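To put rough numbers on that, here's a back-of-the-envelope sketch; the ROP counts and clocks are assumptions/estimates, not confirmed specs:

```python
# Pixel fillrate = ROPs * core clock. The ROP counts and clocks below are
# assumptions/estimates, not confirmed specs.
def fillrate_gpix(rops, clock_ghz):
    return rops * clock_ghz  # GPixels/s

print(fillrate_gpix(32, 0.8))   # PS4:      ~25.6 GPix/s
print(fillrate_gpix(64, 1.6))   # GTX 1080: ~102 GPix/s, roughly 4x the PS4
print(fillrate_gpix(96, 1.0))   # 96 ROPs at 1 GHz: ~96 GPix/s
print(fillrate_gpix(48, 1.0))   # 48 ROPs at 1 GHz: ~48 GPix/s, well short of 4x PS4
```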
 
How much ESRAM do you need for 4k, assuming some type of reconstructive or interpolating approach to rendering? How much die space do you lose that could provide for a bigger gpu?

I have a feeling they'll ditch it.
I'm dubious about them using esram also.

A difference between the mid-gens and the PS4 and XO, though, is that both machines may require alternative rendering, whereas only one really needed it before. And it does seem like the way forward (at the moment, anyway).
 
I'm dubious about them using esram also.

A difference between the mid-gens and the PS4 and XO, though, is that both machines may require alternative rendering, whereas only one really needed it before. And it does seem like the way forward (at the moment, anyway).

Well, Rainbow Six Siege used its "checkerboard rendering" on PS4. It's an optimization. My expectation would be that if Neo games use techniques like that, then the PS4 version will as well. Neo will reconstruct/interpolate 4k and the PS4 will reconstruct/interpolate 1080p. Not sure of the correct terminology.
 
Well, Rainbow Six Siege used its "checkerboard rendering" on PS4. It's an optimization. My expectation would be that if Neo games use techniques like that, then the PS4 version will as well. Neo will reconstruct/interpolate 4k and the PS4 will reconstruct/interpolate 1080p. Not sure of the correct terminology.
My point was that the XO really suffered with fat G-buffers while the PS4 was OK.
I'm not suggesting that we'll have different renderers for the different platforms. I'm saying there's more reason to move to alternative techniques, as they'll help all platforms.
It also seems to be the direction things are moving anyway; it would've helped the XO if the gen had started off with it, though.
 
So there is zero possibility that MS does not include embedded RAM on this SoC?

Pssst, your wording on that means it's possible that MS will include embedded RAM. ;) Zero possibility of not doing something = doing something.

Anyway, I find it doubtful that MS will include ESRAM in Project Scorpio just due to the 6 TFLOPs they're targeting. If you look at die shots of the XBO SoC, the ESRAM takes up a massive amount of die space and is likely one of the reasons why they didn't incorporate more CUs. However, at the time they needed to lock down hardware specs they wanted to have 8 GB of memory. DDR3 was the only feasible option at that time, so ESRAM was needed to improve performance in certain bandwidth-heavy tasks, especially as bandwidth contention between the CPU and GPU would reduce the effective bandwidth of the DDR3 even further.

Project Scorpio is not only increasing bandwidth significantly, it's also increasing CUs massively. The increased bandwidth negates most of the reasons they needed ESRAM in the first place. And the increased number of CUs means they need to be as efficient with die space as possible.

It'd certainly be nice to have ESRAM, but unlike the XBO, the advantages aren't as clear cut or numerous. Instead of going with ESRAM again, it appears that MS is instead going to go with increased CU count.

And if Sebbbi and the id Software (Doom) guys are correct, rendering at 4K isn't going to be anywhere close to a linear increase in computation or bandwidth requirements.

With console gameplay, where a player typically sits two metres or more from a display of a common size (say 70" or so), it starts to become a performance waste relatively quickly, particularly if we are talking about 4K. If a developer does it the brute force way, you are essentially rasterising the same content, but literally 4x slower for not that much of a gain. Even for desktop rendering, where users sit fairly close to the display, I can think of a myriad of approaches for decoupling resolution costs rather than just brute force rendering.

While the final image may be rendered out at 4K, it's highly unlikely that the components that make up a rendered image will all use 4K appropriate assets. It just isn't needed. Sebbbi has mirrored those sentiments in other posts scattered around the forum.

If developers just brute force things, it may become an issue. And in the past brute force was all that was used because 4k was never a target for game developers even on PC. It was an added benefit. With developers now potentially targeting 4K, we're likely to see a lot more innovation in using non-native resolution elements to render a native 4K frame buffer. That will have obvious benefits for 4K gaming on consoles but will also make 4K gaming on PC require far less beefy GPUs.
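As a crude sense of the scale of savings on the table, here's a pure pixel-counting sketch; the specific techniques and resolutions below are just illustrative examples, and it ignores the cost of the reconstruction pass itself:

```python
# Shaded-pixel counts per frame for native 2160p versus two common
# non-native approaches. Pure pixel counting: it ignores the (comparatively
# cheap) reconstruction/upscale pass, so the ratios are upper bounds.
NATIVE_2160P = 3840 * 2160              # 8,294,400 pixels

options = {
    "native 2160p":       NATIVE_2160P,
    "checkerboard 2160p": NATIVE_2160P // 2,   # shade half the samples per frame
    "1800p upscaled":     3200 * 1800,
}

for name, pixels in options.items():
    print(f"{name}: {pixels / NATIVE_2160P:.0%} of native shading work")
```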

id aren't the only developers that have been thinking about how to efficiently target 4K gaming while providing the best IQ possible.

Regards,
SB
 
UE4 render target @ 4k

8294400 pixels * ~52 bytes (gbuffer layout as best as I can tell) = ~431308800 bytes = ~412 megabytes
True, I have a feeling this number may be larger due to the HDR requirement, but I'm not sure how the math works out there. Where I was going with this SRAM business was to have the SRAM as a small pool of memory for operations with very high concurrent memory traffic, in particular post processing (where the image size should be reduced greatly), or helping out with particle effects.

Pssst, your wording on that means it's possible that MS will include embedded RAM. ;) Zero possibility of not doing something = doing something.
Ahh! lol that only occurred to me because you pointed that out. I'm a shill and I don't even know it.

Project Scorpio is not only increasing bandwidth significantly, it's also increasing CUs massively. The increased bandwidth negates most of the reasons they needed ESRAM in the first place. And the increased number of CUs means they need to be as efficient with die space as possible.

It'd certainly be nice to have ESRAM, but unlike the XBO, the advantages aren't as clear cut or numerous. Instead of going with ESRAM again, it appears that MS is instead going to go with increased CU count.
Yeah, I think there was mention that to keep TDP low the SoC would have to lower its frequency but have many more CUs to make up for it, which is a strong argument for where their die space is allocated.

However ;)

I'm not an engineer, so I'm not going to pretend I know what I'm talking about, but here goes: if async compute does continue to take off, it appears to me that the post-processing part of the pipeline is a good opportunity to submit async compute requests, but it's also the time when GPU throughput drops significantly due to heavy reads and writes. So if you moved post processing to a fast pool of memory that handles simultaneous R/W well, you'd get better async performance, for instance.

Anyway, that was sort of my thought: embedded RAM would be available just to provide a speed boost in areas that could use it, while the shared pool does all the heavy lifting. Enough speculation from me. I don't think they're doing eSRAM either, but I wouldn't be shocked if for some reason they came out and said they had it.
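To put rough numbers on the idea, here's a sketch; the pool size and target formats are hypothetical, purely to show the kind of budgeting involved:

```python
# Which intermediate/post-processing targets would fit in a small embedded
# pool? The pool size and target formats are hypothetical, purely to show
# the kind of budgeting involved.
POOL_MIB = 32  # e.g. an XBO-sized eSRAM pool

targets = {
    "full-res FP16 color (3840x2160, 8 B/px)":  3840 * 2160 * 8,
    "half-res bloom chain (1920x1080, 8 B/px)": 1920 * 1080 * 8,
    "quarter-res particles (960x540, 8 B/px)":   960 *  540 * 8,
}

for name, size in targets.items():
    verdict = "fits" if size <= POOL_MIB * 1024 ** 2 else "does NOT fit"
    print(f"{name}: {size / 1024 ** 2:.1f} MiB -> {verdict} in {POOL_MIB} MiB")
```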
 
Would love to hear someone say this at an executive level meeting. Just lol.

As many have noted again and again, you don't make decisions that lead to a loss. Businesses take risk seriously, especially when risk has not netted them any large returns in recent history:

Just wanted to go back and address this as Microsoft just released their financial earnings report.

There are a couple things that have actually taken off in a huge way. The biggest being Azure cloud services. Its revenue for 4Q ending June 30, 2016 grew 102% YoY. That's the 8th consecutive quarter where it had triple digit growth.

The other risk they took was implementing a product as a service model for Office. Something many pundits predicted would fail and consumers would reject. It grew 54% YoY (stand alone declined 19% indicating some movement to the service based model). Office 365 Business grew by 45% YoY. On the consumer side Office revenue increased by 19% "driven by higher revenue from Office 365 consumer, mainly due to growth in subscribers."

Yes, Office was already a successful product, but many felt that by moving to an application as a service model with a yearly subscription that it would hurt the Office segment. It's done the opposite and had a positive effect.

But Azure. That was a risky venture when they started it, but it's paying off in spades as it is by far their fastest growing segment.

But that's why you take risks. You'll often have a lot of unsuccessful products, but it doesn't take many successful ones to more than make up for it. The key is that you need at least one or more of your products to continue to be successful so you can continue to take risks.

In many ways game development and publishing is a hyper-inflated example of this. There it is always boom or bust, which is why publishers and developers rely so heavily on established franchises. Those established franchises are the only things that allow them to take the occasional risk on a new IP.

Regards,
SB

PS - I know this is all a bit unrelated to what you were originally responding to. :) I just wanted to clarify that they have had some large returns on risky ventures in recent history.
 
It's relatively low for the resolution. If we think 150 GB/s is enough for 1080p (current consoles), linear scaling would suggest 600 GB/s is needed for the same fidelity at 4k (which I'm going to call 2160p from now on because it's more accurate and a comparative term). DF have an article on a 2160p GPU, the GTX 1080, which has 320 GB/s of uncontested VRAM bandwidth, with additional CPU bandwidth from DDR main RAM.

So Scorpio, and even more so Neo, are low on BW for 2160p rendering, similar to the PS3 being relatively low for 1080p rendering. I think that's a fair assertion.
But there's better bandwidth-saving tech available that isn't used in the PS4 or XBO, right? I haven't been keeping track of whether the newer delta color compression and the like is on consoles or not.
 
But there's better bandwidth-saving tech available that isn't used in the PS4 or XBO, right? I haven't been keeping track of whether the newer delta color compression and the like is on consoles or not.

AMD used it first in Tonga, in what they call GCN 1.2 (Orbis/Durango being GCN 1.1).

[Image: AMD slide on delta color compression]

They claim up to 40% higher memory efficiency.

I don't know if it's been improved for Polaris.
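As a rough sense of what that best-case 40% could mean, here's a quick sketch; it's just arithmetic on the marketing claim, not measured numbers:

```python
# Effective bandwidth with delta color compression: the claimed efficiency
# gain multiplies the raw figure for compressible (color) traffic.
# 40% is AMD's best-case claim, so treat the result as an upper bound.
def effective_bw(raw_gbs, gain=0.40):
    return raw_gbs * (1 + gain)

print(effective_bw(320))  # rumoured 320 GB/s -> ~448 GB/s best case, still
                          # short of the ~600 GB/s linear-scaling figure above
```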
 
AMD used it first in Tonga, in what they call GCN 1.2 (Orbis/Durango being GCN 1.1).

They claim up to 40% higher memory efficiency.

I don't know if it's been improved for Polaris.
It is improved further in Polaris. Also, AMD settled on what they call the different GCN versions; they're now GCN 1/2/3/4, or 1st/2nd/3rd/4th gen GCN.
 
Lots of speculation that it's next year, waiting on Vega, Zen, etc.
But could the wait be due to GDDR5X also?
Whatever the make-up of the SoC, it seems like it's going to be a big chip,
so SoC + (12 GB) GDDR5 sounds like a very high TDP for a console,
as it sounds like most people doubt it would be HBM + LPDDR4.
 
Lots of speculation that it's next year, waiting on Vega, Zen, etc.
But could the wait be due to GDDR5X also?
Whatever the make-up of the SoC, it seems like it's going to be a big chip,
so SoC + (12 GB) GDDR5 sounds like a very high TDP for a console,
as it sounds like most people doubt it would be HBM + LPDDR4.

They must have reasonable confidence it's going to be ready for holiday 2017; at the very least they explored the concept of a 6TF chip this year, which is a vastly accelerated timeline. Dev kits still need to be released, software/OS etc. all needs to be prepped, then final silicon testing and clocking.

It's quite possible it's using older tech, just a lot of it, waiting for a reasonable price point.


 
Do we even know if their ray traced view of the motherboard represents the final version late 2017?

It might be their current development system, but the late-2017 release window makes a lot of the assumptions about it feel wrong.

I don't think waiting for a "reasonable price point" from late 2016 to 2017 makes sense. What would happen in between, and why would that matter at launch, when just a few million units would be sold anyway? Price becomes relevant after the enthusiasts have bought it and price reductions kick in to reach a wider market, and with their low/high-end product plan that makes even less sense.
 
Do we even know if their ray traced view of the motherboard represents the final version late 2017?

Can't wait for someone to google translate this and understand "We do know the final version of the motherboard has ray tracing view accelerator."

...

"Shape"
 
Do we even know if their ray traced view of the motherboard represents the final version late 2017?

It might be their current development system, but the late-2017 release window makes a lot of the assumptions about it feel wrong.

I don't think waiting for a "reasonable price point" from late 2016 to 2017 makes sense. What would happen in between, and why would that matter at launch, when just a few million units would be sold anyway? Price becomes relevant after the enthusiasts have bought it and price reductions kick in to reach a wider market, and with their low/high-end product plan that makes even less sense.
Reasonable is the word; how reasonable and fair the price is going to be, especially if it includes an AR or VR device, hopefully.

I want infographic games.

 
Lots of speculation that it's next year, waiting on Vega, Zen, etc.
But could the wait be due to GDDR5X also?
Whatever the make-up of the SoC, it seems like it's going to be a big chip,
so SoC + (12 GB) GDDR5 sounds like a very high TDP for a console,
as it sounds like most people doubt it would be HBM + LPDDR4.

I think the fact that no memory amount was given also means the memory type is undecided. GDDR5 would mean 12 GB.

You can also get 320 GB/s with GDDR5X or DDR4 + HBM2.
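For reference, the raw peak-bandwidth arithmetic behind those options; the per-pin data rates are my assumptions based on parts available around that time:

```python
# Peak bandwidth (GB/s) = bus width in bytes * per-pin data rate (Gbps).
# The data rates are my assumptions based on parts available at the time.
def peak_bw_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(peak_bw_gbs(384, 7.0))     # 12 GB GDDR5 on 384-bit  -> 336 GB/s
print(peak_bw_gbs(256, 10.0))    # GDDR5X on 256-bit       -> 320 GB/s
print(peak_bw_gbs(1024, 2.0)     # one HBM2 stack          -> 256 GB/s
      + peak_bw_gbs(256, 2.4))   # plus 256-bit DDR4-2400  -> ~333 GB/s total
```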
 
It is improved further in Polaris. Also, AMD settled on what they call the different GCN versions; they're now GCN 1/2/3/4, or 1st/2nd/3rd/4th gen GCN.

Thanks

Yep, I know that they moved from 1.x to Gen X.
We'll see whether Vega is considered Gen 5 or, like Tonga and Fiji, retains the same gen.
 