Middle Generation Console Upgrade Discussion [Scorpio, 4Pro]

Yeah, well, Forza Apex seems to run incredibly well, and Halo 5 Forge also seems like it's well optimized. So I wouldn't be shocked if Microsoft Studios can hit native 4K on Scorpio. I'm sure these PC games that Microsoft is making are being optimized/tested on Scorpio-equivalent hardware at 4K resolution.

Very doubtful you'll see many 3rd-party titles at native 4K, though.
 
But if the system was going to bottleneck on the CPU capability with any regularity, why not just shave off some CUs from the design and make a smaller chip that wouldn't bottleneck on the CPU as often?
You can always use more CUs thanks to compute. The GPU is no longer slaved to the CPU and unable to work without it. We've even got GPUs creating work for themselves these days.
 
GPUs are a macrocosm of government, it seems! More seriously, I need to spend some time looking at AMD's architecture; I'm conscious I know nothing about it and certainly didn't know this!
 
You can always use more CUs thanks to compute. The GPU is no longer slaved to the CPU and unable to work without it. We've even got GPUs creating work for themselves these days.
I often forget about this as well. Especially as engines continue to evolve towards a DX12 pipeline with ExecuteIndirect, as well as more focus on async compute, there could very well be good reason to have as many CUs as possible.
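To illustrate the async compute angle, here's a minimal CUDA sketch (an analog only, not DX12 or console code; the kernel names are invented). Two independent workloads go onto separate streams, standing in for separate hardware queues, and the scheduler is free to overlap them across otherwise-idle compute units:

```
#include <cstdio>
#include <cuda_runtime.h>

// Two independent kernels standing in for "graphics" and "compute" work.
__global__ void shadeStandIn(float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = sqrtf((float)i) * 0.5f;
}

__global__ void simulateStandIn(float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = sinf((float)i);
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // Separate streams let the hardware overlap the two kernels,
    // filling otherwise-idle units -- the same reason async compute
    // makes "extra" CUs useful even when one workload can't saturate them.
    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    shadeStandIn<<<(n + 255) / 256, 256, 0, s0>>>(a, n);
    simulateStandIn<<<(n + 255) / 256, 256, 0, s1>>>(b, n);

    cudaDeviceSynchronize();
    printf("both workloads done\n");

    cudaStreamDestroy(s0); cudaStreamDestroy(s1);
    cudaFree(a); cudaFree(b);
    return 0;
}
```

The console version is finer-grained (graphics and compute queues feeding the same CUs), but the scheduling idea is the same: more CUs give the overlapped work somewhere to run.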

Or :)

Cut some CUs to match the CPU and put in 32MB of ESRAM for pure BC with the XBO, and a fast scratchpad for everything else!


 
To be clear, I'm not saying a CPU isn't needed at all! Once the work is set up, it gets distributed across CUs. More CUs automatically get used, so you only need more CPU if the GPU is consuming work faster than it's being fed, and you can always scale the workload up. The 'GPUs creating work' idea comes from mentions of GPUs generating command lists/data for themselves, although specifics are few and far between as it's pretty bleeding edge. Certainly Sebbbi has spoken of GPUs running from a single draw call. https://forum.beyond3d.com/threads/gpu-driven-rendering-siggraph-2015-follow-up.57240/
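For a concrete taste of GPUs creating their own work, here's a minimal CUDA dynamic-parallelism sketch (an analog only; consoles would use GPU-generated command buffers rather than CUDA, and the kernel names are invented). The parent kernel decides on-device how much work exists and launches it itself, with no CPU round trip per launch:

```
#include <cstdio>
#include <cuda_runtime.h>

// Child kernel: the actual work item.
__global__ void childWork(int item) {
    if (threadIdx.x == 0) printf("item %d processed on device\n", item);
}

// Parent kernel: decides on the GPU how much work exists and
// launches it itself -- no CPU involvement per child launch.
__global__ void parentScheduler(int totalItems) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < totalItems)
        childWork<<<1, 32>>>(i);  // device-side launch (dynamic parallelism)
}

int main() {
    // Needs compute capability 3.5+ and relocatable device code:
    //   nvcc -arch=sm_35 -rdc=true gpu_spawn.cu -lcudadevrt
    parentScheduler<<<1, 8>>>(8);
    cudaDeviceSynchronize();
    return 0;
}
```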
 
You can always use more CUs thanks to compute. The GPU is no longer slaved to the CPU and unable to work without it. We've even got GPUs creating work for themselves these days.

I also believe this to be the case. That's another reason why I don't think the weak CPU is going to limit the graphical performance of the refresh systems when running games designed for the even weaker CPUs of their predecessors. It is reasonable to expect that some additional CPU performance will be required to feed the larger GPUs, and I expect that the PS4 Pro has, and Scorpio will have, a CPU up to that task.
 
Depends on whether they go with multiple memory pools and how accurate the render is to the final console. The render showed 12 memory chips on the top layer.
 
16GB total would probably mean 11-12GB for games. Considering 8GB is now pretty much the average on graphics cards, I think that several-GB buffer would give Scorpio some future-proofing.
 
Wasn't it a year or two ago that AMD was talking about an exascale APU sporting 32 Zen cores and a Greenland-based iGPU with 32GB of HBM?

If AMD believes it can produce something like that, then producing an 8-core Zen APU with a 6 TFLOPS iGPU should be a cakewalk. LOL.

To be fair, I think they were hoping to charge several thousand dollars per exascale APU. I doubt MS is that elastic on pricing.
 
Zero
With a 384-bit bus, 12 is the only option.

Maybe slightly nitpicking, but it's not zero, as there are other options, however unlikely. It could be 24GB, or they could populate different-sized memory chips on some of the channels, like Nvidia has done in the past. For example, have 4 of the 12 channels carry 16Gb chips and the other 8 channels 8Gb chips: that's 16GB total, where the first 12GB interleaves across all 12 channels at full bandwidth and the last 4GB sits on only the 4 bigger chips. Have the 12GB for games and slower access to the remaining 4GB for the OS.
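Quick back-of-the-envelope check on that mixed-density idea (pure arithmetic; assumes one 32-bit chip per channel on a 384-bit bus):

```
#include <cstdio>

// Hypothetical mixed-density layout: 384-bit bus = 12 x 32-bit channels.
int main() {
    const int channels   = 12;
    const int bigChips   = 4;                    // 16Gb = 2GB each
    const int smallChips = channels - bigChips;  // 8Gb = 1GB each

    const int totalGB = bigChips * 2 + smallChips * 1;  // 4*2 + 8*1 = 16GB
    // The first 1GB of every chip interleaves across all 12 channels...
    const int fastGB  = channels * 1;                   // 12GB at full bandwidth
    // ...while the extra 1GB on each big chip spans only those 4 channels.
    const int slowGB  = totalGB - fastGB;               // 4GB at 4/12 the bandwidth

    printf("total %dGB: %dGB fast (12/12 channels), %dGB slow (%d/12 channels)\n",
           totalGB, fastGB, slowGB, bigChips);
    return 0;
}
```

That slow 4GB at a third of the bandwidth is exactly the kind of pool you'd park the OS in.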


I would like to see MS align with someone like Samsung and bring Adaptive Sync/FreeSync to TVs.

That would be a proper next-gen leap for gamers, instead of VR, HDR, and 4K.

I think Adaptive Sync/FreeSync is overrated, especially on consoles. A steady framerate is IMO far better, and on consoles it is much easier to have locked framerates. Even on a PC I'd prefer adjusting settings to lock the framerate at a fixed number and have it stay there instead of fluctuating.
 
The Doom Vulkan patch gave the RX 480 around a 30% increase. Most of this came from GCN-specific wave shuffle intrinsics and async compute. Console games have used these features extensively for years. The 980 Ti (Maxwell) doesn't gain anything from async compute, and most PC devs will not be writing code with GPU-specific intrinsics for it.
Nvidia looks to have implemented the wave shuffle intrinsics as well with a recent Doom patch. I'd agree PC devs likely won't use them extensively, but it looked like they were coming with SM6. In the case of compute-based AA, especially SSAA, I'd expect a relatively standardized implementation at the very least. It just seems too obvious a performance win not to have implemented. The details should be relatively straightforward if just mapping a render target for input. FXAA seems a prime example of this, and the technique would seem simple enough to release on GPUOpen.
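For anyone who hasn't run into wave/warp shuffle intrinsics, here's the idea in CUDA, where __shfl_down_sync plays the same role as the GCN/SM6 wave operations discussed above: lanes exchange registers directly, with no round trip through shared memory (the example just sums 32 ones):

```
#include <cstdio>
#include <cuda_runtime.h>

// Warp-wide sum reduction via shuffle intrinsics: each step pulls a
// partial sum from a lane `offset` positions away, halving the count
// of live partials, until lane 0 holds the warp's total.
__global__ void warpSum(const float* in, float* out) {
    float v = in[threadIdx.x];
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffff, v, offset);
    if (threadIdx.x == 0) *out = v;
}

int main() {
    float hostIn[32], *devIn, *devOut, result;
    for (int i = 0; i < 32; ++i) hostIn[i] = 1.0f;  // expected sum: 32

    cudaMalloc(&devIn, sizeof(hostIn));
    cudaMalloc(&devOut, sizeof(float));
    cudaMemcpy(devIn, hostIn, sizeof(hostIn), cudaMemcpyHostToDevice);

    warpSum<<<1, 32>>>(devIn, devOut);
    cudaMemcpy(&result, devOut, sizeof(float), cudaMemcpyDeviceToHost);
    printf("warp sum = %.0f\n", result);  // prints 32

    cudaFree(devIn); cudaFree(devOut);
    return 0;
}
```

That register-exchange trick is also why these intrinsics are attractive for compute-based AA resolves: neighboring samples can be shared within a wave without touching memory.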

I've seen absolutely nothing on this, but a better solution might be to implement a basic SSAA data-passing mechanism in SM6 that accounts for wave size. That would seem a trivial way to wrap up the coming functionality, and it could become a fairly common pattern.

I haven't heard anything that suggests Vega will offer a significant efficiency increase over Polaris, but I'd be interested to take a look at your sources on that.
Just speculation, but these two papers are what I'm basing efficiency gains on for the time being. The first seems to tie into some of the Polaris features already; I'd imagine Vega could expand on that. The second is interesting, to say the least. It would very likely result in an IP-level change along with efficiency gains, depending on the implementation. Both seem fairly plausible for implementation in Vega, and both come from AMD engineers.
http://people.engr.ncsu.edu/hzhou/hpca_12_final.pdf
http://www.freepatentsonline.com/20160085551.pdf
 