Next Generation Hardware Speculation with a Technical Spin [2018]

Does it even make sense to sell a $199 PS4 when they can have a >2x more powerful console that supports 4K HDR TVs and a significantly better PSVR experience for $249?

To be honest, I think 2019 will see the PS4 starting to be phased out. Preferably with the introduction of a mobile PS4 Go.
Then we'd have:

- PS4 Pro for $249
- PS4 Go for $350-400

PS4 Go would have full compatibility with the 2013 console but in 8-9" tablet form. Devs still target both SKUs, but the Pro would become the de facto home console for 2019, until the PS5 release.

Agreed. A Pro Slim would be an ideal mass market device - great for 1080p and PSVR, decent for 4K, and tons of content.

And Sony need to compete with the Switch, so a portable PS4 should come sooner rather than later.
 
Maybe, but that section is also very general and could be talking about the GPU or CPU of any system. Also, wouldn't AMD have some patents of their own, given this is also their IP, or at least have some of their people with expertise in this area listed along with Mark and David?

I find it interesting that David Simpson is a lead programmer at ND/Ice Team as well as a lead architect for the PS4 and Pro SoCs. He specializes in the GPU which might indicate the patents relate to that rather than CPU?

I would think AMD is not in the business of patenting things like this, given the compatibility implied by x86. These patents seem to cover software-driven approaches (invoking things like real-time adjustments), which would suggest an alternate environment mode that sits layers above the uOP plane. The CPU is called out by name more than the GPU, and things like caches and busses, which the CPU must arbitrate, are mentioned.
 

I guess we'll have to wait and see how it pans out.
 
The embedded Vega in the Raven Ridge SoCs has spectacular performance/watt. It's getting >2x the performance of Bristol Ridge solutions with the same TDP and CU count.
It's true that Vega 10 failed to meet the power efficiency characteristics that most hoped for (AMD included, probably), but a single chip's performance isn't representative of the whole architecture. There's embedded Vega and there's a 7nm Vega 20 for HPC coming up this year.

Well, they also went from Bulldozer to Zen in those APUs, no? Hard to discount the effect of the CPU jump in that equation.
 
I wonder if using 3 HBM2 stacks at 2.4Gbps (~920GB/s) would give the SoC enough system bandwidth to completely forego the L3 cache for inter-core coherency.
That could reduce the die area for each CCX by ~1/3rd... at the cost of an extra HBM stack, of course.
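Quick sanity check on the ~920GB/s figure, assuming standard HBM2 stacks with a 1024-bit interface each:

Code:
# Back-of-envelope check of the ~920GB/s figure above.
# Assumes standard HBM2: 1024-bit interface per stack, 2.4Gbps per pin.
stacks = 3
bits_per_stack = 1024
gbps_per_pin = 2.4

gb_per_s = stacks * bits_per_stack * gbps_per_pin / 8
print(f"{gb_per_s:.1f} GB/s")  # 921.6 GB/s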
The L3 complex hosts the shadow tags used to check the L2 caches of the other cores in a CCX, which is possibly why Raven Ridge's half-sized L3 has a rather thick band of silicon with a set of SRAM arrays where the missing L3 half would have been. That part of the architecture doesn't seem to be optional the way it once was, so the area saved would be less than a full third, and re-designing the CCX to drop it would push work currently handled locally out into the fabric.
The L3 hierarchy also absorbs much of the complexity for outstanding misses, since its bandwidth and access buffering are 2-4x higher than what can be sent across the fabric.

Latency-wise, it would make every L2 miss take roughly as long as a memory access, and it would quadruple the number of clients on the fabric if the L3 isn't consolidating the cores. Taking that expansion and pairing it with 24 memory channels might not bode well for the data fabric.
 
Article to go with DF video: https://www.eurogamer.net/articles/...n-a-potential-ps5-deliver-a-generational-leap

PlayStation 5: when can Sony truly deliver a generational leap in power?
And what kind of spec can be realistically delivered?

Let's begin with timing. What we do know is that Mark Cerny has once again been hitting the road, talking to developers about their needs for the next-gen PlayStation. But in terms of when an actual retail console is likely to be delivered, there are two crucial technological hurdles that need to be cleared before production of a final unit can begin: that'll be the availability of a smaller, denser process for manufacturing the system's main processor, plus the necessity for newer, faster memory. In both cases, 2019 looks like the earliest possible time a generational leap in console power can be delivered, but other factors - system build cost, for example - may set that back further.

It starts at the transistor level. The 16nm FinFET production process from Taiwanese chip manufacturing giant TSMC is currently used by all of the console manufacturers and while competitors are available (and have been used in the last-gen era), the hot candidate for the process used by PlayStation 5 and the next-gen Xbox will be TSMC's upcoming 7nm FinFET technology. Mobile devices will likely have first dibs on the process, and it seems that Huawei may have the first full production run. Typically it requires at least a year for a new process to achieve the kind of efficiency needed to make console production possible, which again makes 2019 the earliest conceivable time for a viable console theoretically capable of delivering a substantial leap in power.
...

What about the secret sauce?
Looking back at the standard PlayStation 4 and Xbox One, the graphics hardware in their respective SoCs turned out to be very similar to existing AMD desktop GPU designs - though Sony doubled down on asynchronous compute, while Microsoft introduced instructions for easier backwards compatibility, along with a programmable command processor. However, with the enhanced consoles, we saw much more ambitious, more customised designs. Microsoft made over 40 GPU hardware optimisations, while Sony introduced hardware checkerboarding functionality and Vega-level features like double-rate FP16, or so-called 'rapid packed math'.

Compute units, clock-speeds and teraflops will matter, but we expect both Sony and Microsoft to push the boat out with hardware customisations that reflect their expectations for the generation to come. At this point, it's way too early to speculate with much depth on this, but maybe the recent GDC offered up some small clue as to the way forward with a strong emphasis on hardware-accelerated ray tracing providing some remarkable real-time global illumination.
 
I've read the notion, a few times, that the only reason for the mid-generation consoles is 4K, and that there will be little to warrant mid-generation iterations of the PS5 and XBoxTwo.

There may be some truth to that, but I believe the line that they were made to prevent migration to PC. Nonetheless, they would need something on which to sell people.

Real-time ray tracing is probably not going to be viable in 2019/2020 at console prices (wasn't it 3 GTX 1080s that were used?). Does anyone reckon that, between algorithms being improved and appropriate hardware coming down in price, it may become viable for mid-generation?
 
If AMD is able to exceed 64 compute units with its new Navi architecture (scalability is mentioned in an early slide), then looking at how the silicon area of the Xbox One X's Scorpio Engine could scale on a 7nm FinFET process, 80 compute units looks viable, with 72 or 76 active. 1500MHz on such a core would propel you towards the top end of the 11-15TF window.
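Rough math behind that window, assuming GCN-style CUs (64 lanes per CU, 2 FLOPs per clock per lane):

Code:
# Rough FLOPS math for the CU counts and clock mentioned above.
# Assumes GCN-style CUs: 64 lanes per CU, 2 FLOPs per clock (FMA).
def tflops(cus, clock_ghz, lanes=64, flops_per_clock=2):
    return cus * lanes * flops_per_clock * clock_ghz / 1000

for active_cus in (72, 76, 80):
    print(active_cus, "CUs @ 1.5GHz:", round(tflops(active_cus, 1.5), 2), "TF")
# 72 -> ~13.8TF, 76 -> ~14.6TF, 80 -> ~15.4TF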

 
If Cerny is just now hitting the road, I would think it has to be to advertise the capabilities rather than to solicit explicit feedback, except perhaps on what kind of RAM and non-volatile memory to offer, and how much.

Also, even if a 64CU limit exists, they can still get a chip that uses all 64 by designing it with 68/70 for yields.
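A very rough sketch of why a handful of spare CUs helps yields, using a simple Poisson defect model; the defect density and CU area below are made-up illustrative numbers, not anything from AMD or Sony:

Code:
# Illustrative only: simple Poisson defect model for CU redundancy.
# cu_area_mm2 and defects_per_mm2 are assumptions, not real figures.
from math import exp, factorial

cu_area_mm2 = 5.0         # assumed silicon area per CU
defects_per_mm2 = 0.002   # assumed defect density over the CU array
physical_cus = 68
spare_cus = physical_cus - 64

lam = physical_cus * cu_area_mm2 * defects_per_mm2  # expected defects per die
# A die is still sellable if defective CUs <= available spares
with_spares = sum(exp(-lam) * lam**k / factorial(k) for k in range(spare_cus + 1))
no_spares = exp(-lam)
print(f"no spares: {no_spares:.1%}, with {spare_cus} spares: {with_spares:.1%}")
# roughly 51% vs 99.9% with these made-up numbers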

Real-time ray tracing is probably not going to be viable in 2019/2020 at console prices (wasn't it 3 GTX 1080s that were used?). Does anyone reckon that, between algorithms being improved and appropriate hardware coming down in price, it may become viable for mid-generation?

We'll certainly see selectively applied RT in games next generation. It may not be hardware-accelerated to any great extent, but devs can use their general compute budget to implement it in limited scenarios.
 
If Cerny is just now hitting the road, I would think it has to be to advertise the capabilities rather than to solicit explicit feedback, except perhaps on what kind of RAM and non-volatile memory to offer, and how much.

Agree. I think he even said it was ~2008/2009-ish when he did the tour of 30 devs for the PS4, so about four years before launch. That was stated in the post-launch PS4 "making of" interviews he did.

Wonder when the time will be right for the other info Richard has?
 
If Sony believes VR (PSVR) is the future of gaming, then whatever they offer will require hardware capable of delivering next-generation VR performance: driving at least two 1080-1200p/120Hz+ OLED screens, or possibly a single true 2160p OLED screen.
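For a sense of the raw pixel rates involved (panel figures are just the ones speculated above, plus current PSVR for reference):

Code:
# Raw pixel-rate comparison for the speculated VR display options.
def mpix_per_s(w, h, hz, panels=1):
    return w * h * hz * panels / 1e6

print("2x 1080x1200 @ 120Hz:", round(mpix_per_s(1080, 1200, 120, panels=2)), "Mpix/s")
print("1x 3840x2160 @ 120Hz:", round(mpix_per_s(3840, 2160, 120)), "Mpix/s")
print("current PSVR 1920x1080 @ 120Hz:", round(mpix_per_s(1920, 1080, 120)), "Mpix/s")
# ~311 vs ~995 vs ~249 Mpix/s, before any supersampling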
 
Article to go with DF video: https://www.eurogamer.net/articles/...n-a-potential-ps5-deliver-a-generational-leap

PlayStation 5: when can Sony truly deliver a generational leap in power?
Finally, some confirmation that Mark Cerny is still involved with the next gen; he has a knack for hardware.

The entire article is so much better than the crazy rumors of late. I like that Richard considered the small possibility of 18GB on a 384-bit bus. For that to happen we should see some indication of 12Gbit parts from one of the memory manufacturers, but so far all three have only committed to 8 and 16. I wonder if they could use some sort of binning process with disabled sections of DRAM to use 16Gbit parts as 12Gbit (or 24Gbit as 16Gbit) which would otherwise be thrown away. Maybe even support implemented in the memory controller to blacklist/remap a number of rows on each chip or something.
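The chip-count arithmetic behind the 18GB-on-384-bit idea (GDDR6 parts are 32 bits wide, so a 384-bit bus means 12 chips):

Code:
# Capacity options on a 384-bit GDDR6 bus (32 bits per chip -> 12 chips).
bus_width = 384
bits_per_chip = 32
chips = bus_width // bits_per_chip  # 12

for density_gbit in (8, 12, 16):
    total_gb = chips * density_gbit / 8
    print(f"{chips} x {density_gbit}Gbit = {total_gb:.0f}GB")
# 12GB, 18GB or 24GB - the 18GB option needs the so-far-unannounced 12Gbit parts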
 
I've read the notion, a few times, that the only reason for the mid-generation consoles is 4K, and that there will be little to warrant mid-generation iterations of the PS5 and XBoxTwo.

Instead of resolution-boosting for the mid-gen machines they could focus on fps-boosting, i.e. release mid-gen hardware where the CPU is the primary upgrade.
 
Finally, some confirmation that Mark Cerny is still involved with the next gen; he has a knack for hardware.

I'd say it's his perspective as a developer that is more important. Sony's PlayStation machines increasingly became developer-unfriendly until Mark Cerny was involved with Vita and then PS4. But no amount of desire to be developer-friendly can produce hardware or hardware options that aren't economical. The choices of base hardware will be limited once more, more so if Sony stay with AMD.
 
Finally, some confirmation that Mark Cerny is still involved with the next gen; he has a knack for hardware.

The entire article is so much better than the crazy rumors of late. I like that Richard considered the small possibility of 18GB on a 384-bit bus. For that to happen we should see some indication of 12Gbit parts from one of the memory manufacturers, but so far all three have only committed to 8 and 16. I wonder if they could use some sort of binning process with disabled sections of DRAM to use 16Gbit parts as 12Gbit (or 24Gbit as 16Gbit) which would otherwise be thrown away. Maybe even support implemented in the memory controller to blacklist/remap a number of rows on each chip or something.

I don’t think they want to be stuck with 384 bit interfaces or a minimum of 12 DRAM chips when it comes to die shrinking and cost cutting. A 256 bit interface would age much more gracefully. Heck, HBM won’t look so bad once they can get sufficient via density on organic.

With the X1X, Microsoft doesn’t really have to worry about cost reductions, IMO. It may never be rev’ed, and I think that’s fine for what it is.
 