Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

It's the year delay which puzzles me. Maybe AVFS requires a lot more time to design sensing "loops" all around the chip. Maybe yield improved this year.

I think the considerable CPU enhancements to a dead-end line - supposedly based on extensive profiling and then testing on simulated CPUs - took some number of months. I think the development of a highly efficient, low-latency 12-channel, 384-bit memory controller (2x the usual ratio of channels per memory chip) probably also required significant profiling, development and testing on simulators. And bumping clock speeds up by ~30% beyond the already-overclocked 16nm FF X1S can't have been zero work and pain-free either.

Outside of console wars, I don't think anyone would normally claim enhancements to CPU architecture, memory controllers and big jumps in CPU frequency should be achievable with basically zero months of additional R&D. Especially when the company behind it has explicitly stated that, yeah, all this extra cool shit that no-one else has done took us some time to do.

I know you're joking...

He's not.
 
1070? Xbox One X is already very close to a 1070 in flops. Half the problem with AMD GPUs is their DX11 drivers, which wouldn't apply to a console, right? Seems to me Nvidia's advantage is mostly on the software side of things in terms of performance.

On PC, RX Vega 64 is comparable to a 1080 and Vega 56 seems to be better than a 1070.
From a TDP perspective I'm not sure if Vega will work. I could be wrong.

I might be out of touch, but I still consider a 1070 a fairly high end nvidia card.

2 years from now, its performance level sitting at AMD's mid-range seems conservatively realistic. Or rather, a console with 1070 performance at $399 could be doable.
 
From a TDP perspective I'm not sure if Vega will work. I could be wrong.

I might be out of touch, but I still consider a 1070 a fairly high end nvidia card.

2 years from now, its performance level sitting at AMD's mid-range seems conservatively realistic. Or rather, a console with 1070 performance at $399 could be doable.
What I'm saying is Scorpio already pretty much has GTX 1070 performance, so that'd be a low bar to set. It's certainly much closer to a 1070 than a 1060, anyway.

Vega could work in a console, just massively downclocked. But hopefully AMD comes up with something much better in time for new consoles. IMO they might as well not bother if they can't do around 1080 Ti performance, or maybe in between the 1080 and the Ti at worst.
 
What I'm saying is Scorpio already pretty much has GTX 1070 performance, so that'd be a low bar to set. It's certainly much closer to a 1070 than a 1060, anyway.

Vega could work in a console, just massively downclocked. But hopefully AMD comes up with something much better in time for new consoles. IMO they might as well not bother if they can't do around 1080 Ti performance, or maybe in between the 1080 and the Ti at worst.

Scorpio is a $499 console, probably not making any profit on the hardware.
 
I think the considerable CPU enhancements to a dead-end line - supposedly based on extensive profiling and then testing on simulated CPUs - took some number of months. I think the development of a highly efficient, low-latency 12-channel, 384-bit memory controller (2x the usual ratio of channels per memory chip) probably also required significant profiling, development and testing on simulators. And bumping clock speeds up by ~30% beyond the already-overclocked 16nm FF X1S can't have been zero work and pain-free either.

Outside of console wars, I don't think anyone would normally claim enhancements to CPU architecture, memory controllers and big jumps in CPU frequency should be achievable with basically zero months of additional R&D. Especially when the company behind it has explicitly stated that, yeah, all this extra cool shit that no-one else has done took us some time to do.
I'm seeing this as one taking 3 years, the other 4 years, but since we don't know when they started working on them, it could be ANY time frame. ID buffer patents were filed in 2014, which means development started well before 2014. I can't find patents for anything Scorpio-related, but they should become public a few months after launch.

I usually ignore PR claims and wait for objective facts from devs, engineers or patents. ID buffer was explained (Cerny presented it at GDC), and DR FP16 is simple enough to figure out. We know what it is.

MS have a recent history of reactive PR claims, which I won't list here. One of their proudest "additions" is vaguely emphasized as their invention and called Hovis. But digging into AMD's AVFS and TI's presentation about the same technique, it matches MS's claims. With that in mind, chances are their other claims about the CPU are overblown too, since they again refuse to give any specifics and use similarly hyped PR wording. They changed something in the CPU after running benchmarks, and improved latency by an unknown amount.

Both Sony and MS had access to the same technologies from AMD; they chose some recent ones (MS used AVFS, Sony used DR FP16) and asked for some custom changes that might not be AMD's choices in their own products (MS's CPU mods, Sony's ID buffer). Three or four years ago it was MS using ESRAM and Tensilica cores, and Sony asking for 8 ACEs, the cache bit, etc...

TL;DR
Both have extra cool shit no one else has done.

Bonus material:
Pronounce "avfs" out loud as if it were an English word...
 
I'm seeing this as one taking 3 years, the other 4 years, but since we don't know when they started working on them, it could be ANY time frame. ID buffer patents were filed in 2014, which means development started well before 2014. I can't find patents for anything Scorpio-related, but they should become public a few months after launch.

I usually ignore PR claims and wait for objective facts from devs, engineers or patents. ID buffer was explained (Cerny presented it at GDC), and DR FP16 is simple enough to figure out. We know what it is.

Indeed, but I'd say that the claims of latency improvements, memory channel count, TLB entries etc. are pretty likely to be true given the target audience of a Hot Chips presentation and the verifiability of those features.

MS have a recent history of reactive PR claims, which I won't list here. One of their proudest "additions" is vaguely emphasized as their invention and called Hovis. But digging into AMD's AVFS and TI's presentation about the same technique, it matches MS's claims. With that in mind, chances are their other claims about the CPU are overblown too, since they again refuse to give any specifics and use similarly hyped PR wording. They changed something in the CPU after running benchmarks, and improved latency by an unknown amount.

Regarding AVFS, I understood it to be something that operated at the chip level, and not something that required board-level configuration on a per-chip basis. I got the impression that "the Hovis technique" was different from current voltage-scaling techniques in that tuning went beyond the chip and onto the board.

I imagined a situation where a chip could have different maximum power supplied to different pins, with power balanced between VRMs, where the worst case for all the chip's critical parameters could be calculated never to occur. And you'd set that up on a per-chip basis.

Without turbo and throttling, and with a need to maintain a constant level of performance no matter what, I don't see how AVFS as described for CPUs could directly be applied to a console chip. I think the Hovis thing will be a different approach to maximising performance, quite possibly for a chip that wouldn't be able to benefit from, and probably doesn't even support, AVFS.

Edit: like dicking around in your BIOS with all kinds of voltages, with the aim of hitting a certain power and performance threshold for the chip, instead of using the preset voltages. What you save in one area you may be able to spend in another, and you can do that without having all kinds of modern heuristics, predictive models, and sensor networks across your basically 4-year-old architecture.

TL;DR
Both have extra cool shit no one else has done.

Completely agree.
 
So you expect console gpu progression to grind to a halt? Plus who's to say that $499 price tag won't become the norm for Sony and MS?
The problem is that the whole system needs to get beefier to increase performance profiles.

It's more CUs, more bandwidth, more CPU, more storage, faster storage, more cooling. Etc etc.

We don't get the node shrinks nearly as fast as we used to.

Yes, $499 may be the realistic price point from here forward. But even that may not be low enough to support a bigger jump in technology.

I.e. look at the Scorpio base model: 1 TB minimum, 12 GB GDDR5. Prices ain't dropping like crazy on storage, so that's an immediate increase in costs to support 4K. And SoCs aren't getting shrunk every 2 years either.

2 years from now is not a lot of time, imo, to have a noticeable generational difference over this generation. It's why these mid-gen refreshes are critical. They give you the same games but at higher resolutions. They can hold the fort until the rest of the technologies drop further in price or change entirely.
 
Indeed, but I'd say that the claims of latency improvements, memory channel count, TLB entries etc. are pretty likely to be true given the target audience of a Hot Chips presentation and the verifiability of those features.

Regarding AVFS, I understood it to be something that operated at the chip level, and not something that required board-level configuration on a per-chip basis. I got the impression that "the Hovis technique" was different from current voltage-scaling techniques in that tuning went beyond the chip and onto the board.

I imagined a situation where a chip could have different maximum power supplied to different pins, with power balanced between VRMs, where the worst case for all the chip's critical parameters could be calculated never to occur. And you'd set that up on a per-chip basis.

Without turbo and throttling, and with a need to maintain a constant level of performance no matter what, I don't see how AVFS as described for CPUs could directly be applied to a console chip. I think the Hovis thing will be a different approach to maximising performance, quite possibly in a chip that wouldn't be able to benefit from, and probably doesn't even support, AVFS.

Completely agree.
It requires a board-level design because the chip is controlling the VRMs in an active closed loop, but it's not different components per board; each board will end up at a slightly different voltage since the VRMs are all digital today. DVFS, however, is open loop: there are fixed tables of voltages. It's possible they made a halfway implementation with no closed loop, like sensing once on the production line and setting the DVFS table once. The principle is the same. But that could create the "noise roulette" others speculated about.

The TI presentation explains (better than AMD) how it applies to any silicon, because it compensates for "weaker" vs "stronger" wafers. The more stable the production is, the less impact this has. It helps yield where you can't do binning (like consoles!), and it avoids throwing away the "weak" silicon, or otherwise overvolting all of them to allow the weaker ones to pass.
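To make the open-loop vs closed-loop distinction concrete, here's a toy sketch in Python. This is purely illustrative - the table values, timing-slack numbers and voltage step are made up, and real hardware does this in silicon and firmware, not application code:

```python
# Open-loop DVFS: a fixed table maps each frequency state to a worst-case
# voltage, with enough guard-band baked in to cover the weakest silicon.
DVFS_TABLE = {
    800: 0.90,   # MHz -> volts (illustrative values only)
    1600: 1.05,
    2300: 1.20,
}

def dvfs_voltage(freq_mhz: int) -> float:
    """Look up the pre-characterised voltage for a frequency state."""
    return DVFS_TABLE[freq_mhz]

# Closed-loop AVFS: on-die sensors report actual timing slack, and the
# controller keeps nudging the regulator so the margin is just enough.
def avfs_step(target_slack_ps: float, measured_slack_ps: float,
              vdd: float, step_v: float = 0.00625) -> float:
    """One iteration of the control loop: trim voltage toward the target."""
    if measured_slack_ps > target_slack_ps:
        return vdd - step_v   # silicon is faster than needed: shave voltage
    elif measured_slack_ps < target_slack_ps:
        return vdd + step_v   # too close to failure: add voltage
    return vdd
```

The "halfway implementation" above would be running something like the closed loop exactly once on the production line and then freezing the result into a DVFS-style table.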
 
It requires a board-level design because the chip is controlling the VRMs in an active closed loop, but it's not different components per board; each board will end up at a slightly different voltage since the VRMs are all digital today. DVFS, however, is open loop: there are fixed tables of voltages. It's possible they made a halfway implementation with no closed loop, like sensing once on the production line and setting the DVFS table once. The principle is the same. But that could create the "noise roulette" others speculated about.

Oh yeah, I get that DVFS and AVFS require board-level design, but they don't (I don't think) require you to "bake" chip-specific values and settings into the board's configuration (on PC, boards have to work with whatever chip you plug in / solder onto them).

DVFS requires a specific implementation on the CPU. What I'm guessing is that MS had no way to engineer that kind of system into the chip, wouldn't have benefited much from it anyway (they can't turbo, they can't throttle, and they don't have to run on a battery), and so worked around it by - as you say - sensing on the production line and setting values for the board based on a far smaller and less dynamic range of variables. The end result being that you can push any given chip further than you could without it, so the yield at frequency goes up.

Hovis had lemons and made lemonade. Not totally original, but still most likely a first in its own way.

The TI presentation explains (better than AMD) how it applies to any silicon, because it compensates for "weaker" vs "stronger" wafers. The more stable the production is, the less impact this has. It helps yield where you can't do binning (like consoles!), and it avoids throwing away the "weak" silicon, or otherwise overvolting all of them to allow the weaker ones to pass.

Thanks. I will check that presentation out as soon as I have some downtime from work and am not drunk.
 
Can't find the presentation, but this is one of their PDFs...
http://www.ti.com/lit/an/slva646/slva646.pdf

Interestingly, they say AVS is going to be more and more important at smaller nodes because they will have more process variation.

And AVS doesn't necessarily mean dynamic, if we forget the F.

AVS comes in different classes (see Section 4):
• Class 0 – This class accounts for process variations. Temperature and aging should be factored into the voltage margin.
• Class 1 – This class accounts for process variations and aging. The device is calibrated and the voltage adjusted at boot. Because the temperature may change, temperature must be factored into the voltage margin.
• Class 2 – Software performs a self-check at intervals and accounts for process variations, temperature, and aging. During the self-check, only limited performance is available. No adoption of this class has been seen in automotive markets.
• Class 3 – Hardware performs a self-check at intervals and accounts for process variations, temperature, and aging. During the self-check, only limited performance is available. No adoption of this class has been seen in automotive markets.
AMD's AVFS is much more advanced, adding sensors everywhere so that the VRMs are continuously adjusted. But nothing prevents a less expensive and simpler method, even a one-time check out of production to account only for process variation. In that case it could be as simple as writing a value into the bootloader's EPROM or something.
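That one-time, Class-1-style idea can be sketched in a few lines. Everything here is an assumption for illustration - the ring-oscillator measurement, nominal values and the 50 mV clamp are invented, not from TI or MS:

```python
# Illustrative one-time AVS: measure process strength once on the
# production line, then store a per-unit voltage setting in boot storage.

NOMINAL_VDD = 1.10           # volts, worst-case one-size-fits-all setting
RO_NOMINAL_HZ = 100_000_000  # expected on-die ring-oscillator frequency

def process_offset(ring_osc_hz: float) -> float:
    """Fast silicon tolerates lower voltage; slow silicon keeps full margin."""
    strength = ring_osc_hz / RO_NOMINAL_HZ   # > 1.0 means a "strong" die
    # clamp between no reduction and a 50 mV shave
    return max(0.0, min(0.050, (strength - 1.0) * 0.25))

def fuse_boot_voltage(ring_osc_hz: float) -> float:
    """The value a factory station might write into the bootloader's EPROM."""
    return round(NOMINAL_VDD - process_offset(ring_osc_hz), 3)
```

A weak die (slow ring oscillator) keeps the full nominal voltage; a strong one gets a small shave, which is exactly the "compensate weaker vs stronger wafers" behaviour without any runtime loop.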
 
Regarding console case size and cooling and a future Xbox.

XB1X is pretty small - it's ever so slightly smaller than the Xbox One S - yet the 1X manages to have the most power, with the highest-clocked GPU (and highest-clocked Jaguar CPU in a console, plus 12 GB GDDR5), all cooled with a vapor chamber. I wonder, theoretically speaking, if Microsoft took the same approach and methodology used with the XB1X and applied it to their next-gen console, but next time with a case the same size as the original Xbox One, allowing more airflow and even more cooling options. The other factors being that it would be 4 years out (Fall 2021) and at the same $499 launch price.

With those 4 factors:
1. XB1X methodology
2. A larger case allowing greater cooling possibilities
3. Fall 2021 release
4. $499 price

What might be the best specs Microsoft could have with those things in mind?

Especially if they get to use a 2nd generation 7nm+ FinFET process.
 
So you expect console gpu progression to grind to a halt? Plus who's to say that $499 price tag won't become the norm for Sony and MS?

I'm not expecting miracles out of the 7nm process. It will be tricky to create something *much* better than Scorpio at a $399 entry point around ~2020 without taking a loss. HBM2 or equivalent could help on the power front, but I suspect GDDR6 would be more cost-effective. It's all about balancing cost versus benefit. Where do console manufacturers want to spend the money, and where are compromises made?

I for one would really like to push the price point and specs higher for consoles, but a higher price might not be what makes a good mass-market device. Unfortunately it looks like a $1000 phone is viable but a $1000 console is not. I would like to see a reasonable GPU, a reasonable CPU and very fast mass storage, optimized to fit an $800-1000 price range (not going to happen, but one can hope). Ditch optical media as standard and make it an accessory for those markets that don't want to accept a fully digital future.

Isn't the 1070 a cut-down 1080? A console manufacturer doesn't have the luxury of binning/salvaging not-so-great parts. Anything not up to spec is a loss for Sony/MS.

I believe process is only going to do so much. To get a traditional generational leap, something magical would need to happen, and that is unlikely. AMD has plenty of work left to even match Pascal, let alone do miracles. Of course next gen would gain some architectural improvements over the current gen, but how much?

I suspect there is no next gen. It will be incremental upgrades and experimentation with new models and price points. I think the business people inside Sony and MS realize that losing compatibility resets the market, and I doubt Sony wants to do that. A reset would let MS back into the game. Breaking backwards compatibility would have to come with some enormous benefit over incremental upgrades to make it worthwhile. There is a reason why iPhones, Androids and Mac OS X stay backwards compatible. Microsoft probably learnt an expensive lesson with the Metro UI and trying to force people out of their existing native apps...
 
The problem is that the whole system needs to get beefier to increase performance profiles.

It's more CUs, more bandwidth, more CPU, more storage, faster storage, more cooling. Etc etc.

We don't get the node shrinks nearly as fast as we used to.

Yes, $499 may be the realistic price point from here forward. But even that may not be low enough to support a bigger jump in technology.

I.e. look at the Scorpio base model: 1 TB minimum, 12 GB GDDR5. Prices ain't dropping like crazy on storage, so that's an immediate increase in costs to support 4K. And SoCs aren't getting shrunk every 2 years either.

2 years from now is not a lot of time, imo, to have a noticeable generational difference over this generation. It's why these mid-gen refreshes are critical. They give you the same games but at higher resolutions. They can hold the fort until the rest of the technologies drop further in price or change entirely.
I would also like to see more balanced systems going forward, but the fact is the GPU will remain the largest chip in consoles at this point. It's not like I'm expecting a monster GPU; I think half of whatever the top-end card is at the time of release is about as good as we can hope for.

But with regards to the 1070 specifically - we're already at that performance level with Scorpio. Mark Cerny says he wants at minimum 8 TFs of GPU power for their 4K solution, so that'd be 1080 level at least. But honestly I expect more, esp. if they'll be $500 as well.

And yeah, RAM prices don't seem to be falling fast at all, which is concerning. I'd like to see newer consoles take the GameCube approach: faster loading and overall efficiency as opposed to raw grunt (GPU grunt in the current console space). Have that 8TF GPU in the PS5 or whatever, then spend the rest on faster storage and memory. And obviously we'll get far better CPUs by default.

Isn't the 1070 a cut-down 1080? A console manufacturer doesn't have the luxury of binning/salvaging not-so-great parts. Anything not up to spec is a loss for Sony/MS.

Actually, since the 7th gen all console GPUs have been cut-down desktop GPUs in one way or another. The PS4 and Xbox One have compute units disabled vs. their desktop counterparts.

I'm not expecting miracles out of 7nm either, but at the same time, with GPUs taking the lead in gaming vs. CPUs, I don't expect them to skimp on that front.
 
I'm seeing this as one taking 3 years, the other 4 years, but since we don't know when they started working on them, it could be ANY time frame. ID buffer patents were filed in 2014, which means development started well before 2014. I can't find patents for anything Scorpio-related, but they should become public a few months after launch.

I usually ignore PR claims and wait for objective facts from devs, engineers or patents. ID buffer was explained (Cerny presented it at GDC), and DR FP16 is simple enough to figure out. We know what it is.

MS have a recent history of reactive PR claims, which I won't list here. One of their proudest "additions" is vaguely emphasized as their invention and called Hovis. But digging into AMD's AVFS and TI's presentation about the same technique, it matches MS's claims. With that in mind, chances are their other claims about the CPU are overblown too, since they again refuse to give any specifics and use similarly hyped PR wording. They changed something in the CPU after running benchmarks, and improved latency by an unknown amount.

Both Sony and MS had access to the same technologies from AMD; they chose some recent ones (MS used AVFS, Sony used DR FP16) and asked for some custom changes that might not be AMD's choices in their own products (MS's CPU mods, Sony's ID buffer). Three or four years ago it was MS using ESRAM and Tensilica cores, and Sony asking for 8 ACEs, the cache bit, etc...

TL;DR
Both have extra cool shit no one else has done.

Bonus material:
Pronounce "avfs" out loud as if it were an English word...

Hovis refers to "tuning" the SE with a specific motherboard and its components.

http://www.eurogamer.net/articles/digitalfoundry-2017-project-scorpio-hardware-deep-dive

"It's not new technology in that chips have differences in settings and things like that. What is really different is that this is almost like taking the chip to a dyno," Del Castillo says. "If you're familiar with cars, cars get tuned on a dyno. We tune everything at the motherboard level, rather than the component level. It takes into account a lot of variations in other parts that are sitting alongside those boards."

AVFS seems to refer to real-time monitoring, more finely changing voltage according to the individualized characteristics of a processor in an effort to avoid wasted power.

They might both work to accomplish the same thing (avoid using unnecessary power), but they don't seem to describe the same method at all.
 
Hovis refers to "tuning" the SE with a specific motherboard and its components.

http://www.eurogamer.net/articles/digitalfoundry-2017-project-scorpio-hardware-deep-dive

"It's not new technology in that chips have differences in settings and things like that. What is really different is that this is almost like taking the chip to a dyno," Del Castillo says. "If you're familiar with cars, cars get tuned on a dyno. We tune everything at the motherboard level, rather than the component level. It takes into account a lot of variations in other parts that are sitting alongside those boards."

AVFS seems to refer to real-time monitoring, more finely changing voltage according to the individualized characteristics of a processor in an effort to avoid wasted power.

They might both work to accomplish the same thing (avoid using unnecessary power), but they don't seem to describe the same method at all.
The car reference explains nothing. Just like their explanation of a simple blower fan being like a car supercharger.

It's a technique that Microsoft calls the 'Hovis method', named after the engineer who developed it. Every single Scorpio Engine processor that comes off TSMC's production line will have its own specific power profile. Rather than adopt a sub-optimal one-size-fits-all strategy, Microsoft tailors the board to match the chip.
Whichever PR line you follow says something different.

AVS (with an F or not) refers to tuning the voltage regulators to compensate for variations in the process. The chip takes 80% of the power.

See the TI pdf I posted above.

But yeah, it looks like it's not adaptive; it's just adapted once. The text above is exactly what AVS is. The varying power profile is caused by the varying process strength. The only thing that can change the power profile is the power delivery, and the only power-delivery parts on the board are the regulators.
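The "dyno" tuning described in that DF quote amounts to finding, per board, the lowest regulator setting that still passes a stress test, then shipping with a guard-band on top. A hedged sketch - the stress-test hook, step size and margin are all assumptions, not anything MS has disclosed:

```python
# Per-board tuning sketch: walk the rail voltage down under a stress load
# until the chip would fail, then keep the last stable point plus margin.
# Integer millivolts avoid floating-point drift in the sweep.

def tune_rail(passes_stress, mv_start: int, mv_floor: int,
              step_mv: int = 10, margin_mv: int = 25) -> int:
    """Find the lowest millivolt setting that passes, then add a guard-band."""
    mv = mv_start
    while mv - step_mv >= mv_floor and passes_stress(mv - step_mv):
        mv -= step_mv
    return mv + margin_mv

# Example: a (fake) chip that is stable down to 1000 mV ships at 1025 mV,
# instead of the worst-case 1200 mV a one-size-fits-all profile would use.
shipped_mv = tune_rail(lambda mv: mv >= 1000, mv_start=1200, mv_floor=900)
```

That per-chip shave is where the yield and thermal headroom would come from, whether you call it AVS Class 1 or "the Hovis method".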
 
I'm not expecting miracles out of the 7nm process. It will be tricky to create something *much* better than Scorpio at a $399 entry point around ~2020 without taking a loss. HBM2 or equivalent could help on the power front, but I suspect GDDR6 would be more cost-effective. It's all about balancing cost versus benefit. Where do console manufacturers want to spend the money, and where are compromises made?

I for one would really like to push the price point and specs higher for consoles, but a higher price might not be what makes a good mass-market device. Unfortunately it looks like a $1000 phone is viable but a $1000 console is not. I would like to see a reasonable GPU, a reasonable CPU and very fast mass storage, optimized to fit an $800-1000 price range (not going to happen, but one can hope). Ditch optical media as standard and make it an accessory for those markets that don't want to accept a fully digital future.

Isn't the 1070 a cut-down 1080? A console manufacturer doesn't have the luxury of binning/salvaging not-so-great parts. Anything not up to spec is a loss for Sony/MS.

I believe process is only going to do so much. To get a traditional generational leap, something magical would need to happen, and that is unlikely. AMD has plenty of work left to even match Pascal, let alone do miracles. Of course next gen would gain some architectural improvements over the current gen, but how much?

I suspect there is no next gen. It will be incremental upgrades and experimentation with new models and price points. I think the business people inside Sony and MS realize that losing compatibility resets the market, and I doubt Sony wants to do that. A reset would let MS back into the game. Breaking backwards compatibility would have to come with some enormous benefit over incremental upgrades to make it worthwhile. There is a reason why iPhones, Androids and Mac OS X stay backwards compatible. Microsoft probably learnt an expensive lesson with the Metro UI and trying to force people out of their existing native apps...

This is one of the reasons I hope for a two-tier launch. A 4-core Zen with a ~10TFlop GPU and a healthy chunk of HBM3 or GDDR6, coupled with a UHD Blu-ray drive and a 750GB HDD, seems doable for 2019 at a mass-market price. It would be enough to deliver much better 4K visuals than the PS4 Pro and, to a lesser extent, could even do so per eye in the inevitable PSVR2. As long as it's backwards compatible, it would be enough to entice most base PS4 owners quickly enough.

Such a console is all well and good, but both non-portable home consoles have now established the two-tier model, and I'm sure Nintendo will join in within a year or two. So why not try for a piece of that high-tier pie at launch? An 8-core Zen CPU with a ~20TFlop GPU equipped with double the amount of HBM3/GDDR6, and a 2TB HDD - maybe with a small SSD/NVMe - for double the mass-market price would surely find an audience when there are graphics cards selling for well over £1000.

Same architecture, same sized physical medium, but for audiences with different priorities.

The PS3 and Xbox One both suffered from their high price, but it didn't help that they were also both worse than their competition. Had they released a better console than the competition, still at a high price, but alongside a low-cost alternative, I really think the market share of the platforms would have been greater.
 
The car reference explains nothing. Just like their explanation of a simple blower fan being like a car supercharger.


Whichever PR line you follow says something different.

AVS (with an F or not) refers to tuning the voltage regulators to compensate for variations in the process. The chip takes 80% of the power.

See the TI pdf I posted above.

But yeah, it looks like it's not adaptive; it's just adapted once. The text above is exactly what AVS is. The varying power profile is caused by the varying process strength. The only thing that can change the power profile is the power delivery, and the only power-delivery parts on the board are the regulators.
Once again, I do feel like you're jumping to your worst conclusions on this one.

Observationally, yes, the technologies do sound the same. They might even be the same; they do read like they're looking to achieve the same goal.

But that does not mean it is a rebranding of AMD's AVS.

Multiple companies can work towards the same goal and achieve it through different means. How they reach that goal, and the yield of their success, matter more than just saying it's the same goal.

What if AVS was not effective enough for a console? How would you know? Is AVS made for every application, with no restrictions on operation and deployment?

Do you know what the Hovis method's yields are? Do you know the cost implications of the Hovis method?

So while they aim to do the same thing, how it's implemented is the challenge. One would ask why we don't find AVS on the 4Pro.

It's pretty clear to me that Bill Hovis worked on the Hovis method.
 

All right. I don't think it's an unimportant addition, nor that it didn't require additional work. But it is an AVS implementation.

It's something that will be necessary at 7nm and beyond. So expect it in some form next gen.
 