Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Given the silicon issues, a two-tiered launch would be interesting. Keep the RAM and core/thread/shader engine counts the same, but let people pay for the higher-end chip, like in the regular PC market. It'd be something more tangible than HDD space, which may be less and less of a marketable value given the use of external HDDs.

e.g.
SKU1 @ $499.99 = 8-core CPU @ 3.5GHz, 4x15 CU @ 1.5GHz
SKU2 @ $399.99 = 8-core CPU @ 3.0GHz, 4x12 CU @ 1.3GHz

(Wholly made up, and anyone who wants to bitch about relative performance in a made-up scenario has completely missed the point.)

It's the same base spec for devs, with a slight boost for those willing to pay for it, and it should be a pretty low amount of work (if any) to certify, even compared to the mid-gen twins we see now, since it's literally a clock bump and zero changes in architecture.

Obviously, yields would determine how wide of a dichotomy there would be. i.e. how many chips they can even get to x-GHz with n-HW blocks enabled.
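For the yield point, here's a toy binning sketch in Python. Every distribution, threshold, and count below is invented purely for illustration; it just shows how per-die clock and CU screening would decide how many chips land in each of the two made-up SKUs.

```python
import random

# Per-SKU requirements loosely mirroring the made-up example above.
SKU1 = {"cpu_ghz": 3.5, "gpu_ghz": 1.5, "cus_per_se": 15}   # the $499.99 bin
SKU2 = {"cpu_ghz": 3.0, "gpu_ghz": 1.3, "cus_per_se": 12}   # the $399.99 bin
CUS_PER_SE_PHYSICAL = 16  # assumed physical CUs per shader engine, some fused off for yield

def bin_die(max_cpu_ghz, max_gpu_ghz, good_cus_per_se):
    """Sort one die into the premium bin, the base bin, or the reject pile."""
    for name, sku in (("SKU1", SKU1), ("SKU2", SKU2)):
        if (max_cpu_ghz >= sku["cpu_ghz"]
                and max_gpu_ghz >= sku["gpu_ghz"]
                and good_cus_per_se >= sku["cus_per_se"]):
            return name
    return "reject"

# Fake wafer: the distributions are invented purely to show how yield
# decides how wide the two-SKU split can be.
random.seed(0)
counts = {"SKU1": 0, "SKU2": 0, "reject": 0}
for _ in range(10_000):
    cpu = random.gauss(3.4, 0.25)   # max stable CPU clock for this die (GHz)
    gpu = random.gauss(1.45, 0.10)  # max stable GPU clock for this die (GHz)
    cus = CUS_PER_SE_PHYSICAL - random.choice([0, 0, 1, 1, 2, 3, 4])  # defective CUs disabled
    counts[bin_die(cpu, gpu, cus)] += 1

print(counts)  # how many dies can even reach x GHz with n HW blocks enabled
```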
 
All right. I don't think it's an unimportant addition, nor that it didn't require additional work. But it is an AVS implementation.

It's something that will be necessary at 7nm and beyond. So expect it in some form next gen.
I'm unsure if you have the information on hand, but some articles suggested that AVS was not yet a feature in Ryzen? Or is that old news?
 
Given the silicon issues, a two-tiered launch would be interesting. Keep the RAM and core/thread/shader engine counts the same, but let people pay for the higher-end chip, like in the regular PC market. It'd be something more tangible than HDD space, which may be less and less of a marketable value given the use of external HDDs.
I feel like if we go with your idea of a two-tier launch, I'd like to see a massive separation between the two, $399 and $699 for instance. Just really shoot for the moon for those that want to pay for it and hold the crown on graphics for all the 3P comparisons.
 
I'm unsure if you have the information on hand, but some articles suggested that AVS was not yet a feature in Ryzen? Or is that old news?
At work right now... No idea about Zen. I saw slides from AMD about an extremely advanced AVFS with 500 sensing values across Carrizo? They were saying it was worth the effort for laptop chips. Power graphs were showing diminishing returns at the higher states, but there is still something interesting there. A console only needs the check at the highest power state, so maybe it's much simpler.

The thing about PC parts is that they can be binned for different SKUs, so CPUs and GPUs might not make AVFS worth it. Better parts would be clocked higher, not adjusted for better efficiency?
 
Every AMD chip at this point has some variant of its DVFS and per-chip characterization for power delivery.
My understanding of AMD's methods is that they come from the standpoint of a silicon chip implementer, where things are modified/customized on the die. Since the chips are going into any number of sockets or PCBs, AMD is unlikely to have much say on what the board components are. The board/motherboard/system vendors are also generally unlikely to do much tweaking in AMD's stead.

Microsoft's method seems to include some level of customization outside of the chip.
I'm not sure at this point how to characterize the ROI of this, although it seems like, for all of AMD's PowerPoint slides, I'd hate to imagine what its power consumption would be without its AVFS techniques, given how off the mark/non-functional they've been for years now.
 
From the point of view of the silicon, what does a motherboard provide other than the dynamically requested voltage and suppression of transients?

Maybe there is tuning to be done for transients, if different silicon is seen as a different RLC network from the point of view of the regulator. Maybe that's a stretch.
 
From the point of view of the silicon, what does a motherboard provide other than the dynamically requested voltage and suppression of transients?

Maybe there is tuning to be done for transients, if different silicon is seen as a different RLC network from the point of view of the regulator. Maybe that's a stretch.

What the board and the components deliver can be different relative to what the chip measures, and each can vary or degrade over time.

Boot Time Calibration and Aging Compensation are on-chip measures meant to help a chip adapt to different qualities of boards and components, and to varying degrees they appear to conflict with what Microsoft is doing by customizing the motherboard components. Calibration would seemingly make changing the components redundant, and aging compensation risks shifting the performance budget of the console, which the platform strives to keep consistent.
It might still work if adjusting the board means AMD's chips can more accurately compensate, or possibly it makes those measures more worthwhile since it's been alleged that these measures were left off in chips like Polaris.

AVFS is itself a combination of multiple systems for measuring temperature and power and varying voltage and clocks. For the consoles, varying clocks outside of a few specific transitions seems like a non-starter given the guaranteed game performance range, and transient upclocks from temp-based turbo would be unreliable. More conservative elements that might allow for consistent performance at lower voltage, better guard-banding, or emergency throttling would still be useful, although they don't do as much for tuning to a board, since they react to dynamic conditions.
The guard-banding portion might be an area where improving monitoring and waiting a year could help, since AMD's methods do rely on factory-measured values and characterization of the silicon, and it's usually silicon that comes in a year or so later that shows power/performance gains on effectively the same chip.
 
Car reference explains nothing. Just like their explanation of a simple blower fan being like a car supercharger.


Whichever PR line you follow says something different.

AVS (with an F or not) refers to tuning the voltage regulators to compensate for variations in the process, for the chip that takes 80% of the power.

See the TI pdf I posted above.

But yeah, it looks like it's not adaptive, it's just adapted once. The text above is exactly what AVS is. The varying power profile is caused by the varying process strength. The only thing that can change the power profile is the power delivery, and the only power delivery parts on the board are the regulators.

It explains plenty if you know a dyno measures performance at the wheels of a car, so it takes account of other components besides the engine. It's analogous to measuring the horsepower of the overall system and tweaking to boost the car's power, but the Hovis method seems to be measuring the power needs of the overall hardware and tuning for the sake of efficiency, not performance boosting: tweaking across the motherboard and its components to ensure the hardware overall isn't wasting energy.

And there is nothing that says MS is using Hovis in lieu of AVFS. The Polaris arch's energy efficiency measures include AVFS, and Scorpio is based on Polaris.

Whether Hovis is effective or beneficial is another matter and if it does have some benefit it should show up in the production of Surface and other MS products. If not, it's probably just a bunch of PR.
 
And there is nothing that says MS is using Hovis in lieu of AVFS. The Polaris arch's energy efficiency measures include AVFS, and Scorpio is based on Polaris.
I'm something of a broken record on this, but was it stated that Scorpio is based on Polaris?
It's been noted as having Polaris features, but that may not be the same. The GPU is listed as being compatible with the Xbox One/S, which if they mean binary compatibility is a feature that would be non-trivial for GCN3+ architectures.

That question makes me want to take a step back and review where the form of AVFS implemented was discussed.

Whether Hovis is effective or beneficial is another matter and if it does have some benefit it should show up in the production of Surface and other MS products. If not, it's probably just a bunch of PR.
One small possibility is that the hardware is closer to the original architecture, which has more limited abilities for on-die tuning than Polaris, requiring additional investment around it to compensate.
 
One small possibility is that the hardware is closer to the original architecture, which has more limited abilities for on-die tuning than Polaris, requiring additional investment around it to compensate.

On the CPU front it's hard to imagine a Jaguar derivative having complex on-chip power management. Power you save on the GPU you might be able to spend on the CPU, and vice versa. I can imagine, for example, that tuning the board to deliver more or less power to different parts of the SoC - and possibly adjusting power from different VRMs - could allow more chips to hit higher target frequencies.
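As a purely hypothetical illustration of that budget idea, the Python sketch below checks whether a given die can hold its target clocks inside a fixed board power budget. The voltages, wattages, and the crude V² scaling are all invented, and it only shows the per-rail budget arithmetic, not whatever board-side tuning the Hovis method actually involves.

```python
# Entirely hypothetical: with a fixed board power budget, per-chip characterization
# decides how much each rail draws at the target clocks and whether the part fits
# without extra tuning. All numbers below are invented for illustration.

BOARD_BUDGET_W = 140.0   # assumed total SoC power the board/VRMs are designed to feed

def rail_power(watts_at_1v, vdd):
    """Crude model: dynamic power at a fixed clock scales roughly with V^2."""
    return watts_at_1v * vdd ** 2

def fits_budget(cpu_vdd, gpu_vdd, cpu_w_at_1v=30.0, gpu_w_at_1v=95.0):
    """Check whether this particular die holds the target clocks within the budget."""
    total = rail_power(cpu_w_at_1v, cpu_vdd) + rail_power(gpu_w_at_1v, gpu_vdd)
    return total <= BOARD_BUDGET_W, total

# A "strong" die holds the target clocks at lower voltage and leaves headroom;
# a "weak" die needs more voltage on both rails and blows the budget.
for label, cpu_vdd, gpu_vdd in [("strong die", 0.95, 1.00), ("weak die", 1.05, 1.10)]:
    ok, total = fits_budget(cpu_vdd, gpu_vdd)
    print(f"{label}: ~{total:.0f} W total -> {'within' if ok else 'over'} the {BOARD_BUDGET_W:.0f} W budget")
```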
 
I'm something of a broken record on this, but was it stated that Scorpio is based on Polaris?
It's been noted as having Polaris features, but that may not be the same. The GPU is listed as being compatible with the Xbox One/S, which if they mean binary compatibility is a feature that would be non-trivial for GCN3+ architectures.

That question makes me want to take a step back and review where the form of AVFS implemented was discussed.


One small possibility is that the hardware is closer to the original architecture, which has more limited abilities for on-die tuning than Polaris, requiring additional investment around it to compensate.
Supposing that possibility, where they are unable to get all the internal measurements necessary for a true closed loop, I was thinking about AVS in its simplest form: compensating for the process strength variation and nothing else. Some TI application notes show this with minimal requirements from the actual silicon, basing the correction on the consumption from a standardized test (roughly the trim loop sketched below):

Bring the SoC to its max temperature
Run a standardized max-TDP test
Adjust the voltage to get a predefined consumption measured from the VRM side
Write the result to the board bootstrap, or fuses, or resistor arrays on the chip packaging
Run the real pass/fail test with the other margins dialed in

They could adjust the target max consumption as the yield allows. For a console, they would only care about not exceeding the power design; none of the other power-reduction perks like aging compensation and low-power modes are as important. All consoles would be identical from the consumer's point of view, without a "noise roulette".
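A minimal sketch of that open-loop trim, as Python pseudocode. The hooks (vrm, fuses, heat_soak, run_max_tdp_workload, measure_power_w) and every number are hypothetical stand-ins, not any vendor's API; the point is just the shape of the factory calibration described in the steps above.

```python
# Minimal sketch of the open-loop trim described in the steps above. The hardware
# hooks are hypothetical stand-ins, not real APIs, and every number is invented.

TARGET_MAX_POWER_W = 160.0   # the predefined consumption the power design is built around
VDD_STEP_V = 0.00625         # assumed regulator step size
VDD_MIN_V, VDD_MAX_V = 0.85, 1.20

def factory_avs_trim(vrm, fuses, heat_soak, run_max_tdp_workload, measure_power_w):
    """Find the per-chip supply voltage that meets the target power, then persist it."""
    heat_soak()                               # 1. bring the SoC to its max temperature
    vdd = VDD_MAX_V
    while vdd > VDD_MIN_V:
        vrm.set_voltage(vdd)
        run_max_tdp_workload()                # 2. standardized max-TDP load
        if measure_power_w() <= TARGET_MAX_POWER_W:
            break                             # 3. first voltage that meets the power target
        vdd = round(vdd - VDD_STEP_V, 5)
    fuses.write_vid(vdd)                      # 4. board bootstrap / fuses / resistor array
    return vdd                                # 5. the real pass/fail screen runs after this
```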
 
I'm something of a broken record on this, but was it stated that Scorpio is based on Polaris?
It's been noted as having Polaris features, but that may not be the same. The GPU is listed as being compatible with the Xbox One/S, which if they mean binary compatibility is a feature that would be non-trivial for GCN3+ architectures.

That question makes me want to take a step back and review where the form of AVFS implemented was discussed.


One small possibility is that the hardware is closer to the original architecture, which has more limited abilities for on-die tuning than Polaris, requiring additional investment around it to compensate.

Agreed. We can't assume that AVFS was included but we can't readily assume that it wasn't.

I'll add that the overall focus of Polaris as an arch was to provide a 2.5X improvement in performance per watt, and that AVFS was readily marketed as one of the features that improved the efficiency of Polaris. AMD claims 14nm FinFET alone allows a 1.7X improvement; FinFET plus AMD's tech pushes the performance-per-watt improvement to 2.8X.

MS's aim was not only a 6 TFLOPS APU but one that could fit into a form factor that rivaled the Slim. Hovis is used to improve efficiency, and AVFS and the other features that enabled better performance per watt seem like a natural fit for Scorpio. Unless AVFS doesn't offer any real benefit, I wouldn't be surprised if it's part of the SE.
 
Given the silicon issues, a two-tiered launch would be interesting. Keep the RAM and core/thread/shader engine counts the same, but let people pay for the higher-end chip, like in the regular PC market. It'd be something more tangible than HDD space, which may be less and less of a marketable value given the use of external HDDs.

e.g.
SKU1 @ $499.99 = 8-core CPU @ 3.5GHz, 4x15 CU @ 1.5GHz
SKU2 @ $399.99 = 8-core CPU @ 3.0GHz, 4x12 CU @ 1.3GHz

(Wholly made up, and anyone who wants to bitch about relative performance in a made-up scenario has completely missed the point.)

It's the same base spec for devs, with a slight boost for those willing to pay for it, and it should be a pretty low amount of work (if any) to certify, even compared to the mid-gen twins we see now, since it's literally a clock bump and zero changes in architecture.

Obviously, yields would determine how wide of a dichotomy there would be. i.e. how many chips they can even get to x-GHz with n-HW blocks enabled.
I think if they wanted two models, they would probably want more difference between them. Maybe have the same difference as the current gen, where one box is 4K and one is 1080p. With something like a 40% improvement like you have there, I don't know how they would market the difference; one is obviously going to be worth much more than the other to the consumer, and keeping two models with so much difference would make development harder.

Maybe the next base Xbox will be weaker than the Xbox One X and the premium model will be 2x as fast, although I think one model, with as much performance as they can deliver at the time for the lowest price, would be much easier to market. With two models it's hard not to lose money on the base model unless you are marking up the premium model a lot; there has to be some tangible difference to the consumer, but not so much that they see the base model as crap. It's a very hard balancing problem when dealing with a situation like a console, where the margins on hardware are very thin and they are trying to extract as much out of the hardware as they can.
 
Doom-and-gloom article about medium-term 3D NAND scalability. I don't know what the next technology step after this is.

https://www.eetimes.com/author.asp?section_id=36&doc_id=1332422&
The life span of 3D NAND might be a lot shorter than most people think.

At the Flash Memory Summit this year, Samsung announced its development of 1Tb 3D NAND, which would be used for commercial products launching next year. However, I wonder when the 4Tb 3D NAND will hit the market.

Based on the information available on TLC 512Gb 3D NAND with 64 layers on about a 130mm2 die size (from Samsung and Toshiba), and assuming string stacking of 64-layer stacks, I figured that in order to implement the 4Tb NAND chip:

  • Eight string stacks of 64 layers are needed, which will make (512Gb x 8) = 4Tb
  • The total then becomes 512 layers on a 130mm2 die size
  • It will take about a year to process a wafer: 5 weeks for the memory logic plus 8 times (i.e. 8 string stacks of 64 layers) 5 to 6 weeks for each 64-layer cell implementation. Therefore, the wafer processing time for a 512-layer device will be about 45 to 53 weeks.

If this simple assessment is right, then it is practically impossible to implement the 4Tb NAND chip. If QLC is considered instead of TLC, there will be an improvement of 25 percent at best. So, a 410-layer will be needed for QLC 4Tb 3D NAND and about nine months of wafer processing time.

How about 16Tb 3D NAND? It needs 2,048 layers, with four years of wafer processing time.

For the last several decades, NAND has achieved dazzling bit growth under Moore's Law. As Moore's Law ends and planar NAND switches to 3D NAND, many people expect 3D NAND to continuously expand its memory scaling in the vertical direction. However, 3D NAND only achieves price parity with planar NAND at 64 layers; thus, 3D NAND is only now starting to compete on price with planar NAND. And now I am thinking it will be practically impossible to expect 4Tb NAND.

The limitations of 3D NAND scaling seem obvious. Then, will 3D NAND reach the end of its life span? It might not be that far off.

--Sang-Yun Lee is founder and CEO of BeSang Inc.
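For what it's worth, the article's wafer-time arithmetic is easy to reproduce. The short Python sketch below just restates the figures the article itself gives (512Gb per 64-layer TLC string stack, ~5 weeks for the logic, ~5-6 weeks per cell stack); the function name and constants are mine, not the article's.

```python
# Restating the article's back-of-the-envelope figures.
GB_PER_STACK = 512              # Gb per 64-layer TLC string stack (article's assumption)
LAYERS_PER_STACK = 64
LOGIC_WEEKS = 5                 # weeks for the memory logic
WEEKS_PER_STACK = (5, 6)        # low/high weeks per additional 64-layer cell stack

def wafer_estimate(target_tb):
    stacks = (target_tb * 1024) // GB_PER_STACK          # string stacks needed
    layers = stacks * LAYERS_PER_STACK
    weeks_lo = LOGIC_WEEKS + stacks * WEEKS_PER_STACK[0]
    weeks_hi = LOGIC_WEEKS + stacks * WEEKS_PER_STACK[1]
    return stacks, layers, weeks_lo, weeks_hi

for tb in (4, 16):
    stacks, layers, lo, hi = wafer_estimate(tb)
    print(f"{tb}Tb TLC: {stacks} stacks, {layers} layers, ~{lo}-{hi} weeks per wafer")
# 4Tb  -> 8 stacks, 512 layers, ~45-53 weeks (the article's "about a year")
# 16Tb -> 32 stacks, 2048 layers, ~165-197 weeks (roughly the "four years" claim)
```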
 
I think it would be cooler to have the GPU and CPU on a single APU but have all the OS stuff (including background downloads, file management, chat, live streaming, screen capture, etc.) be done by a secondary mobile-like APU. Then the main APU would be entirely dedicated to the game simulation, with no cores or time slices shared for other tasks. And at the same time, the console gets to actually be energy efficient and silent when doing stuff in stand-by mode.

I understand that was the original idea with the PS4's southbridge, or something along those lines, but they underestimated the requirements to get that stuff done by that chip. I hope they revisit that idea and get it done right this time.
 