Sony PS6, Microsoft neXt Series - 10th gen console speculation [2020]

[STATE MODES]

>There is an opportunity with future AMD hardware to figure out a way for relatively wide GPUs to dynamically scale saturation workloads down to a smaller cluster of CUs, proportionately increasing the clock frequency on those active hardware components while the inactive components/CUs idle at a dramatically lower clock (sub-100 MHz) until they are needed for more work.

>This assumes that AMD can continue to scale GPU clock frequencies higher (4 GHz - 5 GHz) with future RDNA designs, provided they can make that work on smaller process nodes. Since any given cluster of the GPU would need to be able to clock this high, the entire GPU design must be able to clock in this range, potentially across the entire chip, in order to make this feasible.

>Power delivery designs may also have to be reworked; a chiplet approach will help a lot here.

>This approach would be most suitable for products that need to squeeze out and scale performance across various workloads, support variable frequency (essentially, variable frequency within portions of the GPU itself), and have to stay within a fixed power budget...such as a games console. It might therefore be less necessary (though potentially beneficial) for PC GPUs, as it gives a different means of scaling clocks with workloads while offering more granular control of the GPU's power consumption.

>AMD's implementation would be based on Shader Array counts, so the loads would be adjusted per Shader Array. On chiplet-based designs, each chiplet would theoretically be its own Shader Array, so this is essentially a way of scaling power delivery between the multiple chiplets dynamically.

>This could be used in tandem with the already-established power budget sharing between the CPU and GPU seen in designs like PS5. In this case it would be beneficial in letting the GPU keep this feature available for games with lighter volume workloads but intense iteration workloads that could stress a given peak frequency. That usage should be minimal, though; the fuller use would be in the traditional fashion with full GPU volume workloads.

>Another benefit of State Mode is that when targeting power delivery to a smaller cluster of the GPU hardware and increasing the clock, clock-bound figures (pixel fillrate, instructions per second, primitives per second) see large gains, roughly inverse to the decrease in active CU count. However, some other things such as L0$ and L1$ amounts will shrink, even if the actual bandwidths scale better than linearly with the total active silicon.
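
To make the CU-count vs. clock trade concrete, here's a toy Python model of the idea. Everything in it is my own assumption for illustration (the V/f curve, the 2.2 GHz full-mode baseline clock, and ignoring static power); it's a sketch of the concept, not anything AMD has published:

[CODE]
# Toy model of the "State Mode" idea: trade active CUs for clock frequency
# under a fixed GPU power budget. All curves/numbers here are assumptions.

def voltage(freq_ghz):
    # Hypothetical V/f curve: voltage rises roughly linearly with clock.
    return 0.7 + 0.15 * freq_ghz

def power(active_cus, freq_ghz):
    # Dynamic power ~ per-CU capacitance * f * V^2 (static power ignored).
    return active_cus * freq_ghz * voltage(freq_ghz) ** 2

TOTAL_CUS = 20
BASE_GHZ = 2.2                       # assumed full-GPU-mode clock
BUDGET = power(TOTAL_CUS, BASE_GHZ)  # fixed console power budget

def max_freq(active_cus, budget, step=0.001):
    # Highest clock a reduced CU cluster can sustain within the budget.
    f = 0.0
    while power(active_cus, f + step) <= budget:
        f += step
    return f

for cus in (20, 15, 10):
    f = max_freq(cus, BUDGET)
    print(f"{cus:2d} CUs -> {f:.2f} GHz | "
          f"clock-bound perf x{f / BASE_GHZ:.2f} | "
          f"total compute x{(cus * f) / (TOTAL_CUS * BASE_GHZ):.2f}")
[/CODE]

With these made-up numbers, dropping to 10 CUs buys roughly a 1.5x clock (so clock-bound rates like fillrate go up) while total compute throughput falls, which is exactly the trade described above.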

[PS6 - STATE MODE IMPLEMENTATION]

>SHADER ENGINES: 1

>SHADER ARRAYs (PER SE): 2

>CUs: 20

>SHADER CORES (PER CU): 128

>SHADER CORES (TOTAL): 2,560

>ROPs: 128

>Future RDNA chiplet designs will probably keep the back-end in its own block. However, for design reasons ROP allocation would likely scale evenly per chiplet cluster, so each chiplet (or, if essentially a chiplet, each SE) would have its own assigned group of ROPs. This equals 2x 64 ROPs for PS6.

>TMUs (PER CU): 8

>TMUs (TOTAL): 160

>MAXIMUM WORKLOAD THREADS: 20,480

>MAXIMUM GPU CLOCK: 4113.449 MHz (shaved off some clock from earlier calcs to account for non-linear clock scaling with power scaling)

>PRIMITIVES (TRIANGLES) PER CLOCK (IN/OUT): Up to 8 PPC (IN), up to 6 PPC (OUT)

>PRIMITIVES PER SECOND (IN/OUT): Up to 32.9 billion PPS (IN), up to 24.68 billion PPS (OUT)

>GIGAPIXELS PER SECOND: Up to 263.26 Gpixels/s (4113.449 MHz * 64 ROPs)

>INSTRUCTIONS PER CLOCK: 2

>INSTRUCTIONS PER SECOND: 8.226898 billion IPS (derivations for these throughput figures are sketched after the cache totals below)

>RAY INTERSECTIONS PER SECOND: Up to 658.151 billion rays per second (4113.449 MHz * 20 CUs * 8 TMUs)

>THEORETICAL FLOATING POINT OPERATIONS PER SECOND: 21.06 TF

>CACHES:

>L0$: 256 KB (per CU), 5.12 MB (total)

>L1$: 1 MB (per Dual CU), 10 MB (total)

>L2$: 24 MB

**Unified cache shared between both chiplets

>L3$: 192 MB

**Unified cache shared between both chiplets

>>TOTAL: 231.12 MB
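
For anyone who wants to double-check the derived figures above, here's the arithmetic in a few lines of Python. The only assumption on my part is reading the pixel fillrate as using the 64-ROP per-chiplet group (per the ROP note above) rather than all 128 ROPs:

[CODE]
# Re-deriving the headline figures straight from the listed specs.

CLOCK_MHZ = 4113.449
CUS       = 20
CORES     = CUS * 128   # 2,560 shader cores
TMUS      = CUS * 8     # 160
ROPS      = 64          # per-chiplet ROP group used in the fillrate figure
PPC_IN    = 8           # primitives per clock (in)
PPC_OUT   = 6           # primitives per clock (out)

ghz = CLOCK_MHZ / 1000.0

print(f"Prims in  : {ghz * PPC_IN:.2f} G/s")              # ~32.91
print(f"Prims out : {ghz * PPC_OUT:.2f} G/s")             # ~24.68
print(f"Pixel fill: {ghz * ROPS:.2f} Gpix/s")             # ~263.26
print(f"Ray tests : {ghz * TMUS:.2f} G/s")                # ~658.15
print(f"FP32      : {ghz * CORES * 2 / 1000:.2f} TF")     # ~21.06
print(f"Cache     : {0.256 * CUS + 1.0 * (CUS // 2) + 24 + 192:.2f} MB")  # 231.12
[/CODE]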
That just about wraps up the GPU specification speculation. Could probably focus a bit on display output support, resolutions, etc., but a lot of that can likely be assumed: 4K, 8K, HDMI of whatever standard is in place by then, the usual codecs, and so on.

Should be able to cover the CPU, memory, storage and audio stuff in the next post and then move on to Microsoft's possible design. If there's stuff anyone'd like to add or alter then do share, because I've come to settle on a lot of these specifications after discussion and insight from many people on these boards since posting.
 
PS6 will still use x86 I think; Xbox Series 2 will most probably be ARM-based, because their frameworks allow it (just like my M1 MacBook runs x86 code even faster than most native x86 machines).

Just my 2 cents
 
PS6 will still use x86 I think; Xbox Series 2 will most probably be ARM-based, because their frameworks allow it (just like my M1 MacBook runs x86 code even faster than most native x86 machines).

Just my 2 cents
I think MS needs a little more than "their frameworks allow it" to jump architectures again. I'm pretty sure x86 will continue to be what game devs want, too (yes, even with all their frameworks).
 
PS6 will still use x86 I think; Xbox Series 2 will most probably be ARM-based, because their frameworks allow it (just like my M1 MacBook runs x86 code even faster than most native x86 machines).

Just my 2 cents

Well, technically the consoles use x86-64, but I get what you're saying. But I think @Kaotik is right that MS won't switch to ARM. There really isn't a reason to.

Also, the M1 example is interesting, but I dunno if it's applicable; MS would have to develop their own silicon from the ground up, which is a very large financial investment. They'd also need to compete with other companies like AMD for wafers at various foundries. Lastly, it'd probably signal a vote of no confidence from MS in AMD's technology; hey, anything can happen in the next six or so years, but I find it hard to believe that'll happen considering how closely the two companies seem to be operating (and their similar goals).

I'd even say MS's dynamic with AMD has some of the (friendlier) shades of the Wintel dynamic from the '80s and '90s; I think that's something the two companies want to establish. So why throw that away and turn to ARM (especially with Nvidia set to own them; not that MS has any notable issues with Nvidia, but Nvidia hasn't been the easiest company for console makers to work with in the past, just ask the OG Xbox and PS3)?

If anything they'd probably go with something RISC-V related, but that only addresses the CPU side. Perhaps they could go with Intel for GPU technology if something goes south between them and AMD, because I don't see many other options beyond MS spinning up their own GPU design, which of course (also) puts them in competition with AMD/Nvidia/Intel/Apple in the GPU space.

...and they'd still have to compete for foundry space (that's partly why I made the Intel suggestion, since they own their own foundries).
 
It is easy to overlook the fact that the Apple M1 is at least one full node ahead of anything x86-64, at least by TSMC's own book.

The M1 does shake up the PC/Mac space. But its advantage is bound to Apple's mobilisation of bleeding-edge processes and technologies, backed by its capital and scale. In other words, you won't get a "like Apple" magical boon simply by switching to ARM.
 
Please link if you do; I haven't heard much about AMD & ARM since the ill-fated K12.

I wouldn't be so sure that K12 is permanently on ice. Dr. Ian Cutress of AnandTech has said that he has heard from people at AMD that the K12 architecture isn't dead and buried, and that they intend to bring an ARM product to market at some point.

Purely rumors, but AMD might present something about it at CES, where Dr. Su is presenting something.


Link to rumour

https://www.google.com/amp/s/www.te...rumored-working-arm-based-apple-m1-rival.html
 
[SONY PLAYSTATION 6]

Picking up from what got left off, here's a bit more speculation on other parts of the GPU.

[CACHE BANDWIDTHS]

[FULL MODE]

>L0$: 34.085 TB/s

>L1$: 13.77 TB/s

>L2$: 9.639 TB/s

>L3$ (IC): 4.131 TB/s

[2nd MODE]

>L0$: 20.85 TB/s

>L1$: 8.42 TB/s

>L2$: 5.894 TB/s

>L3$ (IC): 2.526 TB/s

[TEXTURE FILLRATE]

[FULL MODE]

>1.07591553 Ttexels/s

[2nd MODE]

>658.15 Gtexels/s

-----------------

If my numbers seem off for the cache bandwidths, just let me know and I can fix them. I assumed 1 ns latency for L0, 30 ns for L2 and 70 ns for L3, and worked out the numbers after doing a little research into how cache bandwidth would be calculated. An alternative way of ballparking it is sketched below.
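
For what it's worth, here's a quick cross-check in Python. The texture fillrates fall straight out of TMUs * clock; back-solving the FULL MODE figure gives ~320 TMUs at ~3362 MHz, which I'm reading (an assumption on my part, not a stated spec) as both chiplets' CUs active at a reduced clock, versus one 20-CU/160-TMU cluster at the 4113.449 MHz peak in 2nd MODE. The cache-bandwidth alternative uses bytes-per-clock per cache instance, with placeholder widths (not confirmed RDNA figures), so it lands on different numbers than the latency-based approach; mostly it shows how sensitive these estimates are to assumptions:

[CODE]
# Cross-checks for the figures above. The 320-TMU/3362 MHz "full mode"
# reading and the 128 B/clk cache widths are my assumptions, not stated specs.

PEAK_MHZ = 4113.449   # 2nd mode: one 20-CU cluster at peak clock
FULL_MHZ = 3362.236   # full mode: back-solved from 1.07591553 Ttexels/s

def texel_rate_gtexels(tmus, clock_mhz):
    # texels/s = TMUs * clock
    return tmus * clock_mhz / 1000.0

print(f"2nd mode : {texel_rate_gtexels(160, PEAK_MHZ):.2f} Gtexels/s")  # ~658.15
print(f"Full mode: {texel_rate_gtexels(320, FULL_MHZ):.2f} Gtexels/s")  # ~1075.92

def cache_bw_tbs(instances, bytes_per_clock, clock_mhz):
    # instances * bytes/clock * clock (Hz), converted to TB/s
    return instances * bytes_per_clock * clock_mhz * 1e6 / 1e12

# Placeholder 128 B/clk widths; swap in better numbers if you have them.
print(f"L0 total (2nd mode): {cache_bw_tbs(20, 128, PEAK_MHZ):.2f} TB/s")
print(f"L1 total (2nd mode): {cache_bw_tbs(2, 128, PEAK_MHZ):.2f} TB/s")
[/CODE]

Incidentally, the posted full/2nd-mode cache bandwidth ratios (~1.635x across L0/L1/L2/L3) are consistent with that "twice the CUs at ~3362 MHz" reading, which is what made me suspect it.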

I'll try posting more parts when I have the time but I'll probably put some thoughts on the CPU next.

AMD is investing in ARM as well; it was in the news recently. I'll find an article later when I get home.

Link it when you can; these tech companies invest in a lot of different businesses and ventures though, mainly for R&D, and it's never guaranteed any of it reaches consumer products anytime soon. Lots of companies have investments in RISC-V, for example, but I can't think of too many that have RISC-V based products on the market in any capacity.

I wouldn't be so sure that K12 is permanently on ice. Dr. Ian Cutress of AnandTech has said that he has heard from people at AMD that the K12 architecture isn't dead and buried, and that they intend to bring an ARM product to market at some point.

Purely rumors, but AMD might present something about it at CES, where Dr. Su is presenting something.


Link to rumour

https://www.google.com/amp/s/www.te...rumored-working-arm-based-apple-m1-rival.html

Would love to see an HMP (heterogeneous multi-processing) design from AMD on the commercial market in the future. It could give room for a genuine Switch competitor, whether from Sony, Microsoft or another company altogether.
 
With the recent news about MS developing its own ARM chips, I wonder if they'll follow Apple's route and design their own silicon. Considering that they're cozying up with AMD, it could be interesting.


Also RIP Intel.
 
With the recent news about MS developing its own ARM chips, I wonder if they'll follow Apple's route and design their own silicon. Considering that they're cozying up with AMD, it could be interesting.


Also RIP Intel.

Combined with their continuing effort on Windows on ARM, it's entirely possible that a future Xbox could have ARM CPU cores and still support BC for all Xbox consoles. Although that'll have to be not only a really performant ARM core to support XBS-X/S games, but also a really efficient translation layer. Apple did an admirable job with their x86 translation layer, but it still loses significant performance when running x86 code.

Regards,
SB
 
This has drifted completely away from the "Microsoft rumoured to be buying" discussion, but whatever.

Microsoft is developing its own ARM-based processors for its own use in servers and Surface products. I suppose they finally decided that relying on Qualcomm for the ARM-based chips in the Surface Pro X was no longer an option, as the requirements for a laptop-class chip are too dissimilar from those of a chip purely for mobile applications.


Microsoft (MSFT) Is Designing Its Own Chips in Move Away From Intel (INTC) - Bloomberg


It appears that Microsoft has been working toward this in some capacity for several years, so it might be that they are further along than people think. Someone said, and I forget who, that they had roughly 1,000 CPU/SoC engineers as of last year, mostly focussed on custom silicon for datacentre applications.


On another note, sticking to the theme of the thread, I'm kinda surprised Microsoft didn't buy ARM when it was still available, it fits their MO to a T, and would have worked well within the wider Microsoft umbrella. They would keep selling the ARM licenses of course.

What could have been eh?
 
It's less than 30%...
The M1X, which will power the 16-inch MacBook in a few months, will be the most powerful portable "x86-64" CPU ever created. That is emulating x86 code, of course; native will be into server-grade territory.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested/6

If you don't consider 20-30% loss in performance to be significant, I honestly don't know what to say.

I suppose you have benchmarks on hand of the upcoming MacBook running x86 code? It's a bold claim considering it'll likely be going against Ryzen Mobile 5000-series CPUs. Considering that the Ryzen Mobile 4000 series is faster than the M1 in applications, and the Mobile 5000 series is going to be faster still (potentially 20-30% faster), then yeah.

Now if Geekbench is the only thing you use it for, then the M1 is natively faster in single-threaded benchmarking, but quite a bit slower in multithreaded benchmarking. Throw in x86 translation and it isn't even faster at single-threaded anymore.

The M1 does have the advantage of a superior integrated GPU which is unaffected by the translation layer (as it should be since it's still native code regardless of whether the CPU code is x86 or ARM).

And that's with the M1 benefiting from an entire node advantage (5nm vs. 7nm).

Regards,
SB
 
On another note, sticking to the theme of the thread, I'm kinda surprised Microsoft didn't buy ARM when it was still available, it fits their MO to a T, and would have worked well within the wider Microsoft umbrella. They would keep selling the ARM licenses of course.

What could have been eh?
I thought in the past they would buy AMD, though. But it did not happen. It seems in recent years MS has been more into forming alliances instead of purchasing companies outright.

I wonder if they can cooperate with AMD and come up with something in the future? As far as I know AMD had its own ARM attempts in the past, but now they've gone with FPGA, and MS is interested in Internet infrastructure these days too. Considering MS's interest in ARM, they might work together with AMD just fine.

I wonder if Sony will also design its own chip in the future.

BTW, did the ARM purchase go through regulation and get completed?
 
If you don't consider 20-30% loss in performance to be significant, I honestly don't know what to say.

I suppose you have benchmarks on hand of the upcoming MacBook running x86 code? It's a bold claim considering it'll likely be going against Ryzen Mobile 5000-series CPUs. Considering that the Ryzen Mobile 4000 series is faster than the M1 in applications, and the Mobile 5000 series is going to be faster still (potentially 20-30% faster), then yeah.

Now if Geekbench is the only thing you use it for, then the M1 is natively faster in single-threaded benchmarking, but quite a bit slower in multithreaded benchmarking. Throw in x86 translation and it isn't even faster at single-threaded anymore.

The M1 does have the advantage of a superior integrated GPU which is unaffected by the translation layer (as it should be since it's still native code regardless of whether the CPU code is x86 or ARM).

And that's with the M1 benefiting from an entire node advantage (5nm vs. 7nm).

Regards,
SB

You called it an "admirable job" when it is maybe the greatest achievement in 'emulation' performance in modern computing.
Windows on ARM running x86 code has a 70% performance hit, at least. They didn't even have the ability to emulate 64-bit code for years, and it's still not working now, btw. Compare that to 100% compatibility for Rosetta 2.

Apple has not merely done an admirable job. They have set a benchmark that will take other companies years to even come close to.

Also, your Geekbench comment is a bit misplaced to say the least; the AnandTech link I posted shows that they are running server benchmarks.
 
I wonder if Sony will also design its own chip in the future.

BTW, did the ARM purchase go through regulation and get completed?

No, the ARM purchase hasn't been cleared/completed yet... but I'm guessing it should be sometime by June 2021. It's a mammoth purchase with lots of implications for businesses in multiple international markets, so that's probably gonna be the major hang-up.

Also, I could definitely see Sony designing their own chip sometime in the future...maybe. They could design the chip, but I think they'd need to make sure the design is flexible enough for use across multiple products in their lineup. It could give them a way of getting back into the Vaio PC market, for example; some high-end Vaio laptop built around PS6 technology, stuff like that. Otherwise I don't know if it'd be cost-effective for them to design their own chip from the ground up; they don't necessarily have the financial resources of companies like Microsoft, Apple, or Google to eat those costs (or multiple computing market segments to spin such designs off into).

Interesting to hear about the MS/ARM stuff again; figured it was only a matter of time though. They definitely don't want the work on the Windows-on-ARM OS variant to linger about. I just wonder if they (and Sony) will truly bother doing something ARM-based if and when Nvidia owns them, or just fork off to a RISC-V focus instead (which sounds like the better option anyway).

Really curious to see how Intel adjusts to this. They'd honestly rather find a way of fully transitioning their microarchitecture to a RISC-V design than pay royalties to Nvidia for ARM licensing, if they're forced into that. They do have the benefit of owning their own fabs, and the work on the Xe GPU architectures looks promising... I think they'll find ways of adjusting comfortably to this type of stuff.
 