Sony PS6, Microsoft neXt Series - 10th gen console speculation [2020]

Raytracing is heavily RAM-bandwidth dependent. Without enough bandwidth, raytracing (even if present in the HW) won't get used. I suppose this gen raytracing will mainly be used on XSX....

The heavy BW use of RT is a fact. XSX is richer in RAM BW, another fact. Also, XSX has tech to save RAM BW. I agree we're at an early stage of the new consoles' development... so we've seen little.... Also, MS still has some bad results from old management (Halo).
But is it though? I was under the impression it's really cache-bandwidth heavy, not VRAM-bandwidth heavy? Also, as pointed out, Sony has demoed raytracing in several titles already.
 
You wrote that RT will 'mainly be used by X', which is just not true. Every third-party game using RT will use it on both. PS5 may get a little less than native 4K (and X too, I'm sure), which may or may not be noticeable, and that will be the end of it.
 
You wrote that RT will 'mainly be used by X', which is just not true. Every third-party game using RT will use it on both. PS5 may get a little less than native 4K (and X too, I'm sure), which may or may not be noticeable, and that will be the end of it.
I said "will"... [emoji16]
 
Raytracing is heavily RAM-bandwidth dependent. Without enough bandwidth, raytracing (even if present in the HW) won't get used. I suppose this gen raytracing will mainly be used on XSX....
Regardless of whether raytracing is cache-, bandwidth-, or RAM-heavy, all was lost when MS announced the XSS. Raytracing will forever be cosmetic on next gen, and cosmetic effects can normally be scaled.
 
I think RT will mainly be about bandwidth. Cache is nice to have, but it depends on how coherent your rays are: the more coherent they are, the more cache helps; the less coherent, the less of a factor it plays.
 
Regardless of whether raytracing is cache-, bandwidth-, or RAM-heavy, all was lost when MS announced the XSS. Raytracing will forever be cosmetic on next gen, and cosmetic effects can normally be scaled.
On XSS it seems it will rarely be implemented. The RAM BW is too low....
 
I've been thinking about how things could play out from here. I have little knowledge of the realities of silicon manufacturing at the moment, or of what the challenges, goals and ambitions of current GPU and CPU manufacturers are. All I'll say here comes directly out of my ass. And if history is any precedent, my predictions will be 100% incorrect. I will stick to high-level predictions, since I know I have zero chance of even approaching an intelligent guess if I try to predict actual numbers. I hope you guys enjoy receiving my misinformation as much as I am creating it.

2020 - PS5 and XBSS|X launch.

2023 - Microsoft launches their revised Series. With an eye on the rolling-gen model and scalable games, they won't shy away from performance and feature improvements on these "slim" models, so it's possible they will incorporate Zen 3 (or higher) and Navi 3 tech in there. Without much pressure to improve Series X perf, they will concentrate on slimming that baby down significantly. For the Series S, they will boost its performance ever so slightly, to the point it can flawlessly play all BC games at the One X profile, which will probably mean 6 Tflops & 12 gigs of RAM.

I also wouldn't be surprised if they boost their SSD perf to reach parity with Sony's. That will of course entail making new SSD cartridges with some sort of icon denoting the extra perf. Something like a "TURBO" icon or whatever.

Also, both machines' naming will be horrible. I'm talking something like: Xbox Series X - X and Series S - X. Or Xbox SeriOUS X | S, or maybe even Xbox Series X-tra | S-whole.

In this same year, MS will allow games to release targeting Xbox One X but NOT One S. Again, getting users accustomed to the concept of rolling gens.

2024 - Sony launches their PS5 Pro. This one's gimmick will be 2x fps in every title. It seems hard to achieve from a technical standpoint right now, but honestly this seems like the most viable way to market a Pro variant. I don't see 8K gaming as either technically viable or marketable in that timeframe. I also don't see "better ray tracing" as an easily marketable feature. Unlike MS, they will want to keep the Pro firmly attached to the PS5 generation, so modern AMD features will only be adopted when they provide easy "performance shortcuts", like how the 4 Pro had a mix of Polaris & Vega features but the aim was to play PS4 games at higher res.

How the hell you double the GPU and CPU performance of PS5 in a straightforward way is anyone's guess, though. Just like with the 4 Pro, I think they will rely on devs employing all sorts of trickery to get there without just raw HW brute power. I'd imagine 50-60% extra CPU performance would be enough to double framerate. A lot of tasks don't need to update at double the tick rate for the game to animate and move at double the refresh. Things like AI, path-finding updates, etc. don't have to update on every single rendered frame. Also, processes related to sound, streaming, and much of the general book-keeping of game engines are by nature not directly tied to rendering fps. They'd also rely on temporal reconstruction algos to claim that a game rendering at 1/2 resolution at 2x framerate still ends up reconstructing back to 100% fidelity in the end. As such, they will be content with whatever GPU perf gain they can easily attain within a sensible budget and call it a day. BW would be, in my opinion, one of the key areas they focus on. Sound and IO will remain intact compared to the base PS5.
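To picture why ~50-60% more CPU could be enough for 2x framerate, here's a tiny, purely illustrative loop sketch (my own toy example, not how any real engine is structured) where only rendering scales with the target fps while the heavy gameplay systems keep their own tick rate:

```python
RENDER_HZ = 120   # hypothetical target presentation rate (2x a 60 Hz base game)
AI_HZ = 30        # AI / path-finding tick rate, left unchanged when fps doubles

def run(frames=480):
    ai_ticks = 0
    for frame in range(frames):
        # heavy gameplay systems only step on a fraction of rendered frames
        if frame % (RENDER_HZ // AI_HZ) == 0:
            ai_ticks += 1          # e.g. update_ai(), update_pathfinding(), ...
        # animation/camera interpolate every frame, so motion still looks 120 Hz
        # render_frame(alpha=(frame % (RENDER_HZ // AI_HZ)) / (RENDER_HZ // AI_HZ))
    print(f"{frames} rendered frames, only {ai_ticks} AI ticks")

run()  # -> "480 rendered frames, only 120 AI ticks"
```

Doubling the render rate in this kind of setup doesn't double the cost of the systems ticking at their own rate, which is roughly the argument above.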

2026 - MS will be launching yet another round of machines. Large performance increases will place them in next-gen territory, but MS won't frame it that way; rather, it'll be part of their strategy of constant, continuous evolution of their product offerings. They will be ahead of Sony on this one, who won't have said anything about a PS6 by then, but it will be a little too early to get some of the more groundbreaking "next-gen" stuff from AMD. They won't care, since they will still be supporting 2020's Series X | S for 3 or so years, and by then they'll have yet another revision coming out of the oven surpassing whatever Sony came up with for the PS6 (MS will be operating at around a 3-year cadence). It will be a comfortable, low-risk strat for MS. They won't need to sweat bullets trying to out-do Sony, nor future-proof their HW with tech for the next 10 years that leapfrogs the whole industry in some revolutionary way. Rather, they will just pick all the low-hanging fruit that's ripe enough for picking from what AMD offers them and be content with enjoying the natural evolution of the industry. RT will probably be one of the things that evolves the fastest over the next decade, since it's a new paradigm that is likely to have a lot of room for improvement.

To the dismay of forum warriors, while the X-tier box will have beefy specs, their S one will be at an even larger delta from its big brother than the current Series S is from the X. I'm honestly imagining MS could release an Xbox 2026 X box at 25 Tflops and 32 GB of mem, while the Xbox 2026 S will be at a paltry 12 TF and 16 GB of mem. Pretty much a Series X in raw numbers, but with a modern Navi 4 arch and whatever. Despite their despair, MS will carry on.

The most advanced technology employed here will be the MLN (machine-learned naming) used to conceive a naming scheme for their consoles so bad not even a human could think of it. Something like "Xbox 36O" where the last character is a capital "o" instead of a zero, or "Xbox Buy the Wrong Product Please".
 
2027 - Sony launches the PS6. Unlike MS's strategy, Cerny will sweat bullets trying to create ambitious, forward-looking hardware based on a vision of how the next AAA gaming experiences of the following 10 years will be built, rather than a machine that can play Xbox OG titles in BC mode at 8k240fps. It will launch close to a new AMD arch following a new internal paradigm and yada yada. RT will naturally be way faster and more flexible; sure, whatever. What I further hope is the case, though, is that they will have gone even further in deprecating the decades-old "traditional rendering pipeline" and its rigid fixed-function blocks. I think things like UE5's Nanite doing raster in compute will force AMD to stop postponing a serious assessment of all the legacy architectural choices influencing their modern designs and their validity for modern and future game workloads. Features that accelerate common tasks such as triangle rasterization, texture projection & filtering, 3D transformation, z-buffering / early-z / buffer compression, texture decompression, tessellation, etc. will still remain, but probably in a much more programmable fashion with way more room for dev tinkering, and easy side-doors for complete GPU-compute-based software solutions at any point in the pipeline.

These more programmable GPUs will need even more BW though. It might be that this will be the day consoles finally adopt something like HBM memory. Alternatively, it might be the case that they adopt a hybrid memory pool to manage costs. Maybe 16 GB of very fast, expensive RAM, and another 16-32 GB of much cheaper RAM. If their SSD tech evolves in such a way that writing to it becomes a more common operation, then that might serve as the slower-but-larger pool of active memory.

In fact, SSD writes might be one of the "next-gen", "game changing" features. Suddenly, highly dynamic PERSISTENT worlds will be the thing they'll choose to hype to differentiate that gen. As such, they'll have to sort out the lower life expectancy of solid-state memory under constant writes, but it might be a solvable problem, if through no other means, then by brute-forcing a lot of redundancy. This is the sort of thing that might have no tangible effect in A LOT of games, but which you can sell "the dream" on. Much like has been done with the SSD for the current next-gen launch cycle. Real-time HW compression might become a thing if that is the case. Much like they have special HW to accelerate Zlib and Kraken decompression right now, they might include HW engines that accelerate the COMPRESSION of data.

Talking about compression, storage will become a much higher-priority bottleneck this gen. As streaming becomes easy with near-instant seeks from storage, devs will prototype a lot of data-heavy solutions, only to discover that although the BW and GPU performance can handle them, the game file itself can't actually accommodate them comfortably. While manufacturing will certainly make larger chips possible for next gen, I think there will still be a lot of demand for more effective compression methods, and this will become a rich area of research in game dev. I won't be surprised if, by the end of the PS5/Series gen, a lot of engines are doing some sort of custom decompression on the CPU/GPU, leaving the HW decompressors idle some of the time, not because they get better performance that way, but for the sake of better compression rates. The tradeoffs might shift in a way that there will be plenty of situations where devs are willing to sacrifice some perf for the sake of making more content fit within the game's 100GB allocation (I'm guessing that will be a Sony/MS-enforced limit for this gen). That, again, will make Sony think long and hard about their assumptions of what type of compression algos should be accelerated, and they will probably work with AMD on coming up with a programmable "decompression accelerator" block that affords the dev more flexibility to implement a variety of compression/decompression algorithms in HW.
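To make that ratio-vs-speed trade-off concrete, here's a toy comparison using Python's stock zlib and lzma codecs (just stand-ins I picked to show the shape of the trade-off; the consoles actually use Kraken/BCPack-class codecs, and real numbers depend entirely on the data):

```python
# Toy illustration, NOT any console's actual pipeline: a heavier codec buys a
# smaller install at the cost of slower decompression.
import time
import zlib
import lzma

# Synthetic "asset data": repetitive enough to compress, like typical game content
data = (b"vertex_position_normal_uv_" * 4000 + bytes(range(256)) * 500) * 8

for name, comp, decomp in [("zlib", lambda d: zlib.compress(d, 6), zlib.decompress),
                           ("lzma", lzma.compress, lzma.decompress)]:
    packed = comp(data)
    t0 = time.perf_counter()
    out = decomp(packed)
    dt = (time.perf_counter() - t0) * 1000
    assert out == data
    print(f"{name}: {len(data) // len(packed)}:1 ratio, decompress ~{dt:.1f} ms")
```

Same data, better ratio from the heavier codec, slower to unpack, which is exactly the kind of trade devs might accept to stay under an install-size cap.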

Anyway, after Sony does considerably more work to design something more forward-looking, it will become standard AMD tech in their PC GPUs the following year, and will be incorporated into MS's next batch of consoles the year after that (Xbox First Version but not the OG First Version is actually the name™, or XBFVbntOGFViatn for short). It still won't become widely used on either platform because of cross-gen development, nor will it be immediately used on PS6 because of multi-plat, and because Sony's first-party studios themselves will be making cross-gen games for the first few years anyway.

Some year in there - MS launches a portable based on whatever their lowest tier is at the time. They will purposefully have kept their low tier very low so that when they launch a portable, they already "automatically" have a large library.
 
I hope that PlayStation 6 will bring a revolution in the amount of SDRAM (or whatever memory type it will turn out to have).
 
So I'm finally ready to post my write-ups on mid-gen and next-next-gen console speculation. It's a lot, so I'm breaking it down into parts focused on specific systems. I also tried focusing on the design philosophy of the products, keeping in mind the business models Sony and Microsoft seem to be going for. I did a Part 1 on Gaf, but that was just me trying to figure out PS5 power consumption amounts, mainly on the GPU side. That's probably worth keeping in mind as the PlayStation-related stuff is touched on.

I'm starting with the mid-gen refreshes, one system at a time, and then I'll get to the next-next-gen stuff. I'm also open to ideas and suggestions on tuning some of these specs, since this needn't be an open-and-shut deal. Let's start with the Sony PlayStation 5 Pro...

----------

In my honest POV, I think market and technology realities are going to be the driving forces behind what goes into the mid-gen refreshes
more than anything. That is to say, I personally think anyone hoping for HBM2, super Big Navi-tier GPU upgrades pushing 20 - 30 TF, etc.
is in for a massive disappointment. There are no feasible advances in terms of node shrinks, surrounding memory technologies
(that would be affordable), or pricing for component R&D and system designs that would make such mid-gen refreshes a reality.....
...for the MOST part ;)

Due to this, I'm of the personal opinion that the mid-gen refreshes from Sony and Microsoft will focus on the following main goals:

>Greatly reduce peak power consumption targets

>Notably reduce system physical sizes (this will be a particular goal for Sony with their mid-gen refresh)

>Refine specific technological design concepts and techniques as a testing grounds for what 10th-gen systems could bring

>Improved implementation of Infinity Cache for the L1$ (possibly cut down in terms of full supported cache size
for the consoles however)

>AI dedicated silicon geared towards offloading some aspects of game artificial intelligence models.

>Specific vision processing/sampling silicon baked into the GPU. This could be implemented as a single
core to each Shader Array. Would help with real-time image processing. This is something that can
already be done on the regular GPU shader cores, but specialized silicon for the task that integrates
into the GPU pipeline would do it better, and with lower power consumption.

>Improved image upscaling (silicon budget dedicated to image scaling and upsampling)

>Explore a handful of new technological designs and concepts, which could be iterated on with 10th-gen systems

>Provide modest performance enhancements of 9th-gen game content

>Increase storage baselines

>Keep prices no higher than 9th-gen systems, preferably lower...for the most part

These are the guiding principles I feel are going to drive the mid-gen refreshes from Sony and Microsoft. A bit above I said that
there are absolutely some things we SHOULDN'T expect from the mid-gen refreshes. However, I do think it's fair to give a brief mention
of at least some of the things that will likely be implemented in them:

>GDDR7, to replace the aging GDDR6

>RDNA 4-based GPU designs, with some RDNA 5 elements custom-built into them. This is assuming a consistent 14-month
period for each RDNA generation step, going off the time span between RDNA 1 and RDNA 2 (14 months). So, the RDNA 4 spec and
GPU products would be ready for mass market by March 2023, and RDNA 5 by May 2024. From there you can infer that the timeline
for mid-gen refreshes would range between 2023 and 2024.

>Probably some integration of CDNA 4 features into the GPU designs (likely through extensions of the shaders in the RDNA
silicon, though I can see one of the two (mainly Microsoft) integrate some actual CDNA 4 silicon into their mid-gen refresh)

>At least Zen 5-based CPU designs (following same logic for RDNA generation timings above, Zen 5 would be ready for market
by March 2023)

>At least something chiplet-based (this will most likely be Sony)

>At least some integration of early-stage persistent memory, such as 3D Xpoint or ReRAM (for reasons explained later, this might
more likely be something Sony pursues in particular)

Now then, maybe it's time we move on to some system speculations? Let's go...
 
[Part 2]

[SONY]

Sony's PS5 Pro mid-gen refresh will most likely release in 2023. I see it implementing a chiplet design, not as 2x 36 CU chips, but as 2x 18 CU
chiplets, to mimic the base PS5's GPU setup, only without needing four disabled CUs present (on a chiplet design, redundant silicon doesn't need
to actually be present on the die). It will be RDNA 4-based, but also take features liberally from the RDNA 5 standard and customize the GPU
setup with some of those features. It's very possible Sony would take more from the RDNA 5 spec into their GPU design compared to Microsoft.

As for it being a chiplet design, we already know the chances are strong that RDNA 3 will be chiplet-based in some capacity. Regardless, I did have my own
idea for how a chiplet setup could pan out. On PS5 Pro, each of the two chiplets would feature 18 CUs and 2x 16 ROP/ColorDepth blocks (32 ROPs per chiplet).

To network the two chiplets in tandem, some of the typical GPU logic would need to be split off onto smaller complementary chiplets. One of these
would be the "Unifying GPU Chiplet", or UGC for short. The UGC would handle a bevy of things. Chiefly:

>The UGC handles the access routines for the GPUs to main
system memory by having the DMA built into it. The UGC chiplet
block then has Infinity Fabric links at a data rate of 640 GB/s
to the GPU chiplets (links of 320 GB/s to each GPU chiplet), so
they can then work with the data as required.

>There could be a Drawcall Management Block (DMB) on the UGC.
It would be paired as an extension of the Command Processor,
and be responsible for automating and managing the processes
for issuing GPU work to the hardware components of the
chiplets, which the Scheduler is mainly responsible for (and
it links directly to both chiplets via the IF links
mentioned above).

>The Command Processor, and other things such as the Scheduler,
are in this block. The Shader Input blocks are in the chiplets
themselves, one to each chiplet, and both have an independent
connection to this block for data throughput input.

*The Shader Input blocks may need some extra functionality
to provide feedback to the UGC, maybe in the form of some
hardware on the GPU chiplets themselves that can detect
when individual CUs on either GPU chiplet are free for more
work to be scheduled to them, and that can then communicate
with some silicon on the UGC, complemented by the UGC's
Drawcall Management Block, to prioritize new work to those
CUs and ensure peak saturation of GPU resources is always
maintained.

>There is also a small block of cache on this block: 256 KB L0$,
1 MB L1$, and 4 MB L2$, for any required pre-processing and
dispatch work, plus any drawcall instructions that can be
saved for later.

>The design of the Unifying GPU Chiplet block is that it houses
the traditional GPU components for sorting drawcall instructions
and issuing them to CUs in the SAs of the SEs, and it has its own
block of L2$ that it can share data with the CUs from if required.
Each chiplet's CUs have their own L0$ and share an L1$, but there
is no L2$ on the chiplets in the same way as on the base PS5. To
compensate, the shared L1$ size of the CUs is enlarged by 20%
per shared L1$.

The other big complementary GPU chiplet component in the PS5 Pro would be the Unifying Framebuffer Chiplet block, which can be called the UFC
for short. This would house the Display block typically seen in AMD GPUs, among some other things, and likely include various combine
modes for the dual framebuffers (one per GPU chiplet) that could be selected as presets by developers depending on what type of rendering
pipeline they wish to utilize for their game. You can think of these rendering display preset combinations for the dual framebuffers
as a mix of the SEGA Saturn's dual framebuffers and the SNES's various Mode settings (such as Mode 7).

The goal here, though, would be to reduce programming complexity to the developer simply selecting a combination mode for the two framebuffers; as long as they understand how the modes function, the hardware itself should do the heavy lifting in combining and sorting the stitched outputs depending on what the game wants. The combination modes should also be switchable within 1-2 cycles, so modes can be mixed based on what the game needs for optimal output. This is key for allowing maximum use of the framebuffer capabilities,
but it also means the UFC needs to have these different framebuffer combination preset modes readily accessible on some type of private local
memory. It's best to picture it, then, as an advanced VDP (Video Display Processor); some chunk of NOR flash and embedded SRAM cache in the
UFC would be best for this (the NOR flash could store the presets and even allow for XIP (Execute In Place) if desired, while the SRAM would be for
fast memory; some hierarchy of L0$, L1$ and L2$ is probably best).
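Purely as a sketch of the developer-facing side of that idea (every name here is invented by me, nothing official), the "pick a preset, the hardware does the stitching" model could look something like this:

```python
from enum import Enum, auto

class CombineMode(Enum):
    SPLIT_SCREEN_HALVES = auto()   # chiplet 0 renders the left half, chiplet 1 the right
    ALTERNATE_FRAMES = auto()      # chiplets alternate whole frames (AFR-style)
    CHECKERBOARD = auto()          # interleaved tiles, reconstructed on scan-out
    OVERLAY_3D_PLUS_UI = auto()    # chiplet 0 = 3D scene, chiplet 1 = UI/post layer

def pick_mode(vr_mode: bool) -> CombineMode:
    # a game could switch presets per frame (the post suggests a 1-2 cycle switch)
    if vr_mode:
        return CombineMode.SPLIT_SCREEN_HALVES   # one eye per chiplet
    return CombineMode.OVERLAY_3D_PLUS_UI

print(pick_mode(vr_mode=True))   # CombineMode.SPLIT_SCREEN_HALVES
```

The point is just that the dev would declare intent, while the (imagined) UFC hardware would own the actual combining of the two framebuffers.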
 
[Part 3]

Aside from the aforementioned GPU talk, a PS5 Pro would likely see improved support for PSVR2, with some bump to the WiFi 6 standard. For wired
and perhaps wireless dongle-based PSVR connectivity, there could also be a Thunderbolt port provided via supercharging the USB-C port. The last big
technological push I could see for a PS5 Pro is inclusion of persistent memory. Sony actually have some patents for ReRAM, which can potentially
be used as both a storage-class and DRAM-like class memory technology. By the time a PS5 Pro would be ready, I think Sony would at least have
storage-class ReRAM ready. The goal of it would be similar to the role Optane memory serves on compliant desktop PCs; as a bridge between
storage-class memory and system RAM.

A block of 32 GB of ReRAM developed in-house (and likely manufactured/fabbed by Sony via TSMC) would be able to provide a notable
performance boost to data I/O on a PS5 Pro while having much lower latency than NAND, support for smaller granularity levels in block data
sizes, much higher endurance P/E cycles, and more bandwidth compared to even high-class SSDs on the market. While there is currently no
commercial ReRAM on the market, there is at least one company with an IP license for storage-class ReRAM providing 25 GB/s of bandwidth.
By the time of a PS5 Pro, especially if the ReRAM itself had time to mature in the commercial market from 2021 or even 2022 onward, Sony
could possibly have a 24 GB/s - 25 GB/s bandwidth ReRAM solution that could be implemented in a mid-gen refresh at an affordable rate,
serving as a great starting ground for similar technology in a PS6.


Due to this, however, I actually DON'T see them doing too much with an SSD I/O spec bump. While the SSD size will likely double (to 1.536 TB, as
6x 256 GB modules, most likely Toshiba brand as in the PS5 itself), the actual bandwidth performance will very likely remain the same. So, 5.5 GB/s
raw bandwidth with compressed typical ranges of 11 GB/s - 12 GB/s, and up to maximum lossy compression range of 17 GB/s - 22 GB/s. This will still
be very impressive even at the time of the PS5 Pro and provide perfect compatibility with the base PS5; it just wouldn't be the fastest option available
anymore. However, considering the investment in ReRAM to make up for this, it's not a bad trade-off.
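Quick sanity check on those throughput ranges, treating effective speed as simply raw bandwidth times compression ratio (the ratios are my own guesses chosen to match the quoted figures):

```python
# Back-of-envelope for the SSD figures above: effective throughput is just the
# raw bandwidth multiplied by whatever compression ratio the data achieves.
RAW_GBPS = 5.5    # base PS5 raw SSD bandwidth, assumed unchanged for the Pro here

for label, ratio in [("typical lossless", 2.0),
                     ("better lossless", 2.2),
                     ("max lossy (claimed)", 4.0)]:
    print(f"{label:20s} ~{ratio}:1 -> {RAW_GBPS * ratio:.1f} GB/s effective")

# prints 11.0, 12.1 and 22.0 GB/s, lining up with the 11-12 GB/s typical range
# and the up-to-22 GB/s ceiling quoted above
```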

Regarding main memory, GDDR7 would be the standard. HBM2 would simply be too disruptive as a technological shift to implement in a mid-gen
refresh, and still likely carry a price premium compared to GDDR7, while not offering too large a performance benefit (at least in terms of bandwidth;
latency would probably be a different conversation) within a price bracket suitable for a mass-market mid-gen console refresh. While it would likely
provide lower power consumption, the mid-gen console refreshes would still get more than enough power reduction through other means to justify
going with GDDR7, which would itself most likely provide at least SOME power consumption reduction over GDDR6.

For PS5 Pro in particular, Sony would very likely stick with a 256-bit memory bus (they seem to love this bus size ;) ), and they'd want at least
some type of increase in GB-per-TF bandwidth over the base PS5 (~43 GB per TF), regardless of how features like Infinity Cache on AMD's RDNA
architectures shake out and develop. For those reasons, even if it'd require a slight overclock, it's very possible Sony would go for 20 Gbps GDDR7
modules, as 8x 2 GB chips, for a total of 80 GB/s per module and a system bandwidth of 640 GB/s on a 256-bit bus.
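The 640 GB/s figure is just the usual GDDR math; here's the back-of-envelope, plus the GB-per-TF comparison against the base PS5 (the 20 Gbps GDDR7 pin speed is of course my assumption):

```python
# standard GDDR math: per-module GB/s = pin speed (Gbps) * module width (bits) / 8
GBPS_PER_PIN = 20     # assumed 20 Gbps GDDR7 chips
MODULE_WIDTH = 32     # bits per GDDR module
MODULES = 8           # 8x 2 GB on a 256-bit bus

per_module = GBPS_PER_PIN * MODULE_WIDTH / 8     # 80.0 GB/s
total = per_module * MODULES                     # 640.0 GB/s
print(per_module, total)

# bandwidth per TF: base PS5 (448 GB/s, 10.275 TF) vs. this PS5 Pro guess
print(round(448 / 10.275, 1))        # ~43.6 GB/s per TF
print(round(total / 11.3025, 1))     # ~56.6 GB/s per TF
```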

Audio would likely be a slight iteration on the Tempest Engine; if possible, it could have some of the SPE-style logic simplified further in order
to allow for even easier utilization by developers, and a slight performance increase. Nothing too radical, however; they'd want to ensure it doesn't
compete too much in terms of bandwidth with the CPU and especially the GPU. The CPU, as hinted way earlier, would be Zen 5-based: a similar 8-
core/16-thread setup as the PS5's Zen 2, with better IPC and not only a unified L3$ (which Zen 3 would have already introduced), but also some
implementation of Infinity Cache on the CPU cache side. That would likely be a standard Zen 5 CPU feature anyway, but it's nonetheless
worth utilizing. The same 3.5 GHz clock of the PS5 would be supported, but a clock increase to something like 3.8 GHz or even 4 GHz would not be out of
the realm of possibility.

Finally, the GPU. As mentioned before, no 72 CU GPU design here; while a chiplet approach would be supported, we'd see it as 2x 18 CU chiplet
blocks. Process-wise, while 3nm (perhaps even 3nm EUV) would be readily available by this time in a general sense, DO keep in mind that costs are
NOT scaling down with node shrinks; rather, the opposite is happening, i.e. prices are INCREASING. With investments already placed in the ReRAM
and (very likely) customizations to any aforementioned features of the GPU chiplet design that don't end up being standard in the RDNA spec by this
point, to keep costs down and place investments in other areas, Sony would likely go for 5nm EUVL instead, saving 3nm (or 3nm EUV) for a PS6.

It's my personal opinion that the base PS5 is on 7nm EUV. Now, the benefits of 7nm EUV over 7nm DUV (which is what I suspect the Series X is on)
are: 17% density gain, and 10% power consumption reduction OR a 10% performance increase, clock-for-clock. Seeing where the PS5 is landing in
regards to not just its specs but things that reinforce the perception of certain specs (such as the system's size and cooling solution), I'd say the PS5
may've only gone for half of the possible performance gain benefit of 7nm EUV, so 5%. Some people probably feel differently...some probably would
even say it's not 7nm EUV. But I personally feel that to be the case.

With this taken into consideration, a PS5 Pro would see a pure TF performance increase from 10.275 TF @ 2.23 GHz...to 11.3025 TF @ 2.23 GHz. This
comes with a 30% power consumption reduction thanks to shifting to 5nm. While 5nm EUVL would provide an additional 10% power
consumption reduction, and 5nm itself brings a 30% power consumption reduction, THAT reduction is measured against basic 7nm, and the
PS5 is already on 7nm EUV, which had a 15% power consumption reduction over that. So overall it would come to a 30% power consumption reduction
for them on 5nm EUVL instead of 45%.
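For anyone wanting to check the 11.3025 TF figure, it falls out of the standard TF formula plus the assumed 10% node-related uplift (the 10% is my assumption, not a spec, and the post's figure just rounds 10.275 * 1.1 slightly differently):

```python
CUS = 36          # base PS5 active CUs
SC_PER_CU = 64    # shader cores per CU on RDNA 2
CLOCK_GHZ = 2.23

base_tf = CUS * SC_PER_CU * 2 * CLOCK_GHZ / 1000   # 2 FP32 ops per core per clock
pro_tf = base_tf * 1.10                            # assumed +10% node-related uplift
print(round(base_tf, 3), round(pro_tf, 3))         # ~10.276, ~11.303
```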
 
So picturing all of that for a 2023 holiday release? Looks rather tempting, doesn't it? So let's summarize:

>YEAR: 2023

>NODE: 5nm EUVL

[CPU]

>GEN: Zen 5

>CLOCK: 3.5 GHz (PS5 Base compat), 4 GHz (default clock for PS5 Enhanced Mode performance)

>CORES: 8

>THREADS: 16

>CACHE:

>L1$: 128 K

>L2$: 512 K

**Implements a scaled-down form of Infinity Cache

>L3$: 8 MB

[GPU]

>GEN: RDNA 4 (+ some RDNA 5 features); assumes a 14-month period between generations. RDNA 2 launch Nov. 2020, RDNA 3
launch Jan. 2022, RDNA 4 release Mar. 2023, RDNA 5 May 2024 (PS5 Pro would be one of the first RDNA 5-based (in some aspects)
products on the market, exclusively so for 6 months until PC RDNA 5 cards release in May 2024)

>CLOCK: 2.23 GHz

>DESIGN: Chiplet (2x chiplets)

>CUs: 36 (18 per GPU chiplet)

>ROPs: 64 (2x 16 ROP blocks per chiplet)

>ALUs: 2,304

>FEATURES:

>Unifying GPU Chiplet block (UGC)

>Unifying Framebuffer Chiplet block (UFC)

>Improved RT (dedicated RT units built into each Dual CU; the
RT units are linked with adjacent RT units above and below them,
to accelerate RT calculations. Basically, graphics data on
each Dual CU would be broken down to have RT calculations done
for just the shader data that Dual CU is calculating. MOTL)

>Improved AI ML (silicon-level support for GPT 2.0 data models,
though some work would still need to be done on the shaders)

>Improved image upscaling

>TF: 11.3025 TF (2.23 GHz clock)

>POWER CONSUMPTION: 138 watts (average) (106 watts from the chiplet design
+ power consumption reduction, + 32 watts for additional GPU hardware
silicon, including larger cache sizes; this is a general wattage estimate. I
have no idea which additional aspects of the GPU would draw what specific
power out of this 32-watt figure; the 32 watts would just be the overall
upper limit regardless of the combination.)

>DIE AREA: 94 mm^2 (72 mm^2 from basic die area reduction, + 22 mm^2
from additional GPU hardware; actual die sizes may be larger since
determining mm^2 by wattage per mm^2 is not a 100% method, but a
consistent one for limited estimates)

[MEMORY]

>RAM: 16 GB GDDR7, as 8x 2 GB, 20 Gbps chips @ 640 GB/s (+ 192 GB/s over base PS5)

>PERSISTENT RAM: 32 GB low-level, storage-class ReRAM, 25 GB/s

>STORAGE: 1.536 TB NAND, 5.5 GB/s raw, 11 - 12 GB/s lossless compressed, 17 - 22 GB/s maximum lossy compressed

[PRICE]

>DIGITAL: $299.99

>DISC: $399.99

**Both models will replace their respective base PS5 Digital and PS5 Disc Editions through a gradual phase-
out shift during 2024

--------------

For the next set of parts (probably for later in the week), we'll be focusing on Microsoft, who'll have not one, but two big mid-gen
refreshes to deal with...
 
Welp, I'm back here and ready to throw out some thoughts on the 10th-gen systems. First though, I gotta disown what I wrote earlier. Since having some discussions here with multiple users, I've been convinced that the likelihood of mid-gen refreshes (as in explicit "Pro"-level models) just isn't there this time around. Not in the way it was for PS4 Pro and One X. The N6 process doesn't seem like it'll be worth it even for slimline model revisions, considering it only brings a density increase, with no power savings and no performance gains. Though theoretically you could get performance gains by utilizing the density gains for more silicon, the chip design would have to be flexible to begin with to allow for that type of architecture redesign to leverage the extra density budget. At that point, why not just move to 5nm?

So, IMO mid-gen refreshes will be more in terms of market-expansion peripherals. PSVR2 for example, being one. I've seen some very bare rumors of a "PS5G", which I guess is supposed to be a possible Vita successor. Well, the 5G makes it sound more like a streaming mobile/portable device, and I certainly see that being possible since it could function both on its own and also as a peripheral for PS5, but I would expect it to be streaming-focused to keep costs down. On Xbox side I think some things are probably more in flux; depending on market performance of Series systems over the next year MS may or may not abandon their idea of cyclic model upgrades. That DOESN'T mean they won't have a 10th-gen system: they definitely will. But if say the current idea for their cyclic model is to put out more powerful Series X-style systems every two years, that could change to emphasizing the Series systems as peripheral components to service the current Series X and Series S instead, like the rumors of that Xcloud streaming stick, or a potential VR headset. That type of stuff.

Basically, I'll give a general rundown of what I expect of any mid-gen refreshes/revisions (sans things like VR peripherals).

[SONY]

>PS5 Slim: 5nm, ~140 watt system TDP (30% savings on 5nm, better PPW GDDR6 chips, possibly a smaller array of 3x 4-channel NAND modules at 384 GB
capacity each, chip-packaging changes and chiplet setup, etc.). RDNA 4-based (16-month intervals between RDNA gens would mean Jan. 2022 for RDNA 3, July 2023 for RDNA 4), 1 TB SSD storage, same SSD I/O throughput (with possibly slightly better compression due to API maturity and algorithms), same amount of GDDR-based memory and bandwidth (so, sticking with GDDR6), $299 (Digital only). November 2023 release.

>PS5 Enhanced: 5nm, ~150 watt system TDP (factoring in disc drive), RDNA 4-based, 6x 384 GB NAND modules (~2 TB SSD), same GDDR6 memory capacity but faster chips (16 Gbps vs. 14 Gbps) for 512 GB/s bandwidth, improved SSD I/O bandwidth (~8 GB/s Raw, up to 34 GB/s maximum 4.25:1 compression ratio), slightly better GPU performance (up to 11.81 TF due to 5nm; this would probably increase total system TDP to about 155 watts), Zen 2-based CPU, disc drive, $399. November 2023 release.

>PS5G (Fold): 5nm, ~25 watt - 35 watt system TDP, RDNA 4-based (18 CU chiplet block), 8 GB GDDR6 (8x 1 GB 14 Gbps chips downclocked to 10 Gbps, 3D-stacked PoP (Package-On-Package), 320 GB/s bandwidth; bandwidth math sketched below the list), 256 GB SSD storage (2x 2-channel 128 GB NAND modules), 916.6 MB/s SSD I/O bandwidth (compressed bandwidth up to 3.895 GB/s), Zen 2-based CPU, 7" OLED screen, streaming-oriented for PS5 and PS4 Pro titles (native play of PS4 games), $299 (Digital only). November 2023 release

>PSVR2: Wireless connectivity with PS5 systems, backwards-compatible with PS4 (may require a wired connection), on-board processing hardware for task offloading from the base PS5, Zen 2-based CPU, 4 GB GDDR6 as 4x 1 GB modules in a 3D-stacked PoP setup (14 Gbps chips downclocked to 10 Gbps, 160 GB/s bandwidth), 128 GB onboard SSD storage (1x 2-channel 128 GB NAND module, 458.3 MB/s raw bandwidth, up to 1.9479 GB/s compressed bandwidth), AMOLED displays, $399, November 2022 release.
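For the bandwidth figures in the PS5G and PSVR2 entries above, it's the usual GDDR formula with the downclocked pin speeds I assumed:

```python
def gddr6_bw_gbps(chips: int, pin_gbps: float) -> float:
    # chips * 32-bit module width * effective pin speed (Gbps) / 8 bits-per-byte
    return chips * 32 * pin_gbps / 8

print(gddr6_bw_gbps(8, 10))   # 320.0 GB/s -> the PS5G guess (8 chips at 10 Gbps)
print(gddr6_bw_gbps(4, 10))   # 160.0 GB/s -> the PSVR2 guess (4 chips at 10 Gbps)
```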

[MICROSOFT]

>SERIES S Lite: 5nm, RDNA 3-based (possibly with some RDNA 4 features mixed in), possibly some CDNA 2-based features mixed in, 10 GB GDDR6, 280 GB/s bandwidth (224 GB/s for GPU, 56 GB/s for CPU/audio), 1 TB SSD, same raw SSD I/O bandwidth (2.4 GB/s) but increased compression (3.5:1 ratio, up to 8.4 GB/s maximum compression ratio), $199 (Digital only), November 2022 release

>SERIES X-2: 5nm EUV, RDNA 4-based, some CDNA 2-based features mixed in, 20 GB GDDR6 (10x 2 GB chips), 16 Gbps modules (640 GB/s bandwidth), improved SSD I/O bandwidth (~8 GB/s, 3.5:1 ratio compression, up to 28 GB/s maximum compression ratio), lower system TDP (~160 watts - 170 watts), 2 TB SSD storage, Zen 2-based CPU, disc drive, improved GPU performance (~14 TF), $449. November 2023 release.

>SERIES.AIR (Xcloud streaming box, think Apple TV-esque): 5nm, RDNA 3-based, 8 GB GDDR6 (4x 2 GB chips), 14 Gbps modules downclocked to 10 Gbps (160 GB/s bandwidth), 256 GB SSD, same SSD I/O as base Series S and Series X (2.4 GB/s) but improved compression bandwidth (up to 8.4 GB/s maximum compression ratio), $99 (Digital Only), November 2021 release

>SERIES.VIEW (Wireless display module screen that can be added to Series S Lite and Series.Air (and to a lesser extent Series X-2) for a makeshift portable device, or used as an AR extension of VR): Zen 2-based CPU (4-core variant, lower clocks), 2 GB GDDR6 as 2x 1 GB modules (14 Gbps chips downclocked to 8 Gbps, 64 GB/s bandwidth), 8" OLED display, USB-C port (the included Male/Male USB-C double-point module can be used to wire Series.View to a Series S Lite), $199, Spring/early Summer 2022 release. Also compatible with PC.

>SERIES.VIRTUA (VR helmet developed in tandem with Samsung, for Series system devices as well as PC): Based on the Samsung HMD Odyssey+ headset but with some pared-down specs for more mid-range performance capabilities. $399, Spring/Summer 2022 release.
------------------

So that's what I'm thinking Sony and Microsoft do insofar as mid-gen refreshes and major peripheral upgrades, up to early 2024. From that point on it's really up in the air, probably easiest to see the two of them doing bundles for various mixes of these refreshes and peripherals. For example, Sony could probably do a package bundle in late 2021 and early 2022 with PS5 (base) and PSVR to drive out remaining stock for the first generation of PSVR and the original PS5 models, making way for the PS5 system refreshes and PSVR refresh in 2022 (PSVR2) and PS5 Slim & Enhanced (2023).

Meanwhile, I think Microsoft will try SKU bundles like Series.Air & Series.View around late 2023 and into 2024, or even later SKU bundles like Series X-2 & Series.Virtua in late 2024 into early 2025. I think that's what Sony & Microsoft will do going into the tail-end of 9th gen and leading into 10th-gen...

..which I'll start posting some ideas on below, probably a bit later tonight. I'm gonna parse through some of what I have there a bit more, because at least in Sony's case there are two different paths they could take, but I think one in particular will win out for a few specific reasons (basically I see them staying narrow & fast, even if Cerny teased a theoretical 48 CU design for PS5 back in March), because it will still give them a huge performance boost and also keep costs very manageable. Microsoft, meanwhile, I think will keep going wider, but leverage some type of new technology innovations to allow much more granularity and precision in power distribution across the GPU hardware while staying within a generally fixed power budget, at costs very likely favorable for MS.

But yeah, I'm gonna get on that as soon as I can, will also try focusing on some ideas outside of the technical aspects like possible controller innovations, features, etc.
 
Are they? I might've made a mistake then and meant to put EUVL, which IIRC is somewhat different from EUV. The nomenclature is slightly trippy sometimes :S
Yes they are, and EUV and EUVL both mean the exact same thing: Extreme Ultraviolet Lithography.
You can use a different number of EUV(L) layers per design, but they're still all EUV(L).
 
Yes they are, and EUV and EUVL both mean the exact same thing: Extreme Ultraviolet Lithography.
You can use a different number of EUV(L) layers per design, but they're still all EUV(L).

Ah, okay. Thanks for clearing that up. I know there's a 5nm variant process that gives slightly more power consumption savings or performance increase on same design at same clocks. Same for 3nm. Just read from a few spots months ago and they listed them as 5nm, 5nm EUVL, 3nm, 3nm EUV etc.
 
Okay, I'm ready to move on with the 10th-gen system speculation. This took a long time because I've rewritten this for both at least a dozen times, and a lot of things got changed along the way. So I'll start with PlayStation 6 and probably break it down into a few posts, then move on to Microsoft's stuff and do the same. I'll also try explaining why certain things are the way they are.

This is just going to be focused on specs; I'll see about coming up with some ideas for possible business or product strategies afterwards.

[PLAYSTATION 6]

>YEAR: 2026 or 2027.

>2026 likely, but 2027 more likely. Would say 45/55 split between the two.

>Gives PS5 hardware and software more time to "bake" an ecosystem market without contending with PS6 messaging/marketing

>Allows for cheaper securement of wafer production, memory (volatile, NAND) vs. an earlier launch

>Gives 1P studios more time to polish games intended for launch of PS6

>Sony wants to shorten 1P dev times not to bring out hardware faster (returning console gen length to 5 years), but to release more 1P titles in a given (by modern notion) standard console cycle (6-7 years). Allows them to drive more profits in a 6-7 year period, which helps offset R&D/production costs of 10th-gen hardware provided R&D/production costs stay roughly similar to what they were for 9th-gen (PS5), or only 25% - 30% increase at most.​

>NODE: N3P

>Only way for them to get the performance they need at a reasonable power budget

>Will complement contemporary RDNA architecture designs/advancements very well

>Can have wafer costs managed through scaled offsetting of budget in other areas (die size, memory, etc.)​

[GPU]

>ARCHITECTURE: RDNA 7-based

>Assuming 15-month intervals between RDNA refreshes, RDNA 7 would be completed by February 2027. RDNA 8 would be completed and released by May 2028. A PS6 in either 2026 or 2027 could be predominantly RDNA 7-based, with some bits maybe from RDNA 8 (or influencing RDNA 8) if the release of PS6 is 2027 rather than 2026.​

>SHADER ENGINES: 2

>SHADER ARRAYS (PER SE): 2

>CUs: 40

>72 CUs would double PS5, but would also at least double the silicon budget, AND would be on 3nm EUVL(+), which is more expensive than 7nm in its own right. The only way to offset that would be to either gimp some other area (storage, memory, CPU, etc.) or go with 5nm EUVL, which curbs some of the performance capability due to having less room in the power consumption budget.

>CUs will only get bigger with more silicon packed into them. PS5 CUs are 62% larger than PS4 CUs for example, despite being on a smaller node, aka more features are built into the individual CUs relatively speaking (such as RT cores). Any features that scale better with integration in the CU will be able to bump up the CU size compared to PS5, even if the overall CU count remains the same or only slightly larger.

>PS6 CUs could be between 50% - 60% larger than PS5 CUs

>A chiplet design can allow for more active CUs without the need to disable any out of yield concerns

>Would allow for similar GPU programming approaches in line with PS5

>Theoretically easier to saturate with work

>SHADER CORES (PER CU): 128

>SHADER CORES (TOTAL): 5,120

>Going with a smaller GPU (40 CUs) would require something else to be increased in order to provide suitable performance gains. Doubling
the amount of Shader Cores per CU is one of the ways to do this, though 128 could be closer to a default for later RDNA designs by this point.

>ROPs: 128 (4x 32-unit RBs)

>Doubling of ROPs on the GPU in order to complement the increase in per-CU shader cores

>TMUs (per CU): 8

>Assuming a 16:1 ratio between SCs and TMUs per CU is maintained, doubling the SCs from 64 to 128 would also double the TMUs from 4 to 8

>TMUs (TOTAL): 320

>MAXIMUM WORKLOAD THREADS: 40,960 (32 SIMD32 waves * 32 threads * 40 CUs)

>MAXIMUM GPU CLOCK: 3362.236 MHz

>PRIMITIVES (TRIANGLES) PER CLOCK (IN/OUT): Up to 8 PPC IN, up to 6 PPC OUT (current RDNA supports up to 4 PPC OUT)

>PRIMITIVES PER SECOND (IN/OUT): Up to 26.8978 billion PPS IN, up to 20.17335 billion PPS OUT

>GIGAPIXELS PER SECOND: 430.366208 G/pixels per second

>INSTRUCTIONS PER CLOCK: 2 IPC

>INSTRUCTIONS PER SECOND: 6.724472 billion IPS

>RAY INTERSECTIONS PER SECOND: 1,075.915 G rays per second (1.075915 T rays per second) (3362.236 MHz * 40 CUs * 8 TMUs)

* RT intersection calculations might be off; figured RT calculations leverage the TMUs in each CU but wasn't sure if that's 100% the case.

>THEORETICAL FLOATING POINT OPERATIONS PER SECOND: 34.4 TF (40 CUs * 128 SCs * 2 IPC * 3362.236 MHz; see the quick math sketch after this spec block)

>CACHES:

>L0$: 256 KB (per CU), 10.24 MB (total)

>L1$: 1 MB (per Dual CU), 20 MB (total)

>L2$: 24 MB

>L3$: 192 MB (Infinity Cache)

*SRAM bit density is 0.027 µm² per bit on 7nm, meaning 128 MB would be ~166 mm^2 on 7nm/7nm DUV. An 87% area reduction on 3nm EUV would reduce this to about 22 mm^2. An SRAM cell density improvement of 1.5x on the node could bring this to 192 MB.

>TOTAL: 246.24 MB

>TDP: 160 watts

>Die Area: ~100 mm^2 - 120 mm^2 (factoring in larger CUs, additional integrated silicon, larger caches, revamped frontends and backends, etc.)
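Since there are a lot of derived numbers in that spec block, here's a quick sketch re-deriving the headline figures from the assumed inputs (everything here is speculation; the point is just that the arithmetic hangs together):

```python
CLOCK_MHZ = 3362.236
CUS = 40
SC_PER_CU = 128
TMUS_PER_CU = 8
ROPS = 128

clock_hz = CLOCK_MHZ * 1e6
tf = CUS * SC_PER_CU * 2 * clock_hz / 1e12            # ~34.43 TF
prims_in = 8 * clock_hz / 1e9                         # ~26.90 billion/s in
prims_out = 6 * clock_hz / 1e9                        # ~20.17 billion/s out
gpix = ROPS * clock_hz / 1e9                          # ~430.4 Gpixels/s
grays = CUS * TMUS_PER_CU * clock_hz / 1e9            # ~1075.9 Grays/s (~1.08 T)
threads = 32 * 32 * CUS                               # 40,960 threads in flight
cache_mb = 0.256 * CUS + 1 * (CUS // 2) + 24 + 192    # 246.24 MB total

print(tf, prims_in, prims_out, gpix, grays, threads, cache_mb)
```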
...continued below...



 