Predict: Next gen console tech (10th generation edition) [2028+]

Why would the next consoles have 32 GB of RAM if PC gets away without it? That's a lot of additional cost. If there's a RAM bump, I'd guess it only goes to 24 GB. I think it's more important to spend on bandwidth, so I'd like to see stacked RAM. I would definitely take a 16 GB console with HBM over a 32 GB console.

@Dictator : Just had an idea for another DF investigation. How about gimping a PC in various ways and seeing what has the biggest impact on games? So take a monster PC and then put in a poky CPU. Then revert to good CPU and try limited RAM. Then weaker GPU, then slow storage. Benchmark a bunch of games and see what the impacts of each bottleneck are and predict the importance of each part on next-gen consoles?
There absolutely needs to be a RAM bump next generation: 8 GB more at minimum, but ideally it's got to at least double, or the generation will simply look like more of the same, IMO.

My thoughts for a truly next generation console which could actually differentiate itself drastically from the current gen would require a few things:

-At least 32GB of RAM with higher bandwidth
-Larger memory caches on the chips
-Optical discs with storage capacity of 500GB or more
-Better dev kits/tools for faster working environment and iteration times


Devs always want more physical memory. Yes, obviously appropriate bandwidth increases are important as well, but you've got to give them at least SOME extra memory IMO. Double would be ideal, as I said earlier, paired with an even faster I/O subsystem to keep it fed and a new disc storage medium.

A new high-capacity optical disc storage medium with fast access speeds would be one of the most important improvements they could make for a console, IMO. It would do a couple of things. First, it would essentially make physical media for games a requirement, and popular again. Physical is only "dying out" because it's now more convenient to download games. 500GB games would shift the convenience back onto the physical disc, as that's too much and too long of a download for most people, and SSD storage capacity is only so big. Secondly, 500GB of storage on a single disc would basically unshackle developers in a similar way to what the SSD did coming from HDD-based consoles, by allowing developers in general to worry less (or not at all) about storage capacity, and allowing those pushing the state of the art to use much higher-fidelity assets and textures everywhere, as well as animations, voice dialog, and whatever else.

I think it would be awesome to see a new disc-based storage medium breathe new life into physical storage and games. It could also bring physical games back to the PC platform in a big way. I'd love to see that happen... nobody wants to be downloading 500GB games, lol.
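As a rough sense check on the "too long of a download" claim, here's a quick back-of-envelope calculation; the connection speeds are illustrative assumptions, not survey data:

```python
# How long does a 500 GB game take to download?
# Connection speeds below are illustrative assumptions.
GAME_SIZE_GB = 500

for label, mbps in [("50 Mbps DSL", 50), ("300 Mbps cable", 300), ("1 Gbps fibre", 1000)]:
    gb_per_hour = mbps / 8 / 1000 * 3600   # Mbps -> GB/s -> GB/hour
    hours = GAME_SIZE_GB / gb_per_hour
    print(f"{label:>15}: {hours:5.1f} hours")
```

On a 50 Mbps line that's a day-long download; even at 300 Mbps it's the better part of an afternoon, which is the crux of the convenience argument.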
 
Physical is only "dying out" because it's now more convenient to download games. 500GB games would shift the convenience back onto the physical disc, as that's too much and too long of a download for most people, and SSD storage capacity is only so big.
Discs are just a distribution format. They are also inconvenient, as you have to have the disc to hand and put it in. Installed games are simply more convenient.
Secondly, 500GB of storage on a single disc would basically unshackle developers in a similar way to what the SSD did coming from HDD-based consoles, by allowing developers in general to worry less (or not at all) about storage capacity.
You're still going to have to copy everything across to storage for fast enough access.
I think it would be awesome to see a new disc-based storage medium
What tech is there that can do this? Is anyone even bothering with researching new storage?
 
There absolutely needs to be a RAM bump next generation: 8 GB more at minimum, but ideally it's got to at least double, or the generation will simply look like more of the same, IMO.

My thoughts for a truly next generation console which could actually differentiate itself drastically from the current gen would require a few things:

-At least 32GB of RAM with higher bandwidth
-Larger memory caches on the chips
Is GDDR memory going down in cost? So much so that 32GB of RAM in 2028 will cost the same as or less than 16GB of RAM in 2020? I'm pretty sure the answer is no. If we get 16GB of GDDR7 next gen with a 4GB DDR chip to hold the OS (compared to the 512MB chip in the PS5), I would understand it.
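To put a number on that, here's a toy compounding model; the annual decline rates are made-up, purely for illustration. For 32GB in 2028 to cost what 16GB did in 2020, $/GB has to roughly halve over eight years, i.e. fall about 8-9% per year, every year:

```python
# Toy model: what annual $/GB decline would let 32 GB in 2028 cost
# what 16 GB cost in 2020? (Decline rates are assumptions, not data.)
years = 2028 - 2020
needed_factor = 16 / 32          # $/GB must halve

for annual_decline in (0.05, 0.10, 0.15):
    factor = (1 - annual_decline) ** years
    verdict = "enough" if factor <= needed_factor else "not enough"
    print(f"{annual_decline:.0%}/yr -> x{factor:.2f} of 2020 $/GB ({verdict})")
```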

New process nodes are also hitting walls for SRAM density, so 8MB of L3 cache again wouldn't surprise me.

Every time the PS5 goes on sale, the console flies off the shelves, so they don't want an even more costly box. People should lower their expectations for the actual hardware and think more about features that make the hardware more efficient while keeping costs down.
 
Discs are just a distribution format. They are also inconvenient, as you have to have the disc to hand and put it in. Installed games are simply more convenient.

You're still going to have to copy everything across to storage for fast enough access.

What tech is there that can do this? Is anyone even bothering with researching new storage?
That's about as inconvenient as having to use a PC to play PC games.

Necessity breeds innovation.

There is research being done that proves it's possible. The problem is getting it commercialized and affordable enough for mass-market devices.


I've always wondered if access times could be improved by physically having more lasers in the drive instead of just one. Have 4 lasers which can each move independently and each read a smaller section of the disc. You could position multiple lasers to read the outer portions of the disc more rapidly for example.

I dunno, my post was more of a wishful-thinking idea to truly differentiate the next gen from the current one... because I DO believe that storage is becoming a big bottleneck for truly transformative visual quality in games. If we don't do that, we're basically just getting more of the same quality of art and assets, but clearer.
 
My prediction for a PS6 in 2028.

4 performance cores + 4 efficiency cores Zen 6-7 at 4 GHz

30-32 teraflops RDNA 6.5 with level 5 ray tracing acceleration at 3.2 GHz (rough CU-count math in the sketch after these spec lists)

16GB GDDR7 for games and 4GB DDR5 for the OS

1 TB SSD at 5.5 GB/s

$529 without disc and $579 with it (they aren't touching that $600 price... maybe)


Launching at the same time with a Series S style box that has:

4 performance cores + 4 efficiency cores Zen 6-7 at 3.5 GHz

10-14 teraflops RDNA 6.5 with level 5 ray tracing acceleration

16GB GDDR7 at half bus width for games and the OS

800GB SSD at 4GB/s

$400 with disc and $329 without.

Add $50-70 to every price if Sony gets tired of losing too much money on the hardware.
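As promised above, a quick sanity check on those teraflop figures using the standard FP32 throughput formula (FLOPS = CUs x 64 lanes x 2 ops per FMA x clock); the formula is standard, but the configurations, like everything else in this post, are guesses:

```python
# Implied CU count for the predicted teraflop figures.
# FP32 FLOPS = CUs * 64 lanes * 2 ops (FMA) * clock
def cus_needed(tflops, clock_ghz, lanes=64):
    return tflops * 1e12 / (lanes * 2 * clock_ghz * 1e9)

for tflops in (30, 32):
    print(f"{tflops} TF @ 3.2 GHz -> ~{cus_needed(tflops, 3.2):.0f} CUs")
```

That's roughly 73-78 CUs, i.e. a fair bit bigger than the PS5's 36 active CUs even before the clock bump.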
 
That's about as inconvenient as having to use a PC to play PC games.
I don't get this. Presently people pop onto their console or Steam and run a game pretty much instantly from storage. They also quick-resume in some cases. Having to go get a physical copy (let alone store it! Shelves of cases are so last millennium!) and put the disc in, then swap it out for another game, is tedious. If we have to go back to that, there'd have to be some huge upsides.
I've always wondered if access times could be improved by physically having more lasers in the drive instead of just one. Have 4 lasers which can each move independently and each read a smaller section of the disc. You could position multiple lasers to read the outer portions of the disc more rapidly for example.
Yes, at greater cost, with limited linear improvements in transfer speed, and still atrocious seek times. You'd have to copy the game to local storage and then use the disc just as a license to play that game. Which will also need the internet to download dozens of patches.
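To put rough numbers on that install step, a minimal sketch, assuming a hypothetical ~50 MB/s sustained single-pickup read for this new disc format (my assumption, not a real spec). Extra pickups only scale the linear rate in the best case:

```python
# Best-case install time for a 500 GB disc with N parallel pickups.
# The 50 MB/s single-pickup baseline is an assumption, not a spec.
GAME_GB = 500
BASE_MB_S = 50

for pickups in (1, 2, 4):
    rate = BASE_MB_S * pickups            # assumes perfectly parallel reads
    hours = GAME_GB * 1000 / rate / 3600
    print(f"{pickups} pickup(s): ~{rate} MB/s -> {hours:.1f} h to install")
```

Even the optimistic four-pickup case is still a ~40 minute install, and seek-heavy direct play would be far worse.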
I dunno, my post was more of a wishful-thinking idea to truly differentiate the next gen from the current one... because I DO believe that storage is becoming a big bottleneck for truly transformative visual quality in games. If we don't do that, we're basically just getting more of the same quality of art and assets, but clearer.
Improvements there will need to come from different asset generation - realtime procedural content creation - because the cost to create assets is too high. Maybe ML content creation will offset those costs. But games right now could have 500 GB of assets and stream them; no game is doing that because it's cost prohibitive, which also means the super-fast SSDs aren't being taxed at all. Nobody is using that much information density, because it's too expensive.
 
If this hypothetical new optical media had read/write capability and a decent transfer speed, say 1 GB/s, it could possibly work. But imagining the day 1 patches for games bought on disc leaves me wondering how bad an idea it truly is. I would hate to pop in the disc and install the game, only to be faced with a 300 GB download before being able to play.
 
Instead of discs, going the Nintendo route and back to NAND would net you high capacity and speed while preserving physical as a medium.
 
Started on a PS5 Slim/Pro spec mockup some time back and then got a bit carried away on 'PS6'... 😁

Certainly on the optimistic side and not meant to be perfectly accurate, but it conveys a general idea:

[attached image: speculative 'PS6' spec mockup]
 
Why would the next consoles have 32 GB of RAM if PC gets away without it? That's a lot of additional cost. If there's a RAM bump, I'd guess it only goes to 24 GB. I think it's more important to spend on bandwidth, so I'd like to see stacked RAM. I would definitely take a 16 GB console with HBM over a 32 GB console.

@Dictator : Just had an idea for another DF investigation. How about gimping a PC in various ways and seeing what has the biggest impact on games? So take a monster PC and then put in a poky CPU. Then revert to good CPU and try limited RAM. Then weaker GPU, then slow storage. Benchmark a bunch of games and see what the impacts of each bottleneck are and predict the importance of each part on next-gen consoles?

Yeah, GDDR is expensive, probably $30 for 8GB of GDDR7, so 24GB would be $90 of BOM alone. And what would it be needed for? Everything is streamed from an SSD now at the high end.

One way or another, the major bottleneck is memory latency/bandwidth. Compression and caches are what's going to improve performance per watt, and compression can improve performance per area as well. That, and resource handling/creation; AMD's work graphs with extensions are a great start. All the RT in the world is unhelpful if the BVH is 8GB and takes several minutes to rebuild, and an 8GB BVH would be useless anyway, as you'd get bandwidth-restricted instantly even if you could store it.
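A back-of-envelope on why a giant BVH hits the bandwidth wall first; the node size, nodes touched per ray, and bus speed are all illustrative assumptions:

```python
# Bandwidth ceiling on incoherent BVH traversal.
# All three constants are illustrative assumptions.
BANDWIDTH_GBS = 1000      # hypothetical next-gen bus, GB/s
NODE_BYTES    = 64        # ballpark BVH node size
NODES_PER_RAY = 30        # incoherent traversal, assuming caches don't help

bytes_per_ray = NODE_BYTES * NODES_PER_RAY
max_rays = BANDWIDTH_GBS * 1e9 / bytes_per_ray
rays_4k60 = 3840 * 2160 * 60              # one ray per pixel at 4K60

print(f"ceiling: {max_rays / 1e9:.2f} Grays/s")
print(f"4K60 at 1 ray/pixel needs {rays_4k60 / 1e9:.2f} Grays/s")
```

Under those assumptions, even a hypothetical 1 TB/s bus tops out around one incoherent ray per pixel at 4K60, regardless of how much RAM holds the BVH.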

More RAM is mostly pointless.
 
Instead of discs, going the Nintendo route and back to NAND would net you high capacity and speed while preserving physical as a medium.
You can't really do this, unfortunately. Well, you can, but not without heavy additional costs.

Because you still need large SSD storage in the console as standard. You need to cater for those who want their library digital. I suppose you could offer such digital storage as an optional drive purchase, but that would be very unpopular and against the obvious market trends.

And the benefits aren't that big anyways. Especially with how much patching games get nowadays. Being able to read right from the cart instead of installing the game is neat, but it's not gonna be that big of a selling point, and nobody would be happy to hear that all games suddenly cost $20 more because of it.

I don't even think it'll make sense for Nintendo this upcoming generation, and I'm expecting mandatory installs just like Xbox/PS.
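For a rough sense of the cost side, a small sketch; the $/GB figures are illustrative guesses, not actual NAND quotes:

```python
# Ballpark cart BOM: capacity * assumed NAND price per GB.
# The $/GB values below are illustrative guesses, not quotes.
for usd_per_gb in (0.05, 0.10):
    for capacity_gb in (64, 128, 500):
        cost = capacity_gb * usd_per_gb
        print(f"{capacity_gb:3d} GB @ ${usd_per_gb:.2f}/GB = ${cost:5.2f} of NAND alone")
```

At those rates a 500GB cart carries $25-50 of NAND before packaging and margins, which is where the "$20 more per game" problem comes from.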
 
Instead of discs, going the Nintendo route and back to NAND would net you high capacity and speed while preserving physical as a medium.
What about huge 200 to 300 GB day 1 patches that would need to be downloaded before being able to play the game?
 
I wonder if it would be possible to have some special low-latency RAM, ala the eDRAM on the 360 or the ESRAM on the Xbox One, that is accessible by both CPU and GPU,
specifically for holding the BVH.
Dedicated memory could be lower latency, whether via SRAM/ESRAM or even normal DDR, versus GDDR.
BVH building, processing, and reading instructions could provide the means to access and read from the special memory.
Maybe even chuck a bit of processing logic around it too, for faster AABB tests?
You could always duplicate the BVH state to "main GDDR" memory.

But a decently sized low-latency block of memory, accessible by both CPU and GPU, might be one way to get a significant perf increase for RT-like problems on a console?
128 MB would probably do it.
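How much BVH actually fits in 128 MB? A quick sketch; the node size and the two-nodes-per-triangle ratio are ballpark assumptions for a binary BVH with one triangle per leaf:

```python
# BVH capacity of a 128 MB scratchpad under ballpark assumptions.
SCRATCH_MB    = 128
NODE_BYTES    = 64    # assumed node size
NODES_PER_TRI = 2     # ~N leaves + ~N-1 internal nodes for N triangles

nodes = SCRATCH_MB * 1024**2 // NODE_BYTES
tris  = nodes // NODES_PER_TRI
print(f"{nodes:,} nodes -> roughly {tris:,} triangles of BVH on-chip")
```

That's only about a million triangles' worth, so for modern scenes you'd likely keep just the hot top levels of the tree on-chip, which is arguably the part that benefits most from low latency anyway.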
 
I've always wondered if access times could be improved by physically having more lasers in the drive instead of just one.
The Kenwood TrueX 72X says hello
In Zen's system (developed in conjunction with Sanyo and licensed by Kenwood), a diffraction grating is used to split a laser beam into 7 beams, which are then focused into the disc; a central beam is used for focusing and tracking the groove of the disc, leaving 6 remaining beams (3 on either side) that are spaced evenly to read 6 separate portions of the groove of the disc in parallel, effectively increasing read speeds at lower RPMs, reducing drive noise and stress on the disc. The beams then reflect back from the disc, and are collimated and projected into a special photodiode array to be read. The first drives using the technology could read at 40x, later increasing to 52x and finally 72x. It uses a single optical pickup.
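The arithmetic behind those figures is straightforward: effective speed scales with the number of read beams at a fixed spindle speed. (The spindle speeds below are back-calculated approximations from the quoted 40x/52x/72x ratings, not from a spec sheet.)

```python
# TrueX-style multi-beam speed: effective rate = spindle speed * read beams.
READ_BEAMS = 6
for spindle_x in (7, 9, 12):    # approximate motor speeds, back-calculated
    print(f"{spindle_x}x spindle * {READ_BEAMS} beams = {spindle_x * READ_BEAMS}x effective")
```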
If we have to go back to that, there'd have to be some huge upsides.
There is, if it installs from physical media and isn't dependent on the publisher running a server: no one can take it away from you.
 
Where did you see it in RT games? There might be a game or two with path tracing where long compute shaders with inlined RT may cause spilling on AMD, as they have to keep the traversal state, variables, and constants in registers alongside the inlined uber shader's stuff. However, there are no multiplatform RT games or other RT games with the same issue.
I'm not sure if you've been paying attention, but practically anything that uses NV's RTXDI library/code exhibits massive spilling. We also have cases like Control, Deathloop, Doom Eternal, and the recent F1 games where previous-generation AMD HW is disproportionately affected in comparison to their current architecture. Anyone informed would already know that the speed-up is not due to VOPD, since RT shaders usually get compiled in wave32 mode, for which the compiler doesn't try very hard to optimize around VOPD usage; that just leaves the increased register file size, after factoring out the moderate clock/core-count boost ...

It can't be any coincidence that the increased RF size led to HW being able to store more of these function arguments and variables on-chip, such that some cases that did exhibit spilling don't anymore!
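A toy occupancy model illustrates the register file argument. The 128 KiB and 192 KiB per-SIMD VGPR file sizes are the commonly cited RDNA2 and RDNA3 figures; the per-thread register counts are made-up workloads, and the model ignores hardware wave caps and LDS/SGPR limits:

```python
# Waves a SIMD can keep resident, given its VGPR file and per-thread usage.
# 128 KiB ~ RDNA2-class SIMD, 192 KiB ~ RDNA3-class SIMD (commonly cited).
def waves_in_flight(rf_kib, vgprs_per_thread, lanes=32, bytes_per_vgpr=4):
    per_wave = vgprs_per_thread * lanes * bytes_per_vgpr
    return (rf_kib * 1024) // per_wave

for vgprs in (64, 128, 256):
    w2 = waves_in_flight(128, vgprs)
    w3 = waves_in_flight(192, vgprs)
    print(f"{vgprs:3d} VGPRs/thread: 128 KiB RF -> {w2} waves, 192 KiB RF -> {w3} waves")
```

At high register counts the larger file keeps noticeably more waves resident, which is exactly where spill-prone RT shaders live.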
How is inlining supposed to prevent spilling from happening? Does inlining multiple shaders into one uber-shader mean you need fewer registers?

Based on slide page 60 and onwards, a 'small' shader table will let their compiler near-exclusively allocate its fastest HW resources (register file), while a 'large' shader table forces their compiler to allocate slower HW resources (LDS/memory) on top of causing cache contention. Inlining multiple shaders with a lot of different arguments/variables could very well be a win if they fit within the ideal resource allocation. Inlining multiple shaders with similar arguments/variables is even better, since resource allocation isn't proportional to the number of shaders ...
Alan Wake 2 is full of animated foliage, and I don't see much of a problem with RT there. Besides, if console makers want to innovate and differentiate, here is a good chance for them to get ahead of PC by supporting hardware for BVH building. It had been done before by Imagination, so it wouldn't be completely new. And don't pretend you need a 4090 for BVH in games with modern geometry complexity, because you don't. Neither do you need it for consoles, as dedicated hardware can always help to catch up.
Imagination never claimed that any of their current designs are able to HW-accelerate building acceleration structures (Level 5). And even if you don't need a 4090 to handle modern scene complexity, there's a strong chance that the next set of consoles will fall well short of that performance level (even more so with RT), because manufacturing costs aren't coming down quickly enough ...
What do function calls have to do with SER? SER is to improve coherence by reshuffling threads so that threads with the same shaders are executed together. Function calls have been supported for years in CUDA without any shader tables, so this is certainly not a limitation of NVIDIA's hardware.
This "reshuffling of threads" practically involves saving the memory contents stored in register to cache (spilling) so that way HW can reload a specific set of threads and it's associated memory content for coherent execution. Function calls can also spill/restore state too but this functionality is not available in any gfx API (besides Metal) since it's too powerful (ANY shader stage) in terms of performance hit for any HW so vendors still want graphics programmers to bend over backwards by using more restrictive mechanisms like SER instead ...
SW Lumen is still RT. It's slower, supports only static geometry, and uses multiple lower resolution, ugly SDFs (because you can't approximate thin objects with it, so games using it are pretty much doomed to have light leaking) without any parametrization. This results in the ad-hoc requirement of even uglier constructs such as the shading cards, which introduce a ton of other problems. Software Lumen is just a poor man's RT because current gen consoles are bad at RT, nothing more.
SDFs can be used with world position offset to support moving/non-deforming meshes and BVHs have 'LoDs' too. Thin objects and light leaking can be improved by increasing the SDF resolution ...
Sony is the one who needed PC, not the other way around. Most Sony ports didn't smash sales records or anything even close to that; they recoup some of their investment by releasing games on PC and other platforms. The whole industry is moving in that direction, including Microsoft and Sony. Pure console-exclusive titles are dead, and you can't design an entire hardware philosophy around a dead trend. It's over.

They can certainly try, but they will fail. PC gamers are patient folks: they don't buy games on the Epic Games Store, they wait for them to be released on Steam, and they simply don't care whether Sony releases their titles on PC or not. They didn't care during the era of the PS3, nor the PS4, and they won't care now.
Consoles software exclusivity might very well be dead but killing off parity in the future is still in the cards ...
You claimed multiple times that consoles moving away from AMD will result in the loss of backward compatibility, which is a no go for console makers. But now you are advocating for it. You also didn't answer the other points of ditching standard APIs, common engine builds after advocating for them hard.
As far as standards are concerned, "lead platform status" WILL eventually start mattering much more from that point forward. Fear not, multiplatform development will continue, but don't go in expecting 'parity' in releases between platforms anymore: they'll face a decision to either ship a downgraded console renderer on PC with missing graphical features or, if they're charitable enough, make a separate slower/upgraded renderer for PC ...
No, PC apologists care only about progress, making games visually impressive with great graphics and performance, while you only care about the politics behind graphics, even if it means stifling progress, game development, costs, APIs and everything else. You seem willing to accept last gen graphics and technology for the sake of some politics that benefit no gamer, developer or user, now you have gone into overdrive mode wishing for exotic hardware that benefits literally no one! While also ignoring the reality of the situation where console vendors are doubling down on ray tracing and machine learning for their next console updates.
PC apologists are too self-absorbed to realize that not everyone can keep up with them ...
The choice to release games with last-gen graphics is an economic choice (to reduce costs and cater to wider hardware); it happens for FIFA in pretty much every generation, even during the PS4 and PS5 eras (with their PC-like architectures). This has nothing to do with exotic hardware.

Hardly anything significant; stuff like that happens with other console ports as well. Besides, Deus Ex was released for PS2 two years after the PC version, and the developer took the chance to upgrade the graphics when doing the new port; this has nothing to do with the exotic hardware of the PS2.

MGS3 never had a PC version until recently, with the Metal Gear Solid: Master Collection, which is released for all consoles. So if those effects are bugged, they are bugged for all platforms.

PC got them in the Game Pass version. And you are mixing things up again: the Tomb Raider reboot version is not based on exotic hardware.
No matter your denial in the face of these differing circumstances, nearly all of the cases I listed involving divergent architectures ended up with different outcomes for each of them ...
I was referring to hardware improvements for existing RT APIs and software. See RDNA 4 rumors. Intel is clearly beefing up their RT hardware for Battlemage and Nvidia is likely to do the same. PS5 Pro is supposedly heading in the same direction. Yet we are to believe that RT is dead in next gen consoles?
Open-source code dumps so far don't seem encouraging for your assertions. And where are the public signs of an upgraded system?
 
Open-source code dumps so far don't seem encouraging for your assertions. And where are the public signs of an upgraded system?

Do those open source code dumps foretell the exotic hardware you keep telling us about?

There is no proof of PS5 Pro specs (why would there be). However, there are industry rumors from respected sites, such as the one below, making specific claims about an RT focus.

 
Do those open source code dumps foretell the exotic hardware you keep telling us about?
Just because it isn't being considered for an upcoming architecture doesn't mean it's not being considered thereafter, especially when console vendors eventually come to realize the constraints of manufacturing more complex designs in the future ... (they'll be lucky to get a 2-3x higher transistor/logic budget when the next cycle comes around)

They aren't too thrilled about the idea of tensor cores trading away SIMD throughput, or of disproportionately increasing the RF/cache size with no SRAM cell scaling ...
 
No matter your denial in the face of these differing circumstances, nearly all of the cases I listed involving divergent architectures ended up with different outcomes for each of them ...
The denial here is on your part. I have shown you that half the examples you brought up are incorrect (FIFA, Tomb Raider, MGS3); the other half has nothing to do with your exotic-hardware argument.

don't go in expecting 'parity' in releases between platforms anymore: they'll face a decision to either ship a downgraded console renderer on PC with missing graphical features or, if they're charitable enough, make a separate slower/upgraded renderer for PC
From where I am standing, this is your hopeful fantasy outcome (which has never happened before), and it relies on your exotic-hardware theory, which has no palpable proof as of now (and goes against rumors of the PS5 Pro and Xbox Next). You would do well to recognize the "up in the air" aspects of your theory before you state them as absolute facts. Time will tell anyway.

As of right now, the reality is that console developers are simultaneously developing console versions alongside PC versions, as evidenced by the recent Spider-Man 2, Wolverine, and even Spider-Man 3 leaks: the PC version of all 3 games is already up and running, and in the case of Spider-Man 2 the game is almost complete and excellently playable from start to finish, which shows commitment from console developers to the PC platform. They even go a step further and upgrade the games on PC with PC-specific tech (RT shadows/reflections, HBAO+, SSGI, higher-res RT reflections, higher-res effects, further draw distance, DLSS and DLSS FG, ... etc).

The number of console-specific exclusives is rather limited in a year anyway; for Sony they are around 3 per year, most of which will be taken up by NVIDIA or AMD partnerships and get upgraded features as a result anyway.
 
Can we please move on from this "Exotic Hardware" theory without me having to go through and separate it out from this thread? Opinions have been expressed. No-one has anything new to add. This thread is for predictions where people should be listing what they expect to see. After some friendly discussion of those views, we can move on to the next (crazy) theory.
 
but practically anything that uses NV's RTXDI library/code exhibits massive spilling
Is there any evidence of this? Profiling maybe? Works quite well on Nvidia's HW.

We also have cases like Control, Deathloop, Doom Eternal, and the recent F1 games where previous-generation AMD HW is disproportionately affected in comparison to their current architecture. Anyone informed would already know that the speed-up is not due to VOPD
Why would it be VOPD all of a sudden? It should be the traversal stack handling and other improvements made for the SW traversal in RDNA3, including the larger register file so that it can keep more work in flight (which says nothing about spilling). Again, is the register spilling isolated here, or is it a pure guess?

It can't be any coincidence that the increased RF size led to HW being able to store more of these function arguments and variables on-chip, such that some cases that did exhibit spilling don't anymore!
I've seen such behavior only in one game, Portal: Prelude RTX, where the RX 6800 XT falls off a cliff (by up to 7x in overall framerate, not your typical 1.5-2x) in comparison with the 7900 XTX with more registers. But I doubt that the RTX Remix renderer was ever intended to run well and account for the limitations of AMD's hardware, as it pushes the boundaries of what's possible on NVIDIA's hardware. Had they added DMMs for surfaces, the difference might have been even bigger, so it's irrelevant as an example. Any other examples?

while a 'large' shader table forces their compiler to allocate slower HW resources (LDS/memory) on top of causing cache contention.
According to the presentation, they allocate LDS to store/load arguments and return data for the function calls. AMD prefers inlining since it eliminates unnecessary loads and stores for them. It is unclear whether this translates to other architectures.

This "reshuffling of threads" practically involves saving the memory contents stored in register to cache (spilling) so that way HW can reload a specific set of threads and it's associated memory content for coherent execution.
Typically, people use the term "spilling" for pathological cases where the hardware automatically spills the registers to other buffers due to a lack of better options. This is also characterized by very poor occupancy and performance. Spilling is a poor characterization of what something like SER does, as SER explicitly controls the program behavior and is not an uncontrollable catastrophic event characterized by extremely poor performance like spilling.

SDFs can be used with world position offset to support moving/non-deforming meshes and BVHs have 'LoDs' too.
No, they can't, as the SDFs themselves are prebaked and unchangeable during runtime. Yes, you can move the unmerged objects' SDFs around and scale/rotate them, but this would be an extremely poor approximation of animation, resulting in even more graphical artifacts, and you can't do the same for the global merged SDF anyway.

Thin objects and light leaking can be improved by increasing the SDF resolution ...
This would diminish any advantages in performance even further, assuming there were any. Additionally, to approximate infinitely thin polygonal edges, you need infinite voxel resolution, which is impossible to achieve, and there is typically plenty of thin geometry in games.
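For scale on the memory cost of that suggestion, a quick sketch assuming an 8-bit (1 byte per voxel) distance field; real implementations use narrow bands and sparse storage, but the cubic scaling is the point:

```python
# SDF memory grows with the cube of resolution.
BYTES_PER_VOXEL = 1    # assumed 8-bit distance values, dense storage

for res in (64, 128, 256, 512):
    mb = res**3 * BYTES_PER_VOXEL / 1024**2
    print(f"{res}^3 volume: {mb:8.2f} MB")
```

Every doubling of resolution costs 8x the memory (and more traversal work), which is why "just increase the SDF resolution" erodes the performance advantage quickly.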

Anyway, regarding hardware RT on consoles, I am pretty sure skeptics will change their tune quickly once a fully featured and performant hardware implementation is available, as was the case with GPU compute a few generations back when there were also plenty of people doubtful about the relevance of compute for traditional raster graphics.
 