Predict: Next gen console tech (10th generation edition) [2028+]

You've gotta be freaking kidding me. Who on earth keeps track of such numbers? This essentially makes it impossible to claim ANY source at all is not credible, meaning we'd have to take seriously every single rumor from anybody ever.
And how are we supposed to decide which of the people saying "this person is reliable/unreliable" to believe if neither is presenting evidence? Do you think it's better to have a technical discussion thread where everyone just posts "that person is a great/lousy source" over and over, with no one willing or able to prove their point?

You don't have to take every rumour seriously. If you don't want to talk about a rumour, don't talk about it. But you can't stop other people talking about a rumour because you don't trust their source, and you can't tell people to stop believing in a rumour because they are wrong. So long as people post technical discussion and not just fan-drooling, it's okay to talk about ideas originating from nonsense sources.

Also, it's 'hearsay'.
Yes, it is. I had a typo. Sometimes I also type "it's" for the possessive pronoun like a complete chump. Guess I suck then.
 
You can't always prove a point fully. If it were always a matter of clearly established facts and absolutely nothing else, there'd be little to discuss. Though this doesn't mean we should blindly accept any rumor that comes along, either. It's really quite easy to make up plausible-sounding rumors if you're half informed. But something sounding plausible isn't the best measure of whether it's actually true, either.

The point of my post wasn't to completely discredit the rumor, just to point out that we shouldn't all just take it at face value (which is what people seemed to be doing here and elsewhere) because Kepler clearly gets lots of things wrong. Surely a place like this would know that by now, and I shouldn't have to go sign up to Twitter and spend a month combing his entire Twitter feed to count every right/wrong thing he's said. That's beyond absurd.

And I spent one sentence of my post talking about Kepler's credibility, and then spent multiple paragraphs still engaging with the possibility of it being true anyway, so I have no idea how you're trying to act like I was attempting to stifle discussion.
 
PS6 needs a cheaper version, but it also needs a handheld version because every other platform has a handheld console.

The 2nd SoC should be a handheld SoC, cheaper than the PS6, with performance around PS5 level.
 
In that case, would Sony mandate that PS6 games be compatible with their handheld?
 
PS6 needs a cheaper version, but it also needs a handheld version because every other platform has a handheld console.

The 2nd SoC should be a handheld SoC, cheaper than the PS6, with performance around PS5 level.

As much as I'd love that, a PS5-level portable isn't on the cards any time soon. Even on TSMC's A16 it would be in the realm of 46W.

So maybe something like a laptop/10" tablet form factor, or a half clockspeed mode. And thinking about it, the benefit of the latter is that it guarantees a 60fps mode when docked.

The trouble, though, is bandwidth. You'd need 12Gbps LPDDR6 on an enormous 336-bit bus. Maybe LPDDR7 will be viable in a few years, but I can't even find any rumours about it.

Personally, I think Sony need a portable with a PS5/Pro architecture, but stripped down to match the CU/core/clockspeed of the PS4. Bandwidth is still an issue, but either 10.667Gbps LPDDR6 on a 192-bit bus or 6.4Gbps LPDDR5 on a 256-bit bus would do the trick.

That being said, both of those are still large buses for a portable.
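
For anyone who wants to sanity-check those configurations, here's a quick peak-bandwidth calculation (raw per-pin rate times bus width, ignoring real-world efficiency; the PS5's 448 GB/s is included purely as a reference point):

```python
# Peak bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte.
def peak_bandwidth_gb_s(rate_gbps, bus_bits):
    return rate_gbps * bus_bits / 8

configs = {
    "12 Gbps LPDDR6, 336-bit": (12.0, 336),
    "10.667 Gbps LPDDR6, 192-bit": (10.667, 192),
    "6.4 Gbps LPDDR5, 256-bit": (6.4, 256),
    "PS5 GDDR6 (14 Gbps, 256-bit), for reference": (14.0, 256),
}

for name, (rate, bus) in configs.items():
    print(f"{name}: {peak_bandwidth_gb_s(rate, bus):.1f} GB/s")
```

So the 336-bit option lands around 504 GB/s, while the 192-bit LPDDR6 and 256-bit LPDDR5 options give roughly 256 GB/s and 205 GB/s respectively, both comfortably above the PS4's 176 GB/s but well short of the PS5.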
 
A PS6 portable wouldn't need to be around PS5 level. It would only need to be a bit more powerful than the Switch 2, which in 2027-2028 would be pretty easy.

They would need to spend a lot of time patching games and reducing resolutions across so many titles to achieve some level of backwards compatibility, but it should be possible.

We need to abandon this thinking that games are limited by the hardware, when it's units sold that's the limiting factor. And making games more accessible should be the priority.
What good is a next-gen-only game when it ends up bankrupting the studio?
 
You can't always prove a point fully. If it were always a matter of clearly established facts and absolutely nothing else, there'd be little to discuss. Though this doesn't mean we should blindly accept any rumor that comes along, either. It's really quite easy to make up plausible-sounding rumors if you're half informed. But something sounding plausible isn't the best measure of whether it's actually true, either.
Yep.
The point of my post wasn't to completely discredit the rumor, just to point out that we shouldn't all just take it at face value (which is what people seemed to be doing here and elsewhere) because Kepler clearly gets lots of things wrong. Surely a place like this would know that by now
Why? Why should everyone, or even a majority on this board, keep track of all the actors in the space? I have no idea who Kepler is or what his track record is, so I can't support either view. My only understanding of his accuracy is what people who say they know him say about him, which is that he's unreliable. And that he gets a lot right. ¯\_(ツ)_/¯

and I shouldn't have to go sign up to Twitter and spend a month combing his entire Twitter feed to count every right/wrong thing he's said. That's beyond absurd.
Yes. So then just don't talk about his accuracy. ;) I shouldn't have to sign up to Twitter and follow people for five years to learn whether they are accurate or not. You shouldn't have to do that research to prove it either. Hence we shouldn't talk about sources in rumour discussions, only the rumours. Makes sense, doesn't it?
And I spent one sentence of my post talking about Kepler's credibility, and then spent multiple paragraphs still engaging with the possibility of it being true anyway, so I have no idea how you're trying to act like I was attempting to stifle discussion.
I didn't say you're stifling discussion. I'm saying what you're suggesting is unworkable. Am I wrong? How else do you suggest we deal with discussion of rumours if not to either 'ignore the reliability of sources' or 'prove their reliability beyond a doubt'? You posted Kepler wasn't reliable. Globby replied he was. You replied he wasn't. Where do we go from there?

Hence my mod note that 1) if you want to prove someone is/isn't reliable, you need facts, and 2) it's better not to even bother and just discuss the rumour, as you did.

If there's a third way I'm happy to hear it.
 
The next big step in upscaling is likely to be AI framerate upscaling. I think it will be able to produce real 60FPS or even 120FPS out of 30, i.e. real frames with their original low input lag. This may mean you don't need very powerful hardware in the next generation. If the AI is able to produce 4K/60FPS from low resolution and low FPS, then a console that renders the graphics at 1080p/30FPS will be enough.
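
Just to put numbers on that idea (assuming exactly 1080p/30 in and 4K/60 out, with no dynamic resolution):

```python
# Pixel throughput if the GPU renders 1080p/30 and AI upscaling delivers 4K/60.
rendered  = 1920 * 1080 * 30    # pixels actually shaded per second
displayed = 3840 * 2160 * 60    # pixels shown on screen per second

print(f"rendered : {rendered / 1e6:.0f} Mpix/s")
print(f"displayed: {displayed / 1e6:.0f} Mpix/s")
print(f"AI supplies {displayed / rendered:.0f}x the pixels (4x spatial, 2x temporal)")
```

In other words, the AI would be responsible for seven out of every eight pixels on screen.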
 
If you extrapolate current movement, you'll get errors. If you interpolate past movements, you'll have lag. I can't see how you can have correct frame prediction and the lowest latency, although the errors might be too small to notice.
 
If we have an extremely fast NPU, we can do many operations locally with it that are impossible with the current CPU + GPU pair. We will probably need a lot of memory, which will determine how much can be calculated per frame. It is even conceivable that 10-15 rendered frames per second will be enough and the rest will be calculated by the AI with special training. Based on what I've seen in this area so far, this could easily become a reality. I think MS was referring to this when they talked about that certain technological leap in connection with the new consoles.

Using this method, a GPU with 5-6 TFLOPS of performance may be sufficient for graphics rendering in a handheld console, and a ~30 TFLOPS GPU for the desktop version. Rendering at low framerates, these consoles may be capable of both 4K output and high framerates using AI scaling.
 
Sure, but you mentioned low latency. "i.e. real frames with their original low input lag."

If you project from current motion forwards and draw frames based on what's just happened, you get low latency but motion errors when the input for the next frame is different to that predicted. If you wait until you have the user's intended motion then you can tween accurately between the last frame and the new frame, but then you add a frame of latency. Also, at lower rendering rates you'll have lower input sampling. Take something like a 60 fps fighter - rendering that at 15 fps and generating 120 fps will look super smooth but feel off and unresponsive.
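
To put a rough number on why a 15 fps source feels so much worse than the 120 fps output looks: the lag interpolation adds (assuming you buffer exactly one source frame) follows the source frame interval, not the output one.

```python
# The extra lag interpolation adds scales with the *source* frame interval,
# because you must hold the newest rendered frame until the next one arrives
# before you can tween between the two.
for source_fps in (60, 40, 30, 15):
    held_ms = 1000 / source_fps
    print(f"{source_fps:>2} fps source -> up to ~{held_ms:.1f} ms of added lag")
```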
 
A more realistic solution would be what Moon Studios is doing with No Rest for the Wicked. From the DF article on the game:

"What makes this setup work well is the division between rendering and simulation - basically, input responsiveness is separate from frame-rate so, if you're playing on a lower end platform, like a Steam Deck, at 30fps, the game will still feel as responsive as a game running at a higher frame-rate."
 

That won’t fix the disconnect between predicted input and actual input.
 
Sure, but you mentioned low latency. "i.e. real frames with their original low input lag."

If you project from current motion forwards and draw frames based on what's just happened, you get low latency but motion errors when the input for the next frame is different to that predicted. If you wait until you have the user's intended motion then you can tween accurately between the last frame and the new frame, but then you add a frame of latency. Also, at lower rendering rates you'll have lower input sampling. Take something like a 60 fps fighter - rendering that at 15 fps and generating 120 fps will look super smooth but feel off and unresponsive.
This has been the case so far, with traditional methods. The key here, however, will be the specialized NPU. As an example, look at where the AI image generation programs that have been around for several years have gotten with their current limited computing power: they are able to create a 5-10 second video animation from a single high-quality image in seconds. If the operations take place not on the GPU but on an NPU developed for this purpose, they are much faster. Speed is related to input lag, since input lag is determined by nothing more than the computing speed of a particular processor. This can be minimized. How it will feel going from 15 frames to 60 depends on this. In other words, we are talking about rendering the graphics at 15 FPS, for example in 1080p, which is roughly 30 million pixels per second. The AI multiplies/transforms this amount of pixels (based on the pixel patterns of the two bracketing real frames). Then we get the actual 60 FPS, which in this case is already about 120 million pixels per second. How close the input lag is to the actual 60 FPS input lag depends on how fast the NPU is. It could be that while the GPU renders the 15 frames, the AI calculates and inserts the rest, so the input lag is close to the traditional 60 FPS input lag.
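
For what it's worth, the pixel figures in that post roughly check out (closer to 31 and 124 million per second than a round 30/120), and three out of every four displayed frames would be generated:

```python
pixels_1080p = 1920 * 1080                                            # 2,073,600 pixels per frame

print(f"{pixels_1080p * 15 / 1e6:.1f} Mpix/s rendered at 15 FPS")     # ~31.1
print(f"{pixels_1080p * 60 / 1e6:.1f} Mpix/s displayed at 60 FPS")    # ~124.4
print(f"{1 - 15 / 60:.0%} of displayed frames are AI-generated")      # 75%
```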
 
A move I'd like to see is perhaps a 40fps mode mandated in addition to a 30fps mode for all titles which are not targeting 60fps. Then have a system-wide, user-toggleable frame-gen built into the PS6 SDK from day one that can bring games up to the 60-120 region; every game would effectively be required to have the inputs/hooks from the outset. Devs could choose whether the feature is enabled by default (with the user being able to toggle it on a pop-up by pressing the PS button), or they could choose to override it and roll their own variant that would utilise the otherwise reserved portion of ML hardware; and in cases where they're going to 60fps anyway, they'd be free to use it however they wish.

I think a 30fps source for frame-gen is too low, but a 40fps base seems to be a sweet spot, and paired with frame-gen tech from 3-4 years in the future it could fare well. A 60fps mandate is too big an ask imo, but 40fps paired with this could be a nice middle ground. It would effectively guarantee every game a 40fps-like response and a 60-120fps visual experience from day one.

For games where input needs to be extra-responsive such as COD, then of course it'd still make sense to target a base 60.
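
The usual argument for 40fps as the base, in numbers (assuming a 120Hz output mode for the frame-gen path):

```python
# The usual case for a 40 fps base: it sits halfway between 30 and 60 in
# frame-time terms, and it divides evenly into a 120 Hz output.
for fps in (30, 40, 60, 120):
    print(f"{fps:>3} fps: {1000 / fps:5.1f} ms per frame, "
          f"{120 // fps} refreshes per frame on a 120 Hz panel")
```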
 
This has been the case so far, with traditional methods. The key here, however, will be the specialized NPU. As an example, look at where the AI image generation programs that have been around for several years have gotten with their current limited computing power: they are able to create a 5-10 second video animation from a single high-quality image in seconds. If the operations take place not on the GPU but on an NPU developed for this purpose, they are much faster. Speed is related to input lag, since input lag is determined by nothing more than the computing speed of a particular processor. This can be minimized. How it will feel going from 15 frames to 60 depends on this. In other words, we are talking about rendering the graphics at 15 FPS, for example in 1080p, which is roughly 30 million pixels per second.
The lag isn't caused by the time taken to generate frames, but the time interval between data samples used to derive the new frames.

Let's say you have a character moving right and render a frame at t=0ms. You press 'jump' at t=70ms, and render a new frame at t=100ms at a 10 fps source where your character is 30ms into their jump.

If you use extrapolation, at t=10ms you generate an AI frame of the character moving right. You generate another one of the character moving right at t=20ms. And at 30, 40, 50, etc. Then you get to t=100ms and the character has already been jumping for 30ms whereas your AI has just been extending the movement to the right. Your AI frames don't match the game state and you'll have some form of glitch, the new perfect rendered frame being different to the previous generated frame.

If you use interpolation, you won't move until t=100ms. At t=100ms, you start drawing the character moving right. At t=200ms, you'll draw them 30ms into their jump. Between t=100 and t=200, you will lerp between the moving state and the jumping state. The AI generated frames now better represent the game state, although not completely accurately because it doesn't know how far into the between-time that jump was pressed, but you have to wait one rendered frame to know what state to end on.

Where the AI generates frames and how long it takes is immaterial to these two drawbacks. Interpolation is great on TVs because there's no input to respond to, so frames can be buffered as much as needed for the AI to process. In a game you either don't have such a buffer and the motion will be screwy, or you add lag to buffer states to inform the AI.
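
If it helps to see those two failure modes side by side, here's a toy version of that exact timeline (the numbers and the simple motion model are invented for illustration, not taken from any real frame-gen implementation):

```python
# Toy version of the timeline above: a 10 fps source, a character moving right
# at 1 unit per 10 ms, and 'jump' pressed at t = 70 ms.
SOURCE_DT = 100   # ms between real rendered frames (10 fps)
JUMP_T = 70       # ms at which jump is pressed

def true_state(t):
    """What the game simulation actually looks like at time t."""
    x = t / 10                        # always moving right
    y = max(0.0, (t - JUMP_T) / 10)   # rising once jump has been pressed
    return x, y

def extrapolated(t):
    """Carry forward the motion of the last rendered frame (low lag, wrong on input changes)."""
    last = (t // SOURCE_DT) * SOURCE_DT
    x0, y0 = true_state(last)
    return x0 + (t - last) / 10, y0   # keeps extending the rightward motion

def interpolated(t):
    """Tween between the two newest rendered frames (closer to the game state, one source frame late)."""
    newest = (t // SOURCE_DT) * SOURCE_DT
    f = (t - newest) / SOURCE_DT
    (x0, y0), (x1, y1) = true_state(newest - SOURCE_DT), true_state(newest)
    return x0 + f * (x1 - x0), y0 + f * (y1 - y0)

for t in range(100, 201, 20):
    print(t, "true:", true_state(t), "extrap:", extrapolated(t), "interp:", interpolated(t))
```

Extrapolation keeps the sideways motion current but doesn't learn about the jump until the next real frame lands, while interpolation tracks the whole state but always shows it roughly one source frame (100 ms here) late.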
 
But why shouldn't there be a part of the program that interprets all of this and constantly corrects possible anomalies with amazingly fast probability calculations? Just because there isn't one yet doesn't mean there won't be.
 
Because that's not how ML models work. You'd need to train your ML not only to determine what animation frames should appear, but all sorts of alternative outcomes too, somehow injecting game-state data into the ML model. I won't rule it out as impossible, but it's certainly incredibly unrealistic. By the time you can do that, it won't be so much AI frame generation as complete ML rendering.
 

It is possible that they will do something similar sooner than many people imagine today.
 
Could ML incorporate something like a lower-resolution rendered frame to help in the frame-gen process? Perhaps render a 360p intermediate frame and use the data from it to help generate the rest of the full frame.
 