Do you think there will be a mid-gen refresh console from Sony and Microsoft?

Spencer has stated that there are no plans for a mid-gen console.

1. I am convinced that the next desktop Xbox, arriving in 2025 or 2026, will be positioned as a new-generation console built around artificial intelligence, because that is the only way to deliver spectacular graphics and gameplay innovations at an affordable price with the technologies available in the coming years.

2. Maybe there will also be a mobile Xbox in 2025, or perhaps only that.
 
I know that MS is building its next-gen games, and probably the next console, entirely around AI. Game features that use advanced AI will probably require a local AI processor.
There's no such thing as 'entirely on AI' when it comes to gaming or really any workload. You're buying way too much into meaningless hype and buzzwords surrounding this stuff.

I'm not some big naysayer of AI either; it will have its uses, but it's not gonna be what you think it will be.
 
I have a feeling that all this AI marketing hype is entirely predicated on its ability to do better upscaling, because I have yet to see any truly transformative applications for gaming that require consumer AI HW ...
Increasing the frame rate by a factor of 2 to 5 without noticeable quality losses, or even with quality improvements in some cases, is the most transformative gaming application one can imagine in every sense.
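As a rough back-of-the-envelope sketch of where a 2 to 5x figure can come from (assuming a purely GPU-bound renderer whose cost scales linearly with pixel count, and ignoring CPU limits and fixed per-frame costs; the presets and factors below are illustrative, not measurements):

```python
# Rough, hypothetical arithmetic for combined upscaling + frame generation gains.
# Assumes a fully GPU-bound renderer whose cost scales linearly with pixel count;
# real games have CPU limits and fixed per-frame costs, so actual gains are lower.

def upscale_speedup(output_res, render_res):
    """Pixel-count ratio between output and internal render resolution."""
    ow, oh = output_res
    rw, rh = render_res
    return (ow * oh) / (rw * rh)

output = (3840, 2160)            # 4K output
quality_mode = (2560, 1440)      # ~1.5x per axis -> 2.25x fewer pixels
performance_mode = (1920, 1080)  # 2x per axis   -> 4x fewer pixels
frame_gen = 2                    # one generated frame per rendered frame

print(upscale_speedup(output, quality_mode))              # 2.25x
print(upscale_speedup(output, performance_mode))          # 4.0x
print(upscale_speedup(output, quality_mode) * frame_gen)  # 4.5x presented frames
```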
 
I have a feeling that all this AI marketing hype is entirely predicated on its ability to do better upscaling, because I have yet to see any truly transformative applications for gaming that require consumer AI HW ...
Let's say that in Starfield 2 we want each planet to be REALLY unique, with 10x more complex graphics than in the current version. This may require, for example, a local AI processor that generates these unique, complex graphics in real time before you land on each planet. But this is just one example. One could also mention fully AI-based behavior for NPCs, thanks to which they behave and react in a lifelike manner. However, these things require computing power.
 
Increasing the frame rate by a factor of 2 to 5 without noticeable quality losses, or even with quality improvements in some cases, is the most transformative gaming application one can imagine in every sense.
But it's a smaller improvement when you compare it to increasing frame rates by the same amount and simply accepting a bigger quality hit.

I'm still 100% a believer in the reconstruction side of things for AI, but developers don't seem that heavily invested in what else it can do in the near future, either. A fun discussion for futurologist types, but perhaps not immediately relevant in the short term.
 
Let's say that in Starfield 2 we want each planet to be REALLY unique, with 10x more complex graphics than in the current version. This may require, for example, a local AI processor that generates these unique, complex graphics in real time before you land on each planet. But this is just one example. One could also mention fully AI-based behavior for NPCs, thanks to which they behave and react in a lifelike manner. However, these things require computing power.
You would need AI to create unique meshes, textures, and shaders, all in a highly coherent way, all with proper collision and physics models for these unique assets; have them all stored in memory; and generally you'd want them to be functional on some level as well, lest they just be a bunch of useless window dressing.

I don't see AI being useful for this. At least not in the way you're thinking. I think AI could potentially one day be useful for creating assets *offline*, which then get implemented into games. Or using AI to develop procedural generation algorithms that get tested and then manually implemented by a developer once favorable results are achieved.

But AI actually creating content in real time in games? I think that would be a giant mess. And you certainly couldn't do something like that for Starfield, where the content has to be 99.99% consistent from player to player.
 
Increasing the frame rate by a factor of 2 to 5 without noticeable quality losses, or even with quality improvements in some cases, is the most transformative gaming application one can imagine in every sense.
I'm pretty sure that's not the case if you look at real-world testing. Native 4K still manages to be competitive (if not better?) against an upscaled 1440p image in terms of quality in a lot of cases. With an upscaling factor of 4x, there's no upscaling method out there that can produce comparable quality to native rendering ...
 
Even if we assume that AI will only be used to upscale the image and generate frames, such a processor still makes sense in a console, because it takes load off the CPU/GPU pair. That also brings us to the point that, without it, meaningful visual progress in game graphics is not possible on consoles at realistically affordable prices. We can look at the topic from this angle as well.
 
Let's say that in Starfield 2 we want each planet to be REALLY unique, with 10x more complex graphics than in the current version. This may require, for example, a local AI processor that generates these unique, complex graphics in real time before you land on each planet. But this is just one example. One could also mention fully AI-based behavior for NPCs, thanks to which they behave and react in a lifelike manner. However, these things require computing power.
Even if we assume that AI will only be used to upscale the image and generate frames, such a processor still makes sense in a console, because it takes load off the CPU/GPU pair. That also brings us to the point that, without it, meaningful visual progress in game graphics is not possible on consoles at realistically affordable prices. We can look at the topic from this angle as well.
Show me a REAL-WORLD application for AI in real-time rendering OUTSIDE of temporal filtering techniques! (upscaling/frame generation/denoising/radiance caching/etc.)

Can you list where AI has had ANY DIRECT benefit for rendering virtual geometry/textures, shadows, lighting/ray tracing, or volumetric/scattering models in real-time applications so far?
 
You would need AI to create unique meshes, textures, and shaders, all in a highly coherent way, all with proper collision and physics models for these unique assets; have them all stored in memory; and generally you'd want them to be functional on some level as well, lest they just be a bunch of useless window dressing.

I don't see AI being useful for this. At least not in the way you're thinking. I think AI could potentially one day be useful for creating assets *offline*, which then get implemented into games. Or using AI to develop procedural generation algorithms that get tested and then manually implemented by a developer once favorable results are achieved.

But AI actually creating content in real time in games? I think that would be a giant mess. And you certainly couldn't do something like that for Starfield, where the content has to be 99.99% consistent from player to player.
Content doesn't need to be consistent per player in a game like Starfield. Even in the current version, everyone has a unique landing surface. But there are also hand-built areas, and there will be in the future.

Imagine the unique planet surface structure like this: the game ships with many GB of pre-made assets, and the AI works on those to transform the textures, shapes, colors, buildings, the entire vegetation, etc. into something unique. All of this could happen in, say, the 30 seconds before you land on the planet's surface. Of course, the developers would place seeds in this, which would regulate the whole process.

Without real-time procedural computation, all of this would take up several terabytes of data in a game. Remember, we are talking about complex, detailed graphics!
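A minimal sketch of what such developer-placed seeds could look like, purely for illustration (the asset names and parameter ranges below are made up, this is not how any actual engine does it): a small per-planet seed deterministically drives which pre-made assets get picked and how they are recolored and scaled, so the same planet always regenerates identically and nothing but the seed has to be stored.

```python
import random

# Hypothetical, minimal sketch of developer-seeded variation of pre-made assets.
# The asset names and parameter ranges are invented for illustration; the point is
# that one small seed deterministically reproduces the same "unique" surface.

BASE_ROCKS = ["rock_a", "rock_b", "rock_c"]
BASE_PLANTS = ["shrub_a", "fern_b", "cactus_c"]

def generate_surface(planet_seed: int, num_props: int = 5):
    rng = random.Random(planet_seed)  # deterministic per planet
    props = []
    for _ in range(num_props):
        props.append({
            "mesh": rng.choice(BASE_ROCKS + BASE_PLANTS),
            "tint": (rng.random(), rng.random(), rng.random()),  # RGB recolor
            "scale": rng.uniform(0.5, 2.0),
            "position": (rng.uniform(-100, 100), rng.uniform(-100, 100)),
        })
    return props

# The same seed always yields the same surface layout, so nothing but the seed
# needs to be shipped or saved.
assert generate_surface(42) == generate_surface(42)
```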
 
Show me a REAL-WORLD application for AI in real-time rendering OUTSIDE of temporal filtering techniques! (upscaling/frame generation/denoising/radiance caching/etc.)

Can you list where AI has had ANY DIRECT benefit for rendering virtual geometry/textures, shadows, lighting/ray tracing, or volumetric/scattering models in real-time applications so far?
We're not talking about today, we're talking about the future. For hardware announced in 2026, games of this scale could be ready around 2028. By then, there may be tools for it.
 
With an upscaling factor of 4x, there's no upscaling method out there that can produce comparable quality to native rendering ...
Depends on what you're comparing. In still scenes, they all look better than native resolution with TAA, even at a 4x upscaling factor. That's simply because TAA uses color clamping, a lossy algorithm by definition, which cannot resolve pixel-sized features and removes a great deal of information. Only aids such as the pixel-locking mechanism in FSR can compensate for the information culled by color clamping, since FSR is essentially an extension of TAA. However, pixel locking introduces a new layer of problems, such as pixel smearing in motion due to the lock/unlock logic. Meanwhile, in XeSS or DLSS, convolution networks perform the task for you, preserving pixel-sized features and performing accumulation when necessary, while culling the history when it isn't.

In motion, I would agree that 4x upscaling is nowhere near as good as native resolution, even with TAA. However, it might also be inherently harder to spot the difference during motion, as the rendered image can be obscured by layers of motion blur and other post-processing effects. I also believe folks have done a pretty poor job here, so some aspects can be vastly improved.
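For anyone curious, here's a toy sketch of the neighborhood color clamping mentioned above (a simplified scalar version, not any specific engine's or vendor's implementation): the accumulated history sample gets clamped to the min/max of the current frame's 3x3 neighborhood, which is exactly where pixel-sized detail that exists only in the history gets thrown away.

```python
import numpy as np

# Toy, simplified illustration of TAA-style neighborhood color clamping
# (scalar values, 3x3 neighborhood). Real implementations work on color,
# often in YCoCg space, with variance clipping and so on; this only shows
# why history information gets culled.

def clamp_history(current: np.ndarray, history: np.ndarray) -> np.ndarray:
    h, w = current.shape
    out = np.empty_like(history)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            nb = current[y0:y1, x0:x1]
            # If the history value falls outside the current neighborhood's
            # min/max range, it is clamped -- any pixel-sized feature that
            # exists only in the history is discarded here.
            out[y, x] = np.clip(history[y, x], nb.min(), nb.max())
    return out

current = np.zeros((5, 5))   # this frame's samples missed the feature
history = np.zeros((5, 5))
history[2, 2] = 1.0          # pixel-sized detail accumulated in earlier frames

blended = 0.9 * clamp_history(current, history) + 0.1 * current
print(blended[2, 2])         # 0.0 -> the detail was culled by the clamp
```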
 
I have a feeling that all this AI marketing hype is entirely predicated on its ability to do better upscaling, because I have yet to see any truly transformative applications for gaming that require consumer AI HW ...

They've been hyping AI in games for decades. Only it was supposed to mean smarter CPU opponents, to make games more challenging to beat.

What does dedicated NPU silicon on a console actually do? Run LLM queries faster? Why would gamers care about that, unless MS is trying to get people to subscribe to Copilot as well as XBL and GP?
 
Content doesn't need to be consistent per player in a game like Starfield. Even in the current version, everyone has a unique landing surface. But there are also hand-built areas, and there will be in the future.

Imagine the unique planet surface structure like this: the game ships with many GB of pre-made assets, and the AI works on those to transform the textures, shapes, colors, buildings, the entire vegetation, etc. into something unique. All of this could happen in, say, the 30 seconds before you land on the planet's surface. Of course, the developers would place seeds in this, which would regulate the whole process.

Without real-time procedural computation, all of this would take up several terabytes of data in a game. Remember, we are talking about complex, detailed graphics!
Like No Man's Sky, which is about 12 GB and runs on the Nintendo Switch?
 
How useful is it to feature Cell SPU-inspired hardware on top of their likely non-standard VLIW ISA for game development?
Very! Could fill that Cell-like set-up with lots of different SPU components, some wired for integer and some for float and some for RT. Could then replace the GPU with a new, improved Cell!! So much the betterness!!!
 
Very! Could fill that Cell-like set-up with lots of different SPU components, some wired for integer and some for float and some for RT. Could then replace the GPU with a new, improved Cell!! So much the betterness!!!
It's too bad for us that the evil hardware vendors don't want any developers programming on them!
 
I'm pretty sure that's not the case if you look at real-world testing. Native 4K still manages to be competitive (if not better?) against an upscaled 1440p image in terms of quality in a lot of cases. With an upscaling factor of 4x, there's no upscaling method out there that can produce comparable quality to native rendering ...

This is true. As much as I love DLSS, it never beats out native rendering, and now that I've moved over to a 4K monitor it's easier to spot.
 
Content doesn't need to be consistent per player in a game like Starfield. Even in the current version, everyone has a unique landing surface. But there are also hand-built areas, and there will be in the future.

Imagine the unique planet surface structure like this: the game ships with many GB of pre-made assets, and the AI works on those to transform the textures, shapes, colors, buildings, the entire vegetation, etc. into something unique. All of this could happen in, say, the 30 seconds before you land on the planet's surface. Of course, the developers would place seeds in this, which would regulate the whole process.

Without real-time procedural computation, all of this would take up several terabytes of data in a game. Remember, we are talking about complex, detailed graphics!
It really does have to be consistent. Testing and QA would become nearly impossible otherwise. And for expressly designed quests, you really have to have locked-in content.

I think you're also failing to appreciate that whatever the AI comes up with in this theoretical real-time asset-creation situation needs to be fully recalled should you leave and come back. We're talking about the creation of fully realized, expensive, and sizeable assets that didn't exist on disk beforehand. That's a memory/write nightmare.

If you're talking about seeding and whatnot, you're talking about a pure sandbox game like Minecraft, not Starfield. And even then, all the AI work would be best done offline rather than in real time. As soon as you add real-time AI creation of unique assets, you're dogpiling onto the complications, and if you play for any length of time, you'll likely go beyond what any system is capable of handling.
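To put rough numbers on that memory/write point (all figures below are invented purely for illustration, not measurements): deterministic, developer-seeded generation only has to persist a few bytes per visited location, whereas persisting fully realized one-off generated assets scales with asset size times the number of places you've visited.

```python
# Hypothetical back-of-the-envelope numbers -- illustration only, not measurements.

SEED_BYTES = 8            # a 64-bit seed per visited location
GENERATED_ASSET_MB = 500  # assume ~0.5 GB of unique meshes/textures per location
visited_locations = 200

seed_storage_mb = visited_locations * SEED_BYTES / 1e6
persisted_storage_gb = visited_locations * GENERATED_ASSET_MB / 1e3

print(f"seeds only:       {seed_storage_mb:.3f} MB")    # ~0.002 MB
print(f"persisted assets: {persisted_storage_gb:.0f} GB")  # ~100 GB
```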
 