Access to a variable number of cores according to game state on Xbox One

hesido

Regular
According to this:

http://kotaku.com/the-five-possible-states-of-xbox-one-games-are-strangel-509597078

...Xbox One will have different game states. Programmers will have to be aware of which state their game is in. That's absolutely vital for the game to react accordingly; most games will probably opt to pause the game when the user switches to the dashboard etc.

However, one thing caught my eye that may be problematic to handle for games that cannot pause themselves:
In "Running mode" you have access to 6 cores and 90% of the GPU.

In "Constrained mode" you have access to 4 cores and there's no user interaction, with a variable amount of access to GPU resources (45% if the game is visible, 10% if not).

Most GPU tasks are scalable so I don't think it would be very hard to handle. But what puzzles me is losing cores. If the game is switched to 4 cores, what happens to the threads on the lost 2 cores? I'm guessing one of those 2 cores will be taken from each of the two 4-core sets that share an L2 cache, so that threads aren't moved between different 4-core sets and the L2 caches aren't thrashed.

In any case, the programmer is only ever guaranteed 4 cores; those extra two cores will only be usable while there's user interaction with the game. Doesn't this complicate things and prevent the "to the metal" approach that makes console programming more efficient than it would otherwise be? Also, doesn't this cause programmers to depend on 4 cores more often than not, as 6 won't be guaranteed?

There will also be different game states on the PS4, but will they involve vastly reduced core counts and GPU access?

I probably don't know what I'm saying as the only programming language I'm familiar with is Javascript. But I'd be glad to hear from knowledgeable forum members.
 
I'm not sure it's a problem, I'd guess that when the user doesn't interact with the game, all you have to keep running is the networking side of the game (if multiplayer).

As for the programming model, Cilk-like scheduling (1 thread per core) is becoming standard, so I would expect either some oversubscription for a while (keeping 6 threads running on 4 cores) or simply pausing two of the threads.
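To make the "pause two threads" option concrete, here's a minimal sketch in plain C++11; nothing here is Xbox-specific, and setActiveWorkers() is just a made-up hook for whatever notification the SDK actually delivers:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

class WorkerPool {
public:
    explicit WorkerPool(int workerCount) {
        for (int i = 0; i < workerCount; ++i)
            workers_.emplace_back([this, i] { run(i); });
    }
    ~WorkerPool() {
        { std::lock_guard<std::mutex> lk(m_); quit_ = true; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }
    // Switch between 6 active workers (running mode) and 4 (constrained mode).
    void setActiveWorkers(int n) {
        { std::lock_guard<std::mutex> lk(m_); active_ = n; }
        cv_.notify_all();
    }
private:
    void run(int index) {
        while (!quit_) {
            {
                std::unique_lock<std::mutex> lk(m_);
                // Workers whose index is beyond the active count simply park here;
                // no thread migration, no oversubscription.
                cv_.wait(lk, [&] { return quit_ || index < active_; });
            }
            if (quit_) break;
            doSomeWork(index);   // in a real engine: pop a task off a work queue
        }
    }
    void doSomeWork(int) { std::this_thread::yield(); }  // placeholder work

    std::vector<std::thread> workers_;
    std::mutex m_;
    std::condition_variable cv_;
    std::atomic<bool> quit_{false};
    int active_ = 6;
};
```

Call pool.setActiveWorkers(4) when the constrained notification arrives and setActiveWorkers(6) when the game gets the foreground back; the two parked workers just sleep until then.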

Not sure about the GPU; if it's used for compute rather than just graphics, it might be a bit harder to scale its load, but then you could simply pause the game and be done with it. ^^

I'm more curious about the impact of having two 4-core clusters glued together with seemingly no shared L2/L3 for locks/semaphores...
(Just curious, not saying it will be a problem at all.)
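For what it's worth, the lock concern in toy form: a spinlock is ultimately just a contended cache line, and if the two threads below end up on different Jaguar clusters (each 4-core module has its own L2 and there's no L3), that line has to bounce across the inter-module fabric on every hand-off. Plain standard C++, purely an illustration:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic_flag flag = ATOMIC_FLAG_INIT;  // the contended cache line
long long counter = 0;                     // protected by the spinlock

void worker() {
    for (int i = 0; i < 1000000; ++i) {
        while (flag.test_and_set(std::memory_order_acquire)) { /* spin */ }
        ++counter;                                  // critical section
        flag.clear(std::memory_order_release);
    }
}

int main() {
    // Pin these to cores in different 4-core modules to see the cross-cluster cost.
    std::thread a(worker), b(worker);
    a.join(); b.join();
    std::printf("counter = %lld\n", counter);       // 2000000
}
```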
 
The games run in VMs, so the threads are all virtual threads: the system just changes the mapping of the six virtual threads from six physical cores to four, so each thread takes more time to execute but still keeps running in the background.
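If that's how it works, game code doesn't have to change at all. Here's a toy standard-C++ illustration (nothing console-specific) of what "same six threads, fewer cores" means in practice:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    auto spin = [] {                        // a fixed amount of CPU work per thread
        volatile unsigned long long x = 0;
        for (unsigned long long i = 0; i < 500000000ULL; ++i) x += i;
    };
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> threads;
    for (int i = 0; i < 6; ++i) threads.emplace_back(spin);   // six "game" threads
    for (auto& t : threads) t.join();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - t0).count();
    // With 6 free cores this takes roughly one thread's worth of wall-clock time;
    // squeezed onto 4 cores all six threads still finish, just ~1.5x later.
    std::printf("elapsed: %lld ms\n", static_cast<long long>(ms));
}
```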
 
Thanks for the insight...

I'm also wondering if there's such a game state on the PS4, where the number of available cores drops so you can do "other things" faster. Personally I'm not interested in those other things; I'm guessing/hoping 2 reserved cores + the ARM chip for background tasks should be enough, so that you always have 6 cores at your disposal no matter what...
 
Why do you care how much CPU resource a game has available to it when you can't interact with it?
 
Why do you care how much CPU resource a game has available to it when you can't interact with it?

Because you may not be the only one interacting with it?

Besides, how will games handle diminished resources? Will they still function properly?
 
I'm guessing/hoping 2 reserved cores + the ARM chip for background tasks should be enough

I'd be very surprised if there were actually two background cores. I expect one to be disabled for yields, the other for bg tasks.

Because you may not be the only one interacting with it?

Besides, how will games handle diminished resources? Will they still function properly?

If you are in the menu, significant parts of the game (notably all rendering and audio) are not needed. Making this work properly is something that the devs do, and it's not hard. Just pause the rendering threads when the menu comes up.
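In code that can be as simple as a gate the render loop checks once per frame. This is just a standard C++ sketch of the idea, not any actual console API:

```cpp
#include <condition_variable>
#include <mutex>

class RenderGate {
public:
    void pause()  { std::lock_guard<std::mutex> lk(m_); paused_ = true; }
    void resume() { { std::lock_guard<std::mutex> lk(m_); paused_ = false; } cv_.notify_all(); }
    void waitIfPaused() {                         // called once per frame by the render thread
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !paused_; });   // blocks instead of burning a core
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool paused_ = false;
};

// Render loop:                  while (running) { gate.waitIfPaused(); renderFrame(); }
// Menu/dashboard comes up:      gate.pause();
// Game regains the foreground:  gate.resume();
```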
 
Because you may not be the only one interacting with it?

Besides, how will games handle diminished resources? Will they still function properly?

Are CPU cores really ever pegged at 100% in a game? Most of the time, games are GPU-bound because it's incredibly easy to throw more stuff at it, whether it be more characters on screen or more special effects. But CPU-bound stuff? A bit harder. Besides, if most games are locked at 30 or 60fps, that implies that there's still some wiggle room to play with.

Also, these statements are not the same: "the system will guarantee you at least 90% of the CPU" is not the same as "you are limited to 90% of the CPU". It sounds like Microsoft is picking the latter, though it could be the former (and then it's up to you to deal with what happens when the system steals 10%, if you don't want laggy performance in your game).
 
It is extremely easy for a game to not use resources when it's in a "suspended" state.
Continuing to run a multiplayer server in the background would probably not be an issue for most games. Besides, aren't all the servers supposed to be in the cloud?

Even if, as a dev, you did nothing (which they wouldn't), all that would likely happen is that the frame rate would halve, which might result in increased latency if you were acting as the server in multiplayer.

MS will undoubtedly have a TCR, probably several, that defines how you must handle these states; I just don't think it's a big deal. In fact, virtualizing the CPUs means that for most games all you probably need to do is pause the game.
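Roughly along these lines, I'd imagine; onResourceStateChanged() and the GameState values are hypothetical stand-ins for whatever the real TCR/SDK notification looks like:

```cpp
#include <atomic>

enum class GameState { Running, Constrained, Suspended };

std::atomic<bool> simulationPaused{false};
std::atomic<bool> renderingPaused{false};

// Hypothetical callback the title would register for state-change notifications.
void onResourceStateChanged(GameState state) {
    switch (state) {
    case GameState::Running:          // 6 cores, ~90% GPU: everything on
        simulationPaused = false;
        renderingPaused  = false;
        break;
    case GameState::Constrained:      // 4 cores, 10-45% GPU: keep the network /
        simulationPaused = false;     // multiplayer tick alive, stop or minimise
        renderingPaused  = true;      // rendering
        break;
    case GameState::Suspended:        // no CPU time at all: make sure state was saved
        simulationPaused = true;
        renderingPaused  = true;
        break;
    }
}
```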
 
I'd be very surprised if there were actually two background cores. I expect one to be disabled for yields, the other for bg tasks.

I'd be astonished if Microsoft and Sony were touting 8 CPU cores on their APUs while planning on disabling one for yield.
 
I can see constrained mode being very useful for MMOs: the game needs to maintain the network connection, keep group or area chat going to your headset, and let the engine listen for sudden events... but at the same time, you might want to pop a browser full screen and search for a walkthrough or advancement stuff. Constrained mode would be perfect for giving your browser as much CPU as possible; it's no big deal for the game if it doesn't have to render graphics. I'm not sure a single Jaguar core will be snappy and responsive with the HTML5 and JavaScript bloat of the future, over the next 6 to 8 years.
 
It becomes a massive change internally for our entire engine, if they add a few MB to the amount of resources they need, or if they require all their processes to be on one thread. If it's not multi-threaded then we have to put it on one thread. Now we have to find space on one thread, where that can live, that it's not creating a traffic jam on that thread. Sometimes we have to be like, okay, we have to move all this stuff over to a different thread and then put that in to that thread, just to manage traffic.

This is Mark Rubin's comment on:
http://www.eurogamer.net/articles/2...all-of-duty-ghosts-dev-infinity-ward-responds
I know he is talking about changing OS specs and design guidelines, but could this also be an extension of variable CPU resources during snap mode, for example?
 
Are CPU cores really ever pegged at 100% in a game? Most of the time, games are GPU-bound because it's incredibly easy to throw more stuff at it, whether it be more characters on screen or more special effects. But CPU-bound stuff? A bit harder.
Actually, my experience is that most of the time games are slightly more CPU bound on current generation consoles. During the development process, a developer optimizes both the CPU and GPU sides until there's no clear bottleneck anymore (both resources are as close to 100% used as possible).

Your experience is likely based on powerful (Intel) PC CPUs running current-generation console game ports. The game logic limitations (physics, AI, animation, number of enemies, amount of destructibility, game area size, etc.) of these games are designed for the 7-year-old console CPUs. Of course your brand new Intel CPU crunches through this data set with ease. But at the same time you are likely running the game at double the frame rate (a barely-30-fps console game vs 60+ fps on PC), at 1080p (vs sub-HD), with good anisotropic filtering and antialiasing and higher detail settings (more GPU-side effects). So you need at least a 5x-10x faster GPU to reach the frame rate and quality goals on PC. That's why the GPU is the bottleneck in PC gaming: the demands are just that much higher for graphics quality.

It's much harder to scale up game logic (add more enemies, more destruction/physics, better AI, larger levels) without spending a huge amount of extra time on content production, level design and balancing (the game difficulty cannot change). Of course if you have shared leaderboards / achievements on all platforms, it can be even harder to scale game logic (as game play must be comparable for scoring to be fair between platforms).
 
On the PC end, how much CPU is draw-call overhead stealing? That factor may contribute to PC games sticking to current-gen console levels of CPU use, or even to resorting to CUs more often, also because of latency?
 