Digital Foundry Microsoft Xbox Scorpio Reveal [2017: 04-06, 04-11, 04-15, 04-16]

I mean, imagine that RDR 2 is 1080p or 900p at 30 fps on X1/S. So, could this technique be a possibility for achieving 4K "60fps" on Scorpio in a multiplatform title like that?

I know that Halo 5 or the upcoming Forza 7 could be a true possibility at 4K 60fps (per the DF article), but those run at 60 fps even on X1/S. For games at 30 fps, I think it could be a serious problem even at native 1080p for Scorpio's CPU (going from 30 to 60 fps is the typical CPU-bound problem of this gen, which we can see even on PS4 Pro with FFXV, for example). That's why I'm asking about that video info...

Thanks
Typically bottlenecks work like this: you put in as much as possible (on the GPU) until the CPU becomes the bottleneck. Then you optimize the heck out of the CPU code until the GPU becomes the bottleneck. And you go back and forth and back and forth until the release of the product.

Games are planned well in advance with respect to what they're meant to accomplish. Engine and graphics coders do a decent amount of math to determine how many triangles they want to support per scene, the type of lighting, the frame rate, etc. All these things are more or less decided up front. So if they want a specific setup, given how strong the GPUs are, they already know whether they will be bound to 30fps. If they aim for 60fps, they will reduce the values of everything else.

So don't be too caught up in the notion that the weak CPUs are bottlenecking everything. That's not exactly how it works. You can easily flip the perspective and say that because they set the graphics target at 30fps, they had that much additional time on the CPU side to put in more work and effort there. If you had a 60fps game, you'd have half the time, so you'd need to find other ways to accomplish the task, e.g. running dedicated servers.
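To put numbers on that tradeoff, here is a minimal sketch of the per-frame time budgets involved (plain arithmetic, not tied to any particular engine):

```python
# Frame-time budgets at common target frame rates. At a fixed target,
# CPU work (game logic, AI, draw submission) and GPU work must each
# fit inside the per-frame budget.

def frame_budget_ms(fps: int) -> float:
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

budget_30 = frame_budget_ms(30)   # ~33.3 ms per frame
budget_60 = frame_budget_ms(60)   # ~16.7 ms per frame

# Moving a 30 fps game to 60 fps halves the time available for
# CPU-side work each frame:
print(f"30 fps budget: {budget_30:.1f} ms")
print(f"60 fps budget: {budget_60:.1f} ms")
print(f"ratio: {budget_30 / budget_60:.1f}x")
```

That halved budget is exactly the "additional time on the CPU side" a 30fps target buys back.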

So it really depends on the goals of the developers. As a personal note, I do find it funny that now that we have the PS4 Pro and Scorpio, after 4 years of service the general masses have finally perked up about the CPU (looking at other forums etc.).
 
You may find this useful: https://forum.beyond3d.com/posts/1982200/
It's a post by @sebbbi outlining some of what's involved in getting to 60fps on the current Jaguar hardware.

The tech mentioned in the video you posted could also be viewed from another direction: make the game 60fps on the mid-gen machines, and use the tech to simulate (faux) 60fps on the base hardware.

To be honest it's hard to say which games would be able to make the leap to 60fps, as it depends where the bottleneck is, and whether the additional resources are enough to overcome it.
It may not even be a bottleneck per se, just an overall limit on resources when trying to achieve 4k60 from a game that was 1080p30, more so 900p30.

They may be able to do, say, a 1080p/1440p60 high-performance mode, but not 4k60.
I wouldn't hold out for a doubling of frame rates, especially at 4k; it may be achievable on the very odd title.
At 1080p we may actually find a few games offering a performance mode; we'll have to wait and see. The upclock, and slight improvements to the caches etc., may be enough for some games.

The additional development work to push a game to 4k60, even if possible using the tech in the video, may not be worth it for the studio. Especially if there's an easy win with a 1080p60 mode.

You seem like a nice person, so I'd say don't hold your breath; we may not see you around here again if you did.
 
Correct me if I'm wrong, but if you have 1080p60, 4k60 should be doable given enough graphics horsepower, right? At least in my experience, lifting the resolution doesn't stress the CPU that much more.
 
If it's 1080p60, then native 4k60 is not just doable but expected, and questions will be asked as to why if it isn't reached.

At 900p60, it may get native 4k60 (not expected, but possible); if that isn't reached, the only questions asked will be what they're doing to achieve 4k output, e.g. is it checkerboarded, or 1800p upscaled to 4k60.

Edit: This doesn't mean it has to be native 4k60, but that's the expectation at the start. If a studio uses checkerboarding etc., then people will expect a big jump in visual fidelity and effects.
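Those expectations come down to pixel-count arithmetic; a quick sketch (assuming the standard 16:9 render resolutions):

```python
# Pixel counts of common 16:9 render targets, relative to 1080p.
# GPU shading cost scales roughly with pixel count; CPU cost mostly doesn't.
resolutions = {
    "900p":  (1600, 900),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "1800p": (3200, 1800),
    "2160p": (3840, 2160),   # "4K"
}

base = resolutions["1080p"][0] * resolutions["1080p"][1]
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name:>5}: {pixels:>9,} px  ({pixels / base:.2f}x 1080p)")

# 2160p is exactly 4x the pixels of 1080p, so holding 60fps at native 4K
# needs roughly 4x the GPU throughput of a 1080p60 target; checkerboarding
# shades about half of the native-4K pixel count per frame.
```

That 4x factor is why native 4k60 is only an automatic expectation when the base game already runs 1080p60.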
 

I am no expert, but I have seen game programming tutorials recommend keeping the game simulation and rendering independent from each other: for example, rendering at 60 fps while the game updates at 25 fps. It makes sense, because it doesn't seem necessary to run certain things, like every AI calculation, each frame. I don't have an Xbox One, but I have watched Halo 5 in YouTube videos, and one interesting thing is that the animation of distant characters runs choppy. I think what is happening is that their animation is updated at 30 fps while the game runs at 60: the player's position changes at 60 fps, but the animation updates at a different rate. Maybe that is for saving GPU rather than CPU, but it's interesting. With physics there may be problems, like an object passing inside another object for one frame and then being corrected the next frame when the collision is handled. I think it depends on the game and on whether the calculation can be skipped for a frame.

Also, I remember that Sony made some changes to the dev kits so that physics can run on the GPU, which should help the CPU a lot. A game can run at 60 fps if the devs want; it's always about how much they want to do when displaying things.
 
Probably started as soon as Phil was put in charge. It's a pretty big effort to develop a custom SoC like that and all the software tools to go with it.
3 years and their XDK is still not complete?
 
That's a very good non-answer, but I think people are seriously fooling themselves if they expect another 20% to come...
 

If I recall correctly, they didn't project a finished XDK for Scorpio until July/August anyway, according to the timelines published in the various DF articles.
 
I don't understand why this console doesn't support FP16; according to DICE it gives them a 30% performance boost. Without it, Scorpio doesn't seem like a really significant jump over the PS4 Pro.
 
GPR pressure is reduced a lot if it can store two FP16 in one FP32 register.
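As a CPU-side illustration of that packing, two IEEE-754 half-precision values really do fit in one 32-bit word (a sketch using Python's `struct` half-float format; on the GPU the win is that one 32-bit VGPR then holds two FP16 operands instead of one FP32 value):

```python
import struct

def pack_half2(a: float, b: float) -> int:
    """Round a and b to FP16 and pack both into a single 32-bit word."""
    raw = struct.pack("<ee", a, b)          # two IEEE-754 binary16 values
    return int.from_bytes(raw, "little")    # b lands in the high 16 bits

def unpack_half2(word: int) -> tuple:
    """Recover the two FP16 values from a packed 32-bit word."""
    return struct.unpack("<ee", word.to_bytes(4, "little"))

w = pack_half2(1.5, -2.0)
print(hex(w))            # 0xc0003e00: 0x3E00 is 1.5, 0xC000 is -2.0 in FP16
print(unpack_half2(w))   # (1.5, -2.0)
```

Halving the register footprint of FP16-heavy shader code is what eases GPR pressure and can raise occupancy, independently of any double-rate ALU throughput.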
Yes, but I meant the DICE presentation. :yes:

[image: slide from the DICE presentation]


What about deferred rendering? If you store the G-buffer as a fat FP16x4 texture, half precision for lighting and post-processing would be good enough.

 
That's already been the case for years. The FP16 support mentioned here isn't about the texture/data format, but about registers & ALU.
 
[image: Microsoft Xbox One X "Scorpio Engine" SoC slide, Hot Chips 29]


Bleh, need a 4K screenshot.

Anyways, @3dilettante Any thoughts on the RBE -> 2MB L2 -> 6chan memory setup?

I guess the numbering here sheds a bit more light on how they're tied to a possible shader engine #. Plus it's also doubled L2 per RB vs Durango.
 
I'm squinting as hard as I can at the upper diagram. Best I can tell, there are 4 memory hubs that correspond to the outbound GPU memory connections. The first and second have a dedicated connection to their physical controller, and share a third connection to another. The third and fourth have the same split arrangement with the other three memory controllers.

I think this may be a mixture of how AMD's APUs have a double-layer of memory controllers (GPU plugs into a graphics memory controller that then interfaces with the CPU domain), and the limited channel sharing of GPUs like Tahiti.
 