Digital Foundry Microsoft Xbox Scorpio Reveal [2017: 04-06, 04-11, 04-15, 04-16]

Oh shiiiii.......

http://www.eurogamer.net/articles/d...-five-ways-your-existing-games-will-be-better

"We built into the hardware the capability of overwriting all bilinear and all trilinear fetches to be anisotropic," Andrew Goossen reveals. "And then we've dialled up the anisotropic all the way up to max. All of our titles by default when you're running on Scorpio, they'll be full anisotropic."

Good quality texture filtering will make a big difference to a large number of Xbox One titles, where typically 4x anisotropic tends to be the balancing point chosen by developers. The leap to 16x, enforced at a system level by the back-compat engine, is a huge boon, especially in concert with the complete lack of screen-tear and smoother overall performance. More good news: this new feature extends to Xbox 360 games too.
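A toy sketch of what a system-level override like the one Goossen describes might look like. This is purely illustrative — the real mechanism lives in Scorpio's back-compat hardware/firmware layer, and all names here are invented:

```python
# Illustrative only: model the back-compat layer rewriting every
# bilinear/trilinear texture fetch mode to max anisotropic.
FORCED_MODE = "aniso_16x"

def override_filter_mode(requested_mode):
    """Return the filter mode the hardware would actually use."""
    if requested_mode in ("bilinear", "trilinear"):
        return FORCED_MODE
    return requested_mode

print(override_filter_mode("bilinear"))   # aniso_16x
print(override_filter_mode("trilinear"))  # aniso_16x
```

The point of doing this in hardware is that existing game binaries don't change at all; only the fetches they issue get upgraded.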

Scorpio hardware forces X1 and 360 games in BC mode to 16X aniso.
 
Wrong. Their Wii U OS reserved 50% of all available RAM. ;)

I am disappointed they're using another GB for the OS, but I figure it'll be used for recording and broadcasting in 4K. Those buffers had to be increased too!
1GB makes sense for what Wii U's OS provides: suspension of gameplay into a web browser, and a speedy one at that. Much better than 7th-gen web browsers.

In terms of what an OS actually should use, it's not bad. Esp. considering 1GB for games was plenty for a console with comparable raw power to the 7th-generation consoles (or less, considering the CPU).

Doubling what Windows 10 uses, on the other hand, is just ridiculous.
 
They haven't actually said much about which IP level the GPU supports; Polaris was only mentioned in comparison to the number of CUs and the clock speeds it has. They mentioned over 60 customisations in one of the videos. I suspect we will get a video detailing some of that. BTW, it has hardware support for checkerboard rendering.
 
Scorpio hardware forces X1 and 360 games in BC mode to 16X aniso.


Oooh. *I* called this out as something they could do, but probably wouldn't. 1/2 point for me. ;)

Microsoft is ensuring that all titles work, and this means that some of the five benefits above may need to be dialled back. The onus is on the platform holder to ensure that everything 'just works'.

"We're going to be the ones that ensure that your games run as fast as they can in terms of all those five different features, the best that they possibly can," Andrew Goossen explains. "There will be some cases where we have to dial down some of those attributes... in some games we potentially have to dial down the number of CUs, for example, to maintain compatibility with that title. But again these are all things that Microsoft does, we've always done, that's true of all 360 titles on Xbox One. We just make sure it runs the best it possibly can on Scorpio and we're very excited that Scorpio really will be the best place to run all your Xbox content."
 
Aha, bingo number two: https://forum.beyond3d.com/posts/1924328/
Just did some simple math: FYI, a 40-CU Polaris (with 2560 shader cores) can reach the 6 TFLOP number if it is clocked at 1.17GHz, and we still don't know what sort of temps and power draw it needs to do that (a stock RX 480 runs at 1.266GHz). This seems more plausible to me than Vega, simply because both the CPU and GPU have to be crammed into a single SoC, and a bigger die usually means more heat, more space and higher power draw — three things you want to avoid when building a console.
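That arithmetic checks out: peak FP32 rate for a GCN-style GPU is CUs × 64 lanes × 2 FLOPs per clock (an FMA counts as two) × clock speed.

```python
def peak_fp32_tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    """Peak FP32 throughput for a GCN-style GPU (FMA = 2 FLOPs/clock/lane)."""
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000.0

print(round(peak_fp32_tflops(40, 1.172), 2))  # 40 CUs at ~1.17GHz -> ~6.0
print(round(peak_fp32_tflops(36, 1.266), 2))  # stock RX 480 boost -> ~5.83
```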

Jun 18, 2016 :D
 
Price performance value is going to be very good on Scorpio.
If Leadbetter's opinion of a $500 price point is true, then it might not be that good, actually.
In November we might get Vega 11 at a sub-$300 price point, and people might be able to build ~$650 PCs that largely best the Scorpio in everything.


Continuation of their work I think, just pushing even further.
That's what I suspected, but why did Leadbetter speak about it as if it was such a great thing?


I stand corrected then. Great feature to have!


Because it's early in the bring-up, life and potential software targets for the platform? We've seen many times that these specs get altered and optimised as the platform settles; better to make a big reservation to start with and refine down later than to have to alter the developer specs later.
Let's hope you're right. 4GB for OS sounds ridiculous.
 
Really seems like they want out of the console market ... lol

Overall, I think I'm pretty happy with what's shown so far. I wasn't expecting Ryzen or Vega. What they've shown is pretty much what I expected. Looks like it might be reasonably small, and the cooling design looks nice. Hopefully it isn't too loud. Definitely looks like they focused on making this thing more premium than the Xbox One Fat. Will be interesting to see what finishes are on the case.

The 4GB of RAM for OS + apps is pretty much in line with phones. Wonder how the resource split is going to work. I hope foreground apps/games get pretty much the full performance, the way most phones work. Backgrounded apps should get minimal power, just enough to do background audio, voice, notifications etc.

Performance sounds good based on Forza, but I guess we'll see how it turns out in the real world. Thank god for improved loading times from having a faster drive. Will be interesting to see how it compares to USB drives. Maybe I'll be copying my stuff back onto the internal, if I get it.

I like the way they're handling backwards compatibility. Force AF, full clock speed.
 
So they explicitly mention a Polaris GPU. I expected that, but never knew they could reach 40 CUs with it. I guess they heavily customised it.
And no Zen? Never expected that, to be honest!

@DavidGraham Agreed, once Ryzen hit the consumer market I thought for sure the timeline would have allowed Zen in Scorpio six months from now. I guess not. Little argument for Scorpio being a "next gen" console left at this point.
 
Deano's 101 on Command Processors
-------------------------
What is a Command Processor? A Command Processor (CP) takes a list of packets, AKA commands, from the CPU and decodes them, firing off the actual hardware to do stuff.

What's a command? A command is a binary packet with a header and the information the hardware will need to perform the job the software asks of it.
e.g. a draw packet might have this kind of info:
DrawPacket: header, pointer to index buffer, ptr to vertex buffer, state id
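As a concrete illustration of that layout, packed with Python's struct module. The field widths here are entirely hypothetical — real packet formats are vendor-specific and mostly undocumented:

```python
import struct

# Hypothetical draw-packet layout matching the fields listed above:
# 32-bit header, 64-bit index-buffer pointer, 64-bit vertex-buffer
# pointer, 32-bit state id. Real formats are vendor-specific.
DRAW_PACKET = struct.Struct("<IQQI")

def make_draw_packet(header, ib_ptr, vb_ptr, state_id):
    return DRAW_PACKET.pack(header, ib_ptr, vb_ptr, state_id)

pkt = make_draw_packet(0x10, 0xDEAD0000, 0xBEEF0000, 7)
header, ib_ptr, vb_ptr, state_id = DRAW_PACKET.unpack(pkt)
print(DRAW_PACKET.size)  # 24 bytes per packet
```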

What does the CP do?
Internally a GPU isn't logically like the interface provided via the API; it's essentially a bunch of different IP blocks, each with their own registers and state. The CP decodes the packet into those various register sets, DMA fetches etc. so that commands actually get done.

CPs have to do A LOT of work; a modern GPU needs many packets of work to not idle. Additionally, it's usually part of the CP's job to ensure MMU tables and caches are flushed, and sometimes to handle power concerns, exceptions and state changes (preemption etc.). Whilst we like to see a nice block diagram of a GPU with 30+ CU blocks, in reality each block is itself made of lots of bits of interacting HW. CPs will help with scheduling (how much depends on the architecture), including breaking things into 'right sized' workloads. Because they often have to wait for various conditions, they usually have a few threads and/or cores of their own that can do work even whilst one part of the CP is stalling on something or other. Additionally, there are multiple CPU cores/threads and different priorities, and parts of the GPU themselves may issue work to the CP under a variety of conditions; the CP has to merge all this into a single stream of hardware instructions and potentially order some of the work once it's been done (writes to host memory etc.)

Oh and the whole thing is hard real time else stuff breaks.

How does ExecuteXXX work? In an effort to reduce CPU work, modern APIs have commands that offload work to the CP. A packet like this:
ExecuteDrawPacket: header, number of draws, pointer to draw buffer, each entry containing an index + vertex buffer ptr and state id
could be done on the CPU with:

for draw call 0 to number of draws:
DrawPacket: index buffer ptr[draw call], vertex buffer ptr[draw call], state id[draw call]

So it's purely an optimisation, reducing work on the CPU and the space/bandwidth up to the CP.
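A toy model of that expansion in runnable form (names invented, just to make the loop concrete):

```python
# Toy model: expanding one ExecuteDrawPacket into per-draw packets.
# On real hardware the CP does this itself; emulating it on the CPU
# means one packet per draw travels up to the CP instead of one total.
def expand_execute_draw(num_draws, ib_ptrs, vb_ptrs, state_ids):
    return [("DrawPacket", ib_ptrs[i], vb_ptrs[i], state_ids[i])
            for i in range(num_draws)]

packets = expand_execute_draw(
    3, [0x100, 0x200, 0x300], [0x1000, 0x2000, 0x3000], [1, 1, 2])
print(len(packets))  # 3 individual draw packets from one Execute packet
```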

There are also ExecuteIndirect calls; these are a further optimisation for cases where the parameters aren't known until after the CPU pushes the command into the command buffer.
An example would be where you want to draw a bunch of particles but how many is calculated via a compute shader.
They *could* be emulated on the CPU, but it would be slow, as the CPU would have to copy the result of the compute shader back down and then write the command buffer.
ExecuteIndirect gets the CP to fetch the parameters via its own DMA, which is usually fast, as the compute buffer and the CP will likely share the same memory/cache hierarchy.
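A toy model of the difference (all names invented): the draw count sits in GPU-visible memory, written by a compute pass, and the CP dereferences it itself rather than the CPU reading it back:

```python
# Toy model of ExecuteIndirect: the argument (here a particle count)
# is produced on the GPU; the CP fetches it via its own DMA, so no
# CPU round trip is needed.
gpu_memory = {"particle_count": 0}

def compute_shader_pass():
    # Stand-in for a compute shader that culls particles and writes
    # the surviving count into a GPU buffer.
    gpu_memory["particle_count"] = 4096

def cp_execute_indirect(count_addr):
    n = gpu_memory[count_addr]  # CP-side fetch, not a CPU readback
    return [("DrawPacket", i) for i in range(n)]

compute_shader_pass()
draws = cp_execute_indirect("particle_count")
print(len(draws))  # 4096 draws, count never touched by the CPU
```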

BTW, it's called microcode as it's essentially the same as old (pre-RISC) CPUs, where the CPU ISA was actually decoded by a microcode engine into the real CPU hardware sets.

So that's the basics of a CP: they are the less glamorous assistant to the magicians in the shader cores, but without them the shader cores would rely on the main CPU a lot more, and even then would run at a fraction of the speed they do.

-------------------------
Wrote this quickly, so hopefully no major mistakes, though I expect my grammar, spelling and wrong words will be there as usual for when I write stuff.

HTH,
Deano
 
That's what I suspected, but why did Leadbetter speak about it as if it was such a great thing?
industry guys are in the house ;) so I'm going to shut my mouth and open my ears for the next while and learn something.

but when I was in audience at Build 2015, this was the demo of execute indirect they put on.
I think it's pretty important, a cheap way to offload CPU. @Andrew Lauritzen wrote this demo, you may want to ask him about it.

http://www.dsogaming.com/news/direc...proves-performance-greatly-reduces-cpu-usage/
 