This thread is for discussing the impact of DX12 on consoles. Speculation on future console hardware belongs elsewhere.
Shouldn't the bottleneck be the bandwidth feeding the GPU rather than the GPU itself? To achieve 1080p, they must use the eSRAM properly, otherwise the whole thing falls apart. The problem is that it's difficult to use, but the new eSRAM API in DX12 is supposed to make it easier and more automatic. I recall MS stating there are four stages of eSRAM adoption. The last stage was to use the Move Engines asynchronously to move data into and out of memory so that the GPU has maximal bandwidth without contention with the CPU. I have no idea whether this fourth stage has been adopted, especially by third parties. It makes me wonder why MS chose to upgrade the CPU-GPU bandwidth to 30 GB/s while the PS4 CPU-GPU bandwidth is limited to 10 GB/s. What is the bottleneck from the GPU?

Seems reasonable. We've known since day one that the X1 has a faster CPU, and since early last year, when the reserves on the six "gaming" cores were released, that there's more available to games.
Unfortunately for X1, the most common bottleneck is always going to be the GPU.
Well... let's see the maximum theoretical CPU difference (pure raw power only).
According to VGLeaks, the Xbox One has a 112 GFLOPS CPU, and the PS4 a 102.4 GFLOPS one.
My calculations give the exact same numbers:
Xbox One - 1750 MHz × 8 cores × 8 FLOPs/cycle = 112 GFLOPS
PS4 - 1600 MHz × 8 cores × 8 FLOPs/cycle = 102.4 GFLOPS
When using 6 cores for games we had (launch):
Xbox One - 1750 × 6 × 8 = 84 GFLOPS
PS4 - 1600 × 6 × 8 = 76.8 GFLOPS
A 7.2 GFLOPS difference (a 9.4% difference)
With 50% of the Xbox One 7th core we have (7th-core minimum usage):
Xbox One - 1750 × 6.5 × 8 = 91 GFLOPS
PS4 - 1600 × 6 × 8 = 76.8 GFLOPS
A 14.2 GFLOPS difference (an 18.5% difference)
With 80% of the Xbox 7th core we have (7th-core maximum usage):
Xbox One - 1750 × 6.8 × 8 = 95.2 GFLOPS
PS4 - 1600 × 6 × 8 = 76.8 GFLOPS
An 18.4 GFLOPS difference (almost a 24% difference)
With 80% of the Xbox 7th core and 100% of the PS4 7th core (the current status, with the PS4 7th core reportedly unlocked at 100%):
Xbox One - 1750 × 6.8 × 8 = 95.2 GFLOPS
PS4 - 1600 × 7 × 8 = 89.6 GFLOPS
A 5.6 GFLOPS difference (a 6.3% difference)
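The arithmetic above is easy to check mechanically. A minimal Python sketch of the same calculation, assuming the poster's figure of 8 FLOPs per cycle per Jaguar core (that figure is the post's assumption, not something verified here):

```python
# Rough check of the peak-FLOPS arithmetic used in this post.
# Assumes 8 FLOPs per cycle per core, as the post does.
FLOPS_PER_CYCLE = 8

def cpu_gflops(mhz, cores):
    """Peak GFLOPS = clock (MHz) * cores * FLOPs/cycle / 1000."""
    return mhz * cores * FLOPS_PER_CYCLE / 1000.0

xb1 = cpu_gflops(1750, 6.8)  # 80% of the 7th core usable for games
ps4 = cpu_gflops(1600, 7.0)  # 7th core fully unlocked
print(xb1, ps4)              # 95.2 and 89.6
print(f"{(xb1 - ps4) / ps4 * 100:.2f}%")  # ~6.25%
```

The same function reproduces every row above by swapping in the relevant core counts.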
Regardless of the value, it's clear the Xbox has a CPU advantage... And granted, 18.4 GFLOPS is worthy of consideration in CPU-bottlenecked games with no async compute, but on a system with a 530 GFLOPS difference on the GPU (a 40.5% difference), and with GPGPU async compute capabilities in use, can 5.6 GFLOPS on the CPU make a real difference?
And even more so if we consider the statements claiming the PS4 API goes closer to the metal than DX12?
Yes, in fact we already do run some of our AI work on GPGPU. And physics. And...
The maximum difference, if the CPU is the bottleneck, is 1 to 3 frames per second at 30 fps, or 3 to 6 frames per second at 60 fps.
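Those frame-rate deltas follow directly from the CPU percentages earlier in the thread, under the (strong) assumption that the frame is entirely CPU-bound. A back-of-envelope sketch, using the ~6.3% and ~9.4% advantages computed above:

```python
# If a frame is 100% CPU-bound, frame time shrinks in proportion to
# CPU throughput, so fps scales by (1 + advantage). Back-of-envelope
# only; real frames are never purely CPU-bound.
def fps_with_cpu_advantage(base_fps, cpu_advantage_pct):
    frame_ms = 1000.0 / base_fps
    faster_ms = frame_ms / (1.0 + cpu_advantage_pct / 100.0)
    return 1000.0 / faster_ms

for adv in (6.3, 9.4):  # percentage advantages from the thread
    gain30 = fps_with_cpu_advantage(30, adv) - 30
    gain60 = fps_with_cpu_advantage(60, adv) - 60
    print(f"{adv}% -> +{gain30:.1f} fps at 30, +{gain60:.1f} fps at 60")
```

That yields roughly +1.9 to +2.8 fps at 30 fps and +3.8 to +5.6 fps at 60 fps, matching the 1-3 / 3-6 ranges quoted above.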
The bottleneck will move around many many times during the course of a frame - perhaps BW one moment, ALU another, ROPs another still. BW and optimal esram use does seem to be a factor going by the various leaks and dev comments, but the single largest "bottleneck" (if you can call it that) seems to be the number of CUs. Of course, if there were more CUs then BW would be more of a factor ...
Edit: The presentations on DX12 indicate that there are lots of little 'bubbles' that are present during rendering where CUs are stuck waiting for work. DX12 should allow the CUs to keep busy more of the time. X1 is already supposed to have had a lot of work done to allow the CPU to keep the GPU busy though.
Not necessarily. DX12 allows more to be done in parallel, so where one core was once a bottleneck, many cores can now create work and submit it to the GPU. So you could possibly be looking at many tens or hundreds of percent increase in performance if this were an enormous bottleneck.
Of course, in console games this probably won't be the case, as they'll have been designed around the hardware and so shouldn't find themselves in this situation.
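The parallel-submission idea can be sketched abstractly. Python is used here only to illustrate the structure; a real engine would record native D3D12 command lists on worker threads, and `record_commands` and the scene split below are hypothetical stand-ins:

```python
# Sketch: each "core" records its own command list in parallel, and
# the lists are submitted together at one point -- instead of one
# core serially issuing every draw, as under DX11.
from concurrent.futures import ThreadPoolExecutor

def record_commands(draw_calls):
    # Stand-in for filling one command list on a worker thread.
    return [f"draw({d})" for d in draw_calls]

scene = list(range(8000))                 # pretend draw calls for a frame
chunks = [scene[i::4] for i in range(4)]  # split across 4 "cores"

with ThreadPoolExecutor(max_workers=4) as pool:
    command_lists = list(pool.map(record_commands, chunks))

# Single submission point, analogous to ExecuteCommandLists in D3D12.
total = sum(len(cl) for cl in command_lists)
print(total)  # 8000
```

The point of the sketch is the shape of the work, not the speed: recording is the parallel part, and submission stays serialized at one call.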
Draw calls can always be optimized; I believe we touched on it much earlier in this thread, as a sub-topic of "what value do huge draw call counts bring".

Draw calls were not a problem on consoles before this generation, and they are not a problem with a good API...
For a long time the PS3 and 360 were capable of far more draw calls than PCs, particularly PCs under DirectX 9...
Because you believe GNM doesn't do the same thing? From what I understand, the ICE Team is responsible for the PS4 API, and they are competent.
Well, according to the guy in the article above (http://gamingbolt.com/the-park-dev-...p-between-ps4-and-x1-cpus#VpChyHoKdG5Q3lO5.99), no, it doesn't.
Which doesn't mean it can't change. It has nothing to do with whether ICE Team are competent or not, and there's no need to try and reframe this discussion in those terms.
Oles Shishkovstov: Let's put it that way - we have seen scenarios where a single CPU core was fully loaded just by issuing draw-calls on Xbox One (and that's surely on the 'mono' driver with several fast-path calls utilised). Then, the same scenario on PS4, it was actually difficult to find those draw-calls in the profile graphs, because they are using almost no time and are barely visible as a result.
In general - I don't really get why they choose DX11 as a starting point for the console. It's a console! Why care about some legacy stuff at all? On PS4, most GPU commands are just a few DWORDs written into the command buffer, let's say just a few CPU clock cycles. On Xbox One it easily could be one million times slower because of all the bookkeeping the API does.
But Microsoft is not sleeping, really. Each XDK that has been released both before and after the Xbox One launch has brought faster and faster draw-calls to the table. They added tons of features just to work around limitations of the DX11 API model. They even made a DX12/GNM style do-it-yourself API available - although we didn't ship with it on Redux due to time constraints.
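The "few DWORDs written into the command buffer" claim can be illustrated schematically. This is a toy Python sketch only; the opcode value and packet layout below are invented and are not the real GNM/PM4 encoding:

```python
import struct

# Toy command buffer: a draw is just a few 32-bit words appended to a
# bytearray -- no validation, hazard tracking, or driver bookkeeping,
# which is why it costs only a handful of CPU cycles.
OP_DRAW_INDEXED = 0x2A  # invented opcode, not a real encoding

def emit_draw(cmd_buf, index_count, start_index, base_vertex):
    cmd_buf += struct.pack("<4I", OP_DRAW_INDEXED,
                           index_count, start_index, base_vertex)

buf = bytearray()
emit_draw(buf, 36, 0, 0)   # one cube's worth of indices
print(len(buf))            # 16 bytes: four DWORDs per draw
```

Contrast that with a DX11-style call, where each draw passes through API-level state validation and driver bookkeeping before anything reaches a hardware queue.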
I'm fairly positive that DX12 and GNM are not the same feature set. Nor will they need to be.
There will come a time when increasing graphical complexity will require the substantial increase in draw calls that the new APIs provide, but it may not be this generation of consoles. The weight of DX11 games is already crushing them.
And a hardware and software feature set

The weight of DX11 games... DX11 is just an API.
High level APIs are needed as well as low level APIs. The point of an API is to make developers' lives easier when building games; with no APIs they would spend an eternity trying to make them

and the CPU and GPU in the PS4 and Xbox One can do much better. GNMX and DX11 are inefficient and can't show what the hardware is truly capable of...
It's not an error for a company to produce a game.

I think GNMX is an error on Sony's side; some devs, like the Ubisoft team creating The Crew, probably used it for AAA games...
The Xbox One and PS4 have now been on the market for two years and there are still more graphically impressive games releasing. What can you tell us about optimizing for the Xbox One and PS4 and do you feel the consoles will outlast the relatively short shelf-life of their components?
These are powerful machines with fixed architectures and some astonishing debug tools. On one of the games I worked on we created some amazing effects for Xbox One that have given us ideas for how it could be visually much better and more performant. The main variable in all this is how many skilled people you have and how much you’re willing to spend.
These days the investment in great engine tech is no longer necessarily rewarded with better sales. You have fewer developers going with their own solutions, instead investing money in programmers for the core game and in those who can maintain a bought third-party engine.
You have some established pioneers like Naughty Dog thankfully making leaps and bounds in this area, but I’m unsure right now whether we’ll see as much progress as we did with the previous generation.
fairly certain this is not the whole truth. There are multiple types of dev kits that were available to download.

GNM is an API too, and on the PS3/360 there were no inefficient DirectX-style APIs, and games were still made.