Digital Foundry Article Technical Discussion Archive [2014]

Also curious is how the new low-level access fits in with the VMs and the overall original design. Was there always going to be a low-level option with DX just a stop-gap, or has MS abandoned something like forwards compatibility in order to release more performance for XB1 games?

It's also weird considering DX12 is coming to Xbox One. I wonder if they've basically just brought over a subset of the DX12 model early. Otherwise you get someone making a game with a transitional API. If it performs better than working with DX12, why bring DX12 over at all? Curious, to say the least. DX12 would keep forward compatibility in the cards.
 
Of course they used their knowledge and experience, what an utterly pointless thing to say. They also have knowledge and experience of x86 and will have some experience of high-level use of GCN.

But this is shifting the goalposts after the previous "it's just PC->PS4, therefore easy" line has been shown to be bullshit simply by reading the damn article that we *should* be discussing.



Indeed, it's a big asset. ND have moved onto two new architectures that are very different to the ones they optimised their tools and code for. They weren't even on unified shaders with the garbage RSX, and not all of their Cell code will run as fast on Jaguar.

That said, with enough resources - meaning with it as a high enough priority - I'm sure they'd have achieved a rock solid 60 fps instead of just a very solid 60 fps.

I'm sure ND will be fine. They'll adapt quickly and continue to succeed just like they did on the PS3.



Perhaps TR would have been a better - and certainly more sensitive - comparison. It did add more to the remaster graphically than TLOU though, so frame rate compromises are more understandable, at least superficially.

I'm going to leave the ND discussion here though, as it's the rest of the article that's really interesting (far more so than RLB's name-drop in the frame rate question!).



I suppose it's a matter of perspective. I like high and consistent frame rates. And if they're not going to be high then they should at least be consistent. I find unlocked frame rates distracting. And tearing too. Xbox 1 Titanfall would be quite unpleasant for me, I think!

You guys are being way too sensitive and uptight about the ND comment. I don't see it as a dig at all. They're basically saying, "Even Naughty Dog didn't get perfect 60Hz, and they're the best of the best." Take a chill pill.
 
I'm also curious to know why the Kinect reservation is basically variable. If you aren't guaranteed the resources, you're not going to use them.
 
It's also weird considering DX12 is coming to Xbox One. I wonder if they've basically just brought over a subset of the DX12 model early. Otherwise you get someone making a game with a transitional API. If it's better than working with DX12, why bring DX12 over at all? Curious, to say the least.

PS4 also has a similar arrangement: an API for you to do it yourself and another that takes care of certain things for you, but apparently not to the extent of hindering performance. I think the 'to the metal' API is more for devs who want to fine-tune further and extract as much performance as they can, while the 'normal' API lets any dev get excellent performance out of the console without spending too much time chasing it.
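None of us here know what GNM and GNMX actually look like, of course, but the general shape of a thin convenience layer sitting on top of a lower-level one is easy to sketch. Everything below is made-up naming, purely to illustrate the split being described: the low layer makes you record and submit everything yourself, while the wrapper tracks state for you at a small CPU cost.

Code:
// Hypothetical sketch; not the real GNM/GNMX API, just the layering idea.
#include <cstdint>
#include <vector>

// "Low" layer: the caller records raw commands and kicks them off explicitly.
struct PipelineState { std::uint32_t id; };
struct Mesh          { std::uint32_t indexCount; };
struct Material      { PipelineState pso; };

struct CommandBuffer {
    std::vector<std::uint32_t> packets;                     // pretend GPU packets
    void setPipelineState(const PipelineState& p) { packets.push_back(p.id); }
    void drawIndexed(std::uint32_t indexCount)    { packets.push_back(indexCount); }
};
inline void submit(const CommandBuffer&) { /* pretend kick to the GPU */ }

// "Easy" layer: tracks bound state for you (skips redundant binds), costs a little CPU.
class EasyContext {
public:
    explicit EasyContext(CommandBuffer& cb) : cb_(cb) {}
    void draw(const Mesh& mesh, const Material& mat) {
        if (mat.pso.id != boundPso_) {                       // redundant-state filtering
            cb_.setPipelineState(mat.pso);
            boundPso_ = mat.pso.id;
        }
        cb_.drawIndexed(mesh.indexCount);
    }
private:
    CommandBuffer& cb_;
    std::uint32_t boundPso_ = UINT32_MAX;
};

int main() {
    CommandBuffer cb;
    EasyContext ctx(cb);
    ctx.draw(Mesh{36}, Material{{7}});   // convenience path
    cb.setPipelineState({7});            // or go straight to the lower layer
    cb.drawIndexed(36);
    submit(cb);
}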
 
I'm also curious to know why the Kinect reservation is basically variable. If you aren't guaranteed the resources, you're not going to use them.

I assume some of that 10% is still being used, and that only the unused part of that previous 10% is available to the game. Meaning it's now a flexible slice.

If stuff like auto log-in, pad tracking and the 3D component of mic noise cancelling were still active, then some of that time would still need to be used, I guess.

Raises the question again of whether diskinecting might raise performance by a couple of points?

... or perhaps it's non-Kinect stuff like video encode that's the culprit?
 
From the TLOUR DF article:

DF said:
Just over 15 minutes of gameplay from The Last of Us Remastered running in 60fps mode, giving a good indication of how well the game holds its lock.

The full quote, because the context (as usual) matters:

Context said:
Just over 15 minutes of gameplay from The Last of Us Remastered running in 60fps mode, giving a good indication of how well the game holds its lock. Overall - not bad at all. You'll need to skip ahead to around the ten minute point to see how the engine copes when it is really under load.

The first 10 minutes are rock solid. They tell you that after that point you get to see "how the engine copes when it is really under load". They specifically state that the footage after the ten-minute mark shows the game really under load.

It's showing that the game holds its lock really well, except when the engine is really under load. Their conclusion: "Not bad at all".
 
"Well, you kind of
answered your own question - PS4 is
just a bit more powerful. You forgot to
mention the ROP count, it's important
too"
I like how he adds this to Leadbetter's question. I've always found it odd how Leadbetter kind of glosses over the ROP count.
 
I've seen a lot of cases while profiling Xbox One when the GPU could perform fast enough but only when the CPU is basically idle. Unfortunately I've even seen the other way round, when the CPU does perform as expected but only under idle GPU, even if it (the CPU) is supposed to get prioritised memory access.

One of the most interesting parts, and likely relevant to attempts at higher resolutions, which will necessarily place higher demands on main memory BW (even more so if parts of the buffer(s) live in main memory).

Considering how well Kaveri does on half the BW, this does seem strange.

Fully tile and move all buffers and textures to ESRAM?
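Some rough back-of-the-envelope numbers, based on the publicly quoted specs (so treat them as approximations), for why that's easier said than done:

Main RAM: 256-bit DDR3-2133 -> 2133 MT/s x 32 bytes ~ 68 GB/s, shared between the CPU and the GPU.
Kaveri: 128-bit DDR3-2133 -> ~34 GB/s, i.e. the "half the BW" figure.
ESRAM: 32 MB, ~109 GB/s in one direction, with a higher quoted peak (~204 GB/s) when reads and writes overlap.
One 1080p RGBA8 target: 1920 x 1080 x 4 bytes ~ 8 MB; a 32-bit depth buffer is another ~8 MB.

So a deferred setup with three or four colour targets plus depth is already pushing past 32 MB at 1080p, which is why "fully tile and move everything" really means partial residency and juggling targets in and out of ESRAM rather than simply parking all buffers and textures there.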
 
DF interviews with 4A are the best; Oles gives no fucks. Great insight into the state of the Xbone's API, draw calls, memory bandwidth, the Kinect service shutdown and other stuff.

I really can't believe they went with full-on DirectX 11 for the launch Xbone! Now those tweets from Corrinne Yu are even more telling.
 
I suppose it's a matter of perspective. I like high and consistent frame rates. And if they're not going to be high then they should at least be consistent. I find unlocked frame rates distracting. And tearing too. Xbox 1 Titanfall would be quite unpleasant for me, I think!

Indeed. Coming to PlayStation 4 from PlayStation 3, where frame rates were often an issue and tearing was far too pervasive for the first few years, I have been very impressed with the frame rates this generation.

Also curious is how the new low-level access fits in with the VMs and the overall original design. Was there always going to be a low-level option with DX just a stop-gap, or has MS abandoned something like forwards compatibility in order to release more performance for XB1 games?

We use selective and adaptive VM technology developed in-house which completely encapsulates some parts of the hardware (CPUs, RAM, all I/O) but gives complete access to bespoke processors, with no impact on performance compared to the same task on the hardware in a non-VM environment.

If you design your VM well, and Microsoft have some very cool VM technology, then this isn't a problem.

It's also weird considering DX12 is coming to Xbox One. I wonder if they've basically just brought over a subset of the DX12 model early. Otherwise you get someone making a game with a transitional API. If it performs better than working with DX12, why bring DX12 over at all? Curious, to say the least. DX12 would keep forward compatibility in the cards.

I bet Microsoft had endless discussions deep into Xbox One development about alternatives to the DirectX API. They must surely have been conscious of AMD's Mantle programme and the direction for DirectX 12 (fewer layers, less abstraction, more metal!).

I wonder to what extent DirectX is really a boon for developers bringing across Windows code. Nobody seems to be struggling with the PlayStation 4 and its proprietary GNM and GNMX APIs.
 
DF interviews with 4A are the best; Oles gives no fucks. Great insight into the state of the Xbone's API, draw calls, memory bandwidth, the Kinect service shutdown and other stuff.

I really can't believe they went with full-on DirectX 11 for the launch Xbone! Now those tweets from Corrinne Yu are even more telling.

Never go full DirectX. :nope:

I wonder to what extent DirectX is really a boon for developers bringing across Windows code. Nobody seems to be struggling with the PlayStation 4 and its proprietary GNM and GNMX APIs.

That's a good point.

Can't help wondering if a desire to create a unified environment with Windows might have gone too far. Metro and the app ecosystem presumably sits on top of something DX11-like, perhaps mono fed into the "other" OS as well as "game" OS ...?
 
Can't help wondering if a desire to create a unified environment with Windows might have gone too far. Metro and the app ecosystem presumably sits on top of something DX11-like, perhaps mono fed into the "other" OS as well as "game" OS ...?
Yeah, I can't help but feel that corporate committee politics steered a decision or two, but equally you can't see Microsoft completely dropping it because presentationally it looks bad ("Microsoft take the 'X' out of Xbox" etc).

Two APIs, one high and one low, would have been the way to go, and is perhaps where they are heading. Everybody wins.
 
It's also weird considering DX12 is coming to Xbox One. I wonder if they've basically just brought over a subset of the DX12 model early. Otherwise you get someone making a game with a transitional API. If it performs better than working with DX12, why bring DX12 over at all? Curious, to say the least. DX12 would keep forward compatibility in the cards.

Back at //BUILD, they mentioned in a Q&A that they were actually prototyping DX12 on the XB1, so the low-level API is probably exactly that - proto-DX12.
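If that's right, the interesting part of "a subset of the DX12 model" is the submission model: command recording moves into the game, on as many threads as you like, with almost no validation at record time and one cheap submit at the end, instead of a single immediate context the driver second-guesses call by call. A minimal sketch with made-up names (deliberately not the real D3D11/D3D12 calls), just to show the shape:

Code:
// Hypothetical sketch; not the real D3D11/D3D12 APIs, just the submission model.
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

struct CommandList { std::vector<std::uint32_t> packets; };

// Stand-in for "record the draws for one chunk of the scene".
void recordChunk(CommandList& cl, std::uint32_t firstDraw, std::uint32_t drawCount) {
    for (std::uint32_t i = 0; i < drawCount; ++i)
        cl.packets.push_back(firstDraw + i);   // pretend SetPSO/Draw pairs
}

void submitAll(const std::vector<CommandList>&) { /* one cheap kick, in order */ }

int main() {
    // DX11-style: one immediate context, so recording is effectively single
    // threaded and the driver validates/patches every call as it goes.
    // DX12-style: record several command lists in parallel, then submit them
    // all at once in the order you choose.
    const std::uint32_t kThreads = 4, kDrawsPerList = 1000;
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;
    for (std::uint32_t t = 0; t < kThreads; ++t)
        workers.emplace_back(recordChunk, std::ref(lists[t]),
                             t * kDrawsPerList, kDrawsPerList);
    for (auto& w : workers) w.join();
    submitAll(lists);
}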

I'm also curious to know why the Kinect reservation is basically variable. If you aren't guaranteed the resources, you're not going to use them.

Because it wasn't entirely a Kinect reservation; it was a system reservation. A good portion of it was Kinect, but it also covered things like system-drawn UI elements, such as notification popups, or drawing the UI of snapped apps.

With the flexible GPU-sharing model, any time that the system takes is going to take away from the time the game has available, and that time taken is going to be less than the fixed reservation, but not entirely predictable.

I'd imagine that the PS4 is in a similar situation - it also has notification popups, the Twitch interface can take over some of the screen to draw its own UI, etc. Drawing those is never going to be free, so you always need to be aware that at any point the system might take some cycles away from you on the GPU.
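Which is presumably why you'd treat the reclaimed time as headroom rather than budget. One standard way of coping (not something either platform holder has confirmed mandating, just a common technique) is to watch what the GPU actually delivered last frame and scale something elastic, like render resolution, to fit. A toy sketch with made-up numbers:

Code:
// Hypothetical sketch of budgeting for a variable system slice on the GPU.
// Numbers and thresholds are invented for illustration.
#include <algorithm>
#include <cstdio>

int main() {
    const double frameBudgetMs = 16.6;       // 60 fps target
    double resolutionScale = 1.0;            // 1.0 == full native resolution

    // Imagine these come from GPU timestamp queries, one per frame.
    const double measuredGpuMs[] = { 14.0, 14.2, 16.9, 17.5, 15.1, 14.3 };

    for (double gpuMs : measuredGpuMs) {
        // If the system (UI, snapped apps, etc.) stole time and we overran,
        // shrink the render resolution a little; claw it back when there's slack.
        if (gpuMs > frameBudgetMs * 0.95)
            resolutionScale = std::max(0.75, resolutionScale - 0.05);
        else if (gpuMs < frameBudgetMs * 0.85)
            resolutionScale = std::min(1.0, resolutionScale + 0.01);
        std::printf("gpu %.1f ms -> scale %.2f\n", gpuMs, resolutionScale);
    }
}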
 
4A are geniuses at this kind of work. I liked what I played of the PS3 version of Last Light, and the PS4 version looks tons better than that.

LB's comparison with ND was unnecessary; both development houses did exceptional work under the time and development constraints they were given. They both put Silent Hill HD to shame.
 
I think Globalisateur explains it. Porting from PS3 to PS4 is probably a lot harder than it is PC->PS4/XB1.

I think it's arguable that PS3->PS4 porting is actually much easier, relatively speaking, for the primary reason that most PS3 work was about shifting graphics load onto the SPUs with lots of specialty code. All of that code can be thrown away on PS4 and you simply let the GPU do its thing. PC->XB1/PS4 can get somewhat more complicated if they had overestimated how much power the console GPUs were going to have.
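Very loosely, the PS3-era pattern was "carve the work into chunks that fit a 256 KB local store, DMA them in, process, DMA them back", with all the double-buffering and alignment care that implies; on PS4 the same work is just a loop over shared memory, or a single compute dispatch on the GPU. A deliberately simplified sketch, with no real SPU intrinsics or PS4 API calls:

Code:
// Deliberately simplified; no real SPU intrinsics or PS4 API calls here.
#include <algorithm>
#include <cstddef>
#include <vector>

void skinVertex(float& v) { v *= 1.5f; }   // stand-in for the real work

// PS3-era shape: chunk the data to fit a 256 KB local store, "DMA" a chunk in,
// process it locally, "DMA" it back out, repeat.
void processSpuStyle(std::vector<float>& verts) {
    constexpr std::size_t kChunk = 256 * 1024 / sizeof(float);
    std::vector<float> localStore(kChunk);
    for (std::size_t base = 0; base < verts.size(); base += kChunk) {
        const std::size_t n = std::min(kChunk, verts.size() - base);
        std::copy(verts.begin() + base, verts.begin() + base + n,
                  localStore.begin());                       // stand-in for DMA in
        for (std::size_t i = 0; i < n; ++i) skinVertex(localStore[i]);
        std::copy(localStore.begin(), localStore.begin() + n,
                  verts.begin() + base);                     // stand-in for DMA out
    }
}

// PS4-era shape: unified memory, so it's just a loop here (or, on the real
// hardware, one compute dispatch).
void processPs4Style(std::vector<float>& verts) {
    for (float& v : verts) skinVertex(v);
}

int main() {
    std::vector<float> verts(100000, 1.0f);
    processSpuStyle(verts);
    processPs4Style(verts);
}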


Can't help wondering if a desire to create a unified environment with Windows might have gone too far. Metro and the app ecosystem presumably sits on top of something DX11-like, perhaps mono fed into the "other" OS as well as "game" OS ...?

I wonder if it really has to be that way, so encased in cruft as to bog things down. I'm guessing things are just still incomplete behind the scenes.
 
My general impression of Xbox One was that it was not fully ready at launch on the software side of things. The hardware seems to be complete and without issues, but the OS, the development kits, the APIs and other tools all seemed to be in a poor state, and things have been changing rapidly. It's almost like they just slapped Windows and DirectX as-is (generalizing) on the thing, and have only been able to optimize it post launch. A lot of assumptions were made about Direct3D being low overhead on Xbox One, but that doesn't appear to be the case. I wonder if they thought Direct3D 12 would be ready earlier, or something like that.
 
I found this pretty interesting from a PC gamer's point of view:

GCN doesn't love interpolators? OK, ditch the per-vertex tangent space, switch to per-pixel one. That CPU task becomes too fast on an out-of-order CPU? Merge those tasks. Too slow task? Parallelise it. Maybe the GPU doesn't like high sqrt count in the loop? But it is good in integer math - so we'll use old integer tricks. And so on, and so on.

That makes it sound as though more future games are going to be tailored to the idiosyncrasies of GCN, which in theory would give that architecture a big leg up over Nvidia's in the future. Strange that we still don't seem to be seeing that in the games released so far, though. Metro Redux comparisons between AMD and NV on the PC should be very interesting in light of that interview.
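Oles doesn't say which integer tricks they actually reached for, but the canonical example of trading sqrt for integer math is the old bit-twiddled reciprocal square root (famous from Quake III). It's only an illustration of the idea; modern CPUs and GPUs have dedicated rsqrt instructions of their own:

Code:
// The classic integer trick for 1/sqrt(x), shown for illustration only; not
// necessarily what 4A used.
#include <cstdint>
#include <cstdio>
#include <cstring>

float fastRsqrt(float x) {
    const float xhalf = 0.5f * x;
    std::uint32_t i;
    std::memcpy(&i, &x, sizeof i);       // reinterpret the float's bits as an int
    i = 0x5f3759df - (i >> 1);           // integer magic gives a rough first guess
    std::memcpy(&x, &i, sizeof x);
    return x * (1.5f - xhalf * x * x);   // one Newton-Raphson step to refine it
}

int main() {
    std::printf("1/sqrt(2) ~ %f (exact: %f)\n", fastRsqrt(2.0f), 0.70710678f);
}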

Also this:

And let's not forget that programming close to the metal will usually mean that we can get 2x performance gain over the equivalent PC spec. Practically achieving that performance takes some time, though!

Is that really still applicable in a world where DX12 is about to bring close-to-the-metal programming to the PC, and developers are clearly making low-level optimizations to game code that will directly benefit x86 AVX-enabled CPUs and GCN GPUs?
 
That makes it sound as though more future games are going to be tailored to the idiosyncrasies of GCN, which in theory would give that architecture a big leg up over Nvidia's in the future. Strange that we still don't seem to be seeing that in the games released so far, though. Metro Redux comparisons between AMD and NV on the PC should be very interesting in light of that interview.
It's been known for a while that GCN isn't good with interpolation; I don't know if Kepler shares the same properties or not, but generally low-level optimizations like this are different on consoles than on PCs. In fact, benchmarks from the Redux versions show a ridiculous advantage for NVIDIA hardware, an advantage that wasn't at this magnitude in the old versions.

[Benchmark chart from gamegpu.ru: Metro Last Light Redux GPU comparison at 1920x1080]


Is that really still applicable in a world where DX12 is about to bring close-to-the-metal programming to the PC, and developers are clearly making low-level optimizations to game code that will directly benefit x86 AVX-enabled CPUs and GCN GPUs?
I think there is a bit of exaggeration here. At any rate, the limiting factor in the current consoles is the lowly CPU, which means that "2x performance" will still be nowhere near the top of the mid-range PC hardware, and that's before we factor in DX12.
 
How completely bizarre for a game that was previously not optimized for GCN and now is! I'd love to hear the devs' take on that one.

Do the effects introduced in the Redux version simply run better on Nvidia hardware, regardless of any specific optimising that may be present?

Keeping this brief because this is the console forum: for as far back as I've been into 3D gaming (Nvidia Riva 128), some architectures simply had better implementations that favoured certain graphical effects. This runs better on ATI/AMD; that runs better on Nvidia. It was one of the things that drove me crazy as a PC gamer: there was almost no definitively better option, except perhaps the GeForce 5xxx series, which was rough all round.

 