DirectX 12: The future of it within the console gaming space (specifically the XB1)

Correct me if I'm wrong, but wouldn't DX12 be less of a big thing for the Xbox One, since its existing API is already low level? Will there be any increase in performance? I understand PCs are getting a boost due to draw calls and better access to the GPU, and unified memory pools are on their way to PCs as well. But consoles already have those advantages.

And isn't PS4's GNM API even more open?
 
It's just a question of what additional features DX12 brings to the game now.

It's still unclear what the difference is between being fully DX12 compliant and merely being DX12 compatible, in the same way that all GCN 1.0 cards are DX11_1 but only GCN 1.1 supports Tier 2 tiled resources.
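
For what it's worth, this is also how it surfaces on the PC side: capabilities like tiled resources are reported as numbered tiers that an application queries at runtime, rather than a single "supports DX12" flag. A minimal sketch of such a query, assuming the public D3D12 headers (the tiers reported depend entirely on the GPU and driver):

```cpp
// Minimal sketch: query D3D12 feature tiers at runtime (assumes the public
// D3D12 SDK headers; the reported tiers depend on the actual hardware).
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "d3d12.lib")

int main() {
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    // Create a device on the default adapter at the 11_0 feature level.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12-capable adapter/driver found.\n");
        return 1;
    }

    // Tiered capabilities (tiled resources, resource binding, ...) are
    // reported through CheckFeatureSupport rather than a single yes/no flag.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options)))) {
        std::printf("Tiled resources tier:  %d\n", (int)options.TiledResourcesTier);
        std::printf("Resource binding tier: %d\n", (int)options.ResourceBindingTier);
    }
    return 0;
}
```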

So hopefully this gets cleared up over the next couple of months.

I've still got my fingers crossed that there will be something new from a hardware perspective.

I can say without a doubt that dynamic GI is the biggest next-gen feature one could easily identify, regardless of resolution. Even baked GI makes a huge difference.

I did a quick search of the threads about dynamic GI on these boards from 2006, and the responses towards dynamic GI were interesting. Now we have two (unreleased) games with dynamic GI, running on relatively weaker hardware than one might have believed would be required for such features back in 2006.
 
Is there any even remote link between DX12 and facilitating Xbox 360 emulation on Xbox One, particularly in regards to the PPC to X86 conversion issue?
 
I did a quick search of the threads about dynamic GI on these boards from 2006, and the responses towards dynamic GI were interesting. Now we have two (unreleased) games with dynamic GI, running on relatively weaker hardware than one might have believed would be required for such features back in 2006.
I don't remember discussions from back then, but my view was the opposite. In 2002 I did a little research on GI and concluded it wasn't worth putting too much effort into it since we'd be able to brute force it soon.

Boy was I wrong. I must have had a unit conversion problem in my calculations. :oops:
 
Correct me if I'm wrong, but wouldn't DX12 be less of a big thing for the Xbox One, since its existing API is already low level? Will there be any increase in performance? I understand PCs are getting a boost due to draw calls and better access to the GPU, and unified memory pools are on their way to PCs as well. But consoles already have those advantages.

And isn't PS4's GNM API even more open?

Not sure, but a Halo developer was recently asked if DX12 would make "an appreciable difference in power/performance for Xbox One", and he simply replied "Word".

Question is what does "appreciable" mean?
 
Absolutely nothing, as he said.

Well, the lower API overhead reduces the cycles needed to send draw calls to the GPU, and lets submission be threaded across multiple cores. With top X360 games reaching 20k draw calls @ 30fps, you'd think it would help free up CPU cycles somewhat on PCs. Emulating the 360 would be extraordinarily CPU demanding, so being able to free up the CPU as much as possible would be beneficial.
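
As a rough illustration of the numbers involved, here's a back-of-the-envelope sketch of the per-draw cycle budget implied by that 20k @ 30fps figure. The 1.75 GHz clock and the number of cores devoted to submission are assumptions for the example, not measured values:

```cpp
// Back-of-the-envelope cycle budget per draw call, using the 20k draws @ 30fps
// figure quoted above. Clock speed and core count are illustrative assumptions.
#include <cstdio>

int main() {
    const double drawsPerFrame   = 20000.0;  // quoted figure for top X360 titles
    const double framesPerSecond = 30.0;
    const double coreClockHz     = 1.75e9;   // assumed XB1 Jaguar core clock
    const int    submissionCores = 4;        // hypothetical cores used for submission

    const double drawsPerSecond = drawsPerFrame * framesPerSecond;  // 600,000
    const double cyclesOneCore  = coreClockHz / drawsPerSecond;     // ~2,900
    const double cyclesNCores   = cyclesOneCore * submissionCores;  // ~11,700

    std::printf("Draws per second:                 %.0f\n", drawsPerSecond);
    std::printf("Cycle budget per draw (1 core):   ~%.0f cycles\n", cyclesOneCore);
    std::printf("Cycle budget per draw (%d cores): ~%.0f cycles\n",
                submissionCores, cyclesNCores);
    return 0;
}
```

Every cycle of API overhead trimmed out of that budget is a cycle freed up for everything else, which is exactly the argument for threading submission across cores.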

I don't see how he came to his conclusion, unless it's because he thinks it's a moot point since the X1 CPU doesn't have the raw clock speed to emulate the 360 CPU.
 
Yes, but it's still meaningless, as a system-level emulator won't be using the high-level, developer-facing graphics API.
 
Yes, but it's still meaningless, as a system-level emulator won't be using the high-level, developer-facing graphics API.

D3D10 had a <10% performance improvement in PCSX2 over D3D9, likely due to the rework of the entire API architecture in D3D10, which included shifting more of the work onto the GPU and off the CPU. That was critically important for PCSX2, since getting games to run at a full 30fps still sometimes required a 4.5 GHz+ Core 2/i5/i7.
 
...... which was critically important for PCSX2, since getting games to run at a full 30fps still sometimes required a 4.5 GHz+ Core 2/i5/i7.

This explains why DX12, while certainly a useful boost to CPU time, is still nowhere near close to providing enough power for emulation of the Xenon CPU. Jaguar has awful IPC compared to any of the Core i architectures and is running at 1.7 GHz (I think?) to boot.
 
Jaguar has awful IPC compared to any of the Core i architectures and is running at 1.7 GHz (I think?) to boot.
It's not awful. Benchmarks tend to misrepresent the IPC of Jaguar because it doesn't have turbo. An i7 (Haswell ULV) at 1.7 GHz turbos up to 3.3 GHz. This is why the single-core performance of a 1.7 GHz ULV Haswell beats Jaguar so badly (it has roughly twice the clock when only a single core is active). Jaguar IPC is roughly comparable to Intel Core 2.
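
To put rough numbers on that (assuming the XB1's 1.75 GHz Jaguar clock and, per the Core 2 comparison above, broadly similar IPC; both are simplifications):

```cpp
// Rough single-thread comparison: score scales roughly with IPC * clock. With
// turbo active, a "1.7 GHz" Haswell ULV runs one core at 3.3 GHz, so even at
// identical IPC it would post about twice the single-threaded score of a
// 1.75 GHz Jaguar (clock figures assumed for this sketch).
#include <cstdio>

int main() {
    const double jaguarClockGHz  = 1.75; // assumed XB1 Jaguar clock
    const double haswellBaseGHz  = 1.7;  // nominal ULV clock
    const double haswellTurboGHz = 3.3;  // single-core turbo clock

    // Assuming equal IPC, how much of the benchmark gap does clock alone explain?
    std::printf("Clock ratio at base:  %.2fx\n", haswellBaseGHz / jaguarClockGHz);
    std::printf("Clock ratio at turbo: %.2fx\n", haswellTurboGHz / jaguarClockGHz);
    return 0;
}
```

Whatever gap remains beyond that ~1.9x in single-threaded results would be down to actual IPC differences.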
 
It's not awful. Benchmarks tend to misrepresent the IPC of Jaguar because it doesn't have turbo. An i7 (Haswell ULV) at 1.7 GHz turbos up to 3.3 GHz. This is why the single-core performance of a 1.7 GHz ULV Haswell beats Jaguar so badly (it has roughly twice the clock when only a single core is active). Jaguar IPC is roughly comparable to Intel Core 2.


You're right, I was engaging in unnecessary hyperbole; it's just not great compared to the CPUs necessary to emulate the EE at full speed.

There is a benchmark cooked up from the FFX-2 scenes running in PCSX2. It's a long way off a perfect benchmark, but it does show that the threshold for emulation is a 3.3 GHz CPU (an E6800 appears to be fast enough). There are also benchmark results from the Dolphin forums for emulating the Wii CPU that imply that straight emulation of the Xenon CPU is unlikely for a Jaguar core at the clock speeds in the XB1 (Google Doc of the results).

I had been under the impression that a 'core' of an emulated CPU couldn't be split across multiple cores on the host, but if there are ways to do that then it certainly would change things. Or are there ways you could leverage GPU compute for CPU emulation, perhaps?

Edit: Is IPC not clock independent? I thought IPC was supposed to represent how many instructions a CPU can retire per clock cycle. Or is it so badly mangled as a benchmark that it's about as much use as a SPECint number (from what I've read, it seems compiler tricks have spoiled that as a benchmark)?
 