DirectX 12: Its future in the console gaming space (specifically the XB1)

OK folks, fasten your seatbelts. Good ole MisterX may have been right after all. See this article at Tom's for a totally batshit crazy rumor: Exclusive: DirectX 12 Will Allow Multi-GPU Between GeForce And Radeon http://www.tomshardware.com/news/microsoft-directx12-amd-nvidia,28606.html If this weren't Tom's or a similar authority, I would completely disregard this rumor. If that rumor is true, then all of those extra processors and programmable DSPs suddenly become potential additional graphics resources. This also makes the display planes even more valuable, and it also opens up a viable upgrade path for the existing box.
 

How does this apply to either of the consoles, neither of which has an Nvidia GPU to link to?
In the PC space, the architectures are not required to produce output consistent with one another: there are differences in texture filtering and output and in the level of support for specific functions, and there is no existing requirement that proprietary drivers from different vendors play nice within the same application.
And why does this make a DSP a viable DX12 graphics target?

edit:
Good ole MisterX may have been right after all.

To paraphrase The Lord of the Rings:

"A day may come when the courage of men fails MisterX is right, when we forsake our friends and break all bonds of fellowship. But it is not this day. An hour of wolves and shattered shields when the age of Men comes crashing down MS Paint diagrams of random Powerpoint sections pasted together like ransom notes isn't a waste of time! But it is not this day!"
 
How does this apply to either of the consoles, neither of which has an Nvidia GPU to link to?
In the PC space, the architectures are not required to produce output consistent with one another: there are differences in texture filtering and output and in the level of support for specific functions, and there is no existing requirement that proprietary drivers from different vendors play nice within the same application.
And why does this make a DSP a viable DX12 graphics target?

The whole point of that article was that DX12 is agnostic to GPU resources; it treats everything as if it were a single GPU. Thus, the main GPU, spare CPU cycles, random processors, and additional video cards all form a single virtual GPU. In the PC space, functionally that is probably limited to AMD and Nvidia cards plus Intel integrated chips and AMD APUs. However, in a closed, defined system the DSPs and other things come into play. For example, the Bone could offload the HUD, skybox, and system plane to non-CU resources. Again, this is all a crazy rumor at this point, but it would help the Bone's design make more sense, especially if any of the Yukon design principles survived.
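For what it's worth, here's a minimal sketch of what the "agnostic" part already looks like on the PC side, assuming the final DX12 keeps the existing DXGI adapter model. The function name is mine, and whether the resulting devices can actually be pooled into one virtual GPU is exactly the rumored part:

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate every adapter DXGI reports and create a D3D12 device on each,
// regardless of vendor. Pooling them as one "virtual GPU" is the rumor;
// this enumeration part is just the existing model.
std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // AMD, Nvidia, Intel -- doesn't matter here
    }
    return devices;
}
```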
 
The whole point of that article was that DX12 is agnostic to GPU resources; it treats everything as if it were a single GPU.
The hardware won't act like that, and I'm not aware of a reason why the vendors who write the DX12 drivers for the hardware would cater to that.

For example, the Bone could offload the HUD, skybox, and system plane to non-CU resources. Again, this is all a crazy rumor at this point, but it would help the Bone's design make more sense, especially if any of the Yukon design principles survived.
An audio DSP would have no conception of what to do with skybox data sent to it; an API isn't going to change that. It's not like the DSP would have the slightest conception of what it was being asked to do.
 
The hardware won't act like that, and I'm not aware of a reason why the vendors who write the DX12 drivers for the hardware would cater to that.

An audio DSP would have no conception of what to do with skybox data sent to it; an API isn't going to change that. It's not like the DSP would have the slightest conception of what it was being asked to do.

You seem to be missing the point. You wouldn't need the drivers. IIRC, there are substantial programmable DSPs in the Bone beyond the audio ones. You just program them to function as a GPU.

Currently this is nothing but a crazy rumor on a credible site. However, if true, it has far-reaching implications in the PC space and potentially game-changing ones for the Bone.
 
Exclusive: DirectX 12 Will Allow Multi-GPU Between GeForce And Radeon
...

this makes total sense to me ... just as a goal of C++ AMP was to allow you to programme against ANY ACCELERATOR, so should the DX API ... I'm glad they're doing it for DX12 (assuming the rumors are true) ...

p.s. Brad and some AMD folk on Twitter have been hinting at this over the last few weeks ..
 
You seem to be missing the point. You wouldn't need the drivers. IIRC, there are substantial programmable DSPs in the Bone beyond the audio ones. You just program them to function as a GPU.
Just like that?

Currently this is nothing but a crazy rumor on a credible site. However, if true, it has far-reaching implications in the PC space and potentially game-changing ones for the Bone.
A Tensilica HiFi DSP has something like one ALU, one MAC, and one load/store unit.
The Xbox One has 12 CUs, each with one scalar unit, 16 L/S units, and 64 FMADD units, plus 16 ROPs operating on multiple channels each, and so on.
I see it changing about, what, 1/700th of the game?
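Back-of-the-envelope on where that fraction comes from, counting only the multiply-accumulate lanes and ignoring clock speed differences (so deliberately rough):

```latex
\frac{1 \text{ MAC (HiFi DSP)}}{12 \text{ CUs} \times 64 \text{ FMADD/CU}}
  = \frac{1}{768} \approx \frac{1}{700}
```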

...

this makes total sense to me ... just as a goal of C++ AMP was to allow you to programme against ANY ACCELERATOR, so should the DX API ... I'm glad they're doing it for DX12 (assuming the rumors are true) ...

p.s. Brad and some AMD folk on Twitter have been hinting at this over the last few weeks ..
If you're running the same compute across disparate hardware, expect weirdness.

Also, AMD's hinting at Crossfiring with an Nvidia GTX?
 
And here I thought the tweets were about xDMA on a 390x2...

Edit: What a nightmare Tom's is describing, leaving it up to the developer to divide the workload - but wow, imagine how many new hardware configs you would have to test against. Sorry, I am putting this in the "WHAT?" box until it is proved/disproved next week.
Edit2: Unless it is only on new hardware, and only on devices designed to work in this manner.
 
6 days to go, I guess. Until this is more fleshed out, it's too big of an ocean. There are always limitations, and if this is true, the limitations will likely be restrictive in a manner that would make sense.

A can't-put-the-circle-in-the-square type of bit. But if it's otherwise, I'd expect a lot of overhead. So something has to give.

I liked the LOTR speech. Lol. Solid.

I also apologize for the DX12 NDA thread game thing: if you were an individual, I am sorry; it was not my intention to witch hunt.
 
Hmm, the benefits seem to be a bit exaggerated, but I'd love to see this used to split off latency-sensitive compute work to CPU-integrated GPUs, leaving the big discrete GPUs for the less latency-sensitive heavy lifting.

Any feelings on how likely that is?
 
Hmm, the benefits seem to be a bit exaggerated, but I'd love to see this used to split off latency-sensitive compute work to CPU-integrated GPUs, leaving the big discrete GPUs for the less latency-sensitive heavy lifting.

Any feelings on how likely that is?
Certainly enough to make me rethink buying an MBP Retina.

I would love to see this happen.

This also brings the possibility of cloud resources back into the picture a bit. If you can divide your frame up across different GPU assets, that type of split should, in theory, be no different than waiting on remote resources.
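A minimal sketch of how that iGPU/dGPU split could look with explicit per-adapter devices, assuming the adapter with the least dedicated VRAM is the integrated one. That heuristic and the function name are my own, not anything the article specifies:

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdint>

using Microsoft::WRL::ComPtr;

// Sketch of the proposed split: hand latency-sensitive compute to the
// integrated GPU on its own device/queue, leaving the discrete GPU free
// for heavy rendering. "Least dedicated VRAM == iGPU" is an assumption.
ComPtr<ID3D12Device> CreateIntegratedGpuDevice(IDXGIFactory4* factory)
{
    ComPtr<IDXGIAdapter1> adapter, best;
    SIZE_T leastVram = SIZE_MAX;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer
        if (desc.DedicatedVideoMemory < leastVram)
        {
            leastVram = desc.DedicatedVideoMemory;
            best = adapter;
        }
    }
    ComPtr<ID3D12Device> device;
    if (best)
        D3D12CreateDevice(best.Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));
    return device; // submit latency-sensitive compute to a queue on this device
}
```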
 
AMD's hinting at Crossfiring with an Nvidia GTX?

not Crossfiring with Nvidia, just making it easier for devs with multiple AMD GPUs to programme them ...

'Mantle lets you individually program each GPU and its memory. It is up to the programmer.'

'Mixed GPU setups. Mantle doesn't care what you have installed. It's not like DX11'

And from what Brad was saying, and reading between the lines with DX12 being HW-OEM-agnostic, I'm guessing that being able to programme ANY GPU, or accelerator for that matter, is not outside the realm of possibility ..

and more Hallock quotes: "GeForce and Radeon GPUs could soon combine VRAM thanks to DX12, Mantle"
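If DX12 does expose linked GPUs the way these quotes suggest Mantle does, per-GPU addressing could plausibly surface as a node-mask-style selector at queue creation. A hedged sketch, with the function name being mine:

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch: address one GPU of a linked group individually via a node mask,
// which is one way "program each GPU and its memory" could look in the API.
ComPtr<ID3D12CommandQueue> MakeQueueOnNode(ID3D12Device* device, UINT nodeIndex)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
    desc.NodeMask = 1u << nodeIndex; // bit n selects GPU n in the linked group
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```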
 
What about those external GPUs that use USB 3.0, and all those Intel GPUs that are just turned off?

More articles popping up, but linking Tom's. Link
 
not Crossfiring with Nvidia, just making it easier for devs with multiple AMD GPUs to programme them ...
That part seems fine, but that's not the sort of claim I was responding to.

And from what Brad was saying, and reading between the lines with DX12 being HW-OEM-agnostic, I'm guessing that being able to programme ANY GPU, or accelerator for that matter, is not outside the realm of possibility ..
It's one thing to program towards varied hardware thanks to API abstraction, and another to program towards varied hardware in system configurations it was never intended to run in.
Then there's the fact that there are known cases where the vendors themselves put driver locks in place even where it was less problematic, like mixed systems with PhysX.


That's a humorously ambiguous headline, although for clarity's sake I think the article points to a more straightforward case where AMD's cards can combine memory in XFire modes and Nvidia's can combine memory in SLI modes, not SLI-XFire.
 
Earlier in this thread we had a discussion about what DX12 features the X1 could possibly have, and the impossibility of it having features that were still being decided or worked on.

Certainly, if there were one feature to cause serious debate and delay, it's this one. All other features seem trivial compared to this.

From a business standpoint, a cooperative standpoint, a technology standpoint - this is a lot of koom-bi-ya.
 
Tom's Hardware seems to be really confident about this news.

If we look at DX12 as a "vendor-agnostic Mantle", it would make sense.
Graphics drivers from different vendors have been supported since Windows 7, and we saw hybrid GPU setups using split-frame rendering work with Lucid Hydra over 5 years ago, so it's definitely not impossible.

Of course, Nvidia loves cockblocking graphics features from AMD cards in TWIMTBP games, so they're definitely going to hate this more than anyone.
 
Good ole MisterX may have been right after all.

Even a broken clock is right twice a day. Besides, in order for him to have been right, the Xbox One would have to have three GPUs in it, among other ridiculously outlandish statements.

all of those extra processors and programmable DSPs suddenly become potential additional graphics resources. This also makes the display planes even more valuable, and it also opens up a viable upgrade path for the existing box.

Extra processors? I think I missed those.
 
Extra processors? I think I missed those.

Me too... Especially since they seem to be powerful and fast enough to pitch in on GPU work while using the DDR3. Damn... Are they as powerful as Cell?

Guess I'll have to cram my PC with all the DSP-equipped sound cards I can fit in the PCI slots to use "THE EXTRA POWER". :)
 
Even a broken clock is right twice a day. Besides, in order for him to have been right, the Xbox One would have to have three GPUs in it, among other ridiculously outlandish statements.

Extra processors? I think I missed those.

Hmm, seems someone's sarcasm detector is broken. There are 50-some co-processors by MS's account, not all of which have been fully defined.
 