DirectX 12: The future of it within the console gaming space (specifically the XB1)

They are selling GPUs. If he says "hey! DX12 will be announced next year!" people will wait for that announcement and won't buy a current-gen GPU.

It is business.

Sure! But in that case why would he say that? He could have just avoided the question. If the Xbox One GPU had any extra specs it would be logical to assume they would end up on PC very quickly.
 
Sure! But in that case why would he say that? He could have just avoided the question. If the Xbox One GPU had any extra specs it would be logical to assume they would end up on PC very quickly.

What if one of the new features in DX12 is cloud computing? Something like "DirectCloud" xD
 
He probably did.
But I wouldn't be surprised to see esram/edram or bigger caches in future GPUs, because they need less power than active logic. It's just a matter of hitting the cost/performance/power sweet spot with future process steps.
I'm entirely serious. PC graphics could do with 10s/100s or more MB of on-die memory. Producer-consumer algorithms that are forced to spill off-die due to the slightest irregularity or imbalance (AMD tessellation says "hi") die horribly when the spill happens.

Try running multiple producer-consumer algorithms concurrently and you're dead.

What's the point in having a GPU that can support multiple graphics and compute kernels concurrently if the on-die memory says fuck off?
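To make that concrete, here is a minimal sketch in CUDA (purely illustrative; the kernel, names and sizes are my own, not anyone's shipping code) of a producer-consumer hand-off staged through on-die memory. The point is that while the intermediates fit on die the hand-off never touches DRAM; force that staging buffer off die and every produced value costs a DRAM write plus a DRAM read.

```cuda
// Hypothetical producer-consumer hand-off staged through on-die shared memory.
// If the staging buffer had to live in global (off-die) memory instead, the
// hand-off becomes the "spill" being complained about above.
#include <cstdio>

constexpr int TILE = 256;   // assumed tile size; one block's worth of work

__global__ void producer_consumer(const float* in, float* out)
{
    __shared__ float staging[TILE];              // on-die scratch, 1 KB here

    int i = blockIdx.x * TILE + threadIdx.x;     // grid covers the array exactly

    // Producer: transform the input into the on-chip buffer.
    staging[threadIdx.x] = in[i] * 2.0f + 1.0f;

    __syncthreads();                             // producer -> consumer hand-off

    // Consumer: read a neighbour's produced value straight from on-die memory.
    int left = (threadIdx.x + TILE - 1) % TILE;
    out[i] = 0.5f * (staging[threadIdx.x] + staging[left]);
}

int main()
{
    const int n = 1 << 20;                       // multiple of TILE by construction
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = float(i);

    producer_consumer<<<n / TILE, TILE>>>(in, out);
    cudaDeviceSynchronize();
    printf("out[1] = %f\n", out[1]);

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```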
 
I mean, cloud computing, like Forza 5 A.I. in the cloud.

I know, hence the :yes: :no:. With so many months since we last saw secret sauce thrown about (including Ray Tracing again!!) I couldn't help myself :devilish: I am weak... :(

Well, that and the tempting bit of alliteration ;-)
 
I'm entirely serious. PC graphics could do with 10s/100s or more MB of on-die memory. Producer-consumer algorithms that are forced to spill off-die due to the slightest irregularity or imbalance (AMD tessellation says "hi") die horribly when the spill happens.

Try running multiple producer-consumer algorithms concurrently and you're dead.

What's the point in having a GPU that can support multiple graphics and compute kernels concurrently if the on-die memory says fuck off?

It would make sense for them to try to cover what Intel (Iris Pro 5200) and X1 are doing. I'm not sure what APIs Intel has built for their EDRAM.
 
It would make sense for them to try to cover what Intel (Iris Pro 5200) and X1 are doing. I'm not sure what APIs Intel has built for their EDRAM.


Two issues as to why I'd be sceptical we'll see any large on-chip scratchpads migrating back to PC from consoles. First, I believe EDRAM has only two foundries at the moment (IBM & Intel), and the two GPU manufacturers rely on TSMC, who don't have any experience with it (do GloFo do any AMD GPU work? I read they'd moved entirely to TSMC). Second, ESRAM seems to require a lot of manual optimisation work to get the best results, and I can't see that being something amenable to being hidden behind an API, especially with arbitrary differences between vendor implementations, different GPU families from each vendor, and even between individual GPU SKUs within a family.
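To make the "manual optimisation" point concrete, here is a hedged sketch (a hypothetical kernel; CUDA is used only as a stand-in, since no public PC API exposes an ESRAM-style scratchpad). Explicitly staging a working set into a fixed-size on-die buffer bakes a hardware size into the algorithm, and that is the sort of decision that is hard to hide behind a vendor-neutral API:

```cuda
// Illustration only: SCRATCH_FLOATS stands in for "whatever this one chip
// happens to have on die". Change it and the blocking, launch configuration
// and performance profile all change with it -- manual tuning per part.
constexpr int SCRATCH_FLOATS = 8 * 1024;     // assumed per-block on-die budget (32 KB)

__global__ void blur_from_scratchpad(const float* src, float* dst)
{
    __shared__ float tile[SCRATCH_FLOATS];   // manually managed on-die copy

    // Manually decide which slice of the data lives on die for this block.
    int base = blockIdx.x * SCRATCH_FLOATS;
    for (int i = threadIdx.x; i < SCRATCH_FLOATS; i += blockDim.x)
        tile[i] = src[base + i];
    __syncthreads();

    // Work entirely out of the scratchpad (halo handling between blocks
    // omitted to keep the sketch short).
    for (int i = threadIdx.x + 1; i < SCRATCH_FLOATS - 1; i += blockDim.x)
        dst[base + i] = (tile[i - 1] + tile[i] + tile[i + 1]) / 3.0f;
}
```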
 
Two issues as to why I'd be sceptical we'll see any large on-chip scratchpads migrating back to PC from consoles. First, I believe EDRAM has only two foundries at the moment (IBM & Intel), and the two GPU manufacturers rely on TSMC, who don't have any experience with it (do GloFo do any AMD GPU work? I read they'd moved entirely to TSMC). Second, ESRAM seems to require a lot of manual optimisation work to get the best results, and I can't see that being something amenable to being hidden behind an API, especially with arbitrary differences between vendor implementations, different GPU families from each vendor, and even between individual GPU SKUs within a family.

Maybe not, but it seems like something they could do. I wouldn't put money on it, but it's one of the more believable ideas floated for DirectX 12 features. And isn't the point of the API to hide differences in implementation? It may not cover what's currently available, but it's a spec for vendors to design against.
 
But the Iris Pro 5200 is a true cache and doesn't require nor even allow software programmability afaik.

This mirrors my own understanding of what's going on; certainly Intel haven't proposed any extensions to DX, or at least haven't been public about it. Then again, Intel seem to be quite magically unaware of the highly negative opinions about their GPUs, and their driver 'support' team in particular. The only reason Iris is any use whatsoever, rather than another variant on i945 <shudder>, is that Apple more or less forced them down this road by making OpenCL so essential to OSX.

APIs are supposed to mask implementation variations but they're not magic and there are certain fundamental constraints. Part of the reason why DX and OpenGL were so high level for so long was that it made it much easier for s/w vendors to produce code that runs very well on a hell of a lot of wildly varying h/w. Now that we're talking about exploiting low level specific h/w features (X MB h/w cache, N ALUs, etc, etc) I have a hard time seeing how that can be obfuscated.

To abuse a car analogy: a higher-level API like DX11 is like buying a Bluetooth speaker that plugs into your cigarette lighter, vs buying a model-specific integrated B/T kit (DX12, console SDKs). One will work anywhere but has certain inefficiencies; the other offers the best experience, but if you want that tight integration then a VW group B/T kit won't work on GM or Ford group cars.

Of course I'm not a graphics programmer, so perhaps there are generalisable techniques that require lower-level access which are impossible with today's APIs but will work across multiple GPU generations and vendors. Mantle today is restricted to a very few AMD cards, and it will be very interesting to see whether in 2+ years it is still supporting the current cards.
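For concreteness, here is a small example of the kind of chip-specific numbers ("X MB h/w cache, N ALUs") that a lower-level model ends up exposing; it uses real CUDA calls only because that is a public API with queryable limits, and it is in no way DX12 code. The application, not the driver, then has to decide how to tile its working set for the particular part it finds:

```cuda
// Query per-device hardware limits and pick a tiling policy from them.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    printf("SMs (roughly the 'N ALUs' figure): %d\n", prop.multiProcessorCount);
    printf("On-die shared memory per block:    %zu bytes\n", prop.sharedMemPerBlock);
    printf("L2 cache:                          %d bytes\n", prop.l2CacheSize);

    // Hypothetical policy: size the working tile to half the on-die budget.
    // This is exactly the per-chip decision a high-level API used to make for you.
    size_t tile_bytes = prop.sharedMemPerBlock / 2;
    printf("Chosen tile size: %zu bytes\n", tile_bytes);
    return 0;
}
```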
 
Iris/Crystalwell is a fully automated cache. As long as you keep your access patterns good, it will give you nice bandwidth and latency benefits (for both CPU and GPU accesses). You don't need to specially code around it. For a modern GPU you always want to be cache friendly, since all GPUs have L1 and L2 caches as well. Crystalwell just builds on top of Intel's L3 cache.
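As a hedged illustration of the "keep your access patterns good" point (hypothetical kernels of my own, in CUDA rather than anything Intel-specific): the two kernels below do the same arithmetic but behave very differently under any transparent cache, whether that is Crystalwell or a discrete GPU's L2.

```cuda
// Same sums, different access patterns: a transparent cache helps
// automatically when neighbouring threads touch neighbouring addresses,
// and stops helping when each thread strides across memory on its own.
constexpr int W = 4096, H = 4096;

// Friendly: each thread sums one column, so on every iteration a warp of 32
// threads reads 32 adjacent floats -- a couple of cache lines per access.
__global__ void sum_columns_coalesced(const float* m, float* out)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (col >= W) return;
    float s = 0.0f;
    for (int row = 0; row < H; ++row)
        s += m[row * W + col];
    out[col] = s;
}

// Unfriendly: each thread sums one row, so on every iteration a warp scatters
// across 32 cache lines 16 KB apart and the cache stops being useful.
__global__ void sum_rows_strided(const float* m, float* out)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= H) return;
    float s = 0.0f;
    for (int col = 0; col < W; ++col)
        s += m[row * W + col];
    out[row] = s;
}
```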
 
Iris/Crystalwell is a fully automated cache. As long as you keep your access patterns good, it will give you nice bandwidth and latency benefits (for both CPU and GPU accesses). You don't need to specially code around it. For a modern GPU you always want to be cache friendly, since all GPUs have L1 and L2 caches as well. Crystalwell just builds on top of Intel's L3 cache.

Am I correct that ESRAM does not have the latency benefits of a true cache?
 
Here's a little recap of the situations that make me believe DX 12 is nothing more than a low-level API in direct response to Mantle, and that it will not require new hardware.

- Microsoft goes to AMD for an APU for its console
- Microsoft and AMD define the specs for the Xbox One APU.
- AMD claims no DX 12 is visible on the horizon
- Microsoft presents its console as an 11.1+ DX spec machine
- Xbox is released with no mention of additional DirectX support
- Microsoft defines DX 11.2 specs, and they require no new hardware
- AMD releases a low-level API called Mantle
- Microsoft speaks about DX 12
- Microsoft adds the Xbox logo on the DX 12 page.

It seems obvious that if Microsoft, the makers of the DirectX spec and API, were to ask AMD to build a GPU that was above the known DirectX 11.1 specs, this would cause AMD to, at the very least, suspect that a new DX 12 was to be released. And as such it would not make claims about DX 12 not being visible on the horizon, even less from the mouth of its vice president.
Microsoft presented its console as 11.1+ at Hot Chips. This leads me to believe that at that time DX 11.2 was not ready, and even less so DX 12.
A claim at that time that the console was DX 12 would have been a tremendous marketing weapon, since people would want to know more about it before deciding which console to buy. And all hardware comparisons would have had to be put on hold, since nobody would know whether DX 12 would include hardware changes. So the only explanation for why Microsoft didn't talk about it is simple: it didn't exist at the time (well, it existed on Xbox, but it wasn't called DX 12 since it was never intended to be released as a new DX)!
Then AMD releases Mantle, and Microsoft speaks about DX 12.
This leads me to believe DX 12 is nothing more than the Xbox One low-level API brought to PC and extended to other hardware, and that no extra hardware will be required.

Am I seeing this wrong? Or is this just logic?
 
You folks are bsc / delusional if you think DX12 is a direct response triggered by Mantle. Things don't move that fast. Mantle may have accelerated DX12 schedule, but it in no way is the sole and entire trigger for it.
 
You folks are bsc / delusional if you think DX12 is a direct response triggered by Mantle. Things don't move that fast.

This.

Back in the day, stuff for Windows Q+1 got specced before or around the time Windows Q RTMed. You need time to think about a feature, spec, fix, spec, fix,... and you simply can't do that post-RTM as it's too little, too late (unless it's a really tiny thing and most of your customers are internal). You tell your partners about your plans (and/or release some early build), gather feedback, tweak the spec/implementation and iterate even before you announce the feature to the public. Windows 8.1 RTMed in August and Mantle was unveiled in September (IIRC). If the scope of the changes is in the ballpark of DX11 <-> Mantle, then there's no way DX12 is a reactionary response to AMD's API.
 
You folks are bsc / delusional if you think DX12 is a direct response triggered by Mantle. Things don't move that fast. Mantle may have accelerated DX12 schedule, but it in no way is the sole and entire trigger for it.

How about Mantle being derived from the consoles' low-level APIs, and MS simply taking their console's lower-level version of DX, integrating an NVIDIA path and porting it to PC?
 
You folks are bsc / delusional if you think DX12 is a direct response triggered by Mantle. Things don't move that fast. Mantle may have accelerated DX12 schedule, but it in no way is the sole and entire trigger for it.

We aren't talking about DX12 being released to the public or being near completion by GDC on March 20th. For all we know, DX12 may have a 2015 release date alongside Windows 9.
 