> Almost all the studios which survived till now still have their own engines! Basically all console newcomers (RAD, for example) still build their own new engines, which have some chance of being ported to PC (see Resident Evil).

I'm talking about PC. On console it's somewhat less relevant, as very few developers willing to target - say - the PS3 are scared of a "low level" API like libgcm. And rightfully so... these APIs aren't really difficult per se, especially when they target only one piece of pre-determined hardware.
Also note that when I say "engines", I'm talking about technology created by graphics "experts" and used across multiple titles. For example, Frostbite is obviously an engine even though it is not sold as middleware.
> And a great deal of them are stuck at DX9/OGL2 and will never move forward.

That further supports my point - if they haven't moved forward to even DX10/11, then it has nothing to do with the "ease of use" of the API, and continuing to cater to programmers who want "safer, easier" APIs is wasted effort.
> It's always possible to move forward; it was always possible to make "cool" APIs. I conjecture the problem is _not_ the game developers; the problem is that the API providers don't know where something experimental leads, and because it's business they feel reluctant to throw resources behind a crazy idea - they don't want market segmentation and suchlike business concerns.

I don't think the resourcing has ever been a huge concern, to be honest... as you yourself point out, to these big companies it's peanuts. I wouldn't be surprised if some misguided notion of "protecting" the advantages of the Xbox platform vs. PC has played a role in the past, but I don't think anyone has really said "we're not doing this because it would take some time".
> Ultimately, the reason for DX12 coming 12 years too late is business, nothing else IMHO.

Obviously DX12 as it is defined would not have worked on hardware 12 years ago, so you can hand-wave about a "DX12-like API", but I think it's far from clear that you could have done a similarly portable and efficient API more than a few years ago.
> Why?

They couldn't support WDDM 2.0, for one, which is an important part of the new API's semantics. It hasn't been that long since GPUs gained properly secured per-process virtual address spaces - certainly 12 years ago there were a lot of GPUs still using physical addressing and KMD patching, hence the WDDM 1.x design in the first place.
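A toy sketch of the difference, with invented struct and function names (nothing here is a real WDDM interface): under a WDDM 1.x-style model the kernel-mode driver has to parse patch locations and write current physical addresses into each command buffer at submit time, whereas with per-process GPU virtual addressing (the WDDM 2.0 model) the user-mode driver can bake final GPU VAs in up front and the kernel never touches the buffer contents.

```c
#include <stdint.h>

/* Hypothetical illustration only - not a real driver interface.
 * WDDM 1.x style: the UMD emits command buffers containing
 * placeholder allocation references; at submit time the KMD patches
 * in whatever physical address each allocation currently has,
 * because the GPU has no per-process virtual addressing. */
typedef struct {
    uint32_t offset_in_cmdbuf; /* word index of the address field  */
    uint32_t allocation_id;    /* which resource it refers to       */
} PatchLocation;

static void kmd_patch(uint32_t *cmdbuf,
                      const PatchLocation *locs, int nlocs,
                      const uint64_t *phys_addr_of_alloc)
{
    for (int i = 0; i < nlocs; ++i)
        cmdbuf[locs[i].offset_in_cmdbuf] =
            (uint32_t)phys_addr_of_alloc[locs[i].allocation_id];
}

/* WDDM 2.0 style: each process has its own GPU virtual address
 * space, so the UMD writes final GPU VAs directly and no kernel-side
 * parsing or patching of the command buffer is needed. */
static void umd_emit_va(uint32_t *cmdbuf, int slot, uint64_t gpu_va)
{
    cmdbuf[slot] = (uint32_t)(gpu_va & 0xffffffffu); /* low 32 bits, toy format */
}
```

The practical consequence is the one argued above: without per-process VA spaces, cheap user-mode command buffer generation - a cornerstone of the DX12 model - isn't safely possible.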
> Example: UE is no longer sold for 7-digit sums, but is "free-to-play".

Ha, don't be fooled by their re-positioning - do the math and it's about the same cost as it was before for a AAA studio. The only difference is that it's somewhat more accessible to indie devs now too, a la Unity.
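To make "do the math" concrete - assuming UE4's published terms of a 5% royalty on gross revenue, and using purely illustrative revenue figures:

```c
/* Back-of-envelope comparison of UE4's royalty model with the old
 * up-front 7-figure licenses. The 5% rate is Epic's published gross
 * royalty; the revenue numbers below are illustrative, not real data. */
static double royalty_millions(double gross_millions)
{
    return gross_millions * 0.05; /* 5% of gross revenue */
}

/* A AAA title grossing $100M owes $5M - the same order of magnitude
 * as the old up-front license - while an indie title grossing $200K
 * owes only $10K, which is what makes the model newly accessible. */
```

So the repositioning mostly changes *who can afford entry*, not what a big studio ultimately pays.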
> I doubt that. I've preached it to developers since 2006, and it was met with "where's the eye-roll emoticon?" each time I tried. So I'm having a kind of deja vu here.

Not sure which developers you were "preaching" to, but as I said, for as long as I've been in graphics it has been clear to at least the AAA folks.
> Yes, that could happen; that's why we remove the frontend and program the GPU directly.

Meh, you can already do that with "compute", and the notion isn't even well-defined for the fixed-function hardware. There are sort/serialization points in the very definition of the graphics pipeline - you can't just hand-wave those away with "we'll just talk *directly* to the hardware this time, guys!". Talking directly to the hardware *is* talking to the units that spawn threads, create rasterizer work, etc.
> The CP, which is a totally standard CPU - but we cannot even program that directly, because it's managed by the byte-code generated by the driver.

By all means expose the CP directly and make it better at what it does! But that's arguing my point: you're effectively just putting a (currently kinda crappy) CPU on the front of the GPU. That's totally fine, but the notion that the "GPU is a better CPU" is kind of silly if your idea of a GPU includes a CPU. And you're going to start to ask why you need a separate CPU-like core just to drive graphics on an SoC that already has very capable ones nearby...
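To illustrate why the CP is "just a CPU on the front of the GPU": at its core it is a small processor fetching packets from a command stream and poking the fixed-function units behind it. Here is a deliberately simplified sketch - the opcodes, packet layout, and struct are invented for illustration; real packet formats (e.g. AMD's PM4) differ.

```c
#include <stdint.h>

/* Toy model of a GPU command processor: a little CPU interpreting
 * a command stream. All opcodes and fields are made up. */
enum { CMD_END = 0, CMD_SET_REG = 1, CMD_DRAW = 2 };

typedef struct {
    uint32_t regs[16]; /* pretend GPU state registers      */
    uint32_t draws;    /* draws kicked off to the frontend */
} Gpu;

static void cp_run(Gpu *gpu, const uint32_t *stream)
{
    for (;;) {
        switch (*stream++) {
        case CMD_SET_REG: {            /* [opcode][reg][value] */
            uint32_t reg = *stream++;
            gpu->regs[reg & 15] = *stream++;
            break;
        }
        case CMD_DRAW:                 /* [opcode] - spawn rasterizer work */
            gpu->draws++;
            break;
        case CMD_END:
        default:
            return;
        }
    }
}
```

This is exactly a fetch/decode/execute loop - which is the point: once you argue for programming this unit directly, you are arguing for a CPU driving the GPU, and then the question becomes why that CPU shouldn't be one of the capable cores already on the SoC.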