The biggest change that I've seen for DX11 is the addition of programmable tessellation, together with the associated pipeline stages (the hull/"control point" shader and domain shader that bracket the fixed-function tessellator after the vertex shader). I guess this is what Rys is referring to. (Note that there was some relevant info in AMD's slides from I3D 2008... unfortunately the link appears to be dead on their site.) Then again, this article seems to model the DX11 tessellation pipeline slightly differently, so perhaps there have been some changes over time.
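For a rough picture of what the new stages look like from the application side, here's a minimal C++ sketch (assuming an already-created device and immediate context plus pre-compiled hull/domain shader blobs; the function and parameter names are just placeholders, not anything from the article):

[code]
#include <d3d11.h>

// Bind the DX11 tessellation stages. The hull shader runs on the patch
// control points and emits per-patch tessellation factors; the domain
// shader is then invoked once per tessellator-generated point to
// evaluate the final surface position.
void BindTessellationPipeline(ID3D11Device* device,
                              ID3D11DeviceContext* context,
                              const void* hsBytecode, SIZE_T hsSize,
                              const void* dsBytecode, SIZE_T dsSize)
{
    ID3D11HullShader*   hs = nullptr;
    ID3D11DomainShader* ds = nullptr;

    device->CreateHullShader(hsBytecode, hsSize, nullptr, &hs);
    device->CreateDomainShader(dsBytecode, dsSize, nullptr, &ds);

    // With tessellation active, the input assembler feeds patches
    // (here, 3-control-point patches) instead of plain triangles.
    context->IASetPrimitiveTopology(
        D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
    context->HSSetShader(hs, nullptr, 0);
    context->DSSetShader(ds, nullptr, 0);
}
[/code]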
It's an interesting choice to go this route, and it kind of implies that DX is designed to be "an API for graphics" rather than "an API for controlling the hardware". Perhaps not revolutionary, but it certainly leaves a big gap for stuff like CUDA/PTX and CAL/CTM, in addition to abstractions that sit on top of these like RapidMind/Brook.
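For contrast, here's a minimal CUDA sketch (a stock saxpy kernel, purely illustrative) of what the "controlling the hardware" model looks like: an arbitrary kernel launched over an explicit thread grid, with no vertices, rasterizer, or render targets in sight:

[code]
#include <cuda_runtime.h>

// Each thread handles one element; the grid is just a flat index space.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// The launch configuration is explicit: the programmer, not a graphics
// pipeline, decides how work maps onto the machine.
void run_saxpy(int n, float a, float* d_x, float* d_y)
{
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, a, d_x, d_y);
    cudaDeviceSynchronize();
}
[/code]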
Haven't heard much about the OM plans... anyone have links?
Anyways, regarding the linked interview/article, it kind of missed the point on a few levels:
"You're not going to have 8 GPUs, but you are going to have 8 cores"
... uhh... wow.
"Aside from learning how to multi-thread your game engine, that's it - you already know how to program a CPU"
Lol! Apparently parallelism is easy; it's just the GPUs that make it hard with all of their complicated 'shader models' and whatnot. That's probably not what he intended to say, but it came out kind of funny.
It wasn't a terrible interview, but it didn't really address GPU vs CPU architecture in enough depth to support the conclusions that it drew. I do agree with his closing comments about lots of interesting stuff becoming possible when we get to break down the current graphics pipeline a bit more, but I think that's kind of vacuously true.