Huddy: "Make the API go away" [Edit: He wants a lower level API available too.]

The question is why that level of care for the PC has diminished.
Is it simply about to-the-metal access, or is it financial?

Maybe because graphics aren't everything? Maybe because you don't need to-the-metal access for a game to be good?

And in my opinion, "good graphics" is quite relative. What Huddy says about games that should look ten times better is nonsense. A lot these days depends on things like animation, art direction and so on. And on consoles there are much higher-budget games with some of the most talented artists out there.
Check out the latest AMD tech demo. It's DX11 and it looks like a piece of shit. No amount of low-level access to the hardware would have made that demo look better.
 
I thought the 360 was DirectX, basically, but allowed some lower-level manipulation of the hardware? Not sure I'd want to see the APIs completely disappear. I'm sure the IHVs don't like the current situation, because it's hard to differentiate their products from the competition when performance and support are relatively equal thanks to the APIs. I'm not sure I want to see the PC space turn into the console space, where you have games written for AMD and not Nvidia, or vice versa, and where backwards compatibility is an issue because of drastic changes to the hardware in each generation of the product.

Edit: The only way I can see this type of thing working is with some sophisticated sandboxing in the OS, so that if your game crashes you're protected from locking up your OS. Not that games can't lock up your OS already.
 
I think DirectX will never go away; why would it? An abstraction layer is always nice to have, and will always be useful.

However, the question is whether developers should also be allowed to optimise at a lower level.

The biggest question is: can Microsoft ever allow this, especially now that Windows itself uses the GPU for more and more tasks on the desktop, even in the browser? Can something like that ever work reliably in combination with a game doing more low-level things? I'm guessing that could get pretty complicated.
 
A lot these days depends on things like animation, art direction and so on. And on consoles there are much higher-budget games with some of the most talented artists out there.
And every piece of art and animation you see is controlled by the speed we can render things at. Make no mistake, every game I've worked on has looked better before we had to trim it back to hit performance targets, and that's not going to change in the next generation (and by generation I mean the original meaning of the word: 25 years!).

We don't have separate teams for consoles, you do know that, right? It's the same budget and the same artists on the PC, PS3 and X360 versions. No one is crazy enough (I think) to suggest having separate teams make the console and PC versions of a game (except for a few rare cases where other factors have come into play).

That said, the correct thing to ask is whether simply increasing the performance of the engine would be that usable without an increase in the art budget as well. That is a completely valid concern about the idea.
 
To me the unfortunate thing is that if APIs went away, eventually most games would be Nvidia-only, Nvidia being the party most willing to make sure developers coded only for their hardware. Rather than seeing small differences, like MSAA support or PhysX add-ons, we'd have a complete lack of support for any other vendor. It wouldn't happen immediately, but it would be a gradual process that would exponentially tip the balance as more and more publishers take the cash.
 
Maybe because graphics aren't everything? Maybe because you don't need to-the-metal access for a game to be good?
Of course you don't need to-the-metal access for a game to be good!

But what games can't be made right now that would be possible with the power you actually have in a modern PC?

If your game design requires 100,000 enemies on screen, you can't make it right now. So yes, graphics do affect gameplay; never underestimate the influence that the primary interface into the game world has on the game itself.

Graphics are literally your window into the game world. Many games are possible right now, but some aren't and want/need a better window.
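
For what it's worth, the standard PC workaround when a design calls for huge numbers of similar objects is hardware instancing, which collapses thousands of identical meshes into a single draw call so that per-call CPU overhead stops being the limit. Below is a minimal D3D11 sketch, not a full renderer: it assumes the device, shaders, input layout and buffers were created elsewhere, and all the type and function names are hypothetical.

```cpp
#include <d3d11.h>

// Hypothetical vertex layouts: per-vertex mesh data plus per-instance data
// that the input layout marks as D3D11_INPUT_PER_INSTANCE_DATA.
struct MeshVertex   { float pos[3]; float normal[3]; float uv[2]; };
struct InstanceData { float world[16]; float tint[4]; };

// Draw `instanceCount` copies of one enemy mesh with a single API call.
void DrawEnemyCrowd(ID3D11DeviceContext* ctx,
                    ID3D11Buffer* meshVB,      // shared enemy mesh vertices
                    ID3D11Buffer* instanceVB,  // one entry per enemy
                    ID3D11Buffer* meshIB,
                    UINT indexCount,
                    UINT instanceCount)
{
    ID3D11Buffer* vbs[2]     = { meshVB, instanceVB };
    UINT          strides[2] = { sizeof(MeshVertex), sizeof(InstanceData) };
    UINT          offsets[2] = { 0, 0 };

    ctx->IASetVertexBuffers(0, 2, vbs, strides, offsets);
    ctx->IASetIndexBuffer(meshIB, DXGI_FORMAT_R32_UINT, 0);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // One call instead of `instanceCount` calls: the CPU-side cost is paid once.
    ctx->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);
}
```

That only helps when the enemies share geometry and state, though; anything that genuinely needs per-object state changes runs straight into the draw-call wall discussed later in this thread.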
 
Here's the question I have. We all realize that lower level access to the hardware would be a win in terms of performance and flexibility, no?

However, lower-level access also increases the likelihood of system-level crashes from badly written code, like back in the early days of PC gaming, the DOS/Win9x era. It's extremely rare these days for a game to take down your OS, thanks to the relatively high level of abstraction.

When MS moved to consolidate Windows (consumer and enterprise) on the NT kernel, they also further abstracted the graphics API (compared to Win9x) in order to enhance system stability.

How likely is it then that MS would relent and allow more "to the metal" level of access to the graphics hardware in a system? Or am I looking at this incorrectly and we're not actually discussing getting closer to the metal from an OS perspective?

I fully trust that good developers test their code against a broad set of hardware configurations, and thus system-level crashes would be rare for them. However, software releases over the past two decades don't engender much hope that all developers are so diligent. :p

Regards,
SB
 
Anyone remember how Dave was saying that some games were "front end" limited on ATI cards compared to NVidia? I assumed that it was geometry, given the resolution scaling and tessellation tests we've seen, but that never seemed like a good enough explanation given the 69xx's doubled geometry throughput and the limited use of tessellation in games aside from HAWX2.

Now it looks pretty clear to me that NVidia's superiority for most games lies not in their caches, shader architecture, tessellation, etc., but rather in their ability to handle state changes:
Huddy said:
On consoles, you can draw maybe 10,000 or 20,000 chunks of geometry in a frame, and you can do that at 30-60fps. On a PC, you can't typically draw more than 2-3,000 without getting into trouble with performance, and that's quite surprising - the PC can actually show you only a tenth of the performance if you need a separate batch for each draw call.
What's more, even with their experience from Xenos, ATI can't combat this problem on the PC. Somehow DX compliance is the issue.

It's quite surprising to me. We have multiple CPU cores giving drivers plenty of compute power. We have GPUs buffering frames so that they can do whatever processing they like on the command buffer to reduce batch count. How exactly is DX holding ATI back and not NVidia?
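
To put rough numbers on Huddy's claim, here is a back-of-envelope sketch. The per-batch CPU costs are purely assumed for illustration (they vary wildly with driver, state setup and CPU), but they show why a few thousand batches already eats a PC frame while a console shrugs off ten thousand or more.

```cpp
#include <cstdio>

int main()
{
    // Assumed per-batch CPU cost of submission + validation (illustrative only).
    const double pc_cost_us      = 10.0;  // D3D runtime + driver work per draw
    const double console_cost_us = 1.0;   // near-direct command buffer write

    const double frame_budget_us = 1e6 / 30.0;  // 30 fps -> ~33,333 us of CPU time

    std::printf("PC:      ~%.0f batches fill the whole frame budget\n",
                frame_budget_us / pc_cost_us);       // ~3,333 batches
    std::printf("Console: ~%.0f batches fill the whole frame budget\n",
                frame_budget_us / console_cost_us);  // ~33,333 batches
    return 0;
}
```

With those assumed costs the ratio lands in the same ballpark as Huddy's 2-3,000 versus 10-20,000 figure, and that's before the game spends any CPU on anything other than issuing draws.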
 
*Noob question* Why is the PC so limited with draw calls in comparison to consoles?
 
*Noob question* Why is the PC so limited with draw calls in comparison to consoles?

Because of the D3D9 API.
(OpenGL never had the problem, and D3D10+ doesn't have it nearly as much either.)

"What is my limit on draw calls for D3D10 to reach 60 Hz? 30 Hz?

Direct3D 9 had a limitation on the number of draw calls because of the CPU cost per draw call. On Direct3D 10, the cost of each draw call has been reduced. However, there is no longer a definite correlation between draw calls and frame rates. Because draw calls often require many support calls (constant buffer updates, texture bindings, state setting, and so on) the frame rate impact of the API is now more dependent on the overall API usage as opposed to simply draw call counts."
Source : http://msdn.microsoft.com/en-us/library/ee416643(v=vs.85).aspx

Basically, a lot of validation is done in D3D9 draw calls, and less in D3D10 because the API is better (validation is spread across the relevant calls); on consoles you don't have any validation at all. (It's all do-it-yourself.)
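
To make the MSDN quote concrete, this is roughly what the "support calls" around one draw look like in a D3D11-style renderer. Every one of these calls is a point where the runtime/driver may validate and reshuffle state, whereas a console engine just writes the equivalent words into a command buffer it owns. A sketch only, with the resources assumed to be created elsewhere:

```cpp
#include <d3d11.h>
#include <cstring>

// Typical per-object submission: the Draw itself is fairly cheap in D3D10+,
// but the surrounding constant-buffer update, texture binding and state
// setting are where the API/driver overhead accumulates.
void DrawOneObject(ID3D11DeviceContext* ctx,
                   ID3D11Buffer* perObjectCB,
                   const float worldMatrix[16],
                   ID3D11ShaderResourceView* diffuseTex,
                   ID3D11SamplerState* sampler,
                   ID3D11RasterizerState* raster,
                   UINT indexCount)
{
    // 1. Constant buffer update (Map with DISCARD makes the driver rename the buffer).
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(ctx->Map(perObjectCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, worldMatrix, sizeof(float) * 16);
        ctx->Unmap(perObjectCB, 0);
    }
    ctx->VSSetConstantBuffers(0, 1, &perObjectCB);

    // 2. Texture and sampler binding.
    ctx->PSSetShaderResources(0, 1, &diffuseTex);
    ctx->PSSetSamplers(0, 1, &sampler);

    // 3. State setting.
    ctx->RSSetState(raster);

    // 4. Finally, the draw call itself.
    ctx->DrawIndexed(indexCount, 0, 0);
}
```

Multiply that by a few thousand objects per frame and the per-call bookkeeping, not the GPU work, becomes the limit.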
 
Now it looks pretty clear to me that NVidia's superiority for most games lies not in their caches, shader architecture, tessellation, etc., but rather in their ability to handle state changes:
That's quite possible, but ironically it's the other way around for the CPU overhead of state changes in my experience (NVIDIA's is much higher than AMD's in DirectX). The key point here is that the API tries to abstract over something which is highly non-uniform... namely the set of "expensive" vs "cheap" state for a given piece of hardware. That's yet another reason to expose that to the developers who want the best performance.
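
A common engine-side mitigation today is to sort draws by a packed state key so that whatever the engine guesses is the most expensive state to change varies least often across the frame. A small sketch of that idea follows; the cost weighting baked into the key is an assumption about the hardware, which is exactly the information a lower-level API could expose instead of leaving developers to guess.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One queued draw plus the state it needs. The renderer fills these in during
// scene traversal and submits them all at the end of the frame.
struct DrawRecord
{
    uint16_t shaderId;
    uint16_t textureId;
    uint16_t constantsId;
    uint32_t meshId;
};

// Pack the state the engine *guesses* is most expensive to change into the
// most significant bits, so sorting clusters draws that share it.
static uint64_t SortKey(const DrawRecord& d)
{
    return (uint64_t(d.shaderId)    << 48) |
           (uint64_t(d.textureId)   << 32) |
           (uint64_t(d.constantsId) << 16);
}

void SubmitFrame(std::vector<DrawRecord>& draws)
{
    std::sort(draws.begin(), draws.end(),
              [](const DrawRecord& a, const DrawRecord& b)
              { return SortKey(a) < SortKey(b); });

    // Walk the sorted list, issuing a state change only when the key differs
    // from the previous draw, then issue the draw call itself.
    // (Actual API calls omitted; see the D3D11 sketch earlier in the thread.)
}
```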

Because of D3D9 API.
(OpenGL never had the problem, D3D10+ don't have the problem that much either.)
Yes the overhead is less in D3D10+ and OpenGL, but it's still quite large compared to consoles. Try rendering 10k batches where you change the texture (or similar) between each one even in OpenGL ;) In Windows, the operating system owns the graphics memory, so at some level there are expensive APIs that need to be called to make sure that everything is in place for rendering.

It's worth noting that obviously some of this overhead is necessary to make a multi-tasking OS work at all (you can't run two games at once on consoles!), but a lot of it would be unnecessary if game developers were willing to structure their code like they do for consoles: managing their own versioning and dependencies and so on. That stuff is more efficient for the game to do, whereas on the PC right now the driver tries to reconstruct it all from your DX/GL command stream on the fly.
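
As a concrete example of the "managing their own versioning" point: on console, dynamic per-frame data typically goes through a game-owned ring buffer fenced against the GPU, rather than letting the driver rename buffers behind a Map/Discard. A rough sketch of the idea, with the fence check deliberately left abstract since it depends entirely on the platform API:

```cpp
#include <cassert>
#include <cstdint>

// Game-owned ring buffer for per-frame dynamic data. The game knows exactly
// when the GPU has finished reading a region (via a fence/label it inserted),
// so no driver-side tracking or buffer renaming is needed.
class DynamicRing
{
public:
    DynamicRing(uint8_t* base, size_t size) : m_base(base), m_size(size) {}

    // Returns a CPU-writable pointer that stays valid until the GPU consumes it.
    void* Allocate(size_t bytes, size_t alignment)
    {
        assert(bytes <= m_size);
        size_t head = Align(m_head, alignment);
        if (head + bytes > m_size)
            head = 0;                          // wrap to the start of the ring

        // The game is responsible for ensuring the GPU has already passed this
        // region, e.g. by waiting on a fence it wrote a couple of frames ago.
        WaitUntilGpuPassed(head + bytes);

        m_head = head + bytes;
        return m_base + head;
    }

private:
    static size_t Align(size_t v, size_t a) { return (v + a - 1) & ~(a - 1); }
    void WaitUntilGpuPassed(size_t /*offset*/) { /* platform fence check here */ }

    uint8_t* m_base;
    size_t   m_size;
    size_t   m_head = 0;
};
```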
 
Could the future be to segment GPU memory, like system memory, and have an OS thread that runs in fixed memory, protected by the OS, through a driver and APIs? That way you could let the game use the GPU with complete freedom in a large chunk of the memory while reserving a finite portion for the OS, for stability.
 
What about DX11 multithreaded rendering? AFAIK it would first have to be supported in the graphics drivers of both vendors, though.

The implementation hasn't yet shown whether it's amenable to good near-linear scaling, but at best it gives you an N× speed-up, where N is the number of cores.

The command buffer support was/is the bigger hope for a dramatic speed-up, but the choice to go without fixups (even the 360's slightly odd implementation would have been better than none at all) was a serious mistake IMHO, and it makes the feature only useful as a display-list system, which isn't enough for serious usage.
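
For reference, this is the shape of the DX11 multithreaded path being discussed: worker threads record into deferred contexts, each producing an ID3D11CommandList that the main thread replays on the immediate context. A minimal sketch with error handling omitted; whether the driver executes this without serialising internally is exactly the open question above.

```cpp
#include <d3d11.h>

// Worker thread: record draw calls into a deferred context and hand back a
// command list. (In practice you'd keep one deferred context per thread
// rather than creating one per chunk.)
ID3D11CommandList* RecordChunk(ID3D11Device* device /*, scene chunk ... */)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... state setting and Draw calls for this chunk of the scene go here ...

    ID3D11CommandList* cmdList = nullptr;
    deferred->FinishCommandList(FALSE, &cmdList);  // FALSE: don't carry state over
    deferred->Release();
    return cmdList;
}

// Main thread: replay the recorded lists in order on the immediate context.
void SubmitChunks(ID3D11DeviceContext* immediate,
                  ID3D11CommandList** lists, int count)
{
    for (int i = 0; i < count; ++i)
    {
        immediate->ExecuteCommandList(lists[i], FALSE);
        lists[i]->Release();
    }
}
```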
 
So, do you think multithreaded rendering support and the command buffer feature will materialize, or will they remain wishful thinking until the next DirectX revision?
 
Dumb question, but I was actually thinking about this thread a lot today...

Could this be why AMD is gearing up their development team? I've seen some ads lately for software developers for 'em...
 
So, do you think multithreaded rendering support and the command buffer feature will materialize, or will they remain wishful thinking until the next DirectX revision?
Depends what you mean. Drivers will support the multithreaded rendering stuff in DX11 soon enough, but as DeanoC noted, that's not addressing the base problem. That's at best distributing the slow code over 2-4 cores, not fixing the slowness.

"Command buffer feature" is much more nebulous... to truly extract the performance that people get on the consoles from directly addressing command buffers you need to write hardware-specific code, and thus you sacrifice portability. I'd argue that at least the top tier games can handle a few more paths on PC consider what they already do on consoles (and no one is arguing that we eliminate the higher-level layers), but that's what this thread is about :)
 
I suspect he actually isn't too concerned about competing with the 360 and PS3 - I'd bet he has his sights set firmly on competing with the next generation of consoles.
 