DX-Next/WGF -- Features vs Performance

Reverend

Does anyone know if developers are pushing the IHVs hard for:

a) more performance; or
b) more features

in the IHVs' WGF/DX-Next chips? And no, I don't think we can logically have both!

During the timeframe of DX-Next, will the current GPU/VPU programming model be:

a) feature-limited; or
b) performance-limited?
 
Hasn't it always been features first and performance later?
The R300 broke the mould slightly by delivering both "FP features" and "performance", but remember how long it has taken a standard feature like AA to become the norm?

And IHVs are still finding ways to improve trilinear and aniso speeds.

The more I think about it, the muddier the answer becomes.
 
From a user's perspective I've always looked for the exact opposite: I'll almost always take performance over features.

What good are features if they don't perform well enough for you to use? I can see the developers having a different point of view since new features mean marketing gold, but couldn't common sense apply a bit too? :?
 
IHVs have to innovate with new features, otherwise we would still be stuck with TNT-style GPUs sporting 128 TMUs, EMBM and nothing else... and if an IHV cannot bring new features for consumers to gobble up and developers to implement in their new games, then it cannot justify its transistor budget, which would adversely affect fabrication plants and cause an economic meltdown in Taiwan.

Or something.... :rolleyes:
 
Well, "common sense" can be in the form of being extremely selective in picking a feature (or a few features) to use as well as not going overboard with the features they picked wrt the developers.
 
Sometimes it's hard to distinguish between the two.

Take one-pass render-to-cubemap as an example: this is a feature nearly every 3D programmer will embrace, yet it provides nothing we can't achieve with current APIs/HW today; it's just a faster alternative (I hope so :LOL: ).
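
For the sake of illustration, here is roughly what we have to do today: render the scene once per face, six passes in all. This is only a sketch assuming a D3D9-style path; DrawScene() and faceView[] are placeholders for whatever the application actually submits.

#include <d3d9.h>

// Hypothetical helper supplied by the application: submits the scene's draw calls.
void DrawScene(IDirect3DDevice9* device);

// Today's render-to-cubemap: loop over the six faces and redraw the whole
// scene for each one. The CPU submission and vertex work are repeated six
// times; a "one-pass" feature would let the hardware route one submission
// to all six faces itself.
void RenderToCubeMap(IDirect3DDevice9* device,
                     IDirect3DCubeTexture9* cubeTex,   // created with D3DUSAGE_RENDERTARGET
                     const D3DMATRIX faceView[6])      // one 90-degree-FOV view matrix per face
{
    for (UINT face = 0; face < 6; ++face)
    {
        IDirect3DSurface9* faceSurface = NULL;
        cubeTex->GetCubeMapSurface(static_cast<D3DCUBEMAP_FACES>(face), 0, &faceSurface);

        device->SetRenderTarget(0, faceSurface);
        device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                      D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
        device->SetTransform(D3DTS_VIEW, &faceView[face]);

        device->BeginScene();
        DrawScene(device);   // all draw calls resubmitted for this face
        device->EndScene();

        faceSurface->Release();
    }
}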

The same principle applies to many upcoming features, such as true GPU topology (Dean Calver presented a way to do limited topology stuff with SM3.0 HW in ShaderX3).
 
Rev, I am going to play Devil's Advocate here and go for the jugular... apologies in advance for causing any problems.

This has been debated before umpteen times and caused a lot of problems for some people but it illustrates my point perfectly:

Was the most important innovation (1) the implementation of floating point in general, or (2) the implementation of floating point 32 (FP32)?

If we say it was (1), then the new feature arrived together with performance, so we had the best of both worlds. It was also no small thing, as everything before it had been based on radically different technology.

If we say (2), then it was not usable but added more precision. If precision alone was what was needed, then the fact that we moved to a floating point architecture was not as important as the fact that it was FP32. The only problem with this is that it has taken a long time for the feature to finally become usable (and even then only to a limited degree).

So... if a feature is implemented, is it not important that the return is also immediate for the end user, the developer and the IHV?

Or is it more important that the feature is implemented and a foundation is laid for a strong specification which will take time to actually be used?

For the user and the developer I am going to have to go with the former, but that is a personal opinion.
 
This has sort of been the question every time a new architecture has come about, yet sure enough the high end arrives offering both more performance and more features. MS and the IHVs are timing new DX revisions not just on MS's needs/wants but also on process node advancements - more silicon space usually allows not just for extra features but also for further increases in performance per feature, or per pixel. My guess is that anyone who implements a single-chip, high-end WGF2.0 solution will do so on 65nm. Of course, the IHVs also know the timescales for process changes and their own process plans, and have rough ideas of the budgets for particular features, which is where the bargaining with MS over "what's in and what's out" of DX comes in.

Possibly, though, the question is more pertinent to the mid-range, where the balance of past and future requirements becomes a little trickier: the silicon budgets are lower and tougher choices need to be made over what goes in and what goes out - especially when we are on the cusp of moving from legacy fixed-function operations over to fully shader-laden titles.
 
Two very similar questions come to mind:

1) Is the 'best' graphics card for the game developer necessarily the same 'best' card for the consumer?

Or maybe the question should be:

2) How much die space (transistor budget) of current-generation cards should be dedicated to features that help game developers research next-generation game engines?
 
PeterAce said:
Is the 'best' graphics card for the game developer necessarily the same 'best' card for the consumer?
Nope, no way. The best for the developer is the ultimate top end; the best for the consumer is the best value balance 'tween performance, price and features. (Generally in that order for me, too.)
 
DaveBaumann said:
Possibly, though, the question is more pertinent to the mid-range, where the balance of past and future requirements becomes a little trickier: the silicon budgets are lower and tougher choices need to be made over what goes in and what goes out - especially when we are on the cusp of moving from legacy fixed-function operations over to fully shader-laden titles.

The way new GPUs are designed makes the mid-range a harder battlefield than ever: the mid-range currently has the same feature set as the high end, but cuts transistor count by removing duplicated areas of the die, e.g. 'pipelines'.

I can't see this changing anytime soon WRT NVIDIA and ATI, but another manufacturer (S3 perhaps?) may target the mid-range and have to differentiate on features that are not strictly related to 3D.
 
PeterAce said:
Two very similar questions come to mind:

1) Is the 'best' graphics card for the game developer necessarily the same 'best' card for the consumer?

Almost never. Game developers are usually free to buy whatever HW they need, which is quite different from consumers, who usually have a limited budget.
 
I personally find it silly when people say "as a gamer I only want features that have good performance". We have an excellent example of just that with the R300 - and how long did it take for games to make decent use of its features? We're just now seeing it, IMO, with HL2 and FC (granted, owners of the R300 can play those games with good performance now, but so can R420/NV40 owners). Developers must have the hardware in hand for quite a while before incorporating new features into shipping titles. So the "as a gamer" stance is pure nonsense. Unless IHVs start shipping separate "gamer" and "developer" SKUs, that is.

As for the thread question - I think developers will be pushing for more features, as IMO they impact game development a lot more than performance does. UE3, for example, is being developed without any guarantee of future performance, but its feature set is set in stone (which makes me wonder how much the mere existence of the NV40 has aided them in optimizing for SM3.0). Basically, you can code for features and hope/wish/pray for performance, but it doesn't work the other way around.
 
PeterAce said:
Two very similar questions come to mind:

1) Is the 'best' graphics card for the game developer necessarily the same 'best' card for the consumer?
JC said no (can't be bothered to find the URL to the .plan where he explicitly said this), so he must be right! He he :)
 
IIRC it was JC's .plan where he talked about the NV30 (FX 5800) vs. the R300.

All you need to know is that he preferred the FX 5800 over the Radeon 9700. Ask any gamer which they'd prefer. ;)

Seriously, there are fewer "limitations" on the NV30 in terms of shader length, precision, etc., so as a developer Carmack preferred the 5800 because of the flexibility it gave him to "try new stuff." That is a completely different question from what is usable/practical from a gamer's perspective.
 
I am beginning to understand now Joe.

However did the 5800 do JC any good? Sometimes getting what you want causes you to become handicapped.
 
Tahir said:
I am beginning to understand now Joe.

However did the 5800 do JC any good? Sometimes getting what you want causes you to become handicapped.

Hence Doom 3... yeesh, I cannot believe people say the graphics are "great" or "awesome" in that game... well, then again, I am biased after seeing Far Cry, HL2 and, most of all, Chaos Theory. :oops:
 
suryad said:
Tahir said:
I am beginning to understand now Joe.

However did the 5800 do JC any good? Sometimes getting what you want causes you to become handicapped.

Hence Doom 3... yeesh, I cannot believe people say the graphics are "great" or "awesome" in that game... well, then again, I am biased after seeing Far Cry, HL2 and, most of all, Chaos Theory. :oops:

How would Doom 3 have benefited if JC had used R300 hardware for development instead of NV3x?
 
It wouldn't have mattered; that was just a jab at D3 by another individual who disliked it for some reason.
 