Next-gen graphics chips approach - where's the software?

Ta micron - I posted this point of view on 20 sites that matter, trying to raise a groundswell of questioning around the proposition:

"For 7 years we have been forced to wait 18 months - 2 whole generations of 3d cards - before each cycle of amazing hardware is used in games - is this trend set to continue or can something improve this obsolence before use paradigm?"

I hoped that here, if anywhere, someone could offer some hope for the future. Whilst I can see 100 ways this will not happen, I am looking for one or two exceedingly clever and practical ways in which it might.

As it stands I am sure NVidia and ATi would be thrilled if it did - as would everyone who sells PC equipment upgrades - more demand for high-end gear means more sales. If game developers could find a way of doing it that would not be too risky or laborious, I am sure they would entertain the idea. There must be a catalyst that will improve the way things stand.
 
Software takes that long, IMHO, to catch up with hardware because developers also need time to develop their games, and the presupposition is that they have X-compliant hardware in their hands to develop on.

Even if things were different, performance and what the widest majority of the user base out there owns are considerations for developers too. This topic is as old as 3D, if not older...
 
There's always a trade-off.

Think about this as a developer. If you construct your title around HW that is "too recent", then you end up alienating much of your potential user base while attracting too few new customers at the high end (the point of diminishing returns).

OTOH, if you go "too ancient", then you wind up attracting too few new customers at the low end while turning everyone else off with your piss-poor graphics engine.

The logical conclusion is that there is an optimal point along the curve between those two extremes where the developer maximizes his profit. Now, of course, there are such things as scalable engines and scalable art assets but those are not free either; these costs will go into the calculation as well. Remember, the devs are in this to turn a profit, not make mind-blowing software that maxes out your VGA.
 
One of the biggest problems is content creation. It takes sooo long to do all the artwork, levels, and whatnot.
 
You can always develop a game based around what you expect next-generation hardware to deliver, get an NV3x, and go bankrupt.
 
Thank you for the negative examples - I can think of many more. But with a scalable graphics engine - such as Source - you can now target many levels of users and a much broader install base, while still showing you are a thought leader doing cutting-edge stuff. It's the best of all worlds: you get a lot more cash, you can be cutting edge for fast adopters, and you cover all generations of your user base - something an unscalable graphics engine can't do.

So a scalable engine gives you a far greater user base and the claim to cutting-edge graphics. It should be simple to scale it upwards even further in expectation of increased performance - e.g. Pixel Shader 2.0 throughput - if not the anticipated features of, say, NV40 or NV50, or R420 or R500.

I simply don't accept that a game has to have an unscalable graphics engine targeted at only one demographic to maximise its profit potential in the short, medium or longer term. A scalable engine would also be hugely attractive in a licensing deal over an unscalable one.

Put the negative thoughts aside and try to see the glass as half full!
 
The problem you have, g__day, is that it takes so long to write an engine. Think about the Source, Doom3 and UT2k3 engines. How long have they been working on them? You start work with a good idea of what is in store for the future, so you have to take a snapshot in time. However, it takes you a long period of time to realise your engine, so by the time it's ready... the future is here, or has already passed. How good is your snapshot now that it has taken you X days to develop it? That's the whole cart-before-the-horse argument. And yes, a clever developer can do some things to help, but it's not easy at all to do...
 
jb, it's a good analogy - but there is hope. To a degree you can future-proof things by designing in extensibility. Some of the delivery systems I designed for banks in the 80s and 90s are still going strong because I designed them to be fast, simple, robust and eminently extensible.

Back then I had no view of an internet or e-mail or PDAs or middleware or 3G wireless or WebTV or a host of other technologies. But I saw the fundamental need was to transport messages from originating device to host systems anywhere. I knew there would be new devices and message protocols all the time - so that level of extensibility was designed in. The architecture was fit to grow.

So too should a great game engine be. You shouldn't have to throw away a great game engine - you should be able to refine its sub-engines dealing with physics, collisions, A.I., sound, networking and graphics as opportunities arise. Let's concentrate on graphics alone. If you have a view that the future will deliver more features and performance, you might say: today I can have models with 5K vertices and run 20 shaders of modest size against them. In a year my models might be 10K - 20K vertices and I might use 40 shaders, some of increased length and complexity - but I still need a fallback for older cards that will only handle 5K vertices and 20 or fewer shaders.

To me that is designing the first stages of scalability and longevity into a game engine. I may not know what 3D effects a future card can do, but I can assume: 1) they will be delivered by shaders, so I need a way of extending my shader library over time; and 2) not all users will have these cards, so I need fallbacks to simpler models and alternate shader routines.
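To make that concrete, here is a minimal C++ sketch of points 1) and 2) - the names (ShaderTier, GpuCaps, pickEffect) and the tiering scheme are made up for illustration, not taken from any real engine:

```cpp
#include <string>
#include <vector>

enum class ShaderTier { Fixed, SM1, SM2, SM3 };   // oldest to newest

struct GpuCaps {
    ShaderTier tier;          // highest shader model the detected card supports
};

struct EffectVariant {
    ShaderTier  minTier;      // lowest tier this variant runs on
    std::string shaderFile;   // e.g. "water_sm2.fx" vs. "water_fixed.fx"
};

// Each effect in the library carries several variants, listed newest first.
// The engine picks the best variant the detected card can handle, so new
// cards get the new shaders while older cards fall back automatically.
const EffectVariant* pickEffect(const std::vector<EffectVariant>& variants,
                                const GpuCaps& caps)
{
    for (const EffectVariant& v : variants)
        if (caps.tier >= v.minTier)
            return &v;        // first (best) variant the card supports
    return nullptr;           // effect is simply not offered on this card
}
```

Extending the shader library over time then means adding new variants at the front of each list; the fallback path for older cards is already there.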

Once I have written these shaders I have them for all time - so hopefully re-use is very high. When a new card comes out with some amazing feature I ask: 1) how do I deliver that with shaders, and 2) what is the fallback for older technology - can I emulate or approximate this effect, or must I simply not provide it on older cards? Complexity goes up, but so too does longevity and re-use. And if your engine is good enough you can license it and your shader library many times over. You could probably borrow a trick from Linux and have a bevy of talented users willing to contribute neat shaders to an open-source body all the time - enriching everyone who wants, or has developed, great 3D effects.

Just my view of Nirvana, as I said. It's either that, or tolerate cutting edge that will be obsolete before it's used - many, many times over.
 
g__day said:
jb, it's a good analogy - but there is hope. To a degree you can future-proof things by designing in extensibility...

To be honest, at this point, I believe the issue is going to be less and less the "time required to create an engine", and more and more the time for the content (art) required to take advantage of the new engines/hardware.
 
Joe wrote:
To be honest, at this point, I believe the issue is going to be less and less the "time required to create an engine", and more and more the time for the content (art) required to take advantage of the new engines/hardware.

Agreed.

jb wrote:
The problem you have, g__day, is that it takes so long to write an engine. Think about the Source, Doom3 and UT2k3 engines.
If I'm not mistaken, Carmack said that they (id) have been done with the engine for quite some time. The artwork and content is what's taking so long to finish.
 
g__day said:
"For 7 years we have been forced to wait 18 months - 2 whole generations of 3d cards - before each cycle of amazing hardware is used in games"

I'm not sure if this is true.
This year's hot new-generation card is the GeForce FX 5200.
Last year's was the GeForce4 MX440.
The year before that, the GeForce2 MX400.

Do you honestly think these cards were not used to their potential in the year of their release?

Because these (and the RVE, R7500, R9200) are the cards that are sold to the majority of the gamers.
At least to those that upgraded that year.
I'm pretty sure that the majority of people don't upgrade more frequently than every two years.
 
Content is/will definitely be taking the most time. And while having the most flexible engine ever conceived sounds great in theory, it's usually not how things have worked out - although it's definitely getting better, and soon we won't need to change the structure and functionality of an engine much.
 
To reuse the artwork, you can freeze the development of the engine, or you can make both scalable.

To make a game engine scalable, you need to use a high-level graphics language and let the driver compile it as best it can (GLSL). The main problem with that is that you cannot be sure how it will run. So you need the engine to benchmark specific things at runtime and enable/disable them to tune itself. And you need to be able to scale down the polygons and textures.

Things like bump-mapping take the opposite approach: they let the hardware make things look better if it supports them. But those are dependent on new features you don't know will exist in future hardware. You cannot code directly for them. But when you use the approach outlined above, smart drivers can enable them, as long as the source code gives them enough information about the intended result.
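As a rough illustration of the benchmark-and-tune idea, here is a minimal C++ sketch - the OptionalFeature struct, the timeScenePass callback and the budget numbers are assumptions made up for this example, not a real API:

```cpp
#include <string>
#include <vector>

struct OptionalFeature {
    std::string name;          // e.g. "bump mapping", "soft shadows"
    double      budgetMs;      // extra frame time we are willing to spend on it
    bool        enabled = false;
};

// Time one representative pass with each feature off and on; keep the feature
// only if its measured extra cost fits the budget. The same loop structure can
// drive polygon/texture scaling: keep reducing detail until the frame time fits.
void tuneFeatures(std::vector<OptionalFeature>& features,
                  double (*timeScenePass)(const std::string& featureName))
{
    const double baseline = timeScenePass("");          // all extras off
    for (OptionalFeature& f : features) {
        const double withFeature = timeScenePass(f.name);
        f.enabled = (withFeature - baseline) <= f.budgetMs;
    }
}
```

The point is that the engine never has to know which card it is running on - it only has to measure what the card actually manages and react to that.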
 
An additional bonus of a scalable engine is that, if you make a big hit, you can inspire hardware manufacturers to implement things that improve it.

:D

Of course, we can leave all of that to Microsoft. They would like that.



(I am not against Microsoft, I am trying to figure out how to play breathtakingly stunning games next year. And I want them to improve if I buy better hardware.)
 
DiGuru said:
To make a game engine scalable, you need to use a high-level graphics language and let the driver compile it as best it can (GLSL). The main problem with that is that you cannot be sure how it will run. So you need the engine to benchmark specific things at runtime and enable/disable them to tune itself. And you need to be able to scale down the polygons and textures.
The game will still be able to tell which graphics card is running it. And developers need to do this sort of testing today anyway if they're going to bother with any sort of auto-detection. After all, consider the GeForce FX 5900 vs. the GeForce FX 5200. They have massive performance differences, but report the same featureset. Any auto-detection of the resolution/features to enable should return a different result for the two cards.

It would be interesting, of course, if games started shipping with built-in benchmarks that tuned a game to a specific machine that may or may not have hardware that existed at the time the game was written.

But I say that's just too much work. GLSL doesn't change anything here.
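For what it's worth, a first-run benchmark of the kind described above might look something like this minimal C++ sketch - the runTimedDemoLoop() helper, the preset names and the frame-time thresholds are all illustrative assumptions, not from any real game:

```cpp
#include <string>

struct Preset {
    std::string name;
    int         width, height;
    bool        fancyShaders;   // e.g. the longer PS 2.0 effects
};

// Pick a preset from a measured average frame time, not from the device ID,
// so an FX 5900 and an FX 5200 land on different settings even though they
// report the same featureset.
Preset chooseGraphicsPreset(double avgFrameMs)
{
    if (avgFrameMs < 20.0) return {"High",   1280, 960, true };
    if (avgFrameMs < 40.0) return {"Medium", 1024, 768, true };
    return                        {"Low",     800, 600, false};
}

// Usage on first launch only, with the result cached to the config file:
//   double ms = runTimedDemoLoop();   // play back a canned scene and time it
//   Preset p  = chooseGraphicsPreset(ms);
```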
 
Chalnoth said:
DiGuru said:
To make a game engine scalable, you need to use a high-level graphics language and let the driver compile it as best it can (GLSL). The main problem with that is that you cannot be sure how it will run. So you need the engine to benchmark specific things at runtime and enable/disable them to tune itself. And you need to be able to scale down the polygons and textures.
The game will still be able to tell which graphics card is running it. And developers need to do this sort of testing today anyway if they're going to bother with any sort of auto-detection. After all, consider the GeForce FX 5900 vs. the GeForce FX 5200. They have massive performance differences, but report the same featureset. Any auto-detection of the resolution/features to enable should return a different result for the two cards.

It would be interesting, of course, if games started shipping with built-in benchmarks that tuned a game to a specific machine that may or may not have hardware that existed at the time the game was written.

But I say that's just too much work. GLSL doesn't change anything here.

I don't know about that. I haven't done any 3D programming so far, but I do program just about anything. And I never, ever take things for granted. My programs mostly check everything to see if they can come up with a valid result. For example, my current project is a system to automate the management and installation of Windows workstations. The tools I wrote for it consist, for more than 90%, of checking what hardware, operating system, network, applications etc. they run on, and trying to figure out how to perform the intended function with all that.

My bottom line: it should always work as expected.

That is where the big "bummers" are. It is relatively easy to create the functional part, and that part will absorb most of the processing power and resources. But making it run as intended everywhere is what eats most of the development time and effort.

When you have your engine up and running, the real problems start occurring.
 