Why are games so far behind 3d technology?

noko said:
Sounds like the video card designers themselves should release a game with their new technology. If the R300 had a game using the technology included, it would definitely up the standards of what a game could be like.

Could up the standards is more like it. While I totally agree that it would be very cool if either ATI or nVidia started doing this, they would definitely have to do it right to be successful. If it's done wrong first, especially if it costs a significant amount of money, it will cause everybody to hesitate to attempt it again.
 
I thought about this 2-3 years ago: chipmakers releasing their own game engine, supporting every new-gen chip feature and optimised for their architecture.

That would have been cool, although I'm not sure it would have been that useful...

Well, I suppose that's what they do (sort of) by releasing demos for their new-gen products (Wolfman, Islands...).
 
A supplied, up-to-date technology game from an IHV doesn't have to be very complex or involved. It doesn't have to be a masterpiece, but it does have to show how to put the new technology to use. For example: walking through a forest, swimming in a lake, watching birds, while you can interact with the environment, turning key features on and off, and even building on the base game with mods and whatnot, making it easy from the get-go to use everything the card can do. Let people from the start, here or at Rage3D or at NVnews, show off this new technology by creating their own games. That would be cool, seeing games developed right in front of our eyes.

Demos do this somewhat, except they are missing the most important ingredient: INTERACTIVITY. Demos are also in a much less reusable form than a game with a game engine. Once a game engine is developed, it can be upgraded by each company as their technology progresses. The major point is to sell the hardware and technology and to make a decent profit so that the company can survive. Releasing hardware whose software is 3-5 years behind the technology, as it is now, is hurting potential sales.
 
It certainly does have to make a good game, though. Otherwise we've not advanced any further than we are today. We already have tech demos.

I don't think interactivity changes much.
 
Tech demos are missing the component that makes a game: INTERACTIVITY. Plus, the point I was trying to make is that building from a tech demo would, I think, be much harder than building from an already-working game engine. An up-to-date hardware implementation would be available while a developer adds their own unique features or modules to the game engine. A more modular approach to game development, where you can add new modules to take advantage of newer hardware while maintaining old hardware support (even if limited), would be unique and probably useful. Maybe it is a pipe dream, but I see it as workable.

ATI, Nvidia, or whoever could do it in house (maybe somewhat hard), or have a game engine developer such as Lithtech, Id, or Croteam design the game engine right from the start of the project, so that a usable game engine built on the new technology is available for game developers to use or tweak. Just some ideas to help solve this problem, or for discussion. In any case this problem will only get worse, as far as I can see: more features will be installed on future GPUs/VPUs, costing extra money, which in many cases will never be used.
 
Murakami said:
Sorry for the mistake... my thought was, speaking about DOT3: "Each time a triangle of the bump-mapped object is moved or rotated, the normals of the normal map need to be transformed again, because their direction changed in relation to the light source or the viewport."
Oh gosh no. The values in the normal map are not touched (there are far too many to do that every frame!). What instead happens is that the light direction is computed relative to the surface's coordinate system. This implies doing a 3x3 matrix * vector multiply (i.e. the local surface coordinate system * the light direction) per vertex. The resulting vector is the light direction expressed in the local coordinate system (i.e. directly in the normal map's coordinates).
This is then used (after interpolation) to do the per-pixel 'bumped' lighting calc.
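(For the curious, here is a minimal sketch of that per-vertex step; it isn't anyone's actual code, and the structure and names are purely illustrative. The normal map itself is never modified; only the light direction gets rotated into each vertex's local tangent/binormal/normal frame.)

typedef struct { float x, y, z; } Vec3;

/* The surface's local coordinate system at one vertex: tangent, binormal
   and normal, all expressed in object space (the rows of the 3x3 matrix). */
typedef struct { Vec3 tangent, binormal, normal; } TangentBasis;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* 3x3 matrix * vector multiply: rotate the object-space light direction
   into tangent space, i.e. directly into the normal map's coordinates.
   The result is interpolated across the triangle and fed to the
   per-pixel DOT3. */
Vec3 lightToTangentSpace(TangentBasis b, Vec3 lightDir)
{
    Vec3 out;
    out.x = dot3(b.tangent,  lightDir);
    out.y = dot3(b.binormal, lightDir);
    out.z = dot3(b.normal,   lightDir);
    return out;
}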

Indeed the values can change on every frame (and the VS is useful for calculating the light vector). It's just that when you said "shift" it sounded very much like you meant displacement mapping.


Anyway guys, see you in 2 weeks time....
 
Simon F said:
Murakami said:
Sorry for the mistake... my thought was, speaking about DOT3: "Each time a triangle of the bump-mapped object is moved or rotated, the normals of the normal map need to be transformed again, because their direction changed in relation to the light source or the viewport."
Oh gosh no. The values in the normal map are not touched (there are far too many to do that every frame!). What instead happens is that the light direction is computed relative to the surface's coordinate system. This implies doing a 3x3 matrix * vector multiply (i.e. the local surface coordinate system * the light direction) per vertex. The resulting vector is the light direction expressed in the local coordinate system (i.e. directly in the normal map's coordinates).
This is then used (after interpolation) to do the per-pixel 'bumped' lighting calc.

Indeed the values can change on every frame (and the VS is useful for calculating the light vector). It's just that when you said "shift" it sounded very much like you meant displacement mapping.


Anyway guys, see you in 2 weeks time....

In other words, this appears to mean that the processing must be done by the CPU each time a light moves with respect to a surface. So, for example, the GeForce/2 was good mostly just for bump maps on surfaces that didn't move, such as world geometry, and wasn't good for dynamic lighting. Otherwise, the normal map would have to be generated and uploaded to the video card every frame... which would certainly degrade performance significantly.
 
Chalnoth said:
In other words, this appears to mean that the processing must be done by the CPU each time a light moves with respect to a surface. So, for example, the GeForce/2 was good mostly just for bump maps on surfaces that didn't move, such as world geometry, and wasn't good for dynamic lighting. Otherwise, the normal map would have to be generated and uploaded to the video card every frame... which would certainly degrade performance significantly.

I still don't think you are getting it. The normal map never changes (and is pre-generated). It doesn't have to be uploaded every frame. It's just like a texture map.

The light vector does have to be both calculated and transformed into the texture's space every frame (if the object or the light moves), but I believe the GPU can do this instead of the CPU.

-M
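(As a side note, here is a rough sketch of what "pre-generated" can mean in practice: the normal map is derived once, offline or at load time, from a grayscale height map, and from then on is treated like any other texture. The central-difference scheme and the bumpScale factor are illustrative, not taken from any particular engine.)

#include <math.h>

/* height: w*h bytes, 0..255.  out: w*h*3 bytes, normals packed as RGB
   with each component remapped from [-1,1] to [0,255]. Done once. */
void buildNormalMap(const unsigned char *height, unsigned char *out,
                    int w, int h, float bumpScale)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            /* central differences of the height field, wrapping at edges */
            float dx = (height[y*w + (x+1)%w] - height[y*w + (x-1+w)%w]) / 255.0f;
            float dy = (height[((y+1)%h)*w + x] - height[((y-1+h)%h)*w + x]) / 255.0f;

            float nx = -dx * bumpScale;
            float ny = -dy * bumpScale;
            float nz = 1.0f;
            float len = sqrtf(nx*nx + ny*ny + nz*nz);

            unsigned char *p = &out[(y*w + x) * 3];
            p[0] = (unsigned char)((nx/len * 0.5f + 0.5f) * 255.0f);
            p[1] = (unsigned char)((ny/len * 0.5f + 0.5f) * 255.0f);
            p[2] = (unsigned char)((nz/len * 0.5f + 0.5f) * 255.0f);
        }
}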
 
Well, then, what's the benefit of the GeForce3? Does it retransform "on the fly" every time a DOT3 bump map is used, while the GF2 has to do it separately?
 
Chalnoth said:
Well, then, what's the benefit of the GeForce3? Does it retransform "on the fly" every time a DOT3 bump map is used, while the GF2 has to do it separately?

No. Implementing DOT3 is the same for all of them. GF1, GF2, and GF3 all have the T&L to do the transforms. It's a matter of other features that the GF3 has over the other two, besides the obvious speed advantage.

-M
 
With regard to DOT3, the GF3 has a quality advantage over the GF1/2 if you're using the special HILO format, which will be normalized after filtering.
GF3 also has the advantage of supporting 3d textures, which make the implementation of dot3 lighting much simpler.
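(To make that concrete, here is a rough idea of the kind of thing a 3d texture buys you for dot3 lighting: a single volume lookup gives per-pixel light attenuation. This is a hedged sketch assuming OpenGL 1.2-style 3d texture support; it isn't taken from any particular demo, and the names are illustrative.)

#include <GL/gl.h>
#include <stdlib.h>

#define VOL 32   /* tiny volume; attenuation doesn't need much resolution */

/* Fill and upload a VOL^3 luminance volume holding max(0, 1 - d^2), where d
   is the distance from the volume centre in [-1,1] units. Texture
   coordinates are then the fragment position relative to the light,
   divided by the light radius. */
GLuint createAttenuationVolume(void)
{
    GLuint tex;
    GLubyte *data = (GLubyte *)malloc(VOL * VOL * VOL);

    for (int z = 0; z < VOL; ++z)
        for (int y = 0; y < VOL; ++y)
            for (int x = 0; x < VOL; ++x) {
                float fx = 2.0f * x / (VOL - 1) - 1.0f;
                float fy = 2.0f * y / (VOL - 1) - 1.0f;
                float fz = 2.0f * z / (VOL - 1) - 1.0f;
                float a = 1.0f - (fx*fx + fy*fy + fz*fz);
                if (a < 0.0f) a = 0.0f;
                data[(z * VOL + y) * VOL + x] = (GLubyte)(a * 255.0f);
            }

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, VOL, VOL, VOL, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
    free(data);
    return tex;
}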
 
Humus said:
With regard to DOT3, the GF3 has a quality advantage over the GF1/2 if you're using the special HILO format, which will be normalized after filtering.
GF3 also has the advantage of supporting 3d textures, which make the implementation of dot3 lighting much simpler.
Is no one able to answer me about the 3 different bump mapping modes in Evolva? Are they all DOT3, or are some of them embossed? Thanks all.
 
Humus said:
GF3 also has the advantage of supporting 3d textures, which make the implementation of dot3 lighting much simpler.

The GF3 is too slow to make use of its supposed support for 3d textures. :) In fact, I have yet to see any gaming company use 3d textures!

-M
 
Too slow for 3D textures? ... I wouldn't say that. I've done many demos using 3d textures; some run well even on the original Radeon. I'm not sure if you've seen the "GameEngine" demo I released earlier this year. It uses 3d textures quite extensively for the dot3 lighting, yet runs very well on GF3/Radeon 8500 and up.
3D textures should really not be significantly slower to use than 2d textures... especially since 3d textures tend to be small.
 
Humus said:
Too slow for 3D textures? ... I wouldn't say that. I've done many demos using 3d textures; some run well even on the original Radeon. I'm not sure if you've seen the "GameEngine" demo I released earlier this year. It uses 3d textures quite extensively for the dot3 lighting, yet runs very well on GF3/Radeon 8500 and up.
3D textures should really not be significantly slower to use than 2d textures... especially since 3d textures tend to be small.

I'll have a look at your game engine demo. However, you stated the obvious: 3d textures will indeed need to be quite small in order to maintain speed. :) Try making your demo use 3d textures for every object at 512x512x512 each. :)

-M
 
Well... a 512x512x512 3d texture would be 512MB, which wouldn't fit into the memory of any card available today. 256x256x256 should work on 128MB cards, though. In my demo I use 64x64x64; I've tried 128x128x128 too without noticing any significant slowdown. I don't use larger textures as it's not needed; I could have done with 32x32x32 if I'd wanted.
3d textures are seldom used for materials, but rather for stuff like vector fields, attenuation maps, etc. I wouldn't say that the GF3 is too slow to use 3d textures, but rather that it has too little memory to be efficient if 3d textures were to be used for materials.
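(For reference, the figures above assume uncompressed 32-bit texels; a quick, purely illustrative check:)

512 x 512 x 512 x 4 bytes = 536,870,912 bytes = 512 MB
256 x 256 x 256 x 4 bytes =  67,108,864 bytes =  64 MB
128 x 128 x 128 x 4 bytes =   8,388,608 bytes =   8 MB
 64 x  64 x  64 x 4 bytes =   1,048,576 bytes =   1 MB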
 
Not only memory, but 3d texture lookups are also, in my experience, rather slow.

Even given the hardware availability, most everyone I have talked with is still doing attenuation through a 2D and a 1D texture (or a 2D referenced with two separate tex coords).

And 3d texture compression does exist to alleviate the memory problems, but, as with all texture compression, it is only a short-term answer. I'm sure that once we have wide availability of virtualized video memory (and cards that handle 3d textures 'better'), there will be plenty more people using 3d textures, but for now they are just one of those features to be largely ignored (EDIT: in the retail games market, that is; if you are just making some demos you can get by with 3d textures).
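(For completeness, a rough sketch of the 2D + 1D attenuation split mentioned above; table sizes, contents, and names are illustrative and not from any shipped title. The 2D map stores s^2 + t^2 and the 1D map stores r^2, indexed by the fragment's light-relative position divided by the light radius; the per-pixel math then combines them as attenuation = saturate(1 - (tex0 + tex1)).)

#define TABLE 64

unsigned char atten2D[TABLE][TABLE];  /* s^2 + t^2, scaled to 0..255 */
unsigned char atten1D[TABLE];         /* r^2,       scaled to 0..255 */

void buildAttenuationTables(void)
{
    for (int t = 0; t < TABLE; ++t)
        for (int s = 0; s < TABLE; ++s) {
            float fs = 2.0f * s / (TABLE - 1) - 1.0f;   /* remap to [-1,1] */
            float ft = 2.0f * t / (TABLE - 1) - 1.0f;
            float d2 = fs * fs + ft * ft;
            if (d2 > 1.0f) d2 = 1.0f;
            atten2D[t][s] = (unsigned char)(d2 * 255.0f);
        }

    for (int r = 0; r < TABLE; ++r) {
        float fr = 2.0f * r / (TABLE - 1) - 1.0f;
        atten1D[r] = (unsigned char)(fr * fr * 255.0f);
    }
}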
 
Humus said:
3d textures are seldom used for materials, but rather for stuff like vector fields, attenuation maps, etc.

You must be speaking of the realtime world only, because the offline world uses 3d textures as a mainstream feature. Attenuation maps, vector fields, etc. are never pre-computed there, but are computed by the CPU during rendering.

Btw, I looked at your webpage and see that you have substantial experience in the realtime graphics world. Never have I seen someone with so many projects done who isn't actually working for a company that does those things! :)

-M
 