DX7 GFX CAN MATCH DX9 GFX LEVEL GRAPHICS!!

NeoCool

Newcomer
And it's damn true. It's pathetic: a demo with T & L can look just as good as a demo utilizing DX9 HLSL with pixel shader 2.0/vertex shader 2.0. T & L demos can even look better than demos with environment mapped bump mapping!! Don't believe me? Visit these two sites and tell me the difference in water quality:
http://cgi3.tky.3web.ne.jp/~tkano/tlwater.shtml
(direct-x 7 water)
http://esprit.campus.luth.se/~humus/
(direct-x 9 water)
And Direct-X 7 can provide per-pixel bump mapping, lighting, stencil shadow volumes, shadow mapping, bump mapping, accurate environment maps, realistic reflections, realistic refractions, specular lighting, etc. It can also produce animation quality beyond vertex shaders, and more efficiently too. Sure, HW T & L ain't programmable, and doesn't even come close to matching the speeds of DX9 hardware (unless you provide new heatsinks/fans/bus compatibility, up the clock speeds, memory type, memory amount, & rendering pipelines, etc.), but developers still fail to take advantage of its full potential, let alone Direct-X 8's/9's full potential. Sad, isn't it? So, what do u guys say, eh? :p
 
I have had the TL Water demo downloaded for over a year now, I think, and whilst it is nice, and there are things you can do with DX7 that are neat, you should take a look at more than one effect before basing your whole conclusion on it.

A good reference point would be the various HL2 shots and comparisons in different modes.
 
Hhhhmmmm, true. Actually, Valve doesn't do a very good job in HL2 of taking advantage of DX6-DX9's full potential, I'm not very impressed at all, but does anyone have resource screenshots of HL2 comparing the IQ of DX6-DX9 class hardware? :?:
 
...they are nearly identical.

Not even close.

http://users.otenet.gr/~ailuros/TLwater.JPG

http://users.otenet.gr/~ailuros/water.JPG

It can also produce animation quality beyond vertex shaders, and more efficiently too.

:rolleyes:

...but developers still fail to take advantage of its full potential.

There are more than a few spots in games where these kinds of reflections or EMBM, for example, come into play. Needless to say, the TLwater demo is a rather bad example to use. I'll make it simple: I don't see anything dripping into that pool...
 
NeoCool said:
Hhhhmmmm, true. Actually, Valve doesn't do a very good job in HL2 of taking advantage of DX6-DX9's full potential, I'm not very impressed at all,
Sorry, but you have no idea what you are talking about.
 
NeoCool said:
And it's damn true. It's pathetic: a demo with T & L can look just as good as a demo utilizing DX9 HLSL with pixel shader 2.0/vertex shader 2.0. T & L demos can even look better than demos with environment mapped bump mapping!!
A 2 meg plain 2D PCI VGA is actually sufficient to display screenshots. Heck, it can even display real world photography!

It just can't render the stuff, which brings us ... here :rolleyes:

NeoCool said:
And Direct-X 7 can provide per-pixel bump mapping, lighting, stencil shadow volumes, shadow mapping, bump mapping, accurate environment maps, realistic reflections, realistic refractions, specular lighting, etc. It can also produce animation quality beyond vertex shaders, and more efficiently too.
OMG.
If you'd realize for a second that, e.g., refractions require dependent reads into environment maps ...

You can do a lot of stuff per vertex and then make your triangles roughly one pixel in size. That doesn't make it particularly "per pixel" in my book. And it obviously doesn't increase performance if you have a million triangles instead of a thousand. To add insult to injury, if you want to do 'advanced' effects per vertex, you'll end up with no choice but software emulation of vertex programs anyway.
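To make the per-vertex vs. per-pixel distinction concrete, here is a toy sketch (all values are my own illustrative numbers, not from any demo): a specular highlight that peaks in the middle of a large triangle is simply lost if you evaluate lighting only at the vertices and interpolate colors (Gouraud), but survives if you interpolate the normal and light each pixel (Phong).

```python
def normalize(v):
    # Scale a vector to unit length.
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def specular(normal, half_vec, shininess=64):
    # Blinn-Phong specular term: max(0, N.H) ** shininess
    n_dot_h = max(0.0, sum(n * h for n, h in zip(normal, half_vec)))
    return n_dot_h ** shininess

half_vec = (0.0, 0.0, 1.0)           # light/view half vector facing the surface
corner = normalize((0.6, 0.0, 0.8))  # vertex normal, tilted away from H
center = (0.0, 0.0, 1.0)             # interpolated normal at the triangle center

# Per-vertex: light the corner, then interpolate the resulting *color* inward.
vertex_lit = specular(corner, half_vec)            # nearly zero: highlight missed
# Per-pixel: interpolate the *normal* to the center, then light it there.
pixel_lit = specular(normalize(center), half_vec)  # full-strength highlight

print(f"per-vertex (interpolated color):  {vertex_lit:.6f}")
print(f"per-pixel  (interpolated normal): {pixel_lit:.6f}")
```

Shrinking the triangles until each covers one pixel makes both paths agree, which is exactly the "million triangles instead of a thousand" cost described above.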
NeoCool said:
Sure, HW T & L ain't programmable, and doesn't even come close to matching the speeds of DX9 hardware (unless you provide new heatsinks/fans/bus compatibility, up the clock speeds, memory type, memory amount, & rendering pipelines, etc.), but developers still fail to take advantage of its full potential, let alone Direct-X 8's/9's full potential. Sad, isn't it? So, what do u guys say, eh? :p
I say you have no idea what fixed function T&L's limitations are.
 
NeoCool said:
And it's damn true.
Yes, you can do a tremendous amount with even the original GeForce. Some of the biggest benefits of later hardware architectures are performance and accuracy.

Simply put, there's no way 8-bit integer calculations will come close to competing, under the right circumstances, with the accuracy of the 24-32 bit floating point calculations available in today's GPUs. ATI has one good example of this involving bump mapping. Basically, for many objects, you just can't get a smoothly-curving surface from a bump map unless you're using at least somewhere around 16-bit integers for the bump map.
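A rough numeric illustration of that precision point (the bump shape and bit depths here are my own toy numbers): quantize a shallow, smoothly curving height field to 8-bit vs. 16-bit integers and count how many distinct values survive. With 8 bits, the gentle slope collapses into a handful of flat stair steps, which shows up as banding on bump-mapped curved surfaces.

```python
import math

def quantize(value, bits):
    # Snap a [0, 1] value to the nearest representable integer level.
    levels = (1 << bits) - 1
    return round(value * levels) / levels

# A shallow bump: height varies by only 2% of the full [0, 1] range.
samples = [0.5 + 0.01 * math.sin(x / 100.0) for x in range(629)]

steps_8 = len({quantize(h, 8) for h in samples})
steps_16 = len({quantize(h, 16) for h in samples})

print(f"distinct 8-bit levels across the bump:  {steps_8}")
print(f"distinct 16-bit levels across the bump: {steps_16}")
```

Since bump-map normals are derived from differences between neighboring heights, those few flat steps at 8 bits translate directly into visibly faceted shading.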

And then there's the whole performance issue. Any reasonably complex rendering algorithm you want to do on one of the original Geforce or Radeon cards will take many, many passes. They're just incredibly inefficient for any sort of reasonably complex scenario.

So, yes, many of the cool things that can be done on a GeForce FX or Radeon 9500+ can also be done on a GeForce 256 or Radeon 7200. The difference is that those things will totally destroy the performance of the older cards, or may end up looking just plain terrible due to the limitations of the older chips.
 
NeoCool said:
Maybe, but still, the water surfaces from both demos, their quality, they are nearly identical!!!! :oops:

Maybe because it's basically the same technique in both demos (AFAIK anyway). The difference however is that that DX7 demo does it all on the CPU, while my demo does it on the GPU. That demo also does it on a per vertex level, while my demo does it on a per pixel level.
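For context on what "per pixel" buys here: the refraction offset depends on the local normal at each pixel, which in hardware means a dependent read into the environment map. The underlying math is standard Snell's-law refraction (the same formula GLSL's refract() implements); the scene values below are made up for illustration.

```python
def normalize(v):
    # Scale a vector to unit length.
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def refract(incident, normal, eta):
    # Snell's-law refraction for unit vectors; eta = n1 / n2 (air->water ~ 0.75).
    dot = sum(i * n for i, n in zip(incident, normal))
    k = 1.0 - eta * eta * (1.0 - dot * dot)
    if k < 0.0:
        return None  # total internal reflection: no refracted ray
    return tuple(eta * i - (eta * dot + k ** 0.5) * n
                 for i, n in zip(incident, normal))

# Looking straight down at perfectly flat water vs. a slightly rippled patch:
flat = refract((0.0, -1.0, 0.0), (0.0, 1.0, 0.0), 0.75)
rippled = refract((0.0, -1.0, 0.0), normalize((0.1, 0.99, 0.0)), 0.75)

print("flat water ray:   ", flat)
print("rippled water ray:", rippled)
```

Per vertex, you get one such ray per (sparse) grid point; per pixel, every fragment gets its own ray bent by the local ripple, which is where the extra realism comes from.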
 
well, why talk about DX7, when you can do the same stuff with DX6 as well: http://cgi3.tky.3web.ne.jp/~tkano/bumpwater.shtml

so, basically DX6 Bump Mapping is using per-pixel effects (sort of...). But it is not about whether you can do the same effect with certain hardware, but how fast that hardware does it. With CPU only, the sky is the limit, but be prepared for FPD (frames per day), not FPS.

heck, go bullying some old-school demo scene guys and they'll port that to the Commodore 64 in real time. (At least it could be faked to look like real-time rendering on the 64 with a few tricks. ;) ) But that doesn't mean that we all should be playing games on the VIC-II chip, nor that you can do Half-Life 2 on the 64 with playable frame rates... ;)
 
OpenGL guy said:
NeoCool said:
Hhhhmmmm, true. Actually, Valve doesn't do a very good job in HL2 of taking advantage of DX6-DX9's full potential, I'm not very impressed at all,
Sorry, but you have no idea what you are talking about.

Well, he does have the option not to be very impressed.
(Whether it takes advantage can be argued, though.)
I am not all that impressed by what I saw either; it is just an opinion, like perhaps he doesn't like color X.
 
HL2 only uses around 18 base PS 2.0 shaders, which are split statically into different shaders, i.e. if alpha blending is on, use shader one, else use shader two, saving a few lines of code between shaders. So I guess maybe around 50 PS 2.0 shaders in HL2, if it has 2.5 shaders per base shader.
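The static-split pattern described above can be sketched like this (all shader and flag names are hypothetical stand-ins, not Valve's actual names): instead of branching inside the pixel shader, you compile one variant per combination of feature flags and pick the right one at draw time.

```python
from itertools import product

base_shaders = ["lightmapped_generic", "water", "refract"]  # illustrative stand-ins
flags = ["ALPHA_BLEND", "FOG"]  # each on/off flag doubles the variant count

# Pre-compile every combination so no branch is needed at pixel-shading time.
variants = {}
for base in base_shaders:
    for combo in product([0, 1], repeat=len(flags)):
        defines = [f for f, on in zip(flags, combo) if on]
        variants[(base,) + combo] = f"{base} compiled with {defines or 'no defines'}"

print(f"{len(base_shaders)} base shaders -> {len(variants)} compiled variants")
```

With 18 base shaders averaging ~2.5 variants each, you land right around the ~50 figure quoted above, at the cost of more compiled shaders rather than slower in-shader branches.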
 
I hope this thread is a joke...

Water effects were done OK way back, even with an original Radeon...

[image: water.jpg]


Yet none approach the Realism of Humus's Excellent Demo.

Neverwinter Nights is a classic example of how 99% of developers render water...

[image: nwnscreens_3_20_small.jpg]


Vs.

Pixel Shader Support

[image: atishiny_small.jpg]


I have to say that I played Neverwinter Nights on my Radeon 9700 with the first pic's type of water effects for approx 5 months before I got support for the Pixel Shader Water option, and it looked like a brand new game. Especially in fantasy-style games, the experience is all about eye candy.
 
All I'm saying is developers do an extremely poor job of taking advantage of older hardware. :rolleyes: oh, and that DX6 demo would need to run on Radeon 7xxx class hardware, Direct-X 7, environment mapped bump mapping, you know... :rolleyes:
 
NeoCool said:
All I'm saying is developers do an extremely poor job of taking advantage of older hardware. :rolleyes:
Old hardware is too slow to make use of these advanced effects.
 
Yeah, but increase the speeds/amount of memory/type of memory/memory pipelines/bus compatibility to the point of DX9 class speed hardware on older DX7 cards and you may be able to achieve visuals equal to or better than a DX8 card!! :oops:
 
And older DX7 hardware can also achieve just as good stencil shadow volumes, shadow mapping, per-pixel lighting, specular lighting, and bump mapping as newer cards can, just not at a reasonable speed. Oh, and that T & L water demo uses the GPU quite heavily, or it wouldn't be able to run at 30 fps like it does on T & L hardware.
 
Yeah, but increase the speeds/amount of memory/type of memory/memory pipelines/bus compatibility to the point of DX9 class speed hardware on older DX7 cards and you may be able to achieve visuals equal to or better than a DX8 card!!

But the DX8 or DX9 cards would also have all of these increases in the "speeds/amount of memory/type of memory/memory pipelines/bus compatibility" that the DX7 cards have. And therefore DX8 or 9 games would look better when pushed to their limits.

The fact that the DX9 refrast takes several minutes to render just one frame of current DX9 games even on the fastest PC should give you an idea about the capabilities of modern cards compared to the DX7 dinosaurs. ;)
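Some back-of-envelope arithmetic behind that refrast comparison (every figure here is a rough, era-appropriate guess of mine, not a measurement): a DX9-class GPU executes pixel shader instructions across parallel pipelines, while a software reference rasterizer grinds through them one at a time.

```python
# Workload for one frame of a shader-heavy scene (illustrative guesses).
width, height = 1024, 768
overdraw = 2.0        # each pixel shaded ~twice on average
ps_instructions = 40  # a mid-length PS 2.0 shader

ops_per_frame = width * height * overdraw * ps_instructions

gpu_ops_per_sec = 8 * 400e6  # 8 pixel pipes at 400 MHz, ~1 instruction/pipe/clock
cpu_ops_per_sec = 500e3      # interpreted software rasterizer, very rough guess

gpu_fps = gpu_ops_per_sec / ops_per_frame
cpu_spf = ops_per_frame / cpu_ops_per_sec  # seconds per frame

print(f"shader ops per frame: {ops_per_frame:,.0f}")
print(f"GPU: ~{gpu_fps:.0f} fps   software refrast: ~{cpu_spf:.0f} s per frame")
```

Under these assumptions the same frame that a DX9 card renders at real-time rates takes the software path minutes, which is the gap being pointed at above.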
 
NeoCool said:
Yeah, but increase the speeds/amount of memory/type of memory/memory pipelines/bus compatibility to the point of DX9 class speed hardware on older DX7 cards and you may be able to achieve visuals equal to or better than a DX8 card!! :oops:

Oh please....

***edit: here this one might help as a starter:

http://www.beyond3d.com/articles/vs/
 