3D Mark 2001 SE Today?

Gking,

Pixel Shader 1.4 is not just about visuals; it's also about increasing the speed of the visuals, which I'm sure PS 1.4 can do.


Here is a thread from Rage3D. This comment is from a David Kirk interview in which he stated that Pixel Shader 1.1 could do the same effects as 1.4:

GeForce3 cards are DX8.1 cards, period. NVIDIA has Microsoft WHQL certified DX8.1 drivers for GeForce3. I believe that the new pixel shader versions introduced in DX8.1 do not offer significant new functionality over the original DX8 pixel shader versions; they are simply different. I'm not really sure what the value of these new versions really is. We certainly don't hear a lot of interest in them from game developers.

Then Alex from ATI responds:

That's just blatant misinformation. I'm shocked it was even published. For example, the ocean water screen saver has two shaders: water, clouds. Neither can possibly be done on a GF3 (or pre-1.4 pixel shader part) regardless of how many passes you break it down to. The math simply cannot be expressed without 1.4 pixel shaders. We wrote them, we know.

The statement of 1.4 being a small step is just plain wrong. The 8500 can do 12 texture samples and 16 generalized math ops in a single pass. The GF3 isn't even close. Not a single developer I've spoken to doesn't think ps1.4 is leaps beyond the GF3's pixel pipe.

Alex V, ATI

I'll take his word for it; maybe some effects can be done the same, but at the expense of more instructions and more work for the programmer :smile:

_________________

http://www.rage3d.com/board/showthread.php?s=&threadid=33595019&perpage=20&pagenumber=2


<font size=-1>[ This Message was edited by: Doomtrooper on 2002-02-14 02:27 ]</font>
 
Well Humus, I am not beyond giving a compliment even though I disagree with you, and your demos show promise.

However, I still think that actually coming up with great pixel and vertex shader effects (even for ps1.1 or vs1.0) that are truly dramatic and can't be done on DX7 hardware (for instance) is quite tricky.

For example, a big deal was made in the console boards about non-photorealistic rendering (e.g. cel shading) and fur (shell) rendering. Everyone thought that you needed pixel shaders to do it, but the reality is, it can be done on DX7 hardware.


Now, there are a few effects which are mathematically interesting to coders, and which can't be done easily on different hardware, but which make almost no difference in the end to the user. Take, for example, per-pixel Phong shading vs. interpolated Phong, or local lights vs. infinite lights. We developers think that "more correct" = better, but actually, the approximations look pretty good and perform way better.
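To put a toy number on that point, here's an illustrative Python sketch (not real shading code; the normals, angle, and specular exponent are made up) comparing specular evaluated per vertex and interpolated against specular evaluated per pixel with a renormalized normal:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def spec(normal, half, power=8):
    # Blinn-style specular term: max(0, N.H)^power
    d = max(0.0, sum(a * b for a, b in zip(normal, half)))
    return d ** power

H = (0.0, 0.0, 1.0)                       # half-angle vector, pointing "up"
n0 = (math.sin(0.2), 0.0, math.cos(0.2))  # two vertex normals ~23 degrees apart
n1 = (-math.sin(0.2), 0.0, math.cos(0.2))

# Interpolated ("Gouraud") shading: evaluate at the vertices, lerp the result
gouraud = 0.5 * spec(n0, H) + 0.5 * spec(n1, H)

# Per-pixel shading: lerp the normals, renormalize, then evaluate
mid = normalize(tuple(0.5 * a + 0.5 * b for a, b in zip(n0, n1)))
per_pixel = spec(mid, H)

print(round(gouraud, 3), round(per_pixel, 3))  # ~0.851 vs 1.0 at the highlight
```

The gap only shows up right at the highlight's center, which is the point above: whether that difference is worth the extra per-pixel math is a judgment call.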


I happen to think that 90% of what accounts for an impressive look (e.g. the difference between the real-time CGI look and the pre-rendered real-world look) is shadows and global illumination. Neither of these is solved by pixel shaders alone; both are, in fact, multipass only. The other thing is adaptive/stochastic AA with up to 64 samples per pixel.

When real-time 3D can do AA and shadows/diffuse illumination, it will look leaps and bounds better. Most of the pixel shader effects I have seen look like variations on specular lighting. That makes nice shiny stuff, but nice shiny stuff doesn't impress me the way subtle illumination does.
 
I'll stand by my assertion that *every* effect possible in ps1.4 is also possible in ps1.3.

From what I remember from the white paper (been a while since I've read it), ATI's water effect was simply two interacting bump maps (which looked awful, and was about the worst example of real-time water I've seen short of a simple texture animation).

You can do the same effect with render-to-texture by using two passes to build up the combined bump map image in a third texture map, then use a third pass with the combined texture map for rendering. It was less complicated (and much less interesting) than the dynamic bump map example Greg James has on the NVIDIA developer website (which includes a pretty neat GPU-created water animation).
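For what it's worth, the "build the combined bump map in an intermediate texture, then do the dependent read" idea can be mimicked with a toy simulation (plain Python lists stand in for textures and render targets, indexing stands in for the dependent read, and all the values are invented):

```python
# Toy stand-in for the render-to-texture approach described above.
WIDTH = 8
bump_a = [i % 3 for i in range(WIDTH)]        # first bump map (integer offsets)
bump_b = [(i * 2) % 3 for i in range(WIDTH)]  # second bump map
base = [10 * i for i in range(WIDTH * 2)]     # texture sampled via the offsets

def one_pass():
    # ps1.4-style: combine both bumps and do the dependent read in one pass
    return [base[i + bump_a[i] + bump_b[i]] for i in range(WIDTH)]

def render_to_texture():
    target = [0] * WIDTH
    for i in range(WIDTH):        # pass 1: write bump A into a render target
        target[i] = bump_a[i]
    for i in range(WIDTH):        # pass 2: additively blend bump B on top
        target[i] += bump_b[i]
    # pass 3: use the combined target as the offset for the dependent read
    return [base[i + target[i]] for i in range(WIDTH)]

print(one_pass() == render_to_texture())  # same image, just more passes
```

Same result either way; the single-pass version just saves the intermediate surface and the extra passes.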
 
It's my understanding that by using Pixel Shader 1.4 the Radeon can do stuff in one pass instead of 2-3 like the GeForce... am I wrong?
 
On 2002-02-14 02:48, gking wrote:
I'll stand by my assertion that *every* effect possible in ps1.4 is also possible in ps1.3.

From what I remember from the white paper (been a while since I've read it), ATI's water effect was simply two interacting bump maps (which looked awful, and was about the worst example of real-time water I've seen short of a simple texture animation).

You can do the same effect with render-to-texture by using two passes to build up the combined bump map image in a third texture map, then use a third pass with the combined texture map for rendering. It was less complicated (and much less interesting) than the dynamic bump map example Greg James has on the NVIDIA developer website (which includes a pretty neat GPU-created water animation).

I'm not sure if PS 1.1 can do what PS 1.4 can; Alex at ATI doesn't think so. Yet we're not just talking effects here, we're talking about how fast the effects are done. PS 1.4 can do things more efficiently, which is what we all want. So judging from your replies, you think PS 1.1 is just fine and we should stop progression right now and leave it at PS 1.1 forever. :rollseyes: If you don't think PS 1.4 is superior and can deliver superior results, your opinion goes against a lot of people in the industry, including Epic and Carmack.

BTW, I happen to like the Ocean Screen Saver; I've seen a LOT worse before.

2560-2423.jpg
 
No, I don't think ps1.3 is fine (1.1 is missing a few texture addressing modes that should have been included to simplify things).

But the notion that ps1.4 is a quantum leap forward (vs 1.3) is absurd. It allows a few rendering passes to be collapsed (which makes it faster), but doesn't enable any really new effects to be performed. Its biggest advantage is the higher precision arithmetic (which is nice). Arguing about the improved syntax seems rather absurd, since we'll all be coding in a higher-level language (more akin to RenderMan SL) in the future, anyway. Besides, it's not as if ps1.4 isn't loaded with its own set of hardware nuances (loss of alpha after PHASE, f(g(x)) can't be calculated in one pass, etc.). Compared to what's upcoming, both ps1.3 and ps1.4 will look like DX7.

And the whole collapsing-render-passes argument seems a bit overblown... Faster is always better, but upcoming engines will spend so many passes rendering without textures or pixel shaders that the speed of the final surface shading passes won't make or break a game's performance.

Re: Ocean demo. ATI improved the content quality of their water demonstration. The original tech demo looked absolutely terrible.
 
According to Alex at ATI this screensaver is using two shaders: the SKY and the WATER. The results look very impressive, especially as the larger waves roll in.

AtiSushi00.jpg


<font size=-1>[ This Message was edited by: Doomtrooper on 2002-02-14 06:24 ]</font>
 
I've seen similar clouds on a GeForce 3 (1 pass) in OpenGL (ps1.1 doesn't expose the full blending capabilities of the GeForce 3). It doesn't look like the clouds are being back-lit at all.

I'm still not particularly fond of the water.
 
There's a paper out there that shows how any RenderMan light shader can be done on bog-standard OpenGL via multipass (modulo loss of precision).


In most new engines, you already have 3-4 passes minimum to do shadow volumes, and that's not counting extra passes to do all kinds of caustic effects, volumetric tricks, etc.

A great example is a paper by Cass Everitt at Nvidia

http://www.opengl.org/developers/code/features/oimfinal/perpixel.html

which shows how to do a dot product on non-dot-product-capable hardware using 12 passes.

Of course, this runs terribly on PC architectures, but on some architectures, like the PS2 with its super high bandwidth/high fillrate, multipass with display lists is very quick.
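The general flavor of such multipass tricks — building an arithmetic result out of nothing but per-channel modulates and additive frame-buffer blends — can be sketched like this (a toy Python simulation, not the actual 12-pass recipe from the paper; it skips the signed-value and range-compression handling the real technique needs and assumes every component is already in [0, 1]):

```python
# One "pixel" per entry; each pass modulates one channel and additively blends.
normals = [(0.2, 0.3, 0.5), (0.0, 1.0, 0.0), (0.6, 0.1, 0.3)]
light = (0.5, 0.5, 0.5)

framebuffer = [0.0] * len(normals)
for channel in range(3):          # one modulate + additive-blend pass per channel
    for pixel, n in enumerate(normals):
        framebuffer[pixel] += n[channel] * light[channel]

# Compare against computing N.L directly
direct = [sum(nc * lc for nc, lc in zip(n, light)) for n in normals]
print(framebuffer == direct)  # True
```

Each pass is trivial, which is why this approach can still be fast on bandwidth-monster architectures even though it's a disaster on a PC.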




<font size=-1>[ This Message was edited by: DemoCoder on 2002-02-14 07:06 ]</font>
 
On 2002-02-14 01:11, gking wrote:
Humus --

That effect doesn't require PS1.4. It can be achieved just fine with PS1.1 and multipass/render-to-texture.

You use the first pass to render per-pixel specular exponent and H dot N into the R and A channels of the frame buffer.

Then, apply that frame buffer as a projective texture in stage 0, and use an AR dependent read into a 2D r^s texture map in stage 1. Exactly the same effect performed in 2 passes.

OK, there are hacks to circumvent some limitations, but personally I feel it's better not to have to, and to be able to do it just by thinking of the effect.
Then of course we have another problem: nVidia doesn't support the WGL_ARB_render_texture OpenGL extension (which I find kinda ridiculous, btw, since it's been ARB approved for several months now and their hardware is fully capable of supporting it AFAIK).
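As a toy numeric sketch of gking's two-pass dependent-read trick quoted above (the 8-bit quantization and the exponent palette here are invented purely for illustration):

```python
# Pass 1 writes the per-pixel exponent index and H.N into two 8-bit
# channels; pass 2 uses those channels as coordinates for a dependent
# read into an r^s lookup "texture" (here, a nested list).
EXPONENTS = [1, 2, 4, 8, 16, 32, 64, 128]   # hypothetical exponent palette

# The r^s lookup texture: 256 shades of H.N by 8 exponents
table = [[(h / 255.0) ** s for s in EXPONENTS] for h in range(256)]

def two_pass(h_dot_n, exp_index):
    a = round(h_dot_n * 255.0)   # pass 1: H.N quantized into the A channel
    return table[a][exp_index]   # pass 2: AR dependent read into the table

# Compare with evaluating pow() directly at the same 8-bit precision
h, idx = 0.8, 3                  # H.N = 0.8, specular exponent 8
approx = two_pass(h, idx)
exact = (round(h * 255.0) / 255.0) ** EXPONENTS[idx]
print(approx == exact)  # True
```

It is the same math either way, just spread over an extra pass and an 8-bit intermediate, which is essentially what both sides of this argument are saying.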
 
On 2002-02-14 02:45, DemoCoder wrote:

Well Humus, I am not beyond giving a compliment even though I disagree with you, and your demos show promise.

However, I still think that actually coming up with great pixel and vertex shader effects (even for ps1.1 or vs1.0) that are truly dramatic and can't be done on DX7 hardware (for instance) is quite tricky.

For example, a big deal was made in the console boards about non-photorealistic rendering (e.g. cel shading) and fur (shell) rendering. Everyone thought that you needed pixel shaders to do it, but the reality is, it can be done on DX7 hardware.


Now, there are a few effects which are mathematically interesting to coders, and which can't be done easily on different hardware, but which make almost no difference in the end to the user. Take, for example, per-pixel Phong shading vs. interpolated Phong, or local lights vs. infinite lights. We developers think that "more correct" = better, but actually, the approximations look pretty good and perform way better.


I happen to think that 90% of what accounts for an impressive look (e.g. the difference between the real-time CGI look and the pre-rendered real-world look) is shadows and global illumination. Neither of these is solved by pixel shaders alone; both are, in fact, multipass only. The other thing is adaptive/stochastic AA with up to 64 samples per pixel.

When real-time 3D can do AA and shadows/diffuse illumination, it will look leaps and bounds better. Most of the pixel shader effects I have seen look like variations on specular lighting. That makes nice shiny stuff, but nice shiny stuff doesn't impress me the way subtle illumination does.

Much of it of course lies in the expectation; "truly dramatic" is of course very hard, especially if you have performance considerations too. For me, "looking cool" or maybe "looking cooler than what can be done with <insert technology here>" is often enough :smile:
 
I think that Doomtrooper makes a valid point with regards to 3DMark's favoritism of nvidia technology. Sure, it is great that they have allowed for a new Pixel Shader test, but it is not really fair for them to block any point gain for the test. If it is supposed to be a benchmark for new video card technology, then why are they blocking points from being earned for PS 1.4? I do agree that they should either let both tests earn points or neither.

The point of all of this is that the benchmark has become a standard measurement of what users should be buying, and the simplest way for end users to learn this is to look and see which card has the highest benchmarks. So I guess, to put it in layman's terms, 3DMark seems to think that ATi's new technology is not worth putting on the point scale until nvidia incorporates the same technology; never mind that they would not wait for ATi to implement a pixel shader for DX8.0. The whole thing kind of reeks if you ask me, and the only ones who defend 3DMark are nvidia fanboys, In My Humble Opinion.

Sabastian
 
My opinion of the matter is just "why?"

More specifically, why include the test, using different paths for different cards, if the end results are not comparable?
 
"why" is exactly the question. Why do they give points for nvidia pixel shader but refuse to give points for ATi pixel shader? Even though ATi has the most recent one suggesting some sort of superiority. Why wouldn't they just have made the original one a pointless tech demo like they did to ATi's pixel shader. "Why" is the question no doubt, nvidia shoving cash into Madonions pockets? Who knows ?

Sabastian
 
Actually, I heard it was Enron that paid off MadOnion. Pixel Shader 1.4 is more efficient, and we all know evil energy companies like Enron would like nothing better than to strike against efficiency and promote energy wastage. :smile:
 