ShaderMark v3.0 Poll - Multi API or Multi OS

  • C++ + OpenGL 2.0 + DirectX 9 (Multi API): 0 votes (0.0%)
  • no matter, and/or other suggestion: 0 votes (0.0%)

  • Total voters: 187
It should stay the same. There probably wouldn't be too much difference anyway. The point is to test approximately how capable your graphics card is, not to test the differences between D3D & OpenGL, Java & C++, Windows & Linux, etc.
 
I voted for OGL + D3D, because if I had to choose between Java + OGL and native OGL + D3D, I'd pick the latter.
Mainly because Java doesn't add much to the equation when it comes to graphics performance. The OGL driver or the hardware doesn't suddenly get faster or slower if you call it from a different language, and the tests themselves are probably nowhere near CPU-bound anyway.
Also, the hardware won't get faster or slower when you use another OS, and there should also be little or no differences in driver performance (well, there are, in some cases, but there shouldn't be).

On the other hand, by far the largest group of users uses Windows, especially when it comes to 3d games, and a lot of these games use D3D. So, ignoring D3D would be ignoring a significant part of the market, which can't be right.
So I think Windows + D3D is most important. OGL could be an interesting extra option, but I would not favour it over D3D.
One possibility is of course to develop the OGL portion of the benchmark in a platform-independent way, so that you get both options in one.
But to me, the list is as follows, in order of decreasing priority:

1) Windows/D3D
2) Windows/OGL
3) Other OSes/OGL
 
tb said:
You can also post your feature request here.

Thomas
I voted for the C++ option simply because I feel it's wrong to require downloading and installing a 15 meg framework if it isn't strictly necessary.
If you have a working C++ codebase, go ahead with that.

A Java-based Linux port would allow investigating how much the driver logic (shader compilation + optimization) differs between Windows and Linux. Ehrm ...
1) I simply don't care enough. This isn't platform specific at all, so I fully expect the big two IHVs to share that piece of logic between Windows and Linux driver releases. That's an assumption, and it may be wrong, even if it would disappoint me. But it's not worth the effort IMO.
2) You'd cut out DirectX Graphics, which makes for a far more interesting competition.


And I'm really glad to see that you have GLslang testing on your mind. Nice one :)
 
Sticking with Windows would be a good idea, considering ATI is quite lazy with their Linux driver development. People don't want to see their X800 slower than an FX5900 under Linux; even if it's true, they won't believe it. But if you have enough resources, I think developing a Linux port might not be a bad idea, as it could push IHVs to perfect their support for this OS, which would be good for Linux users.

Supporting OGL 2.0 is definitely necessary. As for features, I'd like to see options for customized shaders: people supply their own shaders to the program, and ShaderMark 3.0 gives the results. That'll be cool, especially for people with some knowledge of shader programming.
 
991060 said:
As for features, I'd like to see options for customized shaders: people supply their own shaders to the program, and ShaderMark 3.0 gives the results.

;)

Well, you could do a little code customising in the 2.1 version already, but only within the primary shader profile limits, which means:

max 2_0 for all shaders which have a 2_0 in their fx file
max 2_b for all shaders which have a 2_x in their fx file
max 3_0 for all shaders which have a 3_0 in their fx file
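
For illustration, the cap is whatever profile the fx file's compile statement names, which D3DX honours when the effect is created. A hypothetical C++ sketch (not ShaderMark's actual code; the shader and function names are made up):

    // Hypothetical sketch: the "compile ps_2_b" target inside the fx
    // source is what caps a 2_x shader at 2_b.
    #include <d3dx9.h>
    #include <cstring>

    const char* fxSource =
        "float4 MainPS(float2 uv : TEXCOORD0) : COLOR0\n"
        "{ return float4(uv, 0.0f, 1.0f); }\n"
        "technique Test\n"
        "{\n"
        "    pass P0\n"
        "    {\n"
        "        PixelShader = compile ps_2_b MainPS();\n"
        "    }\n"
        "}\n";

    ID3DXEffect* createEffect(IDirect3DDevice9* device)
    {
        ID3DXEffect* effect = NULL;
        LPD3DXBUFFER errors = NULL;
        // D3DXCreateEffect compiles the embedded HLSL against the
        // target named in the compile statement above.
        D3DXCreateEffect(device, fxSource, (UINT)strlen(fxSource),
                         NULL, NULL, 0, NULL, &effect, &errors);
        if (errors) errors->Release();
        return effect;
    }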

P.S. Anyone here with an HLSL to GLSL (or the other direction) converter? Otherwise I would have to write my own, because I'm too lazy to write all shaders multiple times....

Thomas
 
I'd really like to see an OpenGL port.

As for writing the shaders once and splitting between GLSL and HLSL, there should be a couple of options available. One may be to use ATI's RenderMonkey interface. I've never used it personally, but as long as its XML interface generates HLSL and GLSL shaders, and the XML language is flexible enough to write the shaders in, it could be used.

It looks like there was a proposal for exactly what you're asking for almost a year ago:
http://www.web3d.org/x3d/workgroups/x3d_programmable_shaders_proposal/

It looks like a preview release is available, but it doesn't yet support GLSL:
http://www.bitmanagement.de/developer/contact/preview/6.2/shader/

...as a side comment, you could always use Cg, but that would be potentially bad for a benchmark, since you really would want to use IHV-independent tools if at all possible. That said, it might be fine to use Cg for a first implementation, with an appropriate caveat on all results in OpenGL. The major issue here is that you'll mostly be benchmarking nVidia's compiler vs. Microsoft's (so if a Cg path is implemented, it would also be nice to allow for the DX targets, just to see what the compiler difference is).
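
To make that concrete, here is a rough sketch of the dual-target idea with the Cg runtime (hypothetical shader source; assumes the Cg toolkit is installed; error checking omitted). The same source goes through an OpenGL ARB profile and a DX pixel shader profile, so the two compiler paths can be compared:

    // Sketch: compile one Cg source for both an OpenGL ARB profile and
    // a D3D pixel shader profile.
    #include <Cg/cg.h>

    const char* cgSource =
        "float4 main(float2 uv : TEXCOORD0) : COLOR\n"
        "{ return float4(uv, 0.0, 1.0); }\n";

    int main()
    {
        CGcontext ctx = cgCreateContext();

        // OpenGL path: ARB_fragment_program assembly.
        CGprogram fpGL = cgCreateProgram(ctx, CG_SOURCE, cgSource,
                                         CG_PROFILE_ARBFP1, "main", NULL);
        // D3D path: the very same source compiled for ps_2_0.
        CGprogram fpDX = cgCreateProgram(ctx, CG_SOURCE, cgSource,
                                         CG_PROFILE_PS_2_0, "main", NULL);

        // The generated low-level code can be inspected or handed to the API.
        const char* asmGL = cgGetProgramString(fpGL, CG_COMPILED_PROGRAM);
        const char* asmDX = cgGetProgramString(fpDX, CG_COMPILED_PROGRAM);
        (void)asmGL; (void)asmDX;

        cgDestroyContext(ctx);
        return 0;
    }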

991060 said:
Sticking with Windows would be a good idea, considering ATI is quite lazy with their Linux driver development.
I don't see why that should factor in.

Anyway, if an OpenGL profile is made, it should be really easy to make a Linux port (since really the only significant difference is that windows are created differently in Linux...though there is a potential problem, in that you may not be able to use the same resolution/bit depth in Linux on every machine configuration).
 
Well, I'm just saying a benchmark conducted on an immature driver is somewhat useless (my personal opinion). Everybody knows the X800 is much more powerful than NV3x, but the former got beaten in the latest Linux game test. What also concerns me is how many Linux users out there care about their GPU's performance. A tool used by no one doesn't justify the effort to develop it, even if it's easy to build. I think tb is quite busy developing ShaderMark, and very few people have helped him with the development, if anyone at all. I suggest postponing the Linux project for a while and perfecting the Windows version, while waiting for IHVs to do something about their Linux drivers.
 
tb said:
P.S. Anyone here with an HLSL to GLSL (or the other direction) converter? Otherwise I would have to write my own, because I'm too lazy to write all shaders multiple times....

Thomas

There's none that I recall, but ATI's Ashli engine can compile HLSL/GLSL to DX9 VS/PS and OpenGL VP/FP. Maybe you could write the shaders once in HLSL, use Ashli to compile them to VP/FP, and use those in the OpenGL profile. But I just don't know about Ashli's cross-platform compiling efficiency.
 
991060 said:
Well, I'm just saying a benchmark conducted on an immature driver is somewhat useless (my personal opinion). Everybody knows the X800 is much more powerful than NV3x, but the former got beaten in the latest Linux game test. What also concerns me is how many Linux users out there care about their GPU's performance.
Well, it doesn't matter if that's the only driver available for the platform.

And I care. It's nice to be able to play some of my games under Linux. I typically need to use Linux for work, so it's quite nice to not have to reboot to play my games.
 
I don't think you should use Cg or ATI's Ashli for GL.
GLSL should be compiled at runtime! It is meant to be used that way, not compiled outside the application with the generated fragment program loaded afterwards. That is okay for Cg and D3D, but not GLSL. You should compile GLSL code at runtime, or not bother using it. The idea is for the driver to decide the best way to compile GLSL code, without the need for targets like PS1.0, PS1.1 or whatever...
 
Sigma said:
GLSL should be compiled at runtime! It is meant to be used that way, not compiled outside the application with the generated fragment program loaded afterwards. You should compile GLSL code at runtime, or not bother using it.

all shaders will be compiled at runtime, otherwise you can't "edit" them...
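
For reference, runtime compilation through the OpenGL 2.0 entry points looks roughly like this minimal sketch (assuming an extension loader such as GLEW provides the entry points; the older ARB_shader_objects calls work the same way with an ARB suffix; error handling is trimmed):

    // Minimal sketch: hand the driver raw GLSL text and let it compile
    // for whatever hardware is present; no fixed targets like PS1.1.
    #include <GL/glew.h>
    #include <cstdio>

    GLuint compileFragmentShader(const char* source)
    {
        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(shader, 1, &source, NULL); // raw GLSL, no pre-pass
        glCompileShader(shader);                  // driver compiles at runtime

        GLint ok = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof(log), NULL, log);
            std::fprintf(stderr, "GLSL compile failed: %s\n", log);
        }
        return shader;
    }

Editing a shader would then just mean feeding the modified source string back through the same path.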
 
Multi-OS. There are plenty of DX tests on Windows. I think a really good niche would be filled if one could get a nice, consistent benchmark across platforms.
 
Humus said:
I'm voting for multi-OS with GLSL.

Me too, but not Java. You can do C++ + SDL + OpenGL to have a nice cross-OS (and cross-platform, since it would work with Macs) system, or anything you want, as long as you stay away from Java; see the sketch below.

(I like Java, but mostly for non-GUI stuff, and for servlets/JSPs.)
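
A rough sketch of that SDL route (SDL 1.2-style API; window size and flags chosen arbitrarily): one code path opens an OpenGL window on Windows, Linux and the Mac.

    // Sketch: SDL hides per-OS window/context creation (WGL vs. GLX vs.
    // AGL) behind one call, so the GL benchmark code stays identical.
    #include <SDL.h>

    int main(int argc, char* argv[])
    {
        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            return 1;

        SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
        SDL_Surface* screen = SDL_SetVideoMode(1024, 768, 32, SDL_OPENGL);
        if (!screen)
            return 1;

        // ... run the benchmark frames here, swapping each frame:
        SDL_GL_SwapBuffers();

        SDL_Quit();
        return 0;
    }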
 