Will 3DMark Next support DirectX 10.1 [Shader Model 4.1]?

Shtal

Veteran
Microsoft isn't the only developer downplaying DirectX 10.1. Cevat Yerli, CEO of Crytek, states, "We pride ourselves on being the first to adopt any important new technology that can improve our games so you would expect us to get with DX10.1 right away but we've looked at it and there's just nothing in it important enough to make it needed. So we have no plans to use it at all, not even in the future." http://www.dailytech.com/article.aspx?newsid=9656
One of the only features added to RV670 is the inclusion of DirectX 10.1 support, an API layer that will be rolled out with Microsoft's upcoming Windows Vista Service Pack 1.

Based on that reading, I wonder if FutureMark feels the same way about DX10.1, that it is a minor update of DX10 and not an important enough feature to add; or whether FutureMark thinks differently and will delay the new 3DMark's release date to add SM4.1 support. (It resembles the update from Pixel Shader 1.1 to PS1.4.)

If FutureMark does add SM4.1 support in the new 3DMark, I would bet Radeon HD3800 series cards will score higher than the GF8800 series.
 
The question isn't whether or not 3DMark Next will arrive with an SM4.1 implementation. The question is what types of tests and features will be promoted. IMHO, deferred shading and global illumination are a MUST for the next 3DMark! Those two techniques are the future of lighting in the upcoming years, just like HDR and normal mapping were the big things two years ago.

And if Futuremark decides to go in that direction, support for SM4.1 is the natural and only reasonable thing to do!
 
If FutureMark does add SM4.1 support in the new 3DMark, I would bet Radeon HD3800 series cards will score higher than the GF8800 series.
No s%$! :smile: The 8800's can't and won't run DX10.1 software unless Nvidia are hiding something clever up their sleeves...

SM4.1 relaxes some interstage restrictions which could be useful at the high end of the spectrum, which is where benchmarks usually reside if they're to stretch the hardware at all. Although I can't see the commercial proposition in a benchmark for an API that apparently isn't going to be widely used...

Jack
 
If FutureMark does add SM4.1 support in the new 3DMark, I would bet Radeon HD3800 series cards will score higher than the GF8800 series.

Heh, and then it will be even more useless as a predictor of graphics performance than 3DMark06 is right now.
 
If they add SM4.1 functionality, it'll be similar to what we saw with 3DMark2001 SE and the PS1.4 test, i.e. just a tech demo that doesn't count towards the score. Or possibly SM4.1 will be used in places where it makes sense, to speed up certain operations; in that case I suppose RV670-based boards would get a small boost, but nothing really significant I don't think. I believe the first case is more likely, though.
 
Call me crazy, but I think 3dm2k8 should wait for OpenGL 3 so it can run on WinXP, and also for VS2008 to get better CPU optimizations. I wouldn't mind waiting another year to get that hehe :D I don't like the idea of requiring Vista to run it... and ofc I don't like DX10.1, unless Microsoft finally decides to port it to WinXP with a new driver model...
 
Last edited by a moderator:
Call me crazy, but I think 3dm2k8 should wait for OpenGL 3.0 so it can run on WinXP, and also for VS2008 to get better CPU optimizations. I wouldn't mind waiting another year to get that hehe :D I don't like the idea of requiring Vista to run it... and ofc I don't like DX10.1, unless Microsoft finally decides to port it to WinXP with a new driver model...
Haven't these points been beaten to death and back several times yet?

Admittedly oversimplified:

- OpenGL 3.0 isn't going to make a huge difference to the current graphics landscape even if it does ever come out (;))... games will still primarily use DX as there's simply no compelling reason to change. GL3.0 doesn't even bring the spec up to the level of DX10, let alone 10.1 (see the details that I posted in the other thread).

- What "optimizations" are you expecting in VS2008 that are going to make any huge differences? The biggest C++ changes in 2008 are the support for C++0x stuff which while quite useful is less about performance than language flexibility and elegance.

- DX10 is only going to run on Vista. Get used to it. And I refuse to accept any more whining on this point, as Vista is IMHO just as usable as (or more usable than) XP for anyone with a video card that supports DX10, and it costs a mere fraction of what such a video card costs... hell, a quite usable version (Home Premium) is much cheaper than buying a usable version of XP (Pro).

I don't mean to come off as too harsh, but these things have been covered several times in B3D articles and epic threads.

As far as the original question goes, the only compelling reason I could see to support DX10.1 stuff is if the benchmark is using heavy deferred shading. In that case, supporting 10.1 would merely provide a small performance improvement to supporting cards (if that). Thus I don't see a pressing need to support any of the new API features.
 
It's good to see AMD embracing DX10.1 so quickly, as it will eventually be the way of the world. The new capabilities that DX10.1 enables are enhanced developer control of AA sample patterns and pixel coverage, blend modes that can be unique per render target, doubled vertex shader inputs, required fp32 filtering, and support for cube map arrays, which can help make global illumination algorithms faster. These features might not make it into games very quickly, as we're still waiting for games that really push DX10 as it is now. But AMD is absolutely leading NVIDIA in this area. http://www.anandtech.com/video/showdoc.aspx?i=3151&p=2

If FutureMark does implement SM4.1 support in the new 3DMark, it should be attractive :D

Edit: Features DirectX 10.1 includes:
Separate per-MRT blend modes,
Pixel coverage mask,
Sample pattern selection,
Cube map arrays,
VS inputs: 32 for DX10.1 vs. 16 for DX10,
Blending: INT8 & INT16 for DX10.1 vs. only INT8 for DX10,
Filtering: FP32 for DX10.1 vs. FP16 for DX10,
Real-time global illumination,
Relaxed restrictions on AA support,
Enhanced features: programmability, pipeline precision, HDR lighting, anti-aliasing.
 
- OpenGL 3.0 isn't going to make a huge difference to the current graphics landscape even if it does ever come out (;))... games will still primarily use DX as there's simply no compelling reason to change. GL3.0 doesn't even bring the spec up to the level of DX10, let alone 10.1 (see the details that I posted in the other thread).
Yep, you're right, but I was thinking more of Mount Evans, which should give us the same features as DX10. Sorry for the 3.0... I really meant to put 3.1, but since there's no spec yet I don't know what the Mount Evans version number will be :p

- What "optimizations" are you expecting in VS2008 that are going to make any huge differences? The biggest C++ changes in 2008 are the support for C++0x stuff which while quite useful is less about performance than language flexibility and elegance.
A new optimized OpenMP vcomp.dll, SSE3 compiler optimizations (not confirmed), SSE4 intrinsics headers for Phenom/Penryn, a new "blended" instruction model (based on the P4 set?) and x86 refinements. Well, the point is that Microsoft supposedly improves the compiler optimizations a bit with each new version, and considering three years have passed, they've had time to optimize it... so I hope a .EXE compiled with VS2k8 runs faster than one compiled with VS2k5... or perhaps not...

- DX10 is only going to run on Vista. Get used to it.
I hope that once Mount Evans is released and nobody is using DX10, Microsoft will change their mind and port DX10 to WinXP with a new driver model (to get VRAM virtualization and other things XP's driver model doesn't implement).

hell a quite usable version (Home Premium) is much cheaper than buying a usable version of XP (Pro).
Here a Vista Premium non-upgrade package costs 360€. My über salary is below 1000€... and considering I'd need a new computer to run Vista... I'd say using Vista Premium is going to cost me a lot, really... so I'm staying with my current OS for now.

On the other hand... why would I pay that when Ubuntu is free and MacOSX is only 100€?
I still remember when W95 cost less than 100€ here. Microsoft has gone crazy, and their prices are completely absurd for a basic OS. If you look at Ultimate's price, the situation is even worse.

If 3dm2008 is Vista-only it won't be popular at all (but I'm going a bit offtopic, sorry)... simply because Vista isn't popular, and I doubt it will be in its current state... But well, there's another thread discussing this.

About DX10.1 in 3dm2008: I think it would be good to support the latest features. Not sure what the Futuremark team is doing atm, but they should include it because the new cards (Radeon 3870 and GeForce 9?) support it.
 
A new optimized OpenMP vcomp.dll, SSE3 compiler optimizations (not confirmed), SSE4 intrinsics headers for Phenom/Penryn, a new "blended" instruction model (based on the P4 set?) and x86 refinements. Well, the point is that Microsoft supposedly improves the compiler optimizations a bit with each new version, and considering three years have passed, they've had time to optimize it... so I hope a .EXE compiled with VS2k8 runs faster than one compiled with VS2k5... or perhaps not...

Real game programmers don’t use OpenMP. It sucks for this purpose.
When it comes to optimizations, you're better off using the Intel compiler. I'm sure VS2008 is still not able to reach the same level.

I hope once Mount Evans is released and nobody uses DX10 then Microsoft will change their mind and port DX10 to WinXP using a new driver model ( to get VRAM virtualization and other things not implemented in XP's WDM ).

As it is much easier to add a D3D10 path to a D3D9 engine than to port everything over to OpenGL 3.0, your hope never had much of a chance.
 
As for Vista, there's a whole bunch of brand new Home Premium and Business copies on eBay for 70-80€. I just bought Home Premium for 60€ this week.
 
When it comes to optimizations, you're better off using the Intel compiler
Some Intel DLLs (including the ICC runtime) have problems running on AMD CPUs.

As it is much easier to add a D3D10 path to a D3D9 engine than to port everything over to OpenGL 3.0, your hope never had much of a chance.
You need to change a lot of things to port a D3D9 path to D3D10; the API is very different (and the Vista API too). You'd probably need to do the same amount of work to port to OpenGL 3... but considering D3D10's poor portability, the OGL path would be better because the same code will run on different OSes, so it's all relative.

On the other hand, I won't like 3dm2008 if it can only use D3D10... Some games will use OGL3 in the future... so I can't trust a synthetic test that doesn't test the graphics card thoroughly (and that includes OGL driver bindings too). They should include an OGL3 path, first to port it to other OSes and also to test the OGL drivers, not only the D3D ones. That would be good for comparing D3D vs. OGL driver speed too...

As for Vista, there's a whole bunch of brand new Home Premium and Business copies on eBay for 70-80€. I just bought Home Premium for 60€ this week.
I hope you purchased an unopened package... because if the person who sold it already activated it, you won't be able to use it. Vista uses online activation based on hardware hashing, so if you change the motherboard/CPU/etc. within a short time period you need to reactivate or buy another license...
 
You need to change a lot of things to port a D3D9 path to D3D10; the API is very different (and the Vista API too).

I have done this already and the differences aren’t that big at all.

You'd probably need to do the same amount of work to port to OpenGL 3...

Such a port will force me to port all shaders too. OpenGL GLSL shader handling is a pain in the ass and I have no hope that it will become better with OpenGL 3.0.

but considering D3D10's poor portability, the OGL path would be better because the same code will run on different OSes, so it's all relative.

The only OS besides Windows that may have a critical mass of gamers is OSX. But even there the number of releases is low, and some only run through a "Windows emulator" called Cider.

This reduces the porting problem to Windows XP users with Direct3D 10 hardware. IMHO they will never reach a critical mass, as new hardware will mostly be sold as part of complete systems. Such systems with D3D10 hardware will be sold with Vista for sure.


On the other hand, I won't like 3dm2008 if it can only use D3D10... Some games will use OGL3 in the future... so I can't trust a synthetic test that doesn't test the graphics card thoroughly (and that includes OGL driver bindings too). They should include an OGL3 path, first to port it to other OSes and also to test the OGL drivers, not only the D3D ones. That would be good for comparing D3D vs. OGL driver speed too...

A 3DMark tests how fast a graphics card can run that 3DMark. Nothing more, nothing less.
 
Such a port will force me to port all shaders too. OpenGL GLSL shader handling is a pain in the ass and I have no hope that it will become better with OpenGL 3.0.
Well, HLSL shading style and GLSL syntax aren't very different. Effects are another story.
Btw, glFX and GLSL v2 are coming :p

The only OS besides Windows that may have a critical mass of gamers is OSX. But even there the number of releases is low, and some only run through a "Windows emulator" called Cider.
At id Software, Linux and MacOSX are valid options too.
Cider? I think you mean Cedega and Wine?

Well, whatever... software developers can't make apps for only one OS. We need portability. See id's policy change with the Tech 5 engine... read John Carmack's interviews and how he stresses that portability is critical for a game. Developing a game or app for only one OS is suicide. I think if 3dm2008 is D3D10 and Vista-only it's going to be a complete fiasco, but time will tell. Tons of companies (and users too) stay "windozed", but hopefully that's changing... at last Vista did something good.

A 3DMark tests how fast a graphics card can run that 3DMark. Nothing more, nothing less.
3DMark is a synthetic test to measure graphics card (well... and computer) performance. Because of that, it must try to test all aspects of graphics performance (fill rate, call overhead, CPU balance, blah blah blah)...

If you read the technical PDF that comes with 3DMark, you can see a brief explanation of the techniques they are using and why they chose those tests... For example, the test with the airplanes + smoke was designed to measure T&L + particles + fill rate.

Now, based on the score it gives you, you can extrapolate the results to know how fast your card is... but then you run Doom 3 and see it runs very slowly... how can that be if you got an amazing 13k 2006 marks? Perhaps your drivers or card aren't well optimized for OGL but are a beast under D3D... So you need some other indicator of GPU speed (for example the aforementioned Doom 3 FPS test).

To be more complete, 3DMark should include an OpenGL test port too... also to see the real draw-call overhead difference between D3D and OGL, differences in rendering quality and precision, etc... That would help improve the drivers too.

So I think they should wait and implement 3DMark 2008 with both DX10.1 and OGL Mount Evans paths... and with the OGL path they could port it to Linux, Solaris and MacOSX too... but well, that's only my opinion.
 
Well, HLSL shading style and GLSL syntax aren't very different. Effects are another story.
GLSL made some terrible design decisions IMHO, and there's no excuse since they could have simply just used HLSL/Cg and been done with it. Just another stupid "we want to be different for no reason" from OpenGL land.

Developing a game or app for only one OS is suicide.
That's pretty funny, because many of the top-selling games support only a single platform :D

To be more complete, 3DMark should include an OpenGL test port too... also to see the real draw-call overhead difference between D3D and OGL, differences in rendering quality and precision, etc... That would help improve the drivers too.
While quite a "noble" goal, it just isn't practical IMHO... I can count on one hand the number of PC games that use OpenGL, and personally I dislike all of them ;) Thus testing OpenGL performance is about as important as testing the exact rendering path of a particular game in my book... if you want to know how well Doom3 runs on your PC, download the demo and try it.
 
Edit: Features DirectX 10.1 includes:
Separate per-MRT blend modes,
Pixel coverage mask,
Sample pattern selection,
Cube map arrays,
VS inputs: 32 for DX10.1 vs. 16 for DX10,
Blending: INT8 & INT16 for DX10.1 vs. only INT8 for DX10,
Filtering: FP32 for DX10.1 vs. FP16 for DX10,
Real-time global illumination,
Relaxed restrictions on AA support,
Enhanced features: programmability, pipeline precision, HDR lighting, anti-aliasing.
Ok, we can all go home now!
 
GLSL made some terrible design decisions IMHO, and there's no excuse since they could have simply just used HLSL/Cg and been done with it.
HLSL is not a standardized (in the multiplatform/multivendor sense) thing... and it's that way due to Microsoft's monopoly and constant effort to own patents and private code, make things only for their own products like Windows and Xbox, abandon any dialogue with the ARB committee and Khronos, etc...

Cg is only well optimized and 100% compatible on NVIDIA cards.

GLSL has some good decisions too... but I agree, it needs some modifications, which, on the other hand, are in progress. But if you want to write good portable code for Windows, MacOSX, Linux and Solaris... there's only one option... OGL.

That's pretty funny, because many of the top-selling games support only a single platform :D
And they could have sold even more if they were more portable/multiplatform. Why limit your business profit opportunities to a single platform? (Unless Microsoft is paying you a fortune for it, or you're completely windozed.)

While quite a "noble" goal, it just isn't practical IMHO... I can count on one hand the number of PC games that use OpenGL, and personally I dislike all of them ;)
Almost all of id's (Doom 3, Quake 3, RtCW, Quake 4, QWarsET), MDK 2, Serious Sam, Sims 2, Prey, Sin 2, Kingpin, Rune, Soldier of Fortune, Descent 3, HeroesM&M, Second Life, World of Warcraft, Eve Online, Neverwinter Nights, Knights of the Old Republic, Chronicles of Riddick, Final Fantasy, GTA, Medal of Honor, Call of Duty, Unreal Tournament 3 (expected but delayed atm), etc... a lot of PC games use/can use OpenGL... you must have malformed hands then...
I think with OGL3 the number will grow even more. DX10 is becoming a complete sales fiasco, and for the moment it's clearly slower than WinXP+DX9, leading laptop vendors to protest and include the old WinXP because clients don't want Vista installed, etc... Good design, but bad marketing decisions/portability perhaps...

Thus testing OpenGL performance is about as important as testing the exact rendering path of a particular game in my books... if you want to know how well Doom3 runs on your PC, download the demo and try it.
Well, if I could do a quick OGL speed test just by checking an option in 3dm2008, that would be good. I use 3DMark to get the approximate speed of my computer and GPU... but clearly, for better results you need to run various tests and real games/apps.

Real time global illumination
I think Humus can tell us more about using 10.1's cube map arrays to accelerate GI... after all, he used cubemaps and SH for his GI demo.

Btw, Stepan uses OpenGL in his GI Lightsmark benchmark http://www.opengl.org/news/permalink/lightsmark_opengl_20_benchmark_released/
without requiring Vista or DX10.1...
 
And they could have sold even more if they were more portable/multiplatform. Why limit your business profit opportunities to a single platform?
Simple: because the cost of developing a code path for another platform (in this case, OpenGL and all of the OS-API calls, plus testing, QA, drivers, etc.) is more than the revenue that you'd bring in by adding the additional support. I'm willing to bet that the software vendors have done the math on this one, and they seem to pretty much all agree on this point :)

Almost all of ID(doom3, quake 3, RtCW, quake 4, QWarsET ), MDK 2, Serious Sam, Sims 2, Prey, Sin 2, Kingpin, Rune, Soldier of Fortune, Descent 3, HeroesM&M, Second life, World of Warcraft, Eve online, Neverwinter nights, Knights of the Old Republic, Chronicles of Riddick, Final Fantasy, GTA, Medal of Honor, Call of Duty, Unreal Tournament 3(expected but delayed atm), etc... a lot of PC games use/can use OpenGL.
"Can use" is completely besides the point, unless the OpenGL version is much faster (I've never seen this be the case - usually it's the opposite). Now eliminating really old games from that list (irrelevant to a new hardware/API benchmark) and accepting that ID/Carmack just likes OpenGL, we're left with:

- GTA, fine but not exactly a graphics powerhouse
- WoW, as above
- KOTOR, fair enough
- Riddick, fine
- CoD, I'll believe you although CoD4 uses DX...
- Sims2, uhh doesn't it use D3D on windows?

Anyway, my point was not to argue about which games use it and which don't... it's clear that the vast majority of games use DX - there's no way I could list the names in the forum here for even the games of the last year or two! So I'm sorry if I was overly sarcastic/exaggerating in my post, but I think the point has been made.

DX10 is becoming a complete sales fiasco, and for the moment it's clearly slower than WinXP+DX9, leading laptop vendors to protest and include the old WinXP because clients don't want Vista installed, etc...
No way - both Vista and DX10 are doing just fine. DX10 is just as fast or faster than DX9; the people who say things to the contrary really have no idea what they're talking about, usually in that they're comparing apps with greatly differing rendering paths/quality for each API. There are always whiners when new stuff comes out, but I've been quite happy and impressed with the speed at which people are upgrading to DX10-capable hardware and software.
 
Simple: because the cost of developing a code path for another platform (in this case, OpenGL and all of the OS-API calls, plus testing, QA, drivers, etc) is more than the revenue that you'd bring in by adding the additional support.
I agree with JohnC...
http://www.destructoid.com/john-carmack-hates-vista-maybe-you-should-too-29216.phtml

There is no point in requiring DX10 for a game... at least for now... so if Futuremark wants to make 3dm2k8 DX10-only, fine... but it's going to be a fiasco due to the small number of Vista users and the poor quality/speed of the Vista drivers. In fact, let me remind you that Roy's devblog was closed for telling the truth about all this...

it's clear that the vast majority of games use DX
Well, on other OSes OGL is your only option... OGL runs on Windows too... that's the point of OGL. If they use DX, they are forced to port to OGL to support other OSes, because Microsoft put all its effort into making it incompatible with other OSes and explicitly prohibited virtualization (see the Vista license trap).

No way - both Vista and DX10 are doing just fine.
Totally disagree. Vista, for me, is one of the worst OSes ever created (and, curiously, one of the most expensive ones too)... and I'm not alone in that opinion (don't make me post reviews of IHVs asking Microsoft for changes, or users asking vendors to remove Vista from their laptops). We moved back from Vista to WinXP at our company. I personally uninstalled Vista at home three times (btw, requiring three different license reactivations on hardware changes) due to bad drivers and app incompatibilities with the firewall, SATA, printers, scanners, pen tablets (artists here can tell you a million stories about that), gamepads, the UAC/permissions pain, and also due to excessive resource usage... and I don't know if you program using .NET 2.0, but almost half the controls have problems running on Avalon... so please don't tell me Vista is doing fine... :devilish:

DX10 is just as fast or faster than DX9; the people who say things to the contrary really have no idea what they're talking about, usually in that they're comparing apps with greatly differing rendering paths/quality for each API.
Perhaps, but I could show you some games with almost the same appearance where the DX10 path runs at barely 70% of the DX9 path's speed. On the other hand, DX9 games usually run much slower than they do on WinXP.
OK, perhaps it's immature drivers, a bad Vista kernel design or something else... but today DX10 gaming is not very good... Ask any serious FPS gamer and they'll tell you the typical "Vista sux".
See these:
http://blogs.zdnet.com/hardware/?p=797
http://politech.wordpress.com/2007/02/12/vista-not-playing-nice-with-fps-games/
http://games.slashdot.org/article.pl?sid=07/02/12/2212248

Google it... no player likes Vista's performance (especially x64... which is quite paradoxical).

Even IHVs, the press and even Microsoft's tech chief blame it!

http://bink.nu/news/seagate-blames-vista-drivers-for-poor-performance.aspx
http://www.news.com/Dell-brings-back-XP-on-home-systems/2100-1046_3-6177619.html?tag=nefd.lede
http://www.news.com/8301-10784_3-9785337-7.html?&subj=NewsBlog
http://www.macworld.com/news/2006/12/12/allchin/index.php

I could spend all day posting links to people blaming Vista... but I don't want to go too offtopic... and I don't need to cite more links... I tested Vista myself and I think it's a very bad OS.

There are always whiners when new stuff comes out, but I've been quite happy and impressed with the speed at which people are upgrading to DX10-capable hardware and software.
What speed are you talking about? Vista came out a year ago and I can only see a very small bunch of DX10-ready games (AFAIK today I can buy in a store just Crysis, Lost Planet and Bioshock...)... and I still see very common driver BSoDs... SLI is unstable, x64 versions of applications lack quality or are nonexistent... and the list of Vista-incompatible apps is absolutely worrying...

I would like to talk to the 3dm2008 team and ask them about the Vista and DX10 problems.

Uhh... okay. I use D3D10 in my shadows app... what's the point here? Neither of these are relevant or comparable in scope to 3dmark.
Well, somebody mentioned GI above as a DX10.1 feature... You can find a leaked internal ATI document about GI using cubemap arrays in DX10.1. I just wanted to show that very decent GI techniques (Lightsprint, Geomerics, FantasyLab) are possible in realtime today using OpenGL 2.0 hardware... without requiring DX10.1 or Vista... From a user's PoV, that automatically translates into more graphics quality without having to buy a new graphics card or a new computer... But ofc, IHVs and Microsoft make noise about all this because they want to sell us their new products.
 