More 3DMark from ExtremeTech

Doomtrooper said:
What exactly does cg not supporting PS1.4 tell you?

It tells me they are holding graphics progression back because they chose not to support it (we know the advantages of PS 1.4), nothing more.

Well, it can be argued progression is now 2.0 and 1.4 doesn't really matter. But I do think you're correct in the sense that Nvidia is currently protecting their older cards by not including 1.4 in Cg or their released FX drivers.

But the question is: how often will devs make the extra effort to use 1.4, considering the installed base of cards?

I think there is significant reason to support it, don't you? Why should these card owners not be supported by devs? A list coming soon will show you there is significant support; in fact the two games that Borsti used as examples, "Tiger Woods" and "UT 2003", support PS 1.4.

If the future Nvidia cards support it, then this entire argument about the use of PS 1.4 is a moot point.

There is definitely a good reason to use 1.4, as it can show a marked difference compared to 1.1, as seen in 3DMark. However, Nvidia seems to be using what resources are in their control to make sure 1.4 is used as little as possible. I thought that devs wouldn't bother, with 2.0 already on the horizon and all cards supporting 1.1? As Dave says, that may not matter. ATI's dev rel may do it for them, and I'm sure the devs won't mind as long as it's not extra work for them.
 
In the end, we as consumers, developers, and IHVs on this forum and everywhere must look to what will help accelerate PC graphics technology and bring engines like Doom 3, running with full detail, to the platform sooner.

Debating a DX-revision PS upgrade that was designed to help developers implement these features with better speed seems silly.
I could see it if the jump from PS 1.1 to 1.4 were similar to 1.1 to 1.3, but there is significant improvement there, and Nvidia's future cards will support it, so in the end this is good for all of us.
 
Sxotty said:
You could say ATI was holding back progression by not writing a backend for CG, which I understand they are going to do with R350.

PLEASE, know the facts before posting rubbish. Do you know how stupid it would be for ATI to support a non-standard HLSL written by a competing company?
ATI IS supporting HLSL, but only the two standards we've always had:

1) Microsoft's DX9 HLSL
2) OpenGL 2.0 HLSL


Sxotty said:
Nvidia is holding it back by not writing a back end for RenderMonkey, but does it really matter?

RenderMonkey is not an HLSL, it's a tool that works with DX9 HLSL and OGL 2.0.
 
So, you know for a fact that ATI won't support Cg?

And please, stop with the alternating bold. It's nearly as annoying as all caps.
 
RussSchultz said:
So, you know for a fact that ATI won't support Cg?

And please, stop with the alternating bold. It's nearly as annoying as all caps.

At least I have already read in 2-3 interviews that they won't support Cg.
 
RussSchultz said:
So, you know for a fact that ATI won't support Cg?

And please, stop with the alternating bold. It's nearly as annoying as all caps.

Lol, no they won't support Cg. Why should they? DX9 HLSL is more complete and is not controlled by a competing IHV, and neither is OpenGL 2.0.
 
I actually did read somewhere that ATI said they would release some form of support for Cg for the R350. Of course, since I cannot find where it was, it could well have been BS, but I am 100% certain that I did see it stated.
edit:
And actually it would make sense despite what you say, even though I am sure you are indignant to hear someone say it: if developers are using Cg, then it would make sense for ATI to write something to integrate support for it on their hardware. If they had to pay Nvidia something to do so, I can understand it not making sense. There is really nothing wrong with having more than one or two languages to code in. Perhaps you think it is a waste of their resources, but if even one popular game is coded only in Cg and ATI's product looks crappy in performance, then it will hurt their image, so it is a valid use of resources. Tenebrae, which I realize is a mod, is going to be coded in Cg, for example, and here I have my 9500 Pro and I want it to run well on my computer, so I certainly would hope ATI would put effort into it.

If ATI doesn't then I am likely to buy a product that will have a wider scope of application.
 
No offense... Tenebrae does nothing for me (it's not an officially supported mod, and the lead programmers are Nvidia-slanted, with one ATI guy making sure support is there). I'm much more concerned about future titles like Stalker etc.

There are some people here on this forum who believe 'more is better' when it comes to HLSLs. I disagree.
More doesn't mean better, it means more confusion; 'if it isn't broke, don't fix it'.
There have been two standard APIs in the PC graphics industry for years; Glide was a third, but since 3DFX is gone, so is that API.
Both of these APIs are controlled by a governing body made up of all the IHVs and other high-level players in the PC industry like Dell, Intel etc. They are not controlled by a graphics card company. Nvidia still has control of Cg and can twist and manipulate it whichever way they wish, and ATI couldn't do a damn thing about it.
OpenGL 2.0 and DX9 HLSL offer everything Cg does, with one large exception: they're optimized for all hardware, not just one company's.
 
An OpenGL TITLE again... this would never happen with DX9


http://www.4drulers.com/ampdownload.html

AMP II playable demo. This is the latest demo of the AMP II game engine.

System requirements:

Nvidia GeForce 3 and 4 ti. graphics accelerator*

Windows 95, 98, 2000, XP, NT.

64 Megs of System memory

60 Megs of Hard drive space

Pentium III 400 Mhz or higher.

*The only video accelerator cards the demo supports right now are the Nvidia GeForce 3 and GeForce 4 Ti. Note: You must run the latest official Nvidia drivers, not the beta drivers. GeForce MX and ATI are not supported in this initial release. We will support those cards in the next release.

Releasing a demo so only a certain installed user base can check it out limits their market penetration = one more game to overlook on the shelves.

A screenshot... too bad that's all I can see.

http://www.4drulers.com/gore/sshots/amp2shot02.jpg
 
Supporting PS 1.4 makes all the sense in the world. If you have some kind of (optional) graphical effect in your engine, and it can be achieved by 1.4 and 2.0 shaders, you would want to write the effect in 1.4 so it can run on a wider range of cards. Simple as that.

Requiring PS1.4 as baseline is a different thing, however. I don't think there are enough 1.4 ATI cards to base your minimum spec on them. I expect the minimum spec to jump from DX7 to PS1.1 to PS 2.0.

To think that the minimum spec is still not PS 1.1, and will not be for a while... and it's all nVidia's fault.
 
Jare said:
Supporting PS 1.4 makes all the sense in the world. If you have some kind of (optional) graphical effect in your engine, and it can be achieved by 1.4 and 2.0 shaders, you would want to write the effect in 1.4 so it can run on a wider range of cards. Simple as that.

Requiring PS1.4 as baseline is a different thing, however. I don't think there are enough 1.4 ATI cards to base your minimum spec on them. I expect the minimum spec to jump from DX7 to PS1.1 to PS 2.0.

To think that the minimum spec is still not PS 1.1, and will not be for a while... and it's all nVidia's fault.

I have to disagree. For developers starting to code right now, coding for PS 1.4 makes a LOT of sense because by the time the game ships the number of users with PS 1.4 cards will be HUGE. Remember, all DX9 cards have to support PS 1.4 and ALL ATI DX8-class cards already have that support. Also, coding for DX9, i.e. PS 2.0, at this point would be not only expensive (in terms of sales) but also illogical, since you would be coding for an even smaller installed base.
 
My long thoughts on this

Why developers liked NVidia
The truth of the matter is that until the 9700 and the late FX, NVidia was on the way to totally owning the market for gamer-oriented 3D accelerators. Support for ATI and other chips was becoming more and more of an afterthought for developers--who in fact liked it that way. Things were easier if they only had to develop for the NVidia architecture, and since the XBox also had that architecture, it gave them an easy path to the console market. If the FX had come out on time (before the R300), NVidia's market dominance might have been unstoppable, and game developers would have happily targeted their games entirely at the NVidia architecture. As alternative video cards became less well supported, they would have become less popular and developers would have even less reason to support them--which they would have regarded as a good thing.

NVidia also had the advantage of solid, often updated drivers with excellent OpenGL support, a close relationship with Microsoft, and excellent developer relations--including monetary incentives for NVidia branding.

Some recently released titles were developed under the assumption of NVidia dominance, and many still in development have developers that were working within that model.

Cg and Developers

Part of the excellent developer support for NVidia was a website with well-documented SDKs and tools. Cg built on that by providing a state-of-the-art tool for writing complex shaders, of the sort that NVidia promised would be optimum for their upcoming hardware.

If you were a developer that shared the assumption of NVidia dominance, Cg seemed like a fairly easy choice to make. It would provide the best possible support for the dominant platform, and NVidia promised at least passable support for the "standards" (which in any case appeared to be becoming less important to the market of the future). It supported OpenGL, and so the latest NVidia extensions, including those in the drivers which emulated the chip that promised to cement NVidia's future dominance, the FX. If Cg captured enough developer support, the other IHV's would be forced to supply back-ends optimized for their hardware--to the extent possible.

Expectations meet reality

Last summer ATI released the R300, a GPU which had much greater speed and much more advanced capabilities than NVidia's current flagship product. At first it was assumed that NVidia's FX chip would trump the R300 soon after, but as it turned out NVidia would have no competitive chip for 6 months or more--and numerous questions remain about the FX's performance, availability and driver support.

This gave ATI the opportunity to sell a lot of DX9 cards, and in fact to introduce a line of DX9 cards at various price points. ATI did not make great inroads in market share on a unit basis, but among higher-end card buyers, which are particularly important to developers because they spend the most money on games, ATI made significant inroads. In fact, the only significant market for a title released with DX9-specific features in the next few months would be ATI owners. The assumption of NVidia dominance has proven incorrect, at least for the next several product cycles.

Developers now have to take ATI owners into account, or face an angry backlash from a large proportion of their most vocal audience (cf. Neverwinter Nights). ATI can now afford to disdain Cg and urge developers towards the "IHV-neutral" "standard" platforms to ensure that ATI boards are supported properly.

If NVidia had counted on special optimizations available from Cg back-ends to get optimum performance on their future cards, they now are in a tenuous position. They have to sell Cg to developers even while the best available hardware to run next-generation shaders is made by a company that is adamantly refusing to support Cg. And if Cg does not gain support among developers, NVidia may find its devices comparing less favorably on other development platforms.

[Edited for formatting]
 
RussSchultz said:
So, you know for a fact that ATI won't support Cg?

And please, stop with the alternating bold. It's nearly as annoying as all caps.

R.Huddy said so explicitly in the directxdev forums. You are an Nvidia shareholder, no doubt?
 
I have to disagree. For developers starting to code right now, coding for PS 1.4 makes a LOT of sense because by the time the game ships the number of users with PS 1.4 cards will be HUGE. Remember, all DX9 cards have to support PS 1.4 and ALL ATI DX8-class cards already have that support. Also, coding for DX9, i.e. PS 2.0, at this point would be not only expensive (in terms of sales) but also illogical, since you would be coding for an even smaller installed base.

And I have to disagree somewhat with you :)

Problem is, you're saying that all DX9 cards support PS 1.4. But all DX9 cards also support PS 2.0, so why not use that instead?

The talk about the installed user base is IMO a rather moot point since we're talking about games that are at the beginning of their development phase right now. This is because all DX8 PS 1.4-capable cards are very slow. Going by 3DMark 2003, none of them will be usable when we start to see PS 1.4/2.0 games like GT2-4. And of course especially GT4 :)
 
Is UT2003 slow?

If a shader can fit into the PS1.4 model, then why not use it? You've instantly opened up the appeal to a wider range of users.

If you are developing shaders for the first time and using DX9 HLSL to do it, then you may find that they fit into a version you didn't expect. Evidently with HLSL you can tell it to compile to a target and see if your shader will fit into that target - it's not too difficult to compile to each of the shader revisions to see which model your shader program can fit in; you may end up with more shaders hitting PS1.4 in this case.
 
DaveBaumann said:
Is UT2003 slow?

If a shader can fit into the PS1.4 model, then why not use it? You've instantly opened up the appeal to a wider range of users.

If you are developing shaders for the first time and using DX9 HLSL to do it, then you may find that they fit into a version you didn't expect. Evidently with HLSL you can tell it to compile to a target and see if your shader will fit into that target - it's not too difficult to compile to each of the shader revisions to see which model your shader program can fit in; you may end up with more shaders hitting PS1.4 in this case.

I thought we were talking about games that are at the beginning of their development phase right now (released 2-3 years from now?). And Carmack already talks about going over the shader limit on the R300, so I'm guessing that games like that won't have a PS 1.4 fallback.

Edit: Another thing, the two licensees of the new Unreal engine that I know of (Thief 3, Deus Ex 2) are using a completely new rendering engine (looking a lot more like 3DMark GT2-3, I might add), so I wouldn't use the UT 2003 benchmarks as an indicator of performance in those games. Maybe I should add that I think it's in games like these that we're probably going to see PS 1.4 support since, AFAIK, they are to be released (unless they get the DNF disease :)) in <= a year or so. But as I said, I wouldn't use UT 2003 as a benchmark for these games.
 
I'd just like to lay down one little comment on the 1600x1200 score.

I had thought that previously, with the FSAA benchmarks, the higher-than-normal reduction in performance was due to memory usage, which could explain it well in those situations.

However, given that there is such a drop in a situation that shouldn't be any more memory-size-bound than on any other architecture, this is clearly highlighting a different aspect of the architecture. There are two questions here: is it the same flaw that results in low 16x12 FSAA performance, or is it different?

I really do not buy that it could be memory bandwidth problems, unless nVidia's memory bandwidth savings tech just doesn't work at 16x12 (which would be silly...to say the least). To put it simply, the z-buffer compression will work better when there are more pixels per triangle. Higher resolution does this. The same goes for other occlusion detection technologies and whatnot. In other words, higher resolutions should be less memory bandwidth bound than lower resolutions (compared to fillrate limitations).

Due to the extremely high-poly nature of the 3DMark tests, one possible explanation is that the FX is storing the geometry in video memory, whereas the 9700 Pro is not, leading to an increase in overall memory bandwidth usage on the FX.
 
Well, I'm just talking about the use of PS1.4 in general. However, just because GT2&3 are slow, it does not necessarily follow that all games will be slow. If you've not followed the conversations about 3DMark03, there are reasons why it is slow, and this is a conscious decision to do it in that fashion.

However, just because JC has "bumped into the shader limits of R300" doesn't instantly equate to all games requiring that many instructions in 2-3 years; JC is most likely talking about his internal testing, not anything relating to Doom III or even, necessarily, his 'next' engine.

The point being is that I've had a number of conversations with developers saying that, well, DX8 was a bit of a waste - it's the interim release that introduces stuff but ultimately gets pushed to the sidelines. Some developers are likely to be picking up shaders for the first time with DX9-class hardware, partly because they need to and partly because of HLSL - they are not immediately going to run up to 96+ instructions, and as they start to 'dabble' it's very possible that they'll begin to realise that the shaders they are looking at will be able to fit into lower models (as was the case with 3DMark). Even if they don't, the IHVs' dev rels probably will.
 
DaveBaumann said:
Well, I'm just talking about the use of PS1.4 in general. However, just because GT2&3 are slow, it does not necessarily follow that all games will be slow. If you've not followed the conversations about 3DMark03, there are reasons why it is slow, and this is a conscious decision to do it in that fashion.

I've followed the discussion and I agree. I don't think that games will be that slow. But if you're about to release a game within 2-3 years, you have two choices as I see it: either adapt the game so that it'll run on DX8 cards, or don't. The first means that you'll have to lower your standards a LOT as I see it (there's a huge performance difference between DX8 and DX9 cards). Have we ever had such a performance difference between different generations (DX versions) of cards?

I'm just guessing that because of that, the DX8 generation of cards will be sort of a lost generation and that we will see a jump over to the DX9 generation. Because even if Doom3 runs a lot better than the 3DMark 2003 GT tests, it'll still require a DX9 card to run with any kind of FSAA/aniso, which I know a lot of people are used to by now (me f.e.). We're using 1024 as a standard res now, and going by the leaked alpha, I'm guessing that I'd have to go down to 640*480, no FSAA/no aniso, to get good framerates on my GF4.

However, just because JC has "bumped into the shader limits of R300" doesn't instantly equate to all games requiring that many instructions in 2-3 years; JC is most likely talking about his internal testing, not anything relating to Doom III or even, necessarily, his 'next' engine.

I don't think he's talking about Doom3, but probably his next engine.
I'm pretty sure that engine won't run that well on the R300 or the GF FX.

The point being is that I've had a number of conversations with developers saying that, well, DX8 was a bit of a waste - it's the interim release that introduces stuff but ultimately gets pushed to the sidelines. Some developers are likely to be picking up shaders for the first time with DX9-class hardware, partly because they need to and partly because of HLSL - they are not immediately going to run up to 96+ instructions, and as they start to 'dabble' it's very possible that they'll begin to realise that the shaders they are looking at will be able to fit into lower models (as was the case with 3DMark). Even if they don't, the IHVs' dev rels probably will.

I wrote the thing above before I read this :)

Although I agree that they're probably not going to run up to 96+ instructions right away, I would still say that DX8 hardware will have very big problems with most games (at least when we're talking about FPS games) coming out in 2-3 years. They'll be pretty much outdated when Doom3 comes out, as I see it, and I'm expecting that game this year.
 
Doomtrooper said:
Releasing a demo so only a certain installed user base can check it out limits their market penetration = one more game to overlook on the shelves.

That's pretty stupid... they should have held off the month required to get it running on ATI cards and GF MXs (wtf were they thinking, not supporting the MX?!).

Bjorn said:
I'm just guessing that because of that, the DX8 generation of cards will be sort of a lost generation and that we will see a jump over to the DX9 generation. Because even if Doom3 runs a lot better than the 3DMark 2003 GT tests, it'll still require a DX9 card to run with any kind of FSAA/aniso, which I know a lot of people are used to by now (me f.e.). We're using 1024 as a standard res now, and going by the leaked alpha, I'm guessing that I'd have to go down to 640*480, no FSAA/no aniso, to get good framerates on my GF4.

I think you're mixing up card generations and DX generations. The reason the Radeon 9700 is faster than the Radeon 8500 has nothing to do with DX9. Remember, the R8500 was a lot faster than the original Radeon too (and even faster than an equally clocked R7500). If DX9 had never been released, the 9700 would still be faster just because it's a newer (and better) architecture. In fact, outside of 3DMark we have no real measure of DX9 performance at all.

I don't really see anything at all to support your contention that DX8.1 is just going to fall by the wayside. DX8.1 added a lot more than some of the other DX versions, so I don't really see how it's a "waste". Time will tell, I suppose.
 