Reverend at The Pulpit #5

Reverend


Here we go again...

Tomb Raider: Angel of Darkness
It's been a pretty eventful month or so for me wrt this game. I had never intended to buy it, because the past few sequels have been pretty abysmal in terms of graphics and gameplay, but I did have hopes that this latest sequel would be a change for the better, solely because it had been advertised as using a "completely re-worked graphics engine" (I think this is a quote from somewhere, a long time ago). I bought the game, played it and found it quite frustrating in terms of gameplay. The graphics initially looked good to me but ran quite abysmally at first on a Radeon 9600PRO, though that was before I learned more about the game (and I had, as usual, tried to set all graphics settings to "maximum/extreme"). After being informed by a site partner that this game can be benchmarked, I proceeded to try recording a demo and benchmarking it. The benchmarking feature worked poorly out of the box (official patch v42), again because I didn't know all the details (= debug features) of the game. So I contacted Core Design and everything took off, for the better. Almost all my suggestions (with hardware reviewers' needs in mind) were implemented, including command line parameters that allow benchmarking mode, the ability to set AA, etc. You can read all this in our TRAOD demo benchmark page/readme. It's been a pleasure working with Core Design; helping to fix issues (an annoying Z-fighting one, removing unnecessary performance-sapping effects, etc.) and privately receiving executable builds to verify that what I wanted was implemented correctly is most definitely something I can appreciate.

Some have questioned the quality of the graphics engine; others have questioned the relevance of using this game because they think it is "buggy" and has just plain lousy gameplay. The first can be a concern for Beyond3D; the second (gameplay) is of no concern to Beyond3D. However, if the graphics engine is "bad" (and I have personally witnessed a few unwanted rendering defects... like wobbly textures in one level, for instance), then it is logical to assume it will affect all types of graphics card. This is important when we do comparisons. The fact of the matter is that the game is (probably) the first full purchasable game to feature quite a number of DirectX9 technologies. 2.0 Pixel Shaders are used for both performance (fewer passes than 1.x Pixel Shaders) and quality reasons. Floating point in 2.0 Pixel Shaders helps improve quality. 2.0 Pixel Shaders with many instructions give the developer the effect he wanted to achieve (although we can always argue that the shaders don't necessarily need to be that big to achieve approximately the same effect). 2.0 Pixel Shaders help prevent some defects that happen with 1.1 or 1.4 pixel shaders. We've got post-processing using 2.0 Pixel Shaders -- the Depth-of-Field (DOF) is quite cool (its "blurriness" is never fixed; as you move closer and closer to a blurred texture, the texture becomes sharper/in-focus -- see the rough sketch below). 2.0 Pixel Shaders using floating point textures are used to improve the quality of projected textures for shadows -- it's not immediately noticeable, but as they say, "Everything counts in order to achieve perfection."

Many have commented that even though this game uses so many DX9 technologies, it doesn't look impressive, and that it actually doesn't look any better than a game like Unreal Tournament 2003. Well, you have to ask yourself what the "wow" factor in UT2003 is. Can't put it down in words, even though you like its graphics? It's the colorful textures, the detail textures, the high geometry, the more elaborate level design, the extensive use of cubemaps, the very contrasty lighting. These are "wow" factors that have an immediate impact. TRAOD is very different, with the developer knowing exactly what he wanted to use DX9 technologies for, and I think he achieved what he wanted, for the reasons I gave above.
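(For those unfamiliar with how a DOF post-process behaves, here's a minimal, hypothetical sketch of the general idea -- not Core Design's actual shader, just the usual circle-of-confusion approach, where blur radius grows with distance from the focal plane and therefore shrinks as you walk up to a surface:)

Code:
# Hypothetical sketch of a depth-of-field post-process (not TRAOD's actual shader):
# blur radius grows with distance from the focal plane, so a surface that looks
# like a blurry 2D backdrop sharpens as the camera closes in on it.
def blur_radius(depth, focal_depth, focal_range, max_radius=8.0):
    """Blur radius in pixels for a fragment at 'depth' (all units arbitrary)."""
    coc = abs(depth - focal_depth) / focal_range   # "circle of confusion" factor
    return min(coc, 1.0) * max_radius

print(blur_radius(depth=40.0, focal_depth=5.0, focal_range=10.0))  # 8.0 -> heavily blurred
print(blur_radius(depth=6.0,  focal_depth=5.0, focal_range=10.0))  # 0.8 -> essentially in focus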

There are some who aren't too sure whether our articles/reviews that use this game can be trusted, mostly because it's been "exclusive" to Beyond3D so far and hence our results can't be verified by other outlets. Tough luck. If there were any basis for such doubts, Core Design or Eidos would be among the first to say "Don't trust those articles and reviews at Beyond3D using what they claim is a private EXE build given to them by Core, because Core did no such thing". They haven't said anything like this, have they?

All of this is quite thrilling to me actually. To have helped provide the community with a new game benchmark in Splinter Cell was great. To do this again with another game is also great. We need more.

I was actually quite surprised to read that Valve's Gabe Newell commented on Dave's TRAOD shootout article. A few months back, I had sent him an email asking if Beyond3D could be afforded an early opportunity to benchmark Half-Life 2, much like what NVIDIA arranged with id Software wrt DOOM3. Gabe never replied :( . Hopefully, he's now heard of Beyond3D :)

And so, what does using TRAOD mean? It means NVIDIA's GeForceFX line can run it fine (after removing the unnecessary "Glow" effect... I don't even know where I can see the effect in the level that our custom-recorded demo is based on). It also means it loses to ATI's DX9 Radeon line. Does this mean such a result applies to all other DX9 games? No, because this depends more on the muscle of NVIDIA's and ATI's developer relations teams than on the actual muscle of their respective hardware. It also depends on IHV/developer politics, the ultimate choices and decisions of developers ("Are we going to be brave? What cards are selling the fastest?") and very likely a host of other considerations. Is any IHV "holding the industry back"? Can't say for sure. What I do know is that while NVIDIA is having a hard time with the GeForceFX products in the face of competition from ATI's DX9 Radeons, the NVIDIA PR personnel that Beyond3D deals with are quite helpful, and in the words of one of them about the priorities of his colleague who is "in charge" of Beyond3D, he "... has Beyond3d at the top of his priorities." I usually talk to a particular NVIDIA PR person using very, very frank words (we call each other "butthead") and he does the same with me, but we have history. He tries to help, which is a good thing. I told him about the poor relative GeForceFX performance in TRAOD. NVIDIA will get down to work. Last I heard (after our articles and reviews using this game), the official game patch is being delayed (v49 -- the version Core Design gave to Eidos, who is responsible for releasing it to the public, and the version we have been using in all our articles and reviews) and there is likely to be a newer version (later than v49). The reason? Speculatively, let me remind you of what I wrote in the Albatron GeForceFX 5900PV review:

This game is relatively new in the benchmarking scene (in fact, this site is the only place you can see the game benchmarked thus far, at least until the official patch is released to the public) and if it is eventually used regularly by hardware review media outlets it is likely that we will see IHVs try to improve the performance of their hardware in this game.

That's speaking specifically from the point-of-view of driver improvements for this game... who knows what happens behind closed doors between IHVs and Eidos/Core Design. We will stay vigilant, of course.

Developer stuff
Dany Lepage, a former 3D Architect (an identical title to that of someone like Gary Tarolli at NVIDIA, for instance) at both Matrox and NVIDIA but now with Ubisoft making games like Splinter Cell and its sequels, hasn't been posting at our Developer forums, but he's been real busy working on SC-X and Pandora while also giving a presentation at SIGGRAPH about a month ago. It's an interesting presentation about Splinter Cell, with a few pages (it's in PowerPoint format) dedicated to Shadow Volumes vs Shadow Buffers (i.e. DOOM3 vs Splinter Cell) and the lighting engine used in the game. Dany also provided me with images taken from an Atomic magazine article (issue 30, June '03) about Splinter Cell that explains many interesting details about the game and its various ports (XBOX, PS2, GameCube, PC... did you know the PS2 version was made by Ubisoft's Shanghai/China team?), how difficult it was to make the game on all these platforms, and the differences between the versions (the PS2 and GameCube really suffer graphically compared to the PC and XBOX versions)... very interesting stuff. You should try and get that issue of Atomic magazine (because I can't give you the images due to copyright issues). I'm not sure if his PowerPoint presentation is listed on SIGGRAPH's official site, but if it's not, I'll see if Beyond3D can provide it to the public... but you guys will have to let me know if you want it first. Dany will post at our forums when he has time (duh).

Futuremark, 3DMark03 and the next 3DMark. Things are moving at Futuremark and... no, wait, I can't say, I'm under NDA :p . Suffice to say, I have good hopes that the next 3DMark will meet expectations in a good way.

I haven't been talking to Tim Sweeney lately. But I intend to ask him if we can have an early glimpse of his "next gen" engine that he's currently working on. Everyone, fingers crossed.
 
Thanks, Reverend and Core Design. I hope the professionalism and honesty that you and Dave give to B3D will allow you to do similar things with other developers.
 
Rev-

First, I want to add my name to the chorus thanking you for what appears to be an incredible amount of work and dedication getting the TRAOD benchmark going. Yes, it joins the pantheon of shitty games being used as benchmarks, and yes, much more attention will (rightfully) be paid to using HL2 as a benchmark (assuming its benchmarking features can approach what you've put together with Core in terms of power and configurability), but it seems an incredibly useful data point to have--perhaps almost as useful as those shittiest of games, the 3dMark03 Game Tests, would be if we could reasonably expect any cheats would be disabled.

Does this mean such a result applies to all other DX9 games? No, because this depends more on the muscle of NVIDIA's and ATI's developer relations teams than on the actual muscle of their respective hardware. It also depends on IHV/developer politics, the ultimate choices and decisions of developers ("Are we going to be brave? What cards are selling the fastest?") and very likely a host of other considerations.

Of course ISVs will not (nor should they) design their games so that they can't reach decent speeds with recent hardware. However, I think it's fair to say that, for the near future, most PS 2.0 shaders of the sort that enable new graphical effects (as opposed to the sort that allow DX7/8 style rendering in fewer passes) will be selectable as part of the game's graphics options, and in many cases will also have a shorter, lower quality PS 2.0 or PS 1.4/1.1 replacement as a selectable option as well.

That is, the situation that most consumers see will not look much like the apples-apples set of benchmarks in Dave's TRAOD shootout with all graphical settings set to (near) highest, but rather like the benches at default settings: R3x0-based cards will get broadly similar framerates to competing NV3x-based cards, but with significantly more of the most advanced graphical options (particularly in the realm of PS 2.0 shaders) enabled. And, whether the different configurations are made automatically as part of a card-detect script or by user trial-and-error as gamers tweak for their favorite combination of effects, resolution, AA/AF and framerate, that's exactly as it should be (assuming the end-user can indeed see and override any default settings).

I would assume Nvidia's DevRel are hard at work coming up with PS 1.1 or cut-down PS 2.0 shaders to approximate the effects of some longer PS 2.0 shaders upcoming games will use--and as that should improve the gaming experience of many Nvidia card-owners, that's exactly what they should be doing. I would further assume that most ISVs will be happy to offer such shaders in their games as selectable options that are clearly marked as offering reduced quality. This is all in keeping with any ISV's goal of offering the most end-users the best possible game experience based on the limitations of their hardware.

What I don't expect to see is ISVs letting Nvidia get away with replacing full-quality PS 2.0 shaders with cut-down shaders in their drivers, without informing end-users or hardware reviewers or offering them the chance to use the full-quality version. Nor do I expect Nvidia's DevRel to be able to convince ISVs to forego the use of high-workload PS 2.0 shaders because NV3x hardware is not able to run them with decent performance, so long as there is a large enough consumer base with cards that can run such shaders to good effect. (R3x0's installed base is already large enough to meet that standard for some developers, and as R3x0 moves further into the mainstream and R4x0 and NV4x cards hit the market that will be the case more and more.)

In other words, I expect that benchmarks of most "DX9 games" (by which I primarily mean games that make substantial use of PS 2.0 shaders, although in fairness if there are compelling enough VS 2.0 effects they should be counted as well) will tell a similar story to TRAOD (as Gabe Newell has hinted is true of HL2): when all the heaviest DX9-only options (and long PS 2.0 shaders in particular) are enabled, R3x0 cards should have a very substantial lead over competing NV3x cards (on the order of 60-80%, likely); in the configurations in which most people will play, competing cards will see similar performance, but the NV3x cards will likely have a much smaller subset of the high-end effects enabled.

In other words, the way reviews are typically conducted (all effects set to highest), things will likely look very bad for NV3x.

But you seem to feel differently. Is that because you think that PS 2.0 shaders of any real complexity can actually be "optimized" to offer similar performance on NV3x and R3x0 hardware without bowdlerizing the intended effect? Or because you think Nvidia's DevRel has the political power to prevent high-profile games from utilizing lengthy PS 2.0 shaders to a significant degree (at least until NV4x cards are getting the lion's share of benchmarking attention)? Or what?

From all I've seen and judging from certain anonymous developer comments Dave has referred to, NV3x's poor PS 2.0 performance is due to the fundamental structural limitations of the design, both in terms of lack of shader execution resources and full-speed temp registers, rather than due to some finicky-ness with respect to instruction scheduling or some such problem that could be fixed with new drivers or rewritten (but functionally equivalent) shaders. If you feel otherwise I'd be interested to hear why.

The NVIDIA PR personnel that Beyond3D deals with are quite helpful, and in the words of one of them about the priorities of his colleague who is "in charge" of Beyond3D, he "... has Beyond3d at the top of his priorities." I usually talk to a particular NVIDIA PR person using very, very frank words (we call each other "butthead") and he does the same with me, but we have history. He tries to help, which is a good thing.

I'm very happy and very relieved to hear both that B3D has a strong contact within the Nvidia organization, and that Nvidia isn't trying to marginalize B3D as a result of the fact that much of what gets published here invariably (but of course fairly) puts their products in a poor light relative to the competition from ATI. It has seemed that way at times (i.e. that Nvidia's B3D strategy was to ignore and try to marginalize), although it may be that B3D is now well-known and of course respected enough among the Internet 3d hardware community at large that such a strategy is no longer tenable.

Which is of course a testament to your and Dave's knowledge and integrity. :) :cry: :oops: :D

Futuremark, 3DMark03 and the next 3DMark. Things are moving at Futuremark and... no, wait, I can't say, I'm under NDA :p . Suffice to say, I have good hopes that the next 3DMark will meet expectations in a good way.

So there will be a 3dMark04 (or 3dMark03SE) showing off new effects enabled by PS/VS 3.0? And we won't have to wait for DX10 for the next 3dMark after all?? Great news! :D

(And way to not disclose anything! ;))
 
I'd like to add another voice to the chorus thanking you for your (B3D's) proactive approach to finding new and relevant games to benchmark. Good work, Rev, and much appreciated.

L'Inq suggests the HL2 playtest/demo should be out within a month. I'm getting anxious. Work your magic, Rev. :)
 
When I received my new PC the other day, I was quite shocked (though I wasn't quite sure whether to be glad or disappointed) to find that the full version of TRAOD came bundled with the 9800 Pro.

Since I didn't have any other DX9 games, I thought it would be nice to have a look and see what Rev/B3D have been studying for the last couple of weeks.

After playing it for a couple of days, I must make a couple of comments. Firstly, yes the controls are rather annoying and the gameplay leaves something to be desired.

Secondly, some of the PS 2.0 effects are quite amazing. I did not notice the DOF effect at first, as I thought it was just a low-res background. Only when I got to about the third or fourth level did I realise that what had seemed like a 2D background formed into a crisp 3D image as I got close to far-away objects.

Also, some of the lighting effects, especially the helicopter's searchlight on the rooftop, were some of the best lighting I have ever seen, IMO. No more solid beams; these lights looked extremely real.

I look forward to the patch and being able to benchmark different modes.
 
DaveH said:
But you seem to feel differently.
Actually, I don't... but I can understand why this may be the impression I'd given... because many a time I want to say what I want but decide against it for the good of this site, which probably leaves folks wondering what I really mean ;)

Is that because you think that PS 2.0 shaders of any real complexity can actually be "optimized" to offer similar performance on NV3x and R3x0 hardware without bowdlerizing the intended effect?
Oh, most definitely, and in the true sense of the word "optimization". I'm certainly not the most proficient or knowledgeable of programmers, but there are times when you do wonder why a programmer decided to use certain code and instructions for his intended effect (the effect he tells you about when he explains his intentions, that is). Sometimes there are needless instructions; other times there should be some re-shuffling or re-scheduling of instructions that gives a performance boost... it doesn't matter if it's a significant boost or a small one, they all matter.
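(To make the "needless instructions" point concrete, here's a tiny, entirely hypothetical sketch -- plain Python standing in for shader code, nothing from TRAOD itself. The same lighting term is computed twice in the first version and hoisted once in the second; that's the kind of small, free win that adds up over millions of pixels.)

Code:
# Hypothetical per-pixel lighting math, written the "wasteful" way:
# the same dot product is computed twice.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_naive(n, l, albedo, ambient):
    diffuse = max(dot(n, l), 0.0) * albedo
    rim     = max(dot(n, l), 0.0) * 0.2      # same dot(n, l) recomputed
    return ambient + diffuse + rim

# Same effect, one fewer dot product per pixel: the common term is hoisted
# and the two multiplies are folded into one.
def shade_hoisted(n, l, albedo, ambient):
    ndotl = max(dot(n, l), 0.0)
    return ambient + ndotl * (albedo + 0.2)

print(shade_naive((0.0, 0.0, 1.0), (0.0, 0.6, 0.8), 0.5, 0.1))    # ~0.66
print(shade_hoisted((0.0, 0.0, 1.0), (0.0, 0.6, 0.8), 0.5, 0.1))  # same value, up to float rounding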

Or because you think Nvidia's DevRel has the political power to prevent high-profile games from utilizing lengthy PS 2.0 shaders to a significant degree (at least until NV4x cards are getting the lion's share of benchmarking attention)? Or what?
I think it may be a bit unfair to name NVIDIA specifically, but it is understandable given the current DX9 hardware scenario (i.e. NVIDIA losing to ATI). Competition is the name of the game, but I would much rather have a scenario where all IHVs study a wide range of games (or, understandably, popularly used game benchmarks) and try to provide feedback to game programmers on how things may be improved. It can be a bit difficult of course, given that we see architectural differences amongst hardware, and hence game programmers have to look at all sides of the situation. But to answer your question directly: yes, I think politics and budgets/money can help an IHV prevent their products from looking worse, or make them look better -- IHVs are known (at least to me) to pay developers millions of dollars for a variety of reasons/contracts. Sorry if this sounds a bit of a tease but I really shouldn't say more.

From all I've seen and judging from certain anonymous developer comments Dave has referred to, NV3x's poor PS 2.0 performance is due to the fundamental structural limitations of the design, both in terms of lack of shader execution resources and full-speed temp registers, rather than due to some finicky-ness with respect to instruction scheduling or some such problem that could be fixed with new drivers or rewritten (but functionally equivalent) shaders. If you feel otherwise I'd be interested to hear why.
Instead of answering you, and against my better judgement (which you all always wish for anyways :) ), here's what a developer wrote to me recently:

Anonymous Developer said:
btw: Great article on the shader performance of the R3xx/NV3x (Tomb Raider). We developers have known for a while that Nvidia is simply not up to speed for DX9 (don't publish that please :) ) and that people should not buy Nvidia's solution if they expect any real performance from any pure DX9 title. This is kind of a touchy thing for Nvidia because they design chips to be fast with old applications (like Doom3, which has fundamentally old technology and was developed with the GF1 in mind) and only execute the new stuff to allow developers to experiment with new features. Unfortunately for them, DX9 picked up much faster than DX8 did and they are screwed... In terms of architecture efficiency, Nvidia is in a very difficult position. Here is one of the emails I sent to a coworker recently (ANNEXE 1).

Talk to you more soon!

ANNEXE 1:
---------

They are in a tough position.

Currently, here are the characteristics of the best chips from Nvidia and ATI:

ATI ---> R350 core, 70 watts power dissipation, 0.15u tech

NVIDIA ---> nv35 core, 75 watts power dissipation, 0.13u tech

Both chips have similar performance (slight edge to ATI!), but when ATI starts using 0.13u tech they are going to leave Nvidia in the dust. Nvidia won't be able to shrink its technology for a while (0.09 is not ready).

Using basic (and very approximate) maths, and making some very unwise assumptions (see below), ATI's shrink to 0.13u tech is going to be:

0.13 micron = 0.13 volts supply (from 0.15 volts)
0.13 micron = proportional shrink from 0.15 micron

Power output = K (a constant for transistor dissipation) x frequency x SQUARE(voltage)

This means that a 0.13 micron chip is going to consume ~75% of the power of a similar 0.15u chip at the same frequency. Meaning, ATI could boost the frequency by 33% and get an overall performance increase of 33% (assuming no memory bandwidth limitations). More likely, however, they are going to increase the transistor count by ~33% and double their fillrate (a selective unit increase -- that is their bottleneck).

So overall, yes, Nvidia is in a very difficult position; their core tech is basically only at 75% of the efficiency of the ATI tech (or even a little less). They will have a tough time competing against ATI unless they improve their performance-to-transistor ratio.

I'm sure many will enjoy the above :) We, the hardware review outlets, can say what we want and provide all the benchmarks we can... but until folks hear it directly from "respected" folks like game developers (instead of hardware review outlets like B3D, for instance), they tend to always give the benefit of the doubt to the IHVs. You wouldn't need hardware review outlets if many developers posted stuff like that for the public to read... but that's only in a perfect world, right?
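(For the curious, here's a quick sanity check of the annexe's back-of-envelope arithmetic, using the developer's own -- explicitly very approximate -- figures; the constant and the voltage numbers are his assumptions, not measurements:)

Code:
# Dynamic power ~ K * frequency * voltage^2, per the annexe above.
def power(k, freq, volts):
    return k * freq * volts ** 2

p_150nm = power(k=1.0, freq=1.0, volts=0.15)   # normalized 0.15u part
p_130nm = power(k=1.0, freq=1.0, volts=0.13)   # same design shrunk to 0.13u, as quoted

print(p_130nm / p_150nm)   # ~0.75 -> the shrink uses roughly 75% of the power
print(p_150nm / p_130nm)   # ~1.33 -> or ~33% more clock within the same power budget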

You cannot believe how tough it is for me, from all points of view (and I hope you get what I mean).

NVIDIA is losing the DX9 war with ATI. I told an NVIDIA person (one I have history with; we call each other "butthead" in emails) as much. He replied that their DX9 part is "fine". I never said it wasn't "fine". That basically sums up the whole situation.

PS. As for the thanks to me, there's no need for that (even though I appreciate it) -- thank Beyond3D instead, as I do not think I will ever have a similar level of enthusiasm (for whatever I have done, am doing or ever will do) for a website other than Beyond3D. I have a "Beyond3D" tattoo on my chest, right where my heart is :)
 
Reverend said:
NVIDIA is losing the DX9 war with ATI. I told an NVIDIA person (one I have history with; we call each other "butthead" in emails) as much. He replied that their DX9 part is "fine". I never said it wasn't "fine". That basically sums up the whole situation.

Nvidia is successfully making particularly average products. ATI is making the best products they can, putting as much engineering excellence into them as they can. This is why ATI currently have better products than Nvidia -- ATI want to make the best products they can, while Nvidia just want to make GPUs that are "fine".

This lack of willingness to even admit to a problem is what will really make Nvidia suffer. They are taking their eye off the ball and focussing on marketing and cheats, refusing to admit to themselves they need to change their whole approach.
 
IHVs are known (at least to me) to pay developers millions of dollars for a variety of reasons/contracts.

Generally speaking those millions are paid to the publishers.
 
I think the problem with Nvidia is that the board is driving decisions based on short-term profitability, and not on the fundamentals.

In NVidia's mind, if the NV34 and NV31 low-end and mainstream parts are "fine" and bringing in lots of revenue, then the lack of critical acclaim at the high end isn't a failure. They are measuring success based on their current short-term market position.

ATI was in a similar position a few years ago, in what I'd call the "Apple mentality". At one point, around the time of Windows 95's release, Apple Computer was the number one vendor of computers by sales volume. It took about two years, and billions in losses, before they woke up.

Hopefully, Nvidia realizes that the graphics market moves even faster, and that they must recover quickly from their misstep. In other words, miracle of miracles, and unlike DX6, 7, and 8, DX9 seems to be taking off very quickly. This is the reverse of the era when NVidia designed forward-looking features into their chips but API uptake was very slow. DX9 has lit the fire of developer interest like no other version.

NVidia needs to cut out all the old legacy crap and design a chip specifically to execute DX9/OpenGL2.0/DX10/whatever as efficiently as possible, and if it gets beaten in Quake3 benchmarks, oh well.

They also need to get serious about AA. It's not hurting them in the pocketbook too much right now, so they aren't getting the message clearly. There are probably lots of people internally in denial, claiming exemplary AA doesn't matter.

Maybe HL2 will be the catalyst that really pushes them or scares them. It certainly looks like it will be one of those events in the game industry, like the release of Wolfenstein, Doom, or the original HL, that is a milestone moment.

I am dying for Counter-Strike 2 on HL2.

Perhaps once they ship some future PS3.0/VS3.0 chip with better AA and a programmable primitive processor (and presumably ATI does too), we'll be back to the status quo of two great cards on the market that are hard to choose between. Right now, I think the choice is blatantly obvious.
 
Anonymous Developer said:
This is kind of a touchy thing for Nvidia because they design chips to be fast with old applications (like Doom3, which has fundamentally old technology and was developed with the GF1 in mind) and only execute the new stuff to allow developers to experiment with new features. Unfortunately for them, DX9 picked up much faster than DX8 did and they are screwed...

Holy hell...where have I heard that before! ;)

http://www.beyond3d.com/forum/viewtopic.php?t=7568&postdays=0&postorder=asc&start=75
 
The ironic thing is that nVidia is one of the primary reasons DX9 picked up so much quicker than the previous DX versions. There was a strong push by Microsoft to make sure everyone was developing DX9 products much faster than they did with previous generations, and strong backing from nVidia and ATI to make sure that happened. ATI delivered DX9 cards with great performance for the entire consumer spectrum, and nVidia promised to meet or beat ATI's offerings (as they should have, considering the timeframes), along with another card claiming "DX9 for $89" (or some other low price, I don't remember anymore). Now that developers are actually doing what they were told to do by Microsoft, nVidia, and ATI since around mid-2001 (that is, switch to DX9 ASAP), nVidia is caught with their pants down.

It's like they didn't even believe their own PR.
 
Right, Ilfirin. It is kind of funny how nVidia's philosophy of DX9 everywhere (at least in theory, where the 5200 is concerned) is coming back to bite them in the ass. BTW, Reverend, do you think there is any backlash towards nVidia from developers because of this? I think that if someone buys a game and it does not perform well, they will partly ascribe blame to the game itself. OTOH, a game that advertises the use of advanced features but is visually unimpressive because, unknown to the user, those very features are limited or bypassed for performance reasons will again lead the user to hold the developers somewhat to blame.
 
nelg said:
BTW, Reverend, do you think there is any backlash towards nVidia from developers because of this?
Nah, I think most developers want their DX9 games to run well on the majority of machines. And given NVIDIA's DX9 parts, I don't think we'll see DX9 games that feature DX9 tech heavily within the next year or so. NVIDIA GFFX cards can't run such games well, simple as that.

It's only if certain DX9 games are released, NVIDIA GFFX cards run them poorly, and NVIDIA then tells (or perhaps persuades, cash in hand) those developers to release patches that cut down on such DX9 features, or use other shaders that make NVIDIA GFFX cards run better, that developers may be tempted to go "Damn NVIDIA". Speculation, of course. But we all know politics always exists in this industry.
 
Reverend said:
And given NVIDIA's DX9 parts, I don't think we'll see DX9 games that feature DX9 tech heavily within the next year or so. NVIDIA GFFX cards can't run such games well, simple as that.

A full DX9 game is actually a lot more economically viable than you might think -- for small developers that spend little to no money on development, that is. Probably even while completely ignoring the nVidia side of the market (as you'd likely have to do for a TRUE DX9 game, with nVidia drivers still missing all those essential features). I mean, when you think about it, there are a lot of people out there making shareware (and shareware sales have been on a very steep rise lately) with only a handful of people or even just one person, where they aren't spending any real money (or very little) on developing the game (except, of course, that time = money). And there are well over a million ATI DX9 cards out in consumers' hands (as per ATI's statement earlier this year).

While that's obviously a tiny number compared to the number of cards that fully support DX8 or 7, it's still plenty to justify the development for a small developer. It'd probably make a lot more than anything else they'd do, because there's tons of automatic PR that comes with having the only game out that makes full use of the latest graphics tech (and even breaking even on games these days is considered a success). If, for example, they were to include a good benchmarking mode, that'd automatically get them onto almost every hardware site, which draws thousands of hits per day from exactly their target audience.

And targeting DX9 as the bare minimum doesn't mean you lose all your potential customers with DX8 and DX7 cards either. As they upgrade, a lot of them will undoubtedly check out "that one game that sported full DX9 years before the rest".

I doubt you'd make millions, but I'm sure you'd make enough to justify the work put in.

[edit] By full DX9 use I don't mean the standard HDR feature set with DoF, Bloom, motion blur, etc. I think we've all seen enough of that :) I mean like fully arbitrary BRDF lighting w/ irradiance SHs, soft shadows, digital grading, etc.
 
I don't see why Nvidia's DX9 implementation can't handle that. BRDF won't stress their implementation. Procedural textures would be where they would fall down the most on performance.

It's hard to define "full DX9 game".
 
The lack of FP render targets is a real problem whenever you can't get everything done with one pass.

A lot of calcs need to be done in full HDR and only fitted to the dynamic range of the monitor as a last pass, with everything before that rendered to an FP render target (or a high-precision integer format). The grading/color-correction/blue-shift/etc. can be done after the tone-mapping, but if you start dropping precision to 8 bits on intermediate results, you're in for some ugly results. Most of the extra passes come out of things that have to be done separately, not just limited shader lengths.
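(A minimal sketch of that precision point, with made-up values -- just contrasting a float intermediate with an 8-bit one before the final tone-map:)

Code:
# Hypothetical HDR values spanning a wide dynamic range.
hdr = [0.01, 0.05, 0.2, 1.0, 4.0, 16.0, 64.0]

def tonemap(x):
    # Simple Reinhard-style operator: maps [0, inf) into [0, 1).
    return x / (1.0 + x)

# Path A: keep full float precision until the final tone-mapping pass.
full_precision = [tonemap(x) for x in hdr]

# Path B: pretend an intermediate pass wrote the values to an 8-bit target first.
peak = max(hdr)
quantized = [round(x / peak * 255) / 255 * peak for x in hdr]
low_precision = [tonemap(x) for x in quantized]

print(full_precision)   # dark values still distinct
print(low_precision)    # the darkest values collapse to zero or a single step -> banding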

And when I said I don't mean just the standard HDR effects, I meant not ONLY those. Those would, of course, be included as well. But that hardly makes a DX9 game.

Basically what I mean is something that runs, after being fully optimized, at about 30fps at 1024x768 with no AA or AF on a 9700Pro. And my initial post was a bit overly utopian, I admit. My point was simply that if everything was done and marketed correctly, a small-time developer could end up making enough money to make it worth doing. Even if it was released like tomorrow.

[Edit] Full isn't actually the word I was looking for - Extensive use of DX9 features is what I meant.
 
A lot of the settings seem to have changed with the 42 patch...

Rev, what's the command line you use to launch the benchmark? (It's not in the .rtf.) I try launching using the built-in paris3.pat demo, but I get a "SYS_ERROR_DI_PADREAD" error. Also, when I try to create my own demo using the -padrecord switch, I get a "Generic Game Error ASSERTION: 'pData && length>0'" error.
 
Ratchet said:
A lot of the settings seem to have changed with the 42 patch...

Rev, what's the command line you use to launch the benchmark? (It's not in the .rtf.) I try launching using the built-in paris3.pat demo, but I get a "SYS_ERROR_DI_PADREAD" error. Also, when I try to create my own demo using the -padrecord switch, I get a "Generic Game Error ASSERTION: 'pData && length>0'" error.
Forget it... using v42 will not give you accurate/reliable benchmark results, and you'll get errors as well. You need v49, which no one other than B3D has atm.
 
Reverend said:
Ratchet said:
A lot of the settings seem to have changed with the 42 patch...

Rev, what's the command line you use to launch the benchmark? (It's not in the .rtf.) I try launching using the built-in paris3.pat demo, but I get a "SYS_ERROR_DI_PADREAD" error. Also, when I try to create my own demo using the -padrecord switch, I get a "Generic Game Error ASSERTION: 'pData && length>0'" error.
Forget it... using v42 will not give you accurate/reliable benchmark results, and you'll get errors as well. You need v49, which no one other than B3D has atm.
well crap, then.
 