SimHQ - Bubba does the ATI 9700 Pro.......

Status: Not open for further replies.
crystalcube said:
And no matter how high the precision is, someone is going to run out of it some day, and they will have to deal with it.

As I and others have stated - repeatedly - it's not just about precision. Read up on the benefits of using a W buffer over a Z buffer. Then read my posts and you'll have your answer.

Also, you can run out of W buffer precision. I already do buffer partitioning in my code for this, with respect to both Z and W buffers. Again, I have already described how this works.
 
What about leaving the PC space and entering the console world?

It's the land of the HAPPY developers: no hardware changes, no driver bugs, only one platform...

Sounds like a dream :)

(I sure would like to work on consoles if I could)

edit: stupid typo.
 
In relation to other games that use a W-buffer: doesn't Operation Flashpoint (huge outdoor areas, the biggest in a ground-based game?) use one? Every FAQ says to disable it, and that game suffers a lot from flashing textures etc., with various workarounds always suggested, e.g.:

disable w-buffer
disable shadows
run in software TnL mode

Not sure if these are the game's faults or ATI-specific issues, and therefore relevant to DS's issues.
 
Derek Smart [3000AD] said:
As I and others have stated - repeatedly - it's not just about precision. Read up on the benefits of using a W buffer over a Z buffer. Then read my posts and you'll have your answer.

Also, you can run out of W buffer precision. I already do buffer partitioning in my code for this, with respect to both Z and W buffers. Again, I have already described how this works.

Yes, there are many benefits to using a W buffer, especially in your case... but now guess what: the W buffer is not there anymore on the ATI 9xxx series cards, and according to you, the missing W buffer has been handled gracefully in your game...

So what was your complaint again in the first place?

There was a feature missing, which you handled correctly, and now even precision is no longer a problem...

So what exactly is the problem?
 
Derek Smart [3000AD] said:
Since when was this slander? Or a campaign for that matter? Surely you jest. So, all the gamers, devs, reviewers etc who are bitching about the SORRY STATE OF ATI DRIVERS are on a slander campaign too, then?

What other developers are you talking about here? Who are they? As far as reviewers go, I am scratching my head wondering just what the heck you are talking about.

http://www.simhq.com/simhq3/hardware/reviews/radeon9700pro

Conclusion

"ATI has done it. They have not only surpassed NVIDIA fastest product, the GeForce4 Ti4600, but also stomped it into the ground. The video card market needs competition and ATI has proven that they are in the game not just to compete, but to win outright.

Big, Big Kudos to the ATI development, engineering and driver teams. You guys have turned the video card market upside down with Radeon 9700 PRO and proven that the driver Myth no longer exists."

"Today belongs to ATI. They are on top of the graphics world and for the moment, there is no question or doubt who has the fastest product with the best image quality. It’s a Radeon."

A comment from a Microsoft rep from that same review:

"ATI has made tremendous improvements in the driver department. They have become a very long way in driver development in a short time. Currently, I would put ATI’s Catalyst drivers on at least a level playing field with NVIDIA’s Detonators based on Windows stability."

http://www.hardocp.com/article.html?art=MzQw

Conclusion

“The ATi Radeon 9700 Pro is the best video card we have ever laid our hands on. ATi is going to be first to market with some very impressive technology and the gamers are sure to take notice. I think the benchmarks at 4XAA/16XAF speak for themselves. The image quality is nothing less than beautiful in both 2D and 3D. Then considering we already know it is going to be a powerhouse card that has the ability to run DOOM]|[ next year and you are almost guaranteed to have a nice long term upgrade for your hard-earned cash. I just don't see how the ATi Radeon 9700 Pro is going to leave you any way but satisfied.”

http://www.anandtech.com/video/showdoc.html?i=1683&p=1

Conclusion

The Radeon 9700 Pro does live up to every last one of our expectations. The question truly ends up being: does it meet your expectations?
There are three things that the Radeon 9700 Pro can offer at this point:
1) The highest performance in current and future games.
2) The ability to play at 1600x1200 in just about any game currently available or soon to be made available, and
3) The ability to play virtually any game at 1024x768 with 4X AA and 16X anisotropic filtering enabled at smooth frame rates.

http://www17.tomshardware.com/graphic/02q3/020819/index.html

Conclusion

The King is dead! Long live the King! How's this for a plot twist? The challenger Radeon - a real "Performeron" - has actually done it and usurped the throne from the former king! ATi has earned itself not only the performance crown in gaming environments, but also that of the technology leader!

The Radeon 9700 PRO proved to be superior in all possible categories, be it the framerate while the game was in progress, the triangle throughput, FSAA, anisotropic filtering, or pixel and vertex shader performance. NVIDIA's flagship trails the new champion in every discipline.

http://firingsquad.gamers.com/hardware/r300/default.asp

Conclusion

As you've seen in the benchmarks, RADEON 9700 PRO brings new levels of performance to the desktop performance segment. Even with RADEON 9700's early drivers, GeForce4 Ti 4600 just isn't able to keep up! We were astounded at just how significant some of the margins are, especially in 3DMark 2001SE. And the best part is, as the RADEON 9700 PRO's drivers mature, performance will only go up from here.

http://www.extremetech.com/article2/0,3973,475966,00.asp

Conclusion
This time around, the brass ring clearly belongs to ATI. And while nVidia is never one to be counted out, the high-profile GPU maker finds itself in the uncomfortable position of looking squarely at its primary competitor's tail lights. And its new can of nitrous oxide that will be NV30 is still some months away. Even if NV30 ships before year's end, nVidia will have missed the all-important Q4 holiday buying season with its next flagship GPU. But you can't summarily dismiss GeForce 4 Ti 4600 now that Radeon 9700 has arrived. While the Ti 4600 can't go the full fifteen rounds, our data shows that it keeps frame rates playable in the majority of test cases, and only buckles under the most severe test conditions. Respectable though it may be, there's only one winner in a flat-out performance contest, and Radeon 9700 scores well ahead of GeForce 4 Ti 4600, at times with more than a 2X performance lead.
“Part of what we're seeing is the fruits of ATI's labors under the leadership of CEO David Orton, veteran of SGI and ArtX. And the ArtX-now-ATI west coast design team that built the Flipper chip for Nintendo's GameCube has shown what it's capable of in the Radeon 9700, not only at a hardware level, but also in delivering solid, stable drivers.”

http://www.hothardware.com/hh_files/S&V/radeon9700pro.shtml

Conclusion

“First and foremost, I don't think anyone can refute that ATi is firmly in the "driver's seat" with the release of the Radeon 9700 Pro. The benchmarks, testing and showcase that you have seen here today clearly show that ATi's new flagship VPU is faster, more powerful and has better features and 3D quality than anything on the market, period. This is a bold statement to make for sure, but after spending some quality time with this new graphics card, we can honestly say that it is the most impressive product we have seen in a long time, from any Graphics OEM. NVIDIA certainly has some catching up to do. The NV30 needs to get out the door in a hurry because ATi is going to eat up market share, with the various incarnations of this product, in the weeks and months ahead.”
“At this point in time, it's safe to say that ATi has captured the performance and quality crown from their long time rival and market leader NVIDIA. The NV30 is rumored to be around the corner, sometime in Q4 this year. We'll have to wait and see what it's made of, to determine which competitor will drive the market for the next 6 months. For now, all the limelight is on ATi. They've earned it by delivering a product that is truly innovative and leading edge.”

http://tech-report.com/reviews/2002q3/radeon-9700pro/index.x?pg=1
Render farm on a stick

"Conclusions

I believe I've said enough about the Radeon 9700 Pro by now. You have seen the results and screenshots for yourself, and you know that it's got more pixel-pushing power, faster pixel shaders, better vertex shaders, more poly throughput, and better real-world performance than any other graphics card you can buy. The image quality is second to none, especially with antialiasing enabled. I've racked my brains and raked this thing over the coals trying to find a significant weakness, and I'll tell you what: I haven't found one."

"But as a 3D graphics chip, the Radeon 9700 is darn near perfect. ATI has taken the time with this chip to increase precision and expand registers and tweak functional units to the point where everything works as advertised."

http://www.sudhian.com/docs.cfm/id/224.sud

Conclusion:

"Looking at the benchmark results, we have only one conclusion to make. The 9700 Pro has dealt a blow to the previous king, the Ti4600 – leaving it wondering what just happened. With performance up to, and over 50% greater than what the Ti4600 offers in some tests, the 9700 Pro is the clear winner. There is no single test that we run in which the Ti4600 can claim a lead overall. In some games, such as Comanche 4 and Jedi Knight II, the drivers allow the Ti4600 only the slightest of leads. However, one must only turn on 4X FSAA and 16X AF in order to see clearly who is on top. The 9700 Pro has delivered not only fast performance, but fast performance in extremely high quality modes."

Performance Without Limits: ATI’s Radeon 9700 Pro

http://www.gamepc.com/labs/view_content.asp?id=r9700pro&page=1

Conclusion
"ATI has certainly delivered on all accounts with the Radeon 9700 Pro. Not only does it beat the competition in every benchmark thrown at it, but in most tests, it simply rips other cards apart. Performance is phenomenal across the board, but the R9700 Pro really shows its strength when used in high resolution gaming, especially with full-scene anti-aliasing and anisotropic filtering. Using this card and a fairly fast processor, gaming at 1600 x 1200 is a reality with every current game on the market. Or, if your monitor doesn’t like high resolutions, crank it down to 1024 x 768 or 1280 x 1024 and turn on the anti-aliasing and get extremely good performance as well. "

"When it all comes down to it, the Radeon 9700 Pro is a great graphics card. It's fast, and shows very little slowdown, even when at really high resolutions and with lots of rendering effects enabled. If they can keep a good supply of cards on the market and gradually drop prices, no doubt they'll have an extremely successful card. Congrats, ATI. I didn't think you could do it, but you've proved me wrong."

I think that this is more than enough evidence to clearly say that Derek Smart's conclusions about reviewers are false and unfounded. Again, more bull from Derek. Surprised? I am not.

EDIT: Last but not least: http://www.beyond3d.com/reviews/ati/radeon9700pro/
 
MDolenc said:
If something has had support for 2 generations of hardware (Radeon 7x, Radeon 8x) why should we expect that Radeon 9x would lack this feature?

Because we expect to see new features that are not in previous generations.

In subsequent iterations of software/hardware releases, features are both added and removed. It is quite normal, I suppose.
 
Derek Smart [3000AD] said:
MS docs are not saying ANYTHING different about checking for a W buffer before using it than they say about EVERY other feature in the DX API. Get it?

Yes, I understand that many of the features are optional, and you should check for them, but do they specifically list that same caveat in the description of those features in the documentation? Why are they specifically singling out the W-buffer here and reminding you that this is what you should do?

MDolenc said:
But what can you rely on in DX (or in OpenGL)? If something has had support for 2 generations of hardware (Radeon 7x, Radeon 8x) why should we expect that Radeon 9x would lack this feature?

There is certain core functionality that you should expect to be there, and with the ‘compliance’ nonsense MS have introduced a system for minimum functionality that you should expect to see in hardware (i.e. for DX8-compliant hardware VS/PS 1.0 will be there as a minimum, for DX8.1 VS 1.1 and PS 1.2 as a minimum, etc.) – but there is also optional support of a feature, i.e. one that you cannot rely on being there. (Plus, as a general rule of thumb, up until DX8, DX followed core OpenGL functionality on what needs to be supported, which is probably why the W buffer remains optional.)

I don’t see that having support for an optional feature in older hardware sets any precedent for its continued support in future hardware – even from the same IHV.
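
To make that distinction concrete, here is a minimal sketch (mine, not any poster's actual code) of how a DX8-era engine can separate the two cases: the shader-version minimums implied by the hardware's DX generation, versus an optional caps bit like the W buffer that has to be queried per device every time.

Code:
// Minimal sketch, assuming a created D3D8 device. Caps names and macros
// are the real d3d8.h ones; the function itself is hypothetical.
#include <d3d8.h>

bool CheckDeviceFeatures(IDirect3DDevice8* pDevice)
{
    D3DCAPS8 caps;
    if (FAILED(pDevice->GetDeviceCaps(&caps)))
        return false;

    // DX8-compliant hardware guarantees VS/PS 1.0 as a minimum...
    bool dx8Class = caps.VertexShaderVersion >= D3DVS_VERSION(1, 0) &&
                    caps.PixelShaderVersion  >= D3DPS_VERSION(1, 0);

    // ...but a W buffer is an optional raster cap on ANY generation,
    // so it must be tested explicitly and never assumed.
    bool hasWBuffer = (caps.RasterCaps & D3DPRASTERCAPS_WBUFFER) != 0;
    (void)hasWBuffer; // gates only the optional W-buffer code path

    return dx8Class;
}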
 
Actually, WW2OL is a ground-based game with outdoor areas that dwarf Operation Flashpoint's.

It uses (or did use, optionally, at one time) a W-buffer. WW2OL is exhibiting a lot of Z errors on the 9700 with the current ATI drivers, so maybe it is running into the same sort of problem. The developers are currently rewriting it to the DX8.1 API, so I hope that takes care of the problem.

A W-buffer is useful when the scene has a great deal of dynamic range in the Z dimension (small and large objects, near and far, all mixed together). Just because only a small portion of games use it does not mean that it is not, in general, a useful feature.
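
For a rough sense of the numbers, here is a back-of-the-envelope calculation (my own figures, purely illustrative) comparing the smallest resolvable depth step of a 24-bit fixed-point Z buffer against a linear W buffer over a flight-sim-like depth range:

Code:
// Illustrative only: the near/far planes and 2^24 depth steps are assumptions.
#include <stdio.h>

int main(void)
{
    const double n = 1.0, f = 100000.0;  // cockpit-to-horizon depth range
    const double steps = 16777216.0;     // 2^24 distinct depth values

    for (double z = 10.0; z <= f; z *= 100.0)
    {
        // fixed-point Z stores f/(f-n) * (1 - n/z); one LSB at view
        // depth z therefore spans z*z*(f-n)/(f*n*steps) world units
        double zStep = z * z * (f - n) / (f * n * steps);
        // a linear W buffer resolves the same distance everywhere
        double wStep = f / steps;
        printf("view z = %8.0f   Z step = %10.4f   W step = %.4f\n",
               z, zStep, wStep);
    }
    return 0;
}

The Z step balloons near the far plane (hundreds of units at the horizon in this example) while the W step stays constant, which is exactly the mixed small-and-large, near-and-far case described above.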

As hardware limitations drop away with more advanced video cards, it should be easier for developers to produce such advanced games, not more difficult. I think that's why Mr. Smart is complaining.

Z-partitioning is a very messy solution compared to not having to do it. Sometimes objects don't fall into neat realms that can be partitioned. As games attempt to model more and more of the big, complex world, such partitioning becomes more difficult. It will become increasingly difficult and messy as the hardware takes over higher and higher levels of function. Therefore, ideally the hardware gives the option of a high-precision Z-buffer, a floating-point Z buffer, or a W buffer.

Does DX9 support a floating-point Z buffer? Will the 9700 support it under DX9?

(I can't believe that otherwise rational people here can't get over Mr. Smart's attitude issues. If you saw a guy walking down the street on crutches, would you kick his crutches away and insist he could walk on his own if he really tried to? :) )

Derek Smart at Beyond3d FAQ
 
MDolenc said:
Even if that trick with ps_2_0 shaders mentioned earlier would work, it would disable all early pixel rejection (since the hardware does not know what the final depth will be until execution of the shader is complete) and would cost a great deal of performance.

the way it was originally described*, i see no reason why it would not work. as a matter of fact, the work-around technique should produce exactly the same result as a hw w-interpolating circuit of said precision (say, 24-bit float). of course, your concern about the early-out depth checks is 100% valid.

derek said:
Below is my email to the head of dev rel, in its entirety. His reply basically said that the W buffer is not (!) in the HW, they had no intentions of supporting it - and that one of the devs will come up with a sample app to do it via pixel/vertex shaders (I already tried it - and as I suspected, it does not bloody work!). That's when I told him that if they say it works, they can bloody well write the damn sample app, like nVidia would if I asked them to prove something to me. He agreed. And I'm in a holding pattern waiting for it.

derek,
what shader code did you try, and how did it not work as expected? since the workaround seems 100% viable to me (the early-out depth check issue aside), we could try figuring out what the problem was. of course, only if you feel like doing so.

derek said:
Of course, note that this shader solution won't work for legacy apps (such as my previous games, which rely on a W buffer)

a valid concern, nothing can be done here :(


* anonymous ati engineer's w-buffer workaround (no code, just an outline):
Code:
The basic principle is to set up the W computation in the vertex shader, then pass it to the pixel shader using a texture address component, which is a 32-bit float. In the pixel shader, the shader will receive a 32-bit interpolated float, which will be the per-pixel W value. Writing that out, it will get converted to a 24-bit float, which can be Z-buffered, if you make sure that all W's written out are positive. Or, you could convert to integer and write out that value.
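
For concreteness, here is one way that outline might translate into DX9-style assembly shaders hosted as C++ string constants. This is my own sketch, not ATI's promised sample app, and the register layout (c0-c3 holding the combined transform, c4.x holding 1/far_plane) is an assumption:

Code:
// Assemble with D3DXAssembleShader, then create via CreateVertexShader /
// CreatePixelShader. Register assignments are illustrative.

// Vertex shader: transform as usual, and also copy clip-space W (i.e.
// view-space depth) into a texture coordinate, pre-scaled into [0..1].
const char* g_vsEmulateW =
    "vs_2_0\n"
    "dcl_position v0\n"
    "m4x4 r0, v0, c0\n"        // c0-c3 = world*view*projection
    "mov oPos, r0\n"
    "mul oT0.x, r0.w, c4.x\n"; // c4.x = 1/far_plane, keeps W positive in [0..1]

// Pixel shader: the texcoord arrives as a 32-bit interpolated float;
// writing it to oDepth replaces interpolated Z with per-pixel W, which
// gets converted to the depth buffer's format on output.
const char* g_psEmulateW =
    "ps_2_0\n"
    "dcl t0.x\n"
    "mov r0, c0\n"             // stand-in for the real color computation
    "mov oC0, r0\n"
    "mov r1.x, t0.x\n"
    "mov oDepth, r1.x\n";

As MDolenc noted above, anything that writes oDepth defeats early depth rejection, so even where this approach works it won't be free.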
 
Derek Smart [3000AD] said:

Since when was this slander? Or a campaign for that matter? Surely you jest. So, all the gamers, devs, reviewers etc who are bitching about the SORRY STATE OF ATI DRIVERS are on a slander campaign too, then?

Come on, Derek, who are all the developers and reviewers? There are very few gamers upset with the card. You don't have a leg to stand on here. You are full of it. I will even go out on a limb here and call you a straight-out liar. Furthermore, a slanderous statement is one that is untrue and cannot be backed up. On at least one count yours is, with regards to reviewers. Who are the developers, Derek? Other than yourself, of course. (Debatable whether you actually fit in this group, IMHO.) I really would like to know how much you are charging for that patch, BTW.
 
DaveBaumann said:
MDolenc said:
But what can you rely on in DX (or in OpenGL)? If something has had support for 2 generations of hardware (Radeon 7x, Radeon 8x) why should we expect that Radeon 9x would lack this feature?

There is certain core functionality that you should expect to be there, and with the ‘compliance’ nonsense MS have introduced a system for minimum functionality that you should expect to see in hardware (i.e. for DX8-compliant hardware VS/PS 1.0 will be there as a minimum, for DX8.1 VS 1.1 and PS 1.2 as a minimum, etc.) – but there is also optional support of a feature, i.e. one that you cannot rely on being there. (Plus, as a general rule of thumb, up until DX8, DX followed core OpenGL functionality on what needs to be supported, which is probably why the W buffer remains optional.)

I don’t see that having support for an optional feature in older hardware sets any precedent for its continued support in future hardware – even from the same IHV.

Microsoft did not invent this "compliance nonsense"; IHVs and news sites did! Microsoft really hates to be asked "what features make a card DX9 compliant", you know that? Pixel and vertex shaders 1.x were just the most important features in DX8. The same goes for pixel and vertex shaders 2.0+ in the upcoming DX9. So why don't we just cut out all the wrapping modes, since we can do them in shaders? Why don't we cut lighting if we can do it in shaders? There will be other features in DX9 that won't be related to shaders, so why bother to support them if they "are" optional? If you think this way, then NV30 (or any other DX9 chip) could flat out cut everything but shaders... :rolleyes:
 
Has a single productive piece of information come out of any of these multipage Derek Smart debates on this forum? Nope...

Now let's ask why. Either it's (1) or (2):

1) We are all anonymous internet trolls with the intelligence level of a Chia pet and no knowledge of graphics or game engines whatsoever. We should all just shut up and stop posting, so that the only person on this board would be Derek Smart. Then every single post would be the whole and perfect truth, and there would be no fanboy trolls disagreeing with him.

2) Derek Smart is not capable of having productive discussion. Instead, he only posts senseless flames that incite more of the same.

I too am sick and tired of all the garbage posted by Mr. Smart.
 
There will be other features in DX9 that won't be related to shaders, so why bother to support them if they "are" optional? If you think this way, then NV30 (or any other DX9 chip) could flat out cut everything but shaders...

That's happening anyway. Shaders will become more advanced and faster, to the point that many of the 3D functions will run through shaders. This occurred with EMBM and Dot3 from the first shader spec. Does a chip like P10 have much that isn't done via shaders?
 
Derek Smart [3000AD] said:
What? Do we now have to trace through every line of our code each time a new card comes out, in order to find out which CAPS report is failing?
If your code isn't already checking CAPS and falling back as needed, then your code is broken.

Every feature in Direct3D is optional? Then please explain the required specs in PC2002. Please explain the required tests in WHQL. I'll give you a hint: not all features are optional. Maybe you should do some more reading about the D3D specs so your code will work properly.
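
The check-and-fall-back being described might look like this (a minimal sketch; the function name and the fallback policy are mine):

Code:
#include <d3d8.h>

void SelectDepthMode(IDirect3DDevice8* pDevice)
{
    D3DCAPS8 caps;
    if (FAILED(pDevice->GetDeviceCaps(&caps)))
        return;

    if (caps.RasterCaps & D3DPRASTERCAPS_WBUFFER)
    {
        // Optional feature reported by this device: safe to use.
        pDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_USEW);
    }
    else
    {
        // Not reported: fall back to plain Z buffering (plus whatever
        // partitioning or projection tricks the engine uses for precision).
        pDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
    }
}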
 
antlers4 said:
If you saw [insert]derek smart[/insert] walking down the street on crutches, would you kick his crutches away and insist he could walk on his own if he really tried to? :)

[action]KICK[/action] :D
 
antlers4 said:
Z-partitioning is a very messy solution compared to not having to do it. Sometimes objects don't fall into neat realms that can be partitioned. As games attempt to model more and more of the big, complex world, such partitioning becomes more difficult. It will become increasingly difficult and messy as the hardware takes over higher and higher levels of function. Therefore, ideally the hardware gives the option of a high-precision Z-buffer, a floating-point Z buffer, or a W buffer.

the above needs correction. z-partitioning is not messy at all - it's very clean, as it generally does not care about objects falling completely into a single realm/band - the near and far clipping planes within the frustum clipper take care of splitting objects at the depth band boundaries.

of course, it does have a speed disadvantage, as using Nx z-partitioning takes (Nx z-buffer clears + ~Nx triangle setup work + Nx frustum clippings) compared to a regular, 1-depth-pass scene rendition. so it's not a free solution <insert kristoff's signature here>.
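
for illustration, here's roughly what an N-band depth partitioning loop looks like in DX8-style code (my own sketch - the scene-submission helper and the geometric band split are made up):

Code:
#include <math.h>
#include <d3d8.h>
#include <d3dx8math.h>

void DrawSceneGeometry(IDirect3DDevice8* dev); // hypothetical scene submission

void RenderWithDepthBands(IDirect3DDevice8* dev, float nearZ, float farZ, int bands)
{
    float ratio = farZ / nearZ;
    for (int i = bands - 1; i >= 0; --i) // far-to-near: near bands overdraw far ones
    {
        // geometric split keeps each band's far/near ratio (and thus its
        // z precision) roughly constant; a linear split would also work
        float bandNear = nearZ * powf(ratio, (float)i / bands);
        float bandFar  = nearZ * powf(ratio, (float)(i + 1) / bands);

        D3DXMATRIX proj;
        D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 4, 4.0f / 3.0f, bandNear, bandFar);
        dev->SetTransform(D3DTS_PROJECTION, &proj);

        // one of the Nx z-buffer clears (color gets cleared once, elsewhere)
        dev->Clear(0, NULL, D3DCLEAR_ZBUFFER, 0, 1.0f, 0);

        // submit everything; the near/far planes split objects at the band
        // boundaries, at the cost of the extra setup and clipping work
        DrawSceneGeometry(dev);
    }
}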
 
DaveBaumann said:
There will be other features in DX9 that won't be related to shaders, so why bother to support them if they "are" optional? If you think this way, then NV30 (or any other DX9 chip) could flat out cut everything but shaders...

That's happening anyway. Shaders will become more advanced and faster, to the point that many of the 3D functions will run through shaders. This occurred with EMBM and Dot3 from the first shader spec. Does a chip like P10 have much that isn't done via shaders?

Pixel shaders required dependent reads and Dot3, so if a card supports pixel shaders it also supports EMBM and Dot3, even if it does not run in "pixel shader mode". But did this break anything? Did anyone say "yes, we can't run EMBM with SetTextureStageState, use pixel shaders instead" or "no, we can't run Dot3 blending in SetTextureStageState, use pixel shaders instead"? Which features are not exposed in P10 through caps but available via shaders?
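
For reference, the two legacy fixed-function setups in question might look like this (a minimal sketch; texture binding and the rest of the stage setup are omitted, and the two functions are alternatives, not used together):

Code:
#include <d3d8.h>

// EMBM: stage 0 holds the bump map, whose output perturbs stage 1's lookup.
void SetupEmbm(IDirect3DDevice8* dev)
{
    dev->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_BUMPENVMAP);
    dev->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_MODULATE);
}

// Dot3 bump mapping: per-pixel N.L between a normal map and the diffuse color.
void SetupDot3(IDirect3DDevice8* dev)
{
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_DOTPRODUCT3);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
}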
 
On a related note, I saw it mentioned somewhere that NV30 supports unlimited dependent texture reads in pixel shading...

Does anyone know whether it's the same on P10?
 
MDolenc said:
Pixel shaders required dependent reads and Dot3, so if a card supports pixel shaders it also supports EMBM and Dot3, even if it does not run in "pixel shader mode". But did this break anything? Did anyone say "yes, we can't run EMBM with SetTextureStageState, use pixel shaders instead" or "no, we can't run Dot3 blending in SetTextureStageState, use pixel shaders instead"? Which features are not exposed in P10 through caps but available via shaders?

I was just making a comment.

However, it still doesn't change the fact that there is no precedent for having to support an optional feature of one API (one that isn't even in the other API that you cater for) just because prior hardware does.
 