ATi 2900 XT 1GB GDDR4 for pre-order...

Your reply is in error, because it is not based on the topic under discussion...

I'm trying very hard not to say what the actual error is :|

Getting back on topic, I'd like to see a comparison between the 512MB and 1GB variants done in NWN2, with everything set as high as possible. As I think I've already mentioned, this game tends to spill over 512MB in a number of situations.
 
driver bottleneck? :oops:
[attached benchmark chart: dx10-12x10.png]
 
Well, you could almost smell the conspiracy here, eh? :LOL:

Just ask NV why it took them half a year to figure out the magic of the driver model in Vista. ;)
 
I'm banking on the fact that you're joking. That being said, the R600's issues are more extensive than those pertaining to drivers, even though the drivers do suck ATM, IMHO. But even when the drivers are up to par (a year from now, or whenever :) ), it'll still be quite mediocre, IMHO.
 
Banking... is this something to do with that money of yours? :cool:

On a more serious note - I'm beginning to think that most of the R600's many issues come down to the command processor acting weirdly in conjunction with the uber-complex arbitration system in the chip. Duh!

It just can't be rational for a skinny GPU like G84 to come even close to such a monster, and yet you throw almost all the blame on the fill-rate side, texturing and so on.
 
2) Subsequent nVidia GPUs post-nV3x (nV4x and later) were not related to nV3x in terms of architecture, but instead were radically different. So this would indicate to me that nV3x was "too complex" for production regardless of process.
The general architecture is surprisingly similar for being "radically different".

Continuing the OT, the primary reason the NV30 did so little with more hardware is that it wasn't a quad-based rendering architecture, in contrast to its rival at the time -- R300. Because of that, there was a lot of redundant logic to spend transistors on, and that's not even counting the memory bus that was half as wide.
How is NV30 not quad-based?
 
Banking... is this something to do with that money of yours? :cool:

On a more serious note - I'm beginning to think that most of the R600's many issues come down to the command processor acting weirdly in conjunction with the uber-complex arbitration system in the chip. Duh!

It just can't be rational for a skinny GPU like G84 to come even close to such a monster, and yet you throw almost all the blame on the fill-rate side, texturing and so on.

No I don't. Those are the obvious things that'll keep it in the realm of mediocrity. There are a lot of other things that make little sense: the hit with AA sometimes being bigger than on the last generation (and no, I don't think the lack of ROP resolve fully explains this), what you mentioned above, and a few other things. This can be attributed to driver greenness (and I hope that's the case, because otherwise I'm stuck with about $800 of sub-mediocre HW :D). What I was saying is that even when all of those are solved, it still won't blow the doors off for anyone, IMHO.
 
A quick thought flashing through my mind: R600 - too complex to be a plain graphics processor...

In contrast, G80 has so few issues left to solve, just waiting for a proper die shrink: more robust caching and buffering (GS, anyone?), some depth/occlusion tweaks, and an SP clock domain ready to skyrocket almost at will.
 
The general architecture is surprisingly similar for being "radically different".


How is NV30 not quad-based?

You realize that if you tease Walt about NV30 he'll whip up a post so long and so deep you'll pray you had simply smiled and waved, right? :D
 
A quick thought flashing through my mind: R600 - too complex to be a plain graphics processor...

In contrast, G80 has so few issues left to solve, just waiting for a proper die shrink: more robust caching and buffering (GS, anyone?), some depth/occlusion tweaks, and an SP clock domain ready to skyrocket almost at will.

First of all, let's not talk about hardware complexity, as almost no one who posts here knows enough about either of these architectures and the associated hardware and software engineering difficulties to do so. I assure you that if it were oh so simple to design a simple yet powerful architecture, the hundreds of ATI engineers employed on the R600 project would have, at some point, grasped what you just posted in the 2nd paragraph of my quote.

Moving on, I like "R600 - too complex to be a good graphics processor" better. What good is complexity if it leads to inelegant solutions?
 
Well, as far as I understand it, everybody comes here to comment on and discuss the technology using the information commonly available to the public. Otherwise, should we just repress every deeper assumption or analysis because of uncertainty?

Anyway, regarding my thought on R600: I used "plain" as an abstract way of depicting the rather trivial workloads current GPUs are used for, compared to where the architectural trends are heading.
Don't get me wrong or take offense - my current graphics card is an HD 2900, and I wouldn't have bought an R600-based product if I weren't confident enough in ATi's commitment to do their best to succeed.
 
You realize that if you tease Walt about NV30 he'll whip up a post so long and so deep you'll pray you had simply smiled and waved, right? :D

Heh-Heh...;) I just don't have time for many of those these days--which isn't so bad, I guess...;) Normally, I would have taken the bait...but, well, not this time. I think we've pretty well beaten the nV3x horse into the dirt.

I am puzzled, though, by the disparity of commentary I read on R600.
 
I am not understanding some posts. If it's true that there is no visual difference between DX9 and DX10 in some of these games (as posted by others), yet they still sustain a big hit in performance, why on earth are we concerned about the R600? There seems to be a larger picture here that is being overlooked:
-How are these games coded and why?
-Why is there very little visual difference between DX9 and DX10 despite the performance hit?
-Why is it that some of these games give acceptable frame rates only on one hardware brand?
-Are these games in fact DX10, using pixel shader 4.0? Or are they using only a portion of it...somehow?
-Why are these games not coded specifically for DX9 or DX10 and placed on a DVD install that asks you whether you want to install DX9 or DX10?

Look at Lost Planet, for example; I still don't see any difference between DX9 and DX10. So I have to ask a simple question - are we seeing:
Resource virtualization
Geometry Shading
Unification of shaders, meaning no difference between pixel & vertex shader
in any of these so-called DX10 games? For example, if a game is using both pixel and vertex shader calls in DX10, is this really a DX10 game? There are more questions than answers here, and I really question how a DX10 game is defined.
 
WaltC said:
I am puzzled, though, by the disparity of commentary I read on R600.

The problem is that, as consumers, no amount of discussion on our part can fix the negative vibe given to every R600 owner by AMD's bad publicity, bad marketing, and bad product control.

Heck, the first run of product boxes wasn't even labelled properly...


Do I need to mention UVD?

Do I need to mention the horrible performance, the lack of CrossFire support in numerous applications, the lack of "hardware AA"...?

It seems, all in all, this card is just that...lacking. To me this speaks volumes about AMD's execution and lack of control. In fact, I used to buy only AMD/ATI products. This recent fiasco has put me in the position where I won't ever buy another, and I won't be spending the company budget on AMD products either...

I own every ATI product released into the consumer PC space. I used to be proud about it. Now it just embarrasses me.


The discussion continues because the shock still hasn't worn off. Die-hard fans are still hoping for a miracle fix, and until it comes, or a completely new product is released, AMD will have this hanging over their head. If they fail to deliver AGAIN...well...I don't want to discuss that.


When I contacted AMD about UVD not working...at first their support techs assured me UVD WAS working. Then they changed their minds and stated that it didn't work...

When I asked them about getting this functionality, I got "Sorry, but we do not have control over our board partners. R600 never had UVD, nor was it meant to, and we told our board partners that, and they mislabelled their boxes against our wishes. As such, we hold no responsibility, and there is nothing we can do to help you with this issue"

Then Mr Farhad Sadough of AMD Professional Services asked me for proof that these boxes said what they did.


Proof? He already knew about the problem! He had a prepared response!

That contact from AMD soured my stomach. You know, I'm glad they took the time to call personally, rather than email, but the complete lack of professionalism, the complete denial of any problem, the complete lack of customer satisfaction, really, really, is gonna hurt AMD.

In fact, the disparaging remarks you refer to...they are just part of it.

They didn't even acknowledge there was an issue. They took the time to give a statement about the "Lost Planet" demo in release reviews, but failed to mention that reviews stating R600 had UVD were false, and failed to call "tests" that measured UVD performance FAKED...


They've just swept it under the rug.

That attitude will see them swept under the rug if it doesn't change, very soon...you mark my words.
 
If it's true that there is no visual difference between DX9 and DX10 in some of these games (as posted by others), yet they still sustain a big hit in performance, ...
It mostly depends on where the bottleneck is. If your game is limited by anything related to vertex shading/pixel shading/ROP/texturing, basically pretty much any kind of hardware function that's already there for DX9, you'll see very little performance difference between DX9 and DX10 on the same GPU: it's using the same hardware for both.

DX10 defines a better API than DX9 when it comes to issuing a large number of draw commands. This is one place where it can benefit performance. If your game needs to instance tons of similar 3D models in slightly different configurations, DX10 should remove the bottleneck between CPU and GPU.
However, if you're going to use this increased efficiency to draw even more objects, you'll also increase the amount of work for the backend, whose performance is no different than under DX9. So your game will run slower...
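
To make the batching point concrete, here's a minimal C++/Direct3D 10 sketch (not from the original post) contrasting one draw call per object with DX10-style instancing. It assumes an ID3D10Device that has already been created, with vertex/index buffers, input layout and shaders bound; the function names and the "trees" scenario are purely hypothetical.

#include <d3d10.h>

// DX9-era pattern: one draw call (plus a per-object constant update) per tree.
// The per-call CPU/driver overhead is the batching bottleneck DX10 targets.
void DrawTreesOneByOne(ID3D10Device* device, UINT indexCount, UINT treeCount)
{
    for (UINT i = 0; i < treeCount; ++i)
    {
        // ...update a per-object constant buffer with tree i's world matrix...
        device->DrawIndexed(indexCount, 0, 0);
    }
}

// DX10 instancing: one call submits every tree; per-instance data (e.g. the
// world matrix) comes from a second vertex buffer or is derived from SV_InstanceID.
void DrawTreesInstanced(ID3D10Device* device, UINT indexCount, UINT treeCount)
{
    device->DrawIndexedInstanced(indexCount, treeCount, 0, 0, 0);
}

The CPU cost of the first version grows with the number of objects, while the second stays roughly constant; but the GPU still shades and rasterizes every tree either way, which is the backend work mentioned above.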

In other words, if you're seeing current DX10 games running slower than their DX9 equivalents, it's because programmers are asking the GPU to work harder, not because DX10 is bad.

And then there's geometry shading, which allows some neat effects, but it's probably not used a lot in the first batch of DX10 games.

-Why is there very little visual difference between DX9 and DX10 despite the performance hit?
Because DX10 doesn't make anything faster than DX9, except for the batching bottleneck, which most intelligently designed engines already try to avoid.
As for the limited visual difference: DX10 has some better options wrt anti-aliasing and HDR, but it's still the skill of the programmers and designers that determines visual brilliance. And they are running into the law of diminishing returns: DX9 games already look quite neat. To make things look even slightly better, you need a lot more calculations.

-Why is it that some of these games give acceptable frame rates only on one hardware brand?
Because some drivers are not mature? Because the games were programmed when there was only one DX10 card available? Because some vendors have a better developer support infrastructure than others? Who knows...

-Are these games in fact DX10, using pixel shader 4.0? Or are they using only a portion of it...somehow?
I don't think that really matters: it's mostly just an underlying, low-level intermediate definition that has little impact on the way you program the whole thing.

-Why are these games not coded specifically for DX9 or DX10 and placed on a DVD install that asks you whether you want to install DX9 or DX10?
Because making full use of the performance benefits of DX10 requires changes to an existing engine. And since the engine determines to some extent how the artwork and levels are designed, the amount of additional work would be huge.

Unification of shaders, meaning no difference between pixel & vertex shader in any of these so-called DX10 games?
Unification only means that you use the same execution model for all shader types. It has little or no impact on the way you actually write your shaders, DX9 or DX10.

For example, if a game is using both pixel and vertex shader calls in DX10, is this really a DX10 game?
Yes, of course: DX10 or no DX10, you'll always have to transform vertices and calculate pixel colors...

... and I really question how a DX10 game is defined.
There is this idea that DX10 is a revolution compared to DX9. I believe this is incorrect. It's an evolution that removes some bottlenecks here and there and allows some neat incremental effects. But the mathematics, principles, and techniques behind creating a great visual effect haven't really changed much.
 