So what?
Your reply is in error, because it is not based on the topic under discussion...
Well, sure it is.

That would be so nice. Too bad that's not the case.
Err... driver bottlenecked?
"2) Subsequent nVidia GPUs post NV3x (NV4x and later) were not related to NV3x in terms of architecture, but instead have been radically different. So this would indicate to me that NV3x was 'too complex' for production regardless of process."

The general architecture is surprisingly similar for being "radically different".
"Continuing the OT, the primary reason the NV30 did so little with more hardware is that it wasn't a quad-based rendering architecture, in contrast to its rival at the time -- R300. Because of that, a lot of transistors went into redundancy logic, and that's without even counting the twice narrower memory bus."

How is NV30 not quad-based?
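(For anyone wondering what "quad-based" means in this context: roughly, the rasterizer shades pixels in 2x2 blocks, so screen-space derivatives for things like mip selection fall out of simply differencing neighbours within the quad. The sketch below is a minimal, made-up illustration of that idea, not any vendor's actual pipeline; every name in it is hypothetical.)

[code]
// Conceptual sketch only: shading in 2x2 quads lets the hardware estimate
// texture-coordinate derivatives (needed for mip selection) by differencing
// neighbouring pixels instead of adding dedicated per-pixel logic.
#include <cmath>
#include <cstdio>

struct Sample { float u, v; };        // interpolated texture coordinates
struct Quad   { Sample px[2][2]; };   // 2x2 block of pixels shaded together

// Rate of change of u across the quad in x and y, by simple differencing.
static float ddx_u(const Quad& q) { return q.px[0][1].u - q.px[0][0].u; }
static float ddy_u(const Quad& q) { return q.px[1][0].u - q.px[0][0].u; }

int main() {
    // A made-up quad whose texture coordinates grow across the screen.
    Quad q = {{ {{0.10f, 0.20f}, {0.15f, 0.20f}},
                {{0.10f, 0.26f}, {0.15f, 0.26f}} }};

    // Crude LOD estimate from the larger rate of change (real hardware uses
    // both u and v on both axes, but the principle is the same).
    float rate = std::fmax(std::fabs(ddx_u(q)), std::fabs(ddy_u(q)));
    float lod  = std::log2(rate * 256.0f);   // assuming a 256-texel texture
    std::printf("approx mip level: %.2f\n", lod);
    return 0;
}
[/code]

A chip organised around quads shares that logic across four pixels at a time; one that isn't has to duplicate it or work around it, which is presumably the "redundancy" being referred to above.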
Banking... is this something to do with your money?
On a more serious note - I'm beginning to think that most of R600's many issues come down to the command processor acting weirdly in conjunction with the uber-complex arbitration system in the chip. Duh!
It just isn't rational for a skinny GPU like G84 to come even close to such a monster, and yet you throw almost all the blame on the fill-rate side, texturing and so on.
A quick thought that flashed through my mind: R600 - too complex to be a plain graphics processor...
In contrast, G80 has so few issues to solve and is just waiting for a proper die shrink: more robust caching and buffering (GS, anyone?), some depth/occlusion tweaks, and an SP clock domain free to skyrocket almost at will.
You realize that if you tease Walt about NV30 he'll whip up a post so long and so deep you'll pray you had simply smiled and waved, right?
WaltC said: I am puzzled, though, by the disparity of commentary I read on R600.
"If it's true there is no difference between DX9 and DX10 (as posted by others) in some of these games but they still sustain a big hit in performance, ..."

It mostly depends on where the bottleneck is. If your game is limited by anything related to vertex shading/pixel shading/ROP/texture, basically pretty much any kind of hardware function that's already there for DX9, you'll see very little performance difference between DX9 and DX10 on the same GPU: it's using the same hardware for both.
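One rough way to tell which side of the fence you're on, sketched with made-up stand-ins for an engine's per-frame work: time the CPU cost of building and submitting a frame against the total frame time. If the frame takes much longer than the CPU needed, the GPU side (shading, texturing, ROPs, bandwidth) is the limiter, and the API running on top of the same chip won't change much.

[code]
// Hypothetical bottleneck check; build_and_submit_commands() and
// wait_for_gpu_and_present() are placeholders for real engine code.
#include <chrono>
#include <cstdio>

void build_and_submit_commands() { /* CPU side: visibility, draw calls... */ }
void wait_for_gpu_and_present()  { /* blocks until the GPU finishes */ }

int main() {
    using clk = std::chrono::steady_clock;
    for (int frame = 0; frame < 3; ++frame) {
        auto t0 = clk::now();
        build_and_submit_commands();
        auto t1 = clk::now();                    // CPU-side cost of the frame
        wait_for_gpu_and_present();
        auto t2 = clk::now();                    // total frame cost

        double cpu_ms   = std::chrono::duration<double, std::milli>(t1 - t0).count();
        double frame_ms = std::chrono::duration<double, std::milli>(t2 - t0).count();
        std::printf("frame %d: cpu %.2f ms, total %.2f ms -> likely %s-bound\n",
                    frame, cpu_ms, frame_ms,
                    frame_ms > 2.0 * cpu_ms ? "GPU" : "CPU");
    }
    return 0;
}
[/code]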
"-Why is there very little difference between Dx9 and Dx10 with the performance hit?"

Because DX10 doesn't make anything faster than DX9, except for the batching bottleneck, which most intelligently designed engines already try to avoid.
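The "batching bottleneck" here is per-draw-call CPU/driver overhead. Below is a sketch, using invented stand-ins rather than the real D3D9/D3D10 entry points, of how an engine sidesteps it by instancing, which is exactly why such an engine sees little gain from DX10's cheaper draw calls.

[code]
// Made-up renderer stand-ins, just to count the expensive API crossings.
#include <cstdio>
#include <vector>

struct Matrix { float m[16]; };                         // per-object transform

static int g_apiCalls = 0;
void set_constants(const Matrix&)                      { ++g_apiCalls; }
void draw_mesh()                                       { ++g_apiCalls; }
void draw_mesh_instanced(const std::vector<Matrix>&)   { ++g_apiCalls; }

int main() {
    std::vector<Matrix> trees(1000);    // 1000 copies of the same small mesh

    // Naive path: one constant update plus one draw per object.
    g_apiCalls = 0;
    for (const Matrix& t : trees) { set_constants(t); draw_mesh(); }
    std::printf("naive: %d API calls\n", g_apiCalls);

    // Batched path: upload all transforms once, then a single instanced draw.
    g_apiCalls = 0;
    draw_mesh_instanced(trees);
    std::printf("instanced: %d API call(s)\n", g_apiCalls);
    return 0;
}
[/code]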
"-Why is it that some of these games give acceptable frame rates out of one hardware brand?"

Because some drivers are not mature? Because they were programmed when there was only one DX10 card available? Because some vendors have a better developer support infrastructure than others? Who knows...
"-Are these games in fact DX10 using pixel shader 4.0? Or are they using only a portion of it... somehow?"

I don't think that really matters: it's mostly just an underlying intermediate low-level definition that has little impact on the way you program the whole thing.
"-Why are these games not coded specifically for DX9 or DX10 and placed on a DVD install asking you if you want to install 'DX9 or DX10'?"

Because making full use of the performance benefits of DX10 requires changes to an existing engine. And since the engine determines to some extent how the artwork and levels are designed, the amount of additional work would be huge.
"Unification of shaders, meaning no difference between pixel & vertex shader in any of these so-called DX10 games?"

Unification only means that you use the same execution model for all shader types. It has little or no impact on the way you actually write your shaders, DX9 or DX10.
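To put that in concrete terms: even on a unified-shader GPU you still write a vertex shader and a pixel shader as separate entry points and compile them to separate targets (vs_4_0 and ps_4_0 in D3D10 terms); the unification lives in the hardware scheduler, not in the source you write. The compile_stub below is a placeholder, not the real D3DX/D3DCompile API.

[code]
// Two shader stages, two compilations, regardless of unified hardware.
#include <cstdio>
#include <string>

const char* kVertexShader =
    "float4 vs_main(float4 pos : POSITION) : SV_Position { return pos; }";
const char* kPixelShader =
    "float4 ps_main() : SV_Target { return float4(1, 0, 0, 1); }";

// Placeholder for whatever shader compiler the engine actually uses.
bool compile_stub(const std::string& src, const char* entry, const char* target) {
    std::printf("compiling entry '%s' for target '%s' (%zu chars of HLSL)\n",
                entry, target, src.size());
    return true;
}

int main() {
    compile_stub(kVertexShader, "vs_main", "vs_4_0");
    compile_stub(kPixelShader,  "ps_main", "ps_4_0");
    return 0;
}
[/code]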
"For example, if a game is using both shader and vertex calls in DX10, is this really a DX10 game?"

Yes, of course: DX10 or no DX10, you'll always have to transform vertices and calculate pixel colors...
"... and I really question how a DX10 game is defined."

There is this idea that DX10 is a revolution compared to DX9. I believe this is incorrect. It's an evolution that removes some bottlenecks here and there, which allows some neat incremental effects. But the mathematics, principles, and techniques behind creating a great visual effect haven't really changed much.