Ati Crossfire capable of 14X FSAA

swaaye said:
How NV let themselves get beaten in AA with two generations is beyond me...
In fairness to nVidia though, it's a HELL of a lot better than the FX generation AA.

That's why I'm all interested in their new AA, hopefully they got a clue and got it right 'cause it sounds like ATi is getting ready to hit an out-of-the-park homerun in image quality.
 
digitalwanderer said:
That's why I'm all interested in their new AA, hopefully they got a clue and got it right 'cause it sounds like ATi is getting ready to hit an out-of-the-park homerun in image quality.

It would be a pity if this "out of the park" IQ requires more than one video card because I think you'll find that the most fervent ATI supporters think using dual cards is absolutely stupid and pointless :p
 
wireframe said:
digitalwanderer said:
That's why I'm all interested in their new AA, hopefully they got a clue and got it right 'cause it sounds like ATi is getting ready to hit an out-of-the-park homerun in image quality.

It would be a pity if this "out of the park" IQ requires more than one video card because I think you'll find that the most fervent ATI supporters think using dual cards is absolutely stupid and pointless :p

Haha.
 
Can someone refresh my memory on exactly why R300+ is capable of 6xMSAA and NV3x+ is only capable of 4x - something about number of passes through the ROPs right?
 
wireframe said:
digitalwanderer said:
That's why I'm all interested in their new AA, hopefully they got a clue and got it right 'cause it sounds like ATi is getting ready to hit an out-of-the-park homerun in image quality.

It would be a pity if this "out of the park" IQ requires more than one video card because I think you'll find that the most fervent ATI supporters think using dual cards is absolutely stupid and pointless :p

depends on price.... ;)
 
wireframe said:
It would be a pity if this "out of the park" IQ requires more than one video card because I think you'll find that the most fervent ATI supporters think using dual cards is absolutely stupid and pointless :p
From my perspective too - I'm still AGP single-card here. :?

But for the people who are considering dual-card solutions it'll be a big plus for ATi over nVidia....if ATi's implementation works/isn't buggy/doesn't have some hidden surprises/etc.

It's too early to call it, I'm not saying what WILL happen....I'm just putting my predictions in I guess. Sorry if it sounded like more than that, I'm having an awful run of days here. :oops:
 
trinibwoy said:
Can someone refresh my memory on exactly why R300+ is capable of 6xMSAA and NV3x+ is only capable of 4x - something about number of passes through the ROPs right?

Well, the number of passes through the ROP is likely also a factor of the buffer and sample compression you have on chip for this. The other reason is that NVIDIA don't use as high a resolution sparse sample grid - ATI has a subsample accuracy that can sample from 144 (12x12 grid) different sample locations per pixel. NVIDIA made an alteration to the sample grid for 4x FSAA and although it's rotated I don't think they have altered the number of positions it can sample from.
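A minimal sketch (Python) of why the sparse grid matters - the sample positions here are made up for illustration, NOT ATI's actual pattern. The point is that on a 12x12 subpixel grid, a 6x pattern can put every sample in its own row and column, so near-horizontal and near-vertical edges get six distinct coverage steps instead of the two or three an ordered grid gives:

```python
GRID = 12  # subsample accuracy: 12x12 candidate positions per pixel

# Hypothetical 6-sample sparse pattern: each sample in a unique row AND column
samples_6x = [(1, 7), (3, 1), (5, 10), (7, 4), (9, 11), (11, 2)]

# Ordered 2x2 grid pattern for contrast: samples share rows/columns,
# so axis-aligned edges see fewer distinct coverage transitions
samples_ogss = [(3, 3), (3, 9), (9, 3), (9, 9)]

def is_sparse(samples):
    """True if no two samples share a subpixel row or column."""
    xs = [x for x, _ in samples]
    ys = [y for _, y in samples]
    return len(set(xs)) == len(samples) and len(set(ys)) == len(samples)
```

Under this toy check the sparse 6x pattern passes and the ordered grid fails, which is the gist of why a rotated-but-coarse grid buys less than a genuinely sparse high-resolution one.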
 
wireframe said:
It would be a pity if this "out of the park" IQ requires more than one video card because I think you'll find that the most fervent ATI supporters think using dual cards is absolutely stupid and pointless :p
On further reflection I gotta point out that the reason most "fervent ATi supporters think using dual cards is absolutely stupid and pointless" is that there is no need/huge advantage to it yet.

If you could show them an advantage to having dual cards I think they might reconsider their opinion, and 24xAA is definitely looking to be an advantage.

Just a thought. ;)
 
DaveBaumann said:
trinibwoy said:
Can someone refresh my memory on exactly why R300+ is capable of 6xMSAA and NV3x+ is only capable of 4x - something about number of passes through the ROPs right?

Well, the number of passes through the ROP is likely also a factor of the buffer and sample compression you have on chip for this. The other reason is that NVIDIA don't use as high a resolution sparse sample grid - ATI has a subsample accuracy that can sample from 144 (12x12 grid) different sample locations per pixel. NVIDIA made an alteration to the sample grid for 4x FSAA and although it's rotated I don't think they have altered the number of positions it can sample from.

Oh, didn't know about the difference in grid resolution. So essentially, ATi has better sample compression so a third run through the ROPs is not as much of a bandwidth hit as it would be on Nvidia's cards? Also, what determines how many AA samples can be output per clock?
 
wireframe said:
because I think you'll find that the most fervent ATI supporters think using dual cards is absolutely stupid and pointless :p
Which may well change once ATI's multi-card solution is announced and available.
 
wireframe said:
digitalwanderer said:
That's why I'm all interested in their new AA, hopefully they got a clue and got it right 'cause it sounds like ATi is getting ready to hit an out-of-the-park homerun in image quality.

It would be a pity if this "out of the park" IQ requires more than one video card because I think you'll find that the most fervent ATI supporters think using dual cards is absolutely stupid and pointless :p

Did you consider that maybe the point of the exercise was to give them a reason to change their mind? ;)
 
geo said:
wireframe said:
digitalwanderer said:
That's why I'm all interested in their new AA, hopefully they got a clue and got it right 'cause it sounds like ATi is getting ready to hit an out-of-the-park homerun in image quality.

It would be a pity if this "out of the park" IQ requires more than one video card because I think you'll find that the most fervent ATI supporters think using dual cards is absolutely stupid and pointless :p

Did you consider that maybe the point of the exercise was to give them a reason to change their mind? ;)

Not really, and this is also why I am mostly incapable of taking sides: I don't have a mind. ;) (no brain, no ice-cream headache! (for some reason I am expecting someone to respond to this particular part of my post))

I'm just having a bit of fun with the flag waving loons. However, one thing I thought of reading this and that I do not particularly like is the thought of features (as opposed to performance) being "SLI" dependent. Sure, it's easy to argue that more performance allows more samples in this case and more options/features is always better, barring some huge sacrifice to get them, but I really think that if an IHV is going after a certain feature, they should do so in the single unit (ie: keep their vision atomic to the single board configuration). Even on the performance scaling side, I would find it repulsive if one IHV decided that "SLI" was necessary for satisfactory high-end performance. I think it's great as an over-the-top solution for those willing to throw more bucks for some more performance, but I don't like the thought of the innocent being forced into a "buy two" mentality to get what the ad campaigns promise.

Wow...I added some majestic blah-blah to something I intended to only be the first paragraph. Oh well.
 
More importantly, using multiple cards for different samples within a pixel would just be stupid to do performance-wise.
 
wireframe said:
However, one thing I thought of reading this and that I do not particularly like is the thought of features (as opposed to performance) being "SLI" dependent. Sure, it's easy to argue that more performance allows more samples in this case and more options/features is always better, barring some huge sacrifice to get them, but I really think that if an IHV is going after a certain feature, they should do so in the single unit (ie: keep their vision atomic to the single board configuration). Even on the performance scaling side, I would find it repulsive if one IHV decided that "SLI" was necessary for satisfactory high-end performance. I think it's great as an over-the-top solution for those willing to throw more bucks for some more performance, but I don't like the thought of the innocent being forced into a "buy two" mentality to get what the ad campaigns promise.

Wow...I added some majestic blah-blah to something I intended to only be the first paragraph. Oh well.

And I've been saying exactly the opposite --that SLI only makes sense if it can do features/IQ beyond current single-card settings. :LOL: I guess you're more of a populist than I am. :)

I'm also assuming that the two (single card vs mvp) climb together in performance & features from generation to generation, rather than an IHV artificially knocking functionality out of single-card solutions for marketing reasons in an effort to force people into mvp. I suppose there's a danger it could go that route, but hopefully competition moderates any such effort by either IHV.
 
Chalnoth said:
More importantly, using multiple cards for different samples within a pixel would just be stupid to do performance-wise.

Why? The only performance problem I am able to see at the moment is that a slower card will limit a faster one.
 
geo said:
And I've been saying exactly the opposite --that SLI only makes sense if it can do features/IQ beyond current single-card settings. :LOL: I guess you're more of a populist than I am. :)

Well, I can see the validity of that point, but I see it as becoming very messy in practice. A company should have a vision and follow that vision and their products should be accessible. What they should never do is something like "this SONY VCR delivers twice the picture quality of competing brands*"



*Only when coupled with a SONY receiver and monitor supporting UltraVision Technology (TM).


I am not accusing anyone of such tactics in the 3D scene, but it would be a pity if it went in that direction. In some ways I don't even think how I feel about this makes sense, because you should get more if you add a second card. I just don't like thinking about how it will be angled and promoted from the "SLI" side, and people buying half of that, feeling cheated, and finally being suckered into buying a second one to get what they thought they would be getting in the first place. This is all for the uninitiated masses, of course.

As a side point on that, I don't really understand the economics of SLI for the masses, because I figure it would be very difficult to explain to an uninformed dad why he should buy you two of the same, whereas he may be willing to pay twice as much for a single card that he may somehow understand to be better. Yet, I see people online talking about their 6600GTs. Sure, bragging and all that, but this is what I don't like about this type of thing; the other side of the coin, so to speak. I completely understand an informed customer wanting two 6800 Ultra 512MB SLI even if it is very expensive.

So, you can see that in some way I am contradicting myself, because I don't fully buy into the performance scaling when it is in the middle - buying a second 6600GT later to improve performance as an economically sound plan - because I figure that by the time that need arises there will be needs beyond simple performance in terms of features. You'd be better off buying a 6800GT right off the bat, or simply sell your 6600GT and buy the next gen "6600GT" when it's out as it would presumably be twice as fast.

My goodness...I was gonna tell you you were wrong, but now I have blah-blah-blahed some more instead. :p
 
trinibwoy said:
Oh, didn't know about the difference in grid resolution. So essentially, ATi has better sample compression so a third run through the ROPs is not as much of a bandwidth hit as it would be on Nvidia's cards? Also, what determines how many AA samples can be output per clock?
trin, I had much the same questions recently. I think you'll find the answers in Dave's 6800U p/review, in the ROPs section. Well, most of them, as I don't remember reading about sample compression contributing to the potential number of AA passes (but then I didn't know AA required sample buffers, either, though it makes sense). I believe the ROP architecture determines the number of passes per clock, and limiting it to two might be for transistor savings (and thus maybe yields).
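The loop arithmetic here is simple enough to sketch (Python). The two-samples-per-clock figure is the assumption floated in this thread, not a verified spec:

```python
import math

def rop_loops(aa_samples, samples_per_clock=2):
    """Loops through the ROPs needed to write all of a pixel's AA samples,
    assuming the ROP can output `samples_per_clock` samples per clock."""
    return math.ceil(aa_samples / samples_per_clock)

# Under that assumption: 4x MSAA takes 2 loops, 6x MSAA takes 3.
```

Which would make 6x a 50% increase in ROP loops over 4x, hence the question about whether sample compression hides most of that extra bandwidth cost.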
 
Well if you can get more FSAA samples from Crossfire, thus improving IQ, I'm all for it!
 
Demirug said:
Chalnoth said:
More importantly, using multiple cards for different samples within a pixel would just be stupid to do performance-wise.

Why? The only performance problem I am able to see at the moment is that a slower card will limit a faster one.
The advantage of having two cards render two independent, slightly jittered images is that you can use RGSS. The disadvantage is that you either lose texture cache efficiency when you adjust the LOD or you don't get as good textures as you could.
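The LOD trade-off mentioned above can be sketched (Python). The conventional adjustment - illustrative, not any IHV's documented behavior - is a negative mip bias of 0.5*log2(N) for N-way supersampling, which fetches sharper mip levels at the cost of texture cache locality:

```python
import math

def ss_lod_bias(ss_factor):
    """Conventional negative mipmap LOD bias for ss_factor-way supersampling:
    more samples per pixel let textures be fetched from sharper mip levels,
    which is exactly what hurts texture cache efficiency."""
    return -0.5 * math.log2(ss_factor)

# Two jittered images (2x RGSS) would conventionally bias by -0.5;
# skip the bias and you keep cache efficiency but lose the sharper texturing.
```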

trinibwoy said:
Oh, didn't know about the difference in grid resolution. So essentially, ATi has better sample compression so a third run through the ROPs is not as much of a bandwidth hit as it would be on Nvidia's cards? Also, what determines how many AA samples can be output per clock?
Their sample compression should be about identical in compression ratio, at least for the modes supported.

Higher AA modes require the rasterizer to support a new sampling pattern, more bits for the coverage mask that have to be kept around throughout the whole pipeline, more space in the on-chip tilebuffer, more loops through the ROPs, a modified compression algorithm, modified downsampling, additional status bits here and there...
Overall, the changes might not seem like a lot, but it's not a must-have feature for NVidia and therefore probably somewhere at the end of the to-do list.
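To make the "modified downsampling" step concrete, here's a trivial resolve sketch in Python - the final pixel is just the average of its stored subsample colors. Real hardware does this with compression-aware fast paths (one read for a fully-compressed pixel), which this ignores:

```python
def resolve_pixel(sample_colors):
    """MSAA resolve sketch: average a pixel's stored subsample colors
    (RGB tuples) into the final framebuffer color."""
    n = len(sample_colors)
    return tuple(sum(c[i] for c in sample_colors) / n for i in range(3))

# e.g. two subsamples on an edge, one red and one green, blend 50/50
```

Doubling the sample count widens every one of those structures (mask bits, tilebuffer entries, resolve inputs), which is why "just add more samples" is a real design cost even if no single change is large.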
 
Demirug said:
Why? The only performance problem I am able to see at the moment is that the a slower card will limit a faster.
You also have texture cache issues and a total lack of shared geometry processing.

I'll say it again: if you can make a single card do the rendering for all of the subsamples, it'll be vastly more efficient.
 