Crossfire Info

Ratchet said:
wrt the 32x AF thing [H] mentioned, it has to do with the 2x SSAA mode. I need to go cut the %@!#! grass again so I'll just quote the bit from the CrossFire whitepaper:
ATI in their CrossFire whitepaper said:
An added benefit of these modes is that they work together with SMOOTHVISION HD Anisotropic Filtering (AF). This is a high quality filtering technique designed to produce sharper, clearer textures by blending multiple texture samples (2, 4, 8, or 16) for each pixel. Since Super AA can render each pixel from two slightly different viewpoints and combine them, the texture samples from each viewpoint get combined as well. This means the number of texture samples per pixel is effectively doubled, so up to 32x Anisotropic Filtering can be supported.
32 samples do not necessarily result in 32x anisotropic filtering (or, to write it the way it was originally done to make this clear, 32:1).
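A rough way to see the distinction (my own reading, not a claim from the whitepaper): the "x" in AF describes the maximum anisotropy ratio of the pixel footprint that filtering can cover, not the raw number of texture taps. Each of the two jittered Super AA views still filters at no more than 16:1, so

samples per pixel: 2 x 16 = 32, but maximum footprint ratio per view: still 16:1

The averaging of the two views buys extra noise reduction, not a longer line of anisotropy.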
 
I don't see how the AA thing is an advantage for ATI, considering you can already get 16xAA with the Geforce 6, and afaik that's with 4xSSAA as well.
 
DudeMiester said:
I don't see how the AA thing is an advantage for ATI, considering you can already get 16xAA with the Geforce 6, and afaik that's with 4xSSAA as well.

News to me. That's today-supported, today-unsupported (i.e. reg hacks or somesuch), or theoretical? Linky please. . .
 
DudeMiester said:
I don't see how the AA thing is an advantage for ATI, considering you can already get 16xAA with the Geforce 6, and afaik that's with 4xSSAA as well.
Well, the difference is you can use the 6xAA with ATI, and not 16xAA or even 4xAA with NVDA. I would think one will be able to use 12xAA on the CrossFire.
 
Theoretically you can do 4xMSAA and 4xSSAA to get 16xAA on GF6 (20x in ATI terms).

The big advantage of the ATI method is the programmable sample patterns, which give them better MSAA sample patterns even at high sample counts, whereas NV's MSAA sample pattern at 16x, for example, is not very optimal.
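To illustrate the sample-pattern point with a quick sketch (both patterns below are made-up examples, not ATI's or NV's actual ones): for a near-horizontal edge what matters is how many distinct vertical offsets the samples provide, and an ordered grid wastes most of them.

Code:
# Toy comparison of an ordered-grid vs a sparse/rotated 4x pattern (hypothetical patterns).
ordered_grid_4x = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
rotated_grid_4x = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

def edge_gradations(pattern):
    # Distinct vertical offsets = coverage steps available on a near-horizontal edge.
    return len({y for _, y in pattern})

print(edge_gradations(ordered_grid_4x))  # 2 -> only two intermediate shades
print(edge_gradations(rotated_grid_4x))  # 4 -> four intermediate shades from the same four samples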
 
Unknown Soldier said:
That's not what was discussed before on the forums. Apparently the R520 would work with a R420/480 (or so the rumours that were flying around said).

I don't know what forums you've been reading, but it evidently isn't this one!
 
I doubt it's really equivalent to 32x AF; if you work out the sampling patterns, I reckon it would look more like two trapezoids stuck together or overlapping rather than one big trapezoid. Though it probably does look somewhat better than 16x AF. No sane person believes everything that comes out of ATI's or NV's mouths.

............

Edit: I just saw the SSAA sampling patterns at HardOCP; it definitely can't be called 32x AF, but it still probably looks slightly better than 16x AF.


What I liked was the HotHardware review.

To enable multi-GPU rendering ATi had to "roll their own" ASIC specifically dedicated to image composition. This device is an FPGA
That is quite funny.

Also the fact that you need 4 slots for two cards is quite funny.
 
geo said:
DudeMiester said:
I don't see how the AA thing is an advantage for ATI, considering you can already get 16xAA with the Geforce 6, and afaik that's with 4xSSAA as well.

News to me. That's today-supported, today-unsupported (i.e. reg hacks or somesuch), or theoretical? Linky please. . .
In the driver panel you only get up to 8xS. 16x is in the Quadro panel AFAIK.
But frankly, how many enthusiasts, i.e. those that buy SLI systems, don't use third party tools?
 
tEd said:
Theoretically you can do 4xMSAA and 4xSSAA to get 16xAA on GF6 (20x in ATI terms).

The big advantage of the ATI method is the programmable sample patterns, which give them better MSAA sample patterns even at high sample counts, whereas NV's MSAA sample pattern at 16x, for example, is not very optimal.

However, with multiple cards you can use subpixel offsets to help with some of the issues of a fixed OGSS grid.
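As a rough sketch of that idea (the offsets here are made up, purely to illustrate): nudging the second card's fixed grid by a sub-pixel amount and averaging the two outputs gives more distinct sample positions than either card alone.

Code:
# Hypothetical example: two cards share the same fixed 2x2 ordered grid,
# but card B's viewport is jittered by a sub-pixel offset before rendering.
base_grid = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
jitter = (0.125, 0.375)  # made-up per-card offset
card_b = [((x + jitter[0]) % 1.0, (y + jitter[1]) % 1.0) for x, y in base_grid]
combined = base_grid + card_b

print(len(set(combined)))             # 8 distinct positions instead of 4
print(len({y for _, y in combined}))  # 4 distinct vertical offsets instead of 2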
 
Has anyone noticed the 240MHz RAMDAC?
Obviously there's a limitation from the DVI bandwidth coming from the slave, and they decided not to put in a faster RAMDAC since it would be useless when using CrossFire...

This sucks; maybe it's not even able to do 2048x1536 at 60Hz (or if it does, what's the point of suffering the flicker). A high-end CRT at very high resolutions is a perfect match for SLI/CrossFire, no?

I don't see the point of multi-GPU with an LCD (you can't even see the >50fps framerates, you'll be limited to 1280x1024, or if you have a higher-end, bigger screen it may be slower...)
 
Blazkowicz_ said:
Has anyone noticed the 240MHz RAMDAC?
Obviously there's a limitation from the DVI bandwidth coming from the slave, and they decided not to put in a faster RAMDAC since it would be useless when using CrossFire...

DVI Single link can handle 1920x1080 (1080p) @60Hz. DVI dual link can handle higher.


I don't see the point of multi-GPU with an LCD (you can't even see the >50fps framerates, you'll be limited to 1280x1024, or if you have a higher-end, bigger screen it may be slower...)

Wrong. Dell ships a 24" LCD monitor that handles 1920x1200 with 11ms and 16ms response time, which will deliver 60Hz 1920x1080 (single link DVI) or 1920x1200 (dual link). If a card can't manage 1080p @ 60Hz, it has a shitty TMDS transmitter.

If Crossfire can't handle those resolutions on either CRT or LCD effectively, then it's a huge problem, and you'd be limited to SuperAA modes, instead of cranking the resolution up.
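For what it's worth, a back-of-the-envelope pixel-clock estimate (the blanking percentages are rough assumptions: small reduced blanking for an LCD over DVI, conventional ~30%/5% blanking for a CRT) shows where the 165 MHz single-link DVI limit and a 240 MHz RAMDAC actually bite:

Code:
# Rough pixel clock = active pixels * blanking overhead * refresh rate.
def pixel_clock_mhz(w, h, hz, h_blank, v_blank):
    return w * (1 + h_blank) * h * (1 + v_blank) * hz / 1e6

print(pixel_clock_mhz(1920, 1080, 60, 0.05, 0.03))  # ~135 MHz, fits single-link DVI (165 MHz)
print(pixel_clock_mhz(1920, 1200, 60, 0.05, 0.03))  # ~150 MHz, still under the limit
print(pixel_clock_mhz(2048, 1536, 60, 0.30, 0.05))  # ~258 MHz, already beyond a 240 MHz RAMDAC
print(pixel_clock_mhz(2048, 1536, 85, 0.30, 0.05))  # ~365 MHz, what F520-class CRT owners actually want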
 
tEd said:
Theoretically you can do 4xMSAA and 4xSSAA to get 16xAA on GF6 (20x in ATI terms).

The big advantage of the ATI method is the programmable sample patterns, which give them better MSAA sample patterns even at high sample counts, whereas NV's MSAA sample pattern at 16x, for example, is not very optimal.

You can go even higher if you have Video RAM for such experiments.

4xMSAA+16xSSAA = 64xAA is possible on 512 MB cards if you don't care about the speed.
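Just as a rough memory estimate (assuming the 4x4 SSAA part is done as an oversized render target, 32-bit colour plus 32-bit Z per sample, and ignoring the front buffer and other overhead):

Code:
# Back-of-the-envelope framebuffer size for 4x4 OGSSAA + 4x MSAA.
def fb_megabytes(w, h, ssaa_grid=4, msaa=4, bytes_per_sample=8):  # 4B colour + 4B Z
    ww, hh = w * ssaa_grid, h * ssaa_grid
    return ww * hh * msaa * bytes_per_sample / 2**20

print(fb_megabytes(1024, 768))   # ~384 MB -- tight, but it fits on a 512 MB card
print(fb_megabytes(1600, 1200))  # ~938 MB -- not happening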
 
Xmas said:
geo said:
DudeMiester said:
I don't see how the AA thing is an advantage for ATI, considering you can already get 16xAA with the Geforce 6, and afaik that's with 4xSSAA as well.

News to me. That's today-supported, today-unsupported (i.e. reg hacks or somesuch), or theoretical? Linky please. . .
In the driver panel you only get up to 8xS. 16x is in the Quadro panel AFAIK.
But frankly, how many enthusiasts, i.e. those that buy SLI systems, don't use third party tools?

Well, I hope somebody does some IQ and performance comparisons of that in SLI vs ATI's crossfire.
 
DemoCoder said:
Blazkowicz_ said:
Has anyone noticed the 240MHz RAMDAC?
Obviously there's a limitation from the DVI bandwidth coming from the slave, and they decided not to put in a faster RAMDAC since it would be useless when using CrossFire...

DVI Single link can handle 1920x1080 (1080p) @60Hz. DVI dual link can handle higher.


I don't see the point of multi-GPU with an LCD (you can't even see the >50fps framerates, you'll be limited to 1280x1024, or if you have a higher-end, bigger screen it may be slower...)

Wrong. Dell ships a 24" LCD monitor that handles 1920x1200 with 11ms and 16ms response time, which will deliver 60Hz 1920x1080 (single link DVI) or 1920x1200 (dual link). If a card can't manage 1080p @ 60Hz, it has a shitty TMDS transmitter.

If Crossfire can't handle those resolutions on either CRT or LCD effectively, then it's a huge problem, and you'd be limited to SuperAA modes, instead of cranking the resolution up.

Being very interested myself in Dell's 24" screen, its response time seems to typically lie in the 30+ms range. Haven't the manufacturers' creative claims regarding LCD screens been debunked often enough? Dell recently suffered the ignominy, in a test at Anand's, of having exactly the same panel as Apple but speccing its response time much more aggressively. And even some big-name sites actually test response times over a range of levels.

You actively have to look the other way to ignore the collected public data.

As confirmed by the engineering done on Dell's 24" screen, 1080p on a single DVI link lies at the outer gasping limit of what the interface is designed to manage. Compared to, say, the abilities of my Sony F520 CRT, which supports 2048x1536 at 85+ Hz, this is feeble indeed. While this might not be a major limitation for many people right now, it would seem that CrossFire, just like SLI, by its very nature targets people with atypical demands, and not primarily the folks who choose inexpensive monitors.
(In fact that Dell 24" would seem a nice match.)

It's a limitation worthy of notice, and it's up to the individual to judge how significant it is. I had missed it completely, and if I were in the market for this kind of solution, I would care. YMMV, obviously, but it can't hurt to be aware of all the issues.
 
4x4 (i.e. 16-sample) SSAA isn't really usable on NV4x, and from what I can tell it's limited to 1024x768 as well.

For those interested:

1xAA/16xAF:

[screenshot: 1.jpg]


16xSSAA/16xAF:

[screenshot: 16.jpg]


Between those two, the latter results in an (expected) -2.0 LOD offset.
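That offset is what you'd expect from the supersampling factor alone: a 4x4 ordered grid samples the screen at four times the linear resolution on each axis, so the matching texture LOD bias is -log2(4) = -2.0.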
 
Entropy said:
As confirmed by the engineering done on Dell's 24" screen, 1080p on a single DVI link lies at the outer gasping limit of what the interface is designed to manage.

It doesn't matter that it's near the limit. The interface was designed for it, and it works.

Compared to, say, the abilities of my Sony F520 CRT, which supports 2048x1536 at 85+ Hz, this is feeble indeed.

I disposed of many a *good* 21" CRT monitor because the picture quality is inferior. You complain about inaccurate response times and running "near the limits" (of a digital point-to-point interface, no less), but then turn around and talk about a resolution mode running near the limits of monitor dot pitch, convergence, RAMDAC, and analog cable S/N envelope. The F520's 2048x1536 image at 21" would be significantly less visually stable than a 1920x1200 LCD, whilst simultaneously being harder to read because of smaller pixels that bleed light. Couple that with ClearType for text and TAA for games and it's no contest, IMHO.
 
DemoCoder said:
Entropy said:
As confirmed by the engineering done on Dell's 24" screen, 1080p on a single DVI link lies at the outer gasping limit of what the interface is designed to manage.

It doesn't matter that it's near the limit. The interface was designed for it, and it works.

Compared to, say, the abilities of my Sony F520 CRT, which supports 2048x1536 at 85+ Hz, this is feeble indeed.

I disposed of many a *good* 21" CRT monitor because the picture quality is inferior. You complain about inaccurate response times and running "near the limits" (of a digital point-to-point interface, no less), but then turn around and talk about a resolution mode running near the limits of monitor dot pitch, convergence, RAMDAC, and analog cable S/N envelope. The F520's 2048x1536 image at 21" would be significantly less visually stable than a 1920x1200 LCD, whilst simultaneously being harder to read because of smaller pixels that bleed light. Couple that with ClearType for text and TAA for games and it's no contest, IMHO.

You are dodging the issue, which is the low upper bound on RAMDAC bandwidth. The fact of the matter is that this low upper bound means you can't drive CRTs to higher resolutions and refresh rates at all. (And there is every reason to suspect that the signal properties won't be anywhere close to ideal near the cutoff.)

LCDs vs. CRTs has been argued to death, and there is little new ground to be covered. I simply reacted to your reference to the wildly inaccurate response-time claims. I still use CRTs for some things (speed for gaming and colour gamut for photoshopping), LCDs for others (text). Horses for courses.

LCD resolutions are creeping upwards though, and it doesn't really make sense to assume that someone would pay $1000 for the graphics cards + motherboard alone and couple that to cheap displays. 21" CRTs, if you value their properties, and 23-24" LCDs and up are the display hardware I would assume someone in that market would use.
 
Entropy said:
Dell recently suffered the ignominy, in a test at Anand's, of having exactly the same panel as Apple but speccing its response time much more aggressively.

I just picked one of those Dell 20" widescreens up.....and I have to tell you I've never enjoyed gaming so much. The thing is just amazing...
 
DaveBaumann said:
Unknown Soldier said:
That's not what was discussed before on the forums. Apparently the R520 would work with a R420/480 (or so the rumours that were flying around said).

I don't know what forums you've been reading, but it evidently isn't this one!

Really?

http://www.beyond3d.com/forum/viewtopic.php?t=22018&postdays=0&postorder=asc&start=0
http://www.beyond3d.com/forum/viewtopic.php?t=21129&postdays=0&postorder=asc&start=20

It was touched on in these threads... no doubt spurred on by
http://www.theinquirer.net/?article=22485

I didn't say it was the truth .. just that it was rumours. If you say it's garbage .. then fine .. it's garbage and I won't mention it again.

No need to talk down to someone who is just having fun speculating (just like everyone else not in the know).

You did the same with Josh.
 
There is a fundamental difference between the type of speculation that just parrots any old thing going around and actual reasoned speculation, which I would hope people would partake in here.

I shall repeat again: different generations of chips will not just have different features and capabilities (which would mean limiting the caps for one of them, further muddying the driver complexity), but also different precisions, potentially different filtering and FSAA techniques, and rounding differences (the ATI fog bug). The potential IQ differences are likely to be a deal breaker here because, although they may be slim when viewed side by side, when you have two boards rendering alternate frames, or parts of the same frame, the differences can be much more noticeable.

Common sense?
 