Crossfire limitation

Pete said:
Ail, radeonic was laughing at the smackdown you gave my answer. :)

Wasn't my intention; I felt dumb for asking over and over again. I obviously didn't understand why he was laughing either ;)
 
geo said:
The implication is that the "ideal" mode has only been selected for the ones with profiles. One hopes that the default mode (i.e. the great mass) is a max-compatibility mode of something close to 100% reliability, rather than a max-performance mode that may or may not work. Then you can fiddle from there on those non-profiled apps. At least that's what I'm interpreting.

You obviously can enable Super-AA on that, but apart from that nothing else. What, though, if an older game actually benefits quite a lot from AFR, yet doesn't scale at all in the default mode?

If I had the feeling that keeping this one 100% transparent to the user would mean endless complications, I probably wouldn't even ask; but this one sounds so ridiculously simple to unlock. At the very least it can't hurt ATI or its CF platforms, rather the contrary.
 
Ailuros said:
You obviously can enable Super-AA on that, but apart from that nothing else. What, though, if an older game actually benefits quite a lot from AFR, yet doesn't scale at all in the default mode?

If I had the feeling that keeping this one 100% transparent to the user would mean endless complications, I probably wouldn't even ask; but this one sounds so ridiculously simple to unlock. At the very least it can't hurt ATI or its CF platforms, rather the contrary.

We're either talking past each other, or you just know something I don't and this is your way of communicating it. :LOL: My understanding (based on nothing in particular --just what is available at this place) is that what you are asking for above is what they are doing for all older games --a default mode (super-tiling) that can be changed to what you want (AFR, for instance), and it is up to you to decide which you like better; only the initial choice is made by ATI for those older games. It is only the handful of profiled games that are "locked in" to a given mode determined by ATI. But maybe you're right --it does seem a bit odd to "lock in" a few when you've already built into the driver the ability to change them. But then I don't know the details of what/how much is going on with those profiles that might make it more work than is obvious on the surface to make those "profiled" games available for changing in the driver as well.
 
Dave Baumann said:
This is what NVIDIA were trying to impress on people at their editors' day, except I'm not sure people had much buy-in to it. The very fact that there are 3 different advertised rendering modes and only two APIs should tell everyone that profiling in some cases is a requirement; otherwise there frankly wouldn't be any point in providing 3 different modes.

NVIDIA appear to be stuck on the issue of profile/no-profiles, but what ATI were trying to impress is that the default for Crossfire will be on, whereas the default for SLI when it was initially launched would be off, unless there is a game profile. Exactly how much that is an issue by the time Crossfire is actually released now is an entirely different question, but the argument behind profiling is a bit of a red herring IMO.

Haha.. I thought this argument was funny. I don't think what ATI was trying to impress people with was that Crossfire will be "on" by default. What they were trying to impress was the rather vague "Crossfire will work with ALL GAMES while SLI doesn't." To many, this might sound like "Any game would run faster/prettier with Crossfire" rather than "Whether the game will benefit or not, the Crossfire mode will always be turned on." And obviously NVIDIA felt the need to say something.

In this case, Crossfire's method is obviously superior since a user wouldn't have to worry about profiles. (Although we'll have to see.) But at the same time, there is no reason for NVIDIA not to include a profile in the drivers for a game that scales well in an SLI configuration. Beyond that, it's just different people/parties saying the same thing differently. Some games will benefit from SLI/Crossfire, some games will not, regardless of what the "default" mode is. Different story - same ending. It's a typical 'you see what you want to see' situation. (Or in this case, 'you say what you want to say.')

 
Pete said:
Chal, forgive my density, but how then does the compositor handle frame rates lower than the 60Hz refresh rate: by repeating the previous frame until it has enough data for a new one?
Ah, just consider that the slave video card will have no idea that it's running in tandem with a second. It simply won't render to the entire screen (SFR/supertiling), or will leave every other frame blank (AFR), or will output a normal frame with a custom pixel center (SuperAA). But in any case, it will think it's rendering directly to a monitor that is set at a particular refresh rate.

Then the compositing chip just needs to combine that output with the output from the master card for display on the monitor.
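A rough sketch of that per-pixel selection, just to make the modes concrete (illustrative Python, not the actual fixed-function hardware; the mode names, tile size, and helper functions here are my own assumptions):

TILE = 32  # hypothetical supertile size; the real dimensions aren't stated here

def pick_source(mode, x, y, frame_index, split_line):
    """Which card feeds this output pixel: 0 = master, 1 = slave."""
    if mode == "AFR":          # whole frames alternate between the two cards
        return frame_index % 2
    if mode == "SCISSOR":      # screen split into two contiguous regions
        return 0 if y < split_line else 1
    if mode == "SUPERTILING":  # checkerboard of small tiles across the frame
        return ((x // TILE) + (y // TILE)) % 2
    raise ValueError(mode)     # SuperAA needs both inputs; see below

def blend_superaa(master_px, slave_px):
    # SuperAA: both cards render the full frame with offset sample positions,
    # and the final pixel is the average of the two colour values.
    return tuple((a + b) // 2 for a, b in zip(master_px, slave_px))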
 
Thanks, Chal. That's clear to me, but my focus is more on what happens between the compositor and a CRT. If the framerate is below the refresh rate--say, 40fps at 60Hz--then a CRT simply resends the previous frame if it doesn't get a new one before the next refresh, yes (and an LCD updates nothing)? Couldn't this same principle mean CRTs can be driven above the compositor's apparent 60Hz/fps limitation as dictated by its TMDS transmitters, or does the compositor (and in turn the RAMDAC) do its job on the fly (no intermediate stage, aka memory space)?

I understand now why you said it was crucial whether or not the compositor had any RAM of its own. I don't fully understand how SuperAA works without some RAM on the compositor. I can see how the compositor can combine scissor or tiling modes, as nothing necessarily overlaps, so it just directs traffic as the frame arrives, bit by bit. It switches between GPUs once every frame with AFR, twice per frame with scissor mode, or many times per frame with tiling. But, with SuperAA, does it receive both copies of the frame at the same time and combine the frames on the fly, pixel by pixel (which I suppose requires a byte or four of RAM)? Or does it receive two full frames into a buffer and then combine them (which would require two frames' worth of RAM)? In this case it seems the RAMDAC could run off the compositor's RAM and not be limited by the 16x12@60Hz TMDS link.

I couldn't find the answer in either AT's or Hexus' Xfire previews, and I don't think Dave wrote one. Forgive me if I've missed an article that explains the Xilinx FPGA compositor (per Hexus).
 
Pete said:
Thanks, Chal. That's clear to me, but my focus is more on what happens between the compositor and a CRT. If the framerate is below the refresh rate--say, 40fps at 60Hz--then a CRT simply resends the previous frame if it doesn't get a new one before the next refresh, yes (and an LCD updates nothing)? Couldn't this same principle mean CRTs can be driven above the compositor's apparent 60Hz/fps limitation as dictated by its TMDS transmitters, or does the compositor (and in turn the RAMDAC) do its job on the fly (no intermediate stage, aka memory space)?
Display devices are passive. They just receive image data at a fixed rate of x Hz. There is no difference between having a CRT or an LCD attached to the graphics card, except for maximum resolution/refresh rates. The compositing engine has no concept of fps, because it doesn't receive frames, but output images from both chips/cards. Therefore it does not limit fps at all, only refresh rate.
As long as you don't enable VSync, there is no fps limit (though some CrossFire modes might require VSync to be enabled).

I understand now why you said it was crucial whether or not the compositor had any RAM of its own. I don't fully understand how SuperAA works without some RAM on the compositor. I can see how the compositor can combine scissor or tiling modes, as nothing necessarily overlaps, so it just directs traffic as the frame arrives, bit by bit. It switches between GPUs once every frame with AFR, twice per frame with scissor mode, or many times per frame with tiling. But, with SuperAA, does it receive both copies of the frame at the same time and combine the frames on the fly, pixel by pixel (which I suppose requires a byte or four of RAM)? Or does it receive two full frames into a buffer and then combine them (which would require two frames' worth of RAM)? In this case it seems the RAMDAC could run off the compositor's RAM and not be limited by the 16x12@60Hz TMDS link.
The compositing engine does on-the-fly composition (with some limited buffering to cater for transmission latency, I guess). But as Dave said, SuperAA composition is currently done by the master GPU with transmission over PCIe.
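To put rough numbers on the "how much RAM would it need" question (my arithmetic only; the FPGA's actual buffering isn't documented here):

# Buffering needed for full-frame composition vs. on-the-fly composition,
# using 1600x1200 at 32 bits per pixel as an example.
width, height, bytes_per_px = 1600, 1200, 4
full_frame = width * height * bytes_per_px
print(2 * full_frame / 2**20)        # ~14.6 MB to hold two complete frames
print(width * bytes_per_px / 2**10)  # ~6.3 KB for a single line of pixels
# On-the-fly composition only needs on the order of a scanline (or less) to
# absorb transmission skew, which is why the compositor can get by without
# any sizeable RAM of its own.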


As long as the compositing engine is used, CrossFire is limited to the display modes supported by the slave's TMDS output. With transmission over PCIe, you could use any display mode supported by the master card.
 
Brain ... forsaking me....

Xmas said:
Display devices are passive. They just receive image data at a fixed rate of x Hz.
Argh, another 512/8=128! I knew that. I meant to say the "TMDS transmitter/RAMDAC resends" the previous frame, as that's what's in the framebuffer.

As for the rest of my questions: /smacks forehead. I *think* (I've been on a stupid tear lately, so nothing is certain) I finally understand what's been staring me in the face. The limitation is not of framerate (assuming no vsync, we just see partially-updated frames at regular, refresh-rate-dictated intervals) or CRT refresh rate (as there's a 240MHz RAMDAC hanging off the Compositing Engine, as Dave said, which I'm guessing runs independently of the Slave GPU's TMDS transmitter rate), but of resolution! So the complaint is mainly from people who have CRTs capable of higher resolutions than 19x12 (say, 20x15 on 22" CRTs). Man, I've been overcomplicating this from the start.
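To put numbers on that (approximate pixel clocks from commonly quoted timings, nothing from ATI): single-link TMDS is capped at a 165MHz pixel clock, and only the biggest CRT modes blow past it.

# Rough single-link budget check; the clock figures are approximate.
single_link_limit_mhz = 165
modes = {
    "1600x1200@60 (VESA)":          162,  # just fits
    "1920x1200@60 (reduced blank)": 154,  # fits only with reduced blanking
    "2048x1536@60 (GTF approx.)":   267,  # far beyond a single link
}
for name, clock in modes.items():
    print(name, clock, "MHz", "OK" if clock <= single_link_limit_mhz else "too fast")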

Thanks Xmas, Chal, and whoever else took the time to answer my silly questions.

Two more (hopefully less silly) questions, tho:

1) OK, DVI-spec TMDS, and thus the CE, is still limited to ~60Hz, or ~60 images sent per second, at high res (16x12, 19x12). AT showed 136fps at 16x12 in HL2. That means the frame buffer is written more than twice per screen refresh. Does this mean some of what's been rendered will never be seen?

2) If SuperAA is run via PCIe, is it technically limited to 165MHz (19x12@60Hz), or is ATI using more than DVI-spec bandwidth b/w GPUs?
 
Pete said:
Two more (hopefully less silly) questions, tho:

1) OK, DVI-spec TMDS, and thus the CE, is still limited to ~60Hz, or ~60 images sent per second, at high res (16x12, 19x12). AT showed 136fps at 16x12 in HL2. That means the frame buffer is written more than twice per screen refresh. Does this mean some of what's been rendered will never be seen?

2) If SuperAA is run via PCIe, is it technically limited to 165MHz (19x12@60Hz), or is ATI using more than DVI-spec bandwidth b/w GPUs?

1). Yes, since display refresh doesn't affect the frames drawn (they don't have to be displayed). Think VSync: you want > refresh Hz to maintain it without stepping down in performance, but those frames aren't shown on your monitor. Same thing with Crossfire (and any 3D, really).
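The arithmetic behind that, using the AT figure quoted above:

fps, refresh_hz = 136, 60  # AnandTech's HL2 number vs. a 60Hz scanout
print(fps / refresh_hz)    # ~2.27 frames rendered per refresh
# Without VSync each scanned-out image is stitched from whichever frames were
# current as the scanout passed, so much of what was rendered is never displayed.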

2). There's more bandwidth available across the PCIe bus, but latency is higher than squirting frame data over the cable.
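Rough bandwidth numbers for question 2 (my arithmetic; the per-lane PCIe rate is the standard first-generation figure, not something stated here):

tmds_single_link_bps = 165e6 * 24      # 165MHz pixel clock * 24 bits per pixel
pcie_x16_gen1_bps    = 16 * 250e6 * 8  # 16 lanes * 250 MB/s per lane, each way
print(tmds_single_link_bps / 1e9)      # ~3.96 Gbit/s over the DVI dongle
print(pcie_x16_gen1_bps / 1e9)         # ~32 Gbit/s across the PCIe bus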
 
Tweaker said:
why are you surprised?

Well I thought even Nvidia was of the opinion that the Inq was run by clowns. Didn't the Inq report that Nvidia didn't "like" them or something like that?
 
Wow, it's amazing how far competitors are willing to go... I really think they should chill out; that was very childish, worse than Sassen's benchmarks!

All this pre-launch hysteria from the ATI haters and childish attempts to discredit them indicates only one thing: they are really afraid.
 
trinibwoy said:
Well I thought even Nvidia was of the opinion that the Inq was run by clowns. Didn't the Inq report that Nvidia didn't "like" them or something like that?

hehe, they just picked what they need, and for that kind of FUD stuff the Inq is an ideal source :smile:
 
Tweaker said:
hehe, they just picked what they need, and for that kind of FUD stuff the Inq is an ideal source :smile:

Trash tabloid sites will pick up on any bullshit from any "source". R520 has 32 pipelines about as much as it has an internal codename "Fudo", heh.... :rolleyes:
 
Yup, I was in on that call and looked at the stuff they put forth. It did get me thinking, though. After 24+ hours of going over this information, reading and re-reading, as well as talking to some others, my final conclusion is that the 1161 receiver is not the limitation here, which is why I wrote my little article about why we shouldn't worry about ATI using that chip as a receiver to get the DVI input in from the slave card.

The compositing chip with its slow external RAMDAC and its TMDS... that is the issue.

I agree with Dave's points about profiles/AI. It is pretty neat that ATI enables SuperTiling as the default mode for any unspecified game, but then turns on different methods if they prove to be better. NVIDIA is always talking about AFR being the best solution due to the geometry scaling, and I think they have a valid point and I have no argument with that either. I think ATI would probably agree with that also. But for user transparency, I can see how SuperTiling would be the best default way to go. It is probably the least troublesome mode for ATI since it was designed in at the chip level.

Both of them do this stuff, and honestly their marketing departments would be bored to tears if they didn't have dirt to spread on each other. Condone or condemn, I really don't care, as either way solid information does get out every once in a while.

I for one am really looking forward to seeing what the official CrossFire reviews come out as.
 
NVIDIA is always talking about AFR being the best solution due to the geometry scaling, and I think they have a valid point and I have no argument with that either

This is probably because, in my dealings with Nvidia, they are insistent that AFR will be the dominant rendering mode in future titles. Once they worked out the kinks with the AFR2 rendering mode, it has pretty much been the only rendering method used in Nvidia's recent SLI profiles. Yes, there are some older SFR profiles, but Nvidia is aiming at making any future game AFR-compatible.

Regarding AFR2, I also asked them to clarify what exactly it is ((I have some ideas though)), but I couldn't get exact details as it's apparently proprietary. They did say AFR2 is going to be their primary rendering mode in the future, and AFR 1 will likely become obsolete due to AFR2 solving a lot of the early problems with SLI.
 
Dave Baumann said:
http://www.techpowerup.com/reviews/NVIDIA/CrossfireTruth

OK, I guess that explains why these issues are just being raised.

Shocked. Shocked, I. . .nah, I can't even pull it off.

Who do these two think they're fooling? Us, apparently, because there is no way in hell they are fooling each other. Distribute a document like that to more than three people and it ends up in the other guy's office within an hour anyway. :rolleyes:
 