Pete said:Ail, radeonic was laughing at the smackdown you gave my answer.
Wasn't my intention; I felt dumb for asking over and over again. Obviously I didn't understand why he was laughing, either.
geo said:The implication is that the "ideal" mode has only been selected for the ones with profiles. One hopes that the default mode (i.e. the great mass) is a max-compatibility mode of something close to 100% reliability, rather than a max-performance mode that may or may not work. Then you can fiddle from there on those non-profiled apps. At least that's what I'm interpreting.
Ailuros said:You obviously can enable Super-AA on that, but apart from that nothing else. What, though, if an older game actually benefits quite a lot from AFR yet doesn't scale one bit in the default mode?
If I had the feeling that keeping this 100% transparent to the user would mean endless complications, I probably wouldn't even ask; but this sounds so ridiculously simple to unlock. At the very least it can't hurt ATI or its CF platforms; rather the contrary.
Dave Baumann said:This is what NVIDIA were trying to impress on people at their editors' day, except I'm not sure people had much buy-in to it. The very fact that there are 3 different advertised rendering modes and only two APIs should tell everyone that profiling in some cases is a requirement; otherwise there frankly wouldn't be any point in providing 3 different modes.
NVIDIA appear to be stuck on the issue of profiles/no-profiles, but what ATI were trying to impress is that the default for Crossfire will be on, whereas the default for SLI when it was initially launched was off unless there was a game profile. Exactly how much of an issue that is by the time Crossfire is actually released is an entirely different question, but the argument over profiling is a bit of a red herring IMO.
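To make the profile-versus-default argument concrete, here is a minimal sketch of how a driver might pick a rendering mode. The mode names are the real CrossFire ones, but the profile table, function name, and fallback policy are my own assumptions for illustration, not ATI's or NVIDIA's actual implementation.

```python
# Hypothetical illustration of mode selection: profiled games get their
# "ideal" mode, everything else falls back to a maximum-compatibility
# default that is on by default, as described for Crossfire above.
from typing import Optional

KNOWN_PROFILES = {            # assumed example entries, not a real profile list
    "hl2.exe": "AFR",
    "doom3.exe": "SuperTiling",
    "farcry.exe": "Scissor",  # i.e. split-frame rendering
}

def pick_render_mode(executable: str, user_override: Optional[str] = None) -> str:
    """Return the multi-GPU rendering mode for a given game executable."""
    if user_override:                     # the user "fiddles from there"
        return user_override
    if executable in KNOWN_PROFILES:      # profiled: use the tested ideal mode
        return KNOWN_PROFILES[executable]
    return "SuperTiling"                  # assumed max-compatibility default

print(pick_render_mode("hl2.exe"))              # -> AFR (from profile)
print(pick_render_mode("oldgame.exe"))          # -> SuperTiling (safe default)
print(pick_render_mode("oldgame.exe", "AFR"))   # -> AFR (manual override)
```

The only point of the sketch is the fallback order: manual override, then profile, then an always-on compatible default.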
Chalnoth said:Ah, just consider that the slave video card will have no idea that it's running in tandem with a second. It simply won't render to the entire screen (SFR/supertiling), or will leave every other frame blank (AFR), or will output a normal frame with a custom pixel center (SuperAA). But in any case, it will think it's rendering directly to a monitor that is set at a particular refresh rate.
Pete said:Chal, forgive my density, but how then does the compositor handle frame rates lower than the 60Hz refresh rate: by repeating the previous frame until it has enough data for a new one?
Xmas said:Display devices are passive. They just receive image data at a fixed rate of x Hz. There is no difference between having a CRT or an LCD attached to the graphics card, except for maximum resolution/refresh rates. The compositing engine has no concept of fps, because it doesn't receive frames, but output images from both chips/cards. Therefore it does not limit fps at all, only refresh rate.
Pete said:Thanks, Chal. That's clear to me, but my focus is more on what happens between the compositor and a CRT. If the framerate is below the refresh rate--say, 40fps at 60Hz--then a CRT simply resends the previous frame if it doesn't get a new one before the next refresh, yes (and an LCD updates nothing)? Couldn't this same principle mean CRTs can be driven above the compositor's apparent 60Hz/fps limitation as dictated by its TMDS transmitters, or does the compositor (and in turn the RAMDAC) do its job on the fly (no intermediate stage, aka memory space)?
The compositing engine does on-the-fly composition (with some limited buffering to cater for transmission latency, I guess). But as Dave said, SuperAA composition is currently done by the master GPU with transmission over PCIe.
I understand now why you said it was crucial whether or not the compositor had any RAM of its own. I don't fully understand how SuperAA works without some RAM on the compositor. I can see how the compositor can combine scissor or tiling modes, as nothing necessarily overlaps, so it just directs traffic as the frame arrives, bit by bit. It switches between GPUs once every frame with AFR, twice per frame with scissor mode, or many times per frame with tiling. But with SuperAA, does it receive both copies of the frame at the same time and combine them on the fly, pixel by pixel (which I suppose requires a byte or four of RAM)? Or does it receive two full frames into a buffer and then combine them (which would require two frames' worth of RAM)? In that case it seems the RAMDAC could run off the compositor's RAM and not be limited by the 16x12@60Hz TMDS link.
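As an illustration of the first option Pete raises (combining on the fly, pixel by pixel), here is a minimal sketch. It is not ATI's actual compositing-engine design; it only shows that per-pixel blending of two synchronized streams needs roughly a pixel's worth of storage rather than two full frame buffers.

```python
# Illustration only: average two synchronized pixel streams as they arrive.
# Nothing beyond the current pixel from each stream is ever held.
from typing import Iterable, Iterator, Tuple

Pixel = Tuple[int, int, int]  # 8-bit R, G, B

def blend_streams(master: Iterable[Pixel], slave: Iterable[Pixel]) -> Iterator[Pixel]:
    """Blend two pixel streams on the fly, one pixel at a time."""
    for (r0, g0, b0), (r1, g1, b1) in zip(master, slave):
        yield ((r0 + r1) // 2, (g0 + g1) // 2, (b0 + b1) // 2)

# Toy 2-pixel "frames", as if each GPU rendered with a different sample centre:
frame_a = [(255, 0, 0), (0, 0, 0)]
frame_b = [(0, 0, 255), (32, 32, 32)]
print(list(blend_streams(frame_a, frame_b)))  # [(127, 0, 127), (16, 16, 16)]
```

The other option Pete mentions, buffering two complete frames before combining them, would need on the order of 1600x1200x4 bytes (roughly 7.3 MB) per frame at 16x12 32-bit, so the two approaches have very different RAM requirements on the compositor.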
Argh, another 512/8=128! I knew that. I meant to say the "TMDS transmitter/RAMDAC resends" the previous frame, as that's what's in the framebuffer.
Xmas said:Display devices are passive. They just receive image data at a fixed rate of x Hz.
Pete said:Two more (hopefully less silly) questions, tho:
1) OK, DVI-spec TMDS, and thus the CE, is still limited to ~60Hz, or ~60 images sent per second, at high res (16x12, 19x12). AT showed 136fps at 16x12 in HL2. That means the frame buffer is written more than twice per screen refresh. Does this mean some of what's been rendered will never be seen? (Rough numbers are sketched below.)
2) If SuperAA is run via PCIe, is it technically limited to 165MHz (19x12@60Hz), or is ATI using more than DVI-spec bandwidth between GPUs? (Also sketched below.)
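As a hedged aside on both questions, here is a quick back-of-the-envelope sketch using the published single-link DVI/TMDS and first-generation PCIe x16 figures, with the fps number taken from the AnandTech result quoted above. Nothing here claims to describe what the CrossFire hardware actually does; it is only a sanity check on the numbers being discussed.

```python
# Question 1: how many frames get rendered per screen refresh?
REFRESH_HZ = 60
RENDER_FPS = 136                      # AnandTech's HL2 number at 16x12

frames_per_refresh = RENDER_FPS / REFRESH_HZ
print(f"Frames rendered per screen refresh: {frames_per_refresh:.2f}")
# -> ~2.27, so without vsync more than half of the rendered frames are
#    overwritten in the framebuffer before the TMDS link ever scans them out.

# Question 2: single-link DVI payload vs. first-generation PCIe x16.
SINGLE_LINK_PIXEL_CLOCK = 165e6       # Hz, single-link TMDS maximum
BITS_PER_PIXEL = 24
dvi_bytes_per_s = SINGLE_LINK_PIXEL_CLOCK * BITS_PER_PIXEL / 8
print(f"Single-link TMDS payload: {dvi_bytes_per_s / 1e6:.0f} MB/s")      # ~495 MB/s

PCIE_X16_BYTES_PER_S = 16 * 250e6     # PCIe 1.x: ~250 MB/s per lane, per direction
print(f"PCIe x16 (one direction): {PCIE_X16_BYTES_PER_S / 1e9:.0f} GB/s")  # ~4 GB/s
# A full 16x12 frame is 1600*1200*4 bytes ~= 7.7 MB, so shipping even two
# frames per refresh over PCIe fits comfortably within that budget.
```

On those spec numbers alone, PCIe x16 carries roughly eight times the raw payload of a single TMDS link, which is exactly why question 2 matters: the 165MHz figure only binds if the inter-GPU path really is DVI-spec.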
Dave Baumann said:http://www.techpowerup.com/reviews/NVIDIA/CrossfireTruth
OK, I guess that explains why these issues are just being raised.
trinibwoy said:Hmmm. They quoted the Inquirer!! WTF?
Tweaker said:why are you surprised?
trinibwoy said:Well I thought even Nvidia was of the opinion that the Inq was run by clowns. Didn't the Inq report that Nvidia didn't "like" them or something like that?
trinibwoy said:Hmmm. They quoted the Inquirer!! WTF? And they list "nTune" as something ATi doesn't have.
Tweaker said:hehe, they just picked what they need, and for that kind of FUD stuff the Inq is the ideal source :smile:
NVIDIA is always talking about AFR being the best solution due to the geometry scaling, and I think they have a valid point; I have no argument with that, either.