AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks: 1 vote (0.6%)
  • Within a month: 5 votes (3.2%)
  • Within a couple of months: 28 votes (18.1%)
  • Very late this year: 52 votes (33.5%)
  • Not until next year: 69 votes (44.5%)

Total voters: 155. Poll closed.
I really, really hope ATI allows us to have full high-quality AF without any under-filtering, and without forcing us to disable AI. It's pointless having the best-quality AF if we have to disable bugfixes/optimizations to get it.

When we're paying for the top-end card we should be able to get the maximum image quality possible, not under-filtered textures that flicker because of it.

It looks like it's still going to be worse than Nvidia though; I wish ATI would fix this.

http://www.hardware-infos.com/news.php?news=3208&sprache=1

After we reported that the anisotropic filter of the HD 5000 series appears to work fully angle-independently - although the slight angle dependency of the HD 2000 to HD 4000 series did not differ much from the rounder, cleaner pattern of the HD 5000 series - we now hear from other sources that the main problem has not been addressed.

According to our information, the filter still flickers more strongly than on current Geforce graphics cards, which points to under-sampling. ATI saves a couple of texture-unit cycles this way and gains performance, but image quality suffers for it.

Whether quality equivalent to A.I. disabled will at least be reachable through other options is not yet certain.

In any case it is already disappointing that AMD has not learned from the past and, as so often, withholds perfect image quality from buyers of a high-end graphics card even though performance is available in abundance.
Almost cynically, Crossfire no longer works with A.I. deactivated - and thus with still-acceptable filtering quality - so buyers of multi-GPU systems or X2 cards remain locked out entirely.
 
Who would have you believe the filtering is not up to nVidia's standard?
KonKorT! :LOL:

Fuad with his usual spin and mistakes:

On the HD5850:
It works at 725MHz core and 1000MHz GDDR5 memory. The chips are the same as on the Radeon HD 5870, 32Mx32 and the memory bus is 256-bit. This part of the spec didn’t change from 4870, RV770 times.
The performance of the card is definitely good compared to both Nvidia GT200-based offerings or Radeon 48x0 cards, but again, with no DirectX 11 games around, it might not be a bad idea to see what the other graphics company has in store for DirectX 11.
Yep, as neutral as Charlie. :LOL:
 
Not sure if this has been posted (didn't see it). Looks like a Gigabyte marketing presentation about the 5870. One thing that I noticed, which I haven't seen before, is the mention of HDR texture compression...

EDIT: There is also one for the 5850 here (mentions 1440 stream processors).
 
The AF wheel is only one measuring stick. It's also incomplete without extensive analysis...just a static shot won't tell you much.
Exactly. Far too many people have this wrong assumption.

There is the gradient calculation, and there is the filtering part that takes the samples along the longer gradient. The former has been good enough since the HD 2000 series, but the sampling itself is not done very well.

On NV, in contrast, there is no visible difference between my reference filter, which takes all samples, and NV's HQ filter, so I assume they learned their lesson after NV40/G70.

I hope ATI will come to their senses in the future as well. I think this should be a DirectX requirement. We have full IEEE754 precision in the ALUs with D3D11, but they can do what they want with texture filtering.
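To make the "reference filter that takes all samples" idea concrete, here is a rough sketch of what such a brute-force software filter could look like (my own illustration in Python; the function name, parameters and grid density are made up, and real hardware obviously works very differently):

```python
import numpy as np

def reference_aniso_sample(texture, u, v, dudx, dvdx, dudy, dvdy, grid=32):
    """Brute-force 'take all samples' reference filter (illustrative only):
    average the texture over the pixel's parallelogram footprint in texture
    space, defined by the screen-space derivatives of (u, v).
    'texture' is a 2D float array; coordinates are normalized [0, 1)."""
    h, w = texture.shape[:2]
    # Dense sub-pixel offsets covering the pixel from -0.5 to +0.5.
    offsets = (np.arange(grid) + 0.5) / grid - 0.5
    acc = 0.0
    for sy in offsets:
        for sx in offsets:
            su = u + sx * dudx + sy * dudy
            sv = v + sx * dvdx + sy * dvdy
            # Nearest-texel lookup with wrap addressing (bilinear weighting
            # omitted to keep the sketch short).
            tx = int(np.floor(su * w)) % w
            ty = int(np.floor(sv * h)) % h
            acc += texture[ty, tx]
    return acc / (grid * grid)
```

A hardware filter only gets a handful of (bi/trilinear) taps along the line of anisotropy, so the interesting question is how well those few taps approximate something like the above.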
 
http://www.scribd.com/doc/20009711/Gigabyte-GVR587D51GDB-graphics-card

Powerpoint presentation for the HD5870 from Gigabyte. ;)

Edit: another excerpt from RV870 presentation, too...


[Image attachment: gpuh.jpg]
 
I hope ATI will come to their senses in the future as well. I think this should be a DirectX requirement. We have full IEEE754 precision in the ALUs with D3D11, but they can do what they want with texture filtering.

Can you please qualify the impact of this alleged subpar filtering? What would be the visible result?
 
KonKorT! :LOL:

Fuad with his usual spin and mistakes:

On the HD5850: (quoted text snipped, see above)

Yep, as neutral as Charlie. :LOL:

Ehm, suggesting you could wait and see what the competition has in store, isn't that generally sound advice?

Seems pretty harmless compared to the usual Charlie diatribes.
 
Ehm, suggesting you could wait and see what the competition has in store, isn't that generally sound advice?

Seems pretty harmless compared to the usual Charlie diatribes.
Yes, when was the last time he suggested that at an NV launch? Hmm. Seems the traffic only goes one way; not the least bit surprised.
 
but they can do what they want with texture filtering.

RefRast for DX9 is pretty horrible though so I'm not sure you want that. Also, IMHO, this is more likely to be a software limitation rather than a hardware one. It does not mean it'll be solved tomorrow, or ever (I have no idea), but it just points the finger in the right direction.
 
I've said this before, but I think we should really have a reference rasterizer that simply uses a ridiculous supersampling ratio (with a Gaussian averaging filter) and no texture filtering at all (no mipmapping either). Do an MSE comparison against that to see which does better (or SSIM, or A/B testing). It would provide a better way to judge all the exotic AA methods too.
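For what it's worth, here's a minimal sketch of that comparison step, assuming you already have a render produced at N× resolution with no texture filtering (the function names and the per-block Gaussian weighting are my own simplifications, not anyone's actual tool):

```python
import numpy as np

def gaussian_downsample(img, factor, sigma=None):
    """Collapse a supersampled render (H*factor x W*factor, float) to the
    target size using a Gaussian-weighted average inside each output pixel.
    Real resampling would use overlapping footprints; this per-block version
    keeps the sketch short."""
    sigma = sigma if sigma is not None else factor / 2.0
    r = np.arange(factor) - (factor - 1) / 2.0
    k = np.exp(-(r ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()
    h, w = img.shape[0] // factor, img.shape[1] // factor
    blocks = img[:h * factor, :w * factor].reshape(h, factor, w, factor)
    return np.einsum('hawb,a,b->hw', blocks, k, k)

def mse(reference, test):
    """Mean squared error between the downsampled reference and a GPU image."""
    ref = np.asarray(reference, dtype=np.float64)
    out = np.asarray(test, dtype=np.float64)
    return np.mean((ref - out) ** 2)
```

SSIM could be swapped in for MSE easily enough (e.g. scikit-image ships an implementation); the point is just that every card and every AA/AF mode gets scored against the same over-sampled ground truth.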
 
So, how many samples does it use at say 45 degrees?
The number of samples does not depend on the angle, but on the distortion of the texture, which is re-evaluated for each sample.

Look at this diagram:
[Image attachment: cfdiagram.png]


The parallelogram is defined by the rate of change of the texture coordinates in the X and Y directions in screen space.

For a perfect line of anisotropy the GPU then needs to calculate the major and minor axes of the ellipse enclosed by it. The major axis is the desired line of anisotropy (LOA).

The level of anisotropy is then determined by the quotient of the lengths of these two axes, clamped to the AF level the app requested (16:1 maximum).

After that the GPU should take as many samples along the major axis as necessary to satisfy the Nyquist-Shannon sampling theorem. But ATI doesn't.
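For illustration, here's one plausible way to write that setup down in code (my own formulation: the singular values of the 2x2 derivative matrix are used as the ellipse's semi-axes, and names like aniso_setup are made up; actual hardware will differ in the details):

```python
import numpy as np

def aniso_setup(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Sketch of the footprint maths described above (a textbook-style
    formulation, not a claim about any specific GPU). The derivatives are the
    per-pixel rate of change of the texture coordinates, in texel units."""
    # The screen-space derivatives form a 2x2 Jacobian; its singular values
    # are the semi-axes of the ellipse enclosed by the footprint parallelogram.
    jac = np.array([[dudx, dudy],
                    [dvdx, dvdy]], dtype=np.float64)
    major, minor = np.linalg.svd(jac, compute_uv=False)  # sorted, major >= minor
    minor = max(minor, 1e-12)                            # guard degenerate footprints

    # Level of anisotropy = major/minor, clamped to what the app asked for.
    ratio = min(major / minor, float(max_aniso))

    # Nyquist: roughly one (bi/trilinearly filtered) tap per texel step along
    # the major axis, i.e. about ceil(ratio) probes.
    num_samples = int(np.ceil(ratio))

    # Mip LOD chosen from the footprint width per tap; when the ratio is
    # clamped this widens (blurs) instead of aliasing.
    lod = np.log2(max(major / ratio, 1e-12))
    return ratio, num_samples, lod
```

The complaint in the thread is essentially that the last step is where corners get cut: the ratio/LOD setup looks fine, but fewer probes are taken along the major axis than Nyquist would ask for, which shows up as shimmering in motion rather than in a static AF-wheel shot.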

RefRast for DX9 is pretty horrible though so I'm not sure you want that. Also, IMHO, this is more likely to be a software limitation rather than a hardware one. It does not mean it'll be solved tomorrow, or ever (I have no idea), but it just points the finger in the right direction.
Even with A.I. disabled ATI is still undersampling. I don't think the hardware is even capable of filtering perfectly. I think they do this because it lets them save on buffer sizes, or because the maximum latency goes down, or something like that.
 
Well, what I find funny is that the review threads are typically only three or four pages long. After all the hundreds of posts of speculation, when the real thing arrives it's invariably a lot less exciting.

Just like sex. You spend all that time building it up only to blow your load in five minutes. At least it makes for comfortable sleep.
 
After that the GPU should take as many samples along the major axis as necessary to satisfy the Nyquist-Shannon sampling theorem. But ATI doesn't.
So, how many? (At a surface angled above max. anisotropy and at 45 degrees, which is nastiest for memory access.)

Or to put it another way, how many samples do they skip?
 
Just like sex. You spend all that time building it up only to blow your load in five minutes. At least it makes for comfortable sleep.

Solution: don't blow your load that easily. Err, I mean reviewers should write better reviews. :)
 
After that the GPU should take as many samples along the major axis as necessary to satisfy the Nyquist-Shannon sampling theorem. But ATI doesn't.

Thanks for the explanation. So why then does the flower show a perfect pattern if they're undersampling? Is it because the angles used by that particular pattern aren't affected?
 