Once more, with feeling (it's, in a roundabout way, just a couple of posts above yours):
"So what's with the new FSAA option? Will we get it on the 5x00 series?"
http://twitter.com/CatalystMaker/statuses/28416950671
"I find it amusing that as an AMD fan you aren't appalled that in this very thread AMD is telling you that it made more sense to reduce the default filtering quality on 5800 cards because raising the other cards (and the 6800 series) up to the 5800's level would affect more users... I'm still trying to figure out what Dave was thinking by even making that statement in public. I guess the 5800's texture quality was also overboard and unnecessary."
I'm not an AMD fan; in fact I'm using a GeForce as we speak. I find being a fan of any company rather unwise, but maybe it's just me. I'm a fan of innovative technology and the implementations of it that are useful to us :smile:
"So you ignore the big picture (the lack of usefulness of extreme tessellation) and call AMD's approach inadequate, and after I showed that AMD's approach is actually a smart one, you twist it around and blame AMD again for not having more influence over devs?"
Anyway, you are proving my point. AMD is complaining/talking/whining about overblown tessellation workloads. Beyond the irony that they found themselves in this position in the first place, I'm saying that instead of talking they should be out leveraging their seven generations of tessellation hardware experience to influence devs to do it the right way. You know, doing more than just talking.
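For what "the right way" could look like in practice, here's a toy sketch (my own invented constants and function names, not any shipping engine's code): derive the tessellation factor from the projected screen size of a patch edge instead of hard-coding an extreme value. A real engine would do this per edge in the hull shader; this just shows the idea on the CPU.

[code]
#include <algorithm>
#include <cstdio>

// Toy sketch: pick a tessellation factor from how large a patch edge is
// on screen, instead of using a fixed extreme factor. Constants invented.
double AdaptiveTessFactor(double edge_world_len, double distance,
                          double pixels_per_tri, double max_factor) {
    // Crude projection: apparent size shrinks linearly with distance.
    double edge_pixels = 1000.0 * edge_world_len / std::max(distance, 1.0);
    // Aim for roughly pixels_per_tri pixels per generated triangle.
    double factor = edge_pixels / pixels_per_tri;
    return std::clamp(factor, 1.0, max_factor);
}

int main() {
    // A 1 m patch edge, targeting ~8 px per triangle, capped at 64x.
    for (double dist : {1.0, 5.0, 20.0, 100.0})
        printf("distance %6.1f m -> tess factor %5.1f (a fixed 64x wastes work)\n",
               dist, AdaptiveTessFactor(1.0, dist, 8.0, 64.0));
}
[/code]

The point being: at 100 m the factor collapses toward 1x, where a fixed setting would still burn the full 64x on sub-pixel triangles.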
"Nvidia HQ is still superior ..."
And AMD offers it as well, at even better quality than is possible on Nvidia's cards. Plus they expose more settings for it; even Carsten mentioned that.
So what's with the new FSAA option? Will we get it on the 5x00 series? I don't see either of these cards as a replacement for my 5850 at 5870 speeds; if anything it's a side step, and currently I don't need anything more powerful for my gaming. It would be nice if the new FSAA came to the 58x0 series; it seems like a really nice thing for Eyefinity resolutions.
10.10a adds the HQ AF filtering options for Evergreen as well, so the filtering tests need to be done again.
It's not even clear what settings they are using, most likely the defaults, which I don't use anyway. They also say AMD has the only angle-independent filtering option, and, as Dave said, Radeons have high-precision LOD, unavailable to NV users. If you choose the highest-quality settings, you get native texturing, free from all these filtering optimizations.
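To make the "high-precision LOD" point concrete, a toy sketch (my own simplification, with illustrative bit counts, not either vendor's actual hardware):

[code]
#include <cmath>
#include <cstdio>

// Trilinear filtering blends two mip levels using the fractional part of
// the LOD. If hardware stores that fraction with only a few bits, the
// blend weight moves in coarse steps, which can show up as banding at mip
// transitions. Bit counts below are illustrative only.
double QuantizeLod(double lod, int fraction_bits) {
    double scale = double(1 << fraction_bits);
    double frac = lod - std::floor(lod);
    return std::floor(lod) + std::round(frac * scale) / scale;
}

int main() {
    for (double lod = 2.0; lod <= 2.501; lod += 0.1)
        printf("ideal LOD %.2f -> 2-bit fraction %.3f, 8-bit fraction %.4f\n",
               lod, QuantizeLod(lod, 2), QuantizeLod(lod, 8));
}
[/code]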
Nvidia HQ is still superior, and there are still some filtering issues on the HD 6870:
http://www.rage3d.com/reviews/video/amd_hd6870_hd6850_launch_review/index.php?p=6
"They also say AMD has the only angle-independent filtering option, and, as Dave said, Radeons have high-precision LOD ..."
No, angle independence does nothing to improve quality over Nvidia's angle dependence, and the same goes for the higher-precision LOD.
"One cannot assume that they could downclock the cores that much, even if they have four times as many shaders, because the UVD unit may require a minimum frequency to work."
They do indeed downclock the core (to 300 MHz); it's just the memory they don't. But with GDDR5 memory this makes a substantial difference. But yes, I agree with you that AMD simply didn't bother.
As I understand this whole filtering issue, they made the default quality similar to Nvidia's.
"They do indeed downclock the core (to 300 MHz); it's just the memory they don't. But with GDDR5 memory this makes a substantial difference."
Oh. Whoops. They do indeed. And it does seem that this may be about shader power.
"But yes, I agree with you that AMD simply didn't bother."
Umm, not 'simply'. There's always a reason: too much work for little benefit, corner cases where they do need the bandwidth/latency, or some other obscure reason.
"No, angle independence does nothing to improve quality over Nvidia's angle dependence, and the same goes for the higher-precision LOD."
It does, when looking at a scene with high angle dependence. It's something different, but there will be (small) areas where we have higher-quality sampling than NVIDIA.
"That's not the case at all. It previously wasn't as good as Nvidia's, now it's even worse."
You are only looking at a single aspect. In others it is better.
"That's fine and all and would probably be brushed under the rug as usual - I was just surprised Dave was so brazen about it."
It's been known for quite some time; even Carsten pointed it out in his article. Now, though, everything gets the same baseline and everything gets better control over it; NI just gets additional quality as well.
"Oh. Whoops. They do indeed. And it does seem that this may be about shader power."
No, it's about memory support and the GDDR5 memory controller. Switching clocks on pre-GDDR5 memories is easy; GDDR5 is more complicated because it requires link training. RV7xx could not train GDDR5 fast enough to fit within a VBLANK; specific changes were made to subsequent GDDR5 implementations to improve the training time such that it could be done within a VBLANK.
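To make the constraint concrete, a toy model (every number and name below is invented for illustration; this is not AMD's actual firmware logic): the clock switch plus any required training has to fit inside the blanking interval, otherwise the screen glitches and you simply leave the memory clock alone.

[code]
#include <cstdio>

// Toy model: a memory reclock must complete inside the vertical blanking
// interval or the user sees corruption. GDDR5 adds a link-training step;
// how fast that training runs decides whether idle downclocking is viable.
// All timing numbers below are invented.
struct MemoryType {
    const char* name;
    double switch_us;    // base cost of changing the memory clock
    double training_us;  // extra link-training cost (0 for pre-GDDR5)
};

bool FitsInVblank(const MemoryType& m, double vblank_us) {
    return m.switch_us + m.training_us <= vblank_us;
}

int main() {
    const double vblank_us = 450.0;  // rough ballpark for 60 Hz
    const MemoryType parts[] = {
        {"pre-GDDR5 (no training)",            100.0,    0.0},
        {"GDDR5, slow training (RV7xx-like)",  100.0, 2000.0},
        {"GDDR5, fast training (later parts)", 100.0,  300.0},
    };
    for (const auto& m : parts)
        printf("%-36s -> %s\n", m.name,
               FitsInVblank(m, vblank_us) ? "downclock at idle"
                                          : "keep memory clock up");
}
[/code]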
On the HD4550, HD4670, HD5450 they don't.
On the HD4850, HD5670, HD5850 they do.
But there's no way to know whether the reason they don't downclock, for example, the HD4670 to HD4650 levels is that there are situations where the extra shader power is used, or simply that it's not worth it.
"It does, when looking at a scene with high angle dependence. It's something different, but there will be (small) areas where we have higher-quality sampling than NVIDIA."
No sir, it doesn't. I have been looking at an awful lot of AF comparisons, and I have yet to see a difference even between the HD 4000's terrible angle dependence and G80 or GT200 (outside theoretical tests, of course). I would be happy if you could provide a sample to prove your point.
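Since we keep talking past each other, here's a toy model of what angle dependence means (the penalty curve is invented for illustration, not any vendor's hardware): an angle-independent design supports the full anisotropy for any footprint orientation, while an angle-dependent one reduces it for footprints rotated away from the texture axes.

[code]
#include <algorithm>
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979;

// Anisotropy of a pixel's texture-space footprint: ratio of its major to
// minor axis, estimated from the texture-coordinate derivatives.
double FootprintAniso(double dudx, double dvdx, double dudy, double dvdy) {
    double a = std::hypot(dudx, dvdx);
    double b = std::hypot(dudy, dvdy);
    return std::max(a, b) / std::max(std::min(a, b), 1e-6);
}

// Angle-dependent variant: shrink the allowed anisotropy as the footprint's
// major axis rotates away from the texture axes (invented penalty curve).
double AngleDependentAniso(double aniso, double angle_rad, double max_aniso) {
    double off_axis = std::fabs(std::sin(2.0 * angle_rad)); // 0 on-axis, 1 at 45 deg
    double allowed = max_aniso * (1.0 - 0.5 * off_axis);
    return std::min(aniso, allowed);
}

int main() {
    for (int deg = 0; deg <= 90; deg += 15) {
        double a = deg * kPi / 180.0;
        // A 16:1 footprint rotated by 'a' in texture space.
        double dudx = 16 * std::cos(a), dvdx = 16 * std::sin(a);
        double dudy = -std::sin(a),     dvdy = std::cos(a);
        double ideal = FootprintAniso(dudx, dvdx, dudy, dvdy);
        printf("angle %2d deg: ideal %4.1fx, angle-dependent %4.1fx\n",
               deg, ideal, AngleDependentAniso(ideal, a, 16.0));
    }
}
[/code]

At 0 and 90 degrees both designs give the full 16x; the gap only opens around 45 degrees, which is exactly why it's small areas in real scenes and hard to spot in typical screenshots.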
trinibwoy, what do you expect AMD to do other than tell developers and reviewers what they feel is best for gamers and for AMD hardware? Maybe you think AMD should pay off developers to cap tessellation levels, but others have said they don't want AMD to do that, as it increases costs in the end.
"I personally want developers to make the final choice as to what's appropriate for their game without having received money from any IHV."
That's the reason why Ubisoft asked NVIDIA for help. You can watch how NVIDIA and Ubisoft implemented it in HAWX 2 (start at 40:00): http://nvidia.fullviewmedia.com/gtc2010/0920-a5-2157.html
Maybe AMD should do something for gamers, and I mean something other than lowering the default IQ of their filtering.