AMD: R9xx Speculation

I find it amusing that as an AMD fan you aren't appalled that in this very thread AMD is telling you that it made more sense to reduce the default filtering quality on 5800 cards because raising the other cards (and the 6800 series) up to the 5800's level would affect more users.....I'm still trying to figure out what Dave was thinking by even making that statement in public. :) I guess the 5800's texture quality was also overboard and unnecessary.
I'm not an AMD fan; btw, I'm using a GeForce as we speak. I find being a fan of any company not the wisest approach, but maybe it's just me. I'm a fan of innovative technology and the implementations of it that are useful to us :smile:

As I understand this whole filtering issue, they made the default quality similar to Nvidia's. But it doesn't affect me, even though I'm planning to get a Radeon (probably Cayman based), because I prefer better quality (I always pick HQ settings, whether it's a GeForce or a Radeon), and AMD offers that as well, with even better quality than is possible on Nvidia's cards. Plus they provide more settings for it; even Carsten mentioned it.

Anyway, you are proving my point. AMD is complaining/talking/whining about overblown tessellation workloads. Besides the irony that they found themselves in this position in the first place, I'm saying that instead of talking they should be out leveraging their 7 generations of tessellation hardware experience to influence devs into doing it the right way. You know, doing more than just talking.
So you ignore the big picture (the lack of usefulness of extreme tessellation), blame AMD's approach as inadequate, and after I showed how AMD's approach is actually a smart one, you twist it around and blame AMD again, this time for not having bigger influence over devs? :rolleyes:

In this particular case (HAWX) AMD did provide help, but Ubisoft refused to use a solution that is beneficial to both manufacturers and preferred to get a paycheck (or whatever they are getting) from NV instead. Their choice, and I would rather not have AMD doing the same thing, even if they could afford to. The PC market is shrinking, so back-stabbing and dividing the remaining market isn't the brightest way forward, nor does it help us as end-users.
 
Spoken as though right from the pulpit itself... ironic that someone who "claims" to be all about choice in features, when presented with just that, hops right onto the nV-supplied talking points yet again... really, you are better than that (or at least you're not Xman)
 
I don't remember ever seeing evidence that Evergreen's angle-independence resulted in image quality superior to NVidia's, whereas there is no doubt that the shimmering and visible transition-line problems are solely ATI's.

As for whether the HAWX2 tessellation is excessive, we'll have to wait until AMD produces the hack they've promised (or maybe the full game has the AMD code?) for screenshot/movie comparisons.
 
So what's with the new FSAA option? Will we get it on the 5x00 series? I don't see either of these cards as a replacement for my 5850 at 5870 speeds; if anything it's a side step, and currently I don't need anything more powerful for my gaming. It would be nice if the new FSAA came to the 58x0 series; it seems like a really nice thing for Eyefinity resolutions.


These new 68xx cards are not replacements for 58xx. Those are coming next month in the form of 69xx.
 
10.10a adds HQ AF filtering options for Evergreen as well; the filtering tests need to be done again.
 
Nvidia HQ is still superior...

and there are still some filtering issues on the HD 6870:
http://www.rage3d.com/reviews/video/amd_hd6870_hd6850_launch_review/index.php?p=6
It's not even clear what settings they are using, most likely defaults, which I don't use anyway. They also say AMD has the only angle-independent filtering option, and, as Dave said, Radeons have high-precision LOD, unavailable to NV users. If you choose the highest quality settings, you will get native texturing, free from all these filtering optimizations.

So yes, Radeon owners can get higher quality that is unavailable to GeForce owners, at least for now.
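
To illustrate what the high-precision LOD claim is about, here's a toy sketch (the bit counts are my own illustrative assumptions, not either vendor's published specs): the trilinear blend factor between two mip levels is quantized to some number of fractional LOD bits, and fewer bits mean coarser blend steps, i.e. visible banding at mip transitions.

```python
# Toy sketch: how LOD fraction precision affects trilinear blending.
# The bit counts are illustrative assumptions, not real hardware specs.

def trilinear_weight(lod, frac_bits):
    """Blend factor between mip floor(lod) and floor(lod)+1, quantized
    to frac_bits fractional bits, as fixed-point hardware would do."""
    frac = lod - int(lod)               # fractional part drives the blend
    steps = 1 << frac_bits              # number of representable steps
    return round(frac * steps) / steps  # quantize to that precision

lod = 2.37
print(trilinear_weight(lod, 5))  # coarse: 0.375
print(trilinear_weight(lod, 8))  # finer:  0.37109375
```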
 
All the new settings in CCC unlocked + the old tent AA filters:

[screenshot: 34rx34h.png]


:D
 
They also say AMD has the only angle-independent filtering option, and, as Dave said, Radeons have high-precision LOD, unavailable to NV users. If you choose the highest quality settings, you will get native texturing, free from all these filtering optimizations.
No, angle independence does nothing to improve quality over Nvidia's angle dependence, and the same goes for the higher-precision LOD.
 
One cannot assume that they could downclock the cores that much, even if they have four times as many shaders, because the UVD unit may require a minimum frequency to work.
They do indeed downclock the core (to 300 MHz); it's just the memory they don't. But with GDDR5 memory this makes a substantial difference. But yes, I agree with you that AMD simply didn't bother.
 
As I understand this whole filtering issue, they made the default quality similar to Nvidia's.

That's not the case at all. It previously wasn't as good as Nvidia's; now it's even worse. That's fine and all and would probably be brushed under the rug as usual - I was just surprised Dave was so brazen about it. It's amazing what you can get away with as the "underdog" :)

So you ignore the big picture (the lack of usefulness of extreme tessellation), blame AMD's approach as inadequate

Any approach that is all talk with no results is inadequate. I'm sure even you can see that.
 
They do indeed downclock the core (to 300 MHz); it's just the memory they don't. But with GDDR5 memory this makes a substantial difference.
Oh. Whoops. They indeed do. And it does seem that this may be about shader power.
On the HD4550, HD4670, HD5450 they don't.
On the HD4850, HD5670, HD5850 they do.
But there's no way to know if the reason why they don't downclock, for example, the HD4670 to HD4650 levels is because there are some situations where the extra shader power is used, or because it's not worth it.

But yes, I agree with you that AMD simply didn't bother.
Umm, not 'simply'. There's always a reason. Too much work for little benefit, corner cases where they do need the bandwidth/latency, some other obscure reason.

Anyway, since I don't use postprocessing at all, just UVD, I can probably edit the BIOSes myself.
 
trinibwoy, what do you expect AMD to do other than tell developers and reviewers what they feel is best for gamers and AMD hardware? Maybe you think AMD should pay off developers to cap tessellation levels, but others have said they don't want AMD to do this as it increases costs in the end. I personally want developers to make the final choice as to what's appropriate for their game without having received money from any IHV.

While we're on the subject of tessellation, does anyone know if the HAWX 2 benchmark looks to be representative of gameplay, or is it gratuitous shots of low-flying planes? As long as it represents gameplay or something more intense, it seems AMD and Nvidia hardware will play the game just fine, based on the Hardware.fr FPS results.
 
No, angle independence does nothing to improve quality over Nvidia's angle dependence, and the same goes for the higher-precision LOD.
It does when looking at a scene with high angle dependence. It's something different, but there will be (small) areas where we have higher quality sampling than NVIDIA.

That's not the case at all. It previously wasn't as good as Nvidia's; now it's even worse.
You are only looking at a single aspect. In others it is better.

That's fine and all and would probably be brushed under the rug as usual - I was just surprised Dave was so brazen about it.
It's been known for quite some time. Even Carsten pointed it out in his article. Now, though, everything gets the same baseline and everything gets better control over it; NI just gets additional quality as well.

Oh. Whoops. They indeed do. And it does seem that this may be about shader power.
On the HD4550, HD4670, HD5450 they don't.
On the HD4850, HD5670, HD5850 they do.
But there's no way to know if the reason why they don't downclock, for example, the HD4670 to HD4650 levels is because there are some situations where the extra shader power is used, or because it's not worth it.
No, it's about memory support and the GDDR5 memory controller. Switching clocks on pre-G5 memories is easy; G5, however, is more complicated because it requires link training. RV7xx could not train GDDR5 fast enough to complete within a VBLANK; specific changes were made to subsequent G5 implementations to improve the training time such that it could be done within a VBLANK.
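
For anyone wondering what "within a VBLANK" means in practice, here's a rough back-of-the-envelope sketch of the constraint being described (all timing numbers are my own illustrative guesses, not real silicon specs): a memory clock switch is only invisible if retraining finishes inside the vertical blanking interval.

```python
# Rough sketch of the constraint: a memory clock switch avoids visible
# corruption only if GDDR5 link retraining fits inside the vertical
# blanking interval. All numbers are illustrative, not real specs.

def vblank_duration_us(refresh_hz=60, total_lines=1125, active_lines=1080):
    """Time per frame spent in vertical blanking, in microseconds."""
    line_time_us = 1e6 / (refresh_hz * total_lines)
    return (total_lines - active_lines) * line_time_us

def can_switch_mem_clock(retrain_time_us):
    """True if retraining completes before scanout resumes."""
    return retrain_time_us <= vblank_duration_us()  # ~667 us at 1080p60

# Hypothetical numbers: a part needing ~2 ms to retrain fails the test,
# while one retraining in ~200 us passes.
print(can_switch_mem_clock(2000))  # False: switch would land mid-scanout
print(can_switch_mem_clock(200))   # True: fits within blanking
```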
 
It does when looking at a scene with high angle dependence. It's something different, but there will be (small) areas where we have higher quality sampling than NVIDIA.
No sir, it doesn't. I have been looking at an awful lot of AF comparisons, and I have yet to see a difference even between the HD 4000's terrible angle dependence and G80/GT200, outside of theoretical tests of course. I would be happy if you could provide me with a sample to prove your point.
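
To make the disagreement concrete, here's a toy sketch of what angle-dependent AF does in principle; the clamp formula is invented purely for illustration and is not any vendor's actual hardware behavior. An angle-dependent implementation cuts the effective sample count hardest at diagonal surface angles, which is exactly the flower pattern the theoretical tunnel tests visualize.

```python
import math

# Toy model of angle-dependent anisotropic filtering: the number of AF
# samples is clamped harder at diagonal angles. The clamp formula is an
# invented illustration, not any vendor's actual hardware behavior.

def aniso_samples(requested, surface_angle_deg, angle_dependent):
    if not angle_dependent:
        return requested  # angle-independent: full quality everywhere
    # Penalty is worst at 45 degrees, zero at the 0/90 degree axes.
    penalty = abs(math.sin(math.radians(2 * surface_angle_deg)))
    return max(1, round(requested * (1 - 0.5 * penalty)))

for angle in (0, 22.5, 45, 67.5, 90):
    print(angle, aniso_samples(16, angle, angle_dependent=True))
# 0 and 90 degrees keep all 16 samples; 45 degrees drops to 8 -- the
# difference only shows on surfaces at those in-between angles.
```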
 
trinibwoy, what do you expect AMD to do other than tell developers and reviewers what they feel is best for gamers and AMD hardware? Maybe you think AMD should pay off developers to cap tessellation levels, but others have said they don't want AMD to do this as it increases costs in the end.

Maybe AMD should do something for gamers - and I mean something other than lowering the default IQ of their filtering.

I personally want developers to make the final choice as to what's appropriate for their game without having received money from any IHV.
That's the reason why Ubisoft asked nVidia for help. You can watch how nVidia and Ubisoft implemented it in HAWX 2 (start at 40:00): http://nvidia.fullviewmedia.com/gtc2010/0920-a5-2157.html
AMD is not interested in bringing tessellation to the PC community right now. They don't even have their own tessellation showcase. After CES, I think they immediately stopped their support for it. None of the latest DX11 games (which all have tessellation) are on their DX11 games slide.
The only thing I'm seeing from them is their PR complaining that tessellation is no longer the "biggest hardware feature of DX11".
 