AMD: R9xx Speculation

I just took the 20 fps gap between the 5870 and 5770 and assumed it would be similar on a 5670 compared to the 5770. Perhaps saying unplayable wasn't exactly true, but you sure wouldn't be happy with it.
 
jimbo75: HD5800 has a dual rasterizer, which may somewhat increase tessellation efficiency. But HD5700 and 5600 both have one rasterizer - if the performance is really limited by the GPU's front-end (tessellation/setup/rasterizer), their framerates shouldn't be much different.

Yes, true enough, but the 5670 is also clocked in between, at 775 MHz.

Look at the gap between the 5870 and 5850 (big), then the gap between the 5850 and 5830 (small). It looks more like clock speed is the limit, but you can never tell with benchmarks.

[Attached benchmark chart: IMG0029781.gif]


http://img.hexus.net/v2/graphics_cards/nvidia/Fermi/a/N174/Mod.jpg

http://img.hexus.net/v2/graphics_cards/nvidia/Fermi/a/N174/Norm.jpg

http://img.hexus.net/v2/graphics_cards/nvidia/Fermi/a/N174/Xtreme.jpg

If you put each of those in a tab you can see the difference - and that's at Extreme in a tessellation benchmark. I doubt anybody would notice the difference during gameplay in HAWX 2.

If AMD can reduce the tessellation levels in drivers and show how little IQ is lost, they will surely be able to hugely increase fps relative to Fermi in HAWX 2.
 
@Simby

Well, if there is a noticeable IQ loss I suppose it's fair enough. We'll see on that.

For me, this is why Nvidia rounded on the FP16/FP11 thing at the GTS 450's release. The talk of ATI's "cheat" and some sites testing Catalyst AI - this is linked to it too.

Nvidia is trying to make tiny IQ differences that result in large fps gains seem like "cheating", when in actual fact anybody with common sense would take the "cheat" fps over the negligible IQ difference.
 
Nobody is complaining about Nvidia pushing tessellation as such; AMD and the rest point out that it's absurd to push tessellation when 75% of it is discarded, as in HAWX.

You've gone back to square one. Read this discussion again and then get back to me.

And the fact is that it works very well: as far as I'm aware, all currently available DX11 games feature tessellation, and they all run very well on Radeons. Now I guess people who spend all day playing Heaven and Stone Giant will be better off with Fermi, but I'm sure NVIDIA has more than enough GTX 470s/480s in stock for them.

Indeed, even more reason for them to avoid talking about nVidia's tessellation advantage (real or imagined).

On the other side, the HAWX 2 benchmark results are still high for AMD cards. So it's not a situation where the game would be unplayable on AMD cards because of Nvidia. ;)
From hardware.fr
http://www.hardware.fr/articles/804-17/dossier-amd-radeon-hd-6870-6850.html


If the original is anything to go by, that advantage is not just due to tessellation. The first game favoured nVidia's hardware too.
 
Well, if there is a noticeable IQ loss I suppose it's fair enough. We'll see on that.

For me, this is why Nvidia rounded on the FP16/FP11 thing at the GTS 450's release. The talk of ATI's "cheat" and some sites testing Catalyst AI - this is linked to it too.

Nvidia is trying to make tiny IQ differences that result in large fps gains seem like "cheating", when in actual fact anybody with common sense would take the "cheat" fps over the negligible IQ difference.

I wonder if it's possible to simply discard, at the driver level, everything above what AMD considers a reasonable tessellation level. I'm sure everyone would be yelling "cheating", but if there's not much difference in IQ it should prove AMD's point.

I have a feeling it's not possible though.
 
Couldn't they just limit the maximum tessellation factor to a desired value at the driver level, on a per-.exe basis? (So everything asking for a higher tessellation factor would just get the set maximum instead of what the DX API allows.)
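
Something like this at the profile level, presumably - a minimal sketch of the idea, not how Catalyst actually works, and the game name and cap value are just made-up examples:

Code:
#include <algorithm>
#include <map>
#include <string>

// Hypothetical per-application limits, e.g. loaded from a driver profile.
static const std::map<std::string, float> kTessCaps = {
    {"HAWX2.exe", 16.0f},   // assumed value, for illustration only
};

// Clamp whatever tessellation factor the application asked for to the
// profile's cap; applications without a profile pass through untouched.
float ClampTessFactor(const std::string& exeName, float requestedFactor) {
    auto it = kTessCaps.find(exeName);
    if (it == kTessCaps.end())
        return requestedFactor;
    return std::min(requestedFactor, it->second);
}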
 
jimbo75: HD5800 has a dual rasterizer, which may somewhat increase tessellation efficiency.
The dual rasterizer makes no difference in tessellation performance. No test shows any Evergreen chip generating more than one tessellated triangle every three clocks.
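
As a back-of-the-envelope check (taking the 5870's 850 MHz core clock), one triangle every three clocks caps Evergreen's tessellated output at roughly 283 Mtris/s:

Code:
#include <cstdio>

int main() {
    const double clock_hz = 850e6;      // HD 5870 core clock
    const double clocks_per_tri = 3.0;  // the 1-in-3 rate mentioned above
    std::printf("peak: ~%.0f Mtris/s\n", clock_hz / clocks_per_tri / 1e6);
    return 0;
}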

I'm pretty sure that the dual rasterizer is there for easier data transport across the chip. Having a single rasterizer and transferring pixel data is probably more costly.
 
Even after AMD somehow patches HAWX 2, it seems that a 6850 could still beat a 5870 ;).
If future tessellated games reflect the order of the AMD cards in this benchmark, then the 6850 is a faster card than the 5870 and the new naming scheme fits quite well. :p
 
Nvidia also recommended that developers keep a minimum of 8 pixels per triangle if possible. They are just at the point where they can push it further with a smaller penalty than AMD and use it for marketing purposes. :rolleyes:
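
To put that guideline in perspective (at an assumed 1920x1080, purely for illustration), 8 pixels per triangle works out to roughly 260k visible triangles per frame:

Code:
#include <cstdio>

int main() {
    const double pixels = 1920.0 * 1080.0;  // assumed resolution, ~2.07M pixels
    const double min_px_per_tri = 8.0;      // NVIDIA's suggested floor
    std::printf("~%.0fk visible triangles per frame at most\n",
                pixels / min_px_per_tri / 1000.0);
    return 0;
}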

On the other side, the HAWX 2 benchmark results are still high for AMD cards. So it's not a situation where the game would be unplayable on AMD cards because of Nvidia. ;)
From hardware.fr
http://www.hardware.fr/articles/804-17/dossier-amd-radeon-hd-6870-6850.html

How are these results high for AMD cards? To me it looks like the difference between AMD and nVidia cards in HAWX 2 cannot be explained by pure tessellation power.

See this for comparison

Or is Heaven not representative of tessellation capability at all?
 
I think you and others are missing the point, intentionally or otherwise.
Er, actually, I'm not. I'm not in agreement with AMD's stance till I see some evidence. I think tessellation in Evergreen is a kludge and they better fix it in Cayman.

It's not that AMD hasn't been talking about tessellation. I must be the only person here who sees the irony in a company pitching a 7th generation feature vs the competition's 1st generation while simultaneously complaining that the competition has "too much" performance. Also, after 7 generations if the best you can come up with is stuff like this you're asking for trouble.
Isn't that a Microsoft sample, part of the SDK? Like the other D3D11 samples shown before Evergreen's launch?

Let me put it this way. If Huddy approached you right now and said "Hey Jawed, nVidia is pushing this over-tessellated crap. What do you recommend we do to show people the right way to do things?"
Apparently they're doing it, with games like Civ 5.

Will you tell him to dust-off "Froblins" - a proprietary DX10.1 implementation that is not only visually uninspired but irrelevant in the DX11 world? Will that be your counter to nVidia's marketing money and aggressive tactics? There is such a thing as being too techie, a little pragmatic thinking goes a long way.
The era of demos for tessellation is over. By several years.

Yes, that's exactly what they need to do. Put the focus on them and what they're doing, not what nVidia is doing TO them. :)
Actually, what AMD's apparently accusing NVidia of doing is unnecessarily hobbling performance for the 80%+ of D3D11 gamers out there. But NVidia hobbling performance and IQ for people who aren't using NVidia is nothing new.

From the presentation that Sontin linked there's no doubt that there's too much tessellation in certain areas of the screen, but the algorithm is clearly much better than the naive rubbish seen in Heaven. I'm not convinced that HAWX2 is intrinsically bad (it's a delicate balance, and very hard to avoid over- and under-tessellation simultaneously), so I'm waiting to see why AMD thinks this game is excessive, quantitatively and qualitatively. There are more advanced techniques, such as making the silhouette the focus of the highest-quality tessellation. In fact NVidia went into this technique in some detail over a year ago. So you have to ask why that isn't in the game.
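
For anyone curious, the silhouette-weighting idea boils down to something like this - a rough CPU-side sketch with made-up min/max factors, not NVidia's actual shader code:

Code:
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Assumes patchNormal and viewDir are normalised.
// facing ~1 means the patch faces the camera head-on; ~0 means it is near
// the silhouette, so it gets the higher tessellation factor.
float SilhouetteTessFactor(const Vec3& patchNormal, const Vec3& viewDir,
                           float minFactor = 1.0f, float maxFactor = 16.0f) {
    float facing = std::fabs(Dot(patchNormal, viewDir));
    float t = 1.0f - facing;
    return minFactor + t * (maxFactor - minFactor);
}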

I also think that Huddy's talking nonsense about the required size of triangles. That's just an excuse; those are triangle sizes that don't disappear. I'm not convinced AMD has an argument. Some would argue that "good enough" is OK, no need to stress the hardware, but the GTS450 seems to be OK.

The stuff about fragment over-shading is very true, but that's because quad-based rendering is too granular (it's a similar problem to dynamic branching incoherence). I suspect ATI's architecture is especially inefficient in the management of quads in hardware threads, so adding to the pain.
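
A made-up example of the quad problem: a small triangle that covers 10 pixels but straddles 6 quads still pays for 24 fragment invocations, i.e. about 2.4x over-shading:

Code:
#include <cstdio>

int main() {
    // Fragments are shaded in 2x2 quads, so a triangle that only partially
    // covers a quad still pays for all four fragments. Numbers are assumed.
    const int covered_pixels = 10;
    const int quads_touched  = 6;
    const double overshade = 4.0 * quads_touched / covered_pixels;
    std::printf("over-shading factor: %.1fx\n", overshade);  // 2.4x here
    return 0;
}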

Evergreen's tessellation performance might have been adequate back in R600's day, but a forward-looking architecture is required - it doesn't scale.

The "price" NVidia's paying for advanced tessellation/setup/rasterisation is not dissimilar to the price AMD paid with out-of-order thread execution in R520 and the fine-grained dynamic branching. That price has to be paid, going forwards.

Tessellation is about rendering the right-sized triangles. They seem right-sized to me in HAWX2, but that needs careful evaluation - and the algorithm may be on the naive side. If the game has a tessellation-quality slider then gamers can choose their trade-off and we can argue about IQ :p
 
Actually, what AMD's apparently accusing NVidia of doing is unnecessarily hobbling performance for the 80%+ of D3D11 gamers out there. But NVidia hobbling performance and IQ for people who aren't using NVidia is nothing new.
Actually, AMD is accusing NVidia of hobbling performance for 100% of the D3D11 gamers out there. NVidia's cards are also running slower than they have to, though it just makes less of a difference to them.

Looking at the dev talk that Sontin pointed to, the tessellation really is quite excessive. There are already too many triangles for the mountains, and they say that they cranked down the tessellation for the audience to see the wireframe. Adaptive tessellation is used, but there's no adaptation for patches that have no chance of generating silhouette triangles (and should thus use non-occlusion parallax mapping).

Still, I do want to see AMD get a higher rate of setup. Cayman will hopefully do the job if that leaked line about "scalability" holds true. Just because HAWX2 has an unnecessarily excessive triangle count doesn't mean there aren't other legit 10M-triangle scenes in games.

At the very least, AMD should be able to cull/clip more than one tri per clock. The hardware cost and complexity of that is trivial, as there are no ordering issues to worry about.
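
To illustrate how trivial the cull test is - it's essentially a 2x2 determinant per triangle (a sketch of the maths, not the actual hardware logic):

Code:
struct Vec2 { float x, y; };

// Signed area of the screen-space triangle: two multiplies and a subtract
// per triangle, so replicating it several times over costs very little.
bool IsCulledOrDegenerate(const Vec2& a, const Vec2& b, const Vec2& c) {
    float area = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    return area <= 0.0f;  // back-facing (for this winding) or zero-area
}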
 
Well, AMD makes more money selling it as a 6800 series card than as a 6700 series card, so I wouldn't think it was a hard decision to rename the cards.

Wait a few weeks, and when the 6970 delivers great performance not many will care at all.
AMD could have made a better naming choice, but honestly most of us probably couldn't come up with a better one, considering:

1. There are more GPUs to fit into the naming scheme, including Fusion taking the bottom. From this perspective AMD made the right choice, even though it leaves little room for Antilles.

2. Naming schemes should be favourably received by buying customers, and 68xx definitely sells better than 67xx - that's the sleazy part. Still, AMD hasn't committed the biggest sin of asking higher prices for Barts; they could easily have asked ~GTX470 money for the 6870 and got away with it, and that would have been wrong.

In the end, we got a very nice upgrade over the 5700 generation and forced Nvidia to make significant price cuts - a win-win for customers. Outside of the minor naming controversy, it was a great launch.

P.S. Charlie keeps writing as if the 6800 is the successor to the 5800; it's not. He also demands that Barts be priced lower :oops: :p
 