AMD: R9xx Speculation

The GTX 460 is notably faster than the HD 5870, with tessellation off or on, in that Techspot review, and the gap increases with tessellation. If this game is indicative of "good tessellation", then AMD doesn't have a point - not to mention that the HD 5870's performance isn't competitive even at the baseline.

I meant Civ 5 will be a title to look out for in 6870 reviews to see if their tessellation tweak is indeed good enough.
 
Nvidia's tessellation approach is certainly lighter on developers than AMD's. Adaptively tessellating multiple objects while somehow magically hitting AMD's 16-pixel-triangle holy grail is certainly not as easy as Nvidia's "crank that shit up" approach. :LOL:


Edit: Also, the claim that the "brute force approach is wasteful" is strange. Of course it is, but by this logic, tessellation that runs well on AMD cards will fly on Nvidia cards. (Maybe Civ 5 is the first showcase?)
 
Nvidia's tessellation approach is certainly lighter on developers than AMD's. Adaptively tessellating multiple objects while somehow magically hitting AMD's 16-pixel-triangle holy grail is certainly not as easy as Nvidia's "crank that shit up" approach. :LOL:
No sensible developer would "crank that shit up" mindlessly, for the simple reason that it hurts performance without any visual improvement on all cards (Nvidia's included!). Adaptive tessellation is simply the way to go. If developers provided a slider somewhere so the user could select something like the "preferred triangle size after tessellation", it would all be fine.

Btw, afaik the 16 pixels/triangle figure is only the target to shoot for to make better use of Cypress' dual rasterizer (so it is not a complete waste). Only with triangles << 4 pixels does the efficiency really take a dive.
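
To make the "preferred triangle size" idea concrete, here is a minimal sketch of how such a slider value could be turned into a tessellation factor. This is CPU-side illustration only (real engines compute this per patch edge in the hull shader), and targetTrianglePx is the hypothetical slider, not any shipping API:

// Minimal sketch of adaptive tessellation-factor selection (hypothetical,
// CPU-side; real engines do this per patch edge in the hull shader).
#include <algorithm>
#include <cmath>
#include <cstdio>

// Pick a tessellation factor so that, after tessellation, triangles
// cover roughly 'targetTrianglePx' pixels on screen.
float tessFactor(float patchScreenAreaPx,   // projected patch area in pixels
                 float targetTrianglePx,    // the hypothetical "slider" value
                 float maxFactor = 64.0f)   // D3D11 hardware limit
{
    // A factor of f subdivides the patch into roughly f*f triangles
    // (exact counts depend on the partitioning mode), so each triangle
    // covers ~ area / f^2 pixels. Solve for f and clamp to legal range.
    float f = std::sqrt(patchScreenAreaPx / targetTrianglePx);
    return std::clamp(f, 1.0f, maxFactor);
}

int main()
{
    // A patch covering 16k pixels, targeting ~9 px triangles: factor ~42.7
    std::printf("near patch: factor %.1f\n", tessFactor(16384.0f, 9.0f));
    // The same patch far away (256 px) barely needs tessellation: ~5.3
    std::printf("far patch:  factor %.1f\n", tessFactor(256.0f, 9.0f));
    return 0;
}

With a fixed pixel target, the same geometry tessellates heavily up close and barely at all in the distance, which is exactly the adaptive behaviour being argued for here.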
 
HD 6850
[attached: five leaked press photos of the HD 6850 card]
 
Unknown Soldier, those have been posted all over the net since yesterday morning (19.10 @ 03.00 GMT I think it was), when the NDA for press photos and names ended.
 
Nvidia's tessellation approach is certainly lighter on developers than AMD's. Adaptively tessellating multiple objects while somehow magically hitting AMD's 16-pixel-triangle holy grail is certainly not as easy as Nvidia's "crank that shit up" approach. :LOL:

Edit: Also, the claim that the "brute force approach is wasteful" is strange. Of course it is, but by this logic, tessellation that runs well on AMD cards will fly on Nvidia cards. (Maybe Civ 5 is the first showcase?)

Isn't AMD's "right way" too limiting for devs?
They must use triangles twice the size and stay between tessellation factors 4 and 12 to get a 50%+ increase over Cypress. And in that case it is only capable of 540 Mtris/s max - slightly more than a GTS 450.
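
For scale, a quick back-of-the-envelope on that 540 Mtris/s figure (the 900 MHz clock below is the rumored HD 6870 spec, used as an assumption purely for illustration):

// Back-of-the-envelope triangle-throughput arithmetic (illustrative only;
// the 540 Mtris/s figure is the claim from the post above, and the
// 900 MHz Barts clock is a rumored spec, not confirmed).
#include <cstdio>

int main()
{
    const double mtrisPerSec = 540e6;   // claimed peak in the factor 4-12 window
    const double clockHz     = 900e6;   // rumored HD 6870 core clock

    // Triangles per clock: well under the 1 tri/clock a setup engine can peak at.
    std::printf("tris/clock: %.2f\n", mtrisPerSec / clockHz);            // ~0.60

    // Per-frame budget at 60 fps: how many post-tessellation triangles fit.
    std::printf("tris/frame @60fps: %.1f M\n", mtrisPerSec / 60 / 1e6);  // ~9.0 M
    return 0;
}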
 
One more thing: are MLAA and improved AF something that will be found in CCC for both the 5000 and 6000 series, or is that hardware-specific to the 6000 series?

I can only say that AA options have changed.
 
Isn't AMD's "right way" too limiting for devs?
They must use triangles twice the size and stay between tessellation factors 4 and 12 to get a 50%+ increase over Cypress. And in that case it is only capable of 540 Mtris/s max - slightly more than a GTS 450.

Well, if so, then AMD is limiting tessellation and game development on purpose.
 
So imagine the other alternative: NI = Evergreen with a split UTDP and fixed texture filtering...

Which one is less absurd? :smile:
 
Er, unless I'm reading that wrong, did you just add 64 on top of the 5870's count and get the 384 x 5D?

'Cause the HD 4870 was 55nm, whereas the 6870 is 40nm, as is the 5870...

Yes - adding 64. I know the 4870 was 55nm; I just made a rough guess that Barts at 384 x 5D would be about the same die area as the HD 5870.
 
I meant Civ 5 will be a title to look out for in 6870 reviews to see if their tessellation tweak is indeed good enough.
They took a Civ5 screenshot for the background of one of their tessellation slides, so I guess they are confident here.

Regarding optimal tessellation levels, I found this coming from AMD's Richard Huddy (DevRel team):

(...) “These days, the most typical resolution for serious gaming is 1080p. A resolution where you have 1920 dots left to right and 1080 from top to bottom. That gives you around 2 million pixels’ worth of work onto the final screen. However, you actually end up working with a much larger picture, to allow for things like light sources and shadow generators that are off screen in the final image, but which still need to be accounted for. The same goes for situations where something is in front of something else, but you don’t know that at the start, so you end up doing work on pixels that may or may not make it to the final cut”.

“Overall, the polygon [Triangle - Ed] size should be around 8-10 pixels in a good gaming environment”, said Huddy. “You also have to allow for the fact that everyone’s hardware works in quads. Both nVidia and AMD use a 2×2 grid of pixels, which are always processed as a group. To be intelligent, a triangle needs to be more than 4 pixels big for tessellation to make sense”.

Interesting enough, but why are we being told this? “With artificial tests like Stone Giant, which was paid for by nVidia, tessellation can be done down to the single pixel level. Even though that pixel can’t be broken away from the 3 other pixels in its quad. Doing additional processing for each pixel in a group of 4 and then throwing 75% of that work away is just sad”.
http://www.kitguru.net/components/g...e-constant-smell-of-burning-bridges-says-amd/

Notice he mentions an 8-10 pixel size as optimal for a polygon, while 16 is now the size that gives optimal performance on Barts.
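
Huddy's "throwing 75% of that work away" comment follows from quad-based shading: pixels are shaded in fixed 2×2 blocks, so a triangle covering a single pixel still pays for all four lanes of its quad. A rough utilization model, as a deliberate simplification (real coverage depends on triangle shape and screen alignment):

// Rough model of quad-shading efficiency vs. triangle size (a deliberate
// simplification: assumes each covered pixel can drag in up to a full
// 2x2 quad; real numbers depend on triangle shape and screen position).
#include <cstdio>

// Fraction of shading work that lands on visible pixels, for a triangle
// covering 'pixels' pixels spread across 'quads' 2x2 quads.
double quadEfficiency(int pixels, int quads)
{
    return static_cast<double>(pixels) / (4.0 * quads);
}

int main()
{
    // 1-pixel triangle: one quad shaded, 3 of 4 lanes wasted -> Huddy's 75%.
    std::printf("1 px triangle:  %.0f%% useful\n", 100 * quadEfficiency(1, 1));
    // ~16-pixel triangle: edges still straddle quads; say 16 px over 9 quads.
    std::printf("16 px triangle: %.0f%% useful\n", 100 * quadEfficiency(16, 9));
    return 0;
}

By this crude measure a 1-pixel triangle wastes 75% of the shading work, while a ~16-pixel triangle already recovers nearly half of it - which is the gap between the two camps' "optimal" sizes.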
 
Benchmark Wars - HD6800 and H.A.W.X. 2 won't do well, says AMD:
AMD has demonstrated to Ubisoft tessellation performance improvements that benefit all GPUs, but the developer has chosen not to implement them in the preview benchmark. For that reason, we are working on a driver-based solution in time for the final release of the game that improves performance without sacrificing image quality. In the meantime we recommend you hold off using the benchmark as it will not provide a useful measure of performance relative to other DirectX® 11 games using tessellation.

Oh, dear - not again...
 
Nothing surprises me when Ubisoft is involved - from the whole DX10.1 fiasco to HAWX 2 now. Batman: AA and Eidos, same crap.

People often complain that AMD doesn't help devs optimize games for their hardware. It's quite obvious that some devs don't want AMD's help.
 