AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

sheesh that's just pathetic lol, just to get views
For pcper, it's the fact that they miraculously decided to stop using frametime comparisons as soon as AMD solved the issue in their drivers.

As for kitguru, they released quite the hateful video towards AMD last week:
https://forum.beyond3d.com/posts/1854513/


Interesting, yeah I can see why pcper got the shaft, but with Kitguru, outside of the 3xx series, everything about Fiji was just speculation, and I think many people had the same perceptions; not everyone voiced them.
 
I don't think there was ever a visual upside to disabling AF. ;)

True lol (I usually go with 4-8x. Sometimes 16x shimmers too much).

I'd have thought that the huge memory bandwidth would be beneficial to enabling AF as well. Seems odd.

Do those particular games in the list have issues with AF? I vaguely recall Thief used POM, so AF would be borked.
 
Well, going by the numbers from the R9 290X, it's got roughly 50-60% more ALU throughput and ~60% more bandwidth, so it seems it should be fairly similar to what we see now when it comes to bottlenecks, but of course, as you said before, it's hard to compare between the generations this time.
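For a rough sanity check, plugging in the commonly quoted specs (2816 ALUs / ~1000 MHz / 320 GB/s for the 290X, 4096 ALUs / 1050 MHz / 512 GB/s for Fury X; treat these as assumptions rather than measurements):

```python
# Back-of-the-envelope scaling check using the commonly quoted specs.
r9_290x = {"alus": 2816, "clock_mhz": 1000, "bw_gbs": 320}
fury_x  = {"alus": 4096, "clock_mhz": 1050, "bw_gbs": 512}

def tflops(card):
    # 2 FLOPs per ALU per clock (FMA)
    return 2 * card["alus"] * card["clock_mhz"] * 1e6 / 1e12

alu_ratio = tflops(fury_x) / tflops(r9_290x)      # ~1.53x
bw_ratio = fury_x["bw_gbs"] / r9_290x["bw_gbs"]   # 1.60x
print(f"ALU throughput: {alu_ratio:.2f}x, bandwidth: {bw_ratio:.2f}x")
```

So compute and bandwidth scale by roughly the same factor, which is why the FLOPs-per-byte balance ends up looking a lot like Hawaii's.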

You're forgetting the huge difference colour compression makes. I find it hard to believe that Fiji would be at all memory bandwidth limited. If anything it seems pretty ROP limited to me.
 
You're forgetting the huge difference colour compression makes. I find it hard to believe that Fiji would be at all memory bandwidth limited. If anything it seems pretty ROP limited to me.
Maybe the ROPs are on an MC clock domain instead of the shader clock domain? That would explain the lack of significant performance increase when overclocking.
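A toy calculation of what that would mean (the clocks below are illustrative assumptions, not confirmed Fiji behaviour):

```python
# Hypothetical scenario: the 64 ROPs run off a fixed (e.g. memory-controller)
# clock, so only the shader clock moves with an overclock. Illustrative only.
ROPS = 64

def fill_rate_gpix(clock_mhz):
    return ROPS * clock_mhz / 1000.0  # GPix/s

shader_stock, shader_oc = 1050, 1155  # MHz; ~10% core OC (assumed numbers)
rop_clock = 1050                      # assumed fixed ROP-domain clock

print(f"shader throughput gain from OC: {shader_oc / shader_stock - 1:.1%}")
print(f"peak pixel fill rate either way: {fill_rate_gpix(rop_clock):.0f} GPix/s (unchanged)")
```

In that scenario the shader array gets ~10% faster while peak fill rate doesn't move at all, which would match the "overclocking barely helps" impression.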
 
I think it will be interesting to see if reviewers more or less settle on a few general guidelines for reviewing this generation's top cards at 4k. To me, so far, it looks more like 4k gets treated as a curiosity/fringe case, though there are increasing numbers of people handling 4k games with two GPUs, or managing with just one that is struggling. The Titan led the way, but I think its price made it a special case. Now, with the 980 Ti and the Fury cards, 4k can be said to be really entering the consciousness of rank-and-file gamers. At the prices of those cards it's doable for many more people, and for the majority of us more frugal gamers they at least serve as the writing on the wall.
Speaking personally, I don't think I'm going to be all that concerned about the ability to lay on lots of AA. With only a few exceptions, anything the game itself can offer that's cheap and doesn't blur sounds like plenty. Alpha textures can be tough, but the increased resolution should help a lot.

Basically, I just want all the textures/geometry at maximum. Shadows, God rays, special effects, those I'm OK with dialing down in order to get decent framerates. This is going to be very subjective. Which looks better: a panel at its native 2560 x 1440 with everything dialed to the max, or its 3840 x 2160 sibling, which has to trade away some Ultra settings in order to maintain framerates at that resolution?
I've been getting the impression that the trend is heading towards going 4k and accepting the compromises. But I have read some dissenting views from people with very high quality 1440p/1600p panels. They seem genuinely happy with their sweet spot.

I have to wonder if part of the acceptance of the idea of 4k with compromises is that 4k pushes sales, and sales are important to keeping our niche hobby alive and kicking. :)
Ha, and it does remind me of how back in the CRT days, with the Voodoo 5, there was debate about which was more important, monitor resolution, or RGSSAA.
 
The descriptions of some of the features added by Fiji were muddled, so I originally thought the adaptive clocking in Carrizo had made its way to Fiji. Perhaps not, going by later articles.
If it did, we might be entering an era where GPU overclocks have a range of "stable" speeds in which the adaptive clocking automatically degrades performance, similar to how GDDR5's error/retry method creates an illusion of faster headline speeds that wind up getting lost in CRC errors.
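To put rough numbers on that effect, here's a toy model of how retries eat into a headline rate (the retry fractions and overhead are made up purely for illustration):

```python
# Toy model: an unstable GDDR5 overclock can post a higher headline data rate
# while CRC-triggered retransmissions claw the gain back. Numbers are invented.
def effective_bw(headline_gbps, retry_fraction, retry_cost=2.0):
    # every retried burst is transmitted again, plus a bit of protocol overhead
    return headline_gbps / (1.0 + retry_fraction * retry_cost)

print(effective_bw(6.0, 0.00))  # stable 6 Gbps per pin -> 6.0
print(effective_bw(7.0, 0.10))  # "7 Gbps" with 10% retries -> ~5.8
```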

There could be other limits, where the GPU throttles something more if they didn't up the power and temperature limits.

In the case of ROPs, could the shader engines be export-limited? CUs need to negotiate access to an export bus and buffers on the ROP end. If that's per shader engine, and the organization in the block diagram hints at this, then scaling CU count may not scale the data path and capacity for the ROPs.
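Using the public CU and shader-engine counts (44 CUs over 4 SEs for Hawaii, 64 over 4 for Fiji), the contention per export path looks like this, assuming that path itself didn't grow:

```python
# If export bandwidth toward the ROPs is provisioned per shader engine,
# adding CUs dilutes each CU's share of it.
hawaii = {"cus": 44, "shader_engines": 4}
fiji   = {"cus": 64, "shader_engines": 4}

for name, gpu in (("Hawaii", hawaii), ("Fiji", fiji)):
    per_se = gpu["cus"] / gpu["shader_engines"]
    print(f"{name}: {per_se:.0f} CUs sharing each shader engine's export path")
```

That's roughly 45% more CUs per shader engine fighting over the same export bus and buffers, if those resources stayed the same size.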
 
Interesting, yeah I can see why pcper got the shaft, but with Kitguru, outside of the 3xx series, everything about Fiji was just speculation, and I think many people had the same perceptions; not everyone voiced them.
It's one thing to make a personal judgment in a forum after the cards were released, but it's a completely different thing to set AMD on fire before the releases, based solely on speculation.
Regardless, his comments on the console APUs were as disgraceful as they could ever be.


Tessellation performance went through the roof with the R300 series launch driver, also on older Hawaii and Tonga cards. Tessmark performance, OTOH, remained unchanged.
Might be related to this.
Wow, the performance boost in Witcher 3 with Hairworks on is no less than 50%:


[chart: Witcher 3 Hairworks performance comparison]
 
Tessellation performance went through the roof with the R300 series launch driver, also on older Hawaii and Tonga cards. Tessmark performance, OTOH, remained unchanged.
Might be related to this.

http://www.pcgameshardware.de/AMD-Radeon-Grafikkarte-255597/Specials/Radeon-R9-390X-Test-1162303/#a5


Interesting. If there were hardware changes, they should show up in all benchmarks; I wonder what the drivers are doing to get the increased tessellation performance.

It would be interesting to see whether this happens only in Witcher 3 or in other non-GameWorks titles as well.
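One plausible but unconfirmed explanation is a driver-side cap on the tessellation factors applications request, something the Catalyst control panel has exposed as an override for a while. Because tessellation output grows roughly quadratically with the factor, even a moderate cap removes a lot of geometry work; a rough illustration:

```python
def approx_triangles(tess_factor):
    # triangle-domain output grows roughly with the square of the factor;
    # exact counts depend on the partitioning mode, this is just the trend
    return tess_factor ** 2

baseline = approx_triangles(64)
for cap in (64, 32, 16, 8):
    share = approx_triangles(cap) / baseline
    print(f"max factor {cap:2d}: ~{share:6.1%} of the factor-64 geometry load")
```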
 
Does AF behave the same on AMD vs Nvidia? In the old days there was usually a comparison of the AF methods, but we don't get that anymore, probably because most people assume it's a solved problem.
 
It's curious that GCN still exhibits performance issues with forced AF. I thought this particular aspect of the architecture had been solved by both NV and AMD generations ago?
 
Don't know, I actually haven't looked at AA and AF performance between the two IHVs for like 5 years, ever since it was pretty much established that 4x AA and AF had a negligible impact on the performance numbers.
 
I could only find that with recent cards. But even before that, reviews would lump AA together with AF and not separate them. The 38xx was notorious for poor AA performance, but a ComputerBase review had AF as a big, if not the bigger, culprit.

AMD are still behind in texture filtering I suppose.
 
AMD are still behind in texture filtering I suppose.

In terms of full-rate filtering of higher precisions, sure, but the last AF controversy I recall came down to shimmering introduced by how the vendors set their LOD, and in the end it was more a question of whether you wanted blur or shimmer, with Nvidia's choice being a touch further from reference.
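For reference, the blur-versus-shimmer trade basically boils down to where the mip LOD lands; a small sketch of the textbook LOD computation with a bias term (purely illustrative, not either vendor's actual filtering):

```python
import math

def mip_lod(texels_per_pixel, lod_bias=0.0):
    # textbook mip selection: lambda = log2(rho) + bias, where rho is the
    # texel-to-pixel footprint; negative bias = sharper/more shimmer-prone,
    # positive bias = blurrier
    return max(0.0, math.log2(texels_per_pixel) + lod_bias)

footprint = 3.0  # texture minified ~3x in screen space (made-up example)
for bias in (-0.5, 0.0, 0.5):
    print(f"LOD bias {bias:+.1f}: selected mip level = {mip_lod(footprint, bias):.2f}")
```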
 
Looks like it's time for some review sites to pick up AF perf metrics again, after so many years since it was last relevant.
I wonder, come to think of it, how much AF is actually used on the XBO and PS4?
 