AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

It's weird how suddenly people feel 4GB isn't enough any more, even without any real proof of games needing more.

Shadows of Mordor and GTA V being able to take over 4GB of VRAM even at 1080p isn't any real proof?
 
It's GCN 1.1, unless AMD are improving GCN by rebranding it. "2nd generation" was also bandied about during the Hawaii launch.

And the R7 370 "rebrand", and for that matter the 380 and 390 "rebrands", apparently support VSR alongside the 290 and 285.

[Image: AMD Virtual Super Resolution slide, May 2015, showing Radeon 300-series support]


Unless AMD are using it to differentiate between the 3xx and 2xx cards (at least the rebranded GCN 1.0 cards, which don't get a mention), it very strongly hints that the new GPUs at least have the hardware scaler like the other GCN 1.1-and-above GPUs.

That's a possible indication of 1.1, but not a necessary one. First, nothing is said about the extent of VSR support (like the Tonga-exclusive 4K VSR), and second, AMD repeatedly touted a second wave of VSR support via Catalyst driver updates. Maybe they just want to create a USP for the more mid-range 300-series cards compared to their predecessors, so they haven't qualified the remaining 200-series yet. All possible.
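For reference, the basic mechanism behind VSR is just rendering at a higher internal resolution and filtering the result down to the display resolution. Below is a minimal sketch of that idea in C++, assuming a plain 2x2 box filter; the actual filter, and whether it runs on the display engine's scaler or elsewhere, is exactly the part AMD hasn't detailed.

```cpp
// Minimal sketch of the idea behind VSR: render at a higher internal
// resolution, then filter down to the display resolution. A plain 2x2
// box filter is assumed here purely for illustration; the real filter
// used by the hardware scaler is not public.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Image {
    int width;
    int height;
    std::vector<uint8_t> rgb; // 3 bytes per pixel, row-major
};

// Downscale by exactly 2x per axis (e.g. 3840x2160 -> 1920x1080).
Image downscale2x(const Image& src) {
    Image dst{src.width / 2, src.height / 2, {}};
    dst.rgb.resize(size_t(dst.width) * dst.height * 3);
    for (int y = 0; y < dst.height; ++y)
        for (int x = 0; x < dst.width; ++x)
            for (int c = 0; c < 3; ++c) {
                int sum = 0; // average the 2x2 block of source pixels
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx)
                        sum += src.rgb[((2 * y + dy) * src.width + (2 * x + dx)) * 3 + c];
                dst.rgb[(size_t(y) * dst.width + x) * 3 + c] = uint8_t(sum / 4);
            }
    return dst;
}

int main() {
    Image hi{4, 4, std::vector<uint8_t>(4 * 4 * 3, 200)}; // "rendered" at 4x4
    Image lo = downscale2x(hi);                           // "displayed" at 2x2
    std::printf("%dx%d -> %dx%d\n", hi.width, hi.height, lo.width, lo.height);
}
```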
 
So 4GB is enough VRAM?

For games released up until now? Yes.
Are the people who spend $900 on a video card expecting to only be able to max out the games that were launched up until that card's release date? No.

Memory footprints for recent games are increasing really fast. GTA V and Shadows of Mordor can already take over 4GB at 1080p, but let's not forget that they're still ports of games that were originally meant to run on 512MB consoles.
 
Shadows of Mordor and GTA V being able to take over 4GB of VRAM even at 1080p isn't any real proof?

For games released up until now? Yes.
Are the people who spend $900 on a video card expecting to only be able to max out the games that were launched up until that card's release date? No.

Memory footprints for recent games are increasing really fast. GTA V and Shadows of Mordor can already take over 4GB at 1080p, but let's not forget that they're still ports of games that were originally meant to run on 512MB consoles.

So you've gone from saying 4GB is proven not to be enough VRAM in GTA V and Shadows of Mordor, to saying 4GB is enough for games released up until now and that of course GTA V and SoM don't need lots of VRAM because they were designed to run on 512MB consoles. Got it.
 
Memory footprints for recent games are increasing really fast. GTA V and Shadows of Mordor can already take over 4GB at 1080p, but let's not forget that they're still ports of games that were originally meant to run on 512MB consoles.
I would not say increasing, as that implies an ongoing process. I'd rather describe it as having "jumped up" after the launch of the PS4/XB1, IMHO mainly due to their larger memories. But I do think that this is a one-time bump for now.
 
Well, at the same time you have some developers who keep an eye on memory usage and some who don't seem to care about optimizing it at all (different engines will also give different results, if one is built to cache in every bit of RAM available).

The Witcher 3 is a good example; it's incredible how low its memory usage is when you look at the results.

Then there's the question of how Fiji handles its memory, the compression algorithm which could well have been optimized since Tonga, etc.
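For what it's worth, the lossless delta colour compression that arrived with Tonga works (at a very high level) by storing one anchor value per tile plus small per-pixel deltas, falling back to raw data when the deltas don't fit. Here's a toy sketch of that idea; tile size, delta encoding and bit packing are all assumptions rather than the actual GCN scheme, and note it saves bandwidth rather than footprint, since the surface still occupies its full size in VRAM.

```cpp
// Toy sketch of delta colour compression: one anchor pixel per tile plus
// fixed-width per-pixel deltas. The real hardware's tile size and packing
// rules are not public; everything below is an illustrative assumption.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

constexpr int kTilePixels = 8 * 8;

// Number of bits needed to store a signed delta (sign bit + magnitude).
int bitsForDelta(int delta) {
    int magnitude = std::abs(delta);
    int bits = 1;
    while (magnitude > 0) { ++bits; magnitude >>= 1; }
    return bits;
}

// Estimated compressed size of one 8x8 tile of 8-bit channel values.
int compressedBits(const uint8_t tile[kTilePixels]) {
    int maxBits = 0;
    for (int i = 1; i < kTilePixels; ++i)
        maxBits = std::max(maxBits, bitsForDelta(int(tile[i]) - int(tile[0])));
    return 8 + (kTilePixels - 1) * maxBits; // anchor + deltas
}

int main() {
    uint8_t gradient[kTilePixels]; // smooth sky-like tile: compresses well
    uint8_t noise[kTilePixels];    // random tile: stays raw
    for (int i = 0; i < kTilePixels; ++i) {
        gradient[i] = uint8_t(120 + i / 8);
        noise[i]    = uint8_t(std::rand() & 0xFF);
    }
    std::printf("gradient tile: %d bits vs %d raw\n", compressedBits(gradient), kTilePixels * 8);
    std::printf("noise tile:    %d bits vs %d raw\n", compressedBits(noise),    kTilePixels * 8);
}
```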
 
I was always under the assumption that bandwidth was more important than size unless the video card was actually running out of space while rendering. 4GB should be enough in terms of size, right?
 
I was always under the assumption that bandwidth was more important than size unless the video card was actually running out of space while rendering. 4GB should be enough in terms of size, right?

You don't want 4GB on an enthusiast card, period.
You need 8GB, it's as simple as that.
Whether games need it or not isn't a factor: why buy a 4GB card if the competition has a 6GB card or a 12GB card?
It's a fail every which way to Sunday if it's 4GB.
It won't matter that games will play fine on 4GB, because people's perception gets skewed by the 12GB Titan X and the 6GB 980 Ti; by that logic nobody would buy the 3.5GB 970 or the 4GB 980 either, and we know that isn't true.
4GB is plenty for the majority, but a flagship from AMD has to be 8GB; anything else and they're in for hurt in every way, because Nvidia will unleash hell on them and simply cut them down.
 
So you've gone from saying 4GB is proven not to be enough VRAM in GTA V and Shadows of Mordor, to saying 4GB is enough for games released up until now and that of course GTA V and SoM don't need lots of VRAM because they were designed to run on 512MB consoles. Got it.

I'm not sure if you're genuinely misinterpreting my posts out of distraction or trying to win the "prove-I'm-an-ass-before-reaching-100-posts" award, but just out of respect for this forum I'll entertain it with the complete answer:

1 - Games are already taking up more than 4GB of VRAM at 1080p.
2 - Cards with 4GB aren't taking large hits yet, probably because memory consumption isn't going too far beyond 4GB for now, meaning the drivers can probably still handle well whatever gets left out of the VRAM.
3 - Cards with 2 and 3GB, OTOH, are already taking more pronounced performance hits (as seen with Hawaii cards suddenly getting a substantial performance advantage over the 3GB GK110 cards).
4 - Consoles can have up to 5-6GB of RAM available for graphics, using much lower-performing GPUs than the high-end ones we're seeing released in 2015, while being limited to 1080p.
5 - Therefore, VRAM usage in multiplatform titles is expected to continue growing at least up to those 5-6GB in the PC versions, plus the additional RAM for the increased texture, shadow-map and render resolutions that the more powerful GPUs will allow.

My claim is that 4GB is proven not to be enough because some games are already using more than those 4GB at 1080p, which sets a trend for future games.
Nowhere did I mention that 4GB is not enough for GTA V and Shadows of Mordor; to accuse me of that is either too much distraction or just plain trolling, and you'd do better to stop either of those.


I was always under the assumption that bandwidth was more important than size unless the video card was actually running out of space while rendering. 4GB should be enough in terms of size, right?
Then why are we getting 8GB R9 390 cards?
 
I still consider the 980 an enthusiast-level card. It may not be right up there with the Ti/TX as far as performance goes, but the price tag is very 'enthusiastic'. And then there is this whole 3.5+0.5GB embarrassment, so I really can't see NV doing much 'cutting down' as far as memory goes. You really can't attack your competition with the RAM argument when your bread-and-butter 970 isn't even a true 4GB card.

What I do expect, though, is reviewers doing some 'let's try pushing this thing to its memory limit' testing, trying to find its weakness.
 
I was always under the assumption that bandwidth was more important than size unless the video card was actually running out of space while rendering. 4GB should be enough in terms of size, right?
Bandwidth instantly becomes basically useless under two conditions:
1) Your compute/texture resources are saturated with data.
2) You're utterly starved for memory space.

Picture a bottle of water. You can only drink so fast, even if the bottleneck is 640 millimeters wide. And you can drink only so much until it's empty, after which you need to refill it from a larger jar that only has a 12 millimeter opening.
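To put rough numbers on that (all figures below are ballpark assumptions, not measurements): touching a working set that fits in local memory is cheap, while refilling from the small opening, i.e. pulling spilled assets across PCIe, costs several whole frames.

```cpp
// Back-of-the-envelope numbers for the bottle analogy above. The figures
// are assumed (Hawaii-class local bandwidth, typical effective PCIe 3.0
// x16 throughput), purely to show the scale of the difference.
#include <cstdio>

int main() {
    const double localBandwidthGBs = 320.0; // assumed local VRAM bandwidth
    const double pcieBandwidthGBs  = 12.0;  // assumed effective PCIe 3.0 x16

    const double residentGB = 4.0;          // working set that fits in VRAM
    const double spilledGB  = 1.0;          // assets that did not fit

    std::printf("reading %.0f GB from VRAM : %.1f ms\n",
                residentGB, residentGB / localBandwidthGBs * 1000.0);
    std::printf("fetching %.0f GB over PCIe: %.1f ms\n",
                spilledGB, spilledGB / pcieBandwidthGBs * 1000.0);
}
```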

What I do expect, though, is reviewers doing some 'let's try pushing this thing to its memory limit' testing, trying to find its weakness.
Maybe because it was asked of them before - and for a much more mainstream card?
 
I'm not sure if you're genuinely misinterpreting my posts out of distraction or trying to win the "prove-I'm-an-ass-before-reaching-100-posts" award, but just out of respect for this forum I'll entertain it with the complete answer:

1 - Games are already taking up more than 4GB of VRAM at 1080p.
2 - Cards with 4GB aren't taking large hits yet, probably because memory consumption isn't going too far beyond 4GB for now, meaning the drivers can probably still handle well whatever gets left out of the VRAM.
3 - Cards with 2 and 3GB, OTOH, are already taking more pronounced performance hits (as seen with Hawaii cards suddenly getting a substantial performance advantage over the 3GB GK110 cards).
4 - Consoles can have up to 5-6GB of RAM available for graphics, using much lower-performing GPUs than the high-end ones we're seeing released in 2015, while being limited to 1080p.
5 - Therefore, VRAM usage in multiplatform titles is expected to continue growing at least up to those 5-6GB in the PC versions, plus the additional RAM for the increased texture, shadow-map and render resolutions that the more powerful GPUs will allow.

My claim is that 4GB is proven not to be enough because some games are already using more than those 4GB at 1080p, which sets a trend for future games.
Nowhere did I mention that 4GB is not enough for GTA V and Shadows of Mordor; to accuse me of that is either too much distraction or just plain trolling, and you'd do better to stop either of those.

Then why are we getting 8GB R9 390 cards?

I'll ignore the personal attack and stick to the current discussion about VRAM size if it's all the same to you.

Your contention is that these games are already going beyond 4GB, that this is being managed by WDDM streaming data in and out, and further that 3GB cards are falling behind. This seems to be in direct contradiction to the benchmarks at AnandTech.

http://www.anandtech.com/show/9306/the-nvidia-geforce-gtx-980-ti-review/6

The 780 Ti seems to be right where it belongs in terms of performance compared to the 980, even at 4K Ultra settings. Now maybe I don't frequent the right tech sites and I'm missing some piece of information that would make this make sense; I'm sure you'll link me to the relevant info.

As for trends, of course specs are expected to rise over the longer term; nobody is claiming otherwise. But I do think CarstenS's view is more likely correct and that the major inflection point has already happened.

I would not say increasing, as that implies an ongoing process. I'd rather describe it as having "jumped up" after the launch of the PS4/XB1, IMHO mainly due to their larger memories. But I do think that this is a one-time bump for now.
 
One additional aspect of the 4 GiB topic: games are largely streaming-based nowadays, meaning they do not load a whole level into local memory. Graphics drivers manage this according to the amount of memory present. If they have more wiggle room, as on a Titan X or FirePro W9100, they can afford to leave already-used assets untouched for a longer time. If such an asset is used again, they do not have to reload it from main memory, thus saving time and - depending on the capabilities of the engine - achieving smoother rendering. Having non-blocking DMA engines helps with that …

The amount of memory _really_ needed at a certain point in time cannot be measured that simply with tools like GPU-Z or MSI Afterburner.
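To illustrate the residency behaviour described above, here's a toy LRU cache standing in for the driver's memory manager. Everything about it (asset names, sizes, eviction policy) is made up for illustration and is not how any real driver is implemented, but it shows why a card with more memory re-uploads less for the same access pattern, and why "memory in use" is not the same as "memory required".

```cpp
// Toy model of driver-managed residency: keep recently used assets in
// "VRAM", evict the least recently used ones only when space runs out.
#include <cstdio>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

class ResidencyCache {
public:
    explicit ResidencyCache(size_t capacityMB) : capacityMB_(capacityMB) {}

    // Returns true on a hit (asset already resident), false if it had to
    // be (re)uploaded from system memory.
    bool touch(const std::string& name, size_t sizeMB) {
        auto it = index_.find(name);
        if (it != index_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second); // move to front
            return true;
        }
        while (usedMB_ + sizeMB > capacityMB_ && !lru_.empty()) {
            usedMB_ -= lru_.back().second;               // evict least recent
            index_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(name, sizeMB);
        index_[name] = lru_.begin();
        usedMB_ += sizeMB;
        return false;
    }

private:
    size_t capacityMB_;
    size_t usedMB_ = 0;
    std::list<std::pair<std::string, size_t>> lru_;
    std::unordered_map<std::string,
                       std::list<std::pair<std::string, size_t>>::iterator> index_;
};

int main() {
    const char* frame[] = {"terrain", "hero", "skybox", "terrain", "hero", "crowd", "terrain"};
    for (size_t capacity : {1024u, 4096u}) {             // "small" vs "large" card
        ResidencyCache vram(capacity);
        int uploads = 0;
        for (const char* asset : frame)
            if (!vram.touch(asset, 512)) ++uploads;      // every asset assumed 512 MB
        std::printf("%zu MB card: %d uploads\n", capacity, uploads);
    }
}
```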
 
Bandwidth instantly becomes basically useless under two conditions:
1) Your compute/texture resources are saturated with data.
2) You're utterly starved for memory space.

Picture a bottle of water. You can only drink so fast, even if the bottleneck is 640 millimeters wide. And you can drink only so much until it's empty, after which you need to refill it from a larger jar that only has a 12 millimeter opening.
I appreciate the response. I was going to bring up streaming-based games, but you touched on that in another post.

With multi-engine and async copy coming with DX12, can we not have all the main textures streaming from RAM into VRAM using async copy while the game is rendering? I get that this won't work as well in twitch-based games, where user input can be unpredictable, but for most other titles it should be OK, no?

I understand that, ideally, more space is going to be a better solution than constantly streaming data to your video card, but at the same time system RAM can be enormous in size and relatively cheap.
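Something along those lines is possible with a dedicated copy queue. The sketch below fakes it with plain C++ threads rather than actual D3D12 objects (a command queue of type D3D12_COMMAND_LIST_TYPE_COPY would take the role of the copy thread); the asset names and timings are invented for illustration. The key point is that the render loop never waits on the uploads and just draws with whatever is already resident.

```cpp
// Conceptual sketch: a "copy queue" thread streams assets from system RAM
// into VRAM while the render loop keeps drawing. Plain threads stand in
// for the real async copy engine.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::mutex mtx;
std::queue<std::string> uploadRequests;   // assets the renderer wants soon
std::atomic<bool> running{true};

// Simulated copy engine: drains upload requests without blocking rendering.
void copyQueueWorker() {
    while (running) {
        std::string asset;
        {
            std::lock_guard<std::mutex> lock(mtx);
            if (!uploadRequests.empty()) {
                asset = uploadRequests.front();
                uploadRequests.pop();
            }
        }
        if (asset.empty()) {
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            continue;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(8)); // fake DMA
        std::printf("[copy  ] %s now resident\n", asset.c_str());
    }
}

int main() {
    std::thread copyThread(copyQueueWorker);

    for (int frame = 0; frame < 5; ++frame) {
        if (frame == 1) {   // the game predicts it will need new assets soon
            std::lock_guard<std::mutex> lock(mtx);
            uploadRequests.push("distant_city_textures");
            uploadRequests.push("next_area_geometry");
        }
        // Render with whatever is resident; never wait on the copies.
        std::printf("[render] frame %d\n", frame);
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }

    running = false;
    copyThread.join();
}
```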
 