AMD: Volcanic Islands R1100/1200 (8***/9*** series) Speculation/Rumour Thread

Pirate Islands, according to rumors. No one seems to know anything about it, however.

No one except those who are responsible for the design and are working on it now...

I am also pretty interested, because the news from the green camp makes me think AMD will have problems competing with Maxwell.
 
Hey Dave... I bought a PowerColor 290 because I heard it flashes... whaddaya know, it indeed transformed into a 290X! Happy days again for me!

Now for my rant... turbo mode on GPUs totally sucks. I had a bad feeling about it, as I mentioned previously, and the Hawaii GPU power tuning proved me right: clocks seem to go up, down, up, down, down, up... Setting the power limit in CCC sometimes works, sometimes it doesn't. It's just not as simple and clear-cut as my original Tahiti XT.

As for the reference fan noise, I found it better than my 7970. Even going up to 67% speed, it sounded quieter than the 7970... I don't get why AT and several other reviewers complain about the noise.

I think your Hawaii is just not ready for prime time... so much variance, such inconsistent tools... and yes, I did suffer from the BF4 black-screen hang; I had to hard power off my PC.
 

It seems to me that PowerTune is doing exactly what it's supposed to.
 

Not only that, but this is conceptually no different from your CPU doing the exact same thing (which it does). Even when playing games, your CPU ramps up and down, up and down in clock speed to keep itself under thermal constraints and also to keep itself only fast enough to serve the duties asked of it.

In effect: get used to it, because this is the future of how GPUs will be clocking from this point forward. The days of a static speed are gone, just as they have been gone for years with CPUs.
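
To put that in rough code terms, here is a purely illustrative Python sketch of a boost governor of this general shape; every threshold, step size, and clock limit below is made up, and this is not AMD's actual PowerTune logic:

```python
# Purely illustrative sketch of a toy boost governor. Thresholds, step size,
# and clock limits are invented numbers, not AMD's actual PowerTune algorithm.

MIN_CLOCK_MHZ = 300
MAX_CLOCK_MHZ = 1000
TEMP_TARGET_C = 94      # assumed throttle temperature target
POWER_LIMIT_W = 250     # assumed board power limit
STEP_MHZ = 13           # assumed clock step granularity

def next_clock(current_mhz, temp_c, power_w, utilization):
    """Back off when over the thermal or power limit, ramp up when the GPU is
    busy and there is headroom, drift down when it is mostly idle."""
    if temp_c > TEMP_TARGET_C or power_w > POWER_LIMIT_W:
        return max(MIN_CLOCK_MHZ, current_mhz - STEP_MHZ)
    if utilization > 0.90:
        return min(MAX_CLOCK_MHZ, current_mhz + STEP_MHZ)
    if utilization < 0.30:
        return max(MIN_CLOCK_MHZ, current_mhz - STEP_MHZ)
    return current_mhz
```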
 
CPUs have much more rigorous speed grades. They give a sustained clock number, and frequently can give sustained turbo clocks for specific numbers of active cores.
AMD hasn't promised that with Hawaii.

It is true that CPUs can drop low in idle periods.
AMD's scheme may also drop clocks in periods of weak utilization, but the visible feedback doesn't distinguish between transient spots of low utilization and thermal throttling under load.
This is where Nvidia seems to have seized a potentially dubious marketing advantage with its slower clock transitions. It also has a less dubious marketing bullet point where it has a stronger guaranteed clock rate, although some tests out there seem to show that this is not quite up to the same standard as the base rates for a CPU.
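
For contrast, the kind of contract a CPU vendor publishes looks roughly like the sketch below: a guaranteed base clock plus turbo bins tied to the number of active cores. The numbers are invented for illustration and don't correspond to any particular SKU:

```python
# Invented numbers for illustration, not any real SKU's specification.
BASE_CLOCK_MHZ = 3500                                   # guaranteed sustained clock
TURBO_BINS_MHZ = {1: 3900, 2: 3800, 3: 3700, 4: 3600}   # max turbo by active cores

def advertised_clock(active_cores):
    """CPU-style contract: a published floor plus turbo bins per active-core count.
    Hawaii's public spec, by contrast, is only an 'up to' figure with no floor."""
    return TURBO_BINS_MHZ.get(active_cores, BASE_CLOCK_MHZ)
```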
 
For what it's worth, I'm not sure CPU makers will maintain this practice of guaranteed clock speeds. In fact, there were some SKUs of Trinity that would throttle a bit under OCCT (or some other similar program).

And Atom is very much in the same category as Hawaii, boosting to its maximum burst clock speed whenever appropriate and possible, and throttling whenever necessary, down to a clock speed determined as a function of thermal constraints, required performance, and power efficiency.
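
Read that way, the clock at any instant is effectively the minimum of several ceilings. A toy Python rendering of the idea, with the specific numbers assumed rather than taken from any Intel documentation:

```python
def burst_clock(demanded_mhz, thermal_ceiling_mhz, power_ceiling_mhz,
                max_burst_mhz=2400, floor_mhz=800):
    """Toy model: run only as fast as the work demands, never above what the
    thermal and power budgets allow, clamped to the part's operating range."""
    target = min(demanded_mhz, thermal_ceiling_mhz, power_ceiling_mhz, max_burst_mhz)
    return max(target, floor_mhz)
```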
 
For what it's worth, I'm not sure CPU makers will maintain this practice of guaranteed clock speeds. In fact, there were some SKUs of Trinity that would throttle a bit under OCCT (or some other similar program).
A core design that goes into a server or HPC setup can ill-afford that kind of variability. Some have been known to disable turbo, or even DVFS at times.
Should Trinity's CPUs throttle to below base clock in such a scenario, it's at least for now more an indictment of AMD than anything else.

And Atom is very much in the same category as Hawaii, boosting to its maximum burst clock speed whenever appropriate and possible, and throttling whenever necessary, down to a clock speed determined as a function of thermal constraints, required performance, and power efficiency.
Which Atom SKUs have an undisclosed and variable base clock?
 
A core design that goes into a server or HPC setup can ill-afford that kind of variability.
FWIW: according to Anandtech, that is indeed the case for the new K40 Tesla card: it has a boost mode, but it's just a static switch to be used by those who know for sure that their kind of workload won't exceed power limits.
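
In other words, the K40's boost is closer to an operator-selected application clock than a dynamic governor: you pick a static clock ahead of time based on whether your workload's worst-case draw fits the board's power limit. A hedged sketch of that decision; the clock values are the ones reported for K40, and the crude linear power model is a deliberate oversimplification:

```python
# Application clocks reported for K40 (745/810/875 MHz) and a 235 W board limit;
# treat these, and the linear power model, as assumptions for illustration.
K40_APP_CLOCKS_MHZ = [745, 810, 875]

def pick_application_clock(base_clock_power_w, board_limit_w=235,
                           extra_w_per_mhz=0.25):
    """Pick the highest static clock whose estimated worst-case draw still fits
    under the board power limit; fall back to the base application clock.
    base_clock_power_w is the workload's estimated draw at the base clock."""
    base = K40_APP_CLOCKS_MHZ[0]
    for clock in sorted(K40_APP_CLOCKS_MHZ, reverse=True):
        estimated = base_clock_power_w + (clock - base) * extra_w_per_mhz
        if estimated <= board_limit_w:
            return clock
    return base
```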
 
Well. Installed the ICY Vision on my shiny new 290X with some slight modification to the placement of the bottom heatsink.

GPU-Z is telling me that I'm idling at 42°C. VRM 1 is at 35°C and VRM 2 is at 39-40°C.

Now my only problem is that Skyrim keeps crashing on me.
 
Which Atom SKUs have an undisclosed and variable base clock?

I am interested in this as well. So far as I know, all Atoms have a guaranteed base clockspeed regardless of workload.

AMD did not provide a base clockspeed on Hawaii cards because they know some of those cards will clock down to fairly low speeds under certain workloads (including some games). If that were not the case, they would have obviously provided a base clockspeed.

I understand there will be some variability in clockspeeds on GPUs going forwards, but having a guaranteed base clockspeed is a very good thing IMO.
 
FWIW: according to Anandtech, that is indeed the case for the new K40 Tesla card: it has a boost mode, but it's just a static switch to be used by those who know for sure that their kind of workload won't exceed power limits.

My CPU bias must be showing; I had seen the announcement about the K40's boost, but I only really pictured the requirements for a mainline CPU core. There are markets that don't tolerate CPU core clocks wandering too far from each other.

As far as putting the onus on third-party software writers to make sure Nvidia's hardware can stick within its own specifications: I find it primitive, and the prospect of marketing it as a feature is just another bit of GPU marketing doublespeak.

edit: On re-review of the Anandtech article, I'm retracting the last part. There's still some sort of fail-safe throttling going on. The primary differentiator seems to be that there's enough wiggle room to allow for the turbo bins in certain loads, but that wiggle room cannot be captured with binning.
Making software writers validate their workload to guarantee that no dynamic conditions can cause throttling seems like it would have to be pretty pessimistic to avoid tripping a transient edge case.
 
A core design that goes into a server or HPC setup can ill-afford that kind of variability. Some have been known to disable turbo, or even DVFS at times.
Should Trinity's CPUs throttle to below base clock in such a scenario, it's at least for now more an indictment of AMD than anything else.

It's not much of a problem for consumer SKUs, and as far as I'm aware only a limited number of those exhibit this kind of behavior.

Which Atom SKUs have an undisclosed and variable base clock?

Undisclosed? None that I'm aware of. But the base clock is often not advertised, only the burst clock. In a way this makes sense, because the CPU is rarely stuck at its base clock anyhow. And in this, Intel is showing its x86 roots: I've never heard of a guaranteed base clock for Samsung's or Qualcomm's SoCs, for example. Perhaps I just haven't been paying attention.

PS: I believe 727MHz is something of a base clock for Hawaii.
 
FWIW: according to Anandtech, that is indeed the case for the new K40 Tesla card: it has a boost mode, but it's just a static switch to be used by those who know for sure that their kind of workload won't exceed power limits.

I'm going to take one more stab at a reply to this after some reflection.
There have been documented settings or guarantees provided in past CPU architectures for something like this.
For example, Bulldozer and its non-APU derivatives could have their status registers set so that the FPU was inactive. This allowed a few turbo grades to become the new sustained clock.
There were similar possibilities in Sandy Bridge, similarly related to not using the FPU or not using full-width AVX, which kept a portion of the FPU clock-gated.
This would translate into one or more clock bins that would normally be turbo becoming sustainable instead.
In those cases, the hardware providers profiled and characterized their hardware, backing the special use cases with their engineering and manufacturing capabilities.
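
The common thread in those CPU cases is that the vendor itself characterized and published the mapping from "which units are active" to "what clock is sustainable." A hypothetical sketch of what such a table amounts to (numbers invented, in the spirit of Bulldozer's FPU-off grades and the later AVX clock offsets):

```python
# Hypothetical numbers illustrating a vendor-characterized clock table:
# the sustained clock depends on which execution resources a workload uses.
SUSTAINED_CLOCK_MHZ = {
    "integer_only": 3900,   # FPU clock-gated: a former turbo bin becomes sustained
    "scalar_fp":    3700,
    "full_vector":  3500,   # full-width FP/AVX active: the conservative base clock
}

def sustained_clock(workload_class):
    """Look up the vendor-characterized sustained clock for a workload class."""
    return SUSTAINED_CLOCK_MHZ[workload_class]
```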

The GPU turbo option here does not do this. It instead requires those with less access or means to qualify the hardware to make their best guess. Maybe the GPU will report a thermal event; I know CPUs have been able to do this for years now.

In some ways, the CPU cases might have had some things easier. They had a more readily available demarcation between scalar integer loads and FPU work, and the relative sizing of their execution resources and their aggressive make-work OoO logic sort of helped even out the power consumption closer to the top end of a conservative binning test. Due to their design emphasis, there were few architectural elements sized large enough that they could on their own blow out the power budget.
The GPU side is coarser, and can at times hit significant drops and spikes in utilization. Because of its design emphasis, there are parts of the GPU that can on their own really drive power up.

This would mean that it would be harder in some ways to do what CPUs can do.
I don't personally think that necessarily makes a failure to do so a good marketing point.

It's not much of a problem for consumer SKUs, and as far as I'm aware only a limited number of those exhibit this kind of behavior.
It could be that the GPU's thermal management can't rein in consumption tightly enough, and so the CPU cores might throttle because they can react faster.

Undisclosed? None that I'm aware of. But the base clock is often not advertised, only the burst clock. In a way this makes sense, because the CPU is rarely stuck at its base clock anyhow. And in this, Intel is showing its x86 roots: I've never heard of a guaranteed base clock for Samsung's or Qualcomm's SoCs, for example. Perhaps I just haven't been paying attention.
I'd readily believe that. For historical reasons, x86, and in the past a number of other performance architectures, had economic and industry incentives for greater rigor, documentation, and general disclosure about the architectures and specifications.
The architectures were the product, and for a time the more open (or perhaps temporarily less vertically integrated) hardware and software industry used that disclosed information to utilize the product and justify buying more or newer product.
x86 is generally still a standard for that kind of disclosure, with errata documents, software optimization guides, and a range of resources that profile and document it.
I'm not sure it's quite as open as it once was, though; there's been a drift toward the approach of some of the aforementioned embedded cores, and a gradual trailing off of some of the details of the cores or their functionality.

There's no clearer example of the abrupt shift than what happened with AMD's Bobcat, which didn't even get a generally released optimization guide. Someone did manage to fight to get Jaguar one, at least.

Designers coming from an embedded or platform tradition come more from the starting point of providing a platform or a component.
Transparency, rigor, and dare I say honesty about anything are not traditional strong suits.
For modern examples, we have Apple's latest Cyclone core with a noneofyourdamnbusiness issue width, or Samsung's non-disclosed non-functional CCI products and benchmark-only clock bins.
IP warfare, a bewildering array of non-standard implementations, and walled garden platforms don't incentivize the same level of candor or openness.
x86 is no panacea, as the extreme teeth-pulling to get details more in-depth than the shipping box bullet points for the consoles would indicate.

There could be some swing back for the embedded cores. Now that users expect more programmable functionality, at least some things need to be brought closer to the level that PC software normally assumes as a baseline.
On the other hand, perhaps with the trends towards consumer consumption driving much of the revenue, the desire to lock down more of the content and software, or "liberate" it to remote cloud servers, even less of the hardware will see the same level of exposure as it once did.

PS: I believe 727MHz is something of a base clock for Hawaii.
It's something, maybe.
 