But where's the fuss about the Titan X?
Yes, that was another debacle that could easily have been avoided by allowing non-reference coolers right from the start.
I think 290X muddied the water because the default BIOS switch was "quiet" mode, i.e. throttle clocks at the merest hint of work. Non-reference coolers were fine.
Exactly. Which is the mistake that I was trying to point out. Instead of promising a minimum guaranteed clock speed plus a bonus that's not guaranteed, they only advertised a non-guaranteed bonus clock, which inevitably led to throttling with their crappy cooler.
But I also think AMD changed things. After the original HD7970 (and pals), base clock disappeared and AMD only gives boost clock as you say.
The base clock for the Titan X is 1000MHz. Does it ever throttle below 1000MHz?
Titan X with its reference cooler is basically as bad as a reference 290X in quiet mode (87% versus 85%, respectively):
Fuss about what?
But where's the fuss about the Titan X?
Interestingly, the one recent review pointing out an AF hit that I could find was HardOCP's Watch Dogs review, where the 290X had a much greater performance loss than the 780 Ti, which barely budged with 16x AF.
We have been talking for years now about driver enablement of DSR/VSR-like techniques, which were possible through hacks before - to both AMD and Nvidia - pointing this out and also the chance to increase their margins through higher sales of potentially more powerful graphics solutions. Unfortunately, we only heard back from AMD when Nvidia had already made the move.
With its abysmal 1080p performance it could have been a saving grace, but VSR was nowhere to be found. The funny thing being that Nvidia came up with this idea of having a greater resolution on widespread 1080p monitors while AMD had more to gain from it.
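Since DSR/VSR keeps coming up: here is a minimal sketch of the basic idea - render internally above the monitor's resolution, then filter the frame down to 1080p for display. The box filter and the NumPy framing are simplifications for illustration only; the actual drivers use their own (configurable) resampling filters.

```python
import numpy as np

def downsample_box(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average factor x factor pixel blocks of a frame rendered above display resolution."""
    h, w, c = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# e.g. render internally at 3840x2160, display on a 1920x1080 monitor
hi_res = np.random.rand(2160, 3840, 3).astype(np.float32)   # stand-in for a rendered frame
lo_res = downsample_box(hi_res, factor=2)                    # shape (1080, 1920, 3)
print(lo_res.shape)
```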
This:
I think 290X muddied the water because the default BIOS switch was "quiet" mode, i.e. throttle clocks at the merest hint of work. Non-reference coolers were fine.
But I also think AMD changed things. After the original HD7970 (and pals), base clock disappeared and AMD only gives boost clock as you say.
Titan X with its reference cooler is basically as bad as a reference 290X in quiet mode (87% versus 85%, respectively):
(from http://www.hardware.fr/articles/937-9/protocole-test.html) There's a smaller set of results for 3840x2160, where the Titan X gets worse.
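To make those percentages concrete - reading them as the average sustained clock relative to each card's peak boost clock, which is an assumption about what the linked table's figures denominate - the rough absolute numbers would be:

```python
# 1190MHz (observed Titan X peak boost, mentioned below) and the 290X's "up to 1000MHz"
# are from this thread; the interpretation of the percentages is assumed.
titan_x_avg = 0.87 * 1190   # ~1035 MHz - still above the 1000MHz base and 1075MHz rated boost
r9_290x_avg = 0.85 * 1000   # 850 MHz - well under the advertised "up to 1000MHz"
print(round(titan_x_avg), round(r9_290x_avg))
```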
But where's the fuss about the Titan X?
It's openly advertised with a base clock (this is something that AMD does not give; I've heard vague statements as to the flexibility of PowerTune, saying that in REALLY DIRE circumstances (dunno - fan stuck/PC inside an oven) it could go down until it hits idle clocks, and one has to find out the hard way) and a boost clock that, supposedly and also vaguely, is the clock the card runs at under a wide range of workloads under normal operating conditions bla... which seems legit when you look at the table.
Probably because its specifications say 1000MHz base and 1075MHz boost clock, not 1190MHz.
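For anyone lost in the idle/base/boost terminology, here is a toy governor that captures the behaviour being described. Only the 1000/1075MHz figures come from this thread; the 135MHz idle clock, the 83 °C and 250 W limits and the 0.5 headroom cutoff are made-up numbers purely for illustration.

```python
IDLE_MHZ, BASE_MHZ, BOOST_MHZ = 135, 1000, 1075   # 1000/1075 from the thread; 135 is assumed

def pick_clock(temp_c: float, power_w: float,
               temp_limit: float = 83.0, power_limit: float = 250.0) -> int:
    """Pick a clock between idle and boost from thermal/power headroom (toy logic)."""
    headroom = min(1.0, temp_limit / max(temp_c, 1.0), power_limit / max(power_w, 1.0))
    clock = int(BOOST_MHZ * headroom)
    if headroom > 0.5:
        # the "guaranteed minimum" half of the spec: never drop below base under normal load
        return max(clock, BASE_MHZ)
    # the dire case (stuck fan, PC inside an oven): fall back toward idle clocks
    return max(clock, IDLE_MHZ)

print(pick_clock(temp_c=70, power_w=220))    # 1075 - runs at boost
print(pick_clock(temp_c=90, power_w=260))    # 1000 - throttled, but held at base
print(pick_clock(temp_c=170, power_w=240))   # ~525 - the dire case, below base
```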
No need for scorn; usually the Fury X suffers less of a performance hit than a GM200 when going from no AF to 16:1 AF (regardless of whether it's driver default settings or high-quality settings). AMD might only be hurting themselves here.
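For clarity, the "AF hit" in these comparisons is just the relative frame-rate loss when anisotropic filtering is turned up; the frame rates below are invented placeholders, not measurements.

```python
def af_hit(fps_no_af: float, fps_16x_af: float) -> float:
    """Performance loss, in percent, from enabling 16x anisotropic filtering."""
    return (fps_no_af - fps_16x_af) / fps_no_af * 100.0

# placeholder frame rates for illustration only
print(af_hit(60.0, 57.5))   # ~4.2% - "barely budges"
print(af_hit(60.0, 51.0))   # 15.0% - a much larger hit
```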
AMD reports the Boost-clock, but since 290(?) they've kept the "base clock" hidden.
It was just the opposite, AFAIK.
At least for the 290X, it was the cause of people getting upset because it often did not reach that number, while Nvidia always specifies a minimum speed and a boost clock that isn't guaranteed.
I don't know why people bother looking at the clock speed of GPUs so much. Boost clock and base clock are both not indicative of the performance and often are not the clock that is being used. Not only that, but GPU clock speeds have never really even been much of a specification for the consumer because of the parallel nature of GPUs. Both AMD and Nvidia's cards will throttle to as low as they need to given the temperatures and power draw, and specifying a base clock then is just lying to people. Similarly, specifying a boost clock that you never reach is also pointless. Why do people even care what clock speed is used if the card is performing regularly?
AMD reports the Boost-clock, but since 290(?) they've kept the "base clock" hidden.
While NVIDIA does specify "minimum speed", they do throttle under it sometimes, too.
Consumers like to find some kind of easy metric they can focus on in a sea of confusing terminology and nuanced interpretation, particularly with so many things to compare them to.
I don't know why people bother looking at the clock speed of GPUs so much.
x86 DVFS does this as well. Nobody minds what the CPU does when the cycles aren't needed. They do care about what happens when it counts.
Both AMD and Nvidia's cards will throttle to as low as they need to given the temperatures and power draw, and specifying a base clock then is just lying to people.
If it's truly never, then it's most likely illegal as well.
Similarly, specifying a boost clock that you never reach is also pointless.
Purchases are being made on the features and specifications of the product. AMD is not getting paid for the privilege of telling users when its given figures do and do not matter to them.
Why do people even care what clock speed is used if the card is performing regularly?
Server GPUs, like Nvidia Tesla GPUs (and I presume AMD's FirePro or FireStream GPUs), do sustain performance at rated clocks. It's not true that GPUs are inherently wobbly. It's a marketing decision.
There actually are buyers, like various server customers, that do validate their purchases based on the sustained behavior of those chips. It's not without a very small amount of nuance with regard to pathological software crafted with low-level knowledge of the internals, but they do care about how the CPU behaves when you need to depend on it. If a CPU is put into a box that meets its own specs, it should deliver its specified behavior.
So that level of rigor is possible in that class of products.
That GPUs more frequently cannot meet this standard is an indication of a number of things.
Physically, their behavior can be harder to characterize, as a price for their high transistor counts and variable utilization.
A different marketing stand-in has not been found that is as effective as the aspirational numbers.
GPUs, relative to those processors, are not capable of that level of rigor.
That some are wobblier than others indicates how much difficulty they have in maintaining that level of consistency. The context in which this started had steeper drops from the "up to" figure than the tables provided earlier.
AMD reports the Boost-clock, but since 290(?) they've kept the "base clock" hidden.
While NVIDIA does specify "minimum speed", they do throttle under it sometimes, too.
The Radeon R9 370X is not intended for the European market.
AMD has now clarified that the graphics card will not appear in Europe, but is intended exclusively for the Chinese market. This comes from a Twitter post by AMD's Robert Hallock.
Yes, that was a mistake. It was also a mistake to take 1190MHz as the Titan X's boost clock, since it's 1075MHz, as Dr. Evil says. So Titan X owners certainly won't be making a fuss when the GPU averages faster than the specified boost clock.
Exactly. Which is the mistake that I was trying to point out.
ATI was doing China-only salvage/clearance SKUs for ages.
AMD: Starts regionalizing its products.
Unfortunately, China will be huge for the GTX 950. And the R9 370X's lack of complementary features like FreeSync support definitely does not help.
ATI was doing China-only salvage/clearance SKUs for ages.
Call me when there are different versions for EU vs NA.
http://www.guru3d.com/news-story/amd-r9-380x-tonga-spotted-with-2048-shader-processors.html
From the looks of it, AMD is preparing the launch of the Radeon R9 380X, not to be confused with the 380. As so often, it is XFX that has some photos available on an Asian website. The 380 series is Tonga based, and that X indicates a higher shader processor count.
The Tonga GPU has proper DX12 support as it is based on the GCN 1.2 architecture. This X model should get 2048 shader processors in its 32 compute units. The product would get 128 texture mapping units and 32 ROPs. The GPU will be tied to either 3 or 6 GB of memory with proper bandwidth over a 384-bit wide memory bus.
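Rough napkin math on what such a configuration would yield; the 7 Gbps memory data rate and the ~1 GHz core clock below are assumptions for illustration, not figures from the article.

```python
shaders, tmus, rops = 2048, 128, 32
bus_width_bits = 384
mem_gbps_per_pin = 7.0    # assumed GDDR5 data rate, not from the article
core_ghz = 1.0            # assumed engine clock, not from the article

bandwidth_gb_s = bus_width_bits * mem_gbps_per_pin / 8    # 336 GB/s
fp32_tflops = shaders * 2 * core_ghz / 1000.0             # ~4.1 TFLOPS (2 FLOPs per ALU per clock)
print(f"{bandwidth_gb_s:.0f} GB/s, {fp32_tflops:.1f} TFLOPS")
```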