AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

Discussion in 'Architecture and Products' started by iMacmatician, Apr 10, 2014.

  1. Esrever

    Regular Newcomer

    Joined:
    Feb 6, 2013
    Messages:
    594
    Likes Received:
    298
    That price makes sense, I guess, but I would find it hard for anyone to justify it over the Fury X, since even some mini-ITX cases these days can hold a radiator.
     
  2. Dr Evil

    Dr Evil Anas platyrhynchos
    Legend Veteran

    Joined:
    Jul 9, 2004
    Messages:
    5,777
    Likes Received:
    782
    Location:
    Finland
    Probably because its specifications say 1000 MHz base and 1075 MHz boost clock, not 1190 MHz. Why you chose to use Titan X's max boost as the base I have no idea... and even when taking the averages of that table and using the same max frequencies, Titan X gets 92% and the 290X 87.2% (BTW the worst case was 84%, not 85%, for the 290X)

    Also, if you allow similar temperatures for the Titan X core, it boosts higher and is quieter than that 290X.
     
  3. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    Yes, that was another debacle that could easily have been avoided by allowing non-reference coolers right from the start.

    Exactly. Which is the mistake that I was trying to point out. Instead of promising a minimum guaranteed clock speed and a bonus that's not guaranteed, they only advertised a non-guaranteed bonus clock, which inevitably led to throttling with their crappy cooler.

    The base clock for Titan X is 1000MHz. Does it ever throttle below 1000MHz?

    Fuss about what?
     
  4. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,971
    Likes Received:
    4,565
    $650 is utterly ridiculous considering how the Fury X already fits in most mini-ITX cases.
     
    Razor1 and A1xLLcqAgt0qc2RyMz0y like this.
  5. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,798
    Likes Received:
    2,056
    Location:
    Germany
    I know, nothing's true on teh internetz unless you link it, so here you go:
    (similar table in our 980 Ti review earlier, which I won't upload... 1080p results:
    980 Ti -11/-19% (Q/HQ), 780 Ti -7/-14%, 770 -7/-14%. Witcher 2 EE proved to be quite heavy on texturing and runs overall comparably on AMD and Nvidia.)
    [image: benchmark table]
    (When in D/A/CH, buy our mag!!! :D)

    For years now we have been talking to both AMD and Nvidia about driver enablement of DSR/VSR-like techniques, which were possible through hacks before, pointing this out and also the chance to increase their margins through higher sales of potentially more powerful graphics solutions. Unfortunately, we only heard back from AMD when Nvidia had already made the move.

    (I am not saying here that we were (one of) the deciding factors for VSR/DSR, but at least one of the parties pushing for it. :) )

    This:
    It's openly advertised with a base clock (something AMD does not give; I've heard vague statements as to the flexibility of PowerTune, saying that in REALLY DIRE circumstances (dunno, fan stuck, PC inside an oven) it could go down until it hits idle clocks, and one has to find out the hard way) and a boost clock that, supposedly and also vaguely, is the clock the card runs at under a wide range of workloads under normal operating conditions... which seems legit when you look at the table.

    I wonder why no one pitched the notion that Nvidia is cheating with clocks, because in Damien's table the average clock over all the games is 1101 MHz instead of 1075 MHz... :|
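
    To make that arithmetic concrete, a minimal sketch (Python): the per-game clock samples here are hypothetical placeholders chosen to average 1101 MHz like Damien's table; only the 1000/1075 MHz Titan X figures come from the specs discussed in this thread.

    ```python
    # Hedged sketch: hypothetical per-game average clocks, not Damien's data.
    from statistics import mean

    BASE_CLOCK = 1000   # MHz, advertised base (guaranteed minimum)
    BOOST_CLOCK = 1075  # MHz, advertised boost ("typical" clock)

    observed = [1114, 1088, 1126, 1075, 1102]  # hypothetical per-game clocks

    avg = mean(observed)
    print(f"average observed clock: {avg:.0f} MHz")                   # 1101 MHz
    print(f"relative to advertised boost: {avg / BOOST_CLOCK:.1%}")   # ~102.4%

    # "Throttling" in the advertised sense only means dropping below
    # the base clock under normal operating conditions:
    below_base = [c for c in observed if c < BASE_CLOCK]
    print(f"samples below base clock: {len(below_base)}")             # 0
    ```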
     
    Razor1 and fellix like this.
  6. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    After all, all GPUs are tested at the same settings... I don't see where the problem is... it's clear that if they had tested the Nvidia ones at 16x AF and the AMD ones at 0x, the results would be different.

    And that way you don't run into low AF forced by a driver's game-specific profile (bug or not).
     
    #2746 lanek, Aug 27, 2015
    Last edited: Aug 27, 2015
    CarstenS likes this.
  7. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,183
    Likes Received:
    1,840
    Location:
    Finland
    AMD reports the boost clock, but since the 290(?) they've kept the "base clock" hidden.
    While NVIDIA does specify a "minimum speed", their cards do sometimes throttle under it, too.
     
  8. Esrever

    Regular Newcomer

    Joined:
    Feb 6, 2013
    Messages:
    594
    Likes Received:
    298
    I don't know why people bother looking at GPU clock speeds so much. Neither the boost clock nor the base clock is indicative of performance, and often neither is the clock actually being used. Not only that, but GPU clock speed has never really been much of a specification for the consumer, because of the parallel nature of GPUs. Both AMD's and Nvidia's cards will throttle as low as they need to given the temperatures and power draw, so specifying a base clock is just lying to people. Similarly, specifying a boost clock that you never reach is also pointless. Why do people even care what clock speed is used if the card is performing as expected?
     
    Kaarlisk, no-X, Alexko and 1 other person like this.
  9. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,501
    Likes Received:
    8,705
    Location:
    Cleveland
    It only makes a difference when vendors are selling nonstandard cards. For example: is this EVGA Pro card faster than this Sapphire Extreme card?
     
  10. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    Consumers like to find some kind of easy metric they can focus on in a sea of confusing terminology and nuanced interpretation, particularly with so many things to compare against.
    It explains why the vendors put out the specs.


    And why AMD's TFLOPS, GT/s, and GP/s figures, which depend on that clock, don't say "up to" as well (see the sketch at the end of this post).

    x86 DVFS does this as well. Nobody minds what the CPU does when the cycles aren't needed. They do care about what happens when it counts.

    There actually are buyers, like various server customers, that do validate their purchases based on the sustained behavior of those chips. There is a small amount of nuance with regard to pathological software crafted with low-level knowledge of the internals, but they do care about how the CPU behaves when you need to depend on it. If a CPU is put into a box that meets its own specs, it should deliver its specified behavior.
    So that level of rigor is possible in that class of products.

    That GPUs more frequently cannot meet this standard is an indication of a number of things:
    - Physically, their behavior can be harder to characterize, as a price for their high transistor counts and variable utilization.
    - A marketing stand-in has not been found that is as effective as the aspirational numbers.
    - GPUs, relative to those processors, are not capable of that level of rigor.
    That some are wobblier than others indicates how much difficulty they have in maintaining that level of consistency. The context in which this started had steeper drops from the "up to" than the tables provided earlier.

    Mobile SoCs have similar opaque marketing, and have been taken to task for it. Some of Intel's mobile line has shifted to that marketing when running up against that class of processors.
    It's a deficiency in the product, and the marketing serves to make sure the customer is not informed of it.

    If it's truly never, then it's most likely illegal as well.

    Purchases are being made on the features and specifications of the product. AMD is not getting paid for the privilege of telling users when its given figures do and do not matter to them.

    If the slides were titled "Product Aspirations", then whatever, why not?
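
    The sketch mentioned above: a minimal illustration (Python) of why the headline throughput figures inherit the clock's "up to" nature, since every one of them is just a unit count multiplied by the clock. The unit counts below are the Fury X's published specs (4096 shaders, 256 texture units, 64 ROPs, 1050 MHz); the 900 MHz sustained clock is a hypothetical example.

    ```python
    # Every headline throughput figure is units x clock, so an "up to"
    # clock makes all of them "up to" figures.
    # Unit counts are the Fury X's published specs; the 900 MHz throttled
    # clock is a hypothetical example.

    def throughput(clock_mhz, shaders=4096, tmus=256, rops=64):
        hz = clock_mhz * 1e6
        return {
            "TFLOPS": 2 * shaders * hz / 1e12,  # 2 FLOPs per ALU per clock (FMA)
            "GTexel/s": tmus * hz / 1e9,        # texture fill rate
            "GPixel/s": rops * hz / 1e9,        # pixel fill rate
        }

    print(throughput(1050))  # advertised: 8.6 TFLOPS, 268.8 GT/s, 67.2 GP/s
    print(throughput(900))   # hypothetical sustained clock: every figure drops with it
    ```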
     
    Razor1 and liquidboy like this.
  11. RecessionCone

    Regular Subscriber

    Joined:
    Feb 27, 2010
    Messages:
    499
    Likes Received:
    177
    Server GPUs, like Nvidia Tesla GPUs (and I presume AMD's FirePro or FireStream GPUs) do sustain performance at rated clocks. It's not true that GPUs are inherently wobbly. It's a marketing decision.
     
    lanek and pharma like this.
  12. Tridam

    Regular Subscriber

    Joined:
    Apr 14, 2003
    Messages:
    541
    Likes Received:
    47
    Location:
    Louvain-la-Neuve, Belgium
    There are many points of view about what the base and turbo/max clocks are. Engineers validating those chips, marketers selling them, and reviewers looking at them of course have different opinions.

    I think the main problem is this notion of turbo, which doesn't really make sense when we talk about a GPU. What AMD and Nvidia have implemented is closer to an automatic decelerator than to any kind of turbo. Those techniques make it possible to validate GPUs at higher clocks, sure, but they don't speed them up under specific conditions; they slow them down under specific conditions. Throttling is what they have been designing for, but good luck selling that. Average GPU Boost clock works better on the box than average GPU Throttling clock :p

    Anyway, the point is that when you look at GPU Boost and PowerTune as decelerators, the notion of a base clock doesn't make sense unless there is a significant change of behavior down the road. That's the case with GeForces: when they are down to the base clock, the GPU temperature limit is discarded (but not the power one, so the base clock is not the lowest possible clock) and the fan curve gets steeper (sketched at the end of this post).

    Now with the Radeons it's a bit more complicated. If the board partners decide to use the traditional fan control, I don't think there is any behavior change that would lead to a base clock. If they use the newer PowerTune fan control then there is a behavior change: at some point the fan speed limit is discarded. At least that's how it worked with the R9 290s.

    I'm not sure what the best way would be for AMD to present specs that would enable end-users to easily understand the clock differences between a Fury X and a Nano.
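
    A minimal sketch (Python) of the decelerator behavior described above, for the GeForce case. All limits, clocks, and step sizes are hypothetical placeholders; only the control structure follows the description: the power limit always applies, the temperature limit only throttles down to the base clock, and at the base clock the fan curve steepens instead of the clock dropping further.

    ```python
    # Hypothetical numbers; the structure follows the description above.

    BASE_CLOCK, MAX_BOOST = 1000, 1190   # MHz
    TEMP_LIMIT, POWER_LIMIT = 83, 250    # deg C, watts
    STEP = 13                            # MHz per adjustment step

    def adjust_clock(clock, temp_c, power_w):
        """One iteration of the control loop: a decelerator, not a turbo."""
        if power_w > POWER_LIMIT:
            return max(clock - STEP, 0)           # power limit ignores the base clock
        if temp_c > TEMP_LIMIT and clock > BASE_CLOCK:
            return max(clock - STEP, BASE_CLOCK)  # temperature limit stops at base;
                                                  # below it the fan ramps instead
        return min(clock + STEP, MAX_BOOST)       # otherwise run as fast as allowed
    ```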
     
    I.S.T., 3dcgi, Lightman and 2 others like this.
  13. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,929
    Likes Received:
    1,626
  14. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,971
    Likes Received:
    4,565
    Meh, the lack of features would probably hurt the brand within the card's higher segment, and no one would want to buy this with all those >1.3GHz GTX 950 models on the market anyway.
     
  15. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Yes, that was a mistake. It was also a mistake to take 1190 MHz as the Titan X's boost clock, since it's 1075 MHz, as Dr Evil says. So Titan X owners certainly won't be making a fuss when the GPU averages faster than its specified boost clock.

    So the NVidia reference spec is actually a minimum boost clock, whereas for AMD it's a maximum. The conservative NVidia spec wins the day.
     
  16. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,298
    Likes Received:
    247
    When Nvidia launched Kepler, they called it a "typical" value. Many GTX 680 samples with a 1058 MHz boost clock ran at 1033 MHz in some games (e.g. Anno 2070).
     
  17. hoom

    Veteran

    Joined:
    Sep 23, 2003
    Messages:
    2,947
    Likes Received:
    495
    ATI was doing China-only salvage/clearance SKUs for ages.
    Call me when there are different versions for EU vs NA.
     
  18. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,929
    Likes Received:
    1,626
    Unfortunately, China will be huge for the GTX 950. And the R9 370X's lack of complementary features like FreeSync support definitely does not help.
     
  19. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,929
    Likes Received:
    1,626
    AMD R9 380X Tonga spotted with 2048 shader processors

    http://www.guru3d.com/news-story/amd-r9-380x-tonga-spotted-with-2048-shader-processors.html
     
    iMacmatician and Grall like this.
  20. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,971
    Likes Received:
    4,565
    So they finally got rid of the Tahiti stock?

    What I would find really interesting would be a dual-Tonga card with 2×3GB VRAM (6GB total) designed exclusively for LiquidVR, with a TDP close to a single Fury X.
     