Perf/watt/IHV man hours/posts *split*

Discussion in 'Architecture and Products' started by Razor1, Oct 11, 2016.

  1. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

True, but so far we haven't seen "great" changes in this specific category for three generations: the 290X, Fiji, and Polaris.

In the meantime nV went from the 750 to the 9xx series, which was even better on perf/watt in the performance segment (while perf/watt dropped on their mid-range and low end), and then did the same thing with Pascal, just with a greater increase; architecture-wise they are all very close.

Many times we have seen these companies bounce back from a bad or noncompetitive product, but it has never taken this long; pretty much by the next generation we could see the efforts to close the gap, and the changes were always tangible and easy for us to see. This time the gap didn't close, it got wider.
     
    #1 Razor1, Oct 11, 2016
    Last edited: Oct 14, 2016
  2. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,398
    Likes Received:
    5,385
Power consumption will drop far faster than performance if it means going from beyond the knee of the power curve to below it, and quite drastically so.

Hence why Fury X -> Fury Nano isn't something that could be replicated on Nvidia hardware, as none of the Maxwell or Maxwell 2 reference cards, as far as I'm aware, were set beyond the knee of the power curve.

And hence why the RX 480 sees a far greater increase in perf/watt when reducing voltage than the GTX 1060 does.
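As a rough first-order illustration of the knee (textbook CMOS scaling, not measured Fiji data, and the operating points below are made up for the example): dynamic power goes roughly as frequency times voltage squared,

$$P_{\mathrm{dyn}} \propto C \, V^2 f, \qquad \frac{P_2}{P_1} = \frac{f_2}{f_1}\left(\frac{V_2}{V_1}\right)^2 = \frac{0.90}{1.05}\left(\frac{1.00}{1.20}\right)^2 \approx 0.60,$$

so giving up ~14% of clock for a 0.2V voltage drop saves ~40% of dynamic power. That is the Fury X -> Nano shape of trade.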

    Nvidia didn't change the name of the game, they just got better at it. People so quickly forget that AMD was as far ahead of Nvidia with regards to perf/watt with their 48xx and 58xx cards as Nvidia is now ahead with Maxwell and Pascal. Everything else in between was relatively similar between the two.

As well, had the Fury X never launched and the Fury Nano instead been the default product using that chip, that would have been the de facto perf/watt benchmark for Fiji. Instead, AMD felt the need to push Fiji far beyond its optimum operating range to attain a level of performance they felt the market required, using an API that couldn't exploit the capabilities of the card.

Which again represents the different approaches of the two IHVs. Nvidia designed their hardware to get the most out of DirectX 11; AMD designed theirs to get the most out of compute. Nvidia's was obviously the better choice when targeting the then-current API and generation of games, which is smart, as that is what drives sales. While the AMD cards appear far better suited to the upcoming generation of games, that's not something that was going to sell their cards. But at least people who bought GCN cards will likely see their cards do better than the early competition (the 7xx series in particular looks like it's going to be quite bad).

    Regards,
    SB
     
  3. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
Nah, they changed the game on AMD. With the 48xx and 58xx series there was only around a 10% difference in perf/watt, if I remember correctly, for equivalent or close-to-equivalent cards; it was the price that killed nV, as they could not match AMD's prices without taking a hit on margins, and that is exactly what happened.

When you are playing catch-up you are kinda at the mercy of your competitor; that is what happened with Fiji. If the 980 Ti or Titan X hadn't been released, they would have been sitting pretty with it. The Nano would have had a cakewalk against the GTX 980, but that was the first generation nV introduced enthusiast-level cards after launching their performance cards, again forcing AMD's hand. Oddly enough, the 980 Ti was kinda unexpected: why would nV cut their own margins with a card that is almost as capable as the Titan X but at 65% of the price?
     
  4. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
Reducing voltage is something that at least some RX 480s do not tolerate too well, though.

It is the same with the Nano: it looked to be the best at performance/W, but to be honest I am sure that a power-draw-optimized GM200 would still be ahead.
     
  5. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,995
    Likes Received:
    1,503
Did it? AMD is owning the low end at this point in time. The RX 460 is faster than anything in its price range from Nvidia, and even above it. The RX 460 at $110 is faster than the GTX 950, consumes less power, and costs less. The 470 is in a similar boat. What's leading the charge for AMD is DX12 titles; these cards light up on games running it and Vulkan.

In Doom under Vulkan the RX 470 will tie the GTX 1060 6GB card at 1080p:

[attached image: Doom Vulkan 1080p benchmark chart]

and in Hitman DX12 at 1080p it's tied again, with only Tomb Raider favoring Nvidia in DX12 mode.

Nvidia dominates DX11, but when it comes to DX12 that isn't the case, and slowly but surely more titles will run DX12 as time goes on and the benchmark landscape will phase out DX11.
     
    ToTTenTranz likes this.
  6. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
Which assumes that NV can't do better; the other option is that DX12/Vulkan optimisation simply hasn't been a priority so far, as those APIs are not that important for the majority of users. And the 1050 is also just around the corner and will compete with the RX 460. I think everybody wants AMD to do well, but this cherry-picking is not really a good idea.
     
    pharma and DavidGraham like this.
  7. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,292
    Location:
    Helsinki, Finland
Fermi was a highly compute-centric GPU. Nvidia announced the compute (Tesla) cards before the consumer Geforce models. Fermi was hot and had worse perf/watt than AMD. People tend to forget that before Kepler and Maxwell, Nvidia wasn't that great at perf/watt. I still remember the Geforce FX 5800 (their first DX9 GPU) and the Geforce GTX 480 (their first DX11 GPU). Both had heat problems and very loud fans.

Kepler and Maxwell were both huge perf/watt increments. Consumer (GTX) Pascal seems to be mostly a die-shrunk Maxwell (a big shrink -> a big improvement). Nvidia was already in the lead, and the die shrink seemed to suit their architecture very well (Maxwell already had quite a high clock ceiling).
     
    CarstenS, Heinrich4, no-X and 3 others like this.
  8. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,421
    Likes Received:
    180
    Location:
    Chania
While I don't disagree with the above, their first DX10 GPU didn't inherit the original dustbuster problems either. Yes, many tend to forget, but the message should be that s**t can happen to any IHV once in a while, and it's not API-related either, IMO.

I'll dare another perspective: did NV's high Pascal frequencies catch AMD by surprise, to the point where it was too late to react? IMHO yes. Did AMD have the time and resources to swing Vega to an even higher efficiency, as originally projected? IMHO no. And yes, I'd love to be proven wrong on the latter.
     
    #8 Ailuros, Oct 11, 2016
    Last edited: Oct 11, 2016
    pharma and DavidGraham like this.
  9. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
It still has to improve by 33%; it's not Pascal becoming 24% worse :)
    And this is on a small GPU die.

The 480/Polaris still has problems, as I keep saying, with dynamic power consumption/leakage/waste/thermals (which is complicated because you also need to consider die-transistor-function placement and localised hot-spot dissipation), while also not being able to use the best of the silicon node's performance window, which is judged not just by game performance per watt but also by looking at the node's envelope in terms of voltage-frequency-performance.
Again, the silicon optimum is around 1V to 1.1V for this node shrink in this setup; that gives the 480 a 1266MHz boost and the 1060 around 2050MHz.
Both can be pushed north of that (easier to do with AMD, though there are some AIB-manufacturer BIOSes designed for Pascal to break Nvidia's 1.1V 'limitation'), but it has a notable effect on the envelope when looking at voltage-frequency-performance-power draw (leakage/waste); up to that point it is pretty linear for both manufacturers and for Polaris/Pascal.

As I also said, even if you downvolt the 480, it can still only match the power draw of the 1060 (at its full 2050MHz boost) when it is near the bottom of the silicon node's stable voltage spec, that being 0.8V.
So it is pretty clear Polaris is still not optimally efficient as an overall design, even allowing for product tier or the argument that they pushed it too high (which technically they did not, if staying within AMD's frequency range without OC; there is just no way around the fact of its 0.8V-to-1.15V performance envelope).
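(A first-order sketch of why that 0.8V floor matters, counting dynamic power only and ignoring the extra leakage savings, using the endpoints quoted above: the voltage term alone gives $(0.80/1.15)^2 \approx 0.48$, so going from 1.15V to 0.8V roughly halves dynamic power before you even count the frequency given up to get there.)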
As I say, if the reports about Vega having a TBP of 225W are correct, then they have found areas where the design could be improved.
But none of this is trivial.
    Cheers
     
  10. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
It would be better to use a mix of games rather than just one (and this one has some interesting extensions for AMD in Vulkan), otherwise someone can just post the best ones for Nvidia and make the same argument in their favour.
There are other DX12 games where it is not so clear-cut (yes, I agree Nvidia's Pascal improvements are questionable, and we have yet to see a ground-up DX12 development beyond AotS), such as Gears of War 4.
Hitman is appalling for Nvidia even in DX11 compared to AMD.
I would expect Vulkan to do well for AMD if the extensions are used, but this does not necessarily reflect how games will develop in DX12.
It swings depending upon the game and possibly the involvement of Nvidia/AMD/console-port support, and even in DX11 AMD can nearly match or beat Nvidia (albeit not often).
You will notice that Nvidia is now pretty competitive in DX12 AotS when, say, comparing the 1060 to the 480, especially at 1080p, where in the past the gap between peers was quite notable.

I guess Quantum Break is possibly a good example of the swing between DX11 and DX12 for Nvidia/AMD, but then we cannot say whether the DX12 implementation was ever optimised in any way for Nvidia, given the big performance hit around volumetric lighting that is less of an issue under DX11.
    Cheers
     
  11. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,398
    Likes Received:
    5,385
It was far more than 10%.

    http://www.anandtech.com/show/2977/...x-470-6-months-late-was-it-worth-the-wait-/19

And that's coming 6 months after the 5870 hit the market. The GTX 480 consumed roughly 60-100% more power than the 5870. I can't remember the site that measured power consumption through the PCIe slot and power connectors back then, which had more detailed power-usage breakdowns. In some games it was faster, in some it was slower.

    Fermi was just a really bad chip compared to AMD's cards at the time, unless you absolutely needed compute. Of course, back then the dialog from most Nvidia users on this very forum was that perf/watt wasn't important. So interesting how times have changed.

    It wasn't until Nvidia started to castrate compute on their consumer cards that they caught up to AMD in perf/watt again. And with the last generation the roles ended up reversed with Nvidia having far greater perf/watt.

So, while it may seem impossible for AMD to catch up or even surpass them, it's never impossible. It's certainly unlikely, and until they do it people shouldn't claim they will. But it's also not wise to say they can't do it. In general it's pretty rare for there to be a huge disparity between AMD (ATI) and Nvidia. The 9700 Pro versus the GeForce FX 5800 was one such occasion. The Geforce 8xxx series versus the ATI 2xxx series was another. The AMD 5xxx series versus the Geforce 4xx series was another. And now the Geforce 9xx and 10xx series versus the AMD parts. Otherwise things have generally been pretty similar between the two.

    Regards,
    SB
     
    Lightman likes this.
  12. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    10,112
    Likes Received:
    4,695
And that even depends on the type of compute. Cypress cards were absolutely killing it in Bitcoin mining back in the day. I remember when I sold my HD 5870 to get a GTX 580, and the money I got for the second-hand HD 5870 was almost enough to cover the second-hand GTX 580.
     
  13. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
I have no idea why we are discussing Fermi. Yes, AMD was earlier to market, faster, and used less power in 2009/10; today they are none of those. Does this instil confidence in Vega?
     
  14. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,398
    Likes Received:
    5,385
AMD was in the same position as they are now with the Radeon 2xxx series, which eventually led to the 4xxx series, which basically gained equal footing with Nvidia in perf/watt at far better perf/mm^2, while the 5xxx completely reversed how things were. Nvidia was in the same position as AMD is currently in with the Geforce 58xx and 4xx series, but eventually the 8xxx and 9xx series, respectively, completely reversed how things were.

The relevance here is that Nvidia and AMD (ATi) have generally been similar, and that both have swung at times from really good to really bad and back to really good.

Meaning that to discount Vega without having seen anything of it is unwise. Likewise, to hail Vega as a savior without having seen anything of it would be unwise. But it's good to keep in mind that radical changes can happen and have in fact happened in the past.

    Regards,
    SB
     
  15. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
However, AMD's 4x00 was a bit of a failure and problematic to bring up; the same can be said about Fermi GF100. At the moment I would not call Polaris an obviously problematic product, nor would I call Pascal a dud from NV that can easily be bettered. IMHO, when both have achieved their design goals, there have been no big reversals in the competition between them. In fact, I would even say that NV's execution has improved a lot since Fermi. I hope Vega is a success, but they surely have an uphill battle to fight.
     
    #15 seahawk, Oct 11, 2016
    Last edited: Oct 11, 2016
    DavidGraham likes this.
  16. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,292
    Location:
    Helsinki, Finland
Fermi and GCN had proper read & write L1 & L2 cache hierarchies. Previous GPUs had read-only special-purpose texture caches. You had to coalesce your compute shader reads & writes very carefully or your performance plummeted. Try non-coherent UAV indexing (loads and stores) on HD 5870, and you will see what I mean. Performance is just horrible. VLIW obviously also requires tricky code (pack stuff to lanes) to extract good ALU performance. I am glad Fermi and GCN made programmers' lives much easier. Too bad Nvidia still has separate constant buffer hardware (like HD 5870 had). Fortunately modern Nvidia GPUs do not pay as high a penalty (over constant buffers) for typed/raw/structured buffer accesses.
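For what it's worth, here is a minimal CUDA sketch of the access-pattern problem described above (CUDA rather than HLSL, and everything in it is illustrative, not from any shipped code): both kernels do the same arithmetic, but the second reads through an arbitrary index buffer, roughly the CUDA analogue of non-coherent UAV indexing. On a read-only texture-cache memory system of the HD 5870 era that pattern was catastrophic; on Fermi/GCN-style read/write cache hierarchies it is merely slower.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Fast path: consecutive lanes read consecutive addresses, so a warp's
// loads coalesce into a few wide memory transactions.
__global__ void read_coalesced(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;
}

// Slow path: each lane reads through an arbitrary index (analogous to
// non-coherent UAV loads), so a warp may need one transaction per lane.
__global__ void read_scattered(const float* in, const int* idx, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[idx[i]] * 2.0f;
}

int main() {
    const int n = 1 << 22;
    std::vector<float> h_in(n, 1.0f);
    std::vector<int>   h_idx(n);
    // Hash-style scatter: adjacent lanes land far apart in memory.
    for (int i = 0; i < n; ++i)
        h_idx[i] = (int)(((long long)i * 2654435761LL) % n);

    float *d_in, *d_out; int *d_idx;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMalloc(&d_idx, n * sizeof(int));
    cudaMemcpy(d_in,  h_in.data(),  n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_idx, h_idx.data(), n * sizeof(int),   cudaMemcpyHostToDevice);

    dim3 block(256), grid((n + block.x - 1) / block.x);
    read_coalesced<<<grid, block>>>(d_in, d_out, n);        // few transactions per warp
    read_scattered<<<grid, block>>>(d_in, d_idx, d_out, n); // many transactions per warp
    cudaDeviceSynchronize();
    printf("done; time the two kernels to see the gap\n");

    cudaFree(d_in); cudaFree(d_out); cudaFree(d_idx);
    return 0;
}
```

On hardware without a general read/write cache, every scattered lane turns into its own memory request, which is why the pre-Fermi/pre-GCN penalty was so brutal.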
     
    Pixel, Heinrich4 and Lightman like this.
  17. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    10,112
    Likes Received:
    4,695
    Yes, I don't doubt that Fermi was better for the greater part of compute applications. But for the specific case of Bitcoin mining (when Bitcoin mining was profitable through GPU), Cypress was king.
     
  18. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

Perf/watt:
The HD 4870 wasn't like that, and yeah, the HD 5870 did, correct. But even that wasn't as great as this; it was the price of those cards that really hurt nV. Fermi had problems out of the gate, hence the six-month delay. And I have always stated that if you EVER see more than one quarter's difference between card launches from either IHV, you'd better get ready for disappointment: something went wrong. But still, the 5870 series started to lose traction soon after the Fermi V2 release, in which those problems were somewhat rectified.

    Guess what Vega is.....

And it doesn't matter about tape-outs and all the other stuff, because in recent history both of these companies have ALWAYS launched within a quarter of each other, as they are prepared to do so. The only time they couldn't do it is when they knew they couldn't match up and it would hurt them.

PS: keep this in mind, Vega's tape-out was Q2 of this year, so why is it taking three or more quarters for it to come out? Why wasn't Vega on the same schedule as Polaris? Was AMD not interested in going into the performance or enthusiast segment? The performance segment is by far the largest by volume and overall profits.

Do we so soon forget the reasoning AMD gave for Polaris's launch (how about the R600, the FX series, Fermi V1, Fiji)? We now know why those took longer to come out: something didn't go right. Every time we see these companies having to give a reason for something delayed more than two quarters past a competitor's product, that reason is most likely BS; the underlying cause has been "we are F'ed".

Then you look at multiple design teams. Usually when you have multiple design teams, one team works on what is coming out soon and its iterations, and the second team works on future architectures that won't see the light of day for a while. Yet we see the same design team working on Vega and Polaris with a staggered release; that is something we have never seen before, if they are truly that different. We all know the most these companies can do when fast-tracking a project like this is to pull it up a quarter, that's it. They can't move mountains and push up the timetables of future products with this kind of complexity.

So let's say Vega was fast-tracked; that means since Polaris's release, AMD was not expecting to enter the performance market against nV for a year plus another quarter? Does that sound remotely possible, to give away so much money and an entire market segment for essentially a whole generation? That is a lot of money, around 5 billion dollars, to say "we are not interested, so we didn't plan for it". All the while they were so in tune with LLAPIs that they couldn't plan for future products that supported LLAPIs better than their competition? I see a disconnect there if that was the case.

One more thing to add: whenever either of the IHVs had a delay in current products, that never changed the timetables of future products, so we can't say Fiji's delay had something to do with Vega's delay, because they are not bound to one another.
     
    #18 Razor1, Oct 11, 2016
    Last edited: Oct 11, 2016
    swaaye and pharma like this.
  19. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

We know the developer didn't work with nV and Pascal for this, so why are you showing Doom Vulkan to me? And please pick a review with a later patch, not one that came out right off the bat; also pick a review that goes through different parts of the level, not in-game benchmark runs.....

Would be nice to see the full picture, wouldn't it?

Hitman ran like ass on nV hardware even in DX11; AMD was beating equivalent nV cards... Doesn't that speak to who worked with whom? In 3 out of 5 DX12 titles from Gaming Evolved we see that happening: DX11 paths running better on AMD hardware. I wasn't surprised by it at all; I expected it to happen. Wouldn't you? And expect this from GameWorks games too. So who has the bigger game-dev program? That is what you will see win out in the end. Just because AMD went stir-crazy with dev rel at the launch of DX12, helping devs create paths optimized for their cards, while forgetting to put the same dev-rel resources into DX11 at the same time or before, doesn't mean it's going to stay that way; nV is much too aggressive to let AMD continue with what they did. Yeah, there is a full DX12 GameWorks thing coming up soon. When it comes out, not sure; what it's all about, not sure; but I know something is coming.

The RX 460 is chump change next to the 1050, which is about to come out. Of the whole Polaris range, that was the one card that should have had the best reviews, but it was neutered and got the worst reviews. It's going to get trashed by the 1050.

If you need to compare the RX 460 to a two-year-old Maxwell 2 on a 28nm process to show the prowess of Polaris, which is what you just did, game over.
     
    #19 Razor1, Oct 11, 2016
    Last edited: Oct 11, 2016
    pharma likes this.
  20. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

Oddly enough, if you factor out the max perf/watt gains from the node itself, you get a Polaris that is worse than Hawaii on perf/watt. Outside of the front-end changes, Polaris really was just the node.

Yeah, having Doom running on Pascal in nV's Pascal presentation is not the same thing as sending Pascal to the developers and having them code a path for it. The developer even stated they hadn't done any work with Pascal when Doom was released.
     