Perf/watt/IHV man hours/posts *split*

Razor1

Veteran
We may just not have seen the changes AMD has implemented yet. GCN is an evolution, not a revolution. Maybe Vega will be a bigger change, brought about by Maxwell. I am sure AMD has been planning these chips for a few years now. It may take 3-4 years for a design to go from the drawing board to production. Or, for all we know, Vega is the end of GCN and Navi is a major change. It's hard to really tell.


True, but so far we haven't seen "great" changes in this specific category for 3 gens: the 290X, Fiji, Polaris.

In the meantime nV went from the 750 to the 9xx series, which was even better on perf/watt when you look at the performance segment, while the perf/watt dropped off on their mid-range and low end; then the same thing with Pascal, just a greater increase. Architecture-wise they are all very close.

Many times we have seen these companies bounce back from a bad product or a non-competitive product, but it never takes this long (pretty much by the next generation we can see the efforts being made to close the gap), and the changes were always something tangible, and actually easy for us to see too. This time the gap didn't close, it got wider.
 
But in the end it will not make much difference, because what you gain by reducing power draw is to some extent compensated by the reduced performance. For a sensible comparison you would either need to equalize the performance and measure the power draw, or equalize the power draw and measure the performance. It is quite pointless to say that a lower-performing chip which is undervolted and underclocked is equal in performance per watt to a chip performing 30% better and without any power saving optimisations.

Power consumption will drop far faster than performance if it means going from beyond the knee of the power curve to below the knee of the power curve. And quite drastically so as well.

Hence why Fury X -> Fury Nano isn't something that could be replicated on Nvidia hardware, as none of the Maxwell or Maxwell 2 cards, as far as I'm aware, were set beyond the knee of the power curve for reference cards.

And hence why the RX 480 sees a far greater increase in perf/watt when reducing voltage than the GTX 1060 does.
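For what it's worth, the usual first-order model is enough to see why: dynamic power scales roughly with f·V² while performance scales at best linearly with f, so pulling voltage and clock back from beyond the knee sheds power much faster than it sheds performance. A rough sketch; the two operating points below are invented for illustration (loosely Fury X-like and Nano-like), not measured figures:

Code:
// Toy first-order model: dynamic power ~ C*f*V^2, performance ~ f.
// Both operating points are assumed/illustrative, not measurements of any real card.
#include <cstdio>

int main() {
    // hypothetical "beyond the knee" point (Fury X-like)
    float f_hi = 1050.0f, v_hi = 1.20f;   // MHz, volts
    // hypothetical "below the knee" point (Nano-like)
    float f_lo = 900.0f,  v_lo = 1.00f;   // MHz, volts

    float perf  = f_lo / f_hi;                                   // ~0.86x performance
    float power = (f_lo * v_lo * v_lo) / (f_hi * v_hi * v_hi);   // ~0.60x dynamic power
    printf("perf kept: %.0f%%, power kept: %.0f%%, perf/W gain: %.0f%%\n",
           perf * 100.0f, power * 100.0f, (perf / power - 1.0f) * 100.0f);
    return 0;
}

On this crude model you keep roughly 86% of the performance for roughly 60% of the dynamic power, which is the shape of the Fury X -> Nano argument above.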

Well that all comes from the design of the chip, starting with the transistor layouts; if they haven't been able to do it for the past 3 gens, how can they possibly just do it now? This problem is not something new, or something caused by unforeseen issues in the new node, or even a problem with a new architecture (these have been modified architectures since the 7xxx series). AMD has had ample time and resources after seeing the 750 (Maxwell 1) to remedy this. With Polaris, which I think many expected to beat Maxwell 2 on perf/watt, I even stated I believed it would beat Maxwell 2's perf/watt handily, going by the information AMD gave at the first showing of Polaris.

The changes to Pascal's uarch gave it the extra clocks and changed its sweet spots for perf/watt, and they were low-level changes, something that took quite a bit of time (2+ years) to implement for something nV already had quite a bit of experience and success with.

Also, if we start looking at AMD vs nV chips (without HBM involved), the perf/watt sweet spot for nV is their performance chips, unlike AMD, where it is more traditionally their mid-range chips. nV changed the name of the game, and AMD is playing catch-up.

When you start seeing things like this over and over again, you gotta start wondering: is it a problem with the architecture, or is AMD missing something critical needed to make those changes?

Nvidia didn't change the name of the game; they just got better at it. People so quickly forget that AMD was as far ahead of Nvidia with regards to perf/watt with their 48xx and 58xx cards as Nvidia is now ahead with Maxwell and Pascal. Everything else in between was relatively similar between the two.

As well, had Fury X never launched and instead Fury Nano was the default product launched using that chip, that would have been the de facto perf/watt benchmark for Fiji. Instead, AMD felt the need to push Fiji far beyond its optimum operating range in order to attain a level of performance they felt was required in the market, using an API that couldn't exploit the capabilities of the card.

Which represents, again, different approaches by the two IHVs. Nvidia designed their hardware to get the most out of Direct X 11. AMD designed their hardware to get the most out of compute. Nvidia's was obviously the better choice when targeting the then-current API and generation of games. Smart, as that is what drives sales. While the AMD cards appear to be far better suited to the upcoming generation of games, that's not something that was going to sell their cards. But at least people that bought GCN cards will likely see their cards do better than the early competition (the 7xx series in particular appears like it's going to be quite bad).

Regards,
SB
 
Nvidia didn't change the name of the game; they just got better at it. People so quickly forget that AMD was as far ahead of Nvidia with regards to perf/watt with their 48xx and 58xx cards as Nvidia is now ahead with Maxwell 2 and Pascal. Everything else in between was relatively similar between the two.

As well, had Fury X never launched and instead Fury Nano was the default product launched using that chip, that would have been the de facto perf/watt benchmark for Fiji. Instead, AMD felt the need to push Fiji far beyond its optimum operating range in order to attain a level of performance they felt was required in the market, using an API that couldn't exploit the capabilities of the card.

Which represents, again, different approaches by the two IHVs. Nvidia designed their hardware to get the most out of Direct X 11. AMD designed their hardware to get the most out of compute. Nvidia's was obviously the better choice when targeting the then-current API and generation of games. Smart, as that is what drives sales. While the AMD cards appear to be far better suited to the upcoming generation of games, that's not something that was going to sell their cards. But at least people that bought GCN cards will likely see their cards do better than the early competition (the 7xx series in particular appears like it's going to be quite bad).

Regards,
SB

Nah, they changed the game on AMD. With the 48xx and 58xx series there was only around a 10% difference in perf/watt, if I remember correctly, for equivalent or close-to-equivalent cards, but the price is what killed nV; they could not match AMD's prices without taking a hit on margins, and that is exactly what happened.

When you are playing catch-up you are kinda at the mercy of your competitor; that is what happened with Fiji. If the 980 Ti or Titan X hadn't been released, they would have been sitting pretty with it. Nano would have had a cakewalk against the GTX 980, but that was the first generation nV introduced enthusiast-level cards after launching their performance cards, again forcing AMD's hand. Oddly enough, the 980 Ti kinda was unexpected, because why would nV cut their own margins with a card that is almost as capable as the Titan X but 65% of the price?
 
Reducing voltage is something that at least some RX 480s do not tolerate too well, though.

It is the same with the Nano; it looked to be the best at performance/W, but to be honest I am sure that a power-draw-optimized GM200 would still be ahead.
 
True, but so far we haven't seen "great" changes in this specific category for 3 gens: the 290X, Fiji, Polaris.

In the meantime nV went from the 750 to the 9xx series, which was even better on perf/watt when you look at the performance segment, while the perf/watt dropped off on their mid-range and low end; then the same thing with Pascal, just a greater increase. Architecture-wise they are all very close.

Many times we have seen these companies bounce back from a bad product or a non-competitive product, but it never takes this long (pretty much by the next generation we can see the efforts being made to close the gap), and the changes were always something tangible, and actually easy for us to see too. This time the gap didn't close, it got wider.

Did it? AMD is owning the low end at this point in time. The RX 460 is faster than anything in its price range from Nvidia, and even above it. The RX 460 at $110 is faster than the GTX 950, consumes less power and costs less. The 470 is in a similar boat. What's leading the charge for AMD is DX12 titles. These cards light up on games running it and Vulkan.

In Doom under Vulkan the RX 470 will tie the GTX 1060 6GB card @ 1080p.

[attached benchmark chart: Doom (Vulkan), RX 470 vs GTX 1060 @ 1080p]

And in Hitman DX12 @ 1080p it's tied again, with only Tomb Raider favoring Nvidia in DX12 mode.

Nvidia dominates DX11, but when it comes to DX12 that isn't the case, and slowly but surely more titles will run DX12 as time goes on and the benchmark landscape will phase out DX11.
 
Which assumes that NV can't do better; the other option is that so far DX12/Vulkan optimisation was not highly important, as those APIs are not that important for the majority of users. And the 1050 is also just around the corner and will compete with the RX 460. I think everybody wants AMD to do well, but this cherry-picking is not really a good idea.
 
Which represents, again, different approaches by the two IHVs. Nvidia designed their hardware to get the most out of Direct X 11. AMD designed their hardware to get the most out of compute. Nvidia's was obviously the better choice when targeting the then-current API and generation of games. Smart, as that is what drives sales. While the AMD cards appear to be far better suited to the upcoming generation of games, that's not something that was going to sell their cards. But at least people that bought GCN cards will likely see their cards do better than the early competition (the 7xx series in particular appears like it's going to be quite bad).
Fermi was a highly compute-centric GPU. Nvidia announced the compute (Tesla) cards first, before the consumer Geforce models. Fermi was hot and had worse perf/watt than AMD. People tend to forget that before Kepler and Maxwell, Nvidia wasn't that great in perf/watt. I still remember the Geforce FX 5800 (their first DX9 GPU) and the Geforce GTX 480 (their first DX11 GPU). Both had heat problems and very loud fans.

Kepler and Maxwell were both huge perf/watt increments. Consumer (GTX) Pascal seems to be mostly a die-shrunk Maxwell (a big shrink -> a big improvement). Nvidia was already in the lead and the die shrink seemed to suit their architecture very well (Maxwell already had quite a high clock ceiling).
 
While I don't disagree with the above, their first DX10 GPU didn't inherit the original dustbuster problems either. Yes, many tend to forget, but the message should be that s**t can happen at all IHVs once in a while, and it's not API-related either IMO.

I'll dare another perspective: did AMD fail to expect NV to reach such high frequencies with Pascal until it was too late to react? IMHO yes. Did AMD have the time and resources to swing Vega to an even higher efficiency than originally projected? IMHO no. And yes, I'd love to be proven wrong on the latter.
 
That wasn't his intent as far as I could tell. What he is pointing out is that AMD with Polaris has put the 480 product significantly beyond the knee of the power curve (like Fury X) with the base voltage and frequency used for those cards. Hence the perf/watt is much worse than the chip is capable of. But it was something they chose to do to attain X level of performance, which could not be achieved while staying at or below the knee of the power curve.

On the flip side Nvidia hasn't had to do that to achieve X level of performance and is able to keep the 1060 at or below the knee of the power curve.

In other words, the 1060 product is operating at closer to the optimum perf/watt point for the chip that is being used. Meanwhile the 480 product is not even close to operating at the optimum perf/watt for the chip being used.

None of that is saying that Polaris 10 is better or more power efficient than the 1060. Only that the implementation of the respective cards is different. One IHV didn't need to push the chip beyond its optimal operating range (with regards to power) while the other did.

Another way to think of it is that Polaris 10 is being used in a product tier that is not the one best suited for the chip (at least for the majority of the chips if we assume the high voltage set is meant to salvage as many dies as possible for 480). However, AMD didn't have much of a choice as Polaris 10 and 11 were the only chips they were introducing into the market this year. Complicating that was their marketing promising that Polaris 10 would be VR capable for the masses leading up to its launch, hence setting the performance target it would have to reach regardless of whether that performance target ended up being optimal for Polaris 10.

Regards,
SB
It still has to improve by 33%, not Pascal becoming 24% worse :)
And this is on a small GPU die.

The 480/Polaris still has problems, as I keep saying, with dynamic power consumption/leakage/waste/thermals (which is complicated because you also need to consider die-transistor-function position and localised hot spots/dissipation), while also not being able to use the best of the silicon-node performance window, which is judged not just by game performance per watt but also by looking at the node's envelope in terms of voltage-frequency-performance.
Again, the silicon optimum is around 1V to 1.1V for this node shrink in this setup; that gives the 480 a 1266MHz boost and the 1060 around 2050MHz.
Both can be pushed north of that (easier to do with AMD, but there are some AIB manufacturer BIOSes designed for Pascal to break the 1.1V 'limitation' from Nvidia), but it has a notable effect on the performance envelope when looking at voltage-frequency-performance-power draw (leakage/waste); up to then it is pretty linear for both manufacturers and for Polaris/Pascal.

As I also said, even if you downvolt the 480, it can still only match the power draw of the 1060 (at full 2050MHz boost) when it is near the bottom of the stable voltage spec of the silicon node, that of 0.8V.
So it is pretty clear Polaris is still not efficiency-optimal in its overall design, even allowing for product tier/them pushing it too high (which technically they did not, if staying within the AMD frequency range without OC; there is just no way around the fact of its 0.8V to 1.15V performance envelope).
As I say, if the reports are correct about Vega having a TBP of 225W, then they found the areas of the design where improvements could be made.
None of this is trivial, though.
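As a back-of-the-envelope check on that envelope (assuming the usual f·V² scaling for dynamic power, and ignoring leakage and non-GPU board power, which this crude model can't capture):

Code:
// Crude estimate only: dynamic power ~ f * V^2, leakage ignored.
// 0.8V / 1.15V are the envelope figures quoted above; the clock is held constant.
#include <cstdio>

int main() {
    float v_floor = 0.80f, v_ceiling = 1.15f;
    float ratio = (v_floor * v_floor) / (v_ceiling * v_ceiling);  // ~0.48
    printf("dynamic power at 0.8V vs 1.15V (same clock): ~%.0f%%\n", ratio * 100.0f);
    return 0;
}

In other words, on this crude model there is roughly a factor of two in dynamic power between the floor and the ceiling of that voltage range.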
Cheers
 


Did it? AMD is owning the low end at this point in time. The RX 460 is faster than anything in its price range from Nvidia, and even above it. The RX 460 at $110 is faster than the GTX 950, consumes less power and costs less. The 470 is in a similar boat. What's leading the charge for AMD is DX12 titles. These cards light up on games running it and Vulkan.

In Doom under Vulkan the RX 470 will tie the GTX 1060 6GB card @ 1080p.

[attached benchmark chart: Doom (Vulkan), RX 470 vs GTX 1060 @ 1080p]

And in Hitman DX12 @ 1080p it's tied again, with only Tomb Raider favoring Nvidia in DX12 mode.

Nvidia dominates DX11, but when it comes to DX12 that isn't the case, and slowly but surely more titles will run DX12 as time goes on and the benchmark landscape will phase out DX11.
It would be better to use a mix of games rather than just one (and this one has some interesting extensions for AMD in Vulkan), otherwise someone can just post the best ones for Nvidia and make the same argument but supporting them.
There are other DX12 games where it is not so clear cut (yeah, agreed Nvidia's Pascal improvements are questionable and we are yet to see a ground-up DX12 development beyond AoTS), such as Gears of War 4.
Hitman is appalling for Nvidia even in DX11 compared to AMD.
I would expect Vulkan to do well for AMD if the extensions are used, but this does not necessarily reflect development of games in DX12.
It swings depending upon the game and possibly the involvement of Nvidia/AMD/console port support, and even in DX11 AMD can nearly match or beat Nvidia (albeit not often).
You will notice that now Nvidia is pretty competitive in DX12 AOTS when say comparing 1060 to the 480 especially at 1080p, where in the past the gap was quite notable between peers.

I guess Quantum Break is possibly a good example of the swing between DX11 and DX12 for Nvidia/AMD, but then we cannot say if the DX12 implementation was ever optimised in any way for Nvidia due to the big performance hit around volumetric lighting that is less of an issue under DX11.
Cheers
 
Nah, they changed the game on AMD. With the 48xx and 58xx series there was only around a 10% difference in perf/watt, if I remember correctly, for equivalent or close-to-equivalent cards, but the price is what killed nV; they could not match AMD's prices without taking a hit on margins, and that is exactly what happened.

When you are playing catch-up you are kinda at the mercy of your competitor; that is what happened with Fiji. If the 980 Ti or Titan X hadn't been released, they would have been sitting pretty with it. Nano would have had a cakewalk against the GTX 980, but that was the first generation nV introduced enthusiast-level cards after launching their performance cards, again forcing AMD's hand. Oddly enough, the 980 Ti kinda was unexpected, because why would nV cut their own margins with a card that is almost as capable as the Titan X but 65% of the price?

It was far more than 10%

http://www.anandtech.com/show/2977/...x-470-6-months-late-was-it-worth-the-wait-/19

And that's coming 6 months after the 5870 hit the market. The GTX 480 consumed roughly 60-100% more power than the 5870. I can't remember the site that measured power consumption through the PCIE slot and power connectors back then, which had more detailed power usage breakdowns. In some games it was faster, in some it was slower.

Fermi was just a really bad chip compared to AMD's cards at the time, unless you absolutely needed compute. Of course, back then the dialog from most Nvidia users on this very forum was that perf/watt wasn't important. So interesting how times have changed.

It wasn't until Nvidia started to castrate compute on their consumer cards that they caught up to AMD in perf/watt again. And with the last generation the roles ended up reversed with Nvidia having far greater perf/watt.

So, while it may seem impossible for AMD to catch up or even surpass them, it's never impossible. It's unlikely certainly, and until they do it people shouldn't claim they will do it. But it's also not wise to say they can't do it. In general it's pretty rare for there to be a huge disparity between AMD (ATI) and Nvidia. The 9700 Pro and the 5800 were one such occasion. The Geforce 8xxx series and the ATI 2xxx series was another one. The AMD 5xxx series and the Geforce 4xx series another one. And now the Geforce 9xx series and 10xx series versus the AMD parts. Otherwise things have generally been pretty similar between the two.

Regards,
SB
 
Fermi was just a really bad chip compared to AMD's cards at the time, unless you absolutely needed compute.

And that even depends on the type of compute. Cypress cards were absolutely killing it in Bitcoin mining back in the day. I remember when I sold my HD 5870 to get a GTX 580, and the money I got for the HD 5870 second-hand was almost enough to cover the second-hand GTX 580.
 
I have no idea why we are discussing Fermi. Yes, AMD was earlier to the market, faster and used less power in 2009/10, today they are neither. Does this instil confidence in Vega?
 
I have no idea why we are discussing Fermi. Yes, AMD was earlier to the market, faster and used less power in 2009/10, today they are neither. Does this instil confidence in Vega?

AMD was in the same position as they are now with the Radeon 2xxx series, which just happened to eventually lead to the 4xxx series, which basically gained equal footing with Nvidia in perf/watt but at far better perf/mm^2, while the 5xxx completely reversed how things were. Nvidia was in the same position as AMD is currently in with the Geforce 58xx and 4xx series, but eventually the 8xxx and 9xx series, respectively, completely reversed how things were.

The relevance here is that Nvidia and AMD (ATi) have generally been similar, and that both have swung at times from being really good to really bad and back to really good.

Meaning that to discount Vega without having seen anything about Vega is unwise. Likewise, to hail Vega as a savior without having seen anything from Vega would be unwise. But it's good to keep in mind that radical changes can happen and have in fact happened in the past.

Regards,
SB
 
However, the AMD 4X00 was a bit of a failure and problematic to bring up; the same can be said about Fermi GF100. At the moment I would not call Polaris an obviously problematic product, nor would I call Pascal a dud from NV that can easily be bettered. Imho when both have achieved their design goals there were no big reversals in the competition between them. In fact I would even say that the execution by NV has improved a lot since Fermi. I hope Vega is a success, but they surely have to fight an uphill battle.
 
And that even depends on the type of compute. Cypress cards were absolutely killing it in Bitcoin mining back in the day. I remember when I sold my HD 5870 to get a GTX 580 and the money I got from the HD5870 2nd-hand was almost enough to cover for the 2nd-hand GTX 580.
Fermi and GCN had proper read & write L1 & L2 cache hierarchies. Previous GPUs had read-only, special-purpose texture caches. You had to coalesce your compute shader reads & writes very carefully or your performance plummeted. Try non-coherent UAV indexing (loads and stores) on an HD 5870, and you will see what I mean. Performance is just horrible. VLIW obviously also requires tricky code (pack stuff to lanes) to extract good ALU performance. I am glad Fermi and GCN made programmers' lives much easier. Too bad Nvidia still has separate constant buffer hardware (like the HD 5870 had). Fortunately modern Nvidia GPUs do not pay as high a penalty (over constant buffers) for typed/raw/structured buffer accesses.
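A rough CUDA sketch of the access-pattern point; the post above is about D3D compute shaders and UAVs, but the same coalesced-vs-scattered idea is easiest to show here, and the kernel names and the idx table are made up for illustration:

Code:
// Coalesced vs. scattered global memory access. Without a general read/write
// cache hierarchy (pre-Fermi / pre-GCN style hardware) the scattered pattern
// is far more punishing; L1/L2 caches soften the gap but don't remove it.
__global__ void copy_coalesced(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];        // adjacent threads hit adjacent addresses: few wide transactions
}

__global__ void copy_scattered(const float* in, float* out, const int* idx, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[idx[i]];   // data-dependent ("non-coherent") indexing: one warp's loads can
                               // land in many different cache lines / memory transactions
}

The second kernel is the analogue of the "non-coherent UAV indexing" case described above.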
 
Fermi and GCN had proper read & write L1 & L2 cache hierarchies. Previous GPUs had read-only, special-purpose texture caches. You had to coalesce your compute shader reads & writes very carefully or your performance plummeted. Try non-coherent UAV indexing (loads and stores) on an HD 5870, and you will see what I mean. Performance is just horrible. VLIW obviously also requires tricky code (pack stuff to lanes) to extract good ALU performance. I am glad Fermi and GCN made programmers' lives much easier. Too bad Nvidia still has separate constant buffer hardware (like the HD 5870 had). Fortunately modern Nvidia GPUs do not pay as high a penalty (over constant buffers) for typed/raw/structured buffer accesses.

Yes, I don't doubt that Fermi was better for the greater part of compute applications. But for the specific case of Bitcoin mining (when Bitcoin mining was profitable through GPU), Cypress was king.
 
It was far more than 10%

http://www.anandtech.com/show/2977/...x-470-6-months-late-was-it-worth-the-wait-/19

And that's coming 6 months after the 5870 hit the market. The GTX 480 consumed roughly 60-100% more power than the 5870. I can't remember the site that measured power consumption through the PCIE slot and power connectors back then, which had more detailed power usage breakdowns. In some games it was faster, in some it was slower.

Fermi was just a really bad chip compared to AMD's cards at the time, unless you absolutely needed compute. Of course, back then the dialog from most Nvidia users on this very forum was that perf/watt wasn't important. So interesting how times have changed.

It wasn't until Nvidia started to castrate compute on their consumer cards that they caught up to AMD in perf/watt again. And with the last generation the roles ended up reversed with Nvidia having far greater perf/watt.

So, while it may seem impossible for AMD to catch up or even surpass them, it's never impossible. It's unlikely certainly, and until they do it people shouldn't claim they will do it. But it's also not wise to say they can't do it. In general it's pretty rare for there to be a huge disparity between AMD (ATI) and Nvidia. The 9700 Pro and the 5800 were one such occasion. The Geforce 8xxx series and the ATI 2xxx series was another one. The AMD 5xxx series and the Geforce 4xx series another one. And now the Geforce 9xx series and 10xx series versus the AMD parts. Otherwise things have generally been pretty similar between the two.

Regards,
SB


Perf/watt:
The HD 4870 wasn't like that, and yeah, the HD 5870 did, correct. But even that wasn't as great as this; it was the price of those cards that really hurt nV, and Fermi had problems out of the gate, hence the 6-month delay. And I have always stated that if you EVER see more than one quarter's difference between launches of cards from either IHV, you better get ready for disappointment, something went wrong. But still, the 5870 series started to lose traction soon after Fermi V2's release, which somewhat rectified those problems.

Guess what Vega is.....

And it doesn't matter about tape-outs and all the other stuff, because in recent history both of these companies have ALWAYS launched close to each other, within a quarter, as they are prepared to do so. The only time they couldn't do it is when they knew they couldn't match up with something and it would hurt them.

PS: keep this in mind, tape-out of Vega was Q2 of this year, so why is it taking 3 or more quarters for it to come out? Why wasn't Vega also on the same time schedule as Polaris? Was AMD not interested in going into the performance or enthusiast segment? The performance segment is by far the largest segment by volume and overall profits.

Do we so soon forget the reasoning AMD gave for Polaris's launch (how about the R600, the FX series, Fermi V1, Fiji)? We now know why those took longer to come out: something didn't go right. Every time we see these companies have to give a reason for something that is delayed more than 2 quarters behind a competitor's product, that reason is most likely BS; the underlying cause has been "we are F'ed".

Then you look at multiple design teams. Usually when you have multiple design teams you have one team working on what is coming out soon and its iterations, and a second team working on future architectures which won't see the light of day for a while. Yet we see the same design team working on Vega and Polaris with a staggered release; that is something we have never seen before if they are truly that different. We all know the most these companies can do when fast-tracking a project like this is pull it in by a quarter, that's it. They can't move mountains and push up timetables of future products with this kind of complexity.

So let's say Vega was fast-tracked; that means that after Polaris's release, AMD was not expecting to come into the performance market against nV for a year plus another quarter? Does that sound remotely possible, to give away so much money and an entire market segment for essentially a whole generation? That is a lot of money, around 5 billion dollars, to say "we are not interested, so we didn't plan for it". All the while they were so in tune with LLAPIs that they couldn't plan for future products that supported LLAPIs better than their competition? I see a disconnect there if that was the case.

One more thing to add: whenever either of the IHVs had a delay in current products, that never changed the timetables of future products, so we can't say Fiji's delay had something to do with Vega's delay, because they are not bound to one another.
 


Did it? AMD is owning the low end at this point in time. The RX 460 is faster than anything in its price range from Nvidia, and even above it. The RX 460 at $110 is faster than the GTX 950, consumes less power and costs less. The 470 is in a similar boat. What's leading the charge for AMD is DX12 titles. These cards light up on games running it and Vulkan.

In Doom under Vulkan the RX 470 will tie the GTX 1060 6GB card @ 1080p.

[attached benchmark chart: Doom (Vulkan), RX 470 vs GTX 1060 @ 1080p]

And in Hitman DX12 @ 1080p it's tied again, with only Tomb Raider favoring Nvidia in DX12 mode.

Nvidia dominates DX11, but when it comes to DX12 that isn't the case, and slowly but surely more titles will run DX12 as time goes on and the benchmark landscape will phase out DX11.


We know the developer didn't work with nV and Pascal for this, so why are you showing Doom Vulkan to me? And also, please pick a review with a later patch, not one of the ones that came out right off the bat; also pick a review that goes through different parts of the level, not in-game benchmark runs.....

It would be nice to see the full picture, wouldn't it?

Hitman ran like ass on nV hardware even in DX11; AMD was beating equivalent nV cards... Doesn't that speak to who worked with whom? In 3 out of 5 DX12 titles from Gaming Evolved we see that happening, DX11 paths running better on AMD hardware. I wasn't surprised by it at all. I expected that to happen. Wouldn't you? And expect this to happen with Gameworks games too. So who has the bigger game dev program? That is what you will see win out in the end. Just because AMD went stir crazy with dev rel at the launch of DX12, and helped devs create paths optimized for their cards, and forgot to put the same dev rel resources into DX11 at the same time or before, doesn't mean it's going to stay the same; nV is much too aggressive to let AMD continue with what they did. Yeah, there is a full DX12 Gameworks thing coming up soon. When it comes out, not sure; what it is all about, not sure; but I know there is something coming.

The RX 460 is chump change to the 1050 which is about to come out. Out of all the Polaris range, that was the one card that should have had the best reviews, but it was neutered and got the worst reviews. It's going to get trashed by the 1050.

If you need to compare the RX 460 to a 2-year-old Maxwell 2 on a 28nm process to show the prowess of Polaris, which is what you just did, game over.
 
What is weird is how AMD kept talking about Polaris as the largest step in many years through its marketing campaign (improved IPC, improved geometry, improved color compression, etc.), whereas Vega kept having only "HBM2" in its description. To me, this suggested that Polaris would be the largest architectural leap since GCN1 and Vega would be little more than larger Polaris + HBM2.

Fast-forward to today and it turns out the ISA didn't even change in any meaningful way. We have the OpenCL drivers calling Polaris "GFX8.1", a reportedly small step up from Tonga/Fiji's "GFX8", while Vega is "GFX9". And we have Anandtech not bothering to write a full Polaris review or even an architecture analysis, with the claim that there's not much to be said about architectural differences from prior GCN3 chips.



I get that AMD's marketing team wanted to get people hyped up for Polaris (and arguably did so in excess), but what I don't get is how Vega is being kept in an awkward silence, especially considering how Vega cards won't cannibalize Polaris cards, because they reportedly don't belong to the same segment.




AFAIK, Nvidia was the first company to ever show Doom Vulkan to the general public, and they did it running on Pascal no less.


Oddly enough, if you factor out the max perf/watt changes for the node, you get a Polaris that is worse than Hawaii when you look at perf/watt. Outside of the front-end changes, Polaris really was just the node.

Yeah, having Doom running on Pascal in nV's Pascal presentation is not the same thing as sending Pascal to the developers and having them code a path for it. The developer even stated they hadn't done any work with Pascal when Doom was released.
 