AMD: Speculation, Rumors, and Discussion (Archive)

Vega is a family of GPUs, so for example Vega 10 launching in February and Vega 11 in May could conceivably fit their wording.
Of course. It could, yes, but my interpretation is that if they thought Q1 was likely, they would have reason to say that outright, seeing as Q1 is sooner than Q2, and sooner >>>>> later for AMD.
 
It would appear that both Vega and Volta are new compute architectures (though with how broken and lacking in D3D12 features Polaris is, it would seem that Vega needs to be more than just a new compute architecture).
[my bold]
How so?
 
I think some folks mean that AMD's verbiage is implicitly suggesting that a Q1 release (be it little Vega, big Vega, or both) is unlikely.

Yes, but they have two new chips coming up. If one is coming in Q1 and another in Q2, then the right sentence to aggregate both launches would be "Vega will launch in H1 2017".
I get that not mentioning "Q1" has put many people on edge (especially those who seem super happy about it), but that's how I would write a sentence to encompass both.
That same slide also has that "AMD is only one of two companies in the world" sentence instead of what should be "is one of only two companies", so whoever wrote those points doesn't seem to be super savvy in English.



Of course, AMD could use little Vega right about now, because GP104 prices are hilariously unchecked at the moment. So it would definitely benefit AMD to have some little Vega cards on the market ASAP, e.g. Q1 2017. The need is 1000% there and we would all be happy to see it happen, but their verbiage doesn't inspire confidence for a Q1 release.
This couldn't be more true.
I think I mentioned this before, but it needs to be brought up again. Here are the release prices for nvidia's ~300mm^2 Gxx04 parts over the last 5 generations:

GTX 460: GF104, 330mm^2 chip at 40nm, 256bit GDDR5: $229 MSRP in July 2010.
GTX 560 Ti: GF114, 332mm^2 chip at 40nm, 256bit GDDR5: $249 MSRP in January 2011.
GTX 680: GK104, 294mm^2 chip at 28nm, 256bit GDDR5: $500 MSRP in March 2012.
GTX 980: GM204, 398mm^2 chip at 28nm, 256bit GDDR5: $550 MSRP in September 2014.
GTX 1080: GP104, 314mm^2 chip at 16FF+, 256bit GDDR5X: $599 MSRP ($699 for the Founders Edition) in May 2016 (though actual prices went well over $749 for a long time after release; IDK how they are right now)


This is the direct result of crumbling competition. The launch MSRP for cards built around these ~300mm^2 chips has pretty much tripled over the last 6 years.
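For what it's worth, the "pretty much tripled" claim is easy to sanity-check from the launch prices listed above. A minimal sketch (using the Founders Edition price for the GTX 1080, since that's what the cards actually sold at for months):

[CODE=python]
# Launch prices as listed above; nothing here is inflation-adjusted, it just
# computes the multiple vs. the GTX 460 and the implied yearly growth rate.
prices = [
    ("GTX 460",    2010, 229),
    ("GTX 560 Ti", 2011, 249),
    ("GTX 680",    2012, 499),
    ("GTX 980",    2014, 549),
    ("GTX 1080",   2016, 699),  # Founders Edition; partner-card MSRP was $599
]

base_name, base_year, base_price = prices[0]
for name, year, price in prices:
    print(f"{name} ({year}): ${price} -> {price / base_price:.2f}x the {base_name}")

years = prices[-1][1] - base_year
growth = (prices[-1][2] / base_price) ** (1 / years) - 1
print(f"Implied average launch-price growth: {growth * 100:.1f}% per year")
[/CODE]

That works out to roughly 3x (about 2.6x if you use the $599 partner MSRP), or around 20% per year.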
I'm aware that the price per unit of die area has gone way up between 40nm and 16FF+, but it's definitely not a $200-per-chip difference; it's probably not even half of that. Same goes for the increasingly faster GDDR5 and the later adoption of GDDR5X. The GTX 460 was definitely not being sold at a loss, and nvidia is probably just making >3 times more money per chip/card than they were in 2010. Which is great for nvidia and nvidia's shareholders, but terrible for everyone else.
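To put a rough number on that, here's a standard dies-per-wafer estimate with a simple Poisson yield model. The wafer prices and defect densities are my own guesses (foundries don't publish them), so treat this as an order-of-magnitude sketch, not real cost data:

[CODE=python]
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common approximation for gross die candidates on a round wafer."""
    r = wafer_diameter_mm / 2
    return (math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_price_usd, defects_per_cm2):
    """Poisson yield model: yield = exp(-D0 * A)."""
    yield_rate = math.exp(-defects_per_cm2 * die_area_mm2 / 100)
    return wafer_price_usd / (dies_per_wafer(die_area_mm2) * yield_rate)

# Guessed inputs: mature 40nm wafer vs. early 16FF+ wafer.
gf104 = cost_per_good_die(die_area_mm2=332, wafer_price_usd=3000, defects_per_cm2=0.10)
gp104 = cost_per_good_die(die_area_mm2=314, wafer_price_usd=7000, defects_per_cm2=0.15)
print(f"~${gf104:.0f} vs ~${gp104:.0f} per good die -> a difference of roughly ${gp104 - gf104:.0f}")
[/CODE]

Even with the 16FF+ wafer assumed at more than double the 40nm price, the per-chip delta comes out in the tens of dollars, nowhere near $200.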

AMD made the terrible decision of pricing the first HD7970 at $550 to try to keep up with nvidia's pricing. Dave Baumann at the time said it was because of brand value. History has proven them utterly wrong, AFAIK. You only get to charge more than your direct competitor when your audience believes you have an undeniable advantage over them, which was not the case with the HD7970 vs. the GTX 680 back in 2012. nvidia has been very successful at doing exactly that for the past 4 years, though.
 
You can't compare die sizes to cost across different nodes, especially new nodes. We know Polaris 10 cards have margins similar to older-gen cards even though they have much smaller GPUs. As you stated, cost is a function of functional chips per wafer, and wafer costs and design costs on the new FinFET nodes are double.

Now, on pricing: AMD has been pricing competitively when you look at performance, but not when it comes to ALL metrics, specifically perf/watt, and that is what hurt them in the recent past. Dropping prices after the fact doesn't help, as by then buyers have already bought something else.

And we can see nV is making more money per GPU/card sold, but not by a factor of 3. Just look at net margins: they have increased by more like 15% over the past 4 years.

You also have to factor in how much the performance segment has grown in those 4 years: it more than doubled and is now equal in volume to the mainstream and value tiers combined, so the effect of margins per GPU is "exaggerated" even more. That 15% increase in net margins isn't entirely about selling the same tier of chips for more, and that has to be accounted for as well. This is purely for desktop discrete, mind you; nV's net margin figures cover the entire stack of GPU products, including professional, and with the recent quarterly numbers you need to remove the high-margin Tesla units for neural nets, which weren't there 4 years back.

I would put the increase in overall margins closer to 10% once you factor those in (without the shift from mainstream + value to performance); if we factor that in too, I wouldn't be surprised if it's only 5%. The only reason nV hasn't been able to increase margins in the mainstream and value segments is that those buyers are more price conscious, so if nV did, they would automatically choose a competing product with similar performance at a lower cost.
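The shape of that argument is easier to see as a toy decomposition. Only the ~15% headline figure comes from the paragraph above; the split between the two adjustments is an illustrative guess, not anything from nV's actual financials:

[CODE=python]
# Toy decomposition of the margin argument above (the splits are guesses).
reported_gain = 15.0       # rough increase in net margin over ~4 years, as cited above
tesla_pro_share = 5.0      # guess: portion attributable to high-margin Tesla/professional sales
segment_shift_share = 5.0  # guess: portion attributable to buyers moving up to the performance tier

like_for_like = reported_gain - tesla_pro_share - segment_shift_share
print(f"Gain left over from pricing the same tier of chip higher: ~{like_for_like:.0f}%")
[/CODE]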

Going back to Vega and the lack of competition: nV is taking advantage of the lack of competition, but only at the enthusiast level. With the current FE debacle, it's the AIBs who are taking advantage of it. nV sells their GPUs and memory to partners at a set rate; they don't get anything extra out of the $599 MSRP of the 1080 or the $379 MSRP of the 1070 (FE notwithstanding). So in the end nV doesn't make much more (there could be a little, but not much) than the increase from last-gen performance to this-gen performance, which covers the increased wafer costs plus the R&D cost of the new node.
 
Meanwhile my friend Michele is working on this comparison:

[attached image]

Please, link us to the comparison as soon as available. Thanks.
 
I know that's how things are "supposed" to work, but history has shown us that it doesn't always work that way. For example, HD DVD was developed by the DVD Forum and was supposed to be the "official" successor to DVD, but Sony managed to make Blu-ray succeed despite its being the "odd duck". Meanwhile, Nvidia is technically the "odd duck", but they own something like 80% of the consumer graphics market and they have tremendous mindshare in the pro market as well. I'm not saying that G-Sync will succeed, only that it's not a done deal that it won't (and vice versa).

Big difference: the Blu-ray standard (notice: standard) was always meant to be licensable. G-Sync, CUDA, and PhysX are not licensable. They are completely locked down by Nvidia. The closest thing to Blu-ray would be Adaptive-Sync.

Even formats like Sony's Betamax (the competitor to VHS), MiniDisc, or Memory Stick were meant to be licensable, but in those cases a more open standard gained popularity. G-Sync, CUDA, and PhysX are in a similar vein to 3dfx's Glide, which Nvidia hated and campaigned against heavily. But that was back when Nvidia embraced and promoted more open standards. The Nvidia of the late 90s and early 2000s is not the Nvidia of today.

Regards,
SB
 
Although I agree with what you are saying, I just wanted to add: they are in the driver's seat, and AMD is the only company with even a remote possibility of pushing nV, and as recent circumstances show, they haven't been able to. So nV doesn't need to change their current tactics in favor of open standards.

The only major benefit of open standards is the reduction in cost of production, but ya still need solid management behind those standards and solid hardware to help propel the use of those standards.
 
The only major benefit of open standards is the reduction in cost of production, but ya still need solid management behind those standards and solid hardware to help propel the use of those standards.

So the benefits for consumers are not worth considering?
 
In this specific example they don't even cater to the end user, because the end user isn't the one integrating these libraries; the developers are. So they have to cater to the developers, who are the real consumers here, and both AMD and nV, with their different approaches, cater to developers in similar ways: time saved, money saved, etc. nV has been more effective at this with GameWorks, though, because they have a more "complete" set.

Most end users are running one or the other IHV's card, and as of today nV still has a commanding lead on the PC front. So when devs opt for GameWorks, they are doing it for that reason: money, time, the ability to sell to more people. It's all about money for them in the end, and with end users buying more nV products, it's bound to happen.

So AMD has to gain marketshare back on the PC front; then devs will pay more attention to GPUOpen, and hopefully it will be as robust as GameWorks by the time AMD has enough marketshare to make it viable for developers to switch over. Then we'll have a level playing field; until then I expect the same things we saw in the past.

These things don't happen overnight, and for GPUOpen to get to where GameWorks is, being open source, it will take longer. That's just the crux of being open source: more people to do the work, but without set schedules, so it takes longer. It might come out better, but that depends on the management of the different projects.
 
Nope, but I am thinking that isn't on their top priority list, since the new consoles won't have them either...

If it does kudos.
 
Vega is now 1H '17. :( ...Which means, NOT first quarter.

Lucky AMD that Titan X is a cut-down GPU. I don't buy cut-down graphics cards, so there's still a chance they'll get to sell me something, even though it will be painful to potentially wait until June next year.
I used to be the same way but eventually I caved when the price for the non-cutdown parts got too outrageous. Usually the cutdown parts are faaaar better value these days.
 
Performance per watt and mm² fail.

Lack of conservative rasterisation and raster ordered views, for D3D12.
Do people not understand how important conservative raster is? It seems glossed over like D3D11 vs 11.1 or something.

The upgraded rasterizer in Polaris is the most significant change since Tahiti and more than welcome, but it's no wonder AMD is so far behind at this point. Anyhow, I am glad NVIDIA isn't shipping a 16nm Fermi, so with price/performance (taking into account the heat in my room) being equal, I reward them with my dollars.

Also, something people never seem to consider: GTX 970 vs. R9 290: it isn't just a ~150W difference on the power bill if you live in a hot place, because your AC must also compensate for the added heat. If you game a lot you either get hotter or pay more with AMD; maybe only a few dollars a month, but I keep my cards for a couple of years. And if your house has poor HVAC you may just be hot with no way to compensate! 300W is a lot of heat.
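For a rough sense of scale, here's the arithmetic with assumed gaming hours, electricity price and AC efficiency (all placeholders, adjust for your own situation):

[CODE=python]
# Back-of-envelope cost of an extra ~150 W of GPU draw, plus the AC energy
# needed to pump that heat back out of the room. All inputs are assumptions.
extra_watts = 150
hours_per_month = 60      # ~2 hours of gaming per day (assumption)
usd_per_kwh = 0.15        # electricity price (assumption)
ac_cop = 3.0              # typical air-conditioner coefficient of performance (assumption)

gpu_kwh = extra_watts * hours_per_month / 1000
ac_kwh = gpu_kwh / ac_cop  # extra electricity the AC burns removing that heat
print(f"Extra energy: {gpu_kwh:.1f} kWh (GPU) + {ac_kwh:.1f} kWh (AC)")
print(f"Added cost: ~${(gpu_kwh + ac_kwh) * usd_per_kwh:.2f} per month")
[/CODE]

With those numbers it's only a couple of dollars a month, but it scales directly with how much you play and how hard your AC has to work.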
 