Why is AMD losing the next gen race to Nvidia?

There's no AMD card that can be equivalent on all factors as long as Nvidia has a much more aggressive stance on dev rel, proprietary toolsets and CUDA, plus the much greater market share that makes those things especially relevant.

I never saw any posts from homerdog claiming he needs to use CUDA for research or whatever.
As for the other factors, I wonder exactly how an end user is able to measure the success of a more aggressive stance on dev rel. and proprietary tool sets.
For months/years, "game ready" drivers have been released at pretty much the same time as Catalyst/Crimson ones. A huge majority of AA/AAA developers have to work with GCN on the consoles, and that's a pretty aggressive approach - even more so with the XBone now using straight-up DX12. And now pretty much all Microsoft first-party games coming to Windows are showing significant advantages on AMD cards.

The only time you don't see the fruits of AMD's design wins on both consoles is when a third-party team makes the port using GameWorks, work that usually ends up being panned by critics and gamers alike (e.g. Arkham Knight). Whether one agrees with them or not, AMD was very successful in turning GameWorks into a dirty word.



Regardless, you just don't get to claim to be "IHV agnostic" and feign worry about AMD disappearing when you say "I want nvidia because I'm used to them" or "I don't want to bother uninstalling my drivers".
I'll go NVIDIA cause that's what I'm used to. I can swap the GPU out without even messing with drivers.

This is probably one of the most clear confessions of pure bias I've ever seen.
 
This isn't a socialistic system where we have to buy AMD because they are doing worse and we need to equalize the market share by force-buying a less competitive product. That would let AMD (or any company for that matter) laugh all the way to the bank with our hard-earned money for something that is second rate.

There's no need for market share equality - that would actually be harmful - you just need one party not to be strangled. Competition does not require market share equalisation; so long as it still has the resources to create compelling products, a company can act as a counterbalance to the dominant party.

One person can't really do much for AMD. All of B3D buying all the cards they can't afford couldn't prop the company up, not when they're losing money to the tune of hundreds of millions per quarter.

You can't really fault a guy for not wanting to spend money on what is arguably sub-standard equipment...

I understand that. What I don't understand is feeding the beast year after year after year after year and then complaining that its most desirable parts are out of your reach because of a lack of competition.

My last few graphics cards have been Nvidia because they offered the best bang with lowest noise at the time I was looking. I'm part of the problem in that sense. But I wouldn't be complaining about how expensive high end Nvidia products are. *shrug*
 
If you really want to know - Nvidia and AMD utilize the ROPs differently.

Nvidia, thanks to the geometry-bucket / tiling approach featured in Maxwell and Pascal, needs a sufficient number of ROPs to hold a full tile inside the ROPs at any one time. That way, any write to the ROPs is essentially guaranteed not to hit RAM while the same tile is active.

The whole GCN family doesn't have such a tiling system yet, so the ROPs are mostly write-through with a comparably low cache hit rate. This means there is always a large chance that any blend operation will actually cause a RAM access. Unfortunately, that means most ROP activity also exercises the RAM system, which increases power consumption significantly. The extra memory bandwidth does get used despite the lower number of ROPs, but the favorable solution would have been not to require that bandwidth during this part of the pipeline in the first place.
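
A back-of-the-envelope sketch of why that matters for traffic (purely illustrative numbers of my own, ignoring the ROP caches both vendors actually have):

```python
# Rough, illustrative model of ROP-related DRAM traffic for alpha blending.
# All numbers are assumptions for the sake of the example; the real ROP caches
# on both architectures are ignored.

BYTES_PER_PIXEL = 4        # 32-bit RGBA render target
OVERDRAW        = 3        # assumed average number of blended layers per pixel
WIDTH, HEIGHT   = 1920, 1080

pixels = WIDTH * HEIGHT

# Write-through ROPs: every blend reads and writes the framebuffer in DRAM.
write_through_bytes = pixels * OVERDRAW * BYTES_PER_PIXEL * 2   # read + write

# Tiled ROPs: blending stays on chip; DRAM sees roughly one read (initial
# contents) and one final write-back per pixel.
tiled_bytes = pixels * BYTES_PER_PIXEL * 2

print(f"write-through: {write_through_bytes / 1e6:.0f} MB of DRAM traffic per frame")
print(f"tiled:         {tiled_bytes / 1e6:.0f} MB of DRAM traffic per frame")
print(f"ratio:         {write_through_bytes / tiled_bytes:.1f}x")
```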

Yes, I really did want to know!

The memory overclocking results I'd seen show that the 1060 benefits from additional BW far more than even a 1.4 GHz RX 480, so I had made some (possibly incorrect) assumptions. I had taken this to mean that more ROPs on the 480 might have led to more memory access bottlenecking, but increased overall throughput.

So having a sufficiently large number of ROPs actually allows the 1060 to reduce its external BW requirements. That's pretty cool. So cards with fewer ROPs will have smaller tiles, and consequently be a little less efficient?
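
To put some rough numbers on that intuition (reference-card boost clocks and memory speeds, treated as assumptions since partner boards differ):

```python
# Peak fill rate vs external bandwidth, using reference-card boost clocks and
# memory speeds as assumptions (partner boards differ).
parts = [
    # (name, ROPs, boost clock GHz, memory bandwidth GB/s)
    ("GTX 1060", 48, 1.708, 192.0),   # 8 Gbps GDDR5 on a 192-bit bus
    ("RX 480",   32, 1.266, 256.0),   # 8 Gbps GDDR5 on a 256-bit bus (8 GB card)
]

for name, rops, clk, bw in parts:
    fill = rops * clk                 # Gpixels/s
    print(f"{name}: {fill:5.1f} Gpix/s peak fill, "
          f"{bw / fill:.1f} bytes of external bandwidth per pixel written")
# The 1060 has far more peak fill per unit of external bandwidth, which only
# adds up if most of that fill stays on chip - the tiling described above.
```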
 
Patents don't mean they are incorporating everything, though ;) We have seen that happen many times: the patents are there, but the features don't show up until two or three generations down the road.
We will just have to wait and see; a lot of the patents and/or white papers I have read in the last year appear to be reflected in Zen.


The reality is Zen has to be good, or at least decent enough to push Intel's buttons in the mainstream segment; if not, desktops and servers won't pick it up, let alone HPC.
What are you talking about? Zen is primarily a server-targeted CPU; the x86 server CPU market is worth 15-20 billion USD a year. HPC is nowhere near as important as people who get distracted by bright shiny things believe. Desktop is dying (sad panda). AMD have a serious opportunity to take a large amount of the notebook space with the APU. High-end desktop is the area AMD is least likely to be able to compete in, as 14LPP is unlikely to scale clocks up to and beyond 4 GHz.

Intel has, what, 28-core Xeons for HPC now? Those are also available to the general populace; Intel might have specific chips that only go to certain customers - I know they have done this in the past. If an 8-core Zen only matches up with a 4-core i5, that's not going to cut it in the server market. It will be better for AMD's bottom line, as right now they have an 8-core part matching up with a 2-core i3. But don't expect server and HPC guys to go all crazy and start switching over.
The max you can buy is 24 cores. You can also only get 22 cores on E5, which is what most orgs buy, as 2P (rather than 1P) is what the vast majority of servers run.
Now explain an 8-core Zen going clock for clock with an 8-core Broadwell while consuming less power when making your comparison.
We know more than enough about Zen's uarch to know that stupid comparisons of 8 cores vs 4 cores are just that... stupid. The question is where integer IPC falls: SB, HW or SL. Zen has more resources in almost every way vs SB, so Haswell seems possible. You then have quotes like this from people (this one works for one of the big OEMs) who have been highly critical of AMD:
http://semiaccurate.com/forums/showpost.php?p=257718&postcount=2120
For a long time I actually like what I see. I'd say as long as the consumer Zen parts can reach high enough clocks (min. 3.5GHz), everything will be pretty good
or even from this forum:
Terms and conditions apply here. I can't say the same about DP, but let's say it looks really good in a single CPU configuration.
So if we take that at face value (outside of 256-bit vectors), a 3.5 GHz 8-core Zen vs an 8-core Broadwell-E with a 3.2 GHz base puts Zen around ~10% behind Broadwell-E per clock.
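
The arithmetic behind that estimate, for what it's worth (my own sanity check, assuming the two parts deliver roughly equal absolute performance as the quotes suggest):

```python
# Sanity check of the "~10% behind per clock" figure, assuming roughly equal
# absolute performance between the two 8-core parts (the premise above).
zen_clock = 3.5   # GHz, assumed minimum consumer Zen clock from the quote
bdw_clock = 3.2   # GHz, Broadwell-E base clock quoted above
per_clock = bdw_clock / zen_clock
print(f"Zen per-clock throughput ~= {per_clock:.3f} of Broadwell-E "
      f"({(1 - per_clock) * 100:.0f}% behind)")
# -> ~0.914, i.e. roughly 9-10% behind per clock.
```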

The second quote is in relation to the 32-core part, so: Zen will have 32 cores, with more memory channels than E7/E5 (thus more capacity), with multiple 10GbE on the SoC, and - we'll have to wait and see - rumored on-SoC crypto engines (like Seattle).
Now look at what chip all the "big" guys / Web 2.0 companies want: that's right, Xeon D. If you look at Zen (from what we know already), it is more than a match, offering up to twice the core count and, if the crypto accelerator exists, a significant advantage.

If that is what comes to pass, an 8-core going against a 4-core Intel... If it's an 8-core vs a 6-core, that would be better; if it's 8-core vs 8-core and they are at parity, that will be the only time the HPC and server markets will start using AMD.
I have a question for you: what do you really know about the server market? I design datacentre infrastructure for a living; I'm very well aware of what matters to both enterprise and cloud providers (I work in both spaces). I have no idea what Zen's performance looks like, but your average cloud or enterprise server is running a hypervisor and is crammed full of 2-8 thread VMs. You almost always run out of memory bandwidth and capacity long before CPU.

A 24-core Zen with 8 memory channels, with perf and perf per watt in the ballpark (we know the 32-core is rated up to 180 W TDP), is going to sell well, because those extra two memory channels can cut down your server farm size by 25%. Depending on pricing, I don't think a 32-core part will be as attractive in that market, as you're not getting extra memory capacity/bandwidth per core.
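
The 25% figure falls straight out of the channel counts if the farm is sized by memory rather than CPU (the 6-channel baseline is my inference from the "extra two memory channels" above):

```python
# If servers are sized by memory capacity/bandwidth per socket, the number of
# boxes needed scales inversely with channels per socket.
baseline_channels = 6   # baseline inferred from "extra two memory channels"
zen_channels      = 8
ratio = baseline_channels / zen_channels
print(f"servers needed: {ratio:.2f}x -> {(1 - ratio) * 100:.0f}% smaller farm")
# -> 0.75x, i.e. a 25% reduction, provided cores and I/O keep up.
```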

In those high-end markets they need as much performance as they can get; the cost of the hardware is a very small part of the total cost of what they are doing.
You would be surprised. The cost of rack space, fibre channel switches, top-of-rack and end-of-row switches, even removing the need for add-in or on-motherboard 10GbE NICs, all adds up. Plenty of app stacks are built on open source, which massively increases the proportion of the cost that hardware makes up. For people like Google, Azure, etc., who don't have to care about licensing, hardware makes up a very large amount of the cost. In enterprise, with the way Microsoft Datacenter licensing works (per socket; SQL etc. are separate), reducing footprint (through more memory capacity) can again make a massive difference.

As I said in the other thread, based on what we know, my biggest concern is what neilz said about 2P scaling.

So what exactly were you talking about again?

/end thread derailment
 
If we accept the argument that buying from the underdog company will decrease the likelihood of a monopoly, I'd put forward the fact that this line of thinking also has a price.

Namely, the least successful company is likely not as efficient in using your money. The market leader has a proven track record of supplying superior products, hence your purchase may further guarantee that the quality stays up.

I think these do cancel each other out, and in fact we choose a given brand by combining the technical specs with a form of attachment to the brand. We sometimes feel that this attachment is justified, but often our reasons are silly and the attachment precedes the justifications, not the other way round ;)
 
Problem is: When competition disappears entirely, the monopolist will sell you even smaller slices of improvement as new, must-have generational leaps. Take for example PlayReady 3.0 SL3000 - necessary for 4K Netflix streaming as a means of copy protection.

Now imagine a company which launches a new generation of products with the only USP being support for this - instead of only being one of several features, among them actual tangible performance improvements. :)
 
I really hope that AMD's CPUs aren't running into the same issues at GF as their GPUs.

Also makes me worry a little about their APUs. Even if Zen is a good fit for GF, the GPU component might not be, as with the 480. Another 12 months might bring considerable improvements in power, frequency and yield, though.

Fingers crossed. I really want to see integrated graphics get a kick up the arse next year.
 
I am not so sure about that reliable supply, but I think the RX 480's claim of VR performance at least on par with the R9 290 was what forced AMD's hand. On its own, nobody would have cared for 98 instead of 108 full-HD fps in Bioshock Infinite or 45 instead of 49 fps in Crysis 3.

What I'm more concerned about is that the P10 GPUs apparently have massive individual leeway for working at decidedly lower voltages with not-so-large decreases in megahertz. Mine, for example, can work at 1065 MHz and 0.88/0.92 V for GPU/MC. Another one I have in the office (from an AIB) does basically the same, drastically cutting power (for the whole unoptimized (!) rig) while mining ETH.
I question the supply because they don't seem to have done much in the way of binning. Surely the ones that undervolt a bit better could have been 480s, while the rest became 470s, along with any cut-down variants for defects? Interesting that they still advertised that voltage-scaling hardware, which appears disabled for one reason or another. There are still inventory issues, as they are difficult to find. Why even claim 290 performance if you didn't anticipate the chip reaching that point? With fewer thermal issues they likely would have been up there in performance - it should have easily garnered an extra 5-10% even if the TDP was ultimately the same.

What are you talking about? Zen is primarily a server-targeted CPU
I'm of the opinion it is desktop targeted, just not quite there yet. The actual APUs would seem to have performance difficult to rival for a SFF design. AMD has a better GPU than Intel and a better CPU than Nvidia, so it seems illogical not to put them together. While HBM might be a bit more expensive, if you can fit 4-8GB on a system approximating a Chromebox it may be affordable with spectacular performance. To offset those costs you likely have lower power usage from HBM than DDR3/4, potentially no DIMMs, ATX to SFF provides lower warehouse/shipping/material costs, and likely significantly better performance than a comparable part. Project Quantum, that AMD is still playing around with, would seem a rather high end version of this concept. While the PC market is shrinking, having a case form factor you could affix to the VESA mount of a monitor would seemingly be an attractive product. I was helping a friend modernize their business earlier this year and the amount of desk space saved by that solution over large cases is difficult to overstate. Sure, this isn't the target of a 32-core Zen, but a 4-core with a smallish APU would do wonders.

You almost always run out of memory bandwidth and capacity long before CPU.
This is where I still wonder about HBM usage. It wouldn't meaningfully affect capacity, but like you said, memory bandwidth is an issue. Using HBM as an L4, either through a GPU or maybe directly connected to Zen, could do wonders here. That's a level of bandwidth that would be unrivaled by DDR4 configurations. It also conveniently works out to 8 memory channels. Broadwell showed huge gains in certain benchmarks with just a 128MB L4 cache. Just imagine 4/8/16/32GB configurations. It wouldn't be cheap, but the performance may be worth it for the situation you described.
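
For a rough sense of the gap, using nominal peak figures as assumptions (nothing here is confirmed for any actual Zen product):

```python
# Nominal peak bandwidth comparison, using commonly quoted figures as
# assumptions; none of this is confirmed for any actual Zen product.
ddr4_2400_channel = 19.2                 # GB/s per 64-bit DDR4-2400 channel
configs = [
    ("8x DDR4-2400 channels", 8 * ddr4_2400_channel),
    ("1x HBM1 stack",         128.0),    # Fiji-class stack
    ("1x HBM2 stack",         256.0),    # nominal per-stack peak
    ("Broadwell 128MB eDRAM", 100.0),    # ~50 GB/s each direction, aggregate
]
for name, bw in configs:
    print(f"{name:24s} ~{bw:6.1f} GB/s")
```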
 
So having a sufficiently large number of ROPs actually allows the 1060 to reduce its external BW requirements. That's pretty cool. So cards with fewer ROPs will have smaller tiles, and consequently be a little less efficient?
Not necessarily. The cards with fewer ROPs usually also drive lower screen resolutions, so the same amount of geometry is distributed over a smaller screen space. Effectively, this should roughly even out in terms of geometry per tile, and the corresponding savings. But yes, if you were to compare at the same screen resolution, the absolute efficiency of the smaller cards would go down, though that is not limited to the ROPs but applies to all cache-like structures on the architecture, which have also been scaled down accordingly and hence suffer from a higher miss rate.
 
I'm only going to post one thing about this, since yeah, it is off topic.

If a cloud data-server setup costs 10 million, how much of that will be the cost of the CPUs? I'm not surprised at the cost of a server; I stated that the cost of the CPU is a small portion of everything else it entails. You pretty much stated what I stated, so I don't know what the problem is there.

Yeah, and as I stated, if Zen can't match up core for core against Xeon (whichever one is out there, Broadwell for now), forget the server and HPC markets.

You just linked to what I have been saying: they are critical of AMD's core performance, and they see Intel still having a 10% lead - which in fact isn't really 10%, because the next version of Xeons (with 28 cores) is coming soon, so it will be more than 10%.

And no, I don't take what AMD has shown off thus far as fact. From what we know about the CPU, it "should" perform close to SB/Haswell, but we don't know that yet. Remember Bulldozer: they told us a lot about it and on paper it looked good... reality was quite different.
 
I'd expect the high-core-count Zens to be priced competitively with, but in the same range as, the very-high-core-count Xeons. That will put them outside the small-instance cloud market.

I'm sure AMD will have a SKU comparable to Intel's Xeon-D to battle that segment, where each physical server is sliced and diced into as many instances as possible.

Cheers
 
Project Quantum, that AMD is still playing around with, would seem a rather high end version of this concept.

AMD is still looking at Project Quantum? I didn't know that. I thought that was a tech demo that was abandoned when no OEMs picked it up for mass production.

EDIT: I suppose it might not be dead.

http://wccftech.com/amd-project-quantum-not-dead-zen-cpu-vega-gpu/

That could be interesting if AMD gets Zen and Vega (or just an HPC APU) in there and ensures that it can be mass-produced.
 
I don't understand aspects of AMD's new cards. AMD's ROPs are supposed to be less efficient than Nvidia's, and yet AMD has fewer within the same performance segment (e.g. the 480 has 32, the 1060 has 48). It would appear that AMD do suffer from being ROP limited, and that's even when they have more BW. Were AMD expecting much higher frequencies from Polaris? Can they only have 8 ROPs per memory channel?

The 460 has only 16 ROPs. Its fill-to-FLOP ratio is worse than the X1S's and vastly worse than the PS4's - and taking into account CPU contention on the PS4 (up to 40 GB/s lost) and that Polaris has highly efficient colour compression, it just seems odd to have such a low pixel fill rate. The GTX 1050 has 32 ROPs operating at higher frequencies than the 460.
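
Rough numbers behind that comparison, using commonly quoted clocks and unit counts as assumptions (boost clocks vary by board):

```python
# Pixel fill per TFLOP, using commonly quoted specs as assumptions.
parts = [
    # (name, ROPs, core clock GHz, shader TFLOPS)
    ("RX 460",   16, 1.200, 2.15),
    ("GTX 1050", 32, 1.455, 1.86),
    ("PS4",      32, 0.800, 1.84),
    ("XB1S",     16, 0.914, 1.40),
]

for name, rops, clk, tflops in parts:
    fill = rops * clk                     # Gpixels/s
    print(f"{name:9s} {fill:5.1f} Gpix/s, {fill / tflops:5.1f} Gpix/s per TFLOP")
# The RX 460 comes out with the lowest fill per TFLOP of the four, which is
# the imbalance being questioned above.
```
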
It seems that AMD GPUs subdivide as follows: shader engines, geometry engines and SIMD arrays. There seem to be limits with respect to how many units of the lesser type a unit can contain.
Starting from the bottom, we've seen up to 16 CUs per geometry engine; I'm not sure we've seen more than 16 CUs per shader engine; we've seen up to 2 geometry engines per shader engine; I think the RBEs are in fact linked to the shader engine and that we've yet to see (iirc) more than 16 ROPs per shader engine, and the same goes for the memory controllers (up to 2 per shader engine).
More assertions and intuitions: I would not be surprised if AMD could pack more than 2 geometry engines per shader engine, but they have good reasons not to do so.
I assert that there are extra limitations that are quite old but were not presented as such back in the day: modern AMD GPUs that have more than 7 CUs per RBE (8 ROPs) are not able to achieve their theoretical max texture filtering performance in some formats. Back then the same was already happening, but it was passed off as a TDP restriction for lack of a better explanation. I suspect it is a really old memory subsystem limitation that AMD has not touched significantly for more than a decade; so we have Bonaire with 14 CUs and 16 ROPs, the RX 460, the XB1 (with yield optimization down to 12), and the comment from Cerny about how 14 CUs was the sweet spot for the PS4 but they added more for non-graphics tasks. I will add a statement from Sebbi that nowadays shaders are optimized to maximize the use of the available bandwidth.

I think that AMD GPUs can't properly feed more than 7 CUs per RBE and memory controller (64-bit); beyond that you get suboptimal results that are more or less hidden depending on how much you rely on data locality (vs external bandwidth).
Nvidia seems to have moved away from such limitations and has a lot of flexibility in its designs (192-bit bus GPUs come to mind), and the gap widened as Nvidia moved to the tiling approach: since they exploit data locality, their internal organization allows them to raise the number of ROPs with less concern for the available external memory (hence the rumored 32 ROPs on a 128-bit bus GPU).
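
To make the earlier "8 ROPs per memory channel" hunch concrete (unit counts and bus widths are public specs; the per-channel grouping is just my reading, not a confirmed rule):

```python
# ROPs per 64-bit memory channel for the parts discussed in this thread.
# Unit counts and bus widths are public specs; the per-channel grouping is an
# observation, not a confirmed architectural rule.
parts = [
    ("RX 480",   32, 256),
    ("RX 460",   16, 128),
    ("GTX 1060", 48, 192),
    ("GTX 1050", 32, 128),
]

for name, rops, bus_bits in parts:
    channels = bus_bits // 64
    print(f"{name:9s} {rops:2d} ROPs / {channels} x 64-bit = "
          f"{rops // channels} ROPs per channel")
# The GCN parts land on 8 per channel, the Pascal parts on 16, which fits the
# flexibility argument above.
```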

Performance per watt from the 460 isn't good either. This very marginally overclocked 460 is topping out at 97 W in game (!!) and 105 W under a stress test:

https://www.techpowerup.com/reviews/ASUS/RX_460_STRIX_OC/22.html
Well, for both AMD and Nvidia there might be an overhead to increasing the number of logical units inside their GPUs. Nvidia, to reach its power target, kept geometry performance low for the GTX 750, and even lower for some Tegras (1 triangle every two cycles). AMD did not do that for the RX 460; they also pushed the clocks way past the sweet spot while remaining within the PCI-Express specification. I suspect the GTX 1050 will stay true to the recipe and deliver lower geometry performance than the RX 460, which will prove mostly irrelevant in games (1 triangle per cycle, but clock speed could soften the difference).
 
This is probably one of the most clear confessions of pure bias I've ever seen.
It's not like the Jen-Hsun poster on the ceiling above my bed where I can stare at it before going to sleep makes me biased, I just think he is a sexy piece of manmeat.

But seriously consider that I'm infinitely familiar with the NV drivers and stuff whereas the AMD stuff I have no clue about, it is all different (in a good way from what I hear but still different) from what it was with my 7950 so I'd have to relearn it. There's no reason to do that if NV offers the same performance at the same price and with lower power consumption. And yea it's pretty sweet not even having to swap drivers with a new GPU, not gonna lie. I don't even fumble with driver updates since I have the GFE thing installed, it's quite wonderful IMO.

Anyway what do you want me to do here? I said I'd buy AMD if the price is right, which is what it always comes down to in the end anyway.
 
Eh, AMD has been ahead quite a few times in the last few years; there is just always a portion of people who go against their best interests and buy Nvidia.

I'm still baffled by those who bought the GTX 580.

Anyway, AMD has another 2 console wins and should continue to be the main force in consoles for another 4 years at least. If they keep improving GCN, then I can see games continuing to target AMD hardware more and more.
 
I think you've got some revisionist history going on here. You said "last few years", but then you bring up the GTX580 which came out six years ago, around the same time as the 6950/6970 generation.

Here's the revisionist history part: AMD was targeting the GTX 570 as the performance target for the 6970, and rightfully so, as the 6970 and GTX 570 did a lot of trading blows at the time. Examples:
https://www.techpowerup.com/reviews/HIS/Radeon_HD_6970/ said:
With performance comparable to GeForce GTX 570, but a price that is $50 higher it is difficult to justify the investment. While I applaud AMD for making the bold move to 2 GB video memory, I have my doubts if the price/performance gain is worth it. Some people may see benefits from this when using Eyefinity, but the majority of users will game at 1920x1200, even 2560x1600 is just a niche resolution. I do have my hopes high for the future though, when AIBs will take a good look at the card and strip it down to achieve more competitive price levels - competitive being $329 in my opinion.

http://www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950/ said:
Our concern was that AMD would shoot themselves in the foot by pricing the Radeon HD 6970 in particular at too high a price. If we take a straight average at 1920x1200 and 2560x1600, its performance is more or less equal to the GeForce GTX 570. In practice this means that NVIDIA wins a third of our games, AMD wins a third of our games, and they effectively tie on the rest, so the position of the 6970 relative to the GTX 570 is heavily dependent on just what games out of our benchmark suite you favor. All we can say for sure is that on average the two cards are comparable.

So with that in mind a $370 launch price is neither aggressive nor overpriced. Launching at $20 over the GTX 570 isn’t going to start a price war, but it’s also not so expensive to rule the card out. Of the two the 6970 is going to take the edge on power efficiency, but it’s interesting to see just how much NVIDIA and AMD’s power consumption and performance under gaming has converged. It used to be much more lopsided in AMD’s favor.

I can do this all day. The point here is this: the only thing AMD had to compete with the GTX580 was their Crossfire-on-a-stick 5970 -- naively assuming your games were getting benefit from Crossfire, which we know isn't always true (and the same holds for SLI.)

It isn't so cut-and-dry as you make it to be.
 
I think you've got some revisionist history going on here. You said "last few years", but then you bring up the GTX580 which came out six years ago, around the same time as the 6950/6970 generation.
Sorry, time keeps on slipping, my friend. I'm getting old.


Anyway, according to Anand, the 6970 was $369 vs the $350 of the 570, and the 6950 was $300.

http://www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950/13

Also, it seems that both the 6970 and 6950 were on par with or faster than the 570, and the 6970 was close to the 580 in a lot of the games. And lo and behold, the 6950 was 70 watts more efficient and the 6970 21 watts more efficient than the 570, and even more so compared to the 580.


Also, let's go to 2013, just 3 years ago. AMD released the R9 290X, which caused Nvidia to drop the 780 and 770 prices.

The R9 290 cost less than the 780 and outperformed it in many cases, even after they dropped the 780's price.
http://www.anandtech.com/show/7481/the-amd-radeon-r9-290-review

Nvidia later put out the 780 Ti, costing another $150 over the 290X for about 10% more performance.

AMD has competed quite well in the past. I see no reason why they can't in the future.

Nvidia has shown time and time again that they will price-gouge even with competition out there; who knows what they will do if AMD leaves the game. As a gamer, buying Nvidia when it's anything but a blowout is bad for you in the end. When things are comparable, it's just a silly choice.
 
I'm confused; I thought you were talking about Fermi, the GTX 580.

If you were talking about the R9 290, that card had supply issues for a couple of quarters. And wasn't it also relatively late compared to the GTX 780?
 