Why on earth would there be a dual GPU solution at launch? The assumption is ridiculous. If you scale the mm² with similar clock speeds to what Nvidia achieved, you get something right around that top benchmark for a single card. We already heard there are going to be 3 SKUs, and hey, look: 3 performance levels! At around 25% smaller than a GP104 we can assume roughly 25% cheaper too; $450, $329, $279 at a guess. Nvidia will end up with the top end for a while because they built the bigger card, but AMD will grab the middle and low end with better performance per dollar and, you know, actually offer a modern sub-$200 card, at least until GP106 and Vega come out whenever they arrive this year.
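A quick back-of-envelope sketch of the die-size-to-price scaling guessed at above. The GP104 die size (~314 mm²) and the GTX 1080's $599 MSRP are figures I'm assuming for the comparison, not numbers from the comment:

```python
# Rough die-area -> price scaling per the comment's guess.
# Assumed reference figures (not from the comment): GP104 ~314 mm^2, GTX 1080 MSRP $599.
gp104_area_mm2 = 314
gtx1080_msrp = 599

polaris10_area_mm2 = gp104_area_mm2 * 0.75   # "around 25% smaller than a GP104"
naive_price_guess = gtx1080_msrp * 0.75      # "we can assume roughly 25% cheaper too"

print(f"Polaris 10 area guess: ~{polaris10_area_mm2:.0f} mm^2")   # ~236 mm^2
print(f"Naive top-SKU price guess: ~${naive_price_guess:.0f}")    # ~$449, close to the $450 guess
```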
VR? I doubt AMD is claiming they're the best for VR just because of async and better compute performance. Seems logical to expect at least one dual Polaris 10 setup just for that market. May still be price competitive with a 1080 and should fit on a single board, maybe even MCM thanks to their low power preference.
So you suggest they spend hundreds of millions of dollars developing another chip for a market that may or may not take off? Considering they went the low power route, I doubt they'll hit 1.75GHz like Nvidia. I'd guess more around 1.3-1.4GHz with lower power consumption. That positions the chips perfectly to be paired up on a single card for a higher-performing part in workloads where multiple adapters scale well. AMD also said Polaris will replace the entire lineup, so unless there are more chips floating around that we haven't seen, they will likely use this chip for their high end offering.
It's a single card, it's slower than a 1080 but will cost a lot less, and there will be 3 SKUs, two of them with disabled parts to maximize the number of usable chips they get. The highest-end TDP will be somewhere between 146 and 195 watts, depending on where within their "varying" claims of 2x-2.5x efficiency gains the high end lands, with the other SKUs' TDPs depending on frequency versus how many units are disabled.
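One way that 146-195 W window could be reverse-engineered is dividing a 28nm-equivalent power figure by the claimed efficiency gain. The ~365-390 W baselines below are purely my assumption to make the arithmetic land on the comment's endpoints; they aren't stated anywhere:

```python
# Hypothetical reconstruction of the comment's TDP range.
# baseline_w = power a 28 nm part would need for the targeted performance
# (the 390 W and 365 W baselines are assumptions, not figures from the comment).
def tdp_estimate(baseline_w: float, efficiency_gain: float) -> float:
    """Projected TDP given a perf/watt multiplier over a 28 nm baseline."""
    return baseline_w / efficiency_gain

print(tdp_estimate(390, 2.0))   # 195.0 W -> the comment's upper bound
print(tdp_estimate(365, 2.5))   # 146.0 W -> the comment's lower bound
```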
No it doesn't. There has never been a simultaneous launch of a GPU alongside its own dual-GPU version, and there has never been a mid-tier dual-GPU solution to begin with. The entire idea is stupid. Just take 232 mm² doubled (the new process node packs roughly twice the density), which is 464 mm² in 28nm terms, i.e. about 25% smaller (less, actually, but whatever) than the same amount of transistors as a Fury X. So just to be conservative we round it to 3072 stream processors. If it reaches, say, 1.75GHz, that's 49% faster than a Fury X, giving it performance at least above a Fury X even without the major architectural changes announced, which could easily make up the rest of the performance jump seen.
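A sketch of that area-and-clock scaling estimate. The Fury X reference figures (596 mm², 4096 stream processors, 1.05 GHz) are assumptions I'm adding for the comparison, and the result deliberately ignores the per-clock architectural gains the comment says would make up the rest:

```python
# Back-of-envelope scaling per the comment: treat 14 nm as roughly 2x the density of 28 nm,
# size the shader count off Fury X's die, then scale throughput by clock.
# Assumed reference figures (not from the comment): Fury X = 596 mm^2, 4096 SPs, 1.05 GHz.
fury_x_area_mm2, fury_x_sps, fury_x_clock_ghz = 596, 4096, 1.05

polaris10_area_mm2 = 232
equiv_28nm_area = polaris10_area_mm2 * 2         # "232 mm^2 doubled is 464 mm^2"
area_ratio = equiv_28nm_area / fury_x_area_mm2   # ~0.78 -> "about 25% smaller (less actually)"
sp_guess = 3072                                  # the comment's conservative rounding of 0.75 * 4096

clock_guess_ghz = 1.75
throughput_ratio = (sp_guess * clock_guess_ghz) / (fury_x_sps * fury_x_clock_ghz)
print(f"28nm-equivalent area: {equiv_28nm_area} mm^2 ({area_ratio:.0%} of Fury X)")
print(f"Naive shader throughput vs Fury X: {throughput_ratio:.2f}x")
# -> ~1.25x before counting any of the announced per-clock architectural improvements.
```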
One way to use up the inventory: Ethereum mining.
I'd take something like that for my Luxrender whenever you want; send that inventory cleaning my way lol.
Now, for the dual GPU, well, maybe. I'm not sure it would be a good idea for such a short product life (I expect to see Vega around the first quarter of 2017), so between performance and price, I'm not so sure.
It looks to me like AMD has decided to take a different road than Nvidia with Polaris: keep power and clocks low (and end up with mid-range to low-tier GPUs), while Nvidia has tried everything to push core clocks as high as they can, going for a 180W GPU (GTX 1080) instead.
The 1080 is amazing, but the fact remains that it's a small GPU, with a small memory controller and a low compute unit count but extremely high clocks (and wait until you see the AIB versions with ~250-300MHz more at 200-250W). From what we know of GP100, with a far higher compute unit count, it sits at ~300W with lower clock speeds.
AMD, on the other hand, seems to have decided to treat 14nm and the mid-tier card differently: a small power budget, and to keep power under 150W they've kept the core clock rather conservative. (This worries me a bit, as it reminds me of the 7970 launching with conservative clocks before Nvidia released its 680 a few months later.)
I'll need to wait for the launch, but I'm starting to wonder what could happen if AMD decided to push 180-190W and raise the core clock of Polaris 10.
If the AMD cards are cheap and perform well, they should sell well enough, but I hope they don't underestimate the "halo" effect of having the faster product. (There are enthusiast gamers, but even today I see so many people who know more or less nothing about the hardware they use and just buy a brand because someone tells them it's "faster", even if they end up with a brick for $200.)
Don't forget the 1070 is 150 watts. That could tell us directly who the winner is this time in perf/watt.
No. Where did you get that from?
http://wccftech.com/amd-radeon-r9-480-polaris-10-july/
Saw this chart but I don't know if I believe it. Wasn't P11 28nm?
Any chance any of the Polaris chips exceed Fury X?
I'm just having trouble seeing even this new architecture not being competitive with the 1070. What could they have been thinking to drop it into the midrange?
On the other hand, if it were even a bit more than ~10% faster than Fury X, it'd be exceeding the 1080 in some DX12 apps.
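A trivial sanity check of that claim with placeholder numbers; the Fury X-to-1080 ratio below is my own assumption for an async-heavy DX12 title, not a measured result:

```python
# Hypothetical DX12 comparison -- the 0.92 ratio is a placeholder assumption,
# not a benchmark result; swap in real numbers from reviews.
fury_x_vs_1080_dx12 = 0.92   # assumed: Fury X at ~92% of a GTX 1080 in some DX12/async titles
polaris_vs_fury_x = 1.10     # the comment's "a bit more than ~10% faster than Fury X"

polaris_vs_1080 = fury_x_vs_1080_dx12 * polaris_vs_fury_x
print(f"Polaris vs GTX 1080 in that scenario: {polaris_vs_1080:.2f}x")   # ~1.01x, i.e. just ahead
```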