AMD: Speculation, Rumors, and Discussion (Archive)

Well, I jumped the gun on selling my two 290Xs. If I sell both for 400€, I won't even mind a slight decrease in performance + 60€ in exchange for 8 GB of VRAM, quieter operation, and not having to turn on the A/C whenever I'm playing games in the office (especially in the summer).

There was a guy previewing the RX 480 through a livestream:
http://videocardz.com/61275/amd-radeon-rx-480-review-live-stream

His thermal imaging video shows a maximum of 72°C on the back of the GPU.

There are rumors pointing to AMD moving the NDA date up to June 24 because of so many leaks happening this week. Serious reviewers won't be pleased...
 
Where did this 100 W figure come from in the first place? It's quite a bit below the 150 W given earlier, isn't it?
Add to that the fact that historically AMD cards only draw 30 to 40 W from the PCIe slot, so 75 + 30/40 = 105/115 W.

And that's not counting the leaks that already said it consumes ~110 W while gaming.
 
Add to that the fact that historically AMD cards only draw 30 to 40 W from the PCIe slot, so 75 + 30/40 = 105/115 W.
Which, as I've written before, is not so much a fact as quite selective perception. It is true that some AMD reference cards drew quite little PEG power, but that is not true for all of them. Traditionally, cards tended to favor power from the 6- or 8-pin connectors, taxing them to a much larger percentage than the PEG slot. It is, however, also true that the fewer 6- or 8-pin connectors there were, the higher the percentage of the slot's 75 W being used. For example, the HD 7870 (2× 6-pin) vs. the HD 7850 (1× 6-pin) came out at 37 vs. 46 W from the PEG alone under full load in our test.

Additional fun fact: the less headroom the 6-/8-pin connectors left relative to the rated power, the more the slot was taxed as well. For example the HD 7950 and HD 7970: based on the same PCB, but the latter equipped with one 8-pin and one 6-pin instead of two 6-pins. PEG-slot power: 51 vs. 29 W. There were even cards like the R9 290X in Uber mode that over-stressed the 8-pin (170 instead of 150 W) and the 6-pin connector (92 instead of 75 W), but stayed very moderate (~34 W) on the PEG.

So no magic involved here, just plain common-sense playing it safe, since the reference cards also go into OEM PCs and AMD would not want to lose lucrative mass business to poorly designed mainboards there.
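To put the rail math in one place, here's a quick Python sketch; the spec limits are the standard PCIe ones, and the example splits are just the figures quoted above, not new measurements:

```python
# Tally board power from the PEG slot and auxiliary connectors and
# compare each rail against its nominal PCIe spec limit.
SPEC_LIMITS_W = {"PEG slot": 75, "6-pin": 75, "8-pin": 150}

def check_board_power(rails):
    """rails: dict mapping rail name -> measured watts."""
    total = sum(rails.values())
    for rail, watts in rails.items():
        flag = "OVER SPEC" if watts > SPEC_LIMITS_W[rail] else "ok"
        print(f"{rail:10s} {watts:5.0f} W / {SPEC_LIMITS_W[rail]:3d} W  {flag}")
    print(f"{'total':10s} {total:5.0f} W\n")

# RX 480 rumor: one 6-pin plus a historically light ~35 W slot draw
check_board_power({"PEG slot": 35, "6-pin": 75})  # ~110 W total

# R9 290X Uber-mode figures from above: the slot stays moderate,
# but both connectors run past their nominal ratings.
check_board_power({"PEG slot": 34, "6-pin": 92, "8-pin": 170})
```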

And that's not counting the leaks that already said it consumes ~110 W while gaming.
So, it was one of the leaks quoting that? I wonder if that was a rather medium load... which 3DMark usually is, AFAIR.
 
Well, I jumped the gun on selling my two 290Xs. If I sell both for 400€, I won't even mind a slight decrease in performance + 60€ in exchange for 8 GB of VRAM, quieter operation, and not having to turn on the A/C whenever I'm playing games in the office (especially in the summer).

There was a guy previewing the RX 480 through a livestream:
http://videocardz.com/61275/amd-radeon-rx-480-review-live-stream

His thermal imaging video shows a maximum of 72°C on the back of the GPU.

There are rumors pointing to AMD moving the NDA date up to June 24 because of so many leaks happening this week. Serious reviewers won't be pleased...

I was going to sell my two 290Xs; in fact, I pulled them out of my water loop this weekend. But I built a mining rig with them instead. They will pay for themselves in two months of mining.
 
So no magic involved here, just plain common-sense playing it safe, since the reference cards also go into OEM PCs and AMD would not want to lose lucrative mass business to poorly designed mainboards there.

I'm sure that's the reason. I don't know how stressful it would be for a low-tier mobo to supply 75 W through the PCIe slot, or whether peaks in power consumption would make it a little dangerous, but I think companies just play it safe and draw less power through the mobo and more directly through the power supply.

If you look at cards w/o power connectors, they typically draw 40 to 55 W max.

So, it was one of the leaks quoting that? I wonder if that was a rather medium load... which 3DMark usually is, AFAIR.


It said under 110 W while gaming. It's in this thread a couple of pages back. I think it was from wtcctece :p
 
Talking about the hype: at least in "serious" forums, people always qualify things with "if the rumors are true". In my case, I never would have thought I'd be able to get the 480's level of relative performance (compared to the last generations, and even the current one). $200 is the best I can aim for, because import taxes above that are too damn high, and buying in my own country is 60%+ more expensive. Plus getting all the new technologies and even being able to think about a VR headset in the future, etc. So yes, I am hyped. But that doesn't mean I will pre-order the first minute it's available. The most interesting part of Polaris is not Polaris itself, it's how it will affect the last generation of VGAs and how many good deals we'll be able to get, because a 970 for around $100 is just too damn good to be true. But if the 470 at $150 is better than it, who knows...
 
As said, I remembered seeing a datasheet a while back with the relative power consumption of the x32 and x16 modes in it (I didn't remember the exact numbers after two or three years, of course) and was confident enough to post this. After you expressed some interest, I searched for a datasheet and posted the specific numbers for a specific memory chip (which is the correct way to do it, as they will vary a bit from model to model). And what "deeper understanding" did you expect? The reason for this was given already (simply fewer bits transferred at high speed per chip).
To make it short: I don't get your "Haha, gotcha". You couldn't possibly get me, just some information. ;)

Anyway, I guess it got too OT.

PS: "Get" has far too many meanings in English.

Dude, I'm just trying to thank you.

I apologize for any confusion. I come here to learn; there are a lot of smart people on this forum. It's hard to tell who's googling for a random GDDR5 datasheet and who knows about some semi-secret datasheet repository. So you gotta ask, lol.
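For what it's worth, the "fewer bits per chip" point is easy to put in rough numbers. A minimal Python sketch, assuming a generic 7 Gbps speed grade (illustrative arithmetic, not taken from any particular datasheet):

```python
# First-order illustration of why x16 mode lowers per-chip power:
# each chip drives half as many data pins at the same per-pin rate, so
# the bits it pushes per second (and, roughly, its I/O switching power)
# drop by half. Numbers are generic placeholders.
def per_chip_gbps(data_pins, gbps_per_pin):
    return data_pins * gbps_per_pin

rate = 7.0  # Gbps per pin, an assumed typical GDDR5 speed grade
x32 = per_chip_gbps(32, rate)  # one chip owns the full 32-bit channel
x16 = per_chip_gbps(16, rate)  # clamshell: two chips share the channel

print(f"x32 mode: {x32:.0f} Gbit/s per chip")
print(f"x16 mode: {x16:.0f} Gbit/s per chip ({x16 / x32:.0%} of x32)")
```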
 
Just did a quick measurement on this P10 shot from here: http://cdn.videocardz.com/1/2016/06/AMD-Radeon-RX-480-PCB-Polaris-10-7.jpg

Just perspective-corrected:

[image: perspective-corrected P10 die shot]


Overall it comes out very close to the rumored 232 mm² (off by +0.22 mm²), so nothing we didn't already know; just a confirmation, I guess :)
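In case anyone wants to repeat it, this is basically all the arithmetic involved; a minimal sketch assuming you know one real reference length in the corrected photo (the numbers below are placeholders, not my actual measurements):

```python
# Scale pixel measurements to mm via a known reference length in the
# perspective-corrected image (e.g. the package edge), then multiply
# the die's scaled width and height to get its area.
def die_area_mm2(die_px_w, die_px_h, ref_px, ref_mm):
    mm_per_px = ref_mm / ref_px
    return (die_px_w * mm_per_px) * (die_px_h * mm_per_px)

# hypothetical numbers: a 40 mm package edge spanning 800 px,
# with the die measuring 310 x 300 px in the same corrected image
print(f"{die_area_mm2(310, 300, ref_px=800, ref_mm=40.0):.1f} mm^2")
```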
 
Usually we end up overestimating that way compared to the official numbers (likely by measuring the package vs. what is inside), though you are a bit on the inside of the edge. It could be an indication that it's smaller than the 232 mm² (which still only comes from the LinkedIn résumé).

Also, the heatsink, which doesn't look that great for 150 W:
[image: RX 480 reference heatsink]
 
It matches the $199 price point. If that card consumes a bit less than 150 W under normal load (which is probable), it should be fine.
[my bold]
Additionally, one should note (and this is also in response to the TDP/TGP discussion earlier) that it's not only the GPU itself that consumes power. The memory, the power circuitry and more also eat into the budget, and they are usually not cooled by the fansink directly.
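As a rough feel for how a ~150 W board budget might split up (made-up illustrative numbers, not AMD data):

```python
# Hypothetical decomposition of total board power (TGP) for a ~150 W
# card: the GPU core is only one of several consumers.
board_power_w = {
    "GPU core (what the fansink mainly cools)": 110,
    "GDDR5 memory": 20,
    "VRM / power-circuitry losses": 15,
    "fan, misc.": 5,
}

total = sum(board_power_w.values())
for part, w in board_power_w.items():
    print(f"{part:45s} {w:4d} W ({w / total:5.1%})")
print(f"{'total board power (TGP)':45s} {total:4d} W")
```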
 
Not sure of the veracity of the post, but here it goes anyway:

I've played around with this little beast and I have to say this is the biggest improvement in years. Polaris is extremely powerful with micropolygons. I didn't even imagine that this kind of performance was possible on a quad-raster design. In an extreme test case with 8xMSAA and 64 polys/pixel, Polaris 10 is the fastest GPU on the market by far.
The second interesting thing is the pipeline stall handling. I wrote a program to test it, and it's remarkable how it works. I hate dealing with pipeline stalls, because it is hard, but on Polaris the stalls are just hugely reduced. Even if I run a badly optimized program, the hardware just "tries to solve the problem", and it works great. This behavior reminds me of Larrabee... and now we have it, not from Intel, but the hardware is here to solve a lot of problems.


http://semiaccurate.com/forums/showpost.php?p=266518&postcount=2022
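For scale, here's some back-of-the-envelope arithmetic on what "64 polys/pixel" means for a quad-raster front end (the clock is the rumored P10 boost figure, so treat this as purely illustrative):

```python
# What a 64 polys/pixel workload looks like against a quad-raster
# (4 triangles/clock) front end at 1080p. All figures are rough.
width, height = 1920, 1080
polys_per_pixel = 64
tris_per_frame = width * height * polys_per_pixel       # ~132.7 million

rasters, tris_per_clock, clock_hz = 4, 1, 1.266e9       # rumored boost clock
peak_tris_per_s = rasters * tris_per_clock * clock_hz   # ~5.1 G tris/s

print(f"triangles per frame: {tris_per_frame / 1e6:.0f} M")
print(f"peak setup rate:     {peak_tris_per_s / 1e9:.2f} G tris/s")
print(f"best-case fps at that load: {peak_tris_per_s / tris_per_frame:.0f}")
```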
 
I'm surprised there are only 36 CUs in there.

I think AMD wanted the cheapest chip that still convincingly met the VR minimum spec of 290/970-like performance.

Thanks to higher clocks and small architectural improvements, they apparently only needed 36 CUs to match the 40 CUs of the 290.
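The raw-throughput side of that is simple arithmetic; a quick sketch using the rumored P10 boost clock (unconfirmed) against the 290's reference clock:

```python
# GCN has 64 shaders per CU and 2 FLOPs/clock (FMA), so peak FP32
# throughput is CUs * 64 * 2 * clock.
def tflops(cus, clock_mhz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * clock_mhz * 1e6 / 1e12

print(f"R9 290 (40 CU @ 947 MHz):      {tflops(40, 947):.2f} TFLOPS")
print(f"P10 rumor (36 CU @ ~1266 MHz): {tflops(36, 1266):.2f} TFLOPS")
```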
 
Not sure of the veracity of the post, but here it goes anyway:

I've played around with this little beast and I have to say this is the biggest improvement in years. Polaris is extremely powerful with micropolygons. I didn't even imagine that this kind of performance was possible on a quad-raster design. In an extreme test case with 8xMSAA and 64 polys/pixel, Polaris 10 is the fastest GPU on the market by far.
The second interesting thing is the pipeline stall handling. I wrote a program to test it, and it's remarkable how it works. I hate dealing with pipeline stalls, because it is hard, but on Polaris the stalls are just hugely reduced. Even if I run a badly optimized program, the hardware just "tries to solve the problem", and it works great. This behavior reminds me of Larrabee... and now we have it, not from Intel, but the hardware is here to solve a lot of problems.


http://semiaccurate.com/forums/showpost.php?p=266518&postcount=2022
The author of that post clearly never ran any code on Larrabee. [emoji23]
 
I'm surprised there are only 36 CUs in there.

I'm not.
The rather small performance gain from Hawaii to Fiji, despite 45% more compute throughput and an even larger increase in memory bandwidth, shows that AMD would do well to spend a smaller portion of their transistors on raw compute capability and a larger one on, e.g., geometry performance.
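The "45% more compute" in concrete numbers (public spec-sheet figures):

```python
# Fiji has 4096 shaders to Hawaii's 2816 (both GCN, 2 FLOPs/clock),
# at broadly similar clocks, so the compute gap is ~45% from the
# shader counts alone and slightly wider at reference clocks.
hawaii_sp, fiji_sp = 2816, 4096
print(f"shader ratio: {fiji_sp / hawaii_sp:.2f}x (+{fiji_sp / hawaii_sp - 1:.0%})")

for name, sp, mhz in [("Hawaii (290X)", hawaii_sp, 1000),
                      ("Fiji (Fury X)", fiji_sp, 1050)]:
    print(f"{name}: {sp * 2 * mhz * 1e6 / 1e12:.2f} TFLOPS")
```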
 
I'm not.
The rather small performance gain from Hawaii to Fiji, despite 45% more compute throughput and an even larger increase in memory bandwidth, shows that AMD would do well to spend a smaller portion of their transistors on raw compute capability and a larger one on, e.g., geometry performance.

Surprised relative to the size of the GPU, I mean. So obviously the transistor budget went somewhere else. Or density decreased.
 