AMD: Speculation, Rumors, and Discussion (Archive)

Status
Not open for further replies.
If TSMC is already volume limited, they will have to compete for foundry space, and with nV being the larger partner, nV will get higher priority, just like (pretty sure) Apple got higher priority than nV, which gave nV supply constraint issues initially.
 
How much power is your card drawing when mining?
The one card I tested at work for this (with the same settings as above, which I found to be roughly the sweet spot) drew 135 watts (measured in isolation, via a riser extender etc.). My own card I haven't measured yet, since I lack the equipment at home.
 
Would it be safe to buy a used custom 480 from a miner when Vega comes out? (Even if they work at full load all the time, they are undervolted and underclocked.) Or will they just be too worn out to be a safe option?
 
Would it be safe to buy a used custom 480 from a miner when Vega comes out? (Even if they work at full load all the time, they are undervolted and underclocked.) Or will they just be too worn out to be a safe option?
Would IMHO depend on what they mined. An Ethereum miner should be safe, apart from normal aging. A card which was abused for an ALU-stressing currency, not so much.
Back then, when Bitcoin was still mined on GPUs, the 290X cards were pretty worn out by the time they finally got sold off on eBay and the like. And I doubt that 16nm is more durable than 28nm was.
 
If TSMC is already volume limited, they will have to compete for foundry space, and with nV being the larger partner, nV will get higher priority, just like (pretty sure) Apple got higher priority than nV, which gave nV supply constraint issues initially.

No. Morris Chang takes pride in the principle of not giving anyone priority, no matter their volume, and instead always letting customers bid more for earlier wafers. This is precisely why Apple wanted to get off TSMC: even Apple cannot buy guaranteed priority on TSMC. If AMD wants wafers at TSMC, even if TSMC is volume constrained, all they need to do is pay more than nVidia/Apple currently are.
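The mechanism described above can be sketched as a toy price-priority auction. This is purely illustrative (the customer names and bid figures are made up, and this is not a claim about TSMC's actual allocation process):

```python
# Toy illustration: wafer slots go to whoever bids highest per wafer,
# regardless of customer size or history.
def allocate_wafers(bids, slots):
    """bids: {customer: bid per wafer}; slots: wafer slots available.
    Returns customers in the order they are served."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[:slots]

# If AMD outbids everyone, it is served first despite lower volume.
print(allocate_wafers({"Apple": 9000, "nVidia": 8500, "AMD": 9500}, 2))
# → ['AMD', 'Apple']
```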
 
The one card I tested at work for this (with the same settings as above, which I found to be roughly the sweet spot) drew 135 watts (measured in isolation, via a riser extender etc.). My own card I haven't measured yet, since I lack the equipment at home.
Hmm, to be honest I was expecting closer to 100; perhaps the main culprit is the memory?
 
No. Morris Chang takes pride in the principle of not giving anyone priority, no matter their volume, and instead always letting customers bid more for earlier wafers. This is precisely why Apple wanted to get off TSMC: even Apple cannot buy guaranteed priority on TSMC. If AMD wants wafers at TSMC, even if TSMC is volume constrained, all they need to do is pay more than nVidia/Apple currently are.


Well, paying more will get ya higher priority lol. Supply and demand at work!
 
Hmm, to be honest I was expecting closer to 100; perhaps the main culprit is the memory?
Maybe. There are strange correlations between memory and GPU voltage in Wattman. OTOH, there are hacked BIOSes and driver validation files that claim an improvement to ~30-31 MH/s at 120 watts, though they seem to use GPU-Z's GPU-only power reading as their basis.
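To compare these figures on a common footing, mining efficiency is usually expressed as hash rate per watt. A quick sketch (the 135 W card's hash rate isn't stated above, so the 28 MH/s figure is a hypothetical placeholder; also note the two power figures come from different measurement methods, whole-card vs GPU-only):

```python
def mhs_per_watt(hashrate_mhs, power_w):
    """Mining efficiency in MH/s per watt."""
    return hashrate_mhs / power_w

# Hacked-BIOS claim from above: ~30.5 MH/s at 120 W (GPU-only reading)
print(round(mhs_per_watt(30.5, 120), 3))  # → 0.254
# Hypothetical 28 MH/s for the 135 W whole-card measurement
print(round(mhs_per_watt(28, 135), 3))    # → 0.207
```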
 
So, in the end it is of absolutely no importance whether or not AMD already supported/carried over some kind of tessellator in their R600 generation, since it was apparently not accessible from the outside, or at least no game developer cared enough to make use of it outside an industry standard.

I could post the headers, but there is this. Contrary to the last poster there, the library was available on Windows. I added this to OBGE and tried to tessellate the water, but it wasn't dense enough. It reinterpreted the vertex shader as a domain shader; the real vertex and hull shaders were passthrough implementations. It felt very eerie to program that thing, like time travel through some wormhole, because it was basically identical to DX11.
 
That's a very nice piece of information, Etha! Maybe there are other bits hidden out there as well :)
 
Are the 3GB and 6GB D700s in the Mac Pro the only implementations of a 384-bit bus on Tonga? If so, how do they perform compared to 256-bit Tonga? Can you take the 6GB D700 out of the Mac Pro and put it in a Windows machine?

The 384bit bus of Tonga has always been a question for me. Not whether it exists, but why AMD never sold a part with >256bit Tonga.
 
Indeed, Tonga only exists in the iMac line-up.

And the "new" Mac Pro graphic cards do not fit in a regular PCI-Express slot.
 
Are the 3GB and 6GB D700s in the Mac Pro the only implementations of a 384-bit bus on Tonga? If so, how do they perform compared to 256-bit Tonga? Can you take the 6GB D700 out of the Mac Pro and put it in a Windows machine?

The 384bit bus of Tonga has always been a question for me. Not whether it exists, but why AMD never sold a part with >256bit Tonga.
Dave Baumann (correct me if I'm wrong, Dave) once wrote that they couldn't find a reasonable market slot for a 384-bit Tonga. I took that to mean that the performance gained in sales-driving benchmarks didn't justify the added cost of a wider bus plus components. So as far as I understood it, it was a decision made from how the market actually looked at the time of retail availability.
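For context on what the wider bus would have bought: peak GDDR5 bandwidth scales linearly with bus width at a given per-pin data rate. Assuming the 5.5 Gbps GDDR5 that shipped on the 256-bit R9 285 (Tonga), the comparison would look like:

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bits per transfer * transfers/s / 8."""
    return bus_width_bits * data_rate_gbps / 8

# 5.5 Gbps GDDR5, as on the 256-bit R9 285 (Tonga)
print(peak_bandwidth_gbs(256, 5.5))  # → 176.0 GB/s
print(peak_bandwidth_gbs(384, 5.5))  # → 264.0 GB/s
```

A 50% bandwidth bump, at the cost of 50% more memory chips and board traces, which fits the "couldn't justify the added cost" reading above.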
 
I wonder why the GTX 1060 isn't an option. Perhaps the RX 470 sells for a lot less with only a small performance differential?

From what I understand, those new Alienware laptops have both the AMD and Nvidia options, with the 1060-1070 on the Nvidia one.

https://www.engadget.com/2016/09/02/dell-alienware-laptopts-vr-ready/

It's funny, because most articles that went online today only cite the Nvidia option.

Engadget:
For gamers, the main attraction is support for the latest NVIDIA laptop cards. The big-screen Alienware 17 gets the top-end NVIDIA GTX 1080 chip, while the Alienware 15 and 13 get the GTX 1070 and 1060, respectively. That means that all three models will be "VR-ready" for Oculus Rift and HTC Vive headsets.

CNet:

The new Alienware 15 and Alienware 17 both feature current-gen Intel processors and Nvidia's new 10-series GPUs. That's important, as these new Nvidia GPUs promise nearly the same performance in both their laptop and desktop versions -- no more underperforming "M" versions of graphics chips.

Anandtech:
Alienware is calling the 15 and 17 VR-Ready, and to get there they’ve turned to NVIDIA’s Pascal lineup of mobile graphics cards with the GTX 1080 on the 17 and GTX 1070 on the 15. Since Kaby Lake has only just launched with the dual-core Y and U series, Alienware is still leveraging Intel’s 6th generation Core with quad-core i7 available.
 