CryptoCurrency Mining with GPUs *spawn*

Noob question: I have ETH in the MetaMask Chrome extension wallet. How do I buy BTC?

  • Create an account with an exchange like Bittrex or Poloniex.
  • Create an Ethereum wallet on the exchange to hold the Ethereum you want to trade.
  • Send the Ethereum from the Metamask wallet to your exchange wallet (this will take several minutes and there is a fee).
  • Once the Ethereum is in your exchange wallet, go to the ETH/BTC trading section of the exchange and set a BTC price for your ETH.
  • Once someone agrees to your price, the BTC will show up in your exchange BTC wallet.
If your goal is to mine ETH to sell for BTC, you may want to mine directly into your exchange wallet. Every time you move coins between wallets, you have to pay a fee to facilitate the transaction. You don't want to waste ETH (and time!) just shuffling funds around.
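For a rough sense of what each transfer costs: a plain ETH transfer always uses 21,000 gas, and the gas price and ETH price below are placeholder assumptions, so plug in current numbers before relying on it. A quick back-of-envelope sketch in Python:

Code:
# Back-of-envelope fee for a simple ETH transfer.
# 21,000 gas is the fixed cost of a plain value transfer;
# the gas price and ETH price here are assumptions, not live data.
GAS_PER_TRANSFER = 21_000
gas_price_gwei = 21       # assumed; varies with network congestion
eth_price_usd = 300       # assumed, for illustration only

fee_eth = GAS_PER_TRANSFER * gas_price_gwei * 1e-9
print(f"Fee: {fee_eth:.6f} ETH (~${fee_eth * eth_price_usd:.2f})")
# -> Fee: 0.000441 ETH (~$0.13) at the assumed prices

Small per transfer, but it adds up if you keep shuffling funds around.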
 
Interesting. I'd seen the service mentioned, but didn't dig into it. I had to hunt around on the site to find it, but here's where they lay out their fee structure. I'm always very careful to find out exactly how a service makes their money before I engage with them.

Their reddit page is giving me pause. Any personal experience with them, @Mize?
I converted 0.5 ETH to BTC via ShapeShift last week; it worked fine going from my MyEtherWallet to my blockchain.info wallet. The value of the BTC was very close to the CoinGecko rate at the time.
 
Looks like Vega will sell well after all:

https://hothardware.com/news/amd-radeon-rx-vega-mining-block-chain-ethereum

AMD's new blockchain driver increases performance and reduces power draw a bit. I don't care enough to do the math, but I think that makes Vega 56 very appealing to miners. Plus, there may be further improvements to come.

It's not appealing. ETH mining is nearly dead after August 25 (big jump in difficulty), and even so, you can get 24 MH/s for 80 W with a 1060, and a little more with a 580. 30-35 MH/s for more than 180-200 W is a bad choice...
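Just to put numbers on that, here's the hashes-per-watt comparison using the figures quoted above (the 580 wattage is my own rough assumption, and the Vega figure is the midpoint of the range):

Code:
# MH/s per watt for the configurations quoted above.
# Values are the thread's quoted numbers, not measurements.
cards = {
    "GTX 1060 (tuned)": (24, 80),    # 24 MH/s at 80 W
    "RX 580 (tuned)":   (26, 120),   # assumed wattage for "a little more"
    "Vega (reported)":  (33, 190),   # midpoint of 30-35 MH/s, 180-200 W
}
for name, (mhs, watts) in cards.items():
    print(f"{name}: {mhs / watts:.2f} MH/s per watt")
# -> roughly 0.30, 0.22, and 0.17 MH/s per watt respectively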
 
It's not appealing. ETH mining is nearly dead after August 25 (big jump in difficulty), and even so, you can get 24 MH/s for 80 W with a 1060, and a little more with a 580. 30-35 MH/s for more than 180-200 W is a bad choice...

Really depends on what happens to ETH pricing after the 25th. If it increases enough, then it's still worth mining.
 
Really depends on what happens to ETH pricing after the 25th. If it increases enough, then it's still worth mining.

You're right, but I doubt it will rise enough. It would have to be higher (and not by a small amount) than last May-June, when the big rush of miners happened.
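To make "rise enough" concrete, here's a very rough revenue sketch; every input below is a placeholder assumption (the 5 ETH block reward is the value in effect at the time), so treat it as an illustration of the dependence, not a forecast:

Code:
# Rough daily ETH revenue for a rig, as a share of network hashrate.
# All inputs are placeholder assumptions for illustration.
my_hashrate_mhs = 100            # e.g. a few cards
network_hashrate_ghs = 90_000    # assumed network hashrate (GH/s)
block_time_s = 20                # rough post-difficulty-jump block time
block_reward_eth = 5             # block reward at the time

blocks_per_day = 86_400 / block_time_s
my_share = (my_hashrate_mhs / 1000) / network_hashrate_ghs
eth_per_day = my_share * blocks_per_day * block_reward_eth
print(f"~{eth_per_day:.3f} ETH/day")   # ~0.024 ETH/day with these inputs

Whether that covers the electricity then comes down entirely to the ETH price and your power rate, which is exactly the uncertainty here.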
 
Hitting 30 MH/s @ 95 W with a 1070 is trivial. Unless AMD's new crypto driver pulls some additional pixie dust out of its cache (rhymes with ass, sorta), Vega isn't going to have much of an impact.
 
Well, if I'm not mistaken, AMD's new crypto instruction is supposed to help with mining, but has yet to be used by existing software, right? I have no idea how much it could bring, but it might help.

I also wonder about underclocking—and undervolting!—the core while overclocking the memory.
 
Well, if I'm not mistaken, AMD's new crypto instruction is supposed to help with mining, but has yet to be used by existing software, right? I have no idea how much it could bring, but it might help.
It has an instruction to accelerate hash calculations. That part was never the bottleneck for ETH. So it shouldn't have an impact there. I don't know about other currencies. (Well, it would help bitcoin, but that's not viable either way.)
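For context on why: each Ethash hash does 64 mix rounds, each pulling a 128-byte page from the multi-gigabyte DAG, so throughput is capped by memory bandwidth long before the hashing math matters. A rough ceiling (the bandwidth number below is just an example):

Code:
# Ethash is bandwidth-bound: ~64 random 128-byte DAG reads per hash.
BYTES_PER_HASH = 64 * 128          # = 8 KiB of DAG traffic per hash
bandwidth_gbs = 484                # example card with 484 GB/s
max_mhs = bandwidth_gbs * 1e9 / BYTES_PER_HASH / 1e6
print(f"Theoretical ceiling: ~{max_mhs:.0f} MH/s")   # ~59 MH/s

Real rates land well below that ceiling, but it's why faster ALUs alone don't move the needle for ETH.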
 
It has an instruction to accelerate hash calculations. That part was never the bottleneck for ETH. So it shouldn't have an impact there. I don't know about other currencies. (Well, it would help bitcoin, but that's not viable either way.)

Thanks for clarifying that. It would be interesting to know precisely where it can help, and by how much. Presumably the instruction was added for a very real reason, but I suppose cryptocurrencies might not be it—after all, there are many other uses for cryptographic algorithms.
 
The new Vega ops: https://github.com/RadeonOpenComput...CN_ISA_Manuals/Vega_Shader_ISA_28July2017.pdf
Code:
V_XAD_U32: D.u32 = (S0.u32 ^ S1.u32) + S2.u32
V_SAT_PK_U8_I16: D.u32 = {16'b0, sat8(S.u[31:16]), sat8(S.u[15:0])}
V_LSHL_ADD_U32: D.u = (S0.u << S1.u[4:0]) + S2.u
V_ADD_LSHL_U32: D.u = (S0.u + S1.u) << S2.u[4:0]
V_ADD3_U32: D.u = S0.u + S1.u + S2.u
V_LSHL_OR_B32: D.u = (S0.u << S1.u[4:0]) | S2.u
V_AND_OR_B32: D.u = (S0.u & S1.u) | S2.u
V_OR3_B32: D.u = S0.u | S1.u | S2.u

The ADD/SHIFT, SHIFT/ADD, and ADD3 will all be useful for address calculations.

Some compute-limited coins like SIA might make use of the AND/OR, OR3, and XOR/ADD, but they're weirdly limited. Nowhere near as flexible as CUDA's LOP3 which can calculate any 3-input logical op, even things like ((a & b | c) ^ a). Generally not expecting this to make much of a difference.
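For anyone curious what that LOP3 flexibility actually looks like: it takes an 8-bit lookup table that encodes any function of three inputs, and the LUT is just the function evaluated on three magic constants. Here's a quick model of the semantics in Python (not real GPU code; the helper names are mine):

Code:
# LOP3-style 3-input logic: an 8-bit LUT encodes any f(a, b, c).
# The LUT is f evaluated on these selector constants (NVIDIA's usual trick).
A_SEL, B_SEL, C_SEL = 0xF0, 0xCC, 0xAA

def make_lut(f):
    return f(A_SEL, B_SEL, C_SEL) & 0xFF

def lop3(a, b, c, lut, bits=32):
    # Apply the LUT bit by bit, the way the hardware instruction would.
    out = 0
    for i in range(bits):
        idx = (((a >> i) & 1) << 2) | (((b >> i) & 1) << 1) | ((c >> i) & 1)
        out |= ((lut >> idx) & 1) << i
    return out

# The example op from above: ((a & b | c) ^ a)  ->  LUT = 0x1A
f = lambda a, b, c: ((a & b) | c) ^ a
lut = make_lut(f)
assert lop3(0x12345678, 0x9ABCDEF0, 0x0F0F0F0F, lut) == \
       f(0x12345678, 0x9ABCDEF0, 0x0F0F0F0F) & 0xFFFFFFFF
print(hex(lut))   # 0x1a

Vega's fixed-function AND/OR, OR3, etc. cover only a handful of those 256 possible tables; LOP3 covers all of them in one instruction.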
 
So how exactly does GP100 get to these ridiculously high hash rates? Math throughput and bandwidth put it on par with GP102, which it somehow manages to outperform by over 2x.
 
So how exactly does GP100 get to these ridiculously high hash rates? Math throughput and bandwidth put it on par with GP102, which it somehow manages to outperform by over 2x.

What do you mean? With 717 GB/s it's got a good deal more bandwidth than the Titan Xp (548 GB/s). I guess that doesn't quite explain the 2× difference, but maybe having four HBM2 channels is really helpful somehow.
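Running both through the usual Ethash bandwidth ceiling (64 × 128-byte DAG reads per hash) shows the raw GB/s gap only accounts for about 1.3×:

Code:
# Bandwidth-only Ethash ceilings for the two cards above.
BYTES_PER_HASH = 64 * 128    # ~8 KiB of DAG traffic per hash
for name, gbs in [("GP100 (717 GB/s)", 717), ("Titan Xp (548 GB/s)", 548)]:
    print(f"{name}: ~{gbs * 1e9 / BYTES_PER_HASH / 1e6:.0f} MH/s ceiling")
# -> ~88 vs ~67 MH/s, about 1.3x, nowhere near 2x

So whatever GP100 is doing, it's more than just extra bandwidth.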
 
Hitting 30 MH/s @ 95 W with a 1070 is trivial. Unless AMD's new crypto driver pulls some additional pixie dust out of its cache (rhymes with ass, sorta), Vega isn't going to have much of an impact.

Do you mind sharing how you get a 1070 to 30 MH/s @ 95 W? Both of mine are running -50% power, +700 MHz memory, and hashing at 30+ MH/s. Top card is running at 38 degrees, bottom card at 63 degrees. Hardware Info reports average power consumption at about 115 W and 110 W respectively; however, I assume that is just the GPU power and the cards could well be pulling more. That's running on Win 10, using Afterburner to change settings.
 