The AMD Execution Thread [2018]

Status
Not open for further replies.
The easiest way I could imagine, for example: enable full mining throughput only if no display outputs are present
It is a dangerous path to deliberately discriminate against particular uses of a product the user has paid money for. Personally I wouldn't mind if just about every miner out there got sucked up by a passing-by black hole, but that said, it's not really for me or anyone else to decide how other people use their computer hardware.

If we can't decide what we can run on our own stuff for ourselves, then what are we even living for? :p
 
It is a dangerous path to deliberately discriminate against particular uses of a product the user has paid money for.
Yes, it is. You shouldn't do this mid-cycle, but if, when a new generation of cards comes out, you communicate it clearly from the beginning and have specific products targeting that market (mining cards without display outputs soldered on, or even with the display engines fused off in the chip), I think it might be evaluated as an option for product differentiation. It's another question whether you can afford to do this in the competitive landscape without scaring people off.
 
I think it might be evaluated as an option for product differentiation.
It's bullshit tho, because you'd literally be charged extra money for nothing. I.e., you buy a TV, except you can't use it to watch streaming internet video (like Netflix et al.); to do that, you have to buy a TV which is by every measure functionally identical, except five times as expensive.

Such 'product differentiation' would open the door to arbitrariness; i.e., what qualifies as coin mining? Who gets to judge what is or isn't? What is their motivation for their decision? How would it be enforced?

Like I said, dangerous path. It'd be a form of corporate fascism. Oppression.
 
So, your proposed solution is … what exactly? Handle miners' demands with higher ordering volume, increasing price bids at the foundries for wafer allocation, leading to higher prices for everyone?
 
So, your proposed solution is … what exactly?
What made you think I have one, or that there even is one? And yes, ordinarily higher demand would be met by increasing orders, but since cryptocoins are a bullshit fake bubble, that's not ideal in this case: the bubble could burst at virtually any time, while graphics cards take months to manufacture.

I'd have to say, "ride it out until the end", if NV/AMD aren't willing to increase supply.
 
Look, isn’t it reasonable to say that we are in the current situation precisely because the hardware suppliers cannot find a good solution? And they are the ones having the best sales/cost/allocation/whathaveyou data.
 
Which is what Nvidia is proposing:

NVIDIA Asks Retailers To Stop Selling To Miners & Sell To Gamers Instead
https://wccftech.com/nvidia-instructs-retailers-stop-selling-miners-sell-gamers

https://www.nvidia.com/en-us/geforce/products/10series/geforce-store

Which doesn't stop them from doing it, just slows them down slightly. And that is only if the retailers agree to limit the number of GPUs sold to a single account.

And even that would just slow a determined miner down and not stop them. You'd just need multiple accounts at that point, which is extremely easy to do. Will they then limit it based on address? If so that wouldn't be a good method as it penalizes legitimate purchasers. For example, it isn't uncommon for people getting their start in Silicon Valley or the S.F. Bay area to live 3-6 people per apartment due to the extremely high housing and rental costs.

Regards,
SB
 
The latest move might not stop mining completely, but at least it will slow it down a bit, giving gamers some breathing room or a chance to get some cards. Partners will stop shipping thousands of cards from factories directly to miners, and limiting purchases to 2 per order (or so) should also help. Partners will also try to offer more cards to gamers, or they risk receiving fewer GPUs from NVIDIA, as NVIDIA will have to fulfill the increased demand for GPUs through its official site.

At least NVIDIA is doing something about it: they created mining-specific SKUs, asked retailers and partners to stop or restrict selling to miners, and offered GPUs on their official site at MSRP. AMD should at least try to do the same. Their rapidly vanishing gaming market share can't be a good thing for them at all.
 
I wonder if AMD or Nvidia get a lot of chips with ROPs or other non-mining-essential hardware that is non-functional, and could just sell those as mining-only chips.
 
At least NVIDIA is doing something about it: they created mining-specific SKUs, asked retailers and partners to stop or restrict selling to miners, and offered GPUs on their official site at MSRP. AMD should at least try to do the same. Their rapidly vanishing gaming market share can't be a good thing for them at all.

Meh, the mining-specific SKUs are very poor value propositions for mining, at least what ends up in retail stores.
The rest are just words; there obviously is no stock for the official-site purchases. And recommending something to retailers doesn't mean anyone has to follow it.

The one thing they did which is IMO somewhat substantial is the limit in the drivers. Currently you can have a maximum of 8 NV GPUs per system. They have, AFAIK, mentioned that this number may go lower. AMD, on the other hand, goes so far as to track configurations with 12 GPUs; they even have known bugs/limitations related to them in the version history.
 
The one thing they did which is IMO somewhat substantial is the limit in the drivers. Currently you can have a maximum of 8 NV GPUs per system. They have, AFAIK, mentioned that this number may go lower. AMD, on the other hand, goes so far as to track configurations with 12 GPUs; they even have known bugs/limitations related to them in the version history.
AMD also has an open-source Linux driver, so a reasonably knowledgeable user could just modify the configuration and distribute that, assuming the larger configurations weren't needed by enterprise users.
 
AFAIK on Linux there are no limitations on the number of GPUs per system.

On Windows, there used to be a limit of 8 GPUs. AMD increased it to 12 about 1-2 months ago.

nVidia (don't have the link, but there was one IIRC) stated they will maintain or increase this limitation. They'd remove the limitation only for mining-specific cards.
 
I agree with a previous post: make or "recycle" compute-only GPUs, without (or with minimal) ROPs, display outputs, etc., and make them cheaper than "normal" GPUs. Yes, you will need more wafers and so on, but at least gamers will see gaming cards again...
 
1) On-die multichip interconnect
Are there specific crypto-coins that are limited by the main chip's processing or internal transfers?
The rigs I've seen go out of their way to put as many discrete cards on a board as possible, hooked up to as few PCIe lanes as possible. The GPU is usually clocked low and undervolted if possible, with the memory pushed as high as is feasible.
4) Fast processor interconnect
is there a mining target that is constrained by that element of the system?
Coins like Ethereum are purposefully bottlenecked by local DRAM bandwidth

Design a distributed memory controller, so that each GPU in a multi-chip die treats its local HBM2 module as a separate memory bank in a multi-channel configuration. I believe this has been implemented in the UltraPath Interconnect protocol for LGA 3647 socket Xeon Gold/Platinum, as well as Xeon Phi 200 processors (these are not cancelled, unlike the PCIe 'accelerator' boards).

Miners use the limited number of PCIe lanes (36-48) in current processors to control as many video cards as possible. On the other hand, AMD Ryzen Threadripper has 64 PCIe lanes and EPYC has 128, so they can theoretically support a good number of 8/16/32-lane PCIe slots.

2) Binning for low power
the market's so overheated that making a profit in such a supply-limited scenario could allow for any chip... to be salable. And if the card can make back its purchase price before it breaks, the reliability standard... could be more relaxed.
Obviously, if your video card doesn't break, it keeps working and you don't have to spend any money and effort to replace it.

3) Optimised software
Create instructions that can noticeably help mining performance or efficiency, then throttle them.
Determine what patterns are needed for mining, then determine what lowered voltage+gating+clock levels those need, and then make them unavailable on standard firmware
If Ethereum mining is really memory-bandwidth limited, the only thing you can reasonably do is actually increase memory bandwidth.
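To illustrate why an Ethash-style workload is bandwidth-bound, here is a toy Python sketch. This is emphatically not the real algorithm: the dataset size, mixing constant, and round count are invented for illustration. The point is structural: each round does a data-dependent read into a large dataset, so at real DAG sizes (gigabytes) nearly every read misses cache and the chip waits on DRAM, not arithmetic.

```python
import hashlib
import random

# Toy sketch of an Ethash-style memory-hard inner loop (NOT the real
# algorithm; sizes and constants are made up). Each mixing round does a
# data-dependent read into a large dataset, which is what makes the real
# thing DRAM-bandwidth bound at gigabyte scale.
DATASET_WORDS = 1 << 16          # the real DAG is gigabytes, not 512 KiB

_rng = random.Random(0)
dataset = [_rng.getrandbits(64) for _ in range(DATASET_WORDS)]

def toy_hashimoto(nonce, rounds=64):
    """Mix a nonce through `rounds` pseudo-random dataset reads."""
    seed = hashlib.sha256(nonce.to_bytes(8, "little")).digest()
    mix = int.from_bytes(seed[:8], "little")
    for _ in range(rounds):
        idx = mix % DATASET_WORDS    # data-dependent index: cache-hostile
        mix = ((mix * 0x100000001B3) ^ dataset[idx]) & 0xFFFFFFFFFFFFFFFF
    return mix
```

Because the next index depends on the previous read, the reads cannot be predicted or batched away, which is exactly why overclocking the memory helps and overclocking the core mostly doesn't.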

that was specifically carved out from the datacenter limitation
This is similar because Nvidia's 'professional' products do not offer much additional value over consumer cards for a considerably higher price.
 
Please correct me if I am wrong. My understanding is that the GF 12nm process is just a rebranding of the 14nm+ process: tighter design rules and new constructs, not any different fab equipment. So there is no retooling downtime at the fab. AMD would need to make new masks and validate the new wafers, but this doesn't cost much production capacity.
AFAIK they would also need to package the new chips, and this would require new mechanical tooling. If they needed to install new lithography equipment, the entire fab would need to be stopped and then brought back to a clean state, and that would take a full year or more.

NVIDIA Asks Retailers To Stop Selling To Miners & Sell To Gamers Instead
The latest move might not stop mining completely, but at least it will slow it down even a bit, giving gamers a breathing room or a chance to get some cards
This would only result in further price increases due to limited supply. Cryptocurrency miners would just use multiple spam accounts, or pay intermediaries to make card orders for them.

how can you tell the difference between compute tasks needed for games, pro apps, etc., and a miner program
You can't.

GPU vendors used to detect known games and replace their shaders with custom optimized versions, but that was in the days of simple shader model 2.0 hardware rated at 10 GFLOPs, and such white-listing could be easily circumvented.

You shouldn't be doing this in the middle of the road, but when a new generation of card comes out, you communicate this clearly from the beginning and you have specific products targeting that market
It won't be enforceable, unless there are specific hardware differences in the GPU which make driver modding impossible.
 
Design a distributed memory controller so that each GPU in a multi-chip module could treat its nearby HBM2 module as a separate memory bank in a multi-channel configuration. I believe this has been implemented in the UltraPath Interconnect protocol for LGA 3647 socket Xeon Gold/Platinum, as well as Xeon Phi 200 processors (these are not cancelled, unlike the PCIe 'accelerator' boards).

Beneficial uses of that functionality do depend on optimizing for locality and avoiding excessive transfers between chips. There are even modes that sub-divide the LLC domains on-chip so that quadrants of the chip cache specific memory channels, which points to sub-optimal access patterns not always being hidden, although which mining algorithms would notice this specifically isn't clear. That class of hardware tends to be uncommon and doesn't seem to overlap much with the range of workloads that are snapping up so many discrete boards.

Miners use the limited number of PCIe lanes (36-48) available in current processors to control as many video cards as possible. On the other hand, AMD Ryzen Threadripper has 64 lanes and EPYC has 128, so they can theoretically support a large number of 8/16/32-lane PCIe slots.
Even more limited PCIe systems don't display much sensitivity to PCIe bandwidth, with lane counts per GPU potentially even narrower than that, and miners even accept plugging PCIe 3.0 cards into PCIe 2.0 slots. It seems the expansion bus is far down the list of priorities for a mining-targeted product.

If your video card doesn't break, it keeps working and you don't have to spend any money and effort to replace it.
What is the level of demand in this scenario? If it's like right now, where buyers are spending vastly above base pricing even for cards that can have measurable deficits versus competing options and retailers consistently have minimal stock, it seems to come down to whether a card can be profitable at all rather than whether it is better than competition that is likely unavailable.

If Ethereum mining is really memory-bandwidth limited, the only thing you can reasonably do is actually increase memory bandwidth.
The power consumption of the GPU influences the temperature/power budget available for overclocking memory, or influences how many additional cards can be added, at least for the apparent optimal point for rigs targeting a workload like Ethereum in its current proof of work incarnation.

Multi-chip may increase the apparent bandwidth of the package, but it's not local in the same manner as a single chip. The access patterns are intended to exceed on-chip storage for ASICs while scaling poorly with inter-chip transfers.
For that class of algorithms, I am unclear as to how significant the difference is between two independent GPUs running independent payloads in parallel versus trying to unify them for one DAG using a bus with non-zero power cost with sub-optimal access patterns.
Ethereum's algorithm didn't seem to anticipate large numbers of discrete GPUs being run together, however.

Equihash was mentioned earlier, which seems to have adjusted the balance of arithmetic and bandwidth requirements. From my early reading on the topic, its proof-of-work algorithm appears to try to correct that oversight by being more sensitive to memory capacity, which may not favor HBM if that is the next major ASIC-resistant target.

This is similar because Nvidia's 'professional' products do not offer much additional value over consumer cards for a considerable higher price.
This is likely a profit-maximization move, although I think Nvidia is restricting purchase quantities for the cards in question.
I think that points to a continuum with non-datacenter customers at a given price point, and datacenter and mining operating at higher ones. Possibly, the datacenter customers pay the most, but I don't think Nvidia would carve out the mining component if it were indistinguishable revenue-wise from the individual use case. The rumors of more direct sales to miners might figure into it.
 
It won't be enforceable, unless there are specific hardware differences in the GPU which make driver modding impossible.
AMD could always bring back Fiji, with its 4 GB of HBM providing insufficient capacity for mining. Realistically, though, the APUs or MCM designs might be a good bet to avoid miners; at the very least they would be higher-margin parts with the CPU component. Embedded in a NUC there may be enough other components that any gamer would want the bundle, but a miner would find it less desirable to scale. I'd agree any profitable design will ultimately be gamed and there is little to stop it.
 
This would only result in further price increases due to limited supply. Cryptocurrency miners would just use multiple spam accounts, or pay intermediaries to make card orders for them.
That depends on the scale of the buyer. It takes non-zero time to create and rotate through accounts, or additional expense in time and money if dealing with intermediaries. If there's a credit or banking account limit, there can be higher barriers to creating enough of them on an individual basis, and legal barriers if you try to "hack" your way past those.
Of note, it's alleged that larger mining customers may have cut out AIB partners or wholesalers as intermediaries in their pursuit of greater economies of scale.

It seems like the more casual miner that also games wouldn't be as affected, while the largest concerns would be above that barrier of entry.
The classes of mining customer in the middle would be more adversely affected, which makes me think that the biggest mining concerns may derive additional benefit by paying enough to the GPU suppliers to encourage further purchase limits.
The losers would seem to be any non-pool competitors trying to grow to the level of the dominant miners, and possibly the channel/retail partners that are purportedly seeing less supply. This does seem to push towards empowering the largest datacenters and/or the largest pool admins over a less centralized market.

You can't.
Some of the facets of the most common mining algorithms appear to be rather uncommon. A workload that uses a lot of integer and bitwise operations, hashing-type functions, unpredictable DRAM accesses, and very poor on-die caching behavior for the vast majority of its work looks like it would stand out.
Games that somehow trip a heuristic get the typical "performance-enhancing" launch driver or bug fix.
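As a hypothetical sketch of what such a heuristic could look like, in Python for readability: the counter names and the 0.6/0.1 thresholds below are invented for illustration, not taken from any real driver, which would use whatever performance counters the hardware actually exposes.

```python
# Hypothetical driver-side heuristic: classify a kernel from profiling
# counters. Counter names and thresholds here are invented examples.
def looks_like_mining(counters):
    """True if a kernel's profile matches the mining signature described
    above: mostly integer/bitwise ops plus a cache-hostile access pattern."""
    total = max(counters["total_instructions"], 1)
    int_bitwise_frac = counters["int_bitwise_instructions"] / total
    l2_accesses = max(counters["l2_accesses"], 1)
    l2_hit_rate = 1.0 - counters["l2_misses"] / l2_accesses
    return int_bitwise_frac > 0.6 and l2_hit_rate < 0.1

# An Ethash-style kernel: hashing-heavy, nearly every L2 access misses.
miner_profile = {"total_instructions": 1_000_000,
                 "int_bitwise_instructions": 820_000,
                 "l2_accesses": 100_000,
                 "l2_misses": 96_000}

# A typical game kernel: mostly FP work, decent cache locality.
game_profile = {"total_instructions": 1_000_000,
                "int_bitwise_instructions": 150_000,
                "l2_accesses": 100_000,
                "l2_misses": 20_000}
```

Of course, as the thread notes about white-listing, a miner who controls their own kernel can pad the instruction mix or shuffle access patterns to duck under any fixed threshold, so this would be an arms race at best.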


It won't be enforceable, unless there are specific hardware differences in the GPU which make driver modding impossible.
The various hardware vendors have made choices that give them incomplete but growing ability to do this. Nvidia has a stronger stance in maintaining its proprietary driver, and AMD's use of its SOC15 system organization provides a trusted environment and signing checks that could be directed to block undesirable software from running.
It might not be sufficient overall right now, although there's a decent path going forward if the choice is made.
 
AMD could always bring back Fiji with it's 4GB of HBM that provides insufficient capacity for mining.

Incorrect. Pretty sure ETH's DAG won't exceed 4 GB until something like mid-2019. Probably ETH will be PoS by then, so it won't be minable anyway. And that's one single currency (even though it's an important one, mining others yields similar short-term returns).
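For what it's worth, the crossover point is easy to estimate from the published Ethash constants. The ~14 s average block time below is an assumption, and the real DAG size is primed slightly below this linear estimate, so treat the result as a rough figure:

```python
# Rough estimate of when the Ethash DAG outgrows a 4 GiB card.
# Constants are from the Ethash spec; the 14 s block time is an
# assumed average, and the real size is primed slightly lower.
DATASET_BYTES_INIT = 2**30      # 1 GiB DAG at epoch 0
DATASET_BYTES_GROWTH = 2**23    # ~8 MiB added per epoch
EPOCH_LENGTH = 30_000           # blocks per epoch
BLOCK_TIME_S = 14               # assumed average block time

def epoch_when_dag_exceeds(limit_bytes):
    """First epoch whose (linear-estimate) DAG size reaches limit_bytes."""
    epoch = 0
    while DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch < limit_bytes:
        epoch += 1
    return epoch

epoch = epoch_when_dag_exceeds(4 * 2**30)
years = epoch * EPOCH_LENGTH * BLOCK_TIME_S / (365.25 * 24 * 3600)
print(f"DAG reaches 4 GiB at epoch {epoch}, ~{years:.1f} years after genesis")
```

With these constants it lands at epoch 384, a bit over five years after the mid-2015 genesis, so a 4 GB card has considerable runway for Ethereum specifically.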
 