Such bandwidth-sensitive workloads could be optimized by the driver (or managed explicitly by the programmer through API extensions) to use each die as a separate node with a local memory pool.
At that point, why invest in a massive amount of inter-die communication for a mining-targeted product?
AMD could provide a less intensive level of connectivity between the dies, and unless there's a class of mining algorithms that is widely used and profitable, AMD could pocket the implementation savings with little revenue lost.
At the time of the Infinity Fabric announcement, news sites were citing 512 GB/s of bandwidth in graphics applications (or specifically in the Vega GPU). Unfortunately, I cannot find any references to original AMD materials in the white noise of reposts and retweets...
I believe Raja Koduri stated that the mesh's bandwidth matched the memory controller bandwidth. The implementations so far have individual link bandwidth in one direction that matches one channel's bandwidth. EPYC over-provisions its MCM bandwidth in order to have a fully connected setup.
This presumes for mining that there's a compute-bound workload that is sensitive to whether there is one GPU of size X versus 2 of size X/2. I'm asking if there's a class of mining algorithms with that limitation. Some of the most notable ASIC-resistant algorithms were constructed in part to not scale as much with resources that might favor a wealthy miner disproportionately or allow concentration of hash rate.
Interpreting performance counters takes a qualified graphics programmer running graphics debugging tools in the Visual Studio IDE with full access to the source code.
A part of the underlying infrastructure for those counters is what allows DVFS to not melt the chip or engage turbo, since AMD's method uses progress measurement, estimated power cost per event, and utilization.
Further, part of my proposal was the introduction of specific functions that boost mining efficiency that would be very trackable.
Any such detection heuristics in the driver will just give rise to obfuscation code pretending to perform genuine graphics rendering.
If the heuristic is, for example, "consumer SKUs must use 20% of hardware events for fixed-function graphics hardware events before duty cycling, and mining ops are 1/16 rate", a miner would spoof it by dedicating 20% of the GPU's capacity to a fake renderer and avoiding the flagged mining operations. That, or they could pay more up-front and still make money.
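To make the spoofing argument concrete, here is a minimal sketch of such a duty-cycling check. Everything in it is an assumption for illustration: the function name, the counter inputs, and the 20%/1-in-16 thresholds are taken only from the hypothetical heuristic above, not from any real driver.

```python
# Hypothetical sketch of the duty-cycling heuristic discussed above.
# All thresholds (20% fixed-function share, 1/16 mining-op rate) and all
# names here are invented for illustration; no real driver works this way.

def should_duty_cycle(events_fixed_function, events_total, mining_ops, cycles):
    """Return True if this hypothetical consumer-SKU policy would throttle."""
    if events_total == 0:
        return True  # no graphics activity at all: throttle
    graphics_share = events_fixed_function / events_total
    mining_rate = mining_ops / cycles if cycles else 0.0
    # Throttle unless >= 20% of hardware events come from fixed-function
    # graphics, and rate-limit mining-style ops to a 1/16 issue rate.
    return graphics_share < 0.20 or mining_rate > 1 / 16

# A miner spoofs the check by running a fake renderer that keeps the
# graphics share just above the threshold and mining ops under the cap:
print(should_duty_cycle(events_fixed_function=25, events_total=100,
                        mining_ops=5, cycles=100))   # spoofed: not throttled
print(should_duty_cycle(events_fixed_function=0, events_total=100,
                        mining_ops=90, cycles=100))  # pure mining: throttled
```

The sketch shows why any static threshold is gameable: the attacker only has to sacrifice exactly the slack the threshold leaves.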
Additional targeted restrictions could be checks on workloads not using the graphics pipeline in systems with more than 2-4 cards, and maybe a check for the negotiated PCIe link width being 4x or below. Special operations or the desired low voltages and clocks could be overridden by DVFS, which is already able to override settings at its discretion.
It's not that one couldn't mine, just that someone trying to optimize a mining rig or mining data center for max profit while paying retail price gets only what they pay for.
If it works - the problem is, intricate DRM systems rarely work as intended;
the SimCity fiasco should teach us a few lessons.
This isn't DRM. I was not proposing that they prevent mining so much as offering a trade-off: a higher up-front cost for a mining SKU or unlock, versus a reduced hash rate that would still eventually pay for itself under current miner logic. This is market segmentation.
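The trade-off can be put in back-of-the-envelope terms. Every number below (prices, hash-rate revenue) is an invented assumption purely to illustrate the payback logic; none comes from real products.

```python
# Illustrative payback comparison for the segmentation idea above.
# All prices and revenue figures are made-up assumptions, not real data.

def payback_days(price_usd, revenue_per_day_usd):
    """Days until cumulative mining revenue covers the purchase price."""
    return price_usd / revenue_per_day_usd

consumer_price = 500.0     # assumed retail price of the throttled card
mining_sku_price = 800.0   # assumed premium price of the unthrottled SKU
full_revenue = 8.0         # assumed USD/day at the full hash rate
throttled_revenue = 4.0    # assumed USD/day at the reduced hash rate

print(payback_days(consumer_price, throttled_revenue))  # 125.0 days
print(payback_days(mining_sku_price, full_revenue))     # 100.0 days
```

Under these assumed numbers both cards eventually pay for themselves, but the mining SKU pays back sooner, so a profit-maximizing miner rationally pays the premium; that is the segmentation working as intended.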
No. The Soviet planned economy was essentially
an economy of shortages by design, because it could not effectively react to changes in supply and/or demand by adjusting either price or production volume, as a free market economy does.
I was talking about how the members of the Party or the power structure did well for themselves, not well for the nation at large.
I admit I'm not an expert on the transition, but I thought a fair number of power brokers under the Soviet Union wound up the opposite of paupers after its dissolution. I didn't think those named Yeltsin or Putin, their allies, or the friends and family of managers of state-run enterprises that bought said enterprises at a discount wound up sleeping under a bridge.
Granted, any number of that pool did wind up mysteriously dead or imprisoned, but that's not an economic mechanism.
AMD isn't the person standing in the bread line.
So instead of going with the obvious solution of applying the laws of the free market, they propose a kind of 'video card rationing' system, which will certainly give rise to a wide-reaching black market. Is that wise?
The "laws of the free market" in this case is charging whatever the market will bear, which in the case of miners is a lot.
The manufacturers are also the supply, which is the ultimate constraint on any type of market.
There could soon be hordes of black market dealers collecting stolen personal data, forging identity documents and making fake bank accounts - all to place illegitimate orders from non-existent customers. Involve the police and there is potential for this to become a new war on drugs.
There are already hordes of people like that; they're one of the few classes of individuals using crypto-coins as they were intended. Besides, if it were that easy to do this at scale, why not simply make fake accounts for fraudulent purchases and get all the cards for free?
As far as drugs go (again, one of the few cases for there to be a non-zero valuation for most cryptocurrency), a package of coke doesn't broadcast itself to the whole world every couple seconds and log itself into a permanent record for the world to see, while hooked into a building with a utility-scale power contract with a thermal signature that could probably be seen from space.
Besides, in this case, AMD's the cartel and they can add conditions on the disposition of their product as much as they like.
The polite request that retailers constrain purchase quantity doesn't affect gamers that mine on the side, or more casual miners.
Mid-level buyers that start renovating rooms or houses might get hurt, and it costs money, time, and possibly freedom to wholly get around this.
Big buyers are bypassing retailers and possibly part of the wholesale market, and as such are likely getting a bit more money back up the food chain to the AIBs and possibly AMD or Nvidia (Nvidia's direct model could make this easier). As such, the big players wind up padding out margins for a board maker or GPU vendor, and it hurts the datacenters' smaller competitors.