Nvidia Post-Volta (Ampere?) Rumor and Speculation Thread

How exactly would one "hardware block" mining without killing the compute side altogether?
You would not need to kill off compute completely. To become irrelevant for miners, it is enough to degrade performance over time, for example:
• if your kernels match known signatures for crypto algorithms
• if you're calling the same compute kernels for minutes or hours on end and not using the graphics pipeline at all
• …
then slowly insert bubbles into the pipeline, increasing their number over time until you're at maybe half- or quarter-rate throughput (a rough sketch of that ramp follows below).
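
As a very rough illustration of how that ramp could look, here is a sketch as host-side C++ (CUDA-style driver pseudocode). ThrottleState, maybe_insert_bubbles and insert_idle_bubble are invented names, not any real driver API.

Code:
// Hypothetical sketch of the "slowly insert bubbles" idea, as host-side
// driver-style C++. insert_idle_bubble() is an invented stand-in for
// whatever mechanism a real driver would use to stall the work queue.
#include <cstdint>

static void insert_idle_bubble() { /* stub: a real driver would stall the queue here */ }

struct ThrottleState {
    uint64_t suspect_minutes = 0;   // how long the suspicious pattern has been running
};

// Called once per minute for a context whose kernels match a known crypto
// signature, or that has run the same compute kernels for hours without
// touching the graphics pipeline.
void maybe_insert_bubbles(ThrottleState& st)
{
    ++st.suspect_minutes;

    // Ramp from full speed down to roughly quarter rate over a few hours:
    // no extra idle slots at first, one per work slot later (~half rate),
    // then three per work slot (~quarter rate).
    int bubbles = 0;
    if (st.suspect_minutes > 60)  bubbles = 1;
    if (st.suspect_minutes > 180) bubbles = 3;

    for (int i = 0; i < bubbles; ++i)
        insert_idle_bubble();
}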
 
Very interesting ideas, but what's to stop the miner software from just running some graphics at the same time?
 
That's a bi-directional slippery slope, IMO :) . Miner software authors would invest time into shifting their signatures, while the signature checks could always fail and compromise "legitimate" software.

Like that supposed AMD engineer said, perhaps the easiest approach is to disable/slow down just a few instructions that can be isolated to mining exclusively. Probably won't hit every algo out there, but it will reduce the value of the card for miners.
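
Purely as an illustration of what "isolate a few instructions" might look like in practice, here's a sketch that scans a kernel's PTX text for the bit-twiddling ops hash kernels lean on (lop3, prmt, funnel shifts, xor) and flags the kernel when they dominate. The 50% threshold and the idea that a driver would do this are assumptions on my part.

Code:
// Hypothetical sketch: flag kernels whose PTX is dominated by hash-style
// bit-manipulation instructions. The threshold is made up.
#include <string>
#include <vector>
#include <sstream>

bool looks_like_hash_kernel(const std::string& ptx)
{
    static const std::vector<std::string> hash_ops = {
        "lop3.b32", "prmt.b32", "shf.l.wrap.b32", "shf.r.wrap.b32", "xor.b32"
    };

    std::size_t total = 0, bit_twiddling = 0;
    std::istringstream in(ptx);
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty()) continue;
        ++total;
        for (const auto& op : hash_ops) {
            if (line.find(op) != std::string::npos) { ++bit_twiddling; break; }
        }
    }
    return total > 0 && bit_twiddling * 2 > total;   // more than half the lines
}

In practice the op list would have to be tuned per algorithm family, which is why this would probably not hit every algo out there.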
 
Very interesting ideas, but what's to stop the miner software from just running some graphics at the same time?
Easy, make this „and“ a hint, not a requirement.
• if you're calling the same compute kernels for minutes or hours on end and not using the graphics pipeline at all

That's a bi-directional slippery slope, IMO :) . Miner software authors would invest time into shifting their signatures, while the signature checks could always fail and compromise "legitimate" software.
I'm not talking about mining-client signatures, but about the kernels themselves. For given crypto algorithms, they probably have a very specific profile. But I realize another problem now: completely new algorithms would not be caught by this property of the drivers alone.
 
Easy, make this „and“ a hint, not a requirement.
• if you're calling the same compute kernels for minutes or hours on end and not using the graphics pipeline at all
Sounds like an or, not an and, then :D
 
Problem solved if NV make miners an offer they can't refuse: crypto.Turing -- featuring lower voltage, 512 bit bus, and no display out. For $2500. In lots of 100.
 
Problem solved if NV make miners an offer they can't refuse: crypto.Turing -- featuring lower voltage, 512 bit bus, and no display out. For $2500. In lots of 100.
Please do not color your text black, leave it default. There is a dark theme and your posts are unreadable.
 
„NVIDIA Turing could be manufactured at a low-enough cost against GeForce-branded products, and in high-enough scales, to help bring down their prices, and save the PC gaming ecosystem.“
There are other sources for the info:
https://www.theinquirer.net/inquire...pto-mining-chips-to-ease-the-strain-on-gamers
https://www.digitaltrends.com/computing/nvidia-turing-ampere-graphics-cards-gtc-2018/

Two possibilities:
-Either Turing will be much better at mining than GeForce, driving difficulty up and making GeForce alternatives irrelevant (aka the GTX 1050).
-Or GeForce will be just as good at mining as Turing, in which case NVIDIA will block or slow down mining algorithms on the GeForces.
 
What if Turing is simply only CUDA cores, with very few ROPs and such, so of no interest for gaming? "Just" a "compute card", based on a new, smaller die, so cheaper to make.
 
I cannot locate passages in those two articles that support what you said in the earlier post either: „NVIDIA will indeed block mining on consumer hardware!“

edit:
Don't get me wrong, I'm not debating that this may be the case after all; in fact I do lean towards that assumption, but I have yet to see hard proof that isn't regurgitating the same two sources of the rumors, Reuters and Expreview.
 
Tensor cores are as useful for crypto mining as pickaxes...
If it indeed goes down this way, then expect something with 0 tensor cores, a bare compatibility-level amount of floating-point cores, and a substantial amount of dedicated integer cores. Probably with a GP100/GV100-style register file (2x the SMs compared to the GeForce line), and call it, say, compute capability 6.3.
 
I'm not talking about mining-client signatures, but about the kernels themselves. For given crypto algorithms, they probably have a very specific profile. But I realize another problem now: completely new algorithms would not be caught by this property of the drivers alone.
I'm not sure that's a serious problem in the short-to-medium term. ASIC-resistant algorithms generally derive their resistance by purposefully bottlenecking on some resource a dedicated ASIC cannot readily scale, like on-die storage capacity or local DRAM bandwidth. Popular algorithms go further and select some architectural facet that is common in client hardware and, somewhat less successfully, not scalable with more expensive setups or data centers.

That's why Ethereum and derivatives often revolve around pseudorandom access to DRAM, which blows out on-die storage and doesn't reward clusters (hence getting away with very little PCIe bandwidth).
Others, like Equihash, balance that bandwidth demand out with additional compute or proof of capacity, though that still heavily focuses on a subset of the architecture.
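
To make the "pseudorandom access to DRAM" point concrete, here is a deliberately simplified, Ethash-flavoured CUDA kernel sketch. The names and the FNV-style mixing constant are illustrative only, not the real Ethash algorithm.

Code:
// Hypothetical, heavily simplified Ethash-like inner loop. The point is that
// each step's address depends on the previous fetch, so the kernel is bound
// by DRAM latency/bandwidth rather than ALU throughput, and the huge DAG
// blows out any on-die storage.
#include <cstdint>
#include <cstddef>

__global__ void ethash_like(const uint32_t* __restrict__ dag, size_t dag_words,
                            const uint32_t* __restrict__ seeds,
                            uint32_t* __restrict__ results, int rounds)
{
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    uint32_t mix = seeds[gid];

    for (int i = 0; i < rounds; ++i) {
        // Next address depends on the data just fetched: a serial chain of
        // cache-defeating, pseudorandom DRAM reads.
        size_t idx = (mix * 0x01000193u) % dag_words;
        mix ^= dag[idx];
    }
    results[gid] = mix;   // candidate would be checked against a target on the host
}

The serial dependence between each fetch and the next address is what keeps more ALUs (or a cluster behind a thin PCIe link) from side-stepping the DRAM bottleneck.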

A general heuristic, aside from obvious checks like how many cards are in a system, what their PCIe width is, and common miner tweaks like heavy undervolting of the GPU and overclocking of the RAM, would be the heavy use of a narrow subset of the chip: a high sustained rate of non-linear misses to DRAM, straightforward resource allocations, little or no use of the standard graphics hardware path, the particular math/logic mix used, and the very high level of sustained performance.
Dedicated instructions that accelerate hashes, or a mix that's heavy on integer math and logical comparisons, could show up as a clear signal as well.
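
A sketch of what that kind of heuristic scoring could look like on the host side; the counters, thresholds and weights are all made up and stand in for whatever telemetry a driver actually has.

Code:
// Hypothetical driver-side scoring of a multi-minute workload window.
// Every field and threshold here is an invented illustration.
struct WorkloadSample {
    double random_dram_frac;    // fraction of DRAM traffic that is non-linear/pseudorandom
    double graphics_time_frac;  // fraction of GPU time spent in the graphics pipeline
    double int_logic_frac;      // fraction of issued ops that are integer math / logic compares
    double sustained_util;      // average utilization over the window
    int    gpus_in_system;      // number of GPUs visible to the system
    int    pcie_lanes;          // PCIe link width per GPU (mining rigs often run x1)
};

// Returns a rough 0..1 "looks like a miner" score.
double mining_likelihood(const WorkloadSample& s)
{
    double score = 0.0;
    if (s.random_dram_frac   > 0.80) score += 0.25;  // cache-defeating DRAM access pattern
    if (s.graphics_time_frac < 0.01) score += 0.25;  // graphics pipeline essentially idle
    if (s.int_logic_frac     > 0.70) score += 0.20;  // hash-style integer/logic mix
    if (s.sustained_util     > 0.95) score += 0.20;  // no frame-style trail-off for minutes
    if (s.gpus_in_system >= 4 && s.pcie_lanes <= 4) score += 0.10;  // rig-like topology
    return score;
}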

Limiting them outright, or duty-cycling them if they consistently hit a high threshold of use for a period that's effectively impractical for a game, seems plausible. It's not clear from the profiles shown that gaming workloads ever avoid a serious utilization trail-off near the tail end of a 16/33 ms frame, or avoid having at least some of the graphics pipeline take up a measurable percentage of time. Gameplay-wise, full saturation seems extremely improbable for more than a few seconds, and a gamer would likely be physically incapable of sustaining a full-bore game scenario for 12 hours or more. I'm not seeing how checks for such scenarios would affect gaming generally enough to not be handled on a case-by-case basis.

For a miner, getting around that could translate into tens of percent lopped off throughput at the top end, and significant periods of throttling in a 24-hour period (rough numbers are sketched after this paragraph). "Faking" the utilization checks literally means leaving hash rate off the table due to underutilization, or creating a fake graphics load heavyweight enough to compromise utilization.
However, that's a reason to make a miner pay for hardware that is able to lift such limits, rather than creating a mining SKU that costs them less.
Giving a cheap mining option provides miners the chance to raise their earning potential so that they can buy standard GPUs in addition to mining SKUs.
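
To put rough numbers on the "tens of percent" point above, a back-of-envelope sketch with entirely made-up duty-cycle parameters:

Code:
// Back-of-envelope cost of duty-cycling to a miner over 24 hours.
// All numbers are assumptions chosen purely for illustration.
#include <cstdio>

int main()
{
    const double full_hashrate_mhs = 30.0;  // assumed full-speed hash rate
    const double throttled_frac    = 0.25;  // assumed quarter rate while throttled
    const double hours_throttled   = 8.0;   // assumed throttled hours per day
    const double hours_full        = 24.0 - hours_throttled;

    double effective = (hours_full * full_hashrate_mhs +
                        hours_throttled * full_hashrate_mhs * throttled_frac) / 24.0;

    printf("Effective hash rate: %.1f MH/s (%.0f%% of full speed)\n",
           effective, 100.0 * effective / full_hashrate_mhs);
    return 0;
}

With these assumed numbers, eight throttled hours a day at quarter rate costs the miner about 25% of a day's hashes.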

Avoidance of the checks with new algorithms has some back-pressure. Since the checks are utilization-based, a new algorithm is either very different or not efficient, and "very different" carries costs:
• it might take the algorithm out of the GPU-friendly space
• it may compromise the algorithm's appeal, since part of the motivation was to broaden the hardware base
• it may shrink the amount of money that would flow into the coin's market cap, leaving it niche
• it may take some time to be created and to ramp to significant numbers
• it means fighting the inertia of the existing market

Up-charging may also have synergies with the profit motive of miners. They might pay more for GPUs with the limiters removed, but this also weakens the hash rate contribution for the duty-cycled gaming cards while reducing competition for optimized hardware.


Turing is the name of quick-deployment GPUs designed specifically for the crypto market, named after Alan Turing, who broke the Nazi "Enigma" cryptography. NVIDIA will indeed block mining on consumer hardware!

https://www.techpowerup.com/241552/...ng-chip-jen-hsun-huang-made-to-save-pc-gaming

Turing was also hugely influential in formalizing computational theory: Turing machines (any truly programmable machine), Turing-complete languages, contributions to theory and AI, etc.
More than with the other scientists' names used so far, for a company pursuing a fully generalized programming model and AI, I'd think Nvidia wouldn't want to waste his name on something that does so little to advance humanity.
 
Seeing how "to-the-metal" some of these mining kernels are, I'm somewhat skeptical of the ability of Nvidia to stop hand-optimized miners from performing on their hardware.
 