Nvidia Post-Volta (Ampere?) Rumor and Speculation Thread

Discussion in 'Architecture and Products' started by Geeforcer, Nov 12, 2017.

  1. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,804
    Likes Received:
    475
    Location:
    Torquay, UK
I bet this will only be in place for consumer GPUs and all Titan or higher brackets will still be fully unlocked. Otherwise nVidia would lose money to the competitors.

Wonder what AMD's move will be!
     
  2. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,241
    Likes Received:
    1,914
    Location:
    Finland
  3. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,801
    Likes Received:
    2,061
    Location:
    Germany
You would not need to kill off compute completely. To become irrelevant for miners, it would be enough to degrade performance over time, for example:
• if your kernels match known signatures for crypto-algorithms
• if you're calling the same compute-kernels for minutes or hours on end and not using the graphics pipeline at all
• …
then slowly insert bubbles into the pipeline, increasing their number over time until you're at maybe half- or quarter-rate throughput.
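A toy sketch of that escalation policy, in plain Python. Everything here is hypothetical (class name, ramp time, floor); a real mechanism would live inside the driver's scheduler, but the shape of the idea is just a clock that starts when a stream looks suspicious and a throughput cap that ramps down toward some floor:

```python
# Hypothetical escalating throttle: once a kernel stream looks like mining,
# grow the fraction of injected pipeline bubbles until throughput settles
# at roughly the configured floor (e.g. quarter rate).

class EscalatingThrottle:
    def __init__(self, ramp_seconds=3600.0, floor=0.25):
        self.ramp_seconds = ramp_seconds  # time to reach full throttling
        self.floor = floor                # minimum allowed throughput fraction
        self.flagged_since = None         # when suspicious behavior started

    def observe(self, kernel_matches_signature, graphics_pipeline_idle, now):
        suspicious = kernel_matches_signature or graphics_pipeline_idle
        if suspicious:
            if self.flagged_since is None:
                self.flagged_since = now
        else:
            self.flagged_since = None  # any real graphics work resets the clock

    def throughput_fraction(self, now):
        """Fraction of full rate the scheduler should allow right now."""
        if self.flagged_since is None:
            return 1.0
        elapsed = now - self.flagged_since
        ramp = min(elapsed / self.ramp_seconds, 1.0)
        # Descend linearly from full rate toward the floor over the ramp.
        return 1.0 - ramp * (1.0 - self.floor)
```

With a one-hour ramp and a 0.25 floor, a flagged workload keeps full throughput at first and is down to quarter rate after an hour, while a single frame of real graphics work resets it to full speed.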
     
    pharma and DavidGraham like this.
  4. Bludd

    Bludd Experiencing A Significant Gravitas Shortfall
    Veteran

    Joined:
    Oct 26, 2003
    Messages:
    3,247
    Likes Received:
    811
    Location:
    Funny, It Worked Last Time...
Very interesting ideas, but what's to stop the miner software from just running some graphics at the same time?
     
  5. entity279

    Veteran Regular Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,235
    Likes Received:
    424
    Location:
    Romania

That's a bi-directional slippery slope, IMO :). Miner software authors would invest time into shifting their signatures, while the signature checks could always misfire and compromise "legitimate" software.

Like that supposed AMD engineer said, perhaps the easiest approach is to disable/slow down just a few instructions that can be isolated to mining exclusively. It probably won't hit every algo out there, but it would reduce the value of the card for miners.
     
  6. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,801
    Likes Received:
    2,061
    Location:
    Germany
    Easy, make this „and“ a hint, not a requirement.
• if you're calling the same compute-kernels for minutes or hours on end and not using the graphics pipeline at all

I'm not talking about mining-client signatures, but about the kernels themselves. For given crypto algorithms, they probably have a very specific profile. But I realize another problem now: completely new algorithms would not be caught by this property of the drivers alone.
     
  7. Bludd

    Bludd Experiencing A Significant Gravitas Shortfall
    Veteran

    Joined:
    Oct 26, 2003
    Messages:
    3,247
    Likes Received:
    811
    Location:
    Funny, It Worked Last Time...
    Sounds like an or, not an and, then :D
     
  8. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,801
    Likes Received:
    2,061
    Location:
    Germany
    With only an „or“ you'd effectively disable all compute on the card.
     
  9. nnunn

    Newcomer

    Joined:
    Nov 27, 2014
    Messages:
    29
    Likes Received:
    23
    Problem solved if NV make miners an offer they can't refuse: crypto.Turing -- featuring lower voltage, 512 bit bus, and no display out. For $2500. In lots of 100.
     
  10. Bludd

    Bludd Experiencing A Significant Gravitas Shortfall
    Veteran

    Joined:
    Oct 26, 2003
    Messages:
    3,247
    Likes Received:
    811
    Location:
    Funny, It Worked Last Time...
    Or with a hint :D
     
  11. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,107
    Likes Received:
    3,173
    Location:
    Pennsylvania
    Please do not color your text black, leave it default. There is a dark theme and your posts are unreadable.
     
    nnunn likes this.
  12. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,820
    Likes Received:
    2,643
    pharma and Lightman like this.
  13. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,801
    Likes Received:
    2,061
    Location:
    Germany
The article you link actually puts a very different emphasis on the Turing story:
    „NVIDIA Turing could be manufactured at a low-enough cost against GeForce-branded products, and in high-enough scales, to help bring down their prices, and save the PC gaming ecosystem.“
    [my bold]
     
  14. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,820
    Likes Received:
    2,643
    There are other sources for the info:
    https://www.theinquirer.net/inquire...pto-mining-chips-to-ease-the-strain-on-gamers
    https://www.digitaltrends.com/computing/nvidia-turing-ampere-graphics-cards-gtc-2018/

    Two possibilities:
-Either Turing will be much better at mining than GeForce, driving difficulty up and making GeForce alternatives irrelevant (like the GTX 1050).
-Or GeForce will be just as good at mining as Turing, in which case NVIDIA will block or slow down mining algorithms on the GeForces.
     
  15. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,187
    Likes Received:
    585
    Location:
    France
What if Turing is simply CUDA cores only, with very few ROPs and such, so of no interest for gaming? "Just" a "compute card", based on a new, smaller die, so cheaper to make.
     
  16. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,801
    Likes Received:
    2,061
    Location:
    Germany
    I cannot locate passages in those two articles that support what you said in the earlier post either: „NVIDIA will indeed block mining on consumer hardware!“

    edit:
Don't get me wrong, I'm not debating that this may be the case after all; in fact, I lean towards that assumption. But I have yet to see hard proof that's not regurgitating the same two sources of the rumors, Reuters and Expreview.
     
    DavidGraham likes this.
  17. Urian

    Regular

    Joined:
    Aug 23, 2003
    Messages:
    621
    Likes Received:
    55
Is it possible that Nvidia "Turing" could be a pure tensor-core processor, replacing the SMs with tensor cores?
     
    pharma, iMacmatician and xpea like this.
  18. MDolenc

    Regular

    Joined:
    May 26, 2002
    Messages:
    690
    Likes Received:
    425
    Location:
    Slovenia
    Tensor cores are as useful for crypto mining as pickaxes...
If it indeed goes down this way, then expect something with zero tensor cores, a compatibility level of floating-point cores, and a substantial number of dedicated integer cores. Probably with a GP100/GV100-style register file (2x that of the GeForce line's SMs), and called, say, compute capability 6.3.
     
    entity279 likes this.
  19. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,137
    Likes Received:
    2,936
    Location:
    Well within 3d
I'm not sure that's a serious problem in the short-to-medium term. ASIC-resistant algorithms generally derive their resistance by purposefully bottlenecking on some resource a dedicated ASIC cannot readily scale, like on-die capacity or local DRAM bandwidth. Popular algorithms go further and select some architectural facet that is common in client hardware and, somewhat less successfully, resistant to scaling with more expensive setups or data centers.

    That's why Ethereum and derivatives often revolve around pseudorandom access to DRAM, which blows out on-die storage and doesn't reward clusters (hence getting away with very little PCIe bandwidth).
    Others like Equihash balance that bandwidth demand out with additional compute with proof of capacity, though that still heavily focuses on a subset of the architecture.

A general heuristic, aside from obvious checks like how many cards are in a system, what their PCIe width is, and common miner tweaks like heavy undervolting of the GPU and overclocking of the RAM, would be heavy use of a narrow subset of the chip: a high sustained rate of non-linear misses to DRAM, straightforward resource allocations, little or no use of the standard graphics hardware path, the math/logic mix used, and a very high level of sustained performance.
Dedicated instructions that accelerate hashes, or a mix that's heavy on integer math and logical comparisons, could show up as a clear signal as well.
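Signals like those could be combined into a simple weighted score. A minimal sketch, with counter names, weights, and the threshold all invented for illustration (a real driver would read hardware performance counters, not a dict):

```python
# Hypothetical mining-likelihood score built from normalized (0..1)
# performance-counter readings. All names and weights are illustrative.

SIGNALS = {
    "random_dram_miss_rate":   0.30,  # sustained non-linear misses to DRAM
    "graphics_path_idle":      0.25,  # ROPs/raster untouched for minutes
    "integer_logic_fraction":  0.20,  # mix heavy on int math / comparisons
    "sustained_utilization":   0.15,  # near-full occupancy for hours
    "many_gpus_narrow_pcie":   0.10,  # rig-like topology (x1 risers etc.)
}

def mining_score(counters):
    """Weighted sum of clamped counter readings; missing counters count as 0."""
    return sum(weight * min(max(counters.get(name, 0.0), 0.0), 1.0)
               for name, weight in SIGNALS.items())

def looks_like_mining(counters, threshold=0.7):
    return mining_score(counters) >= threshold
```

A rig-like profile (graphics idle, random DRAM traffic, sustained integer work) scores near 1.0, while a typical gaming profile stays well under the threshold even during demanding scenes, which is the case-by-case margin the paragraph above is pointing at.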

Limiting them outright, or duty-cycling them if they consistently hit a high threshold of use for a period that is effectively impractical for a game, seems plausible. It's not clear from the profiles shown that resource utilization in gaming avoids serious trail-off near the tail end of a 16/33 ms frame, or avoids at least some of the graphics pipeline taking up a measurable percentage of time. Gameplay-wise, full saturation seems extremely improbable for more than a few seconds, and a gamer would likely be physically incapable of sustaining a full-bore game scenario for 12 hours or more. I'm not seeing how checks for such scenarios would affect gaming generally enough to not be handled on a case-by-case basis.

    For a miner, getting around that could translate into tens of percent lopped off throughput at the top end, and significant periods of throttling in a 24 hour period. "Faking" utilization checks literally means leaving hash rate off the table due to underutilization or creating a fake graphics load sufficiently heavyweight to compromise utilization.
    However, that's a reason to make a miner pay for hardware that is able to lift such limits, rather than creating a mining SKU that costs them less.
    Giving a cheap mining option provides miners the chance to raise their earning potential so that they can buy standard GPUs in addition to mining SKUs.
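The "tens of percent lopped off" claim is easy to put numbers on. A back-of-envelope sketch, with all figures invented purely for illustration:

```python
# Back-of-envelope effect of duty cycling on daily mining output.
# Hours and throttle fraction are illustrative, not measured.

def effective_rate(full_hours, throttled_hours, throttle_fraction):
    """Average throughput fraction over a period split between full-rate
    and throttled operation."""
    total = full_hours + throttled_hours
    return (full_hours + throttled_hours * throttle_fraction) / total

# Quarter-rate throttling for 8 of every 24 hours:
daily = effective_rate(full_hours=16, throttled_hours=8, throttle_fraction=0.25)
# i.e. a 25% haircut on daily hash output
```

Even modest duty cycling compounds directly into lost revenue for a 24/7 miner, while a gamer who never trips the threshold sees nothing.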

    Avoidance of the checks with new algorithms has some back-pressure.
    Since this is utilization-based, they're either very different or not efficient.
    Very different means it might take them out of the GPU-friendly space.
    Very different may compromise the appeal of the algorithm, since part of the motivation was to broaden the hardware base.
    Very different may shrink the amount of money that would flow into the coin's cap, leaving it niche.
    Very different may take some time to be created and to ramp to significant numbers.
    Very different means fighting the inertia of the existing market.

    Up-charging may also have synergies with the profit motive of miners. They might pay more for GPUs with the limiters removed, but this also weakens the hash rate contribution for the duty-cycled gaming cards while reducing competition for optimized hardware.


    Turing was also hugely influential in formalizing computational theory. Turing machines (any truly programmable machine), Turing-complete languages, contributions to theory and AI, etc.
    More than the other scientists used so far, for a company pursuing a fully-generalized programming model and AI, I'd think Nvidia wouldn't want to waste his name on something that is doing so little to advance humanity.
     
    pharma, CarstenS, DavidGraham and 3 others like this.
  20. mrcorbo

    mrcorbo Foo Fighter
    Veteran

    Joined:
    Dec 8, 2004
    Messages:
    3,598
    Likes Received:
    2,001
Seeing how "to-the-metal" some of these mining kernels are, I'm somewhat skeptical of Nvidia's ability to stop hand-optimized miners from performing on their hardware.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.