With the massive increase in demand from mining and other compute uses, it seems that both companies should be using their IP to design "GPUs" without the fluff needed only for actual graphics work. Any decode, media, or display silicon is unnecessary for these workloads, takes up valuable die space, and may consume extra power. I wanted to know how feasible and practical it would be for AMD and Nvidia to use their GPU architectures in a stripped-down form for cryptocurrency mining, machine learning, etc.
I'm going to guess that neither TMUs nor ROPs serve any purpose in mining or machine learning. Without attempting a complete CU/SM redesign, a large array of compute units with the necessary internal communication buses and memory controllers could be a quick, feasible way to turn the existing architecture into a more optimized general mining/AI processor. A mining-specific ASIC always comes along eventually, but I think crypto is here to stay, and in turn GPU demand for new cryptocurrencies will keep up the pressure unless supply catches up to demand. I understand memory supply is a major part of the issue, and honestly I'm not sure how important memory bandwidth is to mining or machine learning, so perhaps there could be some savings there too.
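On the bandwidth question: from what I've read, memory-hard proof-of-work schemes like Ethash are deliberately dominated by pseudo-random reads into a multi-gigabyte dataset, so for those coins memory bandwidth is the bottleneck, not ALUs. Here's a minimal toy sketch in CUDA of that kind of dependent random-read loop. To be clear, all names and constants here (dag, mix, NUM_LOOKUPS, the mixing function) are made up for illustration; this is not real Ethash, just a demonstration of why such a workload scales with bandwidth:

```cuda
// Toy memory-hard inner loop, loosely inspired by Ethash-style DAG
// lookups. Hypothetical sketch only; error checking omitted.
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

constexpr int NUM_LOOKUPS = 64;  // random reads per hash attempt (made up)

__global__ void toy_memhard_kernel(const uint32_t *dag, size_t dag_words,
                                   uint32_t *out, size_t n_threads)
{
    size_t tid = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (tid >= n_threads) return;

    uint32_t mix = (uint32_t)tid * 2654435761u;  // cheap per-thread seed
    for (int i = 0; i < NUM_LOOKUPS; ++i) {
        // Each index depends on the previous read, so the loads are
        // serialized, uncoalesced, and effectively uncacheable: the
        // kernel's throughput tracks DRAM bandwidth, not ALU count.
        size_t idx = (mix ^ (mix >> 13)) % dag_words;
        mix = mix * 0x01000193u ^ dag[idx];  // tiny FNV-style mix
    }
    out[tid] = mix;
}

int main()
{
    const size_t dag_words = 256u << 20;  // 1 GiB of uint32_t words
    const size_t n_threads = 1u << 20;

    uint32_t *dag, *out;
    cudaMalloc(&dag, dag_words * sizeof(uint32_t));
    cudaMalloc(&out, n_threads * sizeof(uint32_t));
    cudaMemset(dag, 0x5a, dag_words * sizeof(uint32_t));

    toy_memhard_kernel<<<(n_threads + 255) / 256, 256>>>(dag, dag_words,
                                                         out, n_threads);
    cudaDeviceSynchronize();
    printf("done; ~%zu MiB of dependent random reads issued\n",
           n_threads * NUM_LOOKUPS * sizeof(uint32_t) >> 20);

    cudaFree(dag);
    cudaFree(out);
    return 0;
}
```

Nothing in that loop touches TMUs, ROPs, or any display/media block, which is the point: a compute-only die keeps the memory controllers and SM/CU arrays and drops the rest. Machine learning is a different story since it leans on dense matrix math, but it likewise has no use for the graphics-specific blocks.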
This just seems like one way to partition off part of the market in a way that benefits the GPU makers while mitigating some of the impact on the market that actually needs a GPU's full capabilities.