We'd obviously be talking about workloads that are less sensitive to latency. It's not like crypto, SETI and Folding are the only viable distributed computing systems available.
Companies that rent compute from Azure run all sorts of workloads: offline raytracing renders, basic sciences (math, physics, chemistry), seismology, climatology, etc.
Why not give Xbox customers the choice of providing their hardware for distributed computing during downtime hours in exchange for free games and online services?
SETI and Folding are academic/non-commercial efforts where material and monetary constraints make a mass of random hardware with non-professional levels of service acceptable. Accepting donated time with no compensation also sidesteps the legal thicket of creating a commercial service relationship with unsophisticated (or unwitting) business partners.
Some cryptocurrencies or crypto-offerings are centered around a more commercially oriented transaction system, but the travails of getting results beyond hoping for speculative investment or actions of debatable legality show this is not a clear win. It's somewhat reminiscent of Bitcoin's centralization into ASIC-driven data centers, where specialization can win over a broader field of participants with lower investment ceilings and thinner margins over their baseline overheads.
The one clear win is that Microsoft could in theory offload some of the capital and operational expense of operating the Xbox instances, since it would not be responsible for the buildings, electricity, HVAC, security, or maintenance for them. However, it's not like those needs or overheads went away. The service would need to be discounted for the reduced reliability and increased risk of a service whose infrastructure is exposed to, and managed or neglected by, non-professionals who directly interfere with it through their personal use. That might not leave much return after the increased overheads and the reliance on outside actors like ISPs, versus a datacenter that keeps massive amounts of traffic in-house.
As long as Microsoft doesn't pay with cash - which would open the whole arrangement to entrepreneurs creating massive Xbox farms in places with cheap electricity - I don't see what's wrong with the approach.
This might not be a major hurdle. Unlike cryptocurrency mining where utility companies don't necessarily know who or what is plugging into the grid, Microsoft's cloud servers would be in direct contact with the hardware nodes and would have a lot of intimate knowledge of how they're being used and where they are. If for some reason Microsoft wanted to discourage a concentration of such consoles into farms, they could just not distribute work units or instances to them.
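To make that concrete, here's a minimal sketch of what such a policy could look like, assuming the scheduler can see each console's network prefix; the names and the threshold are invented for illustration and don't reflect any real Xbox or Azure API:

    from collections import Counter

    MAX_PER_NETWORK = 5  # assumed cap before a cluster of consoles looks like a farm

    def eligible_consoles(consoles):
        """Drop consoles concentrated behind the same network prefix."""
        density = Counter(c["network_prefix"] for c in consoles)
        return [c for c in consoles if density[c["network_prefix"]] <= MAX_PER_NETWORK]

    def assign_work_units(consoles, work_units):
        """Hand out work units only to consoles that pass the farm heuristic."""
        return list(zip(eligible_consoles(consoles), work_units))

A real version would presumably fold in billing, telemetry, and geolocation data too, but the point stands: the distribution side has full discretion over who gets work.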
A risk in all this is that if Microsoft's compensation method becomes close enough to a service relationship, they may blunder into creating an obligation to keep sending work or compensation out to defray costs or expected revenues. Since this deals with presumably unsophisticated consumers, legal judgments may at times discard boilerplate lines in the EULA where Microsoft claims it is not using its customers as under-compensated or defrauded contractors.
As far as risk goes, I'm not sure where the threshold is at which Microsoft would want to deal with the potential headaches or bad PR. Users risk hitting ISP caps or paying more in bandwidth and utility fees than they take in (i.e. more than their free game or discount is worth), which is bad optics and a predictable outcome, since there's no expectation that most users professionally evaluate such risks until they materialize.
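A back-of-the-envelope calculation shows how thin the margin could be; every figure here is a hypothetical assumption, not a measured value:

    power_draw_w = 180        # assumed load draw of a console in watts
    hours_per_night = 8
    days_per_month = 30
    rate_usd_per_kwh = 0.15   # assumed residential electricity rate

    kwh = power_draw_w / 1000 * hours_per_night * days_per_month  # 43.2 kWh
    utility_cost = kwh * rate_usd_per_kwh                         # ~$6.48/month
    compensation = 10.0       # assumed value of the free game/service credit
    print(f"user nets ${compensation - utility_cost:.2f}/month before bandwidth")

Shift any of those assumptions - a higher electricity rate, a metered connection - and the user quietly goes underwater.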
While the consoles do seem to have above-average security measures, I'd question whether Microsoft wants that security infrastructure to carry such a big target on its back. Game pirates and online cheaters may be deterred by the cost floor of defeating a console's copy protection. But a system with hundreds of millions of physically exposed entry points, potentially granting access to industrial and possibly governmental data, means the motivations and resources of the attackers could be on the level of nation-states, and I wouldn't be that confident in my game console's copy protection against those adversaries, or about the fallout of a successful breach.
In fact, we should consider this a very valid method of reducing the carbon footprint of general compute workloads. Increasing the uptime of already-produced chips capable of general compute workloads would reduce the need to fabricate more chips for the same output, which means less material and energy.
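The amortization argument fits in one calculation; the embodied-carbon figure is a placeholder assumption, not a real life-cycle number:

    embodied_kgco2 = 100.0       # assumed manufacturing footprint of one console
    gaming_hours = 4 * 365 * 4   # 4 h/day of play over a 4-year life
    baseline = embodied_kgco2 / gaming_hours

    # Donating 8 h/night spreads the same embodied carbon over 3x the hours.
    with_downtime = embodied_kgco2 / (gaming_hours * 3)
    print(f"{baseline:.4f} vs {with_downtime:.4f} kgCO2 per compute-hour")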
This may require a more thorough life-cycle review. Much of the footprint of the chips themselves may lie in the uptime of the fabs, whose greatest efficiency comes from economies of scale and from keeping high-intensity manufacturing processes busy, since they can't really idle cheaply in terms of money or energy.
The logistics of all the packaging and shipping, the materials and manufacturing for the console cases and peripherals, warehousing, retail, and the generally much less efficient facilities and power delivery in residences might not win out against racks in expensive data centers. Those have a cost incentive to be extremely effective as a single delivery point, with better HVAC, better power delivery, less discarded work thanks to higher reliability, and less building material per node versus an Xbox in a 4-bedroom house. Externalities like the power cost of involving internet backbones and ISP networks over long distances are also poorly accounted for, with estimates of the power expenditure per bit transmitted running much higher than for on-premises connections.
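For a rough sense of that network externality - the energy-per-gigabyte figure is a commonly cited but contested estimate, used here purely as an assumption:

    kwh_per_gb = 0.06        # assumed internet transmission energy
    work_unit_gb = 2.0       # assumed size of a unit's inputs plus results
    console_draw_kw = 0.18   # assumed compute draw
    compute_hours = 8        # assumed time to finish the unit

    network_kwh = kwh_per_gb * work_unit_gb        # 0.12 kWh
    compute_kwh = console_draw_kw * compute_hours  # 1.44 kWh
    print(f"network overhead ~{network_kwh / compute_kwh:.0%} of compute energy")

That overhead is pure loss relative to a datacenter, where the same transfer stays on-premises.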
edit: Also unclear is how efficient the console hardware is compared to datacenter hardware, whether it falls short on features business workloads might want, and how the upgrade cycles of the two industries compare in length.