There's an explosion of new cloud providers suited to scientific computing, and most of them were born from cryptocurrency mining operations. I think this is going to lead to a big workflow change for computational scientists as cloud computing and its tooling start to make sense for scientific work.
Generic clouds like AWS don't yet make sense for most scientific computing. Crypto mining and scientific computing share similar infrastructural incentives, which makes mining-born infrastructure a better fit for scientific computing than infrastructure built for web apps.
The core shared incentive between crypto mining and scientific computing is something like "lowest cost computing on fast hardware". Generic clouds like AWS were born from different incentives: their users care more about reliability and latency than cost, because downtime and high latency have significant direct costs.
Feedback loops for "lowest cost computing on fast hardware"
A lower cost of energy starts a fast positive feedback loop for cryptocurrency mining: if you spend less on energy, you have more cash to spend on hardware. With more hardware, you get more currency – which you can spend on hardware again. Because of that feedback loop, many crypto mining operations started in areas with low costs of energy. Some even bought their own power plants; the most notorious restarted coal-powered plants to mine Bitcoin.
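To make the compounding concrete, here's a toy model of that loop. All numbers (margins, hardware cost, the `simulate` helper itself) are made up for illustration; only the relative comparison between cheap and expensive energy matters.

```python
# Toy model of the mining feedback loop: cheaper energy leaves more
# profit to reinvest in hardware, and the reinvestment compounds.
# Every constant here is invented for illustration.

def simulate(energy_cost_per_unit: float, months: int = 24) -> float:
    hashrate = 1.0                 # arbitrary starting hardware capacity
    revenue_per_unit = 1.0         # currency earned per unit of hashrate per month
    hardware_cost_per_unit = 5.0   # cost to add one unit of hashrate
    for _ in range(months):
        profit = hashrate * (revenue_per_unit - energy_cost_per_unit)
        if profit > 0:
            # Reinvest all profit into more hardware.
            hashrate += profit / hardware_cost_per_unit
    return hashrate

cheap = simulate(energy_cost_per_unit=0.2)
pricey = simulate(energy_cost_per_unit=0.6)
print(f"cheap energy:  {cheap:.1f}x capacity")
print(f"pricey energy: {pricey:.1f}x capacity")
```

With these made-up numbers, the cheap-energy operation ends two years later with roughly 35x its starting capacity versus about 6x for the expensive-energy one: a modest difference in energy cost becomes a large difference in growth rate once it compounds.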
The more societally responsible operations powered servers with excess energy that would otherwise be wasted, like surplus renewable generation or natural gas flaring. Crusoe, for example, powers a mining operation – and now the public Crusoe Cloud – using excess energy.
Crusoe's containers full of high-performance hardware, powered by excess energy
By contrast, mining cryptocurrency on public clouds doesn't make sense: those operations can't attract investment or create currency as fast as miners with cheaper infrastructure.
Those incentives and that infrastructure look a lot like scientific computing's, just with different deliverables and feedback-loop speeds. The biggest scientific computing facilities either use energy from a power plant owned by the same organization (like a university HPC center) or are co-located with a facility that already draws a massive amount of low-cost energy (like a particle accelerator).
Currency miners had a faster feedback loop, without the constraints of a bureaucratic organization, so they grew faster.
Existing scientific computing infrastructure is struggling with GPU demand
Organizations were hesitant to invest in GPUs as they emerged, and are now uncertain whether to purchase current GPUs with newer architectures on the horizon. Those who are ready to purchase find a harsh market with high prices and low supply.
This is creating a vacuum in the market: there's strong demand for GPU computing and not enough supply through existing vendor relationships, yet engineers working on production pipelines are restricted by organizational policy in where they can source GPU computing. I think this acute problem will change how pipelines are executed.
An aside: generic clouds also have low availability
Launching a large GPU- or CPU-based server in public clouds is really hard right now, a problem exacerbated by the LLM-induced ML boom.
Scientific computing needs many big servers for a relatively short period, and it's nearly impossible to launch 100 big servers for a two-week job on existing clouds.
This is net good for scientific computing
With alternative clouds born from infrastructural incentives similar to scientific computing's – better base economics, the latest technology, and on-demand availability – the calculus for on-prem vs. cloud computing is finally changing for scientific computing.
I think this is going to lead to the biggest workflow change for computational scientists in decades, giving them fast access to more computing at a lower price.
Compared to web apps, scientific computing can be down for a long time or have high latency before it affects a researcher's cycle time, and it has no directly attributable revenue stream. Most researchers have grants (academic) or set R&D funding (commercial), and they don't have directly attributable increases in revenue they can funnel toward an increased computing budget if things are going well. They know the maximum possible amount they can spend on computing, and try to optimize for the best possible outcome within that budget. ↩︎
Others, like Lancium and CoreWeave, were spawned by mining infrastructure, and FluidStack aggregates across providers, including some born from currency mining. (I'm not paid by any of these companies, although I use them.) ↩︎