Introduction

Over the past decade, machine learning has grown rapidly in popularity, permeating many aspects of daily life. As demand for compute resources for machine learning tasks continues to surge, existing centralized inference mechanisms are being pushed to their limits. At the same time, supply-chain shortages have led to year-long back orders on cutting-edge hardware, while hundreds of thousands of users have mid-tier to high-end compute sitting idle. Connecting this unused capacity through a distributed network raises global compute efficiency, delivering cost savings, reliability, and performance to a broader spectrum of users.

The Smart AI machine learning inference platform offers distributed Machine Learning as a Service (MLaaS). It enables individuals to securely execute machine learning inference tasks against an expanding repertoire of popular models, including Llama [1] and Stable Diffusion XL [2].