On Virtual Machines

While we focus on training, our platform is also extremely well suited to asynchronous batched inference workloads, at a significantly lower cost.

To make the best use of the Build AI GPU cloud, we expect your ML team to be comfortable building on top of bare metal. Our team is here to support you every step of the way as you adapt to our infrastructure.
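
As an illustration of what running a batched inference job directly on a VM can look like, here is a minimal sketch. It is not Build AI's recommended setup: the model name (`gpt2`), batch size, and `prompts.jsonl` input path are placeholders for the example, and it assumes PyTorch and Hugging Face Transformers are installed on the VM.

```python
# Illustrative sketch: a simple batched generation job run directly on a GPU VM.
# The model name, batch size, and file paths below are placeholders.
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"      # placeholder model name
BATCH_SIZE = 8

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token   # gpt2 has no pad token by default
tokenizer.padding_side = "left"             # left-pad for decoder-only generation
model = AutoModelForCausalLM.from_pretrained(MODEL).to("cuda").eval()

# Read prompts from a JSONL file with one {"prompt": "..."} object per line.
with open("prompts.jsonl") as f:
    prompts = [json.loads(line)["prompt"] for line in f]

results = []
with torch.no_grad():
    for i in range(0, len(prompts), BATCH_SIZE):
        batch = prompts[i : i + BATCH_SIZE]
        inputs = tokenizer(batch, return_tensors="pt", padding=True).to("cuda")
        outputs = model.generate(
            **inputs,
            max_new_tokens=64,
            pad_token_id=tokenizer.pad_token_id,
        )
        results.extend(tokenizer.batch_decode(outputs, skip_special_tokens=True))

# Write completions back out as JSONL for downstream processing.
with open("completions.jsonl", "w") as f:
    for prompt, completion in zip(prompts, results):
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```

Because the job is asynchronous, you can launch it however suits your workflow, for example directly with `python batch_infer.py`, from cron, or from whatever scheduler you already run on the VM.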

Additional layers of support are coming soon.
