Resource Limits: Comprehensive Resource Allocation Framework

Swarm’s Resource Limits establish boundaries for resource allocation at the user, node, and cluster levels. These limits ensure fair usage, optimized performance, and scalability across its decentralized infrastructure, catering to diverse workloads while supporting future growth.


| Resource | Per User | Per Node | Per Cluster |
| -------- | -------- | -------- | ----------- |
| GPUs     | 16       | 32       | 1,000       |
| vCPUs    | 64       | 128      | 10,000      |
| Memory   | 512GB    | 1TB      | 100TB       |
| Storage  | 10TB     | 100TB    | 10PB        |
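One way to make these limits machine-readable is a simple configuration mapping. The sketch below is hypothetical (Swarm's actual configuration schema is not documented here); memory and storage are normalized to GB so every value is a plain integer:

```python
# Hypothetical encoding of the resource-limit table above.
# Memory and storage are expressed in GB (1TB = 1,000 GB, 1PB = 1,000,000 GB)
# so all values are comparable integers.
RESOURCE_LIMITS = {
    "gpus":       {"per_user": 16,     "per_node": 32,      "per_cluster": 1_000},
    "vcpus":      {"per_user": 64,     "per_node": 128,     "per_cluster": 10_000},
    "memory_gb":  {"per_user": 512,    "per_node": 1_000,   "per_cluster": 100_000},
    "storage_gb": {"per_user": 10_000, "per_node": 100_000, "per_cluster": 10_000_000},
}

# Sanity check: each limit must widen monotonically from user to node to cluster.
for resource, tiers in RESOURCE_LIMITS.items():
    assert tiers["per_user"] <= tiers["per_node"] <= tiers["per_cluster"], resource
```

Normalizing units up front avoids mixed-unit comparisons (e.g. 512GB vs 1TB) when enforcing the limits programmatically.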


Descriptions

  1. GPUs:

    • Per User: Up to 16 GPUs allocated for individual workloads or experiments.

    • Per Node: Nodes can support up to 32 GPUs, ideal for large-scale training tasks.

    • Per Cluster: Clusters can aggregate up to 1,000 GPUs for distributed deep learning and compute-intensive applications.

  2. vCPUs:

    • Per User: Up to 64 virtual CPUs (vCPUs) for lightweight or multi-threaded tasks.

    • Per Node: Supports up to 128 vCPUs, providing sufficient capacity for diverse workloads.

    • Per Cluster: Scales to 10,000 vCPUs, enabling large-scale distributed computing.

  3. Memory:

    • Per User: Up to 512GB for memory-intensive applications like large model training or data analytics.

    • Per Node: Up to 1TB on high-performance nodes handling specialized tasks.

    • Per Cluster: Aggregates 100TB, supporting massive datasets and complex simulations.

  4. Storage:

    • Per User: Allocates up to 10TB for datasets, model checkpoints, and logs.

    • Per Node: Scales up to 100TB for storage-heavy nodes handling extensive data.

    • Per Cluster: Enables 10PB for distributed data storage across large-scale deployments.
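The per-user values above act as admission caps: a request that exceeds any of them should be rejected before scheduling. A minimal sketch of that check (the helper name and request shape are hypothetical, not Swarm's actual scheduler API):

```python
# Per-user caps from the table above (memory and storage in GB).
USER_LIMITS = {"gpus": 16, "vcpus": 64, "memory_gb": 512, "storage_gb": 10_000}

def validate_request(request: dict) -> list:
    """Return a list of limit violations; an empty list means the request fits."""
    violations = []
    for resource, cap in USER_LIMITS.items():
        asked = request.get(resource, 0)  # resources not requested default to 0
        if asked > cap:
            violations.append(f"{resource}: requested {asked}, per-user limit is {cap}")
    return violations

# An 8-GPU training job fits within every per-user cap:
print(validate_request({"gpus": 8, "vcpus": 32, "memory_gb": 256}))  # []
# A 20-GPU request exceeds the 16-GPU per-user cap and is flagged:
print(validate_request({"gpus": 20}))
```

Returning a list of violations rather than a boolean lets the caller report every exceeded limit at once instead of failing on the first one.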


Key Features

  • Scalability: Limits support incremental growth from single-user tasks to cluster-wide workloads.

  • Flexibility: Supports a variety of use cases, from small-scale experiments to enterprise-level operations.

  • Fair Usage: Ensures equitable distribution of resources across users and nodes.

  • High Capacity: Provides ample headroom for large-scale AI workloads and future expansion.


Benefits

  • Efficiency: Optimized resource allocation prevents over-provisioning and underutilization.

  • Reliability: Consistent resource limits ensure predictable performance across deployments.

  • Adaptability: Accommodates diverse user needs while supporting scalability for future growth.

  • Robustness: Sufficient capacity for intensive tasks ensures operational reliability even under high demand.

These Resource Limits form a foundational framework for building, operating, and scaling the Swarm platform, ensuring security, consistency, and flexibility for evolving AI workloads.
