# Slinky
Welcome to Slinky, a set of powerful integration tools designed to bring Slurm’s capabilities into Kubernetes. Whether you’re managing high-performance computing (HPC) workloads or operating within cloud-native environments, Slinky helps bring together the best of both worlds for efficient resource management and scheduling. In addition to being ideal for running AI training workloads, Slinky also provides a unique capability for scheduling both single- and multi-node AI inference workloads.
Slinky was created by SchedMD, the lead developers of Slurm, and is developed and supported by NVIDIA.
Slurm-operator allows users to run workloads on Slurm within a Kubernetes cluster, taking advantage of many of Slurm's advanced scheduling features in a cloud-native environment. It also allows resources to be shared between Slurm and Kubernetes, improving overall resource utilization.
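As a rough sketch of what this looks like in practice, a Slurm cluster deployed by the operator can be driven with standard Slurm commands from inside the cluster. The namespace `slurm` and pod name `slurm-login-0` below are hypothetical placeholders, not names defined by this document; adjust them to match your deployment:

```shell
# List the pods of the Slurm cluster (namespace is hypothetical).
kubectl -n slurm get pods

# Open a shell in a login pod (pod name is hypothetical) and use
# the usual Slurm commands, which behave as they would on bare metal.
kubectl -n slurm exec -it slurm-login-0 -- bash
sinfo                      # show partitions and node state
sbatch --wrap="hostname"   # submit a minimal batch job
squeue                     # check the job queue
```

The point of the sketch is that existing Slurm workflows carry over unchanged; Kubernetes is used for deployment and resource sharing, while job submission and scheduling remain pure Slurm.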
Slurm-bridge contains a Kubernetes scheduler that manages select Kubernetes workloads, allowing Kubernetes and Slurm workloads to be co-located within the same cluster.
The containers repository provides images to support running Slurm clusters on Kubernetes.