Head over to the RC Dashboards to see our SPORC cluster in action!
You can also take a look at our cluster benchmarking results.
SPORC Cluster (Scheduled Processing On Research Computing)
- 2304 cores (Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz)
- Linpack benchmark, cpu-only: 105 TFLOPS
- 24TB RAM
- 100 Gbit/sec RoCEv2 interconnect (Mellanox MLX5/Juniper QFX5210-64c)
- 64 SuperMicro X11 systems, each with space for 4 GPU cards
- 100 Nvidia A100 cards
  - 40GB HBM2
  - 156 TFLOPS TF32
  - 312 TFLOPS FP16
  - 624 TOPS Int8
- 16 Nvidia V100 cards
  - 32GB HBM2
  - 14 TFLOPS single-precision
  - 7 TFLOPS double-precision
  - 56 TOPS Int8
  - 112 TensorFLOPS
- 96 Nvidia P4 cards
  - 8GB GDDR5
  - 5.5 TFLOPS single-precision
  - 22 TOPS Int8
- Interactive Partition:
  - 148 cores
  - 525GB RAM
  - 2 Nvidia A100 cards
  - 42 Nvidia P4 cards
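Not sure which of these GPU types your job actually landed on? Here is a minimal sketch for checking from inside a job, assuming only that `nvidia-smi` is on the PATH (it is wherever the NVIDIA driver is installed):

```python
# Quick sanity check for which of the cluster's GPU types (A100, V100, P4)
# a job was allocated. Uses only the standard library plus nvidia-smi.
import subprocess

def list_gpus():
    """Return (name, total_memory) pairs for every GPU visible to this job."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [tuple(field.strip() for field in line.split(","))
            for line in out.strip().splitlines()]

if __name__ == "__main__":
    for name, mem in list_gpus():
        print(f"{name}: {mem}")
    # On an A100 node you should see a 40GB card; on a P4 node, an 8GB card.
```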
Ceph (Block and Filesystem) Storage
- 9.5PB of storage, consisting of:
  - 200GB NVMe hot tier
  - 6PB 7200RPM SAS bulk tier
  - 3.5PB 7200RPM SATA cold tier
  - 15TB Intel(R) Optane(R) NVMe CephFS metadata tier
- Dual 100 Gbit/sec RoCEv2 interconnect (Mellanox MLX5/Juniper QFX5210-64c)
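To see how much of the Ceph capacity is visible from a given mount point, a short standard-library sketch works; the path `/mnt/cephfs` below is a placeholder, so substitute the actual path of your storage allocation:

```python
# Minimal sketch for checking capacity on a CephFS mount before a large write.
# /mnt/cephfs is a placeholder path, not the cluster's actual mount point.
import shutil

def report_usage(path="/mnt/cephfs"):
    usage = shutil.disk_usage(path)  # named tuple: (total, used, free), in bytes
    tib = 1024 ** 4
    print(f"{path}: {usage.used / tib:.1f} TiB used of "
          f"{usage.total / tib:.1f} TiB ({usage.free / tib:.1f} TiB free)")

report_usage()
```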
Large Single-System-Image Compute Node (theocho)
- Supermicro SYS-7089P 8-way (8-socket) system
- 144 cores (Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz)
- Linpack benchmark, cpu-only: 4 TFLOPS
- 2.3TB RAM
- 1 Nvidia V100 card
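On a large single-system-image node like theocho, the scheduler may pin your process to a subset of the 144 cores, so it's worth checking what is actually usable before sizing a parallel run. A minimal, Linux-only sketch (matching the RHEL 7 operating system noted below):

```python
# Verify how much of theocho's 144 cores and 2.3TB of RAM this process can
# actually use; affinity masks may restrict you to a subset of the machine.
import os

cores_visible = len(os.sched_getaffinity(0))   # cores this process may run on
cores_total = os.cpu_count()                   # cores on the whole machine
ram_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

print(f"cores: {cores_visible} usable of {cores_total} total")
print(f"ram:   {ram_bytes / 1024**4:.2f} TiB")
```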
Operating System
- Red Hat Enterprise Linux 7
Help: If you have further questions, or find an issue with this documentation, please submit a ticket or contact us on Slack.