About
The High-Performance Computing Cluster, aptly named "Spiedie," is housed at the Thomas J. Watson College of Engineering and Applied Science's data center in the Innovative Technology Complex. This research facility offers computing capabilities for researchers across Binghamton University.
Raw Stats
- 16 Core 96GB Head Node
- 312TB Available Infiniband Connected NFS Storage
- 144 Compute Nodes
- 3372 native compute cores
- 12x NVidia H100 NVL GPUs, 8x NVidia A40 GPUs, 10x NVidia A5000 GPUs, 4x NVidia P100 GPUs
- 40, 56, and 200/400 Gb/s Infiniband to all nodes
- 1GbE to all nodes for management and OS deployment
Since its deployment, the Spiedie cluster has gone through various expansions, growing from 32 compute nodes to 144 compute nodes as of June 2025. Most of these expansions have come from individual researcher grant awards. These researchers recognized the cluster's importance to advancing their work and helped grow this valuable resource.
Watson College continues to pursue opportunities to enhance the Spiedie cluster and to expand its outreach to researchers in different transdisciplinary areas. Support for the cluster has come from Watson College and from researchers in the School of Computing and the Chemistry, Electrical and Computer Engineering, Mechanical Engineering, and Physics departments.
Head Node
Consists of a Dell R660 running a hypervisor to support the head node and other services across discrete virtual machines.
Storage Node
A common file system accessible by all nodes is hosted on a Red Barn HPC server providing 312TB, with the ability to add additional storage drives. Storage is accessible via NFS through 56 and 400 Gb/s Infiniband interfaces.
Compute Nodes
The 144 compute nodes are a heterogeneous mixture of Intel-based processors of varying generations and capacities.
Management and Network
Networking between the head, storage, and compute nodes utilizes Infiniband for inter-node communication and Ethernet for management. Bright Cluster Manager provides monitoring and management of the nodes, with SLURM handling job submission, queuing, and scheduling. The cluster currently supports MATLAB jobs of up to 600 cores, along with VASP, COMSOL, R, and almost any *nix-based application.
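As an illustration of SLURM-based job submission, a minimal batch script might look like the following. This is a generic sketch, not a Spiedie-specific template: the resource values, module name, and script names are assumptions, and actual partition names and defaults should be taken from the cluster's documentation.

```shell
#!/bin/bash
# Minimal SLURM batch script sketch. All values below are
# illustrative assumptions, not Spiedie-specific defaults.
#SBATCH --job-name=example_job
#SBATCH --nodes=1
#SBATCH --ntasks=8              # request 8 cores on one node
#SBATCH --time=04:00:00         # must stay within the cluster's wall-time limit
#SBATCH --output=example_%j.out # %j expands to the SLURM job ID

# Load the application environment (module name is hypothetical)
module load R

# Run the workload (script name is hypothetical)
Rscript analysis.R
```

A script like this would be submitted with `sbatch job.sh`, and `squeue -u $USER` shows its position in the queue while it waits to be scheduled.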
Cluster Policy
High-Performance Computing at Binghamton is a collaborative environment where computational resources have been pooled together to form the Spiedie cluster.
Access Options
Yearly subscription access
- $1,675/year per faculty research group
- Running queue core restrictions are removed
- Fair-share queue enabled
- Storage is monitored
- 122-hour wall time
- Per-research-group access
Condo access
Purchase your own nodes to integrate into the cluster
- Priority on your nodes
- Fair-share access to other nodes
- No limits on job submission to your nodes
- Storage is monitored
- Your nodes are accessible to others when not in use
Watson Computing will assist with quoting, acquisition, integration, and maintenance of purchased nodes. For more information on adding nodes to the Spiedie cluster, email Phillip Valenta.