Supercomputers

The supercomputers, composed of several computer clusters and disk servers, handle numerical calculations on a scale that enables participation in national and international projects.

Goliath Cluster

Goliath Cluster uses the HTCondor batch system to distribute jobs to individual computational servers. The HTCondor servers receive jobs from both local and grid users and distribute them to worker nodes. The worker nodes are divided into subclusters according to their hardware type: Minis, Lilium, Mahagon, Mikan, Magic, Aplex, Malva, Ib and Ibis.
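
For illustration, the sketch below shows how a job might be handed to such an HTCondor pool through its Python bindings rather than a classic submit description file. The executable, file names and resource requests are hypothetical placeholders, not Goliath's actual configuration.

    import htcondor  # HTCondor Python bindings

    # Describe the job: executable, arguments, logs and resource requests.
    # All concrete values here are illustrative placeholders.
    job = htcondor.Submit({
        "executable": "analysis.sh",   # hypothetical user script
        "arguments": "input.dat",
        "output": "job.out",
        "error": "job.err",
        "log": "job.log",
        "request_cpus": "4",
        "request_memory": "8GB",
    })

    # Hand the job to the local schedd, which queues it and lets the
    # negotiator match it to a worker node in one of the subclusters.
    schedd = htcondor.Schedd()
    result = schedd.submit(job, count=1)  # count = number of queued instances
    print("submitted cluster id:", result.cluster())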

LUNA Cluster

The LUNA cluster, intended primarily for FZU users from the Division of Solid State Physics, is connected to Metacentrum, the national grid environment. At the beginning of 2021 it provided 2048 AMD computational cores in 32 servers with a total of 16 TB of RAM. FZU users can use a priority queue of the PBSPro batch system, which schedules jobs across all clusters within Metacentrum. If some servers are temporarily not fully used, other Metacentrum users can run shorter jobs on them.
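
Jobs for a PBSPro queue are typically submitted with qsub; the minimal Python sketch below composes and submits such a job script. The queue name, resource figures and program are illustrative assumptions, not LUNA's real settings.

    import subprocess, textwrap

    # A minimal PBSPro job script. The queue name "luna" and the resource
    # figures are illustrative assumptions, not the cluster's real settings.
    script = textwrap.dedent("""\
        #!/bin/bash
        #PBS -N demo_job
        #PBS -q luna
        #PBS -l select=1:ncpus=8:mem=32gb
        #PBS -l walltime=02:00:00
        # Run from the directory the job was submitted from.
        cd "$PBS_O_WORKDIR"
        ./compute input.dat > result.out
        """)

    # qsub reads the job script from stdin and prints the assigned job id.
    proc = subprocess.run(["qsub"], input=script, text=True,
                          capture_output=True, check=True)
    print("job id:", proc.stdout.strip())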

A local disk array provides 100 TB for fast file sharing between the servers; backed-up space for home directories is available on a remote CESNET server. If a job needs a different operating system, Singularity virtualization is available, as on the other VS clusters. A number of applications are available on the AFS shared file system and, more recently, also on CVMFS.
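
The sketch below illustrates, under assumed paths, how a command could be run inside a Singularity container with the CVMFS tree bound in; the image location and the command itself are hypothetical.

    import subprocess

    # Run a command inside a Singularity container, binding the CVMFS tree
    # so software published there is visible inside the container. The image
    # path and the application invoked are hypothetical examples.
    cmd = [
        "singularity", "exec",
        "--bind", "/cvmfs:/cvmfs",    # expose the CVMFS repositories
        "/opt/images/centos7.sif",    # hypothetical container image
        "python3", "--version",       # command to run in the container
    ]
    subprocess.run(cmd, check=True)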

Koios Cluster

The Koios cluster, built for the CEICO (Central European Institute for Cosmology and Fundamental Physics) group, consists of 30 powerful servers interconnected by a low-latency, high-throughput InfiniBand EDR network (100 Gb/s) and a backed-up shared data storage system with a capacity of 100 TB. Its computational capacity of 960 CPU cores is complemented by 14,336 GPU cores, and users have up to 11 TB of RAM at their disposal. The system supports both batch and interactive jobs, with access from the command line as well as through a graphical user environment.

Jobs are distributed to the servers by the Slurm batch system. Users have a pre-configured development environment at their disposal, containing the latest tools from the GNU GCC and Intel compiler families, the Wolfram Mathematica interactive tool, and tools for interactive work with data, including Python in JupyterHub/Notebook. Modularity and code portability are ensured by Singularity containers and by EasyBuild, a framework for building scientific software.
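
As a sketch of how such a Slurm setup is typically used, the following Python snippet composes and submits a small GPU batch job. The resource figures and the module name are assumptions, not Koios's actual configuration.

    import subprocess, textwrap

    # A minimal Slurm batch script for a GPU job. The resource figures
    # and the module name are illustrative assumptions.
    script = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --job-name=demo
        #SBATCH --ntasks=1
        #SBATCH --cpus-per-task=8
        #SBATCH --mem=16G
        #SBATCH --gres=gpu:1
        #SBATCH --time=01:00:00
        # "Python" stands for a module built with EasyBuild; the real name may differ.
        module load Python
        python3 train.py
        """)

    # sbatch reads the batch script from stdin and prints the job id.
    proc = subprocess.run(["sbatch"], input=script, text=True,
                          capture_output=True, check=True)
    print(proc.stdout.strip())        # e.g. "Submitted batch job 12345"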