.. Front page

HPC Center user guides
======================

.. figure:: pictures/HPC.jpg
   :align: center
   :scale: 100%

-----------------------

The use of the resources of the TalTech `HPC Centre`_ requires an active Uni-ID account (an application form for non-employees/non-students can be found `here`_). In addition, the user needs to be added to the HPC-USERS group; please ask hpcsupport@taltech.ee (from your Uni-ID e-mail account) to activate HPC access. When using licensed programs, the user must also be added to the appropriate group. `More about available programs and licenses`_.

.. _HPC Centre: https://taltech.ee/en/itcollege/hpc-centre
.. _here: https://taltech.atlassian.net/wiki/spaces/ITI/pages/38996020/Uni-ID+lepinguv+line+konto
.. _More about available programs and licenses: https://hpc.pages.taltech.ee/user-guides/software.html

The cluster runs a Linux operating system (based on CentOS; Debian or Ubuntu on special-purpose nodes) and uses SLURM as batch scheduler and resource manager. Linux is the dominant operating system in scientific computing and is, as of now, the only operating system present in the `Top500`_ list (a list of the 500 most powerful computers in the world). Linux command-line knowledge is therefore essential for using the cluster. By learning Linux and using the TalTech clusters, you also acquire the skills needed to access one of the international supercomputing centers (e.g. `LUMI`_ or any of the `PRACE`_ centers).

.. _Top500: https://www.top500.org/
.. _LUMI: https://www.lumi-supercomputer.eu/
.. _PRACE: https://prace-ri.eu/hpc-access/hpc-systems/
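Once the account is active, the cluster is used from the Linux command line over SSH. A minimal sketch, assuming you connect to the base cluster ``base.hpc.taltech.ee`` described in the hardware overview below; ``uni-user`` is a placeholder for your own Uni-ID username:

.. code-block:: bash

   # Log in to the base cluster with your Uni-ID credentials
   # ("uni-user" is a placeholder -- replace it with your own username)
   ssh uni-user@base.hpc.taltech.ee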



Hardware Specification
-----------------------

**TalTech ETAIS Cloud:** a 4-node OpenStack cloud

- 5 compute (nova) nodes with 768 GB of RAM and 80 threads each
- 65 TB CephFS storage (net capacity)
- accessible through the ETAIS website: https://etais.ee/using/

**TalTech cluster base.hpc.taltech.ee:**

- SLURM v20 scheduler, with a live `load diagram`_
- home directory file system with 1.5 PB of storage and a 2 TB/user quota
- 32 **green** nodes (former **hpc2.ttu.ee** nodes), 2x Intel Xeon Gold 6148 20C 2.40 GHz, **96 GB** DDR4-2666 R ECC RAM (**green[1-32]**), 25 Gbit Ethernet, 18 of these with FDR InfiniBand (**green-ib** partition)
- 48 **gray** nodes (former **hpc.ttu.ee** nodes, migration in progress), 2x Intel Xeon E5-2630L 6C, **64 GB** RAM, 1 TB local drive, 1 Gbit Ethernet, QDR InfiniBand (**gray-ib** partition)
- 1 **mem1tb** large-memory node, 1 TB RAM, 4x Intel Xeon CPU E5-4640 (together 32 cores, 64 threads)
- **amp** GPU nodes (see the `specific guide for amp`_); amp1: 8x Nvidia A100/40GB, 2x 64-core AMD EPYC 7742 (together 128 cores, 256 threads), 1 TB RAM; amp2: 8x Nvidia A100/80GB, 2x 64-core AMD EPYC 7713 (together 128 cores, 256 threads), 2 TB RAM
- **viz.hpc.taltech.ee** visualization node (accessible within the University network and via FortiVPN), 2x Nvidia Tesla K20Xm graphics cards (on displays :0.0 and :0.1)

.. _load diagram: https://base.hpc.taltech.ee/load/
.. _specific guide for amp: gpu.html
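Once logged in, standard SLURM commands can be used to check which of the nodes and partitions listed above are currently available; a small sketch (the exact output depends on the current cluster state):

.. code-block:: bash

   # Overview of all partitions, their time limits and node states
   sinfo

   # Node-by-node view of a single partition, e.g. the GPU nodes
   sinfo -p gpu -N -l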

Billing
-----------------------

**TalTech cluster**

.. list-table::
   :align: center
   :widths: 22 22 22 22
   :header-rows: 1

   * - What
     - Unit
     - TalTech internal
     - External
   * - CPU & < 6 GB RAM
     - CPU/hour
     - 0.006 EUR
     - 0.012 EUR
   * - CPU & > 6 GB RAM
     - 6 GB RAM/hour
     - 0.006 EUR
     - 0.012 EUR
   * - GPU
     - GPU/hour
     - 0.20 EUR
     - 0.50 EUR
   * - Storage
     - 1 TB
     - 20 EUR/year
     - 80 EUR/year

More details on how to calculate computational costs for the TalTech cluster can be found in the `Monitoring resources part of Quickstart page`_.

**LUMI cluster**

.. list-table::
   :align: center
   :widths: 32 22 22
   :header-rows: 1

   * - What
     - Unit
     - Price for TalTech
   * - CPU
     - CPU/hour
     - 0.008 EUR
   * - GPU
     - GPU/hour
     - 0.35 EUR
   * - User home directory
     - 20 GB
     - free
   * - Project storage (persistent and scratch)
     - TB/hour
     - 0.0106 EUR
   * - Flash-based scratch storage
     - TB/hour
     - 10 x 0.0106 EUR

A more detailed guide on how to calculate computational costs for LUMI can be found in the `LUMI billing policy`_.

.. _Monitoring resources part of Quickstart page: https://hpc.pages.taltech.ee/user-guides-newtest/quickstart.html#monitoring-resource-usage
.. _LUMI billing policy: https://docs.lumi-supercomputer.eu/runjobs/lumi_env/billing/#compute-billing
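As an illustration of the TalTech price list above (a hypothetical job, not an official quote), the cost is simply the number of billed units multiplied by the runtime in hours and the rate:

.. code-block:: bash

   # Hypothetical example: 8 CPU cores for 24 hours with < 6 GB RAM per core,
   # billed at the internal rate of 0.006 EUR per CPU-hour
   echo "8 * 24 * 0.006" | bc   # 1.152 EUR

   # Hypothetical example: 1 GPU for 24 hours at the internal rate of 0.20 EUR
   echo "1 * 24 * 0.20" | bc    # 4.80 EUR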

SLURM partitions
-----------------------

.. list-table::
   :align: center
   :widths: 22 22 22 22 22
   :header-rows: 1

   * - partition
     - default time
     - time limit
     - default memory
     - nodes
   * - **short**
     - 10 min
     - 2 hours
     - 1 GB/thread
     - green
   * - **common**
     - 10 min
     - 8 days
     - 1 GB/thread
     - green
   * - **green-ib**
     - 10 min
     - 8 days
     - 1 GB/thread
     - green
   * - **long**
     - 10 min
     - 15 days
     - 1 GB/thread
     - green
   * - **gray-ib**
     - 10 min
     - 8 days
     - 1 GB/thread
     - gray
   * - **gpu**
     - 10 min
     - 5 days
     - 1 GB/thread
     - amp
   * - **mem1tb**
     - 10 min
     - 8 days
     - 1 GB/thread
     - mem1tb
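The partition is selected in the batch script submitted to SLURM. A minimal sketch, assuming a serial program ``./my_program`` (a placeholder) and the **common** partition from the table above:

.. code-block:: bash

   #!/bin/bash
   #SBATCH --job-name=example     # placeholder job name
   #SBATCH --partition=common     # any partition from the table above
   #SBATCH --time=02:00:00        # request 2 hours instead of the 10 min default
   #SBATCH --ntasks=1
   #SBATCH --cpus-per-task=4
   #SBATCH --mem-per-cpu=1G       # matches the 1 GB/thread default

   ./my_program                   # placeholder for your own executable

The script is submitted with ``sbatch jobscript.sh`` and monitored with ``squeue -u $USER``; see the Quickstart page for a complete walk-through.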

Contents:
-----------------------

.. toctree::
   :maxdepth: 3

   lumi
   cloud
   quickstart
   learning
   modules
   software
   mpi
   performance
   profiling
   visualization
   gpu
   singularity
   acknowledgement