# LUMI


## What is LUMI?

LUMI is the fastest supercomputer in Europe. It is an HPE Cray EX supercomputer consisting of several hardware partitions targeting different use cases:

- 2560 GPU-based nodes ([**LUMI-G**](https://docs.lumi-supercomputer.eu/hardware/compute/lumig/)), each with one 64-core AMD Trento CPU and four AMD MI250X GPUs.
- 1536 dual-socket CPU nodes ([**LUMI-C**](https://docs.lumi-supercomputer.eu/hardware/compute/lumic/)) with 64-core 3rd-generation AMD EPYC™ CPUs and between 256 GB and 1024 GB of memory.
- Large-memory GPU nodes ([**LUMI-D**](https://docs.lumi-supercomputer.eu/hardware/compute/lumid/)) with a total of 32 TB of memory in the partition, intended for data analytics and visualisation.
- 20 PB of main storage ([**LUMI-P**](https://docs.lumi-supercomputer.eu/hardware/storage/lumip/)) with an aggregate bandwidth of 240 GB/s.
- 7 PB of flash storage ([**LUMI-F**](https://docs.lumi-supercomputer.eu/hardware/storage/lumif/)) with an aggregate bandwidth of 1,740 GB/s.

More about the LUMI system architecture can be found [here](https://www.lumi-supercomputer.eu/lumis-full-system-architecture-revealed/) and [here](https://docs.lumi-supercomputer.eu/hardware/). On LUMI, Slurm partitions can be allocated either by node or by individual resources; see the sketch below. More about partitions can be found [here](https://docs.lumi-supercomputer.eu/runjobs/scheduled-jobs/partitions/).
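
To make the difference concrete, here is a minimal sketch of a batch script for a by-resource allocation, with the by-node variant shown in comments. It assumes the `small` (allocatable by resources) and `standard` (allocatable by node) CPU partitions described in the partition documentation linked above; the project account number and the executable are placeholders to be replaced with your own.

```bash
#!/bin/bash
#SBATCH --account=project_465000XXX  # placeholder: your LUMI project number
#SBATCH --partition=small            # allocatable by resources: request only what you need
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=2
#SBATCH --mem-per-cpu=2G
#SBATCH --time=00:30:00

# On a by-node partition, whole nodes are allocated instead:
#   #SBATCH --partition=standard
#   #SBATCH --nodes=2
#   #SBATCH --ntasks-per-node=128

srun ./my_program  # placeholder executable
```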

## Why LUMI?

There are several reasons to choose LUMI instead of HPC:

- your job runs on GPUs
- your job needs a large amount of memory
- the queue on HPC is too long

## Getting started

- [How to get access to LUMI](lumi/start.md)
- [Software](lumi/software.md)
- [Examples of jobs and slurm scripts](lumi/examples.md)
- [Billing](lumi/billing.md)