# ORCA
***This manual is work in progress, please check regularly for updates***
***Important note:***
To run ORCA, users must register individually and agree to the EULA at the [Orcaforum](https://orcaforum.kofo.mpg.de/app.php/portal).
## ORCA short introduction
---
1. Make an [orca.slurm](orca.slurm) batch script for parallel calculations:

   ```bash
   #!/bin/bash
   #SBATCH --job-name=Job_Name
   ##SBATCH --mem-per-cpu=3GB
   #SBATCH --nodes=1
   #SBATCH --ntasks=24
   #SBATCH --cpus-per-task=1
   #SBATCH -t 5-00:00:00
   #SBATCH --partition=common
   #SBATCH --no-requeue

   module load green/all
   module load orca/5.0.3
   export orcadir=/gpfs/mariana/software/green/Orca/orca_5_0_3_openmpi_411/

   # Create scratch directory
   SCRATCH=/state/partition1/$SLURM_JOB_ID
   mkdir -p $SCRATCH
   cp $SLURM_SUBMIT_DIR/*.inp $SCRATCH/
   cd $SCRATCH/

   # Run calculations
   $orcadir/orca job.inp >> $SLURM_SUBMIT_DIR/job.log

   # Remove temporary files so they are not copied back
   rm -f *tmp*

   # Copy files back to the working directory
   cp $SCRATCH/* $SLURM_SUBMIT_DIR

   # Clean up after yourself
   rm -rf $SCRATCH
   ```
   or an [orca-single-core.slurm](orca-single-core.slurm) batch script for single-core calculations:
   ```bash
   #!/bin/bash
   #SBATCH --job-name=Job_Name
   #SBATCH --mem-per-cpu=2GB
   #SBATCH --nodes=1
   #SBATCH --ntasks=1
   #SBATCH --cpus-per-task=1
   #SBATCH -t 10:00:00
   #SBATCH --partition=common
   #SBATCH --no-requeue

   module load green/all
   module load orca/5.0.3
   export orcadir=/gpfs/mariana/software/green/Orca/orca_5_0_3_openmpi_411/

   # Create scratch directory
   SCRATCH=/state/partition1/$SLURM_JOB_ID
   mkdir -p $SCRATCH
   cp $SLURM_SUBMIT_DIR/*.inp $SCRATCH/
   cd $SCRATCH/

   # Run calculations
   $orcadir/orca job.inp >> $SLURM_SUBMIT_DIR/job.log

   # Remove temporary files so they are not copied back
   rm -f *tmp*

   # Copy files back to the working directory
   cp $SCRATCH/* $SLURM_SUBMIT_DIR

   # Clean up after yourself
   rm -rf $SCRATCH
   ```
2. Copy the job input file [job.inp](job.inp).
3. Submit the job on **base**:
   ```bash
   sbatch orca.slurm
   ```
   ***NB!*** _More cores do not necessarily mean a faster calculation! See [Benchmarks](https://hpc.pages.taltech.ee/user-guides/chemistry/orca.html#benchmarks-for-parallel-jobs)._

   ***NB!*** For a parallel ORCA run the full path to the executable is needed. Single-core calculations can be performed with just the `orca` command.
4. Check results using [visualization software](visualization.md).
## ORCA long version
---
### Environment
There are currently several versions of ORCA available on the HPC cluster: 4.1.2, 4.2.1 and 5.0.3. The environment is set up by the commands:
```bash
module load green/all
module load orca/5.0.3
```
On first use, the user has to agree to the license:

```bash
touch ~/.licenses/orca-accepted
```

If this is the user's first license agreement, the `~/.licenses` directory has to be created first:

```bash
mkdir ~/.licenses
touch ~/.licenses/orca-accepted
```

***NB!*** After agreeing to the license, the user has to log out and log in again to be able to run ORCA.
### Running ORCA jobs
ORCA input files are executed by the `orca` command, which is usually placed in a `slurm` script.

***NB!*** For a parallel ORCA run the full path to the executable is needed, while single-core calculations can be performed with just the `orca` command:

```
/gpfs/mariana/software/green/Orca/orca_5_0_3_openmpi_411/orca
```
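For example, with a hypothetical input file `job.inp`, the two invocations would look like this:

```bash
# Parallel run: ORCA has to be called with its full path
/gpfs/mariana/software/green/Orca/orca_5_0_3_openmpi_411/orca job.inp > job.log

# Single-core run: the plain command is sufficient
orca job.inp > job.log
```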
### Single core calculations
By default, ORCA executes jobs on a single processor only.
Example of ORCA input:
```
! RI BP86 def2-SVP def2/J D3BJ printbasis Opt
*xyz 0 1
C       0.67650        0.42710        0.00022
H       0.75477        1.52537        0.00197
O       1.62208       -0.30498       -0.00037
S      -1.01309       -0.16870        0.00021
H      -1.58104        1.05112       -0.00371
*
```
Example of an [orca-single-core.slurm](orca-single-core.slurm) batch script for single core calculations.
***NB!*** If more processors are requested in the Slurm script than are used by ORCA, they will be reserved but not utilized.
### Parallel jobs
To run a job on multiple processors/cores, the number of cores should be specified both in the ORCA input file and in the `slurm` script. In ORCA this is done with the `PAL` keyword (e.g. `PAL4`) or with a `%pal` block.
Example of ORCA input for 4 cores:
```
! RI BP86 def2-SVP def2/J D3BJ printbasis Opt
%pal
   nprocs 4
end
*xyz 0 1
C       0.67650        0.42710        0.00022
H       0.75477        1.52537        0.00197
O       1.62208       -0.30498       -0.00037
S      -1.01309       -0.16870        0.00021
H      -1.58104        1.05112       -0.00371
*
```
Example of a `slurm` script:

```bash
#!/bin/bash
#SBATCH --job-name=Job_Name
##SBATCH --mem-per-cpu=2GB
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=1
#SBATCH -t 2:00:00
#SBATCH --partition=common
#SBATCH --no-requeue

module load green/all
module load orca/5.0.3
export orcadir=/gpfs/mariana/software/green/Orca/orca_5_0_3_openmpi_411/

# Create scratch directory
SCRATCH=/state/partition1/$SLURM_JOB_ID
mkdir -p $SCRATCH
cp $SLURM_SUBMIT_DIR/*.inp $SCRATCH/
cd $SCRATCH/

# Run calculations
$orcadir/orca job.inp >> $SLURM_SUBMIT_DIR/job.log

# Copy files back to the working directory
cp $SCRATCH/* $SLURM_SUBMIT_DIR

# Clean up after yourself
rm -rf $SCRATCH
```
***NB!*** For a parallel ORCA run the full path to the executable is needed.
More about ORCA input can be found at [ORCA Input Library](https://sites.google.com/site/orcainputlibrary/home), [ORCA tutorials](https://www.orcasoftware.de/tutorials_orca/) and [ORCA forum](https://orcaforum.kofo.mpg.de/).
### Memory
The default dynamic memory requested by ORCA is frequently too small for successful job termination; if the amount of memory requested is insufficient, the job can crash. Memory usage in ORCA is controlled by the `%maxcore` keyword, which sets the memory per core in MB:

```
%maxcore 2000
```
There is no golden rule for memory requests, since they depend on the basis set and the type of calculation. Usually 1-5 GB per CPU core is sufficient. Data from the `slurm-JOBID.stat` file can be useful for determining the amount of memory required for a computation, since it shows the efficiency of memory utilization.
Bad example:

```
Memory Utilized: 3.08 GB
Memory Efficiency: 11.83% of 26.00 GB
```

Good example:

```
Memory Utilized: 63.12 GB
Memory Efficiency: 98.62% of 64.00 GB
```
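As a rough guide, the total memory ORCA may use is approximately `nprocs x maxcore`, and it should stay below what the `slurm` script reserves; ORCA can temporarily exceed `%maxcore`, so it is common practice to leave some headroom. The numbers below are only an illustrative assumption, not a recommendation:

```
# ORCA input: 4 cores x 3000 MB per core, i.e. about 12 GB in total
%pal
   nprocs 4
end
%maxcore 3000
```

```bash
# Matching Slurm request: 4 x 4 GB = 16 GB, leaving some headroom
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=4GB
```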
### Time
Time limits depend on the partition used, see [taltech user-guides](https://hpc.pages.taltech.ee/user-guides/index.html#hardware-specification). If the calculation time exceeds the time limit requested in the `slurm` script, the job will be killed. Therefore, it is recommended to request somewhat more time than is usually needed for the calculation.
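For reference, Slurm accepts wall-time requests in several formats; the values below are placeholders, not recommendations:

```bash
#SBATCH -t 30              # 30 minutes
#SBATCH -t 10:00:00        # 10 hours
#SBATCH -t 5-00:00:00      # 5 days
```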
#### _Restarting a failed/interrupted calculation_
By default, all ORCA jobs are restart jobs.

An SCF calculation with the input file name `jobname.inp` will automatically search for a GBW file named `jobname.gbw`, attempt to read in the old orbitals, and continue the SCF from there.

The `MOREAD` and `%moinp` keywords allow specifying manually which file the orbitals are read from:
```
! MORead
%moinp "jobname2.gbw"
# Note: if jobname2.gbw is the GBW file being read in, then the input file cannot be named jobname2.inp.
*xyz 0 1
```
It is recommended to restart a geometry optimization from the last geometry (`job.xyz`).
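A minimal sketch of such a restart, assuming the previous run left its final geometry in `job.xyz` (the keyword line is simply the one from the examples above):

```
! RI BP86 def2-SVP def2/J D3BJ Opt
*xyzfile 0 1 job.xyz
```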
Numerical frequency calculations can also be restarted if the `.hess` files from the previous calculation are present:

```
!
%freq
   restart true
end
```
**NB!** Checkpoint files are very large; after successful completion of the calculation, it is recommended to delete them.
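For example (the file names are placeholders; keep any files that may still be needed for restarts):

```bash
rm job.gbw job*.tmp
```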
#### Copying files
During calculations ORCA creates many additional files. By default, the `slurm` scripts above copy all of them back to the user's directory, but the user can choose which files to copy back to the working directory, for example:

```bash
cp $SCRATCH/*.gbw $SLURM_SUBMIT_DIR
cp $SCRATCH/*.engrad $SLURM_SUBMIT_DIR
cp $SCRATCH/*.xyz $SLURM_SUBMIT_DIR
cp $SCRATCH/*.log $SLURM_SUBMIT_DIR
cp $SCRATCH/*.hess $SLURM_SUBMIT_DIR
```
#### How to cite:
### Benchmarks for parallel jobs