====== Software on the Kraken cluster ======

Commercial programs (Ansys, Comsol, ...) are available on the system; see the Documentation section below. Freely available software packages, compilers in particular, are installed in multiple versions as environment modules ([[https://lmod.readthedocs.io/en/latest/|Lmod]]). Besides installations from source code, the extensive package database of the [[https://spack.readthedocs.io/en/latest/package_list.html|Spack]] system is used.

Requests for installation of freely available software, as well as module and queue-system administration, are handled by [[jpech@it.cas.cz|Jan Pech]], 2 6605 3132.

**We run jobs exclusively through [[computing:cluster:fronty:start|the SLURM queue system]].**

\\
\\
====== Environment Modules ======

Basic procedures for working with the cluster modules are described on the [[computing:cluster:software:moduly|modules]] page. Modules can be used both to run programs (openfoam, paraview, lammps, ...), see the lists below, and to set up an environment (compiler, MPI, ...) for compiling your own code; a short example session is shown after the lists.

Modules are named in the form //name/version[-compiler_version]// and are divided into three sections according to the architecture they are optimized for:

  * //broadwell// - universal
  * //zen2// - the machines kraken-m7, ..., kraken-m9
  * //zen4// - the machine kraken-m10 (the m10 machine runs Ubuntu 22.04)

<code>
linux-centos7-broadwell
  boost/1.59.0-5.5.0    gcc/10.3.0                    openmpi/4.0.5-5.5.0
  boost/1.76.0-10.3.0   intel-mpi/2019.10.317-18.0.2  openmpi/4.0.5-10.3.0
  cmake/3.20.6          mesa/21.2.1-10.3.0            paraview/5.9.0-10.3.0
  emacs/27.1            metis/5.1.0-10.3.0            python/3.8.9-10.3.0
  ffmpeg/4.3.2-10.3.0   mpich/3.4.1-5.5.0             qt/5.15.2-10.3.0
  gcc/5.5.0             mpich/3.4.1-10.3.0            scotch/6.0.10-5.5.0
  gcc/7.5.0             nektar/5.0.0-5.5.0            zlib/1.2.11-5.5.0

linux-centos7-zen2
  boost/1.77.0-11.2.0   openmpi/4.1.1-11.2.0     py-six/1.15.0-11.2.0
  gcc/11.2.0            py-numpy/1.21.2-11.2.0   python/3.8.11-11.2.0
  mpich/3.4.2-11.2.0    py-scipy/1.7.1-11.2.0

linux-ubuntu22.04-zen4
  boost/1.57.0-12.3.0   fftw/3.3.10-12.3.0     python/3.10.12-12.3.0
  boost/1.76.0-12.3.0   gcc/12.3.0             scotch/7.0.3-12.3.0
  boost/1.83.0-12.3.0   metis/5.1.0-12.3.0     tinyxml/2.6.2-12.3.0
  cmake/3.27.4          openmpi/4.0.7-12.3.0   zlib/1.3-12.3.0
</code>

The //free_modulefiles// section contains programs with specially created modules (all compiled for the broadwell architecture):

<code>
free_modulefiles
  foam-extend/4.0-5.5.0   openfoam-org/8-10.3.0   solids4foam/4.0
  foam-extend/4.1-7.5.0   openfoam/2012-10.3.0    solids4foam/4.1
  openfoam-org/6-10.3.0   paraview/5.6.0
</code>
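For illustration, a typical shell session with these modules might look as follows. This is a minimal sketch only: the module names are taken from the //broadwell// list above, and the source file ''my_solver.c'' is a hypothetical placeholder.

<code bash>
# list the modules available on the current node
module avail

# set up a compiler + MPI environment for building your own code
module load gcc/10.3.0 openmpi/4.0.5-10.3.0

# check what is loaded, then compile
module list
mpicc -O2 -o my_solver my_solver.c   # my_solver.c is a placeholder

# unload everything before switching to another toolchain
module purge
</code>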
\\
\\
====== Documentation ======

\\
==== Ansys ====

**[[https://www.ansys.com/|ANSYS]]** is currently available in version **2024 R1**.

The input (journal) file for a calculation can either be prepared locally and transferred to the cluster, or created directly on the admin node "kraken" through the remotely running Ansys graphical interface, i.e. Workbench. The calculation itself must then be submitted to the [[computing:cluster:fronty:ansys|queues]].

**Further information** can be found at:
  * [[computing:cluster:software:ansys|Ansys - overview for the Kraken cluster]]
  * [[computing:cluster:queues:ansys|running Ansys (Fluent) in the SLURM queue]]
  * [[http://shelob.it.cas.cz/ansys/|online license status]]; login: ansys; password: 1dohens

\\
==== BDDCML ====

{{:computing:cluster:manualy:manual_bddcml-1.3.pdf|Manual}}

\\
==== COMSOL ====

COMSOL is currently installed in version 5.5. After logging in to the cluster (ssh with GUI, i.e. X11 forwarding, enabled), it can be started from the command line with the ''comsol'' command. Running a job through the queueing system is described on the [[computing:cluster:fronty:comsol|queues / Comsol]] page.

\\
==== DL_POLY ====

{{:computing:cluster:manualy:dl_usrman4.05.pdf|User Manual}}
{{:computing:cluster:manualy:dl_javagui.pdf|Graphical User Interface}}

\\
==== LAMMPS ====

The program is currently installed with the components //diffraction, dipole, extra-dump, extra-fix, extra-pair, fep, kspace, manybody, meam, misc, molecule, phonon, replica, rigid// in the modules:
  * ''lammps/20230802-10.3.0'' (universal)
  * ''lammps/20230802-11.2.0'' (optimized for the zen2 architecture, i.e. the kraken-m[7-9] nodes)

\\
==== Matlab ====

**[[https://www.mathworks.com/products/matlab.html|Matlab]]** is available on all cluster nodes in version **R2022b**. Submitting a job to the queue is described in [[computing:cluster:fronty:matlab|queues/Matlab]]. On the admin node, Matlab is started simply by running the //matlab// command.

[[http://shelob/matlab/|online license status]]; login: matlab; password: 1dohens

\\
==== OpenFOAM ====

OpenFOAM is available in several versions in [[computing:cluster:software:moduly|modules]].

**The procedure to run OpenFOAM:**
  - Create a folder for OpenFOAM: ''mkdir -p $HOME/MyOpenFOAMFolder''
  - Move to the folder: ''cd $HOME/MyOpenFOAMFolder''
  - Create a ''job.sh'' script that submits the task to the queue, see [[computing:cluster:fronty:openfoam|SLURM script for OpenFOAM]] (an illustrative sketch is also shown at the end of this page)
  - Submit the job to the queue: ''sbatch job.sh''

{{:computing:cluster:manualy:of_userguide.pdf|User Guide}}
{{:computing:cluster:manualy:of_programmersguide.pdf|Programmer's Guide}}

\\
==== ParaView ====

{{:computing:cluster:manualy:paraviewmanual.v4.2.pdf|Manual}}

\\
==== ParMETIS ====

{{:computing:cluster:manualy:parmetis_manual.pdf|Manual}}
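\\
==== Example: SLURM script for OpenFOAM ====

For orientation only, the ''job.sh'' script referenced in the OpenFOAM section above might look roughly like this. This is a minimal sketch under assumptions: the authoritative template is on the [[computing:cluster:fronty:openfoam|SLURM script for OpenFOAM]] page, the solver ''simpleFoam'', the job name, and the core count are placeholders, and whether ''srun'' or ''mpirun'' is appropriate depends on how MPI is configured on the cluster.

<code bash>
#!/bin/bash
#SBATCH --job-name=myFoamCase    # placeholder job name
#SBATCH --ntasks=16              # number of MPI ranks; match your decomposition
#SBATCH --time=24:00:00          # requested walltime

# load one of the OpenFOAM modules listed above
module load openfoam-org/8-10.3.0

# run a parallel solver in the submitted case directory;
# simpleFoam is a placeholder, and the case is assumed to be
# already decomposed into 16 subdomains (decomposePar)
srun simpleFoam -parallel
</code>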