====== Ansys (Fluent) ======

Several versions of ANSYS are installed on the cluster; see the [[computing:cluster:software:start|summary of installed software]]. See the [[computing:cluster:software:ansys|documentation]] for more information on the location and commands of each version. The Workbench for preparing the job input file can be run on the administrative node "kraken"; the calculation itself should be run through the queue system on the powerful compute machines.

**NOTE: Ansys requires an unencrypted SSH connection between the cluster elements. Ansys users must set this up themselves after their first login, see [[computing:cluster:software:start#ansys|Software on Kraken/Ansys cluster]].**

Basic script to run a 2D job in Fluent (version 2021 R1) in a queue via the ''sbatch'' command:

<code bash>
#!/bin/bash
#SBATCH --job-name=fluent_test
#SBATCH --output=slurm-%j.out
#SBATCH --partition=Mexpress
#SBATCH --exclude=kraken-m[7-9]
#SBATCH --ntasks=4

# Build a comma-separated list of the allocated nodes for Fluent
FLUENTNODES="$(scontrol show hostnames)"
FLUENTNODES=$(echo $FLUENTNODES | tr ' ' ',')

/ansys_inc/v211/fluent/bin/fluent -ssh 2ddp -mpi=openmpi -slurm -t $SLURM_NTASKS -cnf=$FLUENTNODES -g -i my_journal_file.jou
</code>

The meanings of the ''#SBATCH'' options are described on the [[computing:cluster:queues:|cluster/queues]] page. The following lines correspond to the command-line invocation of the program in parallel (in this example Fluent 2021 R1: ''/ansys_inc/v211/fluent/bin/fluent''; for other available versions, see [[computing:cluster:software:ansys|ANSYS]]).

Due to the ANSYS licensing setup, splitting jobs across multiple nodes is not recommended. The compute nodes have enough cores for the current licenses, so splitting a job across multiple nodes will only slow the computation down.
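The ''-i my_journal_file.jou'' option points Fluent to a journal file containing the TUI commands to execute in batch mode. A minimal sketch of such a journal is shown below; the case and data file names are placeholders to be replaced with your own, and the exact TUI paths may differ slightly between Fluent versions:

<code>
; read the case file (mesh + solver settings)
/file/read-case my_case.cas.h5
; initialize the flow field
/solve/initialize/initialize-flow
; run 500 iterations
/solve/iterate 500
; write the resulting case and data files
/file/write-case-data my_result.cas.h5
; leave Fluent, confirming the exit prompt
exit
yes
</code>

Lines starting with '';'' are comments. Because the job runs with ''-g'' (no GUI), the journal must end by exiting Fluent, otherwise the job will keep the allocation until it hits the queue time limit.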