<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="http://docs.it.cas.cz/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="http://docs.it.cas.cz/feed.php">
        <title>Dokumentace ÚT AV ČR - en:computing:cluster:fronty</title>
        <description></description>
        <link>http://docs.it.cas.cz/</link>
        <image rdf:resource="http://docs.it.cas.cz/lib/exe/fetch.php?media=wiki:logo.svg" />
        <dc:date>2026-05-05T13:19:43+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:abaqus&amp;rev=1708086961&amp;do=diff"/>
                <rdf:li rdf:resource="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:ansys&amp;rev=1708087066&amp;do=diff"/>
                <rdf:li rdf:resource="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:comsol&amp;rev=1708087204&amp;do=diff"/>
                <rdf:li rdf:resource="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:matlab&amp;rev=1708087459&amp;do=diff"/>
                <rdf:li rdf:resource="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:openfoam&amp;rev=1708087651&amp;do=diff"/>
                <rdf:li rdf:resource="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:paraview&amp;rev=1722523190&amp;do=diff"/>
                <rdf:li rdf:resource="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:pmd&amp;rev=1708089511&amp;do=diff"/>
                <rdf:li rdf:resource="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:start&amp;rev=1770648494&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="http://docs.it.cas.cz/lib/exe/fetch.php?media=wiki:logo.svg">
        <title>Dokumentace ÚT AV ČR</title>
        <link>http://docs.it.cas.cz/</link>
        <url>http://docs.it.cas.cz/lib/exe/fetch.php?media=wiki:logo.svg</url>
    </image>
    <item rdf:about="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:abaqus&amp;rev=1708086961&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-02-16T12:36:01+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>abaqus</title>
        <link>http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:abaqus&amp;rev=1708086961&amp;do=diff</link>
        <description>Abaqus

Running on 5 cores within a single node can be specified with the following “sbatch” script:
#!/bin/bash

#SBATCH --job-name=Abaqus_test
#SBATCH --output=slurm_out.txt
#SBATCH -N 1
#SBATCH -n 5

/var/DassaultSystemes/SIMULIA/Commands/abaqus job=PLATE_HOLE.inp cpus=$SLURM_NTASKS</description>
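A script like the one above is submitted and monitored with the standard SLURM commands; a short sketch (the script file name and job ID are illustrative placeholders):

```shell
sbatch abaqus_job.sh   # submit the script; prints the assigned job ID
squeue -u $USER        # list your pending and running jobs
scancel 12345          # cancel a job by its ID (12345 is a placeholder)
```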
    </item>
    <item rdf:about="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:ansys&amp;rev=1708087066&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-02-16T12:37:46+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ansys</title>
        <link>http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:ansys&amp;rev=1708087066&amp;do=diff</link>
        <description>Ansys (Fluent)

There are several versions of ANSYS installed on the cluster; see the summary of installed software.

See documentation for more information on the location of each version and commands.

The workbench for preparing the job input file can be run on the administrative node “kraken</description>
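A minimal sbatch sketch for a batch Fluent run, modeled on the Abaqus example above; the journal file name and the 3ddp solver mode are illustrative assumptions, not taken from this documentation:

```shell
#!/bin/bash

#SBATCH --job-name=Fluent_test
#SBATCH --output=slurm_out.txt
#SBATCH -N 1
#SBATCH -n 5

# -g runs Fluent without the GUI, -t sets the number of parallel
# processes, -i reads commands from a journal file (name assumed).
fluent 3ddp -g -t$SLURM_NTASKS -i journal.jou
```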
    </item>
    <item rdf:about="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:comsol&amp;rev=1708087204&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-02-16T12:40:04+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>comsol</title>
        <link>http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:comsol&amp;rev=1708087204&amp;do=diff</link>
        <description>Comsol

COMSOL can use the variables provided by the SLURM queue system, so the script for “sbatch” is quite simple. The only oddity is that the number of cores is chosen using the “--nodes” and “--ntasks-per-node” variables, the “--ntasks”</description>
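A minimal sbatch sketch following that description, choosing the core count via “--nodes” and “--ntasks-per-node”; the input and output file names are illustrative assumptions:

```shell
#!/bin/bash

#SBATCH --job-name=Comsol_test
#SBATCH --output=slurm_out.txt
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=5

# COMSOL picks up the number of processes from the SLURM environment,
# so no explicit core count is passed on the command line.
comsol batch -inputfile model.mph -outputfile model_solved.mph
```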
    </item>
    <item rdf:about="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:matlab&amp;rev=1708087459&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-02-16T12:44:19+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>matlab</title>
        <link>http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:matlab&amp;rev=1708087459&amp;do=diff</link>
        <description>Matlab

Simple examples for running Matlab tasks can be found in the /home/SOFT/modules/examples/Matlab/ directory, which you can copy to your own directory using:
cp -r /home/SOFT/modules/examples/Matlab/ ./
In the single directory there are two files, the Matlab script</description>
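A minimal sbatch sketch for a non-interactive Matlab run; the module name and script name are illustrative assumptions, not taken from the examples directory:

```shell
#!/bin/bash

#SBATCH --job-name=Matlab_test
#SBATCH --output=slurm_out.txt
#SBATCH -N 1
#SBATCH -n 1

# -batch runs the named script without the desktop and exits
# when it finishes (script name "myscript" is a placeholder).
module load matlab
matlab -batch "myscript"
```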
    </item>
    <item rdf:about="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:openfoam&amp;rev=1708087651&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-02-16T12:47:31+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>openfoam</title>
        <link>http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:openfoam&amp;rev=1708087651&amp;do=diff</link>
        <description>OpenFOAM

The program is available in modules, in several development branches and versions:

	*  OpenFOAM v2012
	*  OpenFOAM-org ver. 6 and 8
	*  foam-extend versions 4.0 and 4.1

The solids4foam extension is also available for both versions of foam-extend.

The contents of the input script for sbatch for a job running on a single core:</description>
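The script itself is cut off in this feed excerpt; a hedged single-core sketch in the style of the other items, where the module name and the solver are illustrative assumptions:

```shell
#!/bin/bash

#SBATCH --job-name=OpenFOAM_test
#SBATCH --output=slurm_out.txt
#SBATCH -N 1
#SBATCH -n 1

# Module and solver names are placeholders; pick the branch you need
# from "module avail" and the solver appropriate to your case.
module load openfoam/v2012
simpleFoam -case myCase
```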
    </item>
    <item rdf:about="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:paraview&amp;rev=1722523190&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-08-01T14:39:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>paraview</title>
        <link>http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:paraview&amp;rev=1722523190&amp;do=diff</link>
        <description>Paraview

Paraview is installed on the cluster only as pvserver, for server-client connections.

NOTE: The versions of Paraview on the Server and Client side must be identical!
Currently, versions 5.10.1 and 5.12.1 are available on the cluster (Server) side (</description>
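A sketch of a server-client session; the compute-node name, port number, and user name are illustrative assumptions:

```shell
# On the cluster, inside a SLURM job allocation (port is arbitrary):
pvserver --server-port=11111

# On the local machine, forward the port through the administrative
# node, then point the Paraview client at localhost:11111.
# Remember: client and server versions must be identical.
ssh -L 11111:nodeXX:11111 user@kraken
```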
    </item>
    <item rdf:about="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:pmd&amp;rev=1708089511&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-02-16T13:18:31+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>pmd</title>
        <link>http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:pmd&amp;rev=1708089511&amp;do=diff</link>
        <description>PMD

The PMD program is primarily intended for the L part, but can also be run on the M part.

The program always runs on a single node; it cannot be run across multiple nodes.

The maximum number of cores on the L part is 8 (--ntasks=8). An error will occur if a higher value is entered.</description>
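A minimal sbatch sketch respecting those limits (single node, at most 8 tasks on the L part); the executable name and input file are illustrative assumptions, since the feed excerpt does not show the actual command:

```shell
#!/bin/bash

#SBATCH --job-name=PMD_test
#SBATCH --output=slurm_out.txt
#SBATCH --nodes=1
#SBATCH --ntasks=8

# Executable and input file names are placeholders.
pmd input.in
```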
    </item>
    <item rdf:about="http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:start&amp;rev=1770648494&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-02-09T14:48:14+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>start</title>
        <link>http://docs.it.cas.cz/doku.php?id=en:computing:cluster:fronty:start&amp;rev=1770648494&amp;do=diff</link>
        <description>As of February 2022, running jobs on the Kraken cluster compute nodes is only possible through the queuing system (SLURM). Compute nodes are not directly accessible from the network. The special administrative node “kraken” is available for connecting to the cluster and for preparing and submitting jobs to the queuing system.</description>
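A typical session therefore goes through the administrative node; a short sketch, where the user name and script name are illustrative placeholders:

```shell
ssh user@kraken     # log in to the administrative node
sbatch job.sh       # submit a prepared job script to SLURM
squeue -u $USER     # check the state of your jobs
```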
    </item>
</rdf:RDF>
