HPC Resources available at the Computing Center (RZ), University of Augsburg
FAQ and Troubleshooting
- How do I register myself to use the HPC resources?
- How do I get access to the LiCCA or ALCC resources?
- What kind of resources are available on LiCCA?
- What kind of resources are available on ALCC?
- What Slurm Partitions (Queues) are available on LiCCA?
- What Slurm Partitions (Queues) are available on ALCC?
- What is Slurm?
- How do I use the Slurm batch system?
- How do I submit serial calculations?
- How do I run multithreaded calculations?
- How do I run parallel calculations on several nodes?
- How do I run GPU based calculations?
- How do I check the current Slurm schedule and queue?
- Is there some kind of Remote Desktop for the cluster?
- What if I have a question that is not listed here?
- How do I report a problem?
- Which version of Python can be used?
- Anaconda, Miniconda, Miniforge, or Micromamba: which should I use?
- How do I monitor live CPU/GPU/memory/disk utilization?
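Several of the FAQ entries above concern submitting work through the Slurm batch system. As a generic illustration only (the job name, resource values, and limits below are placeholders, not the actual LiCCA or ALCC configuration), a minimal serial batch script might look like this:

```shell
#!/bin/bash
#SBATCH --job-name=serial-example   # name shown in the queue; placeholder value
#SBATCH --ntasks=1                  # a serial job uses a single task
#SBATCH --cpus-per-task=1
#SBATCH --mem=2G                    # requested memory; adjust to your program
#SBATCH --time=00:30:00             # walltime limit (hh:mm:ss)

# Replace this line with the actual program you want to run
echo "Running on $(hostname)"
```

Such a script would be submitted with `sbatch script.sh`, and the job state checked with `squeue -u $USER`. Consult the cluster-specific pages linked on this site for the real partition names and resource limits.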
The HPC clusters LiCCA and ALCC are being used with increasing
intensity, and the load on the power distribution is rising.
Stronger power cables have been laid for the electrical
infrastructure supporting the HPC clusters.
A shutdown of all compute nodes is scheduled for:
Tuesday, May 14, 7:00
We will start to drain the queues on Saturday, May 11.
To the best of our knowledge the work on the
electrical system will be finished the same day,
so the clusters should be back on Wednesday, May 15.
We are migrating the Slurm database instance (serving both the ALCC and the LiCCA cluster) to a different system, starting today.
Slurm operation is planned to stay up during this time.
The migration should speed up Slurm operations, and it
is also needed in preparation for the ALCC upgrade from Ubuntu 20.04 to 22.04, which will happen in the coming weeks.
We are proud to announce the availability of LiCCA, a compute resource focused on research, open to members of the University of Augsburg.
Access to LiCCA is possible after registering your chair or working group for an HPC project. The complete application workflow is described in the HPC Knowledge Base, as are the cluster hardware and setup.
Questions and problems not covered by the HPC Knowledge Base can be addressed to the Service Desk via the Service & Support portal or by e-mail.
Happy computing, the RZ HPC team
- What is the HPC hardware ecosystem at University of Augsburg
- Usage Policies / Nutzungsregelungen
- Access HPC Resources
- Linux Compute Cluster Augsburg
- Resources
- Status
- Access
- Data Transfer
- File Systems
- Environment Modules (Lmod)
- Interactive (Debug) Runs (not Slurm)
- Submitting Jobs (Slurm Batch System)
- Slurm
- Slurm 101
- Slurm Queues
- Submitting Serial Jobs
- Submitting Interactive Jobs
- Submitting Parallel Jobs (MPI/OpenMP)
- Submitting GPU Jobs
- Submitting Array Jobs and Chain Jobs
- Handling Jobs running into TIMEOUT
- Accessing Webinterfaces (e.g. Jupyterlab, Ray) via SSH Tunnels
- Exclusive jobs for benchmarking
- Controlling the environment of a Job
- HPC Software and Libraries
- HPC Tuning Guide
- Service and Support
- Origin of the name
- FAQ and Troubleshooting - LiCCA
- Augsburg Linux Compute Cluster
- FAQ and Troubleshooting
- Mailinglists