From 2438797863c21a2ca9ab0a8410abaf77249d844a Mon Sep 17 00:00:00 2001
From: Alberto Invernizzi
Date: Wed, 3 Dec 2025 10:57:20 +0100
Subject: [PATCH 1/4] add section about one-time setup for scripts

---
 docs/software/sciviz/paraview.md | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/docs/software/sciviz/paraview.md b/docs/software/sciviz/paraview.md
index 0f9858e3..ea45dd6b 100644
--- a/docs/software/sciviz/paraview.md
+++ b/docs/software/sciviz/paraview.md
@@ -12,6 +12,37 @@
 [ParaView](https://www.paraview.org/) is provided on [ALPS][platforms-on-alps] via [uenv][ref-uenv].
 Please have a look at the [uenv documentation][ref-uenv] for more information about uenvs and how to use them.
 
+## One-time setup
+
+CSCS provides helper scripts that are handy for launching live sessions and for batch rendering.
+
+To keep these utilities available from any shell, the simplest approach is to place them in a directory that is part of your
+`PATH`. A common convention is to create a personal `~/bin` directory and add it to your `PATH`.
+
+```bash
+# create the ~/bin folder and add it to PATH
+# (single quotes so that $PATH is expanded when .bashrc is sourced, not when it is written)
+mkdir -p ~/bin
+echo 'export PATH=~/bin:$PATH' >> ~/.bashrc
+
+# then, download the scripts and put them in that folder
+cd ~/bin
+wget -qO- https://gist.github.com/albestro/67728336bb3e60f6a3c64471b1893d66/archive/main.tar.gz | tar xzf - --strip-components 1 --same-permissions
+```
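+
+You can quickly verify that the download and unpacking worked (assuming the archive layout of the gist above) by listing the folder:
+
+```bash
+# the helper scripts should show up here and be executable
+ls -l ~/bin
+```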
+
+!!! warning "Reload to make changes effective"
+
+    Changes to your `.bashrc` require a reload of your shell to become effective.
+
+    Hence, you need to log out and log back in before you can use the scripts you just intalled.
+
+In a new shell, you can then test that the scripts are available as commands from any directory.
+For instance, issuing
+
+```bash
+paraview-reverse-connect
+```
+
+should print some basic instructions on how to use the command.
 
 ## Running ParaView in batch mode with Python scripts
 

From a6dd37aed8f7270910552919950f06c7213ee5b3 Mon Sep 17 00:00:00 2001
From: Alberto Invernizzi
Date: Wed, 3 Dec 2025 11:09:16 +0100
Subject: [PATCH 2/4] fix typo

---
 docs/software/sciviz/paraview.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/software/sciviz/paraview.md b/docs/software/sciviz/paraview.md
index ea45dd6b..5ad67111 100644
--- a/docs/software/sciviz/paraview.md
+++ b/docs/software/sciviz/paraview.md
@@ -33,7 +33,7 @@
 
     Changes to your `.bashrc` require a reload of your shell to become effective.
 
-    Hence, you need to log out and log back in before you can use the scripts you just intalled.
+    Hence, you need to log out and log back in before you can use the scripts you just installed.
 
 In a new shell, you can then test that the scripts are available as commands from any directory.
 For instance, issuing

From feaa0de02b0b63bab9de763b98b05fed39c22a86 Mon Sep 17 00:00:00 2001
From: Alberto Invernizzi
Date: Wed, 3 Dec 2025 12:15:30 +0100
Subject: [PATCH 3/4] adapt templates to new scripts and small integrations

---
 docs/software/sciviz/paraview.md | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/docs/software/sciviz/paraview.md b/docs/software/sciviz/paraview.md
index 5ad67111..f08eb266 100644
--- a/docs/software/sciviz/paraview.md
+++ b/docs/software/sciviz/paraview.md
@@ -48,22 +48,27 @@
 
 The following sbatch script can be used as a template.
 
+!!! note
+    Before using a uenv, make sure you have pulled the image you are going to use.
+    Refer to the [uenv quick start guide][ref-uenv-using] for more details.
+
 === "GH200"
 
+    !!! note
+        Currently, the best performance is observed with [one MPI rank per GPU][ref-slurm-gh200-single-rank-per-gpu].
+        How to run multiple ranks per GPU is described [here][ref-slurm-gh200-multi-rank-per-gpu].
+
     ```bash
     #SBATCH -N 1
     #SBATCH --ntasks-per-node=4
     #SBATCH --cpus-per-task=72
     #SBATCH --gpus-per-task=1
     #SBATCH -A <account>
-    #SBATCH --uenv=paraview/5.13.2:v2 --view=paraview
+    #SBATCH --uenv=paraview/6.0.1 --view=default
     #SBATCH --hint=nomultithread
 
-    export MPICH_GPU_SUPPORT_ENABLED=0
-
-    srun --cpus-per-task=72 --cpu_bind=sockets /user-environment/ParaView-5.13/gpu_wrapper.sh /user-environment/ParaView-5.13/bin/pvbatch ParaViewPythonScript.py
+    srun --cpus-per-task=72 bind-gpu-vtk-egl pvbatch your-paraview-python-script.py
     ```
 
-    Current observation is that best performance is achieved using [one MPI rank per GPU][ref-slurm-gh200-single-rank-per-gpu]. How to run multiple ranks per GPU is described [here][ref-slurm-gh200-multi-rank-per-gpu].
-
 === "Eiger"
 
     ```bash
     #SBATCH -N 1
     #SBATCH --ntasks-per-node=128
     #SBATCH -A <account>
-    #SBATCH --uenv=paraview/5.13.2:v2 --view=paraview
+    #SBATCH --uenv=paraview/6.0.1 --view=default
     #SBATCH --hint=nomultithread
 
-    srun --cpus-per-task=128 /user-environment/ParaView-5.13/bin/pvbatch ParaViewPythonScript.py
+    srun --cpus-per-task=128 pvbatch your-paraview-python-script.py
     ```
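+
+In the templates above, `your-paraview-python-script.py` stands for any ParaView Python script.
+If you do not have one at hand, a minimal sketch like the following can serve as a smoke test
+(it is not part of the CSCS helper scripts; the output file name and resolution are arbitrary):
+
+```python
+# minimal pvbatch smoke test: render a sphere off-screen and save a screenshot
+from paraview.simple import (
+    GetActiveViewOrCreate,
+    Render,
+    ResetCamera,
+    SaveScreenshot,
+    Show,
+    Sphere,
+)
+
+sphere = Sphere(ThetaResolution=64, PhiResolution=64)  # simple synthetic source
+view = GetActiveViewOrCreate('RenderView')  # off-screen render view under pvbatch
+Show(sphere, view)
+ResetCamera(view)
+Render(view)
+SaveScreenshot('sphere.png', view, ImageResolution=[1920, 1080])
+```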
 
-
 ## Using ParaView in client-server mode
 
 A ParaView server can connect to a remote ParaView client installed on your desktop. Make sure to use the same version on both sides. Your local ParaView GUI client needs to create a SLURM job with appropriate parameters. We recommend that you make a copy of the file `/user-environment/ParaView-5.13/rc-submit-pvserver.sh` to your $HOME, such that you can further fine-tune it.

From 2134a7b5c98ddb9b135751fa71187035439eb20c Mon Sep 17 00:00:00 2001
From: Alberto Invernizzi
Date: Wed, 3 Dec 2025 12:15:57 +0100
Subject: [PATCH 4/4] draft section about manual command without xml

---
 docs/software/sciviz/paraview.md | 36 +++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/docs/software/sciviz/paraview.md b/docs/software/sciviz/paraview.md
index f08eb266..e905cdc1 100644
--- a/docs/software/sciviz/paraview.md
+++ b/docs/software/sciviz/paraview.md
@@ -84,7 +84,41 @@
 
 ## Using ParaView in client-server mode
 
-A ParaView server can connect to a remote ParaView client installed on your desktop. Make sure to use the same version on both sides. Your local ParaView GUI client needs to create a SLURM job with appropriate parameters. We recommend that you make a copy of the file `/user-environment/ParaView-5.13/rc-submit-pvserver.sh` to your $HOME, such that you can further fine-tune it.
+A ParaView server can connect to a remote ParaView client installed on your workstation.
+
+!!! note
+    Make sure to use the same version on both sides.
+
+Your local ParaView client needs to create a SLURM job with appropriate parameters.
+
+### Manual command
+
+The most basic and versatile way to connect is to create a new "reverse connection" configuration
+and specify a command that looks like this:
+
+```bash
+ssh -R 2222:localhost:2222 daint.cscs.ch -- paraview-reverse-connect paraview/6.0.1 2222 -N1 -n4 --gpus-per-task=1
+```
+
+Let's split it up and look at the various parts.
+
+The full command consists of two sections, separated by "`--`":
+
+- `ssh -R 2222:localhost:2222 daint.cscs.ch`
+- `paraview-reverse-connect paraview/6.0.1 2222 -N1 -n4 --gpus-per-task=1`
+
+The former `ssh` command runs locally on your workstation and specifies how to connect to Alps via SSH.
+You should use whatever options you normally use to connect to Alps.
+The only important part is `-R <port>:localhost:<port>`, which is responsible for forwarding
+the specified port from Alps to your local workstation.
+
+The latter `paraview-reverse-connect` command runs on the Alps login node and starts a SLURM job, which runs ParaView
+`pvserver` instances on compute nodes; these connect back to the ParaView GUI on your workstation.
+Its first two arguments must be the [uenv image label][ref-uenv-labels] and the port you are forwarding via SSH (it must match the port passed to `-R`).
+After those two mandatory arguments, you can optionally specify any `srun` option you need, giving you full control over the allocation request.
+
+### GUI
+
 You will need to add the corresponding XML code to your local ParaView installation, such that the Connect menu entry recognizes the ALPS cluster.
 The following code would be added to your **local** `$HOME/.config/ParaView/servers.pvsc` file