[ParaView](https://www.paraview.org/) is provided on [ALPS][platforms-on-alps] via [uenv][ref-uenv].
Please have a look at the [uenv documentation][ref-uenv] for more information about uenvs and how to use them.

## One-time setup

CSCS provides helper scripts that are very handy for launching live sessions and batch rendering.

To keep these utilities available from any shell, the simplest approach is to place them in a directory that is part of your
`PATH`. A common convention is to create a personal `~/bin` directory and add it to your `PATH`.

```bash
# create the ~/bin folder and add it to PATH
mkdir -p ~/bin
echo "export PATH=~/bin:$PATH" >> .bashrc

# Then, download the scripts and put them in that folder
cd ~/bin
wget -qO- https://gist.github.com/albestro/67728336bb3e60f6a3c64471b1893d66/archive/main.tar.gz | tar xzf - --strip-components=1 --same-permissions
```
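
To verify that the scripts were downloaded and are executable, you can list the directory:

```bash
ls -l ~/bin
```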

!!! warning "reload to make changes effective"

    Changes to your `.bashrc` require a reload of your shell to become effective.

    Hence, you need to log out and log back in (or run `source ~/.bashrc` in your current shell) before you can use the scripts you just installed.

In a new shell, you can then verify that the scripts are available as commands from any directory.
For instance, running

```bash
paraview-reverse-connect
```

should print some basic instructions on how to use the command.

## Running ParaView in batch mode with Python scripts

The following sbatch script can be used as a template.

!!! note
    Before using a uenv, make sure you have pulled the image you are going to use.
    Refer to the [uenv quick start guide][ref-uenv-using] for more details.
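
For example, to check which ParaView images are available and pull the one you need (assuming `paraview/6.0.1` is deployed for your cluster):

```bash
# list the ParaView uenv images available on this cluster
uenv image find paraview

# pull the image you intend to use
uenv image pull paraview/6.0.1
```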

=== "GH200"

!!! note
    In our tests so far, best performance is achieved using [one MPI rank per GPU][ref-slurm-gh200-single-rank-per-gpu].
    Running multiple ranks per GPU is described [here][ref-slurm-gh200-multi-rank-per-gpu].

```bash
#!/bin/bash -l
#SBATCH -N 1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=72
#SBATCH --gpus-per-task=1
#SBATCH -A <account>
#SBATCH --uenv=paraview/6.0.1 --view=default
#SBATCH --hint=nomultithread

export MPICH_GPU_SUPPORT_ENABLED=0

srun --cpus-per-task=72 bind-gpu-vtk-egl pvbatch your-paraview-python-script.py
```

=== "Eiger"

```bash
#!/bin/bash -l
#SBATCH -N 1
#SBATCH --ntasks-per-node=128
#SBATCH -A <account>
#SBATCH --uenv=paraview/6.0.1 --view=default
#SBATCH --hint=nomultithread

srun --cpus-per-task=128 pvbatch your-paraview-python-script.py
```


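As a starting point for `your-paraview-python-script.py`, the following is a minimal sketch of a `pvbatch` script (the scene and file names are illustrative, not part of the CSCS setup): it builds a simple scene and saves the rendered image to disk.

```python
# Minimal pvbatch script: render a sphere and save a screenshot.
from paraview.simple import *

# create a data source and show it in a render view
sphere = Sphere(ThetaResolution=64, PhiResolution=64)
Show(sphere)

view = GetActiveViewOrCreate('RenderView')
ResetCamera(view)

# render off-screen and write the image to disk
SaveScreenshot('sphere.png', view, ImageResolution=[1920, 1080])
```

Submit the job with `sbatch` as usual; the image is written to the submission directory.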
## Using ParaView in client-server mode

A ParaView server can connect to a remote ParaView client installed on your workstation.

!!! note
Make sure to use the same version on both sides.

Your local ParaView client needs to create a SLURM job with appropriate parameters.

### Manual command

The most basic and versatile way to connect is to create a new server configuration for a "reverse connection"
and to specify a command that looks like this:

```bash
ssh -R 2222:localhost:2222 daint.cscs.ch -- paraview-reverse-connect paraview/6.0.1 2222 -N1 -n4 --gpus-per-task=1
```

Let's split the command and examine its parts.

The full command consists of two sections separated by "`--`":

- `ssh -R 2222:localhost:2222 daint.cscs.ch`
- `paraview-reverse-connect paraview/6.0.1 2222 -N1 -n4 --gpus-per-task=1`

The former `ssh` command runs locally on your workstation and specifies how to connect to Alps via SSH.
Use whatever options you normally use to connect to Alps.
The only important part is `-R <PORT>:localhost:<PORT>`, which is responsible for forwarding
the specified port from Alps to your local workstation.


The latter `paraview-reverse-connect` command runs on the Alps login node and starts a SLURM job which launches ParaView
`pvserver` instances on compute nodes; these in turn connect back to the ParaView client on your workstation.
Its first two arguments are the [uenv image label][ref-uenv-labels] and the port being forwarded via SSH (this must match the port in the `ssh -R` option).
After these two mandatory arguments, you can optionally pass any srun options you need, giving you full control over the allocation request.
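
For example, a CPU-only session on Eiger with 32 `pvserver` ranks might look like this (illustrative; adapt the host and srun options to your needs):

```bash
ssh -R 2222:localhost:2222 eiger.cscs.ch -- paraview-reverse-connect paraview/6.0.1 2222 -N1 -n32
```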

### GUI

You will need to add the corresponding XML configuration to your local ParaView installation, so that the Connect dialog recognizes the Alps cluster. The code would be added to your **local** `$HOME/.config/ParaView/servers.pvsc` file.
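
The exact contents depend on your cluster and allocation; a minimal sketch of such an entry, assuming the reverse-connection command shown above and port `2222`, might look like this:

```xml
<Servers>
  <Server name="Alps" resource="csrc://localhost:2222">
    <CommandStartup>
      <!-- runs the ssh reverse-connect command when you press Connect -->
      <Command exec="ssh" timeout="0" delay="0">
        <Arguments>
          <Argument value="-R"/>
          <Argument value="2222:localhost:2222"/>
          <Argument value="daint.cscs.ch"/>
          <Argument value="--"/>
          <Argument value="paraview-reverse-connect"/>
          <Argument value="paraview/6.0.1"/>
          <Argument value="2222"/>
          <Argument value="-N1"/>
          <Argument value="-n4"/>
          <Argument value="--gpus-per-task=1"/>
        </Arguments>
      </Command>
    </CommandStartup>
  </Server>
</Servers>
```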
