Remote VNC Connection Setup to Access Linux Desktop Apps (Fluent 2020 R2)
ON-DEMAND USERS:
Log in to wendian-ondemand.mines.edu or mio-ondemand.mines.edu to skip many of these steps. For single-node users, starting Fluent from Workbench and selecting the total number of cores that you received is all that is needed! MULTI-NODE sessions will require a Terminal (upper left: Applications -> "Terminal Emulator"). Load the Ansys module (see Step 1) and skip to PRE-STEP 7 below under "FLUENT USING MULTIPLE NODES."
TERMINAL ONLY USERS:
This guide uses the TurboVNC vncserver installed on the Mines HPC platforms to access a full graphical Linux desktop environment. The Ansys Fluent 2020 R2 GUI is started as an example. Ansys Workbench and Ansys Electronics Desktop are also available. For Ansys Fluent version 2021 and above, consider using the Ansys Remote Visualization Client.
Software Requirements
Install PuTTY or MobaXterm on Windows, and TurboVNC (www.turbovnc.org) or your favorite VNC viewer. This tutorial uses PuTTY SSH tunnel settings; for MobaXterm, reference the other access guides for SSH tunnel setup. You can also use the macOS Terminal and the Screen Sharing app, which uses the VNC protocol.
Mio:
- module load apps/ansys/202
For newer versions of Fluent use "module avail" to see the full list, or use the directions in the Ansys Remote Visualization Client Startup Guide.
- module load utility/standard/turbovnc/2.2.6
Wendian:
- module load apps/ansys/202
For newer versions of Fluent use "module avail" to see the full list, or use the directions in the Ansys Remote Visualization Client Startup Guide.
- module load utility/standard/turbovnc/2.2.4
Wendian:
[joeuser@wendian001 ~]$ salloc -N1 --exclusive
OR
[joeuser@wendian002 ~]$ salloc -N1 --exclusive -p aun
salloc: Pending job allocation 12109017
salloc: job 12109015 queued and waiting for resources
salloc: Granted job allocation 12109017
salloc: Waiting for resource configuration
salloc: Nodes compute031 are ready for job
[joeuser@c031 ~]$
OR on AuN partition:
[joeuser@node140 ~]$
Mio:
[joeuser@mio001 ~]$ srun -N1 --exclusive --pty bash
You can add the partition flag "--partition hpc" for student-only nodes or for your group's own node partition.
You can add a time limit for the job with the flag "--time=D-HH:MM:SS".
cpu-bind=MASK - compute031, task 0 0 [29841]: mask 0x1 set
[joeuser@compute031 ~]$
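As a side note on the "--time=D-HH:MM:SS" format mentioned above, the sketch below shows how such a walltime string maps to seconds; the variable names and the example request are purely illustrative, not part of Slurm:

```shell
# Convert a Slurm-style walltime "D-HH:MM:SS" to seconds (illustrative only).
walltime="1-02:30:00"            # hypothetical request: 1 day, 2 hours, 30 minutes
days=${walltime%%-*}             # "1"
hms=${walltime#*-}               # "02:30:00"
IFS=: read -r h m s <<< "$hms"
# 10# forces base-10 so leading zeros are not read as octal
total_seconds=$(( (days * 24 + 10#$h) * 3600 + 10#$m * 60 + 10#$s ))
echo "$total_seconds"            # 95400
```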
Step 1: Start an Interactive Session:
Start PowerShell and log in to Mio, and then start an interactive session. For example:
PS C:\Users\username> ssh username@mio.mines.edu
Once the job has been granted, a new shell prompt is started on the compute node. In this case, this is compute031 on Mio.
Or it could be on AuN, in which case the node name is "node140".
Step 2: Load modules and start the VNC server
Load the TurboVNC module and the application module for your HPC platform.
The first time you start the vncserver you will be prompted to create a password for when you connect with the VNC viewer. Enter a password and re-enter to verify.
[joeuser@compute031 ~]$ module load apps/ansys/202
OR on Wendian:
[joeuser@c031 ~]$ module load apps/ansys/v202
[joeuser@compute031 ~]$ module load utility/standard/turbovnc/2.2.6
OR on Wendian:
[joeuser@c031 ~]$ module load utility/standard/turbovnc/2.2.4
[joeuser@compute031 ~]$ vncserver
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
You will require a password to access your desktops.
Password:
Verify:
New 'compute031:1 (joeuser)' desktop is compute031:1
Starting applications specified in /u/aa/bb/joeuser/.vnc/xstartup
Log file is /u/aa/bb/joeuser/.vnc/compute031:1.log
[joeuser@compute031 ~]$
Step 3: Identify the node where the VNC server started and the screen number
In this case, the node is compute031 and the screen is :1. Screen port numbers start at 5900 for :0 and increase; in this case, the port number will be 5901 for screen :1.
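The display-to-port rule above can be sketched in the shell; the display number is taken from the "compute031:1" example and the variable names are illustrative:

```shell
# VNC convention: TCP port = 5900 + display number.
display=1                  # from the "compute031:1" desktop name
vnc_port=$((5900 + display))
echo "$vnc_port"           # prints 5901
```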
Some useful commands:
To stop a running server use 'vncserver -kill :1' for screen :1. To list running servers use 'vncserver -list'.
If you forget your password, you can delete the passwd file in the .vnc/ directory in your home directory.
Step 4: Setting up a SSH Tunnel
An SSH tunnel makes it possible to securely connect from a port on your machine, through the SSH protocol, to the remote compute node on the HPC platform. On Windows, use either MobaXterm or PuTTY; on Linux, macOS, or Windows PowerShell, use the terminal. Follow one of the instructions below.
SSH Tunneling Setup on Linux, macOS, or Windows PowerShell using a Terminal
From a new terminal session on your computer (if you're using Linux, pick a different number for the first port, as 5901 is reserved for your currently running X session):
[username@MyComputer ~] % ssh -L 5901:compute031:5901 username@mio.mines.edu
SSH Tunnel creation on Windows using MobaXterm
See the ParaView connection guides under Step 2 A-C for creating a MobaXterm SSH tunnel. Use port 5901 for both the client- and server-side port numbers.
SSH Tunnel creation on Windows using Putty
Step 4a:
Right Click on the window header to bring up the settings menu and select “Change Settings…”
Step 4b:
Go to "Connection" in the tree menu under Category, and select "SSH" and then "Tunnels." Enter 5901 as the "Source Port" and compute031:5901 as the "Destination" (the "01" in the port refers to the number following the colon after the node name, in this case "compute031:1").
Step 4c:
A successfully added SSH tunnel will be listed in the PuTTY session after clicking "Apply". Every time you start a new GUI job you will need to set up a tunnel. Then close the "PuTTY Reconfiguration" window.
Step 5: Start the VNC Client on your Machine
On Windows, download and install TurboVNC or another VNC viewer of your choice. For macOS you can use "Screen Sharing", a VNC client installed with macOS (use Command+Space to search with Spotlight).
Starting the VNC Client on Windows
Step 5a: Connecting through the SSH Tunnel
Using TurboVNC on Windows, connect through your SSH tunnel at localhost with the port number listed first in the PuTTY SSH tunnel configuration. In this case, this is "localhost:5901", or, for short, screen one at "localhost:1".
Step 5b: Enter your VNC Password
Enter the password from when you set up the vncserver (or delete the file .vnc/passwd in your home directory and restart the vncserver).
macOS VNC Client "Screen Sharing"
You can also use the native macOS VNC viewer found at "/System/Library/CoreServices/Applications/Screen Sharing.app", or use Command+Space to search for "Screen Sharing" with Spotlight.
Create a shortcut and move it to your Applications folder.
Step 6: Opening the “Terminal Emulator” on the Linux Desktop
You are now connected to the compute node through localhost and the encrypted SSH tunnel. The generic Linux desktop window manager will open. Select the "Terminal Emulator" program from the "Applications Menu".
Step 7: Start your GUI application
At the command prompt in the Terminal window, start Fluent either in serial mode or with the number of cores on your compute node.
For MULTI-NODE runs you must follow the steps in the next section to inform Fluent of the connections and host names.
[joeuser@compute031 ~]$ fluent 3ddp
/sw/ansys_inc/v182/fluent/fluent18.2.0/bin/fluent -r18.2.0 3ddp /sw/ansys_inc/v182/fluent/fluent18.2.0/cortex/lnamd64/cortex18.2.0 -f fluent (fluent "3ddp -pshmem -host -alnamd64 -r18.2.0 -t1 -mpi=ibmmpi -path/sw/ansys_inc/v182/fluent -ssh")
Or, for a parallel run on this single node, specify the number of cores with the -t option:
[joeuser@compute031 ~]$ fluent 3ddp -t8
/sw/ansys_inc/v182/fluent/fluent18.2.0/bin/fluent -r18.2.0 3ddp -t8 /sw/ansys_inc/v182/fluent/fluent18.2.0/cortex/lnamd64/cortex18.2.0 -f fluent (fluent "3ddp -pshmem -host -alnamd64 -r18.2.0 -t8 -mpi=ibmmpi -path/sw/ansys_inc/v182/fluent -ssh")
Fluent using Multiple Nodes
In step 1, request an interactive session with multiple nodes.
[joeuser@mio001 ~]$ srun -N2 --exclusive --partition <name of your group of nodes> --pty bash
Or on Wendian using Slurm's salloc command:
[joeuser@wendian ~]$ salloc -N2 --exclusive --partition <if desired use aun>
And continue through steps 2-6.
pre-Step 7: Preparing the Fluent Node Connection List
Fluent uses a file with a list of hostnames, one line per CPU core that will run an MPI Fluent process. The utility "expands" was written to produce this file on Mines HPC platforms; pass the Slurm nodelist variable as the argument and direct the output to a file. Then count the lines using "wc -l" to know the number of CPU cores available in your Slurm job.
pre-Step 7a: Expands Utility
On Mio, run the expands utility:
[joeuser@compute065 ~]$ /opt/utility/expands $SLURM_NODELIST > nodes
On Wendian, run the expands utility:
[joeuser@c021 ~]$ /sw/utility/local/expands $SLURM_NODELIST > nodes
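For reference, here is a rough sketch of what an expands-like step produces. This is not the actual expands source; the hostnames and core count are made up. It only illustrates expanding a compressed Slurm-style nodelist such as "compute[031-032]" into one hostname per core, one per line:

```shell
# Illustrative stand-in for expands (NOT the real utility): expand a simple
# "prefix[start-end]" nodelist, repeating each hostname once per core.
expand_nodes() {
  nodelist=$1
  cores_per_node=$2
  prefix=${nodelist%%\[*}          # text before the bracket, e.g. "compute"
  range=${nodelist#*\[}            # "031-032]"
  range=${range%]}                 # "031-032"
  start=${range%-*}
  end=${range#*-}
  for n in $(seq -w "$start" "$end"); do   # -w keeps the zero padding
    i=0
    while [ "$i" -lt "$cores_per_node" ]; do
      echo "${prefix}${n}"
      i=$((i + 1))
    done
  done
}

expand_nodes "compute[031-032]" 2   # two hypothetical cores per node
```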
pre-Step 7b: CPU Core Count
Use the Linux utility "wc -l" to count the number of lines in the file you created. The "cat" command outputs the file contents, which are then piped to the "wc" utility. The result is the total number of lines in the file, in this case, 24.
[joeuser@XXXX ~]$ cat nodes | wc -l
24
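To try the same count on a small scale, here is a throwaway example; the file name and hostnames are made up:

```shell
# Build a tiny example nodes file (one line per core) and count its lines.
printf '%s\n' compute031 compute031 compute032 compute032 > nodes_example
cat nodes_example | wc -l    # prints 4
```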
Multi-node Step 7: Starting the Fluent Application
The Fluent application is now started with this CPU core count and the flag "-cnf=nodes", as follows.
[joeuser@XXXX ~]$ fluent 3ddp -t24 -cnf=nodes
Successfully started the Fluent GUI
Using a VNC connection through an SSH tunnel and running Fluent across multiple nodes on the Mines HPC platform Mio.