Encyclopaedia Index

Installation of PHOENICS 2015

6 Parallel PHOENICS

6.1 Introduction

This chapter describes the installation of MPICH.NT for users who have purchased a licence for running parallel PHOENICS on machines running Windows 2000/XP Professional, Vista, Windows 7 or Server 2003. It applies to multi-processor machines and to clusters of single- or multi-processor machines.

The purpose of parallel PHOENICS is to enable larger models to be run faster; the minimum recommended configuration for each processor is therefore a PC running at 2GHz with 1GB of RAM. When running parallel PHOENICS across a cluster, MPICH.NT requires the ability to make TCP/IP socket connections between all hosts. The network cards and switch used to connect the PCs in the cluster should run at 100Mbps or more. For efficient parallel operation the processors should all be of equal performance: if they are not, the loads will be unbalanced and the slowest processor will determine the speed of the system.

6.2 Preliminary Considerations

Before starting the installation, make sure that each PC in the cluster belongs to the same Workgroup or Domain. To check this, go to the Control Panel, open the System Properties dialog and look at the 'Computer Name' page (see figure below):

Here the computer name is CHAM-CFD1 and it belongs to the Workgroup PARALL. You may choose the name of the workgroup, but all the PCs which are to be in the cluster must belong to the same one. Use the ‘Change…’ button to reset the Workgroup of any PCs where necessary.

It is necessary to install MPICH.NT on each PC in the cluster. To do this the user will need to be logged in using an account with Administrator permissions. This need not be the account from which PHOENICS is run.

Windows XP has the personal firewall switched on by default. Experienced users may configure the firewall settings to allow parallel PHOENICS to run successfully, but while installing, and until it is certain that MPICH has been configured correctly, it is recommended that any personal firewalls are turned off. For XP, open the Windows Firewall icon on the Control Panel, choose the 'Off (not recommended)' option, then click OK. Once MPICH has been confirmed to be working correctly, the firewall can of course be switched back on.

If the master PC has difficulty seeing one or more of the slave PCs, and you are using static IP addresses in your cluster, then you should consider defining the IP addresses in the 'hosts' file on each of the PCs. The 'hosts' file is normally located at C:\Windows\System32\drivers\etc\hosts (the exact path depends on the Windows installation directory).
An example ‘hosts’ file is:

# Copyright (c) 1993-1999 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      rhino.acme.com          # source server
#      x.acme.com              # x client host       localhost
192.168.0.1     cham-cfd1       # example addresses: replace with
192.168.0.2     cham-cfd2       # the actual static IP addresses
192.168.0.3     cham-cfd3       # of your cluster
192.168.0.4     cham-cfd4

In the following sections, the PC from which parallel PHOENICS is launched is known as the master (process) and the other PCs/processes in the cluster will be termed slaves.

6.3 Installation of MPICH.NT

It will be assumed in what follows that PHOENICS has been successfully installed in accordance with the instructions in the earlier chapters of this document. A full PHOENICS installation need only be made on one machine in a parallel cluster (the one from which jobs will be run); however, it is recommended that an installation is made on each PC. If PHOENICS is installed on a single host then it will be necessary to share the \phoenics directory so that all PCs can see it. Each machine in the cluster will also need access to a valid local licence file 'phoenics.lic'.
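If the single-host route is taken, the \phoenics directory can be shared from a Command Prompt as well as through Explorer. A minimal sketch, assuming C:\phoenics as the installation directory and 'phoenics' as the share name (the share name used by the example configuration files later in this chapter):

```shell
rem Run on the master PC from an account with Administrator permissions.
rem Shares C:\phoenics under the name 'phoenics', so that the slave PCs
rem can reach it as \\<master-name>\phoenics.
net share phoenics=C:\phoenics /remark:"PHOENICS installation"
```

Read permission for the Workgroup/Domain User account is sufficient for the shared installation itself; see section 6.5.3 for the permissions needed on working folders.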

On Windows platforms parallel PHOENICS uses MPICH.NT as the message passing interface (MPI) for the communication between the different processors. MPICH.NT is freely available on the Internet, but for compatibility it is important that the installation is made from the MPI files provided with the PHOENICS package. Installation instructions are as follows:

1) In order to run the MPICH.NT installation program you must first be logged onto the PC using an account that has Administrator privileges.

2) Run the MPICH installation program provided, mpich.nt.1.2.5.exe, which is located in the directory \phoenics\d_allpro\d_libs\d_windf\mpi. It is recommended that the user choose all the default options; this will install MPICH within the C:\Program Files directory.

Please note: while it is only necessary for PHOENICS to be installed on a single PC in a cluster, MPICH must be installed on all PCs.

3) The PHOENICS parallel run scripts assume that mpirun.exe is located on the user's PATH on the master and slave PCs. If the default location was chosen, this should be C:\Program Files\MPICH\mpd\bin. The PATH can be set through the System Properties dialog [launched from the Control Panel]. The image below shows the stages in setting the User variable 'path'. Go to the Advanced page and click on the Environment Variables button. If the User variable 'path' does not exist, click on New; otherwise highlight 'path' and click on Edit. Add the necessary path entry for mpirun, separating path entries with semicolons. [To modify the System variable 'path', consult your network administrator.]
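As an alternative to the dialog, the user PATH can be extended from a Command Prompt with setx. A sketch only: setx is not present on a standard XP installation (it ships with Vista and later, or with the XP Support Tools), and the path below assumes the default MPICH location:

```shell
rem Appends the MPICH bin directory to the current user's PATH.
rem The change takes effect in newly opened Command Prompt windows,
rem not in the window from which setx was run.
setx PATH "%PATH%;C:\Program Files\MPICH\mpd\bin"
```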

4) You need to register MPICH on the master PC as follows. Open a Command Prompt window [From the Start menu it is located under Accessories], change to the mpd directory,

> cd c:\Program Files\MPICH\mpd\bin

assuming the default location was chosen, and run the MPICH program MPIRegister.exe. You will be prompted for an account and password, e.g.

> mpiregister.exe
account: cfd1
password: ********
confirm: ********
Do you want this action to be persistent (y/n)? y

If the cluster has been set up within a network Domain (rather than a Workgroup) then in the above you should also specify the domain as part of the account name. For example, if 'phoenics' is the domain and 'cfd1' is the user account within that domain, enter phoenics\cfd1. The response 'y' ensures that this action is persistent, i.e. this registration process does not have to be repeated for each session on this PC.

5) The user may run the MPICH Configuration tool to identify the PCs in the cluster. From the Start menu locate the MPICH menu; the MPICH Configuration tool is available under the mpd submenu. First select the hosts in your cluster, on which you have installed MPICH, using the Add or Select buttons. In this example we have four PCs: CHAM-CFD1, CHAM-CFD2, CHAM-CFD3 and CHAM-CFD4. In column two, tick the final option 'enable -localroot option by default', press the 'yes' button on that row and then press 'Apply'. You may now press 'OK' to close the configuration tool.

6.4 Windows Firewall settings

XP Personal Firewall: with the introduction of XP Service Pack 2, the personal firewall is activated by default. If the firewall is active, then running the PHOENICS solver, earexe, may generate a Windows Firewall Security Alert. When running in parallel mode it is essential that earexe is unblocked, even if you are only using the processors of the host PC. If mpirun is run with an MPI configuration file, instead of the executable program, as the argument, then there will be an additional security alert for the mpirun program. Again, it is essential that this program be unblocked.

With the Windows Firewall, the user may choose to unblock the earexe executable from the security alert dialog above. However, if you are operating across a cluster, this will not be sufficient to enable parallel PHOENICS to run: additional settings are needed on both the master and slave PCs.

For those using Windows XP, open the Windows Firewall icon from the Control Panel, then go to the Exceptions page. On the master PC you will need to use the 'Add Program…' button to add the following programs:

C:\Program Files\MPICH\mpd\bin\mpiconfig.exe
C:\Program Files\MPICH\mpd\bin\mpirun.exe

You may also use the 'Change scope…' button to restrict access to 'My network (subnet) only'.

On each of the slave PCs, you will need to add the program

C:\Program Files\MPICH\mpd\bin\mpd.exe

Users of other personal firewalls will need to unblock the above programs in a manner suitable for their firewall software.

6.5 Running Parallel PHOENICS

6.5.1 Running the standard parallel Earth

The simplest way to launch parallel EARTH is from the VR-Editor, although it can be run from a Command Prompt window.

If a parallel PHOENICS licence has been purchased, an additional sub-menu, 'Parallel Solver', will appear under the 'Run' menu option in the VR-Editor. Once the parallel solver is chosen, a dialog box will appear in which the user can either specify the number of processes to use or specify an MPI configuration file.

The pull-down combo box allows the user to select up to thirty-two processes; users who have more than thirty-two processors on their PC cluster may type the appropriate number into the box. This method does have its limitations, though; it requires that:

  1. each node in the cluster must have been previously identified using the MPICH Configuration tool (see step 5 in section 6.3),
  2. each node has a local copy of Earth. If a full installation of PHOENICS has not been made, then there must be a copy of the Earth executable at \phoenics\d_earth\d_windf\earexe.exe. If a Private Earth is to be used, then this should also be copied to the working directory of each of the slave processes.

6.5.2 Configuration File

The MPI configuration file option gives a more flexible way of launching the parallel solver. Assuming PHOENICS is installed on each PC in the cluster, the following configuration file will use the local earexe.exe to run a single process on each of the four PCs.

exe c:\phoenics\d_earth\d_windf\earexe.exe
hosts
cham-cfd1 1
cham-cfd2 1
cham-cfd3 1
cham-cfd4 1

Example configuration files, config2 and config4, are provided as part of the PHOENICS installation (in directory \phoenics\d_utils\d_windf). The following file 'config4' is for use on a cluster where PHOENICS is installed only on the master PC:

exe \\cham-cfd1\phoenics\d_earth\d_windf\earexe.exe
hosts
cham-cfd1 2
cham-cfd2 2

This file is for use with a run command (such as runcl4) issued on cham-cfd1, with cham-cfd2 as the other machine in the cluster. The first line specifies the executable program 'earexe'; the text 'phoenics' on that line is the shared name of the actual phoenics folder (e.g. c:\phoenics). The lines following the 'hosts' line list the machines that are to be used and the number of processes to run on each - in this case 2. If the executable program is to be different on each host, then a host line may take an optional third parameter, indicating the executable program as it will be seen from the machine in question. Note that the first host machine should be the one on which the executable program is located. If cham-cfd1 and cham-cfd2 were single-processor machines, runcl2, which uses config2, would be used.
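As an illustration of the optional third parameter, the following hypothetical variant runs a locally installed Earth on cham-cfd2 while cham-cfd1 uses the shared copy (the paths are assumed for illustration only):

```
exe \\cham-cfd1\phoenics\d_earth\d_windf\earexe.exe
hosts
cham-cfd1 2
cham-cfd2 2 c:\phoenics\d_earth\d_windf\earexe.exe
```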

Users should create their own configuration and 'run' files, based on the examples provided, tailored to their own installation. These can either be located in \phoenics\d_utils\d_windf or the local working directory.

6.5.3 Cluster Operation

All Nodes in the cluster should belong to the same Workgroup or Domain, and the user should be logged into each Node on the Cluster using the same Workgroup/Domain User account.

PHOENICS must be installed on the Master PC, but installation on the other Nodes (Slave PCs) is optional.

  1. If PHOENICS is only installed on the Master PC then the phoenics folder will need to be shared, with at least Read permissions, for the other Slave PCs in the cluster. The Share name chosen when the folder is shared is the one used in the configuration file; in the example file 'config4' above the shared name is 'phoenics'. In addition, on each Slave PC there must be a folder with the same pathname as that on the Master PC from which PHOENICS has been launched. For example, if on the Master PC the program is run from C:\phoenics\d_priv1, then there must be a folder C:\phoenics\d_priv1 on each of the Slave PCs. This folder must contain a copy of the FLEXlm licence file 'phoenics.lic' (which can be found in C:\phoenics\d_allpro on the Master PC). The Workgroup/Domain User account used to log into each Slave PC must allow write access to this folder C:\phoenics\d_priv1.
  2. If PHOENICS is installed on each Slave PC (in addition to the Master PC) then the Workgroup/Domain User account used to log into each Slave PC must allow read access to all PHOENICS folders, and write access to the folder C:\phoenics\d_priv1.
    For cluster operation it is necessary for MPICH to know which processors to use for the run. This is achieved by means of a configuration file (see section 6.5.2), or by using the MPICH Configuration tool.

6.5.4 Automatic domain decomposition

When using the default automatic domain decomposition, parallel PHOENICS only differs from sequential when Earth is run: problem set-up and post-processing of results can be done in exactly the same way as for the sequential version. A case that has been run in sequential mode can be run in parallel without any changes being made. The output from a parallel PHOENICS simulation will be result and phi files, having the same format as for sequential simulations.

6.5.5 User-specified sub-domains

It is also possible to by-pass the automatic domain decomposition algorithm and to specify how the calculation domain is to be decomposed into sub-domains. This is done by setting the appropriate data-for-solver arrays in the Q1 file.

For example, to split the domain into 8 sub-domains (2 in each direction), the following arrays must be set in the Q1 file:
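From the descriptions below, a decomposition into 2 sub-domains in each direction would take a form along these lines (a PIL sketch; the exact syntax should be checked against the PHOENICS documentation for your version):

```
LG(2) = T
IG(1) = 2
IG(2) = 2
IG(3) = 2
```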


The logical LG(2) instructs the splitter to by-pass the automatic domain decomposition and to split the domain according to the settings defined in the IG array, as follows:

IG(1) specifies the number of sub-domains in the x-direction;
IG(2) specifies the number of sub-domains in the y-direction;
IG(3) specifies the number of sub-domains in the z-direction;

In this case, the domain has been divided into sub-domains according to the settings made in the Q1 file.

6.5.6 Command mode operation

In a Command Prompt window, if the EARTH executable is launched directly, then the sequential solver will be used; to run the parallel solver, the program name ‘earexe’ is used as an argument to mpirun.

A script RUNPAR.BAT [nnodes] is provided. The optional argument [nnodes] indicates the number of processes to be launched on the current PC. The default is to launch two processes.

For example, RUNPAR 2 will execute the MPI command:

mpirun -localroot -np 2 \phoenics\d_earth\d_windf\earexe

If a cluster has been defined with the MPICH Configuration tool, the command will execute on two processors in the cluster; otherwise it will launch both processes on the local machine.

There are also 'run' commands which can be used in conjunction with configuration files; for example, 'runcl4' uses the configuration file 'config4', which lists the PCs and processes to be used (see the Configuration File section above).
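Equivalently, the parallel solver can be launched by hand by passing a configuration file to mpirun, as mentioned in section 6.4. A sketch, assuming the default installation locations:

```shell
rem Launches the parallel solver using the hosts and process counts
rem listed in config4, rather than the -np option.
mpirun -localroot \phoenics\d_utils\d_windf\config4
```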

6.5.7 Testing Parallel PHOENICS

The parallel installation should be tested by loading a library case. The different solver used for parallel operation requires a slight modification to the numerical controls. For example, the user may open the main 'Menu' in the VR-Editor, select 'Numerics' and then 'Iteration control', and change the number of iterations for TEM1 (temperature) from 20 to 300. (Increasing the relaxation for the velocity components, U1 and W1, from 1.0 to 10.0 will also improve performance.) For parallel operation it is recommended that velocities should be solved whole-field (rather than slab-by-slab); this can be done from the VR-Editor (under 'Models', 'Solution control/extra variables') or by direct editing of the Q1 file (setting 'Y' as the third logical in a SOLUTN command).
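For direct editing of the Q1 file, the whole-field setting would look something like the following (a sketch only; the SOLUTN argument list should be checked against the PHOENICS Encyclopaedia entry for SOLUTN before use):

```
SOLUTN(U1,Y,Y,Y,N,N,N)
SOLUTN(V1,Y,Y,Y,N,N,N)
SOLUTN(W1,Y,Y,Y,N,N,N)
```

Here the third logical, set to Y, selects whole-field solution for each velocity component.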

6.6 Further Information

An MPICH user guide is installed in PDF format as part of the installation; it is accessible from the MPICH menu item on the Start menu. An on-line copy of the user guide can also be found on the MPICH web site.