CMAQ MPI and undefined symbols from NetCDF Fortran library

I’ve been running CMAQ with MPI and found that it ignores the MPI environment completely. My best guess is that IOAPI was compiled with the default gcc, without MPI support.

When I try to compile IOAPI after setting all the NETCDF libraries and includes, I still get these errors:

cd /tmp/yul18051/spack-stage/ioapi-3.2-br5cduslu7xmyrhtbi46yn2scnekzkvu/spack-src/Linux2_x86_64gfortmpi; mpif90 -I/tmp/yul18051/spack-stage/ioapi-3.2-br5cduslu7xmyrhtbi46yn2scnekzkvu/spack-src/ioapi -I/tmp/yul18051/spack-stage/ioapi-3.2-br5cduslu7xmyrhtbi46yn2scnekzkvu/spack-src/Linux2_x86_64gfortmpi  -DAUTO_ARRAYS=1 -DF90=1 -DFLDMN=1 -DFSTR_L=int -DIOAPI_NO_STDOUT=1 -DNEED_ARGS=1  -O3 -ffast-math -funroll-loops -m64   -DAUTO_ARRAYS=1 -DF90=1 -DFLDMN=1 -DFSTR_L=int -DIOAPI_NO_STDOUT=1 -DNEED_ARGS=1 -I/home/yul18051/CMAQ/master/spack/opt/spack/linux-rhel6-x86_64/gcc-9.1.0/netcdf-4.7.0-6o5a4a3pvuaulda4ii43imluhsfcz5in/include -I/home/yul18051/CMAQ/master/spack/opt/spack/linux-rhel6-x86_64/gcc-9.1.0/netcdf-fortran-4.4.5-26wtc3i7bocvtu5lyrteak4fhg6uqaat/include -c /tmp/yul18051/spack-stage/ioapi-3.2-br5cduslu7xmyrhtbi46yn2scnekzkvu/spack-src/m3tools/airs2m3.f
cd /tmp/yul18051/spack-stage/ioapi-3.2-br5cduslu7xmyrhtbi46yn2scnekzkvu/spack-src/Linux2_x86_64gfortmpi; mpif90  airs2m3.o -L/tmp/yul18051/spack-stage/ioapi-3.2-br5cduslu7xmyrhtbi46yn2scnekzkvu/spack-src/Linux2_x86_64gfortmpi -lioapi -L/home/yul18051/CMAQ/master/spack/opt/spack/linux-rhel6-x86_64/gcc-9.1.0/netcdf-4.7.0-6o5a4a3pvuaulda4ii43imluhsfcz5in/lib -lnetcdf -L/home/yul18051/CMAQ/master/spack/opt/spack/linux-rhel6-x86_64/gcc-9.1.0/netcdf-fortran-4.4.5-26wtc3i7bocvtu5lyrteak4fhg6uqaat/lib -lnetcdff -fopenmp -dynamic -L/usr/lib64 -lm -lpthread -lc  -o airs2m3
/tmp/yul18051/spack-stage/ioapi-3.2-br5cduslu7xmyrhtbi46yn2scnekzkvu/spack-src/Linux2_x86_64gfortmpi/libioapi.a(init3.o): In function `init3_':
init3.F90:(.text+0x2f5): undefined reference to `nfmpi_inq_libvers_'
/tmp/yul18051/spack-stage/ioapi-3.2-br5cduslu7xmyrhtbi46yn2scnekzkvu/spack-src/Linux2_x86_64gfortmpi/libioapi.a(open3.o): In function `open3_':
open3.F90:(.text+0x12fb): undefined reference to `nfmpi_close_'
/tmp/yul18051/spack-stage/ioapi-3.2-br5cduslu7xmyrhtbi46yn2scnekzkvu/spack-src/Linux2_x86_64gfortmpi/libioapi.a(opnlog3.o): In function `opnlog3_':
opnlog3.F90:(.text+0x18af): undefined reference to `nfmpi_get_att_text_'
/tmp/yul18051/spack-stage/ioapi-3.2-br5cduslu7xmyrhtbi46yn2scnekzkvu/spack-src/Linux2_x86_64gfortmpi/libioapi.a(pn_crtfil3.o): In function `pn_crtfil3_':
pn_crtfil3.F90:(.text+0x761): undefined reference to `nfmpi_enddef_'
pn_crtfil3.F90:(.text+0x80b): undefined reference to `nfmpi_create_'
pn_crtfil3.F90:(.text+0x898): undefined reference to `nfmpi_close_'
pn_crtfil3.F90:(.text+0x9c9): undefined reference to `nfmpi_put_att_text_'
pn_crtfil3.F90:(.text+0xa5e): undefined reference to `nfmpi_put_att_text_'
pn_crtfil3.F90:(.text+0xadb): undefined reference to `nfmpi_put_att_int1_'
...

The only similar post I found was "CMAQ v5.3 Compiling Errors", but my issue is not related to OpenMP.
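
In case it is useful, one way to list which symbols the static library still expects (run from the spack-src directory shown in the log; nm marks undefined symbols with U) is:

nm Linux2_x86_64gfortmpi/libioapi.a | grep ' U nfmpi_'

All of the unresolved references carry the nfmpi_ prefix.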

Here are my full build and environment logs:

You do not need to build ioapi with MPI in order to run CMAQ in parallel.
Building ioapi with MPI support is only necessary if you want to use parallel I/O.
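
As an aside, the nfmpi_* routines in your link errors are provided by PnetCDF (parallel netCDF), not by the netCDF-Fortran library, so if you do decide to build the mpi BIN type, the m3tools link step also needs a PnetCDF library. A rough sketch of what the link line would look like (${OBJDIR}, ${PNETCDF}, ${NETCDF_F}, and ${NETCDF_C} are placeholders for your build directory and installations):

mpif90 airs2m3.o -L${OBJDIR} -lioapi \
    -L${PNETCDF}/lib -lpnetcdf \
    -L${NETCDF_F}/lib -lnetcdff -L${NETCDF_C}/lib -lnetcdf \
    -fopenmp -o airs2m3

The key point is that -lpnetcdf appears after -lioapi, since it is libioapi.a that references the nfmpi_ symbols.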

To run CMAQ in parallel you need to be sure that:

  1. CMAQ was compiled to support MPI:

     set ParOpt                             #> uncomment to build a multiple processor (MPI) executable;

  2. You are using the mpirun command in the run script. Look for the following settings:

     set PROC      = mpi               #> serial or mpi

     #> Executable call for multi PE, configure for your system
     # set MPI = /usr/local/intel/impi/3.2.2.006/bin64
     # set MPIRUN = $MPI/mpirun
     ( /usr/bin/time -p mpirun -np $NPROCS $BLD/$EXEC ) |& tee buff_${EXECUTION_ID}.txt

  3. The next step would be to determine whether there is a batch job management system such as SLURM or LSF.
     If that is the case, you need to add settings to the top of your CMAQ run script to allow your job to run in parallel with the batch queue system.

Here is an example of a SLURM job header that would run CMAQ on 128 processors:

#!/bin/csh -f
#SBATCH -t 4:00:00
#SBATCH --mem=100000
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=32
#SBATCH -J CMAQ_Bench
##SBATCH -p debug_queue
##SBATCH -p knl
#SBATCH -p 528_queue
#SBATCH --exclusive
# usage: linux command line > sbatch run_cctm_Bench_2016_12SE1.csh
# 

# ===================== CCTMv5.3 Run Script ========================= 

Thanks for your suggestions! I confirmed that I have 1, 2, and 3 all set correctly. I added the -v option to mpirun to get additional debug output, and I get messages like this from OpenMPI version 3:

...
--------------------------------------------------------------------------
There are not enough slots available in the system to satisfy the 24
slots that were requested by the application:

  /home/yul18051/CMAQ_Project/CCTM/scripts/BLD_CCTM_v53_gcc/CCTM_v53.exe

Either request fewer slots for your application, or make more slots
available for use.

A "slot" is the Open MPI term for an allocatable unit where we can
launch a process.  The number of slots available are defined by the
environment in which Open MPI processes are run:

  1. Hostfile, via "slots=N" clauses (N defaults to number of
     processor cores if not provided)
  2. The --host command line parameter, via a ":N" suffix on the
     hostname (N defaults to 1 if not provided)
  3. Resource manager (e.g., SLURM, PBS/Torque, LSF, etc.)
  4. If none of a hostfile, the --host command line parameter, or an
     RM is present, Open MPI defaults to the number of processor cores

In all the above cases, if you want Open MPI to default to the number
of hardware threads instead of the number of processor cores, use the
--use-hwthread-cpus option.

Alternatively, you can use the --oversubscribe option to ignore the
number of available slots when deciding the number of processes to
launch.
--------------------------------------------------------------------------
real 0.63
user 0.06
sys 0.07
...

The OpenMPI bindings are not able to detect the MPI environment created by SLURM in this instance. I’m fairly certain the SLURM side of things is configured correctly because I administer the HPC cluster and other applications don’t have this issue.
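
One way to check whether the Open MPI build itself has its SLURM components compiled in (in case it helps anyone else) is to list them with ompi_info:

ompi_info | grep -i slurm

If no ras/plm slurm components show up, that particular Open MPI build was configured without SLURM integration.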

Can you please confirm which MPI implementation you are using for CMAQ on your system, so that we can better mirror your setup? We have some similar software that works well with Intel MPI (which, I believe, is based on MPICH under the covers), and I was thinking of recompiling the CMAQ dependency chain with Intel MPI or another MPI implementation that supports our InfiniBand interconnect.
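
For reference, a quick way to see what a given wrapper compiler is built around is to ask it to print its underlying command line; Open MPI's wrappers use --showme and MPICH-derived implementations (including Intel MPI) use -show:

mpif90 --showme    # Open MPI: prints the real compiler plus include/link flags
mpif90 -show       # MPICH / Intel MPI equivalent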

Are you using a workload manager, such as SLURM, on your computer?
The default script is set up to run on 32 processors. On our system, we then need to add commands for the SLURM batch job system to the top of the script, for example:

#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16

Then submit the job using
sbatch run_cctm_Bench_2016_12SE1.csh

If you have SLURM on your system, you can learn more about how to use sbatch with the command:

man sbatch

If you are not using SLURM, you can check whether LSF is available.
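
If it turns out you have LSF instead, the equivalent of the SLURM header above is a set of #BSUB directives; the queue name, tasks-per-node value, and file names below are placeholders for whatever your site uses:

#!/bin/csh -f
#BSUB -J CMAQ_Bench              # job name
#BSUB -n 32                      # total number of MPI tasks
#BSUB -R "span[ptile=16]"        # tasks per node
#BSUB -W 4:00                    # wall-clock limit (hours:minutes)
#BSUB -q normal                  # queue name (site specific)
#BSUB -o cmaq_%J.log             # stdout/stderr file
# usage: linux command line > bsub < run_cctm_Bench_2016_12SE1.csh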

If you don’t have a workload manager, you can try reducing the number of processors used in the job script to 8 by changing @NPCOL and @NPROW in the following section of the run script:

#> Horizontal domain decomposition
if ( $PROC == serial ) then
   setenv NPCOL_NPROW "1 1"; set NPROCS = 1 # single processor setting
else
   @ NPCOL = 8; @ NPROW = 4
   @ NPROCS = $NPCOL * $NPROW
   setenv NPCOL_NPROW "$NPCOL $NPROW";
endif

to
@ NPCOL = 4; @ NPROW = 2

…and

% setenv <name> MPI:<path>

for each of the logical names of the files for which you want to do distributed I/O.
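
For example (the file name and variables below are only illustrative; use whichever output files you want distributed), in the CCTM run script this might look like:

# hypothetical example: the MPI: prefix asks the I/O API to use PnetCDF distributed I/O for this file
setenv CTM_CONC_1 MPI:${OUTDIR}/CCTM_CONC_${RUNID}.nc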

The issue is that when a job spans multiple nodes, instead of paying attention to the number of MPI tasks specified by either mpirun or srun, Open MPI treats only the number of CPUs allocated on the first node as the available number of MPI tasks. This causes the red-herring message about "There are not enough slots available in the system..."
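
For reference, one workaround is to describe the allocation to mpirun explicitly with a hostfile (the node names and slot counts below are placeholders):

# hosts.txt -- list each allocated node with its slot count
node01 slots=12
node02 slots=12

mpirun -np 24 --hostfile hosts.txt $BLD/$EXEC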

I’ve given up on trying to use OpenMPI and am instead trying Intel MPI, which provides much better diagnostic information and is, in general, more popular on our cluster.

Hi,

Please try adding the oversubscribe option to your run script after the mpirun command:
mpirun --oversubscribe -np 32

I found this suggestion at this link: https://github.com/open-mpi/ompi/issues/6020

Liz

Thanks for the advice! The underlying problem was that I needed to use a newer version of OpenMPI.