CMAQv5.2.1 compiling issue

While trying to install CMAQv5.2.1, following the user guide section “5.2.4.5.2 Compile the CMAQ programs”, I ran into this problem:
wrf_netcdf_mod.f90(64): error #7012: The module file cannot be read. Its format requires a more recent F90 compiler. [NETCDF]
USE netcdf
------^
wrf_netcdf_mod.f90(126): error #7012: The module file cannot be read. Its format requires a more recent F90 compiler. [NETCDF]
USE netcdf
------^
wrf_netcdf_mod.f90(184): error #7012: The module file cannot be read. Its format requires a more recent F90 compiler. [NETCDF]
USE netcdf
------^
wrf_netcdf_mod.f90(82): error #6404: This name does not have a type, and must have an explicit type. [NF90_INQ_VARID]
rcode = nf90_inq_varid (cdfid, var, id_data)

Can anybody help with this? Thank you.

From the I/O API Troubleshooting document (https://www.cmascenter.org/ioapi/documentation/all_versions/html/ERRORS.html#ccfc):

In general, you are best off if you can build the whole modeling system (libnetcdf.a, libpvm3.a, libioapi.a, and your model(s) CMAQ, SMOKE, etc.) with a common compiler set and common set of compile-flags. When this is not done, there are a number of compatibility issues…

…and this is one of them. Re-build the netCDF with the same compiler and compile-flags that you are using for CMAQ.
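
For illustration only, a consistent rebuild might look like the sketch below (run from csh, to match the CMAQ build scripts). The source directories, install prefix, and configure options are assumptions to adapt to your own site; the point is that CC/FC and the optimization flags match what you use for I/O API and CMAQ.

    # Build netCDF-C and netCDF-Fortran with the same Intel compilers and
    # flags that will be used for I/O API and CMAQ (paths are illustrative).
    setenv CC  icc
    setenv FC  ifort
    setenv CFLAGS  "-O2"
    setenv FCFLAGS "-O2"

    cd netcdf-c-src               # unpacked netCDF-C source tree
    ./configure --prefix=/opt/netcdf-intel --disable-shared --disable-dap
    make && make install

    cd ../netcdf-fortran-src      # unpacked netCDF-Fortran source tree
    setenv CPPFLAGS "-I/opt/netcdf-intel/include"
    setenv LDFLAGS  "-L/opt/netcdf-intel/lib"
    ./configure --prefix=/opt/netcdf-intel --disable-shared
    make && make install

Then point the I/O API and CMAQ build configuration at the same install prefix, so every component links against the one netCDF build.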

Hi all,

I’m a sysadmin who helped Yukui finally get CMAQ compiled on our cluster. I just wanted to report some of the experiences we’ve had with the installation and build process, in the hope that some of the shortcomings might be considered and addressed. The experiences are ordered from specific to more general observations.

  1. It appears that CMAQ requires OpenMP support? We ended up using the Intel 2016 compiler and Intel-compiled dependencies, and had to add OpenMP compiler flags to the LINK* variables in the C shell scripts when building some of the projects. Specifically, for the Intel ifort compiler we added -openmp (a sketch of the change is below, after this list).
  2. I could not find any mention of minimum versions of dependencies or compilers in the online GitHub documentation.
  3. The C shell build system is not idempotent.
    1. In config_cmaq.csh, if one neglects to set the input paths the first time around, the C shell script creates broken symlinks to non-existent directories! It doesn’t check that the paths exist on the filesystem, and when the path variables are later corrected, the symlinks are skipped because they already exist, even though they no longer match the new variables (a defensive check is sketched after this list).
    2. For some of the scripts/bldit_*.csh scripts, the entire $Bld directory needs to be deleted when updating variables!
  4. Hiding patch files in release notes is less helpful than cutting a new release with the correct files: https://github.com/USEPA/CMAQ/tree/5.2.1/DOCS/Known_Issues#cmaqv521-i2-icon-and-bcon-fail-to-compile-in-v521
  5. The C shell build system tells you very little when things go wrong. One has to append -x to the csh shebang to trace the problematic output (example after this list).
  6. Neither IOAPI nor CMAQ creates the directory structure specified by the FHS to separate read-only from writable data (man 7 hier), which shared installations need. IOAPI 3.1 lacks a sane make install step that supports installing to a prefix path, and the way CMAQ organizes its files makes it impossible to create a shared installation for multiple users using environment modules.
  7. Yukui found a Word document with more detailed installation instructions, but I could not find it myself. The GitHub documentation doesn’t link to that document.
  8. Some of these issues could be uncovered with better (or any) support for continuous integration and tests. Often I can look at the CI files to figure out why things are not working, or what we’re doing differently from the developers.
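
Regarding item 1, the change we made was essentially the following; the variable name is illustrative, and the flag is appended to whichever LINK* variables the build scripts already define:

    # Append the OpenMP flag to the link flags used for the Intel compiler.
    # Variable name is illustrative; the exact spelling of the flag depends
    # on the compiler version.
    setenv myLINK_FLAG "-openmp"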
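
For item 3.1, a guard along these lines before the symlink creation would avoid the stale-link trap; the variable names here are hypothetical, not the ones the script actually uses:

    # Hypothetical defensive check before config_cmaq.csh creates its
    # library symlinks; NETCDF_DIR and CMAQ_LIB are illustrative names.
    if ( ! -d "$NETCDF_DIR" ) then
        echo "ERROR: NETCDF_DIR=$NETCDF_DIR does not exist"
        exit 1
    endif
    # Recreate the link every run so a stale link from an earlier attempt
    # cannot survive a change to NETCDF_DIR.
    rm -f $CMAQ_LIB/netcdf
    ln -s "$NETCDF_DIR" $CMAQ_LIB/netcdf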
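
And for item 5, tracing can also be enabled without editing the script; the script name below is just an example:

    # Run a bldit script with csh command tracing and keep a log of it:
    csh -x bldit_cctm.csh |& tee bldit_cctm.trace
    # or, equivalently, change the script's first line to:  #!/bin/csh -x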

By item number:

  1. Although CMAQ itself does not require OpenMP, various other related programs do; therefore, the I/O API (and M3Tools) are compiled for it. BTW, it’s “-qopenmp” for 2016 and later Intel compilers.

  2. & 7) That documentation is on the CMAS web-site (the primary distribution center for this stuff)…

  3.–5. I can’t disagree… but they are trying to build a multi-configuration (“vanilla”, DDM, ISAM, …) modeling system.
  6. The FHS standard is inadequate to describe the plethora of binary types which may arise. In particular, various vendors’ Fortran compilers are not link-compatible – in fact, some of them aren’t link-compatible from version to version from the same vendor. Nor does the FHS envision supporting multiple binary types for profiling, debugging, etc. BTW, all of this stuff generally shows up either under “/opt” or else in user directories (somewhere under ${HOME}…)

– Carlie J. Coats, Jr., Ph.D.
I/O API Author/Maintainer

Oh, under “binary link types” I forgot to mention “small-model” vs “medium-model” builds and “normal” vs “distributed-I/O” builds and “normal-year” vs “climatology-year” builds and “32-bit” vs “64-bit” builds.

By default, 53 different Linux link-types are supported (and this does not account for user-customization to support multiple versions of each vendor’s compilers which may be installed). By the time you do that, it’s a nightmare of a combinatorial explosion.

BTW, the “module” approach does not come close to scaling to the number of different link-compatibility-types that come up. On the primary UNC server, I am by default building (an abbreviated(!) set of) 14 distinct types, used by different portions of the user community for different (production vs R&D vs development) tasks.