Library error when attempting to run CCTM--bad compile?


Our sysadmin recently updated our compilers and libraries, and I've been working to install CMAQv5.3. I had a little trouble building CCTM: I had to add flags where NETCDF is set, and force the build to recognize those flags by changing line 402 of the attached build script from

echo "netcdf $quote$netcdf_libF$quote;" >> $Cfile

to

echo "netcdf $quote$NETCDF$quote;" >> $Cfile

in order to get the executable.

When I do a test run of CCTM, it stops with the following:
"/home/shared/CMAQ-5.3.1/CCTM/scripts/BLD_CCTM_v532_intel/CCTM_v532.exe: error while loading shared libraries: cannot open shared object file: No such file or directory"

I'm not sure whether this means I compiled CCTM incorrectly, whether I need to change something in the run script, or whether the problem goes further back and I need to modify my config script. I've asked our sysadmin too, but he's not very familiar with MPI, so I thought I'd ask here as well. What do you recommend I try (or our sysadmin try)?

I’ve attached my config file, my build script, my run script, and the logs from each.

Thank you!


config_cmaq.csh (10.7 KB) bldit_cctm.csh (30.0 KB) bldit_cctm-Mar25.log.txt (373.1 KB) run_cctm_testday2016_12US2.csh (35.6 KB) run_cctm_testday2016.log.txt (3.8 KB)

It means that when your sysadmin "updated our compilers and libraries" he broke things. It also means that you are not following the I/O API build recommendations:

> Vendor supplied netCDF libraries frequently cause problems…

([Availability/Download of the BAMS/Models-3 I/O API]), and

> It is recommended that you disable these options by adding the command-line flags below to your netCDF configure command:
>
> --disable-netcdf4 --disable-dap
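A minimal sketch of a netCDF-C build following those recommendations (the source directory and install prefix here are assumptions, not taken from the thread):

```shell
# Build netCDF-C without the netCDF-4/HDF5 and DAP layers, per the I/O API notes.
# --disable-shared produces static .a libraries, avoiding run-time .so
# version sensitivity when system libraries are later updated.
cd netcdf-c-4.7.3            # assumed source directory
./configure --prefix=$HOME/ioapi_libs \
            --disable-netcdf4 \
            --disable-dap \
            --disable-shared
make
make install
```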

You are going to have the opportunity to “re-compile the universe” because of incompatibilities between the new libraries and the previous versions.

Assuming you are able to compile CMAQ, the problem is likely that the environment used to compile CMAQ (and/or IOAPI) is not the same as the one you are using to run the model.

If you cd to the CMAQ build directory and use the ldd command on the executable, does it find all libraries, or does it show that one (or more) is "not found"?
If all libraries are found in your build environment, then try putting that command in your run script. Presumably that will show which library is not found.
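As a concrete sketch of that check (using /bin/ls as a stand-in; substitute the CCTM_v532.exe path from the error message above):

```shell
# List a binary's shared-library resolution; unresolved entries print "not found".
exe=/bin/ls   # stand-in; use the full path to CCTM_v532.exe instead
ldd "$exe" | grep "not found" || echo "all shared libraries resolved"
```

Running the same two lines inside the run script shows what the batch/run environment resolves, which may differ from your interactive build shell.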

When I have run into this, tracking down the inconsistency has enabled me to resolve the problem. If your system uses modules, then the module list command is helpful in comparing what modules are loaded in each environment.
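On systems without modules, a plain environment diff can serve the same purpose; a sketch (file names are arbitrary):

```shell
# In the shell where you compile, snapshot the environment:
env | sort > build_env.txt
# In the shell (or batch job) that runs CCTM, snapshot it again:
env | sort > run_env.txt
# Differences in PATH / LD_LIBRARY_PATH entries are the usual suspects.
diff build_env.txt run_env.txt && echo "environments match"
```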

The OP is using vendor-supplied libraries, with hard-coded .so versions, and his sysadmin updated everything, thereby changing those .so library versions (and thanks to what Ulrich Drepper has done with GNU ld, run-time linking with these is version sensitive).


If he had built the netCDF libraries as recommended in the I/O API build instructions, he would have created his own static .a libraries that do not depend upon HDF at all, and this would not have happened.

Thank you for your celerity and thoughts!

I've asked about how the netcdf-c library was built; maybe he missed part of the steps I sent. (Carlie, you may be glad to know that your best-practices instructions are not limited to the IOAPI documentation, but also appear here: CMAQ/ at master · USEPA/CMAQ · GitHub.)

Chris, I used the ldd command and see that it is the only library not found.
This system unfortunately doesn't use modules. I'm confident nothing changed in the environment between the library and compiler updates and my IOAPI and CMAQ builds; this machine doesn't get updated often, and I have a healthy fear of building and running across library updates.

Assuming the netCDF libraries need to be reconfigured, I'll report back after that's done and I try recompiling IOAPI and CCTM. Thanks again!