Program received signal SIGABRT: Process abort signal

I am trying to run a simulation on a small domain with CONUS data. I ran my own WRF simulation and used MCIP, BCON, and ICON to prepare the input files for the simulation. I ran into this issue when starting CCTM. You can check the log files:
run.cctm.txt (51.1 KB)
CTM_LOG_000.v54_cb6r5_ae7_aq_WR413_MYR_2019mid_20190802.txt (30.3 KB)
CTM_LOG_001.v54_cb6r5_ae7_aq_WR413_MYR_2019mid_20190802.txt (30.3 KB)
CTM_LOG_002.v54_cb6r5_ae7_aq_WR413_MYR_2019mid_20190802.txt (30.3 KB)
CTM_LOG_003.v54_cb6r5_ae7_aq_WR413_MYR_2019mid_20190802.txt (30.3 KB)

You mentioned that you ran MCIP to process your WRF simulation with 44 layers and then ran BCON and ICON to prepare boundary and initial conditions for CCTM using this 44-layer structure. However, the CTM_LOG files you posted show that your run actually uses the 35-layer EQUATES restart file as initial conditions, i.e. the file INIT_CONC_1. This will cause an array mismatch error when reading the initial conditions, which appears consistent with the traceback messages you posted.

Assuming that the initial and boundary condition files you prepared with BCON and ICON indeed have the correct 44-layer structure, make sure that your run script points to those files rather than the corresponding EQUATES files, which use a 35-layer structure that is incompatible with your WRF setup.
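For example, in a typical CCTM run script the initial and boundary condition files are selected through environment variables along these lines (the variable names follow the distributed run scripts; the paths and file names below are placeholders, not your actual files):

```csh
#> Point these at the 44-layer files you generated with ICON and BCON,
#> not at the downloaded 35-layer EQUATES files.
setenv ICpath  ${CMAQ_DATA}/2019_08/icbc      #> placeholder path
setenv BCpath  ${CMAQ_DATA}/2019_08/icbc      #> placeholder path
setenv ICFILE  YOUR_44_LAYER_ICON_FILE        #> placeholder file name
setenv BCFILE  YOUR_44_LAYER_BCON_FILE        #> placeholder file name
```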

Thanks for your suggestion; I understand what you mean. To solve the problem, could I instead rerun my WRF simulation and prepare 35-layer ICON and BCON files for the simulation?

If you want to use 44 layers for your simulations, there is no need to rerun WRF. You just need to make sure that you use your own 44-layer initial and boundary condition files (which, in your first post, you said you prepared) as input to your CCTM simulation, since you cannot use the existing 35-layer EQUATES initial and boundary conditions. Even if you were to change to a 35-layer setup, you would still have to generate your own boundary conditions because your domain is smaller than the EQUATES domain, though you could use the EQUATES CGRID files as initial conditions.

You didn’t provide details on how you prepared the 44-layer initial and boundary condition files mentioned in your first post, but if you deem them appropriate for your project, I see no reason why it would be necessary to change your WRF setup and run WRF again.

Thanks for your suggestion. I will try to use the 44-layer files for my simulation. Another question: what does “EQUATES CGRID” mean? Is that the concentration file with the 35-layer structure? My understanding is that the CGRID file does not need to be modified for my domain when used as initial conditions, but that I still need to prepare the boundary conditions myself. Is that correct?

Your log file shows that you are using input files from the EQUATES project, probably downloaded from the CMAS data warehouse where we make that data available. The CGRID file is the file your run script specifies as initial conditions, based on your log file:

 "INIT_CONC_1" opened as OLD:READ-ONLY   
 File name "/scratch/mnpfubl/Build_CMAQ/CMAQ_project/data/2019_08/icbc/"
 File type GRDDED3 
 Execution ID "CMAQ_CCTMv532_kappel_20220521_150314_338864756"
 Grid name "12US1"
 Dimensions: 299 rows, 459 cols, 35 lays, 224 vbles
 NetCDF ID:    589824  opened as READONLY            
 Starting date and time  2019214:000000 (0:00:00   Aug. 2, 2019)
 Timestep                          010000 (1:00:00 hh:mm:ss)

Yes, the CGRID file contains species concentrations for all species and model grid cells and is written at the very end of a CCTM simulation. It can therefore be used as initial conditions for the next time period following the previous CCTM simulation, instead of using an initial conditions file prepared by ICON to initialize the model.

CCTM can automatically horizontally window gridded files like CGRID or emissions (in your case, EQUATES input files with 299 rows and 459 columns) to a smaller domain (like your subdomain with 125 rows and 125 columns), as long as they share the same projection and the subdomain is fully contained in the larger domain.

However, windowing does not apply to the vertical dimension. The vertical dimension for your CCTM simulation is defined by your MCIP files, and the initial and boundary condition files need to match that vertical structure.
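As an illustration of the horizontal containment requirement (assuming both grids share the same projection and the window origin is expressed in parent-grid cells; all numbers except the 12US1 dimensions are hypothetical), the check amounts to simple arithmetic:

```shell
# Does a 125x125 window placed at a hypothetical offset fit entirely
# inside the 459-column by 299-row EQUATES 12US1 grid?
PARENT_COLS=459; PARENT_ROWS=299   # 12US1 dimensions from the log file
SUB_COLS=125;   SUB_ROWS=125       # subdomain size
COL_OFFSET=100; ROW_OFFSET=80      # hypothetical window origin, in cells
if [ $((COL_OFFSET + SUB_COLS)) -le "$PARENT_COLS" ] &&
   [ $((ROW_OFFSET + SUB_ROWS)) -le "$PARENT_ROWS" ]; then
    echo "subdomain is fully contained in the parent grid"
else
    echo "subdomain extends beyond the parent grid"
fi
```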

Thanks for your reply. I realized that I had prepared my ICON file, but it was not used in my script. I have modified the script and run again. However, I still get similar errors. Here are the log files from this run:
CTM_LOG_000.v54_cb6r5_ae7_aq_WR413_MYR_2019mid_20190802.txt (30.3 KB)
run.cctm.log.txt (51.1 KB)

Thanks for confirming what I mentioned in my first post, i.e. that your run was not using the 44-layer initial condition file you thought you were using, and instead used the downloaded 35-layer CGRID file that is incompatible with your 44-layer MCIP files.

While I had hoped that this mismatch might have explained the crash, that doesn’t appear to be the case, since in your new run you are clearly using your own 44-layer initial condition file.

I’m not sure what to suggest next. The traceback message points to some memory issue, but I don’t know what the cause could be. I noticed you are only using 4 processors. Are you able to use more, and if so, do you encounter the same error? Did you try to run the benchmark case, and were you successful in doing so?

I can use more cores. I will give it a try and hope it works. Thanks for your help.

I have successfully run the benchmark case. I used 32 cores to give it a try, and the error is still the same. I am not sure what I should do now. Do I need to rebuild my libraries or do something else?

Thanks for performing this test and reporting back.
If you haven’t compiled CCTM in debug mode, I would give that a try next. To do that, go to your BLD directory and type “make clean”, followed by “make DEBUG=TRUE”. Then execute the run script again and see if there are any error messages pointing to specific lines in the CCTM code, rather than traceback messages pointing to shared object libraries.
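Concretely, assuming a default gcc build directory name (yours may differ), the sequence would be:

```csh
cd $CMAQ_HOME/CCTM/scripts/BLD_CCTM_v54_gcc   #> example path; use your own BLD directory
make clean
make DEBUG=TRUE                               #> note: no spaces around '='
```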

I have a question. I used bldit_cctm.csh to build the model. If I want to compile in debug mode, should I modify something in the script? (Never mind, I found the option in the file.)

Yes, you can use the bldit_cctm.csh script to compile in debug mode; you just have to uncomment ‘#set Debug_CCTM’. But if you already have an existing BLD directory and want to rebuild the executable in debug mode there, you can use the approach listed in my previous post.
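For reference, after uncommenting, the relevant line in bldit_cctm.csh would look approximately like this (the exact comment wording may differ between versions):

```csh
#> compile CCTM in debug mode
set Debug_CCTM
```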

Hi, Christian. I have compiled a debug version of CCTM. The problem is that I cannot find any other error messages; the log files are the same as before.

The hope was that by compiling in debug mode and rerunning the script, the error message in the main log file (run.cctm.txt) would change to something more concrete, and/or that the processor-specific log files (CTM_LOG_…) would show something after opening the E2C_SOIL file.

I have rerun with the debug-mode CCTM. I cannot find any changes. Should I maybe use bsub (a tool for submitting jobs on our university supercomputer, similar to SLURM)?
CTM_LOG_000.v54_cb6r5_ae7_aq_WR413_MYR_2019mid_20190802.txt (30.3 KB)
run.cctm.log.txt (89.7 KB)

You could try, yes, but if you successfully ran the benchmark case on the same system which you used for your tests so far, I’m not very hopeful. I don’t have any other suggestions, though, so you might as well try. Maybe others can think of what else to try.

One additional test you could try is setting the CTM_OCEAN_CHEM and maybe also the CTM_ABFLUX run script flags to F, just to see if opening their required input files windowed for your smaller domain is causing any issues.
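In the run script, that test corresponds to changing the science-option flags along these lines (both are standard CCTM environment variables; the comments are mine):

```csh
setenv CTM_OCEAN_CHEM F   #> temporarily disable ocean chemistry to isolate the problem
setenv CTM_ABFLUX F       #> temporarily disable NH3 bidirectional flux
```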

Hi, Christian. I have tried disabling CTM_OCEAN_CHEM and CTM_ABFLUX. The script now runs through the input file processing; however, I get another error. I will attach my log files here. When I modify the number of cores and the memory usage, it shows different errors.
CTM_LOG_001.v54_cb6r5_ae7_aq_WR413_MYR_2019mid_20190802.txt (43.4 KB)
run.cctm.log.txt (126.4 KB)
CTM_LOG_000.v54_cb6r5_ae7_aq_WR413_MYR_2019mid_20190802.txt (56.2 KB)


After talking to @hogrefe.christian about your issue, could you upload a few additional things:

  1. An ncdump of your METCRO3D (something like ncdump -h /scratch/mnpfubl/Build_CMAQ/CMAQ_project/data/2019_08/met/ >> metcro3d_ncdump.log)
  2. A cat of your GRIDDESC file (something like cat /scratch/mnpfubl/Build_CMAQ/CMAQ_project/data/2019_08/met/GRIDDESC >> GRIDDESC.log)
  3. Lastly, a cat of your Makefile (something like cat /scratch/mnpfubl/Build_CMAQ/CMAQ_project/CCTM/scripts/BLD_CCTM_v54_gcc_debug/Makefile >> makefile.log)

Looking at your most recent log files suggests that the model cannot determine an appropriate time step at which to synchronize the science processes (vertical diffusion, advection, etc.).