I am running CMAQ for the 12 km CONUS domain for the month of January 2022. The model is processing in 4-minute intervals rather than the 1-hour intervals I expected, and the processing speed is very low (I am using 8 processors). Here are the run script and log file: run_cctm_202201_12US1.csh (35.4 KB) run_cctm_202201_12US1_log.txt (56.0 KB)
I would be grateful if anyone could help me get this to process in 1-hour intervals and make it run faster.
Your log file shows that the model output is being written every hour, as desired. Internally, model processes (advection, chemistry, etc.) need to be called more frequently, at each “sync step”; these sync steps are the 4-minute intervals you see in the log file. There is a good discussion of this topic in this previous thread:
Also, unless all your inputs (MCIP, emissions, etc.) have been constructed with all 745 hours for January, your NSTEPS setting should probably be 240000 rather than 7450000.
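In case it helps, NSTEPS and the related timing variables in the standard CCTM run script are specified in HHMMSS format, so 240000 is 24 hours and 7450000 is 745 hours. A sketch of a typical daily configuration (values here are illustrative, following the released run script template):

    #> Timestepping parameters (HHMMSS format), illustrative values
    set STTIME = 000000    #> beginning GMT time
    set NSTEPS = 240000    #> run duration: 24 hours per simulation day
    set TSTEP  = 010000    #> output time step: write concentrations every hour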
The 4-minute time step is normal behavior. The model chooses an appropriate synchronization time step to ensure stability. The 1-hour interval you are specifying applies to the frequency at which output is written.
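If you want to see where the synchronization step is constrained, the standard run script exposes bounds on it; a sketch of that block is below (your script's values may differ). Note that the model will still shorten the step as needed for stability, so these settings cannot force a 1-hour internal step.

    #> Bounds on the internal synchronization (sync) time step, in seconds
    setenv CTM_MAXSYNC 300   #> maximum sync time step [ default: 720 ]
    setenv CTM_MINSYNC  60   #> minimum sync time step [ default: 60 ]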
As for speed, you appear to be using the 12US1 domain, which is 459x299, but on only 8 processors. This is a large domain to run on 8 processors. We typically use 128 processors for that size domain, and I believe it typically takes about an hour of wall clock time to run the model for one day.
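For illustration, the processor count is set through the domain decomposition block of the run script; a sketch for 128 processors is shown below (the 16x8 split is just one possible choice).

    #> Horizontal domain decomposition (sketch for 128 processors)
    @ NPCOL = 16; @ NPROW = 8
    @ NPROCS = $NPCOL * $NPROW
    setenv NPCOL_NPROW "$NPCOL $NPROW"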
I have all 744 hours of MCIP and emissions data available, and I want hourly concentration output for all 744 hours. At the same time, I don’t want the CMAQ run to be too computationally expensive (concentration accuracy is not a concern). I am currently trying 64 and 128 processors, but it is still pretty slow. What settings do I need to choose to resolve this issue?
I have all 744 hours of MCIP and emissions data available, and I want hourly concentration output for all 744 hours.
Yes, but the question is whether you want the outputs for all 744 hours in a single file, and if so, whether your inputs are organized to support such a single 745-hour simulation, i.e. whether each of the emissions and MCIP files specified in your run script contains the required 745 hours. The naming of the files in the run script suggests they don’t, and that they are instead organized as separate 25-hour files for each day (your files are named GRIDBDY2D_220101.nc, inln_mole_ptfire_${YYYYMMDD}_${STKCASEE}.ncf, etc.), but since the log file didn’t show the open statements for these files, I am just guessing.

Setting NSTEPS to 7450000 tells CMAQ to perform a single 745-hour simulation, while the more typical approach is to perform consecutive 24-hour simulations (accomplished by setting NSTEPS to 240000) to cover the time period specified by START_DATE and END_DATE. There’s nothing wrong with setting NSTEPS to 7450000; it just means that you have to prepare your input files in a different way than is typically done.
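In the released run script template, those consecutive 24-hour simulations are driven by a day loop that rebuilds the date strings used in the daily input file names. A simplified sketch (variable names follow the template, details abridged):

    set START_DATE = "2022-01-01"
    set END_DATE   = "2022-01-31"
    set NSTEPS     = 240000                               #> 24 hours per loop iteration

    set TODAYG   = ${START_DATE}
    set TODAYJ   = `date -ud "${START_DATE}" +%Y%j`       #> YYYYJJJ
    set STOP_DAY = `date -ud "${END_DATE}" +%Y%j`

    while ($TODAYJ <= $STOP_DAY)
       set YYYYMMDD = `date -ud "${TODAYG}" +%Y%m%d`      #> used in daily MCIP/emissions file names
       #> ... set the daily input/output file names and run the CCTM executable here ...
       set TODAYG = `date -ud "${TODAYG}+1days" +%Y-%m-%d`
       set TODAYJ = `date -ud "${TODAYG}" +%Y%j`
    end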
At the same time, I don’t want the CMAQ run to be too computationally expensive (concentration accuracy is not a concern). I am currently trying 64 and 128 processors, but it is still pretty slow. What settings do I need to choose to resolve this issue?
Running a continental-scale domain at 12 km resolution with 459x299 grid cells and (presumably) 30+ layers is computationally demanding, no matter how you cut it. What timing do you see for one simulation day using 128 processors? In our experience, we get better performance with the Intel compiler than with GCC, but system hardware and configuration also affect runtime. Aside from reducing the size of your domain, I don’t see many CMAQ runtime options you could specify in your run script to shorten the runtime, since you’re already limiting CONC file outputs to layer 1 and also don’t write out diagnostic files.
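For reference, the output-related settings I’m referring to look roughly like this in the run script template (shown only as a sketch of where to check; your script already appears to have them configured this way):

    setenv CONC_BLEV_ELEV  " 1 1"  #> write only layer 1 to the CONC file
    setenv ACONC_BLEV_ELEV " 1 1"  #> write only layer 1 to the ACONC file
    setenv CLD_DIAG     N          #> cloud diagnostic file off
    setenv CTM_PHOTDIAG N          #> photolysis diagnostic file off
    setenv CTM_SSEMDIAG N          #> sea-spray emissions diagnostic file off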