I have attached the information on my wrf data I got using ncdump -h wrfout, as well as the LAI data as mentioned in a previous post.
Thank you, @NRiceKYDAQ. Unlike @a.kashfi73, you are using P-X in your runs. However, you don’t have RA and RS (the aerodynamic and stomatal resistances) in your WRF output (per your header file)…but they are available from P-X. If these variables were available directly in your WRF output file, then you would not need to use resistcalc.f90 to artificially create them.
Is it realistic for you to re-run WRF in your case? If so, I can advise you on how to make these variables available in your WRF output. Otherwise, @mmallard and I will continue to work with you on this.
@tlspero, thanks for your response. Unfortunately it isn’t feasible for me to re-run WRF right now. I am using WRF data I was given in collaboration with other state government entities rather than WRF data processed in-house. I am working to get photochemical modeling up and running for the first time for the state of KY, so I am using the data made available to me.
Thank you for the clarification. I’ll keep digging…
On a slightly different note, I was wondering if there is a trick to produce separate MCIP output files for each day of the year, rather than one large file for all hours+days in a year?
You can run MCIP for whatever period you want. You just need to edit MCIP_START and MCIP_END in the script. If you want to loop over the entire year to create separate files, that is easy to script.
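For example, here is a minimal POSIX-shell sketch of such a loop. It assumes you have adapted your MCIP run script to read `MCIP_START`/`MCIP_END` from the environment (in the stock csh script they are set internally), and it uses GNU `date` for the date arithmetic; the script name `run_mcip.csh` is illustrative:

```shell
# Loop over every day of 2016 and invoke MCIP once per day.
# MCIP expects times in the form YYYY-MM-DD-HH:MM:SS.SSSS.
start=2016-01-01
end=2017-01-01          # exclusive upper bound

d=$start
while [ "$d" != "$end" ]; do
  next=$(date -u -d "$d + 1 day" +%Y-%m-%d)   # GNU date syntax
  export MCIP_START="${d}-00:00:00.0000"
  export MCIP_END="${next}-00:00:00.0000"
  echo "Running MCIP for $MCIP_START to $MCIP_END"
  # ./run_mcip.csh   # uncomment to actually launch MCIP for this day
  d=$next
done
```

Each pass produces one day's worth of MCIP output files (give each day its own output directory or file prefix so successive runs don't overwrite each other).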
A number of M3Tools programs (m3cple, for example) can be used to split out single days, etc., from longer I/O API files.
But: Why? First of all, any program that reads it can read that single day out of a much longer file without any penalties: the I/O API is a direct-access data interface that jumps directly to the data requested and then reads it (without having to read preceding data, etc.). Moreover, splitting what are almost always longer-term studies into single-day chunks makes the scripting, run-management, data-management, and analysis far harder and more complex than it ought to be. It is really a bad idea; this was even recognized as such in the original specifications thirty years ago.
Is there any other temporary solution? I need to fix this ASAP. What if I add a very small number like 0.01 to all of the LAIs? Do you think it may be a bug in WRF?
Thank you so much,
IOSTAT=17 results from “I/O statement not allowed on a direct file”.
Please upload a copy of your namelist.mcip so we can take a look at it.
@tlspero Here is the namelist.mcip file:
I would appreciate only receiving emails related to my issue. Please open a separate thread.
IOSTAT numbers are compiler dependent – “I/O statement not allowed on a direct file” is for IBM-mainframe xlf90.
For Intel ifort, it’s severe (17): Syntax error in NAMELIST input (see https://software.intel.com/en-us/fortran-compiler-developer-guide-and-reference-list-of-run-time-error-messages#BAD476FE-3A8C-48AA-B611-3400DC2A7EDB).
For Portland pgf95, the IOSTAT values go from 201 to 245, so this can’t be PGI-compiled.
For gfortran, lots of luck finding the IOSTAT value/meaning table ;-(
In your setting of “wrf_lc_ref_lat”, please just specify the value as a real number (without the “f” for float at the end).
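For example, in namelist.mcip it should look like this (the group name and latitude value here are illustrative; keep your own):

```
&userdefs
  wrf_lc_ref_lat = 40.0    ! plain real literal; "40.0f" is not valid NAMELIST input
/
```

Fortran NAMELIST input does not accept C-style `f` suffixes on literals, which is consistent with the "Syntax error in NAMELIST input" IOSTAT message discussed above.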
@a.kashfi73 – You are correct. We should stick with one issue per thread.
@skunwar – If my suggestion does not solve the issue, please open a separate thread.
Yes, you are correct. Thank you for the clarification.
My apologies @a.kashfi73 for posting on the thread you started - would it be possible to add something more specific at the end of the thread title ‘MCIP v5.1 RUN ISSUE’ so that users don’t assume it is a general MCIPv5.1 issues thread?
@skunwar Good suggestion. It’s always a good idea to have specific thread titles.
Good suggestion. Descriptive thread titles make it much easier to keep track of individual issues.
I have revisited the issue you had with LAI values by doing my own runs and trying to recreate the error. I used WRF 4.1 and MCIP 5.1, with similar physics settings (MODIS landuse data, thermal diffusion scheme for the LSM, revised MM5 M-O scheme at the surface). However, I’ve been unable to replicate the crash you had, so far.
The difference may be the compilers, though. Mine’s Intel 18. Are you also using Intel, or maybe gcc?
Thanks for any info you can provide,