Hello,
I am using the CMAQ v5.3.1 windblown dust physics in an offline framework at much higher spatial resolution than a traditional CMAQ application, and I am trying to better understand the origin and intended interpretation of the legacy static desert-fraction values in lus_data_module.F.
In the v5.3.1 source I am using, the BELD3 dust classes are assigned the following static desert-fraction values:
- shrubland = 0.50
- shrubgrass = 0.25
- barrenland = 0.75
Other land use schemes forgo the shrubgrass category and generally reuse the shrubland (0.50) and barrenland (0.75) fractions.
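To make sure I am reading the code's intent correctly: my working interpretation is that each value acts as a static per-class multiplier on the erodible area within a grid cell. Here is a minimal Python sketch of that reading (the actual module is Fortran, and the names below are mine, not CMAQ's):

```python
# Hypothetical sketch of my reading of the legacy values: a static
# per-land-use-class scaling of the erodible area within a grid cell.
# DESERT_FRACTION and erodible_area are my names, not CMAQ's.

DESERT_FRACTION = {
    "shrubland":  0.50,  # BELD3 values from lus_data_module.F
    "shrubgrass": 0.25,
    "barrenland": 0.75,
}

def erodible_area(cell_area_m2: float, class_fraction: float,
                  lu_class: str) -> float:
    """Erodible area contributed by one land use class in one cell."""
    return cell_area_m2 * class_fraction * DESERT_FRACTION[lu_class]

# e.g. a 1 km^2 cell that is 40% barrenland:
print(erodible_area(1.0e6, 0.40, "barrenland"))  # 300000.0 m^2
```

If that interpretation is wrong, I would appreciate a correction as part of any answer.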
My questions are:
- Where did these legacy values come from originally? Were they based on a paper, a dataset, expert judgment, or a calibration exercise, or were they inherited from an earlier code base?
- What were they intended to represent physically? Were they meant to approximate unresolved subgrid variation in source availability/erodibility within a CMAQ grid cell, or were they mainly an empirical/tuning factor?
- In the original CMAQ/FENGSHA context, were these values intended to account for things like vegetation patchiness, crusted or non-erodible fractions, rocky/armored terrain, wet/ponded surfaces, subgrid soil heterogeneity, or some combination of these?
- Has there been any guidance on whether these values should be revisited when the model is applied at much finer spatial resolution with more explicit surface characterization?
I am asking because, in my current workflow, I explicitly resolve several surface controls that may previously have been folded into these factors, including time-specific remote-sensing-derived masks/fields for standing water, salt crust, and vegetation fraction, along with finer-resolution wind fields. Because of that, I am trying to determine whether applying the legacy static desert-fraction values unchanged would double count reductions in source availability.
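To make the concern concrete, here is a purely hypothetical sketch (not the actual CMAQ code path): if the legacy fraction already implicitly discounts wet or vegetated subareas, then also multiplying by explicit remote-sensing-derived masks suppresses the same surfaces twice:

```python
# Illustrative only; names and numbers are mine, not CMAQ's.
# If the 0.75 barrenland fraction already folds in (among other
# things) wet and vegetated subareas, then applying explicit masks
# on top of it removes those subareas a second time.

legacy_desert_fraction = 0.75  # static barrenland value
water_free_fraction    = 0.90  # from a standing-water mask
bare_soil_fraction     = 0.80  # from a vegetation-fraction field

# Both the legacy value and the explicit masks suppress the same
# surfaces, so the product may be an over-reduction.
combined = legacy_desert_fraction * water_free_fraction * bare_soil_fraction
print(f"effective source fraction: {combined:.3f}")  # 0.540 vs. 0.750
```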
At the same time, I recognize that unresolved supply limitations may still remain, especially for rocky/armored/bedrock terrain, cobbles, crusts, and unresolved soil variability. So I am trying to understand whether these values were originally intended as a coarse-grid source-availability correction, a geomorphic/supply-related mask, an empirical calibration factor, or some mixture of those ideas.
Any clarification on the historical origin and intended physical meaning of these values would be greatly appreciated.