# mHM issues (https://git.ufz.de/mhm/mhm/-/issues)

## Improve error message for DDS

Issue: https://git.ufz.de/mhm/mhm/-/issues/208 · Updated: 2021-11-12 · Author: Pallav Kumar Shrestha (pallav-kumar.shrestha@ufz.de)

**Issue**
We need to improve the error message for DDS when observations are missing for optimization. At the moment we get a `developer level message` that is completely out of context for mHM users who don't know much about the source code. In general, mHM error messages should transition from `developer level` to `user level`.
**Location**
[Occurrence 1](https://git.ufz.de/mhm/mhm/-/blob/ff81bd9f6624429c019f416136447ca28b3b676e/src/lib/mo_dds.f90#L224)
[Occurrence 2](https://git.ufz.de/mhm/mhm/-/blob/ff81bd9f6624429c019f416136447ca28b3b676e/src/lib/mo_dds.f90#L532)
**Complexity**
This example differs from the others in that the message is embedded in `mo_dds`, which is part of the [FORCES](https://git.ufz.de/chs/forces/-/blob/develop/src/mo_dds.F90) library.
**Challenge**
Find a way to communicate with the mHM user when errors occur in modules of shared/external libraries like FORCES.
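The pattern being asked for can be sketched as an error-translation layer: the mHM-side caller of the shared-library routine validates input first and catches low-level failures, re-raising them with user-facing context. A minimal sketch in Python (the real code is Fortran); all names below are hypothetical illustrations, not actual mHM or FORCES APIs.

```python
class UserFacingError(Exception):
    """Error phrased for mHM users rather than developers."""


def dds_minimize(objective, observations):
    """Stand-in for the library routine; the real DDS lives in FORCES (Fortran)."""
    return min(objective(x) for x in observations)


def run_dds_optimization(objective, observations):
    """Hypothetical mHM-side wrapper around the shared-library DDS routine."""
    # Validate user input *before* entering the library, so the user sees a
    # message about their setup rather than about library internals.
    if not observations:
        raise UserFacingError(
            "DDS optimization requires observations, but none were found. "
            "Please check the evaluation period and the observation files."
        )
    try:
        return dds_minimize(objective, observations)
    except ValueError as err:  # developer-level error escaping the library
        raise UserFacingError(
            f"DDS optimization failed ({err}); this usually indicates "
            "missing or invalid observations."
        ) from err
```

The key design point is that the shared library stays untouched: the translation to a user-level message happens at the boundary where mHM calls into it.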
![Screenshot_2021-10-27_at_13.34.08](/uploads/415e67ca964f986cbd6da433a5745327/Screenshot_2021-10-27_at_13.34.08.png)

Labels: wishlist, future · Assignee: Sebastian Müller

## [CI] fails occasionally (mo_objective_function)

Issue: https://git.ufz.de/mhm/mhm/-/issues/164 · Updated: 2023-05-12 · Author: Sebastian Müller

@kaluza and I (@muellese) noticed that the GitLab CI fails occasionally because it cannot compile mHM: `mo_objective_function.mod` is missing.
This happens for all compilers (gcc83, intel20, nag62 have already been reported).
When it occurs, this can be fixed with a retry or by deleting the build folder on EVE (`/public/mhm_runner/....`).
I am at my wit's end, at least for now.

Labels: wishlist, future · Assignee: Sebastian Müller

## Leaf Area Index of 0 not possible in gridded LAI input data

Issue: https://git.ufz.de/mhm/mhm/-/issues/134 · Updated: 2020-11-27 · Author: Marco Hannemann

I ran into a problem with my gridded LAI input data. If the value 0 occurs in a cell within the basin, e.g. describing bare ground or urban areas, the following error is thrown:
```
***ERROR: read_forcing_nc: values in variable "lai" are lower than 0.00
at timestep : 1
File setup_dir//lai//lai.nc
Minval at timestep: 0.0000
Total minval: 0.0000
```
Is this expected behaviour? The error message indicates the presence of negative LAI values; perhaps this is a floating-point uncertainty problem?
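One way an exact zero can trip a strict lower-bound check (an assumption on my part, not confirmed against the mHM reader): if the NetCDF variable is stored packed via `scale_factor`/`add_offset`, or has passed through single-precision arithmetic, a nominal 0 can decode to a tiny negative number. A minimal sketch with hypothetical attribute values:

```python
# A nominal LAI of 0.0 packed as an integer with hypothetical
# scale/offset attributes (not taken from any real mHM input file):
scale_factor = 0.004
add_offset = -1e-7          # tiny offset, e.g. from single-precision round-off
packed = round((0.0 - add_offset) / scale_factor)  # integer stored in the file
decoded = packed * scale_factor + add_offset       # value the reader gets back

print(decoded < 0.0)    # True -> a strict "lower than 0.00" check fires
eps = 1e-6
print(decoded < -eps)   # False -> a tolerance-based check would accept it
```

If this is indeed the mechanism, comparing against a small negative tolerance instead of exactly 0.0 would make the reported workaround unnecessary.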
A possible workaround is to set the value of these cells to 1e-10.

Labels: wishlist, future · Assignee: Sebastian Müller

## restoring older parameter ranges for &soilmoisture1

Issue: https://git.ufz.de/mhm/mhm/-/issues/125 · Updated: 2020-11-27 · Author: Oldrich Rakovec

Attached are the updated parameter ranges for the &soilmoisture1 process, provided by @lese and @rkumar, based on the initial mHM code.
We are currently using them in the latest GDM calibrations to ensure realistic ThetaS values: [GAMMA.LOOKUP.txt](/uploads/d334d345b7a4532043ef67bda9ce7638/GAMMA.LOOKUP.txt)
It would probably be good to replace the older ranges in the next release. Thanks!
@thober

Labels: wishlist, future · Assignee: Sebastian Müller

## Consistency check for soil LUT (iFlag_soilDB = 1) fails for large soil datasets with Intel

Issue: https://git.ufz.de/mhm/mhm/-/issues/25 · Updated: 2021-07-29 · Author: Oldrich Rakovec

Dear all,
While running the South American sub-continent (L0 has ncols=30720, nrows=24064) with `iFlag_soilDB = 1` and `nSoilHorizons_mHM = 6`, mHM breaks in `/src/MPR/mo_read_wrapper.f90` on the following line:
`call check_consistency_lut_map(reshape(L0_soilId, (/ size(L0_soilId, 1) * size(L0_soilId, 2) /)), soilDB%id(:), fName)`
mHM executes this call successfully only up to `nSoilHorizons_mHM = 3`; with `nSoilHorizons_mHM = 4` or more, the code returns the following error:
`forrtl: severe (408): fort: (2): Subscript #1 of the array IRNGT has value 2109892711 which is greater than the upper bound of 2109892710`
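A plausible reading of this error (my assumption, not verified in the code): the reported upper bound of 2 109 892 710 sits only about 1.8 % below the default 32-bit integer limit, so intermediate index arithmetic inside the ranking routine (e.g. summing or doubling indices during a merge sort) can silently overflow even though the allocation itself fits. The arithmetic:

```python
INT32_MAX = 2**31 - 1        # huge(1_i4) in Fortran: 2147483647
array_bound = 2_109_892_710  # upper bound reported in the runtime error

assert array_bound < INT32_MAX      # the allocation itself still fits
headroom = INT32_MAX - array_bound
print(headroom)                     # 37590937 -> under 2 % of the range left
# An index expression such as lo + hi or 2 * mid near the top of such an
# array exceeds INT32_MAX and wraps around in default 32-bit arithmetic:
print(2 * array_bound > INT32_MAX)  # True -> would overflow as int32
```

If that reading is correct, promoting the index variables in the sort to 64-bit integers (or compiling with 64-bit default integers) would be the fix, rather than changing optimization flags.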
Note that I already encountered this weak point half a year ago for the Australian domain. Back then I could work around it in the Makefile via `INTEL_EXCLUDE := mo_multi_param_reg.f90 mo_mpr_soilmoist.f90 mo_read_wrapper.f90`, but that has no effect now. I also tried reducing the optimization level from `O3` to `O1` in `./make.config/eve.intel18`, which did not help either:
```
F90FLAGS += -O1 -qoverride-limits -gxx-name=/usr/local/gcc/6.2.0-1/bin/g++
FCFLAGS  += -O1 -qoverride-limits -gxx-name=/usr/local/gcc/6.2.0-1/bin/g++
CFLAGS   += -O1
```
I have tried both Intel compilers on EVE (13 and 18); both show the same behaviour.
@kaluza @thober @rkumar @lese @schaefed @ottor, any ideas? Thanks!

Milestone: 6.x · Assignee: Maren Kaluza