# mHM issues
https://git.ufz.de/mhm/mhm/-/issues

## Issue #131: L0 resolutions with repeating decimals
*Pallav Kumar Shrestha · updated 2022-07-07 · https://git.ufz.de/mhm/mhm/-/issues/131*

**Background**
* MERIT DEM is at 3 arc seconds, i.e. 0 deg 0 min 3 sec, or 0.000833333… (a repeating decimal).
* I use ASCII input format. All of input .asc files and header.txt have the L0 resolution in decimal degrees.
**Issue**
* I noticed that, in order to avoid truncation errors, the user needs to enter a lot of decimal places in the .asc files and header.txt. E.g. 18 decimal places were not enough, so I entered 100 decimal places, which worked. (Note: the error occurs at lines 546-547 of mo_grid.f90, in the calculation of xllcorner and yllcorner from the L0 and L2 resolutions.)
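A minimal Python sketch of the truncation effect (illustrative grid size only, not mHM code): 3 arc seconds is exactly 1/1200 degree, so a value typed with too few decimal places is a measurably different double, and multiplying it by the number of cells no longer reproduces the exact grid extent.

```python
# 3 arc seconds is exactly 1/1200 degree, a repeating decimal in base 10.
exact = 1.0 / 1200.0                      # correctly rounded double
typed = float("0.000833333333333333")     # 18 decimal places, as typed in header.txt

# The two doubles are NOT the same value:
print(typed == exact)                     # False

# Over a hypothetical 12000-column L0 grid (10 degrees wide),
# the truncated resolution no longer reproduces the exact extent:
print(12000 * typed)                      # slightly less than 10.0

# With enough digits, the parsed value collapses onto the correct double:
print(float("0.00083333333333333333333") == exact)   # True
```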
**Solution (?)**
* In mHM we have two coordinate systems: metric and lat-lon. The latter uses decimal degrees. However, data that come as deg-min-sec, when converted to decimal degrees, can yield "repeating decimals", leading to truncation errors as in the case of the MERIT DEM resolution.
* Wouldn't it be better if users had the option to enter the resolution in deg-min-sec or decimal degrees, as deemed fit? That would be more elegant/intuitive than entering a lot of decimal places.
* I don't have experience in using .nc files for L0 data (except lai.nc). Does that solve this issue already?

*Milestone: v5.12 · Assignee: Sebastian Müller*

## Issue #126: use of ceiling in calculate_grid_properties
*Stephan Thober · updated 2022-07-07 · https://git.ufz.de/mhm/mhm/-/issues/126*

Hi Robert,
in calculate_grid_properties in mo_grid, the Fortran intrinsic `ceiling` is used to calculate Ncols and Nrows. If the cell factor is close to 1, but not exactly 1, this can lead to additional rows and columns. I changed it to `nint` in Ulysses. Do you have any objection to changing it in mHM develop too?
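The effect can be reproduced outside Fortran. A short Python sketch (resolutions are made up for illustration, not taken from mHM): when the cell factor overshoots an integer by a few ulps, `ceiling` adds a spurious row/column while a nearest-integer rounding, like Fortran's `nint`, does not.

```python
import math

res_L0 = float("0.000833333333333333")   # truncated 3-arcsec resolution (15 digits)
res_L2 = 0.1

cellfactor = res_L2 / res_L0             # just above 120, by roughly 5e-14

print(math.ceil(cellfactor))             # 121 -> one spurious extra row/column
print(round(cellfactor))                 # 120 -> what Fortran's nint would give
```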
Thanks
Stephan
p.s.: This is my last message during my holidays ;)

*Milestone: v5.12 · Assignee: Robert Schweppe*

## Issue #121: PGI-Fortran support
*Sebastian Müller · updated 2022-04-28 · https://git.ufz.de/mhm/mhm/-/issues/121*

We now have the PGI Fortran compiler on EVE (https://git.ufz.de/it/eve/software/-/issues/176).
At the moment there are some problems/bugs that need to be resolved in order to make it work:
* [x] we need a load-script for the PGI compiler on EVE
* [ ] in `mhm_driver.f90`:
* the pointer `eval` should be of type `eval_interface` from `mo_optimization_utils`
https://git.ufz.de/mhm/mhm/-/blob/develop/src/mHM/mhm_driver.f90#L172
* the pointer `obj_func` should be of type `objective_interface` from `mo_optimization_utils`
https://git.ufz.de/mhm/mhm/-/blob/develop/src/mHM/mhm_driver.f90#L173
* [ ] in https://git.ufz.de/mhm/mhm/-/blob/develop/src/common/mo_read_latlon.f90#L106, an error occurs:
```
0: SHAPE: arg not associated with array
```
(very descriptive) ... my educated guess is that PGI has a problem with the generic procedures in `mo_netcdf` (https://git.ufz.de/mhm/mhm/-/blob/develop/src/common/mo_read_latlon.f90#L106)
* [ ] there are a lot of pre-processor directives across the code-base to handle PGI-Fortran compilation. We would need to test which of these are still necessary, and whether we would need more of them to handle problems like the one described above (what a mess)
* [ ] is it worth the hassle?
List TBC...

*Milestone: v5.12 · Assignee: Sebastian Müller*

## Issue #107: hourly meteo forcing nc
*Stephan Thober · updated 2023-01-12 · https://git.ufz.de/mhm/mhm/-/issues/107*

This is an email sent by Olda:
Hi Stephan,
Just a follow-up on Husain's comment during the mHM develop meeting.
During today's mHM meeting I checked further: the reading of hourly data is not working in mHM.
I remember checking this already a long time ago…
FYI, I have attached 3 meteo files with the time step modified to hourly for the test basin.
These can be used in the test example with a modified time window:
```fortran
warming_Days(1) = 0
!> first year of wanted simulation period
eval_Per(1)%yStart = 1989
!> first month of wanted simulation period
eval_Per(1)%mStart = 01
!> first day of wanted simulation period
eval_Per(1)%dStart = 02
!> last year of wanted simulation period
eval_Per(1)%yEnd = 1989
!> last month of wanted simulation period
eval_Per(1)%mEnd = 02
!> last day of wanted simulation period
eval_Per(1)%dEnd = 02
```
The first thing is that `./src/common/mo_read_forcing_nc.f90` fails to recognize that these data are at an hourly time step.
If `inctimestep` is hardcoded in there to `-4`, then a segmentation fault occurs later in the code.
I did not go deeper, but just want to confirm, there is an issue to be checked.
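One robust way to detect the forcing time step is to infer it from the time axis itself rather than from a hardcoded flag. A hedged Python sketch (plain lists stand in for the NetCDF time variable; the function name is invented for illustration and is not mHM's actual code):

```python
def infer_timestep_hours(time_hours):
    """Infer the forcing time step (in hours) from a NetCDF-style time axis."""
    steps = {b - a for a, b in zip(time_hours, time_hours[1:])}
    if len(steps) != 1:
        raise ValueError(f"non-uniform time axis, found steps: {sorted(steps)}")
    return steps.pop()

# hourly axis -> step of 1 hour; daily axis -> step of 24 hours
print(infer_timestep_hours([0, 1, 2, 3]))      # 1
print(infer_timestep_hours([0, 24, 48, 72]))   # 24
```

A check like this would also catch the mixed case (a non-uniform axis) with a clear error instead of a later segmentation fault.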
Thanks!
Best regards,
Olda
[pet.nc](/uploads/042c56ffde633de2f2efb26a126d6c2a/pet.nc)
[tavg.nc](/uploads/005f8ee487b8fa381e8def6fcf7432b2/tavg.nc)
[header.txt](/uploads/4ae6bdb6d30e8de7a3a1cf0fa499dbf2/header.txt)
[pre.nc](/uploads/71f9ed2044168a54a6a9c1bb5bddf115/pre.nc)

*Milestone: v5.12 · Assignee: Sebastian Müller*

## Issue #77: improve error message in mo_read_time_series.f90
*Stephan Thober · updated 2022-06-17 · https://git.ufz.de/mhm/mhm/-/issues/77*

In mo_read_time_series.f90, an end-of-line error occurs if there are too few lines in the file. The error message could be more detailed, stating how many lines have been read and how many are expected by the code.

*Milestone: v5.12 · Assignee: Sebastian Müller*

## Issue #241: Index "i" in L11_routing used for more than one assignment within a loop
*Pallav Kumar Shrestha · updated 2023-02-10 · https://git.ufz.de/mhm/mhm/-/issues/241*

Index `i` in `L11_routing` is used for more than one assignment within a loop: [Line 437](https://git.ufz.de/mhm/mhm/-/blob/develop/src/mRM/mo_mrm_routing.f90#L437) and [Line 451](https://git.ufz.de/mhm/mhm/-/blob/develop/src/mRM/mo_mrm_routing.f90#L451).
Currently there is no harm done. However, if there are inflow gauges and the first intended assignment of index `i` is used after the inflow gauge check block, it leads to disaster, as I learned the hard way in my lake module fork.
I recommend using a different index in the inflow gauge check (e.g. `gg_inflow`).
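The hazard can be sketched in a few lines of Python (the loop bodies are invented for illustration and have nothing to do with the actual routing code): an inner loop that reuses the outer index silently shifts every use of that index after the inner loop.

```python
values = [1, 10, 100]

# Buggy pattern: the outer loop index i is reused by an inner loop,
# so code after the inner loop silently uses the wrong index.
total_buggy = 0
for i in range(len(values)):
    for i in range(2):            # clobbers the outer i (ends at 1)
        pass
    total_buggy += values[i]      # always adds values[1]

# Fixed pattern: a distinct name for the inner index (e.g. gg_inflow).
total_fixed = 0
for i in range(len(values)):
    for gg_inflow in range(2):
        pass
    total_fixed += values[i]

print(total_buggy)  # 30  (values[1] added three times)
print(total_fixed)  # 111 (the intended sum)
```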
Tagging the main authors here: @lese @rkumar

*Milestone: v5.13.0 · Assignee: Sebastian Müller*

## Issue #239: Simple namelist files for each test domain
*Sebastian Müller · updated 2023-01-30 · https://git.ufz.de/mhm/mhm/-/issues/239*

We should add simple namelist files in each test domain folder to be able to run these with:
```bash
mhm ./test_domain
mhm ./test_domain_2
```

*Milestone: v5.13.0 · Assignee: Sebastian Müller*

## Issue #238: ExitCode still 0 after unsuccessful run
*Peter Miersch · updated 2023-01-30 · https://git.ufz.de/mhm/mhm/-/issues/238*

After an unsuccessful run of mHM with errors like
```
Reading LAI ...
***ERROR: read_nc: mHM generated x and y are not matching NetCDF dimensions
```
or
```
read precipitation ...
***ERROR: length of time dimension needs to be at least 2 in file
```
it still exits with exit code 0, suggesting that the run was successful. This makes monitoring and accounting of mHM runs more difficult, as the queuing system reports a successful run. This is especially a problem when running multiple instances of mHM in parallel. In contrast, when an input file is missing, mHM exits with exit code 1, indicating that the run failed.
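The distinction matters because schedulers and monitoring scripts only see the return code, not the log text. A small Python sketch (throwaway child processes stand in for mHM runs):

```python
import subprocess
import sys

# Stand-in for the current behavior: an error is printed, but the exit code is 0.
bad = subprocess.run([sys.executable, "-c",
                      "print('***ERROR: ...'); raise SystemExit(0)"])
# Stand-in for the desired behavior: any error implies a nonzero exit code.
good = subprocess.run([sys.executable, "-c",
                       "print('***ERROR: ...'); raise SystemExit(1)"])

# A queuing system only branches on the return codes:
print(bad.returncode)   # 0 -> looks like success despite the error message
print(good.returncode)  # 1 -> failure is visible to the scheduler
```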
From my point of view, the ideal behavior would be that mHM exits with exit code 1 if any error makes the run unsuccessful.

*Milestone: v5.13.0 · Assignee: Sebastian Müller*

## Issue #200: check_dir behaves differently with Intel (18) and GNU (83)
*Pallav Kumar Shrestha · updated 2023-01-30 · https://git.ufz.de/mhm/mhm/-/issues/200*

The `check_dir` subroutine behaves differently with Intel and GNU for long directory paths. GNU works fine with long directories, while an executable compiled with Intel, for some reason, introduces a line break during the check. As a result, the Intel-compiled executable always gets a False value from check_dir for long directory paths. Screenshots below:
**with GCC83**
![Screenshot_2021-08-05_at_22.09.54](/uploads/9ed3f187dfbd2d622c747beed52e0993/Screenshot_2021-08-05_at_22.09.54.png)
**with Intel18**
![Screenshot_2021-08-05_at_22.12.57](/uploads/c8c7cbb20acc60075aedcee486ff0947/Screenshot_2021-08-05_at_22.12.57.png)
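Whatever inserts the line break, the symptom is easy to reproduce: a directory-existence check fails as soon as a stray newline ends up inside the path string. A Python illustration (`os.path.isdir` stands in for the Fortran check inside `check_dir`):

```python
import os

path = os.getcwd()                  # a directory that certainly exists

print(os.path.isdir(path))          # True
print(os.path.isdir(path + "\n"))   # False -- one embedded line break breaks the check

# Defensive fix: strip whitespace/newlines before checking.
print(os.path.isdir((path + "\n").strip()))  # True
```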
A similar issue with line breaks was encountered in #40, which could already provide some clues to debug this.

*Milestone: v5.13.0 · Assignee: Sebastian Müller*

## Issue #83: date mHM output
*Friedrich Boeing · updated 2023-01-30 · https://git.ufz.de/mhm/mhm/-/issues/83*

In mHM 5.10 the mHM_Fluxes_States output starts one day later than the meteo input data.
Meteo input starts at 1st Jan:

```
time:units = "days since 1951-01-01-00:00:00" ;
time = 0, 1, 2, 3, 4, 5, 6, 7, ...
```

mHM output starts at 2nd Jan:

```
time:units = hours since 1951-01-01 00:00:00
time = 47, 71, 95, 119, 143, 167, ...
```

In an older mHM version (5.5) from the current drought monitor, the **same meteo input** generates output starting on the same day as the input data.
mHM output starts at 1st Jan:

```
time:units = hours since 1951-01-01 00:00:00
time = 23, 47, 71, 95, 119, 143, 167, ...
```
```fortran
warming_Days(1) = 0
eval_Per(1)%yStart = 1951
eval_Per(1)%mStart = 01
eval_Per(1)%dStart = 01
eval_Per(1)%yEnd = 1951
eval_Per(1)%mEnd = 12
eval_Per(1)%dEnd = 31
```
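The offset can be read directly off the two time axes; a short Python check (origin and first time values as in the outputs quoted above, interpreted as end-of-day timestamps of daily output):

```python
from datetime import datetime, timedelta

origin = datetime(1951, 1, 1)

# First time value of the v5.10 output vs. the older v5.5 output,
# both in hours since the origin:
print(origin + timedelta(hours=47))  # 1951-01-02 23:00 -> first output day is Jan 2
print(origin + timedelta(hours=23))  # 1951-01-01 23:00 -> first output day is Jan 1
```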
I need help to clarify this issue.

*Milestone: v5.13.0 · Assignee: Sebastian Müller*

## Issue #242: How to silence warnings (e.g. 'tmax smaller than tmin' when using Hargreaves-Samani equation)
*Peter Miersch · updated 2023-08-23 · https://git.ufz.de/mhm/mhm/-/issues/242*

When estimating evapotranspiration using the Hargreaves-Samani equation (processCase(5) = 1) with observational data (e.g. E-OBS gridded data), sometimes tmin is larger than tmax (especially for regions and time periods with poor weather station coverage; this issue is recognized by the data curators). This leads to the warning `WARNING: tmax smaller than tmin at doy xxx in year xxxx at cell xxx!`. While this warning is helpful in most cases, for optimization runs in large catchments (~10000 km2) with spotty data it severely slows down mHM by writing increasingly large log files (around 50 gigabytes after an optimization run in some cases). Leading to my question:
Is it possible to silence specific or all warnings in mHM through a 'verbosity setting' that I'm unable to find?

*Label: wishlist*

## Issue #213: LAI time coordinate needs proper clipping and selecting in v6
*Robert Schweppe · updated 2021-12-07 · https://git.ufz.de/mhm/mhm/-/issues/213*

In branch mpr_finalize there is currently no check for LAI periods (daily time step, monthly time step, yearly time step, average monthly values). Previously, the stepping method was given in the namelist and then the correct period matching the simulation period was selected.
This feature needs to be implemented as well. When reading the restart file or the data arrays from MPR, the LAI coordinate must be checked (against certain naming conventions) and the array must then be selected. The check routine in read_nc, `get_time_vector_and_select`, is already there, and the indexing is already done for the land_cover_periods. The LAI stepping method needs to be inferred and correctly set (not done so far).

*Milestone: 6.x · Assignee: Robert Schweppe*

## Issue #196: Gitlab-Runner: shallow clone
*Sebastian Müller · updated 2021-07-09 · https://git.ufz.de/mhm/mhm/-/issues/196*

We encounter some quota issues on EVE with the gitlab-runner.
To solve these, we could set the fetch depth to `10` for example:
https://docs.gitlab.com/ee/ci/large_repositories/#shallow-cloning
But we need to take care of the documentation that checks out the `master` branch to build the `stable` docs. There we could add `--no-tags` to `GIT_FETCH_EXTRA_FLAGS`:
https://docs.gitlab.com/ee/ci/large_repositories/#git-fetch-extra-flags

*Milestone: v5.11.2 · Assignee: Sebastian Müller*

## Issue #194: With mHM v5.11.1 (segmentation issue when the routing process is switched off, i.e. processCase(8) = 0)
*Eshrat Fatima · updated 2021-07-08 · https://git.ufz.de/mhm/mhm/-/issues/194*

At line 217 of file /cygdrive/d/mhm-v5.11.1/src/mHM/mo_mhm_read_config.f90 (unit = 30, file = 'mhm.nml')
Fortran runtime error: ![Error](/uploads/84d7f989310060d9a7db17028d5d57f9/Error.PNG)

*Milestone: v5.11.2 · Assignee: Rohini Kumar*

## Issue #188: MPR merge
*Stephan Thober · updated 2022-01-13 · https://git.ufz.de/mhm/mhm/-/issues/188*

Next steps:
- improve speed performance in v5.9
- Robert presents an overview of MPR speed
- Sebastian starts replacing routines from the old MPR implementation with routines from the new one
Open questions:
- how to create the LuT in netCDF

*Milestone: 6.x · Assignee: Sebastian Müller*

## Issue #187: unreachable else branch in feddes_et_reduction
*Nicola Nadine Döring · updated 2021-07-21 · https://git.ufz.de/mhm/mhm/-/issues/187*

https://git.ufz.de/mhm/mhm/-/blob/175eff0ba82690418410d8c0703694a64b7e8c8c/src/mHM/mo_soil_moisture.f90#L353
should be:
```fortran
! SM >= FC
if (soil_moist >= soil_moist_FC) then
feddes_et_reduction = frac_roots
! PW < SM < FC
else if (soil_moist > wilting_point) then
feddes_et_reduction = frac_roots * (soil_moist - wilting_point) / (soil_moist_FC - wilting_point)
! SM <= PW
else
feddes_et_reduction = 0.0_dp
end if
```

*Milestone: v5.11.2*

## Issue #186: unnecessary inout variable intent in soil_moisture
*Nicola Nadine Döring · updated 2021-07-21 · https://git.ufz.de/mhm/mhm/-/issues/186*

https://git.ufz.de/mhm/mhm/-/blob/175eff0ba82690418410d8c0703694a64b7e8c8c/src/mHM/mo_soil_moisture.f90#L141

*Milestone: v5.11.2*

## Issue #185: Conda package and development workflow
*Sebastian Müller · updated 2021-10-11 · https://git.ufz.de/mhm/mhm/-/issues/185*

# Package
We should think about adding a conda-forge package for `mHM` at least for Linux and Mac.
We could use the netcdf-fortran feedstock as a guiding light for the recipe:
https://github.com/conda-forge/netcdf-fortran-feedstock
# Development Environment
In addition, we should provide a development guide for setting up everything needed in a conda environment (based on conda-forge):
- cmake (https://anaconda.org/conda-forge/cmake)
- fortran-compiler (https://anaconda.org/conda-forge/fortran-compiler) should be gfortran on Linux/Mac
- netcdf-fortran (https://anaconda.org/conda-forge/netcdf-fortran)
- fypp (https://anaconda.org/conda-forge/fypp)
Optional:
- openmp (https://anaconda.org/conda-forge/llvm-openmp)
- mpi (https://anaconda.org/conda-forge/mpi)
Testing:
- valgrind (https://anaconda.org/conda-forge/valgrind)
- pfUnit (not yet added: https://github.com/conda-forge/staged-recipes/pull/13282)
- Python: `pexpect`, `numpy`, `xarray`, `pandas` (for `run_mhm_checks.py`)
Documentation:
- doxygen (https://anaconda.org/conda-forge/doxygen)
- texlive-core (no pdflatex at the moment: https://github.com/conda-forge/texlive-core-feedstock/issues/19)

*Milestone: 6.x · Assignee: Sebastian Müller*

## Issue #174: Cygwin - Segmentation fault (core dumped) when using "cmake -DCMAKE_WITH_OpenMP:STRING=ON .."
*Mehmet Cüneyd Demirel · updated 2021-03-03 · https://git.ufz.de/mhm/mhm/-/issues/174*

Dear @kaluza,
I could compile mhm-v5.11.0-rc1 with cmake -DCMAKE_WITH_OpenMP:STRING=ON ..
However, I am getting a segmentation error, as shown below.
OpenMP was working when using `make` with older versions of mHM, e.g. v5.10_fixed.
If you can comment on this, it would be great. I can try the develop version if some updates were done.
My cygwin info:
![openmp](/uploads/7d11366a8630c774fdc0a5381ec748ae/openmp.png)
Error:
![segment](/uploads/4607d77ec11381c2d6c38d2c413854dd/segment.png)

*Milestone: v5.11.1 · Assignee: Sebastian Müller*

## Issue #170: mhm output as float?
*Oldrich Rakovec · updated 2021-07-08 · https://git.ufz.de/mhm/mhm/-/issues/170*

Dear mHM admins,
Please, could there be an option (a flag in mhm.nml) to print out NetCDF variables in float instead of double?
This could squeeze the file size in some ongoing projects (mainly HICAM, ESM) in which we hit the EVE limits.
Thanks,
Olda
@lese @muellese @thober @rkumar @boeing @marxa

*Milestone: v5.11.2 · Assignee: Sebastian Müller*