mHM issues
https://git.ufz.de/mhm/mhm/-/issues

---

Issue #53: Remove branch: domain_parallelization
https://git.ufz.de/mhm/mhm/-/issues/53 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``domain_parallelization``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Maren Kaluza | Due: 2019-12-01

---

Issue #54: Remove branch: optidata_management
https://git.ufz.de/mhm/mhm/-/issues/54 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``optidata_management``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Maren Kaluza | Due: 2019-12-01

---

Issue #55: Remove branch: process_dependent_output
https://git.ufz.de/mhm/mhm/-/issues/55 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``process_dependent_output``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Maren Kaluza | Due: 2019-12-01

---

Issue #59: Remove branch: preparing_mhm_eval_for_parallel_io
https://git.ufz.de/mhm/mhm/-/issues/59 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``preparing_mhm_eval_for_parallel_io``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Maren Kaluza | Due: 2019-12-01

---

Issue #60: Remove branch: CR_ET
https://git.ufz.de/mhm/mhm/-/issues/60 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``CR_ET``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Johannes Brenner | Due: 2019-12-01

---

Issue #61: Remove branch: domain_parallelization_mrm_subrepo
https://git.ufz.de/mhm/mhm/-/issues/61 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``domain_parallelization_mrm_subrepo``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Maren Kaluza | Due: 2019-12-01

---

Issue #56: Remove branch: MDF_inclusion
https://git.ufz.de/mhm/mhm/-/issues/56 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``MDF_inclusion``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Maren Kaluza | Due: 2019-12-01

---

Issue #62: Remove branch: CR_et_mrm_subrepo
https://git.ufz.de/mhm/mhm/-/issues/62 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``CR_et_mrm_subrepo``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Johannes Brenner | Due: 2019-12-01

---

Issue #64: Remove branch: nag_compilation
https://git.ufz.de/mhm/mhm/-/issues/64 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``nag_compilation``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Maren Kaluza | Due: 2019-12-01

---

Issue #65: Remove branch: mHM_parallel
https://git.ufz.de/mhm/mhm/-/issues/65 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``mHM_parallel``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Maren Kaluza | Due: 2019-12-01

---

Issue #66: Remove branch: mHM_parallel_dev_pertesta
https://git.ufz.de/mhm/mhm/-/issues/66 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``mHM_parallel_dev_pertesta``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Maren Kaluza | Due: 2019-12-01

---

Issue #67: Remove branch: mk-neutron
https://git.ufz.de/mhm/mhm/-/issues/67 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``mk-neutron``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Maren Kaluza | Due: 2019-12-01

---

Issue #68: Remove branch: corMU
https://git.ufz.de/mhm/mhm/-/issues/68 | updated 2020-01-08 | reported by Sebastian Müller
Please remove the branch ``corMU``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request when your feature is ready.
Thanks a lot!
Milestone: slimming down repository | Assignee: Johannes Brenner | Due: 2019-12-01

---

Issue #1: Develop branch does not compile with NAG
https://git.ufz.de/mhm/mhm/-/issues/1 | updated 2023-05-23 | reported by Robert Schweppe
Having set the newest NAG 6.2 compiler on EVE, we can now test mHM again. When doing `make compiler=nag system=eve release=debug -j 4`
we get the following:
```
Questionable: /gpfs0/home/ottor/lib/mhm/src/mRM/mo_mrm_river_head.f90,
line 162: Variable NCOLS0 set but never referenced
Questionable: /gpfs0/home/ottor/lib/mhm/src/mRM/mo_mrm_river_head.f90,
line 162: Variable NROWS0 set but never referenced
Error: /gpfs0/home/ottor/lib/mhm/src/mRM/mo_mrm_river_head.f90,
line 317: Implicit type for ISNAN in CALC_SLOPE
[NAG Fortran Compiler pass 1 error termination, 1 error, 2 warnings]
```
I assign @schueler as he is the creator of the file.
Assignee: Lennart Schüler

---

Issue #2: mRM standalone write restart not working
https://git.ufz.de/mhm/mhm/-/issues/2 | updated 2023-05-23 | reported by Stephan Thober
Switching write_restart = .True. in mrm.nml leads to:
Log-file written to test_basin/ConfigFile.log
Writing mRM restart file to test_basin/restart/mRM_restart_001.nc ...
Failed to write data into variable: L1_basin_lat
NetCDF: HDF error
STOP 1
Milestone: mHM release | Assignee: Stephan Thober

---

Issue #8: check_files.py not working
https://git.ufz.de/mhm/mhm/-/issues/8 | updated 2023-05-23 | reported by Stephan Thober
check_files.py does not identify differences in files when there are differences. These differences are correctly found by `cdo diffn`.
Assignee: Robert Schweppe

---

Issue #9: Failing check-case #2
https://git.ufz.de/mhm/mhm/-/issues/9 | updated 2020-02-14 | reported by Oldrich Rakovec

---

Issue #10: Muskingum: mRM initialization takes much more time in release 5.9 than in release 5.8 (measurable only for large domains)
https://git.ufz.de/mhm/mhm/-/issues/10 | updated 2020-02-14 | reported by Oldrich Rakovec
Dear all,
Given some inputs from an external user, and also confirmed by Pallav, there seems to be a **non-negligible difference in mRM initialization time for routing case 1 (= Muskingum) between release 5.8 and 5.9**.
I have recently checked this myself on one EDgE basin (~120,000 km²), 30 years of simulation, compiled with Intel 18:
mHM release 5.8: total runtime 13 minutes
mHM release 5.9: total runtime 22 minutes **(~11 minutes MPR initialization)**
I did the same for a smaller basin (~25,000 km²); the differences were smaller, but still appeared.
Does someone of you have a clue?
Lastly, there are no differences in runtime for the adaptive time stepping (routing case #2).
Thanks!
Olda

---

Issue #11: Potential bug in mo_multi_param_reg.f90
https://git.ufz.de/mhm/mhm/-/issues/11 | updated 2020-02-14 | reported by Robert Schweppe
When the fractions of the different land cover types are calculated in `mo_multi_param_reg.f90`:
```
fSealed1(:, 1, iiLC) = fracSealed_CityArea * fSealed1(:, 1, iiLC)
fPerm1(:) = fPerm1(:) + (1.0_dp - fracSealed_CityArea) * fSealed1(:, 1, iiLC)
! to make sure everything happens smoothly
fForest1(:) = fForest1(:) / (fForest1(:) + fSealed1(:, 1, iiLC) + fPerm1(:))
fSealed1(:, 1, iiLC) = fSealed1(:, 1, iiLC) / (fForest1(:) + fSealed1(:, 1, iiLC) + fPerm1(:))
fPerm1(:) = fPerm1(:) / (fForest1(:) + fSealed1(:, 1, iiLC) + fPerm1(:))
```
the 2nd line contains, IMHO, a bug: fPerm1 must be based on the original fSealed, not the corrected one. One possible alternative could be, for example:
```
fSealed1(:, 1, iiLC) = fracSealed_CityArea * fSealed1(:, 1, iiLC)
fPerm1(:) = 1.0_dp - fSealed1(:, 1, iiLC) - fForest1(:)
```
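The closure violation can be checked numerically; below is a sketch in Python with made-up fractions (the names mirror the Fortran variables, and the value of `frac_sealed_city_area` is purely illustrative):

```python
# Made-up L0 fractions that initially close to 1.
frac_sealed_city_area = 0.6
f_forest, f_sealed, f_perm = 0.3, 0.2, 0.5

# current code: fSealed1 is corrected first, then fPerm1 uses the
# already-corrected value
f_sealed_new = frac_sealed_city_area * f_sealed
f_perm_buggy = f_perm + (1.0 - frac_sealed_city_area) * f_sealed_new

# presumably intended: fPerm1 absorbs the area removed from the
# ORIGINAL fSealed1, which keeps the three fractions closing to 1
f_perm_intended = f_perm + (1.0 - frac_sealed_city_area) * f_sealed

print(f_forest + f_sealed_new + f_perm_buggy)     # does not close to 1
print(f_forest + f_sealed_new + f_perm_intended)  # closes to 1
```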
There are multiple ways to implement that, but this one is the neatest, I think, and it makes the last checks obsolete. fForest1(:) + fSealed1(:, 1, iiLC) + fPerm1(:) should always sum to 1, as they come directly from the L0 land cover fields. @rakovec Please give some feedback and decide what is to be done. It influences all the snow parameters, and fSealed is also used in many parts of the program. The code has also been there in version 5.8.

---

Issue #12: Discussion needed on calculation of aerodynamic resistance
https://git.ufz.de/mhm/mhm/-/issues/12 | updated 2020-02-14 | reported by Robert Schweppe
If PET is calculated using Penman-Monteith, the aerodynamic resistance needs to be calculated. This happens in subroutine `aerodynamic_resistance` in `mo_multi_param_reg.f90`. It was implemented by Matthias Zink and cites an FAO publication as a source. Now here is the problem:
The FAO paper uses a calculation based on a paper by Jacobs (2002) that seems to be valid for the grass reference surface. What I found in a brief internet search is that it is most commonly used for canopy heights of crops up to 2 m. However, we also use that function for forest canopies. The canopy height for forests in mHM is controlled by a parameter (`param(1)`) and can range from 15 (default) to 40 m. The height of the wind measurement (`zm`) is a parameter and is set to 10 m.
To make that function work, `canopy_height0` is added to `zm` if `canopy_height0` is higher than `zm`. I do not know if that is really the way the function is supposed to be used.
Here is the relevant code:
```
! regionalization of canopy height
! substitute with canopy height
canopy_height0 = merge(param(1), canopy_height0, LCover0 == 1) ! forest
canopy_height0 = merge(param(2), canopy_height0, LCover0 == 2) ! impervious
! old implementation used values from LUT statically for all cells (Jan-Dec, 12 values):
! 7 Intensive-orchards: 2.0, 2.0, 2.0, 2.0, 3.0, 3.5, 4.0, 4.0, 4.0, 2.5, 2.0, 2.0
maxLAI = MAXVAL(LAI0, dim=2)
do tt = 1, size(LAI0, 2)
! pervious canopy height is scaled with LAI
canopy_height0 = merge((param(3) * LAI0(:, tt) / maxLAI), canopy_height0, LCover0 == 3) ! pervious
! estimation of the aerodynamic resistance on the lower level
! see FAO Irrigation and Drainage Paper No. 56 (p. 19 ff) for more information
zm = WindMeasHeight
! correction: if wind measurement height is below canopy height the logarithm becomes negative
->!! zm = merge(canopy_height0 + zm, zm, ((abs(zm - nodata_dp) .GT. eps_dp) .AND. (zm .LT. canopy_height0)))
! zh = zm
displace = param(4) * canopy_height0
zm_zero = param(5) * canopy_height0
zh_zero = param(6) * zm_zero
!
! calculate aerodynamic resistance (changes monthly)
aerodyn_resistance0(:, tt) = log((zm - displace) / zm_zero) * log((zm - displace) / zh_zero) / (karman**2.0_dp)
aerodyn_resistance1(:, tt) = upscale_arithmetic_mean(n_subcells1, upper_bound1, lower_bound1, &
left_bound1, right_bound1, Id0, mask0, nodata_dp, aerodyn_resistance0(:, tt))
end do
```
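The jump produced by that correction can be sketched in a few lines (Python; the values used for `param(4:6)` and the wind measurement height below are illustrative stand-ins, not mHM defaults):

```python
import math

KARMAN = 0.41  # von Karman constant

# Illustrative stand-ins only: NOT the mHM defaults for param(4:6);
# they mimic FAO-56-like ratios (d = 2/3 h, z0m = 0.123 h).
WIND_MEAS_HEIGHT = 10.0  # zm [m]
P_DISPLACE, P_ZM_ZERO, P_ZH_ZERO = 0.67, 0.123, 0.1

def aerodyn_resistance_numerator(canopy_height):
    """log((zm-d)/z0m) * log((zm-d)/z0h) / karman**2, with the mHM-style
    correction: raise zm by the canopy height once the canopy grows
    above the measurement height."""
    zm = WIND_MEAS_HEIGHT
    if zm < canopy_height:  # the merge(...) correction quoted above
        zm = canopy_height + zm
    displace = P_DISPLACE * canopy_height
    zm_zero = P_ZM_ZERO * canopy_height
    zh_zero = P_ZH_ZERO * zm_zero
    return (math.log((zm - displace) / zm_zero)
            * math.log((zm - displace) / zh_zero) / KARMAN ** 2)

# the criticised discontinuity: a 2 cm change in canopy height moves
# the effective measurement height from 10 m to ~20 m
print(aerodyn_resistance_numerator(9.99), aerodyn_resistance_numerator(10.01))
```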
So my questions are (before I adapt this to the new MPR) @lese, @rakovec, @rkumar, @thober:
* are we okay with using this function for calculation of aerodynamic resistance for forests?
* if yes, is the correction as Matthias did it okay, or do we come up with a smoother solution? Because now: canopy_height: 9.99 m -> zm = 10.0 m, but canopy_height: 10.0 m -> zm = 20.0 m
* if no, what other transfer function do we want?

---

Issue #13: mo_read_forcing_nc.f90 does not support reading monthly climatology of LAI
https://git.ufz.de/mhm/mhm/-/issues/13 | updated 2020-02-14 | reported by Oldrich Rakovec
The consistency checks for meteorological files do not allow reading in climatological LAI values using mo_read_forcing_nc.f90 (monthly values have a time dimension of 12). I will investigate this issue later on; for the time being, be aware ... @ottor @shresthap @rkumar @thober @lese
We need to have a check case for reading the LAI climatology ("timeStep_LAI_input = -2").

---

Issue #20: MPR_STANDALONE does not compile
https://git.ufz.de/mhm/mhm/-/issues/20 | updated 2020-02-14 | reported by Robert Schweppe
There is an error thrown at compile time in `mpr_driver.f90` about missing kind parameters.

---

Issue #21: Field capacity estimation in mo_mpr_soilmoist.f90
https://git.ufz.de/mhm/mhm/-/issues/21 | updated 2020-02-14 | reported by Andreas Marx
Please check:
FC values for Germany were too low compared to available publications. Therefore I checked the code, found that Twarakavi et al. (2009, WRR) EQ7 differs from what is coded, and made changes in two lines:
```fortran
elemental pure subroutine field_cap( thetaFC, &                ! Output
                                     Ks, thetaS, Genu_Mual_n ) ! Input
  [...]
  ! x = (field_cap_c1) * (field_cap_c2 + log10( Ks ))
  ! thetaFC = thetaS * exp( x * log(Genu_Mual_n) )
  x = field_cap_c1 * (field_cap_c2 + log10( Ks ) ) ! (Twarakavi, et. al. 2009, WRR) EQ7 exponent
  thetaFC = Genu_Mual_n ** x
```
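The numerical effect of the two changed lines can be sketched as follows (Python; all constants and soil values below are made up for illustration and are not the mHM parameters):

```python
import math

# Hypothetical illustrative values, NOT mHM parameters.
field_cap_c1, field_cap_c2 = -0.60, 2.0
Ks = 25.0           # saturated hydraulic conductivity, assumed
thetaS = 0.45       # saturated water content, assumed
Genu_Mual_n = 1.5   # van Genuchten shape parameter, assumed

x = field_cap_c1 * (field_cap_c2 + math.log10(Ks))  # EQ7 exponent

thetaFC_old = thetaS * math.exp(x * math.log(Genu_Mual_n))  # code before the change
thetaFC_new = Genu_Mual_n ** x                              # changed line

# exp(x * ln n) == n**x, so the only difference is the thetaS factor;
# since thetaS < 1, the patched FC is always larger than the old one,
# consistent with the report that FC values were previously too low.
print(thetaFC_old, thetaFC_new)
```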
The result is that FC values are now in the expected range for Germany. aET values now show lower values in summer (~0.35 mm/day) and are almost unchanged in winter; please keep in mind that I did not re-calibrate the model.

---

Issue #23: Potential bug in mo_mpr_soilmoist
https://git.ufz.de/mhm/mhm/-/issues/23 | updated 2020-12-01 | reported by Robert Schweppe
I think I found a bug in this loop calculating the soil properties:
```
do i = 1, size(is_present)
if (is_present(i) .lt. 1) cycle
horizon : do j = 1, nHorizons(i)
! calculating vertical hydraulic conductivity
call hydro_cond(Ks_tmp, param(10 : 13), sand(i, j), clay(i, j))
Ks(i, j, :) = Ks_tmp
! calculating other soil hydraulic properties
! tillage horizons
if (j .le. nTillHorizons(i)) then
! LC class
do L = 1, max_LCover
select case (L)
case(1) ! forest
pOM = tmp_orgMatterContent_forest
case(2) ! impervious
pOM = tmp_orgMatterContent_impervious !param(2)
case(3) ! permeable
pOM = tmp_orgMatterContent_pervious
case default
stop 'Error mpr_sm: pOM used uninitialized.'
end select
pM = 100.0_dp - pOM
! bulk density acording to Rawl's (1982) paper
Db(i, j, L) = 100.0_dp / ((pOM / BulkDens_OrgMatter) + (pM / DbM(i, j)))
! Effect of organic matter content
! This is taken into account in a simplified form by using
! the ratio of(Bd / BdOM)
Ks_tmp = Ks_tmp * (DbM(i, j) / Db(i, j, L)) !!!!!!!!! <-- this is the critical line, Ks_tmp gets updated
Ks(i, j, L) = Ks_tmp
! estimated SMs_till & van Genuchten's shape parameter (n)
call Genuchten(thetaS_till(i, j, L), Genu_Mual_n, Genu_Mual_alpha, &
param(4 : 9), sand(i, j), clay(i, j), Db(i, j, L))
! estimating field capacity
call field_cap(thetaFC_till(i, j, L), Ks_tmp, thetaS_till(i, j, L), Genu_Mual_n)
! estimating permanent wilting point
call PWP(Genu_Mual_n, Genu_Mual_alpha, thetaS_till(i, j, L), thetaPW_till(i, j, L))
end do
! deeper layers
else
! estimate SMs & van Genuchten's shape parameter (n)
call Genuchten(thetaS(i, j - tmp_minSoilHorizon), Genu_Mual_n, Genu_Mual_alpha, &
param(4 : 9), sand(i, j), clay(i, j), DbM(i, j))
! estimate field capacity
call field_cap(thetaFC(i, j - tmp_minSoilHorizon), &
Ks_tmp, thetaS(i, j - tmp_minSoilHorizon), Genu_Mual_n)
! estimate permanent wilting point
call PWP(Genu_Mual_n, Genu_Mual_alpha, thetaS(i, j - tmp_minSoilHorizon), &
thetaPW(i, j - tmp_minSoilHorizon))
end if
end do horizon
end do
```
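The carry-over of `Ks_tmp` across the land-cover loop can be illustrated with a small sketch (Python; the density values are made up, and in the real code `Db` would come from the Rawls bulk-density formula above):

```python
# Made-up values for illustration only.
ks_base = 10.0                    # Ks_tmp as returned by hydro_cond
db_mineral = 1.5                  # DbM(i, j)
db = {1: 1.35, 2: 1.48, 3: 1.40}  # Db(i, j, L) per land cover class

# current code: Ks_tmp carries over from one land cover to the next
ks_tmp = ks_base
ks_buggy = {}
for lc in (1, 2, 3):
    ks_tmp = ks_tmp * (db_mineral / db[lc])  # the criticised line
    ks_buggy[lc] = ks_tmp

# presumably intended: apply each density ratio to the base value once
ks_intended = {lc: ks_base * (db_mineral / db[lc]) for lc in (1, 2, 3)}

# land cover 1 agrees, but land cover 3 has accumulated all three ratios
print(ks_buggy, ks_intended)
```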
The problem is with the line `Ks_tmp = Ks_tmp * (DbM(i, j) / Db(i, j, L))`: in each iteration over the land covers, `Ks_tmp` gets updated. Effectively, the `Ks_tmp` for the pervious land cover type (3) is `Ks_tmp * DbMineral / Dbforest * DbMineral / Dbimpervious * DbMineral / Dbpervious`.
Milestone: 6.x | Assignee: Robert Schweppe

---

Issue #24: Rewrite the SM distribution between PWP and FC
https://git.ufz.de/mhm/mhm/-/issues/24 | updated 2019-06-04 | reported by Oldrich Rakovec

---

Issue #26: Get rid of Numerical Recipes
https://git.ufz.de/mhm/mhm/-/issues/26 | updated 2020-12-01 | reported by Robert Schweppe
Currently in our lib folder we have `mo_corr.f90`, with modules calculating mainly autocorrelation metrics. Two procedures from therein are used in `mo_mrm_signatures.f90`, which itself is not currently used in the code.
As there will be some restructuring again in order to get #22 done, the lib module will become a separate repo and should not contain anything not licensed under the GPL. So `mo_corr.f90`, which is based on Numerical Recipes and not free to use, needs to go. @lese called for a rewrite of the relevant function so we can keep `mo_mrm_signatures.f90` in the code. For now, I deleted that file in the new repository https://git.ufz.de/chs/lightweight_fortran_lib. Please add a new version of `mo_corr.f90` there and reinsert `mo_mrm_signatures.f90` in the branch for #22.
Milestone: 5.11 | Assignee: Sebastian Müller

---

Issue #27: Adaptive routing does not allow to run without at least 1 gauge specification
https://git.ufz.de/mhm/mhm/-/issues/27 | updated 2020-10-07 | reported by Oldrich Rakovec
Dear all,
A segmentation fault appears for adaptive routing (processCase(8) = 2) if nGaugesTotal = 0 && NoGauges_basin(1) = 0 && dir_Gauges(1) = "./", i.e., Qobs is not provided.
One selects this option when one is interested in the gridded runoff only.
Note that processCase(8) = 1 handles this option.
Could @thober check, where could be the problem in mRM?
A possible workaround is to create idgauges.asc with at least one gauge, but @lese and I thought it better to fix this in the code as well ...
Thanks!
Olda
Milestone: 5.11 | Assignee: Stephan Thober

---

Issue #29: I/O of L1_jarvis_thresh_c1 to restart file for process id 2 AND 3
https://git.ufz.de/mhm/mhm/-/issues/29 | updated 2020-12-01 | reported by Robert Schweppe
Currently it is only read from and written to restart files if processmatrix(3,1) == 2; we have to include 3 as well.
Milestone: 5.11 | Assignee: Robert Schweppe

---

Issue #49: FinalParam.nml routing section
https://git.ufz.de/mhm/mhm/-/issues/49 | updated 2020-12-01 | reported by Mehmet Cüneyd Demirel
Hello,
When I use the default test_basin set-up (nbasin=1) in optimize=1 mode, the resultant "FinalParam.nml" file should have "&routing3" between lines 69 and 70, as in the other processes:
```
! percolation
&percolation1
rechargeCoefficient = 0.0000000000000000 , 50.000000000000000 , 32.113965205509707 , 1 , 1
rechargeFactor_karstic = -5.0000000000000000 , 5.0000000000000000 , -1.0000000000000000 , 1 , 1
gain_loss_GWreservoir_karstic = 1.0000000000000000 , 1.0000000000000000 , 1.0000000000000000 , 0 , 1
/
! routing
slope_factor = 0.10000000000000001 , 100.00000000000000 , 30.000000000000000 , 0 , 1
/
```
[Sample file](https://www.dropbox.com/s/yeh1hlzeeu91hse/FinalParam.nml?dl=0)
p.s. minor issue: test_basin2 gauge/meteo should also have 1990-1993 data for a smooth example in mhm.nml
If one starts a dummy calibration, s/he gets this error below.
"Simulation period is not covered by observations! test_basin_2/input/gauge/45.txt"5.11Stephan ThoberStephan Thober2019-11-08https://git.ufz.de/mhm/mhm/-/issues/128mHM initialization fails for small basins: percentile_0d_dp: n < 22020-09-15T17:19:28+02:00Oldrich RakovecmHM initialization fails for small basins: percentile_0d_dp: n < 2Hi @thober @shresthp (cc @kaluza)
[update 6.7.2020]
After re-running all the 5000+ global basins with the latest mHM develop setup, routing process = 3;
20% of the basin runs are broken during the mRM initialization due to `percentile_0d_dp: n < 2`
That was not the case earlier (with revision 8271b54 and routing process = 2).
Initially I thought it came with the bugfix in `./src/mRM/mo_mrm_net_startup.f90` saying:
```fortran
! Stephan Thober, Pallav Kumar Shrestha, Sep 2020 - bug fix in cut off Length at 40 percentile, neglecting links with -9999. that occur if multiple outlets are present
```
lines 1434-1438:
```fortran
! cut off Length at 40 percentile to neglect short paths in headwaters
if ((processMatrix(8, 1) .eq. 2) .or. (processMatrix(8, 1) .eq. 3)) then
length = percentile(pack(nLinkLength(:), nLinkLength(:) .ge. 0._dp), 40._dp)
nLinkLength(:) = merge(nLinkLength(:), length, (nLinkLength(:) .gt. length))
end if
```
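One way this abort can happen is sketched below (Python; `percentile_40` is a hypothetical stand-in, not the actual mHM percentile routine): after the `pack(...)` filter removes links flagged -9999., a small headwater basin can be left with fewer than two valid link lengths.

```python
# Sketch of the failure mode; values are made up for illustration.
def percentile_40(values):
    """Linear-interpolation 40th percentile that, like the Fortran
    routine in the error message, refuses samples with n < 2."""
    vals = sorted(values)
    if len(vals) < 2:
        raise ValueError("percentile_0d_dp: n < 2")
    pos = 0.4 * (len(vals) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(vals) - 1)
    return vals[lo] + (pos - lo) * (vals[hi] - vals[lo])

# a tiny headwater basin: one link flagged -9999. and one real link
link_lengths = [-9999.0, 1200.0]
valid = [l for l in link_lengths if l >= 0.0]  # the pack(...) step
try:
    percentile_40(valid)  # only one valid link remains -> aborts
except ValueError as err:
    print(err)
```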
But it looks like the reason is elsewhere, as it persists after rolling back to before the aforementioned bugfix.
Any hints appreciated (@kaluza?)
Thanks,
Olda
Milestone: 5.11 | Assignee: Maren Kaluza

---

Issue #149: Snow melt flux
https://git.ufz.de/mhm/mhm/-/issues/149 | updated 2022-06-17 | reported by Stephan Thober
This commit integrates the snow melt flux:
https://git.ufz.de/ulysses/mhm_src/-/commit/87d04c8ad16fc5627289e149097b2512f5b94e19
Milestone: v5.12 | Assignee: Sebastian Müller

---

Issue #71: mRM output file of discharge.nc does not have missing_value nor Fill_Value defined as -9999
https://git.ufz.de/mhm/mhm/-/issues/71 | updated 2020-10-08 | reported by Oldrich Rakovec
Milestone: 5.11 | Assignee: Stephan Thober

---

Issue #34: Renaming basins to domain
https://git.ufz.de/mhm/mhm/-/issues/34 | updated 2020-12-01 | reported by Oldrich Rakovec
- change this in the namelists
- change also the variable names
Milestone: 5.11

---

Issue #35: More details on developers in the mHM manual
https://git.ufz.de/mhm/mhm/-/issues/35 | updated 2019-06-04 | reported by Oldrich Rakovec
@lese could you please specify the contributions of the mHM developers in the mHM manual prior to release 5.10?
Milestone: mHM release | Assignee: Luis Samaniego (luis.samaniego@ufz.de)

---

Issue #40: Finalparam.nml format with Intel
https://git.ufz.de/mhm/mhm/-/issues/40 | updated 2021-08-05 | reported by Pallav Kumar Shrestha (pallav-kumar.shrestha@ufz.de)
The Finalparam.nml from an Intel-compiled executable is not compatible as input with GNU-compiled executables. Intel for some reason includes unwanted line breaks which GNU doesn't accept (Intel works with it, though).
Finalparam.nml from INTEL:
![Screenshot_2019-05-27_at_12.00.51](/uploads/d3b2f25b8c473fad4ff43b44f26ccd5f/Screenshot_2019-05-27_at_12.00.51.png)
Finalparam.nml from GNU:
![Screenshot_2019-05-27_at_12.14.20](/uploads/bc31e7488c21e86334d17527803dff6d/Screenshot_2019-05-27_at_12.14.20.png)
Milestone: 5.11

---

Issue #42: Potential issue with building mhm via cmake
https://git.ufz.de/mhm/mhm/-/issues/42 | updated 2020-12-01 | reported by Marius Oster
Both @extluedt and I have issues with building mhm with the makefile generated by cmake in the current develop branch.
![cmake](/uploads/4aa344e9935645ed7e3c25585e4b8a79/cmake.PNG)
Getting the makefile with cmake seems to work fine as far as I can tell, but running 'make' on it fails
![make](/uploads/6fcbf2207826885f6826ef8acc5b70af/make.PNG)
For both me and Stefan building the model via the Makefile (manual edits) from the master branch works just fine.
*EDIT*
It seems that, when cloning from GitLab, not all submodules were properly included, causing missing files.
This error is thus resolved.
However, the model still cannot be built (see screencap).
![make_new_error](/uploads/ec1aa6fd30b563dd0dc5b178c5bdd204/make_new_error.PNG)
This is using the libraries recommended in install.md.
For completeness' sake it should be noted that cmake works just fine using Cygwin, generating the mhm executable.
Milestone: 5.11

---

Issue #45: Missing meta information on dimensions in restart files
https://git.ufz.de/mhm/mhm/-/issues/45 | updated 2020-12-01 | reported by Robert Schweppe
There is a problem with reading from restart files. Since mHM 5.9, MPR is always executed before the call to mhm_eval. When reading restart files, all parameters for all land cover scenes, domains and soil horizons must be contained in the restart file.
Currently, there is no meta information contained on land cover scenes or soil horizons. Also, there is no check in mHM if the land cover scenes or soil horizons in the restart file are consistent with the eval_period or soil layering as specified in the namelist.
There is some preliminary work on this in the branch for #22 that needs to be merged. It introduces dimension bounds in the restart files, extensive checking of these bounds, and the selection of the correct scenes and layers.
@ottor will work on that and present the presumed changes of the implementation at the next mHM devel meeting.5.11Robert SchweppeRobert Schweppehttps://git.ufz.de/mhm/mhm/-/issues/46missing soilmoisture parameters for Jarvis equation (option 3) after optimisa...2020-12-01T09:07:19+01:00Johannes Brennermissing soilmoisture parameters for Jarvis equation (option 3) after optimisationI spot missing @soilmoisture3 in mhm_parameter.nml after optimisation.
Feddes (option 1) and Jarvis (option 2) are fine.
[mhm_parameter.nml](/uploads/5488bd5827a464879f279c7043f080ea/mhm_parameter.nml)I spot missing @soilmoisture3 in mhm_parameter.nml after optimisation.
Feddes (option 1) and Jarvis (option 2) are fine.
[mhm_parameter.nml](/uploads/5488bd5827a464879f279c7043f080ea/mhm_parameter.nml)5.11https://git.ufz.de/mhm/mhm/-/issues/103Remap types duplication2020-10-08T12:03:47+02:00Pallav Kumar Shresthapallav-kumar.shrestha@ufz.deRemap types duplicationmo_common_variables and mo_mrm_global_variables both have declarations for l0_l11_remap and l1_l11_remap. @thober
* Is this necessary or duplication?
* If it's duplication, where should the declaration be housed?
The ![Screenshot_20...mo_common_variables and mo_mrm_global_variables both have declarations for l0_l11_remap and l1_l11_remap. @thober
* Is this necessary or duplication?
* If it's duplication, where should the declaration be housed?
The ![Screenshot_2020-04-27_at_18.53.40](/uploads/1e1e2c8fc179331142e58c0affa916e8/Screenshot_2020-04-27_at_18.53.40.png)5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/180replace the ascii input files by netcdf input files2021-06-09T09:40:37+02:00Robert Schweppereplace the ascii input files by netcdf input filesAll 'ASCII' input files need to be replaced by 'netCDF' files. This will ensure future compatibility with the new MPR tool.
This will also lead to a change in orientation as ASCII files have their point of reference in the top left wher...All 'ASCII' input files need to be replaced by 'netCDF' files. This will ensure future compatibility with the new MPR tool.
This will also lead to a change in orientation, as ASCII files have their point of reference in the top left, whereas netCDF files have it in the lower left corner. This requires re-sorting all input (including meteorology) and output files along the latitude coordinate.
All test cases will likely need to be redefined as the precision is different with ASCII and netCDF files.
We need also to come up with reasonable names for the x, y, z and land_cover_period coordinates in the netCDF files.
Additionally, we need to introduce a system of handling cell-dependent variable soil depths that mimics the lookup-table based approach of the `soilclass_definiton.txt` file.
We might reorganize the input folder for that by separating the files based on their usage for mHM/MPR, the routing and the optimization.MPR integrationRobert SchweppeRobert Schweppe2021-05-14https://git.ufz.de/mhm/mhm/-/issues/177make FORCES library available to mHM2021-06-09T09:41:11+02:00Robert Schweppemake FORCES library available to mHMWe need to make the FORCES library available to mHM. This should be done using native CMake commands (e.g. [FetchContent](https://cmake.org/cmake/help/latest/module/FetchContent.html) or [CMake CPM](https://github.com/cpm-cmake/CPM.cmake...We need to make the FORCES library available to mHM. This should be done using native CMake commands (e.g. [FetchContent](https://cmake.org/cmake/help/latest/module/FetchContent.html) or [CMake CPM](https://github.com/cpm-cmake/CPM.cmake). Ideally we try to find a system where FORCES is searched by find_package and if not found, it falls back to cloning and compiling and linking the repository itself.
In a first step, as find_package will not work, we make the cloning and compiling approach work.
Later (and optionally), FORCES thus needs to be available as a (static) library and there should be an install script (so find_package works).
Ideally, but not necessarily, FORCES is in version 1.0.MPR integrationRobert SchweppeRobert Schweppe2021-04-06https://git.ufz.de/mhm/mhm/-/issues/178make MPR library available to mHM2021-06-09T09:41:01+02:00Robert Schweppemake MPR library available to mHMWe need to make the MPR library available to mHM. This should be done using native CMake commands (e.g. [FetchContent](https://cmake.org/cmake/help/latest/module/FetchContent.html) or [CMake CPM](https://github.com/cpm-cmake/CPM.cmake). ...We need to make the MPR library available to mHM. This should be done using native CMake commands (e.g. [FetchContent](https://cmake.org/cmake/help/latest/module/FetchContent.html) or [CMake CPM](https://github.com/cpm-cmake/CPM.cmake). Ideally we try to find a system where MPR is searched by find_package and if not found, it falls back to cloning and compiling and linking the repository itself.
In a first step, as find_package will not work, we make the cloning and compiling approach work. Later (and optionally), MPR thus needs to be available as a (static) library and there should be an install script (so find_package works).
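The search-then-fallback pattern described above could be sketched roughly like this (a sketch only; the `MPR` package/target names, repository URL and tag are assumptions for illustration):

```cmake
# Sketch: prefer an installed MPR, otherwise fetch and build it ourselves.
# Package name, target name, URL and tag are illustrative assumptions.
find_package(MPR QUIET)
if(NOT MPR_FOUND)
  include(FetchContent)
  FetchContent_Declare(mpr
    GIT_REPOSITORY https://git.ufz.de/chs/mpr.git  # assumed URL
    GIT_TAG        v1.0                            # assumed tag
  )
  FetchContent_MakeAvailable(mpr)
endif()
target_link_libraries(mhm PUBLIC MPR::MPR)
```

`FetchContent_MakeAvailable` requires CMake >= 3.14; with FetchContent the dependency is built as part of the mHM build, which matches the first-step cloning-and-compiling approach.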
Ideally, but not necessarily, MPR is in version 1.0.MPR integrationRobert SchweppeRobert Schweppe2021-04-06https://git.ufz.de/mhm/mhm/-/issues/22Couple mHM with MPR 1.02022-01-13T17:11:03+01:00Robert SchweppeCouple mHM with MPR 1.0The new MPR needs to be coupled to mHM. We will make use of the submodules feature of git to enable the coupling of the now external project. Prior to that, we will need to discuss some implementation questions and how the new configurat...The new MPR needs to be coupled to mHM. We will make use of the submodules feature of git to enable the coupling of the now external project. Prior to that, we will need to discuss some implementation questions and how the new configuration file can be adapted to mHM. This will happen during the mHM meeting on Nov. 5th. The notes will be put here to continue discussion.6.xRobert SchweppeRobert Schweppehttps://git.ufz.de/mhm/mhm/-/issues/145LAI input data name is hard coded although the namelist suggests otherwise2022-06-17T16:40:16+02:00Maren KaluzaLAI input data name is hard coded although the namelist suggests otherwiseThis is about the LAI input file name which is usually called 'LAI_class.asc' or 'lc.asc'. In the namelist the name can be chosen. However, the name 'LAI_class.asc' is hard coded in 'mo_mpr_file.f90'.This is about the LAI input file name which is usually called 'LAI_class.asc' or 'lc.asc'. In the namelist the name can be chosen. However, the name 'LAI_class.asc' is hard coded in 'mo_mpr_file.f90'.6.xMaren KaluzaMaren Kaluzahttps://git.ufz.de/mhm/mhm/-/issues/140Hacky output dirs in test cases2020-11-30T08:58:34+01:00Sebastian MüllerHacky output dirs in test casesMost output directories for the test-cases are given with sth like `output_b1/b1_`.
But this is just a hack to prepend a prefix `b1_` to the output files.
When there will be checks for folders with automatically creation in the future, t...Most output directories for the test-cases are given with something like `output_b1/b1_`.
But this is just a hack to prepend a prefix `b1_` to the output files.
Once there are checks for folders with automatic creation in the future, this would create the two nested folders `output_b1/b1_/`, with the output files ending up in `b1_/`.
This is related to !415.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/83date mHM output2023-01-30T17:32:12+01:00Friedrich Boeingdate mHM outputIn mHM 5.10 the mHM_Fluxes_States Output starts one day later than meteo input data.
meteo input:
-->starts at 1st Jan.
`
time:units = "days since 1951-01-01-00:00:00" ;
time = 0, 1, 2, 3, 4, 5, 6, 7,...`
mHM-Output:
-->output starts ...In mHM 5.10 the mHM_Fluxes_States Output starts one day later than meteo input data.
meteo input:
-->starts at 1st Jan.
`
time:units = "days since 1951-01-01-00:00:00" ;
time = 0, 1, 2, 3, 4, 5, 6, 7,...`
mHM-Output:
-->output starts at 2nd Jan.
`time:units = hours since 1951-01-01 00:00:00
time = 47, 71, 95, 119, 143, 167,...`
in the older mHM version (5.5) from the current drought monitor, the **same meteo input** generates output starting on the same day as the input data.
mHM-Output:
-->output starts at 1st Jan.
`time:units = hours since 1951-01-01 00:00:00
time = 23, 47, 71, 95, 119, 143, 167,...`
```fortran
warming_Days(1) = 0
eval_Per(1)%yStart = 1951
eval_Per(1)%mStart = 01
eval_Per(1)%dStart = 01
eval_Per(1)%yEnd = 1951
eval_Per(1)%mEnd = 12
eval_Per(1)%dEnd = 31
```
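For reference, the offsets can be decoded by hand: with units of hours since 1951-01-01 00:00, the stamp 23 falls within 1 Jan while 47 falls within 2 Jan, i.e. the 5.10 output is shifted by exactly one daily step. A minimal sketch (not mHM code):

```fortran
program decode_time_sketch
  ! Sketch: map "hours since 1951-01-01 00:00:00" offsets to day numbers
  ! (day 1 = 1951-01-01). Offsets 23 and 47 are the first stamps of the
  ! 5.5 and 5.10 output, respectively.
  implicit none
  integer :: offsets(2), i
  offsets = (/ 23, 47 /)
  do i = 1, 2
    print *, 'offset ', offsets(i), ' h -> day ', offsets(i) / 24 + 1
  end do
end program decode_time_sketch
```

This prints day 1 for offset 23 and day 2 for offset 47, matching the one-day shift described above.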
I need help to clarify this issue.v5.13.0Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/147fate of MHM2MRM definition2020-10-08T12:03:47+02:00Robert Schweppefate of MHM2MRM definitionCurrently, there is this line in the top-level `CMakeLists.txt`:
```
cpp_definitions("-DMRM2MHM" "CMAKE_MRM2MHM" "ON" "If set to ON the model runs with mRM")
```
so that we be default compile mHM with mRM. If one would turn it off, ther...Currently, there is this line in the top-level `CMakeLists.txt`:
```
cpp_definitions("-DMRM2MHM" "CMAKE_MRM2MHM" "ON" "If set to ON the model runs with mRM")
```
so that by default we compile mHM with mRM. If one turns it off, there are errors, e.g.:
```
Error: Symbol ‘kge_q’ at (1) has no IMPLICIT type; did you mean ‘kge_et’?
/Users/ottor/nc/Home/local_libs/fortran/mhm_mpr/src/mHM/mo_objective_function.f90:854:63:
854 | call init_indexarray_for_opti_data(domainMeta, 1, nQDomains, opti_domain_indices_Q)
| 1
Error: Symbol ‘nqdomains’ at (1) has no IMPLICIT type; did you mean ‘netdomains’?
```
because `nQDomains` is declared within the `#ifdef MRM2MHM` block but also used outside it.
What is the policy on the MHM2MRM definition (@muellese, @thober)?
- keep it, then we should also test it and fix it (maybe @kaluza?)!
- drop it, then we can get rid of it altogether!5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/126use of ceiling in calculate_grid_properties2022-07-07T10:38:33+02:00Stephan Thoberuse of ceiling in calculate_grid_propertiesHi Robert,
in calculate_grid_properties in mo_grid, the Fortran intrinsic ceiling is used to calculate Ncols, Nrows. If the cell factor is close to 1, but not exactly one, this can lead to additional rows and cols. I changed to nint in ...Hi Robert,
in calculate_grid_properties in mo_grid, the Fortran intrinsic ceiling is used to calculate Ncols and Nrows. If the cell factor is close to 1, but not exactly 1, this can lead to additional rows and cols. I changed it to nint in Ulysses. Do you have any objection to changing it in mhm develop too?
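The effect can be reproduced in a few lines (a sketch, not mHM code; the numbers are illustrative):

```fortran
program ceil_vs_nint
  ! Sketch: a cell factor that is numerically just below 1 makes
  ! ceiling() report a spurious extra row, while nint() recovers the
  ! intended count.
  implicit none
  real(8) :: nrows_L0, cellfactor
  nrows_L0   = 864.0d0
  cellfactor = 1.0d0 - 1.0d-9              ! close to, but not exactly, 1
  print *, ceiling(nrows_L0 / cellfactor)  ! 865: one row too many
  print *, nint(nrows_L0 / cellfactor)     ! 864: intended number of rows
end program ceil_vs_nint
```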
Thanks
Stephan
p.s.: This is my last message during my holidays ;)v5.12Robert SchweppeRobert Schweppehttps://git.ufz.de/mhm/mhm/-/issues/102PET unit documentation2020-11-30T12:55:14+01:00Sebastian MüllerPET unit documentationIn [`mo_pet.f90`](https://git.ufz.de/mhm/mhm/blob/develop/src/mHM/mo_pet.f90), `pet_hargreaves` and `pet_penman` are documented to be in `mm/s` but actually all PET routines return `mm/TS`. (TS - timestep)In [`mo_pet.f90`](https://git.ufz.de/mhm/mhm/blob/develop/src/mHM/mo_pet.f90), `pet_hargreaves` and `pet_penman` are documented to be in `mm/s` but actually all PET routines return `mm/TS`. (TS - timestep)5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/133Improve error message when soil or geology input data is not masked properly2022-07-07T10:55:23+02:00Marco HannemannImprove error message when soil or geology input data is not masked properlyWhen soil_class.asc or geology_class.asc are not masked properly and contain NoData-Values where other morphological input data sets are defined, following error will be thrown:
```
***ERROR: Class -9999 is missing
in input fi...When soil_class.asc or geology_class.asc are not masked properly and contain NoData values where other morphological input data sets are defined, the following error will be thrown:
```
***ERROR: Class -9999 is missing
in input file soil_classdefinition.txt ...
```
Since -9999 is a NoData value here and not a valid class ID, the user should get the advice to check their masking routine.v5.12Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/77improve error message in mo_read_time_series.f902022-06-17T20:59:15+02:00Stephan Thoberimprove error message in mo_read_time_series.f90in mo_read_time_series.f90, an end-of-line error occurs if there are too few lines in the file. The error message could be more detailed, stating how many lines have been read and how many are expected by the code.in mo_read_time_series.f90, an end-of-line error occurs if there are too few lines in the file. The error message could be more detailed, stating how many lines have been read and how many are expected by the code.v5.12Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/132Improve error message when nSoil_Types higher than actual classes2022-07-07T10:55:11+02:00Marco HannemannImprove error message when nSoil_Types higher than actual classesWhen nSoil_Types in the header of soil_classdefinition.txt is lower than the actual classes defined, an error will be raised
that class n is missing.
If it is higher than the actual classes defined, mHM will return a Segmentation fault...When nSoil_Types in the header of soil_classdefinition.txt is lower than the actual classes defined, an error will be raised
stating that class n is missing.
If it is higher than the actual classes defined, mHM will return a Segmentation fault (SIGSEGV):
```
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x2B7084854347
#1 0x2B708485494E
#2 0x2B70863863FF
#3 0x54FFB4 in __mo_soil_database_MOD_generate_soil_database
#4 0x55A82C in __mo_mpr_startup_MOD_mpr_initialize
#5 0x5C87D4 in __mo_startup_MOD_mhm_initialize
#6 0x59119F in MAIN__ at mhm_driver.f90:?
Segmentation fault
```
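A consistency check during setup could turn this crash into a clear error message; a hypothetical sketch (names and values are illustrative, not the actual mHM code):

```fortran
program soil_header_check
  ! Hypothetical sketch: compare the declared nSoil_Types against the
  ! number of class records actually read before building the database.
  implicit none
  integer :: nSoil_Types, nClassesRead
  nSoil_Types  = 5   ! value claimed in the header (illustrative)
  nClassesRead = 3   ! class records actually found (illustrative)
  if (nSoil_Types /= nClassesRead) then
    write(*, '(A,I0,A,I0,A)') '***ERROR: nSoil_Types in header (', &
        nSoil_Types, ') does not match classes defined (', nClassesRead, ')'
    stop 1
  end if
end program soil_header_check
```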
This makes it hard to identify the root cause of the error for the user.v5.12Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/101Redundant constants2020-11-30T12:55:15+01:00Sebastian MüllerRedundant constants`Stefan-Boltzmann constant`:
https://git.ufz.de/mhm/mhm/blob/develop/src/mHM/mo_mhm_constants.f90#L37
is already defined here:
https://git.ufz.de/mhm/mhm/blob/develop/src/lib/mo_constants.f90#L106
The first one is not used ATM.`Stefan-Boltzmann constant`:
https://git.ufz.de/mhm/mhm/blob/develop/src/mHM/mo_mhm_constants.f90#L37
is already defined here:
https://git.ufz.de/mhm/mhm/blob/develop/src/lib/mo_constants.f90#L106
The first one is not used ATM.5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/14NAG compiler, version 6.2, build 6214 does not work on EVE2020-12-01T09:13:32+01:00Robert SchweppeNAG compiler, version 6.2, build 6214 does not work on EVEWith the current develop branch, we get a Segmentation Fault when running on Eve (frontend1):
`make compiler=nag system=eve release=debug && ./mhm`
The error occurs each time a call to write is encountered, namely in `src/mo_nml.f90`...With the current develop branch, we get a Segmentation Fault when running on Eve (frontend1):
`make compiler=nag system=eve release=debug && ./mhm`
The error occurs each time a call to write is encountered, namely in `src/mo_nml.f90`, line 308 (this is where the error is thrown):
` write(test, '(A,A)') '&', tolower(name)`
but if we work around that call, the program raises another segmentation fault at `src/mo_netcdf.f90`, line 1613:
` write(msg, *) id`
I successfully compile and run the code on my local system with NAG 6.2 (6214). The issue also applies to other Fortran projects, e.g. MPR. Locally I have gcc version 8.1 and netcdf-fortran 4.4.4 compiled with NAG 6.2.
I think it has something to do with the way the compiler is set up on Eve. Any ideas @rakovec, @krausec, @lberg?5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/41global optimization of mHM2020-12-01T09:26:38+01:00Robert Schweppeglobal optimization of mHMProtocol of working group meeting
date: 29/05/2019
persons: @lese, @thober , @thober , @kaluza , @ottor
boundary conditions:
- 160k core hours we need to use per month (total 2 Mio), it is possibly to postpone one month's budget to the...Protocol of working group meeting
date: 29/05/2019
persons: @lese, @thober, @kaluza, @ottor
boundary conditions:
- 160k core hours need to be used per month (2 Mio in total); it is possible to postpone one month's budget to the next
- period 05/2019 - 04/2020
target:
- use budgets each month
- run global multi-objective optimization runs on JEWELS
tasks:
- prepare reference datasets (TWS, FLUXNET, Q, ...)
- code multi-objective function in mHM
- stepwise setup of final mHM: do one individual objective-function optimization at a time (e.g. start with TWS)
@rakovec tasks:
- need to evaluate streamflow data (Budyko analysis/plausibility) and assign gauge coordinates and then delineate basins for Q optimization
@kaluza tasks:
- 1a) June: first set up Danube with Q optimization (test code and find optimal parallelization parameters on JEWELS)
- 1b) June: run the last working mHM parallel version on one of Olda's global subdomains and check if domain decomposition works
- 2) July: run mHM with parallel I/O for TWS ()
- @thober will assist with identifying some bugs for 2)
@ottor tasks:
- merge reading of netcdf from MPR branch to preparing_mhm_eval_for_parallel_io branch
open tasks:
- convert global setting from ascii to netcdf
- how to organize domains for Q optimization?
- supply domains that are not too big so that domain decomposition is still able to run on JEWELS
- what is the appropriate subdomain organization for L0 and L2 files
current status:
- global mHM setup is ready (at least with dummy setup)
- coarse domain decomposition script is ready (latlonbox)
- bugs with reading L0 in parallel
-https://git.ufz.de/mhm/mhm/-/issues/81The runtime display of selected mHM fluxes for output is erroneous2020-08-31T16:24:12+02:00Pallav Kumar Shresthapallav-kumar.shrestha@ufz.deThe runtime display of selected mHM fluxes for output is erroneousThe runtime display of the user selected mHM fluxes for output has a bug.
* It seems there is a count error for mHM fluxes displayed at runtime
* However, the variables saved in the output file corresponds to the user selected options.The runtime display of the user selected mHM fluxes for output has a bug.
* It seems there is a count error for mHM fluxes displayed at runtime
* However, the variables saved in the output file correspond to the user-selected options.5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/50netCDF read_forcing_nc needs to be renamed2020-10-08T20:17:36+02:00Stephan ThobernetCDF read_forcing_nc needs to be renamedThe module read_forcing_nc needs to be renamed. It is used whenever a netCDF file is read, irrespective of whether it is a forcing variable or not.The module read_forcing_nc needs to be renamed. It is used whenever a netCDF file is read, irrespective of whether it is a forcing variable or not.5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/130unit vs units in netCDF output2020-10-08T20:17:36+02:00Pallav Kumar Shresthapallav-kumar.shrestha@ufz.deunit vs units in netCDF output* ncdump -h of mHM's netCDF output shows that the names used for unit/units are not uniform.
* The word **units** is used for the attribute of grid variables. This is usually the case/ found elsewhere.
* The word **unit** is used for...* ncdump -h of mHM's netCDF output shows that the name used for unit/units are not uniform.
* The word **units** is used for the attribute of grid variables. This is usually the case found elsewhere.
* The word **unit** is used for the attribute of mHM variables.
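With netcdf-fortran, using the conventional attribute name consistently would look like this (a sketch; the file, variable and unit strings are illustrative, and error checking is omitted):

```fortran
program units_attr_sketch
  ! Sketch: write the CF-conventional attribute name "units" (not "unit")
  ! for an output variable; error handling omitted for brevity.
  use netcdf
  implicit none
  integer :: status, ncid, dimid, varid
  status = nf90_create('sketch.nc', NF90_CLOBBER, ncid)
  status = nf90_def_dim(ncid, 'time', NF90_UNLIMITED, dimid)
  status = nf90_def_var(ncid, 'aET', NF90_DOUBLE, (/ dimid /), varid)
  status = nf90_put_att(ncid, varid, 'units', 'mm/day')  ! "units", not "unit"
  status = nf90_close(ncid)
end program units_attr_sketch
```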
This is a minor issue. However, rectifying it would reduce some lines of post-processing of mHM output (e.g. in forecasting chains).5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/52Remove branch: mLMv2.02019-12-03T22:07:25+01:00Sebastian MüllerRemove branch: mLMv2.0Please remove the branch ``mLMv2.0``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!Please remove the branch ``mLMv2.0``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!slimming down repositoryPallav Kumar Shresthapallav-kumar.shrestha@ufz.dePallav Kumar Shresthapallav-kumar.shrestha@ufz.de2019-12-31https://git.ufz.de/mhm/mhm/-/issues/57Remove branch: 45-missing-meta-information-on-dimensions-in-restart-files2019-11-06T17:30:29+01:00Sebastian MüllerRemove branch: 45-missing-meta-information-on-dimensions-in-restart-filesPlease remove the branch ``45-missing-meta-information-on-dimensions-in-restart-files``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is rea...Please remove the branch ``45-missing-meta-information-on-dimensions-in-restart-files``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!slimming down repositoryRobert SchweppeRobert Schweppe2019-12-01https://git.ufz.de/mhm/mhm/-/issues/58Remove branch: 22-couple-mhm-with-mpr-1-02019-11-06T17:28:39+01:00Sebastian MüllerRemove branch: 22-couple-mhm-with-mpr-1-0Please remove the branch ``22-couple-mhm-with-mpr-1-0``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!Please remove the branch ``22-couple-mhm-with-mpr-1-0``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!slimming down repositoryRobert SchweppeRobert Schweppe2019-12-01https://git.ufz.de/mhm/mhm/-/issues/63Remove branch: develop_mrm_subrepo2020-02-06T10:18:26+01:00Sebastian MüllerRemove branch: develop_mrm_subrepoPlease remove the branch ``develop_mrm_subrepo``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!Please remove the branch ``develop_mrm_subrepo``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!slimming down repositoryStephan ThoberStephan Thober2019-12-31https://git.ufz.de/mhm/mhm/-/issues/69Remove branch: nitrate-mhm5.72020-01-08T13:47:27+01:00Sebastian MüllerRemove branch: nitrate-mhm5.7Please remove the branch ``nitrate-mhm5.7``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!Please remove the branch ``nitrate-mhm5.7``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!slimming down repositoryRohini KumarRohini Kumar2019-12-01https://git.ufz.de/mhm/mhm/-/issues/70Remove branch: neutron_calib2019-12-01T21:26:00+01:00Sebastian MüllerRemove branch: neutron_calibPlease remove the branch ``neutron_calib``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!Please remove the branch ``neutron_calib``.
If there is still development going on, please fork the mHM repository and continue your work there.
Make a merge request, when your feature is ready.
Thanks a lot!slimming down repositoryMartin Schrönmartin.schroen@ufz.deMartin Schrönmartin.schroen@ufz.de2019-12-01https://git.ufz.de/mhm/mhm/-/issues/75CI pipeline permission problems in Forks2020-12-01T09:19:50+01:00Sebastian MüllerCI pipeline permission problems in Forks# Pipeplines are not triggered in the base-repo for merge-requests from forks
Currently the pipelines are executed as part of the fork and for the commit itself, not the merge commit. There is a webhook that can execute on merge request...# Pipelines are not triggered in the base-repo for merge-requests from forks
Currently the pipelines are executed as part of the fork and for the commit itself, not the merge commit. There is a webhook that can execute on merge request creation/update. On this level it is equivalent to GitHub, as there it is external services that handle the CI (except that they are not allowed to set the build status on the fork, see above...). But one of the advantages of GitLab is its GitLab CI which should execute as part of the original project (not its forks) and also work on merge requests. GitLab is aware of the problem and it is fixing it, although perhaps initially only in the Enterprise version. Relevant issues:
* https://git.ufz.de/help/ci/merge_request_pipelines/index.md#important-notes-about-merge-requests-from-forked-projects
* https://gitlab.com/gitlab-org/gitlab/issues/25644
* https://gitlab.com/gitlab-org/gitlab/issues/9713
# The resulting problem
There is a solution, BUT only for the Enterprise Edition of GitLab... it is getting frustrating that important features are left out of the FOSS edition of GitLab. I don't know whether the WKDV is willing to get the Enterprise Edition for the UFZ?!
Frustration!
# Affected
* https://git.ufz.de/mhm/mhm/merge_requests/12Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/78Add intel compiler to CI/CD2020-12-01T09:06:47+01:00Sebastian MüllerAdd intel compiler to CI/CDThe intel compiler should work with the makefile within the gitlab runner.The intel compiler should work with the makefile within the gitlab runner.5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/79Numerical Recipes License required for objective_sse_boxcox function2020-12-01T09:07:56+01:00Robert SchweppeNumerical Recipes License required for objective_sse_boxcox functionHi @thober and @muellese. When working on MPR I wanted to update its dependency lightweight_fortran_lib. It is also used in mHM and there was another file added in August: `mo_boxcox.f90`. It uses some numerical recipes which do not come...Hi @thober and @muellese. When working on MPR I wanted to update its dependency lightweight_fortran_lib. It is also used in mHM and there was another file added in August: `mo_boxcox.f90`. It uses some numerical recipes which do not come under LGPL as the rest of the code (see also issue #26). The functions used are
`brent`, `mnbrak`, `swap`, `shft`. How do we proceed there?5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/82Compile/execute mHM with ifort using the Makefile on EVE under Easybuild2020-01-16T14:02:06+01:00Oldrich RakovecCompile/execute mHM with ifort using the Makefile on EVE under EasybuildProbably useful tip for Compiling/executing mHM with Makefile on EVE under Easybuild.
As before, to compile mHM with intel, you need to load licence (ml ifort/2018.3.222-GCC-7.3.0-2.30)
However, execution of mHM failed until I load all...Probably useful tip for Compiling/executing mHM with Makefile on EVE under Easybuild.
As before, to compile mHM with Intel, you need to load the licence module (`ml ifort/2018.3.222-GCC-7.3.0-2.30`).
However, execution of mHM failed until I loaded all three Intel modules:
ml icc/2018.3.222-GCC-7.3.0-2.30 ifort/2018.3.222-GCC-7.3.0-2.30 iccifort/2018.3.222-GCC-7.3.0-2.30
Or is there any better way to handle this?https://git.ufz.de/mhm/mhm/-/issues/84processCase(3) and processCase(5) in mhm v5.10_fixed -- Linux: forrtl: sever...2020-04-14T23:14:54+02:00Mehmet Cüneyd DemirelprocessCase(3) and processCase(5) in mhm v5.10_fixed -- Linux: forrtl: severe (174): SIGSEGV, segmentation fault occurred -- Cygwin: Program received signal SIGSEGV: Segmentation fault - invalid memory reference.[System_Details_Cuneyd.pdf](/uploads/f642cea1d004e1be1c79ac1f6880b8b2/System_Details_Cuneyd.pdf)
For screenshots and easy to follow the issue, please refer to the attached pdf file.
---------------------------------------------------...[System_Details_Cuneyd.pdf](/uploads/f642cea1d004e1be1c79ac1f6880b8b2/System_Details_Cuneyd.pdf)
For screenshots and easy to follow the issue, please refer to the attached pdf file.
-----------------------------------------------------------------------------
Error messages in CYGWIN:
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0xffffffffffffffff in ???
Segmentation fault (core dumped)
Note: The following floating-point exceptions are signalling: IEEE_DIVIDE_BY_ZERO IEEE_OVERFLOW_FLAG IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
-----------------------------------------------------------------------------
Error Messages in Linux Cluster:
forrtl: severe (174): SIGSEGV, segmentation fault occurred
-----------------------------------------------------------------------------
Attn: Dear @ottor and Dear @thober
**------- Email Correspondence--------**
Dear Robert,
Thank you very much for the quick and kind reply. I hope I provide sufficient details below about both systems, Cygwin and Linux, that I am using. I am quite new to the Linux environment.
p.s. Thanks Olda, Gitlab email is added in Cc. Hopefully the issue will be reported there soon.
Cygwin x64 details: setup version 2.897 (64 bit)
OS: Windows 10
Compiler: GNU 7.3.0
Installed using:
1) make system=cygwin compiler=gnu (also tested openmp=true and release=debug versions)
2) also tested cmake installation
NETCDF-FORTRAN DETAILS:
Linux cluster details:
[mdemirel@sariyer ~ ]$ uname --all
Linux sariyer 3.10.0-327.36.1.el7.x86_64 #1 SMP Sun Sep 18 13:04:29 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[mdemirel@sariyer ~ ]$ cat /proc/version
Linux version 3.10.0-327.36.1.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Sun Sep 18 13:04:29 UTC 2016
[mdemirel@sariyer ~ ]$ cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core)
[mdemirel@sariyer ~ ]$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
mHM is compiled using these modules
Currently Loaded Module files:
1) uhem-bin 4) NetCDF/netcdf-c-4.4.1.1-shared-intel-2018.3
2) intel/parallel_studio_xe_2018.3.051 5) NetCDF/netcdf-fortran-4.4.4-shared-netcdf-c-4.4.1.1-shared-intel-2018.3
3) HDF5/hdf5-1.8.9-shared-intel-2018.3 6) cmake/3.9.1-intel-2018.3
In addition to ifort I also tested gfortran using gcc version 7.5.0 (GCC)
[mdemirel@sariyer slaveBEHV1 ]$ echo $FC
gfortran
[mdemirel@sariyer slaveBEHV1 ]$ gfortran -v
Using built-in specs.
COLLECT_GCC=gfortran
COLLECT_LTO_WRAPPER=/okyanus/progs/gcc/GCC-7.5.0/libexec/gcc/x86_64-pc-linux-gnu/7.5.0/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: ../gcc-7.5.0/configure --prefix=/okyanus/progs/gcc/GCC-7.5.0 --enable-languages=c,c++,fortran,go
Thread model: posix
gcc version 7.5.0 (GCC)
Kind Regards,
Cüneyd
-----------------------------------------------------------------------------
From: Robert Schweppe <robert.schweppe@ufz.de>
Sent: Monday, January 20, 2020 11:30 AM
To: Mehmet Cüneyd Demirel <demirelmc@itu.edu.tr>
Cc: 'Oldrich Rakovec' <oldrich.rakovec@ufz.de>; rohini.kumar@ufz.de; luis.samaniego@ufz.de; 'Simon Stisen' <sst@geus.dk>; 'Julian Koch' <juko@geus.dk>
Subject: Re: processCase(3) and processCase(5) in mhm v5.10 _fixed
Dear Cüneyd,
thank you for your report. We will have a look at it and come back to you once we need further information or have a fix.
Could you please tell us the versions of the Fortran compilers, netcdf-fortran libraries and the operating systems that you have tested?
Thank you,
Robert
-----------------------------------------------------------------------------
Am 18.01.20 um 21:09 schrieb Mehmet Cüneyd Demirel:
Dear Robert,
I am Cüneyd Demirel. I have contributed to mhm v5.7 and 5.8 on processCase(3) and processCase(5) with my colleagues from GEUS and Mathias Zink.
I see that you contributed refactoring and reformatting of these processes.
Did you test mhm v5.10_fixed in large and multiple basins with these options?
processCase(3) = 2 or 3
processCase(5) = -1
processCase(8) = 1
We had no issues running 6 EU basins (listed below) with v5.8; however, we are getting a segmentation fault in both the Linux and Cygwin versions.
I have tried/tested many scenarios (stack size etc.) but I could not diagnose the issue (screenshots are below). The model runs at coarse resolutions, e.g. when I reduce L11 and L1 from 2 km to 8 km (Elbe model from UFZ).
I would be very happy if you could run the Elbe basin with L1 and L11 both at 2 km (L0 = 500 m) on the UFZ side. I can share the setup and nml files if you need them.
processCase(3) = 2 or 3
processCase(5) = -1
processCase(8) = 1
P.S. Even this combination causes a segmentation fault.
processCase(3) = 1 Feddes
processCase(5) = -1 LAI correction
processCase(8) = 1 Muskingum
Kind Regards,
Cüneyd
These flags didn’t help either.
```make
EXTRA_LDFLAGS += -Wl,--stack,16777216
EXTRA_CFLAGS  += -Wl,--stack,16777216
```
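On Linux, the stack limit can also be raised at runtime before launching the model. A hedged Python sketch (Unix-only, via the standard `resource` module; the function name and 16 MiB default are illustrative, not something the mHM build provides):

```python
import resource

def raise_stack_limit(bytes_wanted=16 * 1024 * 1024):
    # Raise the soft stack limit (the programmatic analogue of `ulimit -s`);
    # large automatic arrays in Fortran builds can overflow the default stack.
    soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
    if hard == resource.RLIM_INFINITY:
        new_soft = bytes_wanted
    else:
        # the soft limit can never exceed the hard limit
        new_soft = min(bytes_wanted, hard)
    resource.setrlimit(resource.RLIMIT_STACK, (new_soft, hard))
    return new_soft
```

A wrapper script could call this before spawning the mhm executable in a subprocess, so the child inherits the raised limit.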
I also tested the code (compiled with ifort) on a supercomputer/cluster at our university, and mHM also failed with the error "forrtl: severe (174): SIGSEGV, segmentation fault occurred".
| L0 scale | Elbe | Main | Meuse | Moselle | Neckar | Vienne | TOTAL |
| -------- | ---- | ---- | ----- | ------- | ------ | ------ | ----- |
| ncols    | 960  | 336  | 432   | 384     | 336    | 432    | 2880  |
| nrows    | 864  | 384  | 624   | 624     | 384    | 384    | 3264  |
-----------------------------------------------------------------------------
```fortran
!> soil moisture
!> 2 - Jarvis equation for ET reduction, multi-layer infiltration capacity approach, Brooks-Corey like
!> 3 - Jarvis equation for ET reduction and FC dependency on root fraction coefficient
processCase(3) = 3
!> potential evapotranspiration (PET)
!> -1 - PET is input, LAI driven correction
processCase(5) = -1
!> routing
!> 0 - deactivated
!> 1 - Muskingum approach
!> 2 - adaptive timestep
processCase(8) = 1
```
History
```fortran
! Rohini Kumar             Mar 2016 - changes for handling multiple soil database options
! Zink M. & Demirel M.C.   Mar 2017 - Added Jarvis soil water stress function at SM process(3)
! Demirel M.C. & S. Stisen Apr 2017 - Added FC dependency on root fraction coefficient at SM process(3)
! Robert Schweppe          Dec 2017 - added loop over LCscenes inside MPR, renamed variables rewrite
! Robert Schweppe          Jun 2018 - refactoring and reformatting
```
--
--------------------------------------------------------------------------------------------
Stephan ThoberStephan Thoberhttps://git.ufz.de/mhm/mhm/-/issues/86Adopt CI/CD to EasyBuild based module system on EVE2020-12-01T09:29:01+01:00Sebastian MüllerAdopt CI/CD to EasyBuild based module system on EVEPipelines are failing at the moment because of missing modules.Pipelines are failing at the moment because of missing modules.5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/87mRM initilization hangs for many hours at "call L11_stream_features" of mo_mr...2020-12-01T09:09:33+01:00Mehmet Cüneyd DemirelmRM initilization hangs for many hours at "call L11_stream_features" of mo_mrm_init.f90 file (L11: 1/64) whereas 1/32 L11 is quickHello,
mRM initialization of Elbe L1 (scale of 1/64) and L11 (1/32) takes several minutes with Cygwin on my desktop, but mRM hangs when both L1 and L11 are 1/64.
Considering the increase in the number of cells from 1/32 to 1/64, I would expect an increase in initialization time, but here we are waiting for hours (more than 5 hours).
I did a small search in the code and added some write statements to see where the hang happens.
The hang occurs at the line `call L11_stream_features` in the file mo_mrm_init.f90. There is a counter `iD0` scanning at L0.
For your testing I prepared two mhm.nml files for 1/32 L11 and 1/64 L11.
1) mhm_6basin_ElbeCoarseROUTING_L11_1by32_shortRUNtime.nml
2) mhm_6EU_1by64_degrees_HANGS.nml
Download link: web.itu.edu.tr/demirelmc/mHM_EUscale_NEW_ONLYelbe.rar
I would be very grateful if @ottor and @thober could have time to see the issue.
The issue might be a similar case to https://git.ufz.de/mhm/mhm/issues/10
![write_mo_mrm_net_startup](/uploads/baf1a210cc810687bcd4023a0ae3b911/write_mo_mrm_net_startup.png)
```fortran
do iBasin = 1, nBasins
  if (.not. read_restart) then
    call L11_flow_direction(iBasin)
    write(*,*) 'Position mrm init after call L11_flow_direction(iBasin)'
    call L11_flow_accumulation(iBasin)
    write(*,*) 'Position mrm init after call L11_flow_accumulation(iBasin)'
    call L11_set_network_topology(iBasin)
    write(*,*) 'Position mrm init after call L11_set_network_topology(iBasin)'
    call L11_routing_order(iBasin)
    write(*,*) 'Position mrm init after call L11_routing_order(iBasin)'
    call L11_link_location(iBasin)
    write(*,*) 'Position mrm init after call L11_link_location(iBasin)'
    call L11_set_drain_outlet_gauges(iBasin)
    write(*,*) 'Position mrm init after call L11_set_drain_outlet_gauges(iBasin)'
    ! stream characteristics
    call L11_stream_features(iBasin)
    write(*,*) 'Position mrm init after call L11_stream_features(iBasin)'
  end if
end do
```
https://git.ufz.de/mhm/mhm/-/issues/88Obligation to put one line at the end of mhm_parameter.nml - No line/row resu...2020-12-01T09:19:15+01:00Mehmet Cüneyd DemirelObligation to put one line at the end of mhm_parameter.nml - No line/row results in back trace error.Hello @thober,
Maybe it is normal by Fortran convention, but it was surprising to see an EOF error caused by a missing empty line at the end of mhm_parameter.nml. Inexperienced users can run into this issue if they link the PEST and OSTRICH calibration tools to mHM using a tpl file that does not have an empty line at the end.
"At line 851 of file /cygdrive/d/mhm-5.10_fixed/src/MPR/mo_mpr_read_config.f90 (unit = 31, file = 'mhm_parameter.nml')
Fortran runtime error: End of file"
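On the user side, a pre-flight check of the namelist file could catch this before mHM starts. A small Python sketch (the helper name is hypothetical, not part of mHM):

```python
def ensure_trailing_newline(path):
    # Append a final newline to a namelist file if it is missing, so that
    # Fortran namelist reading does not hit an unexpected end-of-file.
    with open(path, "rb") as f:
        data = f.read()
    if data and not data.endswith(b"\n"):
        with open(path, "ab") as f:
            f.write(b"\n")
        return True   # a newline was appended
    return False      # file already ended with a newline
```

A calibration wrapper (PEST/OSTRICH) could run this on the generated mhm_parameter.nml right after template substitution.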
![fortran_EOF](/uploads/85d13d4690b7eef7e76abb27694c5111/fortran_EOF.png)https://git.ufz.de/mhm/mhm/-/issues/89Update mhm_parameter.nml2020-12-01T09:30:03+01:00Stephan ThoberUpdate mhm_parameter.nmlThe mhm_parameter.nml contains wide ranges that is required for rigorous parameter sensitivity analysis and calibration but might lead to unrealistic parameter values in model applications. For the latter purpose, a parameter namelist fi...The mhm_parameter.nml contains wide ranges that is required for rigorous parameter sensitivity analysis and calibration but might lead to unrealistic parameter values in model applications. For the latter purpose, a parameter namelist file needs to be created based on the parameter ranges used before, that are contained in the attached file. The wider ranges should remain available.
[old_parameter_ranges.txt](/uploads/bdb909e31fd0cdad19e7555968bd04fe/old_parameter_ranges.txt)https://git.ufz.de/mhm/mhm/-/issues/90Cygwin - Warning! ***HDF5 library version mismatched error***2020-12-01T09:30:31+01:00Mehmet Cüneyd DemirelCygwin - Warning! ***HDF5 library version mismatched error***Hello,
Cygwin users get this error: HDF5 version mismatch.
To my knowledge, the two ways to suppress this warning and continue running are (Stack Overflow tricks):
1) export HDF5_DISABLE_VERSION_CHECK=1 (in the Cygwin terminal)
2) Adding this to Environment variables > System variables (if mhm is run from the command prompt/cmd).
It would be great if we (@ottor and @thober) could add something to the makefile, like we did once for the memory issues on Windows machines (EXTRA_LDFLAGS += #-Wl,--stack,12485760 and EXTRA_CFLAGS += #-Wl,--stack,12485760).
I think HDF5 warnings can be avoided with proper "Linking Options" during compiling.
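Option 1 can also be applied programmatically when mHM is launched from a wrapper script. A minimal Python sketch (the helper name is hypothetical):

```python
import os

def hdf5_safe_env(level="1"):
    # Build a child-process environment with the HDF5 version check disabled;
    # "1" keeps the warning but continues, "2" suppresses it entirely.
    env = dict(os.environ)
    env["HDF5_DISABLE_VERSION_CHECK"] = level
    return env
```

Usage would be e.g. `subprocess.run(["./mhm"], env=hdf5_safe_env())`. Note this only silences the check; the clean fix remains recompiling against matching HDF5 headers and library.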
Kind Regards,
Cüneyd
![mhm_HDF](/uploads/d32d358b5cc15dbc4617e1d9d5aa528c/mhm_HDF.png)
Details of the error message:
D:\mhm-5.10_fixed>start /B /WAIT mhm.exe /k
----------------------------------------------------------------------
mHM-UFZ
MULTISCALE HYDROLOGIC MODEL
Version 5.10
June 2019
Originally by L. Samaniego & R. Kumar
----------------------------------------------------------------------
Start at 14.02.2020 09:36:08.
Using main file mhm_driver.f90 and namelists:
mhm.nml
mhm_parameter.nml
mhm_outputs.nml
Run with OpenMP with 16 threads.
Read namelist file: mhm.nml
Read namelist file: mhm_parameter.nml
Read namelist file: mhm_outputs.nml
INFO: Process (10, neutrons) is deactivated, so output will be suppressed.
Basin 1:
resolution Hydrology (basin 1) = 24000.
resolution Routing (basin 1) = 24000.
Resolution of routing and hydrological modeling are equal!
Basin 2:
resolution Hydrology (basin 2) = 24000.
resolution Routing (basin 2) = 24000.
Resolution of routing and hydrological modeling are equal!
Following output will be written:
STATES:
interceptional storage (L1_inter) [mm]
height of snowpack (L1_snowpack) [mm]
soil water content in the single layers (L1_soilMoist) [mm]
volumetric soil moisture in the single layers [mm/mm]
mean volum. soil moisture averaged over all soil layers [mm/mm]
waterdepth in reservoir of sealed areas (L1_sealSTW) [mm]
waterdepth in reservoir of unsat. soil zone (L1_unsatSTW) [mm]
waterdepth in reservoir of sat. soil zone (L1_satSTW) [mm]
FLUXES:
actual evapotranspiration aET (L1_pet) [mm/T]
total discharge generated per cell (L1_total_runoff) [mm/T]
direct runoff generated per cell (L1_runoffSeal) [mm/T]
fast interflow generated per cell (L1_fastRunoff) [mm/T]
slow interflow generated per cell (L1_slowRunoff) [mm/T]
baseflow generated per cell (L1_baseflow) [mm/T]
groundwater recharge (L1_percol) [mm/T]
infiltration (L1_infilSoil) [mm/T]
FINISHED reading config
# of basins: 2
Input data directories:
--------------
BASIN 1
--------------
Morphological directory: test_basin/input/morph/
Land cover directory: test_basin/input/luse/
Precipitation directory: test_basin/input/meteo/pre/
Temperature directory: test_basin/input/meteo/tavg/
PET directory: test_basin/input/meteo/pet/
Output directory: test_basin/output_b1/
--------------
BASIN 2
--------------
Morphological directory: test_basin_2/input/morph/
Land cover directory: test_basin_2/input/luse/
Precipitation directory: test_basin_2/input/meteo/pre/
Temperature directory: test_basin_2/input/meteo/tavg/
PET directory: test_basin_2/input/meteo/pet/
Output directory: test_basin_2/output/
Read data ...
Reading data ...
Reading dem and lcover ...
Reading dem for basin: 1 ...
Reading dem for basin: 2 ...
Reading lcover for basin: 1 ...
Reading lcover for basin: 2 ...
Reading data for basin: 1 ...
Reading slope ...
Reading aspect ...
Reading soil ids ...
Reading LAI ...
Reading latitude/logitude ...
Warning! ***HDF5 library version mismatched error***
The HDF5 header files used to compile this application do not match
the version used by the HDF5 library to which this application is linked.
Data corruption or segmentation faults may occur if the application continues.
This can happen when an application was compiled by one version of HDF5 but
linked with a different version of static or shared HDF5 library.
You should recompile the application or check your shared library related
settings such as 'LD_LIBRARY_PATH'.
You can, at your own risk, disable this warning by setting the environment
variable 'HDF5_DISABLE_VERSION_CHECK' to a value of '1'.
Setting it to 2 or higher will suppress the warning messages totally.
Headers are 1.10.5, library is 1.10.6
SUMMARY OF THE HDF5 CONFIGURATION
=================================
General Information:
-------------------
HDF5 Version: 1.10.6
Configured on: Sat Jan 18 22:29:58 CET 2020
Configured by: Marco@LAPTOP-82F08ILC
Host system: x86_64-pc-cygwin
Uname information: CYGWIN_NT-10.0 LAPTOP-82F08ILC 3.1.2(0.340/5/3) 2020-01-12 11:00 x86_64 Cygwin
Byte sex: little-endian
Installation point: /usr
Compiling Options:
------------------
Build Mode: production
Debugging Symbols: no
Asserts: no
Profiling: no
Optimization Level: high
Linking Options:
----------------
Libraries: shared
Statically Linked Executables:
LDFLAGS:
H5_LDFLAGS: -no-undefined
AM_LDFLAGS:
Extra libraries: -lz -ldl -lm
Archiver: ar
AR_FLAGS: cr
Ranlib: ranlib
Languages:
----------
C: yes
C Compiler: /usr/bin/gcc ( gcc (GCC) 7.4.0)
CPPFLAGS:
H5_CPPFLAGS: -DNDEBUG -UH5_DEBUG_API
AM_CPPFLAGS:
C Flags: -ggdb -O2 -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -fdebug-prefix-map=/cygdrive/d/cyg_pub/devel/hdf5/hdf5-1.10.6-1.x86_64/build=/usr/src/debug/hdf5-1.10.6-1 -fdebug-prefix-map=/cygdrive/d/cyg_pub/devel/hdf5/hdf5-1.10.6-1.x86_64/src/hdf5-1.10.6=/usr/src/debug/hdf5-1.10.6-1
H5 C Flags: -pedantic -Wall -Wextra -Wbad-function-cast -Wc++-compat -Wcast-align -Wcast-qual -Wconversion -Wdeclaration-after-statement -Wdisabled-optimization -Wfloat-equal -Wformat=2 -Wno-format-nonliteral -Winit-self -Winvalid-pch -Wmissing-declarations -Wmissing-include-dirs -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpacked -Wredundant-decls -Wshadow -Wstrict-prototypes -Wswitch-enum -Wswitch-default -Wundef -Wunused-macros -Wunsafe-loop-optimizations -Wwrite-strings -Wlogical-op -Wlarger-than=2560 -Wsync-nand -Wframe-larger-than=16384 -Wpacked-bitfield-compat -Wstrict-overflow=5 -Wjump-misses-init -Wunsuffixed-float-constants -Wdouble-promotion -Wtrampolines -Wstack-usage=8192 -Wvector-operation-performance -Wdate-time -Warray-bounds=2 -Wc99-c11-compat -Wnull-dereference -Wunused-const-variable -Wduplicated-cond -Whsa -Wnormalized -Walloc-zero -Walloca -Wduplicated-branches -Wformat-overflow=2 -Wformat-truncation=2 -Wimplicit-fallthrough=5 -Wrestrict -fstdarg-opt -s -Wno-inline -Wno-aggregate-return -Wno-missing-format-attribute -Wno-missing-noreturn -Wno-suggest-attribute=const -Wno-suggest-attribute=pure -Wno-suggest-attribute=noreturn -Wno-suggest-attribute=format -O3
AM C Flags:
Shared C Library: yes
Static C Library: no
Fortran: no
C++: yes
C++ Compiler: /usr/bin/g++ ( g++ (GCC) 7.4.0)
C++ Flags: -ggdb -O2 -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -fdebug-prefix-map=/cygdrive/d/cyg_pub/devel/hdf5/hdf5-1.10.6-1.x86_64/build=/usr/src/debug/hdf5-1.10.6-1 -fdebug-prefix-map=/cygdrive/d/cyg_pub/devel/hdf5/hdf5-1.10.6-1.x86_64/src/hdf5-1.10.6=/usr/src/debug/hdf5-1.10.6-1
H5 C++ Flags: -pedantic -Wall -W -Wundef -Wshadow -Wpointer-arith -Wcast-qual -Wcast-align -Wwrite-strings -Wconversion -Wredundant-decls -Winline -Wsign-promo -Woverloaded-virtual -Wold-style-cast -Weffc++ -Wreorder -Wnon-virtual-dtor -Wctor-dtor-privacy -Wabi -finline-functions -s -O
AM C++ Flags:
Shared C++ Library: yes
Static C++ Library: no
Java: no
Features:
---------
Parallel HDF5: no
Parallel Filtered Dataset Writes: no
Large Parallel I/O: no
High-level library: yes
Build HDF5 Tests: yes
Build HDF5 Tools: yes
Threadsafety: no
Default API mapping: v110
With deprecated public symbols: yes
I/O filters (external): deflate(zlib)
MPE: no
Direct VFD: no
(Read-Only) S3 VFD: no
(Read-Only) HDFS VFD: no
dmalloc: no
Packages w/ extra debug output: none
API tracing: no
Using memory checker: no
Memory allocation sanity checks: no
Function stack tracing: no
Strict file format checks: no
Optimization instrumentation: no
Bye...
Program received signal SIGABRT: Process abort signal.
Backtrace for this error:
#0 0xffffffffffffffff in ???
#1 0xffffffffffffffff in ???
#2 0xffffffffffffffff in ???
#3 0xffffffffffffffff in ???
#4 0xffffffffffffffff in ???
#5 0xffffffffffffffff in ???
#6 0xffffffffffffffff in ???
#7 0xffffffffffffffff in ???
#8 0xffffffffffffffff in ???
#9 0xffffffffffffffff in ???
#10 0xffffffffffffffff in ???
#11 0xffffffffffffffff in ???
#12 0xffffffffffffffff in ???
#13 0xffffffffffffffff in ???
#14 0xffffffffffffffff in ???
#15 0xffffffffffffffff in ???
#16 0xffffffffffffffff in ???
#17 0xffffffffffffffff in ???
#18 0xffffffffffffffff in ???
#19 0xffffffffffffffff in ???
#20 0xffffffffffffffff in ???
#21 0xffffffffffffffff in ???
0 [main] mhm 10 cygwin_exception::open_stackdumpfile: Dumping stack trace to mhm.exe.stackdump
D:\mhm-5.10_fixed>pause
Press any key to continue . . .https://git.ufz.de/mhm/mhm/-/issues/91inconsistent treatment of sinks in river network setup2020-12-01T09:17:55+01:00Stephan Thoberinconsistent treatment of sinks in river network setupIf L11 != L0, then fdir < 0 marks sinks, but that does not work if L11 = L0. In the latter case fdir has to be 0. This was reported by @extostem.If L11 != L0, then fdir < 0 marks sinks, but that does not work if L11 = L0. In the latter case fdir has to be 0. This was reported by @extostem.5.11https://git.ufz.de/mhm/mhm/-/issues/85pFUnit unit testing framework2021-07-09T12:01:25+02:00Robert SchweppepFUnit unit testing frameworklet us try out https://github.com/Goddard-Fortran-Ecosystem/pFUnit/ on mHM, working example provided in MPR repolet us try out https://github.com/Goddard-Fortran-Ecosystem/pFUnit/ on mHM, working example provided in MPR repov5.11.2Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/92Land-only met input incompatible with mHM's no noData within bounding box cri...2020-12-01T09:17:33+01:00Pallav Kumar Shresthapallav-kumar.shrestha@ufz.deLand-only met input incompatible with mHM's no noData within bounding box criterionmo_read_forcing_nc modules has following two subroutines:
* read_forcing_nc
* read_weights_nc
These subroutines (creators @thober and Matthias Zink) check for nodata values in the input met data within the minimum bounding box of the dem mask. This means mHM disallows noData in the bounding box although noData are not present in the dem mask / actual domain. This will lead to problems for mHM with land-only met input like ERA5-Land (and in my case, the ERA5-SD product from KIT for SaWaM, which is also land-only!). Example below -
The following case is a problem. mHM gives "***ERROR: read_forcing_nc: nodata value within domain boundary"
![Screenshot_2020-02-22_at_22.04.46](/uploads/e44e2fb5f491a451d104b90cd4c2ede7/Screenshot_2020-02-22_at_22.04.46.png)
But the following case is not a problem, no errors -
![Screenshot_2020-02-22_at_22.04.58](/uploads/176857da9be381190c347b2e3bc27e5d/Screenshot_2020-02-22_at_22.04.58.png)Stephan ThoberStephan Thoberhttps://git.ufz.de/mhm/mhm/-/issues/94Job Failed #71612020-12-01T09:17:05+01:00Stephan ThoberJob Failed #7161Hi @muellese ,
Could you please take a look at why this pipeline failed?
Thanks in advance,
Stephan
Job [#7161](https://git.ufz.de/mhm/mhm/-/jobs/7161) failed for fa72fdc524be4e30d4f210093affbcd67d7e4a7c:Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/97File: mo_restart.f90 Line 6452020-09-10T12:12:26+02:00Mehmet Cüneyd DemirelFile: mo_restart.f90 Line 645Hello @thober and @ottor,
I think in line 645 of mo_restart.f90 we are missing option 3 in SM.
So this line
```fortran
if (processMatrix(3, 1) == 2) then
```
should be
```fortran
if (processMatrix(3, 1) == 2 .OR. processMatrix(3, 1) == 3) then
```
or we add a new paragraph like the one below, since in processMatrix(3, 1) both options 2 and 3 use Jarvis in the soil moisture calculations.
```fortran
if (processMatrix(3, 1) == 3) then
  ! jarvis critical value for normalized soil water content
  var = nc%getVariable("L1_jarvis_thresh_c1")
  call var%getData(dummyD2)
  L1_jarvis_thresh_c1(s1 : e1, 1, 1) = pack(dummyD2, mask1)
end if
```
5.11Sebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/98mHM states_fluxes netCDF output are curvilinear even if coordinate system is ...2020-12-01T09:16:41+01:00Pallav Kumar Shresthapallav-kumar.shrestha@ufz.demHM states_fluxes netCDF output are curvilinear even if coordinate system is set to regular latlon in mhm.nml
* It was noted that the output netCDF files of mHM are saved as curvilinear irrespective of the coordinate system set in mhm.nml. I am working in regular latlon system but the mHM output according to CDO and NCO are in curvilinear system as shown in screenshots below.
* The issue seems to be that lat and lon are stored as 2D variables even for a regular latlon system. This makes cdo and nco assume it is a curvilinear system. Storing lat and lon as 1D vectors instead should solve the problem @thober @schaefed @muellese (???)
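The suggested fix amounts to writing 1D coordinate vectors for regular grids instead of full 2D lat/lon arrays. A sketch of building such axes from the grid header (function name and cell-center convention are assumptions for illustration):

```python
def regular_latlon_axes(xll, yll, cellsize, ncols, nrows):
    # For a regular lat-lon grid, 1D coordinate vectors at the cell centers
    # suffice; storing full 2D lat/lon arrays makes CDO/NCO treat the grid
    # as curvilinear.
    lon = [xll + (j + 0.5) * cellsize for j in range(ncols)]
    lat = [yll + (i + 0.5) * cellsize for i in range(nrows)]
    return lat, lon
```

In the NetCDF output these would become `lat(lat)` and `lon(lon)` coordinate variables, which tools like cdo recognize as a lonlat grid.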
cdo sinfo on the output file:
![Screenshot_2020-03-09_at_11.59.46](/uploads/520c7779a3e1bfe8bb9abf6572f67411/Screenshot_2020-03-09_at_11.59.46.png)
ncdump -h on the output file:
![Screenshot_2020-03-09_at_12.00.14](/uploads/d71e769affaec7f9380dfa508fb5f895/Screenshot_2020-03-09_at_12.00.14.png)
cdo griddes on the output file:
![Screenshot_2020-03-09_at_12.01.22](/uploads/e93d2881842dc5d0af0593073d7a2cce/Screenshot_2020-03-09_at_12.01.22.png)
cdo griddes on the input regular latlon pre.nc file for comparison:
![Screenshot_2020-03-09_at_12.02.37](/uploads/9a246d6649ae186bec744806ccea1d4c/Screenshot_2020-03-09_at_12.02.37.png)5.11Stephan ThoberStephan Thoberhttps://git.ufz.de/mhm/mhm/-/issues/100Bsymbolic flag in makefile (cmake error)2020-12-01T09:30:59+01:00Mehmet Cüneyd DemirelBsymbolic flag in makefile (cmake error)In Ubuntu TLS (win10), the -Bsymbolic is causing compilation error at the latest stage (100%). After removing it, compilation is successful. May be this is not a bug but a suggestion. @kaluza @thober @ottor
In Ubuntu LTS (Win10), the `-Bsymbolic` flag causes a compilation error at the last stage (100%). After removing it, compilation is successful. Maybe this is not a bug but a suggestion. @kaluza @thober @ottor
![B3122BDE-52BC-4104-A0F6-A9563A5B68EB](/uploads/75ff0cccd63a7f7ea46c16ebd3e03782/B3122BDE-52BC-4104-A0F6-A9563A5B68EB.jpeg)
https://stackoverflow.com/questions/15090198/ld-unknown-option-bsymbolic-when-trying-to-build-iniparser-on-osx5.11Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/131L0 resolutions with repeating decimals2022-07-07T10:38:33+02:00Pallav Kumar Shresthapallav-kumar.shrestha@ufz.deL0 resolutions with repeating decimals**Background**
* MERIT DEM is at 3 arc seconds i.e. 0 deg 0 min 3 secs OR 0.00083333.....33333…..
* I use ASCII input format. All of input .asc files and header.txt have the L0 resolution in decimal degrees.
**Issue**
* I noticed that in order to avoid truncation errors the user needs to enter a lot of decimal places in the .asc files and header.txt. E.g. 18 decimal places were not enough. So I entered 100 decimal places - worked. (note: the error occurs at lines 546-547 of mo_grid.f90, calculation of xllcorner and yllcorner from L0 and L2 resolutions)
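The truncation effect can be illustrated with exact rational arithmetic; the grid size and number of decimal places below are chosen for illustration only:

```python
from fractions import Fraction

# 3 arc seconds expressed exactly in degrees: a repeating decimal
res_exact = Fraction(3, 3600)      # = 1/1200 degree = 0.000833333...
res_trunc = 0.00083333             # entry truncated to 8 decimal places

ncols = 100_000                    # a continental-scale L0 grid width
# drift of the easternmost cell corner caused by the truncated entry
drift_deg = abs(ncols * res_trunc - float(ncols * res_exact))
drift_m = drift_deg * 111_000      # rough metres per degree at the equator
```

The per-cell error is tiny, but multiplied across the grid it shifts corner coordinates by tens of metres, which is exactly the kind of mismatch the xllcorner/yllcorner calculation in mo_grid.f90 trips over. Accepting deg-min-sec input (or exact fractions) would remove the repeating decimal entirely.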
**Solution (?)**
* In mHM we have two coordinate systems - metric and lat lon. The latter has numeral format of decimal degrees. However, data which come as deg-min-sec when converted to decimal degrees could bear “repeating decimals” leading to truncation errors as in case of MERIT DEM resolution.
* Wouldn’t it be better if users had the option to enter the resolution in deg-min-sec or decimal degrees as deemed fit? That would be more elegant/ intuitive than entering a lot of decimal places.
* I don’t have experience in using .nc files for L0 data (except lai.nc). Does that solve this issue already?v5.12Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/121PGI-Fortran support2022-04-28T15:23:40+02:00Sebastian MüllerPGI-Fortran supportWe now have the PGI Fortran compiler on EVE (https://git.ufz.de/it/eve/software/-/issues/176).
ATM there are some problems/bugs that need to be resolved in order to make it work:
* [x] we need a load-script for the PGI compiler on EVE
* [ ] in `mhm_driver.f90`:
* the pointer `eval` should be of type `eval_interface` from `mo_optimization_utils`
https://git.ufz.de/mhm/mhm/-/blob/develop/src/mHM/mhm_driver.f90#L172
* the pointer `obj_func` should be of type `objective_interface` from `mo_optimization_utils`
https://git.ufz.de/mhm/mhm/-/blob/develop/src/mHM/mhm_driver.f90#L173
* [ ] in https://git.ufz.de/mhm/mhm/-/blob/develop/src/common/mo_read_latlon.f90#L106, an error occurs:
```
0: SHAPE: arg not associated with array
```
(very descriptive) ... My educated guess is that PGI has a problem with the generic procedures in `mo_netcdf` (https://git.ufz.de/mhm/mhm/-/blob/develop/src/common/mo_read_latlon.f90#L106)
* [ ] there are a lot of pre-processor directives across the code base to handle PGI-Fortran compilation. We would need to test which of these are still necessary and whether we would need more of them to handle problems like the one described above (what a mess)
* [ ] is it worth the hassle?
List TBC...v5.12Sebastian MüllerSebastian Müllerhttps://git.ufz.de/mhm/mhm/-/issues/123cmake command line arguments2020-12-01T09:32:10+01:00Stephan Thobercmake command line arguments@kaluza @ottor :
I want to pass command line arguments to cmake for compilation on my machine, using
`cmake -DNETCDF_CONFIG=path`
but get
```
CMake Warning:
  Manually-specified variables were not used by the project:

    NETCF_CONFIG
```
Any idea why this is happening? It does not happen for NETCDFF_CONFIG.https://git.ufz.de/mhm/mhm/-/issues/127mLM fork: Provision for lakes that outflows directly to another lake2020-07-02T00:14:04+02:00Pallav Kumar Shresthapallav-kumar.shrestha@ufz.demLM fork: Provision for lakes that outflows directly to another lake**Background**
* There would be cases in which a reservoir lake extent would extend upstream such that it reaches the outflow L0 of upstream reservoir lake.
* Or there could be natural lakes where the lake downstream has extent up to the waterfall L0 cell of the upstream lake.
**Issue**
* The code at the moment only checks nodes for being outlets and not lakes as in aforementioned cases.
* Conversely, it could be that the reservoirs are close to each other but not connected. However, if the user gives high values for the dam crest level of the downstream reservoir, this condition can (erroneously) arise.
* I came across such case in Khuzestan where the lake of Masjed e-Soleyman (downstream) extended up to the outflow L0 cell of Shahid Abbaspour reservoir (upstream) with high dam crest level.
![image](/uploads/2f6dce62b770979dfe760feca33fe789/image.png)
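The missing check described above could be sketched as follows (the data layout — lake cells and outlet cells keyed by lake index — is a hypothetical simplification of the mLM structures):

```python
def connected_lake_pairs(lake_cells, lake_outlets):
    # lake_cells:   {lake_id: set of (row, col) cells covered by the lake}
    # lake_outlets: {lake_id: (row, col) outlet/outflow cell of the lake}
    # Return (upstream, downstream) pairs where a downstream lake extends
    # over the outlet cell of an upstream lake.
    pairs = []
    for up, outlet in lake_outlets.items():
        for down, cells in lake_cells.items():
            if down != up and outlet in cells:
                pairs.append((up, down))
    return pairs
```

At runtime, any reported pair could trigger the suggested warning with both lake indices, plus the hint about lowering the downstream dam crest level.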
**Solution**
* There needs to be a provision where lakes are checked for being lake outlets of upstream lakes.
* If such cases arise, which would be very rare, the user is informed at runtime with lake indices of the upstream-downstream lakes
* User is also provided with the hint that lowering dam crest level of the downstream dam may solve the issue, in case the lake connection is erroneous.
* Lowering of the dam crest level for Masjed e-Soleyman (downstream reservoir) helped to avoid this situation as follows -
![image](/uploads/5508fc5c43b75bc34a6b8fd68b1f4795/image.png)Pallav Kumar Shresthapallav-kumar.shrestha@ufz.dePallav Kumar Shresthapallav-kumar.shrestha@ufz.dehttps://git.ufz.de/mhm/mhm/-/issues/138New SM process =>>> process(3)=4 and fixing bugs in the older FC dependency o...2020-12-01T09:33:17+01:00Mehmet Cüneyd DemirelNew SM process =>>> process(3)=4 and fixing bugs in the older FC dependency on root fraction coefficientHi @thober, @muellese
Simon and I made some revisions to several files of mHM to add Feddes and global FC dependency on root fraction coef. at SM process(3)=4. Simon informed Luis in a detailed email last week.
Also, I downloaded the latest "develop" folder on 22-8-2020 and compared our files with the current corresponding files to adapt them to the new conventions, like the "Domain" instead of "Basin" term. I ran the Python check-cases on my GNU (Cygwin) computer but received a "failed" message at the end.
Files are available at:
[DROPBOX URL](https://www.dropbox.com/sh/wkmf4v9euh0ae79/AADqIKxiE1U5GKgGxNzUNH3la?dl=0)
I can try to commit the new files and open a pull request on GitLab if you could guide me.
I have only done some issue reporting on GitLab and some commits to the old SVN system, but I never made a commit to the new GitLab system.
Revised files:
[mhm.nml](/uploads/78a18620d14c373a61c75e89b2719528/mhm.nml)
[mhm_parameter.nml](/uploads/171f10dfd3634c4a29b599316f6b3bb9/mhm_parameter.nml)[mo_restart.f90](/uploads/5795f8e9e07ca09ebf21cf8ce10cc9ba/mo_restart.f90)
[mo_soil_moisture.f90](/uploads/993e7f90ce54125ee6384efbcaeec328/mo_soil_moisture.f90)[mo_mpr_read_config.f90](/uploads/7cdbc040d4c895a202800f334cfa250d/mo_mpr_read_config.f90)
[mo_mpr_smhorizons.f90](/uploads/b995122be34a2866c14d8db5ffc95c9b/mo_mpr_smhorizons.f90)
[mo_mpr_soilmoist.f90](/uploads/ba167ba8b911945bc5d4e7577c227d19/mo_mpr_soilmoist.f90)
[mo_multi_param_reg.f90](/uploads/91e803d44846986a5db5b6296e8f3a03/mo_multi_param_reg.f90)Stephan ThoberStephan Thoberhttps://git.ufz.de/mhm/mhm/-/issues/107hourly meteo forcing nc2023-01-12T11:13:47+01:00Stephan Thoberhourly meteo forcing ncThis is an email send by Olda:
Hi Stephan,
Just a follow-up on Husain's comment during the mHM develop meeting.
During today's mHM meeting I checked further: the reading of hourly data is not working in mHM.
I remember checking this already a long time ago…
FYI, I have attached 3 meteo files with a modified (hourly) time step for the test basin.
These can be used in the test example with a modified time window:
```fortran
warming_Days(1) = 0
!> first year of wanted simulation period
eval_Per(1)%yStart = 1989
!> first month of wanted simulation period
eval_Per(1)%mStart = 01
!> first day of wanted simulation period
eval_Per(1)%dStart = 02
!> last year of wanted simulation period
eval_Per(1)%yEnd = 1989
!> last month of wanted simulation period
eval_Per(1)%mEnd = 02
!> last day of wanted simulation period
eval_Per(1)%dEnd = 02
```
The first thing is that `./src/common/mo_read_forcing_nc.f90` fails to recognize that these data are at an hourly time step.
If `inctimestep` is hardcoded there to `-4`, a segmentation fault occurs later in the code.
I did not go deeper, but I just want to confirm that there is an issue to be checked.
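For reference, the kind of time-step detection that seems to be missing could be sketched as follows (plain Python, in the spirit of the check cases; the mapping of step sizes to labels is an assumption for illustration — the issue only states that `-4` stands for hourly):

```python
def infer_timestep(time_hours):
    """Guess the forcing time step from a netCDF time axis given in hours."""
    # collect the set of successive differences along the time axis
    deltas = {b - a for a, b in zip(time_hours, time_hours[1:])}
    if len(deltas) != 1:
        raise ValueError(f"non-uniform time axis: {sorted(deltas)}")
    step = deltas.pop()
    # hypothetical label mapping for illustration only
    return {1: "hourly", 24: "daily"}.get(step, f"unknown ({step} h)")

print(infer_timestep([0, 1, 2, 3]))  # hourly
print(infer_timestep([0, 24, 48]))   # daily
```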
Thanks!
Best regards,
Olda
[pet.nc](/uploads/042c56ffde633de2f2efb26a126d6c2a/pet.nc)
[tavg.nc](/uploads/005f8ee487b8fa381e8def6fcf7432b2/tavg.nc)
[header.txt](/uploads/4ae6bdb6d30e8de7a3a1cf0fa499dbf2/header.txt)
[pre.nc](/uploads/71f9ed2044168a54a6a9c1bb5bddf115/pre.nc)

v5.12 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/95 · check first if output folders are present · 2020-11-30T08:58:33+01:00 · Stephan Thober

mHM does not check whether output files and folders exist at the beginning. This leads to annoying model behaviour.
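A pre-run check of the kind requested could look like the following sketch (plain Python; `assert_output_dirs` and the example directory name are hypothetical, not actual mHM namelist entries):

```python
import os

def assert_output_dirs(dirs):
    """Fail fast if any expected output directory is missing."""
    missing = [d for d in dirs if not os.path.isdir(d)]
    if missing:
        raise FileNotFoundError(f"missing output directories: {missing}")

# the current directory exists, so this passes silently
assert_output_dirs(["."])
```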
@ottor: please provide an assert function.

5.11 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/143 · [Version] introducing bugfix releases with v5.11 · 2020-10-05T13:49:45+02:00 · Sebastian Müller

In order to provide bugfixes for minor releases, we should introduce **micro** releases with `v5.11`.
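Under semantic versioning, such a bugfix release bumps only the PATCH component; a minimal sketch of the bump (ignoring pre-release suffixes like `5.11.1-dev0`):

```python
def bump_micro(version):
    """Increment the PATCH component of a MAJOR.MINOR.PATCH version string."""
    major, minor, patch = (int(p) for p in version.split("."))
    return f"{major}.{minor}.{patch + 1}"

print(bump_micro("5.11.0"))  # 5.11.1
```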
We should follow semantic versioning: https://semver.org/

5.11 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/142 · [EVE] NAG loadscript loads wrong netcdf version · 2020-08-31T16:23:44+02:00 · Sebastian Müller

The module versions in the NAG load scripts are not pinned, so they now load the wrong modules.

5.11 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/151 · module loads under easybuild in INSTALL.md · 2020-10-05T13:48:25+02:00 · Friedrich Boeing
Hello,
The INSTALL.md could be updated to point to the module-load scripts for the different compilers under EasyBuild.
The way of loading the Intel modules currently proposed in INSTALL.md,
```bash
module purge
ml uge/8.5.5-2 Java/1.8.0_202 grid-engine-tools/0.8.3-3-g93f1efa icc/2018.3.222-GCC-7.3.0-2.30 ifort/2018.3.222-GCC-7.3.0-2.30 iccifort/2018.3.222-GCC-7.3.0-2.30
```
is not up to date: it still uses old modules, which will crash in November 2020 when all old modules are deleted (according to Tom Strempel).
Best regards,
Friedrich

5.11 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/152 · Add iomkl/2020a workflow to eve load-scripts and CI · 2020-09-24T15:11:21+02:00 · Sebastian Müller

IOMKL 2020a comes with the following settings:
- Intel C Compiler v2020.1
- Intel Fortran Compiler v2020.1
- OpenMPI v4.0.3
- Intel MKL v2020.1

5.11 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/153 · compilation of mHM v5.4 under easybuild with cmake not possible · 2020-09-16T12:03:44+02:00 · Friedrich Boeing

Hello,
sorry for bothering you again with compilation issues. Unfortunately I cannot figure it out myself. I need to compile mHM v5.4 (commit 7033595d) for the current drought monitor with the new module system. Strangely, compilation with CMake (tried with both iomkl/2018b and iomkl/2020a) aborts with errors. When compiling with the old Makefile and moduleLoads (intel18), it works without problems.
Any hint on what I can do?
These are the error messages I get:
```bash
/work/boeing/mhm_v5.4/src/mHM/mo_mhm_eval.f90(717): error #6404: This name does not have a type, and must have an explicit type. [L1_AREACELL]
area_basin = sum(L1_areaCell(s1:e1) )
--------------------------------^
/work/boeing/mhm_v5.4/src/mHM/mo_mhm_eval.f90(717): error #6514: Substring or array slice notation requires CHARACTER type or array. [L1_AREACELL]
area_basin = sum(L1_areaCell(s1:e1) )
--------------------------------^
/work/boeing/mhm_v5.4/src/mHM/mo_mhm_eval.f90(717): error #6362: The data types of the argument(s) are invalid. [SUM]
area_basin = sum(L1_areaCell(s1:e1) )
--------------------------------^
/work/boeing/mhm_v5.4/src/mHM/mo_mhm_eval.f90(717): error #6361: An array-valued argument is required in this context. [SUM]
area_basin = sum(L1_areaCell(s1:e1) )
--------------------------------^
/work/boeing/mhm_v5.4/src/mHM/mo_mhm_eval.f90(717): error #6303: The assignment operation or the binary expression operation is invalid for the data types of the two operands. [SUM]
area_basin = sum(L1_areaCell(s1:e1) )
----------------------------^
/work/boeing/mhm_v5.4/src/mHM/mo_mhm_eval.f90(723): error #6514: Substring or array slice notation requires CHARACTER type or array. [L1_AREACELL]
basin_avg_TWS_sim(tt,ii) = ( dot_product( TWS_field (s1:e1), L1_areaCell(s1:e1) ) / area_basin )
----------------------------------------------------------------------------^
/work/boeing/mhm_v5.4/src/mHM/mo_mhm_eval.f90(723): error #6362: The data types of the argument(s) are invalid. [DOT_PRODUCT]
basin_avg_TWS_sim(tt,ii) = ( dot_product( TWS_field (s1:e1), L1_areaCell(s1:e1) ) / area_basin )
----------------------------------------------------------------------------^
/work/boeing/mhm_v5.4/src/mHM/mo_mhm_eval.f90(723): error #6372: One-dimensional array-valued arguments are required in this context. [DOT_PRODUCT]
basin_avg_TWS_sim(tt,ii) = ( dot_product( TWS_field (s1:e1), L1_areaCell(s1:e1) ) / area_basin )
```

Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/155 · [RELEASE] 5.11 checklist · 2021-02-03T11:47:28+01:00 · Sebastian Müller

- **Release notes**
- [x] notes about River temperature module !37
- [x] notes about new SM process !43
- **Clean-ups**
- [x] remove `testingInstructions`
- [x] remove `mRM` standalone specific files (not working as standalone anymore)
- [x] remove Makefile and related folders (new features not implemented there [cpp directives etc.])
- [x] remove `bash` folder
- **Documentation**
- [x] set `tocdepth` to `1` in `header_doxygen.tex`
- [x] proof reading of documentation pages (@thober, @lese)
- [x] cleanup `doc` folder
- [x] move `INSTALL.md` and other additional `*.md` files there
- [x] add intel-2020 workflow to `INSTALL.md`
- **Release Procedure**
- [x] create `release_5.11.0` branch from `develop` at point of feature stop
- [x] provide a `5.11.0-rc1` release candidate from this branch as pre-release to check release process
- [x] when ready update `version.txt` and `version_date.txt`
- [x] merge into `master`
- [x] make release `5.11.0` from `master`
- [x] merge updates made during release back to develop
- [x] set `version.txt` to `5.11.1-dev0`

5.11 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/156 · cmake build fails under ubuntu 20.04 with tag 5.10_fixed and tag 5.10 · 2020-10-06T15:02:00+02:00 · sluedtke

Trying to build mHM fails with the settings listed above.
For tag 5.10_fixed
```
git checkout 5.10_fixed
HEAD is now at b7181fe Update README.md
root@09a3ccc82a3d:/mhm# git status
HEAD detached at 5.10_fixed
nothing to commit, working tree clean
root@09a3ccc82a3d:/mhm# mkdir build && cd build
root@09a3ccc82a3d:/mhm/build# cmake ..
-- The Fortran compiler identification is GNU 9.3.0
-- Check for working Fortran compiler: /usr/bin/f95
-- Check for working Fortran compiler: /usr/bin/f95 -- works
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Checking whether /usr/bin/f95 supports Fortran 90
-- Checking whether /usr/bin/f95 supports Fortran 90 -- yes
-- build INDEPENDENT of module system OFF
-- search in additional directory for netCDF
-- found /usr/bin/nf-config
-- netcdff includes /usr/include
-- netcdff netcdf link flags -I/usr/include
-- netcdff netcdf library link flags -L/usr/lib/x86_64-linux-gnu -lnetcdff -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -lnetcdf -lnetcdf -ldl -lm
-- found /usr/lib/x86_64-linux-gnu/libnetcdff.so
-- found /usr/lib/x86_64-linux-gnu/libnetcdf.so
-- found /usr/lib/x86_64-linux-gnu/libnetcdf.so
-- found /usr/lib/x86_64-linux-gnu/libdl.so
-- found /usr/lib/x86_64-linux-gnu/libm.so
-- found netcdf libraries /usr/lib/x86_64-linux-gnu/libnetcdff.so;/usr/lib/x86_64-linux-gnu/libnetcdf.so;/usr/lib/x86_64-linux-gnu/libnetcdf.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libm.so
-- found netcdf other flags -Wl,-Bsymbolic-functions;-Wl,-z,relro;-Wl,-z,now
-- Performing Test CPP_FLAG
-- Performing Test CPP_FLAG - Success
-- the following debug flags will be used: -g -pedantic-errors -Wall -W -O -g -Wno-maybe-uninitialized
-- Configuring done
-- Generating done
-- Build files have been written to: /mhm/build
root@09a3ccc82a3d:/mhm/build# make
Scanning dependencies of target mhm
[ 1%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_kind.f90.o
[ 1%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_constants.f90.o
[ 2%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_common_variables.f90.o
[ 3%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_constants.f90.o
[ 4%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_message.f90.o
[ 5%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_global_variables.f90.o
[ 6%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_common_constants.f90.o
[ 7%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_append.f90.o
[ 8%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_string_utils.f90.o
[ 9%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_utils.f90.o
[ 10%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_upscaling_operators.f90.o
[ 11%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_pet.f90.o
[ 12%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_runoff.f90.o
[ 13%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_smhorizons.f90.o
[ 14%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_soilmoist.f90.o
[ 15%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_multi_param_reg.f90.o
[ 16%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_timer.f90.o
[ 17%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_eval.f90.o
[ 18%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_file.f90.o
[ 19%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_common_functions.f90.o
[ 20%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_finish.f90.o
[ 21%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_nml.f90.o
[ 22%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_read_config.f90.o
[ 23%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_netcdf.f90.o
[ 24%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_common_restart.f90.o
[ 25%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_restart.f90.o
[ 26%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_grid.f90.o
[ 27%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_orderpack.f90.o
[ 28%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_read_latlon.f90.o
[ 29%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_soil_database.f90.o
[ 30%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_startup.f90.o
[ 31%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_ncread.f90.o
[ 32%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_julian.f90.o
[ 33%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_read_forcing_nc.f90.o
[ 34%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_prepare_gridded_lai.f90.o
[ 35%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_read_lut.f90.o
[ 36%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_common_file.f90.o
[ 37%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_read_spatial_data.f90.o
[ 38%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_common_read_data.f90.o
[ 39%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_read_wrapper.f90.o
[ 39%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mpr_driver.f90.o
[ 40%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_common_read_config.f90.o
[ 41%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_read_timeseries.f90.o
[ 42%] Building Fortran object CMakeFiles/mhm.dir/src/common/mo_template.f90.o
[ 43%] Building Fortran object CMakeFiles/mhm.dir/src/common_mHM_mRM/mo_common_mHM_mRM_file.f90.o
[ 44%] Building Fortran object CMakeFiles/mhm.dir/src/common_mHM_mRM/mo_common_mHM_mRM_variables.f90.o
[ 45%] Building Fortran object CMakeFiles/mhm.dir/src/common_mHM_mRM/mo_common_mHM_mRM_read_config.f90.o
[ 45%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_optimization_utils.f90.o
[ 46%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_xor4096.f90.o
[ 46%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_anneal.f90.o
[ 47%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_dds.f90.o
[ 48%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_moment.f90.o
[ 49%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_ncwrite.f90.o
[ 50%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_mcmc.f90.o
[ 51%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_sce.f90.o
[ 52%] Building Fortran object CMakeFiles/mhm.dir/src/common_mHM_mRM/mo_optimization.f90.o
[ 53%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_corr.f90.o
[ 54%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_errormeasures.f90.o
[ 55%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_linfit.f90.o
[ 56%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_percentile.f90.o
[ 57%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_mad.f90.o
[ 58%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_spatialsimilarity.f90.o
[ 59%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_standard_score.f90.o
[ 60%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_temporal_aggregation.f90.o
[ 61%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_canopy_interc.f90.o
[ 62%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_file.f90.o
[ 63%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_mhm_constants.f90.o
[ 64%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_global_variables.f90.o
[ 65%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_init_states.f90.o
[ 66%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_spatial_agg_disagg_forcing.f90.o
[ 67%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_meteo_forcings.f90.o
[ 68%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_neutrons.f90.o
[ 69%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_pet.f90.o
[ 70%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_runoff.f90.o
[ 71%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_snow_accum_melt.f90.o
[ 72%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_soil_moisture.f90.o
[ 73%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_temporal_disagg_forcing.f90.o
[ 73%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_mhm.f90.o
[ 73%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_constants.f90.o
[ 74%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_global_variables.f90.o
[ 75%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_file.f90.o
[ 76%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_net_startup.f90.o
[ 77%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_mpr.f90.o
[ 78%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_read_config.f90.o
[ 79%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_read_data.f90.o
[ 80%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_restart.f90.o
[ 81%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_river_head.f90.o
[ 82%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_init.f90.o
[ 83%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_routing.f90.o
[ 84%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_write_fluxes_states.f90.o
[ 85%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_write.f90.o
[ 86%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_restart.f90.o
[ 87%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_write_fluxes_states.f90.o
[ 88%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_mhm_eval.f90.o
[ 89%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_mhm_read_config.f90.o
[ 90%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_signatures.f90.o
[ 91%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_objective_function_runoff.f90.o
[ 92%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_objective_function.f90.o
[ 93%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_read_optional_data.f90.o
[ 94%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_set_netcdf_outputs.f90.o
[ 95%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_startup.f90.o
[ 96%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mo_write_ascii.f90.o
[ 97%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mo_mrm_eval.f90.o
[ 98%] Building Fortran object CMakeFiles/mhm.dir/src/mRM/mrm_driver.f90.o
[ 99%] Building Fortran object CMakeFiles/mhm.dir/src/mHM/mhm_driver.f90.o
[100%] Linking Fortran executable mhm
/usr/bin/ld: unrecognized option '-Bsymbolic-functions;-Wl'
/usr/bin/ld: use the --help option for usage information
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/mhm.dir/build.make:1649: mhm] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/mhm.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
```
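For what it's worth, the unrecognized argument matches the semicolon-joined "other flags" list from the configure output: gcc splits each `-Wl,…` option at commas before handing it to `ld`, so the unsplit list reaches the linker as the single token `-Bsymbolic-functions;-Wl`. A quick sketch of the splitting that appears to be missing (illustration only, not the actual CMake fix):

```python
# Flag list exactly as CMake reported it during configuration.
other_flags = "-Wl,-Bsymbolic-functions;-Wl,-z,relro;-Wl,-z,now"

# Split at the CMake list separator ';' to get three valid linker options
# instead of one garbled argument.
print(other_flags.split(";"))
```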
For tag 5.10
```
root@09a3ccc82a3d:/mhm/build# rm -rf *
root@09a3ccc82a3d:/mhm/build# cd ..
root@09a3ccc82a3d:/mhm# git checkout 5.10
Previous HEAD position was b7181fe Update README.md
HEAD is now at 9ce0ec1 merge release into master
root@09a3ccc82a3d:/mhm# cd build/
root@09a3ccc82a3d:/mhm/build# cmake ..
-- The Fortran compiler identification is GNU 9.3.0
-- Check for working Fortran compiler: /usr/bin/f95
-- Check for working Fortran compiler: /usr/bin/f95 -- works
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Checking whether /usr/bin/f95 supports Fortran 90
-- Checking whether /usr/bin/f95 supports Fortran 90 -- yes
-- build INDEPENDENT of module system OFF
-- search in additional directory for netCDF
-- found /usr/bin/nf-config
-- netcdff includes /usr/include
-- netcdff netcdf link flags -I/usr/include
-- netcdff netcdf library link flags -L/usr/lib/x86_64-linux-gnu -lnetcdff -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -lnetcdf -lnetcdf -ldl -lm
-- found /usr/lib/x86_64-linux-gnu/libnetcdff.so
-- found /usr/lib/x86_64-linux-gnu/libnetcdf.so
-- found /usr/lib/x86_64-linux-gnu/libnetcdf.so
-- found /usr/lib/x86_64-linux-gnu/libdl.so
-- found /usr/lib/x86_64-linux-gnu/libm.so
-- found netcdf libraries /usr/lib/x86_64-linux-gnu/libnetcdff.so;/usr/lib/x86_64-linux-gnu/libnetcdf.so;/usr/lib/x86_64-linux-gnu/libnetcdf.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libm.so
-- found netcdf other flags -Wl,-Bsymbolic-functions;-Wl,-z,relro;-Wl,-z,now
-- Performing Test CPP_FLAG
-- Performing Test CPP_FLAG - Success
-- the following debug flags will be used: -g -pedantic-errors -Wall -W -O -g -Wno-maybe-uninitialized
-- Configuring done
-- Generating done
-- Build files have been written to: /mhm/build
root@09a3ccc82a3d:/mhm/build# make
Scanning dependencies of target mhm
[ 1%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_kind.f90.o
[ 2%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_constants.f90.o
[ 4%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_constants.f90.o
[ 5%] Building Fortran object CMakeFiles/mhm.dir/src/lib/mo_message.f90.o
[ 6%] Building Fortran object CMakeFiles/mhm.dir/src/MPR/mo_mpr_global_variables.f90.o
/mhm/src/MPR/mo_mpr_global_variables.f90:17:6:
17 | use mo_common_variables, only : period
| 1
Fatal Error: Cannot open module file 'mo_common_variables.mod' for reading at (1): No such file or directory
compilation terminated.
make[2]: *** [CMakeFiles/mhm.dir/build.make:102: CMakeFiles/mhm.dir/src/MPR/mo_mpr_global_variables.f90.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/mhm.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
```
The current HEAD of `develop` builds successfully.
On my local machine (Manjaro), the tag 5.10_fixed and the head of develop build successfully, but not 5.10, which fails with the same error as on Ubuntu.

5.11 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/159 · [netcdf] use of `_FillValue` · 2022-07-07T10:38:58+02:00 · Sebastian Müller

To be in line with the [CF conventions](https://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html), we should talk about the usage of the `_FillValue` attribute besides `missing_value` in the netcdf output.
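A sketch of what using both attributes could mean in practice, with a plain attribute dict standing in for a netCDF variable (hypothetical helper, not existing mHM code):

```python
def harmonize_fill_attrs(attrs, nodata=-9999.0):
    """Return a copy of a variable's attribute dict with `_FillValue` and
    `missing_value` set to the same nodata value, as the CF conventions
    recommend when both are present; existing values win over the default."""
    out = dict(attrs)
    value = out.get("missing_value", out.get("_FillValue", nodata))
    out["missing_value"] = value
    out["_FillValue"] = value
    return out

print(harmonize_fill_attrs({"units": "mm", "missing_value": -9999.0}))
```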
@ottor, @rakovec, @thober: opinions?

v5.12 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/160 · cmake error in compiling mhm-develop · 2020-11-30T14:16:48+01:00 · Mehmet Cüneyd Demirel

Dear @muellese and @thober
I get the following CMake errors. I tried bringing over the CMake-related files from v5.10_fixed, but that didn't help.
![image](/uploads/4154d36f744e1631ad510adf8080d7b2/image.png)
Also, I don't see a Makefile in https://git.ufz.de/mhm/mhm.

5.11 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/165 · NAG check failing due to expired license · 2021-01-02T16:27:45+01:00 · Sebastian Müller

CI for NAG is failing due to an expired license: https://git.ufz.de/mhm/mhm/-/jobs/38869
WKDV is informed:
https://git.ufz.de/it/eve/software/-/issues/211

Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/170 · mhm output as float? · 2021-07-08T18:30:34+02:00 · Oldrich Rakovec

Dear mHM admins,
Please, could there be an option (a flag in mhm.nml) to print the netcdf variables as float instead of double?
This could squeeze the file size in some ongoing projects (mainly HICAM, ESM) in which we hit the EVE limits.
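For reference, the expected size effect of the dtype change (a sketch in plain Python with the `struct` module, no netCDF involved): a float32 field takes half the bytes of the same float64 field.

```python
import struct

values = [0.5] * 1000  # stand-in for one gridded output field
double_bytes = len(struct.pack(f"{len(values)}d", *values))  # 8 bytes/value
single_bytes = len(struct.pack(f"{len(values)}f", *values))  # 4 bytes/value
print(double_bytes, single_bytes)  # 8000 4000
```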
Thanks,
Olda
@lese @muellese @thober @rkumar @boeing @marxav

5.11.2 · Sebastian Müller

---

https://git.ufz.de/mhm/mhm/-/issues/173 · snow melt output in mhm_flux_states · 2022-06-17T20:58:54+02:00 · sluedtke

Is it possible to add the snow-melt variable of the model to the output written in "mhm_flux_states"?

v5.12 · Sebastian Müller