global optimization of mHM
Protocol of working group meeting
date: 29/05/2019
persons: @lese, @thober, @kaluza, @ottor
boundary conditions:
- we need to use 160k core hours per month (2 million in total); it is possible to postpone one month's budget to the next
- period 05/2019 - 04/2020
target:
- use budgets each month
- run global multi-objective optimizations on JUWELS
tasks:
- prepare reference datasets (TWS, FLUXNET, Q, ...)
- code multi-objective function in mHM
- stepwise setup of the final mHM: optimize one individual objective function at a time (e.g. start with TWS)
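The multi-objective function above will be coded in mHM (Fortran); as a minimal illustration of the idea, the following Python sketch aggregates per-variable Kling-Gupta efficiencies (e.g. for Q and TWS) into one scalar cost. All function and variable names here are hypothetical, not mHM's actual interface.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency between simulated and observed series."""
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

def multi_objective(sim, obs, weights):
    """Aggregate per-variable KGE values into one scalar cost to minimize.

    sim, obs: dicts mapping variable name (e.g. "Q", "TWS") to time series.
    weights:  dict mapping variable name to its weight in the sum.
    """
    return sum(w * (1.0 - kge(sim[var], obs[var]))
               for var, w in weights.items())
```

A perfect simulation gives KGE = 1 for every variable and therefore a total cost of 0; the weights let the stepwise setup start from a single objective (weight 1 for TWS, 0 elsewhere) and add others later.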
@rakovec tasks:
- evaluate streamflow data (Budyko/plausibility analysis), assign gauge coordinates, and then delineate basins for the Q optimization
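One possible form of the Budyko plausibility screen mentioned above: a gauge's evaporative index E/P = 1 - Q/P should fall inside the Budyko envelope, i.e. between 0 (water limit) and min(1, PET/P) (energy limit). The function below is a sketch of that check, not the actual analysis script; the tolerance parameter is an assumption.

```python
def budyko_plausible(q, p, pet, tol=0.05):
    """Screen a gauge for plausibility in Budyko space.

    q, p, pet: long-term mean streamflow, precipitation, and potential
    evapotranspiration (same units, e.g. mm/yr). The implied evaporative
    index E/P = 1 - Q/P must lie between 0 and min(1, PET/P), within
    the tolerance `tol`.
    """
    evap_index = 1.0 - q / p
    upper = min(1.0, pet / p)
    return -tol <= evap_index <= upper + tol
```

Gauges failing this screen (e.g. Q exceeding P, or implied evaporation above the energy limit) would be excluded before basin delineation.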
@kaluza tasks:
- 1a) June: first set up the Danube with Q optimization (test the code and find optimal parallelization parameters on JUWELS)
- 1b) June: run the last working parallel mHM version on one of Olda's global subdomains and check whether the domain decomposition works
- 2) July: run mHM with parallel I/O for TWS
- @thober will assist with identifying the bugs for task 2)
@ottor tasks:
- merge the netCDF reading from the MPR branch into the preparing_mhm_eval_for_parallel_io branch
open tasks:
- convert the global setup from ASCII to netCDF
- how to organize domains for Q optimization?
- supply domains that are not too big, so that the domain decomposition can still run on JUWELS
- what is the appropriate subdomain organization for L0 and L2 files
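For the ASCII-to-netCDF conversion task, the input side is typically an ESRI ASCII grid. The sketch below parses such a grid into header metadata and a 2-D array; writing the result to netCDF would then use a library such as xarray or netCDF4 (not shown). The function name is illustrative, not part of mHM.

```python
import io
import numpy as np

def read_ascii_grid(text):
    """Parse an ESRI ASCII grid into (header dict, 2-D numpy array).

    The six header lines (ncols, nrows, xllcorner, yllcorner, cellsize,
    NODATA_value) are read as floats; nodata cells become NaN.
    """
    lines = text.strip().splitlines()
    header = {}
    for line in lines[:6]:
        key, val = line.split()
        header[key.lower()] = float(val)
    data = np.loadtxt(io.StringIO("\n".join(lines[6:])))
    data = np.where(data == header["nodata_value"], np.nan, data)
    return header, data
```

From the header, lat/lon coordinate vectors can be built (xllcorner/yllcorner plus cellsize) and attached as netCDF dimensions when writing out.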
current status:
- the global mHM setup is ready (at least with a dummy setup)
- coarse domain decomposition script is ready (latlonbox)
- bugs with reading L0 in parallel
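The coarse latlonbox decomposition mentioned in the status can be pictured as follows; this is a minimal sketch of splitting a bounding box into regular lat/lon subdomains, not the actual script.

```python
def latlon_boxes(lat_min, lat_max, lon_min, lon_max, n_lat, n_lon):
    """Split a bounding box into an n_lat x n_lon grid of subdomain
    boxes, each returned as (south, north, west, east)."""
    dlat = (lat_max - lat_min) / n_lat
    dlon = (lon_max - lon_min) / n_lon
    boxes = []
    for i in range(n_lat):
        for j in range(n_lon):
            boxes.append((lat_min + i * dlat,
                          lat_min + (i + 1) * dlat,
                          lon_min + j * dlon,
                          lon_min + (j + 1) * dlon))
    return boxes
```

Each box would then map to one mHM subdomain, with n_lat and n_lon chosen so that the per-subdomain L0 data still fits the memory of a JUWELS node.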