Commit a3528338 authored by Robert Schweppe

Merge branch 'new_release' into 'develop'

merge master back into develop

See merge request !73
parents dfe3f8d6 9bce4ecd
Pipeline #24325 failed in 14 minutes and 54 seconds
......@@ -152,8 +152,9 @@ show-env-vars:
# template for documentation jobs
.documentation_template: &documentation_template
#only:
# - develop
only:
- develop
- master
stage: build
script:
- module load foss/2019b
......@@ -167,19 +168,17 @@ show-env-vars:
- cd latex/ && make > ../doxygen_latex_dev.txt
- cp refman.pdf ../html/mpr_doc.pdf
- cp refman.pdf ../mpr_doc_dev.pdf
- cp refman.pdf ../mpr_doc_mas.pdf # TODO: remove once we have a decent version on master, too
- cd .. && mv html html_dev
- mv doxygen_warn.txt doxygen_warn_dev.txt
- rm -rf latex
# TODO: activate once we have a decent version on master, too
# # same for master
# - git checkout master
# - test -f doc/doxygen-1.8.8.config && cp doc/doxygen-1.8.8.config doc/doxygen.config
# - doxygen doc/doxygen.config > doxygen_log_mas.txt
# - cd latex/ && make > ../doxygen_latex_mas.txt
# - cp refman.pdf ../html/mpr_doc.pdf
# - cp refman.pdf ../mpr_doc_mas.pdf
# - cd .. && mv html html_mas
# same for master
- git checkout master
- test -f doc/doxygen-1.8.8.config && cp doc/doxygen-1.8.8.config doc/doxygen.config
- doxygen doc/doxygen.config > doxygen_log_mas.txt
- cd latex/ && make > ../doxygen_latex_mas.txt
- cp refman.pdf ../html/mpr_doc.pdf
- cp refman.pdf ../mpr_doc_mas.pdf
- cd .. && mv html html_mas
# care about warnings file (maybe missing on master)
- |
if [ -f doxygen_warn.txt ]; then
......@@ -196,11 +195,11 @@ show-env-vars:
- doxygen_log_dev.txt
- doxygen_latex_dev.txt
- doxygen_warn_dev.txt
#- html_mas
- html_mas
- mpr_doc_mas.pdf
#- doxygen_log_mas.txt
#- doxygen_latex_mas.txt
#- doxygen_warn_mas.txt
- doxygen_log_mas.txt
- doxygen_latex_mas.txt
- doxygen_warn_mas.txt
# ##################
# ### BUILD JOBS ###
......
# Multiscale parameter regionalization -- MPR
- The current release is **[MPR v0.6.4][1]**.
- The current release is **[MPR v1.0.1][1]**.
- The latest MPR release notes can be found in the file [RELEASES][3] or [online][4].
- General information can be found on the [MPR website](https://www.ufz.de/index.php?en=40126).
- The mHM comes with a [LICENSE][6] agreement, this includes also the GNU Lesser General Public License.
- The MPR comes with a [LICENSE][6] agreement, which also includes the GNU Lesser General Public License.
- There is a list of [publications using MPR][7].
**Please note:** The [GitLab repository](https://git.ufz.de/chs/MPR) grants read access to the code.
......@@ -19,7 +19,7 @@ The online documentation for mHM can be found here (pdf versions are provided th
The model code can be cited as:
> Schweppe, R., S. Thober, M. Kelbling, R. Kumar, S. Attinger, and L. Samaniego (2021): MPR 1.0: A stand-alone Multiscale Parameter Regionalization Tool for Parameter Estimation of Land Surface Models, Geoscientific Model Development (to be submitted)
> Schweppe, R., S. Thober, M. Kelbling, R. Kumar, S. Attinger, and L. Samaniego (2021): MPR 1.0: A stand-alone Multiscale Parameter Regionalization Tool for Parameter Estimation of Land Surface Models, Geoscientific Model Development (submitted)
Please see the original publication of the MPR framework:
......@@ -29,7 +29,7 @@ Please see the original publication of the MPR framework:
The model code can be generally cited as:
> **MPR:**
> Schweppe, Robert, Thober, Stephan, Kelbling, Matthias, Kumar, Rohini, Attinger, Sabine, & Samaniego, Luis. (2021, March 31). Multiscale Parameter Regionalization tool -MPR v. 1.0 (Version 1.0). Zenodo. http://doi.org/10.5281/zenodo.4650513
To cite a certain version, have a look at the [Zenodo site][10].
......@@ -43,10 +43,10 @@ See also the [documentation][5] for detailed instructions to setup MPR.
1. Add the required transfer functions to the code.
2. Compile MPR with CMake.
3. Run MPR on the example `./mhm`, which uses settings from [mpr.nml](mpr.nml).
3. Run MPR on the example via `./MPR`, which uses settings from [mpr.nml](mpr.nml).
[1]: https://git.ufz.de/chs/mpr/tree/0.6.4
[1]: https://git.ufz.de/chs/mpr/tree/1.0.0
[3]: doc/src/07_RELEASES.md
[4]: https://git.ufz.de/chs/mpr/tags/
[5]: https://chs.pages.ufz.de/mpr
......@@ -54,4 +54,4 @@ See also the [documentation][5] for detailed instructions to setup MPR.
[7]: doc/mpr_papers.md
[8]: doc/src/02_dependencies.md
[9]: doc/src/03_installation.md
[10]: https://zenodo.org/
[10]: https://zenodo.org/record/4650513
......@@ -5,15 +5,15 @@
As MPR relies on (relatively) recent Fortran features, we require one of these compilers:
- `gfortran` versions >v6.0 (routinely checked versions: 7.3 and 8.3)
- `nag` compiler version >v6.2 (routinely checked versions: 6.2)
- `intel` compiler version >v18.0 (routinely checked versions: 18.0, 20.0)
- `intel` compiler version >v18.0 (routinely checked versions: 18.0, 19.0)
Other compilers have not been tested and likely do not work out of the box.
## Libraries
### lightweight_lib
### FORCES
We rely on a set of independently maintained Fortran utility routines ([lightweight CHS Fortran library](https://git.ufz.de/chs/lightweight_fortran_lib)).
We rely on a set of independently maintained Fortran utility routines ([FORtran library for Computational Environmental Systems](https://git.ufz.de/chs/forces)).
It in turn requires the NetCDF4 Fortran library for input/output parsing. See its documentation for installation details.
### flogging
......
......@@ -2,18 +2,11 @@
## MPR v1.0 (Mar 2021)
### Experimental Features
- initial release
- foo
## MPR v1.0.1 (May 2021)
### Enhancements
- foo
### Changes
- foo
### Bugfixes
- foo
\ No newline at end of file
- reintegrated erroneously deleted tests for Python preprocessors
- fixed some typos in README.md
- updated the hpc-module-loads subrepo
- activated documentation building for master branch by default as well
\ No newline at end of file
......@@ -5,8 +5,8 @@
;
[subrepo]
remote = https://git.ufz.de/chs/HPC-Fortran-module-loads.git
branch = v1.1
commit = 6829f263b7a64932325f751212a994c03fd0c35b
parent = 6edbc69c5024c0816dcc3d19fec0542903628a8e
branch = v1.2
commit = d0c99c3686409792f6d90b65343a4016b8de9c32
parent = 72dbd446089af04f726c427b3140a8fb10f9d5e6
method = merge
cmdver = 0.4.3
......@@ -10,10 +10,15 @@ All these scripts will load:
- netCDF-Fortran
- CMake
- the MPR Python Environment
- pFUnit - Fortran unit testing framework
- pFUnit - Fortran unit testing framework (_not available for GNU 6.4_)
### Usage
- GNU 6.4 compiler (`foss/2018a` Toolchain):
```bash
source eve.gfortran64 # or
source eve.gfortran64MPI
```
- GNU 7.3 compiler (`foss/2018b` Toolchain):
```bash
source eve.gfortran73 # or
......
module purge
module load foss/2018a
module load netCDF-Fortran
module load CMake
module use /global/apps/modulefiles
module load python_env_mpr
export NETCDF_DIR="$EBROOTNETCDF"
export NETCDF_FORTRAN_DIR="$EBROOTNETCDFMINFORTRAN"
export FC=gfortran
module purge
module load foss/2018a
module load netCDF-Fortran
module load CMake
module use /global/apps/modulefiles
module load python_env_mpr
export NETCDF_DIR="$EBROOTNETCDF"
export NETCDF_FORTRAN_DIR="$EBROOTNETCDFMINFORTRAN"
export FC=mpifort
......@@ -9,18 +9,18 @@ Author : Robert Schweppe (robert.schweppe@ufz.de)
Created : 2019-09-05 11:46
"""
import argparse
import math
import warnings
from itertools import product
# IMPORTS
import geopandas as gpd
import numpy as np
import pandas as pd
import tqdm
import xarray as xr
import numpy as np
import argparse
import sys
from shapely.geometry import Polygon
import tqdm
import math
import warnings
from itertools import product
# GLOBAL VARIABLES
MISSING_VALUE = -9999.0
......@@ -66,7 +66,7 @@ def parse_args():
# CLASSES
class MyGeoDataFrame(gpd.GeoDataFrame):
def get_SCRIP_vars(self):
#self.check_type()
# self.check_type()
print('getting the centroid lon values')
# centroid_lon = self.check_longitude(self.get_centroid('x'))
centroid_lon = self.get_centroid('x')
......@@ -90,22 +90,23 @@ class MyGeoDataFrame(gpd.GeoDataFrame):
# first get the number of corners for each polygon
# subtract 1 because last vertex is double
print('getting number of vertices for each cell')
lengths = [len(item.exterior.coords.xy[0]) - 1 if item.geom_type == 'Polygon' else len(item[0].exterior.coords.xy[0]) - 1 for item in self.geometry]
lengths = [len(item.exterior.coords.xy[0]) - 1 if item.geom_type == 'Polygon' else len(
item[0].exterior.coords.xy[0]) - 1 for item in self.geometry]
max_length = max(lengths)
# init the final arrays and set the default missing value
corner_lon = np.zeros((len(self.geometry), max_length))
corner_lon[corner_lon==0]= MISSING_VALUE
corner_lon[corner_lon == 0] = MISSING_VALUE
corner_lat = np.zeros((len(self.geometry), max_length))
corner_lat[corner_lat==0]= MISSING_VALUE
corner_lat[corner_lat == 0] = MISSING_VALUE
# now loop over each polygon and iteratively set the corner values in the target array
# TODO: sort the nodes so the centroid is always left of the node path
# https://stackoverflow.com/questions/1165647/how-to-determine-if-a-list-of-polygon-points-are-in-clockwise-order
for i_item, item in enumerate(self.geometry):
if item.geom_type == 'Polygon':
corner_lon[i_item,:lengths[i_item]], corner_lat[i_item,:lengths[i_item]] = \
corner_lon[i_item, :lengths[i_item]], corner_lat[i_item, :lengths[i_item]] = \
self._check_order(*item.exterior.coords.xy)
elif item.geom_type == 'MultiPolygon':
corner_lon[i_item,:lengths[i_item]], corner_lat[i_item,:lengths[i_item]] = \
corner_lon[i_item, :lengths[i_item]], corner_lat[i_item, :lengths[i_item]] = \
self._check_order(*item[0].exterior.coords.xy)
else:
raise
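The loop above writes rings with differing vertex counts into fixed-width `corner_lon`/`corner_lat` arrays, leaving unused slots at `MISSING_VALUE`. The padding step in isolation, as a NumPy sketch (the helper name `pad_corners` is hypothetical; `np.full` replaces the zeros-then-mask initialization with the same result):

```python
import numpy as np

MISSING_VALUE = -9999.0

def pad_corners(rings):
    """Pack variable-length vertex lists into one fixed-width array.

    `rings` is a list of coordinate sequences; shorter rings are padded
    with MISSING_VALUE up to the length of the longest ring in the set.
    """
    lengths = [len(ring) for ring in rings]
    out = np.full((len(rings), max(lengths)), MISSING_VALUE)
    for i, ring in enumerate(rings):
        # only the first lengths[i] slots carry real coordinates
        out[i, :lengths[i]] = ring
    return out
```

For example, padding a triangle and a quadrilateral together yields a `(2, 4)` array whose last triangle slot stays at `-9999.0`.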
......@@ -115,7 +116,7 @@ class MyGeoDataFrame(gpd.GeoDataFrame):
def check_longitude(self, lon_arg):
if lon_arg.min() <= 0:
warnings.warn('Longitude values are ranging from -180 to 180, converting to range 0 to 360')
lon_arg= np.where(lon_arg<0,lon_arg+360, lon_arg)
lon_arg = np.where(lon_arg < 0, lon_arg + 360, lon_arg)
return lon_arg
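The conversion in `check_longitude` can be exercised on its own; a minimal sketch of the same wrap (the function name `wrap_longitude` is hypothetical, the logic mirrors the script):

```python
import numpy as np

def wrap_longitude(lon):
    """Convert longitudes from the -180..180 convention to 0..360."""
    lon = np.asarray(lon, dtype=float)
    if lon.min() <= 0:
        # shift only the negative values by one full revolution
        lon = np.where(lon < 0, lon + 360.0, lon)
    return lon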
def _check_order(self, lons, lats):
......@@ -140,11 +141,11 @@ class MyGeoDataFrame(gpd.GeoDataFrame):
min_index = np.argmin(lats)
while True:
# all neighboring indices
indices = [(min_index-1)%len(lats), min_index, (min_index+1)%len(lats)]
indices = [(min_index - 1) % len(lats), min_index, (min_index + 1) % len(lats)]
# calculate determinant as here: https://en.wikipedia.org/wiki/Curve_orientation
det = \
(lons[indices[1]] - lons[indices[0]])*(lats[indices[2]] - lats[indices[0]]) - \
(lons[indices[2]] - lons[indices[0]])*(lats[indices[1]] - lats[indices[0]])
(lons[indices[1]] - lons[indices[0]]) * (lats[indices[2]] - lats[indices[0]]) - \
(lons[indices[2]] - lons[indices[0]]) * (lats[indices[1]] - lats[indices[0]])
if det == 0:
# the three points are colinear, check next vertices
min_index = indices[2]
......@@ -156,6 +157,7 @@ class MyGeoDataFrame(gpd.GeoDataFrame):
# sort ascending
return lons[::-1], lats[::-1]
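The determinant test in `_check_order` implements the curve-orientation criterion linked above: at the lowest vertex, which is guaranteed to lie on the convex hull, a positive determinant of the two adjacent edge vectors means counter-clockwise order. A self-contained sketch (the helper name `orientation` is hypothetical; it assumes the ring contains at least one non-colinear vertex triple):

```python
def orientation(lons, lats):
    """Return +1 for counter-clockwise, -1 for clockwise vertex order."""
    min_index = min(range(len(lats)), key=lats.__getitem__)
    while True:
        # indices of the previous, current, and next vertex (wrapping around)
        a, b, c = (min_index - 1) % len(lats), min_index, (min_index + 1) % len(lats)
        det = (lons[b] - lons[a]) * (lats[c] - lats[a]) - \
              (lons[c] - lons[a]) * (lats[b] - lats[a])
        if det == 0:
            # the three points are colinear, check the next vertex
            min_index = c
        else:
            return 1 if det > 0 else -1
```

A unit square listed counter-clockwise gives `+1`; the same square listed clockwise gives `-1`.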
def handle_latlon_format(ds):
coord_var = ['south_north', 'west_east']
# select all data_vars that have dimension grid_size
......@@ -176,13 +178,13 @@ def handle_latlon_format(ds):
df.columns = [data_var]
dfs.append(df)
# we turn the pandas DataFrame in a Geopandas GeoDataFrame and still need geometry
gdf = gpd.GeoDataFrame(pd.concat(dfs, axis=1), geometry=[[]]*len(dfs[0]))
gdf = gpd.GeoDataFrame(pd.concat(dfs, axis=1), geometry=[[]] * len(dfs[0]))
# convert units to degree
# loop over polygons and calculate number of edges and set geometry item
for i, j in product(ds[coord_var[0]], ds[coord_var[1]]):
xvals = ds['XLONG_bnds'].isel(**{coord_var[1]: j}).values
yvals = ds['XLAT_bnds'].isel(**{coord_var[0]: i}).values
gdf.loc['{}_{}'.format(float(i),float(j)), 'geometry'] = Polygon([
gdf.loc['{}_{}'.format(float(i), float(j)), 'geometry'] = Polygon([
(xvals[0], yvals[0]),
(xvals[1], yvals[0]),
(xvals[1], yvals[1]),
......@@ -227,10 +229,10 @@ if __name__ == '__main__':
ds = xr.Dataset(
data_vars={'grid_corner_lon': (['grid_size', 'grid_corners'], corner_lon),
'grid_corner_lat': (['grid_size', 'grid_corners'], corner_lat),
'grid_center_lon': (['grid_size'], centroid_lon),
'grid_center_lat': (['grid_size'], centroid_lat),
},
'grid_corner_lat': (['grid_size', 'grid_corners'], corner_lat),
'grid_center_lon': (['grid_size'], centroid_lon),
'grid_center_lat': (['grid_size'], centroid_lat),
},
attrs={'title': args.name, 'units': units}
)
# add units globally and to each variable
......@@ -258,7 +260,7 @@ if __name__ == '__main__':
EXCLUDE_DIMS = ['grid_center_lon', 'grid_center_lat', 'grid_corner_lon', 'grid_corner_lat', 'grid_imask']
# all MPI-ICON based variable names and its SCRIP counterparts
RENAME_VAR_DICT = {'clon': 'grid_center_lon', 'clat': 'grid_center_lat',
'clon_vertices': 'grid_corner_lon', 'clat_vertices': 'grid_corner_lat'}
'clon_vertices': 'grid_corner_lon', 'clat_vertices': 'grid_corner_lat'}
# all MPI-ICON based dimension names and its SCRIP counterparts
RENAME_DIM_DICT = {'cell': 'grid_size', 'nv': 'grid_corners'}
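Since `Dataset.rename` accepts a single mapping covering both variables and dimensions, the two dictionaries above are merged with the `{**a, **b}` syntax before the call; the merge itself is plain Python:

```python
RENAME_VAR_DICT = {'clon': 'grid_center_lon', 'clat': 'grid_center_lat',
                   'clon_vertices': 'grid_corner_lon', 'clat_vertices': 'grid_corner_lat'}
RENAME_DIM_DICT = {'cell': 'grid_size', 'nv': 'grid_corners'}

# one combined mapping, as later passed to xarray's Dataset.rename()
rename_map = {**RENAME_VAR_DICT, **RENAME_DIM_DICT}
```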
# all MPI-ICON based data variable names
......@@ -274,17 +276,18 @@ if __name__ == '__main__':
# do we have special MPI-ICON names?
if all(mpi_icon_format):
# select and rename
ds = ds[list(RENAME_VAR_DICT.keys())+SELECT_VARS].rename({**RENAME_VAR_DICT, **RENAME_DIM_DICT})
ds = ds[list(RENAME_VAR_DICT.keys()) + SELECT_VARS].rename({**RENAME_VAR_DICT, **RENAME_DIM_DICT})
# set the grid_imask property based on sea_land_mask or to default 1
if 'cell_sea_land_mask' in ds:
ds['grid_imask'] = xr.where(ds['cell_sea_land_mask']>1, 1, 0)
ds['grid_imask'] = xr.where(ds['cell_sea_land_mask'] > 1, 1, 0)
else:
ds['grid_imask'] = (('grid_size',), np.ones_like(ds['grid_center_lon'].values, dtype=int))
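The branch above derives the SCRIP `grid_imask` from `cell_sea_land_mask` when present, and otherwise marks every cell active. A standalone sketch with NumPy in place of `xarray` (the helper name `build_imask` is hypothetical; `np.where` has the same element-wise semantics as `xr.where` here):

```python
import numpy as np

def build_imask(n_cells, sea_land_mask=None):
    """SCRIP grid_imask: 1 where the cell is active, 0 otherwise."""
    if sea_land_mask is not None:
        # active only where the sea-land mask exceeds 1, as in the script
        return np.where(np.asarray(sea_land_mask) > 1, 1, 0)
    # no mask available: every cell is active
    return np.ones(n_cells, dtype=int)
```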
# select all data_vars that have dimension grid_size
shp_vars = [data_var for data_var in ds.data_vars if 'grid_size' in ds[data_var].dims and data_var not in EXCLUDE_DIMS]
shp_vars = [data_var for data_var in ds.data_vars if
'grid_size' in ds[data_var].dims and data_var not in EXCLUDE_DIMS]
if not shp_vars:
ds['id'] = (('grid_size',), np.array(range(1,len(ds['grid_size'])+1)))
shp_vars =['id']
ds['id'] = (('grid_size',), np.array(range(1, len(ds['grid_size']) + 1)))
shp_vars = ['id']
# empty container
dfs = []
for data_var in shp_vars:
......@@ -300,7 +303,10 @@ if __name__ == '__main__':
df.columns = [data_var]
dfs.append(df)
# we turn the pandas DataFrame in a Geopandas GeoDataFrame and still need geometry
gdf = gpd.GeoDataFrame(pd.concat(dfs, axis=1))
gdf = gpd.GeoDataFrame(pd.concat(dfs, axis=1, keys=shp_vars))
# merge the MultiIndex into a flat index
gdf.columns = ['_'.join((str(item) for item in _)) for _ in gdf.columns]
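Passing `keys=shp_vars` to `pd.concat` produces a two-level column MultiIndex, which the `'_'.join` above flattens into names like `var_0`. The same two steps in a minimal pandas sketch (the frames and keys are invented for illustration):

```python
import pandas as pd

# two frames concatenated with keys= produce a two-level column MultiIndex
dfs = [pd.DataFrame({'a': [1, 2]}), pd.DataFrame({'a': [3, 4]})]
flat = pd.concat(dfs, axis=1, keys=['var1', 'var2'])

# merge the MultiIndex into a flat index, as in the script
flat.columns = ['_'.join(str(item) for item in col) for col in flat.columns]
```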
# convert units to degree
n_corners = len(ds['grid_corners'])
for var_name in EXCLUDE_DIMS:
......@@ -323,7 +329,7 @@ if __name__ == '__main__':
ds['grid_corner_lat'].isel(grid_size=i_polygon)))
# set WGS84 by default
gdf.crs = {'init' :'epsg:4326'}
gdf.crs = {'init': 'epsg:4326'}
gdf.to_file(args.output_file)
else:
raise Exception('Did not get proper filenames. Script only works for conversion of *.shp->*.nc or vice versa')
......@@ -20,6 +20,13 @@ from textwrap import dedent
from src_python.pre_proc.mpr_interface import OPTIONS
import re
DOT2TEX_IS_AVAILABLE = True
try:
import dot2tex as d2t
except ImportError:
d2t = None
DOT2TEX_IS_AVAILABLE = False
DOT2TEX_IS_AVAILABLE = True
try:
import dot2tex as d2t
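The `try`/`except ImportError` guard above is the standard optional-dependency pattern: a flag records availability and the module handle is set to `None` so later code can branch on it instead of crashing at import time. A generic sketch (the helper name `load_optional` and the absent package name are hypothetical):

```python
import importlib

def load_optional(module_name):
    """Import a module if available; return (module_or_None, available_flag)."""
    try:
        return importlib.import_module(module_name), True
    except ImportError:
        return None, False

json_mod, have_json = load_optional('json')              # stdlib: always present
missing, have_missing = load_optional('no_such_pkg_x')   # hypothetical: absent
```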
......@@ -32,6 +39,7 @@ OUTPUT_FILE_BASENAME = 'MPR_graphviz'
DOT2TEX_GUIDE_HTML = 'https://dot2tex.readthedocs.io/en/latest/installation_guide.html'
DEFAULT_PATH_TO_ROOT = '../../'
DEFAULT_CONFIG_FILE = 'predefined_nmls/mpr_mhm_visual.nml'
# DEFAULT_PARAM_FILE = 'mpr_global_parameter.nml' # NOT USED
DEFAULT_OUTPUT_FORMAT = 'pdf'
DEFAULT_ENGINE = 'dot'
# TODO: this is broken?
......
0.6.10-dev0
\ No newline at end of file
1.0.1
\ No newline at end of file
Mar 2021
\ No newline at end of file
May 2021
\ No newline at end of file