ChatGPT generated docstrings
Not a real issue, more something I found out.
I just threw the function SaQC.flagByGrubbs (without the docstring) into ChatGPT and asked it to generate a docstring. This is the result:
"""
Apply the Grubbs' test for outliers to flag data within a specified window.
Parameters
----------
self : SaQC
The SaQC object.
field : str
The name of the field in the SaQC object to perform the test on.
window : str or int
The window size for the Grubbs' test. If it's a string, it represents a time interval;
if it's an integer, it's the number of data points in each window.
alpha : float, optional
The significance level for the Grubbs' test (default is 0.05).
min_periods : int, optional
The minimum number of data points required within a window for testing (default is 8).
pedantic : bool, optional
If True, perform the test on offset-defined test intervals (default is False).
flag : float, optional
The flag value to assign to outliers (default is BAD).
**kwargs
Additional keyword arguments.
Returns
-------
SaQC
The SaQC object with flagged data.
Notes
-----
If the time series analyzed is regular, the window size can be transformed to a number of periods.
Raises
------
ValueError
If input parameters are not valid.
Example
-------
```python
saqc = SaQC()
saqc.flagByGrubbs("temperature", 24, alpha=0.01, min_periods=10)
```
In the example, the "temperature" field is analyzed using a window of 24 data points, with a significance level of 0.01, and a minimum of 10 data points in each window. Outliers are flagged with the value BAD.
"""
Not perfect, but certainly not bad, I'd say.
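For context, the statistical test the docstring describes is the classic two-sided Grubbs' test. The sketch below is a generic, self-contained illustration of that test on a single batch of values, not SaQC's actual implementation (which additionally handles windowing, `min_periods`, and flagging); the function name `grubbs_outlier` is made up for this example.

```python
import numpy as np
from scipy import stats

def grubbs_outlier(values, alpha=0.05):
    """Single two-sided Grubbs' test.

    Returns the index of the most extreme value if it is an outlier
    at significance level ``alpha``, else None.
    """
    x = np.asarray(values, dtype=float)
    n = x.size
    if n < 3:
        return None  # the test is undefined for fewer than 3 points
    mean, sd = x.mean(), x.std(ddof=1)
    if sd == 0:
        return None  # all values identical, nothing to flag
    deviations = np.abs(x - mean)
    idx = int(np.argmax(deviations))
    g = deviations[idx] / sd  # Grubbs' statistic: max |x_i - mean| / s
    # Critical value from the t-distribution (two-sided test)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return idx if g > g_crit else None

data = [7.1, 7.3, 7.2, 7.0, 7.4, 12.9, 7.2, 7.1]
print(grubbs_outlier(data, alpha=0.05))  # prints 5, the index of the 12.9 reading
```

A windowed variant, as in flagByGrubbs, would apply this test repeatedly to rolling slices of the series and skip slices with fewer than `min_periods` points.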