Goals

Rdtools will provide a set of tools for quantifying the degradation of photovoltaic systems and modules. Our goal for Rdtools is to enable consistent and transparent degradation calculations. To this end, consensus-building is an integral part of the development process. It is also important that we develop examples in parallel with the tools. We envision Rdtools providing a suite of tools useful both to advanced analysts developing their own analyses and to users of the out-of-the-box workflows shown in the examples.

Collaboration and workflow

Please develop in a separate branch or forked repository, even for small changes. When your changes are ready for consideration, please open a pull request into the development branch following these guidelines:

  1. Initiate a pull request and request review from at least one person outside your organization and one person from NREL. (If you are outside NREL, this can be a single reviewer). Reviews may not be necessary for minor bug fixes. Before the review stage, pull requests are also a great way to initiate discussion and coordinate help from other collaborators.
  2. If your contribution represents an additional capability, please consider including an example notebook in docs.
  3. Consistent with the goals of Rdtools, it may be useful to build consensus and help others on the team understand how your contributions fit into the analysis workflow.

Releases

When the developers determine that the development branch is ready to be considered for a release, a pull request should be opened to merge development into master. After review and testing, if the pull request is approved and merged, we will issue a release numbered according to semantic versioning. The new version will also be uploaded to PyPI.

General calculation flow and input/output standards

Steps 2–5 each correspond to a module within Rdtools; not all of the modules listed below are currently present in the master branch.

  1. Raw data is processed by the analyst into a form suitable for the Rdtools modules. The data must not only be brought into a suitable format; it must also be screened for sensible values (we may add modules for this in the future). For example, check that timestamps, power, and temperatures fall within reasonable ranges; many data loggers insert nonsensical fill values such as -9999 when they go offline. Irregularities in the time series should also be addressed in this step: if the acquisition frequency switched from 15-minute to 1-minute logging, for example, later steps can be affected. A minimal illustrative sketch of this kind of screening appears after this list.

  2. Normalization (normalization.py): This step normalizes the observed performance of the system based on a model (for example, the PVWatts model). A simplified sketch appears after this list.

    1. Input:
      1. Pandas time series of raw energy. Time series should have a well-defined frequency. Note that the calculations are based on energy production as opposed to instantaneous power, but handling of power vs. energy is currently an area of active development.
      2. Dict of keywords for relevant model.
    2. Output:
      1. Pandas time series of normalized energy. (Note that associated insolation time series may be added in future versions.)
  3. Filter (filter.py): This step excludes data points from the normalized energy time series based on defensible criteria. It is expected that filtering development will not be merged into master until consensus is reached about best practices. Sketches of the proposed filters appear after this list.

    1. Proposed filters:
      1. Inverter clipping:
        1. Input: Un-normalized AC power/energy, irradiance?, DC/AC ratio?
        2. Method: Dirk proposed a good solution to remove the highest values in a raw power histogram. One implementation excludes data that is greater than 99% of the 95th percentile in power production.
      2. Outages and outliers: eliminate points with unphysical or suspect normalized yield
        1. Input: normalized energy/power
        2. Method: compute the 3-month rolling median. Exclude points that deviate from the rolling median by more than +/- 30%.
      3. Clearsky: remove cloudy points or days
        1. Input: measured irradiance, modeled irradiance. Measurement and model must use same mounting configuration.
        2. Method: detect_clearsky in PVLIB.
      4. Low irradiance cut-off:
        1. Input: irradiance data
        2. Method: exclude points where irradiance is <200 W/m2. The threshold of 200 W/m2 excludes inverter start-up effects without excluding too much winter data.
  4. Aggregate (aggregation.py): This step calculates insolation-weighted averages of the normalized and filtered energy data at a specified frequency. A sketch appears after this list.

    1. Input:
      1. Pandas time series of normalized energy (need not have well-defined frequency)
      2. Pandas time series of irradiance or insolation for weighting
      3. Aggregation frequency
    2. Output:
      1. Pandas time series
  5. Degradation calculation (degradation.py): In this step the degradation rate of the modules or system is calculated. A sketch of one possible approach appears after this list.

    1. Input:
      1. Pandas time series of normalized energy (need not have well-defined frequency)
    2. Output:
      1. Dict including the degradation rate in %/year and the 68.2% confidence interval. Other information is returned in the dict depending on the calculation method.
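
To illustrate step 1, the sketch below shows the kind of screening and time-series regularization an analyst might perform before using the Rdtools modules. It is not part of Rdtools; the column names, limits, and 15-minute target frequency are example assumptions.

```python
import pandas as pd

def prepare_raw_data(df, freq='15min'):
    """Illustrative pre-processing (not an Rdtools function): screen a raw
    DataFrame indexed by timestamp for sensible values and regularize its
    time index. Column names and limits are example assumptions."""
    df = df.copy()

    # Replace data-logger fill values (e.g. -9999 inserted during outages) with NaN
    df = df.replace(-9999, float('nan'))

    # Keep only rows with physically sensible readings
    sensible = (
        (df['power_w'] >= 0)
        & df['poa_irradiance'].between(0, 1500)
        & df['ambient_temp_c'].between(-40, 60)
    )
    df = df[sensible]

    # Resample to a single acquisition frequency so that, for example, a switch
    # from 15-minute to 1-minute logging does not distort later steps
    return df.resample(freq).mean()
```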
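
To illustrate step 2, a simplified PVWatts-style normalization could look like the following. The function names, the temperature-coefficient form, and the default parameters are illustrative assumptions, not the normalization.py API.

```python
import pandas as pd

def pvwatts_expected_energy(poa_global, cell_temp, pdc0,
                            gamma_pdc=-0.005, g_ref=1000.0, t_ref=25.0):
    """Expected energy per interval from a simplified PVWatts-style model.

    poa_global : pd.Series of plane-of-array irradiance [W/m^2]
    cell_temp  : pd.Series of cell temperature [C]
    pdc0       : nameplate DC power at reference conditions [W]
    gamma_pdc  : power temperature coefficient [1/C]
    """
    expected_power = pdc0 * (poa_global / g_ref) * (1 + gamma_pdc * (cell_temp - t_ref))

    # Convert power to energy using the interval length in hours
    hours = poa_global.index.to_series().diff().dt.total_seconds() / 3600.0
    return expected_power * hours

def normalize(measured_energy, expected_energy):
    """Normalized energy = measured energy / modeled expected energy."""
    return measured_energy / expected_energy
```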
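
To illustrate the filters proposed in step 3, here are minimal pandas sketches of the clipping, outage/outlier, and low-irradiance criteria described above. The thresholds come from the proposals in this list, but the function names and implementations are assumptions; the clearsky filter (detect_clearsky in PVLIB) is not shown here.

```python
import pandas as pd

def clipping_filter(ac_power):
    """Proposed clipping filter: exclude points greater than 99% of the
    95th percentile of (un-normalized) AC power."""
    threshold = 0.99 * ac_power.quantile(0.95)
    return ac_power[ac_power <= threshold]

def outlier_filter(normalized_energy):
    """Proposed outage/outlier filter: exclude points deviating more than
    30% from a 3-month rolling median (requires a datetime index)."""
    median = normalized_energy.rolling('90D').median()
    ratio = normalized_energy / median
    return normalized_energy[(ratio > 0.7) & (ratio < 1.3)]

def low_irradiance_filter(normalized_energy, poa_irradiance, cutoff=200.0):
    """Proposed low-irradiance filter: keep points with irradiance of at
    least 200 W/m^2 to avoid inverter start-up effects."""
    return normalized_energy[poa_irradiance >= cutoff]
```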
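
To illustrate step 4, an insolation-weighted average at a chosen frequency can be computed as in this sketch; the function name and use of pandas resampling are assumptions, not the aggregation.py API.

```python
import pandas as pd

def aggregate_insolation_weighted(normalized_energy, insolation, freq='D'):
    """Insolation-weighted average of normalized energy at frequency `freq`.

    Weighting by insolation means high-insolation periods dominate each
    aggregate, reducing the influence of noisy low-light data.
    """
    weighted_sum = (normalized_energy * insolation).resample(freq).sum()
    weight_sum = insolation.resample(freq).sum()
    return weighted_sum / weight_sum
```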
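
To illustrate step 5, one possible approach is an ordinary least-squares fit of aggregated normalized energy against time, with the slope reported in %/year and an approximate 68.2% (one-sigma) confidence interval. This sketch uses scipy and is an illustrative stand-in for degradation.py, whose methods and return values may differ.

```python
import pandas as pd
from scipy import stats

def degradation_ols(normalized_energy):
    """Fit normalized energy vs. time in years and return a dict with the
    degradation rate (%/year) and an approximate 68.2% confidence interval."""
    series = normalized_energy.dropna()
    years = (series.index - series.index[0]).days / 365.25
    fit = stats.linregress(years, series.values)

    # Express the slope relative to the fitted value at t = 0, in percent/year
    rd = 100.0 * fit.slope / fit.intercept
    rd_stderr = 100.0 * fit.stderr / fit.intercept  # one-sigma, approximate

    return {
        'Rd_pct_per_year': rd,
        'Rd_confidence_interval_68': (rd - rd_stderr, rd + rd_stderr),
        'slope': fit.slope,
        'intercept': fit.intercept,
    }
```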

Development goals

Note that Rdtools is in an early stage and is subject to change, including in the input/output of functions, as we work toward an initial minimum viable product. With the exception of #1, these goals aren’t necessarily in chronological order; progress may happen in parallel.

  1. The short-term development goal is to produce a functional version of the workflow above, including at least normalization.py, aggregation.py, and degradation.py. Reaching consensus on the filtering step may take more time.
  2. Reach consensus and upload examples including the filtering step.
  3. Clearsky workflow: There has been much discussion about using clearsky filtering and modeling. Let’s start by sketching out how a clearsky workflow can be implemented in, or parallel to, the workflow outlined above. Then we can construct the tools and a working example. (A rough illustrative sketch follows the proposed workflow below.)
    1. Proposed workflow:
      1. Normalize with clearsky
      2. Filter for clearsky points
      3. Aggregate
      4. Degradation calculation
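
For the proposed clearsky workflow, a rough sketch is shown below. It simply chains the hypothetical helpers from the sketches in the previous section, substituting modeled clearsky irradiance for measured irradiance; every function here is an assumption except pvlib.clearsky.detect_clearsky, whose argument list varies with the pvlib version.

```python
import pvlib

def clearsky_degradation(measured_energy, measured_poa, clearsky_poa, cell_temp, pdc0):
    """Hypothetical chaining of the illustrative helpers defined earlier."""
    # 1. Normalize against expected energy computed from modeled clearsky irradiance
    expected = pvwatts_expected_energy(clearsky_poa, cell_temp, pdc0)
    normalized = normalize(measured_energy, expected)

    # 2. Keep only clearsky periods (exact arguments depend on the pvlib version)
    clear = pvlib.clearsky.detect_clearsky(measured_poa, clearsky_poa,
                                           measured_poa.index, 10)
    normalized = normalized[clear]

    # 3. Aggregate with insolation weighting (irradiance used as a proxy here),
    #    then 4. compute the degradation rate
    aggregated = aggregate_insolation_weighted(normalized, clearsky_poa[clear], freq='D')
    return degradation_ols(aggregated)
```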