The following example shows how to define a testing experiment for a single time-independent forecast evaluated against a catalog.
.. currentmodule:: floatcsep
.. admonition:: TL; DR

   In a terminal, navigate to ``floatcsep/tutorials/case_a`` and type:

   .. code-block:: console

      $ floatcsep run config.yml

   After the calculation is complete, the results will be summarized in ``results/report.md``. The experiment region, catalog, forecasts and results can be viewed in the Experiment Dashboard with:

   .. code-block:: console

      $ floatcsep view config.yml
The source code can be found in the ``tutorials/case_a`` folder or on GitHub. The directory structure of the experiment is:

.. code-block:: none

   case_a
   ├── region.txt
   ├── catalog.csep
   ├── best_model.dat
   └── config.yml
The testing region
------------------

``region.txt`` consists of a grid with two 1º x 1º bins, defined by their bottom-left nodes (see :doc:`pycsep:concepts/regions` in pyCSEP). The grid spacing is obtained automatically. The nodes are:

.. literalinclude:: ../../tutorials/case_a/region.txt
   :caption: tutorials/case_a/region.txt
The testing catalog
-------------------

``catalog.csep`` contains only one event and is formatted in the :meth:`~pycsep.utils.readers.csep_ascii` style (see :doc:`pycsep:concepts/catalogs` in pyCSEP). Catalog formats are detected automatically.

.. literalinclude:: ../../tutorials/case_a/catalog.csep
   :caption: tutorials/case_a/catalog.csep
The forecast
------------

``best_model.dat``, the forecast to be evaluated, is written in the ``.dat`` format (see :doc:`pycsep:concepts/forecasts` in pyCSEP). Forecast formats are detected automatically (see :mod:`floatcsep.utils.file_io.GriddedForecastParsers`).

.. literalinclude:: ../../tutorials/case_a/best_model.dat
   :caption: tutorials/case_a/best_model.dat
The configuration
-----------------

The experiment is defined by time-, region-, model- and test-configurations, along with a catalog. In this example, they are all written together in the ``config.yml`` file.

.. warning::

   Every file path (e.g., of a catalog) specified in the ``config.yml`` file should be relative to the directory containing the configuration file.
The time configuration is set in the ``time_config`` inset. The simplest definition is to set only the start and end dates of the experiment. These are always UTC date-times in ISO 8601 format (``%Y-%m-%dT%H:%M:%S.%f``):

.. literalinclude:: ../../tutorials/case_a/config.yml
   :caption: tutorials/case_a/config.yml
   :language: yaml
   :lines: 3-5

.. note::

   If the time window is bounded by midnights, ``start_date`` and ``end_date`` can be given in the format ``%Y-%m-%d``.

The results of the experiment run will be associated with this time window, whose identifier is built from its bounds:

::

   2020-01-01_2021-01-01
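As a sketch (not taken verbatim from the actual ``config.yml``), a minimal ``time_config`` inset using the date-only shorthand for the window above could look like:

.. code-block:: yaml

   time_config:
     start_date: 2020-01-01
     end_date: 2021-01-01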
The ``region_config`` inset defines the region (either a file path or a :mod:`pycsep` function, such as :obj:`~csep.core.regions.italy_csep_region`; check the available regions in :mod:`csep.core.regions`), the depth limits and the magnitude discretization.

.. literalinclude:: ../../tutorials/case_a/config.yml
   :caption: tutorials/case_a/config.yml
   :language: yaml
   :lines: 7-13
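For illustration only, such an inset could be sketched as below; the key names and values here are assumptions, not copied from the actual ``config.yml``:

.. code-block:: yaml

   region_config:
     region: region.txt    # or a pyCSEP region function, e.g. italy_csep_region
     mag_min: 4.0          # magnitude discretization (assumed values)
     mag_max: 8.0
     mag_bin: 0.1
     depth_min: 0          # depth limits in km (assumed values)
     depth_max: 70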
The catalog is defined in the ``catalog`` inset. This should only reference a catalog file or a catalog query function (see the catalog loaders in :mod:`csep`). floatCSEP will automatically filter the catalog to the time, spatial and magnitude frames of the experiment:

.. literalinclude:: ../../tutorials/case_a/config.yml
   :caption: tutorials/case_a/config.yml
   :language: yaml
   :lines: 15-15
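As a sketch, the inset simply points to the catalog file; a query function (the name below is an assumption based on pyCSEP's catalog loaders) could be referenced instead:

.. code-block:: yaml

   catalog: catalog.csep
   # or, hypothetically, a web-service query function:
   # catalog: query_comcat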
The model configuration is set in the ``models`` inset, with a list of model names that specify their file paths (and other attributes). Here, we just set the path as ``best_model.dat``, whose format is automatically detected (see Working with conventional gridded forecasts in pyCSEP).

.. literalinclude:: ../../tutorials/case_a/config.yml
   :caption: tutorials/case_a/config.yml
   :language: yaml
   :lines: 17-19

.. note::

   A time-independent forecast model has default units of ``[eq/year]`` per cell. A forecast defined for a different number of years can be specified with the ``forecast_unit: {years}`` attribute.
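A sketch of such an inset (the model name and exact nesting are illustrative, not copied from the actual ``config.yml``):

.. code-block:: yaml

   models:
     - Best Model:
         path: best_model.dat
         forecast_unit: 1   # optional; number of years the forecast rates span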
The experiment's evaluations are defined in the ``tests`` inset. It should be a list of test names, each referencing an evaluation function and a plotting function. These can come either from pyCSEP (see :doc:`pycsep:concepts/evaluations`) or be defined manually. Here, we use the Poisson consistency N-test: its function is :func:`poisson_evaluations.number_test <csep.core.poisson_evaluations.number_test>`, with the plotting function :func:`plot_poisson_consistency_test <csep.utils.plots.plot_poisson_consistency_test>`.

.. literalinclude:: ../../tutorials/case_a/config.yml
   :caption: tutorials/case_a/config.yml
   :language: yaml
   :lines: 21-24

.. important::

   See here all available Evaluation Functions, along with their corresponding Plotting Functions.
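For illustration, a ``tests`` inset could be sketched as below; the key names (``func``, ``plot_func``) are assumptions, not copied from the actual ``config.yml``:

.. code-block:: yaml

   tests:
     - Poisson N-test:
         func: poisson_evaluations.number_test
         plot_func: plot_poisson_consistency_test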
.. note::

   For further details on how to configure an experiment, models and evaluations, see:
Running the experiment
----------------------

The experiment can be run by simply navigating to the ``tutorials/case_a`` folder in a terminal and typing:

.. code-block:: console

   $ floatcsep run config.yml

This will automatically set all the calculation paths (testing catalogs, evaluation results, figures) and create a summarized report in ``results/report.md``.

.. note::

   The command ``floatcsep run {config_file}`` can be called from any working directory, as long as the specified file paths (e.g., region, models) are relative to the ``config.yml`` file.
The :obj:`~floatcsep.cmd.main.run` command creates the result path tree for each time window analyzed:

- The testing catalog of the window is stored in ``results/{window}/catalog`` in ``json`` format. This is a subset of the global testing catalog.
- Human-readable results are found in ``results/{window}/evaluations``.
- Figures of the catalog, forecasts and evaluation results are found in ``results/{window}/figures``.
- The complete results are summarized in ``results/report.md``.
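For the single time window of this example, the result tree could be sketched as follows (the window identifier is built from the experiment's time bounds; the exact contents may differ):

.. code-block:: none

   results
   ├── 2020-01-01_2021-01-01
   │   ├── catalog
   │   ├── evaluations
   │   └── figures
   └── report.md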
This tutorial uses floatCSEP as the orchestrator, but relies on pyCSEP for functions and objects.
Classes and functions used in this tutorial:

- Catalog: :py:class:`csep.core.catalogs.CSEPCatalog`
- Forecast class: :py:class:`csep.core.forecasts.GriddedForecast`
- Test functions: :py:func:`csep.core.poisson_evaluations.number_test`
- Result plotting functions: :py:func:`csep.utils.plots.plot_poisson_consistency_test`
Where to learn more about pyCSEP:
- Catalogs: :doc:`pycsep:concepts/catalogs`
- Regions: :doc:`pycsep:concepts/regions`
- Forecasts: :doc:`pycsep:concepts/forecasts`
- Evaluations: :doc:`pycsep:concepts/evaluations`