Added replicability check #24
base: main

@@ -0,0 +1,61 @@
:orphan:

.. _sphx_glr_sg_execution_times:


Computation times
=================
**02:50.968** total execution time for 9 files **from all galleries**:

.. container::

    .. raw:: html

        <style scoped>
        <link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/5.3.0/css/bootstrap.min.css" rel="stylesheet" />
        <link href="https://cdn.datatables.net/1.13.6/css/dataTables.bootstrap5.min.css" rel="stylesheet" />
        </style>
        <script src="https://code.jquery.com/jquery-3.7.0.js"></script>
        <script src="https://cdn.datatables.net/1.13.6/js/jquery.dataTables.min.js"></script>
        <script src="https://cdn.datatables.net/1.13.6/js/dataTables.bootstrap5.min.js"></script>
        <script type="text/javascript" class="init">
        $(document).ready( function () {
            $('table.sg-datatable').DataTable({order: [[1, 'desc']]});
        } );
        </script>

    .. list-table::
       :header-rows: 1
       :class: table table-striped sg-datatable

       * - Example
         - Time
         - Mem (MB)
       * - :ref:`sphx_glr_auto_examples_plot_replicability_analysis.py` (``../examples/plot_replicability_analysis.py``)
         - 02:50.968
         - 0.0
       * - :ref:`sphx_glr_auto_examples_plot_bike_plotly.py` (``../examples/plot_bike_plotly.py``)
         - 00:00.000
         - 0.0
       * - :ref:`sphx_glr_auto_examples_plot_core_consistency.py` (``../examples/plot_core_consistency.py``)
         - 00:00.000
         - 0.0
       * - :ref:`sphx_glr_auto_examples_plot_labelled_decompositions.py` (``../examples/plot_labelled_decompositions.py``)
         - 00:00.000
         - 0.0
       * - :ref:`sphx_glr_auto_examples_plot_optimisation_diagnostic.py` (``../examples/plot_optimisation_diagnostic.py``)
         - 00:00.000
         - 0.0
       * - :ref:`sphx_glr_auto_examples_plot_outlier_detection.py` (``../examples/plot_outlier_detection.py``)
         - 00:00.000
         - 0.0
       * - :ref:`sphx_glr_auto_examples_plot_selecting_aminoacids_components.py` (``../examples/plot_selecting_aminoacids_components.py``)
         - 00:00.000
         - 0.0
       * - :ref:`sphx_glr_auto_examples_plot_split_half_analysis.py` (``../examples/plot_split_half_analysis.py``)
         - 00:00.000
         - 0.0
       * - :ref:`sphx_glr_auto_examples_plot_working_with_xarray.py` (``../examples/plot_working_with_xarray.py``)
         - 00:00.000
         - 0.0

@@ -0,0 +1,200 @@
"""
.. _replicability_analysis:

Replicability analysis
----------------------
This example describes how the replicability of patterns can be used to guide the component selection process for PARAFAC models :cite:p:`reprref1, reprref2`.

[Review comment] Please add reprref3 (see comment above).

This process evaluates the consistency of the uncovered patterns by fitting the model to different subsets of the data. The rationale is that, if the appropriate number of components is used, the uncovered patterns should be consistent. This can be seen as a stricter extension of `split-half analysis <https://tensorly.org/viz/stable/auto_examples/plot_split_half_analysis.html>`_, where a larger number of smaller subsets of the input are removed.
"""

###############################################################################
# Imports and utilities
# ^^^^^^^^^^^^^^^^^^^^^

import matplotlib.pyplot as plt
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

import sklearn
from sklearn.model_selection import RepeatedKFold

import tlviz

rng = np.random.default_rng(1)

###############################################################################
# To fit PARAFAC models, we need to solve a non-convex optimization problem, possibly with local minima. It is
# therefore useful to fit several models with the same number of components using many different random
# initialisations.


def fit_many_parafac(X, num_components, num_inits=5):
    return [
        parafac(
            X,
            num_components,
            n_iter_max=1000,
            tol=1e-8,
            init="random",
            orthogonalise=True,
            linesearch=True,
            random_state=i,
        )
        for i in range(num_inits)
    ]


###############################################################################
# Creating simulated data
# ^^^^^^^^^^^^^^^^^^^^^^^
#
# We start with simulated data, since we then know exactly how many components the data contains.

cp_tensor, dataset = tlviz.data.simulated_random_cp_tensor((30, 40, 25), 3, noise_level=0.3, labelled=True)

###############################################################################
# The replicability analysis boils down to the following steps:

[Review comment] Consider adding Fig. 1 from reprref3 as an illustration?

# 1. Split the data along a user-chosen mode into a user-chosen number of folds, :math:`N`.
# 2. Create :math:`N` train subsets by removing each fold from the complete
#    dataset.

[Review comment] Maybe just use 'subset' instead of train, to avoid implying that we also use a 'test' set.

# 3. Fit multiple initializations to each train subset and choose the *best* run
#    according to the lowest loss (a total of :math:`N` *best* runs).
# 4. Repeat the above process :math:`M` times (with :math:`M` user-chosen), giving a total of
#    :math:`M N` *best* runs.
# 5. Compare the factorizations in terms of FMS to evaluate the replicability
#    of the uncovered patterns.

[Review comment on lines +58 to +69] Rephrase a bit based on our paper :)

###############################################################################
# Splitting the data
# ^^^^^^^^^^^^^^^^^^
#

splits = 5  # N
repeats = 10  # M

models = {}
split_indices = {}  # Keeps track of which indices are used in each subset

for rank in [2, 3, 4, 5]:

    print(f"{rank} components")

    rskf = RepeatedKFold(n_splits=splits, n_repeats=repeats, random_state=1)

    models[rank] = []
    split_indices[rank] = []

    for train_index, _ in rskf.split(dataset):

        # Sort rows for consistent ordering (not strictly necessary)
        sorted_train_index = sorted(train_index)
        split_indices[rank].append(sorted_train_index)
        train = dataset[sorted_train_index]

        train = train / tl.norm(train)  # Normalize the tensor without leaking info from other folds

        current_models = fit_many_parafac(train.data, rank)
        current_model = tlviz.multimodel_evaluation.get_model_with_lowest_error(current_models, train)
        models[rank].append(current_model)

###############################################################################
# Often, the mode we split within corresponds to different samples. Depending on
# the use-case, it might be deemed reasonable to retain the distribution of some
# properties in each subset. For this purpose,
# `RepeatedStratifiedKFold <https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RepeatedStratifiedKFold.html#sklearn.model_selection.RepeatedStratifiedKFold>`_
# can be used.
#
# Each subset might require certain pre-processing. It is important to pre-process
# each subset in isolation to avoid leaking information from the omitted part of the input.
# For example, in this case we normalize each subset to unit norm independently.
# Also, notice that the ``for train_index, _ in rskf.split(dataset):`` loop is
# embarrassingly parallel, so the folds can be fitted concurrently (see the sketch below).
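
###############################################################################
# The snippet below is only a sketch of these two points; it is not used in the
# rest of this example. ``group_labels`` is a hypothetical categorical property
# of the samples (it stands in for whatever you want to keep balanced across the
# subsets), and the fold indices are simply collected so that each fold could be
# fitted independently, for example in parallel.

from sklearn.model_selection import RepeatedStratifiedKFold

# Hypothetical, balanced binary label for each sample along the first mode
group_labels = np.arange(dataset.shape[0]) % 2

rskf_stratified = RepeatedStratifiedKFold(n_splits=splits, n_repeats=repeats, random_state=1)
stratified_subsets = [sorted(train_index) for train_index, _ in rskf_stratified.split(dataset, group_labels)]


def fit_one_fold(train_index, num_components):
    # Same steps as inside the loop above: slice, normalize, fit, and keep the best run
    train = dataset[train_index]
    train = train / tl.norm(train)
    candidates = fit_many_parafac(train.data, num_components)
    return tlviz.multimodel_evaluation.get_model_with_lowest_error(candidates, train)


# Since the folds are independent, the fits could be run concurrently, for
# example with joblib (assumed to be available; it is installed with scikit-learn):
#
#     from joblib import Parallel, delayed
#     stratified_models = Parallel(n_jobs=-1)(delayed(fit_one_fold)(idx, 3) for idx in stratified_subsets)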

###############################################################################
# Computing and plotting factor similarity
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Here, we skip the mode we split within (``mode=0``).
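#
# For reference, the factor match score (FMS) used below is, roughly speaking
# (this is the generic definition, not something specific to this example), the
# average over components of the product of the cosine similarities between the
# matched factors. Between two :math:`R`-component models, with
# ``consider_weights=False`` and the first mode skipped, this reads
#
# .. math::
#
#    \text{FMS} = \frac{1}{R} \sum_{r=1}^{R}
#    \frac{|\mathbf{b}_r^{\top} \hat{\mathbf{b}}_{\sigma(r)}|}{\|\mathbf{b}_r\| \, \|\hat{\mathbf{b}}_{\sigma(r)}\|}
#    \cdot
#    \frac{|\mathbf{c}_r^{\top} \hat{\mathbf{c}}_{\sigma(r)}|}{\|\mathbf{c}_r\| \, \|\hat{\mathbf{c}}_{\sigma(r)}\|},
#
# where :math:`\mathbf{b}_r, \mathbf{c}_r` and :math:`\hat{\mathbf{b}}_r, \hat{\mathbf{c}}_r`
# are the factors of the two models and :math:`\sigma` is the permutation of components
# that maximises the score. An FMS close to 1 means that the two models uncovered the
# same patterns up to scaling and permutation of the components.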

replicability_stability = {}
for rank in [2, 3, 4, 5]:
    replicability_stability[rank] = []
    for i, cp_i in enumerate(models[rank]):
        for j, cp_j in enumerate(models[rank]):
            if i < j:  # include every pair only once and omit i == j
                fms = tlviz.factor_tools.factor_match_score(cp_i, cp_j, consider_weights=False, skip_mode=0)
                replicability_stability[rank].append(fms)

ranks = sorted(replicability_stability.keys())

[Review comment] Drop the sorting?

data = [np.ravel(replicability_stability[r]) for r in ranks]

fig, ax = plt.subplots()
ax.axhline(0.9, linestyle="--", color="gray")
ax.boxplot(data, positions=ranks)
ax.set_xlabel("Number of components")
ax.set_ylabel("Replicability stability")
plt.show()

###############################################################################
# Here, we can observe that over-estimating the number of components results in
# patterns that are not replicable, as indicated by the low FMS values.

###############################################################################
# Computing and plotting factor similarity (alt.)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# There is an alternative way to estimate the replicability of the uncovered patterns
# that includes the mode we are splitting within. When comparing two factorizations in
# terms of FMS, we can include the previously skipped factor by using only the indices
# present in both subsets.

replicability_stability_alt = {}
for rank in [2, 3, 4, 5]:
    replicability_stability_alt[rank] = []
    for i, cp_i in enumerate(models[rank]):
        for j, cp_j in enumerate(models[rank]):
            if i < j:  # include every pair only once and omit i == j

                weights_i, (A_i, B_i, C_i) = cp_i
                weights_j, (A_j, B_j, C_j) = cp_j

                indices_subset_i = sorted(split_indices[rank][i])
                indices_subset_j = sorted(split_indices[rank][j])

                common_indices = sorted(list(set(indices_subset_i).intersection(set(indices_subset_j))))

[Review comment] Also drop sorting?

                indices2use_i = []
                indices2use_j = []

                for common_idx in common_indices:
                    indices2use_i.append(indices_subset_i.index(common_idx))
                    indices2use_j.append(indices_subset_j.index(common_idx))

                A_i = A_i[indices2use_i, :]
                A_j = A_j[indices2use_j, :]

                fms = tlviz.factor_tools.factor_match_score(
                    (weights_i, (A_i, B_i, C_i)), (weights_j, (A_j, B_j, C_j)), consider_weights=False
                )
                replicability_stability_alt[rank].append(fms)

ranks = sorted(replicability_stability_alt.keys())
data = [np.ravel(replicability_stability_alt[r]) for r in ranks]

fig, ax = plt.subplots()
ax.axhline(0.9, linestyle="--", color="gray")
ax.boxplot(data, positions=ranks)
ax.set_xlabel("Number of components")
ax.set_ylabel("Replicability stability")
plt.show()

###############################################################################
# ``common_indices`` contains the indices (e.g. samples) that are present in both subsets.
# However, since the position of a given index can change from subset to subset (e.g.
# sample no. 3 is not guaranteed to sit at the third position in every subset, because the
# first and second samples might have been omitted), we need to track the indices of the
# original tensor input and map them to their positions within each subset.
#
# Similar results can also be observed here in terms of the replicability of the patterns.
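
###############################################################################
# As a side note, the index bookkeeping above can also be expressed with
# ``np.intersect1d``, which returns the positions of the shared values in each
# input when ``return_indices=True``. The helper below is only a sketch of this
# alternative (the function name is ours and is not used elsewhere in the example).


def common_rows(A_i, A_j, indices_subset_i, indices_subset_j):
    # Positions of the shared original indices within each (sorted, duplicate-free) subset
    _, positions_i, positions_j = np.intersect1d(
        indices_subset_i, indices_subset_j, assume_unique=True, return_indices=True
    )
    return A_i[positions_i, :], A_j[positions_j, :]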

@@ -37,6 +37,7 @@ docs =
    tensorly-sphinx-theme
    plotly>=4.12
    torch
    scikit-learn

test =
    pytest
[Review comment] Please add the following reference as well.