Conversation

@alejoe91
Member

This PR includes a major refactor of the metrics concept.

It defines a BaseMetric, which carries the core metadata of an individual metric: dtypes, column names, extension dependencies, and a compute function.
A BaseMetricExtension holds a collection of BaseMetrics and handles most of the machinery (a minimal sketch of the idea follows this list), including:

  • setting params
  • checking dependencies and removing metrics whose dependencies are not available
  • computing metrics
  • deleting, merging, splitting metrics
  • preparing data that can be shared across metrics (e.g., pca for pca metrics, peaks info and templates for template metrics)
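
As a minimal, self-contained sketch of the idea (names are modeled on this description, not the actual spikeinterface implementation): each metric declares its metadata plus a compute function, and the extension filters metrics by their extension dependencies before computing them.

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class MetricSketch:
    name: str
    metric_columns: dict                                # output column names -> dtypes
    depends_on: tuple = ()                              # extensions this metric needs, e.g. "principal_components"
    metric_params: dict = field(default_factory=dict)   # default parameters, overridable by the user
    compute: Optional[Callable] = None                  # function that produces the metric values

def select_computable(metrics, available_extensions):
    # drop metrics whose extension dependencies have not been computed
    return [m for m in metrics if all(dep in available_extensions for dep in m.depends_on)]

firing_rate = MetricSketch(
    name="firing_rate",
    metric_columns={"firing_rate": float},
    compute=lambda shared: shared["num_spikes"] / shared["duration_s"],
)
nn_isolation = MetricSketch(
    name="nn_isolation",
    metric_columns={"nn_isolation": float},
    depends_on=("principal_components",),
)
print([m.name for m in select_computable([firing_rate, nn_isolation], {"templates"})])
# -> ['firing_rate'] (nn_isolation is dropped because PCA is not available)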

The template_metrics, quality_metrics, and a new spiketrain_metrics extension now live in the metrics module. The latter only includes num_spikes and firing_rate, which are also exposed as quality metrics.
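
As a rough usage sketch based on that description (extension and metric names are taken from this PR; exact signatures and required extensions may differ in the final version):

import spikeinterface.core as si
import spikeinterface.metrics  # the new module introduced in this PR (assumed import path)

# toy data just to make the snippet self-contained
recording, sorting = si.generate_ground_truth_recording(durations=[10.0], num_units=5, seed=0)
analyzer = si.create_sorting_analyzer(sorting, recording)

# the new spiketrain_metrics extension: num_spikes and firing_rate
analyzer.compute("spiketrain_metrics")
st_metrics = analyzer.get_extension("spiketrain_metrics").get_data()

# the same two metrics should also remain available through quality_metrics
analyzer.compute("quality_metrics", metric_names=["num_spikes", "firing_rate"])
qm = analyzer.get_extension("quality_metrics").get_data()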

Still finalizing tests, but this should be 90% done

@alejoe91 added the qualitymetrics label (Related to qualitymetrics module) on Oct 22, 2025
@chrishalcrow
Member

This looks great - love the prepare_data idea! This refactor will make it less awkward to develop new metrics, and the pca file is much neater now - nice.

I think this is a good chance to remove the compute_{metric_name} stuff from the docs (modules/qualitymetrics) and replace it with analyzer.compute("quality_metrics", metric_names={metric_name}) as our recommended method. It's more awkward, but much better for provenance etc.
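
Concretely, the change would look something like this (sketch only; firing_rate is just an example, and this assumes an existing SortingAnalyzer):

# old docs style: call the per-metric convenience function directly
from spikeinterface.metrics import compute_firing_rates  # import path as suggested later in this review
firing_rates = compute_firing_rates(analyzer)

# recommended style: go through the analyzer so parameters and provenance are stored with it
analyzer.compute("quality_metrics", metric_names=["firing_rate"])
firing_rates = analyzer.get_extension("quality_metrics").get_data()["firing_rate"]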

I'd vote to take the chance to include multi-channel template metrics by default: they're very helpful.

@alejoe91
Member Author

I'd vote to take the chance to include multi-channel template metrics by default: they're very helpful.

I agree! Maybe we can make it the default for number of channels > 64?

@alejoe91 marked this pull request as ready for review November 25, 2025 09:30
.. code-block:: python

    import spikeinterface.qualitymetrics as sqm
    from spikeinterface.metrics.spiketrain import compute_firing_rates
Member

Easier, no?

Suggested change
from spikeinterface.metrics.spiketrain import compute_firing_rates
from spikeinterface.metrics import compute_firing_rates

metric_params = {} # to be defined in subclass
metric_columns = {} # column names and their dtypes of the dataframe
needs_recording = False # to be defined in subclass
needs_tmp_data = False # to be defined in subclass
Member

I think this deserves more comments to explain the concept.

@@ -0,0 +1,8 @@
import pytest
Member

Why do we have this file in the sources?
Wouldn't it be better in the tests subfolder?

Member Author

It needs to be at the module level, so that the config is shared across submodules.

@@ -0,0 +1,84 @@
import numpy as np
from spikeinterface.core.analyzer_extension_core import BaseMetric

Member

How do we handle previous users who do
analyzer.get_extension("quality_metrics").get_data()["firing_rates"]? Is it duplicated?

Member Author

I think that it should work!
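
To make the question concrete, the two access paths after this refactor would be roughly (illustrative; assumes an analyzer with both extensions computed, and per the PR description the metric is exposed in both places):

# pre-refactor style, expected to keep working according to the reply above
fr_qm = analyzer.get_extension("quality_metrics").get_data()["firing_rate"]

# new home of the metric in this PR
fr_st = analyzer.get_extension("spiketrain_metrics").get_data()["firing_rate"]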


    def _compute_metrics(
        self,
        sorting_analyzer: SortingAnalyzer,
Member

An extension is aware of the analyzer, no?

Member Author

This is because the same function is also used for merges/splits, where you provide a new analyzer!

Member

OK sure.
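
For readers following the thread, a toy sketch of the pattern being described (not the actual spikeinterface code): because the analyzer is an argument rather than taken from self, the same method serves both the normal compute path and the merge/split path, where a new analyzer is passed in.

class MetricExtensionSketch:
    def __init__(self, sorting_analyzer):
        self.sorting_analyzer = sorting_analyzer

    def _compute_metrics(self, sorting_analyzer, unit_ids=None):
        # operates on whichever analyzer it is given, not necessarily self.sorting_analyzer
        if unit_ids is None:
            unit_ids = sorting_analyzer["unit_ids"]
        return {u: len(sorting_analyzer["spike_trains"][u]) for u in unit_ids}

    def run(self):
        # normal case: compute on the extension's own analyzer
        return self._compute_metrics(self.sorting_analyzer)

    def after_merge(self, merged_analyzer, new_unit_ids):
        # merge/split case: recompute only the new units on the new analyzer
        return self._compute_metrics(merged_analyzer, unit_ids=new_unit_ids)

# tiny usage example with a dict standing in for an analyzer
toy = {"unit_ids": [0, 1], "spike_trains": {0: [0.1, 0.2], 1: [0.3]}}
print(MetricExtensionSketch(toy).run())  # {0: 2, 1: 1}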

Comment on lines +143 to +144
if not include_multi_channel_metrics and num_channels >= MIN_CHANNELS_FOR_MULTI_CHANNEL_METRICS:
    include_multi_channel_metrics = True
Member

Not sure I like the hidden magic choices.

Member Author

That was Chris's idea :) Then we should just make it True by default. These are really useful.

Member

They are the best template metrics. I would vote True by default -- but am a little worried that tetrode people will get loads of warnings about nan results. Maybe we can discuss how to deal with these better?

        )
        return params

    def _prepare_data(self, sorting_analyzer, unit_ids=None):
Member

This also deserves more explanation, including how this cached data is propagated to the other classes.
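
As a rough illustration of the prepare_data idea from the PR description (hypothetical names and signatures): shared, expensive inputs such as PCA projections or peak/template info are prepared once per extension and handed to every metric that needs them, instead of being recomputed metric by metric.

def prepare_data_sketch(sorting_analyzer, unit_ids):
    # e.g. PCA projections for pca metrics, peaks and templates for template metrics
    return {"pca_projections": ..., "peaks": ...}

def compute_all_metrics_sketch(metrics, sorting_analyzer, unit_ids):
    # prepare the shared data once, and only if at least one metric needs it
    tmp_data = {}
    if any(getattr(m, "needs_tmp_data", False) for m in metrics):
        tmp_data = prepare_data_sketch(sorting_analyzer, unit_ids)
    results = {}
    for m in metrics:
        results[m.name] = m.compute(sorting_analyzer, unit_ids, tmp_data=tmp_data)
    return results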

@samuelgarcia
Member

This is mostly OK for me.
Thanks, comrade, for this huge effort.
I am pretty sure we will face some backward-compatibility issues with this; let's fix them when they arrive.
