Models and Criteria#
Pre-defined neural network model architectures
This package contains everything related to implementing data encoders and the loss functions
applied to the feature spaces. cebra.models.criterions contains the implementations of
InfoNCE and other contrastive losses. All additions regarding how data is encoded and losses are
computed should be added to this package.
This module is a registry and currently contains the options [‘offset10-model’, ‘offset10-model-mse’, ‘offset5-model’, ‘offset1-model-mse’, ‘offset1-model’, ‘offset1-model-v2’, ‘offset1-model-v3’, ‘offset1-model-v4’, ‘offset1-model-v5’, ‘offset40-model-4x-subsample’].
To retrieve a list of options, call:
>>> print(cebra.models.get_options())
['offset10-model', 'offset10-model-mse', 'offset5-model', ...]
To obtain an initialized instance, call cebra.models.init,
defined in cebra.registry.add_helper_functions().
The first parameter to provide is the model name to use, which is one of the available options presented above. Then the required positional arguments specific to the chosen model are provided, if needed.
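For example, one of the options above can be initialized as follows. This is a sketch: the keyword arguments shown (num_neurons, num_units, num_output) are the constructor arguments of the pre-defined offset models and may differ for other options.
import cebra.models

# Initialize a registered architecture by name. num_neurons, num_units and
# num_output are the constructor arguments of the pre-defined offset models.
model = cebra.models.init(
    "offset10-model",
    num_neurons=100,   # input dimension, e.g. number of recorded neurons
    num_units=32,      # width of the hidden layers
    num_output=8,      # dimension of the embedding space
)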
You can register additional options by defining and registering classes under a name. To do that, add a decorator on top of the class definition, e.g. @cebra.models.register("my-cebra-models"), as shown in the sketch below.
Later, initialize your class just like the pre-defined options, using cebra.models.init with the model name set to my-cebra-models.
Note that these customized options will not be automatically added to this docstring.
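The sketch below registers a small custom encoder under the name my-cebra-models and initializes it. The base classes and constructor pattern follow the pre-defined offset models documented below; the layer choices and argument names are illustrative, not a prescribed architecture.
import torch.nn as nn

import cebra.data
import cebra.models
from cebra.models.model import _OffsetModel, ConvolutionalModelMixin


@cebra.models.register("my-cebra-models")
class MyModel(_OffsetModel, ConvolutionalModelMixin):
    """Illustrative custom encoder with a 45 sample receptive field."""

    def __init__(self, num_neurons, num_units, num_output, normalize=True):
        super().__init__(
            nn.Conv1d(num_neurons, num_units, 2),
            nn.GELU(),
            nn.Conv1d(num_units, num_units, 40),
            nn.GELU(),
            nn.Conv1d(num_units, num_output, 5),
            num_input=num_neurons,
            num_output=num_output,
            normalize=normalize,
        )

    def get_offset(self):
        # Offset(22, 23): 22 samples to the left and 23 to the right of the
        # reference index, matching the 45 sample receptive field above.
        return cebra.data.Offset(22, 23)


model = cebra.models.init(
    "my-cebra-models", num_neurons=100, num_units=32, num_output=8
)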
Registration and initialization#
- cebra.models.init(name, *args, **kwargs)#
Initialize an instance from the registry with the specified arguments.
- Parameters:
name (str) – The name to identify the registered class.
args – Positional arguments to pass to the constructor while instantiating the selected type.
kwargs – Keyword arguments to pass to the constructor while instantiating the selected type.
- Returns:
An instance of the specified class.
- cebra.models.get_options(pattern=None, limit=None, expand_parametrized=True)#
Retrieve a list of registered names, optionally filtered.
- Parameters:
pattern (Optional[str]) – A glob-like pattern (supporting the wildcards * and ?) to filter the options. Optional argument, defaults to no filtering.
limit (Optional[int]) – An optional maximum number of options to return, in the order in which they are found with the given query.
expand_parametrized (bool) – Whether to list classes registered with the parametrize decorator in the options.
- Return type: List[str]
- Returns:
All matching names. If a limit was specified, the maximum length is given by the limit.
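For example, the pattern can be used to narrow down the returned names. The output shown in the comment is illustrative; the exact names and their order depend on the installed version.
import cebra.models

# Filter the registry with a glob-like pattern and cap the number of results.
print(cebra.models.get_options(pattern="offset1-model*", limit=3))
# e.g. ['offset1-model-mse', 'offset1-model', 'offset1-model-v2']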
- cebra.models.register(name, base=None, override=False, deprecated=False)#
Decorator to add a new class type to the registry.
- cebra.models.parametrize(pattern, *, kwargs=[], **all_kwargs)#
Decorator to add parametrizations of a new class to the registry.
The specified keyword arguments will be passed as default arguments to the constructor of the class.
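A hypothetical sketch of how the decorator could be applied, assuming the format fields in the name pattern are filled from the given keyword arguments and each resulting name is registered with those values as constructor defaults. The class and values below are made up for illustration only.
import cebra.models
from cebra.models.model import Offset10Model

# Hypothetical: registers "my-model-16", "my-model-32" and "my-model-64", each
# passing the corresponding num_units value as a default constructor argument.
@cebra.models.parametrize("my-model-{num_units}", num_units=(16, 32, 64))
class MyParametrizedModel(Offset10Model):
    pass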
Models#
Neural network models and criterions for training CEBRA models.
- class cebra.models.model.Model(*, num_input, num_output, offset=None)#
Bases: Module
Base model for CEBRA experiments.
The model is a pytorch nn.Module. Features can be computed by calling the forward() or __call__ method. This class should not be instantiated directly; instead, use it as the base class for CEBRA models.
- Parameters:
num_input (int) – The number of input dimensions. The tensor passed to the forward method will have shape (batch, num_input, in_time).
num_output (int) – The number of output dimensions. The tensor returned by the forward method will have shape (batch, num_output, out_time).
offset (Optional[Offset]) – A specification of the offset to the left and right of the signal due to the network’s receptive field. The offset specifies the relation between the input and output times, in_time - out_time = len(offset).
- num_input#
The input dimensionality (of the input signal). When calling forward, this is the dimensionality expected for the input argument. In typical applications of CEBRA, the input dimension corresponds to the number of neurons for neural data analysis, the number of keypoints for kinematic analysis, or the dimension of a feature space if preprocessing happened before feeding the data into the model.
- num_output#
The output dimensionality (of the embedding space). This is the feature dimension of the value returned by forward. Note that to learn meaningful embeddings, the output dimension should be at least 3 for models using normalization, and at least 2 without normalization. The output dimensionality is typically smaller than num_input, but this is not enforced.
- abstract get_offset()#
Offset between input and output sequence caused by the receptive field.
The offset specifies the relation between the length of the input and output time sequences. The output sequence is len(offset) steps shorter than the input sequence. For input sequences of shape (*, *, len(offset)), the model should return an output sequence that drops the last dimension (which would have length 1).
- Returns:
The offset of the network. See cebra.data.datatypes.Offset for full documentation.
- Return type: Offset
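A sketch for inspecting the offset of an initialized model and relating input and output lengths; the constructor arguments follow the pre-defined offset models and are an assumption here.
import torch
import cebra.models

model = cebra.models.init("offset10-model", num_neurons=100, num_units=32, num_output=8)
offset = model.get_offset()
print(len(offset))                 # length of the receptive field

x = torch.randn(8, 100, 100)       # (batch, num_input, in_time)
y = model(x)                       # (batch, num_output, out_time)
print(x.shape[-1] - y.shape[-1])   # shrinkage caused by the receptive field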
- class cebra.models.model.ConvolutionalModelMixin#
Bases: object
Mixin for models that support operating on a time series.
The input for convolutional models should have shape (batch, dim, time), and the convolution will be applied across the last dimension.
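As a sketch, the mixin can be used to check whether a given model accepts a full time series rather than only fixed-length windows (constructor arguments as above are assumptions for the pre-defined offset models).
import cebra.models
from cebra.models.model import ConvolutionalModelMixin

model = cebra.models.init("offset10-model", num_neurons=100, num_units=32, num_output=8)
if isinstance(model, ConvolutionalModelMixin):
    # Convolutional models consume a full (batch, dim, time) series.
    print("model can be applied across the time dimension")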
- class cebra.models.model.ResampleModelMixin#
Bases: object
Mixin for models that re-sample the signal over time.
- class cebra.models.model.HasFeatureEncoder#
Bases: object
Networks with an explicitly defined feature encoder.
- class cebra.models.model.ClassifierModel(*, num_input, num_output, offset=None)#
Bases: Model, HasFeatureEncoder
Base model for classifiers.
Adds an additional classifier layer to the model, which is lazily initialized after calling set_output_num().
- Parameters:
See Model for the constructor arguments (num_input, num_output, offset).
- features_encoder#
The feature encoder mapping the input tensor (2d or 3d, depending on the exact model implementation) into a feature space of the same dimension.
- classifier#
Maps from the feature space to class scores.
- abstract get_offset()#
See get_offset().
- Return type: Offset
- set_output_num(label_num, override=False)#
Set the number of output classes.
- forward(inputs)#
See ClassifierModel.
- Return type:
- class cebra.models.model._OffsetModel(*args: Any, **kwargs: Any)#
Bases: Model, HasFeatureEncoder
- forward(inp)#
Compute the embedding given the input signal.
- Parameters:
inp – The input tensor of shape num_samples x self.num_input x time
- Returns:
The output tensor of shape num_samples x self.num_output x (time - receptive field).
Depending on the parameters used at initialization, the output embedding is normalized to the hypersphere (normalize = True).
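A sketch illustrating the shapes and the normalization of the output embedding, assuming a model initialized with normalize=True (as for the pre-defined normalized options); the constructor arguments are assumptions as above.
import torch
import cebra.models

model = cebra.models.init("offset10-model", num_neurons=100, num_units=32, num_output=8)
x = torch.randn(8, 100, 50)          # num_samples x num_input x time
y = model(x)                         # num_samples x num_output x (time - receptive field)
# With normalize=True, each feature vector lies on the hypersphere.
print(torch.linalg.norm(y, dim=1))   # approximately all ones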
- class cebra.models.model.ParameterCountMixin#
Bases: object
Add a parameter counter to a torch.nn.Module.
- class cebra.models.model.Offset10ModelMSE(*args: Any, **kwargs: Any)#
Bases: Offset10Model
Symmetric model with a 10 sample receptive field, without normalization.
Suitable for use with InfoNCE metrics for Euclidean space.
- class cebra.models.model.Offset5Model(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin
CEBRA model with a 5 sample receptive field and output normalization.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset0ModelMSE(*args: Any, **kwargs: Any)#
Bases: _OffsetModel
CEBRA model with a single sample receptive field, without output normalization.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset0Model(*args: Any, **kwargs: Any)#
Bases: _OffsetModel
CEBRA model with a single sample receptive field, with output normalization.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset0Modelv2(*args: Any, **kwargs: Any)#
Bases: _OffsetModel
CEBRA model with a single sample receptive field, with output normalization.
This is a variant of Offset0Model.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset0Modelv3(*args: Any, **kwargs: Any)#
Bases: _OffsetModel
CEBRA model with a single sample receptive field, with output normalization.
This is a variant of Offset0Model.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset0Modelv4(*args: Any, **kwargs: Any)#
Bases: _OffsetModel
CEBRA model with a single sample receptive field, with output normalization.
This is a variant of Offset0Model.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset0Modelv5(*args: Any, **kwargs: Any)#
Bases: _OffsetModel
CEBRA model with a single sample receptive field, with output normalization.
This is a variant of Offset0Model.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.ResampleModel(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin, ResampleModelMixin
CEBRA model with a 40 sample receptive field, output normalization and 4x subsampling.
- property resample_factor#
The factor by which the signal is downsampled.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Resample5Model(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin, ResampleModelMixin
CEBRA model with a 20 sample receptive field, output normalization and 4x subsampling.
- property resample_factor#
The factor by which the signal is downsampled.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Resample1Model(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ResampleModelMixin
CEBRA model with a 4 sample receptive field, output normalization and 2x subsampling.
This model is not convolutional, and needs to be applied to fixed (N, d, 4) inputs.
- property resample_factor#
The factor by which the signal is downsampled.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.SupervisedNN10(*args: Any, **kwargs: Any)#
Bases: ClassifierModel
A supervised model with a 10 sample receptive field.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.SupervisedNN1(*args: Any, **kwargs: Any)#
Bases: ClassifierModel
A supervised model with a single sample receptive field.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset36(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin
CEBRA model with a 36 sample receptive field.
- class cebra.models.model.Offset36Dropout(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin
CEBRA model with a 36 sample receptive field.
Note
Requires torch>=1.12.
- class cebra.models.model.Offset36Dropoutv2(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin
CEBRA model with a 36 sample receptive field.
Note
Requires torch>=1.12.
- class cebra.models.model.Offset40(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin
CEBRA model with a 40 sample receptive field.
- class cebra.models.model.Offset50(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin
CEBRA model with a 50 sample receptive field.
- class cebra.models.model.Offset15Model(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin
CEBRA model with a 15 sample receptive field.
- class cebra.models.model.Offset20Model(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin
CEBRA model with a 20 sample receptive field.
- class cebra.models.model.Offset10Model(*args: Any, **kwargs: Any)#
Bases: _OffsetModel, ConvolutionalModelMixin
CEBRA model with a 10 sample receptive field.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset0ModelMSETanH(*args: Any, **kwargs: Any)#
Bases: _OffsetModel
CEBRA model with a single sample receptive field, without output normalization.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset0ModelMSEClip(*args: Any, **kwargs: Any)#
Bases: _OffsetModel
CEBRA model with a single sample receptive field, without output normalization.
- forward(inputs)#
Compute the embedding given the input signal.
- Parameters:
inp – The input tensor of shape num_samples x self.num_input x time
- Returns:
The output tensor of shape num_samples x self.num_output x (time - receptive field).
Depending on the parameters used at initialization, the output embedding is normalized to the hypersphere (normalize = True).
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset0ModelMSETanHv2(*args: Any, **kwargs: Any)#
Bases: _OffsetModel
CEBRA model with a single sample receptive field, without output normalization.
- get_offset()#
See get_offset().
- Return type: Offset
- class cebra.models.model.Offset0ModelResNetTanH(*args: Any, **kwargs: Any)#
Bases: _OffsetModel
CEBRA model with a single sample receptive field, without output normalization.
- get_offset()#
See get_offset().
- Return type: Offset
Criterions#
Criterions for contrastive learning
Different criterions can be used for learning embeddings with CEBRA. The common
interface of criterions implementing the generalized InfoNCE metric is given by
BaseInfoNCE.
Criterions are available for fixed and learnable temperatures, as well as different similarity measures.
Note that criterions can have trainable parameters, which are automatically handled
by the training loops implemented in cebra.solver.base.Solver classes.
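A sketch of evaluating a criterion directly on reference, positive and negative embeddings. The three return values (total loss, alignment and uniformity terms) follow the reference implementation and are an assumption here; they may differ across versions.
import torch
import torch.nn.functional as F
from cebra.models.criterions import LearnableCosineInfoNCE

criterion = LearnableCosineInfoNCE(temperature=1.0, min_temperature=0.1)

# Illustrative batch of 64 normalized 8-dimensional embeddings.
ref = F.normalize(torch.randn(64, 8), dim=1)
pos = F.normalize(torch.randn(64, 8), dim=1)
neg = F.normalize(torch.randn(64, 8), dim=1)

# Assumed to return (total loss, alignment term, uniformity term).
loss, align, uniform = criterion(ref, pos, neg)
loss.backward()   # the learnable temperature receives a gradient as well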
- cebra.models.criterions.dot_similarity(ref, pos, neg)#
Cosine similarity between the reference, positive and negative pairs.
- Parameters:
- Return type:
- Returns:
The similarity between reference samples and positive samples of shape (n,), and the similarities between reference samples and negative samples of shape (n, n).
- cebra.models.criterions.euclidean_similarity(ref, pos, neg)#
Negative L2 distance between the reference, positive and negative pairs.
- Parameters:
- Return type:
- Returns:
The similarity between reference samples and positive samples of shape (n,), and the similarities between reference samples and negative samples of shape (n, n).
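A minimal sketch of both similarity measures on (n, d) batches, written with plain torch operations rather than the library implementation (the library may use a squared variant of the Euclidean distance).
import torch

def dot_similarity_sketch(ref, pos, neg):
    pos_dist = torch.einsum("nd,nd->n", ref, pos)    # shape (n,)
    neg_dist = torch.einsum("nd,md->nm", ref, neg)   # shape (n, n)
    return pos_dist, neg_dist

def euclidean_similarity_sketch(ref, pos, neg):
    pos_dist = -torch.linalg.norm(ref - pos, dim=1)  # shape (n,)
    neg_dist = -torch.cdist(ref, neg)                # shape (n, n)
    return pos_dist, neg_dist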
- cebra.models.criterions.infonce(pos_dist, neg_dist)#
InfoNCE implementation
See BaseInfoNCE for reference.
Note
The behavior of this function changed beginning in CEBRA 0.3.0. The InfoNCE implementation is numerically stabilized.
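A sketch of a numerically stabilized computation of the generalized InfoNCE objective defined below, using the log-sum-exp trick. This is illustrative and not the library's exact code; it averages over the batch, whereas the formula below shows a sum.
import torch

def infonce_sketch(pos_dist, neg_dist):
    # Shift both terms by the per-sample maximum negative similarity so the
    # exponentials stay in a safe numerical range; the shift cancels exactly.
    with torch.no_grad():
        shift = neg_dist.max(dim=1, keepdim=True).values
    align = (-(pos_dist - shift.squeeze(1))).mean()
    uniform = torch.logsumexp(neg_dist - shift, dim=1).mean()
    return align + uniform, align, uniform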
- class cebra.models.criterions.ContrastiveLoss(*args, **kwargs)#
Bases: Module
Base class for contrastive losses.
Note
Added in 0.0.2.
- class cebra.models.criterions.BaseInfoNCE(*args, **kwargs)#
Bases: ContrastiveLoss
Base class for all InfoNCE losses.
Given a similarity measure \(\phi\) which will be implemented by the subclasses of this class, the generalized InfoNCE loss is computed as
\[\sum_{i=1}^n - \phi(x_i, y^{+}_i) + \log \sum_{j=1}^{n} e^{\phi(x_i, y^{-}_{ij})}\]
where \(n\) is the batch size, \(x\) are the reference samples (ref), \(y^{+}\) are the positive samples (pos) and \(y^{-}\) are the negative samples (neg).
- _distance(ref, pos, neg)#
The similarity measure.
- Parameters:
- Return type:
- Returns:
The distance between reference samples and positive samples of shape (n,), and the distances between reference samples and negative samples of shape (n, n).
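As a sketch, a new similarity measure can be added by implementing _distance in a subclass. The example below builds on FixedInfoNCE (documented next) and assumes the base class consumes the (pos_dist, neg_dist) pair returned here; the class itself is illustrative.
import torch
from cebra.models.criterions import FixedInfoNCE

class FixedDotInfoNCE(FixedInfoNCE):
    """Illustrative: plain dot-product similarity with a fixed temperature."""

    def _distance(self, ref, pos, neg):
        pos_dist = torch.einsum("nd,nd->n", ref, pos) / self.temperature
        neg_dist = torch.einsum("nd,md->nm", ref, neg) / self.temperature
        return pos_dist, neg_dist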
- class cebra.models.criterions.FixedInfoNCE(temperature=1.0)#
Bases: BaseInfoNCE
InfoNCE base loss with a fixed temperature.
- temperature#
The softmax temperature
- class cebra.models.criterions.LearnableInfoNCE(temperature=1.0, min_temperature=None)#
Bases: BaseInfoNCE
InfoNCE base loss with a learnable temperature.
- temperature#
The current value of the learnable temperature parameter.
- min_temperature#
The minimum temperature to use. Increase the minimum temperature if you encounter numerical issues during optimization.
- class cebra.models.criterions.FixedCosineInfoNCE(temperature=1.0)#
Bases: FixedInfoNCE
Cosine similarity function with fixed temperature.
The similarity metric is given as
\[\phi(x, y) = x^\top y / \tau\]
with fixed temperature \(\tau > 0\).
Note that this loss function should typically only be used with normalized feature vectors. This class itself does not perform any checks; ensure that \(x\) and \(y\) are normalized.
- _distance(ref, pos, neg)#
The similarity measure.
- Parameters:
- Return type:
- Returns:
The distance between reference samples and positive samples of shape (n,), and the distances between reference samples and negative samples of shape (n, n).
- class cebra.models.criterions.FixedEuclideanInfoNCE(temperature=1.0)#
Bases: FixedInfoNCE
L2 similarity function with fixed temperature.
The similarity metric is given as
\[\phi(x, y) = - \| x - y \| / \tau\]
with fixed temperature \(\tau > 0\).
- _distance(ref, pos, neg)#
The similarity measure.
- Parameters:
- Return type:
- Returns:
The distance between reference samples and positive samples of shape (n,), and the distances between reference samples and negative samples of shape (n, n).
- class cebra.models.criterions.LearnableCosineInfoNCE(temperature=1.0, min_temperature=None)#
Bases: LearnableInfoNCE
Cosine similarity function with a learnable temperature.
Like FixedCosineInfoNCE, but with a learnable temperature parameter \(\tau\).
- _distance(ref, pos, neg)#
The similarity measure.
- Parameters:
- Return type:
- Returns:
The distance between reference samples and positive samples of shape (n,), and the distances between reference samples and negative samples of shape (n, n).
- class cebra.models.criterions.LearnableEuclideanInfoNCE(temperature=1.0, min_temperature=None)#
Bases: LearnableInfoNCE
L2 similarity function with a learnable temperature.
Like FixedEuclideanInfoNCE, but with a learnable temperature parameter \(\tau\).
- _distance(ref, pos, neg)#
The similarity measure.
- Parameters:
- Return type:
- Returns:
The distance between reference samples and positive samples of shape (n,), and the distances between reference samples and negative samples of shape (n, n).
- cebra.models.criterions.InfoNCE#
alias of FixedCosineInfoNCE
- cebra.models.criterions.InfoMSE#
alias of FixedEuclideanInfoNCE
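The aliases refer to the same classes, so the default InfoNCE criterion can be constructed under either name:
from cebra.models.criterions import InfoNCE, FixedCosineInfoNCE

assert InfoNCE is FixedCosineInfoNCE
criterion = InfoNCE(temperature=1.0)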
- class cebra.models.criterions.NCE(*args: Any, **kwargs: Any)#
Bases: ContrastiveLoss
Noise contrastive estimation (Gutmann & Hyvärinen, 2012).
Layers and model building blocks#
Neural network layers used for building cebra models.
Layers are used in the models defined in model.
Multi-objective models#
The multi-objective interface was moved to a separate section beginning with CEBRA 0.6.0. Please see the Multi-objective models section for all details, both on the old and new API interface.