Regularized Contrastive Learning#

Regularized solvers#

Regularized contrastive learning.

class cebra.solver.regularized.RegularizedSolver(model, criterion, optimizer, history=<factory>, decode_history=<factory>, log=<factory>, tqdm_on=True, lambda_JR=None)#

Bases: SingleSessionSolver

Optimize a model using the Jacobian regularizer.

step(batch)#

Perform a single gradient update using Jacobian regularization.

Parameters:

batch (Batch) – The input samples.

Return type:

dict

Returns:

Dictionary containing training metrics.
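
Conceptually, the regularized step adds the weighted Jacobian penalty to the contrastive loss before backpropagating. A minimal sketch of that idea, assuming a criterion that returns a scalar loss and a Batch exposing reference, positive, and negative samples (the helper regularized_step and this exact wiring are illustrative, not the solver's actual implementation):

import torch

def regularized_step(model, criterion, optimizer, jac_reg, lambda_jr, batch):
    # Illustrative only: the real solver's bookkeeping differs.
    optimizer.zero_grad()

    # The Jacobian penalty is taken w.r.t. the inputs, so they must
    # track gradients.
    x = batch.reference.requires_grad_(True)
    y = model(x)

    # Contrastive loss on the embeddings ...
    loss = criterion(y, model(batch.positive), model(batch.negative))
    # ... plus the weighted Jacobian penalty (lambda_JR above).
    loss = loss + lambda_jr * jac_reg(x, y)

    loss.backward()
    optimizer.step()
    return {"total": float(loss)}

Which quantities end up in the returned metrics dictionary is up to the solver; the sketch only shows the total loss.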

Schedulers#

class cebra.solver.schedulers.Scheduler#

Bases: ABC

class cebra.solver.schedulers.ConstantScheduler(initial_weights)#

Bases: Scheduler

class cebra.solver.schedulers.LinearScheduler(n_splits, step_to_switch_on_reg, step_to_switch_off_reg, start_weight, end_weight, stay_constant_after_switch_off=False)#

Bases: Scheduler

class cebra.solver.schedulers.LinearRampUp(n_splits, step_to_switch_on_reg, step_to_switch_off_reg, start_weight, end_weight, stay_constant_after_switch_off=False)#

Bases: LinearScheduler
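
The schedulers control how strongly the regularizer is weighted over training. Their exact semantics are not documented here, but reading off the LinearScheduler constructor arguments, a linear ramp plausibly interpolates the weight between start_weight and end_weight inside the window from step_to_switch_on_reg to step_to_switch_off_reg. A sketch of that reading (linear_ramp_weight is illustrative and not part of the API; n_splits is omitted):

def linear_ramp_weight(step, on_step, off_step, start_weight, end_weight,
                       stay_constant_after_switch_off=False):
    # Illustrative reading of the constructor arguments, not the class itself.
    if step < on_step:
        # Regularization not switched on yet.
        return 0.0
    if step >= off_step:
        # Either keep the final weight or switch the regularizer off.
        return end_weight if stay_constant_after_switch_off else 0.0
    # Linear interpolation over the active window.
    fraction = (step - on_step) / max(off_step - on_step, 1)
    return start_weight + fraction * (end_weight - start_weight)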

Jacobian Regularization#

Jacobian Regularization for CEBRA.

This implementation is adapted from the Jacobian regularization described in [1].

class cebra.models.jacobian_regularizer.JacobianReg(n=1)#

Bases: Module

Loss criterion that computes the trace of the square of the Jacobian.

Parameters:

n (int) – Number of random projections used to estimate the Jacobian norm. If n=-1, it is set to the dimension of the output space, and the projections are non-random and orthonormal, yielding the exact result. For any reasonable batch size, the default (n=1) should be sufficient.

Default: 1
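
The reason a single random projection usually suffices: for a random unit vector v in the d-dimensional output space, E[|v^T (dy/dx)|^2] = |dy/dx|_F^2 / d, so each projection costs only one vector-Jacobian product, and the estimates are additionally averaged over the batch. A minimal sketch of this estimator (names and normalization are illustrative, not the module's internals; JacobianReg returns half this quantity per its docstring):

import torch

def jacobian_frobenius_sq(x, y, n_proj=1):
    # Unbiased estimate of tr |dy/dx|^2, averaged over the batch.
    batch_size, d = y.shape
    estimate = 0.0
    for _ in range(n_proj):
        # Random projection direction, normalized per sample.
        v = torch.randn_like(y)
        v = v / v.norm(dim=1, keepdim=True)
        # One vector-Jacobian product: gradient of <v, y> w.r.t. x.
        (jv,) = torch.autograd.grad(y, x, grad_outputs=v,
                                    retain_graph=True, create_graph=True)
        # d * E|v^T J|^2 = |J|_F^2, so rescale by the output dimension.
        estimate = estimate + d * jv.pow(2).sum() / n_proj
    return estimate / batch_size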

forward(x, y)#

Computes (1/2) tr |dy/dx|^2.

Parameters:

x (Tensor) – The input tensor; requires_grad must be enabled, since the Jacobian is taken with respect to it.

y (Tensor) – The model output computed from x.

Return type:

Tensor

Returns:

The computed regularization term.
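
A minimal usage sketch of the documented interface (the linear model is a stand-in for an embedding model; in the solver, this penalty is weighted by lambda_JR and added to the contrastive loss):

import torch
from cebra.models.jacobian_regularizer import JacobianReg

reg = JacobianReg(n=1)           # one random projection (the default)
model = torch.nn.Linear(10, 3)   # stand-in for an embedding model

x = torch.randn(32, 10, requires_grad=True)  # inputs must track gradients
y = model(x)

penalty = reg(x, y)  # estimates (1/2) tr |dy/dx|^2
penalty.backward()   # the penalty is differentiable, so it can be trained against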