fairseq2.recipes.wav2vec2

classDiagram
  ABC <|-- EvalUnit
  ABC <|-- TrainUnit
  AbstractEvalUnit <|-- Wav2Vec2EvalUnit
  AbstractTrainUnit <|-- Wav2Vec2TrainUnit
  BaseMetricBag <|-- Wav2Vec2MetricBag
  DatasetSection <|-- Wav2Vec2EvalDatasetSection
  DatasetSection <|-- Wav2Vec2TrainDatasetSection
  EvalUnit <|-- AbstractEvalUnit
  Generic <|-- EvalUnit
  Generic <|-- TrainUnit
  MetricBag <|-- BaseMetricBag
  TrainUnit <|-- AbstractTrainUnit
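The hierarchy above centers on the generic TrainUnit/EvalUnit abstractions: a unit is parameterized by a batch type and processes one batch at a time. A minimal sketch of that pattern, using hypothetical stand-in classes rather than fairseq2's actual interfaces:

```python
from abc import ABC, abstractmethod
from typing import Generic, List, TypeVar

BatchT = TypeVar("BatchT")


class TrainUnit(ABC, Generic[BatchT]):
    """Stand-in for the generic train-unit interface: consume one batch, return a loss."""

    @abstractmethod
    def __call__(self, batch: BatchT) -> float:
        ...


class ToyTrainUnit(TrainUnit[List[float]]):
    """A concrete unit specialized to a toy batch type (a list of floats)."""

    def __call__(self, batch: List[float]) -> float:
        # Hypothetical "loss": the mean of the batch values.
        return sum(batch) / len(batch)


unit = ToyTrainUnit()
print(unit([1.0, 2.0, 3.0]))  # 2.0
```

In fairseq2 the concrete units (e.g. Wav2Vec2TrainUnit) are specialized to SequenceBatch in the same way.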

Classes

class fairseq2.recipes.wav2vec2.Wav2Vec2TrainConfig(*, model=<factory>, dataset=<factory>, gang=<factory>, trainer=<factory>, loss=<factory>, optimizer=<factory>, lr_scheduler=<factory>, regime=<factory>, common=<factory>)[source]

Bases: object

The default values correspond to the base ls960h training setup as described in Baevski et al. [BZMA20].
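The `<factory>` markers in the signature mean that each nested config section is constructed fresh per instance via `dataclasses.field(default_factory=...)`, so mutating one config never leaks into another. A minimal sketch of the pattern, with hypothetical section and field names:

```python
from dataclasses import dataclass, field


@dataclass
class LossSection:
    # Hypothetical defaults for illustration only.
    diversity_loss_weight: float = 0.1
    feature_penalty_weight: float = 10.0


@dataclass
class TrainConfig:
    # Each TrainConfig gets its own LossSection instance.
    loss: LossSection = field(default_factory=LossSection)


a = TrainConfig()
b = TrainConfig()
a.loss.diversity_loss_weight = 0.0  # override one field on one instance

print(b.loss.diversity_loss_weight)  # unchanged: 0.1
```

This is why the documented defaults can safely encode a full training setup: overriding a field in one config leaves the recipe's defaults intact elsewhere.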

final class fairseq2.recipes.wav2vec2.Wav2Vec2TrainUnit(criterion, gangs)[source]

Bases: AbstractTrainUnit[SequenceBatch]

property metric_bag: Wav2Vec2MetricBag

The training-related metrics.

class fairseq2.recipes.wav2vec2.Wav2Vec2EvalConfig(*, model: 'ReferenceModelSection' = <factory>, dataset: 'Wav2Vec2EvalDatasetSection' = <factory>, gang: 'GangSection' = <factory>, evaluator: 'EvaluatorSection' = <factory>, loss: 'Wav2Vec2LossSection' = <factory>, common: 'CommonSection' = <factory>)[source]

Bases: object

class fairseq2.recipes.wav2vec2.Wav2Vec2MetricBag(gang, train=True)[source]

Bases: BaseMetricBag

Parameters:

train (bool) – If True, indicates that this bag is used in a training task.

update_batch_metrics(batch)[source]

Update the batch metrics.
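A per-batch metric update typically accumulates weighted running statistics that are synced across the gang later. A rough sketch of the idea, using a hypothetical stand-in bag rather than Wav2Vec2MetricBag's real metrics:

```python
from typing import List


class MeanMetric:
    """Stand-in running weighted mean (no cross-process sync)."""

    def __init__(self) -> None:
        self.total = 0.0
        self.count = 0

    def update(self, value: float, weight: int = 1) -> None:
        self.total += value * weight
        self.count += weight

    def compute(self) -> float:
        return self.total / self.count if self.count else 0.0


class ToyMetricBag:
    def __init__(self) -> None:
        self.loss = MeanMetric()
        self.batch_size = MeanMetric()

    def update_batch_metrics(self, batch: List[float], loss: float) -> None:
        # Weight the loss by the number of elements so uneven batches
        # average correctly.
        self.loss.update(loss, weight=len(batch))
        self.batch_size.update(float(len(batch)))


bag = ToyMetricBag()
bag.update_batch_metrics([0.1, 0.2], loss=0.5)
bag.update_batch_metrics([0.3], loss=0.2)
print(bag.loss.compute())  # (0.5*2 + 0.2*1) / 3 = 0.4
```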

final class fairseq2.recipes.wav2vec2.Wav2Vec2Criterion(model, diversity_loss_weight, feature_penalty_weight)[source]

Bases: object
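Following Baevski et al. [BZMA20], the wav2vec 2.0 objective combines a contrastive loss with a weighted diversity loss and a weighted feature penalty, which matches the two weight parameters in the constructor above. A toy scalar sketch of that combination (the real criterion operates on model outputs, not scalars, and the example values are illustrative only):

```python
def total_loss(
    contrastive: float,
    diversity: float,
    penalty: float,
    diversity_loss_weight: float,
    feature_penalty_weight: float,
) -> float:
    # L = L_contrastive + alpha * L_diversity + beta * feature_penalty
    return (
        contrastive
        + diversity_loss_weight * diversity
        + feature_penalty_weight * penalty
    )


print(total_loss(2.0, 1.0, 0.01,
                 diversity_loss_weight=0.1,
                 feature_penalty_weight=10.0))  # 2.0 + 0.1 + 0.1 = 2.2
```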

Functions

fairseq2.recipes.wav2vec2.load_wav2vec2_trainer(context, config, output_dir)[source]

Load a Trainer for the wav2vec 2.0 pretraining recipe from the given configuration.

Return type:

Trainer[SequenceBatch]

fairseq2.recipes.wav2vec2.load_wav2vec2_evaluator(context, config, output_dir)[source]

Load an Evaluator for the wav2vec 2.0 evaluation recipe from the given configuration.

Return type:

Evaluator[SequenceBatch]