fairseq2.recipes.wav2vec2

Inheritance diagram

    classDiagram
      ABC <|-- EvalUnit
      ABC <|-- TrainUnit
      DatasetSection <|-- Wav2Vec2EvalDatasetSection
      DatasetSection <|-- Wav2Vec2TrainDatasetSection
      EvalUnit <|-- Wav2Vec2EvalUnit
      Generic <|-- EvalUnit
      Generic <|-- TrainUnit
      TrainUnit <|-- Wav2Vec2TrainUnit

Classes

class fairseq2.recipes.wav2vec2.Wav2Vec2TrainConfig(*, model=<factory>, dataset=<factory>, gang=<factory>, trainer=<factory>, loss=<factory>, optimizer=<factory>, lr_scheduler=<factory>, regime=<factory>, common=<factory>)[source]

Bases: object

The default values correspond to the base LibriSpeech 960h (ls960h) training setup described in Baevski et al. [BZMA20].
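
A minimal sketch of constructing the config programmatically: every section has a factory default, so instantiating the dataclass with no arguments yields the base ls960h setup. The attribute path in the trailing comment is illustrative, not a documented field layout.

    from fairseq2.recipes.wav2vec2 import Wav2Vec2TrainConfig

    # All sections fall back to their factory defaults, i.e. the base ls960h setup.
    config = Wav2Vec2TrainConfig()

    # Sections are plain attributes and can be adjusted after construction, e.g.:
    # config.optimizer.config["lr"] = 5e-4  # illustrative only; the actual layout may differ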

final class fairseq2.recipes.wav2vec2.Wav2Vec2TrainUnit(model, criterion)[source]

Bases: TrainUnit[SequenceBatch]

class fairseq2.recipes.wav2vec2.Wav2Vec2EvalConfig(*, model: ReferenceModelSection = <factory>, dataset: Wav2Vec2EvalDatasetSection = <factory>, gang: GangSection = <factory>, evaluator: EvaluatorSection = <factory>, loss: Wav2Vec2LossSection = <factory>, common: CommonSection = <factory>)[source]

Bases: object

final class fairseq2.recipes.wav2vec2.Wav2Vec2Criterion(module, diversity_weight, features_penalty_weight)[source]

Bases: object
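
For orientation, a hedged sketch of how the criterion and the train unit fit together. The `module`/`model` placeholders stand in for an already constructed wav2vec 2.0 model (depending on the fairseq2 version they may be the same object or a wrapped handle), and the loss weights are illustrative, not recommended values; load_wav2vec2_trainer below normally performs this wiring for you.

    from fairseq2.recipes.wav2vec2 import Wav2Vec2Criterion, Wav2Vec2TrainUnit

    module = ...  # the wav2vec 2.0 network (placeholder)
    model = ...   # the recipe-level model handle (placeholder)

    # Weight values are illustrative only.
    criterion = Wav2Vec2Criterion(module, diversity_weight=0.1, features_penalty_weight=10.0)

    # The train unit computes the wav2vec 2.0 loss for each SequenceBatch it receives.
    unit = Wav2Vec2TrainUnit(model, criterion)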

Functions

fairseq2.recipes.wav2vec2.load_wav2vec2_trainer(context, config, output_dir)[source]

Return type: Trainer
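
A sketch of driving the trainer loader directly; the recipe CLI normally handles all of this. Obtaining the context via setup_fairseq2() is an assumption that holds for recent fairseq2 releases, and the output path is arbitrary.

    from pathlib import Path

    from fairseq2 import setup_fairseq2
    from fairseq2.recipes.wav2vec2 import Wav2Vec2TrainConfig, load_wav2vec2_trainer

    context = setup_fairseq2()  # assumed to return the RuntimeContext the loader expects

    config = Wav2Vec2TrainConfig()  # base ls960h defaults

    trainer = load_wav2vec2_trainer(context, config, Path("runs/wav2vec2_base"))
    # The returned Trainer is normally executed by the fairseq2 recipe CLI;
    # how to invoke it directly depends on the fairseq2 version.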

fairseq2.recipes.wav2vec2.load_wav2vec2_evaluator(context, config, output_dir)[source]

Return type: Evaluator
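
The evaluator loader mirrors the trainer loader; in this sketch the checkpoint to evaluate would be selected through the `model` section of Wav2Vec2EvalConfig, which is left at its default here.

    from pathlib import Path

    from fairseq2 import setup_fairseq2
    from fairseq2.recipes.wav2vec2 import Wav2Vec2EvalConfig, load_wav2vec2_evaluator

    context = setup_fairseq2()  # same assumption as in the trainer sketch

    config = Wav2Vec2EvalConfig()  # the `model` section selects the model to evaluate

    evaluator = load_wav2vec2_evaluator(context, config, Path("runs/wav2vec2_eval"))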