fairseq2.recipes.wav2vec2
classDiagram
    ABC <|-- EvalUnit
    ABC <|-- TrainUnit
    AbstractEvalUnit <|-- Wav2Vec2EvalUnit
    AbstractTrainUnit <|-- Wav2Vec2TrainUnit
    BaseMetricBag <|-- Wav2Vec2MetricBag
    DatasetSection <|-- Wav2Vec2EvalDatasetSection
    DatasetSection <|-- Wav2Vec2TrainDatasetSection
    EvalUnit <|-- AbstractEvalUnit
    Generic <|-- EvalUnit
    Generic <|-- TrainUnit
    MetricBag <|-- BaseMetricBag
    TrainUnit <|-- AbstractTrainUnit
Classes
- class fairseq2.recipes.wav2vec2.Wav2Vec2TrainConfig(*, model=<factory>, dataset=<factory>, gang=<factory>, trainer=<factory>, loss=<factory>, optimizer=<factory>, lr_scheduler=<factory>, regime=<factory>, common=<factory>)[source]
Bases: object
The default values correspond to the base ls960h training setup as described in Baevski et al. [BZMA20].
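A minimal sketch of how this config might be instantiated and inspected, assuming only what the signature above shows: a keyword-only dataclass whose sections all have factory defaults. The attributes inside each section object (e.g. the optimizer settings) depend on the installed fairseq2 version, so the attribute access below is illustrative rather than exhaustive.

```python
from fairseq2.recipes.wav2vec2 import Wav2Vec2TrainConfig

# Build the default configuration; every section falls back to its factory
# default, i.e. the base ls960h setup described above.
config = Wav2Vec2TrainConfig()

# Each field (model, dataset, gang, trainer, loss, optimizer, lr_scheduler,
# regime, common) holds a section object that can be inspected or replaced.
print(config.optimizer)
print(config.regime)
```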
- final class fairseq2.recipes.wav2vec2.Wav2Vec2TrainUnit(criterion, gangs)[source]
Bases: AbstractTrainUnit[SequenceBatch]
- property metric_bag: Wav2Vec2MetricBag
The training-related metrics.
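Constructing the unit requires a criterion and the process gangs, which this section does not show how to build, so the sketch below only illustrates how an already-constructed unit might be consumed. The helper function name is hypothetical; the imports are deferred to type checking to avoid assuming the exact runtime import path of Wav2Vec2MetricBag.

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # For annotations only; both classes are listed in this module's diagram.
    from fairseq2.recipes.wav2vec2 import Wav2Vec2MetricBag, Wav2Vec2TrainUnit


def collect_train_metrics(unit: "Wav2Vec2TrainUnit") -> "Wav2Vec2MetricBag":
    """Hypothetical helper: read the unit's training-related metrics."""
    # `metric_bag` is the property documented above.
    return unit.metric_bag
```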
- class fairseq2.recipes.wav2vec2.Wav2Vec2EvalConfig(*, model: ReferenceModelSection = <factory>, dataset: Wav2Vec2EvalDatasetSection = <factory>, gang: GangSection = <factory>, evaluator: EvaluatorSection = <factory>, loss: Wav2Vec2LossSection = <factory>, common: CommonSection = <factory>)[source]
Bases: object
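As with the training config, a short sketch assuming the evaluation config behaves as the signature indicates: a keyword-only dataclass whose sections (model, dataset, gang, evaluator, loss, common) default to their factory values. The contents of each section are version-dependent, so the prints below are only illustrative.

```python
from fairseq2.recipes.wav2vec2 import Wav2Vec2EvalConfig

# Build the default evaluation configuration from its factory defaults.
eval_config = Wav2Vec2EvalConfig()

# Inspect individual sections before overriding them in a recipe run.
print(eval_config.evaluator)
print(eval_config.dataset)
```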