Projection Layers

class fairseq2.nn.Projection(input_dim: int, output_dim: int)[source]

Bases: Module, ABC

Applies a linear transformation to input data.

abstract forward(x: Tensor) → Tensor[source]

Projects the input data.

x must be of shape \((*,H_{inp})\), where \(H_{inp}\) is the input dimensionality of this module.

The projected output will be of shape \((*,H_{out})\), where all but the last dimension match the shape of x and \(H_{out}\) is the output dimensionality of this module.

final class fairseq2.nn.Linear(input_dim: int, output_dim: int, bias: bool, *, init_fn: Callable[[Linear], None] | None = None, device: device | None = None, dtype: dtype | None = None)[source]

Bases: Projection

Represents the standard implementation of Projection.

Note

This class is identical to torch.nn.Linear.

Unless overridden by init_fn, the weight and bias of this module are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{input_dim}}\).
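Because the note above states this class is identical to torch.nn.Linear, the default initialization bound can be checked directly against a plain torch module (a sketch; the sampled values vary per seed, but every entry stays inside the documented interval):

```python
import math

import torch
from torch import nn

# Default initialization: weight and bias are drawn from
# U(-sqrt(k), sqrt(k)) with k = 1 / input_dim. torch.nn.Linear uses the
# same scheme, so we verify the bound on it directly.
input_dim, output_dim = 256, 128
bound = math.sqrt(1.0 / input_dim)  # k = 1 / input_dim

proj = nn.Linear(input_dim, output_dim)

# Every entry of weight and bias falls inside [-sqrt(k), sqrt(k)].
assert proj.weight.abs().max().item() <= bound
assert proj.bias.abs().max().item() <= bound
```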

If init_fn is provided, it will be used to initialize the weight and bias in reset_parameters().
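A custom init_fn must match the documented signature Callable[[Linear], None]: it receives the module itself and mutates its parameters in place. The function name `zero_bias_xavier` below is our own illustration, sketched against torch.nn.Linear (identical per the note above); with fairseq2 installed you would pass it as `init_fn=zero_bias_xavier` to fairseq2.nn.Linear instead of calling it by hand:

```python
import torch
from torch import nn

# A custom initializer matching the documented init_fn signature:
# it receives the projection and mutates weight and bias in place.
def zero_bias_xavier(proj: nn.Linear) -> None:
    nn.init.xavier_uniform_(proj.weight)
    if proj.bias is not None:
        nn.init.zeros_(proj.bias)

# With fairseq2 this would be:
#   Linear(512, 1024, bias=True, init_fn=zero_bias_xavier)
# Here we apply the initializer manually to the identical torch module.
proj = nn.Linear(512, 1024)
zero_bias_xavier(proj)
```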

reset_parameters() → None[source]
forward(x: Tensor) → Tensor[source]

Projects the input data.

x must be of shape \((*,H_{inp})\), where \(H_{inp}\) is the input dimensionality of this module.

The projected output will be of shape \((*,H_{out})\), where all but the last dimension match the shape of x and \(H_{out}\) is the output dimensionality of this module.
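The shape contract above preserves every leading dimension; only the last is transformed. A minimal sketch using torch.nn.Linear, which the note above states is identical:

```python
import torch
from torch import nn

# A projection from H_inp = 512 to H_out = 1024.
proj = nn.Linear(512, 1024)

# Any number of leading dimensions is allowed; only the last changes.
x = torch.randn(8, 20, 512)   # (batch, seq, H_inp)
y = proj(x)

assert y.shape == (8, 20, 1024)  # (batch, seq, H_out)
```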

final class fairseq2.nn.TiedProjection(weight: Parameter, bias: Parameter | None)[source]

Bases: Projection

Applies a linear transformation to input data using the weight and bias of another Module instance.

forward(x: Tensor) → Tensor[source]

Projects the input data.

x must be of shape \((*,H_{inp})\), where \(H_{inp}\) is the input dimensionality of this module.

The projected output will be of shape \((*,H_{out})\), where all but the last dimension match the shape of x and \(H_{out}\) is the output dimensionality of this module.
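A common use of this pattern is tying the output projection to the input embedding so both share a single parameter tensor. Sketched below with plain torch (F.linear applied to the embedding's weight); with fairseq2 installed the same tying would be expressed as `TiedProjection(embed.weight, bias=None)`:

```python
import torch
from torch import nn
from torch.nn import functional as F

vocab_size, model_dim = 1000, 64
embed = nn.Embedding(vocab_size, model_dim)

# The tied projection owns no weight of its own; it reuses embed.weight,
# which is what TiedProjection(embed.weight, bias=None) expresses in
# fairseq2. Updating the embedding therefore updates the projection too.
hidden = torch.randn(2, 5, model_dim)    # (*, H_inp)
logits = F.linear(hidden, embed.weight)  # (*, vocab_size)

assert logits.shape == (2, 5, vocab_size)
```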