neuraltrain.models.simpleconv.SimpleConv
- class neuraltrain.models.simpleconv.SimpleConv(*, hidden: int = 16, depth: int = 4, linear_out: bool = False, complex_out: bool = False, kernel_size: int = 5, growth: float = 1.0, dilation_growth: int = 2, dilation_period: int | None = None, skip: bool = False, post_skip: bool = False, scale: float | None = None, rewrite: bool = False, groups: int = 1, glu: int = 0, glu_context: int = 0, glu_glu: bool = True, gelu: bool = False, dropout: float = 0.0, dropout_rescale: bool = True, conv_dropout: float = 0.0, dropout_input: float = 0.0, batch_norm: bool = False, relu_leakiness: float = 0.0, transformer_config: TransformerEncoder | None = None, subject_layers_config: SubjectLayers | None = None, subject_layers_dim: Literal['input', 'hidden'] = 'hidden', merger_config: ChannelMerger | None = ChannelMerger(n_virtual_channels=270, fourier_emb_config=FourierEmb(n_freqs=None, total_dim=2048, n_dims=2, margin=0.2), dropout=0.2, dropout_around_channel=False, usage_penalty=0.0, n_subjects=200, per_subject=False, embed_ref=False, unmerge=False, invalid_value=-0.1), initial_linear: int = 0, initial_depth: int = 1, initial_nonlin: bool = False, backbone_out_channels: int | None = None)[source]
1-D convolutional encoder, adapted from brainmagick.
- Parameters:
hidden (int) – Number of channels in the first convolutional layer.
depth (int) – Number of convolutional layers.
linear_out (bool) – Use a single transposed convolution as the output projection.
complex_out (bool) – Use a two-layer transposed-convolution output projection with a non-linearity in between. Mutually exclusive with linear_out.
kernel_size (int) – Kernel size for every convolutional layer (must be odd).
growth (float) – Multiplicative channel growth factor per layer.
dilation_growth (int) – Multiplicative dilation growth factor per layer.
dilation_period (int or None) – If set, reset dilation to 1 every dilation_period layers.
skip (bool) – Add residual skip connections when input and output shapes match.
post_skip (bool) – Append a depth-wise convolution after each skip connection.
scale (float or None) – If set, apply LayerScale with this initial value after each skip connection.
rewrite (bool) – Append a 1x1 convolution + LeakyReLU after each layer.
groups (int) – Number of groups for grouped convolutions (first layer always uses 1).
glu (int) – If non-zero, insert a GLU gate every glu layers.
glu_context (int) – Context (padding) size for the GLU convolution.
glu_glu (bool) – If True, the gate uses nn.GLU; otherwise the layer activation.
gelu (bool) – Use GELU activation instead of (Leaky)ReLU.
dropout (float) – Channel-dropout probability (currently raises NotImplementedError).
dropout_rescale (bool) – Rescale activations after channel dropout.
conv_dropout (float) – Dropout probability inside each convolutional block.
dropout_input (float) – Dropout probability applied to the input of the convolutional stack.
batch_norm (bool) – Apply batch normalization after each convolution.
relu_leakiness (float) – Negative slope for LeakyReLU (0 gives standard ReLU).
transformer_config (TransformerEncoder or None) – If set, append a Transformer encoder after the convolutional stack.
subject_layers_config (SubjectLayers or None) – If set, prepend a per-subject linear projection.
subject_layers_dim ({"input", "hidden"}) – Dimension used for the subject-layer projection.
merger_config (ChannelMerger or None) – If set, prepend a ChannelMerger for multi-montage support.
initial_linear (int) – If non-zero, prepend a 1x1 convolution projecting to this many channels.
initial_depth (int) – Number of 1x1 convolution layers in the initial projection.
initial_nonlin (bool) – Append a non-linearity after the initial 1x1 projection stack.
backbone_out_channels (int or None) – If set, force the backbone output to this many channels.
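The interaction of hidden, depth, growth, dilation_growth, and dilation_period can be sketched with a small helper. This is a hypothetical illustration, not part of neuraltrain; SimpleConv's internal layer construction may differ in detail (e.g. how fractional channel counts are rounded):

```python
def conv_schedule(hidden=16, depth=4, growth=1.0,
                  dilation_growth=2, dilation_period=None):
    """Per-layer (channels, dilation) pairs implied by the parameters above.

    Hypothetical helper, shown only to illustrate the documented schedule.
    """
    schedule = []
    channels = float(hidden)
    dilation = 1
    for layer in range(depth):
        if dilation_period is not None and layer % dilation_period == 0:
            dilation = 1  # reset dilation every `dilation_period` layers
        schedule.append((int(round(channels)), dilation))
        channels *= growth           # multiplicative channel growth per layer
        dilation *= dilation_growth  # multiplicative dilation growth per layer
    return schedule
```

With the defaults (depth=4, dilation_growth=2, dilation_period=None) the per-layer dilations are 1, 2, 4, 8; setting dilation_period=2 resets the schedule to 1, 2, 1, 2.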
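When glu is non-zero and glu_glu is True, the inserted gate is GLU-style, as in torch.nn.GLU: the channels are split into two halves a and b, and the output is a * sigmoid(b). A minimal scalar sketch of that gating rule (illustrative only, not neuraltrain code; SimpleConv applies it across channel halves of a convolution's output):

```python
import math

def glu_gate(values, gates):
    """GLU-style gating: out = a * sigmoid(b), as in torch.nn.GLU.

    Illustrative sketch over plain lists; the real module gates tensors.
    """
    return [v / (1.0 + math.exp(-g)) for v, g in zip(values, gates)]
```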