neuraltrain.models.simpleconv.SimpleConv¶
- pydantic model neuraltrain.models.simpleconv.SimpleConv[source]¶
1-D convolutional encoder, adapted from brainmagick.
- Parameters:
hidden (int) – Number of channels in the first convolutional layer.
depth (int) – Number of convolutional layers.
linear_out (bool) – Use a single transposed convolution as the output projection.
complex_out (bool) – Use a two-layer transposed-convolution output projection with a non-linearity in between. Mutually exclusive with linear_out.
kernel_size (int) – Kernel size for every convolutional layer (must be odd).
growth (float) – Multiplicative channel growth factor per layer.
dilation_growth (int) – Multiplicative dilation growth factor per layer.
dilation_period (int or None) – If set, reset dilation to 1 every dilation_period layers.
skip (bool) – Add residual skip connections when input and output shapes match.
post_skip (bool) – Append a depth-wise convolution after each skip connection.
scale (float or None) – If set, apply LayerScale with this initial value after each skip connection.
rewrite (bool) – Append a 1x1 convolution + LeakyReLU after each layer.
groups (int) – Number of groups for grouped convolutions (first layer always uses 1).
glu (int) – If non-zero, insert a GLU gate every glu layers.
glu_context (int) – Context (padding) size for the GLU convolution.
glu_glu (bool) – If True, the gate uses nn.GLU; otherwise the layer activation.
gelu (bool) – Use GELU activation instead of (Leaky)ReLU.
dropout (float) – Channel-dropout probability (currently raises NotImplementedError).
dropout_rescale (bool) – Rescale activations after channel dropout.
conv_dropout (float) – Dropout probability inside each convolutional block.
dropout_input (float) – Dropout probability applied to the input of the convolutional stack.
batch_norm (bool) – Apply batch normalization after each convolution.
relu_leakiness (float) – Negative slope for LeakyReLU (0 gives standard ReLU).
transformer_config (TransformerEncoder or None) – If set, append a Transformer encoder after the convolutional stack.
subject_layers_config (SubjectLayers or None) – If set, prepend a per-subject linear projection.
subject_layers_dim ({"input", "hidden"}) – Dimension used for the subject-layer projection.
merger_config (ChannelMerger or None) – If set, prepend a ChannelMerger for multi-montage support.
initial_linear (int) – If non-zero, prepend a 1x1 convolution projecting to this many channels.
initial_depth (int) – Number of 1x1 convolution layers in the initial projection.
initial_nonlin (bool) – Append a non-linearity after the initial 1x1 projection stack.
backbone_out_channels (int or None) – If set, force the backbone output to this many channels.
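The interaction of hidden, growth, kernel_size, dilation_growth, and dilation_period can be sketched as a per-layer schedule. The helper below is a hypothetical illustration, not the actual SimpleConv implementation: the function name, rounding of channel counts, and the "same"-padding formula (which relies on kernel_size being odd) are assumptions.

```python
def conv_schedule(hidden, depth, kernel_size, growth,
                  dilation_growth, dilation_period=None):
    """Illustrative (channels, dilation, padding) per layer.

    Hypothetical sketch of how the SimpleConv parameters above
    could combine; not taken from the library source.
    """
    schedule = []
    channels = float(hidden)
    dilation = 1
    for layer in range(depth):
        # "same" padding for an odd kernel at this dilation
        padding = dilation * (kernel_size - 1) // 2
        schedule.append((int(round(channels)), dilation, padding))
        channels *= growth            # multiplicative channel growth per layer
        dilation *= dilation_growth   # multiplicative dilation growth per layer
        if dilation_period and (layer + 1) % dilation_period == 0:
            dilation = 1              # periodic reset, per dilation_period

    return schedule

# e.g. hidden=64, depth=4, kernel_size=5, growth=2.0,
#      dilation_growth=2, dilation_period=2
print(conv_schedule(64, 4, 5, 2.0, 2, 2))
```

With these example values the channels double each layer while the dilation climbs to 2 and is reset to 1 every second layer, keeping receptive-field growth bounded.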
- Fields:
- field transformer_config: TransformerEncoder | None = None[source]¶
- field subject_layers_config: SubjectLayers | None = None[source]¶
- field merger_config: ChannelMerger | None = ChannelMerger(n_virtual_channels=270, fourier_emb_config=FourierEmb(n_freqs=None, total_dim=2048, n_dims=2, margin=0.2), dropout=0.2, dropout_around_channel=False, usage_penalty=0.0, n_subjects=200, per_subject=False, embed_ref=False, unmerge=False, invalid_value=-0.1)[source]¶