neuraltrain.models.simpleconv.SimpleConvTimeAgg
- class neuraltrain.models.simpleconv.SimpleConvTimeAgg(*, hidden: int = 16, depth: int = 4, linear_out: bool = False, complex_out: bool = False, kernel_size: int = 5, growth: float = 1.0, dilation_growth: int = 2, dilation_period: int | None = None, skip: bool = False, post_skip: bool = False, scale: float | None = None, rewrite: bool = False, groups: int = 1, glu: int = 0, glu_context: int = 0, glu_glu: bool = True, gelu: bool = False, dropout: float = 0.0, dropout_rescale: bool = True, conv_dropout: float = 0.0, dropout_input: float = 0.0, batch_norm: bool = False, relu_leakiness: float = 0.0, transformer_config: TransformerEncoder | None = None, subject_layers_config: SubjectLayers | None = None, subject_layers_dim: Literal['input', 'hidden'] = 'hidden', merger_config: ChannelMerger | None = ChannelMerger(n_virtual_channels=270, fourier_emb_config=FourierEmb(n_freqs=None, total_dim=2048, n_dims=2, margin=0.2), dropout=0.2, dropout_around_channel=False, usage_penalty=0.0, n_subjects=200, per_subject=False, embed_ref=False, unmerge=False, invalid_value=-0.1), initial_linear: int = 0, initial_depth: int = 1, initial_nonlin: bool = False, backbone_out_channels: int | None = None, time_agg_out: Literal['gap', 'linear', 'att'] = 'gap', n_time_groups: int | None = None, output_head_config: Mlp | dict[str, Mlp] | None = None)[source]
SimpleConv with a temporal aggregation layer and optional output heads.
- Parameters:
time_agg_out (Literal['gap', 'linear', 'att']) –
"gap": global average pooling
"linear": linear layer with one output
"att": Bahdanau attention layer
n_time_groups (int | None) – Number of groups within which to apply temporal aggregation. For example, with n_time_groups=4 the time dimension is split into 4 groups and each group is aggregated (and optionally projected) separately.
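The three `time_agg_out` modes and the `n_time_groups` splitting can be sketched in plain NumPy. This is an illustrative, non-learned approximation: the actual layer is a PyTorch module with trainable weights, and the function name, placeholder weights `w`, and score vector `v` below are assumptions, not part of the library API.

```python
import numpy as np

def time_aggregate(x, mode="gap", n_groups=None, w=None, v=None):
    """Aggregate the time axis of x, shape (batch, channels, time).

    Illustrative sketch of the documented `time_agg_out` modes:
      "gap"    - global average pooling over time
      "linear" - linear combination of time steps with one output
      "att"    - additive (Bahdanau-style) attention over time
    With n_groups set, time is split into that many groups and each
    group is aggregated separately (mirroring `n_time_groups`).
    """
    if n_groups is not None:
        b, c, t = x.shape
        assert t % n_groups == 0, "time must divide evenly into groups"
        grouped = x.reshape(b, c, n_groups, t // n_groups)
        # Aggregate each time group independently -> (batch, channels, n_groups)
        return np.stack(
            [time_aggregate(grouped[:, :, g], mode, None, w, v)
             for g in range(n_groups)],
            axis=-1,
        )
    if mode == "gap":
        return x.mean(axis=-1)                        # (batch, channels)
    if mode == "linear":
        t = x.shape[-1]
        w = np.full(t, 1.0 / t) if w is None else w   # placeholder weights
        return x @ w                                  # (batch, channels)
    if mode == "att":
        c = x.shape[1]
        v = np.ones(c) / c if v is None else v        # placeholder score vector
        scores = np.tanh(x).transpose(0, 2, 1) @ v    # (batch, time)
        alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
        alpha /= alpha.sum(axis=1, keepdims=True)     # softmax over time
        return np.einsum("bct,bt->bc", x, alpha)      # attention-weighted sum
    raise ValueError(f"unknown mode: {mode}")
```

All three modes reduce (batch, channels, time) to (batch, channels); with `n_groups` set, the output instead keeps one aggregated value per group, giving (batch, channels, n_groups).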