neuralset.extractors.neuro.ChannelPositions

pydantic model neuralset.extractors.neuro.ChannelPositions[source]

Channel positions in 2D or 3D, extracted from a Raw object’s mne.Info.

3D positions (n_spatial_dims=3) are always returned in MNE’s head coordinate frame, which is defined by the LPA, RPA, and nasion fiducial landmarks (origin at the midpoint of LPA–RPA, x-axis toward RPA, y-axis toward nasion, z-axis upward). This holds regardless of whether positions come from a named standard montage or from the raw data’s channel locations. See https://mne.tools/stable/documentation/implementation.html#coordinate-systems for details.
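The head frame described above can be sketched with plain vector math. This is an illustrative reconstruction of the definition (origin, axes from the fiducials), not MNE's actual implementation; `head_frame` and the helper names are hypothetical:

```python
def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _unit(v):
    n = _dot(v, v) ** 0.5
    return tuple(x / n for x in v)

def head_frame(lpa, rpa, nasion):
    """Build a head coordinate frame from the three fiducials (illustrative).

    Origin: midpoint of LPA-RPA; x-axis: toward RPA; y-axis: toward nasion
    (orthogonalized against x); z-axis: x cross y (upward).
    """
    origin = tuple((a + b) / 2 for a, b in zip(lpa, rpa))
    x = _unit(_sub(rpa, lpa))
    to_nasion = _sub(nasion, origin)
    # remove the component of the nasion direction that lies along x
    y = _unit(_sub(to_nasion, tuple(_dot(to_nasion, x) * c for c in x)))
    z = _cross(x, y)
    return origin, x, y, z
```

For fiducials already aligned with the axes (LPA at (-0.08, 0, 0), RPA at (0.08, 0, 0), nasion at (0, 0.1, 0), in meters), this yields the identity frame with origin at (0, 0, 0).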

Parameters:
  • neuro – Extractor that defines the preprocessing steps applied to the Raw objects. This can be specified either in the config or built with the build method.

  • n_spatial_dims (int) – Number of spatial dimensions (i.e. coordinates) to extract for each channel. For n_spatial_dims=2, the 2D projection of the channel positions as obtained through mne.Layout will be used. For n_spatial_dims=3, the 3D positions are extracted from mne.Montage in the head coordinate frame.

  • layout_or_montage_name – Name of the Layout or Montage to use. See mne.channels.read_layout() for a list of valid layouts and mne.channels.get_builtin_montages() for standard montages. If not provided, the extractor will look for a layout in the Raw.info object or for a montage in the Raw object. Note: MNE’s standard montages are only for EEG systems; MEG montages must be loaded from the raw data.

  • include_ref_eeg – If True, additionally try to extract the position of the anode of bipolar EEG channels (e.g. for the channel name “P3-Cz”, return the positions of both “P3” and “Cz”), yielding an output of shape (n_channels, n_spatial_dims * 2). If True, event_types must be one of Eeg or Ieeg.

  • normalize – If True, min-max normalize channel positions between 0 and 1 across each dimension. If False, 2D positions are in arbitrary units given by the mne.Layout projection, while 3D positions will be in the head coordinate frame (approximately in the range [-0.1, 0.1] meters).

  • factor – Factor to scale the channel positions by. E.g. set it to 10.0 to get 3D coordinates in decimeters, which yields values approximately in the range [-1, 1].
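The combined effect of normalize and factor can be sketched in pure Python. This is an illustrative sketch only; `transform_positions` is not part of the library API:

```python
def transform_positions(positions, normalize=True, factor=1.0):
    """Min-max normalize each spatial dimension to [0, 1], then scale.

    `positions` is a list of (x, y[, z]) tuples; mirrors the documented
    behaviour of the `normalize` and `factor` parameters (illustrative).
    """
    dims = list(zip(*positions))  # one tuple per spatial dimension
    if normalize:
        # independently rescale each dimension to [0, 1]
        dims = [
            tuple((v - min(d)) / (max(d) - min(d)) for v in d)
            for d in dims
        ]
    # apply the scaling factor to every coordinate
    return [tuple(factor * v for v in pos) for pos in zip(*dims)]
```

For example, with normalize=False and factor=10.0, 3D positions in meters (roughly [-0.1, 0.1]) come out in decimeters (roughly [-1, 1]).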

Fields:
field event_types: Literal['MneRaw', 'Meg', 'Eeg', 'Ieeg'] = 'MneRaw'[source]
field neuro: MneRaw | None = None[source]
field n_spatial_dims: Literal[2, 3] = 2[source]
field layout_or_montage_name: str | None = None[source]
field include_ref_eeg: bool = False[source]
field normalize: bool = True[source]
field factor: float = 1.0[source]
INVALID_VALUE: ClassVar[float] = -0.1[source]
field infra: MapInfra = MapInfra(folder=None, cluster=None, logs='{folder}/logs/{user}/%j', job_name=None, timeout_min=None, nodes=1, tasks_per_node=1, cpus_per_task=None, gpus_per_node=None, mem_gb=None, max_pickle_size_gb=None, slurm_constraint=None, slurm_partition=None, slurm_account=None, slurm_qos=None, slurm_use_srun=False, slurm_additional_parameters=None, slurm_setup=None, conda_env=None, workdir=None, permissions=511, version='0', keep_in_ram=True, max_jobs=128, min_samples_per_job=1, forbid_single_item_computation=False, mode='cached')[source]
build(neuro: MneRaw) ChannelPositions[source]
prepare(obj: DataFrame | Sequence[Event] | Sequence[Segment]) None[source]

Pre-compute and cache extractor data for a collection of events.

This method triggers _get_data on every matching event so that expensive computation (e.g. model inference) is done once and cached. It then calls the extractor on a single event to populate the output shape, which is needed when allow_missing=True.

Call prepare before using the extractor in a dataloader.

Parameters:

obj (DataFrame or sequence of Event or sequence of Segment) – The structure containing the events. When calling prepare on several objects, prefer passing a list of events or segments over a DataFrame to avoid redundant conversion overhead.
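The precompute-and-cache behaviour described above can be illustrated with a self-contained mock. All names below (`MockExtractor`, its cache, the `float(event)` payload) are hypothetical stand-ins, not the neuralset API:

```python
class MockExtractor:
    """Toy extractor illustrating prepare-style precomputation (hypothetical)."""

    def __init__(self):
        self._cache = {}
        self.output_shape = None

    def _get_data(self, event):
        # stand-in for an expensive computation (e.g. model inference),
        # done once per event and cached
        if event not in self._cache:
            self._cache[event] = [float(event)] * 3
        return self._cache[event]

    def prepare(self, events):
        # trigger _get_data on every event so results are cached up front
        for ev in events:
            self._get_data(ev)
        # call on a single event to populate the output shape
        self.output_shape = (len(self._get_data(events[0])),)

    def __call__(self, event):
        return self._get_data(event)
```

After `prepare`, calling the extractor from a dataloader only hits the cache.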

get_static(event: MneRaw) Tensor[source]

Return a single feature vector for the given event.

Override this method in subclasses to produce a static (non-temporal) embedding for one event. The returned tensor should have no time dimension — temporal wrapping is handled by BaseStatic automatically.

Parameters:

event (Event) – The event to extract a feature from.

Returns:

A tensor of shape (*feature_shape,) (no time axis).

Return type:

torch.Tensor
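The override pattern can be sketched with a pure-Python stand-in (the real method returns a torch.Tensor, and the wrapper's call signature here is hypothetical):

```python
class StaticBase:
    """Minimal stand-in for a BaseStatic-style wrapper (hypothetical)."""

    def get_static(self, event):
        raise NotImplementedError

    def __call__(self, event, n_times):
        # temporal wrapping: repeat the static feature along a time axis,
        # so subclasses only produce a vector with no time dimension
        feat = self.get_static(event)
        return [list(feat) for _ in range(n_times)]


class MeanExtractor(StaticBase):
    """Example subclass: one feature vector per event, no time axis."""

    def get_static(self, event):
        return [sum(event) / len(event)]
```

The subclass only defines the per-event static feature; the base class is responsible for any time dimension.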

requirements: tp.ClassVar[tuple[str, ...]] = ()[source]