Tutorials: the neuralset pipeline

Each tutorial covers one step of the pipeline — from loading data through to building a DataLoader — with code you can run and modify.

Note

Concepts at a glance: Study → Events DataFrame → Transforms → Segmenter → Dataset → DataLoader

Everything stays lightweight (metadata only) until you call the DataLoader. Every step is cacheable via exca.

Study
Interface to an external dataset. Download, iterate timelines, load events.
import neuralset as ns

study = ns.Study(name="Fake2025Meg",
                 path=ns.CACHE_FOLDER)
events = study.run()
print(f"{len(events)} events, "
      f"{events['subject'].nunique()} subjects")
Events DataFrame
Typed DataFrame rows. Neural, stimuli, text — everything is an event.
from neuralset.events import Event
evt = Event(type="Word", start=1.0,
            duration=0.3, timeline="sub-01")
print(evt)           # pydantic model
print(evt.model_dump())  # dict
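To make the "typed rows in a DataFrame" idea concrete, here is a minimal sketch using plain dicts and pandas only (not the neuralset API): each dict stands in for one event's `model_dump()`, and collecting them gives the kind of events table the rest of the pipeline operates on.

```python
import pandas as pd

# Illustrative only: plain dicts standing in for Event rows.
rows = [
    {"type": "Word", "start": 1.0, "duration": 0.3, "timeline": "sub-01"},
    {"type": "Word", "start": 1.4, "duration": 0.25, "timeline": "sub-01"},
    {"type": "Sound", "start": 0.0, "duration": 2.0, "timeline": "sub-01"},
]
events = pd.DataFrame(rows)
print(events[events.type == "Word"])  # filter by event type, as in the tutorials
```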
Transforms
Modify the events DataFrame: split, chunk, align, add context.
import neuralset as ns
transform = ns.events.transforms.AddSentenceToWords()
events = transform(events)
print(events[events.type == "Sentence"].head())
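As a rough intuition for what a transform like the one above might do, here is a hypothetical plain-pandas sketch (not the real implementation): aggregate Word events into Sentence events spanning their words, assuming a `sentence_id` grouping column.

```python
import pandas as pd

# Word events, with an assumed sentence_id column for grouping.
words = pd.DataFrame({
    "type": ["Word"] * 4,
    "start": [0.0, 0.4, 1.0, 1.5],
    "duration": [0.3, 0.3, 0.4, 0.3],
    "sentence_id": [0, 0, 1, 1],
})
# Each sentence starts at its first word and ends at its last word's end.
words["end"] = words.start + words.duration
sentences = words.groupby("sentence_id").agg(
    start=("start", "min"), end=("end", "max")).reset_index()
sentences["duration"] = sentences.end - sentences.start
sentences["type"] = "Sentence"
print(sentences[["type", "start", "duration"]])
```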
Extractors
Convert events into tensors. EEG, fMRI, text, images, audio.
meg = ns.extractors.MegExtractor(frequency=100.0)
freq = ns.extractors.WordFrequency(language="english")
sample = meg(events, start=0.0, duration=1.0)
print(f"MEG shape: {sample.shape}")
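The core of a signal extractor can be sketched in a few lines of numpy (names and shapes here are assumptions, not the neuralset API): map a `(start, duration)` window in seconds to sample indices at the sampling frequency, and slice the recording.

```python
import numpy as np

frequency = 100.0                       # Hz
recording = np.random.randn(306, 1000)  # (channels, samples): 10 s of fake MEG

def extract(recording, start, duration, frequency):
    # Convert seconds to sample indices and slice the window.
    i0 = int(round(start * frequency))
    i1 = i0 + int(round(duration * frequency))
    return recording[:, i0:i1]

sample = extract(recording, start=0.0, duration=1.0, frequency=frequency)
print(sample.shape)  # (306, 100)
```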
Segmenter & Dataset
Time segments around triggers, then a torch.utils.data.Dataset ready for a DataLoader.
from torch.utils.data import DataLoader

meg = ns.extractors.MegExtractor(frequency=100.0)
segmenter = ns.dataloader.Segmenter(
    start=-0.1, duration=0.5,
    trigger_query='type=="Word"',
    extractors=dict(meg=meg),
    drop_incomplete=True)
dataset = segmenter.apply(events)
loader = DataLoader(dataset, batch_size=8,
                    collate_fn=dataset.collate_fn)
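The segmentation step above can be sketched with plain pandas (column names mirror the tutorial; everything else is an assumption): select trigger events with a pandas query, then offset each trigger to get one `(start, duration)` window per segment.

```python
import pandas as pd

events = pd.DataFrame({
    "type": ["Word", "Sound", "Word"],
    "start": [1.0, 0.0, 2.5],
})
triggers = events.query('type=="Word"')  # same query syntax as trigger_query
windows = pd.DataFrame({
    "start": triggers.start - 0.1,  # 100 ms before each trigger
    "duration": 0.5,                # fixed half-second segments
})
print(windows)
```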
Putting it Together
Compose full pipelines: Studies + Transforms + Extractors + Segmenter, all in one config.
import neuralset as ns
from neuralset.events import transforms
chain = ns.Chain(steps=[
    ns.Study(name="Fake2025Meg",
             path=ns.CACHE_FOLDER),
    transforms.AddSentenceToWords(),
])
events = chain.run()
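The composition idea behind a chain can be sketched generically (this mirrors the pattern only; `Chain`'s real behavior may differ): the first step produces an events DataFrame, and each later step is a callable that takes and returns one.

```python
import pandas as pd

def load():  # stands in for a Study
    return pd.DataFrame({"type": ["Word"], "start": [0.0]})

def add_duration(events):  # stands in for a Transform
    events = events.copy()
    events["duration"] = 0.3
    return events

# Run the chain: seed with the loader, then fold each transform over the result.
steps = [load, add_duration]
events = steps[0]()
for step in steps[1:]:
    events = step(events)
print(events)
```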

Events

Studies

Transforms

Extractors

Segmenter & Dataset

Chains