Tutorials: the neuralset pipeline
Each tutorial covers one step of the pipeline — from loading data through to building a DataLoader — with code you can run and modify.
Note
Concepts at a glance:
Study → Events DataFrame → Transforms → Segmenter → Dataset → DataLoader
Everything stays lightweight (metadata only) until you call the DataLoader.
Every step is cacheable via exca.
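For intuition, the events DataFrame behaves like an ordinary pandas table. The exact columns depend on the study; the toy frame below uses hypothetical column names, and shows the kind of pandas-style query that trigger_query (used in the Segmenter below) evaluates against it:

import pandas as pd

# Toy stand-in for an events DataFrame; the column names are illustrative only.
events = pd.DataFrame({
    "type": ["Sound", "Word", "Word"],
    "start": [0.0, 0.5, 1.1],
    "duration": [2.0, 0.3, 0.4],
})
# 'type=="Word"' is a pandas-style query selecting only word events.
print(events.query('type=="Word"'))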
Segmenter & Dataset
Cut time segments around trigger events, then wrap them in a torch.utils.data.Dataset ready for a DataLoader.

import neuralset as ns
from torch.utils.data import DataLoader

meg = ns.extractors.MegExtractor(frequency=100.0)
segmenter = ns.dataloader.Segmenter(
    start=-0.1, duration=0.5,          # segment window relative to each trigger
    trigger_query='type=="Word"',      # which events act as triggers
    extractors=dict(meg=meg),
    drop_incomplete=True,              # skip segments extending past the recording
)
dataset = segmenter.apply(events)      # events comes from a Chain, see "Putting it Together"
loader = DataLoader(dataset, batch_size=8,
                    collate_fn=dataset.collate_fn)
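As a quick sanity check, pull a single batch from the loader. The exact batch layout is up to the collate function; assuming batches are dictionaries keyed by extractor name (here "meg"), something like this prints the shape of the stacked segment tensor:

batch = next(iter(loader))
# Assumption: the collate_fn returns a dict keyed by extractor name.
print(batch["meg"].shape)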
Putting it Together
Compose full pipelines: Studies + Transforms + Extractors + Segmenter, all in one config.
import neuralset as ns
from neuralset.events import transforms

chain = ns.Chain(steps=[
    ns.Study(name="Fake2025Meg",        # which study to load
             path=ns.CACHE_FOLDER),     # where its files live
    transforms.AddSentenceToWords(),    # annotate word events with their sentence
])
events = chain.run()                    # returns the events DataFrame
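The resulting events frame plugs straight into the Segmenter from the card above, so the whole pipeline fits in a handful of lines (same calls as before, repeated here for completeness):

from torch.utils.data import DataLoader

meg = ns.extractors.MegExtractor(frequency=100.0)
segmenter = ns.dataloader.Segmenter(
    start=-0.1, duration=0.5,
    trigger_query='type=="Word"',
    extractors=dict(meg=meg),
    drop_incomplete=True,
)
dataset = segmenter.apply(events)
loader = DataLoader(dataset, batch_size=8, collate_fn=dataset.collate_fn)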