# Tutorials: the neuraltrain pipeline

Each tutorial covers one stage of the training pipeline – from wiring data loaders through to running experiment sweeps – with code you can run and modify.

## Data

Use neuralset studies and a Segmenter to build train/val/test loaders.

```python
# Run the study to collect events, then segment them into a dataset.
events = self.study.run()
dataset = self.segmenter.apply(events)
dataset.prepare()

# Build one DataLoader per split by filtering on the split trigger.
loaders = {}
for split in ["train", "val", "test"]:
    ds = dataset.select(
        dataset.triggers["split"] == split)
    loaders[split] = DataLoader(ds, ...)
```
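The split-selection pattern above can be sketched with plain Python stand-ins; `split_records` and `split_of` are hypothetical names for illustration, not part of neuraltrain or neuralset.

```python
def split_records(records, split_of):
    """Group records into train/val/test buckets via a split-assignment function."""
    buckets = {"train": [], "val": [], "test": []}
    for rec in records:
        # split_of plays the role of the "split" trigger in the snippet above.
        buckets[split_of(rec)].append(rec)
    return buckets

records = list(range(10))
# Deterministic toy assignment: first 8 to train, then one each to val/test.
buckets = split_records(
    records,
    lambda i: "train" if i < 8 else ("val" if i == 8 else "test"))
```

In the real pipeline each bucket would then be wrapped in a `DataLoader`; here the buckets are plain lists so the sketch stays dependency-free.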
## Model Config

Define serializable model configs and build the PyTorch module when shapes are known.

```python
from neuraltrain import models

# The config is serializable; input/output shapes are supplied at build time.
model_cfg = models.SimpleConvTimeAgg(
    hidden=32, depth=4,
    merger_config=None)
model = model_cfg.build(
    n_in_channels=208,
    n_outputs=4)
```
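A minimal sketch of the config-then-build pattern, assuming nothing about neuraltrain's internals: the config is a plain serializable dataclass, and the module is only constructed once shapes are known. `ConvConfig` is a hypothetical stand-in for `models.SimpleConvTimeAgg`.

```python
from dataclasses import dataclass, asdict

@dataclass
class ConvConfig:  # hypothetical stand-in, not a neuraltrain class
    hidden: int
    depth: int

    def build(self, n_in_channels: int, n_outputs: int):
        # A real implementation would return an nn.Module here; we return
        # the resolved layer sizes to keep the sketch dependency-free.
        return [n_in_channels] + [self.hidden] * self.depth + [n_outputs]

cfg = ConvConfig(hidden=32, depth=4)
serialized = asdict(cfg)  # {'hidden': 32, 'depth': 4} - trivially serializable
layers = cfg.build(n_in_channels=208, n_outputs=4)
```

Keeping shape arguments out of the config is what makes the config reusable across datasets with different channel counts.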
## Objective

Compose losses, metrics, optimizers, and schedulers as typed config objects.

```python
# Each objective component is a config object; build() is called later.
loss = CrossEntropyLoss()
metric = Accuracy(
    log_name="acc",
    kwargs={"task": "multiclass",
            "num_classes": 4})
optim = LightningOptimizer(...)
```
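A sketch of the typed-metric-config idea, assuming only what the snippet shows: a `log_name` used as the logging key and `kwargs` forwarded at build time. `MetricConfig` and `accuracy` are illustrative, not neuraltrain APIs.

```python
from dataclasses import dataclass, field

def accuracy(preds, targets, num_classes):
    # Toy multiclass accuracy; num_classes mirrors the kwargs in the snippet.
    correct = sum(p == t for p, t in zip(preds, targets))
    return correct / len(targets)

@dataclass
class MetricConfig:  # hypothetical stand-in for the Accuracy config above
    log_name: str
    kwargs: dict = field(default_factory=dict)

    def build(self):
        # Bind stored kwargs so the trainer can simply call metric(preds, targets).
        return lambda preds, targets: accuracy(preds, targets, **self.kwargs)

metrics = [MetricConfig(log_name="acc", kwargs={"num_classes": 4})]
fns = {m.log_name: m.build() for m in metrics}
```

Deferring `build()` keeps the config serializable while the built callable can close over runtime state.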
## Trainer

Wrap the model in a Lightning module with train, validation, and test loops, then launch the run as an Experiment.

```python
# Assemble the LightningModule from the built model, loss,
# optimizer config, and metrics.
brain_module = BrainModule(
    model=brain_model,
    loss=self.loss.build(),
    optim_config=self.optim,
    metrics={m.log_name: m.build()
             for m in self.metrics})

# Launch the full run from a single config dict.
exp = Experiment(**default_config)
results = exp.run()
```
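The experiment sweeps mentioned in the introduction can be sketched as a loop over config overrides. `run_trial` and the config fields are hypothetical; the only Experiment API shown above is `Experiment(**default_config).run()`.

```python
from itertools import product

default_config = {"hidden": 32, "depth": 4, "lr": 1e-3}

def run_trial(config):
    # Stand-in for Experiment(**config).run(); returns a fake result record.
    return {"config": config, "score": config["hidden"] / (config["depth"] * 10)}

# Cartesian sweep over two hypothetical hyperparameters.
sweep = {"hidden": [32, 64], "lr": [1e-3, 1e-4]}
results = []
for hidden, lr in product(sweep["hidden"], sweep["lr"]):
    cfg = {**default_config, "hidden": hidden, "lr": lr}
    results.append(run_trial(cfg))
```

Each trial gets its own merged config dict, so a single result record is enough to reproduce that run.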