NeuralTrain
NeuralTrain trains PyTorch models on NeuralSet datasets at scale.
Quick install
pip install neuraltrain
For the full model zoo, Lightning support, and dev tools:
pip install 'neuraltrain[dev,lightning,models]'
Quick start
Define training pieces as config objects, then build concrete PyTorch modules at runtime.
import torch
from neuraltrain.losses import base as losses
from neuraltrain.metrics import base as metrics
from neuraltrain import models
from neuraltrain.optimizers import base as optimizers
# Model
model_cfg = models.SimpleConvTimeAgg(hidden=32, depth=4, merger_config=None)
model = model_cfg.build(n_in_channels=208, n_outputs=4)
# Loss & metrics
loss_cfg = losses.CrossEntropyLoss()
metric_cfg = metrics.Accuracy(
    log_name="acc",
    kwargs={"task": "multiclass", "num_classes": 4},
)
# Optimizer
optim_cfg = optimizers.LightningOptimizer(
    optimizer=optimizers.Adam(lr=1e-4),
    scheduler=optimizers.OneCycleLR(
        kwargs={"max_lr": 3e-3, "pct_start": 0.2},
    ),
)
# Dummy batch: (batch, channels, time)
x = torch.randn(8, 208, 120)
y = torch.randint(0, 4, (8,))
# Build concrete objects from the configs and run them
logits = model(x)
loss = loss_cfg.build()(logits, y)
metric = metric_cfg.build()
metric.update(logits, y)
optimizer_bundle = optim_cfg.build(model.parameters(), total_steps=100)
print(logits.shape)
print(float(loss))
print(metric.compute())
print(sorted(optimizer_bundle))
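The config-then-build pattern above can be sketched in plain Python. The `ConvNetConfig` class and its fields below are illustrative stand-ins, not part of the NeuralTrain API; a real config's `build` would return an `nn.Module` rather than a dict:

```python
from dataclasses import dataclass

# Hypothetical config object: a plain, serializable description of a model.
# Shapes are supplied only at build time, so the same config can be reused
# across datasets with different channel counts or output sizes.
@dataclass
class ConvNetConfig:
    hidden: int = 32
    depth: int = 4

    def build(self, n_in_channels: int, n_outputs: int) -> dict:
        # A real implementation would assemble an nn.Module here; a dict of
        # layer widths stands in so the sketch stays dependency-free.
        layers = [n_in_channels] + [self.hidden] * self.depth + [n_outputs]
        return {"layers": layers}

cfg = ConvNetConfig(hidden=16, depth=2)
net = cfg.build(n_in_channels=208, n_outputs=4)
print(net["layers"])  # [208, 16, 16, 4]
```

Keeping configs as lightweight dataclasses means they can be logged, diffed, and serialized independently of the (heavier) objects they build.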
Tutorials
Each tutorial covers one stage of the training pipeline.
Citation
If you use NeuralTrain in your research, please cite "A foundation model of vision, audition, and language for in-silico neuroscience":
@article{dAscoli2026TribeV2,
title={A foundation model of vision, audition, and language for in-silico neuroscience},
author={d'Ascoli, St{\'e}phane and Rapin, J{\'e}r{\'e}my and Benchetrit, Yohann and Brooks, Teon and Begany, Katelyn and Raugel, Jos{\'e}phine and Banville, Hubert and King, Jean-R{\'e}mi},
year={2026}
}