neuralbench.utils.TrainerConfig

pydantic model neuralbench.utils.TrainerConfig

Joint configuration for the Trainer and its associated callbacks (the patience, monitor, and mode fields configure early stopping).

Fields:
field n_epochs: int = 100
field enable_progress_bar: bool = True
field log_every_n_steps: int = 20
field fast_dev_run: bool = False
field gradient_clip_val: float = 0.0
field limit_train_batches: int | None = None
field limit_val_batches: int | None = None
field num_sanity_val_steps: int = 2
field accumulate_grad_batches: int = 1
field strategy: str = 'auto'
field precision: str = '32-true'
field accelerator: str = 'auto'
field devices: int = 1
field num_nodes: int = 1
field patience: int = 5
field monitor: str = 'val/loss'
field mode: str = 'min'
build(logger, callbacks, accelerator: str | None = None, devices: int | None = None, num_nodes: int | None = None) -> Trainer
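Since `build()` takes optional `accelerator`, `devices`, and `num_nodes` arguments alongside fields of the same names, a plausible reading (an assumption, not confirmed by the source) is that an explicit argument takes precedence over the stored field. That resolution step can be sketched as:

```python
def resolve(override, field_value):
    """Presumed precedence for build()'s optional parameters:
    an explicit argument wins; None falls back to the config field."""
    return field_value if override is None else override

# Passing accelerator="gpu" to build() would shadow the stored "auto";
# passing nothing keeps the field value.
print(resolve("gpu", "auto"))
print(resolve(None, "auto"))
```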