neuraltrain.utils.run_grid¶
- neuraltrain.utils.run_grid(exp_cls: Type[BaseExperiment], exp_name: str, base_config: dict[str, Any], grid: dict[str, list], n_randomly_sampled: int | None = None, job_name_keys: list[str] | None = None, combinatorial: bool = False, overwrite: bool = False, dry_run: bool = False, debug: bool = False, infra_mode: str = 'retry', random_state: int | None = None) → list[ConfDict][source]¶
Run a grid of configurations over the provided experiment.
- Parameters:
exp_cls – Experiment class to instantiate with grid. Must have an infra attribute, which will be updated when instantiating the different experiments of the grid.
exp_name – Name of the base experiment to run.
base_config – Base configuration, updated with each grid entry.
grid – Dictionary mapping configuration keys to the lists of values to sweep over.
n_randomly_sampled – If provided, number of configurations randomly sampled from the grid. If None, run the full grid. See the random_state parameter to seed the sampling.
job_name_keys – Flattened config key(s) to update with the experiment-specific ‘job_name’ variable. E.g., can be used to pass the job name to a wandb logger.
combinatorial – If True, run the grid over all possible combinations of parameter values (the Cartesian product of the grid). If False, vary each parameter individually.
overwrite – If True, delete existing experiment-specific folder.
dry_run – If True, do not add tasks to the infra.
debug – If True, bypass the infra.cluster and run only the first experiment, locally. This is useful for quick sanity checking of the experiment configuration.
infra_mode –
Whether to rerun existing or failed experiments.
- cached: cache is returned if available (error or not), otherwise computed (and cached)
- retry: cache is returned if available, except if it is an error, in which case the result is (re)computed (and cached)
- force: cache is ignored, and the result is (re)computed (and cached)
random_state – Random state for random sampling of the grid.
- Returns:
List of config dictionaries used for each experiment of the grid.
- Return type:
list[ConfDict]
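To make the combinatorial flag concrete, here is a minimal sketch of how a grid could be expanded into concrete configurations. The expand_grid helper is hypothetical and written for illustration only; the real run_grid additionally handles infra setup, caching, and job naming, which are omitted here.

```python
from itertools import product


def expand_grid(base_config: dict, grid: dict, combinatorial: bool = False) -> list[dict]:
    """Illustrative expansion of a parameter grid into config dicts."""
    configs = []
    if combinatorial:
        # Cartesian product: one config per combination of all grid values.
        keys = list(grid)
        for values in product(*(grid[k] for k in keys)):
            configs.append({**base_config, **dict(zip(keys, values))})
    else:
        # One-at-a-time: one config per single parameter change,
        # leaving all other keys at their base values.
        for key, values in grid.items():
            for value in values:
                configs.append({**base_config, key: value})
    return configs
```

For example, with grid = {"lr": [0.1, 0.01], "batch_size": [32, 64]}, the combinatorial mode yields 4 configurations (2 × 2 combinations), while the one-at-a-time mode also yields 4 here (2 + 2), but never changes two parameters in the same configuration.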