kats.utils.time_series_parameter_tuning module¶

Module that has parameter tuning classes for time series models.

This module contains a collection of classes. A subset of these classes are parameter tuning strategies derived from a common abstract parent class. In addition, there are helper classes, such as a factory that creates search strategy objects.

Typical usage example:

>>> from kats.utils import time_series_parameter_tuning as tspt
>>> a_search_strategy = tspt.SearchMethodFactory.create_search_method(...)
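
A fuller sketch is shown below. The search space, evaluation function, and values are hypothetical illustrations rather than part of the module, and the parameter dicts follow Ax's parameter configuration convention, which this module builds on:

>>> from kats.consts import SearchMethodEnum
>>> from kats.utils import time_series_parameter_tuning as tspt
>>> parameters = [  # hypothetical search space: one discrete hyperparameter
...     {"name": "m", "type": "choice", "values": [7, 14, 30], "value_type": "int"},
... ]
>>> def evaluation_function(params):
...     # placeholder objective: return the error metric to be minimized
...     return abs(params["m"] - 14)
>>> search = tspt.SearchMethodFactory.create_search_method(
...     parameters=parameters,
...     selected_search_method=SearchMethodEnum.GRID_SEARCH,
... )
>>> search.generate_evaluate_new_parameter_values(evaluation_function)
>>> scores = search.list_parameter_value_scores()
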
class kats.utils.time_series_parameter_tuning.BayesianOptSearch(parameters: List[Dict], evaluation_function: Callable, experiment_name: Optional[str] = None, objective_name: Optional[str] = None, bootstrap_size: int = 5, seed: Optional[int] = None, random_strategy: kats.consts.SearchMethodEnum = <SearchMethodEnum.RANDOM_SEARCH_UNIFORM: 2>, outcome_constraints: Optional[List[str]] = None, multiprocessing: bool = False, **kwargs)[source]¶

Bases: kats.utils.time_series_parameter_tuning.TimeSeriesParameterTuning

Bayesian optimization search for hyperparameter tuning.

Do not instantiate this class using its constructor; rather, use the factory, SearchMethodFactory.

parameters¶

List[Dict], Defines parameters by their names, their types, and their optional values for a custom parameter search space.

evaluation_function¶

Callable, The evaluation function to pass to Ax to evaluate arms.

Type

Optional[Callable]

experiment_name¶

str = None, Name of the experiment to be used in Ax’s experiment object.

objective_name¶

str = None, Name of the objective to be used in Ax’s experiment evaluation.

bootstrap_size¶

int = 5, The number of arms that will be randomly generated to bootstrap the Bayesian optimization.

seed¶

int = None, Seed for Ax's quasi-random model. If None, the current time (time.time()) is used.

random_strategy¶

SearchMethodEnum = SearchMethodEnum.RANDOM_SEARCH_UNIFORM, The search method itself is already determined at this point; this parameter selects the random sampling strategy to use: UNIFORM or SOBOL.

outcome_constraints¶

List[str] = None, List of constraints defined as strings. Example: ['metric1 >= 0', 'metric2 < 5']

Type

Optional[List[ax.core.outcome_constraint.OutcomeConstraint]]

generate_evaluate_new_parameter_values(evaluation_function: Callable, arm_count: int = 1) → None[source]¶

This method can be called as many times as desired, with any desired arm_count. The total number of generated candidates will be equal to their product. Suppose we would like to sample k candidates where k = m × n such that k, m, n are integers. We can call this function once with arm_count=k, call it k times with arm_count=1 (or without that parameter at all), or call it n times with arm_count=m, and vice versa. All of these yield k candidates; however, it is not guaranteed that the candidates will be identical across these scenarios. The BoTorch model is re-initialized on each call.
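
For instance, assuming bo_search is a BayesianOptSearch object obtained from the factory and eval_fn is a user-defined evaluation function, both of the following patterns generate six candidates in total (though not necessarily the same six):

>>> bo_search.generate_evaluate_new_parameter_values(eval_fn, arm_count=6)
>>> for _ in range(3):  # alternatively, three calls of two arms each
...     bo_search.generate_evaluate_new_parameter_values(eval_fn, arm_count=2)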

class kats.utils.time_series_parameter_tuning.Final(name, bases, classdict)[source]¶

Bases: type

A helper class to ensure a class cannot be inherited.

It is used as:

class Foo(metaclass=Final):
    ...

Once the class, Foo, is declared in this way, no other class can inherit from it. See the declaration of the SearchMethodFactory class below.
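
Continuing the Foo example, a minimal sketch of the enforced behavior (Bar is hypothetical, and the exact error message depends on the implementation, but the subclass attempt is expected to fail with a TypeError):

>>> class Bar(Foo):  # Foo was declared with metaclass=Final
...     pass
Traceback (most recent call last):
    ...
TypeError: ...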

class kats.utils.time_series_parameter_tuning.GridSearch(parameters: List[Dict], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, outcome_constraints: Optional[List[str]] = None, multiprocessing: bool = False, **kwargs)[source]¶

Bases: kats.utils.time_series_parameter_tuning.TimeSeriesParameterTuning

Exhaustive grid search for hyperparameter tuning.

Do not instantiate this class using its constructor; rather, use the factory, SearchMethodFactory.

parameters¶

List[Dict] = None, Defines parameters by their names, their types, and their optional values for a custom parameter search space.

experiment_name¶

str = None, Name of the experiment to be used in Ax’s experiment object.

objective_name¶

str = None, Name of the objective to be used in Ax’s experiment evaluation.

outcome_constraints¶

List[str] = None, List of constraints defined as strings. Example: ['metric1 >= 0', 'metric2 < 5']

Type

Optional[List[ax.core.outcome_constraint.OutcomeConstraint]]

generate_evaluate_new_parameter_values(evaluation_function: Callable, arm_count: int = -1) → None[source]¶

This method can only be called once. Any arm_count value other than -1 is ignored, as this search strategy exhaustively explores all arms.
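
A minimal sketch, assuming grid_search was created through the factory with SearchMethodEnum.GRID_SEARCH and eval_fn is a user-defined evaluation function; the single call evaluates every arm in the grid:

>>> grid_search.generate_evaluate_new_parameter_values(eval_fn)
>>> scores = grid_search.list_parameter_value_scores()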

class kats.utils.time_series_parameter_tuning.RandomSearch(parameters: List[Dict], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, seed: Optional[int] = None, random_strategy: kats.consts.SearchMethodEnum = <SearchMethodEnum.RANDOM_SEARCH_UNIFORM: 2>, outcome_constraints: Optional[List[str]] = None, multiprocessing: bool = False, **kwargs)[source]¶

Bases: kats.utils.time_series_parameter_tuning.TimeSeriesParameterTuning

Random search for hyperparameter tuning.

Do not instantiate this class using its constructor; rather, use the factory, SearchMethodFactory.

parameters¶

List[Dict], Defines parameters by their names, their types, and their optional values for a custom parameter search space.

experiment_name¶

str = None, Name of the experiment to be used in Ax’s experiment object.

objective_name¶

str = None, Name of the objective to be used in Ax’s experiment evaluation.

seed¶

int = None, Seed for Ax's quasi-random model. If None, the current time (time.time()) is used.

random_strategy¶

SearchMethodEnum = SearchMethodEnum.RANDOM_SEARCH_UNIFORM, The search method is already determined to be random search; this parameter selects the random sampling strategy to use: UNIFORM or SOBOL.

outcome_constraints¶

List[str] = None, List of constraints defined as strings. Example: ['metric1 >= 0', 'metric2 < 5']

Type

Optional[List[ax.core.outcome_constraint.OutcomeConstraint]]

generate_evaluate_new_parameter_values(evaluation_function: Callable, arm_count: int = 1) → None[source]¶

This method can be called as many times as desired, with any desired arm_count. The total number of generated candidates will be equal to their product. Suppose we would like to sample k candidates where k = m × n such that k, m, n are integers. We can call this function once with arm_count=k, call it k times with arm_count=1 (or without that parameter at all), or call it n times with arm_count=m, and vice versa. All of these yield k candidates; however, it is not guaranteed that the candidates will be identical across these scenarios.
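
As a sketch, a Sobol quasi-random search can be requested through the factory and then sampled incrementally (parameters and eval_fn are hypothetical placeholders; SearchMethodEnum.RANDOM_SEARCH_SOBOL is assumed to be the relevant enum member):

>>> from kats.consts import SearchMethodEnum
>>> from kats.utils import time_series_parameter_tuning as tspt
>>> rs = tspt.SearchMethodFactory.create_search_method(
...     parameters=parameters,
...     selected_search_method=SearchMethodEnum.RANDOM_SEARCH_SOBOL,
...     seed=42,
... )
>>> rs.generate_evaluate_new_parameter_values(eval_fn, arm_count=5)
>>> rs.generate_evaluate_new_parameter_values(eval_fn, arm_count=5)  # 10 candidates in total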

class kats.utils.time_series_parameter_tuning.SearchMethodFactory[source]¶

Bases: object

Generates and returns a search strategy object.

static create_search_method(parameters: List[Dict], selected_search_method: kats.consts.SearchMethodEnum = <SearchMethodEnum.GRID_SEARCH: 1>, experiment_name: Optional[str] = None, objective_name: Optional[str] = None, outcome_constraints: Optional[List[str]] = None, seed: Optional[int] = None, bootstrap_size: int = 5, evaluation_function: Optional[Callable] = None, bootstrap_arms_for_bayes_opt: Optional[List[dict]] = None, multiprocessing: bool = False) → kats.utils.time_series_parameter_tuning.TimeSeriesParameterTuning[source]¶

The static method of the factory class that creates the search method object. Being static, it does not require the class to be instantiated.

Parameters
  • parameters – List[Dict] = None, Defines parameters by their names, their types, and their optional values for a custom parameter search space.

  • selected_search_method – SearchMethodEnum = SearchMethodEnum.GRID_SEARCH, Defines the search method to be used during parameter tuning. It has to be an option from the enum, SearchMethodEnum.

  • experiment_name – str = None, Name of the experiment to be used in Ax’s experiment object.

  • objective_name – str = None, Name of the objective to be used in Ax’s experiment evaluation.

  • outcome_constraints – List[str] = None, List of constraints defined as strings. Example: ['metric1 >= 0', 'metric2 < 5']

  • bootstrap_arms_for_bayes_opt – List[dict] = None, List of params. It provides a list of user-defined initial parameter values for Bayesian optimization search. Example: for the Holt-Winters model, [{'m': 7}, {'m': 14}]

Returns

A search object, GridSearch, RandomSearch, or BayesianOptSearch, depending on the selection.

Raises

NotImplementedError – Raised if the selection is not among strategies that are implemented.
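
For example, a Bayesian optimization search might be requested as sketched below, in which case evaluation_function must be supplied at creation time since it is a required argument of BayesianOptSearch (parameters and eval_fn are hypothetical placeholders, and SearchMethodEnum.BAYES_OPT is assumed to be the relevant enum member):

>>> from kats.consts import SearchMethodEnum
>>> from kats.utils.time_series_parameter_tuning import SearchMethodFactory
>>> bo_search = SearchMethodFactory.create_search_method(
...     parameters=parameters,
...     selected_search_method=SearchMethodEnum.BAYES_OPT,
...     evaluation_function=eval_fn,
...     bootstrap_size=5,
... )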

class kats.utils.time_series_parameter_tuning.TimeSeriesEvaluationMetric(name: str, evaluation_function: Callable, logger: logging.Logger, multiprocessing: bool = False)[source]¶

Bases: ax.core.metric.Metric

Object to evaluate an arm.

An object of this class is used to evaluate an arm during the search. It is mainly used to parallelize the search, as the evaluation of arms may need to run in parallel. This is possible only when the search strategy allows it.

evaluation_function¶

The function to be used for evaluation.

logger¶

The logger object used for logging.

multiprocessing¶

Flag to decide whether evaluation will run in parallel.

evaluate_arm(arm) → Dict[source]¶

Evaluates the performance of an arm.

Takes an arm object, gets its parameter values, runs evaluation_function, and returns what that function returns, after reformatting it.

Parameters

arm – The arm object to be evaluated.

Returns

Either a dict or a list of dicts. Each dict object needs to have the metric name that describes the metric, the arm_name, and the mean of the evaluation value along with its standard error.
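
A sketch of one such dict; the exact key names are an assumption here ("sem" for the standard error borrows Ax's convention):

{
    "metric_name": "mape",  # name describing the metric (assumed key)
    "arm_name": "0_0",      # name of the evaluated arm
    "mean": 0.12,           # mean of the evaluation value
    "sem": 0.01,            # its standard error (assumed key)
}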

fetch_trial_data(trial) → ax.core.data.Data[source]¶

Calls evaluation of every arm in a trial.

Parameters

trial – The trial whose arms are all to be evaluated.

Returns

A Data object that holds arm names, the trial index, and evaluation results.

classmethod is_available_while_running() → bool[source]¶

Metrics are available while the trial is RUNNING and should always be re-fetched.

class kats.utils.time_series_parameter_tuning.TimeSeriesParameterTuning(parameters: Optional[List[Dict]] = None, experiment_name: Optional[str] = None, objective_name: Optional[str] = None, outcome_constraints: Optional[List[str]] = None, multiprocessing: bool = False)[source]¶

Bases: abc.ABC

Abstract parent class for search strategy classes, such as GridSearch and RandomSearch.

Defines and imposes a structure on search strategy classes. Each search strategy has to have the attributes listed below. It also provides methods that are common to all search strategies.

parameters¶

List of dictionaries where each dict represents a hyperparameter.

experiment_name¶

An arbitrary name for the experiment object.

objective_name¶

An arbitrary name for the objective function that is used in the evaluation function.

outcome_constraints¶

Constraints set on the outcome of the objective.

Type

Optional[List[ax.core.outcome_constraint.OutcomeConstraint]]

abstract generate_evaluate_new_parameter_values(evaluation_function: Callable, arm_count: int = -1) → None[source]¶

A placeholder method for users that are still calling it.

It previously ran evaluation for trials. That part was moved to generator_run_for_search_method(). Now this method does nothing.

generator_run_for_search_method(evaluation_function: Callable, generator_run: ax.modelbridge.discrete.DiscreteModelBridge) → None[source]¶

Creates a new batch trial, then runs the latest one.

Parameters
  • evaluation_function – The function to use for arm evaluation.

  • generator_run – The generator_run object that is used to populate new arms.

get_search_space()[source]¶

Getter for the search space attribute of the private experiment attribute, _exp.

list_parameter_value_scores(legit_arms_only: bool = False) → pandas.core.frame.DataFrame[source]¶

Creates a Pandas DataFrame from the evaluated arms, then returns it.

This method should be called to fetch the evaluation results of the arms that have been populated and evaluated so far.

Parameters

legit_arms_only – A flag to filter out arms that violate the outcome_constraints, if any are given.

Returns

A Pandas DataFrame that holds arms populated and evaluated so far.
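
A minimal usage sketch, where search stands for any search object created by the factory:

>>> scores_df = search.list_parameter_value_scores()
>>> legit_df = search.list_parameter_value_scores(legit_arms_only=True)  # drops arms violating constraints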

static validate_parameters_format(parameters: List) → None[source]¶

Checks the structure of the parameters object.

The parameters object needs to be in a specific format: a list of dicts, where each dict represents one parameter. Raises an error depending on the format violation.

Parameters

parameters – The parameters whose format is to be audited.

Returns

None if none of the checks fail; raises an error if any check fails.

Raises
  • TypeError – If parameters is not of type list.

  • ValueError – If parameters is empty; there should be at least one hyperparameter to tune.

  • TypeError – If any of the list elements is of a type other than dict.
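
A sketch of these checks in action (the parameter dict shown follows Ax's configuration convention, an assumption here):

>>> from kats.utils.time_series_parameter_tuning import TimeSeriesParameterTuning
>>> TimeSeriesParameterTuning.validate_parameters_format(
...     [{"name": "m", "type": "choice", "values": [7, 14]}]
... )  # non-empty list of dicts: passes and returns None
>>> TimeSeriesParameterTuning.validate_parameters_format([])
Traceback (most recent call last):
    ...
ValueError: ...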