
GaussianHyperparameterSearch

GaussianHyperparameterSearch(
    param_space: Dict[str, Any],
    pipeline: Pipeline,
    score: str = metrics.rmse,
    n_iter: int = 100,
    seed: int = 5483,
    ratio_iter: float = 0.8,
    optimization_algorithm: str = nelder_mead,
    optimization_burn_in_algorithm: str = latin_hypercube,
    optimization_burn_ins: int = 500,
    surrogate_burn_in_algorithm: str = latin_hypercube,
    gaussian_kernel: str = matern52,
    gaussian_optimization_burn_in_algorithm: str = latin_hypercube,
    gaussian_optimization_algorithm: str = nelder_mead,
    gaussian_optimization_burn_ins: int = 500,
    gaussian_nugget: int = 50,
    early_stopping: bool = True,
)

Bases: _Hyperopt

Bayesian hyperparameter optimization using a Gaussian process.

After a burn-in period, a Gaussian process is used to pick the most promising parameter combination to evaluate next, based on the knowledge gathered throughout previous evaluations. The quality of potential combinations is assessed using the expected information (EI).

Enterprise edition

This feature is exclusive to the Enterprise edition and is not available in the Community edition. Discover the benefits of the Enterprise edition and compare the features of both editions.

For licensing information and technical support, please contact us.

PARAMETER DESCRIPTION
param_space

Dictionary containing numerical arrays of length two that hold the lower and upper bounds of all parameters to be altered in pipeline during the hyperparameter optimization.

If we have two feature learners and one predictor, the hyperparameter space might look like this:

param_space = {
    "feature_learners": [
        {
            "num_features": [10, 50],
        },
        {
            "max_depth": [1, 10],
            "min_num_samples": [100, 500],
            "num_features": [10, 50],
            "reg_lambda": [0.0, 0.1],
            "shrinkage": [0.01, 0.4]
        }],
    "predictors": [
        {
            "reg_lambda": [0.0, 10.0]
        }
    ]
}

If we only want to optimize the predictor, then we can leave out the feature learners.
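
For instance, a search space that only optimizes the predictor might look like this (the bounds are purely illustrative):

param_space = {
    "predictors": [
        {
            "reg_lambda": [0.0, 10.0]
        }
    ]
}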

TYPE: Dict[str, Any]

pipeline

Base pipeline used to derive all models fitted and scored during the hyperparameter optimization. Be careful when constructing it since only the parameters present in param_space will be overwritten. It defines the data schema and any hyperparameters that are not optimized.

TYPE: Pipeline

score

The score to optimize. Must be from metrics.

TYPE: str DEFAULT: rmse

n_iter

Number of iterations in the hyperparameter optimization and thus the number of parameter combinations to draw and evaluate. Range: [1, ∞]

TYPE: int DEFAULT: 100

seed

Seed used for the random number generator that underlies the sampling procedure, making the calculation reproducible. Due to the nature of the underlying algorithm, this is only the case if the fit is done without multithreading. To reflect this, the seed may only be set to an actual integer (rather than None) if both the num_threads and n_jobs instance variables of the predictor and feature_selector in pipeline - if they are instances of either XGBoostRegressor or XGBoostClassifier - are set to 1. Internally, a seed of None will be mapped to 5543. Range: [0, ∞]

TYPE: int DEFAULT: 5483

ratio_iter

Ratio of the iterations used for the burn-in. For a ratio_iter of 1.0, all iterations will be spent in the burn-in period resulting in an equivalence of this class to LatinHypercubeSearch or RandomSearch - depending on surrogate_burn_in_algorithm. Range: [0, 1]

As a rule of thumb, at least 70 percent of the evaluations should be spent in the burn-in phase. The more comprehensive the exploration of the param_space during the burn-in, the less likely it is that the Gaussian process gets stuck in local minima.
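
For example, with n_iter=100 and ratio_iter=0.8, the first 80 evaluations are drawn by surrogate_burn_in_algorithm and only the remaining 20 are proposed by the Gaussian process.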

TYPE: float DEFAULT: 0.8

optimization_algorithm

Determines the optimization algorithm used for the local search in the optimization of the expected information (EI). Must be from optimization.

TYPE: str DEFAULT: nelder_mead

optimization_burn_in_algorithm

Specifies the algorithm used to draw initial points in the burn-in period of the optimization of the expected information (EI). Must be from burn_in.

TYPE: str DEFAULT: latin_hypercube

optimization_burn_ins

Number of random evaluation points used during the burn-in of the optimization of the expected information (EI). After the surrogate model - the Gaussian process - has been successfully fitted to the previous parameter combinations, the algorithm is able to calculate the EI for a given point. In order to get to the next combination, the EI has to be maximized over the whole parameter space. Much like the Gaussian process itself, this optimization requires a burn-in phase. Range: [3, ∞]

TYPE: int DEFAULT: 500

surrogate_burn_in_algorithm

Specifies the algorithm used to draw new parameter combinations during the burn-in period. Must be from burn_in.

TYPE: str DEFAULT: latin_hypercube

gaussian_kernel

Specifies the 1-dimensional kernel of the Gaussian process that is used along each dimension of the parameter space. All of the available choices result in continuous sample paths; their main difference is the degree of smoothness, with 'exp' yielding the least smooth and 'gauss' yielding the smoothest paths. Must be from kernels.
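
For example, to favor smoother surrogate sample paths than the default, one might pass the 'gauss' kernel. A minimal sketch, assuming pipe and param_space are defined as in the example below and that the string "gauss" matches the kernel name from kernels:

gaussian_search = hyperopt.GaussianHyperparameterSearch(
    pipeline=pipe,
    param_space=param_space,
    gaussian_kernel="gauss"
)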

TYPE: str DEFAULT: matern52

gaussian_optimization_burn_in_algorithm

Specifies the algorithm used to draw new parameter combinations during the burn-in period of the optimization of the Gaussian process. Must be from burn_in.

TYPE: str DEFAULT: latin_hypercube

gaussian_optimization_algorithm

Determines the optimization algorithm used for the local search in the fitting of the Gaussian process to the previous parameter combinations. Must be from optimization.

TYPE: str DEFAULT: nelder_mead

gaussian_optimization_burn_ins

Number of random evaluation points used during the burn-in of the fitting of the Gaussian process. Range: [3, ∞]

TYPE: int DEFAULT: 500

early_stopping

Whether you want to apply early stopping to the predictors.

TYPE: bool DEFAULT: True

Note

A Gaussian hyperparameter search works like this:

  • It begins with a burn-in phase, usually about 70% to 90% of all iterations. During that burn-in phase, the hyperparameter space is sampled more or less at random. You can control this phase using ratio_iter and surrogate_burn_in_algorithm.

  • Once enough information has been collected, it fits a Gaussian process on the hyperparameters with the score we want to maximize or minimize as the predicted variable. Note that the Gaussian process has hyperparameters itself, which are also optimized. You can control this phase using gaussian_kernel, gaussian_optimization_algorithm, gaussian_optimization_burn_in_algorithm and gaussian_optimization_burn_ins.

  • It then uses the Gaussian process to predict the expected information (EI), which is how much additional information it might get from evaluating a particular point in the hyperparameter space. The expected information is to be maximized. The point in the hyperparameter space with the maximum expected information is the next point that is actually evaluated (meaning a new pipeline with these hyperparameters is trained). You can control this phase using optimization_algorithm, optimization_burn_ins and optimization_burn_in_algorithm.

In a nutshell, the GaussianHyperparameterSearch behaves like a human data scientist:

  • At first, it picks random hyperparameter combinations.

  • Once it has gained a better understanding of the hyperparameter space, it starts evaluating hyperparameter combinations that are particularly interesting.
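
The bullet points above map directly onto the constructor arguments. A minimal configuration sketch (the values are illustrative, pipe and param_space are assumed to be defined as in the example below, and the string values correspond to the defaults listed above):

search = hyperopt.GaussianHyperparameterSearch(
    pipeline=pipe,
    param_space=param_space,
    n_iter=100,
    # Burn-in phase: 80 of the 100 iterations are sampled via latin hypercube.
    ratio_iter=0.8,
    surrogate_burn_in_algorithm="latin_hypercube",
    # Fitting the Gaussian process surrogate.
    gaussian_kernel="matern52",
    gaussian_optimization_algorithm="nelder_mead",
    gaussian_optimization_burn_ins=500,
    # Maximizing the expected information (EI) to pick the next point.
    optimization_algorithm="nelder_mead",
    optimization_burn_in_algorithm="latin_hypercube",
    optimization_burn_ins=500,
)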

Example
from getml import data
from getml import datasets
from getml import engine
from getml import feature_learning
from getml.feature_learning import aggregations
from getml.feature_learning import loss_functions
from getml import hyperopt
from getml import pipeline
from getml import predictors

# ----------------

engine.set_project("examples")

# ----------------

population_table, peripheral_table = datasets.make_numerical()

# ----------------
# Construct placeholders

population_placeholder = data.Placeholder("POPULATION")
peripheral_placeholder = data.Placeholder("PERIPHERAL")
population_placeholder.join(peripheral_placeholder, "join_key", "time_stamp")

# ----------------
# Base model - any parameters not included
# in param_space will be taken from this.

fe1 = feature_learning.Multirel(
    aggregation=[
        aggregations.COUNT,
        aggregations.SUM
    ],
    loss_function=loss_functions.SquareLoss,
    num_features=10,
    share_aggregations=1.0,
    max_length=1,
    num_threads=0
)

# ----------------
# Base model - any parameters not included
# in param_space will be taken from this.

fe2 = feature_learning.Relboost(
    loss_function=loss_functions.SquareLoss,
    num_features=10
)

# ----------------
# Base model - any parameters not included
# in param_space will be taken from this.

predictor = predictors.LinearRegression()

# ----------------

pipe = pipeline.Pipeline(
    population=population_placeholder,
    peripheral=[peripheral_placeholder],
    feature_learners=[fe1, fe2],
    predictors=[predictor]
)

# ----------------
# Build a hyperparameter space.
# We have two feature learners and one
# predictor, so this is how we must
# construct our hyperparameter space.
# If we only wanted to optimize the predictor,
# we could just leave out the feature_learners.

param_space = {
    "feature_learners": [
        {
            "num_features": [10, 50],
        },
        {
            "max_depth": [1, 10],
            "min_num_samples": [100, 500],
            "num_features": [10, 50],
            "reg_lambda": [0.0, 0.1],
            "shrinkage": [0.01, 0.4]
        }],
    "predictors": [
        {
            "reg_lambda": [0.0, 10.0]
        }
    ]
}

# ----------------
# Wrap a GaussianHyperparameterSearch around the reference model

gaussian_search = hyperopt.GaussianHyperparameterSearch(
    pipeline=pipe,
    param_space=param_space,
    n_iter=30,
    score=pipeline.metrics.rsquared
)

gaussian_search.fit(
    population_table_training=population_table,
    population_table_validation=population_table,
    peripheral_tables=[peripheral_table]
)

# ----------------

# We want 5 additional iterations.
gaussian_search.n_iter = 5

# We do not want another burn-in-phase,
# so we set ratio_iter to 0.
gaussian_search.ratio_iter = 0.0

# This widens the hyperparameter space.
gaussian_search.param_space["feature_learners"][1]["num_features"] = [10, 100]

# This narrows the hyperparameter space.
gaussian_search.param_space["predictors"][0]["reg_lambda"] = [0.0, 0.0]

# This continues the hyperparameter search using the previous iterations as
# prior knowledge.
gaussian_search.fit(
    population_table_training=population_table,
    population_table_validation=population_table,
    peripheral_tables=[peripheral_table]
)

# ----------------

all_hyp = hyperopt.list_hyperopts()

best_pipeline = gaussian_search.best_pipeline
Source code in getml/hyperopt/hyperopt.py
def __init__(
    self,
    param_space: Dict[str, Any],
    pipeline: Pipeline,
    score: str = metrics.rmse,
    n_iter: int = 100,
    seed: int = 5483,
    ratio_iter: float = 0.80,
    optimization_algorithm: str = nelder_mead,
    optimization_burn_in_algorithm: str = latin_hypercube,
    optimization_burn_ins: int = 500,
    surrogate_burn_in_algorithm: str = latin_hypercube,
    gaussian_kernel: str = matern52,
    gaussian_optimization_burn_in_algorithm: str = latin_hypercube,
    gaussian_optimization_algorithm: str = nelder_mead,
    gaussian_optimization_burn_ins: int = 500,
    gaussian_nugget: int = 50,
    early_stopping: bool = True,
):
    super().__init__(
        param_space=param_space,
        pipeline=pipeline,
        score=score,
        n_iter=n_iter,
        seed=seed,
        ratio_iter=ratio_iter,
        optimization_algorithm=optimization_algorithm,
        optimization_burn_in_algorithm=optimization_burn_in_algorithm,
        optimization_burn_ins=optimization_burn_ins,
        surrogate_burn_in_algorithm=surrogate_burn_in_algorithm,
        gaussian_kernel=gaussian_kernel,
        gaussian_optimization_algorithm=gaussian_optimization_algorithm,
        gaussian_optimization_burn_in_algorithm=gaussian_optimization_burn_in_algorithm,
        gaussian_optimization_burn_ins=gaussian_optimization_burn_ins,
        gaussian_nugget=gaussian_nugget,
        early_stopping=early_stopping,
    )

    self._type = "GaussianHyperparameterSearch"

    self.validate()

best_pipeline property

best_pipeline: Pipeline

The best pipeline that is part of the hyperparameter optimization.

This is always based on the validation data you have passed even if you have chosen to score the pipeline on other data afterwards.

RETURNS DESCRIPTION
Pipeline

The best pipeline.

id property

id: str

Name of the hyperparameter optimization. This is used to uniquely identify it on the engine.

RETURNS DESCRIPTION
str

The name of the hyperparameter optimization.

name property

name: str

Returns the ID of the hyperparameter optimization. The name property is kept for backward compatibility.

RETURNS DESCRIPTION
str

The name of the hyperparameter optimization.

score property

score: str

The score to be optimized.

RETURNS DESCRIPTION
str

The score to be optimized.

type property

type: str

The algorithm used for the hyperparameter optimization.

RETURNS DESCRIPTION
str

The algorithm used for the hyperparameter optimization.

clean_up

clean_up() -> None

Deletes all pipelines associated with the hyperparameter optimization, except for the best pipeline.
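
A minimal usage sketch, assuming gaussian_search has already been fitted; this removes all pipelines created during the search from the project except the best one:

gaussian_search.clean_up()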

Source code in getml/hyperopt/hyperopt.py
def clean_up(self) -> None:
    """
    Deletes all pipelines associated with the hyperparameter optimization,
    except for the best pipeline.
    """
    best_pipeline = self._best_pipeline_name()
    names = [obj["pipeline_name"] for obj in self.evaluations]
    for name in names:
        if name == best_pipeline:
            continue
        if exists(name):
            delete(name)

fit

fit(
    container: Union[Container, StarSchema, TimeSeries],
    train: str = "train",
    validation: str = "validation",
) -> _Hyperopt

Launches the hyperparameter optimization.

PARAMETER DESCRIPTION
container

The data container used for the hyperparameter tuning.

TYPE: Union[Container, StarSchema, TimeSeries]

train

The name of the subset in 'container' used for training.

TYPE: str DEFAULT: 'train'

validation

The name of the subset in 'container' used for validation.

TYPE: str DEFAULT: 'validation'

RETURNS DESCRIPTION
_Hyperopt

The current instance.
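
A minimal usage sketch, assuming gaussian_search has been constructed as in the example above and container is a Container holding subsets named 'train' and 'validation':

gaussian_search.fit(
    container,
    train="train",
    validation="validation"
)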

Source code in getml/hyperopt/hyperopt.py
def fit(
    self,
    container: Union[Container, StarSchema, TimeSeries],
    train: str = "train",
    validation: str = "validation",
) -> _Hyperopt:
    """Launches the hyperparameter optimization.

    Args:
        container:
            The data container used for the hyperparameter tuning.

        train:
            The name of the subset in 'container' used for training.

        validation:
            The name of the subset in 'container' used for validation.

    Returns:
        The current instance.
    """

    if isinstance(container, (StarSchema, TimeSeries)):
        container = container.container

    if not isinstance(container, Container):
        raise TypeError(
            "'container' must be a `~getml.data.Container`, "
            + "a `~getml.data.StarSchema` or a `~getml.data.TimeSeries`"
        )

    if not isinstance(train, str):
        raise TypeError("""'train' must be a string""")

    if not isinstance(validation, str):
        raise TypeError("""'validation' must be a string""")

    self.pipeline.check(container[train])

    population_table_training = container[train].population

    population_table_validation = container[validation].population

    peripheral_tables = _transform_peripheral(
        container[train].peripheral, self.pipeline.peripheral
    )

    self._send()

    cmd: Dict[str, Any] = {}

    cmd["name_"] = self.id
    cmd["type_"] = "Hyperopt.launch"

    cmd["population_training_df_"] = population_table_training._getml_deserialize()

    cmd["population_validation_df_"] = (
        population_table_validation._getml_deserialize()
    )

    cmd["peripheral_dfs_"] = [
        elem._getml_deserialize() for elem in peripheral_tables
    ]

    with comm.send_and_get_socket(cmd) as sock:
        begin = time.monotonic()
        msg = comm.log(sock)
        end = time.monotonic()

    if msg != "Success!":
        comm.handle_engine_exception(msg)

    _print_time_taken(begin, end, "Time taken: ")

    self._save()

    return self.refresh()

refresh

refresh() -> _Hyperopt

Reloads the hyperparameter optimization from the Engine.

RETURNS DESCRIPTION
_Hyperopt

The current instance.
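
A minimal usage sketch, assuming gaussian_search already exists on the Engine:

gaussian_search = gaussian_search.refresh()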

Source code in getml/hyperopt/hyperopt.py
def refresh(self) -> _Hyperopt:
    """Reloads the hyperparameter optimization from the Engine.

    Returns:
            Current instance

    """
    json_obj = _get_json_obj(self.id)
    return self._parse_json_obj(json_obj)

validate

validate() -> None

Validate the parameters of the hyperparameter optimization.

Source code in getml/hyperopt/hyperopt.py
def validate(self) -> None:
    """
    Validate the parameters of the hyperparameter optimization.
    """
    _validate_hyperopt(_Hyperopt._supported_params, **self.__dict__)  # type: ignore