getml.feature_learning.Fastboost
dataclass
    Fastboost(
        gamma: float = 0.0,
        loss_function: Optional[
            Union[CrossEntropyLossType, SquareLossType]
        ] = None,
        max_depth: int = 5,
        min_child_weights: float = 1.0,
        num_features: int = 100,
        num_threads: int = 1,
        reg_lambda: float = 1.0,
        seed: int = 5543,
        shrinkage: float = 0.1,
        silent: bool = True,
        subsample: float = 1.0,
    )

Bases: `_FeatureLearner`
Feature learning based on gradient boosting. Fastboost automates feature learning for relational data and time series. The algorithm it uses is slightly simpler than that of Relboost and much faster.
Enterprise edition
This feature is exclusive to the Enterprise edition and is not available in the Community edition. Discover the benefits of the Enterprise edition and compare the features of both editions.
For licensing information and technical support, please contact us.
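For orientation before the parameter reference, here is a minimal construction sketch. It assumes the loss function constants are exposed under `getml.feature_learning.loss_functions`, as with getML's other feature learners; treat the exact import path as an assumption rather than part of this reference.

```python
from getml.feature_learning import Fastboost, loss_functions

# Assumed import path for the loss functions; adjust to your getML version.
# CrossEntropyLoss targets classification, SquareLoss targets regression.
fast_boost = Fastboost(
    loss_function=loss_functions.CrossEntropyLoss,
    num_features=200,   # generate more than the default 100 features
    max_depth=4,        # shallower trees -> simpler, more robust features
    reg_lambda=2.0,     # stronger L2 regularization on the tree weights
    subsample=0.8,      # bootstrap 80% of the population table per feature
    seed=5543,          # fixed seed keeps the sampling reproducible
)
```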
PARAMETER | DESCRIPTION | TYPE |
---|---|---|
`gamma` | During the training of Fastboost, which is based on gradient tree boosting, this value serves as the minimum improvement in terms of the loss function required for the algorithm to make a further split. Higher values lead to less complex features and less danger of overfitting. Range: [0, ∞] | `float` |
`loss_function` | Objective function used by the feature learning algorithm to optimize your features. For regression problems use `SquareLoss`, and for classification problems use `CrossEntropyLoss`. | `Optional[Union[CrossEntropyLossType, SquareLossType]]` |
`max_depth` | Maximum depth of the trees generated during the gradient tree boosting. Deeper trees result in more complex models and increase the risk of overfitting. Range: [0, ∞] | `int` |
`min_child_weights` | Determines the minimum sum of the weights a subcondition must apply to in order to be considered. Higher values lead to less complex statements and less danger of overfitting. Range: [1, ∞] | `float` |
`num_features` | Number of features generated by the feature learning algorithm. Range: [1, ∞] | `int` |
`num_threads` | Number of threads used by the feature learning algorithm. If set to zero or a negative value, the number of threads will be determined automatically by the getML Engine. Range: [0, ∞] | `int` |
`reg_lambda` | L2 regularization on the weights in the gradient boosting routine. This is one of the most important hyperparameters in Fastboost: higher values lead to less complex features and less danger of overfitting. Range: [0, ∞] | `float` |
`seed` | Seed used for the random number generator that underlies the sampling procedure, making the calculation reproducible. Internally, a seed of None will be mapped to 5543. Range: [0, ∞] | `int` |
`shrinkage` | Since Fastboost works using a gradient-boosting-like algorithm, shrinkage (also known as the learning rate) scales down the impact of each new tree, leaving room for later trees to improve the overall result. Higher values lead to faster learning but a greater danger of overfitting. Range: [0, 1] | `float` |
`silent` | Controls the logging during training. | `bool` |
`subsample` | Fastboost uses a bootstrapping procedure (sampling with replacement) to train each of the features. The sampling factor is proportional to the share of the samples randomly drawn from the population table every time Fastboost generates a new feature. A lower sampling factor (still greater than 0.0) reduces the danger of overfitting and leads to less complex statements and faster training. When set to 1.0, the number of samples drawn is identical to the size of the population table. When set to 0.0, there is no sampling at all. Range: [0, 1] | `float` |
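A Fastboost instance does nothing on its own; it is handed to a pipeline, which fits it against your data model. The sketch below shows the typical wiring, assuming `getml.pipeline.Pipeline` and `getml.predictors.XGBoostClassifier` as in the rest of the getML API; the data model and data containers are placeholders for your own relational data, not part of this reference.

```python
import getml

# Placeholder data model; in a real project this is built with
# getml.data.DataModel from your population and peripheral tables.
data_model = ...

fast_boost = getml.feature_learning.Fastboost(
    loss_function=getml.feature_learning.loss_functions.CrossEntropyLoss,
)

pipe = getml.pipeline.Pipeline(
    data_model=data_model,
    feature_learners=[fast_boost],                      # learns the features
    predictors=[getml.predictors.XGBoostClassifier()],  # consumes them
)

# train_container / test_container are placeholders for
# getml.data.Container subsets holding your tables.
pipe.fit(train_container)
scores = pipe.score(test_container)
```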