Air Pollution - Why feature learning is better than simple propositionalization¶
In this notebook we compare getML to featuretools and tsfresh, both of which are open-source libraries for feature engineering. We find that the advanced algorithms featured in getML yield significantly better predictions on this dataset, and we then discuss why that is.
Summary:
- Prediction type: Regression model
- Domain: Air pollution
- Prediction target: pm2.5 concentration
- Source data: Multivariate time series
- Population size: 41757
Background¶
Many data scientists and AutoML tools use propositionalization methods for feature engineering. These propositionalization methods usually work as follows:
- Generate a large number of hard-coded features,
- Use feature selection to pick a percentage of these features (a minimal sketch of this idea follows below).
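To make this concrete, below is a minimal sketch of the idea in plain pandas. The function name, the row-based window, and the list of aggregations are illustrative assumptions and not the API of any of the libraries discussed in this notebook; it also assumes all columns are numeric.
import pandas as pd

def propositionalize(df: pd.DataFrame, target: str, window: int, keep: int) -> pd.DataFrame:
    """Step 1: generate many hard-coded rolling-window features.
    Step 2: keep only the `keep` features most correlated with the target."""
    features = {}
    for col in df.columns.drop(target):
        for agg in ("mean", "min", "max", "sum", "std"):
            # shift(1) so each feature only sees the past, never the current row
            features[f"{col}_{agg}_{window}"] = (
                df[col].shift(1).rolling(window, min_periods=1).agg(agg)
            )
    features = pd.DataFrame(features, index=df.index)
    best = features.corrwith(df[target]).abs().nlargest(keep).index
    return features[best]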
By contrast, getML contains approaches for feature learning: Feature learning adapts machine learning approaches such as decision trees or gradient boosting to the problem of extracting features from relational data and time series.
In this notebook, we will benchmark getML against featuretools and tsfresh. Both of these libraries use propositionalization approaches for feature engineering.
As our example dataset, we use a publicly available dataset on air pollution in Beijing, China: Beijing PM2.5 Data. The data set was originally used in the following study:
Liang, X., Zou, T., Guo, B., Li, S., Zhang, H., Zhang, S., Huang, H. and Chen, S. X. (2015). Assessing Beijing's PM2.5 pollution: severity, weather impact, APEC and winter heating. Proceedings of the Royal Society A, 471, 20150257.
We find that getML significantly outperforms featuretools and tsfresh in terms of predictive accuracy (see Discussion). Our findings indicate that getML's feature learning algorithms are better at adapting to data sets and are also more scalable due to their lower memory requirements.
Analysis¶
We start the analysis with the setup of our session.
%pip install -q "getml==1.5.0" "featuretools==1.31.0" "tsfresh==0.20.3" "ipywidgets==8.1.5"
import os
import pandas as pd
import getml
from utils.load import load_or_retrieve
os.environ["PYARROW_IGNORE_TIMEZONE"] = "1"
print(f"getML API version: {getml.__version__}\n")
getML API version: 1.5.0
# NOTE: Because featuretools and tsfresh have substantial resource requirements, pre-computed features can be loaded instead by leaving RUN_FEATURETOOLS and RUN_TSFRESH set to False.
RUN_FEATURETOOLS = False
RUN_TSFRESH = False
if RUN_FEATURETOOLS:
    from utils import FTTimeSeriesBuilder

if RUN_TSFRESH:
    from utils import TSFreshBuilder
getml.engine.launch(allow_remote_ips=True, token='token')
getml.set_project("air_pollution")
Launching ./getML --allow-push-notifications=true --allow-remote-ips=true --home-directory=/home/user --in-memory=true --install=false --launch-browser=true --log=false --token=token in /home/user/.getML/getml-1.5.0-x64-linux... Launched the getML Engine. The log output will be stored in /home/user/.getML/logs/20240912133326.log. Loading pipelines... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
Connected to project 'air_pollution'.
1. Loading data¶
1.1 Download from source¶
Downloading the raw data from the UCI Machine Learning Repository and bringing it into a prediction-ready format takes time. To get to the getML model building as fast as possible, we have prepared the data for you and excluded the preparation code from this notebook.
data = getml.datasets.load_air_pollution()
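For orientation only, the preparation is roughly equivalent to the sketch below. The file name and the exact cleaning steps are assumptions based on the UCI description of the Beijing PM2.5 Data set, not the code we actually used.
# Rough sketch only: load_air_pollution() above already returns the prepared data.
raw = pd.read_csv("PRSA_data_2010.1.1-2014.12.31.csv")  # assumed UCI file name
raw["date"] = pd.to_datetime(raw[["year", "month", "day", "hour"]])
raw = raw.dropna(subset=["pm2.5"])  # drop hours without a recorded target
raw = raw[["DEWP", "TEMP", "PRES", "Iws", "Is", "Ir", "pm2.5", "date"]]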
First, we split our data. We introduce a simple, time-based split and use all data until 2013-12-31 for training and everything starting from 2014-01-01 for testing.
split = getml.data.split.time(
population=data, time_stamp="date", test=getml.data.time.datetime(2014, 1, 1)
)
split
0 | train |
---|---|
1 | train |
2 | train |
3 | train |
4 | train |
... |
41757 rows
type: StringColumnView
2. Predictive modeling¶
2.1 Pipeline 1: Complex features, 7 days¶
For our first experiment, we will learn complex features and allow a memory of up to seven days. That means that at any given point in time, the algorithm is allowed to look back seven days into the past.
time_series1 = getml.data.TimeSeries(
population=data,
alias="population",
split=split,
time_stamps="date",
memory=getml.data.time.days(7),
)
time_series1
data frames | staging table | |
---|---|---|
0 | population | POPULATION__STAGING_TABLE_1 |
1 | population | POPULATION__STAGING_TABLE_2 |
subset | name | rows | type | |
---|---|---|---|---|
0 | test | population | 8661 | View |
1 | train | population | 33096 | View |
name | rows | type | |
---|---|---|---|
0 | population | 41757 | DataFrame |
relmt = getml.feature_learning.RelMT(
num_features=10,
loss_function=getml.feature_learning.loss_functions.SquareLoss,
seed=4367,
num_threads=1,
)
predictor = getml.predictors.XGBoostRegressor(n_jobs=1)
pipe1 = getml.pipeline.Pipeline(
tags=["getML: RelMT", "memory: 7d", "complex features"],
data_model=time_series1.data_model,
feature_learners=[relmt],
predictors=[predictor],
)
pipe1
Pipeline(data_model='population', feature_learners=['RelMT'], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=['population'], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['getML: RelMT', 'memory: 7d', 'complex features'])
It is good practice to always check your data model first, even though check(...) is also called by fit(...). That enables us to make last-minute changes.
pipe1.check(time_series1.train)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Checking... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:01
OK.
We now fit the pipeline on the training set and evaluate the results both in-sample and out-of-sample.
pipe1.fit(time_series1.train)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
OK.
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 RelMT: Training features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 01:53 RelMT: Building features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:18 XGBoost: Training as predictor... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:01
Trained pipeline.
Time taken: 0:02:14.495720.
Pipeline(data_model='population', feature_learners=['RelMT'], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=['population'], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['getML: RelMT', 'memory: 7d', 'complex features', 'container-m6fGhl'])
pipe1.score(time_series1.test)
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Preprocessing... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 RelMT: Building features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:05
date time | set used | target | mae | rmse | rsquared | |
---|---|---|---|---|---|---|
0 | 2024-09-12 12:50:57 | train | pm2.5 | 35.1042 | 50.7378 | 0.6946 |
1 | 2024-09-12 12:51:02 | test | pm2.5 | 39.7981 | 57.6703 | 0.6272 |
2.2 Pipeline 2: Complex features, 1 day¶
time_series2 = getml.data.TimeSeries(
population=data,
alias="population",
split=split,
time_stamps="date",
memory=getml.data.time.days(1),
)
time_series2
data frames | staging table | |
---|---|---|
0 | population | POPULATION__STAGING_TABLE_1 |
1 | population | POPULATION__STAGING_TABLE_2 |
subset | name | rows | type | |
---|---|---|---|---|
0 | test | population | 8661 | View |
1 | train | population | 33096 | View |
name | rows | type | |
---|---|---|---|
0 | population | 41757 | DataFrame |
relmt = getml.feature_learning.RelMT(
num_features=10,
loss_function=getml.feature_learning.loss_functions.SquareLoss,
seed=4367,
num_threads=1,
)
predictor = getml.predictors.XGBoostRegressor(n_jobs=1)
pipe2 = getml.pipeline.Pipeline(
tags=["getML: RelMT", "memory: 1d", "complex features"],
data_model=time_series2.data_model,
feature_learners=[relmt],
predictors=[predictor],
)
pipe2
Pipeline(data_model='population', feature_learners=['RelMT'], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=['population'], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['getML: RelMT', 'memory: 1d', 'complex features'])
pipe2.check(time_series2.train)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Checking... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:01
OK.
pipe2.fit(time_series2.train)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
OK.
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 RelMT: Training features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:30 RelMT: Building features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:03 XGBoost: Training as predictor... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:02
Trained pipeline.
Time taken: 0:00:36.135924.
Pipeline(data_model='population', feature_learners=['RelMT'], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=['population'], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['getML: RelMT', 'memory: 1d', 'complex features', 'container-yXALH8'])
pipe2.score(time_series2.test)
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Preprocessing... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 RelMT: Building features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
date time | set used | target | mae | rmse | rsquared | |
---|---|---|---|---|---|---|
0 | 2024-09-12 12:51:40 | train | pm2.5 | 38.1927 | 55.3496 | 0.6367 |
1 | 2024-09-12 12:51:41 | test | pm2.5 | 48.0167 | 67.5441 | 0.4802 |
2.3 Pipeline 3: Simple features, 7 days¶
For our third experiment, we will learn simple features and allow a memory of up to seven days.
time_series3 = getml.data.TimeSeries(
population=data,
alias="population",
split=split,
time_stamps="date",
memory=getml.data.time.days(7),
)
time_series3
data frames | staging table | |
---|---|---|
0 | population | POPULATION__STAGING_TABLE_1 |
1 | population | POPULATION__STAGING_TABLE_2 |
subset | name | rows | type | |
---|---|---|---|---|
0 | test | population | 8661 | View |
1 | train | population | 33096 | View |
name | rows | type | |
---|---|---|---|
0 | population | 41757 | DataFrame |
fast_prop = getml.feature_learning.FastProp(
loss_function=getml.feature_learning.loss_functions.SquareLoss,
num_threads=1,
aggregation=getml.feature_learning.FastProp.agg_sets.All,
)
predictor = getml.predictors.XGBoostRegressor(n_jobs=1)
pipe3 = getml.pipeline.Pipeline(
tags=["getML: FastProp", "memory: 7d", "simple features"],
data_model=time_series3.data_model,
feature_learners=[fast_prop],
predictors=[predictor],
)
pipe3
Pipeline(data_model='population', feature_learners=['FastProp'], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=['population'], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['getML: FastProp', 'memory: 7d', 'simple features'])
pipe3.check(time_series3.train)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Checking... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:01
OK.
pipe3.fit(time_series3.train)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
OK.
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 FastProp: Trying 378 features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:28 FastProp: Building features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:16 XGBoost: Training as predictor... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:18
Trained pipeline.
Time taken: 0:01:02.990256.
Pipeline(data_model='population', feature_learners=['FastProp'], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=['population'], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['getML: FastProp', 'memory: 7d', 'simple features', 'container-hqJmW2'])
pipe3.score(time_series3.test)
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Preprocessing... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 FastProp: Building features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:04
date time | set used | target | mae | rmse | rsquared | |
---|---|---|---|---|---|---|
0 | 2024-09-12 12:52:46 | train | pm2.5 | 35.9677 | 50.7711 | 0.7036 |
1 | 2024-09-12 12:52:51 | test | pm2.5 | 45.4779 | 62.6417 | 0.5613 |
2.4 Pipeline 4: Simple features, 1 day¶
For our fourth experiment, we will learn simple features and allow a memory of up to one day.
time_series4 = getml.data.TimeSeries(
population=data,
alias="population",
split=split,
time_stamps="date",
memory=getml.data.time.days(1),
)
time_series4
data frames | staging table | |
---|---|---|
0 | population | POPULATION__STAGING_TABLE_1 |
1 | population | POPULATION__STAGING_TABLE_2 |
subset | name | rows | type | |
---|---|---|---|---|
0 | test | population | 8661 | View |
1 | train | population | 33096 | View |
name | rows | type | |
---|---|---|---|
0 | population | 41757 | DataFrame |
fast_prop = getml.feature_learning.FastProp(
loss_function=getml.feature_learning.loss_functions.SquareLoss,
num_threads=1,
aggregation=getml.feature_learning.FastProp.agg_sets.All,
)
predictor = getml.predictors.XGBoostRegressor(n_jobs=1)
pipe4 = getml.pipeline.Pipeline(
tags=["getML: FastProp", "memory: 1d", "simple features"],
data_model=time_series4.data_model,
feature_learners=[fast_prop],
predictors=[predictor],
)
pipe4
Pipeline(data_model='population', feature_learners=['FastProp'], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=['population'], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['getML: FastProp', 'memory: 1d', 'simple features'])
pipe4.check(time_series4.train)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Checking... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:01
OK.
pipe4.fit(time_series4.train)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
OK.
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 FastProp: Trying 378 features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:05 FastProp: Building features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:03 XGBoost: Training as predictor... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:19
Trained pipeline.
Time taken: 0:00:28.050749.
Pipeline(data_model='population', feature_learners=['FastProp'], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=['population'], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['getML: FastProp', 'memory: 1d', 'simple features', 'container-pspq7Q'])
pipe4.score(time_series4.test)
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Preprocessing... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 FastProp: Building features... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:01
date time | set used | target | mae | rmse | rsquared | |
---|---|---|---|---|---|---|
0 | 2024-09-12 12:53:21 | train | pm2.5 | 38.3028 | 55.2472 | 0.6438 |
1 | 2024-09-12 12:53:22 | test | pm2.5 | 44.2486 | 63.4164 | 0.5462 |
2.5 Using featuretools¶
To make things a bit easier, we have written high-level wrappers around featuretools and tsfresh, which we have placed in a separate module (utils).
data_train_pandas = time_series1.train.population.to_pandas()
data_test_pandas = time_series1.test.population.to_pandas()
tsfresh and featuretools require the time series to have ids. Since there is only a single time series, every row gets the same id.
data_train_pandas["id"] = 1
data_test_pandas["id"] = 1
if RUN_FEATURETOOLS:
    ft_builder = FTTimeSeriesBuilder(
        num_features=200,
        horizon=pd.Timedelta(days=0),
        memory=pd.Timedelta(days=1),
        column_id="id",
        time_stamp="date",
        target="pm2.5",
    )

    featuretools_training = ft_builder.fit(data_train_pandas)
    featuretools_test = ft_builder.transform(data_test_pandas)

    data_featuretools_training = getml.data.DataFrame.from_pandas(
        featuretools_training, name="featuretools_training"
    )
    data_featuretools_test = getml.data.DataFrame.from_pandas(
        featuretools_test, name="featuretools_test"
    )
if not RUN_FEATURETOOLS:
    data_featuretools_training = load_or_retrieve(
        "https://static.getml.com/datasets/air_pollution/featuretools/featuretools_training.csv"
    )
    data_featuretools_test = load_or_retrieve(
        "https://static.getml.com/datasets/air_pollution/featuretools/featuretools_test.csv"
    )
Loading 'featuretools_training' from disk (project folder). Loading 'featuretools_test' from disk (project folder).
def set_roles_featuretools(df):
    df.set_role(["date"], getml.data.roles.time_stamp)
    df.set_role(["pm2.5"], getml.data.roles.target)
    df.set_role(df.roles.unused, getml.data.roles.numerical)
    df.set_role(["id"], getml.data.roles.unused_float)
    return df
df_featuretools_training = set_roles_featuretools(data_featuretools_training)
df_featuretools_test = set_roles_featuretools(data_featuretools_test)
predictor = getml.predictors.XGBoostRegressor()
pipe5 = getml.pipeline.Pipeline(
tags=["featuretools", "memory: 1d", "simple features"], predictors=[predictor]
)
pipe5
Pipeline(data_model='population', feature_learners=[], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=[], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['featuretools', 'memory: 1d', 'simple features'])
pipe5.check(df_featuretools_training)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Checking... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
OK.
pipe5.fit(df_featuretools_training)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
OK.
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 XGBoost: Training as predictor... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:09
Trained pipeline.
Time taken: 0:00:09.162672.
Pipeline(data_model='population', feature_learners=[], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=[], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['featuretools', 'memory: 1d', 'simple features'])
pipe5.score(df_featuretools_test)
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Preprocessing... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
date time | set used | target | mae | rmse | rsquared | |
---|---|---|---|---|---|---|
0 | 2024-09-12 12:53:34 | featuretools_training | pm2.5 | 38.0455 | 54.4693 | 0.6567 |
1 | 2024-09-12 12:53:34 | featuretools_test | pm2.5 | 45.3084 | 64.2717 | 0.5373 |
2.6 Using tsfresh¶
Next, we construct features with tsfresh. tsfresh is based on pandas and relies on explicit copies for many operations. This leads to excessive memory consumption that renders tsfresh nearly unusable for real-world scenarios. Remember, this is a relatively small data set.
To limit the memory consumption, we undertake the following steps:
- We limit ourselves to a memory of 1 day from any point in time. This is necessary because tsfresh duplicates records for every time stamp; if we looked back 7 days instead of one, memory consumption would be roughly seven times as high.
- We extract only tsfresh's MinimalFCParameters and IndexBasedFCParameters (the latter is a superset of TimeBasedFCParameters).
To make sure that tsfresh's features can be compared to getML's features, we also do the following (a rough sketch of the whole procedure follows the list):
- We apply tsfresh's built-in feature selection algorithm.
- Of the remaining features, we only keep the 40 features most correlated with the target (in terms of the absolute value of the correlation).
- We add the original columns as additional features.
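The heavy lifting happens inside the TSFreshBuilder utility. Purely for illustration, the sketch below shows roughly what such a procedure can look like in terms of tsfresh's public API (roll_time_series, MinimalFCParameters, extract_features, select_features); the variable names, the restriction to the minimal feature set, and the positional alignment of the target with the rolled windows are assumptions, not the utility's actual code.
if RUN_TSFRESH:
    from tsfresh import extract_features, select_features
    from tsfresh.feature_extraction import MinimalFCParameters
    from tsfresh.utilities.dataframe_functions import roll_time_series

    # Roll the series so that every row gets its own window of up to 24 hours of history.
    rolled = roll_time_series(
        data_train_pandas.drop(columns=["pm2.5"]),
        column_id="id",
        column_sort="date",
        max_timeshift=24,
    )

    # Extract only a small, cheap feature set to keep memory consumption in check.
    extracted = extract_features(
        rolled,
        column_id="id",
        column_sort="date",
        default_fc_parameters=MinimalFCParameters(),
    )

    # Align the target with the rolled windows (assumes one window per original row,
    # in chronological order) and apply tsfresh's built-in feature selection.
    target = data_train_pandas["pm2.5"].copy()
    target.index = extracted.index
    selected = select_features(extracted.dropna(axis=1), target)

    # Keep only the 40 features most correlated (in absolute value) with the target.
    top_40 = selected.corrwith(target).abs().nlargest(40).index
    selected = selected[top_40]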
data_train_pandas
DEWP | TEMP | PRES | Iws | Is | Ir | pm2.5 | date | id | |
---|---|---|---|---|---|---|---|---|---|
0 | -16.0 | -4.0 | 1020.0 | 1.79 | 0.0 | 0.0 | 129.0 | 2010-01-02 00:00:00 | 1 |
1 | -15.0 | -4.0 | 1020.0 | 2.68 | 0.0 | 0.0 | 148.0 | 2010-01-02 01:00:00 | 1 |
2 | -11.0 | -5.0 | 1021.0 | 3.57 | 0.0 | 0.0 | 159.0 | 2010-01-02 02:00:00 | 1 |
3 | -7.0 | -5.0 | 1022.0 | 5.36 | 1.0 | 0.0 | 181.0 | 2010-01-02 03:00:00 | 1 |
4 | -7.0 | -5.0 | 1022.0 | 6.25 | 2.0 | 0.0 | 138.0 | 2010-01-02 04:00:00 | 1 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
33091 | -19.0 | 7.0 | 1013.0 | 114.87 | 0.0 | 0.0 | 22.0 | 2013-12-31 19:00:00 | 1 |
33092 | -21.0 | 7.0 | 1014.0 | 119.79 | 0.0 | 0.0 | 18.0 | 2013-12-31 20:00:00 | 1 |
33093 | -21.0 | 7.0 | 1014.0 | 125.60 | 0.0 | 0.0 | 23.0 | 2013-12-31 21:00:00 | 1 |
33094 | -21.0 | 6.0 | 1014.0 | 130.52 | 0.0 | 0.0 | 20.0 | 2013-12-31 22:00:00 | 1 |
33095 | -20.0 | 7.0 | 1014.0 | 137.67 | 0.0 | 0.0 | 23.0 | 2013-12-31 23:00:00 | 1 |
33096 rows × 9 columns
if RUN_TSFRESH:
    tsfresh_builder = TSFreshBuilder(
        num_features=200, memory=24, column_id="id", time_stamp="date", target="pm2.5"
    )

    tsfresh_training = tsfresh_builder.fit(data_train_pandas)
    tsfresh_test = tsfresh_builder.transform(data_test_pandas)

    data_tsfresh_training = getml.data.DataFrame.from_pandas(
        tsfresh_training, name="tsfresh_training"
    )
    data_tsfresh_test = getml.data.DataFrame.from_pandas(
        tsfresh_test, name="tsfresh_test"
    )
tsfresh does not contain built-in machine learning algorithms. In order to ensure a fair comparison, we use the exact same machine learning algorithm we have also used for getML: an XGBoost regressor with all hyperparameters set to their default values.
In order to do so, we load the tsfresh features into the getML engine.
if not RUN_TSFRESH:
    data_tsfresh_training = load_or_retrieve(
        "https://static.getml.com/datasets/air_pollution/tsfresh/tsfresh_training.csv"
    )
    data_tsfresh_test = load_or_retrieve(
        "https://static.getml.com/datasets/air_pollution/tsfresh/tsfresh_test.csv"
    )
Loading 'tsfresh_training' from disk (project folder).
Loading 'tsfresh_test' from disk (project folder).
As usual, we need to set roles:
def set_roles_tsfresh(df):
    df.set_role(["date"], getml.data.roles.time_stamp)
    df.set_role(["pm2.5"], getml.data.roles.target)
    df.set_role(df.roles.unused, getml.data.roles.numerical)
    df.set_role(["id"], getml.data.roles.unused_float)
    return df
df_tsfresh_training = set_roles_tsfresh(data_tsfresh_training)
df_tsfresh_test = set_roles_tsfresh(data_tsfresh_test)
In this case, our pipeline is very simple. It only consists of a single XGBoostRegressor.
predictor = getml.predictors.XGBoostRegressor()
pipe6 = getml.pipeline.Pipeline(
tags=["tsfresh", "memory: 1d", "simple features"], predictors=[predictor]
)
pipe6
Pipeline(data_model='population', feature_learners=[], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=[], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['tsfresh', 'memory: 1d', 'simple features'])
pipe6.check(df_tsfresh_training)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Checking... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
OK.
pipe6.fit(df_tsfresh_training)
Checking data model...
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
OK.
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 XGBoost: Training as predictor... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:06
Trained pipeline.
Time taken: 0:00:06.916135.
Pipeline(data_model='population', feature_learners=[], feature_selectors=[], include_categorical=False, loss_function='SquareLoss', peripheral=[], predictors=['XGBoostRegressor'], preprocessors=[], share_selected_features=0.5, tags=['tsfresh', 'memory: 1d', 'simple features'])
pipe6.score(df_tsfresh_test)
Staging... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00 Preprocessing... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • 00:00
date time | set used | target | mae | rmse | rsquared | |
---|---|---|---|---|---|---|
0 | 2024-09-12 12:53:43 | tsfresh_training | pm2.5 | 40.8062 | 57.7874 | 0.6106 |
1 | 2024-09-12 12:53:44 | tsfresh_test | pm2.5 | 46.698 | 65.9163 | 0.5105 |
2.7 Studying features¶
pipe1.features
target | name | correlation | importance | |
---|---|---|---|---|
0 | pm2.5 | feature_1_1 | 0.7269 | 0.19987785 |
1 | pm2.5 | feature_1_2 | 0.7046 | 0.11091681 |
2 | pm2.5 | feature_1_3 | 0.7158 | 0.08472355 |
3 | pm2.5 | feature_1_4 | 0.6812 | 0.01239961 |
4 | pm2.5 | feature_1_5 | 0.7363 | 0.26067 |
... | ... | ... | ... | |
11 | pm2.5 | temp | -0.2112 | 0.0036176 |
12 | pm2.5 | pres | 0.0811 | 0.00604251 |
13 | pm2.5 | iws | -0.2166 | 0.00110227 |
14 | pm2.5 | is | 0.0045 | 0.00006856 |
15 | pm2.5 | ir | -0.0541 | 0.00071485 |
pipe1.features.sort(by="importances")[0].sql
DROP TABLE IF EXISTS "FEATURE_1_5";
CREATE TABLE "FEATURE_1_5" AS
SELECT SUM(
CASE
WHEN ( t2."iws" > 2.996864 ) AND ( t1."date" - t2."date" > 111439.618138 ) THEN COALESCE( t1."dewp" - 1.513306719893546, 0.0 ) * 0.03601591249256588 + COALESCE( t1."temp" - 11.89115103127079, 0.0 ) * -0.04266314571848353 + COALESCE( t1."is" - 0.06613439787092482, 0.0 ) * -0.04725168059692409 + COALESCE( t1."ir" - 0.2117099135063207, 0.0 ) * -0.06318182426623756 + COALESCE( t1."pres" - 1016.467221113307, 0.0 ) * -0.01398384065516279 + COALESCE( t1."iws" - 25.06232468396705, 0.0 ) * -0.001201895740197315 + COALESCE( t1."date" - 1326374597.365269, 0.0 ) * -1.923501069200198e-07 + COALESCE( t2."dewp" - 1.668379064426448, 0.0 ) * 0.001731655710118117 + COALESCE( t2."is" - 0.06086063096820984, 0.0 ) * 0.0606336044544106 + COALESCE( t2."ir" - 0.2111990813489665, 0.0 ) * -0.0540256903488962 + COALESCE( t2."temp" - 12.06001450501632, 0.0 ) * -0.00767357563612661 + COALESCE( t2."pres" - 1016.398404448205, 0.0 ) * 0.002398206365154774 + COALESCE( t2."iws" - 25.00605463556588, 0.0 ) * 0.000683821481632905 + COALESCE( t2."date" - 1326341256.690439, 0.0 ) * 1.921124564163103e-07 + -3.7650673692887868e-02
WHEN ( t2."iws" > 2.996864 ) AND ( t1."date" - t2."date" <= 111439.618138 OR t1."date" IS NULL OR t2."date" IS NULL ) THEN COALESCE( t1."dewp" - 1.513306719893546, 0.0 ) * -0.09708854784677848 + COALESCE( t1."temp" - 11.89115103127079, 0.0 ) * 0.1898230328866885 + COALESCE( t1."is" - 0.06613439787092482, 0.0 ) * 0.2216695565524211 + COALESCE( t1."ir" - 0.2117099135063207, 0.0 ) * 0.2229264173214936 + COALESCE( t1."pres" - 1016.467221113307, 0.0 ) * 0.01369730329877567 + COALESCE( t1."iws" - 25.06232468396705, 0.0 ) * 0.008759560240072777 + COALESCE( t1."date" - 1326374597.365269, 0.0 ) * 3.830243422560845e-05 + COALESCE( t2."dewp" - 1.668379064426448, 0.0 ) * 0.01388305933958263 + COALESCE( t2."is" - 0.06086063096820984, 0.0 ) * -0.05433702238361345 + COALESCE( t2."ir" - 0.2111990813489665, 0.0 ) * -0.170863758308725 + COALESCE( t2."temp" - 12.06001450501632, 0.0 ) * 0.04150526461437114 + COALESCE( t2."pres" - 1016.398404448205, 0.0 ) * 0.0668720727713986 + COALESCE( t2."iws" - 25.00605463556588, 0.0 ) * 0.0003743003078498523 + COALESCE( t2."date" - 1326341256.690439, 0.0 ) * -3.830000792178088e-05 + -2.7370376342137779e+00
WHEN ( t2."iws" <= 2.996864 OR t2."iws" IS NULL ) AND ( t1."dewp" > 11.000000 ) THEN COALESCE( t1."dewp" - 1.513306719893546, 0.0 ) * 0.1393420062387328 + COALESCE( t1."temp" - 11.89115103127079, 0.0 ) * -0.01209129738437093 + COALESCE( t1."is" - 0.06613439787092482, 0.0 ) * -0.1331350023952664 + COALESCE( t1."ir" - 0.2117099135063207, 0.0 ) * -0.01622004854424353 + COALESCE( t1."pres" - 1016.467221113307, 0.0 ) * 0.001749305913977287 + COALESCE( t1."iws" - 25.06232468396705, 0.0 ) * 0.003366205707337489 + COALESCE( t1."date" - 1326374597.365269, 0.0 ) * -1.876475085083195e-07 + COALESCE( t2."dewp" - 1.668379064426448, 0.0 ) * -0.06649947838403912 + COALESCE( t2."is" - 0.06086063096820984, 0.0 ) * -0.1410305376611299 + COALESCE( t2."ir" - 0.2111990813489665, 0.0 ) * -0.1304194940194324 + COALESCE( t2."temp" - 12.06001450501632, 0.0 ) * -0.1337048685573981 + COALESCE( t2."pres" - 1016.398404448205, 0.0 ) * -0.01591411669667236 + COALESCE( t2."iws" - 25.00605463556588, 0.0 ) * 0.05236298037580929 + COALESCE( t2."date" - 1326341256.690439, 0.0 ) * 1.819565845131136e-07 + 1.4688062110162212e+00
WHEN ( t2."iws" <= 2.996864 OR t2."iws" IS NULL ) AND ( t1."dewp" <= 11.000000 OR t1."dewp" IS NULL ) THEN COALESCE( t1."dewp" - 1.513306719893546, 0.0 ) * 0.04638276959734068 + COALESCE( t1."temp" - 11.89115103127079, 0.0 ) * -0.02618590276218461 + COALESCE( t1."is" - 0.06613439787092482, 0.0 ) * -0.04277883865346508 + COALESCE( t1."ir" - 0.2117099135063207, 0.0 ) * -0.02541052942548589 + COALESCE( t1."pres" - 1016.467221113307, 0.0 ) * -0.02717894125656619 + COALESCE( t1."iws" - 25.06232468396705, 0.0 ) * -0.002780031916675637 + COALESCE( t1."date" - 1326374597.365269, 0.0 ) * -1.128077615757182e-06 + COALESCE( t2."dewp" - 1.668379064426448, 0.0 ) * -0.009594127784270889 + COALESCE( t2."is" - 0.06086063096820984, 0.0 ) * -0.2440485267005635 + COALESCE( t2."ir" - 0.2111990813489665, 0.0 ) * -0.1198304522130264 + COALESCE( t2."temp" - 12.06001450501632, 0.0 ) * -0.07233231618474294 + COALESCE( t2."pres" - 1016.398404448205, 0.0 ) * -0.006858030218452998 + COALESCE( t2."iws" - 25.00605463556588, 0.0 ) * -0.4131514311185667 + COALESCE( t2."date" - 1326341256.690439, 0.0 ) * 1.129955468463647e-06 + -9.4340473685385735e+00
ELSE NULL
END
) AS "feature_1_5",
t1.rowid AS rownum
FROM "POPULATION__STAGING_TABLE_1" t1
INNER JOIN "POPULATION__STAGING_TABLE_2" t2
ON 1 = 1
WHERE t2."date" <= t1."date"
AND ( t2."date__7_000000_days" > t1."date" OR t2."date__7_000000_days" IS NULL )
GROUP BY t1.rowid;
This is a typical RelMT feature: the aggregation (SUM in this case) is applied conditionally – the conditions are learned by RelMT – to a set of linear models whose weights are, again, learned by RelMT.
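Spelled out as plain Python, the structure of such a feature looks roughly like the toy function below. This is purely illustrative pseudo-logic, not getML code; the threshold and the weights are simplified from the SQL above.
def toy_relmt_feature(t1, window):
    """t1: the current row; window: all rows t2 within the 7-day memory before t1."""
    total = 0.0
    for t2 in window:
        if t2["iws"] > 2.996864:  # a condition learned by RelMT
            # a linear model over columns of both rows; weights learned by RelMT
            total += 0.036 * t1["dewp"] - 0.043 * t1["temp"] + 0.002 * t2["pres"]
        else:
            total += 0.046 * t1["dewp"] - 0.026 * t1["temp"] - 0.413 * t2["iws"]
    return total  # the SUM aggregation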
2.8 Productionization¶
It is possible to productionize the pipeline by transpiling the features into production-ready SQL code. Please also refer to getML's sqlite3 module.
# Creates a folder named air_pollution_pipeline containing the SQL code
pipe1.features.to_sql().save("air_pollution_pipeline")
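As a hedged sketch of what consuming that folder could look like outside of getML, the snippet below uses Python's standard sqlite3 module rather than getML's sqlite3 helpers. It assumes that the staging tables referenced by the features already exist in the database and that save(...) wrote one .sql file per feature.
import pathlib
import sqlite3

conn = sqlite3.connect("air_pollution.db")  # assumed database file
for sql_file in sorted(pathlib.Path("air_pollution_pipeline").glob("*.sql")):
    conn.executescript(sql_file.read_text())  # creates one table per feature
conn.commit()
conn.close()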
3. Discussion¶
We have seen that getML outperforms tsfresh by more than 10 percentage points in terms of R-squared. We now want to analyze why that is.
There are two possible hypotheses:
- getML outperforms featuretools and tsfresh because it uses feature learning and is able to produce more complex features.
- getML outperforms featuretools and tsfresh because it makes better use of memory and is able to look back further.
Let's summarize our findings:
pipes = [pipe1, pipe2, pipe3, pipe4, pipe5, pipe6]
comparison = pd.DataFrame(
dict(
tool=[pipe.tags[0] for pipe in pipes],
memory=[pipe.tags[1].split()[1] for pipe in pipes],
feature_complexity=[pipe.tags[2].split()[0] for pipe in pipes],
rsquared=[f"{pipe.rsquared:.1%}" for pipe in pipes],
rmse=[f"{pipe.rmse:.3}" for pipe in pipes],
)
)
comparison
tool | memory | feature_complexity | rsquared | rmse | |
---|---|---|---|---|---|
0 | getML: RelMT | 7d | complex | 62.7% | 57.7 |
1 | getML: RelMT | 1d | complex | 48.0% | 67.5 |
2 | getML: FastProp | 7d | simple | 56.1% | 62.6 |
3 | getML: FastProp | 1d | simple | 54.6% | 63.4 |
4 | featuretools | 1d | simple | 53.7% | 64.3 |
5 | tsfresh | 1d | simple | 51.0% | 65.9 |
The summary table shows that a combination of both hypotheses explains why getML outperforms featuretools and tsfresh. With a memory of seven days, complex features do considerably better than simple features. With a memory of only one day, the complex features actually fall behind the simple ones, and extending the memory to seven days helps the simple features only a little. Only when the algorithm is allowed to look back seven days and to build complex features do we get the best results.
This suggests that getML outperforms featuretools and tsfresh because it makes more efficient use of memory and can therefore look back further, and because RelMT uses feature learning to build more complex features that exploit this longer look-back window.
getml.engine.shutdown()
4. Conclusion¶
We have compared getML's feature learning algorithms to the brute-force (propositionalization) feature engineering approaches of featuretools and tsfresh on a data set related to air pollution in China. We found that getML significantly outperforms both. These results are consistent with the view that feature learning can yield significant improvements over simple propositionalization approaches.
However, there are other datasets on which simple propositionalization performs well. Our suggestion is therefore to think of algorithms like FastProp and RelMT as tools in a toolbox. If a simple tool like FastProp gets the job done, then use that. But when you need more advanced approaches, like RelMT, you should have them at your disposal as well.
You are encouraged to reproduce these results.