Why feature learning is better than simple propositionalization

In this notebook, we compare getML to featuretools and tsfresh, both of which are open-source libraries for feature engineering. We find that the more advanced algorithms included in getML yield significantly better predictions on this dataset, and we then discuss why that is.

Summary:

  • Prediction type: Regression model
  • Domain: Air pollution
  • Prediction target: pm 2.5 concentration
  • Source data: Multivariate time series
  • Population size: 41757

Background

Many data scientists and AutoML tools use propositionalization methods for feature engineering. These propositionalization methods usually work as follows (a minimal sketch of the pattern follows this list):

  • Generate a large number of hard-coded features,
  • Use feature selection to pick a subset of these features.
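Here is a minimal sketch of this two-step pattern on a toy series, using plain pandas (illustrative only; the column names are made up, and featuretools and tsfresh generate far richer feature sets):

import numpy as np
import pandas as pd

# Toy time series: one sensor column and a target (names are illustrative).
df = pd.DataFrame({"value": np.random.randn(1000).cumsum()})
df["target"] = df["value"].shift(-1)

# Step 1: brute-force a large number of hard-coded rolling-window features.
candidates = {
    f"value_{agg}_{window}h": df["value"].rolling(window).agg(agg)
    for window in (3, 12, 24)
    for agg in ("mean", "max", "min", "std", "sum")
}
features = pd.DataFrame(candidates)

# Step 2: feature selection, e.g. keep the candidates most correlated with the target.
correlations = features.apply(lambda col: col.corr(df["target"])).abs()
selected = features[correlations.nlargest(5).index]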

By contrast, getML relies on feature learning: it adapts machine learning approaches such as decision trees or gradient boosting to the problem of extracting features from relational data and time series.
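As a contrast to the sketch above, here is a toy illustration of the feature learning idea (this is not getML's actual algorithm, just a sketch of the concept): instead of enumerating fixed aggregations over the window, a small learner, in this case a shallow decision tree over lagged values, derives a data-adaptive feature itself.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
y = rng.normal(size=500).cumsum()  # toy series

# Candidate inputs: the last 24 lagged values at each point in time.
lags = np.column_stack([np.roll(y, k) for k in range(1, 25)])[24:]
target = y[24:]

# Feature learning in miniature: a shallow tree decides which lags matter
# and where to split them, instead of relying on hand-coded aggregations.
tree = DecisionTreeRegressor(max_depth=2).fit(lags, target)
learned_feature = tree.predict(lags)  # one data-adaptive feature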

In this notebook, we will benchmark getML against featuretools and tsfresh. Both of these libraries use propositionalization approaches for feature engineering.

As our example dataset, we use a publicly available dataset on air pollution in Beijing, China (https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data). The dataset was originally used in the following study:

Liang, X., Zou, T., Guo, B., Li, S., Zhang, H., Zhang, S., Huang, H. and Chen, S. X. (2015). Assessing Beijing's PM2.5 pollution: severity, weather impact, APEC and winter heating. Proceedings of the Royal Society A, 471, 20150257.

We find that getML significantly outperforms featuretools and tsfresh in terms of predictive accuracy (an R-squared of 63.1% vs. at most 53.7%). Our findings indicate that getML's feature learning algorithms are better at adapting to a dataset and also scale better due to their lower memory requirements.

A web frontend for getML

The getML monitor is a frontend built to support your work with getML. It displays information such as the imported data frames and trained pipelines, and allows for easy data and feature exploration. You can launch the getML monitor here.

Analysis

We start the analysis with the setup of our session.

In [1]:
import os
from urllib import request

import getml
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

from utils.load import load_or_retrieve

%matplotlib inline
In [2]:
# NOTE: Due to featuretools's and tsfresh's substantial memory requirements, this notebook will not run on
# try.getml.com when RUN_FEATURETOOLS or RUN_TSFRESH is set.

RUN_FEATURETOOLS = False
RUN_TSFRESH = False

if RUN_FEATURETOOLS:
    from utils import FTTimeSeriesBuilder

if RUN_TSFRESH:
    from utils import TSFreshBuilder
In [3]:
getml.engine.launch()
getml.set_project("air_pollution")
Launched the getML engine. The log output will be stored in /home/patrick/.getML/logs/20220323133117.log.


Loading pipelines...
[========================================] 100%


Connected to project 'air_pollution'

1. Loading data

1.1 Download from source

Downloading the raw data from the UCI Machine Learning Repository and bringing it into a prediction-ready format takes some time. To get to the getML model building as quickly as possible, we have prepared the data for you and excluded the preparation code from this notebook.

In [4]:
data = getml.datasets.load_air_pollution()
Loading population...
[========================================] 100%

2. Predictive modeling

2.1 Pipeline 1: Complex features, 7 days

First, we split our data. We introduce a simple, time-based split: all data up to 2013-12-31 is used for training, and everything from 2014-01-01 onwards is used for testing.

In [5]:
split = getml.data.split.time(
    population=data, time_stamp="date", test=getml.data.time.datetime(2014, 1, 1)
)

split
Out[5]:
0 train
1 train
2 train
3 train
4 train
...

41757 rows
type: StringColumnView

For our first experiment, we will learn complex features and allow a memory of up to seven days. That means that at any given point in time, the algorithm is allowed to look back seven days into the past.

In [6]:
time_series1 = getml.data.TimeSeries(
    population=data,
    alias="population",
    split=split,
    time_stamps="date",
    memory=getml.data.time.days(7),
)

time_series1
Out[6]:

data model

diagram


population -> population, on date <= date, memory: 7.0 days

staging

data frames staging table
0 population POPULATION__STAGING_TABLE_1
1 population POPULATION__STAGING_TABLE_2

container

population

subset name rows type
0 test population 8661 View
1 train population 33096 View

peripheral

name rows type
0 population 41757 DataFrame
In [7]:
relmt = getml.feature_learning.RelMT(
    num_features=10,
    loss_function=getml.feature_learning.loss_functions.SquareLoss,
    seed=4367,
    num_threads=1,
)

predictor = getml.predictors.XGBoostRegressor(n_jobs=1)

pipe1 = getml.pipeline.Pipeline(
    tags=["getML: RelMT", "memory: 7d", "complex features"],
    data_model=time_series1.data_model,
    feature_learners=[relmt],
    predictors=[predictor],
)

pipe1
Out[7]:
Pipeline(data_model='population',
         feature_learners=['RelMT'],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=['population'],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['getML: RelMT', 'memory: 7d', 'complex features'])

It is good practice to always check your data model first, even though check(...) is also called by fit(...). That enables us to make last-minute changes.

In [8]:
pipe1.check(time_series1.train)
Checking data model...


Staging...
[========================================] 100%

Checking...
[========================================] 100%


OK.

We now fit the pipeline on the training set and evaluate it both in-sample and out-of-sample.

In [9]:
pipe1.fit(time_series1.train)
Checking data model...


Staging...
[========================================] 100%


OK.


Staging...
[========================================] 100%

RelMT: Training features...
[========================================] 100%

RelMT: Building features...
[========================================] 100%

XGBoost: Training as predictor...
[========================================] 100%


Trained pipeline.
Time taken: 0h:4m:23.190215

Out[9]:
Pipeline(data_model='population',
         feature_learners=['RelMT'],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=['population'],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['getML: RelMT', 'memory: 7d', 'complex features', 'container-K1O1Nc'])

url: http://localhost:1709/#/getpipeline/air_pollution/Xa4kSu/0/
In [10]:
pipe1.score(time_series1.test)

Staging...
[========================================] 100%

RelMT: Building features...
[========================================] 100%


Out[10]:
date time set used target mae rmse rsquared
0 2022-03-23 13:36:01 train pm2.5 35.1664 50.9038 0.6925
1 2022-03-23 13:36:10 test pm2.5 39.6596 57.5014 0.6306

2.2 Pipeline 2: Complex features, 1 day

For our second experiment, we will learn complex features, but only allow a memory of one day.

In [11]:
time_series2 = getml.data.TimeSeries(
    population=data,
    alias="population",
    split=split,
    time_stamps="date",
    memory=getml.data.time.days(1),
)

time_series2
Out[11]:

data model

diagram


population -> population, on date <= date, memory: 1.0 days

staging

data frames staging table
0 population POPULATION__STAGING_TABLE_1
1 population POPULATION__STAGING_TABLE_2

container

population

subset name rows type
0 test population 8661 View
1 train population 33096 View

peripheral

name rows type
0 population 41757 DataFrame
In [12]:
relmt = getml.feature_learning.RelMT(
    num_features=10,
    loss_function=getml.feature_learning.loss_functions.SquareLoss,
    seed=4367,
    num_threads=1,
)

predictor = getml.predictors.XGBoostRegressor(n_jobs=1)

pipe2 = getml.pipeline.Pipeline(
    tags=["getML: RelMT", "memory: 1d", "complex features"],
    data_model=time_series2.data_model,
    feature_learners=[relmt],
    predictors=[predictor],
)

pipe2
Out[12]:
Pipeline(data_model='population',
         feature_learners=['RelMT'],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=['population'],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['getML: RelMT', 'memory: 1d', 'complex features'])
In [13]:
pipe2.check(time_series2.train)
Checking data model...


Staging...
[========================================] 100%

Checking...
[========================================] 100%


OK.
In [14]:
pipe2.fit(time_series2.train)
Checking data model...


Staging...
[========================================] 100%


OK.


Staging...
[========================================] 100%

RelMT: Training features...
[========================================] 100%

RelMT: Building features...
[========================================] 100%

XGBoost: Training as predictor...
[========================================] 100%


Trained pipeline.
Time taken: 0h:1m:0.791243

Out[14]:
Pipeline(data_model='population',
         feature_learners=['RelMT'],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=['population'],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['getML: RelMT', 'memory: 1d', 'complex features', 'container-sXpcLM'])

url: http://localhost:1709/#/getpipeline/air_pollution/Wva6YJ/0/
In [15]:
pipe2.score(time_series2.test)

Staging...
[========================================] 100%

RelMT: Building features...
[========================================] 100%


Out[15]:
date time set used target mae rmse rsquared
0 2022-03-23 13:37:28 train pm2.5 38.1593 55.3541 0.6366
1 2022-03-23 13:37:30 test pm2.5 47.5451 66.9418 0.4901

2.3 Pipeline 3: Simple features, 7 days

For our third experiment, we will learn simple features and allow a memory of up to seven days.

In [16]:
time_series3 = getml.data.TimeSeries(
    population=data,
    alias="population",
    split=split,
    time_stamps="date",
    memory=getml.data.time.days(7),
)

time_series3
Out[16]:

data model

diagram


population -> population, on date <= date, memory: 7.0 days

staging

data frames staging table
0 population POPULATION__STAGING_TABLE_1
1 population POPULATION__STAGING_TABLE_2

container

population

subset name rows type
0 test population 8661 View
1 train population 33096 View

peripheral

name rows type
0 population 41757 DataFrame
In [17]:
fast_prop = getml.feature_learning.FastProp(
    loss_function=getml.feature_learning.loss_functions.SquareLoss,
    num_threads=1,
    aggregation=getml.feature_learning.FastProp.agg_sets.All,
)

predictor = getml.predictors.XGBoostRegressor(n_jobs=1)

pipe3 = getml.pipeline.Pipeline(
    tags=["getML: FastProp", "memory: 7d", "simple features"],
    data_model=time_series3.data_model,
    feature_learners=[fast_prop],
    predictors=[predictor],
)

pipe3
Out[17]:
Pipeline(data_model='population',
         feature_learners=['FastProp'],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=['population'],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['getML: FastProp', 'memory: 7d', 'simple features'])
In [18]:
pipe3.check(time_series3.train)
Checking data model...


Staging...
[========================================] 100%

Checking...
[========================================] 100%


OK.
In [19]:
pipe3.fit(time_series3.train)
Checking data model...


Staging...
[========================================] 100%


OK.


Staging...
[========================================] 100%

FastProp: Trying 330 features...
[========================================] 100%

FastProp: Building features...
[========================================] 100%

XGBoost: Training as predictor...
[========================================] 100%


Trained pipeline.
Time taken: 0h:2m:47.234395

Out[19]:
Pipeline(data_model='population',
         feature_learners=['FastProp'],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=['population'],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['getML: FastProp', 'memory: 7d', 'simple features', 'container-AKVTYt'])

url: http://localhost:1709/#/getpipeline/air_pollution/Ra29cj/0/
In [20]:
pipe3.score(time_series3.test)

Staging...
[========================================] 100%

FastProp: Building features...
[========================================] 100%


Out[20]:
date time set used target mae rmse rsquared
0 2022-03-23 13:40:34 train pm2.5 36.1998 50.8575 0.7031
1 2022-03-23 13:40:49 test pm2.5 46.2451 63.7779 0.5449

2.4 Pipeline 4: Simple features, 1 day

For our fourth experiment, we will learn simple features and allow a memory of up to one day.

In [21]:
time_series4 = getml.data.TimeSeries(
    population=data,
    alias="population",
    split=split,
    time_stamps="date",
    memory=getml.data.time.days(1),
)

time_series4
Out[21]:

data model

diagram


population -> population, on date <= date, memory: 1.0 days

staging

data frames staging table
0 population POPULATION__STAGING_TABLE_1
1 population POPULATION__STAGING_TABLE_2

container

population

subset name rows type
0 test population 8661 View
1 train population 33096 View

peripheral

name rows type
0 population 41757 DataFrame
In [22]:
fast_prop = getml.feature_learning.FastProp(
    loss_function=getml.feature_learning.loss_functions.SquareLoss,
    num_threads=1,
    aggregation=getml.feature_learning.FastProp.agg_sets.All,
)

predictor = getml.predictors.XGBoostRegressor(n_jobs=1)

pipe4 = getml.pipeline.Pipeline(
    tags=["getML: FastProp", "memory: 1d", "simple features"],
    data_model=time_series4.data_model,
    feature_learners=[fast_prop],
    predictors=[predictor],
)

pipe4
Out[22]:
Pipeline(data_model='population',
         feature_learners=['FastProp'],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=['population'],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['getML: FastProp', 'memory: 1d', 'simple features'])
In [23]:
pipe4.check(time_series4.train)
Checking data model...


Staging...
[========================================] 100%

Checking...
[========================================] 100%


OK.
In [24]:
pipe4.fit(time_series4.train)
Checking data model...


Staging...
[========================================] 100%


OK.


Staging...
[========================================] 100%

FastProp: Trying 330 features...
[========================================] 100%

FastProp: Building features...
[========================================] 100%

XGBoost: Training as predictor...
[========================================] 100%


Trained pipeline.
Time taken: 0h:1m:1.842769

Out[24]:
Pipeline(data_model='population',
         feature_learners=['FastProp'],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=['population'],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['getML: FastProp', 'memory: 1d', 'simple features', 'container-IPHmCK'])

url: http://localhost:1709/#/getpipeline/air_pollution/t0BEM5/0/
In [25]:
pipe4.score(time_series4.test)

Staging...
[========================================] 100%

FastProp: Building features...
[========================================] 100%


Out[25]:
date time set used target mae rmse rsquared
0 2022-03-23 13:42:09 train pm2.5 38.3429 55.1886 0.6443
1 2022-03-23 13:42:13 test pm2.5 44.1997 63.4948 0.545

2.5 Using featuretools

To make things a bit easier, we have written high-level wrappers around featuretools and tsfresh which we placed in a separate module (utils).

In [26]:
data_train_pandas = time_series1.train.population.to_pandas()
data_test_pandas = time_series1.test.population.to_pandas()

tsfresh and featuretools require the time series to have ids. Since there is only a single time series, all rows share the same id.

In [27]:
data_train_pandas["id"] = 1
data_test_pandas["id"] = 1
In [28]:
if RUN_FEATURETOOLS:
    ft_builder = FTTimeSeriesBuilder(
        num_features=200,
        horizon=pd.Timedelta(days=0),
        memory=pd.Timedelta(days=1),
        column_id="id",
        time_stamp="date",
        target="pm2.5",
    )
    #
    featuretools_training = ft_builder.fit(data_train_pandas)
    featuretools_test = ft_builder.transform(data_test_pandas)

    data_featuretools_training = getml.data.DataFrame.from_pandas(
        featuretools_training, name="featuretools_training"
    )
    data_featuretools_test = getml.data.DataFrame.from_pandas(
        featuretools_test, name="featuretools_test"
    )
In [29]:
if not RUN_FEATURETOOLS:
    data_featuretools_training = load_or_retrieve(
        "https://static.getml.com/datasets/air_pollution/featuretools/featuretools_training.csv"
    )
    data_featuretools_test = load_or_retrieve(
        "https://static.getml.com/datasets/air_pollution/featuretools/featuretools_test.csv"
    )
Loading 'featuretools_training' from disk (project folder).

Loading 'featuretools_test' from disk (project folder).

In [30]:
def set_roles_featuretools(df):
    df.set_role(["date"], getml.data.roles.time_stamp)
    df.set_role(["pm2.5"], getml.data.roles.target)
    df.set_role(df.roles.unused, getml.data.roles.numerical)
    df.set_role(["id"], getml.data.roles.unused_float)
    return df


df_featuretools_training = set_roles_featuretools(data_featuretools_training)
df_featuretools_test = set_roles_featuretools(data_featuretools_test)
In [31]:
predictor = getml.predictors.XGBoostRegressor()

pipe5 = getml.pipeline.Pipeline(
    tags=["featuretools", "memory: 1d", "simple features"], predictors=[predictor]
)

pipe5
Out[31]:
Pipeline(data_model='population',
         feature_learners=[],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=[],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['featuretools', 'memory: 1d', 'simple features'])
In [32]:
pipe5.check(df_featuretools_training)
Checking data model...


Staging...
[========================================] 100%

Checking...
[========================================] 100%


OK.
In [33]:
pipe5.fit(df_featuretools_training)
Checking data model...


Staging...
[========================================] 100%


OK.


Staging...
[========================================] 100%

XGBoost: Training as predictor...
[========================================] 100%


Trained pipeline.
Time taken: 0h:0m:15.115218

Out[33]:
Pipeline(data_model='population',
         feature_learners=[],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=[],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['featuretools', 'memory: 1d', 'simple features'])

url: http://localhost:1709/#/getpipeline/air_pollution/ZSDfNQ/0/
In [34]:
pipe5.score(df_featuretools_test)

Staging...
[========================================] 100%


Out[34]:
date time set used target mae rmse rsquared
0 2022-03-23 13:42:34 featuretools_training pm2.5 38.0455 54.4693 0.6567
1 2022-03-23 13:42:35 featuretools_test pm2.5 45.3084 64.2717 0.5373

2.6 Using tsfresh

Next, we construct features with tsfresh. tsfresh is based on pandas and relies on explicit copies for many operations. This leads to excessive memory consumption that renders tsfresh nearly unusable for many real-world scenarios. Remember that this is a relatively small dataset.

To limit the memory consumption, we undertake the following steps:

  • We limit ourselves to a memory of 1 day from any point in time. This is necessary because tsfresh duplicates records for every time stamp: with hourly data, rolling the 33,096 training rows over a 24-hour window already produces roughly 33,096 × 24 ≈ 794,000 rows, and looking back 7 days instead of one would make the memory consumption about seven times as high.
  • We extract only tsfresh's MinimalFCParameters and IndexBasedFCParameters (the latter is a superset of TimeBasedFCParameters).

In order to make sure that tsfresh's features can be compared to getML's features, we also do the following (a rough sketch of the whole procedure follows after this list):

  • We apply tsfresh's built-in feature selection algorithm.
  • Of the remaining features, we only keep the 40 features most correlated with the target (in terms of the absolute value of the correlation).
  • We add the original columns as additional features.
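The steps above are hidden inside the TSFreshBuilder wrapper from the utils module. Roughly, and assuming tsfresh's standard API, they amount to something like the following (an illustrative sketch, not the wrapper's exact code; for simplicity, features are extracted from the sensor columns only):

import pandas as pd
from tsfresh import extract_features, select_features
from tsfresh.feature_extraction import MinimalFCParameters
from tsfresh.utilities.dataframe_functions import roll_time_series

# Duplicate the records into rolling windows of at most 24 hours
# (this duplication is what drives tsfresh's memory consumption).
sensors = data_train_pandas.drop(columns=["pm2.5"])
rolled = roll_time_series(
    sensors, column_id="id", column_sort="date", max_timeshift=24
)

# Extract the cheap MinimalFCParameters feature set for every window.
extracted = extract_features(
    rolled,
    column_id="id",
    column_sort="date",
    default_fc_parameters=MinimalFCParameters(),
)

# Align with the target, assuming one rolled window per original row, in order.
target = data_train_pandas["pm2.5"].reset_index(drop=True)
extracted = extracted.reset_index(drop=True)

# Apply tsfresh's built-in feature selection ...
selected = select_features(extracted, target)

# ... keep only the 40 features most correlated with the target ...
correlations = selected.apply(lambda col: col.corr(target)).abs()
top40 = selected[correlations.nlargest(40).index]

# ... and add the original columns as additional features.
tsfresh_features = pd.concat([data_train_pandas.reset_index(drop=True), top40], axis=1)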
In [35]:
data_train_pandas
Out[35]:
DEWP TEMP PRES Iws Is Ir pm2.5 date id
0 -16.0 -4.0 1020.0 1.79 0.0 0.0 129.0 2010-01-02 00:00:00 1
1 -15.0 -4.0 1020.0 2.68 0.0 0.0 148.0 2010-01-02 01:00:00 1
2 -11.0 -5.0 1021.0 3.57 0.0 0.0 159.0 2010-01-02 02:00:00 1
3 -7.0 -5.0 1022.0 5.36 1.0 0.0 181.0 2010-01-02 03:00:00 1
4 -7.0 -5.0 1022.0 6.25 2.0 0.0 138.0 2010-01-02 04:00:00 1
... ... ... ... ... ... ... ... ... ...
33091 -19.0 7.0 1013.0 114.87 0.0 0.0 22.0 2013-12-31 19:00:00 1
33092 -21.0 7.0 1014.0 119.79 0.0 0.0 18.0 2013-12-31 20:00:00 1
33093 -21.0 7.0 1014.0 125.60 0.0 0.0 23.0 2013-12-31 21:00:00 1
33094 -21.0 6.0 1014.0 130.52 0.0 0.0 20.0 2013-12-31 22:00:00 1
33095 -20.0 7.0 1014.0 137.67 0.0 0.0 23.0 2013-12-31 23:00:00 1

33096 rows × 9 columns

In [36]:
if RUN_TSFRESH:
    tsfresh_builder = TSFreshBuilder(
        num_features=200, memory=24, column_id="id", time_stamp="date", target="pm2.5"
    )
    #
    tsfresh_training = tsfresh_builder.fit(data_train_pandas)
    tsfresh_test = tsfresh_builder.transform(data_test_pandas)
    #
    data_tsfresh_training = getml.data.DataFrame.from_pandas(
        tsfresh_training, name="tsfresh_training"
    )
    data_tsfresh_test = getml.data.DataFrame.from_pandas(
        tsfresh_test, name="tsfresh_test"
    )

tsfresh does not contain built-in machine learning algorithms. To ensure a fair comparison, we use the same machine learning algorithm that we used for the getML pipelines: an XGBoost regressor with all hyperparameters set to their default values.

In order to do so, we load the tsfresh features into the getML engine.

In [37]:
if not RUN_TSFRESH:
    data_tsfresh_training = load_or_retrieve(
        "https://static.getml.com/datasets/air_pollution/tsfresh/tsfresh_training.csv"
    )
    data_tsfresh_test = load_or_retrieve(
        "https://static.getml.com/datasets/air_pollution/tsfresh/tsfresh_test.csv"
    )
Loading 'tsfresh_training' from disk (project folder).

Loading 'tsfresh_test' from disk (project folder).

As usual, we need to set roles:

In [38]:
def set_roles_tsfresh(df):
    df.set_role(["date"], getml.data.roles.time_stamp)
    df.set_role(["pm2.5"], getml.data.roles.target)
    df.set_role(df.roles.unused, getml.data.roles.numerical)
    df.set_role(["id"], getml.data.roles.unused_float)
    return df


df_tsfresh_training = set_roles_tsfresh(data_tsfresh_training)
df_tsfresh_test = set_roles_tsfresh(data_tsfresh_test)

In this case, our pipeline is very simple. It consists of nothing but a single XGBoostRegressor.

In [39]:
predictor = getml.predictors.XGBoostRegressor()

pipe6 = getml.pipeline.Pipeline(
    tags=["tsfresh", "memory: 1d", "simple features"], predictors=[predictor]
)

pipe6
Out[39]:
Pipeline(data_model='population',
         feature_learners=[],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=[],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['tsfresh', 'memory: 1d', 'simple features'])
In [40]:
pipe6.check(df_tsfresh_training)
Checking data model...


Staging...
[========================================] 100%

Checking...
[========================================] 100%


OK.
In [41]:
pipe6.fit(df_tsfresh_training)
Checking data model...


Staging...
[========================================] 100%


OK.


Staging...
[========================================] 100%

XGBoost: Training as predictor...
[========================================] 100%


Trained pipeline.
Time taken: 0h:0m:10.410626

Out[41]:
Pipeline(data_model='population',
         feature_learners=[],
         feature_selectors=[],
         include_categorical=False,
         loss_function=None,
         peripheral=[],
         predictors=['XGBoostRegressor'],
         preprocessors=[],
         share_selected_features=0.5,
         tags=['tsfresh', 'memory: 1d', 'simple features'])

url: http://localhost:1709/#/getpipeline/air_pollution/J5o5u2/0/
In [42]:
pipe6.score(df_tsfresh_test)

Staging...
[========================================] 100%


Out[42]:
date time set used target mae rmse rsquared
0 2022-03-23 13:42:48 tsfresh_training pm2.5 40.8062 57.7874 0.6106
1 2022-03-23 13:42:49 tsfresh_test pm2.5 46.698 65.9163 0.5105
In [43]:
pipe1.features
Out[43]:
target name correlation importance
0 pm2.5 feature_1_1 0.7269 0.18463717
1 pm2.5 feature_1_2 0.7046 0.11726964
2 pm2.5 feature_1_3 0.7158 0.08975971
3 pm2.5 feature_1_4 0.6811 0.01235796
4 pm2.5 feature_1_5 0.7363 0.27688485
... ... ... ...
11 pm2.5 temp -0.2112 0.00403082
12 pm2.5 pres 0.0811 0.00672836
13 pm2.5 iws -0.2166 0.00111994
14 pm2.5 is 0.0045 0.00006808
15 pm2.5 ir -0.0541 0.00060757

2.7 Studying features

In [44]:
pipe1.features.sort(by="importances")[0].sql
Out[44]:
DROP TABLE IF EXISTS "FEATURE_1_5";

CREATE TABLE "FEATURE_1_5" AS
SELECT SUM( 
    CASE
        WHEN ( t2."iws" > 2.996864 ) AND ( t1."date" - t2."date" > 111439.618138 ) THEN COALESCE( t1."dewp" - 1.514736211828887, 0.0 ) * 0.03601656954671504 + COALESCE( t1."temp" - 11.89228926884439, 0.0 ) * -0.04267662247010789 + COALESCE( t1."is" - 0.06612999800412481, 0.0 ) * -0.04725594429771846 + COALESCE( t1."ir" - 0.2116958286208502, 0.0 ) * -0.06321612715292692 + COALESCE( t1."pres" - 1016.466458208569, 0.0 ) * -0.01400320405937972 + COALESCE( t1."iws" - 25.06146098064165, 0.0 ) * -0.001201539145976257 + COALESCE( t1."date" - 1326377672.037789, 0.0 ) * -1.91747588977566e-07 + COALESCE( t2."dewp" - 1.668379064426448, 0.0 ) * 0.001731759164268386 + COALESCE( t2."is" - 0.06086063096820984, 0.0 ) * 0.06061734539826681 + COALESCE( t2."ir" - 0.2111990813489665, 0.0 ) * -0.05403702019138458 + COALESCE( t2."temp" - 12.06001450501632, 0.0 ) * -0.007673283821278228 + COALESCE( t2."pres" - 1016.398404448205, 0.0 ) * 0.002389770661357365 + COALESCE( t2."iws" - 25.00605463556588, 0.0 ) * 0.0006834212113433441 + COALESCE( t2."date" - 1326341256.690439, 0.0 ) * 1.915110364308629e-07 + -3.8304206632990674e-02
        WHEN ( t2."iws" > 2.996864 ) AND ( t1."date" - t2."date" <= 111439.618138 OR t1."date" IS NULL OR t2."date" IS NULL ) THEN COALESCE( t1."dewp" - 1.514736211828887, 0.0 ) * -0.09710953675504476 + COALESCE( t1."temp" - 11.89228926884439, 0.0 ) * 0.1898035531135342 + COALESCE( t1."is" - 0.06612999800412481, 0.0 ) * 0.2216653278054844 + COALESCE( t1."ir" - 0.2116958286208502, 0.0 ) * 0.2229778643018364 + COALESCE( t1."pres" - 1016.466458208569, 0.0 ) * 0.0137232972180141 + COALESCE( t1."iws" - 25.06146098064165, 0.0 ) * 0.008757818778673529 + COALESCE( t1."date" - 1326377672.037789, 0.0 ) * 3.828784685924325e-05 + COALESCE( t2."dewp" - 1.668379064426448, 0.0 ) * 0.01390967975925731 + COALESCE( t2."is" - 0.06086063096820984, 0.0 ) * -0.05439808805172083 + COALESCE( t2."ir" - 0.2111990813489665, 0.0 ) * -0.1708184586046546 + COALESCE( t2."temp" - 12.06001450501632, 0.0 ) * 0.04153068423706643 + COALESCE( t2."pres" - 1016.398404448205, 0.0 ) * 0.06692208362422453 + COALESCE( t2."iws" - 25.00605463556588, 0.0 ) * 0.0003737878687475554 + COALESCE( t2."date" - 1326341256.690439, 0.0 ) * -3.82854279001483e-05 + -2.6193156234554085e+00
        WHEN ( t2."iws" <= 2.996864 OR t2."iws" IS NULL ) AND ( t1."dewp" > 11.000000 ) THEN COALESCE( t1."dewp" - 1.514736211828887, 0.0 ) * 0.1393137945465734 + COALESCE( t1."temp" - 11.89228926884439, 0.0 ) * -0.01214102681155058 + COALESCE( t1."is" - 0.06612999800412481, 0.0 ) * -0.1338390047625948 + COALESCE( t1."ir" - 0.2116958286208502, 0.0 ) * -0.0162258602121815 + COALESCE( t1."pres" - 1016.466458208569, 0.0 ) * 0.001699285730186932 + COALESCE( t1."iws" - 25.06146098064165, 0.0 ) * 0.003371800239128447 + COALESCE( t1."date" - 1326377672.037789, 0.0 ) * -1.911896389633282e-07 + COALESCE( t2."dewp" - 1.668379064426448, 0.0 ) * -0.06644334047600213 + COALESCE( t2."is" - 0.06086063096820984, 0.0 ) * -0.1417763462234966 + COALESCE( t2."ir" - 0.2111990813489665, 0.0 ) * -0.1305274026025503 + COALESCE( t2."temp" - 12.06001450501632, 0.0 ) * -0.1337481445687078 + COALESCE( t2."pres" - 1016.398404448205, 0.0 ) * -0.0159175033671927 + COALESCE( t2."iws" - 25.00605463556588, 0.0 ) * 0.0526624167554332 + COALESCE( t2."date" - 1326341256.690439, 0.0 ) * 1.855001999924372e-07 + 1.4765736620125673e+00
        WHEN ( t2."iws" <= 2.996864 OR t2."iws" IS NULL ) AND ( t1."dewp" <= 11.000000 OR t1."dewp" IS NULL ) THEN COALESCE( t1."dewp" - 1.514736211828887, 0.0 ) * 0.04638612658210784 + COALESCE( t1."temp" - 11.89228926884439, 0.0 ) * -0.02616592034638174 + COALESCE( t1."is" - 0.06612999800412481, 0.0 ) * -0.04279224385040904 + COALESCE( t1."ir" - 0.2116958286208502, 0.0 ) * -0.02539472146735003 + COALESCE( t1."pres" - 1016.466458208569, 0.0 ) * -0.0271755357448101 + COALESCE( t1."iws" - 25.06146098064165, 0.0 ) * -0.002779614164530073 + COALESCE( t1."date" - 1326377672.037789, 0.0 ) * -1.127852653091244e-06 + COALESCE( t2."dewp" - 1.668379064426448, 0.0 ) * -0.009599325629289402 + COALESCE( t2."is" - 0.06086063096820984, 0.0 ) * -0.2440324127160611 + COALESCE( t2."ir" - 0.2111990813489665, 0.0 ) * -0.1198017640418239 + COALESCE( t2."temp" - 12.06001450501632, 0.0 ) * -0.0723450470366292 + COALESCE( t2."pres" - 1016.398404448205, 0.0 ) * -0.006849322627387147 + COALESCE( t2."iws" - 25.00605463556588, 0.0 ) * -0.4129622526694541 + COALESCE( t2."date" - 1326341256.690439, 0.0 ) * 1.129731320846992e-06 + -9.4331291663918275e+00
        ELSE NULL
    END
) AS "feature_1_5",
       t1.rowid AS rownum
FROM "POPULATION__STAGING_TABLE_1" t1
INNER JOIN "POPULATION__STAGING_TABLE_2" t2
ON 1 = 1
WHERE t2."date" <= t1."date"
AND ( t2."date, '+7.000000 days'" > t1."date" OR t2."date, '+7.000000 days'" IS NULL )
GROUP BY t1.rowid;

This is a typical RelMT feature: the aggregation (here, SUM) is applied to a set of linear models, and both the conditions that select a linear model and the weights of those models are learned by RelMT.
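In simplified terms, and purely as an illustration of the structure above (not getML's implementation), such a feature evaluates a piecewise linear model for every pair of the current row and a row within the memory window, and sums the results:

import numpy as np

def relmt_like_feature(current_row, window_rows, conditions, linear_models):
    """Illustrative structure of a RelMT feature.

    conditions    -- list of callables; each decides whether a (current, past)
                     pair of rows falls into a learned branch (the WHEN clauses).
    linear_models -- list of (weights, intercept) pairs, one per branch
                     (the learned coefficients inside each WHEN clause).
    """
    total = 0.0
    for past_row in window_rows:
        for condition, (weights, intercept) in zip(conditions, linear_models):
            if condition(current_row, past_row):
                x = np.concatenate([current_row, past_row])
                total += float(weights @ x) + intercept
                break  # exactly one branch applies, as in the CASE expression
    return total  # corresponds to the outer SUM(...)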

2.8 Productionization

It is possible to productionize the pipeline by transpiling the features into production-ready SQL code. Please also refer to getML's sqlite3 module.

In [45]:
# Creates a folder named air_pollution_pipeline containing the SQL code
pipe1.features.to_sql().save("air_pollution_pipeline")

3. Discussion

We have seen that getML outperforms featuretools and tsfresh by roughly 9 and 12 percentage points, respectively, in terms of R-squared. We now want to analyze why that is.

There are two possible hypotheses:

  • getML outperforms featuretools and tsfresh because it uses feature learning and is able to produce more complex features.
  • getML outperforms featuretools and tsfresh because it makes more efficient use of memory and is able to look back further.

Let's summarize our findings:

In [46]:
pipes = [pipe1, pipe2, pipe3, pipe4, pipe5, pipe6]

comparison = pd.DataFrame(
    dict(
        tool=[pipe.tags[0] for pipe in pipes],
        memory=[pipe.tags[1].split()[1] for pipe in pipes],
        feature_complexity=[pipe.tags[2].split()[0] for pipe in pipes],
        rsquared=[f"{pipe.rsquared * 100:.3} %" for pipe in pipes],
        rmse=[f"{pipe.rmse:.3}" for pipe in pipes],
    )
)

comparison
Out[46]:
tool memory feature_complexity rsquared rmse
0 getML: RelMT 7d complex 63.1 % 57.5
1 getML: RelMT 1d complex 49.0 % 66.9
2 getML: FastProp 7d simple 54.5 % 63.8
3 getML: FastProp 1d simple 54.5 % 63.5
4 featuretools 1d simple 53.7 % 64.3
5 tsfresh 1d simple 51.0 % 65.9

The summary table shows that it takes a combination of both of our hypotheses to explain why getML outperforms featuretools and tsfresh. With a memory of only one day, complex features actually do slightly worse than simple ones (49.0% vs. 54.5%). Extending the memory to seven days does not help the simple features either: their R-squared stays at about 54.5%, and the RMSE even gets slightly worse. But when we look back seven days and allow complex features, the R-squared jumps to 63.1%.

This suggests that getML outperforms featuretools and tsfresh because it makes more efficient use of memory and can therefore look back further, and because RelMT's feature learning builds features that are complex enough to exploit that longer look-back window.

4. Conclusion

We have compared getML's feature learning algorithms to the propositionalization approaches of featuretools and tsfresh on a dataset related to air pollution in Beijing, China. We found that getML significantly outperforms both libraries. These results are consistent with the view that feature learning can yield significant improvements over simple propositionalization approaches.

However, there are other datasets on which simple propositionalization performs well. Our suggestion is therefore to think of algorithms like FastProp and RelMT as tools in a toolbox. If a simple tool like FastProp gets the job done, then use that. But when you need more advanced approaches, like RelMT, you should have them at your disposal as well.

You are encouraged to reproduce these results.

Next Steps

If you are interested in further real-world applications of getML, visit the notebook section on getml.com. If you want to gain a deeper understanding of our notebooks' contents or download the code behind them, have a look at the getml-demo repository. There, you can also find further benchmarks of getML.

Want to try out getML without much hassle? Just head to try.getml.com to launch an instance of getML directly in your browser.

Further, our documentation offers additional material if you want to learn more about getML.

Get in contact

If you have any questions, just write us an email. Prefer a private demo of getML for your team? Just contact us to arrange an introduction to getML.