Disease lethality prediction

Feature engineering and the curse of dimensionality

With this notebook we give a brief introduction to feature engineering on relational data with many columns. We discuss why feature engineering on such data is particularly challenging and what we can do to overcome these problems.

Summary:

  • Prediction type: Binary classification
  • Domain: Health
  • Prediction target: Mortality within one year
  • Source data: 146 columns in 2 tables, 22 MB
  • Population size: 28433

Author: Dr. Patrick Urbanke

The problem

To illustrate the point, we give a simplified example based on the real data used in the analysis below. When we engineer features from relational data, we usually write something like this:

SELECT AVG(t2.HDL)
FROM population_training t1
LEFT JOIN contr t2
ON t1.ICO = t2.ICO
WHERE t1.AGE >= 60 AND t1.ALKOHOL IN ('1', '2')
GROUP BY t1.ICO;

Think about that for a second. This feature aggregates high-density lipoprotein (HDL) cholesterol values recorded on control dates, conditional on age and alcohol consumption. We arbitrarily chose both the column to aggregate (HDL) and the set of columns to build conditions on (AGE and ALKOHOL) out of a larger set of 146 columns.

Every column we have can either be aggregated (here HDL) or used in our conditions (here AGE and ALKOHOL). That means that if we have $n$ columns to aggregate, we can potentially build conditions on $n$ other columns. In other words, the computational complexity is $n^2$ in the number of columns.

Note that this problem occurs regardless of whether you automate feature engineering or do it by hand. The size of the search space grows with $n^2$ in the number of columns in either case, unless you can rule something out a priori.
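
To make the combinatorics concrete, here is a toy Python sketch (deliberately simplified and not the getML implementation; real search spaces also multiply over thresholds and combinations of conditions) that counts the candidates for the 146 columns of this dataset:

# Toy illustration: count the candidate features for the traditional
# approach vs. the weighted-condition approach described below.
columns = [f"col_{i}" for i in range(146)]

# Traditional: pick one column to aggregate and one other column to
# build a condition on -- a quadratic number of combinations.
pairs = [(agg, cond) for agg in columns for cond in columns if agg != cond]

# Weighted conditions: every column only ever appears in a condition.
conditions = columns

print(len(pairs))       # 146 * 145 = 21170
print(len(conditions))  # 146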

The solution

So what do we do when we have relational data sets with many columns? The answer is to write different kinds of features. Specifically, suppose we had features like this:

SELECT AVG(
    CASE
        WHEN t1.AGE >= 60 THEN weight1
        WHEN t1.ALKOHOL IN ('1', '2') THEN weight2
    END
)
FROM population_training t1
LEFT JOIN contr t2
ON t1.ICO = t2.ICO
GROUP BY t1.ICO;

weight1 and weight2 are learnable weights. An algorithm that generates features like this only ever uses columns in conditions; it is not allowed to aggregate columns, and it does not need to.
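
To see why no aggregation column is needed, here is a minimal pandas sketch of such a weighted-condition feature (illustrative only: the tables are toy stand-ins for the query above, and the weights are hand-picked constants where a real feature learner would fit them against the loss function):

import numpy as np
import pandas as pd

population = pd.DataFrame({"ICO": [1, 2], "AGE": [65, 50], "ALKOHOL": ["1", "3"]})
contr = pd.DataFrame({"ICO": [1, 1, 2], "HDL": [1.2, 0.9, 1.5]})

weight1, weight2 = 0.7, -0.3  # learnable in practice

joined = population.merge(contr, on="ICO", how="left")

# Mimic the CASE expression: the first matching condition contributes its
# weight; rows matching no condition contribute NULL (NaN). Note that HDL
# itself is never aggregated.
summand = np.select(
    [joined["AGE"] >= 60, joined["ALKOHOL"].isin(["1", "2"])],
    [weight1, weight2],
    default=np.nan,
)
feature = joined.assign(summand=summand).groupby("ICO")["summand"].mean()
print(feature)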

That means the computational complexity is linear instead of quadratic in the number of columns. For data sets with many columns, this can make all the difference in the world: with 100 columns, the search space of the second approach is only 1% of the size of the first.
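
The back-of-the-envelope arithmetic behind the 1% claim (constant factors aside):

n = 100
traditional = n * n  # aggregation column x condition column
weighted = n         # one candidate condition per column
print(weighted / traditional)  # 0.01, i.e. 1%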

Background

To illustrate the problem of dimensionality in predictive analytics on relational data, we use the STULONG dataset, a longitudinal study of atherosclerosis patients.

One of its defining features is that it contains many columns, which makes it a good candidate to illustrate the problem discussed in this notebook.

There are some academic studies related to this dataset. The way these studies handle the large number of columns is to divide the columns into subgroups and then handle each subgroup separately. Even though this is one way to overcome the curse of dimensionality, it is not a very satisfying approach. We would like to be able to handle a large number of columns at once.

The analysis is based on the STULONG dataset. It is publicly available and can be downloaded from the CTU Prague Relational Learning Repository.

A web frontend for getML

The getML monitor is a frontend built to support your work with getML. It displays information such as the imported data frames and trained pipelines, and allows easy data and feature exploration. You can launch the getML monitor here.

Where is this running?

Your getML live session is running inside a docker container on mybinder.org, a service built by the Jupyter community and funded by Google Cloud, OVH, GESIS Notebooks and the Turing Institute. As it is a free service, this session will shut down after 10 minutes of inactivity.

Analysis

Let's get started with the analysis and set up your session:

In [1]:
import os
import numpy as np
import pandas as pd
from IPython.display import Image, Markdown
import matplotlib.pyplot as plt
plt.style.use('seaborn')
%matplotlib inline  

import getml

getml.engine.launch()
getml.engine.set_project('atherosclerosis')
Launched the getML engine. The log output will be stored in /home/patrick/.getML/logs/20220324215749.log.



Connected to project 'atherosclerosis'
http://localhost:1709/#/listprojects/atherosclerosis/

1. Loading data

1.1 Download from source

Downloading the raw data and converting it into a prediction-ready format takes time. To get to the getML model building as fast as possible, we prepared the data for you and excluded the preparation code from this notebook. It is made available in the example notebook featuring the full analysis.

In [2]:
population, contr = getml.datasets.load_atherosclerosis()
Loading population...
[========================================] 100%

Loading contr...
[========================================] 100%

1.2 Prepare data for getML

The getml.datasets.load_atherosclerosis method took care of all the heavy lifting (a rough manual equivalent is sketched below):

  • Downloads the CSVs from our servers into Python
  • Converts the CSVs into getML DataFrames
  • Sets roles for the columns inside the getML DataFrames
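
For reference, doing this preparation by hand would look roughly like the following. This is a hedged sketch assuming the standard getml.data.DataFrame.from_csv and set_role API; the file name is a placeholder and only a handful of the columns are shown:

# Hypothetical manual equivalent of what the helper automates.
population = getml.data.DataFrame.from_csv("population.csv", name="population")

population.set_role("ICO", getml.data.roles.join_key)
population.set_role("TARGET", getml.data.roles.target)
population.set_role(["REFERENCE_DATE", "ENTRY_DATE"], getml.data.roles.time_stamp)
population.set_role(["STAV", "VZDELANI"], getml.data.roles.categorical)
population.set_role(["AGE", "VYSKA", "VAHA"], getml.data.roles.numerical)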

Data visualization

The original data model is condensed into two tables:

  • A population table population_{train/test/validate}, based on the death table
  • A peripheral table: contr.

Death: population table

  • Reference dates: span the period from 1976 to 1999
  • Target: whether the patient dies within one year after each reference date
In [3]:
population
Out[3]:
name REFERENCE_DATE ENTRY_DATE ICO TARGET KONSKUP STAV VZDELANI ZODPOV TELAKTZA AKTPOZAM DOPRAVA DOPRATRV ALKOHOL BOLHR BOLDK DUSNOST RARISK OBEZRISK KOURRISK HTRISK CHOLRISK MOC AGE PARTICIPATION VYSKA VAHA SYST1 DIAST1 SYST2 DIAST2 TRIC SUBSC CHLST TRIGL KOURENI DOBAKOUR BYVKURAK PIVOMN VINOMN LIHMN KAVA CAJ CUKR ROKNAR ROKVSTUP MESVSTUP IM HT ICT DIABET HYPLIP DUMMY YEAR PIVO7 PIVO10 PIVO12 VINO LIHOV IML HTD HTL ICTL DIABD DIABL HYPLD HYPLL IMTRV HTTRV ICTTRV DIABTRV HYPLTRV DENUMR MESUMR ROKUMR PRICUMR DEATH_DATE
role time_stamp time_stamp join_key target categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical numerical unused_float unused_float unused_float unused_float unused_float unused_float unused_float unused_float unused_float unused_float unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string
unit time stamp time stamp
0 1977-01-01 1976-10-01 10001 0  4 1 3 1 1 2 3 6 2 1 1 1 0 0 1 0 0 1 48  -1899  169  71  120  85  120  90  4  12  209  86  4  10  nan  1  4  9  2  4  3  29  1976  10  2  2  2  2  2  1  1977  12.0
1 1978-01-01 1976-10-01 10001 0  4 1 3 1 1 2 3 6 2 1 1 1 0 0 1 0 0 1 49  -1898  169  71  120  85  120  90  4  12  209  86  4  10  nan  1  4  9  2  4  3  29  1976  10  2  2  2  2  2  1  1978  12.0
2 1979-01-01 1976-10-01 10001 0  4 1 3 1 1 2 3 6 2 1 1 1 0 0 1 0 0 1 50  -1897  169  71  120  85  120  90  4  12  209  86  4  10  nan  1  4  9  2  4  3  29  1976  10  2  2  2  2  2  1  1979  12.0
3 1980-01-01 1976-10-01 10001 0  4 1 3 1 1 2 3 6 2 1 1 1 0 0 1 0 0 1 51  -1896  169  71  120  85  120  90  4  12  209  86  4  10  nan  1  4  9  2  4  3  29  1976  10  2  2  2  2  2  1  1980  12.0
4 1981-01-01 1976-10-01 10001 0  4 1 3 1 1 2 3 6 2 1 1 1 0 0 1 0 0 1 52  -1895  169  71  120  85  120  90  4  12  209  86  4  10  nan  1  4  9  2  4  3  29  1976  10  2  2  2  2  2  1  1981  12.0
... ... ... ...  ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
28428 1995-01-01 1977-03-01 30065 0  5 2 2 3 1 2 1 5 2 3 2 2 0 0 1 0 0 1 67  -1882  179  69  110  80  120  80  10  18  235  733  4  10  nan  3  4  7  1  5  2  28  1977  3  2  2  2  2  2  1  1995  9.0
28429 1996-01-01 1977-03-01 30065 0  5 2 2 3 1 2 1 5 2 3 2 2 0 0 1 0 0 1 68  -1881  179  69  110  80  120  80  10  18  235  733  4  10  nan  3  4  7  1  5  2  28  1977  3  2  2  2  2  2  1  1996  9.0
28430 1997-01-01 1977-03-01 30065 0  5 2 2 3 1 2 1 5 2 3 2 2 0 0 1 0 0 1 69  -1880  179  69  110  80  120  80  10  18  235  733  4  10  nan  3  4  7  1  5  2  28  1977  3  2  2  2  2  2  1  1997  9.0
28431 1998-01-01 1977-03-01 30065 0  5 2 2 3 1 2 1 5 2 3 2 2 0 0 1 0 0 1 70  -1879  179  69  110  80  120  80  10  18  235  733  4  10  nan  3  4  7  1  5  2  28  1977  3  2  2  2  2  2  1  1998  9.0
28432 1999-01-01 1977-03-01 30065 0  5 2 2 3 1 2 1 5 2 3 2 2 0 0 1 0 0 1 71  -1878  179  69  110  80  120  80  10  18  235  733  4  10  nan  3  4  7  1  5  2  28  1977  3  2  2  2  2  2  1  1999  9.0

28433 rows x 76 columns
memory usage: 16.09 MB
name: population
type: getml.DataFrame
url: http://localhost:1709/#/getdataframe/atherosclerosis/population/

Contr: peripheral table

In [4]:
contr
Out[4]:
name CONTROL_DATE ICO ZMTELAKT AKTPOZAM ZMDIET LEKTLAK ZMKOUR POCCIG PRACNES JINAONE BOLHR BOLDK DUSN HODNSK HYPERD HYPCHL HYPTGL HMOT CHLST HDL HDLMG LDL ROKVYS MESVYS PORADK ZMCHARZA MOC LEKCHOL SRDCE HYPERT CEVMOZ DIAB HODN0 ROK0 HODN1 ROK1 HODN2 ROK2 HODN3 ROK3 HODN4 ROK4 HODN11 ROK11 HODN12 ROK12 HODN13 ROK13 HODN14 ROK14 HODN15 ROK15 HODN21 ROK21 HODN23 ROK23 SYST DIAST TRIC SUBSC HYPERSD HYPERS CHLSTMG TRIGL TRIGLMG GLYKEMIE KYSMOC
role time_stamp join_key categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical categorical numerical numerical numerical numerical numerical unused_float unused_float unused_float unused_float unused_float unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string unused_string
unit time stamp
0 1977-09-01 10001 3 2 1 2 2 0.0 1 2.0 1 1 1 2 2.0 2.0 2.0 71  5.61 nan  nan  nan  1977  9  1  20  1  0.0 0.0 130.0 90.0 4.0 12.0 2.0 2.0 217.0 1.22 108.0
1 1979-01-01 10001 1 1 1 2 1 0.0 2 NULL 1 2 1 2 2.0 2.0 2.0 72  6  nan  nan  nan  1979  1  2  20  3  0.0 0.0 140.0 90.0 4.0 11.0 2.0 2.0 232.0 4.4 389.0
2 1980-04-01 10001 2 1 1 2 1 0.0 2 NULL 2 1 1 2 2.0 2.0 2.0 71  6.23 nan  nan  nan  1980  4  3  20  1  0.0 0.0 130.0 90.0 5.0 22.0 2.0 2.0 241.0 1.51 134.0
3 1982-01-01 10001 2 1 1 2 1 0.0 1 2.0 1 1 1 2 2.0 2.0 2.0 74  5.2 nan  nan  nan  1982  1  4  20  1  0.0 0.0 150.0 100.0 9.0 18.0 2.0 2.0 201.0 1.42 126.0
4 1983-02-01 10001 2 2 1 2 1 0.0 2 NULL 1 2 1 2 2.0 2.0 2.0 73  6.08 1.47 57  4.15 1983  2  5  20  1  0.0 0.0 165.0 105.0 7.0 15.0 2.0 2.0 235.0 0.99 88.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
10567 1987-10-01 20358 2 2 1 2 1 0.0 2 NULL 4 1 1 3 2.0 2.0 2.0 85  4.89 0.98 38  3.11 1987  10  1  20  1  0.0 0.0 120.0 80.0 2.0 2.0 189.0 1.74 154.0
10568 1987-11-01 20359 2 2 1 2 1 14.0 2 NULL 1 1 1 3 2.0 2.0 2.0 80  4.5 0.98 38  3.16 1987  11  1  20  1  0.0 0.0 100.0 70.0 2.0 2.0 174.0 0.79 70.0
10569 1987-10-01 20360 2 2 5 2 1 0.0 2 NULL 2 1 2 3 2.0 2.0 2.0 95  5.51 1.29 50  3.68 1987  10  1  40  1  0.0 0.0 120.0 80.0 2.0 2.0 213.0 1.18 104.0
10570 1987-10-01 20362 1 2 1 2 1 0.0 2 NULL 6 4 1 3 2.0 2.0 2.0 56  4.89 3  116  1.51 1987  10  1  40  1  0.0 0.0 140.0 85.0 2.0 2.0 189.0 0.82 73.0
10571 1978-09-01 30037 2 1 1 2 3 15.0 2 NULL 4 1 2 4 2.0 2.0 1.0 72  5.61 nan  nan  nan  1978  9  1  20  1  0.0 0.0 125.0 90.0 9.0 17.0 2.0 2.0 217.0

10572 rows x 67 columns
memory usage: 5.82 MB
name: contr
type: getml.DataFrame
url: http://localhost:1709/#/getdataframe/atherosclerosis/contr/

1.3 Define relational model

To start with relational learning, we need to specify an abstract data model. Here, we use the high-level star schema API that allows us to define the abstract data model and construct a container with the concrete data in one go. While a simple StarSchema works in many cases, it is not sufficient for more complex data models like snowflake schemas, where you would have to define the data model and construct the container in separate steps, utilizing getML's full-fledged data model and container APIs respectively.

In [5]:
split = getml.data.split.random(train=0.7, test=0.3)
In [6]:
star_schema = getml.data.StarSchema(population, split=split)

star_schema.join(
    contr,
    on="ICO",
    time_stamps=("REFERENCE_DATE", "CONTROL_DATE")
)

star_schema
Out[6]:

data model

diagram


contr -> population: joined on ICO = ICO, with CONTROL_DATE <= REFERENCE_DATE

staging

data frames staging table
0 population POPULATION__STAGING_TABLE_1
1 contr CONTR__STAGING_TABLE_2

container

population

subset name rows type
0 test population 8557 View
1 train population 19876 View

peripheral

name rows type
0 contr 10572 DataFrame

2. Predictive modeling

We have loaded the data and defined the roles, units, and the abstract data model. Next, we create a getML pipeline for relational learning.

2.1 Fitting the pipeline

To illustrate the problem of computational complexity, we fit two getML pipelines using different feature learning algorithms: the first pipeline uses RelMT and the second uses Relboost. At the end, we compare the runtimes and predictive accuracies of the two pipelines.

In [7]:
relmt = getml.feature_learning.RelMT(
    num_features=30,
    loss_function=getml.feature_learning.loss_functions.CrossEntropyLoss,
    num_threads=1
)

relboost = getml.feature_learning.Relboost(
    num_features=60,
    loss_function=getml.feature_learning.loss_functions.CrossEntropyLoss,
    min_num_samples=500,
    max_depth=2,
    num_threads=1
)

feature_selector = getml.predictors.XGBoostClassifier()

xgboost = getml.predictors.XGBoostClassifier(
    max_depth=5,
    reg_lambda=100.0,
    learning_rate=0.1
)
In [8]:
pipe1 = getml.pipeline.Pipeline(
    tags=["relmt"],
    data_model=star_schema.data_model,
    feature_learners=relmt,
    feature_selectors=feature_selector,
    share_selected_features=0.8,
    predictors=xgboost,
    include_categorical=True
)

pipe1
Out[8]:
Pipeline(data_model='population',
         feature_learners=['RelMT'],
         feature_selectors=['XGBoostClassifier'],
         include_categorical=True,
         loss_function=None,
         peripheral=['contr'],
         predictors=['XGBoostClassifier'],
         preprocessors=[],
         share_selected_features=0.8,
         tags=['relmt'])
In [9]:
pipe2 = getml.pipeline.Pipeline(
    tags=["relboost"],
    data_model=star_schema.data_model,
    feature_learners=relboost,
    feature_selectors=feature_selector,
    share_selected_features=0.8,    
    predictors=xgboost,
    include_categorical=True
)

pipe2
Out[9]:
Pipeline(data_model='population',
         feature_learners=['Relboost'],
         feature_selectors=['XGBoostClassifier'],
         include_categorical=True,
         loss_function=None,
         peripheral=['contr'],
         predictors=['XGBoostClassifier'],
         preprocessors=[],
         share_selected_features=0.8,
         tags=['relboost'])

We begin with RelMT. Features generated by RelMT suffer from quadratic complexity. Luckily, the number of columns is not too high, so the problem is still manageable.

It is always a good idea to check the data model before fitting. There is one minor warning below: it means that about 12% of the patients never appear in CONTR (presumably because they never showed up for their health check-ups).

In [10]:
pipe1.check(star_schema.train)
Checking data model...


Staging...
[========================================] 100%

Checking...
[========================================] 100%


INFO [FOREIGN KEYS NOT FOUND]: When joining POPULATION__STAGING_TABLE_1 and CONTR__STAGING_TABLE_2 over 'ICO' and 'ICO', there are no corresponding entries for 12.291205% of entries in 'ICO' in 'POPULATION__STAGING_TABLE_1'. You might want to double-check your join keys.
In [11]:
pipe1.fit(star_schema.train)
Checking data model...


Staging...
[========================================] 100%


INFO [FOREIGN KEYS NOT FOUND]: When joining POPULATION__STAGING_TABLE_1 and CONTR__STAGING_TABLE_2 over 'ICO' and 'ICO', there are no corresponding entries for 12.291205% of entries in 'ICO' in 'POPULATION__STAGING_TABLE_1'. You might want to double-check your join keys.


Staging...
[========================================] 100%

RelMT: Training features...
[========================================] 100%

RelMT: Building features...
[========================================] 100%

XGBoost: Training as feature selector...
[========================================] 100%

XGBoost: Training as predictor...
[========================================] 100%


Trained pipeline.
Time taken: 0h:8m:59.20493

Out[11]:
Pipeline(data_model='population',
         feature_learners=['RelMT'],
         feature_selectors=['XGBoostClassifier'],
         include_categorical=True,
         loss_function=None,
         peripheral=['contr'],
         predictors=['XGBoostClassifier'],
         preprocessors=[],
         share_selected_features=0.8,
         tags=['relmt', 'container-0L4Y7m'])

url: http://localhost:1709/#/getpipeline/atherosclerosis/9gSAjM/0/

Let's see how well Relboost does. This algorithm has linear complexity in the number of columns.

In [12]:
pipe2.fit(star_schema.train)
Checking data model...


Staging...
[========================================] 100%

Checking...
[========================================] 100%


INFO [FOREIGN KEYS NOT FOUND]: When joining POPULATION__STAGING_TABLE_1 and CONTR__STAGING_TABLE_2 over 'ICO' and 'ICO', there are no corresponding entries for 12.291205% of entries in 'ICO' in 'POPULATION__STAGING_TABLE_1'. You might want to double-check your join keys.


Staging...
[========================================] 100%

Relboost: Training features...
[========================================] 100%

Relboost: Building features...
[========================================] 100%

XGBoost: Training as feature selector...
[========================================] 100%

XGBoost: Training as predictor...
[========================================] 100%


Trained pipeline.
Time taken: 0h:0m:50.001329

Out[12]:
Pipeline(data_model='population',
         feature_learners=['Relboost'],
         feature_selectors=['XGBoostClassifier'],
         include_categorical=True,
         loss_function=None,
         peripheral=['contr'],
         predictors=['XGBoostClassifier'],
         preprocessors=[],
         share_selected_features=0.8,
         tags=['relboost', 'container-0L4Y7m'])

url: http://localhost:1709/#/getpipeline/atherosclerosis/jsW7KT/0/

Note that this runs through in under a minute, which demonstrates the practical impact of computational complexity. If we had more columns, the difference between the two algorithms would become even more noticeable.

2.2 Model evaluation

In [13]:
pipe1.score(star_schema.test)

Staging...
[========================================] 100%

RelMT: Building features...
[========================================] 100%


Out[13]:
date time set used target accuracy auc cross entropy
0 2022-03-24 22:06:56 train TARGET 0.9891 0.8921 0.05011
1 2022-03-24 22:07:48 test TARGET 0.9867 0.7192 0.06785
In [14]:
pipe2.score(star_schema.test)

Staging...
[========================================] 100%

Relboost: Building features...
[========================================] 100%


Out[14]:
date time set used target accuracy auc cross entropy
0 2022-03-24 22:07:46 train TARGET 0.9877 0.8883 0.05411
1 2022-03-24 22:07:51 test TARGET 0.9867 0.7111 0.06741

2.3 Studying the features

It is always a good idea to study the features generated by the algorithms.

In [15]:
names, correlations = pipe2.features.correlations()

plt.subplots(figsize=(20, 10))

plt.bar(names, correlations, color='#6829c2')

plt.title("feature correlations")
plt.grid(True)
plt.xlabel("features")
plt.ylabel("correlations")
plt.xticks(rotation='vertical')

plt.show()
In [16]:
names, importances = pipe2.features.importances()

plt.subplots(figsize=(20, 10))

plt.bar(names, importances, color='#6829c2')

plt.title("feature importances")
plt.grid(True)
plt.xlabel("features")
plt.ylabel("importances")
plt.xticks(rotation='vertical')


plt.show()

As we can see from these figures, we need many features to get a good result: no single feature is strongly correlated with the target, and the feature importance is not concentrated on a small number of features (as is often the case with other data sets).

This implies that the many columns in the data set are actually needed. We emphasize this because we sometimes see data sets with many columns where, after analysis, it turns out that only a handful of them are actually needed. This is not one of those cases.
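
As a quick, informal way to quantify this, one can count how many features are needed to cover, say, 80% of the total importance. This is a sketch only: it assumes the importances returned by getML are normalized to sum to one (as the bar chart above suggests), and the exact count depends on your run.

# Informal check of how spread out the importances are.
names, importances = pipe2.features.importances()

cumulative = np.cumsum(sorted(importances, reverse=True))
n_for_80 = int(np.searchsorted(cumulative, 0.8)) + 1
print(f"{n_for_80} of {len(importances)} features cover 80% of total importance")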

The most important features look like this:

In [17]:
pipe1.features.to_sql()[pipe1.features.sort(by="importances")[0].name]
Out[17]:
DROP TABLE IF EXISTS "FEATURE_1_2";

CREATE TABLE "FEATURE_1_2" AS
SELECT SUM( 
    CASE
        WHEN ( t2."poccig" IN ( '0.0', '2.0', '1.0', '12.0', '10.0', '35.0', '25.0', '20.0', '15.0', '30.0', '5.0', '3.0', '28.0', '4.0', '22.0', '17.0', '8.0', '6.0', '40.0', '18.0', '14.0', '7.0', '13.0', '23.0', '16.0', '21.0', '50.0', '11.0', '27.0', '9.0', '60.0', '70.0', '80.0' ) ) AND ( t1."htrisk" IN ( '6' ) ) THEN COALESCE( t1."age" - 58.55541532813217, 0.0 ) * -3.397095496386469 + COALESCE( t1."participation" - -1887.414410279945, 0.0 ) * -8.217082116534575 + COALESCE( t1."vyska" - 174.626548875631, 0.0 ) * 0.8989800432775159 + COALESCE( t1."vaha" - 80.15500229463056, 0.0 ) * -1.237316756163966 + COALESCE( t1."syst1" - 132.0390087195962, 0.0 ) * -0.9928111708027221 + COALESCE( t1."diast1" - 83.58329508949059, 0.0 ) * -2.788498399735765 + COALESCE( t1."syst2" - 129.4263423588802, 0.0 ) * -0.4818583551937765 + COALESCE( t1."diast2" - 83.2121385956861, 0.0 ) * 6.971943651250299 + COALESCE( t1."tric" - 9.410853602569986, 0.0 ) * -3.694050975182785 + COALESCE( t1."subsc" - 18.22670949977054, 0.0 ) * -5.856998651527096 + COALESCE( t1."chlst" - 232.7238412115649, 0.0 ) * 2.110663631459102 + COALESCE( t1."trigl" - 134.5320100963745, 0.0 ) * 0.119131970047223 + COALESCE( t1."koureni" - 3.382973841211565, 0.0 ) * -19.83581231536901 + COALESCE( t1."dobakour" - 6.960188159706287, 0.0 ) * -0.6738932939324694 + COALESCE( t1."byvkurak" - 1.689421753097751, 0.0 ) * 1.356032686173819e-07 + COALESCE( t1."pivomn" - 1.800826067003213, 0.0 ) * 43.42683265738321 + COALESCE( t1."vinomn" - 4.298990362551629, 0.0 ) * -188.4516196796772 + COALESCE( t1."lihmn" - 6.948141349242772, 0.0 ) * 149.8310937108893 + COALESCE( t1."kava" - 1.919343735658559, 0.0 ) * -7.925434240864883 + COALESCE( t1."caj" - 4.731642955484167, 0.0 ) * 7.372069762323473 + COALESCE( t1."cukr" - 4.495869664983938, 0.0 ) * -6.522002678957503 + COALESCE( t1."reference_date" - 614493572.4644332, 0.0 ) * 3.705927741723315e-07 + COALESCE( t1."entry_date" - 230484053.9697109, 0.0 ) * -1.783586052963881e-06 + COALESCE( t2."hmot" - 81.27387640449439, 0.0 ) * -0.03696877820155445 + COALESCE( t2."hdlmg" - 35.46147672552167, 0.0 ) * -0.8975936237922416 + COALESCE( t2."chlst" - 5.892755818619529, 0.0 ) * -2.613310598875336 + COALESCE( t2."hdl" - 0.9149819422150931, 0.0 ) * 33.09171658345281 + COALESCE( t2."ldl" - 2.463498194221503, 0.0 ) * 2.06224169478161 + COALESCE( t2."control_date" - 533301113.6436597, 0.0 ) * -5.888017807774677e-10 + -1.0081362925125225e+02
        WHEN ( t2."poccig" IN ( '0.0', '2.0', '1.0', '12.0', '10.0', '35.0', '25.0', '20.0', '15.0', '30.0', '5.0', '3.0', '28.0', '4.0', '22.0', '17.0', '8.0', '6.0', '40.0', '18.0', '14.0', '7.0', '13.0', '23.0', '16.0', '21.0', '50.0', '11.0', '27.0', '9.0', '60.0', '70.0', '80.0' ) ) AND ( t1."htrisk" NOT IN ( '6' ) OR t1."htrisk" IS NULL ) THEN COALESCE( t1."age" - 58.55541532813217, 0.0 ) * 0.002411294099492637 + COALESCE( t1."participation" - -1887.414410279945, 0.0 ) * -0.0626104782770142 + COALESCE( t1."vyska" - 174.626548875631, 0.0 ) * 0.002566103012395416 + COALESCE( t1."vaha" - 80.15500229463056, 0.0 ) * -0.005532871003705906 + COALESCE( t1."syst1" - 132.0390087195962, 0.0 ) * -0.001054498518485803 + COALESCE( t1."diast1" - 83.58329508949059, 0.0 ) * 0.004007094220024539 + COALESCE( t1."syst2" - 129.4263423588802, 0.0 ) * 0.001871053399197597 + COALESCE( t1."diast2" - 83.2121385956861, 0.0 ) * 0.0002121004831901501 + COALESCE( t1."tric" - 9.410853602569986, 0.0 ) * -0.002691696833181459 + COALESCE( t1."subsc" - 18.22670949977054, 0.0 ) * 0.0001431658988299645 + COALESCE( t1."chlst" - 232.7238412115649, 0.0 ) * -0.0001289298054566767 + COALESCE( t1."trigl" - 134.5320100963745, 0.0 ) * -3.355295825128488e-05 + COALESCE( t1."koureni" - 3.382973841211565, 0.0 ) * 0.01324732393277822 + COALESCE( t1."dobakour" - 6.960188159706287, 0.0 ) * 0.002395697464216007 + COALESCE( t1."byvkurak" - 1.689421753097751, 0.0 ) * -0.0003315238122537618 + COALESCE( t1."pivomn" - 1.800826067003213, 0.0 ) * 0.02567624314633523 + COALESCE( t1."vinomn" - 4.298990362551629, 0.0 ) * -0.005160409052324677 + COALESCE( t1."lihmn" - 6.948141349242772, 0.0 ) * -0.004594191940926427 + COALESCE( t1."kava" - 1.919343735658559, 0.0 ) * 0.02593337789132812 + COALESCE( t1."caj" - 4.731642955484167, 0.0 ) * -0.01167465366012087 + COALESCE( t1."cukr" - 4.495869664983938, 0.0 ) * 0.002144534133525993 + COALESCE( t1."reference_date" - 614493572.4644332, 0.0 ) * 3.425815197311889e-09 + COALESCE( t1."entry_date" - 230484053.9697109, 0.0 ) * -2.057864437834938e-09 + COALESCE( t2."hmot" - 81.27387640449439, 0.0 ) * 0.0044411821410372 + COALESCE( t2."hdlmg" - 35.46147672552167, 0.0 ) * -0.002401235266795935 + COALESCE( t2."chlst" - 5.892755818619529, 0.0 ) * -0.0211518305113043 + COALESCE( t2."hdl" - 0.9149819422150931, 0.0 ) * 0.03021101637486201 + COALESCE( t2."ldl" - 2.463498194221503, 0.0 ) * 0.02532570959453475 + COALESCE( t2."control_date" - 533301113.6436597, 0.0 ) * -2.69202639260626e-09 + -2.7877747341427978e-01
        WHEN ( t2."poccig" NOT IN ( '0.0', '2.0', '1.0', '12.0', '10.0', '35.0', '25.0', '20.0', '15.0', '30.0', '5.0', '3.0', '28.0', '4.0', '22.0', '17.0', '8.0', '6.0', '40.0', '18.0', '14.0', '7.0', '13.0', '23.0', '16.0', '21.0', '50.0', '11.0', '27.0', '9.0', '60.0', '70.0', '80.0' ) OR t2."poccig" IS NULL ) AND ( t1."reference_date" > 852038400.000000 ) THEN COALESCE( t1."age" - 58.55541532813217, 0.0 ) * 0.7005540342173785 + COALESCE( t1."participation" - -1887.414410279945, 0.0 ) * -0.3721815968225617 + COALESCE( t1."vyska" - 174.626548875631, 0.0 ) * 0.001514347782697983 + COALESCE( t1."vaha" - 80.15500229463056, 0.0 ) * 0.7143978295176737 + COALESCE( t1."syst1" - 132.0390087195962, 0.0 ) * 0.6208372807968181 + COALESCE( t1."diast1" - 83.58329508949059, 0.0 ) * -0.2854287364907576 + COALESCE( t1."syst2" - 129.4263423588802, 0.0 ) * 0.5421938420599178 + COALESCE( t1."diast2" - 83.2121385956861, 0.0 ) * -0.7689222625697328 + COALESCE( t1."tric" - 9.410853602569986, 0.0 ) * -0.4929019826671734 + COALESCE( t1."subsc" - 18.22670949977054, 0.0 ) * 0.223780579913266 + COALESCE( t1."chlst" - 232.7238412115649, 0.0 ) * -0.2303339841644577 + COALESCE( t1."trigl" - 134.5320100963745, 0.0 ) * -0.08730665947172637 + COALESCE( t1."koureni" - 3.382973841211565, 0.0 ) * 3.578307299045961 + COALESCE( t1."dobakour" - 6.960188159706287, 0.0 ) * 3.424298336831009 + COALESCE( t1."byvkurak" - 1.689421753097751, 0.0 ) * 1.469247538772654 + COALESCE( t1."pivomn" - 1.800826067003213, 0.0 ) * 2.007088802312646 + COALESCE( t1."vinomn" - 4.298990362551629, 0.0 ) * 1.508201373333506 + COALESCE( t1."lihmn" - 6.948141349242772, 0.0 ) * 1.694371256928238 + COALESCE( t1."kava" - 1.919343735658559, 0.0 ) * 15.34300789713041 + COALESCE( t1."caj" - 4.731642955484167, 0.0 ) * 10.98526961717351 + COALESCE( t1."cukr" - 4.495869664983938, 0.0 ) * 1.131391278158518 + COALESCE( t1."reference_date" - 614493572.4644332, 0.0 ) * -1.552221861580311e-08 + COALESCE( t1."entry_date" - 230484053.9697109, 0.0 ) * -2.200646489251252e-07 + COALESCE( t2."hmot" - 81.27387640449439, 0.0 ) * 0.2536479591053882 + COALESCE( t2."hdlmg" - 35.46147672552167, 0.0 ) * -0.04144600919003553 + COALESCE( t2."chlst" - 5.892755818619529, 0.0 ) * -10.71915452291016 + COALESCE( t2."hdl" - 0.9149819422150931, 0.0 ) * -1.584720081647418 + COALESCE( t2."ldl" - 2.463498194221503, 0.0 ) * -1.712372865814812 + COALESCE( t2."control_date" - 533301113.6436597, 0.0 ) * -6.542237117486055e-08 + 6.6081461026926416e+00
        WHEN ( t2."poccig" NOT IN ( '0.0', '2.0', '1.0', '12.0', '10.0', '35.0', '25.0', '20.0', '15.0', '30.0', '5.0', '3.0', '28.0', '4.0', '22.0', '17.0', '8.0', '6.0', '40.0', '18.0', '14.0', '7.0', '13.0', '23.0', '16.0', '21.0', '50.0', '11.0', '27.0', '9.0', '60.0', '70.0', '80.0' ) OR t2."poccig" IS NULL ) AND ( t1."reference_date" <= 852038400.000000 OR t1."reference_date" IS NULL ) THEN COALESCE( t1."age" - 58.55541532813217, 0.0 ) * 6.963983784467057 + COALESCE( t1."participation" - -1887.414410279945, 0.0 ) * 12.7201171250798 + COALESCE( t1."vyska" - 174.626548875631, 0.0 ) * 1.296534129086624 + COALESCE( t1."vaha" - 80.15500229463056, 0.0 ) * -3.178987445661821 + COALESCE( t1."syst1" - 132.0390087195962, 0.0 ) * 0.4173045416261214 + COALESCE( t1."diast1" - 83.58329508949059, 0.0 ) * 1.92693099294999 + COALESCE( t1."syst2" - 129.4263423588802, 0.0 ) * 0.6497416198716232 + COALESCE( t1."diast2" - 83.2121385956861, 0.0 ) * -2.934698113722341 + COALESCE( t1."tric" - 9.410853602569986, 0.0 ) * 2.43950913153937 + COALESCE( t1."subsc" - 18.22670949977054, 0.0 ) * 1.47634891241367 + COALESCE( t1."chlst" - 232.7238412115649, 0.0 ) * -0.04727950911958443 + COALESCE( t1."trigl" - 134.5320100963745, 0.0 ) * 0.2284859457718159 + COALESCE( t1."koureni" - 3.382973841211565, 0.0 ) * -4.284780550478363 + COALESCE( t1."dobakour" - 6.960188159706287, 0.0 ) * 5.640112799208898 + COALESCE( t1."byvkurak" - 1.689421753097751, 0.0 ) * 2.782893387299822 + COALESCE( t1."pivomn" - 1.800826067003213, 0.0 ) * 18.72747293890264 + COALESCE( t1."vinomn" - 4.298990362551629, 0.0 ) * -11.08758851568889 + COALESCE( t1."lihmn" - 6.948141349242772, 0.0 ) * -14.05642864637471 + COALESCE( t1."kava" - 1.919343735658559, 0.0 ) * 15.29700767610475 + COALESCE( t1."caj" - 4.731642955484167, 0.0 ) * -8.372507661585622 + COALESCE( t1."cukr" - 4.495869664983938, 0.0 ) * -3.785073697931573 + COALESCE( t1."reference_date" - 614493572.4644332, 0.0 ) * -6.216702473867759e-07 + COALESCE( t1."entry_date" - 230484053.9697109, 0.0 ) * 5.610046107243454e-07 + COALESCE( t2."hmot" - 81.27387640449439, 0.0 ) * 3.030242084663193 + COALESCE( t2."hdlmg" - 35.46147672552167, 0.0 ) * -0.7262712487263796 + COALESCE( t2."chlst" - 5.892755818619529, 0.0 ) * -3.932089120575836 + COALESCE( t2."hdl" - 0.9149819422150931, 0.0 ) * -29.67762684885863 + COALESCE( t2."ldl" - 2.463498194221503, 0.0 ) * 1.104743672199196 + COALESCE( t2."control_date" - 533301113.6436597, 0.0 ) * 2.142548400344535e-07 + -7.5267732055428329e+00
        ELSE NULL
    END
) AS "feature_1_2",
       t1.rowid AS rownum
FROM "POPULATION__STAGING_TABLE_1" t1
INNER JOIN "CONTR__STAGING_TABLE_2" t2
ON t1."ico" = t2."ico"
WHERE t2."control_date" <= t1."reference_date"
GROUP BY t1.rowid;
In [18]:
pipe2.features.to_sql()[pipe2.features.sort(by="importances")[1].name]
Out[18]:
DROP TABLE IF EXISTS "FEATURE_1_3";

CREATE TABLE "FEATURE_1_3" AS
SELECT AVG( 
    CASE
        WHEN ( t1."reference_date" - t2."control_date" > 615123663.157895 ) THEN 9.667613221179364
        WHEN ( t1."reference_date" - t2."control_date" <= 615123663.157895 OR t1."reference_date" IS NULL OR t2."control_date" IS NULL ) AND ( t2."zmkour" IN ( '4', '1', '2', '8' ) ) THEN 0.0681595669898879
        WHEN ( t1."reference_date" - t2."control_date" <= 615123663.157895 OR t1."reference_date" IS NULL OR t2."control_date" IS NULL ) AND ( t2."zmkour" NOT IN ( '4', '1', '2', '8' ) OR t2."zmkour" IS NULL ) THEN 2.081264699633921
        ELSE NULL
    END
) AS "feature_1_3",
       t1.rowid AS rownum
FROM "POPULATION__STAGING_TABLE_1" t1
INNER JOIN "CONTR__STAGING_TABLE_2" t2
ON t1."ico" = t2."ico"
WHERE t2."control_date" <= t1."reference_date"
GROUP BY t1.rowid;

2.4 Productionization

It is possible to productionize the pipeline by transpiling the features into production-ready SQL code. Please also refer to getML's sqlite3 and spark modules.

In [19]:
# Creates a folder named atherosclerosis_pipeline containing
# the SQL code.
pipe2.features.to_sql().save("atherosclerosis_pipeline")
In [20]:
pipe2.features.to_sql(dialect=getml.pipeline.dialect.spark_sql).save("atherosclerosis_pipeline_spark")

3. Conclusion

The runtime benchmark between RelMT and Relboost on the same dataset demonstrates the problem of computational complexity in practice. We find that RelMT takes roughly ten times as long as Relboost (about nine minutes versus 50 seconds) while learning only half as many features (30 versus 60).

The purpose of this notebook has been to illustrate the problem of the curse of dimensionality when engineering features from datasets with many columns.

The most important thing to remember is that this problem exists regardless of whether you engineer your features manually or algorithmically. Whether you like it or not: if you write your features in the traditional way, your search space grows quadratically with the number of columns.

Next Steps

This tutorial applied getML's feature learning algorithms RelMT and Relboost to a data set with many columns.

If you are interested in further real-world applications of getML, head back to the notebook overview and choose one of the remaining examples.

If you want to learn more about getML, our documentation provides additional material.

Get in contact

If you have any questions, schedule a call with Alex, the co-founder of getML, or write us an email. Do you prefer a private demo of getML? Just contact us to make an appointment.