In this tutorial, we demonstrate how getML can be applied in an e-commerce context. Using a dataset of about 400,000 orders, our goal is to predict whether an order will be cancelled.
We also show that we can significantly improve our results by using getML's built-in hyperparameter tuning routines.
Summary:
Author: Dr. Patrick Urbanke
The data set contains about 400,000 orders from a British online retailer. Each order consists of a product and a corresponding quantity. Several orders can be combined on a single invoice. The goal is to predict whether an order will be cancelled.
Because the company mainly sells to other businesses, the cancellation rate is relatively low, namely 1.83%.
The data set was originally collected for the following study:
Daqing Chen, Sai Liang Sain, and Kun Guo, Data mining for the online retail industry: A case study of RFM model-based customer segmentation using data mining, Journal of Database Marketing and Customer Strategy Management, Vol. 19, No. 3, pp. 197-208, 2012 (Published online before print: 27 August 2012. doi: 10.1057/dbm.2012.17).
It has been downloaded from the UCI Machine Learning Repository:
Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
Your getML live session is running inside a docker container on mybinder.org, a service built by the Jupyter community and funded by Google Cloud, OVH, GESIS Notebooks and the Turing Institute. As it is a free service, this session will shut down after 10 minutes of inactivity.
Let's get started with the analysis and set up your session:
import copy
import os
from urllib import request
import numpy as np
import pandas as pd
from IPython.display import Image
import matplotlib.pyplot as plt
plt.style.use('seaborn')
%matplotlib inline
from pyspark.sql import SparkSession
import getml
getml.engine.launch()
getml.engine.set_project('online_retail')
RUN_SPARK = False
We begin by downloading the data from the source file:
fname = "online_retail.csv"
if not os.path.exists(fname):
    fname, res = request.urlretrieve(
        "https://static.getml.com/datasets/online_retail/" + fname,
        fname
    )
full_data_pandas = pd.read_csv(fname, sep="|")
The invoice dates are in a somewhat unusual format, so we need to rectify that.
def add_zero(string):
    if len(string) == 1:
        return "0" + string
    return string

def format_date(string):
    datetime = string.split(" ")
    assert len(datetime) == 2, "Expected date and time"
    date_components = datetime[0].split("/")
    assert len(date_components) == 3, "Expected three date components"
    date_components = [add_zero(x) for x in date_components]
    return "-".join(date_components) + " " + datetime[1]

full_data_pandas["InvoiceDate"] = [
    format_date(string) for string in np.asarray(full_data_pandas["InvoiceDate"])
]
full_data_pandas
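As a quick sanity check, the date helpers can be exercised on a hand-written timestamp. The snippet reproduces the two functions from above so that it runs standalone; the example input assumes a year-first YYYY/M/D layout in the source file, which is an assumption for illustration only.

```python
def add_zero(string):
    # Zero-pad single-digit date components ("1" -> "01").
    if len(string) == 1:
        return "0" + string
    return string

def format_date(string):
    # Split off the time, zero-pad each date component, and re-join with dashes.
    datetime = string.split(" ")
    assert len(datetime) == 2, "Expected date and time"
    date_components = datetime[0].split("/")
    assert len(date_components) == 3, "Expected three date components"
    date_components = [add_zero(x) for x in date_components]
    return "-".join(date_components) + " " + datetime[1]

print(format_date("2010/12/1 08:26"))  # -> 2010-12-01 08:26
```

Note that the function only pads and re-joins the components; it does not reorder them, so the output matches the '%Y-%m-%d %H:%M' format we use later only if the source is already year-first.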
In this data set, the targets aren't as clearly defined as we would like, so we have to define them ourselves.
def add_target(df):
    df = df.sort_values(by=["Customer ID", "InvoiceDate"])
    cancelled = np.zeros(df.shape[0])
    invoice = np.asarray(df["Invoice"])
    stock_code = np.asarray(df["StockCode"])
    customer_id = np.asarray(df["Customer ID"])
    for i in range(len(invoice)):
        # Cancellations themselves (invoice starts with 'C') and the last row
        # cannot be labelled by a later row.
        if (invoice[i][0] == 'C') or (i == len(invoice) - 1):
            continue
        j = i + 1
        # Scan this customer's later rows, guarding against running past the end.
        while j < len(invoice) and customer_id[j] == customer_id[i]:
            if (invoice[j][0] == 'C') and (stock_code[i] == stock_code[j]):
                cancelled[i] = 1.0
                break
            # The same product re-ordered without a cancellation ends the scan.
            if stock_code[i] == stock_code[j]:
                break
            j += 1
    df["cancelled"] = cancelled
    return df
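To make the labelling rule concrete, here is a minimal, self-contained sketch applied to a toy frame (the invoice numbers and stock codes are made up): an order is marked as cancelled if the same customer later cancels the same stock code, before re-ordering it.

```python
import numpy as np
import pandas as pd

def add_target(df):
    # Same logic as above, with a bounds check so the scan stops at the last row.
    df = df.sort_values(by=["Customer ID", "InvoiceDate"])
    cancelled = np.zeros(df.shape[0])
    invoice = np.asarray(df["Invoice"])
    stock_code = np.asarray(df["StockCode"])
    customer_id = np.asarray(df["Customer ID"])
    for i in range(len(invoice)):
        if (invoice[i][0] == 'C') or (i == len(invoice) - 1):
            continue
        j = i + 1
        while j < len(invoice) and customer_id[j] == customer_id[i]:
            if (invoice[j][0] == 'C') and (stock_code[i] == stock_code[j]):
                cancelled[i] = 1.0
                break
            if stock_code[i] == stock_code[j]:
                break
            j += 1
    df["cancelled"] = cancelled
    return df

# Toy data: customer 1 orders products A and B, then cancels A.
toy = pd.DataFrame({
    "Customer ID": [1, 1, 1],
    "InvoiceDate": ["2010-12-01 08:26", "2010-12-01 09:00", "2010-12-02 10:00"],
    "Invoice": ["536365", "536366", "C536367"],
    "StockCode": ["A", "B", "A"],
})
print(add_target(toy)["cancelled"].tolist())  # -> [1.0, 0.0, 0.0]
```

Only the first order (product A) is labelled as cancelled; the cancellation row itself stays at 0 and is removed in the next step.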
Also, we want to remove any orders in the data set that are actually cancellations.
def remove_cancellations(df):
    invoice = np.asarray(df["Invoice"])
    is_order = [inv[0] != 'C' for inv in invoice]
    df = df[is_order]
    return df
full_data_pandas = add_target(full_data_pandas)
full_data_pandas = remove_cancellations(full_data_pandas)
Finally, there are some orders for which we do not have a customer ID. We want to remove those.
full_data_pandas = full_data_pandas[~np.isnan(full_data_pandas["Customer ID"])]
Now we can upload the data to getML.
full_data = getml.data.DataFrame.from_pandas(full_data_pandas, "full_data")
full_data
getML requires that we define roles for each of the columns.
full_data.set_role("InvoiceDate", getml.data.roles.time_stamp, time_formats=['%Y-%m-%d %H:%M'])
full_data.set_role(["Customer ID", "Invoice"], getml.data.roles.join_key)
full_data.set_role(["cancelled"], getml.data.roles.target)
full_data.set_role(["Quantity", "Price"], getml.data.roles.numerical)
full_data.set_role("Country", getml.data.roles.categorical)
full_data.set_role("Description", getml.data.roles.text)
The StockCode is a 5-digit code that uniquely identifies a product. It is hierarchical, meaning that each digit carries information. We want to make use of that, so we assign a unit to the stock code, which we can reference in our preprocessors.
full_data.set_unit("StockCode", "code")
We randomly divide the data into a training, a validation and a testing set:
split = getml.data.split.random(train=0.7, validation=0.15, test=0.15)
split
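Conceptually, the random split assigns each row to one of the three subsets with the given probabilities. Here is a rough numpy sketch of the idea (not getML's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def random_split(n_rows, train=0.7, validation=0.15, test=0.15):
    # Draw one uniform number per row and bucket it by cumulative probability.
    u = rng.uniform(size=n_rows)
    return np.where(
        u < train,
        "train",
        np.where(u < train + validation, "validation", "test"),
    )

labels = random_split(10_000)
# The observed proportions come out close to 70/15/15.
print({s: round(float((labels == s).mean()), 2) for s in ("train", "validation", "test")})
```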
We loaded the data and defined the roles and units. Next, we create a getML pipeline for relational learning.
To get started with relational learning, we need to specify the data model.
In our case, there are two joins we are interested in:
1) We want to take a look at all of the other orders on the same invoice.
2) We want to check out how often a certain customer has cancelled orders in the past. Here, we limit ourselves to the last 90 days. To avoid data leaks, we set a horizon of one day.
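The horizon and memory settings define a rolling window per prediction: for an order at time t, only the customer's earlier rows with timestamps between t - memory and t - horizon are aggregated, so the order being predicted (and anything within the last day) can never leak into its own features. A hedged pandas sketch of that windowing, not getML's implementation:

```python
import pandas as pd

def past_cancellations(history, t, horizon_days=1, memory_days=90):
    # Count a customer's cancelled orders in the window [t - memory, t - horizon].
    lower = t - pd.Timedelta(days=memory_days)
    upper = t - pd.Timedelta(days=horizon_days)
    window = history[
        (history["InvoiceDate"] >= lower) & (history["InvoiceDate"] <= upper)
    ]
    return int(window["cancelled"].sum())

# Toy history for one customer: three cancelled orders at different times.
history = pd.DataFrame({
    "InvoiceDate": pd.to_datetime(["2010-09-01", "2010-11-30", "2010-12-01"]),
    "cancelled": [1, 1, 1],
})
t = pd.Timestamp("2010-12-01")
# 2010-09-01 falls outside the 90-day memory; 2010-12-01 is blocked by the
# one-day horizon; only 2010-11-30 is counted.
print(past_cancellations(history, t))  # -> 1
```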
star_schema = getml.data.StarSchema(
    population=full_data,
    alias="population",
    split=split,
)

star_schema.join(
    full_data.drop("Description"),
    alias="full_data",
    on='Invoice',
)

star_schema.join(
    full_data.drop("Description"),
    alias="full_data",
    on='Customer ID',
    time_stamps='InvoiceDate',
    horizon=getml.data.time.days(1),
    memory=getml.data.time.days(90),
    lagged_targets=True,
)
star_schema
Set up the feature learner & predictor
We have mentioned that the StockCode is a hierarchical code. To make use of that fact, we use getML's Substring preprocessor, extracting the first digit, the first two digits, and so on. Since we have assigned the unit code to the StockCode, the preprocessors know which columns they should be applied to.
substr1 = getml.preprocessors.Substring(0, 1, "code")
substr2 = getml.preprocessors.Substring(0, 2, "code")
substr3 = getml.preprocessors.Substring(0, 3, "code")
mapping = getml.preprocessors.Mapping()
text_field_splitter = getml.preprocessors.TextFieldSplitter()
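In plain Python terms, Substring(0, 1), Substring(0, 2) and Substring(0, 3) correspond to taking the first one, two and three characters of each stock code, turning one fine-grained categorical column into several coarser ones (the example code "85123A" is made up for illustration):

```python
stock_code = "85123A"  # a hypothetical stock code

# The three Substring preprocessors roughly correspond to these prefixes:
prefixes = [stock_code[0:n] for n in (1, 2, 3)]
print(prefixes)  # -> ['8', '85', '851']
```

Because codes sharing a prefix belong to related product groups, the feature learner can aggregate over those coarser categories instead of only over individual products.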
fast_prop = getml.feature_learning.FastProp(
    loss_function=getml.feature_learning.loss_functions.CrossEntropyLoss,
    num_threads=1,
    sampling_factor=0.1,
)
feature_selector = getml.predictors.XGBoostClassifier()
predictor = getml.predictors.XGBoostClassifier()
Build the pipeline
pipe = getml.pipeline.Pipeline(
    tags=['fast_prop'],
    data_model=star_schema.data_model,
    preprocessors=[substr1, substr2, substr3, mapping, text_field_splitter],
    feature_learners=[fast_prop],
    feature_selectors=[feature_selector],
    predictors=[predictor],
    share_selected_features=0.2,
)
pipe
pipe.check(star_schema.train)
pipe.fit(star_schema.train)
pipe.score(star_schema.test)
The most important feature looks as follows:
pipe.features.to_sql()[pipe.features.sort(by="importances")[0].name]
It is possible to productionize the pipeline by transpiling the features into production-ready SQL code.
pipe.features.to_sql(dialect=getml.pipeline.dialect.spark_sql).save("online_retail_spark")
if RUN_SPARK:
    spark = SparkSession.builder.appName(
        "online_retail"
    ).config(
        "spark.driver.maxResultSize", "5g"
    ).config(
        "spark.driver.memory", "5g"
    ).config(
        "spark.executor.memory", "5g"
    ).config(
        "spark.sql.execution.arrow.pyspark.enabled", "true"
    ).config(
        "spark.sql.session.timeZone", "UTC"
    ).enableHiveSupport().getOrCreate()
    spark.sparkContext.setLogLevel("ERROR")
if RUN_SPARK:
    population_spark = star_schema.train.population.to_pyspark(spark, name="population")
    peripheral_spark = star_schema.full_data.to_pyspark(spark, name="full_data")
if RUN_SPARK:
    getml.spark.execute(spark, "online_retail_spark")
The resulting features are written to a table called FEATURES. Here is how you can retrieve them:
if RUN_SPARK:
    spark.sql("SELECT * FROM `FEATURES` LIMIT 20").toPandas()
In this notebook, we have demonstrated how getML can be applied in an e-commerce setting. In particular, we have seen how results can be improved using the built-in hyperparameter tuning routines.
If you are interested in further real-world applications of getML, head back to the notebook overview and choose one of the remaining examples.
Here is some additional material from our documentation if you want to learn more about getML:
If you have any questions, schedule a call with Alex, the co-founder of getML, or write us an email. Prefer a private demo of getML? Just contact us to make an appointment.