Posts

2023/10/19
A Flexible Model Family of Simple Forecast Methods

Introducing a flexible model family that interpolates between simple forecast methods to produce interpretable probabilistic forecasts of seasonal data by weighting past observations. In business forecasting applications for operational decisions, simple approaches are hard to beat and provide robust expectations that can be relied upon for short- to medium-term decisions. They’re often better at recovering from structural breaks or modeling seasonal peaks than more complicated models, and they don’t overfit unrealistic trends.
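
This isn’t the post’s actual specification, but a rough R sketch of the underlying idea: form the forecast as a weighted average of past observations, and let the weights decide which simple method you recover. The exponential weighting below is an illustrative assumption only.

```r
# Toy illustration: a forecast as a weighted average of past observations.
# With alpha close to 0 the weights are nearly uniform and the forecast
# approaches the overall mean; with alpha close to 1 almost all weight sits
# on the last observation and the forecast approaches the naive method.
weighted_forecast <- function(y, alpha = 0.3) {
  n <- length(y)
  w <- alpha * (1 - alpha)^((n - 1):0)  # exponentially decaying weights
  w <- w / sum(w)                       # normalize so the weights sum to 1
  sum(w * y)
}

set.seed(1)
y <- 100 + 10 * sin(2 * pi * (1:36) / 12) + rnorm(36, sd = 2)

weighted_forecast(y, alpha = 0.01)  # close to mean(y)
weighted_forecast(y, alpha = 0.95)  # close to tail(y, 1), the naive forecast
```

Concentrating the weights on observations 12, 24, 36, … steps back would likewise recover the seasonal naive method, which is what makes a single weighting scheme a convenient way to interpolate between these benchmarks.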

Continue reading?


2022/12/09
ChatGPT and ML Product Management

Huh, look at that, OpenAI’s ChatGPT portrays absolute confidence while giving plain wrong answers. But ChatGPT also provides helpful responses much of the time, so one kind of does want to use it. Sounds an awful lot like every other machine learning model deployed in 2022. But really, how do we turn fallible machine learning models into products to be used by humans? Not by injecting its answers straight into Stack Overflow.

Continue reading?


2022/08/31
Legible Forecasts, and Design for Contestability

Some models are inherently interpretable because one can read their decision boundary right off them. In fact, you could call them interpreted, as there is nothing left for you to interpret: The entire model is written out for you to read. For example, assume it’s July and we need to predict how many scoops of ice cream we’ll sell next month. The Seasonal Naive method tells us: AS 12 MONTHS AGO IN August, PREDICT sales = 3021.
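
As a minimal sketch (mine, not the post’s code, and with made-up numbers), the seasonal naive rule and its verbalized forecast fit in a few lines of R:

```r
# Seasonal naive: predict next month's value as the value observed 12 months
# earlier, and print the rule so it reads like a sentence.
sales <- c(2410, 2440, 2520, 2650, 2800, 2950, 3080, 3021, 2780, 2600, 2480, 2420,
           2450, 2480, 2560, 2700, 2850, 3000, 3120)  # Jan 2021 .. Jul 2022 (toy data)

seasonal_naive <- function(y, m = 12) y[length(y) - m + 1]

fc <- seasonal_naive(sales)  # the August 2021 value
cat(sprintf("AS 12 MONTHS AGO IN August, PREDICT sales = %d\n", fc))
```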

Continue reading?


2022/08/30
Where Is the Seasonal Naive Benchmark?

Yesterday morning, I retweeted this tweet by sklearn_inria that promotes a scikit-learn tutorial notebook on time-related feature engineering. It’s a neat notebook that shows off some fantastic ways of creating features to predict time series within a scikit-learn pipeline. There are, however, two things that irk me: All features of the dataset, including the hourly weather, are passed to the model. I don’t know the details of this dataset, but skimming what I believe to be its description on the OpenML repository, I suspect this might introduce data leakage, as in reality we can’t know the exact hourly humidity and temperature days in advance.

Continue reading?


2022/07/25
When Quantiles Do Not Suffice, Use Sample Paths Instead

I don’t need to convince you that you should absolutely, to one hundred percent, quantify your forecast uncertainty—right? We agree about the advantages of using probabilistic measures to answer questions and to automate decision making—correct? Great. Then let’s dive a bit deeper. So you’re forecasting not just to fill some numbers in a spreadsheet; you are trying to solve a problem, possibly aiming to make optimal decisions in a process concerned with the future.
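
To make the distinction concrete (my sketch, not the post’s code): per-step quantiles answer questions about individual periods, but a decision that depends on the whole horizon—say, total demand over the next four weeks—needs joint sample paths.

```r
# Why per-step quantiles are not enough: a quantity defined across the horizon,
# such as total demand over the next 4 weeks, requires sample paths.
# Toy AR(1)-style demand process; all numbers are made up.
set.seed(42)
n_paths <- 10000
horizon <- 4

simulate_path <- function(h, start = 100, phi = 0.7, sd = 15) {
  y <- numeric(h)
  prev <- start
  for (t in seq_len(h)) {
    prev <- 50 + phi * prev + rnorm(1, sd = sd)
    y[t] <- prev
  }
  y
}

paths <- t(replicate(n_paths, simulate_path(horizon)))  # n_paths x horizon

# Per-step 90% quantiles: fine for "demand in week t"...
apply(paths, 2, quantile, probs = 0.9)

# ...but the 90% quantile of total demand is not the sum of per-step quantiles;
# you need the paths to compute it (here the sum of quantiles is too conservative).
quantile(rowSums(paths), probs = 0.9)
sum(apply(paths, 2, quantile, probs = 0.9))
```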

Continue reading?


2022/07/05
Failure Modes of State Space Models

State space models are great, but they will fail in predictable ways. Well, claiming that they “fail” is a bit unfair. They actually behave exactly as they should given the input data. But if the input data fails to adhere to the Normal assumption or lacks stationarity, then this will affect the prediction derived from the state space models in perhaps unexpected yet deterministic ways. This article ensures that none of us is surprised by these “failure modes”.

Continue reading?


2021/10/03
On Google Maps Directions

Google Maps and its Directions feature are the kind of data science product everyone wishes they were building. It augments the user, enabling decision-making while driving. Directions exemplifies the difference between prediction and prescription. Google Maps doesn’t just expose data, and it doesn’t provide a raw analysis by-product like SHAP values. It processes historical and live data to predict the future and to optimize my route based on it, returning only the refined recommendations.

Continue reading?


2021/08/12
What Needs to Prove True for This to Work?

Data science projects are a tricky bunch. They entice you with challenging problems and promise a huge return if successful. In contrast to more traditional software engineering projects, however, data science projects entail more upfront uncertainty: You won’t know until you’ve tried whether the technology is good enough to solve the problem. Consequently, a data science endeavor fails more often, or doesn’t turn out to be the smash hit you and your stakeholders expected it to be.

Continue reading?


2020/06/14
Embedding Many Time Series via Recurrence Plots

We demonstrate how recurrence plots can be used to embed a large set of time series via UMAP and HDBSCAN to quickly identify groups of series with unique characteristics such as seasonality or outliers. The approach supports exploratory analysis of time series via visualization, which otherwise scales poorly to large sets of related time series. We show how it works using a Walmart dataset of sales and a Citi Bike dataset of bike rides.
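
The full pipeline is in the post; a condensed sketch of the idea in R could look as follows (the uwot and dbscan packages and the toy data are my assumptions, not necessarily the post’s choices):

```r
# Sketch: embed many time series via their recurrence plots.
# Each series becomes a flattened matrix of pairwise absolute differences
# (a simple, unthresholded recurrence plot); UMAP maps these vectors to 2D,
# and HDBSCAN clusters the embedding.
library(uwot)    # umap()
library(dbscan)  # hdbscan()

set.seed(123)
n_series <- 200
len <- 52

# Toy data: half the series are seasonal, half are plain noise.
make_series <- function(seasonal) {
  base <- if (seasonal) 10 * sin(2 * pi * (1:len) / 52) else rnorm(len)
  base + rnorm(len, sd = 0.5)
}
series <- lapply(1:n_series, function(i) make_series(seasonal = i <= n_series / 2))

# Recurrence representation: |y_i - y_j| over all pairs of time points.
recurrence_vec <- function(y) as.vector(abs(outer(y, y, "-")))
X <- do.call(rbind, lapply(series, recurrence_vec))  # n_series x len^2

emb <- umap(X, n_neighbors = 15, min_dist = 0.1)  # 2D embedding
cl  <- hdbscan(emb, minPts = 10)                  # density-based clusters
table(cl$cluster)
```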

Continue reading?


2020/06/07
Rediscovering Bayesian Structural Time Series

This article derives the Local-Linear Trend specification of the Bayesian Structural Time Series model family from scratch, implements it in Stan and visualizes its components via tidybayes. To provide context, links to GAMs and the prophet package are highlighted. The code is available here. I tried to come up with a simple way to detect “outliers” in time series. Nothing special, no anomaly detection via variational auto-encoders, just finding values of low probability in a univariate time series.
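
The Stan implementation and the tidybayes plots are in the post itself; purely as a reminder of what the local-linear trend specification looks like, here is a small R simulation of its generative process (the parameter values are made up):

```r
# Local-linear trend model, simulated forward from its state equations:
#   mu[t]    = mu[t-1] + delta[t-1] + eta_mu[t],     eta_mu    ~ Normal(0, s_mu)
#   delta[t] = delta[t-1]           + eta_delta[t],  eta_delta ~ Normal(0, s_delta)
#   y[t]     = mu[t] + eps[t],                       eps       ~ Normal(0, s_y)
set.seed(7)
n <- 100
s_mu <- 0.5; s_delta <- 0.05; s_y <- 1

mu <- numeric(n); delta <- numeric(n); y <- numeric(n)
mu[1] <- 10; delta[1] <- 0.2
y[1] <- rnorm(1, mu[1], s_y)

for (t in 2:n) {
  delta[t] <- delta[t - 1] + rnorm(1, 0, s_delta)
  mu[t]    <- mu[t - 1] + delta[t - 1] + rnorm(1, 0, s_mu)
  y[t]     <- rnorm(1, mu[t], s_y)
}

plot(y, type = "l", main = "Simulated local-linear trend")
lines(mu, col = "red")  # latent level
```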

Continue reading?


2020/04/26
Are You Sure This Embedding Is Good Enough?

Suppose you are given a data set of five images to train on, and then have to classify new images with your trained model. Five training samples are in general not sufficient to train a state-of-the-art image classification model, thus this problem is hard and has earned its own name: few-shot image classification. A lot has been written on few-shot image classification and complex approaches have been suggested. Tian et al.

Continue reading?


2020/01/18
The Causal Effect of New Year’s Resolutions

We treat the turn of the year as an intervention to infer the causal effect of New Year’s resolutions on McFit’s Google Trend index. By comparing the observed values from the treatment period against predicted values from a counterfactual model, we are able to derive the overall lift induced by the intervention. Throughout the year, people’s interest in a McFit gym membership appears quite stable. The following graph shows the Google Trend for the search term “McFit” in Germany from April 2017 until the week of December 17, 2017.
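
The post builds a proper counterfactual model; as a bare-bones sketch of the lift calculation only (my toy example, not the post’s model), one can fit a model on the pre-intervention weeks, predict the treatment weeks, and sum the differences:

```r
# Sketch of the lift calculation: fit on the pre-intervention period, predict
# the treatment period as a counterfactual, and sum observed minus predicted.
# Toy weekly index values standing in for the Google Trend series.
set.seed(2017)
weeks <- 1:52
index <- 50 + 0.05 * weeks + rnorm(52, sd = 2)
index[50:52] <- index[50:52] + 15  # made-up bump around the turn of the year

pre  <- data.frame(week = weeks[1:49], index = index[1:49])
post <- data.frame(week = weeks[50:52], index = index[50:52])

fit <- lm(index ~ week, data = pre)       # simplistic counterfactual model
counterfactual <- predict(fit, newdata = post)

lift <- sum(post$index - counterfactual)  # overall lift induced by the intervention
lift
```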

Continue reading?


2019/06/16
satRday Berlin Presentation

My satRday Berlin slides on “Modeling Short Time Series” are available here. This Saturday, June 15, Berlin had its first satRday conference. I eagerly followed the hashtags of satRday Amsterdam last year and satRday Cape Town the year before that on Twitter. Thanks to Noa Tamir, Jakob Graff, Steve Cunningham, and many others, we got a conference in Berlin as well. When I saw the call for papers, I jumped at the opportunity to present, trying out what it feels like to be on the other side of the microphone: being in the hashtag instead of following it.

Continue reading?


2019/04/16
Modeling Short Time Series with Prior Knowledge

I just published a longer case study, Modeling Short Time Series with Prior Knowledge: What ‘Including Prior Information’ really looks like. It is generally difficult to model time series when there is insufficient data to model a (suspected) long seasonality. We show how this difficulty can be overcome by learning a seasonality on a different, long related time series and transferring the posterior as a prior distribution to the model of the short time series.

Continue reading?


2019/03/23
The Probabilistic Programming Workflow

Last week, I gave a presentation about the concept of and intuition behind probabilistic programming and model-based machine learning in front of a general audience. You can read my extended notes here. Drawing on ideas from Winn and Bishop’s “Model-Based Machine Learning” and van de Meent et al.’s “An Introduction to Probabilistic Programming”, I try to show why the combination of a data-generating process with an abstracted inference is a powerful concept by walking through the example of a simple survival model.

Continue reading?


2019/02/24
Problem Representations and Model-Based Machine Learning

Back in 2003, Paul Graham, of Viaweb and Y Combinator fame, published an article entitled “Better Bayesian Filtering”. I was scrolling chronologically through his essays archive the other day when this article stuck out to me (well, the “Bayesian” keyword). After reading the first few paragraphs, I was a little disappointed to realize the topic was Naive Bayes rather than Bayesian methods. But it turned out to be a tale of implementing a machine learning solution for a real-world application before anyone dared to mention AI in the same sentence.

Continue reading?


2018/07/25
SVD for a Low-Dimensional Embedding of Instacart Products

Building on the Instacart product recommendations based on Pointwise Mutual Information (PMI) in the previous article, we use Singular Value Decomposition to factorize the PMI matrix into a matrix of lower dimension (“embedding”). This allows us to identify groups of related products easily. We finished the previous article with a long table where every row measured how surprisingly often two products were bought together according to the Instacart Online Grocery Shopping dataset.
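
A condensed sketch of the factorization step in R (not the post’s exact code; the PMI matrix below is a random placeholder):

```r
# Sketch: factorize a PMI matrix with SVD to obtain low-dimensional product
# embeddings, in which related products are nearest neighbors.
set.seed(1)
n_products <- 50
m <- matrix(rnorm(n_products^2), n_products)
pmi <- (m + t(m)) / 2                      # placeholder for the real PMI matrix

k <- 10                                    # embedding dimension
s <- svd(pmi, nu = k, nv = k)
embedding <- s$u %*% diag(sqrt(s$d[1:k]))  # one k-dimensional row per product

# Cosine similarity in the embedding space, and the 5 nearest neighbors of product 1.
norms  <- sqrt(rowSums(embedding^2))
cosine <- (embedding %*% t(embedding)) / (norms %o% norms)
order(cosine[1, ], decreasing = TRUE)[2:6]
```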

Continue reading?


2018/06/17
Pointwise Mutual Information for Instacart Product Recommendations

Using pointwise mutual information, we create highly efficient “customers who bought this item also bought” style product recommendations for more than 8000 Instacart products. The method can be implemented in a few lines of SQL yet produces high-quality product suggestions. Check them out in this Shiny app. Back in school, I was a big fan of the Detective Conan anime. For whatever reason, one of the episodes stuck with me.
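
The post does this in SQL on the full Instacart data; a rough R equivalent of the core quantity (my sketch, with toy baskets) is simply the log-ratio of observed co-occurrence to what independence would predict:

```r
# Pointwise mutual information for product pairs:
#   PMI(a, b) = log( P(a, b) / (P(a) * P(b)) )
# with probabilities estimated from basket co-occurrence counts.
baskets <- list(
  c("bananas", "milk"),
  c("bananas", "milk", "bread"),
  c("bread", "butter"),
  c("bananas", "bread", "butter"),
  c("milk", "butter")
)

p_item <- function(x)    mean(vapply(baskets, function(b) x %in% b, logical(1)))
p_pair <- function(x, y) mean(vapply(baskets, function(b) all(c(x, y) %in% b), logical(1)))
pmi    <- function(x, y) log(p_pair(x, y) / (p_item(x) * p_item(y)))

pmi("bananas", "milk")   # bought together more often than chance -> positive
pmi("milk", "butter")    # less often than chance -> negative
```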

Continue reading?


2017/07/01
Pokémon Recommendation Engine

Using t-SNE, I wrote a Shiny app that recommends similar Pokémon. Try it out here. Needless to say, I was and still am a big fan of the Pokémon games. So I was very excited to see that a lot of the meta data used in Pokémon games is available on GitHub thanks to the Pokémon API project. Data on the Pokémon’s names, types, moves, special abilities, strengths, and weaknesses is all cleanly organized in a few dozen CSV files.

Continue reading?


2017/01/25
Look At All These Links

By now, some time has passed since NIPS 2016. Consequently, several recaps can be found on blogs. One of them is this one by Eric Jang. If you want to make your first steps in putting some of the theory presented at NIPS into practice, why not take a look at this slide deck about reinforcement learning in R? The RStudio Conference also took place, and apparently was a blast.

Continue reading?


2016/11/06
Look At All These Links

At Airbnb, the data science team has written their own R packages to scale with the company’s growth. The most basic achievement of these packages is the standardization of work (ggplot and RMarkdown templates) and the reduction of duplicated effort (importing data). New employees are introduced to the infrastructure with extensive workshops. This reminded me of a presentation by Hilary Parker in April at the New York R Conference on Scaling Analysis Responsibly.

Continue reading?


2016/06/14
Three Types of Cluster Reproducibility

Christian Hennig provides a function called clusterboot() in his R package fpc, which I mentioned before when talking about assessing the quality of a clustering. The function runs the same cluster algorithm on several bootstrapped samples of the data to make sure that clusters are reproduced in different samples; it validates cluster stability. In a similar vein, the reproducibility of clusterings with subsequent use for market segmentation is discussed in this paper by Dolnicar and Leisch.
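
A minimal usage sketch (based on my reading of the fpc documentation, so treat the exact arguments as an assumption):

```r
# Cluster stability check with fpc::clusterboot(): run the same k-means
# clustering on bootstrap resamples and inspect the clusterwise mean Jaccard
# similarities between the original clusters and their bootstrap counterparts.
library(fpc)

set.seed(1)
X <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 4), ncol = 2))  # two well-separated groups

cb <- clusterboot(X, B = 100, clustermethod = kmeansCBI, krange = 2, seed = 1)
cb$bootmean  # mean Jaccard similarity per cluster; values near 1 indicate stable clusters
```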

Continue reading?


2016/05/30
Assessing the Quality of a Clustering Solution

During one of the talks at PyData Berlin, a presenter quickly mentioned a k-means clustering used to group similar clothing brands. She commented that it wasn’t perfect, but good enough and the result you would expect from a k-means clustering. There remains the question, however, of how one can assess whether a clustering is “good enough”. In the above case, the number of brands is rather small, and simply by looking at the groups one is able to assess whether the combination of Tommy Hilfiger and Marc O’Polo is sensible.
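
One standard numeric answer to “good enough” (my example, not from the talk or the post) is the average silhouette width, which compares each point’s distance to its own cluster with its distance to the nearest other cluster:

```r
# Average silhouette width as a quick check of clustering quality: values near 1
# mean points sit firmly inside their cluster, values near 0 mean they lie
# between clusters.
library(cluster)  # silhouette()

set.seed(1)
X <- rbind(matrix(rnorm(60, mean = 0), ncol = 2),
           matrix(rnorm(60, mean = 3), ncol = 2))

km  <- kmeans(X, centers = 2, nstart = 10)
sil <- silhouette(km$cluster, dist(X))
mean(sil[, "sil_width"])  # average silhouette width across all points
```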

Continue reading?


2015/09/21
Taxi Pulse of New York City

I don’t know about you, but I think taxi data is fascinating. There is a lot you can do with these data sets, as they usually contain geolocation observations as well as timestamps, among other information, which makes them unique. Geolocation and timestamps alone, together with the large number of observations in cities like New York, enable you to create stunning visualizations that aren’t possible with any other set of data.

Continue reading?


2015/08/26
Analyzing Taxi Data to Create a Map of New York City

Yet another day was spent working on the taxi data provided by the NYC Taxi and Limousine Commission (TLC). My goal in working with the data was to create a plot that maps the streets of New York using the geolocation data that is provided for the taxis’ pickup and dropoff locations as longitude and latitude values. So far, I had only used the dataset for January of 2015 to plot the locations; also, I hadn’t used all of the more than 12 million observations from January alone, but a smaller sample (100,000 to 500,000 observations).
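
The basic plotting idea (a sketch with a toy stand-in for the trip records, not the original code) is to draw hundreds of thousands of pickup points as tiny, semi-transparent dots so that the street grid emerges on its own:

```r
# Sketch: a map of New York drawn purely from taxi pickup coordinates.
# `trips` stands in for a sample of the TLC trip records; in the real case it
# would be read from the published CSV files.
library(ggplot2)

set.seed(1)
trips <- data.frame(
  pickup_longitude = rnorm(50000, mean = -73.98, sd = 0.02),
  pickup_latitude  = rnorm(50000, mean = 40.75,  sd = 0.02)
)

ggplot(trips, aes(pickup_longitude, pickup_latitude)) +
  geom_point(size = 0.05, alpha = 0.1, colour = "white") +
  coord_quickmap() +
  theme_void() +
  theme(panel.background = element_rect(fill = "black"))
```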

Continue reading?