# Changepoint Detection – Part 1

Previously we discussed what a changepoint is, how changepoints arise and examples of where you might find them. Today, however, we shall go one step further and discuss how to actually find these changepoints.

*Multiple changepoints have been found in this series; they are marked by the red lines.*

Let’s begin by trying to figure out a way of detecting if just a single changepoint exists in a set of data. The simplest way to pose this problem is to describe it through a ‘Hypothesis Test’, where we have a null hypothesis ($H_0$) and an alternative hypothesis ($H_1$),

$H_0:$ no changepoint            $H_1:$ 1 changepoint

If you are unfamiliar with hypothesis tests, the idea is to run some kind of test that gives us a value. If that value is above a certain threshold, we can reject the null hypothesis and, in our case, accept the alternative hypothesis. If not, we cannot disprove the null hypothesis and we treat it as true. In layman’s terms, it’s the classic innocent until proven guilty.

If you remember, we previously said that the defining feature of a changepoint is that the statistical parameters (mean, variance, regression coefficients, etc.) differ before and after it. To be able to notice this change, however, we first need to be able to estimate those parameters. The way to do this is through a well-known statistical quantity called the ‘log-likelihood’. If you want to know more about it, feel free to Google it, but suffice it to say that for our purposes we essentially just want to maximise it.
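
As a concrete sketch of what maximising a log-likelihood looks like, take the simple (illustrative) case of Gaussian data with known unit variance: the log-likelihood of a segment is maximised by plugging in the sample mean as the estimate of $\theta$. The helper name `segment_loglik` is ours, not from any library:

```python
import numpy as np

def segment_loglik(y):
    # Gaussian log-likelihood of a segment, maximised over the mean:
    # the MLE of the mean is the sample mean. Variance is assumed
    # known and equal to 1 to keep the toy example simple.
    n, mu_hat = len(y), y.mean()
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y - mu_hat) ** 2)
```

Any other choice of mean would only increase the squared-error term, so this is the maximum the hypothesis test below will work with.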

For $H_0$ there is no changepoint, so we evaluate the maximum log-likelihood over the entire set of data, $y=(y_1,y_2,...,y_n)$,

$X = \max_{\theta} \ell (y_{1:n}|\theta)$

For $H_1$ there is a changepoint, so we evaluate the maximum log-likelihood of the two segments separately (since they have different parameters) and then add them together. We also have to maximise over all the possible locations $i$ of the changepoint, because we want to put the changepoint in the best place it can be, not just anywhere. Note that the two segments must not overlap, so the second one starts at $i+1$. This yields,

$Y = \max_{i} \left[ \max_{\theta_1} \ell (y_{1:i}|\theta_1) + \max_{\theta_2} \ell (y_{(i+1):n}|\theta_2) \right]$
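
A minimal sketch of computing $Y$, again assuming a toy Gaussian model with known unit variance (so each segment’s likelihood is maximised by its sample mean); both function names are illustrative. Python slicing is 0-based, so `y[:i]` holds $y_{1:i}$ and `y[i:]` holds $y_{(i+1):n}$:

```python
import numpy as np

def segment_loglik(y):
    # Gaussian segment log-likelihood, maximised over the mean
    # (MLE = sample mean); variance assumed known and equal to 1.
    n, mu_hat = len(y), y.mean()
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y - mu_hat) ** 2)

def max_split_loglik(y):
    # Maximise the summed log-likelihood over every candidate
    # changepoint position i = 1, ..., n-1 (both segments non-empty).
    scores = [segment_loglik(y[:i]) + segment_loglik(y[i:])
              for i in range(1, len(y))]
    i_hat = int(np.argmax(scores)) + 1  # best split position
    return scores[i_hat - 1], i_hat

Y, i_hat = max_split_loglik(np.concatenate([np.zeros(10), np.full(10, 5.0)]))
# i_hat == 10: the best split falls exactly at the jump
```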

We are now in a position to use our two maximum log-likelihoods to see if we have a changepoint or not. The test that we will use is the ‘Likelihood Ratio Test’, which essentially compares the relative values of our two likelihoods. This is also fairly simple to perform, as we already have log-likelihoods.

$T = 2(Y-X) = 2\left( \max_{i} \left[ \max_{\theta_1} \ell (y_{1:i}|\theta_1) + \max_{\theta_2} \ell (y_{(i+1):n}|\theta_2) \right] - \max_{\theta} \ell (y_{1:n}|\theta) \right)$

Then if the outcome of our test $T$ is greater than a threshold $\beta$, i.e. $T>\beta$, we can reject $H_0$ and hence accept $H_1$. This means that we have detected a changepoint, and its estimated location is simply the value of $i$ that achieved the maximum in $Y$,

$\hat i = \underset{i}{\arg\max} \left[ \max_{\theta_1} \ell (y_{1:i}|\theta_1) + \max_{\theta_2} \ell (y_{(i+1):n}|\theta_2) \right]$
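
Putting the pieces together, here is a minimal end-to-end sketch of the whole test for a change in mean of Gaussian data with known unit variance. The function names and the threshold value are illustrative choices, not from any particular library (in practice $\beta$ would be set from the test’s null distribution):

```python
import numpy as np

def segment_loglik(y):
    # Gaussian segment log-likelihood, maximised over the mean
    # (MLE = sample mean); variance assumed known and equal to 1.
    n, mu_hat = len(y), y.mean()
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y - mu_hat) ** 2)

def single_changepoint_test(y, beta):
    # X: maximum log-likelihood under H0 (no changepoint).
    X = segment_loglik(y)
    # Y: maximum over all candidate split positions i; y[:i] is
    # y_{1:i} and y[i:] is y_{(i+1):n} in the article's notation.
    scores = [segment_loglik(y[:i]) + segment_loglik(y[i:])
              for i in range(1, len(y))]
    i_hat = int(np.argmax(scores)) + 1
    Y = scores[i_hat - 1]
    # Likelihood ratio test statistic: reject H0 when T > beta.
    T = 2 * (Y - X)
    return (i_hat, T) if T > beta else (None, T)

y = np.concatenate([np.zeros(30), np.full(30, 4.0)])
i_hat, T = single_changepoint_test(y, beta=10.0)
# i_hat == 30, T == 240.0 (twice the drop in residual sum of squares)
```

When there is no change, $Y$ barely improves on $X$, $T$ stays below the threshold, and the function reports no changepoint.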

*A not-so-obvious example of a single changepoint.*