Maintenance and Optimal Stopping

Imagine we have a machine in a factory that makes products that are sold to the general public. Ideally we want to make as many products as possible so that we can sell them all and make a nice healthy profit. This means that we really want to be running the machine all through the day with no unnecessary stoppages.

Realistically, however, we know that machines do in fact break down, and companies employ teams of people whose job it is to maintain the machines so that they can keep them running, or repair them if they do break. Generally these maintenance teams will have a planned schedule of times when they will stop the machine in order to do their necessary checks and servicing.


This approach works well, except that the production and maintenance departments often have conflicting views. The production department wants to make sure that products are always being made (as few stops as possible), whereas the maintenance department wants to stop the machine from breaking down (which requires frequent stops). This poses the question: how do we find the optimum balance between a reliable machine and a productive machine?


Redefined Bradley-Terry Models

In this final part of my series on Bradley-Terry models I will talk about how the simple concepts behind Bradley-Terry models link with and underpin some more well-known and advanced concepts.

1. Logistic Regression

Let’s start by substituting \lambda_i = e^{b_i} into the Bradley-Terry formula.

P(i \text{ beats } j) = \cfrac{\lambda_i}{\lambda_i + \lambda_j}

P(i \text{ beats } j) = \cfrac{e^{b_i}}{e^{b_i} + e^{b_j}}

With a bit of mathematical manipulation we can recast this into a more familiar form.

P(i \text{ beats } j) = \cfrac{e^{b_i-b_j}}{e^{b_i-b_j} + 1} = \cfrac{1}{1 + e^{-(b_i-b_j)}}

This is exactly the form of a logistic regression – a regression where the dependent variable can only take two values, i.e. it is binary. In our case the only outcomes are team i winning or team i losing. The logistic (inverse logit) function is defined as:

\text{ invlogit}(x) = \cfrac{e^{x}}{e^{x} + 1} = \cfrac{1}{1 + e^{-x}}
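As a quick sanity check (a minimal sketch in plain Python, with made-up strength values for b_i and b_j), we can confirm numerically that the \lambda form of the win probability agrees with the inverse logit of the strength difference:

```python
import math

def bt_prob(lam_i, lam_j):
    # Bradley-Terry win probability in the lambda parameterisation
    return lam_i / (lam_i + lam_j)

def invlogit(x):
    # inverse logit (logistic) function: 1 / (1 + e^{-x})
    return 1.0 / (1.0 + math.exp(-x))

b_i, b_j = 1.2, 0.4                      # hypothetical team strengths
lam_i, lam_j = math.exp(b_i), math.exp(b_j)

p_lambda = bt_prob(lam_i, lam_j)         # lambda_i / (lambda_i + lambda_j)
p_logit = invlogit(b_i - b_j)            # 1 / (1 + e^{-(b_i - b_j)})
print(p_lambda, p_logit)                 # the two agree up to rounding
```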

Then, applying the logit (log-odds) transformation to our win probability and simplifying using the fact that \lambda_i = e^{b_i}, we obtain the following.

\text{ logit}(P(i \text{ beats } j)) = \log{\cfrac{P(i \text{ beats } j)}{1 - P(i \text{ beats } j)}} = \log{\cfrac{\lambda_i}{\lambda_j}} = b_i - b_j

From here it is simple to invert the transformation to get the final result: the probability of team i beating team j is just the inverse logit of the strength difference b_i - b_j.

P(i \text{ beats } j) = \text{ invlogit}(b_i - b_j)
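This equivalence means Bradley-Terry strengths can be estimated with ordinary logistic-regression machinery. Below is a minimal sketch in plain Python under some assumptions: the teams and match results are made up, one strength is pinned at zero for identifiability, and a simple hand-rolled gradient ascent stands in for a proper logistic-regression solver:

```python
import math

teams = ["A", "B", "C"]
# hypothetical match results, recorded as (winner, loser)
matches = [("A", "B"), ("A", "B"), ("B", "A"),
           ("A", "C"), ("C", "B"), ("A", "C")]

b = {t: 0.0 for t in teams}          # strengths; b["C"] stays pinned at 0

def p_win(bi, bj):
    # P(i beats j) = invlogit(b_i - b_j)
    return 1.0 / (1.0 + math.exp(-(bi - bj)))

lr = 0.1
for _ in range(2000):                # gradient ascent on the log-likelihood
    grad = {t: 0.0 for t in teams}
    for w, l in matches:
        p = p_win(b[w], b[l])        # model probability of the observed result
        grad[w] += 1.0 - p           # d log L / d b_winner
        grad[l] -= 1.0 - p           # d log L / d b_loser
    for t in teams:
        if t != "C":                 # pin one strength for identifiability
            b[t] += lr * grad[t]

print(b)                             # A, the dominant team, ends up strongest
```

Each match contributes the gradient of \log \text{invlogit}(b_{\text{winner}} - b_{\text{loser}}); with a design matrix of +1/−1 indicator columns, the same fit could equally be obtained from any standard logistic-regression routine.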
