Fair ticket controls

An employee at a public transportation company in Norway made an interesting post on LinkedIn explaining how the company “uses AI to predict places where people will be traveling without tickets,” and that the goal was to “optimize the existing ticket control process.” He also mentioned that the company “put special focus on the ethical use of AI in this algorithm.”

The comment on ethics raised questions from other LinkedIn members, such as:

  • “How did you ‘fix’ the ethical issues?”
  • “How do you avoid that this becomes a feedback loop?”
  • “I would appreciate it if you could elaborate on your methodologies and ethical considerations in greater detail.”

The employee did not go into detail about exactly how the approach was ethical, but it’s such an interesting problem that I couldn’t resist thinking about it a little.

What’s ethical, anyway?

I’m skeptical of claims that statistical models (or “AI algorithms,” as they are called here) are “ethical” or “bias-free” without a very precise explanation. The purpose of these models, whether they optimize ticket controls, decide who is granted a loan, set the size of an insurance premium, and so forth, is to pre-judge. No model knows how you will act in the future, so for better or worse it generalizes by looking at people like you (or your past actions, if historical data is available).

Is it fair to use historical data in the context of ticket controls? We can argue either way:

  • If there are many freeloaders on your daily bus route, is it fair that you get controlled more often as a result? After all, you always buy a ticket. You might start to feel a bit harassed by the controllers.
  • By contrast, if controllers are dispatched at random, then you will not get controlled more often than anyone else. Is that really fair though? After all, now the bus company is not catching nearly as many freeloaders as they could have if they used historical data. The company is now wasting time and (your) money on inefficient controls, and relatively few cheaters are brought to justice.

Let’s concretize the discussion further with a simple example. Imagine three bus routes in different areas of the city, and label them \(A\), \(B\) and \(C\). Suppose we send a person to control tickets for \(50\) hours or so in each location. We get the following data back:

Route                                            A          B          C
People controlled \(c_i\)                        \(1000\)   \(3000\)   \(2000\)
Without ticket \(w_i\) (freeloaders)             \(50\)     \(60\)     \(80\)
Proportion without ticket \(p_i\)                \(0.05\)   \(0.02\)   \(0.04\)

Assume that there is no uncertainty in \(p_i\), the estimate of the proportion of travelers without a ticket. This simplifies reality a bit, but neatly separates the ethical problem from the statistical problem. We’ll return to the realistic case with uncertainty in the estimates at the very end of this article.
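As a quick sanity check, the arithmetic behind the table is simply \(p_i = w_i / c_i\). A minimal Python sketch:

```python
# Counts from the table above
c = {"A": 1000, "B": 3000, "C": 2000}  # people controlled
w = {"A": 50, "B": 60, "C": 80}        # travelers without a ticket

# Proportion without a ticket on each route
p = {route: w[route] / c[route] for route in c}
print(p)  # {'A': 0.05, 'B': 0.02, 'C': 0.04}
```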

Now, what would an ethical approach to ticket controls entail?

A solution is a probability distribution \(D = [d_A, d_B, d_C]\), where \(d_i\) is the probability that a ticket controller will, on any given day, perform controls on route \(i\).

  • (a) If we want every person to have equal probability of being controlled, we distribute controllers based on the overall density of travelers, sending controllers to \(A\), \(B\) and \(C\) with probability proportional to the number of people traveling on each route, i.e., \(d_i = c_i / \sum_j c_j\) and \(D = [1/6, 3/6, 2/6]\). This solution uses no information about where freeloaders travel. Everyone is innocent until proven guilty, or at least under the same amount of suspicion.
  • (b) If we want every freeloader to have equal probability of being controlled, we distribute proportionally to the number of people without a ticket on each route, i.e., \(d_i = w_i / \sum_j w_j\) and \(D = [5/19, 6/19, 8/19]\).
  • (c) If we want to catch equally many freeloaders on each route, we distribute based on the inverse proportion of people without a ticket, i.e., \(d_i = p_i^{-1} / \sum_j p_j^{-1}\) and \(D = [4/19, 10/19, 5/19]\). This is the “equal outcome” solution.
  • (d) If we want the proportion of controllers to match the proportion of freeloaders, we distribute based on the proportion of travelers without a ticket, i.e., \(d_i = p_i / \sum_j p_j\) and \(D = [5/11, 2/11, 4/11]\).
  • (e) If we want every bus route to have equal probability of being controlled, we simply distribute controllers with equal probability to each bus route, i.e., \(D = [1/3, 1/3, 1/3]\).
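All five distributions are just different choices of weights followed by a normalization. A short Python sketch with exact fractions (using the counts from the table above):

```python
from fractions import Fraction

# Counts from the table, for routes A, B, C
c = [1000, 3000, 2000]                          # people controlled
w = [50, 60, 80]                                # freeloaders found
p = [Fraction(wi, ci) for wi, ci in zip(w, c)]  # proportions without ticket

def normalize(values):
    """Scale non-negative weights so they sum to one."""
    total = sum(values)
    return [Fraction(v) / total for v in values]

D_a = normalize(c)                     # (a) equal probability per person
D_b = normalize(w)                     # (b) equal probability per freeloader
D_c = normalize([1 / pi for pi in p])  # (c) equal catches per route
D_d = normalize(p)                     # (d) controllers match freeloader proportion
D_e = normalize([1, 1, 1])             # (e) equal probability per route

print(D_b)  # [Fraction(5, 19), Fraction(6, 19), Fraction(8, 19)]
```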

We can also think of more extreme solutions, where we maximize an objective. These solutions are extreme in the sense that they always send controllers to the same route.

  • (f) If we want to maximize the expected number of people controlled per hour, we always send controllers to \(\operatorname{argmax}_i c_i = B\) so that \(D = [0, 1, 0]\).
  • (g) If we want to maximize the expected number of freeloaders we catch per hour, we choose \(\operatorname{argmax}_i w_i = C\) and \(D = [0, 0, 1]\).
  • (h) If we want to maximize the expected number of freeloaders we catch per person controlled, we choose \(\operatorname{argmax}_i p_i = A\) and \(D = [1, 0, 0]\).
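The three extreme solutions each reduce to picking an argmax over the table. In Python (same counts as before):

```python
routes = ["A", "B", "C"]
c = {"A": 1000, "B": 3000, "C": 2000}  # people controlled per 50 hours
w = {"A": 50, "B": 60, "C": 80}        # freeloaders found
p = {r: w[r] / c[r] for r in routes}   # proportion without ticket

route_f = max(routes, key=lambda r: c[r])  # (f) most people controlled -> "B"
route_g = max(routes, key=lambda r: w[r])  # (g) most freeloaders caught -> "C"
route_h = max(routes, key=lambda r: p[r])  # (h) highest catch rate -> "A"
```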

The figure below shows the solutions on the probability simplex.

Discussion on solutions

All of the approaches above are valid solutions, in the sense that they satisfy a criterion or maximize an objective. More importantly, they are all at odds with each other. For instance, if we ensure that every person has equal probability of being controlled, then we must accept that we catch fewer freeloaders than we otherwise would have.
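We can make the tension concrete by computing the expected number of freeloaders caught per controller-hour under a few of the distributions, assuming (for illustration) that each route keeps the hourly catch rate \(w_i/50\) observed during the 50-hour experiment:

```python
from fractions import Fraction as F

w = [50, 60, 80]                  # freeloaders found in 50 hours on A, B, C
hourly = [F(wi, 50) for wi in w]  # catch rate per controller-hour per route

def expected_catch(D):
    """Expected freeloaders caught per controller-hour under distribution D."""
    return sum(d * h for d, h in zip(D, hourly))

D_a = [F(1, 6), F(1, 2), F(1, 3)]     # (a) equal probability per person
D_b = [F(5, 19), F(6, 19), F(8, 19)]  # (b) equal probability per freeloader
D_g = [0, 0, 1]                       # (g) always control route C

print(expected_catch(D_a))  # 13/10 (= 1.3 per hour)
print(expected_catch(D_b))  # 25/19 (about 1.32 per hour)
print(expected_catch(D_g))  # 8/5  (= 1.6 per hour)
```

Moving from (a) toward (g) catches more freeloaders per hour, at the cost of concentrating all controls on one route.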

Are any of the eight solutions above obviously more ethical or fair than the others? I don’t think so. Instead of claiming that some method is “ethical,” it would be better to explain the methodology as precisely as possible and let others judge. I’m sure reasonable people would disagree even on this simple problem with three routes, because people have differing views on what is fair (that’s why people disagree on politics).

In summary, I don’t believe there is a mathematical solution to fairness in general—but given a precise notion of fairness, mathematics can certainly help us realize that objective.

Notes and references

  • Assuming there is no uncertainty in \(p_i\), I personally lean toward solution (b): I want every freeloader to have equal probability of being controlled. This approach would bring in a decent amount of money and catch freeloaders wherever they travel, but it does mean that some paying travelers would be subject to more frequent controls than others.
  • If we remove the unrealistic assumption that there is no uncertainty in \(p_i\), we have a multi-armed bandit problem. In these problems it is important to model the uncertainty in the parameters and do some joint learning and regularization. Furthermore, I would weigh recent data more heavily than old data and I would never stop exploring. In other words, the estimate of each \(p_i\) should never go to zero. I would still go for solution (b).
  • The website Attacking discrimination with smarter machine learning shows four strategies for granting loans: (1) max profit, (2) group unaware, (3) demographic parity and (4) equal opportunity. As with the ticket control problem, there is no ultimately fair solution; it boils down to tradeoffs and value judgements.
  • After posting this article, I was made aware of a news article in Norwegian written about this titled Nå kommer AI-teamet til Ruter med en slags “Ruter-GPT”, for nesten all kollektivtransport i Norge (roughly, “Now Ruter’s AI team arrives with a kind of ‘Ruter-GPT’, for almost all public transport in Norway”).
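The bandit approach sketched in the notes above could look roughly as follows. This is a hypothetical illustration, not any company’s actual method: a Beta posterior over each \(p_i\) with exponential forgetting (so recent data weighs more and the posterior never collapses, i.e., exploration never stops), plus a Thompson-style draw that targets solution (b). All names (`DiscountedBeta`, `gamma`, the traffic counts) are made up for the example.

```python
import random

class DiscountedBeta:
    """Beta posterior over a route's freeloader proportion p_i,
    with exponential forgetting so recent data weighs more."""

    def __init__(self, gamma=0.99):
        self.alpha, self.beta, self.gamma = 1.0, 1.0, gamma  # uniform prior

    def update(self, freeloaders, controlled):
        # Discount old pseudo-counts, then add today's observations
        self.alpha = self.gamma * self.alpha + freeloaders
        self.beta = self.gamma * self.beta + (controlled - freeloaders)

    def sample(self):
        # Posterior draw of p_i; stays strictly positive, so we keep exploring
        return random.betavariate(self.alpha, self.beta)

routes = ["A", "B", "C"]
travelers = {"A": 1000, "B": 3000, "C": 2000}  # assumed known daily traffic
posterior = {r: DiscountedBeta() for r in routes}

def choose_route():
    # Thompson-style version of solution (b): weight each route by the
    # implied freeloader count w_i = c_i * (sampled p_i)
    weights = [travelers[r] * posterior[r].sample() for r in routes]
    return random.choices(routes, weights=weights)[0]

posterior["A"].update(freeloaders=5, controlled=100)  # one day of data
print(choose_route())  # one of "A", "B", "C"
```

Because the weights are drawn from posteriors rather than point estimates, routes with uncertain \(p_i\) still get controlled occasionally, which is exactly the “never stop exploring” behavior described above.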