5 February 2026

In November 2024, I was unknowingly stuck in a TikTok bubble. Based on my For You Page, I was quite sure Kamala Harris would win the presidential election in the United States. Not because I had really looked into it, but because my algorithm mostly showed me videos that confirmed that picture. It felt convincing, until reality proved otherwise.

Jody Hoenink, scientific staff member

In hindsight, that was just a mistaken expectation, nothing more. But the mechanism behind it is interesting. The algorithm did exactly what it was designed to do: select content based on my behavior and preferences, in order to make as much money as possible by keeping me on TikTok as long as it can (and unfortunately, it succeeds very well at this). The selection was invisible, but it was not neutral.

That experience is not unique. Algorithms are everywhere: in our phones, in online shops, at insurers, and increasingly in government. Government institutions mainly use algorithms to decide how to deploy limited capacity. Who gets extra attention? Which cases deserve priority? As on my phone, these selections are often logical and efficient. But unlike on my phone, the consequences rarely stay limited to a misplaced expectation.

What is efficient is not always fair

At government agencies, selection algorithms can have direct consequences for people. This became visible in the checks on the out-of-home study grant by the Education Executive Agency (DUO). To deploy limited capacity efficiently, an algorithm was used to select students for extra checks, based on the characteristics age, type of education, and distance to the parental home. That selection seemed logical, but for a long time it was not tested for its impact on different groups. The result was that students with a migration background were checked disproportionately often (Algorithm Audit, 2024). Because the system remained largely invisible, these differences could accumulate before they were recognized.
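
To make that mechanism concrete, here is a purely hypothetical sketch: the rules, thresholds, and records below are invented and do not reproduce DUO's actual criteria or data. It only shows how a selection that never uses group membership can still flag one group far more often than another, simply because the characteristics it does use are unevenly distributed across groups.

```python
# Purely hypothetical illustration: a selection rule that never uses group
# membership can still flag groups at very different rates when the input
# characteristics are unevenly distributed across those groups.
# All rules, thresholds, and records below are invented for this sketch.

students = [
    # (age, education_type, km_to_parental_home, group)
    (19, "mbo", 4, "A"),
    (20, "mbo", 6, "A"),
    (21, "hbo", 3, "A"),
    (19, "mbo", 2, "B"),
    (22, "wo", 45, "B"),
    (23, "wo", 60, "B"),
]

def selected_for_extra_check(age, education_type, km_to_parental_home):
    """Invented rule: flag younger students in one track who live close by."""
    return age <= 21 and education_type == "mbo" and km_to_parental_home < 10

check_rates = {}
for group in ("A", "B"):
    members = [s for s in students if s[3] == group]
    flagged = [s for s in members if selected_for_extra_check(*s[:3])]
    check_rates[group] = len(flagged) / len(members)

print(check_rates)  # roughly {'A': 0.67, 'B': 0.33}: unequal rates, no group used
```

The point is only that "the algorithm does not use group membership" is not the same as "the algorithm affects all groups equally".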

Two goals that usually have to coexist in policy collided here: efficiency and fairness. Efficiency does not just mean cost savings, but above all the targeted and timely deployment of scarce capacity: the right attention, at the right time, for the right cases. Set against that is fairness: equal treatment, equal opportunities, and preventing certain groups from being structurally disadvantaged.

You can see this trade-off as a balance, but not as a bargain. The question is not how much fairness we are willing to sacrifice for efficiency, but how we design and test systems so that they meet both requirements at the same time.

Making fairness visible

After scandals and incidents, the emphasis is often on what went wrong. That is understandable, but it is of less help with the question of how to prevent a repeat. This is why we developed the Selectivity Scan: to make the fairness side of the balance explicitly visible. At present, algorithms are not tested thoroughly enough for possible indirect differences between groups. That is not because organizations are unwilling, but because it is often difficult or simply not allowed: under privacy legislation, sensitive personal data such as age or migration background may not be used to test algorithms.

Our publication shows that this problem can be solved. Organizations can have their selection analyzed in the secure microdata environment of Statistics Netherlands (CBS), without themselves having access to the sensitive personal data. An independent party performs the analysis and the organization receives only the results. This makes visible whether certain groups end up in the selection more or less often than in a neutral reference group. Those insights then give organizations the opportunity to critically review their selection criteria or working methods, and to adjust them where necessary.
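
The comparison at the heart of such an analysis can be sketched in a few lines. This is not the actual Selectivity Scan implementation, and the group labels and shares below are invented; it only illustrates the kind of result an organization gets back: for each group, whether it is over- or under-represented in the selection compared with a neutral reference group.

```python
# Minimal sketch of the comparison idea (not the actual Selectivity Scan).
# Each group's share in the selection is compared with its share in a
# neutral reference group; a ratio above 1 means over-representation,
# below 1 means under-representation. All shares below are invented.

reference_shares = {"group_1": 0.50, "group_2": 0.30, "group_3": 0.20}
selection_shares = {"group_1": 0.42, "group_2": 0.38, "group_3": 0.20}

for group, ref_share in reference_shares.items():
    ratio = selection_shares[group] / ref_share
    print(f"{group}: selection {selection_shares[group]:.2f} "
          f"vs reference {ref_share:.2f} (ratio {ratio:.2f})")
```

In the real setting, the sensitive attributes stay inside the CBS microdata environment; the organization only ever sees aggregated results of this kind, never the underlying personal data.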

In one application of the Selectivity Scan, UWV had a selection algorithm tested that supports employees in identifying job seekers who may need extra help with their job applications. The analysis shows that both the algorithm and the employees deviate from a neutral group composition. That is not surprising in itself: some differences are to be expected in any selection process. It turns out, however, that these deviations are smaller for the algorithm than for human selection. The scan does not show whether an algorithm is fair or unfair; it maps how large the differences are. It is then up to the owner of the algorithm to assess whether those differences are acceptable and justifiable.
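
One way to make "the deviations are smaller in the algorithm than in human selection" concrete is to summarize, in a single number, how far each selection's group composition sits from the neutral reference composition. The total variation distance used below is one common choice for such a summary, not necessarily the measure used in the actual analysis, and the shares are again invented.

```python
# Sketch: summarize how far each selection deviates from a neutral reference
# composition, so that the algorithm and human selection can be compared.
# Total variation distance is one possible summary; all shares are invented.

reference = {"group_1": 0.50, "group_2": 0.30, "group_3": 0.20}
algorithm = {"group_1": 0.47, "group_2": 0.33, "group_3": 0.20}
employees = {"group_1": 0.40, "group_2": 0.41, "group_3": 0.19}

def deviation_from_reference(shares, ref):
    """Total variation distance: half the sum of absolute share differences."""
    return 0.5 * sum(abs(shares[g] - ref[g]) for g in ref)

print("algorithm:", round(deviation_from_reference(algorithm, reference), 3))  # 0.03
print("employees:", round(deviation_from_reference(employees, reference), 3))  # 0.11
```

Whether a deviation of a given size is acceptable remains a judgment for the owner of the algorithm; a summary like this only makes the comparison explicit.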

More than one side of the balance

The Selectivity Scan helps make one side of that trade-off visible: fairness. But that is not the end point. An algorithm can be designed more fairly and still contribute little to the goal for which it was deployed. That is why our next project shifts the focus to effectiveness. Does the selection algorithm actually do what it promises? Are citizens better served by it?

Looking back at my TikTok bubble, what strikes me most is how natural that selection felt. Without realizing it, I was shown a coherent but incomplete picture. Amusing, but harmless in my case. In policy it is different: there, the same invisible selection can have direct consequences for people. That calls for a deliberate approach to algorithms: not distrusting or avoiding them, but making their choices visible, weighing them, and adjusting where necessary. Only then can a genuine balance emerge between efficiency and fairness.