16 December 2025
CPB Netherlands Bureau for Economic Policy Analysis (CPB) has developed a new method, the Selectivity Scan, that allows government organizations to test whether their selection algorithms indirectly discriminate between groups, without the organizations themselves accessing sensitive personal data. The scan demonstrates that such testing is both possible and safe. This matters because government algorithms are currently not tested thoroughly enough, so unintended unequal treatment can arise or persist.
More and more government agencies use algorithms to select cases or assess risks. This can increase efficiency but also carries risks, as recently became evident in the checks on the student grant for students living away from home. It is therefore important to verify whether these algorithms disadvantage certain groups. That is often difficult at present, because organizations are not allowed to use sensitive data such as age or migration background. The Selectivity Scan offers a solution: organizations upload their selection to the secure microdata environment of Statistics Netherlands (CBS), where an independent party performs the analysis. The organization only sees the results; sensitive personal data remain fully protected.
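To illustrate this pattern, here is a minimal sketch of how such an analysis could be organized in code. It is an assumption-laden illustration, not the CPB's actual implementation: the function, column names, and data are hypothetical, and the point is only that the organization supplies pseudonymized IDs while all linkage and aggregation happen inside the secure environment.

```python
# Minimal sketch of the privacy-preserving pattern described above; all
# names and data structures are hypothetical, not the CPB's actual code.
import pandas as pd

def selection_composition(selected_ids: list[str],
                          microdata: pd.DataFrame,
                          group_col: str = "migration_background") -> pd.Series:
    """Return the group composition of a selection as shares.

    Intended to run inside the secure microdata environment: the calling
    organization never sees `microdata`, only the aggregated shares that
    this function returns.
    """
    selection = microdata[microdata["person_id"].isin(selected_ids)]
    return selection[group_col].value_counts(normalize=True)

# Hypothetical use inside the secure environment: the organization uploads
# pseudonymized IDs, an independent party runs the analysis, and only the
# aggregate shares leave the environment.
# shares = selection_composition(uploaded_ids, cbs_microdata)
```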
Case Study: UWV Algorithm
As an application of the Selectivity Scan, UWV had one of its selection algorithms, the WW Application Scan, tested by the CPB. This algorithm helps select cases that employees then review to assess whether someone needs extra support with job applications. The analysis shows that the group composition of both the algorithm's selection and the employees' selection differs from that of a neutral reference group: in the reference group, 37% have a migration background, compared with 38% in the algorithm's selection and 43% in the employees' selection. Some differences are to be expected in any selection process, and in this case the differences are smaller for the algorithm than for the employees. The Selectivity Scan shows that differences exist; it is up to the algorithm owner to interpret and justify them.
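As a rough illustration of the kind of comparison involved, the sketch below tests whether a selection's share of people with a migration background deviates from the reference share, using the percentages quoted above. The sample size is assumed, and this plain two-sided z-test is not necessarily the test the Selectivity Scan itself applies.

```python
# Two-sided z-test of a selection share against a fixed reference share,
# using the shares quoted in the article. The sample size n_sel is assumed.
from math import sqrt
from statistics import NormalDist

def share_difference(p_sel: float, n_sel: int, p_ref: float) -> tuple[float, float]:
    """Return the z-statistic and two-sided p-value for H0: p_sel == p_ref."""
    se = sqrt(p_ref * (1 - p_ref) / n_sel)  # standard error under H0
    z = (p_sel - p_ref) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

for label, p_sel in [("algorithm", 0.38), ("employees", 0.43)]:
    z, p = share_difference(p_sel, n_sel=1000, p_ref=0.37)
    print(f"{label}: z = {z:+.2f}, p = {p:.3f}")
```

Under this hypothetical sample size, the employees' deviation would be statistically clear while the algorithm's would not; with a different sample size the outcome can change, which is one reason interpreting the differences is left to the algorithm owner.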
Testing Algorithms Safely
It is important that the government specifies more clearly when selection algorithms may be used and how they should be audited. Current rules, such as the European AI Act and the Dutch Algorithm Framework, focus on procedures but do not yet guarantee that (unintended) unequal treatment is prevented. Only proper testing of algorithms can do that, yet government organizations currently have too few means to test. The Selectivity Scan offers a solution, and the method aligns with the European obligation to have a safe test environment in place by August 2026 at the latest. By making the scan centrally available, algorithms can be assessed better and more safely without organizations themselves gaining access to sensitive personal data.
Downloads
- Selectivity Scan: Safe Testing of Algorithms (PDF, 530.6 kB)
- In-depth Document Selectivity Scan (PDF, 4.86 MB)
If you have questions about this publication, please contact us.
