Hidden Risks of Automated Decision-Making: Is Article 22 Failing Us?

Automated decision-making (ADM) is everywhere: hiding in a warehouse waiting to fulfil your order, deciding whether to extend your overdraft, calculating your car insurance premium, or determining whether you will be connected to an agent[1]. ADM systems are now integral to modern society. As organisations increasingly rely on algorithms to make impactful decisions, concerns about the adequacy of legal protections for data subjects have grown. Article 22[2] is intended to safeguard individuals against solely automated decisions that produce legal effects. Despite this protection, this article argues that Article 22’s ambiguous language, and its lack of clear definitions for terms such as "solely" and "legal effects", leaves it unable to address multi-step decision-making processes. Examining Article 22’s limitations reveals a need for clearer definitions and enhanced regulation to protect individuals’ rights in this rapidly evolving digital era.

Image Credit: DALL·E

To determine whether Article 22 provides sufficient protection against ADM, we need to understand the regulation and the Article’s intention, define ADM, and explore its exceptions. Article 1[3] outlines the regulation’s objective: to protect natural persons’ personal data. Protecting fundamental rights and freedoms is a noble goal—but how does the GDPR safeguard against ADM?

While the GDPR does not define ADM, it does prohibit decisions based solely on automated processing, including profiling, that produce legal effects[4], such as decisions affecting access to employment or financial services. This article argues that the GDPR should tighten the wording of Article 22, ensuring that individuals’ rights are not inadvertently infringed by broad language that may permit decision-making algorithms to make deductions and decisions without adequate human involvement.

As established, Article 22 does not directly address multi-step decision-making processes, which arguably leaves controllers to interpret its broad prohibition when implementing GDPR-compliant processes. Recital 71[5], which provides further insight into the Article, states that a data subject should not be evaluated using automation without human intervention. SCHUFA[6] was the first case in which the CJEU[7] addressed the scope and meaning of Article 22, and the first to consider the application of probability-based decision-making to produce legal effects[8]. SCHUFA found that probability-based decision-making can breach Article 22[9].

 

Decision-making is not always binary; not all choices can be reduced to a simple yes or no, or right or wrong. Instead, final decisions often result from a series of smaller, incremental choices. In SCHUFA, a third-party credit reference agency profiled loan applicants. Although this profiling was found to be preparatory, the CJEU held that the resulting reports carried sufficient weight to be treated as decisions in their own right. The Court reasoned that if they merely formed part of a multi-step decision, a legal lacuna would arise in which the decision-maker could not provide the required information on how the probability value that denied the applicant was reached[10]. Similarly, if a recruiter were to profile and screen candidates before passing them to a prospective employer, this could also be deemed an automated decision without adequate human intervention, potentially breaching Article 22. Multi-step processing makes the required human intervention difficult to place, potentially giving employers, financers, and other decision-makers whose decisions produce legal effects an opportunity to hide behind the Article’s foggy syntax. Though SCHUFA brings preparatory acts within the scope of Article 22, the unambiguous step from the third party’s probability-value risk assessment to the deciding bank does not provide certainty for more complex scenarios, such as an end-to-end internal recruitment process with no third-party involvement.

 

Article 22’s ambiguity has long been debated[11]. Until the CJEU clarified the Article’s status in SCHUFA[12], its wording left controllers unsure whether it imposed a general prohibition or merely allowed data subjects to opt out of ADM processing[13]. Despite the CJEU’s clarification, Article 22’s effectiveness remains uncertain while it continues to be interpreted broadly. Article 22’s exclusions[14] permit ADM in some circumstances: when it is necessary for entering into, or for the performance of, a contract[15]; when it is authorised by Union or Member State law[16] (such as ADM used in fraud and tax investigation cases); and when the data subject has given explicit consent[17]. These exclusions can give the data subject the illusion of control where ADM is used. If a power imbalance exists between the controller and the data subject[18], is consent sufficient to authorise ADM in situations that produce legal effects?


 

Image credit: DALL·E[19]

 

ADM utilised in a recruitment process may pressure an individual to give consent. Regulatory guidance frequently refers to consent, stating that it must be informed[20] and that data controllers must provide information[21] to subjects in clear and plain language[22]. The guidance goes on to state that current or prospective employees are unlikely to be able to give consent freely[23]. Consequently, it is arguable that ADM relying on consent under Article 22 should not be permitted in an employment, or potential employment, situation: this type of relationship does not support free consent, particularly when the decision-making process is ambiguous as to whether decisions in the chain are causally related.


What constitutes clear and intelligible language? Chapter 3[24] outlines the rights of the data subject, including, but not limited to, the requirement for transparency via clear and intelligible information[25] and the requirement that information relating to the data collected be provided to the subject[26]. Where ADM or profiling is used, the decision-making logic, its significance, and its potential consequences should be shared with the subject[27], explaining how the decision was reached. Though data subjects may provide consent upon application, a lack of free consent undermines their autonomy.

 

Consent is not the only issue. The potential for biased algorithms, coupled with the Article’s uncertain wording, means Article 22 is unlikely to sufficiently protect individuals from discrimination, despite the GDPR’s overarching intent[28]. Algorithmic decision-making is a process that uses algorithmically generated knowledge systems to inform or execute decisions[29]. Algorithms vary vastly in complexity, from a single-input IF function to a labyrinthine deep-learning neural network, the latter operating under internal logic that is not fully, or easily, understood by humans; such a system is referred to as a ‘black box’[30].

 




Image credit: DALL·E[31]

 

While the inputs and outputs of a black box may be observed, its operational in-between is cloudy. For example, an employer may use third-party software to pre-screen candidates based on desirable characteristics like ‘hard worker’ or ‘loyal’. The black-box decision-making process behind these terms is unknown, allowing such algorithms to perpetuate existing biases found online[32] and allowing the controller to operate with impunity. A black-box decision may dismiss a candidate with a gap in employment, inferring that they are not a hard worker, while a candidate who frequently changes employer may be reasoned disloyal. Such inferences may disproportionately impact vulnerable or marginalised groups, such as parents or people with disabilities, who may have gaps in employment. If an algorithm’s programming and reasoning are opaque, it is impossible to fully determine whether a data subject has been discriminated against; the controller cannot process the data transparently, as required[33], and the decision-making logic cannot be provided to the data subject[34]. Without further clarification from the European Union as to what constitutes ADM, and any limitations thereof, automated decisions may continue, unknowingly discriminating despite Article 22’s best intentions.
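The kind of opaque pre-screening rule described above can be sketched as a toy scoring function. This is a purely hypothetical illustration: the function name, feature names, and thresholds are invented for this article and do not reflect any real vendor’s software. The point is only that a seemingly neutral rule can silently encode proxies for protected characteristics.

```python
# Hypothetical sketch of an automated pre-screening rule; all feature
# names and thresholds are invented for illustration only.

def prescreen(candidate: dict) -> bool:
    """Return True if the candidate passes the automated pre-screen."""
    score = 0
    # Proxy for 'hard worker': penalise any employment gap,
    # regardless of its cause (parental leave, illness, disability).
    if candidate.get("employment_gap_months", 0) > 6:
        score -= 2
    # Proxy for 'loyal': penalise frequent changes of employer.
    if candidate.get("employers_last_5_years", 1) > 3:
        score -= 1
    # Reward experience.
    score += candidate.get("years_experience", 0) // 5
    return score >= 0

# A parent returning from two years of childcare is rejected on the
# gap alone, though nothing about their ability has been assessed.
returning_parent = {"employment_gap_months": 24,
                    "employers_last_5_years": 1,
                    "years_experience": 8}
print(prescreen(returning_parent))  # False
```

From the outside, an applicant sees only the input (their CV) and the output (a rejection); the weighting inside, which here penalises a childcare gap exactly as it would penalise idleness, is precisely the part the data subject cannot inspect or challenge.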

 

Restrictions on the algorithms used, and clarification of the Article’s terms relating to ADM producing legal effects, may strengthen the protection the Article affords. Arguably, Article 22’s scope was intended only to restrict decisions made at the final step of a decision-making chain[35]. Hintze argues that almost every decision could lead to a legal effect, drawing a necessary distinction between causation and the butterfly effect[36]. What this does not do is apply decision-chain logic to a real-world example, such as the recruitment process. An applicant awaits a decision on whether they get the job, not on whether they are pre-screened and rejected. The employer’s final decision is “who will get the job”, not “who will not”. The employer may point to this final step to show human involvement in deciding who gets the job, rationalising any human involvement anywhere in the decision chain as sufficient. Consequently, any ADM that excludes an individual should be considered a final decision for the purposes of the Article, as with SCHUFA’s third-party credit report[37]. Though an employer may be able to provide further information upon request, tailoring individual applications to different jobs, across multiple positions, makes it less likely that people will challenge decisions made by prospective employers, irrespective of any power imbalance or black box. Limited recourse for change supports the notion that the Article may benefit from further clarification to ensure the appropriate use of ADM, in line with the regulation’s intentions.

 

Article 22 fails to adequately protect data subjects against automated decision-making due to ambiguous language, muddy definitions, and a failure to address multi-step processes. This loophole allows powerful entities to exploit opaque ADM systems, leading to potentially unchecked and unchallenged discrimination. Clearer regulations, stricter enforcement, and technical safeguards are needed to ensure that ADM in practice aligns with the GDPR’s intent—protecting fundamental rights rather than undermining them.

 

[1] European Data Protection Board, Guidelines 05/2020 on Consent (2020) para 21.

[2] Regulation (EU) 2016/679 (General Data Protection Regulation) (GDPR).

[3] GDPR.

[4] GDPR (n3), art 22(1).

[5] GDPR (n3), recital 71.

[6] SCHUFA Holding AG v Hessischer Beauftragter für Datenschutz und Informationsfreiheit (Case C-634/21) ECLI:EU:C:2023:14. (SCHUFA)

[7] Court of Justice of the European Union (CJEU)

[8] SCHUFA (n6) para 40.

[9] Ibid para 73.

[10] Ibid para 23.

[11] Ibid 840–858.

[12] SCHUFA (n6).

[13] Michael Hintze, ‘Automated Individual Decisions to Disclose Personal Data: Why GDPR Article 22 Should Not Apply’ (2020)

[14] GDPR (n3), art 22(2)(a)–(c).

[15] Ibid (a).

[16] Ibid (b).

[17] Ibid (c).

[18] GDPR (n3), recital 43(1).

[19] DALL·E, Legal Protection and GDPR Article 22 (20 January 2025)

[20] European Data Protection Board, 'Guidelines 05/2020 on Consent under Regulation 2016/679' (EDPB) (2020) para 63.

[21] Ibid para 66.

[22] Ibid para 67.

[23] EDPB (n20) para 21.

[24] GDPR (n3), ch 3.

[25] GDPR (n3), art 12.

[26] Ibid, art 13.

[27] Ibid, art 13(2)(f).

[28] GDPR (n3)

[29] Andrew Murray, ‘Information Technology Law: The Law and Society’ (5th edn, Oxford University Press 2023) 87

[30] Ibid, 88

[31] DALL·E, The Black Box Problem in AI (20 January 2025).

[32] Omar Santos and Petar Radanliev, ‘Beyond the Algorithm: AI, Security, Privacy, and Ethics’ (2024) ch 2, ‘Reflecting on the Societal and Ethical Implications of AI Technologies’.

[33] GDPR (n3), art 5(1)(a).

[34] Ibid, art 12.

[35] Hintze (n13).

[36] Ibid 9.

[37] SCHUFA (n6)
