
The Catch-22 you didn't know AI had? A closer look at GDPR Article 22


GDPR Article 22

In 2018, the House of Lords Select Committee on Artificial Intelligence published “AI in the UK: ready, willing and able?”, which acknowledged the growing importance of artificial intelligence to the UK’s economy. Since then, algorithmic decision-making has come to feature in many processes, from ticket-dispensing machines to credit scoring, cancer scans and contract review, to name a few. Importantly, the report highlighted AI’s benefits: it can drive efficiencies, boost productivity and accelerate innovation across a multitude of industries.


The issue is that algorithmic decision-making raises suspicion about the quality and accuracy of the decisions various AI systems produce. The GDPR has dealt with this development through Article 22(1), which addresses the concern directly by stating that “the data subject shall have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning him or her or similarly significantly affects him or her”.


The difficulty with this provision is that it is premised on the algorithm alone subjecting an individual to a decision. However, even in the many instances listed above where algorithmic decision-making has been welcomed, a human decision-maker will in most cases act as an intervener, even if only to green-light the decision made by an algorithm.


Although human intervention means individuals fall outside the scope of legal recourse under Article 22, this article will explore the ways in which human intervention can help cure the biases found in AI and thus act as a safeguard against discriminatory decision-making.


Bias in Artificial Intelligence


The scepticism towards algorithmic decision-making has been woven into the backdrop of the GDPR, with good reason. Algorithmic systems can be complex to comprehend, and as their use increases, the potential for harm grows where their capabilities outstrip human understanding. This was seen recently with Microsoft’s chatbot in the Bing search engine, where the bot strayed from its intended function and confessed its love to a user. When pressed on the reasoning behind this unusual behaviour, its creators could not provide an explanation. The reduction in human control can have harmful impacts, particularly where these systems are built on the premise that they are self-sufficient enough to respond flexibly to requests from users. In a sense, the greater use of algorithmic decision-making can devalue human skills where humans themselves are not able to understand the explanation behind a decision. This becomes more concerning where, as the Bing chatbot highlights, even a data scientist is not able to explain the algorithmic model and its route to a decision. This suggests a lack of transparency.


If we take this further, the creation of a closed system of algorithmic decision-making can trickle down into injustice for a claimant. Pasquale’s (2013) well-known study of credit scoring shows that automated correlations which may be viewed as objective can in fact reflect bias. Through an assessment of credit-loan applications, it became clear that low credit scores can be attached to occupations such as migratory work and low-paying service jobs. Although this correlation may not have been intended to be discriminatory, many of these workers are from racial minority backgrounds, and so relying on these variables will negatively and discriminatorily affect their loan applications.
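
To make this proxy mechanism concrete, the short sketch below is a purely hypothetical illustration (the occupations, records and rule are invented, not drawn from Pasquale’s study): a system that learns only from historical correlations can settle on occupation as the deciding factor, and where occupation correlates with a protected characteristic, the resulting decisions discriminate even though no protected attribute appears anywhere in the data.

```python
# Hypothetical illustration only: the applicants, occupations and historical
# outcomes below are invented, not taken from Pasquale's study or any real dataset.

# Historical lending decisions in which applicants in low-paying service jobs
# were routinely refused, regardless of whether they repaid previous loans.
historical_applications = [
    {"occupation": "office",  "repaid_previous_loan": True,  "approved": True},
    {"occupation": "office",  "repaid_previous_loan": False, "approved": True},
    {"occupation": "service", "repaid_previous_loan": True,  "approved": False},
    {"occupation": "service", "repaid_previous_loan": False, "approved": False},
]

def correlation_learned_rule(application):
    """A rule a purely correlation-driven system could settle on: occupation
    alone predicts every historical outcome perfectly, so the genuinely
    relevant factor (repayment history) is ignored."""
    return application["occupation"] == "office"

# The learned rule reproduces the historical bias on a new, creditworthy applicant.
new_applicant = {"occupation": "service", "repaid_previous_loan": True}
print("Approved?", correlation_learned_rule(new_applicant))  # Approved? False
```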


Certain companies keep this practice a secret: although inputs and outputs can be observed, there is little acknowledgement of how one becomes the other. For claimants, this can make understanding how to build a successful legal claim even more difficult, leaving them to suffer the consequences. Thus companies, and the algorithms that help them do this, create a black box society.


In society, there are some instances in which such secrecy in decision-making practices can be warranted: for example, in closed material procedures, to ensure sensitive information is not disclosed in a particular legal matter. However, as Pasquale highlights, the power of this secrecy can create an imbalance with consumers, where companies have access to the intimate details of potential customers’ lives yet provide little information about their own statistics and procedures. Including an element of human-in-the-loop decision-making may offer greater reassurance to the consumer, in that there is at least some visibility on why a certain decision has been made. It is a shortcoming of the GDPR, though, that Article 22 does not compel companies to provide clarity on their practices through clear transparency requirements, as this would be more effective than relying on a human being able to explain (often unexplainable) automated decision-making.


Benefits of Human Intervention in Decision-Making


To counter the bias described above in algorithmic decision-making, the GDPR’s human-in-the-loop approach is premised on the idea that human intervention should necessarily cure any biases found in a solely algorithmic decision. We can see this in the fact that where a decision has included any measure of human intervention, an individual cannot bring an Article 22 claim.


The main argument against the inclusion of humans in decision-making is human error. Humans are not infallible, and the quality of decision-making can therefore be affected by a human’s capacity to make a good decision on a given day. These are weaknesses of human nature which are absent from algorithmic decision-making, as such decisions are premised on being consistent and logically reasoned.


The issue with algorithmic decision-making, however, is its focus on correlations and patterns without necessarily understanding how these translate into society or what their societal implications may be. A human would not only be able to identify the correlation between different variables but also understand that a bias has been created, whereas an algorithm would observe the link between two variables without recognising the possibility of such a bias. This runs the risk of the pattern becoming systematically ingrained, furthering the risk of individuals being treated in an unfair and discriminatory manner.


That said, though a human is more likely to identify bias within decision-making, it is not necessarily the case that this intervention removes the risk of bias. Algorithms are not the only ones that can be biased. As Pasquale has contended, humans who utilise predictive algorithms can embed their own biases into the software’s instructions, and certain algorithmic systems can mine datasets which contain inaccurate and biased information provided by people. For humans to be able to embed these biases into algorithms, such biases must have existed in the first place. Although it is a good thing that humans can identify the correlation between different variables and so prevent algorithms from making biased decisions, the reverse is that humans who harbour biases will be able to pursue whichever correlation they find most in line with their views. This can create inequality for claimants.


Moreover, where a damaging decision of this kind has been made by an algorithmic system in the absence of human intervention, a responsibility gap is created. As the Bing example above highlights, creators are often not able to explain the reasoning behind a decision made by an algorithm. In this sense, there is a lack of accountability where not even the creators of these algorithmic systems can provide an explanation to an individual who has been subject to a decision. If an explanation did have to be provided, however, there would at the very least be some human oversight, though this would not act as a silver bullet.


An Article 22 Solution


As established above, human intervention in algorithmic decision-making does not necessarily mean an individual will be able to understand the hows and whys behind a decision. There may be limits to demanding that the creators and managers of these algorithmic decision-making systems be able to provide an explanation for each and every decision; however, as these individuals are the most cognisant of how these systems were built, the onus must be placed on them.


An explicit right to an explanation is alluded to by Recital 71, which states that a person who has been subject to automated decision-making should be able, among other things, to “obtain an explanation of the decision reached after such an assessment”; however, recitals are not legally binding. Furthermore, if the drafters had intended this right to an explanation to have legally binding effect, this would have been made clear in Article 22(3), which outlines the safeguards that a data controller must provide.


In the absence of a legally mandated right to an explanation, the GDPR does provide certain safeguards for individuals being subjected to automated decision-making. Firstly, companies must inform an individual that they are being subjected to an automated decision, which gives them the opportunity to opt out at the outset. Secondly, under Article 22, an individual can, after a decision has been made, commence a legal claim where it has been reached unfairly.


However, these safeguards mean little in practice, as Article 22 does little to recognise the value of transparency in decision-making: there is no explicit requirement to give an explanation to an individual who has been subject to this type of decision-making. Without knowing how a decision has been made and what factors have led to it, it will be difficult for an individual to attain the specificity needed to mount a legal claim. Moreover, many companies may not meet GDPR standards, and so individuals may never get the chance to opt out at the outset. Given that these decisions can have significant effects on individuals’ lives, it seems short-sighted that the Article does not address these factors so as to allow an individual to assess the recourse options available to them in the event of automated decision-making.


Importantly, the GDPR must acknowledge that without an adequate mechanism for transparency disclosure to an individual, there is a risk that companies can continue to use algorithmic decision-making to obscure the criteria used to make a decision. As algorithms enhance their own systems through the information they receive from individuals, it is important for the GDPR to acknowledge this imbalance and ensure individuals are party to decisions which have significant effects on their lives. Invoking Article 22 only after the fact of a decision creates greater uncertainty in the day-to-day lives of individuals; it may therefore be more valuable to have this transparency mandated at the outset and embedded within the decision-making process, rather than applied after an unfair or discriminatory decision has been made.


"Companies and algorithms, in helping companies to do this, create a black box society."


Bottom Line


  1. Algorithms can systematically embed biases into decisions, which can create discriminatory outcomes.

  2. Human intervention is valuable where the human intervening is able to provide meaningful explanation to the individual subject to a decision.

  3. Article 22 currently does not provide this recourse and thus transparency should be mandated to ensure fair and accurate decision-making.
