Machine Learning and GDPR Protections

I. INTRODUCTION

The development of machine learning algorithms and their growing use in decision-making processes in the era of Big Data pose significant challenges for consumers and regulators. Such algorithms process information in a discreet manner, are not always intelligible to humans and may be protected by proprietary rights; as a result, they can compromise individuals’ privacy and data protection rights. Automated processing and profiling practices may improve the allocation of resources by allowing private and public parties to personalise their products and make more efficient choices. However, such processing can also be used to exploit consumers’ vulnerabilities and influence their attitudes and choices, which may result in unfair discrimination, financial loss and loss of reputation. This chapter examines the legal mechanisms available in data protection law to safeguard individuals from decisions which result from automated processing and profiling. It considers, in particular, how the regime for automated decision-making under the General Data Protection Regulation (GDPR) balances the interests of consumers and their fundamental rights regarding data protection against the demands of the data-driven industry, such as the development of new products and services based on artificial intelligence and machine learning technologies. It will thus focus on Article 22 GDPR and take a commercial perspective. The chapter analyses three aspects of automated decision-making under the GDPR, namely its context, its definition and its legal regime. Section II examines profiling and automated decision-making and assesses the operation of Article 22 on procedural grounds. Section III evaluates the concept of automated decision-making referred to in Article 22(1) and demonstrates how the WP29 has helped make this concept less formal. Section IV analyses the so-called right to human intervention and asks whether the legitimate interests of controllers play any role as a basis for processing under Article 22. Section V examines the interplay between Article 22 and the information rights under Articles 13(2)(f), 14(2)(g) and 15(1)(h). A conclusion follows in Section VI.

II. AUTOMATED PROCESSING, PROFILING AND AUTOMATED DECISION-MAKING

1. A dynamic process

‘Automated processing’ and ‘profiling’ are separate legal categories. ‘Processing’ is a generic concept, broadly defined as ‘any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means’. The term ‘automatic’ is commonly used to qualify the way in which information is processed, in a structured and non-manual form. ‘Profiling’, on the other hand, is a type of automated processing seeking to categorise individuals. Article 4(4) GDPR defines profiling as ‘any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’. Profiling relies on data mining techniques (i.e. procedures by which large sets of data are analysed for patterns and correlations), which are used to build predictions and anticipate individuals’ needs.

The methods that the GDPR employs to protect individuals operate at different levels. Territorially, the GDPR has extended its reach by applying to processing activities carried out by controllers who are either established in the Union or target consumers in the Union. This is complemented by a comprehensive regime on international transfers of personal data that ensures the export of the European standard of protection abroad. On a substantive and operational level, the GDPR provides a robust regulatory framework for data processing. This comprises general processing principles, detailed data subjects’ rights and controllers’ risk management duties (i.e. data protection impact assessments), including privacy by design and by default requirements (i.e. the adoption of technical and organisational measures to implement data protection obligations). Finally, a system of supervisory authorities, redress mechanisms and liability rules ensures compliance on an enforcement level.

Several classifications have been proposed to explain the stages in which automated processing and profiling usually develop. Although the terminology used to describe each processing stage may vary, most classifications consistently refer to three processes: collection, analysis and application. At the collection stage, the user (i.e. the controller) gathers personal data from a variety of sources, not merely from the data subjects. Massive amounts of personal data are collected from internet resources, mobile devices and apps, through ambient intelligence technologies embedded in everyday objects (e.g. furniture, vehicles and clothes) and from the human body itself (e.g. biometric data).
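By way of illustration, the following minimal Python sketch mines a toy dataset for a correlation and uses it to predict a personal aspect in the sense of Article 4(4). All data, attribute names and thresholds are invented for the purpose of the example:

```python
# Hypothetical sketch of profiling in the Article 4(4) sense: automated
# processing that evaluates a personal aspect (here, payment reliability).
# All data, names and thresholds are invented for illustration.
from statistics import correlation  # requires Python 3.10+

# Attributes collected for past customers (collection stage).
monthly_spend = [420, 310, 980, 150, 760]
missed_payments = [0, 1, 4, 0, 3]

# Data mining: look for a pattern/correlation in the historical data.
r = correlation(monthly_spend, missed_payments)

def predict_reliability(spend: float) -> str:
    """Crude prediction of a personal aspect from a single attribute."""
    # If spend correlates with missed payments, flag high spenders.
    if r > 0.5 and spend > 700:
        return "low reliability"
    return "acceptable reliability"

print(predict_reliability(820))  # profile applied to a new data subject
```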


The value of data is often unknown at collection and can only be ascertained after the data is (re)processed over and over again for different purposes; under the current paradigm, personal data is said to develop at a distance from the individual. In the analytical stage, potent computational frameworks are used to store, combine and analyse large quantities of data in order to generate new information. Data mining increasingly relies on machine learning algorithms to profile individuals. These differ from traditional algorithms in that they feed on vast amounts of data and require few pre-defined rules to operate. Machine learning outputs are not necessarily intelligible to humans and may only allow for a ‘black box’ approach. The application stage follows next. This is where controllers implement specific outcomes resulting from automated processing, including profiling, and make decisions based on them (e.g. they apply a score, a recommendation, a trend, etc.). One of two possibilities arises depending on whether the controller implements the algorithm’s output straightforwardly or relies on human analysts to make a decision. The first type of automated decision-making is referred to as solely automated decision-making and falls within the scope of Article 22, whereas the latter is excluded from this provision.
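The three stages, and the dividing line that Article 22 draws at the application stage, can be summarised schematically. The sketch below is a simplified illustration; all function names and values are hypothetical:

```python
# Schematic of the three processing stages and of the dividing line that
# Article 22 draws at the application stage. All names are hypothetical.

def collect(sources: list[dict]) -> list[dict]:
    """Collection: gather personal data from a variety of sources."""
    return [record for source in sources for record in source["records"]]

def analyse(records: list[dict]) -> float:
    """Analysis: derive a score/profile from the collected data."""
    return sum(r["risk_points"] for r in records) / max(len(records), 1)

def apply_output(score: float, human_review: bool) -> str:
    """Application: implement the output, with or without a human analyst."""
    decision = "reject" if score > 0.5 else "accept"
    if human_review:
        # A human analyst revises the output and makes the decision:
        # not 'solely automated', hence outside Article 22.
        return f"analyst reviews score {score:.2f} before deciding"
    # Output applied straightforwardly: solely automated, within Article 22.
    return decision

records = collect([{"records": [{"risk_points": 0.8}, {"risk_points": 0.4}]}])
print(apply_output(analyse(records), human_review=False))  # "reject"
```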


2. The procedural design of Article 22

Article 22 GDPR is intended to protect individuals against solely automated decisions. Two aspects of the procedural design of Article 22 deserve attention. On the one hand, this provision is intended to be applied at the last stage of processing only (i.e. application). Like the Data Protection Directive, the way in which the GDPR delivers protection is based on the idea of single and static processing operations to which the rights of the data subjects attach. This approach contrasts with the dynamic nature of automated processing and profiling, as described above. On the other hand, as to Article 22’s visibility, it is also relevant to notice that this provision is codified last on the list of data subjects’ rights (i.e. Articles 12-22). The procedural design of Article 22 GDPR mirrors that of Article 15 DPD, as Article 15 could also only be applied at the final stage of processing (i.e. application) and was codified last. This may have facilitated the exercise of the right not to be subjected to automated decision-making under the Data Protection Directive, or at least helped data subjects become more aware of it. However, whether this design is well suited to the GDPR can be questioned. Firstly, the long list of rights that precede Article 22, including significant new additions (e.g. portability, erasure and restriction of processing), may reduce Article 22’s visibility for the average data subject. Secondly, the processing phases discussed above are not necessarily linear, as data processing increasingly occurs in real time. If automated decision-making is now the rule rather than the exception, enhancing the visibility of the provision(s) intended to regulate it seems reasonable in legislative policy terms. Recent international proposals appear to follow this approach. The proposed draft modernised Convention 108 (Council of Europe) codifies the right not to be subjected to solely automated decision-making first. Under the GDPR, however, the choice to define solely automated decision-making in Article 22(1) in negative terms (i.e. as the right not to be subjected to solely automated decisions) has caused significant interpretative difficulties, as discussed below.


III. WHICH DECISIONS?

1. Classification

Different types of decisions derived from automated processing, including profiling, can be distinguished. The nature of the agent making the decision represents an obvious first criterion. This serves to distinguish human-based decisions from machine-based decisions. Under this classification, automated decisions are typically equated with machine-based decisions. An alternative approach to the same criterion focuses on the means by which information and data are processed, to distinguish automated from non-automated decisions. This differs from the previous case in that the nature of the agent involved in making the decision is not conclusive as regards the ‘automated’ character of the decision. Since most automated decisions happen to be machine-based, it could be argued that this approach is of little practical relevance. However, in an increasingly sophisticated processing context, a definition of automated decision-making that does not strictly rely on the absence of human elements presents some advantages, as discussed below. Automated decision-making can also be classified depending on the recipient, be it an individual or a group. An individual decision is directed towards a specific individual (e.g. who is offered personalised interest rates), whereas a group decision relates to a group of individuals sharing common attributes (e.g. consumers aged 20-29 or those living in a certain neighbourhood). The effects of a decision do not represent constitutive elements of the notion of a ‘decision’, for a decision exists regardless of its effects. Lawmakers, however, may take them into consideration as qualifying requirements of the applicable regime. From this point of view, the effects of a decision can be considered qualitatively, if decisions are required to impact upon their recipients in certain ways. Effects can also be considered quantitatively, either by reference to an individual (e.g. who is admitted to a school or university) or by reference to a group (e.g. whose members are offered insurance at higher premiums).
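These classification criteria (actor, recipient and effects) can be modelled schematically. The sketch below is an illustrative simplification of the applicability test, not a statement of the legal position; the type names and the predicate are invented:

```python
# Illustrative model of the classification criteria: actor, recipient,
# effects. The enum values and the predicate are simplifications.
from dataclasses import dataclass
from enum import Enum

class Actor(Enum):
    MACHINE = "machine-based"
    HUMAN = "human-based"

class Recipient(Enum):
    INDIVIDUAL = "individual"
    GROUP = "group"

@dataclass
class Decision:
    actor: Actor
    recipient: Recipient
    legal_effect: bool          # qualitative: impacts rights/legal status
    significant_effect: bool    # contextual 'similarly significant' effect

def within_article_22(d: Decision) -> bool:
    """Rough sketch of Article 22(1) applicability, not legal advice."""
    return (
        d.actor is Actor.MACHINE                 # solely automated
        and d.recipient is Recipient.INDIVIDUAL  # individual decision
        and (d.legal_effect or d.significant_effect)
    )

loan_refusal = Decision(Actor.MACHINE, Recipient.INDIVIDUAL, False, True)
print(within_article_22(loan_refusal))  # True
```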

2. Analysis

(a) Actor

Article 22(1) states that the relevant decision has to be ‘based solely on automated processing, including profiling’. Two interpretations of automated decisions under Article 22(1) are possible. First, a strict interpretation excludes the application of this provision if the automated decision-making process has involved any form of human participation. This focuses on the nature of the agent making the decision, under the first criterion above. By contrast, the notion of automated decision-making referred to in Article 22(1) can also be defined by reference to the means of processing (the second approach). For the purpose of the definition of solely automated decisions under Article 22(1), this implies that human involvement in the decision-making process does not mechanically exclude the application of this provision. Under this second interpretation, the key question is not whether a specific decision can be categorised as human- or machine-based but whether it retains its automated nature in case of human involvement. In order to identify which type of human participation deprives a decision of its automated nature, Bygrave’s requirement of real and influential human participation can be used to set the relevant threshold. According to this author, mere nominal involvement of human actors in the decision-making process (i.e. participation lacking any real influence on the outcome) must not prevent the application of Article 22. In a context in which controllers increasingly rely on automated systems of evaluation and profiling, permitting Article 22(1) to capture truly automated decisions despite nominal human involvement is to be welcomed. This interpretation can also be justified on teleological grounds: if the rationale of the regime for solely automated decisions is the need to preserve some degree of human autonomy in decision-making, decision-making processes involving nominal human participation present the same risks as those lacking human involvement. Furthermore, a strict interpretation of Article 22(1) creates an incentive for controllers to have human actors implement routine procedures in order to prevent the application of the protective regime for automated decision-making.

To conclude, three types of decisions resulting from automated processing and profiling may arise: (i) decisions where the automated output applies straightforwardly; (ii) automated decisions with nominal human involvement, i.e. where a human actor intervenes in the application of the automated output without revising or assessing it; and (iii) human-based decisions, i.e. where a human analyst revises the automated output and makes a decision. The proposed interpretation of solely automated decision-making in Article 22(1) includes cases (i) and (ii), whereas the strict interpretation is limited to case (i). It is therefore a positive development that the WP29 has confirmed the less formal interpretation of automated decision-making under Article 22(1) in its Guidelines on automated decision-making and profiling. Convincingly, the Guidelines read: ‘[T]he controller cannot avoid the Article 22 provisions by fabricating human involvement. For example, if someone routinely applies automatically generated profiles to individuals without any actual influence on the result, this would still be a decision based solely on automated processing’.
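The WP29’s point about fabricated human involvement can be restated schematically: if the human step cannot influence the outcome, the decision remains solely automated. The following sketch is a hypothetical illustration of that threshold, with all names and values invented:

```python
# Hypothetical illustration of WP29's point on fabricated human involvement:
# a routine 'review' step with no real influence on the outcome does not
# deprive the decision of its solely automated character.

def automated_output(score: float) -> str:
    return "reject" if score > 0.5 else "accept"

def rubber_stamp(decision: str) -> str:
    """Nominal involvement: the reviewer always applies the output as-is."""
    return decision  # no revision, no assessment, no power to override

def genuine_review(decision: str, reviewer_assessment: str) -> str:
    """Real and influential participation: the analyst can override."""
    return reviewer_assessment or decision

# Case (ii): nominal involvement -- still solely automated under Article 22.
print(rubber_stamp(automated_output(0.7)))               # "reject"
# Case (iii): human-based decision -- outside Article 22.
print(genuine_review(automated_output(0.7), "accept"))   # "accept"
```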

(b) Recipient

Article 22 makes it clear that the regime for solely automated decisions applies to ‘individual’ decisions only. Moreover, the protection granted in Article 22 operates regardless of whether the data subject plays an active role in requesting the decision (e.g. the data subject applies for a loan) or whether a decision is made about him or her (e.g. the data subject is excluded from an internal promotion within an organisation). Article 22(1) also stipulates that automated decision-making targets decisions on ‘data subjects’ rather than natural persons. The explicit reference to the data subject in paragraph (1) implies that Article 22 is intended to apply to a decision resulting from the processing of personal data of an identified or identifiable person. This creates uncertainty as to whether the regime for solely automated decision-making under Article 22 applies to individual decisions on data subjects based on the processing of anonymised data. The Guidelines do not explicitly address this point. What the WP29 does confirm, however, is that children’s personal data are not completely excluded from automated decision-making under Article 22(1). The WP29 does not consider that Recital 71 constitutes an absolute prohibition on solely automated decision-making in relation to children. This is an important clarification which reconciles the complete ban in Recital 71 with the silence in the main text of the GDPR. The WP29 states that controllers should not rely on the exceptions in Article 22(2) to justify solely automated decision-making in relation to children (i.e. contractual necessity, authorisation by law or the data subject’s explicit consent), unless it is ‘necessary’ for them to do so, ‘for example to protect [children’s] welfare’. Although this language may require further clarification, the references to Recitals 71 and 38 and the view taken on Article 22 clearly suggest that the WP29 is advocating a restrictive system of solely automated decision-making in relation to children. This is further confirmed when the WP29 goes on to state that controllers processing children’s data under Article 22 must provide suitable safeguards, as required under Article 22(2)(b) and in the cases covered by Article 22(2)(a) and (c).

(c) Effects

Automated decisions under Article 22(1) are required to produce ‘legal effects’ or to ‘similarly significantly affect’ the recipient. Since decisions producing ‘legal effects’ on data subjects impact on their legal rights or legal status, they are more easily objectified; examples include decisions granting or denying social benefits guaranteed by law or decisions on a person’s immigration status when entering a country. However, in the absence of objective standards, the meaning of the phrase ‘similarly significantly affects him or her’ remains contextual and subjective; as indicated in Recital 71, typical examples include the automatic refusal of credit applications and automatic e-recruitment practices. The WP29 has stated that the effects of the processing must be ‘sufficiently great or important to be worthy of attention’. It has also noted that the permanence or temporality of the consequences can be taken into account as a factual matter to evaluate the severity of the effect. Targeted advertising does not ordinarily produce decisions which ‘similarly significantly’ affect individuals (e.g. banners automatically adjusting their content to the user’s browsing preferences, personalised recommendations and updates on available products). Some scholars, however, prefer not to exclude the application of Article 22(1) to targeted advertising practices that systematically and repeatedly discriminate. Importantly, the WP29’s Guidelines on automated decision-making have confirmed this approach. The WP29 lists some particular circumstances which may increase the likelihood of targeted advertising being caught by Article 22. Within the category of vulnerable adults, in particular, the WP29 considers the situation of individuals in financial difficulties. It uses the example of individuals who incur further debt as a result of being systematically targeted for on-line gambling services. Regardless of how such circumstances may be interpreted in the context of a specific dispute, these are positive developments which could assist in raising awareness of the special needs of protection of vulnerable individuals. Noticeably, the ICO has also suggested some specific criteria to assess whether a solely automated decision has a similarly significant effect on a child.

There is a collective dimension to individual automated decision-making. In the insurance sector, for example, big data applications may be used to benefit policyholders who represent a lower risk compared to the average (by offering them discounts), whereas people belonging to a high-risk group may find that they have to pay higher premiums or are not offered insurance at all. An increasing number of scholars advocate the inclusion of a collective dimension to profiling and automated decision-making under data protection law. It has to be noted that the GDPR has introduced a new provision that facilitates the protection of collective interests through the action of representative bodies (i.e. Article 80). However, as to the automated decision-making regime, Article 22(1) makes it explicit that what triggers the applicability of this provision are the effects of the decision on the data subject (‘[…] which produces legal effects concerning him or her or similarly significantly affects him or her’).

Solely automated decision-making under Article 22 is therefore not concerned with the way in which automated decision-making could affect a group of people. This implies that members of a group can only be granted the protection in Article 22 if they claim the application of this provision as individual data subjects. In other words, under Article 22’s current framework, it would appear that the only way to take collective elements of the decision into account is factual, i.e. as de facto components of the ‘similarly significantly’ individual effects test; for example, when the adverse consequences of a decision on the data subject can be highlighted through a reference to a group to which the data subject has been ascribed.

3. Summary

Compared to Article 15 DPD, the interpretation of automated decision-making in Article 22 GDPR, including the WP29’s Guidelines, is more sophisticated and less formal on most levels. Firstly, nominal human involvement in solely automated decision-making does not mechanically exclude the protective regime in Article 22. As discussed above, in a context in which controllers increasingly rely on automated systems of evaluation and profiling to manage their consumer data, permitting Article 22(1) to capture decisions that are truly automated, despite the nominal involvement of human actors, is to be welcomed. Secondly, although automated decision-making under Article 22(1) applies to all data subjects, the WP29 has acknowledged the special needs of protection of vulnerable adults and children and the necessity of adopting appropriate safeguards addressing such vulnerabilities. These are positive policy choices that enhance legal certainty and ensure higher levels of protection.

IV. THE RIGHT TO HUMAN INTERVENTION AND ARTICLE 22

The rationale behind the regime for automated decision-making under Article 22 is linked to the right to human intervention. Some scholars approach this right from the perspective of the intrusiveness of machine decisions and the need to preserve the autonomy of human intervention in decision-making. There is also a more pragmatic understanding of this right, which places an emphasis on the individual’s right to contest an automated decision. The variety of meanings attached to the right to human intervention results from Article 22’s language and structure.

1. Prohibition

Article 22(1) can be interpreted as a prohibition. Paragraph (1) is worded negatively, as it refers to the right of the data subject ‘not to be subject to […]’. This corresponds to a negative obligation for the controller (not to subject data subjects to solely automated decisions). As a prohibition, Article 22(1) bans solely automated decision-making categorically, unless one of the derogations in paragraph (2) applies (i.e. the data subject’s explicit consent, or where the decision is necessary for entering into or performing a contract or is authorised by law). Under this approach, the law sets a standard whereby the interests of data subjects in not being subjected to automated decision-making override the interests of controllers in engaging in it. The resulting regime is both rigid and strict: rigid, because the legal standard is fixed and allows no room for balancing competing interests (i.e. the ground of processing based on the legitimate interests of the controller plays no role); and strict, because the chosen legal standard ensures a high level of protection for individuals by default (i.e. solely automated decision-making is unlawful, unless one of the derogations in paragraph (2) applies). Thus, when Article 22(1) is interpreted as a prohibition, ‘human intervention’ contributes to preserving human autonomy by becoming a constitutive element of the decision-making process. In this case, it can be said that the right to human intervention protects the interests of individuals ex ante, as an essential element of the decision-making process.

2. Right

Article 22(1) can also be interpreted as granting data subjects the right not to be subject to automated decision-making. Under this interpretation, the interests of controllers and data subjects are on an equal footing unless the data subject objects to automated decision-making. If the latter lodges an objection, the right not to be subjected to solely automated decision-making prevails. Compared to Section 1 (i.e. Article 22(1) as a prohibition), this interpretation is also rigid but less strict. It is rigid because no competing interests are balanced against each other (i.e. the law tolerates solely automated decisions based on the legitimate interests of controllers, unless the data subject lodges an objection). If the data subject objects, solely automated decision-making is prohibited. It is less strict, however, because the protection relies entirely on the data subject, who has to actively exercise the right not to be subject to solely automated decision-making. Overall, this interpretation is more beneficial to controllers than the previous one. Under this approach, the right to human intervention may operate in one of two ways. Before any decision is formulated, Article 22(1) can be relied upon pre-emptively to avoid solely automated decision-making. In this case, the right to human intervention would reach the decision-making process ex ante, as in Section 1. On the other hand, if the data subject objects to a solely automated decision already taken, the right to human intervention would apply ex post, as a safeguard for fair processing.


3. Derogations

Article 22(2) on automated decision-making admits one interpretation only. According to this provision, controllers’ interests in carrying out solely automated decision-making based on the explicit consent of the data subject (Article 22(2)(c)), contractual necessity (Article 22(2)(a)) or authorisation by law (Article 22(2)(b)) prevail over the data subject’s right not to be subject to solely automated decision-making. The rule in Article 22(2) is most beneficial to private controllers. Although data protection authorities may have interpreted the contractual necessity ground narrowly, this ground does not require the data subject to provide consent to the processing. Turning to consent, the GDPR requires it to be ‘explicit’. The WP29 has stated that an obvious way to comply with this is to obtain a written statement signed by the data subject. The WP29 has also clarified that, in the digital context, this requirement can be satisfied by the data subject filling in an electronic form, sending an email, uploading a scanned document carrying his or her signature or using an electronic signature. It is noteworthy that the rule in Article 22(2), although striking the balance in favour of controllers (who can engage in solely automated decision-making under certain conditions), is formulated in the same rigid terms as the rules discussed in Sections 1 and 2 (i.e. which interpret Article 22(1) as a prohibition and as a right, respectively). The legislator sets a fixed standard according to which, if the controller demonstrates that explicit consent or contractual necessity exists (or the decision is authorised by law), the processing is lawful. As regards the right to human intervention, here it materialises in Article 22(3) GDPR as a safeguard and operates ex post only.

4. WP29’s Guidelines

The WP29 endorses the interpretation of Article 22(1) as a ‘general prohibition’. As discussed above, this implies that the regime for solely automated decisions is rigid and strict, i.e. such decision-making is categorically prohibited unless the controller demonstrates the data subject’s explicit consent or contractual necessity (or the processing is authorised by law). This interpretation prevents the legitimate interests of controllers from playing any role as a legal basis for solely automated decision-making. The WP29’s interpretation also supports an understanding of the right to human intervention as a right operating ex ante under Article 22(1), i.e. as an essential element of decision-making, and also ex post, in Article 22(3), as a safeguard for fair processing. By taking the view that Article 22(1) contains a general prohibition, the WP29 ensures that data subjects are afforded a high level of protection by narrowing down the scope of solely automated decision-making. This has disappointed industry representatives, who advocate the application of controllers’ legitimate interests as a valid ground for processing in automated decision-making. They claim, in particular, that limiting solely automated decision-making to consent and contractual necessity is dysfunctional in sectors where controllers are required to make a large number of decisions. It is true that the position adopted by the WP29 excludes controllers’ legitimate interests as a basis for processing. It is difficult to see, however, how the WP29 could have introduced alternative and more flexible standards under Article 22’s current framework. As demonstrated above, all three possible formulations of Article 22(1) (i.e. as a prohibition, as a right or within the context of the derogations) rely on rigid and fixed legal standards. This implies that, under the current regulatory framework, the legitimate interests ground of processing is unavailable for solely automated decision-making under Article 22. Nothing prevents controllers, however, from relying on this ground of processing in a decision-making context that is not solely automated, i.e. outside Article 22.


V. THE RIGHT TO AN EXPLANATION AND ARTICLE 22

Articles 13(2)(f) and 14(2)(g), on notification duties, on the one hand, and Article 15(1)(h), on the right of access, on the other, impose information obligations on controllers engaging in automated decision-making. Under these provisions, controllers have to inform the data subject about the ‘existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject’. These provisions play an important role in ensuring data subjects’ effective protection in solely automated decision-making processes. Knowing of the existence of automated decision-making referred to in Article 22(1) allows data subjects to scrutinise the lawfulness of the processing. This is particularly important in cases of contractual necessity, where consent is not required, and of processing under Article 22(4). Providing meaningful information on the logic involved and on the significance and consequences of such processing is also an essential requirement for the accountability and transparency of algorithms. Articles 13(2)(f), 14(2)(g) and 15(1)(h) impose duties on controllers carrying out ‘automated decision-making including profiling, referred to in Article 22(1) and (4) and, at least in those cases […]’. Therefore, the relationship between Article 22 and Articles 13(2)(f), 14(2)(g) and 15(1)(h) is determined by the phrase ‘at least in those cases’. The reference to ‘Article 22(4)’ in the former set of provisions is easy to interpret, as it clearly points to automated decision-making involving special categories of data under Article 22(4). The reference to Article 22(1) is, however, more problematic, as it admits two interpretations. This reference can be understood to refer exclusively to the general prohibition in Article 22(1), or it can be interpreted as referring to Article 22(1) as a system of general prohibition and derogations. The first interpretation is problematic as it prevents the application of the information rights in Articles 13(2)(f), 14(2)(g) and 15(1)(h) to automated decision-making processes based on contractual necessity or consent. Under this approach, automated decision-making under Article 22(2)(a) and (c) would only receive the protection afforded by the safeguards in Article 22(3), which are not intended to regulate information rights. This interpretation seems untenable and can be challenged on teleological grounds: depriving data subjects of information rights would likely compromise their fundamental right to an effective remedy under Article 47 of the Charter and Article 6 of the European Convention on Human Rights.

The second interpretation relies on systemic grounds, according to which the reference to ‘Article 22(1)’ in Articles 13(2)(f), 14(2)(g) and 15(1)(h) can be understood to refer to the general prohibition in Article 22(1), including the derogations in Article 22(2), i.e. contractual necessity in point (a) and consent in point (c). Unsurprisingly, this is the interpretation generally followed in practice. None of the relevant stakeholders questions the applicability of information rights to automated decision-making carried out in the context of Article 22. The interpretation of the phrase ‘meaningful information about the logic involved, as well as the significance and the envisaged consequences’ in Articles 13(2)(f), 14(2)(g) and 15(1)(h) has been controversial in the academic literature. Wachter et al have taken the view that the GDPR does not provide for a right to an explanation of how specific automated decisions on an individual are made, but rather a more limited right to be informed about the general functionality of an automated decision-making process (i.e. the logic, significance, envisaged consequences and general functionality of the automated decision-making system). They claim, in particular, that Article 15 GDPR on the right of access does not require controllers to provide information which would fully explain the rationale and circumstances of a particular decision. By contrast, Selbst et al suggest that providing data subjects with ‘meaningful information’ does not always require providing information on a specific decision. They argue that, in many systems, a complete system-level explanation provides all the relevant information needed to understand specific cases. Malgieri et al, in turn, propose a legibility test to ensure that data controllers provide meaningful information about the architecture and the implementation of the decision-making algorithm. They argue that such a test would help meet data subjects’ information needs, whilst allowing controllers to identify potential machine bias (i.e. limiting their possible exposure to liability). Rather than focusing on transparency obligations from a conventional perspective, these scholars underline the important role that controllers’ accountability is to play within the framework of the GDPR; the tension between transparency and accountability has become an important topic in data protection law. From this perspective, they claim, in particular, that Articles 13(2)(f), 14(2)(g) and 15(1)(h) state a duty to audit decision-making algorithms. The WP29 has confirmed that requesting information on the significance and envisaged consequences of solely automated decision-making under Article 15(1)(h) does not provide the data subject with extended information compared to Articles 13(2)(f) and 14(2)(g). This follows from the wording of the revised Guidelines on automated decision-making and profiling, which state that Article 15(1)(h) obliges the controller to provide information ‘about the envisaged consequences of the processing, rather than an explanation of a particular decision’. This is likely to help controllers standardise the information they provide on the logic, significance and envisaged consequences of their automated decision-making systems, which is beneficial to their interests, in particular when they manage large numbers of solely automated decisions.
The WP29 also clarifies that the controller has to provide general information to the data subject on the rationale behind, and the factors relied upon in, reaching the decision, including their aggregate weighting. The WP29 goes on to state that this information does not require controllers to disclose the ‘full algorithm’, which helps them to meet their legal obligations towards third parties (e.g. trade secrets, intellectual property). Two statements in the WP29’s revised Guidelines are particularly relevant: that the information provided has to be ‘sufficiently comprehensible for the data subject to understand the reasons for the decision’ and that it has to be ‘useful for [the data subject] to challenge the decision’. Although the terms ‘sufficiently comprehensible’ and ‘useful’ are difficult to objectify and may require further interpretative guidance (by the Board or by the ECJ in the context of a specific dispute), these statements indicate that the relevant threshold is to be determined by reference to the data subject (rather than to the controller). Moreover, these two statements can also be interpreted as supporting a purposive approach to controllers’ duties under Articles 13(2)(f), 14(2)(g) and 15(1)(h), according to which the key question is whether the information provided enables an average data subject to understand the ‘how’ of the decision and the ‘why’ of its effects on him or her, so that the data subject can exercise his or her rights under Article 22(3), e.g. express his or her views and challenge the decision.
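What general information on the factors relied upon and their aggregate weighting might look like can be sketched for a simple linear scoring model. The factor names, weights and threshold below are invented, and real systems (and the corresponding disclosure duties) are considerably more complex:

```python
# Sketch of 'meaningful information about the logic involved' for a simple
# linear credit-scoring model: the controller discloses the factors and
# their aggregate weighting, not the full algorithm. Names/weights invented.
WEIGHTS = {"payment_history": 0.5, "income_stability": 0.3, "debt_ratio": 0.2}

def score(applicant: dict[str, float]) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explanation(applicant: dict[str, float], threshold: float = 0.6) -> str:
    """Per-decision notice: factors, their weight, and the outcome."""
    s = score(applicant)
    outcome = "approved" if s >= threshold else "refused"
    factors = ", ".join(f"{f} (weight {w:.0%})" for f, w in WEIGHTS.items())
    return (f"Your application was {outcome} (score {s:.2f}, threshold "
            f"{threshold}). Factors considered: {factors}.")

print(explanation({"payment_history": 0.4,
                   "income_stability": 0.9,
                   "debt_ratio": 0.5}))
```

A notice of this kind aims to be ‘sufficiently comprehensible’ and ‘useful’ to the data subject without disclosing the full algorithm, in the sense suggested by the revised Guidelines.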

A final question concerns the nature of the protection afforded by the solely automated decision-making regime: whether it is special or qualified. As discussed above, this regime is primarily contained in Article 22, including the safeguards in Article 22(2)(b) and (3), and may be complemented by the information rights under Articles 13(2)(f), 14(2)(g) and 15(1)(h), as well as by controllers’ risk management duties under Article 35(3)(a). Rules that set common standards of protection are ordinarily classified as general rules, whereas rules that do not fall within this category may operate as special provisions, if they override general rules, or as qualified rules, if they offer additional safeguards to those in the general framework. This distinction is important because special rules displace, in principle, the otherwise applicable general rules, whereas qualified rules apply cumulatively. Under the GDPR, the regime for the categories of data regulated under Article 9 is meant to be special, although this has not prevented the WP29 from supporting the cumulative application of the common grounds for processing to the special categories of data. Article 22(1) on solely automated decision-making is often referred to as a qualified provision. It is difficult to categorise the protection afforded by Article 22 as special. Nothing in the GDPR suggests that this was the intention of the legislator. Moreover, there is no such thing as a ‘general’ regime for automated decisions outside Article 22. The GDPR does not specifically regulate automated decisions outside Article 22, i.e. decisions not meeting the requirements in paragraph (1). Like any other processing activity on personal data, automated decision-making not meeting the requirements in Article 22(1) will have to comply with the principles and rules of the GDPR. This, however, has not prevented the WP29 from blurring the limits between these two types of automated decision-making processes: by requiring controllers to comply with risk management duties under Article 35(3)(a); and by proposing the application of notification rights under Articles 13(2)(f) and 14(2)(g) in cases of automated decision-making outside Article 22. Despite this, the boundaries between automated decision-making under Article 22 (solely automated) and outside this provision (non-solely automated) remain sufficiently clear.
