We saw in part 17 of the series that voice clones such as ‘Fake Drake’ raise copyright issues, although the legal situation differs between the US and the EU. However, deep fakes go far beyond copyright and can affect a person’s privacy and thus their general right of personality. In this part of the series on AI in the music industry, we will discuss how this can happen and why the EU’s General Data Protection Regulation is relevant in this context.

AI in the Music Industry – Part 18: Deep Fakes and Data Protection

Personality rights enjoy constitutional protection in many countries. A good example is the Federal Republic of Germany, whose Grundgesetz guarantees the protection of human dignity (Article 1(1) GG) and the free development of every person (Article 2(1) GG).[1] The personality is thus protected by the Grundgesetz in all its manifestations, such as a person’s life image, honour, privacy, portrait, non-publicly spoken and written word and even voice.[2] On this basis, legal action could be taken against the misuse of a voice in a deep fake.

Germany’s example highlights that the human voice is protected by existing laws as a personal right and that misuse by third parties can be successfully combated legally. Another starting point is the EU’s General Data Protection Regulation (GDPR).[3] It applies whenever personal data are processed wholly or partly by automated means, or by non-automated means where the data form part of a filing system (Art. 2(1) GDPR). Personal data is any information relating to an identified or identifiable natural person (Art. 4(1) GDPR). Any processing of personal data requires a legal basis, such as a contract, the data subject’s consent or a legal provision (Art. 6(1) GDPR). In addition, data may only be processed for specified, legitimate purposes, and only data relevant to those purposes may be processed (Art. 5(1)(b) and (c) GDPR). These requirements must be implemented by all companies that operate in the EU and process personal data. However, they do not have to be based in the EU; it is sufficient that they process the data of persons in the EU (Art. 3(2) GDPR).

So, if a voice deep fake of an EU citizen is created, the GDPR could come into play. Voices in digital form are undoubtedly personal data as defined by the regulation, and training an AI on large amounts of such data falls within the scope of the GDPR in any case. If an artist has not expressly consented to the processing of their personal data, and consent is unlikely to have been given for a deep fake, they could successfully object to the processing (Art. 21 GDPR), request the erasure of the processed data (Art. 17 GDPR) or restrict the use of the data (Art. 18 GDPR). In addition, the artist would have a claim for damages (Art. 82(1) GDPR) against the data controller. The controller is considered the “master of the data” and is the company that uses the AI programme (Art. 24(1) GDPR), e.g. to create pieces of music. If another company is contracted to process the data, for example to train the AI, it is a data processor (Art. 28 GDPR), which can be held liable in the same way as the controller. Interestingly, the manufacturer of the AI is generally not held liable, as the EU did not want to hinder technological development with the regulation.

But there is another side to the coin. As we have seen, the GDPR offers protection against deep fakes, but it overshoots the mark when it comes to AI applications. A particular problem is posed by the principle of data minimisation, according to which data processing must be limited to what is necessary (Art. 5(1)(c) GDPR). In the case of deep learning with huge data sets, this is impossible from the outset, as Tina Gausling points out: “The quality of AI generation currently depends largely on the amount of training data available. In addition, at the beginning of the development phase, it is not always clear for which purposes the AI may be used in the future beyond those originally intended. This means that AI development is diametrically opposed to the core principles of the GDPR, including data minimisation and purpose limitation [translation by the author].”[4]

Kai-Fu Lee, co-chair of the Artificial Intelligence Council at the World Economic Forum and author of a book on artificial intelligence, agrees. He considers the objectives of the EU’s GDPR – transparency, accountability and confidentiality – to be extremely noble, but sees them as counterproductive given the many possible uses of AI: the individual purpose of the data collected by an AI is difficult to narrow down, and not all purposes of use can be known at the outset of data collection. If an AI were prohibited from collecting and processing personal data, it would quickly become useless.[5]

What is still missing, therefore, is a legal framework that meets data protection requirements without hampering the technological development of AI to the extent that the EU falls behind the US, China or South Korea. The EU’s AI Act should certainly take this into account.


Endnotes

[1] Federal Republic of Germany, Grundgesetz der Bundesrepublik Deutschland of 23 May 1949.

[2] See Alexander Peukert, 2023, Urheberrecht und verwandte Schutzrechte. Ein Studienbuch, 19th edition, Munich: Verlag C.H. Beck, pp 107-108.

[3] The full title is: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

[4] Tina Gausling, 2020, “KI und DS-GVO im Spannungsverhältnis”, in: Johannes Graf Ballestrem et al. (ed.), Künstliche Intelligenz, Rechtsgrundlagen und Strategien in der Praxis, Wiesbaden: Springer VS, p 11.

[5] Kai-Fu Lee and Qiufan Chen, 2021, AI 2041: Ten Visions for Our Future, Taipei: Taiwan Commonwealth Publishing, pp 539-540.
