

Industry Report: European Policy

The SAA explores the impact of generative AI on the work and rights of European authors


During a seminar, the speakers addressed a range of topics, including transparency, machine learning, text and data mining, and fair remuneration

A moment during the panel


On 30 January, the Society of Audiovisual Authors (SAA) hosted a seminar that brought together experts, policymakers and representatives of collective management organisations (CMOs) to discuss the impact of generative AI on audiovisual authors’ work and rights in Europe.

The event, moderated by Alia Papageorgiou (president of Press Club Brussels Europe, board member at journalismfund.eu and consultant), saw the participation of author and tech philosopher Tom Chatfield; lawyer, professor of Law and ABA president Sari Depreeuw; Dan Nechita, head of Cabinet of MEP and AI Act co-rapporteur Dragos Tudorache; SACD secretary general and SAA vice-chair Patrick Raude; head of the Legal Department at Bild-Kunst and president of the board of directors at EVA Anke Schierholz; IP adviser to the Deputy Prime Minister of Belgium Paul Laurent; and MEP and rapporteur for the Legal Affairs Committee on the AI Act Axel Voss.


The first takeaway that emerged during the discussion was that art and creativity are not industrial processes. On this point, Chatfield warned that generative AI may threaten the existence of the whole “ecosystem of creativity”, which is already being exploited since “its resources are being mined and extracted, and the creative works are threatened by the automation of content production”. What makes art and creativity unique, he argued, cannot be captured by AI processes, and “machine learning is nothing like human learning or understanding, or the creation and sustaining of value”. Stressing the importance and uniqueness of the human touch, he explained: “When we educate children, we don't criticise them because their drawings are not as good as photographs; we celebrate the process of learning, self-expression and communicating because it’s how we inhabit the world more richly and become human. This doesn't mean there's no place for AI, but it does mean that these questions of why and how we value creativity in all its forms are not captured by a shallow focus on output.”

Next, the speakers tackled the fact that generative AI was not envisioned by the text and data mining (TDM) exception of the 2019 Copyright Directive. Depreeuw first noted that the directive predates the explosion of generative AI and that, today, “rights’ reservation can be technically very difficult […]. Let’s assume that the TDM exception is applicable to a certain point. You need to communicate the opt-out for every occurrence of your work on the internet and for all of the AI applications. If this is on the author’s shoulders, that's very difficult and very unlikely. There is no central registry of all of the works in the world where this opt-out can be communicated.”
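To make the scale of that burden concrete, here is a minimal, illustrative sketch (not drawn from the seminar) of one informal opt-out mechanism in use today: robots.txt rules addressed to AI-training crawlers, such as OpenAI's GPTBot. It covers only a single site and a single crawler, which is precisely the gap Depreeuw describes, since there is no central registry covering every copy of a work and every AI application.

```python
# Illustrative sketch only: checks whether a single site's robots.txt
# disallows one named AI-training crawler (here, "GPTBot") from its root.
# Each site hosting a copy of a work, and each crawler, needs its own rule,
# which is the practical burden described in the article.
from urllib import robotparser


def is_opted_out(site: str, crawler: str = "GPTBot") -> bool:
    """Return True if the site's robots.txt blocks the given crawler
    from fetching the site root. A crude, per-site, per-crawler signal."""
    base = site.rstrip("/")
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{base}/robots.txt")
    parser.read()  # fetch and parse the live robots.txt
    return not parser.can_fetch(crawler, f"{base}/")


if __name__ == "__main__":
    # Hypothetical domain, used purely for illustration
    print(is_opted_out("https://example.com"))
```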

Voss added: “We never discussed AI at the time of formulating the TDM exception in the Copyright Directive. We allowed companies to use TDM for their own purposes. Public usage was not in our minds back then.” Therefore, Laurent concluded that the European Commission will need to study the impact of the exceptions of Articles 3 and 4, and the opt-out problem.

Next, the discussion zoomed in on transparency as a prerequisite for defending authors’ rights. Schierholz touched upon the disruptive effect of AI on the jobs of content creators, translators, interpreters, illustrators and designers. She said: “We need the AI Act. This idea that the AI Act would be over-regulative and hinder the development of European start-ups is the same argument that is always brought up by the international tech industries when new technology is on the horizon. This argument has been proven wrong time and time again: the internet did not collapse because user-generated content platforms were held liable for the user's content.”

She called for transparency and licensing, describing the latter as “the only way to balance diverging interests”, something that can be done “collectively”. She also wished for a return to “the basic principle of copyright”, which would ensure fair remuneration. She acknowledged how hard it is “for authors to apply a machine-readable reservation for their rights”, but also reminded the participants that “those who train the machines can give this information”.

Nechita spoke about the three priorities set out by the European Parliament at the intersection of AI and human creativity: developing AI in line with European values; “balancing technical progress and what makes us human”; and securing “a future where human creativity can flourish” while “focusing on transparency”.

Later, the speakers discussed whether CMOs have expertise and experience that AI companies could learn from. Raude is confident that CMOs “know both how to negotiate a licence and how to distribute rapidly”, and “have the IT knowledge and human resources to distribute the rights they collect”.

Chatfield agreed, adding, “AI and tech companies understand their field, but can be very naive about data, copyright and remuneration.” Thus, they “could learn a lot from CMOs […]. CMOs can come together globally to provide the expertise, the frameworks of remuneration and checks, the transparency, the data governance – all that the AI sector lacks,” he concluded.

The last point that emerged during the seminar was that the AI Act is seen as just the beginning of a broader, more multifaceted debate on copyright. “It’s not about over-regulation or under-regulation; we need good regulation,” said Laurent.

Voss argued: “Transparency is what the AI Act brings […]. We have achieved, firstly, that if AI developers are developing something, they must respect the existing laws, including copyright. We also decided that every AI output should be marked as synthetic. Thirdly, AI developers must deliver a detailed summary of the content used for training the AI models.” However, he admitted that more effort is still required, including work on harmonising some aspects of EU copyright law.
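As a purely hypothetical illustration of what such a “detailed summary” of training content could look like in practice (the AI Act does not prescribe a format, and this sketch is not drawn from the seminar), a developer might condense a dataset manifest into totals per source and licence:

```python
# Hypothetical sketch: condense a training-data manifest into a summary
# grouped by source and licence. The field names and structure here are
# assumptions made for illustration; the AI Act text does not define them.
from collections import defaultdict

# Example manifest entries (invented for illustration)
manifest = [
    {"source": "news-site-a.example", "licence": "licensed", "items": 12_000},
    {"source": "blog-b.example", "licence": "public-domain", "items": 3_500},
    {"source": "news-site-a.example", "licence": "licensed", "items": 8_000},
]


def summarise(entries):
    """Group manifest entries by (source, licence) and total the item counts."""
    totals = defaultdict(int)
    for entry in entries:
        totals[(entry["source"], entry["licence"])] += entry["items"]
    return totals


for (source, licence), items in sorted(summarise(manifest).items()):
    print(f"{source:<24} {licence:<14} {items:>8} items")
```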

You can access the full recording of the seminar by clicking here.


