Fake or Fact? The EU's AI Act Fails to Define 'Deepfake' in an Era of AI-Driven Deception

The Elusive Definition of ‘Deepfake’: A Regulatory Conundrum

A recent study from Germany has shed light on the EU AI Act's shortcomings in defining the term 'deepfake', a concept increasingly at the forefront of public discourse. The authors argue that the Act's emphasis on content that resembles real people or events yet falsely appears authentic is overly vague, and that it fails to account for how pervasive AI has become in ordinary consumer applications.

The EU AI Act's definition of a 'deepfake' is especially problematic in the context of digital image manipulation. Its focus on content that resembles real people or events, even when that content is not actually deceptive, raises questions about artistic and editing conventions that predate AI: a conventionally retouched portrait also departs from literal reality without being a forgery. This ambiguity can produce a 'chilling effect', in which the law's broad interpretive scope stifles innovation and the adoption of new systems.

The Act’s exceptions for ‘standard editing’ – supposedly minor AI-aided modifications to images – fail to consider the growing elasticity of AI-driven image-manipulation technologies. For instance, the ‘Scene Optimizer’ function in recent Samsung cameras can replace user-taken images of the moon with an AI-driven, ‘refined’ image, blurring the lines between reality and manipulation.

Article 50(2) of the EU AI Act offers an exemption for systems whose output does not substantially alter the original source material. However, this exemption raises the question of what constitutes 'content' in digital audio, images, and video. In the case of images, should alteration be measured in pixel space, or in the visual space actually perceptible to humans? Substantive manipulations in pixel space may leave human perception unchanged, while small, targeted perturbations can change perception dramatically.
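To make the distinction concrete, the sketch below (a hypothetical numpy illustration, not code from the study or the Act) applies a naive pixel-counting rule to two edits of the same image: a global one-unit brightness shift that touches every pixel but is invisible to the eye, and a small patch replacement standing in for a localized, semantically drastic insertion:

```python
import numpy as np

# Hypothetical sketch (not from the study): two edits to the same
# 100x100 grayscale image, judged by a naive pixel-counting rule.
rng = np.random.default_rng(0)
original = rng.integers(0, 255, size=(100, 100)).astype(float)

# Edit A: a global +1 brightness shift. Every pixel changes, yet the
# result is perceptually indistinguishable from the original.
edit_a = original + 1

# Edit B: overwrite a 22x22 patch (~5% of the frame) with new content,
# standing in for a small but semantically drastic insertion.
edit_b = original.copy()
edit_b[40:62, 40:62] = rng.integers(0, 255, size=(22, 22))

def changed_fraction(a: np.ndarray, b: np.ndarray) -> float:
    """Share of pixels whose values differ at all."""
    return float(np.mean(a != b))

for name, edited in [("global +1 shift", edit_a), ("5% patch edit", edit_b)]:
    print(f"{name}: {changed_fraction(original, edited):.0%} of pixels altered")

# A majority-of-pixels test flags the harmless brightness shift (100%
# altered) while waving through the patch edit (~5% altered) -- the
# opposite of the semantic judgement the exemption presumably intends.
```

A perceptually grounded metric such as SSIM would likely rank the two edits in the reverse order, which is precisely the gap between pixel space and human-perceptible space that the question above exposes.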

An example from the study illustrates the need for a more nuanced account of how small manipulations affect an image's overall significance: adding a handgun to a photo of a person pointing at someone changes only about five percent of the image, yet transforms its meaning entirely.

Because the Act never defines what 'standard editing' actually covers, even post-processing features as extreme as Google's Best Take, which composites faces from several exposures into a single 'ideal' group shot, may appear to be protected by the exception. This lack of clarity can lead to a 'scofflaw effect', where the law is disregarded as overreaching or irrelevant.

The study's authors emphasize the need for interdisciplinary work on the regulation of deepfakes, and offer the paper as a starting point for new dialogue between computer scientists and legal scholars. However, the paper itself succumbs to tautology at several points, frequently using the term 'deepfake' as if its meaning were self-evident.

Ultimately, the elusive definition of 'deepfake' highlights the need for a more nuanced understanding of AI's impact on our perception of reality. As AI-based image-manipulation technologies continue to evolve, regulatory frameworks must keep pace, providing clarity and guidance to those building new systems.
