Decision of the OLG Frankfurt dated March 4, 2025 – Case No. 16 W 10/25
Once a host provider has been notified of an infringing post, it must also block posts with equivalent content; no further notice is then required. The OLG Frankfurt emphasized the liability of host providers for infringing media content, especially manipulated video material and fabricated depictions of real persons, in a decision dated March 4, 2025, issued in urgent proceedings (Case No. 16 W 10/25).
In cases of legal violations on the Internet, for example through memes or deepfakes, host providers must take action and block the corresponding content once they become aware of infringing posts. The applicable liability regime depends on the provider's role: host providers store third-party content, access providers merely supply connectivity, and content providers publish content of their own. The OLG Frankfurt already ruled on this with a judgment dated January 25, 2024 (Case No. 16 U 65/22). In consistent continuation of this case law, the OLG Frankfurt has now decided in urgent proceedings that platform operators must also block semantically equivalent content without a new notice being required, according to the law firm MTR Legal Rechtsanwälte, which advises in IT law, among other areas.
Deepfake Videos with Equivalent Content
In the underlying case, a deepfake video was published on a social media platform. Using AI, the face and voice of the affected person had been manipulated: the video showed a well-known doctor, through manipulated image and sound material, as if he were advertising a weight loss product. In reality, he had nothing to do with it. After a corresponding notice from the affected person, the platform operator removed the video.
A short time later, however, another video with nearly identical content appeared. It differed only in details, such as a slightly altered presentation and a different headline, but conveyed the same deceptive overall impression. Such manipulated media content poses a growing problem for personality rights and privacy, as AI models trained on large amounts of data produce ever more realistic forgeries. This video, too, was deleted, but only after a renewed report. Since deepfake videos can also contribute to the spread of fake news, effective protective measures against such manipulations are becoming ever more important. The affected person sought to have the platform legally compelled to refrain from such content in the future and applied for a preliminary injunction.
Technology and Detection of Deepfake Videos
The rapid development of deepfake technologies is based on artificial intelligence (AI) and advanced machine learning methods. These allow videos and audio files to be manipulated so convincingly that they appear authentic and are hardly distinguishable from genuine content, even to trained eyes and ears. Particularly well known is the so-called face swapping technique, in which one person's face is replaced with another's. It requires large amounts of image and video data to realistically replicate the target person's facial expressions, gestures, and voice.
Challenges in Detecting Forgeries
The detection of such forgeries presents considerable challenges for host providers, access providers, and other providers of web hosting and server services. Deepfake videos and manipulated audio files can be used deliberately to spread disinformation or to damage a person's reputation. The AI systems and algorithms employed are becoming increasingly sophisticated, often pushing classical verification methods to their limits.
Methods for Detecting Deepfakes
Various methods are used to detect deepfake videos and other forgeries. These include analyzing irregularities in facial expressions, movements, or lighting conditions in the video. Checking metadata and employing specialized AI tools to detect manipulations are also common procedures. Nevertheless, detection remains an ongoing challenge as the techniques to create deepfakes continuously evolve.
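As a minimal illustration of the metadata check mentioned above, the following Python sketch shells out to ffprobe (part of FFmpeg) and collects a few simple red flags. The fields checked and the heuristics are assumptions for illustration only; a missing creation timestamp or an FFmpeg re-encode marker is at most a weak hint, not proof of manipulation.

```python
# Hypothetical sketch: flag videos whose container metadata looks suspicious.
# Assumes ffprobe (FFmpeg) is installed; the chosen heuristics are
# illustrative and do NOT constitute a reliable deepfake detector.
import json
import subprocess

def probe(path: str) -> dict:
    """Read container and stream metadata as JSON via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def suspicious_signals(meta: dict) -> list[str]:
    """Collect simple red flags; the absence of flags proves nothing."""
    signals = []
    tags = meta.get("format", {}).get("tags", {})
    if "creation_time" not in tags:
        signals.append("no creation_time tag (often stripped by re-encoding)")
    encoder = tags.get("encoder", "")
    if "lavf" in encoder.lower():
        signals.append(f"re-encoded with FFmpeg ({encoder})")
    return signals

if __name__ == "__main__":
    meta = probe("upload.mp4")  # hypothetical file name
    for s in suspicious_signals(meta):
        print("flag:", s)
```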
Risks and Protective Measures for Platform Operators
For host providers and platform operators, this means they must not only react to notices but also proactively take measures to protect against the spread of deepfake content. This includes the use of content filters, regular review of hosted content, and collaboration with experts and authorities. The goal is to ensure the authenticity of the media content provided on their servers and websites and to detect manipulations early.
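One simple building block of such content filters is re-upload blocking: once a file has been removed after a notice, byte-identical copies can be rejected automatically. The sketch below, with hypothetical function names, uses exact SHA-256 file hashes; note that this catches only identical files, not re-edited versions, which is precisely the gap the court's "equivalent content" standard addresses.

```python
# Minimal sketch of re-upload blocking with exact file hashes.
# Catches only byte-identical copies; re-encoded or re-cut videos slip through.
import hashlib
from pathlib import Path

blocked_hashes: set[str] = set()  # hashes of content removed after notices

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def on_takedown(path: Path) -> None:
    """Remember the removed file so identical re-uploads are rejected."""
    blocked_hashes.add(sha256_of(path))

def accept_upload(path: Path) -> bool:
    """Reject uploads whose exact bytes were previously taken down."""
    return sha256_of(path) not in blocked_hashes
```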
The use of deepfake technologies poses significant risks to the privacy and security of individuals and companies. Manipulated videos, audio files, or images can be used deliberately for deception, extortion, or the spread of misinformation. Internet users should therefore engage critically with digital content and, when a forgery is suspected, verify the source or use appropriate verification tools.
Need for Research and Collaboration
Given the rapid advancement of deepfake AI and the increasing prevalence of such forgeries, close cooperation between researchers, technology companies, hosting providers, and authorities is essential. Only the combination of innovative detection methods, ongoing technical development, and clear legal frameworks can protect the integrity of digital media content in the long term and sustain a trustworthy, secure online environment in which manipulations and fakes have no place.
No Further Notice Required
The regional court initially rejected the application because it saw no continuing obligation on the platform's part. On the applicant's immediate appeal, however, the Higher Regional Court of Frankfurt partially overturned this decision. According to the Senate, the platform is not liable for the first video, but it is for the second. For context: the operator of a website is considered a content provider if it controls, edits, or publishes its own or third-party content, which entails stricter liability for that content. The Higher Regional Court stated that the platform had no knowledge of the infringement before the first notice and therefore no duty to pre-screen or delete content. After the first video was removed, however, the situation changed: with specific knowledge of the infringement, the host provider is obligated to check for and, if necessary, remove equivalent content as well. The platform violated this duty because the second, almost identical video was blocked only after a renewed notice. The blocking should have been carried out without another warning, the Higher Regional Court of Frankfurt emphasized.
It further clarified that a host provider is generally not required to monitor or filter user-uploaded content in advance; such general monitoring would hardly be compatible with freedom of expression and communication on the internet. The scope of liability and examination duties may vary with the technical and legal circumstances of the individual case. Once the operator has concrete knowledge of a clearly identifiable infringement, however, it must block the content in question and take measures to prevent its redistribution in the same or similar form. Distinguishing permissible from impermissible content and ensuring effective protective measures for users and platforms remain challenging. In any event, this duty goes beyond merely removing the reported post.
Equivalent Content: Detection and Legal Duties of Host Providers
According to the Higher Regional Court, content is equivalent when, despite minor changes such as a different cut, altered colors, another format, or slightly modified text, it conveys the same infringing overall impression. A platform cannot object that a newly uploaded video is not technically identical if it factually carries the same deceptive message. Learning algorithms and AI-based matching allow systems to identify such near-duplicates with growing accuracy. From the moment the operator is informed of a concrete infringement, it must take appropriate technical and organizational measures to prevent repetitions. In practice, however, detection methods do not work flawlessly in every situation, which makes effective protective measures all the more important.
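How "equivalent despite minor changes" can be approximated technically is illustrated by perceptual hashing. The following is a minimal average-hash sketch on single video frames using the Pillow library; the file names and the distance threshold are assumptions, and production matching systems are considerably more robust.

```python
# Minimal average-hash (aHash) sketch with Pillow: two images whose hashes
# differ in only a few bits are likely near-duplicates, even after small
# edits such as recoloring or a changed headline overlay.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale; each bit = pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare a key frame of a new upload against a key
# frame of the previously removed video. The threshold 10 is illustrative.
if hamming(average_hash("removed_frame.png"),
           average_hash("upload_frame.png")) <= 10:
    print("possible equivalent content -- route to review")
```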
In practice, this means that host providers must not only react to reports but also actively counter the spread of misleading or manipulative content. They are subject not to a general monitoring obligation but to a situational duty of examination, which arises as soon as the operator is notified of a concrete infringement. They must then not only delete the specific post but also check whether comparable content on the platform continues the same infringement. If they fail to do so, they can be held liable for injunctive relief as so-called indirect infringers.
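Translated into a platform workflow, the situational duty of examination might look like the following sketch: on receipt of a notice, remove the reported post and proactively flag equivalent posts without waiting for a second report. All names here are hypothetical, and the equivalence test stands in for a real matcher such as the perceptual hashing shown above.

```python
# Hypothetical notice-handling workflow reflecting the situational duty:
# delete the reported post, then proactively scan for equivalent content.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    fingerprint: int  # e.g., a perceptual hash of a key frame

def is_equivalent(a: Post, b: Post, threshold: int = 10) -> bool:
    """Stand-in for a real matcher (here: Hamming distance on hashes)."""
    return bin(a.fingerprint ^ b.fingerprint).count("1") <= threshold

def handle_notice(reported: Post, catalog: list[Post]) -> list[str]:
    """Remove the reported post and flag equivalent posts without a new notice."""
    to_remove = [reported.post_id]
    to_remove += [p.post_id for p in catalog
                  if p.post_id != reported.post_id and is_equivalent(reported, p)]
    return to_remove

# Example: the second, slightly edited video shares almost the same fingerprint.
catalog = [Post("video-1", 0b10110010), Post("video-2", 0b10110011)]
print(handle_notice(catalog[0], catalog))  # ['video-1', 'video-2']
```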
MTR Legal Rechtsanwälte provides comprehensive advice in IT law.
Feel free to contact us!