Many users of Meta services like Facebook and Instagram have recently received an email stating: “Learn how we use your information to improve AI at Meta.” Behind this seemingly harmless announcement lies a plan with far-reaching implications for individual privacy. At Sofortdatenschutz.de, we explain what you need to know and what actions you can take.

What Meta Plans: AI Training with User Data

The email states that user data will be used to improve Meta's AI systems, including generative AI features such as Meta AI and the AI Creative Tools. The company draws on a broad range of data sources:

Public information: This includes posts, photos, and comments from the accounts of users aged 18 and over, and covers all public content shared on Meta products since the account was created.

Interactions with AI features: Any use of Meta’s AI tools will also feed into model training.

Meta invokes “legitimate interest” (Art. 6(1)(f) GDPR) as the legal basis for this processing, arguing that it needs the data to develop and improve its AI systems.

Critical Aspects of the Meta Announcement

Legitimate interest – really sufficient?
This legal basis requires weighing the company's interests against users' rights. Meta argues that it needs the data to develop AI, but does this interest outweigh users' right to data protection, especially when years of public data are involved? Many data protection experts criticize relying on this basis for such broad new processing purposes.

Purpose limitation and user expectations
GDPR requires data to be collected for clear and legitimate purposes (Art. 5(1)(b)). Did users share personal posts expecting them to train global AI models? Unlikely. Even public posts were shared within a social context—not for large-scale AI processing.

Scope of data: “All public information since account creation”
This includes potentially decades of posts and content. Such a retroactive change in purpose is problematic, even for public data.

Transparency and clarity
Does the email explain the processing clearly enough? What exactly counts as “public information”? Can users understand what data is used and how? The complexity of AI systems makes the impact hard to judge.

Right to object – opt-out instead of opt-in
Because Meta relies on legitimate interest, users only have a right to object (Art. 21 GDPR) and must take action themselves to stop the data use. An opt-in model would be more privacy-friendly. Many users may overlook the email, not understand it, or skip the objection process.

What about already trained models?
Objections apply only to future use. What about data already used in training? Can it really be removed from AI models? This remains unclear.

What You Can Do as a User

Object: You can object to your data being used for AI training. Meta provides a link for this in the email and in the privacy settings. After a successful objection, Meta should no longer use your data for training. We recommend doing this if you’re concerned.

Change audience settings: Regularly review privacy settings and adjust post visibility. The less public your content, the less likely it is to be used (though objection is more direct).

Delete data: You can request deletion, but it may not affect data already used for model training.

Our View at Sofortdatenschutz.de

Meta’s plan reflects the conflict between AI progress and privacy. While AI has potential, it must not come at the expense of data protection. Relying on “legitimate interest” for such extensive processing of personal data that was originally shared for other reasons is questionable. Opt-in would have been the more transparent and privacy-friendly choice.

We urge all users to actively review their settings and exercise their rights—especially the right to object. Protecting your data in the digital age requires awareness and action.

Conclusion

AI is advancing fast. That makes it even more important for companies like Meta to offer clear, fair, and transparent handling of user data.
