Criticism Over Meta’s Plans to Use Facebook and Instagram Posts for AI Training

Plans by Meta to use public posts and images from Facebook and Instagram to train its artificial intelligence (AI) tools have sparked backlash from digital rights groups. The social media conglomerate recently notified users in the UK and Europe that, due to changes in its privacy policy effective from June 26, their public information could be used to “develop and improve” its AI products. This includes posts, images, captions, comments, and Stories shared publicly by users over 18, excluding private messages.

Noyb, a European digital rights advocacy group, labeled this move as an “abuse of personal data for AI” and has filed complaints with 11 data protection authorities across Europe, urging immediate intervention to halt Meta’s plans.

Meta, however, maintains that its approach is compliant with relevant privacy laws and aligns with practices used by other major tech firms to develop AI products in Europe. In a blog post on May 22, Meta explained that European user data would support the broader rollout of its generative AI experiences by providing more relevant training data, reflecting the diverse cultures and languages of European communities.

Tech companies are in a race to gather fresh, multifaceted data to enhance models powering chatbots, image generators, and other AI innovations. In a February earnings call, Meta CEO Mark Zuckerberg emphasized the importance of the firm’s “unique data” to its AI strategy, highlighting the vast amount of publicly shared images, videos, and text posts available to Meta.

Chris Cox, Meta’s Chief Product Officer, noted in May that the company already uses public Facebook and Instagram data for its generative AI products in other parts of the world.

Controversial Notification Process

Meta’s method of informing users about the change has also faced criticism. Recently, UK and European users received notifications or emails detailing how their data would be used for AI from June 26. Meta stated it is relying on legitimate interests as the legal basis for processing this data, requiring users to opt out by exercising their “right to object” if they do not want their data used for AI.

To opt out, users must click a hyperlink labeled “right to object” in the notification, leading them to a form where they must explain how the data processing impacts them. This process has been criticized as cumbersome and potentially discouraging users from opting out.

Noyb co-founder Max Schrems, an Austrian activist and lawyer known for challenging Facebook’s privacy practices, criticized the opt-out mechanism as “absurd,” arguing that Meta should seek users’ opt-in consent rather than requiring them to object through a hidden and confusing process.

Schrems asserted that Meta should request permission before using user data, instead of making users beg to be excluded. Meta defends the process as legally compliant and similar to those used by other tech companies.

According to Meta’s privacy policy, it will honor objections and cease using the information unless it identifies “compelling” grounds that outweigh user rights or interests. However, Meta may still use some information about individuals for its AI products, even if they do not have a Meta account or successfully object, if they appear in publicly shared images on Facebook or Instagram.

“Meta is essentially claiming it can use any data from any source for any purpose and make it available globally, as long as it’s through ‘AI technology’,” Schrems remarked.

The Irish Data Protection Commission, responsible for overseeing Meta’s compliance with EU data law due to its Dublin headquarters, confirmed it has received a complaint from Noyb and is investigating the matter.
