June 15, 2024 at 03:54AM
Meta is postponing the training of its large language models on public content from adult users of Facebook and Instagram in the EU following a request from the Irish Data Protection Commission (DPC). The company plans to use this personal data but faces regulatory backlash for not seeking explicit consent, and the delay sets back its plans to bring the benefits of AI to Europe.
Meta is facing challenges in training its large language models (LLMs) on public content from adult users of Facebook and Instagram in the European Union. The delay stems from the Irish DPC's request and from concerns about Meta using personal data to train its AI models without explicit user consent.
The company expressed disappointment but said it had taken feedback from regulators and data protection authorities into account. Meta stressed that using user-generated content to train its AI models in Europe is essential to delivering a quality experience and continued innovation, and it expressed confidence that its approach complies with European laws and regulations.
However, the delay has also drawn attention from the Information Commissioner’s Office (ICO) in the UK, which emphasized the importance of respecting users’ privacy rights from the outset. Additionally, an Austrian non-profit organization filed complaints in multiple European countries, alleging that Meta’s collection of user data for AI development violates the General Data Protection Regulation (GDPR).
The organization criticized Meta for framing the delay as “collective punishment” and emphasized that users should give informed, opt-in consent before their data is processed, as provided for under the GDPR.
The diverse reactions and regulatory scrutiny suggest ongoing challenges for Meta in navigating the use of public content to train its AI models in the European Union, while also reflecting broader debates around privacy, data consent, and AI development in the region.