The Single Resolution Board needed to assess the value of Banco Popular Español during its resolution. They collected comments from shareholders and creditors, removed identifying details (pseudonymised the data), and sent everything to Deloitte for independent valuation. The European Data Protection Supervisor said this violated transparency rules because the Board didn’t properly disclose recipients when collecting the data.
The case reached Luxembourg, and the Court delivered three holdings that matter for anyone processing data in the EU.
First: opinions are inherently personal data. When you express a view or comment, that expression relates to you by its nature. This seems obvious but creates practical headaches. Your feedback on a product, your comment in a shareholder meeting, your review on a platform – all personal data, automatically.
Second, and this is the big one: pseudonymised data aren’t always personal data for everyone. From the Board’s perspective – holding the key to re-identify people – the comments remained personal data with full GDPR obligations. But for Deloitte, receiving pseudonymised comments without any realistic way to identify individuals, those same comments might not constitute personal data at all. No personal data means no GDPR obligations.
Read that again because it contradicts everything data protection authorities have been saying for seven years.
Third: transparency obligations attach at collection. Even if data become non-personal for recipients after pseudonymisation, the controller must disclose all recipients before collecting data (if relying on consent). This creates an asymmetry – the controller’s obligations are absolute, but the recipient’s obligations depend on their actual capability.

Most tech companies have been treating pseudonymisation as a security measure that keeps data firmly within GDPR’s scope. Your legal team probably told you: “Pseudonymised data are still personal data, so we need full compliance.” That was correct based on regulatory guidance. The Court just said that guidance misses the point.
Consider your analytics pipeline. You pseudonymise user data before sending it to a third-party analytics provider. That provider has no access to your identification key, no reasonable way to link the data back to individuals, and processes data purely in aggregate. Under EDPS v SRB, they might not be processing personal data at all. No data protection impact assessment required. No data subject rights to handle. No Article 30 records. No Chapter V transfer mechanism if they’re outside the EU.
The compliance cost reduction is substantial. But it only works if the pseudonymisation is genuinely effective and you can document why the recipient cannot re-identify individuals.
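To make the split concrete, here is a minimal sketch of the kind of pipeline described above: the controller pseudonymises identifiers with a secret key it never shares, so the analytics provider receives only unlinkable tokens. All names, fields, and the key-handling shown are illustrative assumptions, not anything specified by the judgment.

```python
import hashlib
import hmac

# Assumption: the controller keeps this key internal and never shares
# it with the analytics provider. Whoever holds the key can re-link
# tokens to users; whoever doesn't, can't.
SECRET_KEY = b"controller-held-key-never-shared"

def pseudonymise(user_id: str) -> str:
    """Deterministic keyed hash: the same user always maps to the same
    token, but the token cannot be reversed without the key."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def prepare_export(events: list[dict]) -> list[dict]:
    """Strip direct identifiers before the data leave the controller."""
    return [
        {"user": pseudonymise(e["user_id"]), "event": e["event"], "ts": e["ts"]}
        for e in events
    ]

events = [{"user_id": "alice@example.com", "event": "login", "ts": 1}]
export = prepare_export(events)
# The exported records carry only the keyed token, never the e-mail.
```

The design choice doing the work here is key separation: from the controller's side the mapping is trivially reversible (personal data), while the recipient, lacking the key, sees only opaque tokens.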
Here’s where it gets interesting. The European Data Protection Board released draft guidelines on pseudonymisation in early 2025. Those guidelines maintain that pseudonymised data remain personal data in all circumstances. The EDPB treats pseudonymisation strictly as a security measure under Article 32 GDPR, never as a method for removing data from the regulation’s scope.
The Court’s judgment directly contradicts this position. The EDPB now faces the uncomfortable task of revising guidance that underpins years of supervisory authority enforcement practice. National regulators have built audit methodologies on the assumption that recipients of pseudonymised data always process personal data. They’ll need new approaches.
This will take time to filter through. Expect a period of regulatory uncertainty whilst authorities figure out how to implement the Court’s reasoning without creating compliance chaos.
Standard Contractual Clauses currently assume transferred data constitute personal data for all parties. The European Commission’s 2021 SCCs don’t contemplate scenarios where a recipient genuinely cannot identify individuals and therefore processes non-personal information.
You might now face situations where:
- data are personal for you as exporter but arguably not for the importer;
- the importer resists signing SCCs covering data it doesn't consider personal;
- it's unclear whether a Chapter V transfer mechanism is needed at all.
This creates friction. Some recipients will resist accepting SCC obligations for data they don’t consider personal. You’ll need supplementary clauses addressing who owes what obligations in these split scenarios. Legal teams will love the billable hours. Business teams won’t.
The judgment creates strong incentives to invest in robust pseudonymisation techniques. If you can demonstrate that recipients genuinely cannot re-identify individuals, you can offer them data processing outside GDPR’s scope. That’s an attractive commercial proposition.
But superficial pseudonymisation won't work. Regulators will scrutinise claims that data aren't personal for recipients. You need to document:
- the pseudonymisation technique applied, and how the re-identification key is generated, stored, and access-controlled;
- the technical and contractual barriers that keep the key away from the recipient;
- why re-identification is not a means reasonably likely to be used by the recipient, given its resources and the data it already holds.
Think differential privacy, homomorphic encryption, secure multi-party computation. Technologies that enable useful analysis whilst preventing identification even by sophisticated actors. These measures cost money but the compliance benefits may justify the investment.
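As one concrete example of those techniques, here is a sketch of the Laplace mechanism from differential privacy: calibrated noise is added to an aggregate so the published figure reveals little about any single individual. The query, dataset, and epsilon value are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample a Laplace(0, scale) variate via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(values: list[int], epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1 (adding or removing one person
    changes the result by at most 1), so the noise scale is 1/epsilon."""
    return len(values) + laplace_noise(1.0 / epsilon)

ages = [34, 29, 41, 52, 38]
noisy_count = dp_count(ages, epsilon=1.0)
# The recipient sees a useful aggregate, not any individual's record.
```

Smaller epsilon means more noise and stronger protection; the trade-off between accuracy and privacy is tuned per release.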
This creates an odd situation. You’re an EU controller who pseudonymises data and sends them to three recipients. Recipient A has re-identification capability (personal data for them). Recipients B and C don’t (potentially not personal data for them). An individual exercises their erasure right.
You must erase the data you hold. Article 19 GDPR requires you to notify each recipient about the erasure. But Article 19 applies to recipients to whom personal data have been disclosed. If B and C aren’t processing personal data, does Article 19 apply? The judgment doesn’t address this explicitly.
Practical answer: notify all recipients anyway and let them decide whether they’re processing personal data. This avoids the risk of missing a notification whilst pushing the personal data assessment onto recipients.
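The notify-everyone approach above can be sketched in a few lines. The recipient names and notification format are hypothetical; the point is simply that the controller builds one notice per recipient without filtering on its own view of who processes personal data.

```python
from dataclasses import dataclass

@dataclass
class Recipient:
    name: str
    holds_reidentification_key: bool  # personal data for them if True

def erasure_notifications(recipients: list[Recipient], token: str) -> list[str]:
    """One notification per recipient. Deliberately no filter on
    holds_reidentification_key: each recipient assesses for itself
    whether it is processing personal data."""
    return [f"To {r.name}: erase records linked to token {token}"
            for r in recipients]

recipients = [
    Recipient("A", holds_reidentification_key=True),
    Recipient("B", holds_reidentification_key=False),
    Recipient("C", holds_reidentification_key=False),
]
notices = erasure_notifications(recipients, "7f3a9c")
```

Filtering would require the controller to decide, per recipient, a legal question the judgment leaves open; notifying all of them is the cheaper and safer default.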
The judgment opens more questions than it answers. Does the same logic apply beyond pseudonymisation? Information might identify someone from your perspective (you know the context) but mean nothing to a recipient (they lack the background). Is that also potentially non-personal for the recipient?
What about sector-specific legislation? The AI Act, Digital Services Act, and ePrivacy Directive each have data-related provisions. Do they adopt the same contextual approach to personal data? We don’t know yet.
How do legitimate interests assessments work when transferring data that might be personal for you but not for recipients? Do you assess privacy impact based on your processing alone or include the recipient’s use?
These questions will occupy courts, regulators, and legal advisors for years.
EDPS v SRB represents judicial recognition of technological reality. Data protection law must account for the fact that multiple actors with different capabilities process the same information. A rigid approach treating all data uniformly regardless of context either becomes unworkable or extends regulation beyond its justification in fundamental rights.
The Court chose contextual assessment over categorical rules. This better reflects how data processing actually works in 2025 whilst preserving robust protection where identification capability exists. Whether this approach creates more compliance clarity or more uncertainty depends on how quickly regulators and practitioners adapt to the new framework.
For now, tech companies have a clear message from Luxembourg: pseudonymised data aren’t automatically personal data for everyone. Use that knowledge strategically, but document your reasoning carefully. Regulators won’t simply accept assertions that data aren’t personal. You need evidence.
Need guidance on how this judgment affects your data processing arrangements? Wolja Digital specialises in GDPR compliance for technology companies. We provide practical advice on pseudonymisation techniques, data transfer mechanisms, and privacy-enhancing technologies that actually work for product development.