Digital Humans Pose Serious GDPR Risks
China just proposed rules requiring mandatory labelling and consent gates for AI-generated digital humans. The regulation is Chinese, but the exam questions are European. For CIPP/E candidates, digital humans GDPR obligations cut across several domains the exam tests heavily: special category data, consent and withdrawal, transparency, and the practical limits of erasure rights once personal data has been absorbed into a trained model.
When Digital Humans Trigger GDPR Special Categories
Creating an AI avatar from someone's likeness means processing their personal data. That much is straightforward. The harder question, and the one the exam tests, is whether that processing involves special category data under Article 9.
The answer depends on the technical processing involved. GDPR Article 4(14) defines biometric data as personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm their unique identification. A photograph of someone is personal data. A facial scan processed through recognition software to build an AI replica crosses into biometric territory. If the purpose is to identify or replicate a specific individual, you are likely processing special category data under Article 9(1), and the general prohibition applies unless you can rely on one of the Article 9(2) derogations.
For CIPP/E candidates, the IAPP's Body of Knowledge (BoK), the document that defines every domain and topic candidates are tested on, maps this to Domain II.A.1 and II.A.2: personal data concepts and special categories. A scenario question might describe an organisation building AI avatars of employees and ask which lawful basis applies. The correct answer requires recognising that explicit consent under Article 9(2)(a) is almost certainly needed: not mere Article 6 consent, but the higher standard reserved for special categories.
Why Explicit Consent Is Not the End of the Analysis
Consent for biometric processing must be explicit, informed and freely given. But digital humans GDPR problems do not stop at the consent gate. If the avatar is deployed in a customer-facing context (a virtual assistant, a digital spokesperson, a chatbot with a human face), the transparency obligations under Articles 13 and 14 also apply. The data subject whose likeness was used has a right to know how their data is being processed. The individuals interacting with the digital human have a right to know they are not speaking to a person.
This is where the GDPR and the EU AI Act overlap. The AI Act requires that individuals interacting with AI systems be informed they are dealing with artificial intelligence. The GDPR requires that data subjects receive clear information about the processing of their personal data. A scenario question testing Domain III.C (transparency and privacy notices) might present both obligations and ask which regulation imposes the stronger requirement, or whether both apply simultaneously.
Children, Consent and the Digital Human Problem
China's draft rules specifically prohibit addictive digital human services targeting minors. The GDPR takes a different route but reaches a similar place. Article 8 sets the default age for valid consent to information society services at 16, with member states permitted to lower it to no less than 13. Recitals 38 and 58 reinforce that children merit specific protection because they may be less aware of the risks and consequences of data processing.
For the exam, the question is not whether children deserve protection; that is settled. The question is what specific obligations arise when a digital human processes a child's data or when a child interacts with a digital human service. Domain II.C.5 of the BoK covers consent and withdrawal, and the children's consent threshold is a regular exam topic. If you cannot state the default age threshold and name the range member states may set, close that gap before sitting the exam.
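The Article 8 rule is mechanical enough to sketch in code. The following is a minimal, illustrative Python sketch, not legal advice: the default threshold and the floor of 13 come from Article 8 itself, while the member-state entries in the lookup table are examples that should be verified against current national law before being relied on.

```python
# Illustrative sketch of the GDPR Article 8 age-of-consent rule for
# information society services: default threshold 16, with member states
# permitted to set a lower age, but never below 13.

GDPR_DEFAULT_AGE = 16
GDPR_MINIMUM_AGE = 13

# Example member-state derogations (ISO country codes). Incomplete and
# illustrative only -- verify each national threshold before relying on it.
MEMBER_STATE_AGE = {
    "FR": 15,  # France
    "BE": 13,  # Belgium
    "DE": 16,  # Germany (default retained)
}

def consent_age_threshold(country_code: str) -> int:
    """Return the applicable age threshold, falling back to the default."""
    age = MEMBER_STATE_AGE.get(country_code, GDPR_DEFAULT_AGE)
    if age < GDPR_MINIMUM_AGE:
        raise ValueError("Article 8 does not permit a threshold below 13")
    return age

def child_consent_requires_parent(age: int, country_code: str) -> bool:
    """True if parental authorisation is needed for the child's consent to be valid."""
    return age < consent_age_threshold(country_code)
```

A 14-year-old in France (threshold 15) would need parental authorisation; a 15-year-old would not. The exam tests exactly this kind of threshold reasoning.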
The Digital Humans GDPR Erasure Question
Perhaps the sharpest digital humans GDPR question is what happens when someone exercises their right to erasure under Article 17 or their right to object under Article 21 against a model trained on their likeness.
The Technical Reality
The legal right is clear. The practical compliance is not. Removing a specific individual's data from a trained model ("unlearning" their facial features, their voice, their mannerisms) is technically difficult and, in some architectures, effectively impossible without retraining. The exam may not ask you to solve that technical problem, but it will test whether you understand that the controller's obligation to erase persists regardless of how difficult compliance is. Domain II.C.3 (right of erasure) is tested frequently, and the intersection with AI-trained models is an increasingly likely scenario.
What This Means for Your Preparation
Digital humans GDPR questions sit at the intersection of special category data, consent, transparency, children's protections and erasure rights. That is five BoK domains from a single fact pattern. The candidate who spots only one of them will not select the strongest answer.
For structured CIPP/E exam practice, the free assessment at 22academy.com is a practical place to start.