A Wake-Up Call

By Dr Ayesha Ashfaq

A headline from Denmark has stirred a global conversation about what it means to own your identity in the digital age, and it felt more personal than distant. The Danish government is leading a push to protect human identity from AI abuse and exploitation by recognising a person's face, voice, and body as their intellectual property. In an age when pixels can mimic people and voices can be cloned in seconds, this small yet significant legislative leap could mark a global shift in AI ethics.

The law is the first of its kind in Europe. If someone uses your face in a deepfake, whether for scams, politics, or pornography, you will have full legal power in Denmark to have it taken down. This is not just about identity; it is about owning your digital self. It is a declaration of human rights, updated for 2025. The law targets harmful uses of AI, such as fake pornography, scam calls, and false political propaganda, while still protecting satire and parody, a balance between freedom and safety. Once passed, the law will give citizens three rights: the right to demand the removal of AI-generated content using their likeness; the right to seek compensation where harm is done or intended; and the right to hold platforms accountable for hosting deepfakes and synthetic abuse. Platforms that fail to remove reported content would be liable for heavy fines.

It is a wake-up call for developing nations like Pakistan. On our rapidly expanding digital frontier, with low AI literacy and little awareness of AI's potential for misuse, we are at risk. The tools to create synthetic media are already in people's hands; what is absent are the legal frameworks to protect those being exploited by them. I have always believed that in the age of cloning, copying, and faking, absolute freedom without responsibility is just chaos dressed in rights. Owning your digital identity as intellectual property matters. It is pertinent to understand that deepfakes are not digital pranks or jokes; they are weapons. They have been used to target journalists, politicians, and marginalised communities, including women and religious and ethnic minorities, to silence dissent, and to distort political, social, historical, and cultural narratives.

The recent deepfake scandal around Chief Minister Maryam Nawaz in early 2025 is a reminder that Pakistan is no exception. Yet our response remains reactionary rather than preventive. Existing laws, such as PECA 2025, are being stretched to cover deepfake abuse, but they lack clarity, coherence, and consistency. Victims are frequently left with the burden of proof, and platforms go unpunished. Denmark's model flips this: it makes platforms responsible and accountable for synthetic abuse and empowers citizens to exercise their digital rights and protect their digital selves.

As we stand between rapidly emerging technology and fragile human rights, the debate before us is not just legal; it is moral, and of tremendous significance. The Danish model should prompt lawmakers everywhere to ask themselves: are we ready to protect our people in the digital age? Pakistan, like other countries facing the same dilemma, must build a preventive and empowering legislative architecture. There need to be public consultations and stakeholder discussions, part of a broader conversation about digital, and especially AI, literacy. AI ethics should be part of the curriculum in our educational institutions from primary education onwards, and our policymaking class must engage with technologists, ethicists, and civil society.

To address the deepfake menace seriously, Pakistan will have to adopt a multi-tiered approach spanning law, education, and digital infrastructure.

First, we need national laws designed specifically for synthetic media, going beyond the vague sketch contained in PECA 2025. These laws must clearly define deepfake abuse, provide quick remedies, and place the burden of removal on the internet platforms that host harmful material.

Second, we need to raise public digital and AI literacy, particularly among youth and marginalised communities. An extensive national campaign can raise awareness of the malicious uses of AI, and schools should include AI ethics, digital rights, and digital behaviour and responsibilities in their curricula. Civil society and academia can help shape these conversations.

Third, Pakistan needs to set up an independent digital rights commission that could monitor AI abuses, assist victims, and bring together technologists, lawyers, and human rights advocates.

I believe this moment calls for us to awaken a national conscience, not merely to make policy. We need citizens who recognise that protecting their identities is no longer just a matter of passwords or privacy settings. It is about standing up for dignity and truth in a world of lost human connections. It is about restoring your sovereignty over yourself, one pixel at a time, one byte at a time. Denmark's initiative may be a legislative first, but the deeper question is: can we follow its moral lead? We face a choice: will we be spectators in the age of AI or defenders of human rights? The clock is ticking. If we do not act now, we will be not merely passive victims of AI manipulation but active conspirators in the collapse of truth and trust. Pakistan needs not only laws but leadership, educational reform, and moral clarity in the digital age.

The writer is the Chairperson and Associate Professor at the Department of Media & Development Communication at the University of the Punjab.

Note: This article first appeared in The Nation on July 27, 2025.