
By Uzma Ehtasham
A new and troubling front has opened in the long and bitter information wars. In recent days, a short, explosive video has spread like wildfire across Afghan and other regional social media networks. The clip, which claimed to show security forces desecrating a mosque and the Holy Qur’an, even bringing dogs into a place of worship, was designed to provoke horror and fury. Its circulation was swift and its emotional impact fierce, yet it was built entirely on falsehood. The Ministry of Information and Broadcasting, backed by independent fact-checkers, has confirmed that the footage was artificially generated using AI tools. There is no trace of any such incident in verified media coverage or field reports. What millions saw and shared was a manufactured illusion, tailored to wound faith and poison relations.
The episode captures something dark about our moment. Modern technology has given propaganda a new, almost surgical precision. Artificial intelligence, once the domain of research labs and big studios, now lies in the hands of anyone with a smartphone and an agenda. With a few clicks, such a person can create scenes that appear real, manipulate voices, and fabricate outrage. Yet if the technology is new, the instincts it exploits are not. Fear, faith, and anger travel faster than reason. The digital age rewards emotional heat, not calm inquiry. When the image of a soldier or a sacred place flashes across screens, the question of authenticity rarely keeps pace with the rush to condemn.
This was no random prank. The timing, tone, and circulation of the forged video all align with a wider pattern of cross-border antagonism. Just days earlier, the chief minister of Khyber Pakhtunkhwa, Sohail Afridi, had accused elements within state institutions of disrespecting mosques, a claim already mired in controversy. Within hours, anonymous accounts sympathetic to Afghan actors seized on his remarks, coupling them with the fake clip to weave a single, incendiary narrative: that the army was at war with Islam itself. The goal was transparent: to inflame sentiment, to pit believer against soldier, and to weaken Pakistan’s moral footing at a time when its border diplomacy with Afghanistan is already under severe strain.
Islamabad’s response was quick and coordinated. Government agencies released side-by-side analyses of the forged imagery, explaining the hallmarks of digital manipulation. Ministers urged the public to verify before sharing, calling the incident part of a broader campaign of “information terrorism.” Federal Minister Rana Tanveer Hussain went further, accusing Afghan social media networks of harboring anti-Pakistan narratives and urging Kabul to behave as a “peaceful neighbor, not a proxy arena for Pakistan’s foes.” Behind the official statements lies a deep unease: Pakistan’s western frontier has become not just a physical flashpoint but a digital battlefield, one where images, not bullets, can ignite new conflicts.
That anxiety is not misplaced. The relationship between Islamabad and Kabul has grown steadily more brittle since the Taliban’s return to power. Border skirmishes, militant infiltration, and the dispute over cross-border attacks have already soured trust. Now, with AI-forged disinformation spreading through both societies, each fresh outrage risks compounding real-world hostility. What was once handled through diplomatic channels now plays out on TikTok and X, where rage is cheaper to manufacture than dialogue. The deeper danger is not only technological but moral. Falsehoods like these do not just distort perception — they erode the social foundations of coexistence. Every faked atrocity, every doctored clip, chips away at the possibility of truth.
Moreover, when truth becomes negotiable, diplomacy collapses. Across South Asia, a disturbing pattern has emerged: from communal incitement in India to anti-migrant propaganda in Afghanistan and sectarian narratives in Pakistan, AI-generated fakery is being deployed to stoke division and distrust. Technology firms must bear some of the blame. Platforms that profit from engagement are still reluctant to police manipulated media at the speed required. Their detection tools lag behind the pace of innovation, and their labelling of synthetic content remains patchy at best. It is no longer acceptable for social networks to shrug and plead neutrality when manipulated footage threatens to spark cross-border crises.
Responsibility must be shared — by those who host the platforms, those who design the tools, and those who wield them. But the solution is not purely technical. Fact-checking and AI detection software will mean little unless governments and communities learn restraint. Leaders, clerics, and commentators must resist the temptation to weaponize outrage. Every time a politician or a media outlet amplifies an unverified claim, it multiplies the audience for those who thrive on chaos. In societies already scarred by mistrust, the careless repetition of falsehoods can be catastrophic. Pakistan’s government, for its part, has pledged to strengthen verification systems and develop rapid-response digital monitoring units. These are necessary steps, but they will only work if matched by transparency.
(The writer is a public health professional and journalist with expertise in health communication and a keen interest in national and international affairs. She can be reached at uzma@metro-morning.com)
