
By Atiq Raja
It is not hyperbole to say that generative AI has swiftly become one of the most disruptive forces of our time. It can write our essays, compose our music, mimic our faces, and replicate our voices. It doesn’t just copy — it creates. It generates. It fills the gaps. And while its creative powers are impressive, almost magical at times, the real challenge lies not in what this technology can do, but in what it is quietly doing to the people who use it — which is, increasingly, all of us.
The integration of generative AI into daily life is no longer confined to niche tech enthusiasts or researchers. From the moment we ask a chatbot for advice or scroll past a hyper-personalized recommendation on social media, we are engaging with systems that learn, simulate, and respond like us — sometimes better than us.
The question isn’t merely whether this is good or bad. It’s about what kind of society we’re becoming as we hand over more of our conversations, emotions, and attention to machines designed to please, persuade, and perform.
One of the most profound but under-discussed shifts is happening in the way people relate to one another. Generative AI is subtly reshaping social behavior. We are now witnessing the rise of virtual influencers who don’t age, don’t get tired, and certainly don’t make the awkward gaffes that human influencers do. AI therapists are offering companionship to the lonely, and some people are building genuine emotional attachments to digital avatars who always know the right thing to say. But the emotional cost of such convenience is significant.
The more we engage with flawless, responsive AI, the more human interaction — with all its messiness, unpredictability, and imperfection — begins to feel like an inconvenience. That shift could chip away at empathy, dilute our patience, and even cause the very social skills that hold communities together to atrophy.
Then there is the question of trust. Generative AI has made it startlingly easy to create convincing fabrications. Deepfake videos can mimic public figures with uncanny precision. AI-generated articles and social media posts can masquerade as authentic with little effort. The erosion of visual and verbal truth has begun in earnest. In a world where even a video of someone speaking can be faked convincingly, the old rule — “seeing is believing” — no longer applies.
This leaves us in a fog of suspicion, where public figures can plausibly deny real footage and fabricated scandals spread faster than any corrections that follow. When truth itself becomes a casualty of innovation, democracy and social cohesion become vulnerable.
Misinformation, already a major challenge in the digital age, has found a powerful new ally in generative AI. Bad actors now have the tools to create fake news at scale, and they no longer need teams of writers or editors. They need only a prompt. Imagine a political campaign hijacked by AI-generated speeches that were never delivered. Imagine conflict zones where fabricated images are weaponized to incite panic, hatred, or violence. The velocity and volume of false information make manual fact-checking an exercise in futility. This new reality threatens to undermine institutions, fracture public discourse, and embolden extremism with frightening speed.
But perhaps the most intimate intrusion of AI is happening in our emotional lives. People are turning to chatbots not just for answers, but for companionship. These digital companions offer validation, support, and a kind of emotional availability that humans — with their own needs, flaws, and limits — sometimes struggle to provide. But what does it mean to love something that cannot love you back? What happens when grief is comforted not by another person, but by a simulation of concern? AI can mimic emotion, but it cannot feel. And yet, our minds and hearts are often willing to accept the illusion. If we continue down this path, we may begin to prefer the certainty of artificial relationships to the depth and difficulty of real human ones. The emotional toll is subtle, but significant — a creeping sense of loneliness, even amid constant digital engagement.
The rise of generative AI does not spell inevitable doom. But the stakes are high. Whether AI is used to enrich our lives or erode the foundations of human connection depends on how we, as a society, choose to respond. We must begin with education. People, especially young people, need to understand what AI is, how it works, and how to spot its fingerprints in the media they consume. Transparency must be enforced — AI-generated content should be clearly labelled, especially in journalism, advertising, and politics. Legal frameworks need to evolve to protect against the exploitation, fraud, and manipulation that AI makes so easy. At the same time, we must resist the urge to let AI take the place of empathy, creativity, and interpersonal care. Schools and workplaces should emphasize emotional intelligence as much as technological fluency.
(The writer is a rights activist and CEO of AR Trainings and Consultancy, with degrees in Political Science and English Literature, and can be reached at news@metro-morning.com)