'It flows through families, caste groups, and local WhatsApp communities where trust is personal.'
'Children are taught to defer to elders, so if a parent or uncle forwards something, you don't question it.'

India has long stood at the epicentre of the global fake news crisis, where health hoaxes, political rumours, and viral WhatsApp forwards have misled millions.
A groundbreaking new study published in the American Political Science Review may hold answers that educators and governments have long been seeking.
Co-authored by Sumitra Badrinathan (American University), Simon Chauchard (University Carlos III, Madrid), Florian Sichart (Princeton University) and Priyadarshi Amar (University Carlos III, Madrid), the research represents the world's largest media literacy experiment -- a randomised controlled trial involving more than 13,500 students across 583 villages in Bihar.
For the first time, it rigorously tests whether classroom-based media literacy programmes can actually equip young Indians to discern truth from misinformation in the country's uniquely social, WhatsApp-driven information landscape.
Dr Sumitra Badrinathan shared with US Special Correspondent Abhijit J Masih what the study uncovered, why India presents a special challenge for fighting misinformation, and what it will take to build a generation of more discerning digital citizens.
"The long-term goal is to make critical thinking a civic skill as routine in schools as math or reading," explains Dr Badrinathan.
India has often been described as 'ground zero' in the global fake news crisis. What drew you to study misinformation specifically in the Indian context?
There's already a ton of research on misinformation in Western democracies, but almost nothing in the Global South. And there's no reason to believe that what works in the US or Europe will automatically translate to India.
The information ecosystem here is totally different: we're a WhatsApp-heavy, community-driven society where much of the information sharing happens offline.
I wanted to study what misinformation looks like in those spaces, and how people navigate truth and falsehood when the internet isn't the main source of information.
Why do you think India's misinformation problem is particularly acute -- is it technology, politics, or social structures like WhatsApp family groups and local rumour networks?
It's all of the above, but what really stands out in India are the community norms that shape sharing.
Information flows through dense social networks like families, caste groups, and local WhatsApp communities, where trust is deeply personal.
Children are taught to defer to elders, so if a parent or uncle forwards something, you don't question it.
Those informal structures of both information and society make interpersonal trust a huge factor. That's what makes misinformation here so sticky: it's not just about content, it's about relationships.
Why did you choose Bihar and what was unique about working there for testing media literacy interventions?
Honestly, because you have to start somewhere! India is massive, and it's impossible to run such a large-scale study across many states at once.
I've been doing fieldwork in Bihar for many years, and I already had a team on the ground that I could trust.
It made sense logistically and substantively -- it's a young, populous state with low digital literacy and some vulnerability to misinformation.
The hope now is that, having built this model and tested it rigorously, we can scale it to other states across the country.

Since political parties themselves sometimes benefit from misinformation, did you encounter any pushback or scepticism when partnering with the Bihar government to bring this programme into schools?
Honestly, they were remarkably open. They saw this as a way to strengthen education, not as a political issue.
We were very careful to keep the programme nonpartisan -- everything focused on science, health, and reasoning skills. That probably helped avoid pushback.
Our coauthors and I personally handled the training and curriculum design, so we could ensure neutrality and build trust at every step.
The programme was implemented through Jeevika, a government-affiliated institution with strong local credibility, which made schools and parents much more comfortable. So rather than pushback, we got cooperation and even enthusiasm.
The study reveals that students shared less misinformation, trusted better sources, and relied more on science. Which of these outcomes surprised you most?
One of the coolest findings was about diffusion of effects.
After the intervention, we interviewed not just students but also a subset of parents. And we found that parents of students who took the course were themselves better at telling true from false. That means the kids were taking lessons home, discussing them with their families, and changing household information habits.
It's a really striking result -- past media literacy interventions in the Global South have struggled to show much effect, so seeing both children and their parents improve was a pleasant surprise.
It shows that educating one child can create ripple effects through an entire community.
You note that the effects lasted for four months. Why do you think this particular approach stuck?
I think the interactive format of classes really helped.
This wasn't rote memorisation, it was full of hands-on activities and role-plays that broke from traditional classroom styles.
In one session, for example, one student played a parent sharing misinformation and another had to correct them using what they'd learned. That kind of active learning sticks.
We know from educational psychology that repetition and engagement reinforce memory, and the students seemed to genuinely enjoy the process.
I think that's why the lessons stayed with them even months later.
Many have argued that adults are too 'set in their beliefs' to change. Does this study suggest that focusing on children may be the most effective long-term antidote to misinformation?
Children are definitely a promising starting point, but this doesn't mean adults can't learn.
It's just easier to reach kids -- they're already in classrooms, in a structured environment.
For adults, the challenge is finding the right delivery mechanism and ensuring participation. But research in other contexts, like civic education, shows that adults can change their beliefs and behaviours too.
So while schools are a natural entry point, I'd love to see similar experiments designed for adult learners -- community centres, workplaces, or even WhatsApp-based modules.
Policymakers often look for quick fixes like fact-checking or content moderation. How does your research shift that conversation toward education and institutions?
Most misinformation solutions focus on reactive tools -- fact-checking or corrections -- that deal with misinformation after it spreads.
Education is different because it's preventive. It builds broad-based critical thinking skills, not just reactions to one headline.
Once you teach people how to reason and verify, they can apply those skills to health misinformation, political rumours, or even deepfakes. And that's a big shift: until now, we didn't actually know whether media literacy programmes worked.
Our study is the first large-scale evidence that they can -- and that education might be one of the most sustainable solutions.
Was there any resistance from teachers, parents, or local leaders who might have been sceptical about the idea of 'teaching misinformation'?
Surprisingly, no. Jeevika has a stellar reputation in Bihar. That credibility made all the difference.
Parents told us in interviews that they trusted Jeevika, and that was a big reason they sent their children to the sessions.
Many even said they'd happily do it again. It shows that for interventions like this, trustworthy local institutions are key.
If the messenger is trusted, people are far more open to the message.
How did students themselves respond? Any anecdotes that stayed with you?
They were really engaged! Students loved debating and role-playing, it felt different from regular lessons.
I think this was especially true for girls -- their attendance was much higher.
How do you see the landscape of misinformation evolving in the next decade, especially with AI and deepfakes in local languages?
AI definitely makes the problem more dangerous, especially as deepfakes appear in local languages. But I'd also say, we haven't yet solved the non-AI misinformation problem!
There's still so much work to be done on everyday rumour networks, WhatsApp forwards, and interpersonal trust.
So yes, AI raises the stakes, but the core challenge remains the same: How do we strengthen people's ability to pause, verify, and think critically before they share?
How can educators and governments build on your findings to create scalable, long-term impact?
The exciting part is that this model is scalable. We designed the curriculum to be low-cost and teacher-led, so it can be integrated into existing civics or science lessons without major restructuring.
Governments could train teachers through their regular in-service programmes, or NGOs could adapt the model for adult learners.
Imagine every high school in India teaching a short module on 'how to tell what's true'.
If we can institutionalise that -- make it as routine as math or reading -- we'd build a generation that's naturally sceptical in the best sense. That's the long-term goal: make critical thinking a civic skill, not a luxury.
Feature Presentation: Rajesh Alva/Rediff