Religious Freedom in the Digital Age: Navigating AI, Surveillance, and Faith
Freedom of religion or belief (FoRB) has long been recognized as a cornerstone of human rights, enshrined in Article 18 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights. Traditionally, this right safeguarded the ability to hold, change, and manifest beliefs in private or public spaces. But in the 21st century, the terrain of religious freedom is shifting dramatically. Why? Because our lives—and increasingly, our spiritual practices—are mediated by digital technologies.
From livestreamed worship services to AI-powered prayer apps, technology offers unprecedented opportunities for religious expression. Yet, alongside these benefits come profound risks: algorithmic bias, invasive surveillance, and opaque moderation systems that can silence voices of faith. As artificial intelligence (AI) and surveillance technologies become embedded in everyday life, the question is no longer whether they affect religious freedom, but how—and what we can do about it. Here, I summarize my latest findings on the subject, a longer version of which can be found in my recent article in The Review of Faith & International Affairs.
The Digital Transformation of Faith
The COVID-19 pandemic accelerated a trend already underway: the migration of religious practices to online spaces. Livestreamed sermons, Zoom prayer groups, and virtual pilgrimages have democratized access to worship, especially for marginalized or geographically isolated communities. AI tools now personalize Bible study plans, assist with Qibla direction, and even power virtual clergy chatbots.
But digital platforms are not neutral. They are governed by algorithms that decide what content is visible, what gets flagged, and what disappears. These systems often lack cultural and religious nuance, leading to troubling consequences. For example, benign religious posts have been misclassified as extremist content, disproportionately affecting minority faiths. In short, the same technologies that enable digital worship can also undermine FoRB.
AI and Content Moderation: When Algorithms Misinterpret Faith
AI-driven content moderation is designed to combat hate speech and disinformation. Yet, these systems frequently misinterpret religious language, symbols, or practices—especially in multilingual contexts. Studies show that automated filters often flag legitimate religious expression as harmful, creating a chilling effect on online faith communities.
This problem is compounded by the opacity of algorithmic decision-making. Users rarely know why their content was removed or how to appeal. While regulations like the EU’s Digital Services Act aim to increase transparency, enforcement remains uneven. Without culturally sensitive AI design, religious voices risk being silenced in the very spaces meant to amplify them.
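The failure mode described above can be illustrated with a deliberately simplified sketch. This is not any platform's actual moderation pipeline (real systems use machine-learning classifiers, and the term list here is hypothetical), but it shows the core problem: a filter that matches terms without context cannot distinguish devotional language from incitement.

```python
# A deliberately naive keyword filter, illustrating why context-blind
# moderation misclassifies religious speech. The flagged-term list is
# hypothetical; real platforms use ML models, but the underlying failure
# mode -- matching vocabulary without understanding context -- is similar.

FLAGGED_TERMS = {"martyr", "jihad", "crusade", "sacrifice"}

def naive_filter(post: str) -> bool:
    """Return True if the post would be flagged for review."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

# A devotional post quoting liturgical language is flagged,
# even though nothing about it is harmful:
print(naive_filter("We honor their sacrifice in our liturgy."))  # True
```

Because the filter has no notion of intent or register, every community whose vocabulary overlaps the list pays the cost, which is exactly how benign religious posts end up misclassified as extremist content.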
Surveillance and Religious Minorities: A Global Concern
Surveillance technologies—facial recognition, biometric tracking, predictive policing—are increasingly deployed under the banner of national security. In authoritarian regimes, these tools have been weaponized to monitor and suppress religious minorities. The most widely cited example is the surveillance of Uyghur Muslims in China, where AI systems track movement, monitor worship, and collect biometric data.
But this is not just an authoritarian problem. Democratic societies have also used AI-driven surveillance in counter-terrorism programs, disproportionately targeting Muslim communities. Such practices erode trust, violate privacy, and create a climate of fear that discourages religious expression. The normalization of surveillance in religious contexts poses a direct threat to FoRB worldwide.
Algorithmic Bias: Invisible Discrimination
AI systems are only as fair as the data they are trained on—and religious diversity is often underrepresented. This leads to algorithmic bias: facial recognition tools misidentifying individuals wearing religious attire, recommendation systems amplifying stereotypes, or predictive policing disproportionately targeting certain faith groups.
Bias is not inevitable; it is a design choice. Inclusive datasets, diverse development teams, and ethical oversight can mitigate these risks. Yet, most AI systems are built without consultation from religious communities, perpetuating digital exclusion and reinforcing systemic discrimination.
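Why underrepresentation in training data produces this kind of bias can be shown with a toy sketch (the numbers are illustrative, not drawn from any real system): when one group makes up 95% of the data, a model can score impressive overall accuracy while failing every member of the minority group.

```python
# Toy illustration of how class imbalance hides group-level failure.
# A "model" that always predicts the majority outcome reaches 95%
# overall accuracy while being wrong for every minority-group sample.

samples = [("majority", 0)] * 95 + [("minority", 1)] * 5

def always_majority(_group: str) -> int:
    return 0  # predicts the majority outcome regardless of input

correct = sum(1 for g, label in samples if always_majority(g) == label)
overall_accuracy = correct / len(samples)

minority = [(g, l) for g, l in samples if g == "minority"]
minority_correct = sum(1 for g, l in minority if always_majority(g) == l)
minority_accuracy = minority_correct / len(minority)

print(overall_accuracy)   # 0.95
print(minority_accuracy)  # 0.0
```

An evaluation that reports only the headline accuracy would call this system a success, which is why disaggregated testing across religious and cultural groups, and datasets built with those groups' input, matter.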
Opportunities for Ethical Innovation
Despite these challenges, technology can also be a force for good. AI-powered tools can help combat hate speech, facilitate interfaith dialogue, and support religious education. But ethical innovation requires intentionality. Scholars like Ezieddin Elmahjub advocate for pluralistic AI ethics that incorporate religious moral frameworks—such as the Islamic concept of maṣlaḥa (public good)—alongside secular principles. This approach ensures that technology respects diverse values and promotes the common good.
Bridging the Gaps: What Needs to Change
Current scholarship reveals significant gaps in how we understand and address the intersection of religion and technology:
- Underrepresentation of Non-Western Perspectives: Most research reflects Euro-American frameworks, overlooking traditions like Hinduism, Buddhism, and Indigenous spiritualities.
- Lack of Empirical and Longitudinal Studies: We need data on how religious communities adapt to technological change over time.
- Limited Engagement with Religious Communities: AI systems are rarely designed with input from faith groups, leading to tools that misrepresent or marginalize them.
- Insufficient Interdisciplinary Collaboration: Legal scholars, technologists, and theologians often work in silos, hindering holistic solutions.
Charting a Way Forward
To protect religious freedom in the digital age, we need a multi-pronged approach:
- Inclusive Design: Involve religious communities in the development of AI systems to ensure cultural sensitivity.
- Regulatory Reform: Strengthen laws like the EU Digital Services Act and create global standards for AI governance that safeguard FoRB.
- Interdisciplinary Research: Foster collaboration between technologists, ethicists, and theologians to build frameworks that balance innovation with rights.
- Participatory Methods: Use community-based research to center the voices of those most affected by technological change.
- Global Dialogue: Move beyond Western-centric paradigms to include diverse ethical and religious perspectives in AI governance.
Conclusion: A Moral Imperative
Technology is not destiny; it is design. As AI and surveillance reshape public life, protecting religious freedom is both a legal obligation and a moral imperative. The future of FoRB depends in part on our collective ability to build digital systems that are just, inclusive, and respectful of human dignity. In an era where algorithms mediate belief, the question is clear: will technology serve faith—or silence it?