What We Can Learn From Religion About Values That Do Not Expire
We are living through one of the most disorienting periods in recorded history. The AI race is accelerating toward ever faster, ever more sophisticated automation and optimization. Agentic AI systems are moving from research labs into workplaces, healthcare, and governance. Geopolitical tensions are restructuring alliances faster than institutions can adapt. And planetary systems are signaling, with increasing urgency, that our current trajectory is unsustainable. Amid all this, it is dangerously easy to lose sight of a foundational question: What are we actually optimizing for?
Ramadan—observed this year by over 1.8 billion Muslims worldwide—offers a timely counterpoint. As a month of fasting, reflection, communal prayer, and charitable giving, it is a structured annual recalibration of values. Not as sentiment, but as practice. What strikes an outside observer is how those values echo across traditions—Buddhist non-attachment, Christian charity, Jewish Tikkun Olam, Hindu Seva, secular humanism's care for the commons. They also map with striking precision onto the emerging framework of ProSocial AI—a paradigm asking the same urgent question of technology that every wisdom tradition has asked of its followers: Are our actions genuinely oriented toward the collective good?
A note on perspective: What follows is written with respect and curiosity, from outside the tradition. Should anything misrepresent essential elements of Islamic practice, I welcome the correction.
What Is ProSocial AI?
ProSocial AI refers to systems intentionally designed to enhance human well-being, equity, and sustainability—rather than to maximize engagement or commercial returns. Tailored, trained, tested, and targeted to bring out the best in and for people and planet, it operates across individual, community, and planetary levels.
The stakes are not theoretical. The same AI systems accelerating drug discovery are also, in less considered deployments, amplifying polarization and eroding trust. The energy, land, and water required to operate them are further expanding humanity's footprint on an already strained planet. The architecture of AI is becoming the architecture of social reality, which makes its embedded values a question of major psychological consequence.
Conventional AI design is rooted in the homo economicus model—self-interested, utility-maximizing agents. But human psychology, and every major moral tradition, tells a different story. Altruism, reciprocity, empathy, and collective accountability are far more than soft ideals. They are the cognitive infrastructure that enabled our species to form the cooperative groups that gave rise to civilization. We have always been more prosocial than society's mainstream models assume.
Ramadan as a Living Prototype
Ramadan deserves serious attention from psychologists and behavioral scientists—and from anyone, of any background, thinking carefully about how humans sustain moral coherence under pressure.
The fast is a daily exercise in impulse regulation and delayed gratification, capacities consistently linked to prosocial decision-making and long-term well-being. But fasting is only the frame. Zakat (obligatory almsgiving) redistributes resources toward the marginalized—structurally, not sentimentally. Iftar—breaking the fast—is communal by design: a nightly act of shared presence in a world increasingly mediated by screens. And Muhasaba—daily self-examination—is functionally analogous to what cognitive-behavioral frameworks call metacognitive monitoring: the practice of observing one's own behavior against an internalized moral standard.
These are sophisticated behavioral technologies, refined over 14 centuries, for maintaining prosocial coherence under stress, scarcity, and uncertainty. Similar technologies exist across traditions—the Sabbath, Lent, Yom Kippur, Vipassana. What they share is the deliberate creation of space to ask why, before habit and urgency fill every hour. That is precisely the practice our current moment demands.
Two Frameworks, One Insight
The overlap between ProSocial AI's principles and the values expressed during Ramadan—and, more broadly, across the world's contemplative and ethical traditions—is not coincidental. All emerge from the same insight: Individual and collective well-being are structurally interdependent.
All center on equity ('Adl in Islam; fairness in AI design), accountability (Taqwa as moral consciousness; algorithmic transparency in governance), service (Khidma; AI aligned with human development), and restraint—the discipline of not taking, mirrored in the principle that not every technically possible AI capability should be deployed.
A 2025 study in Frontiers in Psychology found that AI systems perceived to demonstrate empathy measurably enhance prosocial behavior in the humans who use them. The values we embed in our tools loop back to shape us. We cannot expect artificial intelligence to reflect values that the people who design, deliver, and use it do not manifest. Technology is a mirror.
The Psychological Stakes
Geopolitical fracture, democratic backsliding, and the weaponization of information tilt social psychology toward tribalism and short-termism. Planetary deterioration amplifies scarcity narratives. Agentic AI deployment is outpacing institutions' capacity for moral deliberation. The deepest risk may not be that we build harmful AI, but that we gradually lose the moral vocabulary to recognize harm, or to ask what good looks like at all. Research on human-AI interaction suggests that whether these relationships are extractive or reciprocal has measurable consequences for human meaning and purpose, and that the more frequently we interact with artificial counterparts, the more our attitudes toward relationships in general will be shaped by those interactions. What that means for moral development is an open and urgent question.
The wisdom encoded in traditions like Ramadan—and in their counterparts across cultures—is a reminder that this vocabulary has always existed. It just needs to be remembered and re-practiced.
Consider these five principles—grounded in the convergence of ProSocial AI research and values that resonate across traditions—as a compass for this month and beyond.
I — Intention First. Before deploying any system or making any significant decision, ask: What is this actually for? The Islamic concept of Niyyah (intention) holds that moral quality begins before the act itself—a principle found in every serious ethical framework. Prosocial intention is not a footnote. It is the starting condition.
S — Solidarity Over Optimization. Optimize with communities, not at them. Ramadan's communal architecture, like the social ethics of most traditions, insists that the unit of value is the collective, not the individual metric. A recommendation system tuned purely to per-user engagement, for instance, can succeed on every individual metric while corroding the community it serves.
L — Long-term Lens. Fasting trains the capacity to endure short-term discomfort for long-term benefit. Agentic AI without a long-term lens produces short-term efficiency at the cost of social cohesion and planetary health. The more deliberate path is often the more human one.
A — Accountability as Practice. Muhasaba is not occasional—and neither is ethical governance. Build accountability into the rhythm of your work: honest feedback loops, regular audits, and the willingness to name what is not working.
M — Meaning as Infrastructure. Every contemplative tradition creates structured space to ask why. In a hybrid era where technology can automate nearly everything except purpose, shared meaning is not a luxury. It is the load-bearing wall.
We are not short of intelligence, artificial or otherwise. We are short of wisdom about what it is for. The values that enabled humans to survive and flourish in social groups—compassion, fairness, restraint, accountability, shared meaning—are not legacies of a simpler time. They are the design specifications for any future worth inhabiting.
Ramadan does not ask us to slow down technology. It asks us to remember what we are building it for. That is the question no algorithm has yet learned to ask on our behalf.
