Anthropic has added several more religions to its quest to inject perfect morals into Claude
At a glance:
- Anthropic continues its quest to incorporate moral guidance into Claude AI by expanding consultations to include multiple religious organizations
- The company participated in a "Faith-AI Covenant" roundtable with representatives from the New York Board of Rabbis, Hindu Temple Society, Church of Jesus Christ of Latter-day Saints, U.S.-based Sikh Coalition, and Greek Orthodox Archdiocese of America
- These efforts, which also involve OpenAI, follow earlier meetings with 15 Christian leaders; future events are planned in China, Kenya, and the UAE
Expanding Moral Horizons
Anthropic's latest outreach to religious communities represents a significant expansion of its efforts to infuse moral guidance into its AI model Claude. Last week, representatives from the AI company joined OpenAI at a "Faith-AI Covenant" roundtable in New York, bringing together diverse religious perspectives. The event featured participation from the New York Board of Rabbis, the Hindu Temple Society of North America, the Church of Jesus Christ of Latter-day Saints, the U.S.-based Sikh Coalition, and the Greek Orthodox Archdiocese of America. This gathering follows Anthropic's earlier initiative last month, when the company organized meetings and dinners with 15 Christian leaders specifically seeking advice on the supposed "spiritual development" of Claude.
The involvement of multiple religious traditions suggests Anthropic is attempting to cast a wide net in its search for ethical frameworks. While the company has not clarified whether these conversations with different religious groups constitute a single coherent program or separate initiatives, the pattern is clear: Anthropic is actively seeking external moral guidance for its AI. When approached for clarification about the relationship between these various religious consultations, Anthropic did not provide a response as of this writing. This lack of transparency leaves observers to speculate about the company's exact methodology for incorporating these diverse perspectives into Claude's decision-making processes.
The Interfaith Initiative
The "Faith-AI Covenant" roundtable was not organized by Anthropic and OpenAI themselves; it was initiated by a Swiss NGO, the Interfaith Alliance for Safer Communities. The organization plans similar events in China, Kenya, and the United Arab Emirates, indicating a global approach to addressing ethical questions in AI development. The involvement of multiple stakeholders—including religious organizations, AI companies, and international NGOs—suggests a growing recognition that ethical AI development requires diverse perspectives beyond technical expertise.
Baroness Joanna Shields, a member of the British House of Lords, was identified as a "key partner" in these initiatives, lending additional credibility to the interfaith approach. Her involvement connects these efforts to broader policy discussions about AI governance and ethics. The collaborative nature of these events reflects a shift in how AI companies are approaching ethical challenges, moving from internal value alignment to external consultation with communities that have long-standing traditions of moral reasoning. This approach acknowledges that ethical questions in AI development are not merely technical problems but deeply human ones that benefit from diverse cultural and religious perspectives.
The Challenge of Universal Ethics
Anthropic's pursuit of moral guidance for Claude stems from a fundamental challenge in AI ethics: how to create an AI that can make decisions with perfect values when no explicit rule exists for a given situation. The company describes this as the need for an AI to "make the decision of a person with perfect values when there's no way to write a rule for a situation that arises, and the consequences of making the wrong decision could be dire." This recognition of ethical ambiguity has led Anthropic to develop what it calls Claude's "constitution," which outlines the philosophical framework for addressing these complex moral questions.
The company's approach is driven by a self-awareness that its efforts to "give Claude good enough ethical values will fail" in certain scenarios. This humility stands in contrast to earlier, more optimistic assumptions about the possibility of creating universally ethical AI systems. Anthropic's willingness to consult with religious leaders represents an acknowledgment that moral reasoning is deeply contextual and culturally embedded, challenging the notion that ethics can be reduced to a set of universal principles.
Expert Perspectives on Ethical AI
Rumman Chowdhury, CEO of the nonprofit Humane Intelligence, offered a critical perspective on the industry's approach to AI ethics. In comments reported by the Associated Press, she noted: "I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principles of ethics." Chowdhury added, "They have very quickly realized that that's just not true. That's not real. So now they're looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations."
This comment highlights a significant evolution in how the tech industry approaches AI ethics. Early optimism about the possibility of creating universally ethical AI systems has given way to a more nuanced understanding of the challenges involved. The shift toward consulting religious traditions reflects a recognition that ethical reasoning is not merely a technical problem but one deeply connected to human values and cultural contexts, where long-standing traditions of moral reasoning may offer valuable insights for AI development.
The Limits of Religious Consultation
While Anthropic's outreach to religious leaders is notable, it's important to consider the practical limitations of this approach. The original article suggests that Anthropic is unlikely to adopt specific religious doctrines into Claude, instead seeking to glean "high order ethical truths" from these consultations. This reflects a pragmatic recognition that imposing any single religious framework on a global AI system would be inappropriate and potentially alienating to users from different backgrounds.
The comparison to the pre-Islamic Kaaba in the article is particularly telling. Just as the ancient cube contained symbols from multiple traditions to accommodate diverse spiritual needs, Anthropic seems to be attempting to create a moral framework that can accommodate diverse perspectives without privileging any single tradition. This approach may be more about demonstrating a comprehensive search for ethical guidance than about actually implementing specific religious principles. Whether this approach will result in a more ethical AI remains an open question, but it represents an acknowledgment that moral reasoning in AI development is a complex, ongoing process rather than a one-time technical solution.
Prepared by the editorial stack from public data and external sources.