
Religious leaders want ethical boundaries amid fears of AI racing towards sentience

by Team@Lotus
Image: Desdemona the robot (via Twitter)

Early this year, a Google engineer went public with the claim that the artificial intelligence (AI) chatbot LaMDA was sentient. “I want everyone to understand that I am, in fact, a person,” wrote LaMDA (Language Model for Dialogue Applications) in an ‘interview’ conducted by Blake Lemoine, adding: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”

Dismissing his claim, Google put Lemoine on administrative leave. Experts have since urged both skepticism and open-mindedness while encouraging a rethinking of what it means to be ‘sentient’. “Robots can’t think or feel, despite what the researchers who build them want to believe,” claimed a New York Times essay. Reacting to the news, and to fast-emerging AI that raises the possibility of artificial sentience, a multi-faith coalition of Christian, Hindu, Buddhist, and Jewish leaders has warned that the world urgently needs to address the surrounding ethical and moral issues.

In a joint statement, they said that AI should be used responsibly and that religions should be given an active role in developing appropriate and adequate moral and ethical guardrails around it before it changes our way of life.

Father Thomas W. Blake, an Episcopal priest in Connecticut; Father Stephen R. Karcher, a Greek Orthodox clergyman in Nevada; Hindu statesman Rajan Zed; Buddhist minister Rev. Matthew T. Fisher; Rabbi ElizaBeth Webb Beyer of California-Nevada; and United Methodist Pastor Dawn M. Blundell emphasized that, since sentient machines are no longer unthinkable and there are claims that sentient AI systems could be developed in the future, technology seems to be venturing into God’s arena, which could create serious spiritual implications. Tech should not be in the business of simply discarding overnight thousands of years of wisdom from sacred texts.

Rajan Zed, president of the Universal Society of Hinduism, pointed out in the group’s press release that if machines become completely sentient and conscious, it will be a serious theological issue. An urgent and honest global conversation is needed, with religions as a major partner, before self-aware machines become game changers and reshape humanity. Technology can be both a blessing and a burden, so we need to decide the right course of action before it is too late.

Religious leaders, like this group from Nevada, are worried that technology is venturing into God’s arena.


Blake, Karcher, Zed, Fisher, Beyer, and Blundell noted that some AI systems might, in the future, become contenders for moral status, binding us to treat them appropriately and justly, just as we are expected to treat one another. AI can be enormously powerful, and it cannot be left to the short-sighted interests of a few people focused solely on maximizing profit and power and indulging in economic predation. We should make efforts to save the world from dystopian consequences and Faustian outcomes; we do not want AI to become another moral morass, or the last invention of human beings.

The future of mankind and human survival seem to be at stake as ever-evolving AI generates deep societal repercussions. Heartless algorithms and mercantile greed should not be allowed to rule humanity. The public good needs to be protected, and the freedom and dignity of human beings ensured, Zed argued.

He added that while AI is welcome if it makes our lives better, it is time to set some moral and ethical boundaries. Religion should be part of the conversation, however complicated that might be, as it would help tech decide the right course of action and stay moral and ethical. Both tech and religion need to learn to trust each other and collaborate. Morality should not be pushed to the back seat in AI development, and meaningful moral and ethical guardrails should not be treated as hurdles or limitations on profit-making. Moreover, moral and ethical systems should be an integral part of all AI development, not just window-dressing.
