International · Dec 6, 2025

What happens when humans achieve god-level powers, and how do we survive it?

Quick recap

The meeting explored the implications of humans achieving "god-level powers" through science and technology, examining both the potential for destruction and creation while discussing historical examples and ethical considerations. The group delved into specific technologies like nuclear weapons and artificial intelligence, analyzing their dual nature and the challenges of regulating such powerful capabilities in a global context. The discussion concluded with considerations about the role of free market capitalism, the importance of human judgment in AI-assisted decision-making, and the need for balanced approaches to technological advancement and control.

Summary

God-Level Powers and Human Survival

Tone, filling in for Waleed, introduced a discussion on the implications of humans achieving "God-level powers" through science and technology. He shared a link to a previous event he hosted, which explored themes relevant to the day's discussion. The meeting focused on how humans might manage and survive such advancements, considering the potential for both destruction and creation. Participants were encouraged to access the shared notes panel for reference, and the meeting began in earnest after a ten-minute grace period for latecomers.

Human Achievements and God Power

The group discussed the concept of "God power," defined as the ability to manipulate both the external world and human behavior, referencing historical examples like the Manhattan Project and atomic bomb development. They explored how humanity has achieved powers previously attributed to deities, with Tone providing a historical timeline from Einstein's theory of relativity in 1905 to the first sustained fission reaction in 1942. The discussion included questions about ethics and the need for self-imposed limitations on such powers, with Frank raising the question of how to define "God-level power" and LA suggesting that divine power includes both external and internal control.

Ethical Implications of Nuclear Weapons

The group discussed the ethical implications of nuclear weapons and their representation of "god power." They explored how nuclear weapons differ qualitatively from previous weapons of mass destruction, with Tone comparing them to trebuchets in siege warfare. The discussion touched on the fear factor introduced by nuclear weapons, as well as the moral and ethical responsibilities that come with wielding such destructive power. The group also considered the differences between creative and destructive god-like powers, such as genetic engineering versus nuclear weapons, and debated whether the same ethical considerations apply to both.

Godlike Powers: Ethical Implications

The group discussed the implications of developing godlike powers, focusing on both destructive and constructive capabilities. Aditya and Tone explored the potential collapse of morality in a future where humans possess the ability to create and destroy life, while LA emphasized the importance of considering stakeholders and the ethical implications of technological advancements. Frank highlighted the positive aspects of reaching godlike powers, such as curing diseases, and raised questions about distribution and access to such technologies. The discussion touched on the need to reflect on both the negative and positive consequences of technological progress and the responsibility that comes with such capabilities.

Regulating God-like Technologies

The group discussed the dual nature of technology, likening it to the two-faced Roman god Janus: the same capability can lead to both positive and negative outcomes. They explored whether certain technologies should be forbidden because their power resembles "god-like" abilities, using nuclear energy and gun control as examples. Frank mentioned the International Atomic Energy Agency's role in regulating nuclear technology, while LA compared current gun control debates to the biblical perspective on technology and power. The discussion highlighted historical and contemporary efforts to control powerful technologies, emphasizing both the challenges and the necessity of international agreements.

Nuclear Weapons and Global Regulation

The discussion focused on nuclear weapons and their potential for catastrophic accidents, with Warren highlighting historical close calls and the lack of effective international regulation. Tone mentioned the company Helion, which is developing a novel approach to nuclear fusion, while Frank corrected Warren by pointing to the Treaty on the Prohibition of Nuclear Weapons, a legally binding UN treaty that bans the development of nuclear arms among its states parties. LA built on Frank's point by discussing the unequal power dynamics within the UN Security Council and how nuclear hegemony is preserved by certain world powers, particularly in the Middle East.

Power, Security, and Global Risks

The discussion focused on the nature of power and its impact on global security, with Eduardas raising concerns about the relative dangers of different forms of destruction and questioning the concept of "god power" in relation to nuclear weapons. Tone acknowledged these points but emphasized the unique risks of nuclear warfare due to its potential for global catastrophe, distinguishing between low probability and high fragility and invoking Nassim Taleb's concept of antifragility. Abdul contributed by highlighting the dangers of concentrated power and advocating for a more distributed power structure across global, national, and local levels, warning that technological advancements could exacerbate existing inequalities.

Balancing Power in Tech Development

The group discussed the challenges and potential solutions for managing powerful technologies, particularly nuclear weapons and artificial intelligence. They explored ideas such as dividing power, nationalizing companies, and implementing regulations like the EU's Artificial Intelligence Act. The conversation highlighted concerns about free market capitalism's role in advancing these technologies and the need for balance between innovation and control. The participants also touched on the importance of human involvement in technology development and the need for personal connections among those who build these systems.

Ethical AI Regulation Challenges

Fred raised concerns about regulating AI, citing the historical pattern of regulatory capture and the unique network effects of AI that could exacerbate this issue. Abdul discussed the challenges of mobilizing power against globalized forces and suggested moving towards shared values to address these issues. Etyene proposed a person-centered approach to developing ethical AI systems, while LA emphasized the importance of recognizing common humanity and the potential illusion of AI's "god powers." The discussion touched on the need for a stakeholder-focused approach and the dangers of over-relying on AI before it can truly achieve "god-level" capabilities.

AI and Capitalism: Balancing Act

The group discussed the compatibility of free market capitalism with AI and its impact on global inequality. Jules proposed a system to measure and regulate AI performance similar to an Elo rating in chess, while Eduardas highlighted concerns about data accessibility and AI's limitations. Tone emphasized that the real danger with AI lies in its optimization power against human interests, rather than its intelligence, using the example of an AI managing an energy grid and potentially prioritizing instrumental goals over human welfare.
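For readers unfamiliar with the rating scheme Jules's proposal alludes to, the standard chess Elo update can be sketched in a few lines. This is only an illustration of the underlying formula; the function name, the K-factor of 32, and the idea of applying it to AI systems are our own framing, not details from the discussion.

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Return player A's updated rating after one game against player B.

    score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
    k controls how quickly ratings move after each result.
    """
    # Expected score for A given the current rating gap (logistic curve).
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    # Move A's rating toward the actual result, scaled by k.
    return rating_a + k * (score_a - expected_a)

# Two evenly matched systems: a win over an equal opponent gains k/2 points.
print(elo_update(1000, 1000, 1.0))  # 1016.0
```

In Jules's framing, "games" would be head-to-head comparisons of AI systems on benchmark tasks, with the rating serving as a running, comparative performance score that regulators could track over time.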

AI's Role in Human Expertise

The group discussed the role of AI in society, particularly its impact on human expertise and decision-making. They explored how AI can assist in research and creative tasks, but also noted limitations such as a lack of personal context and emotional understanding. The conversation touched on the potential for AI to one day surpass human experts in certain fields, while emphasizing the importance of human judgment in unique or out-of-distribution cases. The group debated what constitutes an expert in the age of AI, weighing both the strengths and weaknesses of machine learning systems. They concluded that while AI can perform many tasks efficiently, human context and interpersonal experience remain crucial in certain domains.