Eric Schmidt argues against a ‘Manhattan Project for AGI’

Eric Schmidt advises against a U.S. push for superintelligent AI, warning of international risks.

Eric Schmidt, Alexandr Wang, and Dan Hendrycks caution the U.S. against a Manhattan Project-style AGI strategy, citing international risks. Their paper warns of potential retaliation from China and advocates defensive measures. Drawing an analogy to nuclear arms, they propose a deterrence regime called Mutual Assured AI Malfunction (MAIM) and urge limits on access to advanced AI chips. The paper presents a third way that balances caution and development in AI strategy.

Eric Schmidt, former CEO of Google, along with Scale AI CEO Alexandr Wang and Dan Hendrycks, Director of the Center for AI Safety, advise against pursuing a Manhattan Project-like effort to develop AI with superhuman capabilities. In their paper, titled 'Superintelligence Strategy,' they warn that such a unilateral push could provoke fierce backlash from nations like China, destabilizing global relations in ways reminiscent of historical nuclear stand-offs between great powers.

The authors wrote the strategy paper in response to a U.S. congressional proposal to fund an AGI project modeled on the 1940s atomic bomb effort. This comes as prominent figures, including Secretary of Energy Chris Wright and OpenAI's Greg Brockman, have expressed interest in such a monumental push. Schmidt and his co-authors argue that while technology races often accelerate scientific progress, in this case they risk provoking preemptive countermeasures and escalating international tensions.

The strategy paper highlights a significant divide within AI policy discourse: those dubbed 'doomers' advocate slowing AI progress for fear of catastrophic failure, while 'ostriches' favor rapidly advancing AI in the hope of a positive outcome. Schmidt and his co-authors propose a middle path that prioritizes defensive tactics and strategic deterrence over an aggressive race to control AI technologies.

They propose a deterrence framework they call Mutual Assured AI Malfunction (MAIM), under which states would proactively disable threatening AI projects before they can be weaponized. They also recommend expanding defensive cyber capabilities and restricting global access to advanced AI chips. These measures echo Cold War nuclear strategy, in which the prospect of mutually assured destruction deterred any single power from seeking unilateral dominance.

The authors stress that U.S. decisions shape the global AI development environment, and urge a more cautious approach than a head-to-head race with China. As America expands its AI ambitions, they argue, global stability and security, rather than singular dominance, should remain the priority. To that end, they advocate a multilateral approach that avoids escalation and maintains a balanced international playing field.

Sources: TechCrunch, Reuters, The Verge, Wired, The New York Times