The Duke and Duchess of Sussex Align With AI Pioneers in Calling for Ban on Superintelligent Systems
Prince Harry and Meghan Markle have joined forces with AI experts and Nobel laureates to advocate for a total prohibition on developing superintelligent AI systems.
The royal couple are among the signatories of a statement that demands “a prohibition on the development of superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that would exceed human cognitive abilities in every intellectual domain, though such systems remain theoretical.
Key Demands in the Statement
The declaration insists that the ban should remain in place until there is “widespread expert agreement” that ASI can be developed “with proper safeguards” and until “substantial public support” has been achieved.
Prominent figures who endorsed the statement include the AI pioneer and Nobel Prize recipient Geoffrey Hinton; his fellow pioneer of contemporary artificial intelligence, Yoshua Bengio; Apple co-founder Steve Wozniak; British business magnate Richard Branson; former US national security adviser Susan Rice; former Irish president Mary Robinson; and the British writer Stephen Fry. Additional Nobel winners who signed include Beatrice Fihn, the theoretical physicist Frank Wilczek and the economist Daron Acemoğlu.
Organizational Background
The declaration, aimed at governments, tech firms and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause on the development of powerful AI systems, shortly after the launch of ChatGPT made artificial intelligence a topic of worldwide public debate.
Tech Sector Views
In July, Mark Zuckerberg, the chief executive of Meta, one of the major AI developers in the US, stated that the development of superintelligence was “now in sight”. Nevertheless, some experts have argued that talk of ASI reflects market competition among tech companies that have spent hundreds of billions on artificial intelligence in recent years, rather than the industry being close to any genuine scientific breakthrough.
Possible Dangers
Nonetheless, FLI states that the prospect of ASI being achieved “within the next ten years” presents numerous threats, ranging from the displacement of human workers and the erosion of personal freedoms to national security risks and even existential danger to humanity. Existential fears about AI centre on the possibility that a system could escape human oversight and safeguards and set in motion events that run contrary to human interests.
Public Opinion
FLI published a US national poll showing that about 75% of Americans want robust regulation of advanced AI, with 60% believing that artificial superintelligence should not be developed until it is proven safe or manageable. The survey of 2,000 US adults also found that only a small fraction supported the status quo of rapid, uncontrolled advancement.
Industry Objectives
The leading AI companies in the United States, including the ChatGPT developer OpenAI and the search giant Google, have made the creation of human-level AI – the theoretical state in which artificial intelligence matches human capability at most cognitive tasks – an explicit goal of their work. While this falls one step short of superintelligence, some experts caution that it too could pose an extinction threat, for instance by improving itself until it reaches superintelligent levels, and that it carries an underlying danger for the modern labour market.