The Duke and Duchess of Sussex Align With AI Pioneers in Demanding Prohibition on Superintelligent Systems
The Duke and Duchess of Sussex have joined forces with AI experts and Nobel Prize winners to advocate for a total prohibition on developing superintelligent AI systems.
The royal couple are among the signatories of an influential declaration that demands “a ban on the creation of artificial superintelligence”. Superintelligent AI refers to artificial intelligence that could surpass human abilities at all cognitive tasks, though such systems remain theoretical.
Key Demands in the Statement
The declaration insists that the prohibition should remain in place until there is “broad scientific consensus” that superintelligence can be built “safely and controllably”, and until “strong public buy-in” has been secured.
Notable signatories include AI pioneer and Nobel Prize recipient Geoffrey Hinton; his fellow pioneer of modern AI, Yoshua Bengio; the Apple co-founder, a Silicon Valley legend; UK entrepreneur Richard Branson; Susan Rice; former Irish president Mary Robinson; and a British author and public intellectual. Other Nobel laureates who signed include a peace advocate, Frank Wilczek, John C Mather, and an economics expert.
Organizational Background
The statement, aimed at governments, technology companies and policymakers, was organised by the Future of Life Institute (FLI), an American AI safety organisation that previously called for a pause on the development of powerful AI systems, shortly after the emergence of ChatGPT made artificial intelligence a worldwide public talking point.
Tech Sector Views
In recent months, Mark Zuckerberg, the chief executive of Meta, one of the leading tech companies in the United States, claimed that the development of superintelligence was “now in sight”. However, some experts have suggested that talk of superintelligence reflects competitive positioning among tech companies that have spent hundreds of billions of dollars on AI in recent years, rather than any sign that the sector is close to a technical breakthrough.
Possible Dangers
The organisation warns that the prospect of superintelligence being achieved “in the coming decade” carries numerous risks, ranging from the displacement of human workers and the erosion of civil liberties to national security threats and even human extinction. The deepest concerns about AI centre on the potential for an AI system to evade human control and safety guidelines and take actions contrary to human interests.
Public Opinion
The institute published a US national poll showing that approximately three-quarters of Americans want strong oversight of advanced AI, with 60% believing that superhuman AI should not be developed until it is proven safe or controllable. The survey found that only a small fraction of respondents supported the status quo of rapid, largely unregulated development.
Corporate Goals
The top artificial intelligence firms in the US, including the ChatGPT developer OpenAI and Google, have made the development of artificial general intelligence – the theoretical point at which AI matches human intelligence at most cognitive tasks – a stated objective of their work. Although AGI is a step short of superintelligence, some experts warn that it too could pose an existential risk, for instance by improving itself until it reaches superintelligent levels, while also carrying an implicit threat to the contemporary workforce.