The Duke and Duchess of Sussex Join AI Pioneers in Calling for Prohibition on Advanced AI
The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel Prize winners to push for a total prohibition on developing superintelligent AI systems.
Harry and Meghan are among the signatories of an influential declaration that demands “a prohibition on the development of superintelligence”. Superintelligent AI refers to systems that would surpass human abilities across all cognitive tasks, though the technology remains theoretical.
Key Demands in the Declaration
The declaration says the prohibition should remain in place until there is “widespread expert agreement” that superintelligence can be created “with proper safeguards” and until “substantial public support” has been achieved.
Notable signatories include AI pioneer and Nobel Prize recipient Geoffrey Hinton and his fellow pioneer of modern artificial intelligence, Yoshua Bengio, as well as Apple co-founder Steve Wozniak, British business magnate Richard Branson, former US national security adviser Susan Rice, former Irish president Mary Robinson and the British writer and broadcaster Stephen Fry. Other Nobel laureates who endorsed the declaration include Beatrice Fihn (peace), John C Mather (physics) and a winner of the economics prize.
Behind the Movement
The declaration, aimed at national leaders, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that in 2023 called for a pause on developing powerful AI, shortly after the launch of ChatGPT made artificial intelligence a topic of global political debate.
Industry Perspectives
In recent months, Mark Zuckerberg, the chief executive of Facebook parent Meta, one of the major AI developers in the United States, said that progress toward superintelligent AI was “now in sight”. However, some analysts have suggested that talk of superintelligence reflects market competition among technology firms spending hundreds of billions of dollars on AI this year alone, rather than the sector being close to any such scientific breakthrough.
Possible Dangers
FLI, however, warns that the prospect of artificial superintelligence being achieved “within the next ten years” carries numerous threats, ranging from the elimination of human jobs and the loss of civil liberties to national security risks and even human extinction. Existential fears about AI center on the possibility of a system evading human control and safeguards and setting in motion events contrary to human interests.
Public Opinion
The institute published an American survey showing that approximately three-quarters of US citizens want strong oversight of sophisticated artificial intelligence, with six out of 10 believing that superhuman AI should not be developed until it is proven safe or controllable. The poll of 2,000 US adults also found that only 5% supported the status quo of fast, unregulated development.
Corporate Goals
The top artificial intelligence firms in the US, including ChatGPT developer OpenAI and Google, have made the creation of human-level AI – the hypothetical state in which an AI matches human cognitive capability across many intellectual tasks – a stated objective of their research. Although this falls slightly short of superintelligence, some experts warn it could also carry an existential risk by, for example, being able to improve itself toward superintelligence, while also posing an underlying threat to the modern labour market.