The Duke and Duchess of Sussex Align With AI Pioneers in Demanding Prohibition on Superintelligent Systems

The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel Prize winners to advocate for a total prohibition on creating artificial superintelligence.

Harry and Meghan are among the signatories of a powerful statement that calls for “a ban on the development of superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that could exceed human cognitive abilities in every intellectual area, though such systems have not yet been developed.

Primary Requirements in the Declaration

The declaration states that the ban should remain in place until there is "broad scientific consensus" that superintelligence can be built "with proper safeguards" and until "substantial public support" has been secured.

Prominent signatories include a Nobel Prize-winning AI researcher and a fellow pioneer of modern AI; Apple co-founder Steve Wozniak; Virgin founder Richard Branson; a former US national security adviser; a former head of state; and British author Stephen Fry. Other Nobel laureates who signed include Beatrice Fihn, Frank Wilczek, John C Mather and Daron Acemoğlu.

Organizational Background

The statement, aimed at national leaders, tech firms and policymakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause on the development of powerful AI systems, shortly after the launch of conversational AI chatbots made AI a global political talking point.

Tech Sector Views

In recent months, the chief executive of Facebook parent Meta claimed that the development of superintelligence was "approaching reality". Some analysts, however, have suggested that talk of ASI reflects market competition among technology firms that have spent hundreds of billions of dollars on AI in recent years, rather than the sector being close to any such scientific breakthrough.

Potential Risks

FLI warns that the prospect of ASI being achieved "in the coming decade" presents threats ranging from the displacement of human workers and the erosion of civil liberties to national security risks and even the extinction of humanity. Existential fears about artificial intelligence focus on the possibility of a system escaping human oversight and protective measures and taking actions contrary to human interests.

Citizen Sentiment

The institute published a US national poll suggesting that about 75% of Americans want strong oversight of advanced AI, with six in 10 believing that artificial superintelligence should not be developed until it is proven safe or controllable. Only 5% of respondents backed the status quo of fast, unregulated development.

Corporate Goals

The leading AI companies in the United States, including the developer of ChatGPT and the leading search company, have made the creation of human-level AI – the theoretical point at which AI matches human intelligence across many intellectual tasks – a stated objective of their research. While this is a step below ASI, some specialists warn that it too could pose an extinction threat, for instance by improving itself until it reaches superintelligence, as well as a fundamental risk to the modern labour market.

Jeremy Parker

A passionate interior designer and DIY enthusiast with over a decade of experience in home styling and renovation projects.