Over 200 leaders and Nobel Prize winners urge binding international limits on dangerous AI uses by 2026

2025/09/23 04:30

The campaign to push governments to agree on binding international limits to curtail the abuse of AI technology has been escalated to the UN level, as more than 200 leading politicians, scientists, and thought leaders, including 10 Nobel Prize winners, have issued a warning about the risks of the technology.

The statement, released Monday at the opening of the United Nations General Assembly’s High-Level Week, is being called the Global Call for AI Red Lines. It argues that AI’s “current trajectory presents unprecedented dangers” and demands that countries work toward an international agreement on clear, verifiable restrictions by the end of 2026.

Nobel Prize winners lead plea at the U.N.

The plea was unveiled by Nobel Peace Prize laureate and journalist Maria Ressa, who used her opening address to urge governments to “prevent universally unacceptable risks” and define what AI should never be allowed to do.

Signatories of the statement include Nobel Prize recipients in chemistry, economics, peace, and physics, alongside celebrated authors such as Stephen Fry and Yuval Noah Harari. Former Irish president Mary Robinson and former Colombian president Juan Manuel Santos, who is also a Nobel Peace Prize winner, lent their names as well.

Geoffrey Hinton and Yoshua Bengio, popularly known as “godfathers of AI” and winners of the Turing Award, which is widely considered the Nobel Prize of computer science, also added their signatures to the statement.

“This is a turning point,” said Harari. “Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”

Past efforts to raise the alarm about AI have often focused on voluntary commitments by companies and governments. In March 2023, more than 1,000 technology leaders, including Elon Musk, called for a pause on developing powerful AI systems. A few months later, AI executives such as OpenAI’s Sam Altman and Google DeepMind’s Demis Hassabis signed a brief statement equating the existential risks of AI to those of nuclear war and pandemics.

AI stokes fears of existential and societal risks

Just last week, AI was implicated in cases ranging from a teenager’s suicide to reports of its use in manipulating public debate.

The signatories of the call argue that these immediate risks may soon be eclipsed by larger threats. Commentators have warned that advanced AI systems could lead to mass unemployment, engineered pandemics, or systematic human-rights violations if left unchecked.

Some of the proposed red lines include banning lethal autonomous weapons, prohibiting self-replicating AI systems, and ensuring AI is never deployed in nuclear warfare.

“It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly,” said Ahmet Üzümcü, the former director general of the Organization for the Prohibition of Chemical Weapons, which won the 2013 Nobel Peace Prize under his leadership.

More than 60 civil society organizations have signed the letter, including the UK-based think tank Demos and the Beijing Institute of AI Safety and Governance. The effort is being coordinated by three nonprofits: the Center for Human-Compatible AI at the University of California, Berkeley; The Future Society; and the French Center for AI Safety.

Despite recent safety pledges from companies like OpenAI and Anthropic, which have agreed to government testing of models before release, research suggests that firms are fulfilling only about half of their commitments.

“We cannot afford to wait,” Ressa said. “We must act before AI advances beyond our ability to control it.”
