
How AI And Nation-States Could Put Open-Source Software At Risk


NEW YORK, NEW YORK – JULY 19: An information screen informs travellers that train information is not running due to the global technical outage at Canal Street subway station on July 19, 2024 in New York City. Businesses and transport worldwide were affected by a global technology outage that was attributed to a software update issued by CrowdStrike, a cybersecurity firm whose software is used by many industries around the world. (Photo by Adam Gray/Getty Images)

Getty Images

Open-source software powers much of the modern internet – from cloud infrastructure to government services. As a digital public good it is essential to the internet's functioning, yet its foundations are increasingly fragile.

Despite its ubiquity, most projects are maintained by a small number of volunteers or underfunded developers. Tech giants are spending billions on artificial intelligence, but far less on securing the open-source tools that underpin their products.

As The Economist put it, “the software at the heart of the internet is maintained not by giant corporations or sprawling bureaucracies but by a handful of earnest volunteers toiling in obscurity.” The rise of autonomous AI agents could destabilize this ecosystem. Nation-states and cybercriminals may soon weaponize these tools to exploit the openness of open-source software.

How AI Supercharges Old Threats

AI can scan repositories, inject subtle backdoors, generate benign-looking contributions, or impersonate trusted developers. Stormy Peters, vice president for communities at GitHub, noted in ComputerWeekly that “China has the second-largest number of developers on GitHub by country.” That global scale matters because it amplifies the risk.

Ryan Ware, an open-source security expert, sees the threat already taking shape. “AI can help with some of the social engineering aspects,” he told me. “It’s already a proven benefit to help people in creating content for social engineering efforts.”

In other words, AI doesn’t need to write malicious code to be dangerous – it just needs to talk like a developer. The same dynamic is unfolding in developer communities. As the Wall Street Journal reported, activity on Stack Overflow has collapsed by more than 90% since the launch of ChatGPT.

That decline matters because, as tech writer Nick Hodges explained in InfoWorld, “Stack Overflow provides much of the knowledge that is embedded in AI coding tools, but the more developers rely on AI coding tools the less likely they will participate in Stack Overflow, the site that produces that knowledge.”

Dan Middleton, chair of the Confidential Computing Consortium’s technical advisory committee, says, “AI agents are already a routine part of both open-source and closed source software maintenance. Many developers rely on automated tools – linters, test runners, dependency updaters – to catch common errors. The transition to AI-assisted development is accelerating.” That acceleration makes it useful to examine how past breaches unfolded.

What Past Breaches Reveal About Today’s Risks

Past incidents show how a single weak link can ripple through entire systems – an effect AI could magnify. The XZ Utils backdoor offered a glimpse of how devastating a single compromise can be. Before that came the SolarWinds breach, a Russian operation that infiltrated trusted update channels across the U.S. government and industry.

Even widely used packages can rest on fragile foundations. The Node.js utility fast-glob, downloaded nearly 80 million times a week and embedded in more than 30 Department of Defense projects, is maintained by a single developer in Russia.

HONG KONG – 2019/04/05: In this photo illustration a Russian Federation flag is seen on an Android mobile device with a figure of hacker in the background. (Photo Illustration by Budrul Chukrut/SOPA Images/LightRocket via Getty Images)

LightRocket via Getty Images

While there’s no evidence of wrongdoing, the situation highlights the enormous trust placed in lone maintainers. In an article for The Register, Haden Smith of Hunted Labs noted, “Every piece of code written by Russians isn’t automatically suspect, but popular packages with no external oversight are ripe for the taking by state or state-backed actors.” The growing reliance on single maintainers shows why AI-driven threats could be so destabilizing.

AI Can Turn Small Threats Into Big Ones

With generative AI, such attacks could scale faster and operate with greater stealth. “A proliferation of independent agents can reduce the risk posed by any single compromised tool, but that also makes deep inspection of each tool more difficult,” Middleton said. “On the other hand, consolidating trust into a small set of well-vetted agents improves auditability, yet increases systemic risk.”

That systemic tension extends to volunteer capacity. Ware believes the deeper problem is capacity. “There aren’t enough resources to cover every open-source project with overworked maintainers that find their projects suddenly in use by industry,” he said.

Derek Zimmer, executive director of the Open Source Technology Improvement Fund, told me, “A majority of organizations don’t know nor fully understand how much open source they run, or their level of exposure to these kinds of threats.”

Over time, software grows ever more complex, as does its web of direct and indirect dependencies. “This interdependence gives rise to rich software that delivers fantastic features, but the hidden cost is the increased exposure to threats in the supply chain,” Zimmer said.
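The interdependence Zimmer describes compounds quickly: a project that lists only a couple of dependencies may transitively pull in many more. A toy sketch makes the point – the package names and edges below are invented for illustration, not drawn from any real project.

```python
# Hypothetical dependency graph: each package maps to its direct dependencies.
deps = {
    "my-app": ["web-framework", "http-client"],
    "web-framework": ["template-engine", "http-client"],
    "http-client": ["tls-lib", "url-parser"],
    "template-engine": ["sandbox-lib"],
    "tls-lib": [],
    "url-parser": [],
    "sandbox-lib": [],
}

def transitive(pkg, seen=None):
    """Return every package reachable from pkg, direct or indirect."""
    seen = set() if seen is None else seen
    for dep in deps.get(pkg, []):
        if dep not in seen:
            seen.add(dep)
            transitive(dep, seen)
    return seen

direct = set(deps["my-app"])
all_deps = transitive("my-app")
print(f"direct: {len(direct)}, total including indirect: {len(all_deps)}")
# my-app declares 2 dependencies but actually trusts 6 packages.
```

Two declared dependencies become six trusted packages – and any one of them, maintained by a single overworked volunteer, is part of the attack surface.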

Exhausted volunteers need more support to reduce risk. “An attack where a maintainer who no longer can or wants to contribute to a critical project can simply hand off the project to a malicious actor is a very real threat, and advocacy for mechanisms to reduce the risks are few and far between,” he warned.

For now, it’s often easy to spot where AI is making contributions, because it tends to add too many extra libraries. “AI conversations are still pretty easy to spot but may not be as easy to catch in a few years. I think we are still far away from the capability being there for AI, but I have no doubt that someone will attempt to do this at scale,” Zimmer cautioned.
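One crude review heuristic in the spirit of what Zimmer describes is simply to flag any change that introduces new third-party dependencies. The sketch below is a hypothetical illustration (the package names and file format are assumptions, not from the article); real projects would hook something like this into code review or CI.

```python
def new_dependencies(base_reqs, pr_reqs):
    """Return package names present in a change's requirements but not the base's."""
    def names(lines):
        # Keep only the package name portion of each "name==version" line.
        return {line.split("==")[0].strip().lower() for line in lines if line.strip()}
    return sorted(names(pr_reqs) - names(base_reqs))

# Hypothetical before/after requirements lists for a pull request.
base = ["requests==2.31.0", "flask==3.0.0"]
pr = ["requests==2.31.0", "flask==3.0.0", "left-pad-py==0.1", "obscure-crypto==9.9"]

flagged = new_dependencies(base, pr)
print(flagged)  # newly introduced packages that deserve a closer look
```

A heuristic this simple will miss plenty – as Zimmer notes, today's telltale signs may not survive a few more years of model progress – but it illustrates the kind of automated tripwire maintainers can put in place now.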

When AI Becomes A Spy Tool

With organizations generating more data than ever, the risk of AI-driven surveillance is only increasing. “If China develops the best AI models and DeepSeek on Alibaba Cloud becomes the dominant thing that everyone uses, it would have unfettered access to personal and business secrets,” Zimmer said.

As AI tools integrate into coding assistants and business platforms, unsuspecting users may expose sensitive data. Ware has considered detection tools to flag AI-generated contributions, but admitted that “It would be the beginning of a new cat-and-mouse game that would be ongoing for decades.”

That kind of endless cycle leaves project maintainers under immense pressure. “The open source culture needs to have a wake-up call,” Zimmer said. “Maintainers need to be notified that they are critical parts of the global supply chain.”

Nation-states are constantly searching for new ways to infiltrate their adversaries’ systems. Too many organizations take for granted the unpaid work of open source maintainers. Without greater support, these projects could one day be handed off, whether willingly or through burnout, to hostile actors or even AI agents weaponized by nation-states.

Source: https://www.forbes.com/sites/davidkirichenko/2025/09/18/how-ai-and-nation-states-could-put-open-source-software-at-risk/

