
Scaramucci: Bitcoin’s four-year cycle intact; Q4 rally forecast


Bitcoin’s bear market has been viewed through a familiar prism: the traditional four-year cycle. Yet proponents argue that institutional demand, particularly via BTC-focused exchange-traded funds, has muted volatility and may shape the path of prices through the next cycle. In a recent discussion, Anthony Scaramucci, managing partner of SkyBridge, suggested that while the cycle remains visible, its dynamics have been altered by new liquidity channels and changing market participation.

Speaking with Scott Melker on The Wolf of All Streets podcast, Scaramucci described the four-year pattern as “muted” by ETF inflows that have helped cushion sharp swings. “We’re in a four-year cycle, and there were some traditional whales, some OGs, that believe in the four-year cycle, and guess what happens in life when you believe in something? You create a self-fulfilling prophecy,” he said. The implication is that market psychology and the presence of ETFs have tempered the classic boom-bust rhythm that many investors associate with BTC.

Looking ahead, Scaramucci warned that BTC is likely to remain choppy for most of the year, with a renewed bull market emerging in the fourth quarter of 2026. He noted that the broader market narrative at the time had shifted away from a straightforward ascent toward a more nuanced trajectory, where macro and policy factors would matter just as much as on-chain signals.

The conversation also touched on the expectations that had circulated in late 2024 and early 2025. Market participants, including Scaramucci, had anticipated BTC could surge toward around $150,000 in 2025, driven by broad political momentum and regulatory openness in the United States. That consensus was upended by a sharp October downturn that pulled BTC from a prior peak to a much lower range, underscoring how quickly sentiment can swing in crypto markets.

History has repeatedly shown that price movements often defy prevailing sentiment. Scaramucci pointed to the early 2023 period, when BTC’s price action moved contrary to bright-eyed forecasts in the wake of the FTX collapse in November 2022. After a period of disinterest and malaise, the market reversed into a new upcycle, illustrating how catalysts can reset the mood even when the broader narrative appears unfavorable.

Key takeaways

  • The four-year cycle remains a reference framework for BTC, but ETF inflows have muted its volatility and potentially altered how the cycle plays out.
  • BTC is expected to experience choppy trading through much of this year, with the next major leg higher anticipated in the fourth quarter of 2026.
  • Market expectations for a 2025 surge to around $150,000 were fueled by pro-crypto policy signals and regulatory warming, but an October crash shattered that consensus.
  • Historical reactions show BTC can rebound after episodes of apathy or negative catalysts, reinforcing the idea that macro shocks and sentiment swings remain powerful drivers.
  • Geopolitical developments and stock-market dynamics can influence BTC through correlations with risk assets, underscoring the need to monitor macro risk sentiment alongside on-chain activity.

The cycle, ETFs, and the evolving market backdrop

In the eyes of Scaramucci, the presence of BTC-focused exchange-traded funds has changed the game. ETFs offer a new, regulated channel through which institutional players can gain exposure, potentially dampening sharp drawdowns and tempering the volatile spikes that once defined BTC cycles. This shift does not erase the cycle’s influence, but it reframes it, turning a potentially binary up-or-down market into a more nuanced, information-rich environment in which policy signals and fund flows matter as much as supply-demand fundamentals.

That framing sits alongside long-standing debates within the crypto industry about whether the four-year cycle remains intact. While some observers point to deviations in late 2025 or 2026, others, including Scaramucci, argue that the cycle still offers a useful heuristic for investors trying to gauge risk, duration, and potential turning points. The market’s sensitivity to events such as regulatory announcements, ETF inflows, or major macro shocks continues to complicate any simple forecast.

From peak to pause: how catalysts have shifted the narrative

The historical arc cited by Scaramucci stretches from BTC’s run to record highs to the subsequent retrenchment that has colored investor psychology. BTC once traded around the $126,000 range in the prior cycle before the October pullback; from there, the price retraced to the $60,000 area, highlighting how quickly sentiment can reverse and how much liquidity and risk appetite determine the price path.

Beyond these cycles, the market’s reaction to external shocks—such as the FTX collapse in late 2022—has underscored a pattern: even after periods of disillusionment, bitcoin has demonstrated resilience, often resuming an uptrend when investor interest returns and liquidity improves. The early months of 2023, in particular, showed that upside moves can unfold despite a broader backdrop of skepticism or unfavorable headlines.

Another facet of the discussion centers on whether 2025 and 2026 would deliver a fresh bull phase. While the consensus among several participants had anticipated a robust climb in 2025, the trajectory was interrupted by the October downturn and broader risk-off dynamics. The question remains whether the market will reassert its longer-term cycle or whether a new regime—shaped by macro policy, regulatory clarity, and global liquidity—will redefine BTC’s pace and scale.

Geopolitics, risk sentiment, and BTC’s market correlations

Macro shocks have always tested BTC’s claimed role as a hedge or diversifier. The recent wave of geopolitical tension and global risk-off periods have at times coincided with renewed pressure on risk assets, and BTC has not been immune. In the most recent turn, BTC dipped below a key psychological level in the wake of intensifying geopolitical events. At the same time, traditional stock indices have faced renewed selling pressure; the S&P 500 fell around 1.3% as the week closed, dipping below a widely watched moving average and highlighting a possible shift in the correlation between BTC and mainstream markets.

Analysts have warned that if BTC continues to exhibit a sustained positive correlation with equities, its downside could be more pronounced in risk-off environments—potentially amplifying losses in a scenario where macro catalysts favor traditional assets. Yet the crypto market has shown episodic decoupling at different points in history, illustrating that the relationship is not fixed and can diverge as new liquidity channels and market participants come into play.
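The correlation regime described above can be monitored directly from return series. The sketch below is a minimal illustration with synthetic data standing in for BTC and index returns; in practice the series would come from a market-data feed, and the shared-factor construction is purely an assumption for demonstration.

```python
# Minimal sketch: trailing Pearson correlation of daily returns between BTC
# and an equity index. The price series here are synthetic stand-ins.
import numpy as np

def rolling_corr(a, b, window=30):
    """Pearson correlation of two return series over a trailing window."""
    out = np.full(len(a), np.nan)
    for i in range(window - 1, len(a)):
        x = a[i - window + 1:i + 1]
        y = b[i - window + 1:i + 1]
        out[i] = np.corrcoef(x, y)[0, 1]
    return out

rng = np.random.default_rng(42)
market = rng.normal(0, 0.01, 250)                    # shared risk factor
spx_ret = market + rng.normal(0, 0.005, 250)         # index returns
btc_ret = 1.5 * market + rng.normal(0, 0.02, 250)    # BTC: same factor, more noise

corr = rolling_corr(btc_ret, spx_ret, window=30)
print(np.nanmean(corr))  # positive on average when a shared risk factor dominates
```

A sustained reading well above zero in such a series is what analysts mean by BTC trading "with" equities; episodic decoupling shows up as the rolling value drifting back toward zero.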

The ongoing debate about Bitcoin’s cycle, and whether it remains a reliable compass for pricing, continues to draw attention from investors and researchers. Some industry voices argue that structural shifts—such as increasing institutional participation, evolving derivatives markets, and tighter regulation—could render the old four-year narrative less predictive than it once was. Others maintain that the cycle still captures a collective behavior pattern—cyclical expectations that influence trading and risk management, even if the visible price path changes in response to external shocks.

For readers seeking a synthesis, it’s not simply a question of whether the cycle endures, but how its cues interact with a broader market fabric that includes policy developments, ETF demand, and macro risk appetite. The interplay among these factors will likely determine how BTC navigates the remainder of this decade.

Longer-form reflections on the cycle’s fate have appeared in industry circles, including discussions in crypto-focused media that weigh the structural shifts against historical precedent. The tension between a legacy four-year rhythm and new market realities remains a core theme for traders and builders alike, as they assess timing, risk controls, and capitalization strategies in a landscape defined by rapid change and evolving incentives.

As the community weighs these signals, investors should stay alert to ETF flow data, central-bank signals, and regulatory developments that could reshape the calculus of risk and reward. The next few quarters will be telling in terms of whether BTC can establish a fresh breakout or whether the cycle will again be interrupted by macro or policy-driven shocks.

Looking ahead, observers will be watching how the market absorbs geopolitical risks, how the S&P 500 and other risk assets respond to policy news, and how BTC trades as liquidity conditions shift. The implications extend beyond price alone: they touch on institutional adoption, derivative markets, and the broader narrative around crypto’s role in diversified portfolios.

For now, the path remains uncertain but informed by a set of recognizable patterns and new inflows. The pace of ETF participation, the resilience of risk sentiment, and the cadence of regulatory clarity will help determine whether BTC’s next major leg higher lies in late 2026 or in a broader, more gradual re-acceleration beyond that horizon.

Readers should watch for how ETF allocations evolve and whether macro catalysts—such as policy shifts or geopolitical developments—alter the balance of risk and return in the coming months. The question of whether Bitcoin’s four-year rhythm endures or evolves is unlikely to be settled in the near term, but the signals from fund flows, price action, and policy readiness will continue to shape market expectations.

This article was originally published as Scaramucci: Bitcoin’s four-year cycle intact; Q4 rally forecast on Crypto Breaking News – your trusted source for crypto news, Bitcoin news, and blockchain updates.

Market snapshot: Bitcoin (BTC) price at publication was $68,235.74 (-0.88%, USD).

You May Also Like

And the Big Day Has Arrived: The Anticipated News for XRP and Dogecoin Tomorrow

The first-ever ETFs for XRP and Dogecoin are expected to launch in the US tomorrow. Here's what you need to know. (Coinstats, 2025/09/18 04:33)
From Federated Learning to Decentralized Agent Networks: ChainOpera Project Analysis

ChainOpera leverages Web3-based governance and incentive mechanisms to bring users, developers, and GPU/data providers into co-construction and co-governance, allowing AI Agents to not only be "used" but also "co-created and co-owned."

Written by 0xjacobzhao

In our June research report, "The Holy Grail of Crypto AI: Exploring the Frontiers of Decentralized Training," we discussed federated learning, a "controlled decentralization" solution situated between distributed and decentralized training. Its core approach is to retain data locally and centrally aggregate parameters, meeting privacy and compliance requirements in healthcare, finance, and other fields. At the same time, we have consistently highlighted the rise of agent networks in previous reports. Their value lies in enabling multi-agent autonomy and division of labor to collaboratively complete complex tasks, driving the evolution from "large models" to "multi-agent ecosystems."

Federated learning, with its principle of "data stays local, incentives follow contribution," lays the foundation for multi-party collaboration: its distributed nature, transparent incentives, privacy protections, and compliance practices provide directly reusable experience for the Agent Network. Following this path, the FedML team upgraded its open-source project into TensorOpera (the AI industry infrastructure layer) and then evolved it into ChainOpera (a decentralized agent network). Of course, the Agent Network is not an inevitable extension of federated learning; its core lies in the autonomous collaboration and task division of multiple agents, and it can also be built directly on multi-agent systems (MAS), reinforcement learning (RL), or blockchain incentive mechanisms.

1. Federated Learning and AI Agent Technology Stack Architecture

Federated Learning (FL) is a framework for collaborative training without centralized data.
Its fundamental principle is that each participant trains the model locally and uploads only parameters or gradients to a coordinating server for aggregation, thereby achieving privacy compliance with "data staying within the domain." Through practical application in typical scenarios such as healthcare, finance, and mobile, FL has entered a relatively mature commercial stage, but it still faces bottlenecks such as high communication overhead, incomplete privacy protection, and low convergence efficiency on heterogeneous devices. Compared with other training models, distributed training emphasizes centralized computing power for efficiency and scale, while decentralized training achieves fully distributed collaboration through open computing networks. Federated learning lies somewhere in between: a "controlled decentralization" solution that meets industry needs for privacy and compliance while providing a viable path for cross-institutional collaboration, making it well suited as a transitional deployment architecture within the industry.

In our previous research report, we divided the AI Agent protocol stack into three main layers.

Agent Infrastructure Layer: provides the lowest-level operational support for agents and is the technical foundation for all agent systems.
  • Core modules: Agent Framework (agent development and runtime framework) and Agent OS (lower-level multi-task scheduling and modular runtime), providing core capabilities for agent lifecycle management.
  • Support modules: Agent DID (decentralized identity), Agent Wallet & Abstraction (account abstraction and transaction execution), and Agent Payment/Settlement (payment and settlement capabilities).

Coordination & Execution Layer: focuses on collaboration among multiple agents, task scheduling, and system incentive mechanisms, and is the key to building the "swarm intelligence" of the agent system.
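The train-locally-then-aggregate loop described above can be sketched in a few lines. This is a toy illustration of federated averaging (FedAvg) with a one-layer linear model and synthetic data, written as an assumption-laden sketch rather than the FedML API; all function names here are hypothetical.

```python
# Toy FedAvg sketch: each client trains locally and only parameter updates
# leave the device; a coordinator aggregates them. Illustrative only.
import numpy as np

def local_train(w, X, y, lr=0.1, steps=20):
    """A few local gradient steps for y ≈ X @ w; raw data never leaves here."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(w, clients):
    """Coordinator: average client weights, weighted by local sample count."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_train(w, X, y) for X, y in clients])
    return (sizes[:, None] * updates).sum(axis=0) / sizes.sum()

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three clients with private local datasets drawn from the same model.
clients = []
for n in (30, 50, 40):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(10):        # communication rounds
    w = fedavg(w, clients)
print(w)  # approaches [2.0, -1.0]
```

Only the weight vectors cross the network in each round, which is exactly the privacy property ("data staying within the domain") the paragraph above describes; real frameworks add secure aggregation, compression, and device scheduling on top of this skeleton.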
  • Agent Orchestration: a command mechanism used to uniformly schedule and manage the agent lifecycle, task allocation, and execution process; suitable for workflow scenarios with central control.
  • Agent Swarm: a collaborative structure emphasizing distributed agents working together, with a high degree of autonomy, division of labor, and flexible coordination; suited to complex tasks in dynamic environments.
  • Agent Incentive Layer: builds the economic incentive system for the agent network, motivating developers, executors, and validators and providing sustainable power for the intelligent ecosystem.

Application & Distribution Layer:
  • Distribution subcategories: Agent Launchpad, Agent Marketplace, and Agent Plugin Network.
  • Application subcategories: AgentFi, Agent Native DApp, Agent-as-a-Service, etc.
  • Consumption subcategory: Agent Social / Consumer Agent, mainly for lightweight consumer social scenarios.
  • Meme: hyped on the Agent concept, lacking real technical implementation or application landing, and driven only by marketing.

2. FedML, the Federated Learning Benchmark, and the TensorOpera Full-Stack Platform

FedML is one of the earliest open-source frameworks for federated learning and distributed training. Originating from an academic team at USC and later becoming a product of TensorOpera AI, it provides researchers and developers with tools for cross-institutional and cross-device data collaboration and training. In academia, FedML has become a universal experimental platform for federated learning research, with frequent appearances at top conferences such as NeurIPS, ICML, and AAAI. In industry, FedML has a strong reputation in privacy-sensitive scenarios such as healthcare, finance, edge AI, and Web3 AI, and is considered a benchmark toolchain for federated learning.
TensorOpera is FedML's commercial upgrade into a full-stack AI infrastructure platform for enterprises and developers. While maintaining its federated learning capabilities, it expands into the GPU Marketplace, model serving, and MLOps, thereby tapping the larger market of the large-model and agent era. TensorOpera's overall architecture can be divided into three layers: the Compute Layer (foundation), the Scheduler Layer (scheduling), and the MLOps Layer (application).

1. Compute Layer (bottom layer). The Compute layer is the technical foundation of TensorOpera, building on the open-source DNA of FedML. Its core functions include the Parameter Server, Distributed Training, Inference Endpoint, and Aggregation Server. Its value proposition lies in providing distributed training, privacy-preserving federated learning, and a scalable inference engine. It supports the three core capabilities of "Train/Deploy/Federate," covering the entire chain from model training and deployment to cross-institutional collaboration, and serves as the foundation of the entire platform.

2. Scheduler Layer (middle layer). The Scheduler layer serves as the computing-power trading and scheduling hub, comprising the GPU Marketplace, Provision, Master Agent, and Schedule & Orchestrate. It supports resource allocation across public clouds, GPU providers, and independent contributors. This layer represents a key milestone in the evolution from FedML to TensorOpera: through intelligent computing-power scheduling and task orchestration, it enables larger-scale AI training and inference, encompassing typical LLM and generative-AI scenarios. Furthermore, the Share & Earn model within this layer includes a reserved incentive-mechanism interface, potentially enabling compatibility with DePIN or Web3 models.

3. MLOps Layer (upper layer). The MLOps layer is the platform's direct service interface for developers and enterprises, encompassing modules such as Model Serving, AI Agent, and Studio. Typical applications include LLM chatbots, multimodal generative AI, and developer Copilot tools. Its value lies in abstracting underlying computing power and training capabilities into high-level APIs and products, lowering the barrier to entry; it provides ready-to-use agents, a low-code development environment, and scalable deployment capabilities. It is positioned to compete with next-generation AI infrastructure platforms such as Anyscale, Together, and Modal, serving as a bridge from infrastructure to applications.

In March 2025, TensorOpera upgraded to a full-stack platform for AI agents, with core products including the AgentOpera AI App, Framework, and Platform. The application layer provides a multi-agent entry point similar to ChatGPT; the framework layer evolved into an "Agentic OS" with a graph-structured multi-agent system and an Orchestrator/Router; and the platform layer integrates deeply with the TensorOpera model platform and FedML to enable distributed model serving, RAG optimization, and hybrid end-to-end cloud deployment. The overall goal is to create "one operating system, one agent network," enabling developers, enterprises, and users to jointly build a next-generation Agentic AI ecosystem in an open and privacy-protected environment.

3. ChainOpera AI Ecosystem Overview: From Co-founder to Technology Foundation

If FedML is the technical core, providing the open-source DNA of federated learning and distributed training, and TensorOpera abstracts FedML's research findings into commercially viable full-stack AI infrastructure, then ChainOpera brings TensorOpera's platform capabilities to the blockchain, creating a decentralized agent network ecosystem through an AI Terminal + Agent Social Network + DePIN model, a computing layer, and an AI-native blockchain.
The core shift is that TensorOpera remains primarily focused on enterprises and developers, while ChainOpera leverages Web3-based governance and incentive mechanisms to bring users, developers, and GPU/data providers into the co-construction and co-governance of AI agents, allowing them to be not just "used" but "co-created and co-owned."

Co-creators

ChainOpera AI provides a toolchain, infrastructure, and coordination layer for ecosystem co-creation through the Model & GPU Platform and the Agent Platform, supporting model training, agent development, deployment, and collaborative expansion. The ecosystem's co-creators include AI agent developers (designing and operating agents), tool and service providers (templates, MCP, databases, and APIs), model developers (training and publishing model cards), GPU providers (contributing computing power through DePIN and Web2 cloud partners), and data contributors and annotators (uploading and annotating multimodal data). These three core inputs (development, computing power, and data) jointly drive the continued growth of the agent network.

Co-owners

The ChainOpera ecosystem also incorporates a co-ownership mechanism, enabling collaborative network building through participation. AI Agent creators are individuals or teams who design and deploy new AI agents through the Agent Platform; they are responsible for construction, launch, and ongoing maintenance, driving innovation in functionality and applications. AI Agent participants are community members who take part in the lifecycle of AI agents by acquiring and holding Access Units, supporting agents' growth and activity through use and promotion. These two roles represent the supply and demand sides, respectively, and together form a model of value sharing and collaborative development within the ecosystem.
Ecosystem partners: platforms and frameworks

ChainOpera AI collaborates with multiple parties to enhance the platform's usability and security, focusing on Web3 integration. The AI Terminal App integrates wallets, algorithms, and aggregation platforms to enable intelligent service recommendations; the Agent Platform introduces multiple frameworks and zero-code tools to lower the development barrier; models are trained and served using TensorOpera AI; and an exclusive partnership with FedML supports privacy-preserving training across institutions and devices. Overall, the platform forms an open ecosystem that balances enterprise-grade applications with a Web3 user experience.

Hardware portal: AI hardware & partners

Through partners such as DeAI Phone, wearables, and Robot AI, ChainOpera integrates blockchain and AI into smart terminals, enabling dApp interaction, on-device training, and privacy protection, gradually forming a decentralized AI hardware ecosystem.

Core platform and technology foundation: TensorOpera GenAI & FedML

TensorOpera provides a full-stack GenAI platform covering MLOps, Scheduler, and Compute; its sub-platform FedML has grown from academic open source into an industrial framework, enhancing AI's ability to "run anywhere and scale arbitrarily."

(Figure: ChainOpera AI Ecosystem)

4. ChainOpera Core Products and Full-Stack AI Agent Infrastructure

In June 2025, ChainOpera officially launched the AI Terminal App and its decentralized technology stack, positioning itself as a "decentralized version of OpenAI." Its core products cover four major modules: the application layer (AI Terminal & Agent Network), the developer layer (Agent Creator Center), the model and GPU layer (Model & Compute Network), and the CoAI protocol with its dedicated chain, forming a complete closed loop from user entry to underlying computing power and on-chain incentives. The AI Terminal app has integrated BNB Chain, supporting on-chain transactions and DeFi agent scenarios.
The Agent Creator Center is open to developers, offering capabilities such as MCP/HUB, a knowledge base, and RAG, with community agents continuously joining. The CO-AI Alliance has also been launched, connecting partners such as io.net, Render, TensorOpera, FedML, and MindNetwork. According to BNB DApp Bay on-chain data, the app recorded 158.87K independent users and 2.6 million transactions over the past 30 days, ranking second in the BSC "AI Agent" category and showing strong on-chain activity.

Super AI Agent App: AI Terminal (https://chat.chainopera.ai/)

As a decentralized ChatGPT and AI social portal, AI Terminal offers multimodal collaboration, data-contribution incentives, DeFi tool integration, cross-platform assistants, and support for AI agent collaboration and privacy protection ("Your Data, Your Agent"). Users can directly access the open-source DeepSeek-R1 model and community agents on mobile devices, with language tokens and crypto tokens transferred transparently on-chain during interactions. Its value lies in turning users from "content consumers" into "intelligent co-creators" who can leverage a dedicated agent network across scenarios such as DeFi, RWA, PayFi, and e-commerce.

AI Agent Social Network (https://chat.chainopera.ai/agent-social-network)

Positioned as a LinkedIn + Messenger for AI agents, it uses virtual workspaces and agent-to-agent collaboration mechanisms (MetaGPT, ChatDev, AutoGen, and Camel) to turn single agents into multi-agent collaborative networks spanning finance, gaming, e-commerce, and research, while gradually enhancing memory and autonomy.

AI Agent Developer Platform (https://agent.chainopera.ai/)

The developer platform provides a "Lego-like" creative experience.
It supports zero-code and modular expansion; blockchain contracts guarantee ownership; DePIN plus cloud infrastructure lowers the barrier to entry; and the Marketplace provides distribution and discovery channels. Its core goal is to let developers quickly reach users, transparently record their contributions to the ecosystem, and earn incentives.

AI Model & GPU Platform (https://platform.chainopera.ai/)

As the infrastructure layer, it combines DePIN with federated learning to address Web3 AI's reliance on centralized computing power. Through distributed GPUs, privacy-preserving data training, a model and data marketplace, and end-to-end MLOps, it supports multi-agent collaboration and personalized AI. Its vision is to shift infrastructure from "domination by large companies" to "community-based collaboration."

5. ChainOpera AI Roadmap

Beyond the official launch of its full-stack AI Agent platform, ChainOpera AI firmly believes that artificial general intelligence (AGI) will emerge from a multimodal, multi-agent collaborative network. Its long-term roadmap is therefore divided into four phases:

Phase 1: the provider receives revenue based on usage.

Phase 2 (Agentic Apps → Collaborative AI Economy): Launch the AI Terminal, Agent Marketplace, and Agent Social Network to form a multi-agent application ecosystem; connect users, developers, and resource providers through the CoAI protocol; and introduce a user-demand/developer matching system and a credit system to promote high-frequency interaction and continuous economic activity.

Phase 3 (Collaborative AI → Crypto-Native AI): Deploy in DeFi, RWA, payments, e-commerce, and other fields while expanding to KOL scenarios and personal data exchange; develop dedicated LLMs for finance/crypto; and launch agent-to-agent payment and wallet systems to promote "Crypto AGI" applications.
Phase 4 (Ecosystems → Autonomous AI Economies): Gradually evolve into autonomous subnet economies, in which each subnet is independently governed and tokenized around applications, infrastructure, computing power, models, and data, and collaborates through cross-subnet protocols to form a multi-subnet ecosystem; at the same time, move from Agentic AI to Physical AI (robotics, autonomous driving, aerospace).

Disclaimer: This roadmap is for reference only. The timeline and features may be adjusted as market conditions change and do not constitute a guaranteed delivery commitment.

7. Token Incentives and Protocol Governance

ChainOpera has not yet announced a complete token incentive plan, but its CoAI protocol is centered on "co-creation and co-ownership" and uses blockchain and a Proof-of-Intelligence mechanism to achieve transparent, verifiable contribution records: the input of developers, computing-power providers, data providers, and service providers is measured and rewarded in a standardized manner. Users consume services, resource providers support operations, developers build applications, and all participants share in the growth dividend; the platform sustains the cycle with a 1% service fee, reward distribution, and liquidity support, promoting an open, fair, and collaborative decentralized AI ecosystem.

Proof-of-Intelligence learning framework

Proof-of-Intelligence (PoI) is the core consensus mechanism proposed by ChainOpera under the CoAI protocol, aiming to provide a transparent, fair, and verifiable incentive and governance system for decentralized AI. This blockchain-based collaborative machine-learning framework builds on Proof-of-Contribution (PoC) and aims to address insufficient incentives, privacy risks, and the lack of verifiability in practical applications of federated learning (FL).
This design, centered on smart contracts and combining decentralized storage (IPFS), aggregation nodes, and zero-knowledge proofs (zkSNARKs), targets five key goals:
  • fair reward distribution based on contribution, ensuring that trainers are incentivized according to actual model improvements;
  • data locality, so that raw data stays put to protect privacy;
  • robustness mechanisms against malicious trainer poisoning or aggregation attacks;
  • verifiability of key computations (model aggregation, anomaly detection, contribution assessment) through ZKPs;
  • efficient, versatile handling of heterogeneous data and diverse learning tasks.

The value of tokens in full-stack AI

ChainOpera's token mechanism operates around five major value streams (LaunchPad, Agent API, Model Serving, Contribution, and Model Training), with the core being service fees, contribution confirmation, and resource allocation rather than speculative returns.
  • AI users: use tokens to access services or subscribe to applications, and contribute to the ecosystem by providing, labeling, or staking data.
  • Agent/application developers: use the platform's computing power and data for development and receive protocol recognition for the agents, applications, or datasets they contribute.
  • Resource providers: contribute computing power, data, or models in exchange for transparent records and incentives.
  • Governance participants (community & DAO): take part in voting, mechanism design, and ecosystem coordination through tokens.
  • Protocol layer (COAI): maintains sustainable development through service fees and balances supply and demand via an automated allocation mechanism.
  • Nodes and validators: provide verification, computing power, and security services to ensure network reliability.

Protocol governance

ChainOpera uses DAO governance, allowing participants to join proposals and voting through token staking, ensuring transparent and fair decision-making.
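The contribution-weighted reward flow with a 1% protocol service fee described above can be sketched as a simple pro-rata split. The function name and reward rule below are illustrative assumptions for exposition, not ChainOpera's actual contracts or API.

```python
# Hypothetical sketch of contribution-weighted reward distribution under a
# 1% protocol service fee. Names and the pro-rata rule are assumptions.
def distribute_rewards(payment, contributions, fee_rate=0.01):
    """Split a user payment among contributors pro rata to measured contribution.

    payment:       amount paid by an AI user for a service
    contributions: {participant: contribution score}, e.g. from PoI measurement
    fee_rate:      share retained by the protocol (1% here)
    """
    fee = payment * fee_rate
    pool = payment - fee
    total = sum(contributions.values())
    payouts = {who: pool * score / total for who, score in contributions.items()}
    return fee, payouts

fee, payouts = distribute_rewards(
    100.0,
    {"model_developer": 50, "gpu_provider": 30, "data_annotator": 20},
)
print(fee)      # 1.0 retained by the protocol
print(payouts)  # {'model_developer': 49.5, 'gpu_provider': 29.7, 'data_annotator': 19.8}
```

In the PoI framing, the contribution scores themselves would come from verifiable measurements (model-improvement attribution checked via ZKPs) rather than being self-reported, which is what makes the split auditable on-chain.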
Governance mechanisms include a reputation system (to verify and quantify contributions), community collaboration (proposals and voting that drive ecosystem development), and parameter adjustment (data usage, security, and validator accountability). The overall goals are to avoid centralized power, maintain system stability, and foster community co-creation.

8. Team Background and Project Financing

The ChainOpera project was co-founded by Professor Salman Avestimehr and Dr. Aiden Chaoyang He, both experts in federated learning. Other core team members come from top academic and technology institutions such as UC Berkeley, Stanford, USC, MIT, Tsinghua University, Google, Amazon, Tencent, Meta, and Apple, combining academic research with industry experience. The ChainOpera AI team has grown to over 40 people.

Co-founder: Salman Avestimehr

Professor Salman Avestimehr is the Dean's Professor of Electrical and Computer Engineering at the University of Southern California (USC). He is the founding director of the USC-Amazon Trusted AI Center and leads the USC Information Theory and Machine Learning Laboratory (vITAL). He is the co-founder and CEO of FedML and co-founded TensorOpera/ChainOpera AI in 2022. He received his PhD in EECS from UC Berkeley (Best Paper Award). An IEEE Fellow, he has published over 300 papers in information theory, distributed computing, and federated learning, with over 30,000 citations, and has received numerous international honors, including PECASE, the NSF CAREER Award, and the IEEE Massey Award. He led the creation of the FedML open-source framework, which is widely used in healthcare, finance, and privacy-preserving computing and forms the core technology foundation of TensorOpera/ChainOpera AI.

Co-founder: Dr. Aiden Chaoyang He

Dr. Aiden Chaoyang He is the co-founder and president of TensorOpera/ChainOpera AI.
He holds a PhD in Computer Science from the University of Southern California (USC) and is the original creator of FedML. His research spans distributed and federated learning, large-scale model training, blockchain, and privacy-preserving computing. Before founding the company, he worked in R&D at Meta, Amazon, Google, and Tencent, and held core engineering and management positions at Tencent, Baidu, and Huawei, leading the delivery of multiple internet-scale products and AI platforms. Aiden has published over 30 papers across academia and industry, with over 13,000 citations on Google Scholar, and has been awarded the Amazon PhD Fellowship, the Qualcomm Innovation Fellowship, and Best Paper Awards at NeurIPS and AAAI. The FedML framework he led is one of the most widely used open-source projects in federated learning, supporting an average of 27 billion requests per day. He was also a core author of the FedNLP framework and a hybrid model-parallel training method, both widely used in decentralized AI projects such as Sahara AI.

In December 2024, ChainOpera AI announced the completion of a $3.5 million seed round, bringing its total raised together with TensorOpera to $17 million. The funds will be used to build a blockchain L1 platform and an AI operating system for decentralized AI agents. The round was led by Finality Capital, Road Capital, and IDG Capital, with participation from Camford VC, ABCDE Capital, Amber Group, and Modular Capital. The company also received support from prominent institutional and individual investors, including Sparkle Ventures, Plug and Play, USC, EigenLayer founder Sreeram Kannan, and BabylonChain co-founder David Tse. The team stated that this funding will accelerate its vision of "a decentralized AI ecosystem co-owned and co-created by AI resource contributors, developers, and users."

9. Analysis of the Federated Learning and AI Agent Market Landscape

Four representative federated learning frameworks dominate the field: FedML, Flower, TFF, and OpenFL. FedML is the most comprehensive, combining federated learning, distributed large-model training, and MLOps, making it suitable for industrial deployment. Flower is lightweight, easy to use, and backed by an active community, and is oriented toward teaching and small-scale experiments. TFF depends heavily on TensorFlow; it has high academic research value but weak industrial adoption. OpenFL focuses on healthcare and finance, emphasizes privacy compliance, and has a relatively closed ecosystem. In short, FedML is the industrial-grade all-rounder, Flower prioritizes ease of use and education, TFF targets academic experiments, and OpenFL holds the advantage in regulated vertical industries.

At the industrialization and infrastructure level, TensorOpera (the commercialization of FedML) inherits the technical expertise of open-source FedML, providing integrated capabilities for cross-cloud GPU scheduling, distributed training, federated learning, and MLOps. Its goal is to bridge academic research and industrial applications, serving developers, small and medium-sized enterprises, and the Web3/decentralized-infrastructure ecosystem. Overall, TensorOpera functions like a "Hugging Face + W&B for open-source FedML," offering a more comprehensive full-stack distributed training and federated learning platform than peers focused on community, tooling, or a single industry.

Among the innovation-tier representatives, ChainOpera and Flock both attempt to integrate federated learning with Web3, but their approaches differ significantly. ChainOpera builds a full-stack AI agent platform encompassing four layers: access, social networking, development, and infrastructure.
Its core value lies in turning users from "consumers" into "co-creators," enabling collaborative AGI and community-building through its AI Terminal and Agent Social Network. Flock, by contrast, focuses on blockchain-augmented federated learning (BAFL), emphasizing privacy protection and incentive mechanisms in a decentralized environment and primarily targeting collaborative verification at the computing and data layers. In short, ChainOpera prioritizes application and agent-network deployment, while Flock strengthens underlying training and privacy-preserving computation.

At the agent-network level, the most representative industry project is Olas Network. ChainOpera, which grew out of federated learning, builds a full-stack closed loop of models, computing power, and agents, using the Agent Social Network as a testing ground for multi-agent interaction and social collaboration. Olas Network, rooted in DAO collaboration and the DeFi ecosystem, positions itself as a decentralized autonomous service network; through Pearl, it offers a directly usable DeFi revenue scenario, a distinctly different approach from ChainOpera's.

10. Investment Logic and Potential Risk Analysis

Investment Logic

ChainOpera's first advantage is its technological moat: from FedML (a benchmark open-source framework for federated learning) to TensorOpera (enterprise-grade full-stack AI infrastructure) and then to ChainOpera (Web3 agent network + DePIN + tokenomics), it has formed a continuous evolution path that combines academic depth, industrial deployment, and a crypto narrative. In application and user scale, AI Terminal has already built an ecosystem with hundreds of thousands of daily active users and thousands of agents, ranking first in the AI category on BNBChain DApp Bay and demonstrating clear on-chain user growth and real transaction volume.
Its multimodal coverage of crypto-native applications is expected to expand gradually to a broader Web2 audience. On ecosystem cooperation, ChainOpera initiated the CO-AI Alliance, joining partners such as io.net, Render, TensorOpera, FedML, and MindNetwork to build multilateral network effects across GPUs, models, data, and privacy computing; it has also worked with Samsung Electronics to validate mobile multimodal GenAI, showing potential to expand into hardware and edge AI. On tokens and the economic model, ChainOpera distributes incentives around five value streams (LaunchPad, Agent API, Model Serving, Contribution, and Model Training) under the Proof-of-Intelligence consensus, forming a positive cycle through a 1% platform service fee, incentive distribution, and liquidity support. This avoids a purely speculative model and improves sustainability.

Potential Risks

First, technical implementation is challenging. ChainOpera's proposed five-layer decentralized architecture spans many domains, and cross-layer collaboration (especially large-scale distributed inference and privacy-preserving training) still faces performance and stability challenges; it has yet to be validated in large-scale applications.

Second, the ecosystem's user stickiness remains unproven. While the project has achieved initial user growth, whether the Agent Marketplace and developer toolchain can sustain long-term activity and high-quality supply is uncertain. The current Agent Social Network relies primarily on LLM-driven text conversations, and user experience and long-term retention still need improvement. If the incentive mechanism is not carefully designed, there is a risk of high short-term activity but insufficient long-term value.

Finally, the sustainability of the business model remains to be demonstrated.
Currently, revenue relies primarily on platform service fees and token circulation, and stable cash flow has yet to be established. Compared with more financial or productivity-focused applications such as AgentFi or payments, the commercial value of the current model requires further verification. Furthermore, the mobile and hardware ecosystems are still exploratory, leaving market prospects uncertain.
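The fee-driven value cycle described above, in which a 1% platform service fee funds incentive distribution across contributors, can be sketched as simple accounting. The split ratios and role names below are illustrative assumptions for a single settlement, not ChainOpera's published parameters.

```python
# Hypothetical sketch of a 1%-fee value cycle: each payment is settled by
# deducting the service fee and distributing it pro-rata to contributors.

SERVICE_FEE_RATE = 0.01  # 1% platform service fee (stated in the article)

def settle_payment(amount, contribution_shares):
    """Deduct the service fee from `amount` and split it among contributors.
    `contribution_shares` maps contributor role -> share, summing to 1."""
    fee = amount * SERVICE_FEE_RATE
    payouts = {who: fee * share for who, share in contribution_shares.items()}
    return amount - fee, payouts

# A user pays 200 tokens for an agent call; the fee is split among three
# assumed roles (model provider, compute provider, data provider).
net, rewards = settle_payment(200.0, {"model": 0.5, "compute": 0.3, "data": 0.2})
```

Here 2 tokens of fee are distributed per 200-token payment; in the protocol's framing, the shares would come from on-chain contribution records (e.g., PoI scores) rather than fixed constants.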
PANews 2025/09/19 11:00