Mind Network and BytePlus have signed an agreement to work together to build AI systems with built-in privacy and trust. As reported by Cointelegraph, their partnership brings cryptographic safeguards into practical use, aiming to make privacy and verifiability a standard part of AI infrastructure.
The effort matters because AI agents can schedule meetings, trade stocks, or write code, and many have access to sensitive information, including personal data and private keys. As these tools become more common, the big questions are whether we can trust what they produce, whether the systems behind them are secure, and whether we can verify both without exposing user data.
Mind Network, working with BytePlus (the enterprise tech arm of ByteDance), is trying to answer those questions by using a technology called Fully Homomorphic Encryption (FHE). With FHE, cloud servers can process data without ever seeing it unencrypted. This approach avoids the need for trusted environments or complex pre-processing steps that might leak metadata.
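To see the core idea, here is a toy sketch in TypeScript of an additively homomorphic scheme: the "server" adds two ciphertexts without ever decrypting them. This is an illustration only, not real FHE (production schemes such as CKKS or TFHE are lattice-based) and not remotely secure.

```typescript
// Toy additively homomorphic scheme. Illustration only: NOT real FHE
// and NOT secure. It only shows the principle of computing on ciphertexts.
const MOD = 1_000_003n;

// "Encrypt" by adding a secret key modulo MOD.
const encrypt = (m: bigint, key: bigint): bigint => (m + key) % MOD;

// The server can add two ciphertexts without seeing either plaintext.
const addCiphertexts = (c1: bigint, c2: bigint): bigint => (c1 + c2) % MOD;

// Decrypting the sum of two ciphertexts means removing both keys (2 * key).
const decryptSum = (c: bigint, key: bigint): bigint =>
  (((c - 2n * key) % MOD) + MOD) % MOD;

const key = 424_242n;
const cA = encrypt(17n, key);
const cB = encrypt(25n, key);
const cSum = addCiphertexts(cA, cB); // computed "blind" by the server
console.log(decryptSum(cSum, key)); // 42n
```

Real FHE generalises this to arbitrary computation (not just addition), which is what lets a cloud server run full validation logic on data it can never read.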
As part of the rollout, BytePlus is integrating Mind Network’s “Secure AgenticWorld” framework into its platform. The framework includes Model Context Protocol (MCP) tools, which give AI agents a standard way to discover and call external tools and data sources.
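For context, a minimal MCP tool server looks roughly like this, using the public @modelcontextprotocol/sdk for TypeScript. This is generic MCP usage for illustration, not Mind Network’s specific plugin.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Declare a server that agents can connect to.
const server = new McpServer({ name: "demo", version: "1.0.0" });

// Register a tool; connected agents can discover and call it by name.
server.tool(
  "add",
  { a: z.number(), b: z.number() },
  async ({ a, b }) => ({
    content: [{ type: "text", text: String(a + b) }],
  })
);

// Serve the tool over stdio so a host application can attach the agent.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Mind Network’s MCP tools follow this pattern but expose validation operations rather than a toy calculator.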
In theory, developers copy a validation node, add their model-specific logic, and run it using BytePlus’ serverless tools. The node performs all checks inside an encrypted circuit, re-encrypts the result, and sends it to a smart contract that logs the outcome. Other encrypted nodes vote on the result, making it nearly impossible for users or third parties to cheat or leak data. The process scales easily: organisations can run their own nodes or, for added trust, rely on community-run nodes.
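That description suggests a flow along these lines. Everything in this sketch (the function names, the contract interface, the stubbed crypto) is hypothetical; the real APIs live in Mind Network’s SDK and BytePlus’ serverless tooling.

```typescript
// Hypothetical sketch of a validation node's lifecycle, with crypto stubbed out.
type Ciphertext = Uint8Array;

// Stub: run model-specific checks entirely in the encrypted domain,
// so the node never sees the plaintext it is validating.
async function validateEncrypted(input: Ciphertext): Promise<Ciphertext> {
  return input; // placeholder for real FHE circuit evaluation
}

// Stub: re-encrypt the verdict for the on-chain consumer.
async function reencrypt(c: Ciphertext, publicKey: string): Promise<Ciphertext> {
  return c; // placeholder
}

// Stub: the smart contract that logs outcomes and collects node votes.
class ValidationContract {
  async submitResult(sealed: Ciphertext): Promise<void> {
    console.log(`logged ${sealed.length}-byte sealed verdict on-chain`);
  }
}

async function runValidationNode(input: Ciphertext): Promise<void> {
  const verdict = await validateEncrypted(input);      // checks stay encrypted
  const sealed = await reencrypt(verdict, "0xCONTRACT_PUBKEY");
  await new ValidationContract().submitResult(sealed); // other nodes then vote
}

runValidationNode(new Uint8Array(32)).catch(console.error);
```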
For everyday users, the process is simple. They connect their wallet, sign a small challenge, and start the validation. No private key leaves the user’s wallet, and no plaintext data is stored or exposed.
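A rough sketch of that user-side flow in TypeScript, using ethers.js for the wallet interaction, might look like this. The challenge format and the validation endpoint below are assumptions, not Mind Network’s actual protocol.

```typescript
import { BrowserProvider } from "ethers";

async function startValidation(): Promise<void> {
  // 1. Connect the user's wallet (e.g. MetaMask, via window.ethereum).
  const provider = new BrowserProvider((window as any).ethereum);
  const signer = await provider.getSigner();

  // 2. Sign a small challenge; the private key never leaves the wallet.
  const challenge = `mind-validation:${Date.now()}`; // hypothetical format
  const signature = await signer.signMessage(challenge);

  // 3. Kick off validation; no plaintext user data leaves the client.
  await fetch("https://example.invalid/validate", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ challenge, signature }),
  });
}
```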
The integration is already having an impact. Coze, a new agent-building tool, has added the MCP plugin to its resource library, letting developers quickly add cryptographic validation to their AI agents and publish them with proof that they’re trustworthy. In Lark, an enterprise platform for messaging and documents, built-in AI tools such as meeting-note generators, code reviewers, and translators can now be audited without exposing company secrets. Security becomes something built in from the start, not added later.
The same encryption that hides user data also leaves behind a verifiable trail. Regulators, partners, and end users can confirm that an AI model ran as intended, without needing BytePlus or Mind Network to act as central authorities.
BytePlus provides the infrastructure behind major apps like TikTok, Lark, and Coze. Mind Network, which is backed by Binance Labs and Chainlink, brings Web3 privacy into tools that mainstream developers can use. Together, they hope to create a space where blockchain-grade verification meets large-scale deployment.
Mind Network has made its MCP tools open-source on GitHub and released a TypeScript SDK for developers to build with. BytePlus plans to include the plugin in its official marketplace soon. Both companies also plan to host hackathons aimed at helping developers build secure agents.