
Building a zero-knowledge trust layer for AI and Web3 — Interview with Polyhedra

Polyhedra co-founder Tiancheng Xie discusses why zero-knowledge proofs are key to trustworthy AI and how new bootcamps are training the engineers who’ll build it.

AI systems are racing ahead, but two questions still keep builders up at night: "Can the outputs be trusted, and is the underlying data really private?" The same tension runs through blockchains, where users want verification without leaking sensitive information or relying on centralized gatekeepers.

Cryptocurrency project Polyhedra sits at the intersection of those challenges. Founded by researchers from UC Berkeley, Stanford and Tsinghua University, the team is turning zero-knowledge (ZK) proofs into production-grade tooling that can verify machine-learning results, move assets across more than 25 blockchains and safeguard user data without exposing a single secret.

Its stack already includes Expander (a high-performance ZK prover), zkPyTorch for developers and zkBridge for crosschain messaging, and it is now rolling out EXPchain, a layer-1 purpose-built for AI.

To unpack why verifiable privacy matters in AI and Web3, Cointelegraph spoke with Tiancheng Xie, Polyhedra’s co-founder and chief technology officer, ahead of the start of its zkML festival in May. A cryptographer who earned his Ph.D. at UC Berkeley, Xie leads the team’s push to turn ZK research into the “trust layer” he believes the next generation of decentralized and AI-powered applications will require.

Cointelegraph: What are the main drivers for verifiable, privacy-preserving infrastructure in AI?

Tiancheng Xie: The main drivers are security, privacy and trust powered by zero-knowledge proofs (ZKPs). As AI and blockchain technologies advance, the need to protect sensitive data while maintaining transparency and accountability grows. In AI, privacy-preserving techniques such as ZKPs ensure that models can be trained and predictions can be made without exposing confidential information.

In blockchain, verifiable privacy is essential to secure transactions while maintaining user anonymity. ZKPs help verify transactions without revealing transaction details, preserving privacy and security.

With these technologies, AI models and blockchain networks can operate in a trustless environment, ensuring that users and data are protected while allowing reliable and verifiable interactions.

CT: How do ZKPs hold machine learning models accountable without exposing their underlying data?

TX: ZKPs hold machine learning models accountable by verifying the correctness of computations or predictions without revealing the underlying data. In traditional machine learning, models are trained on sensitive data, and sharing the results can expose private information.
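To make that pattern concrete, here is a minimal conceptual sketch of the prove-and-verify flow Xie describes: a model owner publishes a commitment to private weights, proves that an output was computed from them, and a verifier checks the claim without ever seeing the weights. The function names and the "proof" object are illustrative placeholders, not Polyhedra's zkPyTorch API, and the cryptographic proof itself is abstracted away.

```python
# Conceptual sketch only: the "proof" below is a stand-in, not real
# zero-knowledge cryptography, and none of these names come from
# Polyhedra's tooling. It illustrates the interface shape: the prover
# runs the model privately; the verifier checks the claimed output
# against a public commitment without learning the weights.
import hashlib
import json


def commit(weights: list[float]) -> str:
    """Publish a binding commitment to the private model weights."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()


def prove_inference(weights: list[float], x: list[float]) -> dict:
    """Prover side: compute y = <weights, x> and emit a (mock) proof."""
    y = sum(w * xi for w, xi in zip(weights, x))
    # A real zkML prover would emit a succinct cryptographic proof here
    # instead of a placeholder string.
    return {"output": y, "weight_commitment": commit(weights), "proof": "mock"}


def verify_inference(claim: dict, published_commitment: str) -> bool:
    """Verifier side: accept the output only if the claim is tied to the
    public commitment (a real verifier would also check the proof)."""
    return claim["weight_commitment"] == published_commitment


# Usage: only the commitment and the proven output ever go public.
weights = [0.4, -1.2, 0.7]            # stays private with the model owner
public_commitment = commit(weights)   # published once
claim = prove_inference(weights, [1.0, 2.0, 3.0])
assert verify_inference(claim, public_commitment)
print("verified output:", claim["output"])
```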