Decentralized data layer Walrus aims to offer a “verifiable data foundation for AI workflows” alongside the Sui stack.
The Sui stack comprises the data availability and provenance layer Walrus, the offchain compute environment Nautilus, and the access control layer Seal.
Several AI teams have already chosen Walrus as their verifiable data platform, with Walrus functioning as “the data layer in a much larger AI stack.”
AI models are getting faster, larger, and more capable. But as their outputs begin to shape decisions in finance, healthcare, enterprise software, and beyond, an important question needs answering: can we actually verify the data and processes behind those outputs?
“Most AI systems rely on data pipelines that nobody outside the organization can independently verify,” says Rebecca Simmonds, Managing Executive of the Walrus Foundation, an organization that supports the development of the decentralized data layer Walrus.
As she explains, there is no standard way to check where data came from, whether it was tampered with, or what was authorized for use in the pipeline. That gap doesn’t just create compliance risk; it erodes trust in the outputs AI produces.
“It’s about moving from ‘trust us’ to ‘verify this,’” Simmonds said, “and that shift matters most in financial, legal, and regulated environments where auditability isn’t optional.”
Why centralized logs aren’t enough
Many AI deployments today rely on centralized infrastructure and internal audit logs. While these can provide some visibility, they still require trust in the entity operating the system.
External stakeholders have no choice but to trust that the records haven’t been altered. With a decentralized data layer, integrity is anchored cryptographically, so independent parties can verify those records without relying on a single operator.
That’s where Walrus positions itself: as the data foundation within a broader architecture called the Sui Stack. Sui itself is a layer-1 blockchain network that records policy events and receipts onchain, coordinating access and logging verifiable activity across the stack.
The Sui Stack. Image: Walrus
“Walrus is the data availability and provenance layer, where every dataset gets a unique ID derived from its contents,” Simmonds explained. “If the data changes by even a single byte, the ID changes. That makes it possible to verify that the data in a pipeline is exactly what it claims to be, hasn’t been altered, and remains available.”
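The property Simmonds describes can be sketched with an ordinary content hash. This is an illustration of content-derived IDs in general, not Walrus’s actual blob-ID derivation, which is its own scheme:

```python
import hashlib

def content_id(data: bytes) -> str:
    # Illustrative only: derive an ID purely from the bytes themselves.
    # The key property is the one described above: identical bytes yield
    # an identical ID, and any change yields a different ID.
    return hashlib.sha256(data).hexdigest()

dataset = b"row1,row2,row3"
altered = b"row1,row2,row4"  # a single-byte change

print(content_id(dataset) == content_id(dataset))  # True: stable for identical bytes
print(content_id(dataset) == content_id(altered))  # False: any alteration is detectable
```

Because the ID is a pure function of the content, anyone holding the bytes can recompute it and check the claim without trusting the party that stored them.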
Other components of the Sui Stack build on that foundation. Nautilus lets developers run AI workloads in a secure offchain environment and generate proofs that can be checked onchain, while Seal handles access control, letting teams define and enforce who can see or decrypt data, and under what conditions.
“Sui then ties everything together by recording the rules and proofs onchain,” Simmonds said. “That gives developers, auditors, and users a shared record they can independently inspect.”
“No single layer solves the full AI trust problem,” she added. “But together, they form something important: a verifiable data foundation for AI workflows, with provable provenance, access you can enforce, computation you can attest to, and an immutable record of how everything was used.”
Several AI teams have already chosen Walrus as their verifiable data platform, Simmonds said, including the open-source AI agent platform elizaOS and the blockchain-native AI intelligence platform Zark Lab.
Autonomous agents making financial decisions on unverifiable data. Think about that for a second.
With Walrus, datasets, models, and content are verifiable by default, so builders can secure AI platforms from potential regulatory non-compliance, inaccurate responses, and erosion…
— Walrus 🦭/acc (@WalrusProtocol) February 18, 2026
Verifiable, not infallible
The phrase “verifiable AI” can sound ambitious. But Simmonds is careful about what it does, and doesn’t, imply.
“Verifiable AI doesn’t explain how a model reasons or guarantee the truth of its outputs,” she said. But it can “anchor workflows to datasets with provable provenance, integrity, and availability.” Instead of relying on vendor claims, she explained, teams can point to a cryptographic record of what data was available and authorized. When data is stored with content-derived identifiers, every modification produces a new, traceable version, allowing independent parties to confirm what inputs were used and how they were handled.
The distinction is crucial. Verifiability isn’t about promising perfect outcomes. It’s about making the lifecycle of data (how it was stored, accessed, and modified) transparent and auditable. And as AI systems move into regulated or high-stakes environments, that transparency becomes increasingly important.
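That lifecycle idea can be sketched as an append-only log keyed by content-derived IDs. Everything here (the `record_version` helper, the log structure) is a hypothetical illustration of the versioning property described above, not a real Walrus API:

```python
import hashlib

def content_id(data: bytes) -> str:
    # Same illustrative scheme: the ID is a pure function of the bytes.
    return hashlib.sha256(data).hexdigest()

# Hypothetical append-only audit log: each modification is recorded under
# its content-derived ID, so the full lineage stays independently checkable.
audit_log: list[dict] = []

def record_version(data: bytes, note: str) -> str:
    version_id = content_id(data)
    audit_log.append({"id": version_id, "note": note})
    return version_id

v1 = record_version(b"claims.csv contents", "initial ingest")
v2 = record_version(b"claims.csv contents, corrected", "corrections applied")

print(v1 != v2)                  # True: every modification is a new, traceable version
print(audit_log[0]["id"] == v1)  # True: earlier versions remain auditable
```

An auditor holding the original bytes can recompute each ID and confirm the log entry, without asking the operator to vouch for it.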
Why does @WalrusProtocol exist.
Because businesses that need programmable storage with verifiable data integrity and guaranteed availability had nowhere to go.
We built it and they keep showing up. Simple as that!! pic.twitter.com/Ygxe8CFenh
— rebecca simmonds 🦭/acc (@RJ_Simmonds) February 12, 2026
“Finance is a pressing use case,” Simmonds said, where “small data errors” can turn into real losses due to opaque data pipelines. “Being able to prove data provenance and integrity across these pipelines is a major step toward the kind of trust these systems demand,” she said, adding that it “isn’t limited to finance. Any domain where decisions have consequences (healthcare, legal) benefits from infrastructure that can show what data was available and authorized.”
A practical place to start
For teams interested in experimenting with verifiable infrastructure, Simmonds suggests starting with the data layer as a “first step” rather than attempting a wholesale overhaul.
“Many AI deployments rely on centralized storage that is really difficult for external stakeholders to independently audit,” she said. “By moving critical datasets onto content-addressed storage like Walrus, organizations can establish verifiable data provenance and availability, which is the foundation everything else builds on.”
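In practice, a pipeline built on content-addressed storage can refuse to consume data whose ID doesn’t match the recorded one. The guard below is a hypothetical sketch of that pattern; `load_verified` is not a real Walrus function, and the blob argument stands in for a fetch from storage:

```python
import hashlib

def content_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def load_verified(blob: bytes, expected_id: str) -> bytes:
    # Hypothetical pipeline guard: reject data whose content-derived ID
    # doesn't match the one recorded when the dataset was published.
    if content_id(blob) != expected_id:
        raise ValueError("dataset integrity check failed")
    return blob

recorded = content_id(b"training shard 0")          # ID recorded at publish time
data = load_verified(b"training shard 0", recorded)  # passes: bytes match the record
```

A tampered or corrupted download fails the same check, so the integrity decision rests on the cryptographic ID rather than on trust in whoever served the bytes.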
In the coming year, one focus for Walrus is expanding the partners and builders on the platform. “Some of the most exciting stuff is what we’re seeing developers build, from decentralized AI agent memory systems to new tools for prototyping and publishing on verifiable infrastructure,” she said. “In many ways, the community is leading the charge, organically.”
“We see Walrus as the data layer in a much larger AI stack,” Simmonds added. “We’re not trying to be the whole answer; we’re building the verifiable foundation that the rest of the stack depends on. When that layer is right, new kinds of AI workflows become possible.”