Palo Alto, CA, February 6th, 2026, Chainwire
ZenO opens access to egocentric audio, video, and image data captured from smart glasses and smartphones to support the next generation of physical AI systems.
ZenO today announced the launch of its public beta, opening access to a new platform designed to collect, anonymize, and structure real-world, first-person data for training physical AI systems such as robots, autonomous agents, and embodied models. ZenO will use Story's Layer-1 blockchain technology as its core infrastructure.
The launch comes at a moment when physical AI is moving from research into production, but the data required to power these systems hasn't kept pace. Robots trained on scraped web data or simulations struggle with everyday tasks that humans perform intuitively. ZenO addresses this mismatch by enabling the collection of first-person, real-world data (what people actually see, hear, and do), creating a new foundation for training embodied AI systems at scale.
By training on real-world, first-person data, physical AI models can be fine-tuned to better perceive their environments, generalize across unpredictable conditions, and perform tasks more accurately and reliably once deployed in the real world.
ZenO also recently joined NVIDIA Inception, a global program designed to support startups building advanced AI technologies. Through the program, ZenO is accelerating the development of its Physical AI Data Network by leveraging access to NVIDIA's GPU ecosystem, technical expertise, cloud infrastructure benefits, and go-to-market resources. This support enables ZenO to scale the enterprise-grade, rights-cleared data infrastructure required to train robotics and embodied AI systems operating in complex physical environments.
The beta is built on ZenO's existing MVP, now live at https://app.zen-o.xyz/, and focuses on validating the end-to-end product flow for real-world data collection: from capture through quality assurance and anonymization. The beta will run for approximately 6–8 weeks.
Unlike synthetic data or scraped online content, physical AI systems require egocentric data: what humans actually see, hear, and do in real-world environments. ZenO enables contributors to capture continuous first-person audio, video, and images using either ZenO-branded smart glasses or their mobile phones, following clearly defined data collection missions.
How the ZenO Beta Works
During the beta, users can:
Capture real-world audio, video, and images from a first-person perspective
Upload data through ZenO's application for automated formatting and integrity checks
Have data reviewed through a multi-stage QA process, including AI-based screening and human review
Automatically anonymize sensitive information, including faces and identifiable text
After anonymization, contributors add structured metadata describing the activities and environments within the footage. Approved datasets are then securely stored and cataloged within ZenO's data marketplace infrastructure.
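For illustration only, the sketch below models that capture-to-catalog flow in simplified form. The record structure, field names, and status values are assumptions made for the example, not ZenO's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class Contribution:
    clip_id: str
    media_type: str               # "audio", "video", or "image"
    passed_integrity: bool = False
    qa_status: str = "pending"    # "pending" -> "ai_screened" -> "human_approved"
    anonymized: bool = False
    metadata: dict = field(default_factory=dict)

def catalog(clip: Contribution, activity: str, environment: str) -> Contribution:
    """Walk one contribution through the beta flow: integrity checks, multi-stage QA,
    anonymization, then contributor-supplied metadata before marketplace cataloging."""
    clip.passed_integrity = True            # automated formatting and integrity checks
    clip.qa_status = "ai_screened"          # AI-based screening
    clip.qa_status = "human_approved"       # human review
    clip.anonymized = True                  # faces and identifiable text removed
    clip.metadata = {"activity": activity, "environment": environment}
    return clip

clip = catalog(Contribution("clip-001", "video"), activity="cooking", environment="home kitchen")
print(clip.qa_status, clip.metadata)
```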
Incentives and Contributor Rewards
ZenO uses a two-stage incentive model for contributors:
Immediate rewards for data collection, paid in XP during the beta phase
Revenue sharing if the contributed data is sold, with contributors receiving a portion of downstream sales in stablecoins
This structure aligns contributor incentives with long-term data quality and commercial demand, rather than one-time labeling tasks.
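As a rough illustration of how such a two-stage reward could be computed, the sketch below uses an assumed XP rate and revenue-share percentage; these figures and function names are hypothetical, not ZenO's published terms.

```python
# Hypothetical two-stage contributor reward calculation (illustrative values only).
XP_PER_APPROVED_MINUTE = 10        # immediate reward during the beta (assumed rate)
CONTRIBUTOR_REVENUE_SHARE = 0.30   # share of downstream sales (assumed percentage)

def immediate_xp(approved_minutes: float) -> int:
    """Stage 1: XP granted once a contribution passes QA."""
    return int(approved_minutes * XP_PER_APPROVED_MINUTE)

def revenue_share(sale_price_usd: float, contributor_fraction_of_dataset: float) -> float:
    """Stage 2: stablecoin payout if a dataset containing the contribution is sold."""
    return sale_price_usd * CONTRIBUTOR_REVENUE_SHARE * contributor_fraction_of_dataset

print(immediate_xp(12.5))             # 125 XP for 12.5 approved minutes
print(revenue_share(5_000.0, 0.02))   # $30.00 if the clip is 2% of a $5,000 sale
```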
Hardware and Capture Options
ZenO's smart glasses are manufactured through an OEM partnership and released under the ZenO brand, with specifications comparable to leading consumer smart glasses. The glasses support audio and video capture, hands-free operation, and all-day wearability. Contributors may also participate using smartphones, depending on mission requirements.
Onchain Foundations and Future Roadmap
During the beta, ZenO records wallet-signed consent and data identifiers onchain, creating a verifiable record of contributor authorization and dataset provenance. Full IP and data rights management functionality is planned for a future release.
ZenO's long-term roadmap includes writing metadata and licensing information for user-generated datasets onto Story, enabling programmable data rights, transparent licensing, and automated revenue distribution for AI training data.
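As a loose illustration of what a verifiable consent and provenance record could contain, the sketch below derives a content hash for a dataset and packages it with a wallet signature. The hashing scheme, field names, and placeholder values are assumptions for the example; this is not Story's registration interface or ZenO's actual onchain schema.

```python
# Hypothetical consent/provenance record; fields and hashing scheme are illustrative only.
import hashlib
import json
import time

def dataset_identifier(raw_bytes: bytes) -> str:
    """Content-derived identifier so an onchain record can reference the dataset."""
    return hashlib.sha256(raw_bytes).hexdigest()

def consent_record(contributor_wallet: str, dataset_id: str, signature: str) -> str:
    """JSON payload a platform might anchor onchain: who consented, to which dataset, when."""
    return json.dumps({
        "contributor": contributor_wallet,
        "dataset_id": dataset_id,
        "wallet_signature": signature,   # signature produced by the contributor's wallet
        "timestamp": int(time.time()),
    }, sort_keys=True)

dataset_id = dataset_identifier(b"example footage bytes")
print(consent_record("0xExampleContributorWallet", dataset_id, "0xExampleSignedConsent"))
```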
"The real world doesn't look like the internet," said Dawn Kim, Co-Founder of ZenO. "Physical AI systems need high-quality, rights-cleared, first-person data captured in real environments. This beta is about proving the foundation for how that data can be collected, structured, and used to train models that actually work outside the lab."
ZenO is currently working with early data demand partners and will share traction metrics following the beta period.
To learn more or join the beta, users can visit https://zen-o.xyz.
About ZenO
ZenO is a Physical AI data collection platform focused on capturing real-world, first-person (POV) human behavior for training robotics and embodied AI systems. Through smart glasses and smartphones, ZenO enables contributors to upload video and image data generated from everyday activities. ZenO is designed to support scalable, compliant collection of real-world data for next-generation Physical AI systems.
About Story
Story is an AI-native blockchain network designed to serve as the provenance, licensing, and economic layer for AI data and models. Powered by the $IP token, Story enables datasets, models, and AI-generated outputs to be registered as intellectual property, licensed programmatically, and monetized with built-in attribution.
Backed by $136 million from a16z crypto, Polychain Capital, and Samsung Ventures, Story launched its mainnet in February 2025 and is building foundational infrastructure for the AI economy. By making IP native to the data and model lifecycle, Story provides the trust and economic rails required for AI systems to scale responsibly across enterprises, developers, and global markets.
Contact
Head of Communications
HV
Story
[email protected]