Our goal in partnering with Constellation is to harness its blockchain-based data authentication technology to give users of our crawl greater confidence in security and data provenance, by anchoring checksums of Common Crawl data on the blockchain. With Constellation’s decentralised infrastructure, we aim to make a tamper-evident, verifiable dataset of Common Crawl data available to anyone.

By publishing cryptographic hashes of our crawled data on-chain, anyone will be able to verify the authenticity of the datasets, no matter where they are stored or used. This is a step toward a more trustworthy and decentralised data ecosystem. We anticipate delivering this integration later this year.
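To make the verification flow concrete, here is a minimal sketch, assuming the anchored checksums are plain SHA-256 digests of the published crawl files; the file name and the published hash are hypothetical placeholders, and the on-chain lookup itself is not yet available:

```python
# Minimal sketch of dataset verification against an anchored checksum.
# Assumptions: the anchored value is a SHA-256 digest of the file, and
# the file name and published digest below are placeholders.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

local_digest = sha256_of_file("CC-MAIN-example.warc.gz")  # placeholder file
published_digest = "<digest retrieved from the blockchain anchor>"  # placeholder
print("verified" if local_digest == published_digest else "mismatch")
```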
Why does this matter? In an era of misinformation and AI-generated content, trustworthy data sources are critical. Blockchain-backed validation of open web archives supports transparency and accountability.
Watch this panel from Constellation’s event, Protecting America and Restoring Trust Using AI & Blockchain, featuring our Executive Advisor Chris Tolles, who speaks on the role of open data in rebuilding public trust.
Erratum:
Content is truncated
Some archived content is truncated due to fetch size limits imposed during crawling. This is necessary to handle infinite or exceptionally large data streams (e.g., radio streams). Prior to March 2025 (CC-MAIN-2025-13), the truncation threshold was 1 MiB. From the March 2025 crawl onwards, this limit has been increased to 5 MiB.
For more details, see our truncation analysis notebook.
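For readers who want to check their own copies, the sketch below counts truncated responses in a locally downloaded WARC file. It assumes the third-party warcio library and that truncated records carry a WARC-Truncated header; the file name is a placeholder:

```python
# Rough sketch: count truncated response records in a Common Crawl WARC file.
# Assumes the warcio library is installed and that truncated records carry a
# WARC-Truncated header; the file name below is a placeholder.
from warcio.archiveiterator import ArchiveIterator

truncated = 0
total = 0
with open("CC-MAIN-example.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type != "response":
            continue
        total += 1
        # The WARC-Truncated header records why a payload was cut short
        # (e.g. "length" when the fetch size limit was reached).
        if record.rec_headers.get_header("WARC-Truncated"):
            truncated += 1

print(f"{truncated} of {total} responses were truncated")
```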