Trust Score: 90 · Verified (Web Verified)
Electronic Frontier Foundation on Mastodon · 1d ago
For nearly 30 years, journalists have relied on the Internet Archive to see how stories were originally published, before edits, removals, or changes. We need to safeguard that. https://www.eff.org/deeplinks/2026/03/blocking-internet-archive-wont-stop-ai-it-will-erase-webs-historical-record
Trust Metrics
Claim Accuracy: 92%
Source Quality: 95%
Framing & Tone: 88%
Context: 80%
Analysis Summary
The EFF documents a real problem: the New York Times and other major publishers are now blocking the Internet Archive from preserving their websites, reportedly to prevent AI training. This is verified and well-reported. The article makes a solid legal argument that web archiving is fair use (like Google's book scanning) and that destroying the historical record over AI disputes harms the public. The framing is clear-eyed about the underlying AI conflict but emphasizes the collateral damage to journalists and researchers who have relied on the Wayback Machine for 30 years.
Claims Analysis (6)
"For nearly 30 years, journalists have relied on the Internet Archive to see how stories were originally published, before edits, removals, or changes."
The Internet Archive launched in 1996; its Wayback Machine preserves web pages and is widely used by journalists and researchers for historical documentation.
"The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web's traditional robots.txt rules."
Reported by EFF in March 2026; Times implemented blocking measures against Internet Archive crawlers, documented by Archive staff.
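For context on what "beyond robots.txt" means: the web's traditional opt-out is a plain-text robots.txt file that crawlers honor voluntarily. A hypothetical entry blocking the Archive's crawler would look like the sketch below (`ia_archiver` is the user-agent the Wayback Machine has historically honored; this is an illustrative file, not the Times' actual configuration, and the article indicates the Times' measures go further, e.g. server-side blocking that does not rely on crawler cooperation).

```
# Illustrative robots.txt entry (hypothetical, not the NYT's actual file).
# ia_archiver is the user-agent historically honored by the Wayback Machine.
User-agent: ia_archiver
Disallow: /
```

The key distinction: robots.txt is advisory and the crawler must choose to obey it, whereas technical blocking is enforced by the server regardless of the crawler's behavior.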
"Other newspapers, including The Guardian, seem to be following suit."
The article uses cautious language ("seem to be"), indicating an emerging trend rather than comprehensive confirmation that other outlets are blocking.
"The Wayback Machine now contains more than one trillion archived web pages."
The Internet Archive publicly states it has preserved 1+ trillion pages; the figure is consistent with the Archive's 2025-2026 capacity.
"The Times says the move is driven by concerns about AI companies scraping news content."
Well-documented: the NYT cited AI training concerns as the primary reason for blocking the Archive; the Times filed suit against OpenAI and Microsoft in December 2023.
"Wikipedia alone links to more than 2.6 million news articles preserved at the Archive, spanning 249 languages."
Per the Archive staff statement cited in the article; the specific figure comes from the Internet Archive's own analysis.