ClearFeed
Trust Analysis
63Trust
Partially True
🔍 Web Verified
jonny (good kind) on Mastodon · 1d ago
Claude code source "leaks" in a mapfile
people immediately use the code laundering machines to code launder the code laundering frontend
now many dubious open source-ish knockoffs in python and rust being derived directly from the source
What's anthropic going to do, sue them? Insist in court that LLM recreating copyrighted code is a violation of copyright???
Trust Metrics
Claim Accuracy: 75%
Source Quality: 60%
Framing & Tone: 55%
Context: 55%
Analysis Summary
The Claude Code source leak is real: Anthropic accidentally exposed 512,000 lines of TypeScript via a misconfigured source map file in an npm package on March 31, 2026. The post accurately captures the core facts: the code leaked, it has been mirrored on GitHub (41,500+ forks), and developers are building alternative tools from it. The "code laundering" framing is rhetorically exaggerated but points to a real risk. The closing question about copyright is cynical commentary more than a factual claim; courts have found AI training to be transformative fair use, though the legal status of generated outputs remains unsettled.
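The summary above mentions that the exposure came through a source map file. As a hedged illustration (the filenames and contents below are invented, not Anthropic's actual files): a `.js.map` file can carry a `sourcesContent` array holding the full original source of every file that was compiled, so shipping the map alongside the bundled JavaScript effectively publishes the original TypeScript.

```python
# Sketch: recovering embedded sources from a source map's
# "sourcesContent" field. All names and contents are made up
# for illustration; this is not the leaked file itself.
import json

# A minimal version-3 source map of the kind that, if shipped
# accidentally in an npm package, exposes the original sources.
source_map = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/cli.ts"],
    "sourcesContent": [
        "// original TypeScript\nexport const main = () => console.log('hi');\n"
    ],
    "mappings": "AAAA",
})

def extract_sources(map_text: str) -> dict:
    """Return {source path: original source text} from a source map."""
    m = json.loads(map_text)
    # "sourcesContent" may be absent or null in maps that only
    # reference sources by path instead of embedding them.
    return dict(zip(m.get("sources", []), m.get("sourcesContent") or []))

for path, text in extract_sources(source_map).items():
    print(path)
    print(text)
```

Stripping `sourcesContent` (or not publishing the `.map` file at all) is the usual way compiled packages avoid this kind of disclosure.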
Claims Analysis (4)
Claude code source leaks in a mapfile
Confirmed: Anthropic's Claude Code source was exposed via a misconfigured source map file in an npm package on March 31, 2026.
Verified
People immediately use the code to create code laundering knockoffs in python and rust
Partially confirmed: GitHub mirrors appeared rapidly, and some developers advertised builds derived from Claude Code, though the "code laundering" characterization is rhetorical.
Mostly True
Many dubious open source-ish knockoffs derived directly from source
Plausible but unconfirmed. GitHub forks exceeded 41,500+ and developers noted intent to build alternatives, but current evidence doesn't specify scale or confirm all are active.
Unverified
LLM recreating copyrighted code violates copyright
Rhetorical question with implied skepticism. Fair use for AI training was upheld in Bartz v. Anthropic, but the settlement left claims about LLM outputs unresolved.
Partially True
Flags (1)
😨 Appeal to Fear
Was this analysis helpful?
Try ClearFeed free
clearfeed.app — Trust scores for your social feed