ClearFeed
Trust Analysis
Trust Score: 78
Highly Accurate
🔍 Web Verified
Zack Whittaker on Mastodon · 1d ago
Your doctor or therapist might be using AI recording to transcribe your patient notes. Two privacy experts have nine good reasons why you should decline and opt-out. Some words by me: https://this.weekinsecurity.com/why-your-doctors-ai-recorder-can-be-bad-for-your-health-and-privacy/
Trust Metrics
Accuracy: 82%
Framing: 75%
Context: 80%
Tone: 68%
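ClearFeed does not publish how these four sub-metrics combine into the headline score of 78. As a purely illustrative sketch, one hypothetical weighting (accuracy weighted most heavily, tone least) happens to reproduce it; the weights below are assumptions, not ClearFeed's actual formula.

# Hypothetical aggregation of the card's sub-metrics into an overall
# trust score. The weights are illustrative assumptions only;
# ClearFeed's real scoring method is not documented here.
METRICS = {"Accuracy": 82, "Framing": 75, "Context": 80, "Tone": 68}
WEIGHTS = {"Accuracy": 0.40, "Framing": 0.20, "Context": 0.25, "Tone": 0.15}  # sums to 1.0

def trust_score(metrics, weights):
    """Weighted average of the sub-metric scores."""
    return sum(metrics[name] * weights[name] for name in metrics)

print(trust_score(METRICS, WEIGHTS))  # 78.0 -- matches the headline score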
Analysis Summary
Doctors and therapists are increasingly using AI tools to automatically record and transcribe patient conversations, but many don't disclose this to patients beforehand. The risks are real: AI systems store audio and text on third-party cloud servers, can hallucinate or misrecord critical medical details, and may use patient data to train their models, raising both privacy and accuracy concerns in a setting where precision matters for your health. Patients have a right to opt out, but few understand these tools are running in their appointments at all.
Claims Analysis (5)
"Your doctor, therapist, or any other physician you see could be using AI to record what you say and generate your patient notes."
Multiple sources confirm AI recording tools are actively deployed in medical settings. Otter AI, Pabau, and similar tools exist and are marketed for clinical use.
✓ Verified
"These tools should be proactively disclosed, but they aren't always."
Medscape and CNN coverage both highlight that AI use in medicine often happens "without patients' knowledge." Legal disclosure requirements vary by jurisdiction, but under-disclosure is documented.
◐ Mostly True
"AI recording tools suck up audio to consumer cloud services, leaving copies in the hands of AI companies or public clouds."
Pabau guide and Medscape coverage confirm this architecture: voice input stored on third-party clouds is standard practice in these tools.
✓ Verified
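To make the verified architecture claim concrete, here is a minimal sketch of the data flow it describes: a client uploads the raw appointment audio to a vendor's cloud API and receives a transcript back. The endpoint, field names, and retention flag are hypothetical, not any real vendor's API.

# Sketch of the data flow in claim 3, with a hypothetical endpoint and
# field names; no real vendor's API is shown. The point: the raw audio
# leaves the device, and a copy lands on a third-party cloud server
# outside the clinic's direct control.
import requests

AUDIO_FILE = "appointment.wav"  # full recording of the visit
API_URL = "https://api.example-scribe.com/v1/transcribe"  # hypothetical

with open(AUDIO_FILE, "rb") as f:
    resp = requests.post(
        API_URL,
        headers={"Authorization": "Bearer <api-key>"},
        files={"audio": f},       # audio uploaded; copy is now server-side
        data={"retain": "true"},  # hypothetical flag: vendor keeps the data
    )

# The clinic gets back a transcript; the vendor may keep audio and text
# for storage, debugging, or (per claim 5) model training.
print(resp.json().get("transcript", ""))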
"AI products can make things up ('hallucinations') and produce inaccurate results in medical settings."
AI hallucinations in medical contexts are well-documented. GBH/MIT piece and clinical literature confirm this is a real risk factor in healthcare AI deployment.
✓ Verified
"Patients may not be aware their conversations are stored on external computers or that data can be used to improve AI models."
Medscape and CNN both confirm lack of patient awareness; data use for model training is standard in cloud AI services, though disclosure varies.
โ— Mostly True