How to Spot Misinformation in Digital Health and Science
In an era where our feeds are filled with “new studies,” “miracle treatments,” and emotional patient stories, accuracy in digital health has never been more important. It means ensuring that the health information we share or consume is based on credible evidence, transparent sources, and current research.
Across TikTok, Instagram, Reddit, YouTube, and X (Twitter), health content now spreads faster than science can keep up. One post can empower thousands of people to advocate for themselves, while another can send just as many down a dangerous path of misinformation.
Most people who share health information mean well. But even the most caring advocates or credible brands can accidentally share misinformation. And in healthcare, without verification, even good intentions can create confusion, anxiety, or worse — influence how someone manages their care.
That’s why taking a moment to pause before we repost, retweet, or reshare is more than good practice — it’s a form of advocacy.
Before amplifying what you see online, ask: Who said this? Where did it come from? And does it hold up under evidence?
This guide — created by Fight2Breathe in collaboration with pRxEngage — offers a simple, 30-second framework to help patients, caregivers, and creators navigate the flood of health information online.
We call it The TRUTH Check, and it’s designed to help you share responsibly, confidently, and compassionately in a digital world that doesn’t always make it easy to tell what’s real.
The TRUTH Check: A 30-Second Filter for Health Posts
T — Trace the Source
Is there a primary source — like a published paper, clinical trial registry, press release, or official guideline — that backs the claim? Or is it a screenshot of someone else’s post, stripped of its context?
R — Reality Check
If a new claim completely contradicts what major research organizations are saying — and there’s no clear explanation why — take a step back.
Medical science moves forward, but it rarely makes U-turns overnight. A good reality check is to see whether other reputable organizations have echoed the same message. If not, the “breakthrough” may not be as solid as it sounds.
Tip: Look for supporting statements from official bodies — not just individuals — like medical societies, research networks, or established advocacy groups.
U — Understand the Study
Not all studies are created equal. Before resharing new “results,” take a moment to understand how the study was done.
Who participated — animals, healthy volunteers, or patients? How big was the study? Was it a small pilot or a large-scale clinical trial?
Knowing these details can completely change how meaningful a result really is. A headline that sounds definitive might be based on a study with ten people — or even lab data that hasn’t been tested in humans yet.
Tip: Look for words like phase 1, phase 2, or phase 3 clinical trial. These describe how advanced the research actually is.
T — Transparency
Trust comes from clarity. Creators, companies, and organizations should be open about why they’re sharing something and what affiliations they have.
Sponsored content isn’t bad — but audiences deserve context.
Tip: Transparency isn’t about discrediting people — it’s about clarity. A trustworthy voice never hides the “why” behind their message.
H — Harm vs. Help
Finally, ask: Could this post do harm?
Even accurate information can cause distress or confusion if shared without context. Before hitting share, consider the impact. If it might harm more than it helps — or if you’re not sure — it’s okay to pause. You can always verify, ask questions, or add your own context before amplifying.
Tip: If you’re sharing something sensitive, add a note like “This is general information — check with your care team for what’s best for you.”
🔎 Bonus Check: What to Know About AI and Health Misinformation
AI can now generate articles, images, and even “experts” that appear trustworthy.
It can make information sound more certain than it is — especially when it’s stripped of human context.
Before trusting or sharing AI-generated content, consider:
- Can you trace it back to a real, identifiable source?
- Was it reviewed by a medical professional?
- Does it include a date, limitations, and who it applies to?
AI can help us learn — but if you can’t verify who stands behind the message, pause before sharing.
🚫 Red Flags to Watch For
- “Miracle” or “secret” cure language
- No links or citations (screenshots only)
- Old data presented as “new”
- Anecdotes used as proof
✅ Green Flags That Build Trust
- Links to full text, registry, or official guideline
- Clear publication dates, limitations, and conflicts of interest
- Plain-language summaries and who the data applies to
In a world where everyone has a microphone,
being loud isn’t the same as being right.
Take 30 seconds to verify before you amplify.
Send this link to a friend or colleague before they share a “too-good-to-be-true” post. 👉 pRxEngage.com/KnowBeforeYouShare