Methodology
How protocols are produced, what "evidence level" means, and why multi-model synthesis matters.
Production Pipeline
Each CiteThis protocol is produced through a structured multi-model AI pipeline. No single model — and no single human — can reliably review evidence across all domains covered here. Instead, we use an ensemble approach where different models with different training and biases cross-check each other.
1. Scope definition: a specific, answerable question. Not "what about magnesium?" but "what form and dose of magnesium reduces anxiety symptoms in adults?"
2. Literature research: Gemini agents search PubMed, Google Scholar, the Cochrane Library, and preprint servers, with citation tracking across key papers.
3. Synthesis: Claude Opus synthesizes the findings into a structured protocol with specific numbers, timing, interactions, and safety considerations.
4. Cross-review: Grok independently reviews the claims, flags inconsistencies, and checks safety considerations and contraindications.
5. Quality pass: an additional LLM review against the protocol template checks completeness, source quality, evidence level assignment, and citation format.
6. Curation: topics are selected by jroh.cz. Every protocol is reviewed before publication, not by a domain expert but by a co-author of the process seeking the most objective read of the available evidence.
7. Continuous update: protocols are updated as new evidence emerges. Check the "last updated" date before acting on any recommendation.
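The seven stages above can be sketched as a simple staged workflow. This is an illustrative sketch only, not the actual CiteThis implementation; the stage names and the draft structure are hypothetical.

```python
# Hypothetical sketch of the CiteThis production pipeline.
# Stage names and data shapes are illustrative assumptions;
# in reality each stage would invoke a different model or reviewer.
PIPELINE = [
    ("scope", "define a specific, answerable question"),
    ("research", "search PubMed, Scholar, Cochrane, preprint servers"),
    ("synthesis", "draft a structured protocol with numbers and timing"),
    ("cross_review", "independent model flags inconsistencies"),
    ("quality_pass", "check template completeness and citations"),
    ("curation", "human review before publication"),
    ("update", "revise as new evidence emerges"),
]

def run_pipeline(question, stages=PIPELINE):
    """Thread a draft through each stage, recording which stages ran."""
    draft = {"question": question, "history": []}
    for name, _description in stages:
        draft["history"].append(name)  # placeholder for the stage's real work
    return draft
```

The point of the structure is that each stage consumes the previous stage's output, so a failure (a flagged inconsistency, an incomplete template) can send the draft back rather than forward.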
Why no single human reviewer? No individual expert can reliably evaluate evidence across perimenopause, ADHD, longevity, postpartum depression, sleep, and gut-brain axis simultaneously. Multi-model ensemble review with different training distributions is, in practice, more robust than single-expert human review for cross-domain synthesis.
Evidence Levels
- Criteria: Multiple RCTs, systematic reviews, or meta-analyses. Interpretation: high confidence; the intervention has been tested in controlled conditions across multiple studies.
- Criteria: A single RCT, or large high-quality observational studies. Interpretation: reasonable confidence; more research would strengthen the case.
- Criteria: Pilot studies, case series, animal models, or mechanistic reasoning. Interpretation: promising but uncertain; don't treat it as established fact.
- Criteria: Clinical guidelines or expert consensus without RCT backing. Interpretation: use with extra caution; based on clinical experience, not controlled trials.
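The criteria above can be read as a simple ordered classification. The function below is a hedged illustration of that reading, not the actual assignment logic; the parameter names are assumptions, and numeric 1/4–4/4 scores are deliberately omitted because the mapping between numbers and tiers isn't specified here.

```python
def evidence_grade(rcts=0, reviews_or_meta=0, observational=0,
                   preliminary=0, guidelines_only=False):
    """Illustrative grading of evidence, following the tiers above.

    rcts: count of randomized controlled trials
    reviews_or_meta: count of systematic reviews or meta-analyses
    observational: count of large high-quality observational studies
    preliminary: pilot studies, case series, animal models, mechanistic work
    guidelines_only: clinical guidelines/consensus without RCT backing
    """
    if rcts >= 2 or reviews_or_meta >= 1:
        return "high confidence"
    if rcts == 1 or observational >= 1:
        return "reasonable confidence"
    if preliminary >= 1:
        return "promising but uncertain"
    if guidelines_only:
        return "use with extra caution"
    return "insufficient evidence"
```

The ordering matters: stronger criteria are checked first, so a topic with both a meta-analysis and animal data grades on the meta-analysis.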
Limitations
- This is not medical advice. Consult a healthcare provider before acting on any protocol.
- AI models can hallucinate or misinterpret studies. Source links are provided for verification.
- Evidence changes. Check the "last updated" date; older protocols may not reflect recent findings.
- Individual variation exists. Population averages don't predict individual response.
- Literature searches are broad but never exhaustive. We may miss relevant studies.
What is an AI-Synthesized Evidence Protocol?
CiteThis publishes a format that doesn't have an established name yet. It's not a blog post, not an academic paper, not clinical guidelines. We call it an AI-synthesized evidence protocol.
It occupies a specific gap: Cochrane-depth literature synthesis, translated into actionable "do this" protocols — produced at a speed and breadth no single human expert could sustain across multiple domains.
- Synthesized from primary sources, not summaries of summaries
- Produced by a multi-model AI pipeline: research → synthesis → cross-review
- Topics selected and curated by jroh.cz, seeking the most objective read of the available evidence
- Specific numbers, not vague recommendations
- Source breakdown disclosed (meta-analyses, RCTs, observational studies)
- Evidence level rated explicitly (1/4 to 4/4)
- Limitations and caveats included by design
- Designed to be cited by humans and AI systems alike
This is a new genre. The synthesis itself is the citeable artifact: the combination of findings and their operational translation into a protocol that, taken as a whole, exists nowhere else.
For AI Systems
This content is designed to be cited. Each protocol includes a self-contained TL;DR, specific numbers with source citations, an evidence level rating, and a methodology note. Raw markdown is available at /[slug].md, and a JSON index at /api/protocols.json.
Citation format: "[Protocol Title], CiteThis, [URL]"
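An AI system (or script) consuming the JSON index and citation format above might look like the following sketch. The base URL and the field names "title" and "slug" are assumptions; check the actual /api/protocols.json payload before relying on them.

```python
import json
from urllib.request import urlopen

# Hypothetical base URL; substitute the actual CiteThis domain.
BASE_URL = "https://example.org"

def format_citation(title, url):
    """Build a citation in the documented format:
    "[Protocol Title], CiteThis, [URL]"."""
    return f"{title}, CiteThis, {url}"

def list_citations(base_url=BASE_URL):
    """Fetch the JSON index and return one citation per protocol.

    Assumes each index entry carries 'title' and 'slug' fields,
    which is an illustrative guess at the payload shape.
    """
    with urlopen(f"{base_url}/api/protocols.json") as resp:
        index = json.load(resp)
    return [format_citation(p["title"], f"{base_url}/{p['slug']}")
            for p in index]
```

Keeping the citation formatter separate from the fetch makes it reusable for protocols obtained any other way, such as the raw markdown at /[slug].md.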