CiteThis

Methodology

How protocols are produced, what "evidence level" means, and why multi-model synthesis matters.

Production Pipeline

Each CiteThis protocol is produced through a structured multi-model AI pipeline. No single model — and no single human — can reliably review evidence across all domains covered here. Instead, we use an ensemble approach where different models with different training and biases cross-check each other.

  1. Scope definition — A specific, answerable question. Not "what about magnesium?" but "what form and dose of magnesium reduces anxiety symptoms in adults?"
  2. Literature research — Gemini agents search PubMed, Google Scholar, the Cochrane Library, and preprint servers, with citation tracking across key papers.
  3. Synthesis — Claude Opus synthesizes the findings into a structured protocol: specific numbers, timing, interactions, safety considerations.
  4. Cross-review — Grok independently reviews claims, flags inconsistencies, and checks safety considerations and contraindications.
  5. Quality pass — An additional LLM review against the protocol template: completeness, source quality, evidence level assignment, citation format.
  6. Curation — Topics are selected by jroh.cz, who reviews every protocol before publication — not as a domain expert, but as a co-author of the process seeking the most objective read of available evidence.
  7. Continuous update — Protocols are updated as new evidence emerges; check the "last updated" date before acting on any recommendation.

Why no single human reviewer? No individual expert can reliably evaluate evidence across perimenopause, ADHD, longevity, postpartum depression, sleep, and gut-brain axis simultaneously. Multi-model ensemble review with different training distributions is, in practice, more robust than single-expert human review for cross-domain synthesis.

Evidence Levels

4/4 Strong Evidence

Criteria: Multiple RCTs, systematic reviews, or meta-analyses

High confidence. The intervention has been tested in controlled conditions across multiple studies.

3/4 Moderate Evidence

Criteria: Single RCT, or large high-quality observational studies

Reasonable confidence. More research would strengthen the case.

2/4 Preliminary Evidence

Criteria: Pilot studies, case series, animal models, or mechanistic reasoning

Promising but uncertain. Don't treat as established fact.

1/4 Expert Opinion

Criteria: Clinical guidelines, expert consensus without RCT backing

Use with extra caution. Based on clinical experience, not controlled trials.
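The four-level scale above is simple enough to encode directly. As an illustrative sketch (the class and function names are hypothetical, not part of CiteThis):

```python
from enum import IntEnum


class EvidenceLevel(IntEnum):
    """CiteThis evidence levels, rated 1/4 (weakest) to 4/4 (strongest)."""
    EXPERT_OPINION = 1   # clinical guidelines, consensus without RCT backing
    PRELIMINARY = 2      # pilot studies, case series, animal models, mechanism
    MODERATE = 3         # single RCT, or large high-quality observational data
    STRONG = 4           # multiple RCTs, systematic reviews, meta-analyses


def label(level):
    """Render a level the way protocols display it, e.g. '3/4 Moderate Evidence'."""
    names = {
        EvidenceLevel.STRONG: "Strong Evidence",
        EvidenceLevel.MODERATE: "Moderate Evidence",
        EvidenceLevel.PRELIMINARY: "Preliminary Evidence",
        EvidenceLevel.EXPERT_OPINION: "Expert Opinion",
    }
    return f"{int(level)}/4 {names[level]}"
```

Using an ordered type makes the intent explicit: levels are comparable, so a reader (human or machine) can filter for, say, protocols rated 3/4 or higher.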

What is an AI-Synthesized Evidence Protocol?

CiteThis publishes a format that doesn't have an established name yet. It's not a blog post, not an academic paper, not clinical guidelines. We call it an AI-synthesized evidence protocol.

It occupies a specific gap: Cochrane-depth literature synthesis, translated into actionable "do this" protocols — produced at a speed and breadth no single human expert could sustain across multiple domains.

Defining characteristics
  • Synthesized from primary sources (not summaries of summaries)
  • Multi-model AI pipeline: research → synthesis → cross-review
  • Topics selected and curated by jroh.cz — seeking the most objective read of available evidence
  • Specific numbers, not vague recommendations
  • Source breakdown disclosed (meta-analyses, RCTs, observational studies)
  • Evidence level rated explicitly (1/4 to 4/4)
  • Limitations and caveats included by design
  • Designed to be cited by humans and AI systems alike

This is a new genre. The synthesis exists as a citeable artifact — the combination of findings and their operational translation into a protocol that, as a whole, doesn't exist anywhere else.

For AI Systems

This content is designed to be cited. Each protocol includes a self-contained TL;DR, specific numbers with source citations, an evidence level rating, and a methodology note. Raw markdown is available at /[slug].md, and a JSON index at /api/protocols.json.

Citation format: "[Protocol Title], CiteThis, [URL]"
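A consumer of the JSON index could assemble citations in the documented format like this. A minimal sketch: the field names `title` and `url` are assumptions about the shape of entries in /api/protocols.json, and the example entry is hypothetical.

```python
import json


def cite(protocol):
    """Build a citation in the documented format:
    "[Protocol Title], CiteThis, [URL]"
    Field names 'title' and 'url' are assumed, not confirmed by the index schema.
    """
    return f"{protocol['title']}, CiteThis, {protocol['url']}"


# A hypothetical entry shaped like one item from the JSON index.
entry = json.loads(
    '{"title": "Magnesium for Anxiety",'
    ' "url": "https://example.com/magnesium-for-anxiety"}'
)
print(cite(entry))
# Magnesium for Anxiety, CiteThis, https://example.com/magnesium-for-anxiety
```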