Tools and resources for evaluators and staff at The Unjournal
Complete Unjournal evaluations with guided metrics, 90% credible intervals, calibration practice, and multiple export formats (JSON, CSV, Markdown).
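As a rough sketch of what an export might contain (the field names and scales below are illustrative, not the tool's actual schema), each guided metric can be stored with a midpoint rating and a 90% credible interval, then written out as JSON or flattened to CSV:

```python
import csv
import json

# Hypothetical evaluation record; field names and scales are illustrative only.
record = {
    "paper": "Example et al. (2024)",
    "metrics": [
        {
            "name": "overall_assessment",
            "midpoint": 78,          # rating on a 0-100 scale
            "ci_90": [65, 88],       # 90% credible interval
        }
    ],
}

# JSON export
with open("evaluation.json", "w") as f:
    json.dump(record, f, indent=2)

# Flat CSV export of the same metrics
with open("evaluation.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["metric", "midpoint", "ci_90_low", "ci_90_high"])
    for m in record["metrics"]:
        writer.writerow([m["name"], m["midpoint"], m["ci_90"][0], m["ci_90"][1]])
```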
Systematically highlight and categorize claims in research papers using GiveWell-style assessment methodology. Export as HTML, CSV, or JSON.
Compare human expert critiques with LLM-generated issue assessments. Score matches and export annotations for analysis.
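Once issues have been matched (see the embedding-based scripts described below), one simple way to summarize agreement is precision and recall of the LLM's issues against the human critiques. The function below is an illustrative sketch, not the tool's actual scoring rule:

```python
# Illustrative agreement scoring for already-matched human/LLM issue pairs.
def agreement_scores(n_human: int, n_llm: int, n_matched: int) -> dict:
    """Precision: share of LLM issues that match a human critique.
    Recall: share of human critiques the LLM also found."""
    precision = n_matched / n_llm if n_llm else 0.0
    recall = n_matched / n_human if n_human else 0.0
    return {"precision": precision, "recall": recall}

print(agreement_scores(n_human=12, n_llm=9, n_matched=6))
# e.g. {'precision': 0.67, 'recall': 0.5} (approximately)
```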
Guides for using AI tools (NotebookLM, ChatGPT Pro, Elicit) to assist with research evaluation while maintaining critical judgment.
Suggested AI and analysis tools for evaluators, including RegCheck, RoastMyPost, NotebookLM, Elicit, and COS TOP Guidelines. Includes Unjournal AI policy.
Training materials for evaluation workshops, including exercises, calibration games, and instructional content.
OpenAlex-based citation network analysis for tracking papers that cite Unjournal-evaluated works and identifying research influence patterns.
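For a sense of the query involved, the public OpenAlex API can return all works that cite a given paper. The sketch below fetches the first page of citing works for a placeholder work ID (pagination and error handling omitted):

```python
import requests

# Placeholder OpenAlex work ID for an Unjournal-evaluated paper.
EVALUATED_WORK_ID = "W0000000000"

# Fetch the first page of works that cite the evaluated paper.
resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"cites:{EVALUATED_WORK_ID}", "per-page": 50},
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    print(work["publication_year"], work["display_name"], work["id"])
```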
Analyze whether authors updated their papers after receiving Unjournal evaluations. Compare before/after versions and evaluator suggestions.
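A minimal sketch of one way to surface such updates, assuming plain-text versions of the two drafts are available (the filenames are placeholders, and the real analysis may work at the section or claim level rather than line by line):

```python
import difflib

# Placeholder filenames for text extracted from the pre- and post-evaluation drafts.
before = open("paper_v1.txt").read().splitlines()
after = open("paper_v2.txt").read().splitlines()

# Keep only added/removed lines from a unified diff of the two versions.
changes = [
    line for line in difflib.unified_diff(before, after, lineterm="")
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
print(f"{len(changes)} changed lines between versions")
```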
Python scripts for automated issue matching using sentence embeddings and GPT API. Generates data for the Issue Annotation UI.
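A condensed sketch of the embedding step, assuming the sentence-transformers library; the model name and similarity threshold are illustrative, and the GPT-based adjudication pass is omitted:

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative model choice; the actual scripts may use a different one.
model = SentenceTransformer("all-MiniLM-L6-v2")

human_issues = ["The identification strategy ignores selection into treatment."]
llm_issues = [
    "Selection bias is not addressed in the treatment assignment.",
    "Figure 3 axis labels are unclear.",
]

# Embed both issue lists and compute pairwise cosine similarities.
emb_human = model.encode(human_issues, convert_to_tensor=True)
emb_llm = model.encode(llm_issues, convert_to_tensor=True)
scores = util.cos_sim(emb_human, emb_llm)

THRESHOLD = 0.6  # illustrative cutoff for flagging a candidate match
for i, human in enumerate(human_issues):
    for j, llm in enumerate(llm_issues):
        s = scores[i][j].item()
        if s >= THRESHOLD:
            print(f"match ({s:.2f}): {human!r} <-> {llm!r}")
```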