Unjournal Tools & Interfaces

Tools and resources for evaluators and staff at The Unjournal

Interactive Tools

📝 Evaluation Form

Complete Unjournal evaluations with guided metrics, 90% credible intervals, calibration practice, and multiple export formats (JSON, CSV, Markdown).
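For a sense of the data involved, here is a minimal Python sketch of what one exported metric might look like in the JSON format. The field names are illustrative assumptions, not the form's actual schema.

```python
import json

# Hypothetical shape of a single exported metric; the Evaluation Form's
# actual JSON field names may differ.
record = {
    "metric": "overall_assessment",
    "rating": 72,       # midpoint rating on a 0-100 scale (assumed scale)
    "ci_lower": 60,     # lower bound of the 90% credible interval
    "ci_upper": 85,     # upper bound of the 90% credible interval
}

print(json.dumps(record, indent=2))
```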

🖍️ Claim Highlighter

Systematically highlight and categorize claims in research papers using GiveWell-style assessment methodology. Export as HTML, CSV, or JSON.
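As a rough illustration of the CSV export, the sketch below assumes hypothetical column names; the tool's actual schema may differ.

```python
import csv
import sys

# Write one hypothetical highlighted-claim row to stdout.
writer = csv.writer(sys.stdout)
writer.writerow(["claim_text", "category", "location", "note"])
writer.writerow([
    "Treatment increased enrollment by 12%.",
    "causal",           # e.g. causal / descriptive / normative (assumed labels)
    "p. 14, para 2",
    "Check robustness to attrition.",
])
```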

🔍 Issue Annotation UI

Compare human expert critiques with LLM-generated issue assessments. Score matches and export annotations for analysis.
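A minimal sketch of tallying match scores over exported annotations; the match labels and field names here are assumptions, not the UI's actual scheme.

```python
from collections import Counter

# Made-up annotation records pairing a human critique with an LLM issue.
annotations = [
    {"human_issue": "SEs not clustered at village level",
     "llm_issue": "Clustering of inference unclear",
     "match": "partial"},
    {"human_issue": "Attrition imbalance across arms",
     "llm_issue": None,
     "match": "none"},
]

# Summarize how often the LLM assessments matched the human critiques.
print(Counter(a["match"] for a in annotations))
```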

Guides & Resources

📚 LLM Guides

Guides for using AI tools (NotebookLM, ChatGPT Pro, Elicit) to assist with research evaluation while maintaining critical judgment.

🛠️ Evaluation Tools Guide

Suggested AI and analysis tools for evaluators, including RegCheck, RoastMyPost, NotebookLM, Elicit, and COS TOP Guidelines. Includes Unjournal AI policy.

🎓 Workshop Materials

Training materials for evaluation workshops, including exercises, calibration games, and instructional content.
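For context on the calibration games: well-calibrated 90% intervals should contain the true value about 90% of the time. Below is a minimal sketch of that coverage check, with made-up data.

```python
# Each entry is (lower bound, upper bound, true value) for one guess.
guesses = [
    (10, 40, 25),
    (100, 300, 350),
    (0.2, 0.8, 0.5),
]

# Count how many intervals contain the truth and compare to the 90% target.
hits = sum(lo <= truth <= hi for lo, hi, truth in guesses)
print(f"coverage: {hits}/{len(guesses)} = {hits / len(guesses):.0%} (target: 90%)")
```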

Analysis Tools (Python/CLI)

📊 Citation Analysis

OpenAlex-based citation network analysis for tracking papers that cite Unjournal-evaluated works and identifying research influence patterns.
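A minimal sketch of the kind of OpenAlex query this relies on, using the public works endpoint with its documented `cites:` filter. The work ID is a placeholder, and the repo's actual scripts may structure this differently.

```python
import requests

# Placeholder OpenAlex work ID for an Unjournal-evaluated paper.
WORK_ID = "W2741809807"

# List works that cite the given paper.
resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"cites:{WORK_ID}", "per-page": 25},
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    print(work["publication_year"], work["display_name"])
```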

📄 Paper Change Analysis

Analyze whether authors updated their papers after receiving Unjournal evaluations, comparing before/after versions of each paper against evaluator suggestions.
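A minimal sketch of a before/after text comparison using Python's standard difflib; the actual pipeline presumably operates on full paper versions, not toy strings.

```python
import difflib

# Toy stand-ins for a sentence from two versions of a paper.
before = "We find a significant effect of the program."
after = "We find a modest, imprecisely estimated effect of the program."

# Print a unified diff between the two versions.
for line in difflib.unified_diff(
    before.splitlines(), after.splitlines(),
    fromfile="v1", tofile="v2", lineterm="",
):
    print(line)
```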

🤖 Issue Matching (Backend)

Python scripts for automated issue matching using sentence embeddings and GPT API. Generates data for the Issue Annotation UI.
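A minimal sketch of the embedding side of this matching, using the sentence-transformers library; the model choice and the 0.6 similarity threshold are assumptions, not the repo's actual settings.

```python
from sentence_transformers import SentenceTransformer, util

# Small, commonly used general-purpose embedding model (assumed choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

human_issues = ["Standard errors are not clustered at the village level."]
llm_issues = ["The paper's inference may be too precise; clustering is unclear."]

# Cosine similarity between every human/LLM issue pair.
sims = util.cos_sim(model.encode(human_issues), model.encode(llm_issues))

for i, h in enumerate(human_issues):
    for j, l in enumerate(llm_issues):
        score = float(sims[i][j])
        flag = "match" if score > 0.6 else "no match"  # assumed threshold
        print(f"{score:.2f}  {flag}: {h!r} vs {l!r}")
```

In practice a threshold like this would be tuned against the human annotations collected through the Issue Annotation UI.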