Founder-led, pre-seed infrastructure startup

Referee is research-integrity infrastructure for high-volume scholarly publishing.

We turn scientific and technical evidence into machine-readable reliability scores through a transparent flaw taxonomy. The taxonomy shows where a paper is weak, what has been checked, and what still requires human judgment. Referee supports structured triage and targeted expert review rather than replacing peer review.

Demo available today: the dashboard and paper-checking workflow are live. False-positive handling and coverage depth are still improving.

The problem

Submission volume keeps rising across conferences, journals, and publisher workflows, but intake review capacity does not scale at the same pace. Most teams still depend on fragmented checks and manual triage, which creates bottlenecks and inconsistent screening quality.

High volume, limited intake capacity

Organizers cannot fully review every submission at intake, so weak submissions and high-priority ones are often separated too late in the process.

Fragmented integrity checks

Current tools typically solve one check at a time, leaving teams without a unified view of what has been evaluated and what remains open.

Low auditability

Traditional peer review was not designed for today's scale or threat models, and its outputs are hard to audit in structured, reusable form.

The solution

Referee is infrastructure for structured research screening and triage. We do not deliver a black-box score, and we do not replace expert review. Instead, Referee organizes screening results into a structured evaluation record that maps reliability signals to explicit flaw categories and unresolved checks.

Transparent reliability scoring

Reliability scores are machine-readable outputs tied to concrete evidence. Each score can be inspected through supporting flaw categories and open questions that still need human judgment.
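As a minimal sketch of what "machine-readable and tied to evidence" can mean in practice, the record below shows a score that is always inspectable through its supporting findings and open questions. All type and field names here are illustrative assumptions, not Referee's actual schema.

```rust
/// Hypothetical sketch; names are illustrative, not Referee's schema.
/// A single flaw finding tied to an explicit flaw category.
#[derive(Debug, Clone)]
struct FlawFinding {
    category: String, // explicit flaw category identifier
    evidence: String, // pointer to the concrete supporting evidence
}

/// The inspectable score: the number is never delivered alone.
#[derive(Debug)]
struct ReliabilityScore {
    score: f32,                  // e.g. 0.0 (weak) .. 1.0 (strong)
    findings: Vec<FlawFinding>,  // what was flagged, and why
    open_questions: Vec<String>, // unresolved items needing human judgment
}

fn main() {
    let record = ReliabilityScore {
        score: 0.62,
        findings: vec![FlawFinding {
            category: "statistical-power".into(),
            evidence: "sample size not justified".into(),
        }],
        open_questions: vec!["novelty relative to prior work".into()],
    };
    // Every score can be traced back to findings and open questions.
    println!(
        "score {:.2}: {} finding(s), {} open question(s)",
        record.score,
        record.findings.len(),
        record.open_questions.len()
    );
    for f in &record.findings {
        println!("  flagged: {} ({})", f.category, f.evidence);
    }
}
```

The design point is that the score is an index into the record, not a verdict: a reviewer can always drill from the number down to the flagged categories and the questions left open.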

Current demo scope

A live dashboard and paper-checking workflow are available for walkthroughs today. The platform is early: coverage breadth and false-positive handling are still being improved.

How it works

Our proprietary Common Research Weakness Enumeration (CRWE) is the core framework behind Referee's workflow.

1. Ingest paper evidence

Referee analyzes scientific and technical evidence from submission materials and creates a structured screening context.

2. Map checks to CRWE categories

Screening checks are mapped to explicit flaw classes so teams can see where the paper appears weak and where evidence is incomplete.

3. Record checked vs unresolved items

The system logs what has been checked, what remains unresolved, and which items require expert judgment before final decisions.

4. Produce reliability signals for triage

Outputs become machine-readable reliability signals that support intake triage and targeted human review, not autonomous acceptance or rejection.
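The four steps above can be sketched as a small pipeline that folds a check record into a triage signal. Everything here (type names, categories, the routing rule) is an assumption for illustration, not Referee's implementation; the one property it is meant to show is that the output routes papers to queues and never accepts or rejects them.

```rust
// Illustrative pipeline sketch; names and the routing rule are
// assumptions, not Referee's implementation.

#[derive(Debug, PartialEq)]
enum CheckStatus {
    Passed,
    Flagged,
    Unresolved,
}

/// Step 2: each check is mapped to an explicit flaw class.
struct Check {
    category: &'static str, // illustrative flaw-class identifier
    status: CheckStatus,    // step 3: checked vs unresolved
}

/// Step 4: the output is a triage signal, not an accept/reject decision.
#[derive(Debug, PartialEq)]
enum TriageSignal {
    RouteToExpert,
    StandardQueue,
}

fn triage(checks: &[Check]) -> TriageSignal {
    // Anything flagged or unresolved needs human judgment before decisions.
    let needs_expert = checks
        .iter()
        .any(|c| c.status != CheckStatus::Passed);
    if needs_expert {
        TriageSignal::RouteToExpert
    } else {
        TriageSignal::StandardQueue
    }
}

fn main() {
    // Step 1 (ingest) is assumed to have produced these checks.
    let checks = vec![
        Check { category: "data-availability", status: CheckStatus::Passed },
        Check { category: "baseline-comparison", status: CheckStatus::Unresolved },
    ];
    for c in &checks {
        println!("{}: {:?}", c.category, c.status);
    }
    println!("signal: {:?}", triage(&checks));
}
```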

Referee workflow from evidence checks to structured evaluation output

Workflow direction: the demo already shows dashboard and paper-checking foundations, while broader coverage is still in progress.

Why it matters

Referee helps teams triage faster, use expert time more effectively, and maintain clear records of research risk without making claims that correctness can be fully automated.

Better triage

Sort submissions by structured risk signals before expert review queues become overloaded.

Better use of expert time

Focus reviewers on unresolved technical weaknesses instead of repeating broad preliminary checks.

Clear documentation and auditability

Keep a transparent record of what was checked, what is weak, and what still needs human decision-making.

Who it is for

Conference organizers

Create structured intake triage when submission volume outpaces available reviewer capacity.

Publishers and journals

Unify fragmented integrity checks and improve visibility into what is complete versus still pending.

Research organizations

Run structured evaluation at scale with transparent records of potential weaknesses and unresolved risks.

Future expansion markets build on this wedge and include investment due diligence, pharma and biotech scouting, grant allocation, and enterprise R&D decisions.

What makes Referee different

Beyond point solutions

Most integrity tools focus on isolated checks such as plagiarism or anomaly detection. Referee adds an evaluation layer that combines checks into a reusable, structured record.

Transparent, challengeable output

Referee reliability signals are linked to explicit flaw categories and unresolved checks, making the output auditable instead of opaque.

Infrastructure mindset

Referee is built as workflow infrastructure for scholarly publishing operations, not as a manifesto or nonprofit movement layer.

Founder-owned technical architecture

The platform is architected and implemented by the founder, with core infrastructure built in Rust for reliability and performance.

Founder


Erik Schneider is the founder of Referee. He owns the vision, product roadmap, system architecture, and implementation end to end.

He defines the evaluation model, translates it into product and technical decisions, and builds the platform in Rust. The focus is practical: deliver auditable screening infrastructure that supports expert judgment in high-volume publishing workflows.

Book a demo

If you run conference, journal, publishing, or research-intake workflows, let's walk through the current product together. The demo includes a live dashboard and paper-checking flow, and we'll be explicit about what is ready now versus still in progress.

Or email directly: erik@referee-project.com