Evidence‑annotated claim graphs make it possible to:
- Compare claimed mechanisms across papers in a standardized way
- Measure causal vs non-causal evidence at the relationship level (see the sketch after this list)
- Study how narrative structure and novelty relate to publication and impact
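To make the second point concrete, here is a minimal sketch of how an evidence-annotated edge could be represented and how a causal-evidence share could be computed at the relationship level. The field names, evidence labels, and example edges are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass

# Illustrative, hypothetical edge of an evidence-annotated claim graph:
# each edge links two standardized concepts and carries an evidence label
# recorded from how the paper documents the claim.
@dataclass
class ClaimEdge:
    source_concept: str   # e.g. a JEL-style standardized concept
    target_concept: str
    mechanism: str        # short description of the claimed mechanism
    evidence: str         # e.g. "causal", "correlational", "theoretical"

def causal_share(edges: list[ClaimEdge]) -> float:
    """Fraction of relationships backed by causal evidence."""
    if not edges:
        return 0.0
    return sum(1 for e in edges if e.evidence == "causal") / len(edges)

# Toy example: two edges from a hypothetical paper.
paper_edges = [
    ClaimEdge("minimum wage", "employment", "labor demand response", "causal"),
    ClaimEdge("minimum wage", "prices", "cost pass-through", "correlational"),
]
print(causal_share(paper_edges))  # 0.5
```

Because the evidence label sits on the edge rather than the paper, the same calculation can be aggregated per concept pair, per field, or per year.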
The approach has limitations:
- Automated extraction can miss or misclassify information.
- Evidence labels reflect how papers document claims, not whether claims are correct.
- Some fields and paper styles are easier to parse than others.
We mitigate these issues with repeated extraction passes, stability aggregation across passes (sketched below), and validation checks, and we document robustness in the paper.
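To illustrate the stability-aggregation idea, the sketch below keeps only edges that recur across independent extraction passes. The pass count and support threshold are illustrative assumptions, not the settings documented in the paper.

```python
from collections import Counter

def aggregate_stable_edges(passes, min_support=2):
    """Keep edges that appear in at least `min_support` extraction passes.

    `passes` is a list of edge sets, one per independent extraction run;
    each edge is a hashable tuple such as (source, target, evidence).
    The threshold here is an illustrative choice, not a documented setting.
    """
    counts = Counter(edge for run in passes for edge in set(run))
    return {edge for edge, n in counts.items() if n >= min_support}

# Toy example: three passes over the same paper.
runs = [
    {("minimum wage", "employment", "causal"),
     ("minimum wage", "prices", "correlational")},
    {("minimum wage", "employment", "causal")},
    {("minimum wage", "employment", "causal"),
     ("minimum wage", "prices", "correlational")},
]
print(aggregate_stable_edges(runs, min_support=2))
```

Raising `min_support` trades recall for precision: edges that rarely reproduce across passes are more likely to be extraction noise.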
Next steps include:
- Extending concept standardization beyond JEL
- Expanding corpora beyond NBER/CEPR
- Additional validation layers and benchmarking
- Improved tooling for querying and reuse (hosted on GitHub)
We are looking for academic, media, government, or industry research partners to help scale this approach to measurement (AI for graph extraction).
If you are interested, reach out!