Program-officer workflows
Intake review, portfolio reporting, and 990 cross-reads across 90-plus staff.
For the Knight team at eMerge Americas 2026
The work funded out of the Partnership on AI project, the $4M Knight/USC Marshall initiative, and the new VP of AI & Insights role reads as a single thesis: responsible AI integration is a systems problem, not a tooling problem. The foundation is treating AI the way it treats journalism: as civic infrastructure that has to be built, governed, and taught.
Most organizations automate the wrong things. Roughly 60% of what a program officer or a grantee newsroom does each week is traditional code and database work: intake review, document comparison, reporting templates, 990 cross-reads. Another 30% is rule-based logic: eligibility checks, compliance routing, budget formulas. The remaining 10% is the genuinely open-ended reasoning where a large language model earns its place. Teams that skip this split waste money putting LLMs where Postgres would finish the job faster and with better guarantees.
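The split above can be sketched as a simple triage step: route each workflow to the cheapest layer that can handle it, and let only the remainder fall through to an LLM. This is a hypothetical illustration; the layer names and example tasks are ours, not an Eduba or Knight system.

```python
# Hypothetical triage sketch for the 60/30/10 split described above.
# Task lists are illustrative examples, not a real product API.

LAYERS = {
    # ~60%: traditional code and database work
    "deterministic": {"intake review", "document comparison",
                      "reporting template", "990 cross-read"},
    # ~30%: rule-based logic
    "rules": {"eligibility check", "compliance routing", "budget formula"},
}

def route(task: str) -> str:
    """Return the cheapest layer that handles the task; default to the LLM."""
    for layer, tasks in LAYERS.items():
        if task in tasks:
            return layer
    return "llm"  # ~10%: genuinely open-ended reasoning

print(route("990 cross-read"))                 # deterministic: code/SQL finishes it
print(route("eligibility check"))              # rules: no model needed
print(route("summarize a grantee narrative"))  # llm: falls through
```

The design point is the default: anything not explicitly claimed by a cheaper layer falls through to the model, rather than the model claiming everything by default.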
Applied to Knight's stack, the interesting question is not which vendor to pick. It is which layer each grantee's actual friction belongs on, and how to teach that distinction inside a 90-person foundation and across a distributed grantee portfolio.
A recent cohort delivery through Correlation One reached Pacific Life and Colgate-Palmolive: 1,500+ people trained since May 2025, 6,000 to 9,000 hours saved per year, and 95% still using the tools 30 days after the workshop. That shape, a structured methodology carried across a wide, distributed audience of working professionals who cannot stop doing their day job to learn a new stack, is the same shape as a Knight grantee capacity-building cohort. It is not the same industry. It is the same problem.
The underlying methodology is published as the Interpretable Context Methodology, submitted to ACM TiiS (github.com/RinDig/Interpretable-Context-Methodology-ICM-, MIT license). The paper makes a specific claim: agent context can be organized as a layered filesystem with measurable interpretability and reproducibility gains. That is the kind of methodology piece Knight-funded researchers at USC Marshall, Harvard Kennedy Data-Smart, and the Information & Society grantees are already engaging with.
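The core idea, context organized as layers that override one another the way a filesystem overlay does, can be illustrated in a few lines. This is our own toy simplification of the general concept, not code from the ICM repository, and the function and file names are invented for the example.

```python
import os
import tempfile

def resolve(layers, name):
    """Return `name` from the highest-priority layer that contains it.
    Later entries in `layers` override earlier ones (last layer wins)."""
    for layer in reversed(layers):
        path = os.path.join(layer, name)
        if os.path.exists(path):
            with open(path) as f:
                return f.read()
    raise FileNotFoundError(name)

# Toy two-layer context: a base layer plus a task-specific overlay.
base = tempfile.mkdtemp()
task = tempfile.mkdtemp()
with open(os.path.join(base, "tone.md"), "w") as f:
    f.write("house style")
with open(os.path.join(task, "tone.md"), "w") as f:
    f.write("grant-report style")

print(resolve([base, task], "tone.md"))  # overlay wins: grant-report style
print(resolve([base], "tone.md"))        # base alone: house style
```

Because each lookup resolves to one concrete file in one named layer, you can always answer "where did this instruction come from," which is the interpretability property the paper is claiming measurable gains on.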
A second option, if the conversation turns toward platform governance, is the Ethics Engine: a psychometric assessment tool for evaluating ideological and moral patterns in LLMs. Paper at arxiv.org/abs/2510.11742; repo at github.com/RinDig/AuditEngine.
Jake Van Clief built this practice after eight years in the Marine Corps on cryptographic systems and F-35 avionics, a Future Governance MSc at the University of Edinburgh, and more than 1,500 people trained across enterprise engagements since May 2025. Portfolio evidence spans the sectors Knight funds: KPMG UK (one of the Big Four), with 40-plus regulated-industry executives trained; Feeld, a product-company CTO engagement; VigilOre, compliance document compression in a regulated extractive sector; and Edinburgh / UKICER on the academic side.
Eduba partners with NLP Logix for work that sits below the orchestration layer. NLP Logix has been doing machine learning since 2011 and employs more than 150 data scientists.
Thirty minutes with Matt Creamer on a call. Bring one workflow inside the foundation that eats your week, and we will do a live read on where it belongs in the stack.
Book 30 minutes with Matt Creamer. Matt Creamer, CRO, Eduba.