Virginia Tech School of Public and International Affairs

Virginia Tech School of Public and International Affairs

Job, Internship, and Fellowship Openings for Students and Alumni of the Virginia Tech School of Public and International Affairs

Cothon
Remote
Summer 2026 Research Internship
Internship

Summer 2026 Research Internship

Bias Detection Capability — Terms of Reference


Build bias detection for AI in international development contexts


Cothon is offering a remote research internship for a graduate student to develop our foundational bias detection capability for LLM outputs in humanitarian and international development contexts.


We are building the governance layer between international development organizations and AI systems. Cothon’s platform provides secure LLM workspaces with guardrails, evidence verification, and audit trails for high-stakes aid work. For these teams, unverified claims can end funding relationships and affect vulnerable communities.


Dates: Summer 2026 (approximately 10–12 weeks, flexible start/end)


Supervision and Structure: Fully remote. Reports to Founding Engineer, with regular interaction with Co-Founders (CEO and CPO)


Compensation: Stipend available


Level: Graduate student (MS or PhD) in Computer Science, Data Science, AI/ML, or related field


The Project

LLMs used in humanitarian work carry sector-specific bias risks that generic fairness toolkits miss: geographic bias in crisis framing, cultural assumptions in program design, gender bias in vulnerability assessments, and colonial framing in development narratives. Your work will develop Cothon’s foundational bias detection capability.


Phased Deliverables


Phase 1 — Bias Taxonomy & Research Report (Weeks 1–3)

Define and categorize the bias types most relevant to humanitarian and development AI use cases. Produce a structured research report grounded in sector literature, existing academic frameworks, and real-world examples. This deliverable should reference existing bias frameworks from academic literature and sector publications where available, adapting them to humanitarian AI contexts. The report should be written to a publishable standard.


Example of bias categories to explore:

• Geographic and regional bias in crisis and conflict reporting

• Cultural and normative assumptions in program recommendations

• Gender and demographic bias in vulnerability and needs assessments

• Colonial and power-dynamic framing in development narratives

• Institutional and donor bias in resource allocation

• Local knowledge deprioritization relative to donor framing


Phase 2 — Evaluation Dataset (Weeks 4–7)

Build a labeled test suite of prompts and LLM outputs grounded in the Phase 1 taxonomy. This dataset becomes a durable, reusable asset for ongoing bias evaluation.


Example features of the dataset:

• Humanitarian-context test prompts across bias categories

• Labeled LLM responses with bias annotations and severity ratings

• Documentation of labeling methodology

• Sample validation with Cothon team or sector expert to establish inter-rater baseline

• Open-source the dataset and benchmark


Phase 3 — Detection Prototype (Weeks 8–10)

Develop an LLM-based bias detection approach using the Phase 1 taxonomy, validated against the Phase 2 dataset. We assume the detection mechanism will use structured prompting rather than custom model training.


• Bias detection prompts designed to score outputs against taxonomy categories

• Structured output format: bias type, severity score, explanation, source span

• Performance benchmarks against evaluation dataset

• Technical documentation sufficient for future integration


Extension Opportunity — Cross-Provider Benchmarking

If time permits, benchmark multiple LLM providers against the evaluation dataset to assess comparative bias performance in humanitarian contexts.


What You’ll Gain

• Direct collaboration with founders building at the intersection of AI governance and international development action

• Hands-on experience with production AI systems and Cothon’s development environment and architecture

• Potential to co-author publishable research

• Exposure to agentic coding workflows and business practices

• A concrete, portfolio-ready deliverable with real-world application


Ideal Candidate

• Graduate student (MS or PhD) in Computer Science, Data Science, AI/ML, NLP, or related field

• Familiarity with bias, fairness, or responsible AI concepts

• Experience with Python, modern ML frameworks, and working with LLM APIs

• Strong research methodology and technical writing skills (English)

• Interest in humanitarian technology, international development, or AI governance

• Comfortable working across time zones


Supervision & Structure

This internship is fully remote. The intern will report to the Founding Engineer, with regular interaction across the founding team.


How to Apply

Complete the application form at this link


Required: CV and a brief statement of interest (approximately 250–400 words).

Optional: GitHub profile, writing sample, or other relevant materials.

Questions? Contact Will Culhane, Co-Founder & CPO, at will@cothonai.com


About Cothon

Cothon is an AI governance platform built for humanitarian and development organizations. While 93% of humanitarian workers already use AI tools, only 8% of organizations have governance infrastructure. Cothon sits between organizations and AI systems, controlling inputs and outputs while maintaining audit trails, evidence verification, and human-in-the-loop validation. The founding team brings 30+ years of combined field experience across UN agencies, emergency response, climate finance, and humanitarian data operations.