Abstract
Conflict is the default state of any system with multiple agents and competing goals. It is not an anomaly to eliminate but a structured phenomenon to make legible. This paper argues that current AI tools fail at conflict reasoning because of an architectural mismatch, proposes a neurosymbolic infrastructure — an open ontology, a typed knowledge graph, and ontology-augmented generation — as the right response, and invites the community to critique, fork, and build on it.
1. The problem: conflict is structured, current tools are not
Ask any experienced HR leader, mediator, compliance officer, or political analyst what they actually do all day, and the answer is some variant of the same thing: they read a lot of text, try to figure out who said what to whom and when, reconcile conflicting accounts, track commitments across time, and produce a structured artifact — a memo, a briefing, a report, a decision record — that their institution can act on.
The structure is the product. The structure is also invisible. It lives entirely in the practitioner’s head, and when they leave the institution it leaves with them. Almost everything they use to do this work — email, spreadsheets, word processors, generic AI chat — was built for a different job. The gap is not a productivity problem. It is an information-asymmetry problem, inside the institution and between the parties to every dispute it touches.
2. The research backbone
Conflict is one of the oldest topics in the social sciences. The literature is deep and converges: disputes are not irrational noise, they are structured phenomena with predictable components. Fisher and Ury separate positions from interests. Glasl maps escalation stages. Galtung distinguishes direct, structural, and cultural violence. Axelrod formalizes cooperation under repeated interaction. None of this is controversial in the academy — but very little of it has been computationally operationalized.
The Agentic Conflict Ontology (ACO) is our attempt to give decades of conflict theory a single, typed, queryable representation. Eight primitives, 41+ classes, and a commitment to keep it published, critiqueable, and forkable.
3. The eight primitives
The ACO identifies eight primitives that together capture the structure of any dispute: Actor, Claim, Interest, Constraint, Leverage, Commitment, Event, and Narrative. Every dispute — workplace, commercial, governance, diplomatic — can be decomposed into these. The primitives do not prescribe a resolution; they describe the shape of the thing that needs resolving.
Two distinctions do the most work. First, positions versus interests: what a party states versus what they actually need. Second, claims versus commitments: what a party asserts versus what they have bound themselves to. Almost every sustained conflict has a structural mismatch on one of these two axes.
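The primitives and the two load-bearing distinctions can be sketched as typed records. The following is an illustrative Python sketch, not the published ACO schema; it models only four of the eight primitives, and every class and field name here is an assumption.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Four of the eight primitives, enough to show the two distinctions
# above. Field names are hypothetical, not the published ACO schema.

@dataclass
class Actor:
    name: str

@dataclass
class Claim:
    actor: Actor
    text: str            # what the party states (the position)
    source_span: str     # provenance: where in the record it was said

@dataclass
class Interest:
    actor: Actor
    text: str            # what the party actually needs

@dataclass
class Commitment:
    actor: Actor
    text: str            # what the party has bound themselves to
    made_on: date        # commitments live on a timeline
    fulfilled: Optional[bool] = None   # None = still open

# The position/interest mismatch, in miniature:
alice = Actor("Alice")
stated = Claim(alice, "I need the corner office", source_span="email #12")
needed = Interest(alice, "recognition of seniority")
```

The point of the typing is that a mismatch between `Claim` and `Interest`, or between `Claim` and `Commitment`, becomes a queryable fact rather than a mediator's intuition.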
4. Why LLMs alone cannot do this
Large language models, taken on their own, fail at conflict reasoning in three structural ways. Temporality is flattened — transformers see sequences, not timelines, and cannot reliably reconstruct when a commitment was made, broken, or renegotiated. Causality collapses to co-occurrence — they cannot chain multi-hop inferences of the form “A caused B via mechanism M under condition C.” Provenance is absent — a fluent paragraph with no citation is an assertion, not evidence, and every claim a generic LLM produces is, architecturally, an uncited assertion.
These are not bugs to patch. They are properties of the model family.
5. The neurosymbolic response
The response to these structural failures is not to abandon language models. It is to couple them with a symbolic layer that handles what they cannot — time, causality, provenance — and let each layer do what it is good at. The symbolic layer is a typed knowledge graph with a temporal DAG over events and explicit causal edges; queries against it are deterministic and auditable. The neural layer extracts, summarizes, and narrates over that structure. Fluent language, grounded in deterministic truth.
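To make "deterministic and auditable" concrete, here is a minimal Python sketch of such a symbolic layer: typed, timestamped edges and a multi-hop causal query that respects temporal order. The graph shape, edge kinds, and query are illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Edge:
    src: str
    dst: str
    kind: str     # e.g. "caused", "committed_to", "contradicts"
    when: date    # every edge carries a timestamp

class ConflictGraph:
    def __init__(self):
        self.edges = []

    def add(self, src, dst, kind, when):
        self.edges.append(Edge(src, dst, kind, when))

    def causal_chain(self, start, end):
        """Deterministic multi-hop search over 'caused' edges.
        Each hop must not precede the previous one in time, so the
        chain returned is a valid timeline, not just co-occurrence."""
        stack = [(start, [], date.min)]
        while stack:
            node, path, t = stack.pop()
            if node == end:
                return path            # every hop in the path is auditable
            for e in self.edges:
                if e.src == node and e.kind == "caused" and e.when >= t:
                    stack.append((e.dst, path + [e], e.when))
        return None

g = ConflictGraph()
g.add("budget_cut", "missed_deadline", "caused", date(2025, 3, 1))
g.add("missed_deadline", "broken_commitment", "caused", date(2025, 4, 1))
chain = g.causal_chain("budget_cut", "broken_commitment")
```

Each returned edge names its mechanism and date, which is exactly what a generated paragraph cannot do on its own.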
We call the resulting pattern Ontology-Augmented Generation (OAG): a generation step that consults a typed graph first, reasons against its structure, and traces every output claim back to a source span.
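The OAG loop can be sketched in a few lines, with the neural step stubbed out. `Fact`, `FakeGraph`, and the query interface are hypothetical names for illustration; a real system would prompt an LLM with the retrieved facts as grounding context rather than joining them with a template.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    text: str
    source_span: str   # provenance travels with every retrieved fact

class FakeGraph:
    """Stand-in for the typed graph; real queries would be
    deterministic traversals like the causal_chain sketch above."""
    def query(self, question):
        return [Fact("Alice committed to the Q3 deadline on March 1.",
                     "minutes p.4")]

def ontology_augmented_generation(graph, question):
    # 1. Symbolic step: consult the typed graph first.
    facts = graph.query(question)
    # 2. Neural step (stubbed): narrate over the retrieved structure,
    #    carrying each fact's source span into the output.
    return " ".join(f"{f.text} [{f.source_span}]" for f in facts)

answer = ontology_augmented_generation(FakeGraph(), "What did Alice commit to?")
```

The design choice worth noting: the citation is attached before generation, not recovered after it, so an output claim with no span simply cannot occur.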
6. Products as demonstrations
TACITUS’s five products are not separate businesses. They are five demonstrations of what Dialectica becomes when you pick a surface. PRAXIS is the full workspace. Wind Tunnel runs behavioral simulation against an audience model. ARGUS turns document corpora into queryable conflict intelligence. CONCORDIA structures live mediation dialogue. Conflict Compass is the smallest demo of the engine — paste a dispute, see it structured.
Every product reads and writes the same graph. That is the point.
7. The benchmark: TCGC
Claims about reasoning systems are only credible when they can be measured. The TACITUS Conflict Grammar Corpus is our attempt at an honest, open benchmark for conflict reasoning — 14 task types designed to test the things generic LLMs fail at: interest extraction, commitment tracking, contradiction detection, temporal ordering, and causal chain reconstruction.
The TCGC methodology and task types are detailed on the TCGC page. A dataset paper is planned for Q4 2026.
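To make the shape of such a benchmark concrete, here is a hypothetical Python sketch of a single TCGC item and a toy scorer that demands provenance as well as the right answer. The schema and scoring rule are assumptions on our part; the planned dataset paper will define the real ones.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TCGCItem:
    task_type: str            # e.g. "commitment_tracking", "temporal_ordering"
    context: str              # the dispute text the model reads
    question: str
    gold_answer: str
    evidence_spans: List[str] # provenance a correct answer must cite

def score(item, prediction):
    """Toy scorer: full credit requires both the gold answer and a
    citation of at least one gold evidence span in the prediction."""
    cited = any(span in prediction for span in item.evidence_spans)
    return 1.0 if item.gold_answer in prediction and cited else 0.0

item = TCGCItem(
    task_type="commitment_tracking",
    context="(dispute record omitted)",
    question="Did Bob keep his March commitment?",
    gold_answer="No",
    evidence_spans=["email 2025-03-14"],
)
```

A fluent but uncited answer scores zero by construction, which is the benchmark's whole argument in one line.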
8. Open science as a principle
Three commitments keep the scientific layer honest: the ontology is public and forkable; the pipeline is MIT-licensed; the benchmark is a shared artifact, not a competitive moat. A company that makes claims about reasoning and keeps them behind closed doors does not deserve the benefit of the doubt.
9. Risks we take seriously
Building conflict infrastructure carries risks. Misuse for adversarial analysis against private disputants is the most obvious; institutional capture, where the map becomes a mechanism for one party to win, is the subtler one. We do not pretend these risks vanish with good intent. We are building the ontology openly, publishing the pipeline, and limiting early deployments to partners who can articulate why legibility helps the weaker party as well as the stronger one.
10. What we are asking the community
This paper is an invitation. Four invitations, specifically:
- Read the ontology. Disagree in public.
- Try Conflict Compass against a dispute you know well. Tell us where it breaks.
- Propose a new TCGC task type. Help us find the edges we are missing.
- If you run a pilot, write up what worked and what did not, openly.
Conflict is structured. Until now, the structure was invisible. We are making it visible, in the open.
Cite this
@techreport{tacitus2026vision,
  title  = {Making Conflict Legible: A Neurosymbolic Infrastructure for Conflict Reasoning},
  author = {{TACITUS Research}},
  year   = {2026},
  note   = {Working draft v0.9},
  url    = {https://tacitus.me/research/vision}
}