TACITUS
A typed, cited, time-ordered foundation for the disputes humans actually work on, so the AI in the loop helps instead of hallucinating the facts.
You manage disagreement, for a living or by accident. TACITUS turns messy dispute text into a typed, cited, time-ordered map, so the next decision is the hard one, not the confused one.
Generic AI is strong at language, weak at structure. It collapses time, causality, and provenance: the three things HR, legal, policy, peace, and mediation teams cannot afford to lose. TACITUS supplies the foundation underneath.
Infrastructure for humans and for the AI helping them
Every page here is written for both. Language models can brief themselves from /llms.txt, /ontology.json, and the full /for-llms pitch, then return to the human with structure, citations, and a grammar the case actually fits.
ENTRY POINTS
Four entry points today, one engine underneath. Each card leads with what you are trying to do, not with who you are.
Human-friction professionals
See the case (actors, claims, interests) in under a minute.
Paste any grievance, termination, board disagreement, or mediation transcript. TACITUS extracts the primitives, surfaces contradictions, and gives you a shared map every party can argue with.
Institutional analysts
Turn a document corpus into a queryable map of the fight.
Ingest reports, transcripts, and field notes. Build a cited conflict graph across actors and time. Generate the briefing a decision-maker can actually act on, provenance intact.
Researchers & academics
Read the ontology. Use the benchmark. Disagree in public.
8 primitives, 41+ typed classes, MIT-licensed pipeline on GitHub. An open benchmark (TCGC) for reasoning over disagreement. Working notes, open questions, the Vision draft.
Developers & product teams
Plug structured conflict memory into your app.
Managed API, Python and TypeScript SDKs, or self-host the open-source pipeline. Give your product a memory that survives the conversation and a reasoning layer that traces back to source.
ONTOLOGY PLAYGROUND · INTERACTIVE
Pick one of three canonical samples, click Structure it, and watch Dialectica tag every primitive (actors, claims, interests, commitments, constraints, leverage, events, narratives) with full provenance.
INPUT · dispute text
Public criticism on an internal channel, escalated to HR.
Alex publicly criticised Maya on an internal channel on May 12, asserting that Maya missed the Q2 deadline. Maya escalated to HR on May 14, filing a complaint that Alex violated the professionalism policy. Maya wants to keep her role and a written apology. Alex has said continued collaboration depends on a formal written apology. HR is bound by the grievance procedure §4.
8 PRIMITIVES
GRAPH STATS · typed output
Every node is bound to the source span it came from. Every edge is typed. Every claim is auditable back to the input.
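For readers who want to see the shape of that typed output in code, here is a minimal sketch of the kind of primitive the playground produces for the sample above. Class and field names are illustrative; the shipped tacitus-ontology models may differ.

```python
# Illustrative sketch only; not the shipped tacitus-ontology schema.
from dataclasses import dataclass

@dataclass
class SourceSpan:
    doc_id: str   # which input document the primitive came from
    text: str     # the exact span it was extracted from

@dataclass
class Actor:
    id: str
    name: str

@dataclass
class Claim:
    id: str
    asserted_by: str     # Actor.id
    text: str            # normalised claim
    source: SourceSpan   # provenance binding: auditable back to the input

alex = Actor("a1", "Alex")
c1 = Claim(
    "c1",
    asserted_by="a1",
    text="Maya missed the Q2 deadline",
    source=SourceSpan("playground-input", "asserting that Maya missed the Q2 deadline"),
)
```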
THE PROBLEM
The failures are not bugs. They are architectural properties, and each one has a direct response inside the engine.
FAILS · Temporality
Transformers see sequences, not timelines. Ask when a commitment was made, broken, or re-negotiated and a generic LLM guesses plausible dates. Dialectica maintains a temporal DAG under the graph.
TACITUS · response
Temporal DAG under every case
Events are nodes with timestamps. Claims are version-stamped. "Who said what, when" is a graph query, not a guess.
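A minimal sketch of what that buys you, assuming a hypothetical in-memory view of the DAG rather than the shipped query API: ordering a commitment's history becomes a filter and a sort, not a plausible-sounding guess.

```python
# Hypothetical view of the temporal DAG: events are nodes with timestamps. Dates illustrative.
from datetime import date

events = [
    {"id": "ev1", "ts": date(2024, 3, 1),  "type": "CommitmentMade",         "commitment": "cm-a"},
    {"id": "ev2", "ts": date(2024, 4, 12), "type": "CommitmentRenegotiated",  "commitment": "cm-a"},
    {"id": "ev3", "ts": date(2024, 5, 2),  "type": "CommitmentViolated",      "commitment": "cm-a"},
]

# "When was cm-a made, renegotiated, broken?" is a graph read, not a guess.
for e in sorted((e for e in events if e["commitment"] == "cm-a"), key=lambda e: e["ts"]):
    print(e["ts"], e["type"])
```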
FAILS · Causality
Statistical co-occurrence is not causation. Ask which event caused which response, across three actors and six turns, and a generic LLM invents connections. Dialectica stores causal edges as first-class graph objects.
TACITUS · response
Typed causal edges, first-class
TriggeredBy, EscalatedFrom, BlockedBy, EnabledBy. Chains are explicit, walkable, and cite the source span that justifies each edge.
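A sketch of what "explicit and walkable" means in practice; the edge tuples, event ids, and span references below are illustrative, not the shipped schema.

```python
# Typed causal edges stored as (event, edge_type, prior_event, provenance_span). Values illustrative.
edges = [
    ("ev3", "EscalatedFrom", "ev2", "minutes-07, lines 4-9"),
    ("ev2", "TriggeredBy",   "ev1", "report-03, lines 12-18"),
]

def causal_chain(event_id):
    """Walk typed causal edges backwards from an event, yielding each hop with its source span."""
    for src, kind, dst, span in edges:
        if src == event_id:
            yield (src, kind, dst, span)
            yield from causal_chain(dst)

for hop in causal_chain("ev3"):
    print(hop)  # each hop is explicit, walkable, and cites the span that justifies it
```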
FAILS · Provenance
A fluent paragraph with no citation is an assertion, not evidence. Dialectica binds every extracted primitive to its source span, so every downstream claim is auditable back to the original document.
TACITUS · response
Every primitive bound to source
Each actor, claim, commitment, event carries a link back to the exact text span it was extracted from. Answers cite by construction.
This is what machine-legible conflict looks like end-to-end. We built a synthetic ceasefire so every ontology primitive, every edge type, and at least one load-bearing contradiction would appear somewhere in the graph.
CASE BRIEFING · FICTIONAL · CONSTRUCTED TO EXERCISE THE ONTOLOGY
Three parties, four weeks, one collapsed agreement. The Government of Taruna and the Keshara Liberation Front (KLF) signed a ceasefire in January, mediated by UNSOM with a local Elder Council witnessing. Three weeks later the village of Mira was raided; each side blamed the other; the emergency guarantor session failed to reconcile contradictory accounts; by Feb 08 the ceasefire had formally collapsed.
Why this matters. An LLM given the full document set will produce a fluent summary and miss the key inferences. The graph does not miss them: it holds actors as nodes, claims as contested edges, events along a temporal DAG, and commitments tagged with status. The engine can then answer who broke which commitment, when, under which constraint, and what narrative each party adopted, in one deterministic query, with source spans attached.
Below is the same crisis as a TACITUS graph. Hover a labelled node to light its edges. Click to pin. Hover any of the small white dots in the background: those are the latent bindings (token spans, sub-events, provenance traces) the engine indexes silently; they surface on hover so you can see the size of what the machine actually holds.
EVENT TIMELINE · TEMPORAL DAG
Ceasefire signed in Nairobi by KLF, Gov. of Taruna, under UNSOM mediation
Village of Mira raided; KLF and Gov. make contradictory claims about initiator
Emergency guarantor session convened; mediators surface both claims with source provenance
Ceasefire collapses. Commitment cm1 (weapons withdrawal) formally violated
WHAT THE GRAPH SURFACES
Each finding is a graph query, not a prose summary. The engine holds the structure; the LLM narrates it: grounded, citable, re-runnable on any new case.
CONTRADICTION
Gov. of Taruna asserts KLF violated the ceasefire on Feb 03. KLF asserts Gov. entered the DMZ first the same day. Same event (e2), same date, opposite attributions. The engine stores both claims (c1 and c2), links them with a CONTRADICTS edge, and any generation downstream is forced to cite both rather than quietly pick a side.
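In code, the check is mechanical. A sketch with made-up records (field names are not the shipped schema) of how the CONTRADICTS pair over e2 is surfaced:

```python
# Illustrative records; ids mirror the case above, field names are not the shipped schema.
claims = {
    "c1": {"by": "gov_taruna", "event": "e2", "text": "KLF violated the ceasefire on Feb 03"},
    "c2": {"by": "klf",        "event": "e2", "text": "Government forces entered the DMZ first"},
}
contradicts = [("c1", "c2")]  # typed edge recorded at extraction time

# Any briefing that touches e2 must carry both sides of every CONTRADICTS pair.
for a, b in contradicts:
    if claims[a]["event"] == claims[b]["event"]:
        print(claims[a]["by"], "vs", claims[b]["by"], "- cite both, drop neither")
```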
INTEREST / POSITION GAP
At the surface this reads as an armed-group dispute. With primitives separated, the underlying interest is regional autonomy (i1), verifiable in the KLF manifesto and field interviews. That opens resolution pathways (federated status, reserved seats, fiscal devolution) that never surface at the position layer.
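As a data-modelling sketch (field names and the position text are illustrative), the separation is simply two distinct typed nodes attached to the same actor, which is what lets pathway generation work from interests rather than positions:

```python
# Positions are what a party says it wants; interests are why. Distinct typed nodes, same actor.
# The position text below is illustrative; the interest and its evidence come from the case.
position = {"id": "p1", "actor": "klf", "text": "government forces out of the contested area"}
interest = {"id": "i1", "actor": "klf", "text": "regional autonomy",
            "evidence": ["KLF manifesto", "field interviews"]}
```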
COMMITMENT DRIFT
Commitment cm1 (weapons withdrawal) is marked VIOLATED. Commitment cm2 (humanitarian corridor, annex §4) remains ACTIVE: distinct node, distinct status, no cascading invalidation. A mediator reading prose alone would see "ceasefire collapsed" and miss that humanitarian access is still legally live.
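A sketch of why the statuses do not cascade, with illustrative field names: each commitment is its own node carrying its own status, so finding what is still live is a filter, not a judgement call.

```python
# Illustrative field names; commitments and statuses are from the case above.
commitments = {
    "cm1": {"text": "weapons withdrawal",               "status": "VIOLATED"},
    "cm2": {"text": "humanitarian corridor (annex §4)", "status": "ACTIVE"},
}

# What is still legally live after the collapse?
still_active = [cid for cid, c in commitments.items() if c["status"] == "ACTIVE"]
print(still_active)  # ['cm2']
```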
TEMPORAL CHAIN
Typed edges TRIGGERS, ESCALATES_TO, and VIOLATES connect four events across five days. This is not just correlation in a timeline: each edge carries the provenance span that justifies it, so the causal chain is defensible in an after-action review.
INFERENCE · NARRATIVE SHIFT
Narrative n1 ("freedom struggle") and n2 ("counter-terrorism") both attach to event e2 (the Mira raid): the SAME event, two REFRAMES edges. The graph lets you ask "how did each party narrate this event, and when did the framing shift?", a question generic retrieval cannot structure.
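A sketch of the underlying query, with illustrative field names and timestamps: group REFRAMES edges by the event they target, then order them in time to see when the framing moved.

```python
# REFRAMES edges attaching narratives to the same event; timestamps illustrative.
reframes = [
    {"narrative": "n1 (freedom struggle)",  "actor": "klf",        "event": "e2", "ts": "2025-02-03"},
    {"narrative": "n2 (counter-terrorism)", "actor": "gov_taruna", "event": "e2", "ts": "2025-02-04"},
]

# "How did each party narrate e2, and when did the framing shift?"
for r in sorted((r for r in reframes if r["event"] == "e2"), key=lambda r: r["ts"]):
    print(r["ts"], r["actor"], "->", r["narrative"])
```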
INFERENCE · LEVERAGE ASYMMETRY
Leverage nodes l1 (passes) and l2 (international recognition) quantify the power asymmetry. Walk the WIELDS edges from each actor and you can inventory who can move the needle in which domain, the precondition for any resolution proposal that expects to stick.
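A sketch of that inventory walk, with the WIELDS edges written as illustrative tuples:

```python
from collections import defaultdict

# WIELDS edges as (actor, leverage_node, domain) tuples; domain labels illustrative.
wields = [
    ("klf",        "l1 (passes)",                    "movement and access"),
    ("gov_taruna", "l2 (international recognition)", "diplomatic"),
]

inventory = defaultdict(list)
for actor, leverage, domain in wields:
    inventory[actor].append((domain, leverage))

for actor, items in inventory.items():
    print(actor, items)  # who can move the needle, and in which domain
```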
WHY THIS HELPS LLMS
STEP 1
Ask 'who violated what, and when?' and the graph returns typed nodes and edges with timestamps and source-document spans. No statistical guess; the answer is a database read.
STEP 2
Downstream language models receive ontology-shaped context ("Actor X asserted Claim Y against Commitment Z"), not a wall of prose. Fewer tokens are wasted reconstructing structure the graph already knows.
STEP 3
Every extracted primitive carries provenance back to the source span. Any LLM output derived from the graph is natively citable; hallucination on keyed facts is architecturally harder.
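A sketch of what ontology-shaped, natively citable context could look like when handed to a model; the predicate names and span references are illustrative, not the shipped serialisation.

```python
# Graph facts already resolved by deterministic queries, packed for the prompt.
facts = [
    ("Gov_Taruna", "ASSERTS",     "c1: KLF violated ceasefire (Feb 03)",       "doc-02, lines 14-16"),
    ("KLF",        "ASSERTS",     "c2: Government entered DMZ first (Feb 03)", "doc-05, lines 3-5"),
    ("c1",         "CONTRADICTS", "c2",                                        "derived edge"),
]

context = "\n".join(f"{s} {p} {o}  [source: {span}]" for s, p, o, span in facts)
print(context)  # compact, typed, citable context instead of a wall of prose
```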
STANDARD LLM FLOW
Input prose → attention over tokens → fluent answer without verifiable ground
OAG FLOW (TACITUS)
Input prose → extract → typed graph → deterministic query → fluent answer with source-bound citations
This is what the term neurosymbolic means in practice. The graph carries the structural weight; the language model stays in charge of the language. Each covers the other’s blind spot.
41+ classes · 29+ properties · Pydantic + OWL/Turtle · pip install tacitus-ontology
Explore the full ontology →
THE PRODUCT SUITE
Every TACITUS product reads and writes the same typed conflict graph. Structure built in one surface is instantly available in all the others. Pick the surface that matches the work you actually do.
All products are under active development. Experimental. Send feedback.