IntentGraph is a model-agnostic framework for capturing, validating, and reusing human decision-making intelligence — beyond prompts, beyond any single LLM.
Prompts are consumables.
Intent, evidence, reasoning, and verification are assets.
Modern LLM platforms increasingly personalize and adapt to user behavior. While this improves short-term UX, the accumulated intelligence remains locked inside vendor platforms — not portable, not inspectable, not owned by the user.
IntentGraph is built on a simple premise:
- If models change, prompts will break.
- If reasoning is tied to language, it will expire.
- Only structured intent and decisions survive.
We treat human intent as the primary object — not chat logs or prompt text.
Intent is captured as:
- goal
- constraints
- success criteria
This makes the data reusable across models, tools, and future agent architectures.
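As a concrete illustration, an intent record built from the three fields above might look like the following. This is a hypothetical sketch: the class and field names are assumptions for illustration, not a published IntentGraph schema.

```python
from dataclasses import dataclass

# Hypothetical schema sketch -- field names are illustrative, not part
# of any published IntentGraph specification.
@dataclass(frozen=True)
class Intent:
    goal: str                          # what the human wants to achieve
    constraints: tuple[str, ...]       # hard limits the outcome must respect
    success_criteria: tuple[str, ...]  # observable tests that the goal is met

patch_intent = Intent(
    goal="Mitigate CVE in the payment service",
    constraints=("no downtime during business hours",),
    success_criteria=("scanner reports the CVE as resolved",),
)
```

Because the record contains no prompt text, the same object can be handed to any model or planner that accepts a goal, constraints, and acceptance tests.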
We do not store chain-of-thought or free-form reasoning text.
Instead, reasoning is abstracted into:
- strategies
- operations (classify, retrieve, decide, etc.)
- decision rules
- uncertainty registers
This abstraction survives:
- different LLMs
- non-language planners
- rule engines
- future world-model-based agents
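A minimal sketch of such an abstracted reasoning record is shown below. The operation names and record shape are assumptions chosen for illustration; the point is that no free-form reasoning text is stored, only structured elements.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative only: operation vocabulary and field names are assumptions,
# showing how reasoning can be stored without chain-of-thought text.
class Operation(Enum):
    CLASSIFY = "classify"
    RETRIEVE = "retrieve"
    DECIDE = "decide"

@dataclass
class ReasoningRecord:
    strategy: str                # high-level plan, e.g. "patch-first"
    operations: list[Operation]  # abstract steps, not prose
    decision_rules: list[str]    # declarative rules that were applied
    uncertainty: dict[str, str]  # open questions and their current status

record = ReasoningRecord(
    strategy="patch-first",
    operations=[Operation.RETRIEVE, Operation.CLASSIFY, Operation.DECIDE],
    decision_rules=["if exploit is public, patch within 24h"],
    uncertainty={"exploit_in_wild": "unconfirmed"},
)
```

A rule engine can execute `decision_rules` directly, while a planner can consume `strategy` and `operations`, so neither consumer depends on any particular LLM.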
Decisions are meaningless without grounds.
IntentGraph stores:
- fact snapshots
- official references
- environment observations
Links alone are not enough. Key evidence fields are snapshotted for long-term validity.
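The snapshotting idea can be sketched as follows. The record shape, URL, and field names are placeholders, not real schema or data; what matters is that the fields a future reader needs are copied at capture time rather than left behind a link.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical evidence record: the URL alone can rot, so key fields
# are snapshotted at capture time. All values here are placeholders.
@dataclass(frozen=True)
class EvidenceSnapshot:
    source_url: str      # the official reference
    captured_at: datetime
    snapshot: dict       # the fields we need even if the link dies

ev = EvidenceSnapshot(
    source_url="https://example.org/advisory",
    captured_at=datetime.now(timezone.utc),
    snapshot={"cvss_score": 9.8, "patch_available": True},
)
```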
Personal judgment is valuable — but insufficient.
Each decision is independently validated through:
- Fact Verification — Are factual claims supported?
- Intent Alignment — Does the output satisfy the declared goal and constraints?
Subjective evaluation sits on top of these objective layers, not instead of them.
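The two objective layers can be sketched as independent checks. The functions below are simplified placeholders, assuming claims and evidence are stored as key-value pairs and outcomes as sets of satisfied criteria; real fact-checking and alignment logic would be richer.

```python
# Simplified sketch of the two objective verification layers.
def verify_facts(claims: dict, evidence: dict) -> bool:
    """Fact Verification: every claim must match snapshotted evidence."""
    return all(evidence.get(key) == value for key, value in claims.items())

def verify_alignment(outcome: set, success_criteria: set) -> bool:
    """Intent Alignment: the outcome must satisfy every declared criterion."""
    return success_criteria <= outcome

claims = {"patch_available": True}
evidence = {"patch_available": True, "cvss_score": 9.8}

facts_ok = verify_facts(claims, evidence)
aligned = verify_alignment({"cve_resolved", "no_downtime"}, {"cve_resolved"})
```

Because the two checks share no state, each layer can pass or fail on its own, which is what lets subjective evaluation sit on top rather than substitute for them.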
LLMs are treated as replaceable execution engines, not knowledge containers.
All stored assets are:
- portable
- inspectable
- versioned
- reusable
IntentGraph is **not**:

- ❌ A prompt library
- ❌ A chat log archive
- ❌ A fine-tuning dataset tied to a single model
- ❌ A chain-of-thought collector
IntentGraph is a decision intelligence ledger.
```
Intent
├─ Evidence
├─ Reasoning (abstract)
├─ Outcome
├─ Verification
│  ├─ Fact Check
│  └─ Intent Alignment
└─ Temporal Evolution
```
Each layer is model-agnostic and independently testable.
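Portability and inspectability follow from the ledger being plain data. The sketch below serializes one illustrative entry to JSON; the field names mirror the layers above but are assumptions, not a published schema.

```python
import json

# Illustrative ledger entry as plain JSON: portable and inspectable
# outside any vendor platform. Field names are assumptions.
entry = {
    "version": 1,
    "intent": {"goal": "mitigate CVE", "constraints": ["no downtime"]},
    "evidence": [{"source_url": "https://example.org/advisory",
                  "snapshot": {"patch_available": True}}],
    "reasoning": {"strategy": "patch-first",
                  "operations": ["retrieve", "classify", "decide"]},
    "outcome": "patched",
    "verification": {"fact_check": True, "intent_alignment": True},
}

serialized = json.dumps(entry, sort_keys=True)
assert json.loads(serialized) == entry  # round-trips losslessly
```

A `version` field is included because the schema is expected to evolve; versioned plain-text entries can be diffed, audited, and migrated without the tool that wrote them.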
Example applications include:

- Security decision tracking (e.g., CVE patch decisions)
- LLM Ops & reliability judgment
- Code review and risk assessment
- Human-in-the-loop agent systems
- Research on decision robustness beyond language models
This repository currently focuses on:
- Specification design
- Data schema evolution
- Reasoning abstraction rules
- Verification frameworks
Implementation (agents, UI, extensions) is intentionally deferred until the core model proves stable and reusable.
IntentGraph aims to outlive current LLM paradigms.
As AI systems move from language models to planners, tool-using agents, and world models, human intent must remain the stable interface.
This project is an experiment in building that interface — deliberately, explicitly, and portably.
TBD (Design-first phase; licensing will be defined once the specification stabilizes.)