# Tinkler

A repo agent that thinks in loops, not scripts.

Tinkler uses LangGraph to inspect a codebase, choose one action at a time, collect evidence, and only write once it has enough signal.



## The Pitch

Most repo agents still behave like this:

```text
inspect -> summarize -> write
```

That looks neat, but it breaks as soon as the repo stops being neat.

Tinkler is built around a tighter loop:

```text
context -> decide -> tool -> observe -> loop -> finalize
```

It does not assume where the truth lives. It finds it.


## Why It Feels Different

| Traditional Repo Agent | Tinkler |
| --- | --- |
| follows a fixed inspection order | adapts every turn |
| plans too much up front | chooses one action at a time |
| writes early | stages writes and applies them at the end |
| treats tool output as terminal | turns tool output into new context |
| fragile on odd repo layouts | works better on uneven, real-world codebases |

## Architecture At A Glance

```mermaid
flowchart TD
    A([User Request]) --> B[init_turn]
    B --> C[build_agent_context]
    C --> D[agent_decide]
    D --> E{route_agent_action}

    E -->|shell_command| F[shell_command]
    E -->|read_file| G[read_file]
    E -->|list_dir| H[list_dir]
    E -->|search_files| I[search_files]
    E -->|write_file| J[write_file]
    E -->|finish| L[check_termination]

    F --> K[record_observation]
    G --> K
    H --> K
    I --> K
    J --> K

    K --> L
    L -->|loop| D
    L -->|stop| M[finalize_answer]
    M --> N[apply_file_write]
    N --> O([Done])

    classDef core fill:#0b1220,stroke:#60a5fa,color:#eff6ff,stroke-width:1.5px;
    classDef tools fill:#1f2937,stroke:#f59e0b,color:#fff7ed,stroke-width:1.5px;
    classDef gates fill:#14532d,stroke:#4ade80,color:#f0fdf4,stroke-width:1.5px;

    class B,C,D,M core;
    class F,G,H,I,J tools;
    class E,K,L,N gates;
```

## The Core Loop

```mermaid
flowchart LR
    A[Build Context] --> B[Choose One Action]
    B --> C[Run Tool]
    C --> D[Capture Result]
    D --> E[Update Working Summary]
    E --> F{Stop Yet?}
    F -->|No| B
    F -->|Yes| G[Generate Final Answer]
    G --> H[Apply Staged Write]

    classDef dark fill:#111827,stroke:#93c5fd,color:#f9fafb,stroke-width:1.5px;
    classDef accent fill:#3f3f46,stroke:#fbbf24,color:#fffbeb,stroke-width:1.5px;

    class A,B,D,E,F dark;
    class C,G,H accent;
```

This is the whole design: every action earns the next action.
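In plain Python, the loop above can be sketched roughly like this. This is an illustrative sketch, not Tinkler's actual code: the function and key names (`run_loop`, `decide`, `tools`) are assumptions, and the real implementation drives the same cycle through LangGraph nodes.

```python
# Hypothetical sketch of the core loop: build context, pick ONE action,
# observe the result, and repeat until the model decides to finish.
# Names are illustrative; the real loop lives in agent/graph.py.

def run_loop(request, decide, tools, max_turns=12):
    """Drive one agent run; returns (answer, staged_write)."""
    observations = []   # evidence gathered so far
    staged_write = None # write_file output is staged, never applied mid-loop
    for _ in range(max_turns):
        context = {"request": request, "observations": observations}
        action = decide(context)  # the model returns exactly one structured action
        if action["tool"] == "finish":
            break
        result = tools[action["tool"]](action["args"])
        if action["tool"] == "write_file":
            staged_write = result  # defer mutation until after finalization
        # every tool result becomes new context for the next decision
        observations.append({"tool": action["tool"], "result": result})
    return f"Finished after {len(observations)} observation(s).", staged_write
```

Each pass through the loop feeds the previous observation back into the context, which is what "every action earns the next action" means in practice.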


## Agent Surface

```mermaid
flowchart TD
    A[Decision Model] --> B[shell_command]
    A --> C[read_file]
    A --> D[list_dir]
    A --> E[search_files]
    A --> F[write_file]
    A --> G[finish]

    B --> B1[terminal exploration]
    C --> C1[targeted file inspection]
    D --> D1[structured tree discovery]
    E --> E1[text and symbol lookup]
    F --> F1[stage artifact for later write]
    G --> G1[exit loop]

    classDef root fill:#172554,stroke:#60a5fa,color:#eff6ff,stroke-width:1.5px;
    classDef leaf fill:#1f2937,stroke:#f97316,color:#fff7ed,stroke-width:1.5px;

    class A,B,C,D,E,F,G root;
    class B1,C1,D1,E1,F1,G1 leaf;
```

Types and routing live in `agent/state.py`, `agent/actions/schemas.py`, and `agent/graph.py`.
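A typed decision might look something like the sketch below. The field names here are assumptions for illustration; the actual types live in `agent/state.py` and `agent/actions/schemas.py`.

```python
# Illustrative sketch of a typed agent decision (NOT the real schema).
from dataclasses import dataclass
from typing import Literal, Optional

# The six tools on the agent surface, one of which is chosen per turn.
ToolName = Literal["shell_command", "read_file", "list_dir",
                   "search_files", "write_file", "finish"]

@dataclass
class AgentDecision:
    tool: ToolName           # exactly one action per turn
    argument: Optional[str]  # e.g. a path, a query, or staged file content
    reasoning: str           # why the model chose this action

    def is_terminal(self) -> bool:
        """True when the agent has decided it has enough evidence."""
        return self.tool == "finish"
```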


## Repo Layout

```text
Tinkler/
├── agent/
│   ├── __main__.py
│   ├── graph.py
│   ├── state.py
│   ├── actions/
│   │   ├── parser.py
│   │   └── schemas.py
│   ├── nodes/
│   │   ├── agent_decide.py
│   │   ├── build_agent_context.py
│   │   ├── check_termination.py
│   │   ├── finalize_answer.py
│   │   ├── init_turn.py
│   │   ├── record_observation.py
│   │   └── route_agent_action.py
│   ├── prompts/
│   └── tools/
├── pyproject.toml
└── README.md
```

## What Happens In One Run

```mermaid
sequenceDiagram
    autonumber
    participant U as User
    participant CLI as CLI
    participant G as Graph
    participant M as Model
    participant T as Tool
    participant R as Repo

    U->>CLI: "document this repository"
    CLI->>G: create initial state
    G->>M: provide context
    M-->>G: structured next action
    G->>T: run selected tool
    T->>R: inspect files or stage output
    R-->>T: result
    T-->>G: tool result
    G->>G: record observation
    G->>G: check termination
    loop until enough evidence
        G->>M: updated context
        M-->>G: next action
        G->>T: run tool
        T->>R: inspect
        R-->>T: result
        T-->>G: observation
    end
    G->>M: finalize response
    G->>R: apply staged write
    G-->>CLI: final response
```

## Quick Start

### 1. Install

```shell
python -m venv .venv
source .venv/bin/activate
pip install -e .
```

### 2. Configure

```shell
export OPENAI_API_KEY=your_key_here
export OPENAI_MODEL=gpt-4o-mini
```

### 3. Run

```shell
python -m agent "write a repo summary" --cwd .
```

With an explicit turn limit:

```shell
python -m agent "document this codebase" --cwd . --max-turns 12
```

Entrypoint: `agent/__main__.py`

### 3a. Run The Agent On `dummy_repo`

From the Tinkler repo root, point `--cwd` at the bundled sample repository:

```shell
python -m agent "Analyze the folder structure" --cwd dummy_repo
```

With observability logs enabled:

```shell
python -m agent "Analyze the folder structure" --cwd dummy_repo --log-level INFO
```

With a higher turn limit:

```shell
python -m agent "Analyze the folder structure" --cwd dummy_repo --max-turns 20 --log-level INFO
```

What this does:

- runs `agent/__main__.py`
- resolves `dummy_repo` relative to the current working directory
- builds the LangGraph workflow from `agent/graph.py`
- analyzes `dummy_repo` instead of the Tinkler repo itself

### 4. Analyze Any Repo With The Read-Only CLI

Use the wrapper CLI when you want to point Tinkler at another local repository without allowing file writes. The CLI is a thin consumer over `agent/service.py`, so other consumers, such as a UI, can reuse the same runtime entrypoints instead of rebuilding the agent flow.

```shell
python -m tinkler_cli analyze ../some-repo
```

Focus the analysis:

```shell
python -m tinkler_cli analyze ../some-repo --focus architecture --trace
```

Ask a custom question and emit JSON:

```shell
python -m tinkler_cli analyze ../some-repo --request "Explain the startup path and main risks." --json
```

After installation, the same wrapper is also available as:

```shell
tinkler analyze ../some-repo
```

## Design Choices

### One Action Per Turn

The model is forced to stay grounded: it cannot invent a long multi-step script and hope it still makes sense three tool calls later.

### Deferred Writes

`write_file` prepares content first. The actual write is applied later by `agent/nodes/apply_file_write.py`, after the answer is finalized. That keeps mutation controlled.

### Structured Decisions

The model emits typed decisions, parsed through `agent/actions/parser.py` and `agent/actions/schemas.py`, before the graph routes execution.
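The validate-before-route step might look like the following. This is a hedged sketch under the assumption that the model replies in JSON; the real parsing lives in `agent/actions/parser.py`, and `parse_decision` is an illustrative name.

```python
# Hypothetical sketch: reject anything that is not a known, well-formed
# action BEFORE the graph routes execution to a tool node.
import json

VALID_TOOLS = {"shell_command", "read_file", "list_dir",
               "search_files", "write_file", "finish"}

def parse_decision(raw: str) -> dict:
    """Parse a model reply into a validated action dict, or raise ValueError."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"decision is not valid JSON: {exc}") from exc
    tool = data.get("tool")
    if tool not in VALID_TOOLS:
        raise ValueError(f"unknown tool: {tool!r}")
    return {"tool": tool, "args": data.get("args")}
```

Failing loudly here means a malformed decision becomes an error the loop can react to, rather than an arbitrary shell command that silently runs.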


## Stack

- Python 3.11+
- LangGraph
- LangChain OpenAI
- setuptools

Dependency source: `pyproject.toml`




## Summary

Tinkler is a small repo agent with a strong constraint: it must learn from each step before taking the next one. That single constraint makes the rest of the architecture fall into place.
