Pydantic-resolve is a framework for composing complex data structures in an intuitive, declarative, resolver-based way, making the resulting data easy to understand and adjust.
It provides three major capabilities that facilitate acquiring and modifying multi-layered data:
- pluggable resolve and post methods, which define how to fetch and modify nodes.
- transporting field data from ancestor nodes to their descendant nodes, across multiple layers.
- collecting data from descendant nodes back up to their ancestor nodes, across multiple layers.
It supports:
- pydantic v1
- pydantic v2
- dataclass (`from pydantic.dataclasses import dataclass`)
If you have experience with GraphQL, this article provides comprehensive insights: Resolver Pattern: A Better Alternative to GraphQL in BFF.
It integrates seamlessly with modern Python web frameworks including FastAPI, Litestar, and Django-ninja.
This snippet shows the basic capability: fetching descendant nodes in a declarative way, with the specific query details encapsulated inside the DataLoader.
```python
from typing import Optional
from pydantic_resolve import Resolver, Loader
from biz_models import BaseTask, BaseStory, BaseUser
from biz_services import UserLoader, StoryTaskLoader

class Task(BaseTask):
    user: Optional[BaseUser] = None
    def resolve_user(self, loader=Loader(UserLoader)):
        return loader.load(self.assignee_id) if self.assignee_id else None

class Story(BaseStory):
    tasks: list[Task] = []
    def resolve_tasks(self, loader=Loader(StoryTaskLoader)):
        # this loader returns BaseTask items;
        # Task inherits from BaseTask so it can be initialized from them, then fetch the user.
        return loader.load(self.id)

stories = [Story(**s) for s in await query_stories()]
data = await Resolver().resolve(stories)
```
It transforms flat stories into rich, nested stories:
BaseStory
```json
[
  { "id": 1, "name": "story - 1" },
  { "id": 2, "name": "story - 2" }
]
```
Story
```json
[
  {
    "id": 1,
    "name": "story - 1",
    "tasks": [
      {
        "id": 1,
        "name": "design",
        "user": { "id": 1, "name": "tangkikodo" }
      }
    ]
  },
  {
    "id": 2,
    "name": "story - 2",
    "tasks": [
      {
        "id": 2,
        "name": "add ut",
        "user": { "id": 2, "name": "john" }
      }
    ]
  }
]
```
```shell
pip install pydantic-resolve
```
Starting from pydantic-resolve v1.11.0, both pydantic v1 and v2 are supported.
- Documentation: https://allmonday.github.io/pydantic-resolve/
- Demo: https://github.com/allmonday/pydantic-resolve-demo
- Composition-Oriented Pattern: https://github.com/allmonday/composition-oriented-development-pattern
Building complex data structures requires only 3 systematic steps. Let's take Agile's Story as an example.
Establish entity relationships as foundational data models (stable, serves as architectural blueprint)

```python
from typing import Optional
from pydantic import BaseModel

class BaseStory(BaseModel):
    id: int
    name: str
    assignee_id: Optional[int]
    report_to: Optional[int]

class BaseTask(BaseModel):
    id: int
    story_id: int
    name: str
    estimate: int
    done: bool
    assignee_id: Optional[int]

class BaseUser(BaseModel):
    id: int
    name: str
    title: str
```
```python
from aiodataloader import DataLoader
from pydantic_resolve import build_list, build_object

class StoryTaskLoader(DataLoader):
    async def batch_load_fn(self, keys: list[int]):
        tasks = await get_tasks_by_story_ids(keys)
        return build_list(tasks, keys, lambda x: x.story_id)

class UserLoader(DataLoader):
    async def batch_load_fn(self, keys: list[int]):
        users = await get_users_by_ids(keys)
        return build_object(users, keys, lambda x: x.id)
```
DataLoader implementations support flexible data sources, from database queries to microservice RPC calls. (They can be replaced later during optimization.)
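Conceptually, `build_list` and `build_object` align fetched rows with the requested keys, which is the contract a DataLoader's `batch_load_fn` must satisfy. A minimal stand-in illustrating their behavior (not the library's actual implementation) might look like:

```python
from collections import defaultdict

# Illustrative stand-ins for pydantic_resolve's build_list / build_object helpers:
# both align fetched items back to the order of the requested keys.

def build_list(items, keys, get_key):
    """Group items by key; return one list per requested key (possibly empty)."""
    groups = defaultdict(list)
    for item in items:
        groups[get_key(item)].append(item)
    return [groups[k] for k in keys]

def build_object(items, keys, get_key):
    """Index items by key; return one item (or None) per requested key."""
    index = {get_key(item): item for item in items}
    return [index.get(k) for k in keys]
```

So a batch for keys `[1, 2, 3]` always yields exactly three results, empty or `None` where no rows matched, keeping each `loader.load(key)` call paired with its own slot.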
Based on specific business logic, create domain-specific data structures through selective schemas and relationship DataLoaders (stable, reusable across use cases).

```python
from pydantic_resolve import Loader

class Task(BaseTask):
    user: Optional[BaseUser] = None
    def resolve_user(self, loader=Loader(UserLoader)):
        return loader.load(self.assignee_id) if self.assignee_id else None

class Story(BaseStory):
    tasks: list[Task] = []
    def resolve_tasks(self, loader=Loader(StoryTaskLoader)):
        return loader.load(self.id)

    assignee: Optional[BaseUser] = None
    def resolve_assignee(self, loader=Loader(UserLoader)):
        return loader.load(self.assignee_id) if self.assignee_id else None

    reporter: Optional[BaseUser] = None
    def resolve_reporter(self, loader=Loader(UserLoader)):
        return loader.load(self.report_to) if self.report_to else None
```
Utilize the `ensure_subset` decorator for field validation and consistency enforcement:
```python
from pydantic_resolve import ensure_subset, Loader

@ensure_subset(BaseStory)
class Story(BaseModel):
    id: int
    assignee_id: int
    report_to: int

    tasks: list[BaseTask] = []
    def resolve_tasks(self, loader=Loader(StoryTaskLoader)):
        return loader.load(self.id)
```
Once business models are validated, consider optimizing with specialized queries to replace DataLoader for enhanced performance.
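As an illustration of that optimization, a single joined query could replace the per-layer loaders, with rows grouped in memory. This is a hypothetical sketch: `query_stories_with_tasks` and its `fetch_rows` parameter are assumed helpers, not part of the library.

```python
from collections import defaultdict

# Hypothetical optimization: fetch stories and tasks in one joined query,
# then group tasks under their stories in memory instead of per-layer loaders.

async def query_stories_with_tasks(fetch_rows):
    """fetch_rows is assumed to return flat rows of a story LEFT JOIN task query."""
    rows = await fetch_rows()
    stories = {}
    tasks_by_story = defaultdict(list)
    for row in rows:
        stories[row["story_id"]] = {"id": row["story_id"], "name": row["story_name"], "tasks": []}
        if row["task_id"] is not None:  # LEFT JOIN may yield stories without tasks
            tasks_by_story[row["story_id"]].append({"id": row["task_id"], "name": row["task_name"]})
    for sid, story in stories.items():
        story["tasks"] = tasks_by_story[sid]
    return list(stories.values())
```

The resolver-based schema stays the same; only the data source behind it changes, which is what makes this a safe late-stage optimization.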
Apply presentation-specific modifications and data aggregations (flexible, context-dependent)
Leverage post_field methods to access ancestor data, transfer nodes, and apply in-place transformations.

```python
from pydantic_resolve import Loader, Collector

class Task(BaseTask):
    __pydantic_resolve_collect__ = {'user': 'related_users'}  # propagate user to the collector named 'related_users'

    user: Optional[BaseUser] = None
    def resolve_user(self, loader=Loader(UserLoader)):
        return loader.load(self.assignee_id)

class Story(BaseStory):
    tasks: list[Task] = []
    def resolve_tasks(self, loader=Loader(StoryTaskLoader)):
        return loader.load(self.id)

    assignee: Optional[BaseUser] = None
    def resolve_assignee(self, loader=Loader(UserLoader)):
        return loader.load(self.assignee_id)

    reporter: Optional[BaseUser] = None
    def resolve_reporter(self, loader=Loader(UserLoader)):
        return loader.load(self.report_to)

    # ---------- Post-processing ------------
    related_users: list[BaseUser] = []
    def post_related_users(self, collector=Collector(alias='related_users')):
        return collector.values()
```

```python
class Story(BaseStory):
    tasks: list[Task] = []
    def resolve_tasks(self, loader=Loader(StoryTaskLoader)):
        return loader.load(self.id)

    assignee: Optional[BaseUser] = None
    def resolve_assignee(self, loader=Loader(UserLoader)):
        return loader.load(self.assignee_id)

    reporter: Optional[BaseUser] = None
    def resolve_reporter(self, loader=Loader(UserLoader)):
        return loader.load(self.report_to)

    # ---------- Post-processing ------------
    total_estimate: int = 0
    def post_total_estimate(self):
        return sum(task.estimate for task in self.tasks)
```
```python
from pydantic_resolve import Loader

class Task(BaseTask):
    user: Optional[BaseUser] = None
    def resolve_user(self, loader=Loader(UserLoader)):
        return loader.load(self.assignee_id)

    # ---------- Post-processing ------------
    def post_name(self, ancestor_context):  # access story.name from the ancestor context
        return f"{ancestor_context['story_name']} - {self.name}"

class Story(BaseStory):
    __pydantic_resolve_expose__ = {'name': 'story_name'}

    tasks: list[Task] = []
    def resolve_tasks(self, loader=Loader(StoryTaskLoader)):
        return loader.load(self.id)

    assignee: Optional[BaseUser] = None
    def resolve_assignee(self, loader=Loader(UserLoader)):
        return loader.load(self.assignee_id)

    reporter: Optional[BaseUser] = None
    def resolve_reporter(self, loader=Loader(UserLoader)):
        return loader.load(self.report_to)
```
```python
from pydantic_resolve import Resolver

stories = [Story(**s) for s in await query_stories()]
data = await Resolver().resolve(stories)
```
Complete!
The framework significantly reduces complexity in data composition by maintaining alignment with entity-relationship models, resulting in enhanced maintainability.
Utilizing an ER-oriented modeling approach delivers 3-5x development efficiency gains and 50%+ code reduction.
Leveraging pydantic's capabilities, it enables GraphQL-like hierarchical data structures while providing flexible business logic integration during data resolution.
Seamlessly integrates with FastAPI to construct frontend-optimized data structures and generate TypeScript SDKs for type-safe client integration.
The core architecture provides `resolve` and `post` method hooks for pydantic and dataclass objects:
- `resolve`: handles data fetching operations
- `post`: executes post-processing transformations
This implements a recursive resolution pipeline that completes when all descendant nodes are processed.
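A highly simplified toy model of that pipeline (not pydantic-resolve's actual implementation) helps show the ordering: each `resolve_*` method is awaited and assigned to its field, children are recursed into, and `post_*` methods run only after all descendants are done.

```python
import asyncio
import inspect

# Toy sketch of the resolve/post pipeline (not the library's real code):
# 1. await each resolve_* method and assign its result to the matching field,
# 2. recurse into child objects and lists,
# 3. run each post_* method once all descendants are processed.

async def toy_resolve(node):
    for name in dir(node):
        if name.startswith('resolve_'):
            value = getattr(node, name)()
            if inspect.isawaitable(value):
                value = await value
            setattr(node, name[len('resolve_'):], value)
    for name in vars(node):
        child = getattr(node, name)
        for item in child if isinstance(child, list) else [child]:
            if hasattr(item, '__dict__'):
                await toy_resolve(item)
    for name in dir(node):
        if name.startswith('post_'):
            setattr(node, name[len('post_'):], getattr(node, name)())
    return node
```

This is why a post method like `post_total_estimate` can safely read `self.tasks`: by the time it runs, the tasks have already been resolved.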
Consider the Sprint, Story, and Task relationship hierarchy:
Upon object instantiation with defined methods, pydantic-resolve traverses the data graph, executes resolution methods, and produces the complete data structure.
DataLoader integration eliminates N+1 query problems inherent in multi-level data fetching, optimizing performance characteristics.
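The batching behavior behind that can be sketched with a toy loader (a deliberate simplification; aiodataloader's real implementation also handles caching and error propagation): `load(key)` calls issued in the same event-loop tick are queued and dispatched as a single `batch_load_fn` call.

```python
import asyncio

# Toy illustration of DataLoader batching: N per-node load() calls made in the
# same event-loop tick collapse into one batch_load_fn call, avoiding N+1 queries.

class ToyLoader:
    def __init__(self, batch_load_fn):
        self.batch_load_fn = batch_load_fn
        self.queue = []          # pending (key, future) pairs
        self.scheduled = False

    def load(self, key):
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        self.queue.append((key, fut))
        if not self.scheduled:  # schedule one dispatch for this tick
            self.scheduled = True
            loop.call_soon(lambda: loop.create_task(self._dispatch()))
        return fut

    async def _dispatch(self):
        pending, self.queue = self.queue, []
        self.scheduled = False
        values = await self.batch_load_fn([k for k, _ in pending])
        for (_, fut), value in zip(pending, values):
            fut.set_result(value)
```

Because every resolve method at the same depth runs concurrently, all their `load` calls land in one queue and the underlying query fires once per layer rather than once per node.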
DataLoader architecture enables modular class composition and reusability across different contexts.
Additionally, the framework provides expose and collector mechanisms for sophisticated cross-layer data processing patterns.
```shell
tox
tox -e coverage
python -m http.server
```
Current test coverage: 97%