Description
Summary
I'm looking for guidance on the recommended approach when using json-render to generate visualization UI (e.g., line charts, dashboards) backed by large datasets that may exceed the model's context window.
Use Case
I want to enable end users to create data visualizations by prompting, such as:
"Create a line chart showing sales trends over the past year."
However, the underlying dataset could contain thousands or tens of thousands of data points, potentially exceeding the LLM's context window if we try to include the full data in the prompt.
Question
What is the recommended approach in json-render for handling this scenario?
Options I'm Considering
- Data binding by path/reference: pass only metadata or a reference path to the AI, not the full dataset. The AI generates the JSON structure with a `valuePath` or `dataPath` pointing to the actual data, which is resolved at render time (see the first sketch after this list).
- Sampling/aggregation before prompt: pre-process large datasets to reduce their size before sending anything to the AI (second sketch below).
- Streaming/chunking: send data to the AI in chunks (if the model/API supports it), though this may affect the coherence of the generated UI structure.
- Hybrid approach: the AI generates the UI component skeleton with placeholders or references, then later populates or updates it with the actual data (third sketch below).
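For the first option, this is roughly what I picture path-based binding looking like. The spec shape, the `dataPath` field, and the `resolveSpec` helper below are all assumptions for illustration, not json-render's actual schema or API:

```ts
// Hypothetical spec shape -- field names here are illustrative, not json-render's real schema.
interface ChartSpec {
  type: "line-chart";
  title: string;
  dataPath: string; // reference resolved at render time, never raw data
  xField: string;
  yField: string;
}

// Client-side registry of datasets keyed by path; the raw rows never reach the LLM.
const dataSources: Record<string, Array<Record<string, unknown>>> = {
  "sales.history": Array.from({ length: 10_000 }, (_, i) => ({
    t: i, // stand-in timestamp
    revenue: Math.round(Math.random() * 1_000),
  })),
};

// The model only ever sees metadata (path and field names); resolution happens locally.
function resolveSpec(spec: ChartSpec) {
  const rows = dataSources[spec.dataPath];
  if (!rows) throw new Error(`Unknown dataPath: ${spec.dataPath}`);
  return { ...spec, data: rows };
}

// A spec the model might generate from the "sales trends" prompt above.
const generated: ChartSpec = {
  type: "line-chart",
  title: "Sales trend, past year",
  dataPath: "sales.history",
  xField: "t",
  yField: "revenue",
};

console.log(resolveSpec(generated).data.length); // full dataset is attached at render time
```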
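For the sampling/aggregation option, a simple bucket-mean downsampler shows the kind of pre-processing I mean; the function and types are invented for this sketch and don't involve json-render at all:

```ts
// Bucket-mean downsampler: collapse an arbitrarily long series to at most maxPoints
// before it is summarized for (or embedded in) a prompt.
interface Point { x: number; y: number }

function downsample(points: Point[], maxPoints: number): Point[] {
  if (points.length <= maxPoints) return points;
  const bucketSize = Math.ceil(points.length / maxPoints);
  const out: Point[] = [];
  for (let i = 0; i < points.length; i += bucketSize) {
    const bucket = points.slice(i, i + bucketSize);
    const meanY = bucket.reduce((sum, p) => sum + p.y, 0) / bucket.length;
    out.push({ x: bucket[0].x, y: meanY }); // first x of the bucket represents the whole bucket
  }
  return out;
}

// 50k raw points shrink to at most 200 before anything goes near the model.
const raw: Point[] = Array.from({ length: 50_000 }, (_, i) => ({ x: i, y: Math.sin(i / 500) }));
console.log(downsample(raw, 200).length);
```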
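For the hybrid option, a rough sketch of the two-phase flow: the model emits a skeleton containing a placeholder token, and the application substitutes the real series afterwards. The `$$KEY$$` placeholder convention is purely hypothetical:

```ts
// Two-phase hybrid: the model returns only a skeleton with placeholder tokens,
// and the application swaps the real data in before the spec reaches the renderer.
type Json = null | boolean | number | string | Json[] | { [key: string]: Json };

function fillPlaceholders(node: Json, data: Record<string, Json>): Json {
  if (typeof node === "string" && node.startsWith("$$") && node.endsWith("$$")) {
    const key = node.slice(2, -2);
    return key in data ? data[key] : node; // leave unknown placeholders untouched
  }
  if (Array.isArray(node)) return node.map((child) => fillPlaceholders(child, data));
  if (node !== null && typeof node === "object") {
    const out: { [key: string]: Json } = {};
    for (const [key, value] of Object.entries(node)) out[key] = fillPlaceholders(value, data);
    return out;
  }
  return node;
}

// Skeleton as the model might emit it, populated locally with the real data.
const skeleton: Json = { type: "line-chart", series: "$$SALES$$" };
const filled = fillPlaceholders(skeleton, { SALES: [[1, 10], [2, 12], [3, 9]] });
console.log(JSON.stringify(filled));
```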