Many kv stores have quite low limits on data size per row. For example, Cloudflare's Durable Objects can only store a maximum of 128 KiB per row. I'm not sure how to solve this other than breaking the data in the document up across multiple rows.
It would be interesting to look at, from a design perspective, whether this problem is best solved by the protocol, or whether it should be left as a consideration for implementors of the trait.
Technically, for values beyond the upper limit, you could slice them up and store them as separate rows, i.e.: `{key}: {val:300KiB}` becomes `{key:0}: {val:128KiB}` + `{key:1}: {val:128KiB}` + `{key:2}: {val:44KiB}`, then on read of a particular key just read all entries with the `key:` prefix. It's kinda similar to how Postgres stores big data using TOAST tables.
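A minimal sketch of that chunking idea in Rust. The `KvStore` trait and its methods are hypothetical stand-ins for whatever backend the trait implementor actually uses; the only assumptions are a per-row size limit and the ability to scan entries by key prefix:

```rust
// Hypothetical KV interface; not part of any real API.
trait KvStore {
    fn put(&mut self, key: &str, value: &[u8]);
    /// Returns all entries whose key starts with `prefix`, ordered by key.
    fn scan_prefix(&self, prefix: &str) -> Vec<(String, Vec<u8>)>;
}

// Assumed per-row limit (e.g. Durable Objects' 128 KiB).
const CHUNK_SIZE: usize = 128 * 1024;

/// Split an oversized value into `{key}:00000000`, `{key}:00000001`, ... rows.
/// Zero-padding the index keeps lexicographic key order equal to chunk order.
fn put_chunked(store: &mut dyn KvStore, key: &str, value: &[u8]) {
    for (i, chunk) in value.chunks(CHUNK_SIZE).enumerate() {
        store.put(&format!("{key}:{i:08}"), chunk);
    }
}

/// Read all chunk rows for `key` and concatenate them back into one value.
fn get_chunked(store: &dyn KvStore, key: &str) -> Vec<u8> {
    store
        .scan_prefix(&format!("{key}:"))
        .into_iter()
        .flat_map(|(_, chunk)| chunk)
        .collect()
}
```

Overwriting a value with fewer chunks would also need a delete of the stale tail rows, which is omitted here for brevity.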
Since you're giving Cloudflare as an example, I suspect that you're talking about using yrs in a server-client setup (where clients are user devices and the server is an app hosted on Cloudflare). While this is still in the early stages, we have plans to make a store-compatible yrs Doc implementation that will work directly on the underlying persistent storage rather than on an in-memory data structure.