
Conversation

@staab (Member) commented Nov 17, 2025

This NIP describes a backwards-compatible strategy for implementing conflict-free lists which allows for scaling the number of entries in a list beyond relay event size limits, without creating an excess of events that need to be downloaded.

It takes advantage of existing parameterized-replaceable event support by defining d not as the list's identifier, but as the identifier of a single shard of the list. If a client wants to add an entry to a list but can't find an existing shard to update, it can always create a new one to avoid conflicts. Clients can then download all shards of a list to get a complete view.
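As a rough sketch of the sharding idea (the helper names, the random-id scheme, and the kind number used below are illustrative assumptions, not anything specified by the NIP):

```typescript
type Tag = string[];

interface EventTemplate {
  kind: number;
  created_at: number;
  tags: Tag[];
  content: string;
}

// Illustrative: a short random identifier, so a freshly created shard
// cannot collide with an existing shard the client failed to fetch.
function randomShardId(): string {
  return Math.random().toString(36).slice(2, 10);
}

// Build one shard of a list. The d tag names the shard, not the list;
// the list itself is identified by the event kind (and author).
function makeShard(kind: number, entries: Tag[], d: string = randomShardId()): EventTemplate {
  return {
    kind,
    created_at: Math.floor(Date.now() / 1000),
    tags: [["d", d], ...entries],
    content: "",
  };
}
```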

If proper outbox relay selection is observed, new shards should be created only when all of a user's relays fail to respond to a request for a list. As a result, very few shards should exist for any given list. Clients can also proactively consolidate shards if the number of shards gets too high.
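On the read side, a client would fetch every shard of a kind and union the entries. A minimal sketch, with names of my own choosing:

```typescript
type Tag = string[];

// Union the entry tags from all shards of a list, deduplicating by
// tag name + value so an entry appearing in two shards counts once.
function mergeShardEntries(shardTagLists: Tag[][]): Tag[] {
  const seen = new Set<string>();
  const merged: Tag[] = [];
  for (const tags of shardTagLists) {
    for (const tag of tags) {
      if (tag[0] === "d") continue; // d names the shard, not an entry
      const key = tag[0] + ":" + tag[1];
      if (!seen.has(key)) {
        seen.add(key);
        merged.push(tag);
      }
    }
  }
  return merged;
}
```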

Backwards compatibility is supported by simply updating the legacy list in parallel. The difference is that if the legacy list can't be found, a new instance of it should not be created.
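A sketch of that publish path, assuming illustrative event shapes of my own (the key point being that a missing legacy list is never recreated, since doing so could clobber entries the client never saw):

```typescript
interface EventTemplate {
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
}

// Return a copy of an event with one entry appended and a fresh timestamp.
function addEntry(event: EventTemplate, entry: string[]): EventTemplate {
  return {
    ...event,
    created_at: Math.floor(Date.now() / 1000),
    tags: [...event.tags, entry],
  };
}

// The shard is always published; the legacy list is updated in parallel
// only if it was actually found.
function eventsToPublish(
  legacy: EventTemplate | null,
  shard: EventTemplate,
  entry: string[],
): EventTemplate[] {
  const updated = [addEntry(shard, entry)];
  if (legacy !== null) {
    updated.push(addEntry(legacy, entry));
  }
  return updated;
}
```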

This NIP only addresses "standard" kind 1xxxx lists, since kind 3xxxx are already conflict free (if you can't find one, you don't know its d tag and so you're unlikely to overwrite it).

@vitorpamplona (Collaborator)

Not a bad concept, but I don't see this actually deprecating Kind 3 or any of the NIP-51 lists.

Also, you are turning the risk of one race condition into many smaller race conditions.

@staab (Member, Author) commented Nov 17, 2025

Yeah, I don't think we'll ever get rid of kind 3. I included the whole nip 51 table for completeness, but this is mostly useful for kind 3 and 10000.

> Also, you are turning the risk of one race condition into many smaller race conditions.

I don't think this is right. Splitting concurrent updates across more shards makes it less likely that any two updates touch the same shard, so even the smaller race conditions become rarer. In any case, replaceable events will never be the correct solution for high-frequency updates. That said, I could see a sophisticated client using this NIP to implement more elaborate sharding approaches, like some sort of round-robin update scheme that further reduces collisions.
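The round-robin idea floated above is unspecified; one purely hypothetical shape for it would be to rotate writes over the known shards with a per-client counter, so clients at different positions in the rotation rarely write to the same shard at once:

```typescript
// Pick the shard to write to by rotating a per-client counter over the
// known shards. Purely a sketch; nothing like this is specified.
function pickShard<T>(shards: T[], counter: number): T {
  if (shards.length === 0) {
    throw new Error("no shards to pick from");
  }
  return shards[counter % shards.length];
}
```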

@vitorpamplona (Collaborator)

I think most clients will try to load and modify all of the lists, creating more race conditions. For instance:

  1. I follow you on Coracle; Coracle saves the entry in its own list.
  2. Amethyst loads both the Amethyst and Coracle lists and builds the feed.
  3. I unfollow you on Amethyst, which removes the entry from the list under Coracle's d-tag.
  4. If I then open Coracle while that relay is offline and follow somebody else, Coracle will update its cached event and override what I did from Amethyst.

Now do that for all 10 or so lists that each user will have.

@staab (Member, Author) commented Nov 17, 2025

Yes, but how is that different from what already happens today?

Edit: For example, every time I update a list in coracle I attempt to re-load the latest version: https://github.com/coracle-social/welshman/blob/master/packages/app/src/commands.ts#L120

@flox1an commented Nov 19, 2025

Not sure if I'm missing something, but how would this allow a consistent ordering of items in the list? This might work for set-like use cases such as the follow list, where ordering is not relevant.

@fabianfabian (Contributor)

backwards compatibility 👍

@staab (Member, Author) commented Nov 19, 2025

> Not sure if I'm missing something, but how would this allow a consistent ordering of items in the list?

You're right, I don't think there would be a way to maintain sort order as written. Is that relevant for any current kind 1xxxx lists? I can definitely see how it would matter for named lists (which this PR doesn't cover), and maybe pins, communities, and emojis.

I think the trade-off is worth it (I have also personally nuked my NIP-29 communities list, which is painful). Maybe sort order can be maintained through consolidation: you could designate one shard as "primary", and entries from any other shards could be automatically appended to it before those shards are deleted. This probably doesn't need to be specified; it can be left as client behavior.
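That consolidation idea could be sketched as follows; the shapes and names here are mine, not the NIP's, and "deleting" the secondaries is assumed to happen separately (e.g. via a NIP-09 deletion event):

```typescript
type Tag = string[];

interface Shard {
  d: string;
  entries: Tag[];
}

// Append entries from secondary shards onto the primary, preserving the
// primary's existing order and skipping duplicates. The secondaries can
// then be deleted, restoring a single stable ordering for the list.
function consolidate(
  primary: Shard,
  secondaries: Shard[],
): { primary: Shard; toDelete: string[] } {
  const seen = new Set(primary.entries.map(t => t[0] + ":" + t[1]));
  const entries = [...primary.entries];
  for (const shard of secondaries) {
    for (const tag of shard.entries) {
      const key = tag[0] + ":" + tag[1];
      if (!seen.has(key)) {
        seen.add(key);
        entries.push(tag);
      }
    }
  }
  return { primary: { d: primary.d, entries }, toDelete: secondaries.map(s => s.d) };
}
```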
