
Conversation

@staab (Member) commented Nov 25, 2025

This NIP was sketched out in a session at nostr.xxx. It's based on @pablof7z's original migration proposal with some elements from #2114.

Differences from prior art:

  • There is no holding period of 30-60 days. Migration events can be validated immediately.
  • Instead of migrating directly from the parent to the successor, an intermediate, single-use migration key is introduced. This key can be stored however the user prefers, but the recommendation is social key recovery via Shamir secret sharing, which prevents single points of failure in the user's backup strategy. (A sketch of a hypothetical precommit follows this list.)
  • Migration is only used to signal to clients that users should update their follow lists (and mutes, follow packs, etc.), NOT to establish a live link between an old key and a new key (or its successors). This means mentions are not carried over, messages are lost, and so on, but implementation is much less onerous (and users can even perform a migration manually).
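
To make the second bullet concrete, here is a minimal sketch of a hypothetical precommit, assuming the `kind 360` commitment event from the NIP text quoted further down this thread; the `migration` tag name and overall layout are illustrative assumptions, not normative:

```python
import json
import time

# Hypothetical kind 360 precommit, published by the user's primary key ahead
# of time. It commits to the pubkey of the single-use migration key; the
# "migration" tag name here is an illustrative assumption.
precommit = {
    "kind": 360,
    "pubkey": "<primary pubkey, 32-byte hex>",
    "created_at": int(time.time()),
    "tags": [
        ["migration", "<migration pubkey, 32-byte hex>"],
    ],
    "content": "",
    # "id" and "sig" are computed and attached per NIP-01 before publishing;
    # the event would additionally be timestamped with OpenTimestamps.
}
print(json.dumps(precommit, indent=2))
```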

Important downsides:

  • The worst-case scenario occurs when an attacker obtains key material before the user has published a precommit, in which case the attacker can execute the migration flow themselves. This is worse than the status quo, since today the legitimate owner at least retains the ability to keep posting from (and thereby contest) a compromised identity. For this reason, migration should always be presented to the user with context, never applied automatically. Additional affordances, like migration attestations (where web of trust is used to validate the authenticity of a migration), might be added to buttress user certainty.
  • There is no recourse if all key material is lost. Social key recovery on its own compromises the integrity of keys as a root identity, which I think is a bridge too far. However, informal migration is still possible on the social layer.
  • This NIP relies on a central group of trusted relays to guarantee precommit availability and event validation (ordering is guaranteed by OpenTimestamps, but completeness is not). I would be interested in a proposal that puts precommits on a blockchain directly in order to remove this flaw. An OP_RETURN of the form <32 bytes of pubkey 1><32 bytes of pubkey 2> (64 bytes, within the 80-byte limit) might suffice; see the sketch after this list.
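
A sketch of that last idea (purely illustrative; it assumes the payload is simply the two x-only pubkeys concatenated):

```python
# Sketch of the proposed on-chain precommit: OP_RETURN followed by a single
# push of the two concatenated 32-byte pubkeys (old key, then migration key).
# 64 bytes of payload fits within Bitcoin's 80-byte OP_RETURN standardness limit.
def opreturn_precommit_script(old_pubkey_hex: str, migration_pubkey_hex: str) -> bytes:
    payload = bytes.fromhex(old_pubkey_hex) + bytes.fromhex(migration_pubkey_hex)
    assert len(payload) == 64
    # 0x6a = OP_RETURN; pushes of up to 75 bytes use the length byte directly.
    return bytes([0x6A, len(payload)]) + payload
```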

@fiatjaf (Member) commented Nov 26, 2025

Can't we soften the need for the central group of trusted relays by enforcing a migration period, during which the legitimate owner who just had his keys stolen can find his older timestamped event (which might be stored somewhere, or published on niche relays no one remembered to check) and republish it to the new relays people are checking?

@staab (Member, Author) commented Nov 26, 2025

I think that only makes things less clear. Requiring that the event be immediately available forces us to solve the problem of where to put the events. Anyway, where would they publish the event once they find it? It's the same problem.

@fiatjaf (Member) commented Nov 26, 2025

In the current situation, what is the set of relays that everybody has to agree upon? relay.damus.io, nos.lol, relay.primal.net? Will that remain the canonical set forever? We can't know. Maybe these 3 become unresponsive or delete the events over time; how are we going to agree on a new set of canonical relays? And if we change, what happens to people who had published their events to the old ones?


Now if we have the period, clients can choose the set of relays where to look for these events much more flexibly. And the waiting period is large enough that people who lost their keys can get notified (for example, by their clients displaying some notice, or by friends whose clients display something). These notices would include the names of the relays where the migration event was seen (say, relay.facebook.com and relay.google.com, because 20 years have passed, Nostr is very big, and these big companies run the biggest relays). Then people can go find their events and publish them to relay.facebook.com and relay.google.com, and the situation automatically fixes itself.


Of course this is kind of a shitty solution, and maybe it's broken, but I think it's less bad than picking a set of relays now and trusting them forever.

@staab (Member, Author) commented Nov 26, 2025

> relay.damus.io, nos.lol, relay.primal.net

We need something similar to promenade coordinators/signers, or indexer relays like purplepag.es: relays whose owners are trusted and which can be relied upon to keep a complete record of migration events. Change can happen over time because we should really have 10+ of these relays, so if one goes offline the events remain available elsewhere, and people who hard-code these URLs (or maintain lists) can migrate. The relays should replicate content using negentropy, so that if someone doesn't publish to all relays, the event gets propagated anyhow.

The waiting period creates poor UX: someone's account is in limbo for a pretty long time. This is at least annoying to users who want to migrate and forget about it, but it's especially bad if an attacker has their key and they can't dissociate themselves from it immediately.

@staab mentioned this pull request Nov 26, 2025

> For each `migration` key committed to in a `kind 360` event, use the [codex32 standard](https://secretcodex32.com/) for Shamir secret sharing to split the key into `n` shards.

> For each shard, create a `kind 362` event with a `p` tag indicating the original user's pubkey and a `shard` tag containing the shard. This event MUST NOT be signed, but MUST be wrapped following [NIP 59](59.md).

A member commented:

This is missing symmetric encryption; shard-holders shouldn't be trusted not to collude and reconstruct key material.

This sounds harder than it is; all this means is that the payload should be encrypted symmetrically before sharding.
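
A minimal sketch of that suggestion, assuming the `cryptography` package for the symmetric layer. A trivial n-of-n XOR split stands in for the codex32 Shamir scheme the NIP specifies, and the hex `shard` encoding is an assumption (codex32 defines its own string format):

```python
import secrets

# Assumption: the `cryptography` package provides the AEAD; any symmetric
# scheme would do.
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_then_shard(migration_seckey: bytes, n: int):
    """Encrypt the migration key, then split the ciphertext into n shards."""
    assert n >= 2
    sym_key = ChaCha20Poly1305.generate_key()  # retained by the user, never sharded
    nonce = secrets.token_bytes(12)
    ciphertext = ChaCha20Poly1305(sym_key).encrypt(nonce, migration_seckey, None)
    # Trivial n-of-n XOR split standing in for codex32's Shamir scheme: any
    # subset of fewer than n shards reveals nothing, and even all n together
    # yield only ciphertext without sym_key.
    shards = [secrets.token_bytes(len(ciphertext)) for _ in range(n - 1)]
    last = ciphertext
    for s in shards:
        last = xor_bytes(last, s)
    shards.append(last)
    return sym_key, nonce, shards

def shard_event(user_pubkey_hex: str, shard: bytes) -> dict:
    # Unsigned kind 362 shard event, to be gift-wrapped per NIP-59 before
    # delivery to each shard-holder.
    return {
        "kind": 362,
        "tags": [["p", user_pubkey_hex], ["shard", shard.hex()]],
        "content": "",
    }
```

Recovery then XORs the collected shards back into the ciphertext and decrypts with the symmetric key and nonce the user retained.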


> These risks can be mitigated (but not solved) in classically nostr fashion by spreading trust across a large number of `migration` relays. These relays MUST be independently run, and their owners MUST have an incentive to prevent account theft, which can only be assessed manually. All of these relays would have to collude in order to prevent a correct migration from occurring.
>
> However these relays are chosen, the entire network MUST use the same set. Network partitioning is unacceptable in this case. Selection of actual relays is left to the nostr community.

A member commented:

I don't think this is necessary; all we need is that relays that signal support for this NIP (a relay-side sketch follows this list):

  • don't allow backdated events
  • don't allow NIP-09 deletion of 36x events
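
A rough relay-side sketch of those two rules; the kind range, clock-skew tolerance, and tag handling are all assumptions:

```python
import time

MIGRATION_KINDS = set(range(360, 370))  # the 36x range referenced above
MAX_CLOCK_SKEW = 15 * 60                # assumed tolerance for honest clock drift

def relay_accepts(event: dict) -> bool:
    # Rule 1: no backdated events. The relay can't verify old timestamps, so
    # it only accepts events whose created_at is close to its own clock.
    if event["created_at"] < time.time() - MAX_CLOCK_SKEW:
        return False
    # Rule 2: no NIP-09 deletion (kind 5) of 36x events. Shown here via "k"
    # tags; a full implementation would also resolve "e" tags and check the
    # kind of each referenced stored event.
    if event["kind"] == 5:
        for tag in event["tags"]:
            if len(tag) >= 2 and tag[0] == "k" and tag[1].isdigit() \
                    and int(tag[1]) in MIGRATION_KINDS:
                return False
    return True
```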

@pablof7z (Member) commented

Hadn't we softly reserved NIP-41 to be used for this identity migration thing?

@pablof7z (Member) commented

I don't think the "only use the first 360 event" rule is a good idea (nor possible without centralizing on X relays).

What if the user changes their mind about opting out, or loses the key to the migration pubkey?

Also, why the need for an in-between migration pubkey? I'm not sure what the benefit of that is; why not just whitelist the next pubkey directly instead of a migration pubkey?

@pablof7z (Member) commented

Is this missing attestation? How do followers attest to "I verified out-of-band that the person really changed pubkeys"? I think it'd be valuable to have this as an explicit action, not just a change in people's kind:3. Like, if my key were compromised and I rolled onto a new one, I would call @dergigi and @fiatjaf and others I'm close with to get them to attest, giving clear social weight to that migration event. They could publish it, even as a kind:1 with some tags, confirming the OOB check.
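
Purely illustrative, such an attestation as a kind:1 note might look like this; the tag layout is hypothetical:

```python
import time

# Hypothetical migration attestation as a plain kind:1 note. An attester who
# verified the migration out-of-band tags the old key, the new key, and the
# migration event, then signs and publishes it from their own key.
attestation = {
    "kind": 1,
    "created_at": int(time.time()),
    "tags": [
        ["p", "<old pubkey>"],
        ["p", "<new pubkey>"],
        ["e", "<migration event id>"],
    ],
    "content": "Verified out-of-band: this account has migrated to the new key.",
}
```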
