|`datastore-relationship-integrity-current-key-filename`| Current key filename for relationship integrity checks |`--datastore-relationship-integrity-current-key-filename="foo"`|
|`datastore-relationship-integrity-expired-keys`| Config for expired keys for relationship integrity checks |`--datastore-relationship-integrity-expired-keys="foo"`|

#### Understanding the New Enemy Problem with CockroachDB

CockroachDB is a Spanner-like datastore supporting global, immediate consistency, with the mantra "no stale reads."
The CockroachDB implementation should be used when your SpiceDB service runs in multiple geographic regions and Google's Cloud Spanner is unavailable (e.g. AWS, Azure, or bare metal).

To prevent the [New Enemy Problem], we need to make related transactions overlap.
We do this by choosing a common database key and writing to that key alongside any relationships that may overlap.
This tradeoff is cataloged in our blog post "[The One Crucial Difference Between Spanner and CockroachDB][crdb-blog]": we are trading write throughput for consistency.

CockroachDB datastore users that are willing to rely on more subtle guarantees to mitigate the [New Enemy Problem] can configure the overlap strategy with the flag `--datastore-tx-overlap-strategy`.
The available strategies are:

| Strategy | Description |
| --- | --- |
|`static` (default) | All writes overlap to protect against the [New Enemy Problem] at the cost of write throughput |
|`prefix`| Only writes that contain objects with the same prefix overlap (e.g. `tenant1/user` and `tenant2/user` can be written concurrently) |
|`request`| Only writes with the same `io.spicedb.requestoverlapkey` gRPC request header overlap, enabling applications to decide on the fly which writes have causal dependencies. Writes without the header behave the same as `insecure`. |
|`insecure`| No writes overlap, providing the best write throughput, but offering no protection against the [New Enemy Problem] |
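
With the `request` strategy, the overlap key travels as a gRPC request header on each write. Below is a minimal sketch using the official Go client ([authzed-go](https://github.com/authzed/authzed-go)); the endpoint, token, and the choice of `folder:shared` as an overlap key are placeholder assumptions, and any two writes that must stay ordered simply send the same key.

```go
package main

import (
	"context"
	"log"

	v1 "github.com/authzed/authzed-go/proto/authzed/api/v1"
	"github.com/authzed/authzed-go/v1"
	"github.com/authzed/grpcutil"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/metadata"
)

func main() {
	// Placeholder endpoint and token for a SpiceDB running with
	// --datastore-tx-overlap-strategy=request.
	client, err := authzed.NewClient(
		"localhost:50051",
		grpcutil.WithInsecureBearerToken("sometoken"),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatalf("unable to create client: %v", err)
	}

	// Writes that may have a causal dependency send the same overlap key;
	// SpiceDB then forces their transactions to overlap in CockroachDB.
	ctx := metadata.AppendToOutgoingContext(
		context.Background(),
		"io.spicedb.requestoverlapkey", "folder:shared",
	)

	_, err = client.WriteRelationships(ctx, &v1.WriteRelationshipsRequest{
		Updates: []*v1.RelationshipUpdate{{
			Operation: v1.RelationshipUpdate_OPERATION_TOUCH,
			Relationship: &v1.Relationship{
				Resource: &v1.ObjectReference{ObjectType: "folder", ObjectId: "shared"},
				Relation: "viewer",
				Subject: &v1.SubjectReference{
					Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "bob"},
				},
			},
		}},
	})
	if err != nil {
		log.Fatalf("write failed: %v", err)
	}
}
```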

Depending on your application, `insecure` may be acceptable, and it avoids the performance cost associated with the `static` and `prefix` options.
If the [New Enemy Problem] is not a concern for your application, consider using the `insecure` strategy.
Using the `insecure` overlap strategy for SpiceDB with CockroachDB means that it is _possible_ for the timestamps of two subsequent writes to be out of order.
When this happens, it's _possible_ for the [New Enemy Problem] to occur.

Let's look at how likely this is, and what the impact might actually be for your workload.

##### When can timestamps be reversed?

Before we look at how this can impact an application, let's first understand when and how timestamps can be reversed in the first place.

- When two writes are made in short succession against CockroachDB
- And those two writes hit two different gateway nodes
- And the CRDB gateway node clocks have a delta `D`
- And the writes touch disjoint sets of relationships
- And those two writes are sent within the time delta `D` between the gateway nodes
- And the writes land in ranges whose followers are disjoint sets of nodes
- And other independent CockroachDB processes (heartbeats, etc.) haven't coincidentally synced the gateway node clocks during the writes

Then it's possible that the second write will be assigned a timestamp earlier than the first write. In the next section we'll look at whether that matters for your application, but for now let's look at what makes the above conditions more or less likely:

- **Clock skew**. A larger clock skew gives a bigger window in which timestamps can be reversed. But note that CRDB enforces a max offset between clocks, and a node that drifts within some fraction of that max offset will be kicked from the cluster.
- **Network congestion**, or anything that interferes with node heartbeating. This increases the length of time that clocks can be desynchronized before CockroachDB notices and syncs them back up.
- **Cluster size**. When there are many nodes, it is more likely that a write to one range will not have follower nodes that overlap with the followers of a write to another range. It also makes it more likely that the two writes will have different gateway nodes. On the other hand, a 3-node cluster with `replicas: 3` means that all writes will sync clocks on all nodes.
- **Write rate**. If the write rate is high, it's more likely that two writes will hit the conditions to have reversed timestamps. If writes only happen once every max-offset period for the cluster, it's impossible for their timestamps to be reversed.

The likelihood of a timestamp reversal depends on the CockroachDB cluster and the application's usage patterns.

##### When does a timestamp reversal matter?

Now we know when timestamps _could_ be reversed. But when does that matter to your application?

The TL;DR is: only when you care about the New Enemy Problem.

Let's take a look at a couple of examples of how reversed timestamps may be an issue for an application storing permissions in SpiceDB.

##### Neglecting ACL Update Order

Two separate `WriteRelationships` calls come in:

- `A`: Alice removes Bob from the `shared` folder
- `B`: Alice adds a new document `not-for-bob.txt` to the `shared` folder

The normal case is that the timestamp for `A` < the timestamp for `B`.

But if those two writes hit the conditions for a timestamp reversal, then `B < A`.

From Alice's perspective, there should be no time at which Bob can ever see `not-for-bob.txt`.
She performed the first write, got a response, and then performed the second write.

But this isn't true when using `MinimizeLatency` or `AtLeastAsFresh` consistency.
If Bob later performs a `Check` request for the `not-for-bob.txt` document, it's possible that SpiceDB will pick an evaluation timestamp `T` such that `B < T < A`, so that the document is in the folder _and_ Bob is allowed to see the contents of the folder.

Note that this is only possible if `A - T < quantization window`: the check has to happen soon enough after the write for `A` that it's possible that SpiceDB picks a timestamp in between them.
The default quantization window is `5s`.
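
To make the race concrete, here is a minimal sketch of Bob's check using the Go client (same imports as the earlier sketch); the `document` type, the `view` permission, and the simplified object ID are hypothetical schema details, not part of the example above.

```go
// checkAsBob issues Bob's Check with MinimizeLatency consistency. SpiceDB is
// free to pick any evaluation timestamp T inside the quantization window,
// including one where B < T < A: the document is already in the shared
// folder, but Bob's removal has not yet taken effect.
func checkAsBob(ctx context.Context, client *authzed.Client) (*v1.CheckPermissionResponse, error) {
	return client.CheckPermission(ctx, &v1.CheckPermissionRequest{
		Consistency: &v1.Consistency{
			Requirement: &v1.Consistency_MinimizeLatency{MinimizeLatency: true},
		},
		Resource:   &v1.ObjectReference{ObjectType: "document", ObjectId: "not-for-bob"},
		Permission: "view",
		Subject: &v1.SubjectReference{
			Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "bob"},
		},
	})
}
```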

##### Application Mitigations for ACL Update Order

This could be mitigated in your application by:

- Not caring about the problem
- Not allowing the write for `B` within the max offset time of the CRDB cluster (or the quantization window)
- Not allowing a `Check` on a resource within the max offset of its ACL modification (or the quantization window), as sketched below
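
A minimal sketch of the last mitigation, assuming the application records when each resource's ACLs last changed (the `lastACLWrite` parameter below); the helper and the hard-coded `5s` window are illustrative, not part of any SpiceDB API.

```go
// quantizationWindow mirrors SpiceDB's default revision quantization interval.
const quantizationWindow = 5 * time.Second

// checkAfterWindow delays a Check until the quantization window around the
// resource's most recent ACL change has passed, so SpiceDB can no longer pick
// an evaluation timestamp from inside the reversal-prone window.
func checkAfterWindow(ctx context.Context, client *authzed.Client, lastACLWrite time.Time, req *v1.CheckPermissionRequest) (*v1.CheckPermissionResponse, error) {
	if remaining := quantizationWindow - time.Since(lastACLWrite); remaining > 0 {
		time.Sleep(remaining) // wait out the window instead of rejecting the request
	}
	return client.CheckPermission(ctx, req)
}
```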

##### Mis-apply Old ACLs to New Content

Three separate calls come in:

- `A`: Alice removes Bob as a viewer of document `secret`
- `B`: Alice does a `FullyConsistent` `Check` request to get a ZedToken
- `C`: Alice stores that ZedToken (timestamp `B`) with the document `secret` when she updates it to say `Bob is a fool`

Same as before, the normal case is that the timestamp for `A` < the timestamp for `B`, but if the two writes hit the conditions for a timestamp reversal, then `B < A`.

Bob later tries to read the document. The application performs an `AtLeastAsFresh` `Check` for Bob to access the document `secret` using the stored ZedToken (which is timestamp `B`).

It's possible that SpiceDB will pick an evaluation timestamp `T` such that `B < T < A`, so that Bob is allowed to read the newest contents of the document and discover that Alice thinks he is a fool.

Same as before, this is only possible if `A - T < quantization window`: Bob's check has to happen soon enough after the write for `A` that it's possible that SpiceDB picks a timestamp between `A` and `B`, and the default quantization window is `5s`.
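
The ZedToken round trip looks roughly like this with the Go client (same imports as the first sketch); the `view` permission, the object IDs, and the `storeToken` callback are hypothetical stand-ins for the application's own schema and persistence.

```go
// aliceUpdatesDocument performs the fully-consistent Check (call B above) and
// stores the returned ZedToken alongside the document.
func aliceUpdatesDocument(ctx context.Context, client *authzed.Client, storeToken func(docID string, zt *v1.ZedToken)) error {
	resp, err := client.CheckPermission(ctx, &v1.CheckPermissionRequest{
		Consistency: &v1.Consistency{
			Requirement: &v1.Consistency_FullyConsistent{FullyConsistent: true},
		},
		Resource:   &v1.ObjectReference{ObjectType: "document", ObjectId: "secret"},
		Permission: "view",
		Subject: &v1.SubjectReference{
			Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "alice"},
		},
	})
	if err != nil {
		return err
	}
	// resp.CheckedAt is the ZedToken for the revision the check ran at: B.
	storeToken("secret", resp.CheckedAt)
	return nil
}

// bobReadsDocument pins Bob's Check to be at least as fresh as the stored
// token. If a reversal made B < A, an evaluation timestamp T with B < T < A
// is still "at least as fresh" as B, so the stale ACL can be used.
func bobReadsDocument(ctx context.Context, client *authzed.Client, stored *v1.ZedToken) (*v1.CheckPermissionResponse, error) {
	return client.CheckPermission(ctx, &v1.CheckPermissionRequest{
		Consistency: &v1.Consistency{
			Requirement: &v1.Consistency_AtLeastAsFresh{AtLeastAsFresh: stored},
		},
		Resource:   &v1.ObjectReference{ObjectType: "document", ObjectId: "secret"},
		Permission: "view",
		Subject: &v1.SubjectReference{
			Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "bob"},
		},
	})
}
```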

##### Application Mitigations for Misapplying Old ACLs

This could be mitigated in your application by:

- Not caring about the problem
- Waiting for the max offset (or the quantization window) before doing the fully-consistent check

##### When does a timestamp reversal _not_ matter?

There are also some cases where there is no New Enemy Problem even if there are reversed timestamps.

###### Non-sensitive domain

Not all authorization problems have a version of the New Enemy Problem, which relies on there being some meaningful
consequence of hitting an incorrect ACL during the small window of time where it's possible.

If the worst thing that happens from out-of-order ACL updates is that some users briefly see some non-sensitive data,
or that a user retains access to something that they already had access to for a few extra seconds, then even though
there could still effectively be a "New Enemy Problem," it's not a meaningful problem to worry about.

###### Disjoint SpiceDB Graphs

The examples of the New Enemy Problem above rely on the out-of-order ACLs being part of the same permission graph.
But not all ACLs are part of the same graph, for example:

```haskell
definition user {}

definition blog {
  relation author: user
  permission edit = author
}

definition video {
  relation editor: user
  permission change_tags = editor
}
```

`A`: Alice is added as an `author` of the blog entry `new-enemy`
`B`: Bob is removed from the `editor`s of the `spicedb.mp4` video

If these writes are given reversed timestamps, it is possible that the ACLs will be applied out-of-order and this would
normally be a New Enemy Problem. But the ACLs themselves aren't shared between any permission computations, and so there
is no New Enemy Problem.