I've been thinking more about #5, and I think patching arrays is basically like patching text lines. Text diff aggregates contiguous individual line changes into hunks, which contain zero or more inserted lines and zero or more removed lines. It does this so that some number of lines on either side of the hunk are, by definition, unchanged, and thus can be used as stable context for adjusting the patch hunk offset (which is analogous to a JSON Patch path).
This proposal deals specifically with aggregating contiguous array changes in that way, which is basically equivalent to JavaScript Array.prototype.splice. So, I'm proposing a new JSON Patch operation: "splice".
A "splice" operation contains an array of items to add and an array of items to remove. It must target an array index, and not an object key.
Application
Applying a "splice" consists of three steps:
Using the same comparison algorithm as "test", compare each item in "remove" with the corresponding items in the target array, using "path" as the starting index. If any comparison fails, the entire "splice" must also fail.
Remove the same number of items from the target array, starting at "path", as are present in the "remove" property. This "blind remove" is guaranteed to be safe since step 1 tests to ensure they are the correct items.
Insert the span of items in the "add" property into the target array at "path", shifting existing items to the right.
(Note: It's possible to combine steps 1 and 2. I'm not sure which is better, separate or combined)
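A minimal, non-normative sketch of those three steps over a plain JavaScript array; deepEqual is a stand-in for whatever "test"-style comparison an implementation already has, and all names are illustrative:

```js
// Rough sketch, not a reference implementation.
const deepEqual = (a, b) => JSON.stringify(a) === JSON.stringify(b); // placeholder comparison

function applySplice(targetArray, index, op) {
  // Step 1: compare each item in "remove" with the corresponding target item.
  op.remove.forEach((expected, i) => {
    if (!deepEqual(targetArray[index + i], expected)) {
      throw new Error("splice failed: mismatch at index " + (index + i));
    }
  });
  // Steps 2 and 3: remove the tested span and insert "add" in its place,
  // which is exactly what Array.prototype.splice does.
  targetArray.splice(index, op.remove.length, ...op.add);
}
```

For example, applying the op from the sketch above to ["a", "b", "c", "d", "e"] would yield ["a", "b", "x", "e"].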
Advantages over separate "add" and "remove"
It's more compact for contiguous array operations, since the object wrapper "{}", "op", and "path" are not repeated for each individual item being added or removed. This advantage increases as the number of adds + removes increases. (A comparison sketch follows this list.)
It can be made safe without the need to precede it with "test". See "Relationship to "test"" below.
It can be inverted easily. See "Inversion" below.
Although not specifically a part of this proposal, it could include surrounding context for smarter patch algorithms. See "Patch context" below.
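To make the compactness point concrete, here is one invented comparison: replacing three adjacent items with two new ones.

```js
// One aggregated op...
const asSplice = [
  { op: "splice", path: "/a/3", remove: ["x", "y", "z"], add: ["p", "q"] }
];

// ...versus the equivalent sequence of individual ops. Every "remove" targets
// /a/3 because the array shifts left after each removal.
const asAddRemove = [
  { op: "remove", path: "/a/3" },
  { op: "remove", path: "/a/3" },
  { op: "remove", path: "/a/3" },
  { op: "add", path: "/a/3", value: "p" },
  { op: "add", path: "/a/4", value: "q" }
];
```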
Possible variations
Could use a number for "remove". This would make for more compact patches, at the expense of making "splice" non-invertible (sketched after this list).
Could use hash values for "remove", instead of the actual items. Unfortunately, this also prevents patch inversion, and means the patch producer and consumer must use the same hash algorithm when processing the "remove" portion of the splice.
(Others?)
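Rough sketches of the first two variations, with invented paths, values, and hash placeholders:

```js
// "remove" as a count: more compact, but the removed values are gone from the
// patch, so it cannot be inverted.
const countVariant = { op: "splice", path: "/a/3", remove: 3, add: ["p", "q"] };

// "remove" as hashes of the removed items: also non-invertible, and producer
// and consumer must agree on the hash algorithm.
const hashVariant = {
  op: "splice",
  path: "/a/3",
  remove: ["<hash of x>", "<hash of y>", "<hash of z>"],
  add: ["p", "q"]
};
```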
Relationship to "test"
The current "remove" operation has an obvious relationship to "test": to make a patch safer, the producer can preceed every "remove" with a "test" that matches the value of the item being removed in the subsequent "remove". The "splice" already contains the actual items being removed, and thus they can be tested without requiring N "test" ops to preceed the "splice" (where N = number of removed items).
Inversion
Inverting "splice" is trivial: simply swap the values of "add" and "remove":
The "splice" op aggregates contiguous changes. Thus by definition, items before and after the contiguous changes are unchanged. This allows for future expansion to include before/after patch context similar to that used by textual diff/patch tools such as GNU diff and patch. In such tools, patch context increases the accuracy of patching even when the target document has diverged from the original.
I agree with @ken107: I do not think we should require comparison by value. That would make it impossible to use "splice" with objects as array values. I think this is adding too much logic to a simple patch operation.